Pulina, Luca (2006) Empirical evaluation of scoring methods. In: STAIRS 2006: Third Starting AI Researchers' Symposium: proceedings, 28-29 August 2006, Riva del Garda, Italy. Amsterdam, Netherlands, IOS Press. p. 108-119. (Frontiers in Artificial Intelligence and Applications, 142). ISBN 1-58603-645-9. Conference or Workshop Item.
The automated reasoning research community has grown accustomed to competitive events where a pool of systems is run on a pool of problem instances with the purpose of ranking the systems according to their performance. At the heart of such a ranking lies the method used to score the systems, i.e., the procedure used to compute a numerical quantity that summarizes the performance of a system with respect to the other systems and to the pool of problem instances. In this paper we evaluate several scoring methods, including methods used in automated reasoning contests, methods based on voting theory, and a new method that we introduce. Our research aims to establish which of these methods maximizes the effectiveness measures that we devised to quantify desirable properties of scoring procedures. Our approach is empirical, in that we compare the scoring methods by computing the effectiveness measures on data from the 2005 comparative evaluation of solvers for quantified Boolean formulas. The results of our experiments give useful indications about the relative strengths and weaknesses of the scoring methods, and also allow us to draw some conclusions that are independent of the specific method adopted.
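To illustrate the kind of voting-theory scoring method the abstract refers to, the sketch below implements a Borda-style count, where each problem instance acts as a "voter" that ranks solvers by runtime. This is a hypothetical illustration, not the paper's actual procedure; the solver names and runtimes are invented, and ties in runtime are not specially handled.

```python
# Hypothetical sketch of a voting-theory scoring method (Borda count).
# Each instance ranks the solvers by runtime (faster is better); a solver
# earns one point per solver it beats on that instance, and the scores
# are summed over all instances.

def borda_scores(runtimes):
    """runtimes: dict mapping instance -> {solver: runtime in seconds}.
    Returns the total Borda score of each solver over all instances."""
    scores = {}
    for instance, results in runtimes.items():
        # Sort solvers from fastest to slowest on this instance.
        ranked = sorted(results, key=results.get)
        n = len(ranked)
        for position, solver in enumerate(ranked):
            # Points = number of solvers ranked strictly below this one.
            scores[solver] = scores.get(solver, 0) + (n - 1 - position)
    return scores

# Invented example: three QBF solvers on two instances.
times = {
    "inst1": {"solverA": 1.2, "solverB": 3.4, "solverC": 0.8},
    "inst2": {"solverA": 0.5, "solverB": 1.0, "solverC": 5.0},
}
print(borda_scores(times))  # solverA: 3, solverB: 1, solverC: 2
```

A method of this kind summarizes performance purely from per-instance rankings, ignoring absolute runtimes; contest-style methods, by contrast, often weight solved instances or elapsed time directly, which is one axis along which the paper's effectiveness measures can discriminate.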
Documents deposited in UnissResearch are protected by copyright law.