Date: July 13th–17th, 2019
Benchmarks are one of the primary tools that machine learning researchers use to demonstrate the strengths and weaknesses of an algorithm, and to compare new algorithms to existing ones on a common ground. However, numerous researchers—including prominent researchers in the evolutionary computation field [1, 2, 3]—have raised concerns that the current benchmarking practices in machine learning are insufficient: most commonly-used benchmarks are too small, lack the complexity of real-world problems, or are easily solved by basic machine learning algorithms. As such, we need to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
This workshop will host speakers from around the world who will propose new standards for benchmarking evolutionary computation algorithms. These talks will focus on (i) characterizing current benchmarking methods to better understand what properties of an algorithm are tested via a benchmark comparison, and (ii) proposing improvements to benchmarking standards, for example via new benchmarks that fill gaps in current benchmarking suites or via better experimental methods. At the end of the workshop, we will host a panel discussion to review the merits of the proposed benchmarking standards and how we can integrate them into existing benchmarking workflows.
16:10: Introduction (William La Cava)
16:25: HIBACHI: A heuristic method for simulating data of arbitrary complexity to benchmark machine learning methods (Jason H. Moore)
16:55: Exploring the MLDA Benchmark on the Nevergrad Platform (Jeremy Rapin, Marcus Gallagher, Pascal Kerschke, Mike Preuss, Olivier Teytaud)
Comparison of Contemporary Evolutionary Algorithms on the Rotated Klee-Minty Problem (Michael Hellwig, Patrick Spettel, Hans-Georg Beyer)
17:55: Program Synthesis Benchmark Suite (Thomas Helmuth)
This workshop will be organized by Drs. William La Cava, Randal S. Olson, Patryk Orzechowski, and Ryan J. Urbanowicz. Bill, Patryk, and Ryan are from the Institute for Biomedical Informatics at the University of Pennsylvania (Philadelphia, PA, USA).
William La Cava
Dr. La Cava is a postdoctoral fellow in the Institute for Biomedical Informatics at Penn. He received his Ph.D. in 2016 from UMass Amherst under Professors Kourosh Danai and Lee Spector. His research focuses on identifying causal models of disease from patient health records and genome-wide association studies. His contributions in genetic programming include methods for local search, parent selection, and representation learning.
Randal S. Olson
Dr. Olson is the Lead Data Scientist at Life Epigenetics, Inc., where he is merging epigenetics research with advanced machine learning methods to improve life expectancy prediction for the life insurance industry. Dr. Olson received his Ph.D. from Michigan State University, where he studied under Prof. Christoph Adami at the BEACON Center. He has been actively involved in GECCO for several years and won best paper awards at GECCO in 2014 and 2016 for his work in evolutionary agent-based modeling and automated machine learning.
Patryk Orzechowski
Dr. Orzechowski is a postdoctoral researcher in artificial intelligence. He obtained his Ph.D. in Computer Science and a Master's degree in Automation and Robotics from AGH University of Science and Technology, Krakow, Poland. His scientific interests are in the areas of machine learning, bioinformatics, and artificial intelligence. He also specializes in data mining and mobile technologies.
Ryan J. Urbanowicz
Dr. Urbanowicz is an Assistant Professor of Informatics in the Perelman School of Medicine at the University of Pennsylvania. He received a Ph.D. in Genetics from Dartmouth College and Bachelor's and Master's degrees in Bioengineering from Cornell University. His research focuses on the development of data mining and machine learning methodologies for interpretably modeling complex patterns of association. He has specialized in areas including rule-based machine learning methods for complex bioinformatics problems, feature selection methodologies, and complex data simulation for proper algorithm evaluation and comparison. At GECCO, he has won two best paper awards and organized a number of workshops and tutorials over the last decade.