Date: Sunday, July 16, 2017 (14:00–15:50)
We will have four presentations during the workshop, each 15 minutes long, plus 5 minutes for questions.
Generating custom classification datasets by targeting the instance space
by Mario Andrés Muñoz & Kate Smith-Miles
CryptoBench: Benchmarking Evolutionary Algorithms with Cryptographic Problems
by Stjepan Picek, Domagoj Jakobovic, & Una-May O’Reilly
On the Difficulty of Benchmarking Inductive Program Synthesis Methods
by Edward Pantridge, Thomas Helmuth, Nicholas Freitag McPhee, & Lee Spector
Performance Testing of Automated Modeling for Industrial Applications
by Dylan Sherry & Michael Schmidt
Following the talks, we will have about 15 minutes for general discussion of the talks and benchmarking standards for evolutionary computation.
Benchmarks are one of the primary tools that machine learning researchers use to demonstrate the strengths and weaknesses of an algorithm, and to compare new algorithms to existing ones on common ground. However, numerous researchers—including prominent researchers in the evolutionary computation field [1, 2, 3]—have raised concerns that current benchmarking practices in machine learning are insufficient: most commonly used benchmarks are too small, lack the complexity of real-world problems, or are easily solved by basic machine learning algorithms. As such, we need to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
This workshop will host speakers from around the world who will propose new standards for benchmarking evolutionary computation algorithms. These talks will focus on (i) characterizing current benchmarking methods to better understand what properties of an algorithm are tested via a benchmark comparison, and (ii) proposing improvements to benchmarking standards, for example via new benchmarks that fill gaps in current benchmarking suites or via better experimental methods. At the end of the workshop, we will host a panel discussion to review the merits of the proposed benchmarking standards and how we can integrate them into existing benchmarking workflows.
Call for Papers
The focus of this workshop is to highlight promising new standards for benchmarking practices in evolutionary computation research. As such, we are soliciting papers on topics that could include but are not limited to:
- Examining the merits of, or issues with, current benchmarking practices.
- Development or expansion of benchmark data archives or tools.
- The importance of simulated vs. real-world benchmarks.
- Analysis of, or comparisons between, established benchmarks.
- Targeting benchmarks to different domains.
Workshop paper submission deadline: March 27, 2017
Notification of acceptance: April 10, 2017
Camera-ready deadline: April 27, 2017
Registration deadline: May 1, 2017
Submitted papers must not exceed 8 pages and must comply with the GECCO 2017 Call for Papers Preparation Instructions. Note, however, that the workshop's review process is not double-blind, so authors' information should be included in the paper.
All accepted papers will be presented at the workshop and appear in the GECCO Conference Companion Proceedings.
This workshop will be organized by Drs. William La Cava, Randal S. Olson, Patryk Orzechowski, and Ryan J. Urbanowicz, all from the Institute for Biomedical Informatics at the University of Pennsylvania (Philadelphia, PA, USA).
Dr. La Cava is a postdoctoral fellow who received his Ph.D. from the University of Massachusetts Amherst under Professors Kourosh Danai and Lee Spector. His research focus is system identification for dynamic systems in statistical genetics. He has contributed papers to GECCO in the genetic programming track on methods for local search and parent selection.
Dr. Olson is a Senior Data Scientist working on open source software for evolutionary computation and machine learning research. Dr. Olson received his Ph.D. from Michigan State University, where he studied under Prof. Christoph Adami at the BEACON Center. He has been actively involved in GECCO for several years and won best paper awards at GECCO in 2014 and 2016 for his work in evolutionary agent-based modeling and automated machine learning.
Dr. Orzechowski is a postdoctoral researcher in AI. He obtained his Ph.D. in Computer Science and a Master's degree in Automation and Robotics from the AGH University of Science and Technology in Krakow, Poland. His scientific interests are in the areas of machine learning, bioinformatics, and artificial intelligence. He also specializes in data mining and mobile technologies.
Dr. Urbanowicz is a research associate with a Ph.D. in Genetics from Dartmouth College and a Master's degree in Bioengineering from Cornell University. His research focuses on the development of rule-based machine learning methods for complex bioinformatics problems, as well as complex data simulation for proper algorithm evaluation and comparison. At GECCO, he has authored two best papers and has organized the rule-based machine learning workshop and tutorial for four years each.