Strategies for reducing computational load¶
This tutorial covers two strategies for reducing the computational load of TPOT in order to decrease run time.
Successive Halving¶
This idea was first tested with TPOT by Parmentier et al. in "TPOT-SH: a Faster Optimization Algorithm to Solve the AutoML Problem on Large Datasets". The algorithm operates in two stages. Initially, it trains early generations using a small data subset and a large population size. Later generations then evaluate a smaller set of promising pipelines on larger, or even full, data portions. This approach rapidly identifies top-performing pipeline configurations through initial rough evaluations, followed by more comprehensive assessments. More information on this strategy in Tutorial 8.
In this tutorial, we will cover the following parameters:
population_size
initial_population_size
population_scaling
generations_until_end_population
budget_range
generations_until_end_budget
budget_scaling
stepwise_steps
Population size is the number of individuals evaluated each generation. Budget refers to the proportion of the data to sample. By manipulating these parameters, we can control how quickly the budget increases and how the population size changes over time. Most often, this is used to start the algorithm by evaluating a large number of pipelines on small subsets of the data to quickly narrow down the best models, before getting better estimates with larger samples on fewer pipelines. This can reduce overall computational cost by spending less time evaluating poorly performing pipelines.
population_size determines the number of individuals to evaluate each generation. Sometimes we may want to evaluate more or fewer individuals in the earlier generations. The initial_population_size parameter specifies the starting size of the population. The population size will gradually move from initial_population_size to population_size over the course of generations_until_end_population generations. population_scaling dictates how fast that scaling takes place. The interpolation over generations_until_end_population is done stepwise, with the number of steps specified by stepwise_steps.
The same process applies to budget scaling: the budget moves from budget_range[0] to budget_range[1] over generations_until_end_budget generations, with the rate controlled by budget_scaling.
The following cell illustrates how the population size and budget change over time with the given settings. (Note that TPOT happens to converge on this dataset fairly quickly, but we turn off early stopping to get the full run.)
import matplotlib.pyplot as plt
import tpot2

population_size = 30
initial_population_size = 100
population_scaling = .5
generations_until_end_population = 50

budget_range = [.3, 1]
generations_until_end_budget = 50
budget_scaling = .5
stepwise_steps = 5

# Both the population size and the budget scale stepwise over generations
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
interpolated_values_population = tpot2.utils.beta_interpolation(start=initial_population_size, end=population_size, n=generations_until_end_population, n_steps=stepwise_steps, scale=population_scaling)
interpolated_values_budget = tpot2.utils.beta_interpolation(start=budget_range[0], end=budget_range[1], n=generations_until_end_budget, n_steps=stepwise_steps, scale=budget_scaling)
ax1.step(list(range(len(interpolated_values_population))), interpolated_values_population, label="population size")
ax2.step(list(range(len(interpolated_values_budget))), interpolated_values_budget, label="budget", color='r')
ax1.set_xlabel("generation")
ax1.set_ylabel("population size")
ax2.set_ylabel("budget")
ax1.legend(loc='center left', bbox_to_anchor=(1.1, 0.4))
ax2.legend(loc='center left', bbox_to_anchor=(1.1, 0.3))
plt.show()
# Successive halving example: TPOT's built-in 'linear' template search space,
# with the population size and data budget scaled over generations.
import time
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.metrics
import sklearn.model_selection
import tpot2
X, y = sklearn.datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
scorer = sklearn.metrics.make_scorer(sklearn.metrics.roc_auc_score, needs_proba=True, multi_class='ovr')
est = tpot2.TPOTEstimator(
generations=50,
max_time_mins=None,
scorers=['roc_auc_ovr'],
scorers_weights=[1],
classification=True,
search_space = 'linear',
n_jobs=32,
cv=10,
verbose=3,
population_size=population_size,
initial_population_size=initial_population_size,
population_scaling = population_scaling,
generations_until_end_population = generations_until_end_population,
budget_range = budget_range,
generations_until_end_budget=generations_until_end_budget,
)
start = time.time()
est.fit(X_train, y_train)
print(f"total time: {time.time()-start}")
print("test score: ", scorer(est, X_test, y_test))
CV early pruning¶
Most often, we will be evaluating pipelines using cross validation. However, we can often tell within the first few folds whether the pipeline has a reasonable chance of outperforming the previous best pipelines. For example, if the best score so far is .92 AUROC and the average score of the first five folds of our current pipeline is only around .61, we can be reasonably confident that the remaining five folds are unlikely to put this pipeline ahead of the others. We can save a significant amount of compute by not evaluating the remaining folds. There are two strategies that TPOT can use to accomplish this (more information on these strategies in Tutorial 8):
- Threshold Pruning: Pipelines must achieve a score above a predefined percentile threshold (based on previous pipeline scores) to proceed in each cross-validation (CV) fold.
- Selection Pruning: Within each population, only the top N% of pipelines (ranked by performance in the previous CV fold) are selected to evaluate in the next fold.
We can further reduce computational load by terminating the evaluation of individual pipelines early if their first few CV scores are not promising. Note that this is different from early stopping of the full algorithm. In this section, we will cover:
threshold_evaluation_pruning
threshold_evaluation_scaling
min_history_threshold
selection_evaluation_pruning
selection_evaluation_scaling
Threshold early stopping uses previous scores to identify and terminate the cross-validation evaluation of poorly performing pipelines. We calculate percentile scores from the previously evaluated pipelines. A pipeline must reach the given percentile on each fold for the next fold to be evaluated; otherwise, the pipeline is discarded.
The threshold_evaluation_pruning parameter is a list that specifies the starting and ending percentiles to use as a threshold for evaluation early stopping. The threshold_evaluation_scaling parameter is a float that controls the rate at which the threshold moves from the starting to the ending percentile. The min_history_threshold parameter specifies the minimum number of previous scores needed before using threshold early stopping. This ensures that the algorithm has enough historical data to make an informed decision about when to stop evaluating pipelines.
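As a rough illustration of the mechanism (a minimal sketch with made-up numbers, not TPOT's internal code), the threshold check amounts to comparing a pipeline's scores on the folds evaluated so far against a percentile of the scores of previously evaluated pipelines:

import numpy as np

previous_scores = [0.78, 0.85, 0.92, 0.88, 0.81, 0.90]  # scores of previously evaluated pipelines (made up)
fold_scores = [0.62, 0.60, 0.63]  # current pipeline's scores on the folds completed so far (made up)

threshold = np.percentile(previous_scores, 80)  # e.g. an 80th-percentile threshold for this fold
if np.mean(fold_scores) < threshold:
    print(f"prune: mean fold score {np.mean(fold_scores):.2f} is below the threshold {threshold:.2f}")
else:
    print("continue to the next fold")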
Selection early stopping uses a selection algorithm after each fold to select which pipelines will be evaluated on the next fold. For example, after evaluating 100 individuals on fold 1, we may want to evaluate only the best 50 on the remaining folds.
The selection_evaluation_pruning parameter is a list that specifies the lower and upper percentages of the population to select after each round of CV. This is used to determine which individuals are evaluated on the next fold. The selection_evaluation_scaling parameter is a float that controls the rate at which the selection percentage moves from the starting to the ending value.
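The selection mechanism can be sketched the same way (again with made-up scores, not TPOT's internal code): after each fold, only the top fraction of the population advances.

import numpy as np

fold_scores = np.array([0.91, 0.62, 0.88, 0.75, 0.84, 0.70])  # fold-1 scores for six pipelines (made up)
select_fraction = 0.5  # keep the top 50% for the next fold
n_keep = int(len(fold_scores) * select_fraction)
survivors = np.argsort(fold_scores)[::-1][:n_keep]  # indices of the best-scoring pipelines
print("pipelines advancing to the next fold:", survivors)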
By manipulating these parameters, we can control which individuals are selected for evaluation on subsequent folds and when to stop evaluating pipelines that are not performing well.
In practice, the values of these parameters will depend on the specific problem and the available computational resources.
In the following sections, we will show you how to set and manipulate these parameters using Python code in a Jupyter Notebook. We will also provide examples of how these parameters can affect the performance of the algorithm.
(Note that in these small test cases you may not notice much, if any, performance improvement; these strategies are more likely to be beneficial in real-world scenarios with larger datasets and slower-evaluating pipelines.)
Considerations: It is important to be aware of how CV pruning interacts with the evolutionary algorithm. When pipelines are pruned with one of these methods, they are removed from the live population and thus are no longer used to inform the TPOT algorithm. If too many pipelines are pruned, this could reduce the diversity of pipelines per generation, and limit TPOT's ability to learn. Additionally, the pruning methods may interact with how long it takes TPOT to run. If the pruning algorithm removes the slightly less performant but faster running pipelines, TPOT will most likely fill the next generation with only slower running pipelines, thus technically increasing the total runtime. This may be acceptable since more compute is dedicated to the higher performing pipelines.
import matplotlib.pyplot as plt
import tpot2
import time
import sklearn
import sklearn.datasets

threshold_evaluation_pruning = [30, 90]
threshold_evaluation_scaling = .2
cv = 10

# The threshold percentile also scales stepwise, here across the CV folds
fig, ax1 = plt.subplots()
interpolated_values = tpot2.utils.beta_interpolation(start=threshold_evaluation_pruning[0], end=threshold_evaluation_pruning[-1], n=cv, n_steps=cv, scale=threshold_evaluation_scaling)
ax1.step(list(range(len(interpolated_values))), interpolated_values, label="threshold")
ax1.set_xlabel("fold")
ax1.set_ylabel("percentile")
plt.show()
import tpot2
from tpot2.search_spaces.pipelines import *
from tpot2.search_spaces.nodes import *
from tpot2.config.get_configspace import get_search_space
import sklearn.model_selection
import sklearn
# Define the building blocks for each step of the pipeline
selectors = get_search_space(["selectors", "selectors_classification", "Passthrough"], random_state=42)
estimators = get_search_space(['XGBClassifier'], random_state=42)
scalers = get_search_space(["scalers", "Passthrough"], random_state=42)

# Transformer layer: a dynamic union of transformers (or a skip), unioned with a passthrough
transformers_layer = UnionPipeline([
    ChoicePipeline([
        DynamicUnionPipeline(get_search_space(["transformers"], random_state=42)),
        get_search_space("SkipTransformer"),
    ]),
    get_search_space("Passthrough")
])
search_space = SequentialPipeline(search_spaces=[
scalers,
selectors,
transformers_layer,
estimators,
])
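As a quick sanity check, we can sample one concrete pipeline from this search space before passing it to TPOT. This assumes the generate()/export_pipeline() interface shown in the search space tutorials:

sampled_pipeline = search_space.generate(rng=1).export_pipeline()  # draw one pipeline from the space
print(sampled_pipeline)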
import matplotlib.pyplot as plt
import tpot2
import time
import sklearn
import sklearn.datasets
scorer = sklearn.metrics.make_scorer(sklearn.metrics.roc_auc_score, needs_proba=True, multi_class='ovr')
X, y = sklearn.datasets.make_classification(n_samples=5000, n_features=20, n_classes=5, random_state=1, n_informative=15, n_redundant=5, n_repeated=0, n_clusters_per_class=3, class_sep=.8)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
# search_space = tpot2.config.template_search_spaces.get_template_search_spaces("linear",inner_predictors=False, random_state=42)
/home/ribeirop/common/miniconda3/envs/tpot2env/lib/python3.10/site-packages/sklearn/metrics/_scorer.py:548: FutureWarning: The `needs_threshold` and `needs_proba` parameter are deprecated in version 1.4 and will be removed in 1.6. You can either let `response_method` be `None` or set it to `predict` to preserve the same behaviour.
  warnings.warn(
# no pruning
est = tpot2.TPOTEstimator(
generations=10,
max_time_mins=None,
scorers=['roc_auc_ovr'],
scorers_weights=[1],
classification=True,
search_space = search_space,
population_size=100,
n_jobs=32,
cv=cv,
verbose=3,
random_state=42,
)
start = time.time()
est.fit(X_train, y_train)
print(f"total time: {time.time()-start}")
print("test score: ", scorer(est, X_test, y_test))
Generation: 10%|█ | 1/10 [03:02<27:26, 182.98s/it]
Generation: 1 Best roc_auc_score score: 0.915278983783422
Generation: 20%|██ | 2/10 [06:11<24:51, 186.47s/it]
Generation: 2 Best roc_auc_score score: 0.9253965903409787
Generation: 30%|███ | 3/10 [10:33<25:46, 220.92s/it]
Generation: 3 Best roc_auc_score score: 0.9340480147661712
Generation: 40%|████ | 4/10 [15:07<24:10, 241.71s/it]
Generation: 4 Best roc_auc_score score: 0.9340480147661712
Generation: 50%|█████ | 5/10 [21:46<24:52, 298.58s/it]
Generation: 5 Best roc_auc_score score: 0.9340480147661712
Generation: 60%|██████ | 6/10 [25:45<18:32, 278.19s/it]
Generation: 6 Best roc_auc_score score: 0.9340480147661712
Generation: 70%|███████ | 7/10 [29:00<12:32, 250.97s/it]
Generation: 7 Best roc_auc_score score: 0.9340480147661712
Generation: 80%|████████ | 8/10 [34:14<09:02, 271.09s/it]
Generation: 8 Best roc_auc_score score: 0.9349103682633716
Generation: 90%|█████████ | 9/10 [37:44<04:12, 252.09s/it]
Generation: 9 Best roc_auc_score score: 0.9372520424560744
Generation: 100%|██████████| 10/10 [43:09<00:00, 258.96s/it]
Generation: 10 Best roc_auc_score score: 0.9398288783489072
total time: 2602.0502972602844
test score:  0.9426160568071598
import tpot2.config
import tpot2.config.template_search_spaces
import tpot2.search_spaces
# search_space = tpot2.config.get_search_space(["RandomForestClassifier"])
est = tpot2.TPOTEstimator(
generations=10,
max_time_mins=None,
scorers=['roc_auc_ovr'],
scorers_weights=[1],
classification=True,
search_space = search_space,
population_size=100,
n_jobs=32,
cv=cv,
verbose=3,
random_state=42,
threshold_evaluation_pruning = threshold_evaluation_pruning,
threshold_evaluation_scaling = threshold_evaluation_scaling,
)
start = time.time()
est.fit(X_train, y_train)
print(f"total time: {time.time()-start}")
print("test score: ", scorer(est, X_test, y_test))
Generation: 10%|█ | 1/10 [03:37<32:37, 217.51s/it]
Generation: 1 Best roc_auc_score score: 0.915278983783422
Generation: 20%|██ | 2/10 [05:38<21:26, 160.86s/it]
Generation: 2 Best roc_auc_score score: 0.915278983783422
Generation: 30%|███ | 3/10 [08:28<19:14, 164.99s/it]
Generation: 3 Best roc_auc_score score: 0.9212056169353746
Generation: 40%|████ | 4/10 [10:14<14:09, 141.53s/it]
Generation: 4 Best roc_auc_score score: 0.9212056169353746
Generation: 50%|█████ | 5/10 [13:23<13:13, 158.71s/it]
Generation: 5 Best roc_auc_score score: 0.9212056169353746
Generation: 60%|██████ | 6/10 [16:59<11:52, 178.22s/it]
Generation: 6 Best roc_auc_score score: 0.9212056169353746
Generation: 70%|███████ | 7/10 [19:54<08:51, 177.14s/it]
Generation: 7 Best roc_auc_score score: 0.9212056169353746
Generation: 80%|████████ | 8/10 [23:15<06:09, 184.68s/it]
Generation: 8 Best roc_auc_score score: 0.9277731650225494
Generation: 90%|█████████ | 9/10 [26:06<03:00, 180.55s/it]
Generation: 9 Best roc_auc_score score: 0.930278306286007
Generation: 100%|██████████| 10/10 [28:29<00:00, 170.96s/it]
Generation: 10 Best roc_auc_score score: 0.9336783524637781
/home/ribeirop/common/miniconda3/envs/tpot2env/lib/python3.10/site-packages/sklearn/decomposition/_fastica.py:595: UserWarning: n_components is too large: it will be set to 16
  warnings.warn(
/home/ribeirop/common/miniconda3/envs/tpot2env/lib/python3.10/site-packages/sklearn/decomposition/_fastica.py:128: ConvergenceWarning: FastICA did not converge. Consider increasing tolerance or the maximum number of iterations.
  warnings.warn(
total time: 1717.3964531421661
test score:  0.9449535389019192
import matplotlib.pyplot as plt
import tpot2

selection_evaluation_pruning = [.9, .3]
selection_evaluation_scaling = .2

# The fraction of the population selected each fold also scales stepwise
fig, ax1 = plt.subplots()
interpolated_values = tpot2.utils.beta_interpolation(start=selection_evaluation_pruning[0], end=selection_evaluation_pruning[-1], n=cv, n_steps=cv, scale=selection_evaluation_scaling)
ax1.step(list(range(len(interpolated_values))), interpolated_values, label="percent to select")
ax1.set_xlabel("fold")
ax1.set_ylabel("percent to select")
plt.show()
est = tpot2.TPOTEstimator(
generations=10,
max_time_mins=None,
scorers=['roc_auc_ovr'],
scorers_weights=[1],
classification=True,
search_space = search_space,
population_size=100,
n_jobs=32,
cv=cv,
verbose=3,
random_state=42,
selection_evaluation_pruning = selection_evaluation_pruning,
selection_evaluation_scaling = selection_evaluation_scaling,
)
start = time.time()
est.fit(X_train, y_train)
print(f"total time: {time.time()-start}")
print("test score: ", scorer(est, X_test, y_test))
Generation: 10%|█ | 1/10 [03:28<31:13, 208.18s/it]
Generation: 1 Best roc_auc_score score: 0.915278983783422
Generation: 20%|██ | 2/10 [05:29<20:54, 156.85s/it]
Generation: 2 Best roc_auc_score score: 0.9169454884359916
Generation: 30%|███ | 3/10 [08:32<19:42, 168.98s/it]
Generation: 3 Best roc_auc_score score: 0.9176524001433647
Generation: 40%|████ | 4/10 [11:08<16:22, 163.77s/it]
Generation: 4 Best roc_auc_score score: 0.9176524001433647
Generation: 50%|█████ | 5/10 [15:05<15:51, 190.38s/it]
Generation: 5 Best roc_auc_score score: 0.9176524001433647
Generation: 60%|██████ | 6/10 [18:02<12:23, 185.84s/it]
Generation: 6 Best roc_auc_score score: 0.9206270411396777
Generation: 70%|███████ | 7/10 [21:12<09:20, 186.94s/it]
Generation: 7 Best roc_auc_score score: 0.9224227652034017
Generation: 80%|████████ | 8/10 [23:53<05:57, 178.80s/it]
Generation: 8 Best roc_auc_score score: 0.9224227652034017
Generation: 90%|█████████ | 9/10 [26:37<02:54, 174.24s/it]
Generation: 9 Best roc_auc_score score: 0.9224227652034017
Generation: 100%|██████████| 10/10 [29:21<00:00, 176.14s/it]
Generation: 10 Best roc_auc_score score: 0.9224227652034017
/home/ribeirop/common/miniconda3/envs/tpot2env/lib/python3.10/site-packages/sklearn/decomposition/_fastica.py:595: UserWarning: n_components is too large: it will be set to 20
  warnings.warn(
total time: 1777.245548248291
test score:  0.9253163988063096
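Pipelines that were pruned before the last fold have no score recorded for the final CV step, so filtering evaluated_individuals on the roc_auc_score_step_9 column shows only the pipelines that survived selection pruning through all ten folds: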
est.evaluated_individuals[est.evaluated_individuals['roc_auc_score_step_9']>0]
roc_auc_score | Parents | Variation_Function | Individual | Generation | roc_auc_score_step_0 | Submitted Timestamp | Completed Timestamp | Eval Error | roc_auc_score_step_1 | roc_auc_score_step_2 | roc_auc_score_step_3 | roc_auc_score_step_4 | roc_auc_score_step_5 | roc_auc_score_step_6 | roc_auc_score_step_7 | roc_auc_score_step_8 | roc_auc_score_step_9 | Pareto_Front | Instance | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0.848068 | NaN | NaN | <tpot2.search_spaces.pipelines.sequential.Sequ... | 0.0 | 0.846478 | 1.727821e+09 | 1.727821e+09 | None | 0.839894 | 0.844619 | 0.848321 | 0.846915 | 0.857902 | 0.855875 | 0.827655 | 0.850938 | 0.862081 | NaN | (Passthrough(), RFE(estimator=ExtraTreesClassi... |
4 | 0.831502 | NaN | NaN | <tpot2.search_spaces.pipelines.sequential.Sequ... | 0.0 | 0.817219 | 1.727822e+09 | 1.727822e+09 | None | 0.827888 | 0.821911 | 0.825558 | 0.830020 | 0.831529 | 0.836955 | 0.844634 | 0.832499 | 0.846805 | NaN | (StandardScaler(), VarianceThreshold(threshold... |
5 | 0.830374 | NaN | NaN | <tpot2.search_spaces.pipelines.sequential.Sequ... | 0.0 | 0.817150 | 1.727822e+09 | 1.727822e+09 | None | 0.831885 | 0.820694 | 0.824899 | 0.824409 | 0.827861 | 0.833923 | 0.844308 | 0.832798 | 0.845818 | NaN | (MinMaxScaler(), SelectFromModel(estimator=Ext... |
6 | 0.850091 | NaN | NaN | <tpot2.search_spaces.pipelines.sequential.Sequ... | 0.0 | 0.843524 | 1.727821e+09 | 1.727821e+09 | None | 0.841176 | 0.840619 | 0.846209 | 0.849561 | 0.854367 | 0.858035 | 0.860165 | 0.845179 | 0.862077 | NaN | (Normalizer(norm='max'), SelectFwe(alpha=0.000... |
9 | 0.855569 | NaN | NaN | <tpot2.search_spaces.pipelines.sequential.Sequ... | 0.0 | 0.847828 | 1.727821e+09 | 1.727821e+09 | None | 0.846977 | 0.849937 | 0.853201 | 0.857401 | 0.859119 | 0.857783 | 0.863300 | 0.851526 | 0.868619 | NaN | (Normalizer(norm='l1'), Passthrough(), Feature... |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
990 | 0.821990 | (742, 742) | ind_mutate | <tpot2.search_spaces.pipelines.sequential.Sequ... | 9.0 | 0.813408 | 1.727823e+09 | 1.727823e+09 | None | 0.815070 | 0.830230 | 0.823377 | 0.823119 | 0.831668 | 0.827358 | 0.817293 | 0.814207 | 0.824168 | NaN | (MinMaxScaler(), SelectPercentile(percentile=5... |
991 | 0.899339 | (100, 100) | ind_mutate | <tpot2.search_spaces.pipelines.sequential.Sequ... | 9.0 | 0.893247 | 1.727823e+09 | 1.727823e+09 | None | 0.903268 | 0.894318 | 0.890992 | 0.902956 | 0.902985 | 0.898020 | 0.904124 | 0.898141 | 0.905341 | NaN | (Normalizer(norm='l1'), SelectFwe(alpha=0.0034... |
992 | 0.870868 | (179, 14) | ind_crossover | <tpot2.search_spaces.pipelines.sequential.Sequ... | 9.0 | 0.871226 | 1.727823e+09 | 1.727823e+09 | None | 0.854742 | 0.865197 | 0.872427 | 0.869312 | 0.880744 | 0.872265 | 0.877524 | 0.866365 | 0.878881 | NaN | (Normalizer(norm='l1'), SelectFromModel(estima... |
994 | 0.815212 | (362, 362) | ind_mutate | <tpot2.search_spaces.pipelines.sequential.Sequ... | 9.0 | 0.830802 | 1.727823e+09 | 1.727823e+09 | None | 0.807573 | 0.817124 | 0.800161 | 0.819611 | 0.818811 | 0.833061 | 0.803679 | 0.794094 | 0.827200 | NaN | (Normalizer(norm='l1'), SelectPercentile(perce... |
995 | 0.865588 | (670, 670) | ind_mutate | <tpot2.search_spaces.pipelines.sequential.Sequ... | 9.0 | 0.867900 | 1.727823e+09 | 1.727823e+09 | None | 0.870824 | 0.876060 | 0.871468 | 0.853827 | 0.864080 | 0.867749 | 0.853553 | 0.849620 | 0.880796 | NaN | (Normalizer(), SelectPercentile(percentile=71.... |
324 rows × 20 columns
All of the above methods can be used independently or simultaneously as done below:
est = tpot2.TPOTEstimator(
generations=10,
max_time_mins=None,
scorers=['roc_auc_ovr'],
scorers_weights=[1],
classification=True,
search_space = search_space,
n_jobs=3,
cv=cv,
verbose=3,
population_size=population_size,
initial_population_size=initial_population_size,
population_scaling = population_scaling,
generations_until_end_population = generations_until_end_population,
budget_range = budget_range,
generations_until_end_budget=generations_until_end_budget,
threshold_evaluation_pruning = threshold_evaluation_pruning,
threshold_evaluation_scaling = threshold_evaluation_scaling,
selection_evaluation_pruning = selection_evaluation_pruning,
selection_evaluation_scaling = selection_evaluation_scaling,
)
start = time.time()
est.fit(X_train, y_train)
print(f"total time: {time.time()-start}")
print("test score: ", scorer(est, X_test, y_test))