Using TPOT
What to expect from AutoML software
Automated machine learning (AutoML) takes a higher-level approach to machine learning than most practitioners are used to, so we've gathered a handful of guidelines on what to expect when running AutoML software such as TPOT.
AutoML algorithms aren't intended to run for only a few minutes
Of course, you can run TPOT for only a few minutes and it will find a reasonably good pipeline for your dataset.
However, if you don't run TPOT for long enough, it may not find the best possible pipeline for your dataset. It may not even find any suitable pipeline at all, in which case a RuntimeError('A pipeline has not yet been optimized. Please call fit() first.') will be raised.
Often it is worthwhile to run multiple instances of TPOT in parallel for a long time (hours to days) to allow TPOT to thoroughly search
the pipeline space for your dataset.
AutoML algorithms can take a long time to finish their search
AutoML algorithms aren't as simple as fitting one model on the dataset; they are considering multiple machine learning algorithms (random forests, linear models, SVMs, etc.) in a pipeline with multiple preprocessing steps (missing value imputation, scaling, PCA, feature selection, etc.), the hyperparameters for all of the models and preprocessing steps, as well as multiple ways to ensemble or stack the algorithms within the pipeline.
As such, TPOT will take a while to run on larger datasets, but it's important to realize why. With the default TPOT settings (100 generations with 100 population size), TPOT will evaluate 10,000 pipeline configurations before finishing. To put this number into context, think about a grid search of 10,000 hyperparameter combinations for a machine learning algorithm and how long that grid search will take. That is 10,000 model configurations to evaluate with 10-fold cross-validation, which means that roughly 100,000 models are fit and evaluated on the training data in one grid search. That's a time-consuming procedure, even for simpler models like decision trees.
Typical TPOT runs will take hours to days to finish (unless it's a small dataset), but you can always interrupt
the run partway through and see the best results so far. TPOT also provides a warm_start
parameter that
lets you restart a TPOT run from where it left off.
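As a minimal sketch (assuming X_train and y_train are already defined): with warm_start=True, a second call to fit() resumes the search from the previously evolved population instead of starting over.
from tpot import TPOTClassifier
tpot = TPOTClassifier(generations=5, population_size=20, warm_start=True, verbosity=2)
tpot.fit(X_train, y_train)  # initial (possibly interrupted) run
tpot.fit(X_train, y_train)  # resumes the search from the previous population rather than starting over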
AutoML algorithms can recommend different solutions for the same dataset
If you're working with a reasonably complex dataset or run TPOT for a short amount of time, different TPOT runs may result in different pipeline recommendations. TPOT's optimization algorithm is stochastic in nature, which means that it uses randomness (in part) to search the possible pipeline space. When two TPOT runs recommend different pipelines, this means that the TPOT runs didn't converge due to lack of time or that multiple pipelines perform more-or-less the same on your dataset.
This is actually an advantage over fixed grid search techniques: TPOT is meant to be an assistant that gives you ideas on how to solve a particular machine learning problem by exploring pipeline configurations that you might have never considered, then leaves the fine-tuning to more constrained parameter tuning techniques such as grid search.
TPOT with code
We've taken care to design the TPOT interface to be as similar as possible to scikit-learn.
TPOT can be imported just like any regular Python module. To import TPOT, type:
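from tpot import TPOTClassifier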
then create an instance of TPOT as follows:
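pipeline_optimizer = TPOTClassifier()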
It's also possible to use TPOT for regression problems with the TPOTRegressor
class. Other than the class name,
a TPOTRegressor
is used the same way as a TPOTClassifier
. You can read more about the TPOTClassifier
and TPOTRegressor
classes in the API documentation.
Some example code with custom TPOT parameters might look like:
pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5,
random_state=42, verbosity=2)
Now TPOT is ready to optimize a pipeline for you. You can tell TPOT to optimize a pipeline based on a data set with the fit
function:
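pipeline_optimizer.fit(X_train, y_train)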
The fit function initializes the genetic programming algorithm to find the highest-scoring pipeline based on average k-fold cross-validation. Then, the pipeline is trained on the entire set of provided samples, and the TPOT instance can be used as a fitted model.
You can then proceed to evaluate the final pipeline on the testing set with the score
function:
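print(pipeline_optimizer.score(X_test, y_test))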
Finally, you can tell TPOT to export the corresponding Python code for the optimized pipeline to a text file with the export
function:
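pipeline_optimizer.export('tpot_exported_pipeline.py')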
Once this code finishes running, tpot_exported_pipeline.py
will contain the Python code for the optimized pipeline.
Below is a full example script using TPOT to optimize a pipeline, score it, and export the best pipeline to a file.
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
pipeline_optimizer = TPOTClassifier(generations=5, population_size=20, cv=5,
random_state=42, verbosity=2)
pipeline_optimizer.fit(X_train, y_train)
print(pipeline_optimizer.score(X_test, y_test))
pipeline_optimizer.export('tpot_exported_pipeline.py')
Check our examples to see TPOT applied to some specific data sets.
TPOT on the command line
To use TPOT via the command line, enter the following command with a path to the data file:
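tpot /path_to/data_file.csv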
An example command-line call to TPOT may look like:
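tpot data/mnist.csv -is , -target class -o tpot_exported_pipeline.py -g 5 -p 20 -cv 5 -s 42 -v 2
All of the arguments used here are described in the table below.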
TPOT offers several arguments that can be provided at the command line. To see brief descriptions of these arguments, enter the following command:
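tpot --help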
Detailed descriptions of the command-line arguments are below.
Argument | Parameter | Valid values | Effect |
---|---|---|---|
-is | INPUT_SEPARATOR | Any string | Character used to separate columns in the input file. |
-target | TARGET_NAME | Any string | Name of the target column in the input file. |
-mode | TPOT_MODE | ['classification', 'regression'] | Whether TPOT is being used for a supervised classification or regression problem. |
-o | OUTPUT_FILE | String path to a file | File to export the code for the final optimized pipeline. |
-g | GENERATIONS | Any positive integer or None | Number of iterations to run the pipeline optimization process. It must be a positive number or None. If None, the parameter max_time_mins must be defined as the runtime limit. Generally, TPOT will work better when you give it more generations (and therefore time) to optimize the pipeline. TPOT will evaluate POPULATION_SIZE + GENERATIONS x OFFSPRING_SIZE pipelines in total. |
-p | POPULATION_SIZE | Any positive integer | Number of individuals to retain in the GP population every generation. Generally, TPOT will work better when you give it more individuals (and therefore time) to optimize the pipeline. TPOT will evaluate POPULATION_SIZE + GENERATIONS x OFFSPRING_SIZE pipelines in total. |
-os | OFFSPRING_SIZE | Any positive integer | Number of offspring to produce in each GP generation. By default, OFFSPRING_SIZE = POPULATION_SIZE. |
-mr | MUTATION_RATE | [0.0, 1.0] | GP mutation rate in the range [0.0, 1.0]. This tells the GP algorithm how many pipelines to apply random changes to every generation. We recommend using the default parameter unless you understand how the mutation rate affects GP algorithms. |
-xr | CROSSOVER_RATE | [0.0, 1.0] | GP crossover rate in the range [0.0, 1.0]. This tells the GP algorithm how many pipelines to "breed" every generation. We recommend using the default parameter unless you understand how the crossover rate affects GP algorithms. |
-scoring | SCORING_FN | 'accuracy', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', or 'my_module.scorer_name' | Function used to evaluate the quality of a given pipeline for the problem. By default, accuracy is used for classification and mean squared error (MSE) is used for regression. TPOT assumes that any function with "error" or "loss" in the name is meant to be minimized, whereas any other functions will be maximized. my_module.scorer_name: you can also specify your own function or a full python path to an existing one. See the section on scoring functions for more details. |
-cv | CV | Any integer > 1 | Number of folds to evaluate each pipeline over in k-fold cross-validation during the TPOT optimization process. |
-sub | SUBSAMPLE | (0.0, 1.0] | Subsample ratio of the training instances. Setting it to 0.5 means that TPOT randomly collects half of the training samples for the pipeline optimization process. |
-njobs | NUM_JOBS | Any positive integer or -1 | Number of CPUs for evaluating pipelines in parallel during the TPOT optimization process. Assigning this to -1 will use as many cores as are available on the computer. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used; thus for n_jobs = -2, all CPUs but one are used. |
-maxtime | MAX_TIME_MINS | Any positive integer | How many minutes TPOT has to optimize the pipeline. If not None, this setting will allow TPOT to run until max_time_mins minutes have elapsed and then stop. TPOT will stop earlier if generations is set and all generations have already been evaluated. |
-maxeval | MAX_EVAL_MINS | Any positive float | How many minutes TPOT has to evaluate a single pipeline. Setting this parameter to higher values will allow TPOT to consider more complex pipelines, but will also allow TPOT to run longer. |
-s | RANDOM_STATE | Any positive integer | Random number generator seed for reproducibility. Set this seed if you want your TPOT run to be reproducible with the same seed and data set in the future. |
-config | CONFIG_FILE | String or file path | Operators and parameter configurations in TPOT: either a path to a custom configuration file or the string name of one of the built-in configurations described below. |
-template | TEMPLATE | String | Template of a predefined pipeline structure. The option is for specifying a desired structure for the machine learning pipeline evaluated in TPOT. So far this option only supports a linear pipeline structure. Each step in the pipeline should be a main class of operators (Selector, Transformer, Classifier or Regressor) or a specific operator (e.g. `SelectPercentile`) defined in the TPOT operator configuration. If one step is a main class, TPOT will randomly assign all subclass operators (subclasses of [`SelectorMixin`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_selection/base.py#L17), [`TransformerMixin`](https://scikit-learn.org/stable/modules/generated/sklearn.base.TransformerMixin.html), [`ClassifierMixin`](https://scikit-learn.org/stable/modules/generated/sklearn.base.ClassifierMixin.html) or [`RegressorMixin`](https://scikit-learn.org/stable/modules/generated/sklearn.base.RegressorMixin.html) in scikit-learn) to that step. Steps in the template are delimited by "-", e.g. "SelectPercentile-Transformer-Classifier". By default the template is None and TPOT generates tree-based pipelines randomly. See the template option in the tpot section for more details. |
-memory | MEMORY | String or file path | If supplied, the pipeline will cache each transformer after calling fit. This feature is used to avoid recomputing the fitted transformers within a pipeline if the parameters and input data are identical to another fitted pipeline during the optimization process. Memory caching mode in TPOT: either a path to a caching directory or the string 'auto' (TPOT uses memory caching with a temporary directory and cleans it up upon shutdown). See the pipeline caching section for more details. |
-cf | CHECKPOINT_FOLDER | Folder path | If supplied, a folder you created in which TPOT will periodically save the pipelines in the Pareto front so far while optimizing. This is useful in multiple cases, such as recovering results after an unexpected crash or inspecting pipelines while TPOT is still optimizing. Example: mkdir my_checkpoints, then pass -cf ./my_checkpoints. |
-es | EARLY_STOP | Any positive integer | How many generations TPOT checks for improvement in the optimization process. The optimization ends if there is no improvement within the given number of generations. |
-v | VERBOSITY | {0, 1, 2, 3} | How much information TPOT communicates while it is running. 0 = none, 1 = minimal, 2 = high, 3 = all. A setting of 2 or higher will add a progress bar during the optimization procedure. |
-log | LOG | Folder path | Save progress content to a file. |
--no-update-check | | | Flag indicating whether the TPOT version checker should be disabled. |
--version | | | Show TPOT's version number and exit. |
--help | | | Show TPOT's help documentation and exit. |
Scoring functions
TPOT makes use of sklearn.model_selection.cross_val_score
for evaluating pipelines, and as such offers the same support for scoring functions. There are two ways to make use of scoring functions with TPOT:
- You can pass in a string to the scoring parameter from the list above. Any other strings will cause TPOT to throw an exception.
- You can pass the callable object/function with signature scorer(estimator, X, y), where estimator is the trained estimator to use for scoring, X are the features that will be passed to estimator.predict, and y are the target values for X. To do this, you should implement your own function. See the example below for further explanation.
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import make_scorer
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# Make a custom metric function
def my_custom_accuracy(y_true, y_pred):
return float(sum(y_pred == y_true)) / len(y_true)
# Make a custom scorer from the custom metric function
# Note: greater_is_better=False in make_scorer below would mean that the scoring function should be minimized.
my_custom_scorer = make_scorer(my_custom_accuracy, greater_is_better=True)
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
scoring=my_custom_scorer)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')
- my_module.scorer_name: You can also use a custom score_func(y_true, y_pred) or scorer(estimator, X, y) function through the command line by adding the argument -scoring my_module.scorer to your command-line call. TPOT will import your module and use the custom scoring function from there. TPOT will include your current working directory when importing the module, so you can place it in the same directory where you are going to run TPOT. Example: -scoring sklearn.metrics.auc will use the function auc from the sklearn.metrics module.
Built-in TPOT configurations
TPOT comes with a handful of default operators and parameter configurations that we believe work well for optimizing machine learning pipelines. Below is a list of the current built-in configurations that come with TPOT.
Configuration Name | Description | Operators |
---|---|---|
Default TPOT | TPOT will search over a broad range of preprocessors, feature constructors, feature selectors, models, and parameters to find a series of operators that minimize the error of the model predictions. Some of these operators are complex and may take a long time to run, especially on larger datasets. Note: This is the default configuration for TPOT. To use this configuration, use the default value (None) for the config_dict parameter. | Classification, Regression |
TPOT light | TPOT will search over a restricted range of preprocessors, feature constructors, feature selectors, models, and parameters to find a series of operators that minimize the error of the model predictions. Only simpler and fast-running operators will be used in these pipelines, so TPOT light is useful for finding quick and simple pipelines for a classification or regression problem. This configuration works for both the TPOTClassifier and TPOTRegressor. | Classification, Regression |
TPOT MDR | TPOT will search over a series of feature selectors and Multifactor Dimensionality Reduction models to find a series of operators that maximize prediction accuracy. The TPOT MDR configuration is specialized for genome-wide association studies (GWAS), and is described in detail online here. Note that TPOT MDR may be slow to run because the feature selection routines are computationally expensive, especially on large datasets. | Classification, Regression |
TPOT sparse | TPOT uses a configuration dictionary with a one-hot encoder and the operators normally included in TPOT that also support sparse matrices. This configuration works for both the TPOTClassifier and TPOTRegressor. | Classification, Regression |
TPOT NN | TPOT uses the same configuration as "Default TPOT" plus additional neural network estimators written in PyTorch (currently only `tpot.builtins.PytorchLRClassifier` and `tpot.builtins.PytorchMLPClassifier`). Currently only classification is supported, but future releases will include regression estimators. | Classification |
TPOT cuML | TPOT will search over a restricted configuration using the GPU-accelerated estimators in RAPIDS cuML and DMLC XGBoost. This configuration requires an NVIDIA Pascal architecture or better GPU with compute capability 6.0+, and that the library cuML is installed. With this configuration, all model training and predicting will be GPU-accelerated. This configuration is particularly useful for medium-sized and larger datasets on which CPU-based estimators are a common bottleneck, and works for both the TPOTClassifier and TPOTRegressor. | Classification, Regression |
To use any of these configurations, simply pass the string name of the configuration to the config_dict
parameter (or -config
on the command line). For example, to use the "TPOT light" configuration:
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
config_dict='TPOT light')
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')
Customizing TPOT's operators and parameters
Beyond the default configurations that come with TPOT, in some cases it is useful to limit the algorithms and parameters that TPOT considers. For that reason, we allow users to provide TPOT with a custom configuration for its operators and parameters.
The custom TPOT configuration must be in nested dictionary format, where the first level key is the path and name of the operator (e.g., sklearn.naive_bayes.MultinomialNB
) and the second level key is the corresponding parameter name for that operator (e.g., fit_prior
). The second level key should point to a list of parameter values for that parameter, e.g., 'fit_prior': [True, False]
.
For a simple example, the configuration could be:
tpot_config = {
'sklearn.naive_bayes.GaussianNB': {
},
'sklearn.naive_bayes.BernoulliNB': {
'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
'fit_prior': [True, False]
},
'sklearn.naive_bayes.MultinomialNB': {
'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
'fit_prior': [True, False]
}
}
in which case TPOT would only consider pipelines containing GaussianNB
, BernoulliNB
, MultinomialNB
, and tune those algorithms' parameters in the ranges provided. This dictionary can be passed directly within the code to the TPOTClassifier
/TPOTRegressor
config_dict
parameter, described above. For example:
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
tpot_config = {
'sklearn.naive_bayes.GaussianNB': {
},
'sklearn.naive_bayes.BernoulliNB': {
'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
'fit_prior': [True, False]
},
'sklearn.naive_bayes.MultinomialNB': {
'alpha': [1e-3, 1e-2, 1e-1, 1., 10., 100.],
'fit_prior': [True, False]
}
}
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
config_dict=tpot_config)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')
Command-line users must create a separate .py
file with the custom configuration and provide the path to the file to the tpot
call. For example, if the simple example configuration above is saved in tpot_classifier_config.py
, that configuration could be used on the command line with the command:
tpot data/mnist.csv -is , -target class -config tpot_classifier_config.py -g 5 -p 20 -v 2 -o tpot_exported_pipeline.py
When using the command-line interface, the configuration file specified in the -config
parameter must name its custom TPOT configuration tpot_config
. Otherwise, TPOT will not be able to locate the configuration dictionary.
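For instance, tpot_classifier_config.py could contain nothing more than a dictionary bound to the name tpot_config (a minimal sketch):
# tpot_classifier_config.py -- the dictionary must be named tpot_config
tpot_config = {
'sklearn.naive_bayes.GaussianNB': {
}
}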
For more detailed examples of how to customize TPOT's operator configuration, see the default configurations for classification and regression in TPOT's source code.
Note that you must have all of the corresponding packages for the operators installed on your computer, otherwise TPOT will not be able to use them. For example, if XGBoost is not installed on your computer, then TPOT will simply not import or use XGBoost in the pipelines it considers.
Template option in TPOT
The template option provides a way to specify a desired structure for the machine learning pipeline, which may reduce TPOT computation time and potentially provide more interpretable results. The current implementation only supports linear pipelines.
Below is a simple example of using the template option. The pipelines generated and evaluated in TPOT will follow this structure: the first step is a feature selector (a subclass of SelectorMixin), the second step is a feature transformer (a subclass of TransformerMixin), and the third step is a classifier for classification (a subclass of ClassifierMixin). The last step must be Classifier for a TPOTClassifier's template, but Regressor for a TPOTRegressor. Note: although SelectorMixin is a subclass of TransformerMixin in scikit-learn, Transformer in this option excludes subclasses of SelectorMixin.
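A minimal sketch of such a template, passed to the TPOTClassifier constructor:
from tpot import TPOTClassifier
tpot_obj = TPOTClassifier(template='Selector-Transformer-Classifier')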
If a specific operator, e.g. SelectPercentile
, is preferred for usage in the 1st step of the pipeline, the template can be defined like 'SelectPercentile-Transformer-Classifier'.
FeatureSetSelector in TPOT
FeatureSetSelector is a special new operator in TPOT. This operator enables feature selection based on a priori expert knowledge. For example, in RNA-seq gene expression analysis, this operator can be used to select one or more gene (feature) sets based on GO (Gene Ontology) terms or annotated gene sets from the Molecular Signatures Database (MSigDB) in the first step of the pipeline, via the template option above, in order to reduce dimensionality and TPOT computation time. This operator requires a dataset list in csv format. In this csv file, there are only three columns: the first column is the feature set names, the second column is the total number of features in each set, and the third column is a list of feature names (if the input X is a pandas.DataFrame) or indexes (if the input X is a numpy.ndarray) delimited by ";". An illustrative sketch of such a file and an example of how to use this operator in TPOT are shown below.
Please check our preprint paper for more details.
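For illustration only, a subset file following the three-column format described above might contain rows like these (the feature set names and features here are hypothetical, not taken from TPOT's own test data):
GO_0000001,3,featureA;featureB;featureC
GO_0000002,2,featureD;featureE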
from tpot import TPOTClassifier
import numpy as np
import pandas as pd
from tpot.config import classifier_config_dict
test_data = pd.read_csv("https://raw.githubusercontent.com/EpistasisLab/tpot/master/tests/tests.csv")
test_X = test_data.drop("class", axis=1)
test_y = test_data['class']
# add FeatureSetSelector into tpot configuration
classifier_config_dict['tpot.builtins.FeatureSetSelector'] = {
'subset_list': ['https://raw.githubusercontent.com/EpistasisLab/tpot/master/tests/subset_test.csv'],
'sel_subset': [0,1] # select only one feature set at a time; each value is the index of a subset in the list above
#'sel_subset': list(combinations(range(3), 2)) # select two feature sets at a time (requires: from itertools import combinations)
}
tpot = TPOTClassifier(generations=5,
population_size=50, verbosity=2,
template='FeatureSetSelector-Transformer-Classifier',
config_dict=classifier_config_dict)
tpot.fit(test_X, test_y)
Pipeline caching in TPOT
With the memory
parameter, pipelines can cache the results of each transformer after fitting them. This feature is used to avoid repeated computation by transformers within a pipeline if the parameters and input data are identical to another fitted pipeline during the optimization process. TPOT allows users to specify a custom directory path or joblib.Memory
in case they want to re-use the memory cache in future TPOT runs (or a warm_start
run).
There are three methods for enabling memory caching in TPOT:
from tpot import TPOTClassifier
from tempfile import mkdtemp
from joblib import Memory
from shutil import rmtree
# Method 1, auto mode: TPOT uses memory caching with a temporary directory and cleans it up upon shutdown
tpot = TPOTClassifier(memory='auto')
# Method 2, with a custom directory for memory caching
tpot = TPOTClassifier(memory='/to/your/path')
# Method 3, with a Memory object
cachedir = mkdtemp() # Create a temporary folder
memory = Memory(location=cachedir, verbose=0) # note: older joblib versions named this argument cachedir
tpot = TPOTClassifier(memory=memory)
# Clear the cache directory when you don't need it anymore
rmtree(cachedir)
Note: TPOT does NOT clean up memory caches if users set a custom directory path or Memory object. We recommend that you clean up the memory caches when you don't need them anymore.
Crash/freeze issue with n_jobs > 1 under OSX or Linux
Internally, TPOT uses joblib to fit estimators in parallel. This is the same parallelization framework used by scikit-learn. However, it may crash or freeze with n_jobs > 1 under OSX or Linux, just as scikit-learn does, especially with large datasets.
One solution is to configure Python's multiprocessing
module to use the forkserver
start method (instead of the default fork
) to manage the process pools. You can enable the forkserver
mode globally for your program by putting the following code into your main script:
import multiprocessing
# other imports, custom code, load data, define model...
if __name__ == '__main__':
multiprocessing.set_start_method('forkserver')
# call scikit-learn utils or tpot utils with n_jobs > 1 here
More information about these start methods can be found in the multiprocessing documentation.
Parallel Training with Dask
For large problems, or when working in a Jupyter notebook, we highly recommend distributing the work on a Dask cluster. The dask-examples binder has a runnable example with a small dask cluster.
To use your Dask cluster to fit a TPOT model, specify the use_dask
keyword when you create the TPOT estimator. Note: if use_dask=True
, TPOT will use as many cores as are available on your Dask cluster. If n_jobs is specified, it controls the chunk size of parallel training (10*n_jobs if that is less than the offspring size).
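For example, a minimal sketch (assuming a Dask client/cluster is already running and that X_train and y_train are defined):
from tpot import TPOTClassifier
# use_dask=True distributes pipeline evaluation across the Dask cluster
tpot = TPOTClassifier(generations=5, population_size=40, use_dask=True, verbosity=2)
tpot.fit(X_train, y_train)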
This will use all the workers on your cluster to do the training, and use Dask-ML's pipeline rewriting to avoid re-fitting estimators multiple times on the same set of data. It will also provide fine-grained diagnostics in the distributed scheduler UI.
Alternatively, Dask implements a joblib backend.
You can instruct TPOT to use the distributed backend during training by specifying a joblib.parallel_backend
:
import joblib
import distributed.joblib  # registers the Dask backend for joblib; only needed (and only available) on older dask/joblib versions
from dask.distributed import Client
from tpot import TPOTClassifier
# connect to the cluster
client = Client('scheduler-address')
# create the estimator normally
estimator = TPOTClassifier(n_jobs=-1)
# perform the fit in this context manager
with joblib.parallel_backend("dask"):
estimator.fit(X, y)
See dask's distributed joblib integration for more.
Neural Networks in TPOT (tpot.nn
)
Support for neural network models and deep learning is an experimental feature newly added to TPOT. Available neural network architectures are provided by the tpot.nn
module. Unlike regular sklearn
estimators, these models need to be written by hand, and must also inherit the appropriate base classes provided by sklearn
for all of their built-in modules. In other words, they need to implement methods like .fit()
, fit_transform()
, get_params()
, etc., as described in detail on Developing scikit-learn estimators.
Telling TPOT to use built-in PyTorch neural network models
Mainly due to the issues described below, TPOT won't use its neural network models unless you explicitly tell it to do so. This is done as follows:
- Use import tpot.nn before instantiating any TPOT estimators.
- Use a configuration dictionary that includes one or more tpot.nn estimators, either by writing one manually, including one from a file, or by importing the configuration in tpot/config/classifier_nn.py. A very simple example that will force TPOT to only use a PyTorch-based logistic regression classifier as its main estimator is shown after this list.
- Alternatively, use a template string including PytorchLRClassifier or PytorchMLPClassifier while loading the TPOT-NN configuration dictionary.
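A minimal, illustrative sketch of such a configuration (an empty inner dictionary means no hyperparameters are tuned for the operator; parameter ranges can be added exactly as in the custom configuration section above):
import tpot.nn
from tpot import TPOTClassifier
# restrict TPOT to the PyTorch-based logistic regression classifier
tpot_config = {
'tpot.builtins.PytorchLRClassifier': {
}
}
clf = TPOTClassifier(config_dict=tpot_config, verbosity=2)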
Neural network models are notorious for being extremely sensitive to their initialization parameters, so you may need to heavily adjust tpot.nn
configuration dictionaries in order to attain good performance on your dataset.
A simple example of using TPOT-NN is shown in examples.
Important caveats
- Neural network models (especially when they reach moderately large sizes) take a notoriously large amount of time and computing power to train. You should expect tpot.nn neural networks to train several orders of magnitude slower than their sklearn alternatives. This can be alleviated somewhat by training the models on computers with CUDA-enabled GPUs.
- TPOT will occasionally learn pipelines that stack several sklearn estimators. Mathematically, these can be nearly identical to some deep learning models. For example, by stacking several sklearn.linear_model.LogisticRegressions, you end up with a very close approximation of a Multilayer Perceptron, one of the simplest and most well-known deep learning architectures. TPOT's genetic programming algorithms generally optimize these 'networks' much faster than PyTorch, which typically uses a more brute-force convex optimization approach.
- The problem of 'black box' model introspection is one of the most substantial criticisms and challenges of deep learning. This problem persists in tpot.nn, whereas TPOT's default estimators often are far easier to introspect.