Automated Machine Learning: Methods, Systems, Challenges

Bibliographic Details
Part of: The Springer Series on Challenges in Machine Learning
Contributors: Hutter, Frank; Kotthoff, Lars; Vanschoren, Joaquin
Place / Publishing House: Cham: Springer International Publishing AG, 2019.
©2019.
Year of Publication: 2019
Edition: 1st ed.
Language: English
Series: The Springer Series on Challenges in Machine Learning
Physical Description: 1 online resource (223 pages)
Table of Contents:
  • Intro
  • Foreword
  • Preface
  • Acknowledgments
  • Contents
  • Part I AutoML Methods
  • 1 Hyperparameter Optimization
  • 1.1 Introduction
  • 1.2 Problem Statement
  • 1.2.1 Alternatives to Optimization: Ensembling and Marginalization
  • 1.2.2 Optimizing for Multiple Objectives
  • 1.3 Blackbox Hyperparameter Optimization
  • 1.3.1 Model-Free Blackbox Optimization Methods
  • 1.3.2 Bayesian Optimization
  • 1.3.2.1 Bayesian Optimization in a Nutshell
  • 1.3.2.2 Surrogate Models
  • 1.3.2.3 Configuration Space Description
  • 1.3.2.4 Constrained Bayesian Optimization
  • 1.4 Multi-fidelity Optimization
  • 1.4.1 Learning Curve-Based Prediction for Early Stopping
  • 1.4.2 Bandit-Based Algorithm Selection Methods
  • 1.4.3 Adaptive Choices of Fidelities
  • 1.5 Applications to AutoML
  • 1.6 Open Problems and Future Research Directions
  • 1.6.1 Benchmarks and Comparability
  • 1.6.2 Gradient-Based Optimization
  • 1.6.3 Scalability
  • 1.6.4 Overfitting and Generalization
  • 1.6.5 Arbitrary-Size Pipeline Construction
  • Bibliography
  • 2 Meta-Learning
  • 2.1 Introduction
  • 2.2 Learning from Model Evaluations
  • 2.2.1 Task-Independent Recommendations
  • 2.2.2 Configuration Space Design
  • 2.2.3 Configuration Transfer
  • 2.2.3.1 Relative Landmarks
  • 2.2.3.2 Surrogate Models
  • 2.2.3.3 Warm-Started Multi-task Learning
  • 2.2.3.4 Other Techniques
  • 2.2.4 Learning Curves
  • 2.3 Learning from Task Properties
  • 2.3.1 Meta-Features
  • 2.3.2 Learning Meta-Features
  • 2.3.3 Warm-Starting Optimization from Similar Tasks
  • 2.3.4 Meta-Models
  • 2.3.4.1 Ranking
  • 2.3.4.2 Performance Prediction
  • 2.3.5 Pipeline Synthesis
  • 2.3.6 To Tune or Not to Tune?
  • 2.4 Learning from Prior Models
  • 2.4.1 Transfer Learning
  • 2.4.2 Meta-Learning in Neural Networks
  • 2.4.3 Few-Shot Learning
  • 2.4.4 Beyond Supervised Learning
  • 2.5 Conclusion
  • Bibliography
  • 3 Neural Architecture Search
  • 3.1 Introduction
  • 3.2 Search Space
  • 3.3 Search Strategy
  • 3.4 Performance Estimation Strategy
  • 3.5 Future Directions
  • Bibliography
  • Part II AutoML Systems
  • 4 Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA
  • 4.1 Introduction
  • 4.2 Preliminaries
  • 4.2.1 Model Selection
  • 4.2.2 Hyperparameter Optimization
  • 4.3 CASH
  • 4.3.1 Sequential Model-Based Algorithm Configuration (SMAC)
  • 4.4 Auto-WEKA
  • 4.5 Experimental Evaluation
  • 4.5.1 Baseline Methods
  • 4.5.2 Results for Cross-Validation Performance
  • 4.5.3 Results for Test Performance
  • 4.6 Conclusion
  • 4.6.1 Community Adoption
  • Bibliography
  • 5 Hyperopt-Sklearn
  • 5.1 Introduction
  • 5.2 Background: Hyperopt for Optimization
  • 5.3 Scikit-Learn Model Selection as a Search Problem
  • 5.4 Example Usage
  • 5.5 Experiments
  • 5.6 Discussion and Future Work
  • 5.7 Conclusions
  • Bibliography
  • 6 Auto-sklearn: Efficient and Robust Automated Machine Learning
  • 6.1 Introduction
  • 6.2 AutoML as a CASH Problem
  • 6.3 New Methods for Increasing Efficiency and Robustness of AutoML
  • 6.3.1 Meta-learning for Finding Good Instantiations of Machine Learning Frameworks
  • 6.3.2 Automated Ensemble Construction of Models Evaluated During Optimization
  • 6.4 A Practical Automated Machine Learning System
  • 6.5 Comparing Auto-sklearn to Auto-WEKA and Hyperopt-Sklearn
  • 6.6 Evaluation of the Proposed AutoML Improvements
  • 6.7 Detailed Analysis of Auto-sklearn Components
  • 6.8 Discussion and Conclusion
  • 6.8.1 Discussion
  • 6.8.2 Usage
  • 6.8.3 Extensions in PoSH Auto-sklearn
  • 6.8.4 Conclusion and Future Work
  • Bibliography
  • 7 Towards Automatically-Tuned Deep Neural Networks
  • 7.1 Introduction
  • 7.2 Auto-Net 1.0
  • 7.3 Auto-Net 2.0
  • 7.4 Experiments
  • 7.4.1 Baseline Evaluation of Auto-Net 1.0 and Auto-sklearn
  • 7.4.2 Results for AutoML Competition Datasets
  • 7.4.3 Comparing Auto-Net 1.0 and 2.0
  • 7.5 Conclusion
  • Bibliography
  • 8 TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning
  • 8.1 Introduction
  • 8.2 Methods
  • 8.2.1 Machine Learning Pipeline Operators
  • 8.2.2 Constructing Tree-Based Pipelines
  • 8.2.3 Optimizing Tree-Based Pipelines
  • 8.2.4 Benchmark Data
  • 8.3 Results
  • 8.4 Conclusions and Future Work
  • Bibliography
  • 9 The Automatic Statistician
  • 9.1 Introduction
  • 9.2 Basic Anatomy of an Automatic Statistician
  • 9.2.1 Related Work
  • 9.3 An Automatic Statistician for Time Series Data
  • 9.3.1 The Grammar over Kernels
  • 9.3.2 The Search and Evaluation Procedure
  • 9.3.3 Generating Descriptions in Natural Language
  • 9.3.4 Comparison with Humans
  • 9.4 Other Automatic Statistician Systems
  • 9.4.1 Core Components
  • 9.4.2 Design Challenges
  • 9.4.2.1 User Interaction
  • 9.4.2.2 Missing and Messy Data
  • 9.4.2.3 Resource Allocation
  • 9.5 Conclusion
  • Bibliography
  • Part III AutoML Challenges
  • 10 Analysis of the AutoML Challenge Series 2015-2018
  • 10.1 Introduction
  • 10.2 Problem Formalization and Overview
  • 10.2.1 Scope of the Problem
  • 10.2.2 Full Model Selection
  • 10.2.3 Optimization of Hyper-parameters
  • 10.2.4 Strategies of Model Search
  • 10.3 Data
  • 10.4 Challenge Protocol
  • 10.4.1 Time Budget and Computational Resources
  • 10.4.2 Scoring Metrics
  • 10.4.3 Rounds and Phases in the 2015/2016 Challenge
  • 10.4.4 Phases in the 2018 Challenge
  • 10.5 Results
  • 10.5.1 Scores Obtained in the 2015/2016 Challenge
  • 10.5.2 Scores Obtained in the 2018 Challenge
  • 10.5.3 Difficulty of Datasets/Tasks
  • 10.5.4 Hyper-parameter Optimization
  • 10.5.5 Meta-learning
  • 10.5.6 Methods Used in the Challenges
  • 10.6 Discussion
  • 10.7 Conclusion
  • Bibliography
  • Correction to: Neural Architecture Search