Automated Machine Learning : Methods, Systems, Challenges.
Superior document: | The Springer Series on Challenges in Machine Learning Series |
---|---|
Contributor: | |
Place / Publishing House: | Cham : Springer International Publishing AG, 2019. ©2019. |
Year of Publication: | 2019 |
Edition: | 1st ed. |
Language: | English |
Series: | The Springer Series on Challenges in Machine Learning Series |
Online Access: | |
Physical Description: | 1 online resource (223 pages) |
id |
5005788944 |
---|---|
ctrlnum |
(MiAaPQ)5005788944 (Au-PeEL)EBL5788944 (OCoLC)1105039769 |
collection |
bib_alma |
record_format |
marc |
spelling |
Hutter, Frank. Automated Machine Learning : Methods, Systems, Challenges. 1st ed. Cham : Springer International Publishing AG, 2019. ©2019. 1 online resource (223 pages) text txt rdacontent computer c rdamedia online resource cr rdacarrier The Springer Series on Challenges in Machine Learning Series Intro -- Foreword -- Preface -- Acknowledgments -- Contents -- Part I AutoML Methods -- 1 Hyperparameter Optimization -- 1.1 Introduction -- 1.2 Problem Statement -- 1.2.1 Alternatives to Optimization: Ensembling and Marginalization -- 1.2.2 Optimizing for Multiple Objectives -- 1.3 Blackbox Hyperparameter Optimization -- 1.3.1 Model-Free Blackbox Optimization Methods -- 1.3.2 Bayesian Optimization -- 1.3.2.1 Bayesian Optimization in a Nutshell -- 1.3.2.2 Surrogate Models -- 1.3.2.3 Configuration Space Description -- 1.3.2.4 Constrained Bayesian Optimization -- 1.4 Multi-fidelity Optimization -- 1.4.1 Learning Curve-Based Prediction for Early Stopping -- 1.4.2 Bandit-Based Algorithm Selection Methods -- 1.4.3 Adaptive Choices of Fidelities -- 1.5 Applications to AutoML -- 1.6 Open Problems and Future Research Directions -- 1.6.1 Benchmarks and Comparability -- 1.6.2 Gradient-Based Optimization -- 1.6.3 Scalability -- 1.6.4 Overfitting and Generalization -- 1.6.5 Arbitrary-Size Pipeline Construction -- Bibliography -- 2 Meta-Learning -- 2.1 Introduction -- 2.2 Learning from Model Evaluations -- 2.2.1 Task-Independent Recommendations -- 2.2.2 Configuration Space Design -- 2.2.3 Configuration Transfer -- 2.2.3.1 Relative Landmarks -- 2.2.3.2 Surrogate Models -- 2.2.3.3 Warm-Started Multi-task Learning -- 2.2.3.4 Other Techniques -- 2.2.4 Learning Curves -- 2.3 Learning from Task Properties -- 2.3.1 Meta-Features -- 2.3.2 Learning Meta-Features -- 2.3.3 Warm-Starting Optimization from Similar Tasks -- 2.3.4 Meta-Models -- 2.3.4.1 Ranking -- 2.3.4.2 Performance Prediction -- 2.3.5 Pipeline Synthesis -- 2.3.6 To Tune or Not to Tune? 
-- 2.4 Learning from Prior Models -- 2.4.1 Transfer Learning -- 2.4.2 Meta-Learning in Neural Networks -- 2.4.3 Few-Shot Learning -- 2.4.4 Beyond Supervised Learning -- 2.5 Conclusion -- Bibliography. 3 Neural Architecture Search -- 3.1 Introduction -- 3.2 Search Space -- 3.3 Search Strategy -- 3.4 Performance Estimation Strategy -- 3.5 Future Directions -- Bibliography -- Part II AutoML Systems -- 4 Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA -- 4.1 Introduction -- 4.2 Preliminaries -- 4.2.1 Model Selection -- 4.2.2 Hyperparameter Optimization -- 4.3 CASH -- 4.3.1 Sequential Model-Based Algorithm Configuration (SMAC) -- 4.4 Auto-WEKA -- 4.5 Experimental Evaluation -- 4.5.1 Baseline Methods -- 4.5.2 Results for Cross-Validation Performance -- 4.5.3 Results for Test Performance -- 4.6 Conclusion -- 4.6.1 Community Adoption -- Bibliography -- 5 Hyperopt-Sklearn -- 5.1 Introduction -- 5.2 Background: Hyperopt for Optimization -- 5.3 Scikit-Learn Model Selection as a Search Problem -- 5.4 Example Usage -- 5.5 Experiments -- 5.6 Discussion and Future Work -- 5.7 Conclusions -- Bibliography -- 6 Auto-sklearn: Efficient and Robust Automated Machine Learning -- 6.1 Introduction -- 6.2 AutoML as a CASH Problem -- 6.3 New Methods for Increasing Efficiency and Robustness of AutoML -- 6.3.1 Meta-learning for Finding Good Instantiations of Machine Learning Frameworks -- 6.3.2 Automated Ensemble Construction of Models Evaluated During Optimization -- 6.4 A Practical Automated Machine Learning System -- 6.5 Comparing Auto-sklearn to Auto-WEKA and Hyperopt-Sklearn -- 6.6 Evaluation of the Proposed AutoML Improvements -- 6.7 Detailed Analysis of Auto-sklearn Components -- 6.8 Discussion and Conclusion -- 6.8.1 Discussion -- 6.8.2 Usage -- 6.8.3 Extensions in PoSH Auto-sklearn -- 6.8.4 Conclusion and Future Work -- Bibliography -- 7 Towards Automatically-Tuned Deep Neural Networks -- 7.1 Introduction -- 7.2 Auto-Net 1.0 -- 7.3 Auto-Net 2.0 -- 7.4 
Experiments -- 7.4.1 Baseline Evaluation of Auto-Net 1.0 and Auto-sklearn. 7.4.2 Results for AutoML Competition Datasets -- 7.4.3 Comparing AutoNet 1.0 and 2.0 -- 7.5 Conclusion -- Bibliography -- 8 TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning -- 8.1 Introduction -- 8.2 Methods -- 8.2.1 Machine Learning Pipeline Operators -- 8.2.2 Constructing Tree-Based Pipelines -- 8.2.3 Optimizing Tree-Based Pipelines -- 8.2.4 Benchmark Data -- 8.3 Results -- 8.4 Conclusions and Future Work -- Bibliography -- 9 The Automatic Statistician -- 9.1 Introduction -- 9.2 Basic Anatomy of an Automatic Statistician -- 9.2.1 Related Work -- 9.3 An Automatic Statistician for Time Series Data -- 9.3.1 The Grammar over Kernels -- 9.3.2 The Search and Evaluation Procedure -- 9.3.3 Generating Descriptions in Natural Language -- 9.3.4 Comparison with Humans -- 9.4 Other Automatic Statistician Systems -- 9.4.1 Core Components -- 9.4.2 Design Challenges -- 9.4.2.1 User Interaction -- 9.4.2.2 Missing and Messy Data -- 9.4.2.3 Resource Allocation -- 9.5 Conclusion -- Bibliography -- Part III AutoML Challenges -- 10 Analysis of the AutoML Challenge Series 2015-2018 -- 10.1 Introduction -- 10.2 Problem Formalization and Overview -- 10.2.1 Scope of the Problem -- 10.2.2 Full Model Selection -- 10.2.3 Optimization of Hyper-parameters -- 10.2.4 Strategies of Model Search -- 10.3 Data -- 10.4 Challenge Protocol -- 10.4.1 Time Budget and Computational Resources -- 10.4.2 Scoring Metrics -- 10.4.3 Rounds and Phases in the 2015/2016 Challenge -- 10.4.4 Phases in the 2018 Challenge -- 10.5 Results -- 10.5.1 Scores Obtained in the 2015/2016 Challenge -- 10.5.2 Scores Obtained in the 2018 Challenge -- 10.5.3 Difficulty of Datasets/Tasks -- 10.5.4 Hyper-parameter Optimization -- 10.5.5 Meta-learning -- 10.5.6 Methods Used in the Challenges -- 10.6 Discussion -- 10.7 Conclusion -- Bibliography -- Correction to: Neural Architecture Search. 
Description based on publisher supplied metadata and other sources. Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. Electronic books. Kotthoff, Lars. Vanschoren, Joaquin. Print version: Hutter, Frank Automated Machine Learning Cham : Springer International Publishing AG,c2019 9783030053178 ProQuest (Firm) https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5788944 Click to View |
language |
English |
format |
eBook |
author |
Hutter, Frank. |
spellingShingle |
Hutter, Frank. Automated Machine Learning : Methods, Systems, Challenges. The Springer Series on Challenges in Machine Learning Series Intro -- Foreword -- Preface -- Acknowledgments -- Contents -- Part I AutoML Methods -- 1 Hyperparameter Optimization -- 1.1 Introduction -- 1.2 Problem Statement -- 1.2.1 Alternatives to Optimization: Ensembling and Marginalization -- 1.2.2 Optimizing for Multiple Objectives -- 1.3 Blackbox Hyperparameter Optimization -- 1.3.1 Model-Free Blackbox Optimization Methods -- 1.3.2 Bayesian Optimization -- 1.3.2.1 Bayesian Optimization in a Nutshell -- 1.3.2.2 Surrogate Models -- 1.3.2.3 Configuration Space Description -- 1.3.2.4 Constrained Bayesian Optimization -- 1.4 Multi-fidelity Optimization -- 1.4.1 Learning Curve-Based Prediction for Early Stopping -- 1.4.2 Bandit-Based Algorithm Selection Methods -- 1.4.3 Adaptive Choices of Fidelities -- 1.5 Applications to AutoML -- 1.6 Open Problems and Future Research Directions -- 1.6.1 Benchmarks and Comparability -- 1.6.2 Gradient-Based Optimization -- 1.6.3 Scalability -- 1.6.4 Overfitting and Generalization -- 1.6.5 Arbitrary-Size Pipeline Construction -- Bibliography -- 2 Meta-Learning -- 2.1 Introduction -- 2.2 Learning from Model Evaluations -- 2.2.1 Task-Independent Recommendations -- 2.2.2 Configuration Space Design -- 2.2.3 Configuration Transfer -- 2.2.3.1 Relative Landmarks -- 2.2.3.2 Surrogate Models -- 2.2.3.3 Warm-Started Multi-task Learning -- 2.2.3.4 Other Techniques -- 2.2.4 Learning Curves -- 2.3 Learning from Task Properties -- 2.3.1 Meta-Features -- 2.3.2 Learning Meta-Features -- 2.3.3 Warm-Starting Optimization from Similar Tasks -- 2.3.4 Meta-Models -- 2.3.4.1 Ranking -- 2.3.4.2 Performance Prediction -- 2.3.5 Pipeline Synthesis -- 2.3.6 To Tune or Not to Tune? -- 2.4 Learning from Prior Models -- 2.4.1 Transfer Learning -- 2.4.2 Meta-Learning in Neural Networks -- 2.4.3 Few-Shot Learning -- 2.4.4 Beyond Supervised Learning -- 2.5 Conclusion -- Bibliography. 
3 Neural Architecture Search -- 3.1 Introduction -- 3.2 Search Space -- 3.3 Search Strategy -- 3.4 Performance Estimation Strategy -- 3.5 Future Directions -- Bibliography -- Part II AutoML Systems -- 4 Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA -- 4.1 Introduction -- 4.2 Preliminaries -- 4.2.1 Model Selection -- 4.2.2 Hyperparameter Optimization -- 4.3 CASH -- 4.3.1 Sequential Model-Based Algorithm Configuration (SMAC) -- 4.4 Auto-WEKA -- 4.5 Experimental Evaluation -- 4.5.1 Baseline Methods -- 4.5.2 Results for Cross-Validation Performance -- 4.5.3 Results for Test Performance -- 4.6 Conclusion -- 4.6.1 Community Adoption -- Bibliography -- 5 Hyperopt-Sklearn -- 5.1 Introduction -- 5.2 Background: Hyperopt for Optimization -- 5.3 Scikit-Learn Model Selection as a Search Problem -- 5.4 Example Usage -- 5.5 Experiments -- 5.6 Discussion and Future Work -- 5.7 Conclusions -- Bibliography -- 6 Auto-sklearn: Efficient and Robust Automated Machine Learning -- 6.1 Introduction -- 6.2 AutoML as a CASH Problem -- 6.3 New Methods for Increasing Efficiency and Robustness of AutoML -- 6.3.1 Meta-learning for Finding Good Instantiations of Machine Learning Frameworks -- 6.3.2 Automated Ensemble Construction of Models Evaluated During Optimization -- 6.4 A Practical Automated Machine Learning System -- 6.5 Comparing Auto-sklearn to Auto-WEKA and Hyperopt-Sklearn -- 6.6 Evaluation of the Proposed AutoML Improvements -- 6.7 Detailed Analysis of Auto-sklearn Components -- 6.8 Discussion and Conclusion -- 6.8.1 Discussion -- 6.8.2 Usage -- 6.8.3 Extensions in PoSH Auto-sklearn -- 6.8.4 Conclusion and Future Work -- Bibliography -- 7 Towards Automatically-Tuned Deep Neural Networks -- 7.1 Introduction -- 7.2 Auto-Net 1.0 -- 7.3 Auto-Net 2.0 -- 7.4 Experiments -- 7.4.1 Baseline Evaluation of Auto-Net 1.0 and Auto-sklearn. 
7.4.2 Results for AutoML Competition Datasets -- 7.4.3 Comparing AutoNet 1.0 and 2.0 -- 7.5 Conclusion -- Bibliography -- 8 TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning -- 8.1 Introduction -- 8.2 Methods -- 8.2.1 Machine Learning Pipeline Operators -- 8.2.2 Constructing Tree-Based Pipelines -- 8.2.3 Optimizing Tree-Based Pipelines -- 8.2.4 Benchmark Data -- 8.3 Results -- 8.4 Conclusions and Future Work -- Bibliography -- 9 The Automatic Statistician -- 9.1 Introduction -- 9.2 Basic Anatomy of an Automatic Statistician -- 9.2.1 Related Work -- 9.3 An Automatic Statistician for Time Series Data -- 9.3.1 The Grammar over Kernels -- 9.3.2 The Search and Evaluation Procedure -- 9.3.3 Generating Descriptions in Natural Language -- 9.3.4 Comparison with Humans -- 9.4 Other Automatic Statistician Systems -- 9.4.1 Core Components -- 9.4.2 Design Challenges -- 9.4.2.1 User Interaction -- 9.4.2.2 Missing and Messy Data -- 9.4.2.3 Resource Allocation -- 9.5 Conclusion -- Bibliography -- Part III AutoML Challenges -- 10 Analysis of the AutoML Challenge Series 2015-2018 -- 10.1 Introduction -- 10.2 Problem Formalization and Overview -- 10.2.1 Scope of the Problem -- 10.2.2 Full Model Selection -- 10.2.3 Optimization of Hyper-parameters -- 10.2.4 Strategies of Model Search -- 10.3 Data -- 10.4 Challenge Protocol -- 10.4.1 Time Budget and Computational Resources -- 10.4.2 Scoring Metrics -- 10.4.3 Rounds and Phases in the 2015/2016 Challenge -- 10.4.4 Phases in the 2018 Challenge -- 10.5 Results -- 10.5.1 Scores Obtained in the 2015/2016 Challenge -- 10.5.2 Scores Obtained in the 2018 Challenge -- 10.5.3 Difficulty of Datasets/Tasks -- 10.5.4 Hyper-parameter Optimization -- 10.5.5 Meta-learning -- 10.5.6 Methods Used in the Challenges -- 10.6 Discussion -- 10.7 Conclusion -- Bibliography -- Correction to: Neural Architecture Search. |
author_facet |
Hutter, Frank. Kotthoff, Lars. Vanschoren, Joaquin. |
author_variant |
f h fh |
author2 |
Kotthoff, Lars. Vanschoren, Joaquin. |
author2_variant |
l k lk j v jv |
author2_role |
Contributor Contributor
author_sort |
Hutter, Frank. |
title |
Automated Machine Learning : Methods, Systems, Challenges. |
title_sub |
Methods, Systems, Challenges. |
title_full |
Automated Machine Learning : Methods, Systems, Challenges. |
title_fullStr |
Automated Machine Learning : Methods, Systems, Challenges. |
title_full_unstemmed |
Automated Machine Learning : Methods, Systems, Challenges. |
title_auth |
Automated Machine Learning : Methods, Systems, Challenges. |
title_new |
Automated Machine Learning : |
title_sort |
automated machine learning : methods, systems, challenges. |
series |
The Springer Series on Challenges in Machine Learning Series |
series2 |
The Springer Series on Challenges in Machine Learning Series |
publisher |
Springer International Publishing AG, |
publishDate |
2019 |
physical |
1 online resource (223 pages) |
edition |
1st ed. |
contents |
Intro -- Foreword -- Preface -- Acknowledgments -- Contents -- Part I AutoML Methods -- 1 Hyperparameter Optimization -- 1.1 Introduction -- 1.2 Problem Statement -- 1.2.1 Alternatives to Optimization: Ensembling and Marginalization -- 1.2.2 Optimizing for Multiple Objectives -- 1.3 Blackbox Hyperparameter Optimization -- 1.3.1 Model-Free Blackbox Optimization Methods -- 1.3.2 Bayesian Optimization -- 1.3.2.1 Bayesian Optimization in a Nutshell -- 1.3.2.2 Surrogate Models -- 1.3.2.3 Configuration Space Description -- 1.3.2.4 Constrained Bayesian Optimization -- 1.4 Multi-fidelity Optimization -- 1.4.1 Learning Curve-Based Prediction for Early Stopping -- 1.4.2 Bandit-Based Algorithm Selection Methods -- 1.4.3 Adaptive Choices of Fidelities -- 1.5 Applications to AutoML -- 1.6 Open Problems and Future Research Directions -- 1.6.1 Benchmarks and Comparability -- 1.6.2 Gradient-Based Optimization -- 1.6.3 Scalability -- 1.6.4 Overfitting and Generalization -- 1.6.5 Arbitrary-Size Pipeline Construction -- Bibliography -- 2 Meta-Learning -- 2.1 Introduction -- 2.2 Learning from Model Evaluations -- 2.2.1 Task-Independent Recommendations -- 2.2.2 Configuration Space Design -- 2.2.3 Configuration Transfer -- 2.2.3.1 Relative Landmarks -- 2.2.3.2 Surrogate Models -- 2.2.3.3 Warm-Started Multi-task Learning -- 2.2.3.4 Other Techniques -- 2.2.4 Learning Curves -- 2.3 Learning from Task Properties -- 2.3.1 Meta-Features -- 2.3.2 Learning Meta-Features -- 2.3.3 Warm-Starting Optimization from Similar Tasks -- 2.3.4 Meta-Models -- 2.3.4.1 Ranking -- 2.3.4.2 Performance Prediction -- 2.3.5 Pipeline Synthesis -- 2.3.6 To Tune or Not to Tune? -- 2.4 Learning from Prior Models -- 2.4.1 Transfer Learning -- 2.4.2 Meta-Learning in Neural Networks -- 2.4.3 Few-Shot Learning -- 2.4.4 Beyond Supervised Learning -- 2.5 Conclusion -- Bibliography. 
3 Neural Architecture Search -- 3.1 Introduction -- 3.2 Search Space -- 3.3 Search Strategy -- 3.4 Performance Estimation Strategy -- 3.5 Future Directions -- Bibliography -- Part II AutoML Systems -- 4 Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA -- 4.1 Introduction -- 4.2 Preliminaries -- 4.2.1 Model Selection -- 4.2.2 Hyperparameter Optimization -- 4.3 CASH -- 4.3.1 Sequential Model-Based Algorithm Configuration (SMAC) -- 4.4 Auto-WEKA -- 4.5 Experimental Evaluation -- 4.5.1 Baseline Methods -- 4.5.2 Results for Cross-Validation Performance -- 4.5.3 Results for Test Performance -- 4.6 Conclusion -- 4.6.1 Community Adoption -- Bibliography -- 5 Hyperopt-Sklearn -- 5.1 Introduction -- 5.2 Background: Hyperopt for Optimization -- 5.3 Scikit-Learn Model Selection as a Search Problem -- 5.4 Example Usage -- 5.5 Experiments -- 5.6 Discussion and Future Work -- 5.7 Conclusions -- Bibliography -- 6 Auto-sklearn: Efficient and Robust Automated Machine Learning -- 6.1 Introduction -- 6.2 AutoML as a CASH Problem -- 6.3 New Methods for Increasing Efficiency and Robustness of AutoML -- 6.3.1 Meta-learning for Finding Good Instantiations of Machine Learning Frameworks -- 6.3.2 Automated Ensemble Construction of Models Evaluated During Optimization -- 6.4 A Practical Automated Machine Learning System -- 6.5 Comparing Auto-sklearn to Auto-WEKA and Hyperopt-Sklearn -- 6.6 Evaluation of the Proposed AutoML Improvements -- 6.7 Detailed Analysis of Auto-sklearn Components -- 6.8 Discussion and Conclusion -- 6.8.1 Discussion -- 6.8.2 Usage -- 6.8.3 Extensions in PoSH Auto-sklearn -- 6.8.4 Conclusion and Future Work -- Bibliography -- 7 Towards Automatically-Tuned Deep Neural Networks -- 7.1 Introduction -- 7.2 Auto-Net 1.0 -- 7.3 Auto-Net 2.0 -- 7.4 Experiments -- 7.4.1 Baseline Evaluation of Auto-Net 1.0 and Auto-sklearn. 
7.4.2 Results for AutoML Competition Datasets -- 7.4.3 Comparing AutoNet 1.0 and 2.0 -- 7.5 Conclusion -- Bibliography -- 8 TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning -- 8.1 Introduction -- 8.2 Methods -- 8.2.1 Machine Learning Pipeline Operators -- 8.2.2 Constructing Tree-Based Pipelines -- 8.2.3 Optimizing Tree-Based Pipelines -- 8.2.4 Benchmark Data -- 8.3 Results -- 8.4 Conclusions and Future Work -- Bibliography -- 9 The Automatic Statistician -- 9.1 Introduction -- 9.2 Basic Anatomy of an Automatic Statistician -- 9.2.1 Related Work -- 9.3 An Automatic Statistician for Time Series Data -- 9.3.1 The Grammar over Kernels -- 9.3.2 The Search and Evaluation Procedure -- 9.3.3 Generating Descriptions in Natural Language -- 9.3.4 Comparison with Humans -- 9.4 Other Automatic Statistician Systems -- 9.4.1 Core Components -- 9.4.2 Design Challenges -- 9.4.2.1 User Interaction -- 9.4.2.2 Missing and Messy Data -- 9.4.2.3 Resource Allocation -- 9.5 Conclusion -- Bibliography -- Part III AutoML Challenges -- 10 Analysis of the AutoML Challenge Series 2015-2018 -- 10.1 Introduction -- 10.2 Problem Formalization and Overview -- 10.2.1 Scope of the Problem -- 10.2.2 Full Model Selection -- 10.2.3 Optimization of Hyper-parameters -- 10.2.4 Strategies of Model Search -- 10.3 Data -- 10.4 Challenge Protocol -- 10.4.1 Time Budget and Computational Resources -- 10.4.2 Scoring Metrics -- 10.4.3 Rounds and Phases in the 2015/2016 Challenge -- 10.4.4 Phases in the 2018 Challenge -- 10.5 Results -- 10.5.1 Scores Obtained in the 2015/2016 Challenge -- 10.5.2 Scores Obtained in the 2018 Challenge -- 10.5.3 Difficulty of Datasets/Tasks -- 10.5.4 Hyper-parameter Optimization -- 10.5.5 Meta-learning -- 10.5.6 Methods Used in the Challenges -- 10.6 Discussion -- 10.7 Conclusion -- Bibliography -- Correction to: Neural Architecture Search. |
isbn |
9783030053185 9783030053178 |
callnumber-first |
Q - Science |
callnumber-subject |
Q - General Science |
callnumber-label |
Q334-342 |
callnumber-sort |
Q 3334 3342 |
genre |
Electronic books. |
genre_facet |
Electronic books. |
url |
https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5788944 |
illustrated |
Not Illustrated |
oclc_num |
1105039769 |
work_keys_str_mv |
AT hutterfrank automatedmachinelearningmethodssystemschallenges AT kotthofflars automatedmachinelearningmethodssystemschallenges AT vanschorenjoaquin automatedmachinelearningmethodssystemschallenges |
status_str |
n |
ids_txt_mv |
(MiAaPQ)5005788944 (Au-PeEL)EBL5788944 (OCoLC)1105039769 |
carrierType_str_mv |
cr |
hierarchy_parent_title |
The Springer Series on Challenges in Machine Learning Series |
is_hierarchy_title |
Automated Machine Learning : Methods, Systems, Challenges. |
container_title |
The Springer Series on Challenges in Machine Learning Series |
author2_original_writing_str_mv |
noLinkedField noLinkedField |
marc_error |
Info : Unimarc and ISO-8859-1 translations identical, choosing ISO-8859-1. --- [ 856 : z ] |
_version_ |
1792331056217587712 |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>07330nam a22004453i 4500</leader><controlfield tag="001">5005788944</controlfield><controlfield tag="003">MiAaPQ</controlfield><controlfield tag="005">20240229073832.0</controlfield><controlfield tag="006">m o d | </controlfield><controlfield tag="007">cr cnu||||||||</controlfield><controlfield tag="008">240229s2019 xx o ||||0 eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9783030053185</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="z">9783030053178</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)5005788944</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL5788944</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1105039769</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">MiAaPQ</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">MiAaPQ</subfield><subfield code="d">MiAaPQ</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">Q334-342</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Hutter, Frank.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Automated Machine Learning :</subfield><subfield code="b">Methods, Systems, Challenges.</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">1st ed.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham :</subfield><subfield code="b">Springer International Publishing AG,</subfield><subfield code="c">2019.</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©2019.</subfield></datafield><datafield tag="300" ind1=" " ind2=" 
"><subfield code="a">1 online resource (223 pages)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="1" ind2=" "><subfield code="a">The Springer Series on Challenges in Machine Learning Series</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Intro -- Foreword -- Preface -- Acknowledgments -- Contents -- Part I AutoML Methods -- 1 Hyperparameter Optimization -- 1.1 Introduction -- 1.2 Problem Statement -- 1.2.1 Alternatives to Optimization: Ensembling and Marginalization -- 1.2.2 Optimizing for Multiple Objectives -- 1.3 Blackbox Hyperparameter Optimization -- 1.3.1 Model-Free Blackbox Optimization Methods -- 1.3.2 Bayesian Optimization -- 1.3.2.1 Bayesian Optimization in a Nutshell -- 1.3.2.2 Surrogate Models -- 1.3.2.3 Configuration Space Description -- 1.3.2.4 Constrained Bayesian Optimization -- 1.4 Multi-fidelity Optimization -- 1.4.1 Learning Curve-Based Prediction for Early Stopping -- 1.4.2 Bandit-Based Algorithm Selection Methods -- 1.4.3 Adaptive Choices of Fidelities -- 1.5 Applications to AutoML -- 1.6 Open Problems and Future Research Directions -- 1.6.1 Benchmarks and Comparability -- 1.6.2 Gradient-Based Optimization -- 1.6.3 Scalability -- 1.6.4 Overfitting and Generalization -- 1.6.5 Arbitrary-Size Pipeline Construction -- Bibliography -- 2 Meta-Learning -- 2.1 Introduction -- 2.2 Learning from Model Evaluations -- 2.2.1 Task-Independent Recommendations -- 2.2.2 Configuration Space Design -- 2.2.3 Configuration Transfer -- 2.2.3.1 Relative 
Landmarks -- 2.2.3.2 Surrogate Models -- 2.2.3.3 Warm-Started Multi-task Learning -- 2.2.3.4 Other Techniques -- 2.2.4 Learning Curves -- 2.3 Learning from Task Properties -- 2.3.1 Meta-Features -- 2.3.2 Learning Meta-Features -- 2.3.3 Warm-Starting Optimization from Similar Tasks -- 2.3.4 Meta-Models -- 2.3.4.1 Ranking -- 2.3.4.2 Performance Prediction -- 2.3.5 Pipeline Synthesis -- 2.3.6 To Tune or Not to Tune? -- 2.4 Learning from Prior Models -- 2.4.1 Transfer Learning -- 2.4.2 Meta-Learning in Neural Networks -- 2.4.3 Few-Shot Learning -- 2.4.4 Beyond Supervised Learning -- 2.5 Conclusion -- Bibliography.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">3 Neural Architecture Search -- 3.1 Introduction -- 3.2 Search Space -- 3.3 Search Strategy -- 3.4 Performance Estimation Strategy -- 3.5 Future Directions -- Bibliography -- Part II AutoML Systems -- 4 Auto-WEKA: Automatic Model Selection and Hyperparameter Optimization in WEKA -- 4.1 Introduction -- 4.2 Preliminaries -- 4.2.1 Model Selection -- 4.2.2 Hyperparameter Optimization -- 4.3 CASH -- 4.3.1 Sequential Model-Based Algorithm Configuration (SMAC) -- 4.4 Auto-WEKA -- 4.5 Experimental Evaluation -- 4.5.1 Baseline Methods -- 4.5.2 Results for Cross-Validation Performance -- 4.5.3 Results for Test Performance -- 4.6 Conclusion -- 4.6.1 Community Adoption -- Bibliography -- 5 Hyperopt-Sklearn -- 5.1 Introduction -- 5.2 Background: Hyperopt for Optimization -- 5.3 Scikit-Learn Model Selection as a Search Problem -- 5.4 Example Usage -- 5.5 Experiments -- 5.6 Discussion and Future Work -- 5.7 Conclusions -- Bibliography -- 6 Auto-sklearn: Efficient and Robust Automated Machine Learning -- 6.1 Introduction -- 6.2 AutoML as a CASH Problem -- 6.3 New Methods for Increasing Efficiency and Robustness of AutoML -- 6.3.1 Meta-learning for Finding Good Instantiations of Machine Learning Frameworks -- 6.3.2 Automated Ensemble Construction of Models Evaluated During Optimization -- 6.4 A 
Practical Automated Machine Learning System -- 6.5 Comparing Auto-sklearn to Auto-WEKA and Hyperopt-Sklearn -- 6.6 Evaluation of the Proposed AutoML Improvements -- 6.7 Detailed Analysis of Auto-sklearn Components -- 6.8 Discussion and Conclusion -- 6.8.1 Discussion -- 6.8.2 Usage -- 6.8.3 Extensions in PoSH Auto-sklearn -- 6.8.4 Conclusion and Future Work -- Bibliography -- 7 Towards Automatically-Tuned Deep Neural Networks -- 7.1 Introduction -- 7.2 Auto-Net 1.0 -- 7.3 Auto-Net 2.0 -- 7.4 Experiments -- 7.4.1 Baseline Evaluation of Auto-Net 1.0 and Auto-sklearn.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">7.4.2 Results for AutoML Competition Datasets -- 7.4.3 Comparing AutoNet 1.0 and 2.0 -- 7.5 Conclusion -- Bibliography -- 8 TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning -- 8.1 Introduction -- 8.2 Methods -- 8.2.1 Machine Learning Pipeline Operators -- 8.2.2 Constructing Tree-Based Pipelines -- 8.2.3 Optimizing Tree-Based Pipelines -- 8.2.4 Benchmark Data -- 8.3 Results -- 8.4 Conclusions and Future Work -- Bibliography -- 9 The Automatic Statistician -- 9.1 Introduction -- 9.2 Basic Anatomy of an Automatic Statistician -- 9.2.1 Related Work -- 9.3 An Automatic Statistician for Time Series Data -- 9.3.1 The Grammar over Kernels -- 9.3.2 The Search and Evaluation Procedure -- 9.3.3 Generating Descriptions in Natural Language -- 9.3.4 Comparison with Humans -- 9.4 Other Automatic Statistician Systems -- 9.4.1 Core Components -- 9.4.2 Design Challenges -- 9.4.2.1 User Interaction -- 9.4.2.2 Missing and Messy Data -- 9.4.2.3 Resource Allocation -- 9.5 Conclusion -- Bibliography -- Part III AutoML Challenges -- 10 Analysis of the AutoML Challenge Series 2015-2018 -- 10.1 Introduction -- 10.2 Problem Formalization and Overview -- 10.2.1 Scope of the Problem -- 10.2.2 Full Model Selection -- 10.2.3 Optimization of Hyper-parameters -- 10.2.4 Strategies of Model Search -- 10.3 Data -- 10.4 Challenge 
Protocol -- 10.4.1 Time Budget and Computational Resources -- 10.4.2 Scoring Metrics -- 10.4.3 Rounds and Phases in the 2015/2016 Challenge -- 10.4.4 Phases in the 2018 Challenge -- 10.5 Results -- 10.5.1 Scores Obtained in the 2015/2016 Challenge -- 10.5.2 Scores Obtained in the 2018 Challenge -- 10.5.3 Difficulty of Datasets/Tasks -- 10.5.4 Hyper-parameter Optimization -- 10.5.5 Meta-learning -- 10.5.6 Methods Used in the Challenges -- 10.6 Discussion -- 10.7 Conclusion -- Bibliography -- Correction to: Neural Architecture Search.</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on publisher supplied metadata and other sources.</subfield></datafield><datafield tag="590" ind1=" " ind2=" "><subfield code="a">Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. </subfield></datafield><datafield tag="655" ind1=" " ind2="4"><subfield code="a">Electronic books.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kotthoff, Lars.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Vanschoren, Joaquin.</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Print version:</subfield><subfield code="a">Hutter, Frank</subfield><subfield code="t">Automated Machine Learning</subfield><subfield code="d">Cham : Springer International Publishing AG,c2019</subfield><subfield code="z">9783030053178</subfield></datafield><datafield tag="797" ind1="2" ind2=" "><subfield code="a">ProQuest (Firm)</subfield></datafield><datafield tag="830" ind1=" " ind2="4"><subfield code="a">The Springer Series on Challenges in Machine Learning Series</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5788944</subfield><subfield code="z">Click to 
View</subfield></datafield></record></collection> |