Metalearning : Applications to Automated Machine Learning and Data Mining.

Saved in:
Bibliographic Details
Superior document: Cognitive Technologies Series
Contributor:
Place / Publishing House: Cham : Springer International Publishing AG, 2022.
©2022.
Year of Publication:2022
Edition:2nd ed.
Language:English
Series:Cognitive Technologies Series
Online Access:
Physical Description:1 online resource (349 pages)
id 5006893332
ctrlnum (MiAaPQ)5006893332
(Au-PeEL)EBL6893332
(OCoLC)1301265010
collection bib_alma
record_format marc
spelling Brazdil, Pavel.
Metalearning : Applications to Automated Machine Learning and Data Mining.
2nd ed.
Cham : Springer International Publishing AG, 2022.
©2022.
1 online resource (349 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
Cognitive Technologies Series
Intro -- Preface -- Contents -- Part I Basic Concepts and Architecture -- 1 Introduction -- 1.1 Organization of the Book -- 1.2 Basic Concepts and Architecture (Part I) -- 1.3 Advanced Techniques and Methods (Part II) -- 1.4 Repositories of Experimental Results (Part III) -- References -- 2 Metalearning Approaches for Algorithm Selection I (Exploiting Rankings) -- 2.1 Introduction -- 2.2 Different Forms of Recommendation -- 2.3 Ranking Models for Algorithm Selection -- 2.4 Using a Combined Measure of Accuracy and Runtime -- 2.5 Extensions and Other Approaches -- References -- 3 Evaluating Recommendations of Metalearning/AutoML Systems -- 3.1 Introduction -- 3.2 Methodology for Evaluating Base-Level Algorithms -- 3.3 Normalization of Performance for Base-Level Algorithms -- 3.4 Methodology for Evaluating Metalearning and AutoML Systems -- 3.5 Evaluating Recommendations by Correlation -- 3.6 Evaluating the Effects of Recommendations -- 3.7 Some Useful Measures -- References -- 4 Dataset Characteristics (Metafeatures) -- 4.1 Introduction -- 4.2 Data Characterization Used in Classification Tasks -- 4.3 Data Characterization Used in Regression Tasks -- 4.4 Data Characterization Used in Time Series Tasks -- 4.5 Data Characterization Used in Clustering Tasks -- 4.6 Deriving New Features from the Basic Set -- 4.7 Selection of Metafeatures -- 4.8 Algorithm-Specific Characterization and Representation Issues -- 4.9 Establishing Similarity Between Datasets -- References -- 5 Metalearning Approaches for Algorithm Selection II -- 5.1 Introduction -- 5.2 Using Regression Models in Metalearning Systems -- 5.3 Using Classification at Meta-level for the Prediction of Applicability -- 5.4 Methods Based on Pairwise Comparisons -- 5.5 Pairwise Approach for a Set of Algorithms -- 5.6 Iterative Approach of Conducting Pairwise Tests -- 5.7 Using ART Trees and Forests.
5.8 Active Testing -- 5.9 Non-propositional Approaches -- References -- 6 Metalearning for Hyperparameter Optimization -- 6.1 Introduction -- 6.2 Basic Hyperparameter Optimization Methods -- 6.3 Bayesian Optimization -- 6.4 Metalearning for Hyperparameter Optimization -- 6.5 Concluding Remarks -- References -- 7 Automating Workflow/Pipeline Design -- 7.1 Introduction -- 7.2 Constraining the Search in Automatic Workflow Design -- 7.3 Strategies Used in Workflow Design -- 7.4 Exploiting Rankings of Successful Plans (Workflows) -- References -- Part II Advanced Techniques and Methods -- 8 Setting Up Configuration Spaces and Experiments -- 8.1 Introduction -- 8.2 Types of Configuration Spaces -- 8.3 Adequacy of Configuration Spaces for Given Tasks -- 8.4 Hyperparameter Importance and Marginal Contribution -- 8.5 Reducing Configuration Spaces -- 8.6 Configuration Spaces in Symbolic Learning -- 8.7 Which Datasets Are Needed? -- 8.8 Complete versus Incomplete Metadata -- 8.9 Exploiting Strategies from Multi-armed Bandits to Schedule Experiments -- 8.10 Discussion -- References -- 9 Combining Base-Learners into Ensembles -- 9.1 Introduction -- 9.2 Bagging and Boosting -- 9.3 Stacking and Cascade Generalization -- 9.4 Cascading and Delegating -- 9.5 Arbitrating -- 9.6 Meta-decision Trees -- 9.7 Discussion -- References -- 10 Metalearning in Ensemble Methods -- 10.1 Introduction -- 10.2 Basic Characteristics of Ensemble Systems -- 10.3 Selection-Based Approaches for Ensemble Generation -- 10.4 Ensemble Learning (per Dataset) -- 10.5 Dynamic Selection of Models (per Instance) -- 10.6 Generation of Hierarchical Ensembles -- 10.7 Conclusions and Future Research -- References -- 11 Algorithm Recommendation for Data Streams -- 11.1 Introduction -- 11.2 Metafeature-Based Approaches -- 11.3 Data Stream Ensembles -- 11.4 Recurring Meta-level Models.
11.5 Challenges for Future Research -- References -- 12 Transfer of Knowledge Across Tasks -- 12.1 Introduction -- 12.2 Background, Terminology, and Notation -- 12.3 Learning Architectures in Transfer Learning -- 12.4 A Theoretical Framework -- References -- 13 Metalearning for Deep Neural Networks -- 13.1 Introduction -- 13.2 Background and Notation -- 13.3 Metric-Based Metalearning -- 13.4 Model-Based Metalearning -- 13.5 Optimization-Based Metalearning -- 13.6 Discussion and Outlook -- References -- 14 Automating Data Science -- 14.1 Introduction -- 14.2 Defining the Current Problem/Task -- 14.3 Identifying the Task Domain and Knowledge -- 14.4 Obtaining the Data -- 14.5 Automating Data Preprocessing and Transformation -- 14.6 Automating Model and Report Generation -- References -- 15 Automating the Design of Complex Systems -- 15.1 Introduction -- 15.2 Exploiting a Richer Set of Operators -- 15.3 Changing the Granularity by Introducing New Concepts -- 15.4 Reusing New Concepts in Further Learning -- 15.5 Iterative Learning -- 15.6 Learning to Solve Interdependent Tasks -- References -- Part III Organizing and Exploiting Metadata -- 16 Metadata Repositories -- 16.1 Introduction -- 16.2 Organizing the World Machine Learning Information -- 16.3 OpenML -- References -- 17 Learning from Metadata in Repositories -- 17.1 Introduction -- 17.2 Performance Analysis of Algorithms per Dataset -- 17.3 Performance Analysis of Algorithms across Datasets -- 17.4 Effect of Specific Data/Workflow Characteristics on Performance -- 17.5 Summary -- References -- 18 Concluding Remarks -- 18.1 Introduction -- 18.2 Form of Metaknowledge Used in Different Approaches -- 18.3 Future Challenges -- References -- Index.
Description based on publisher supplied metadata and other sources.
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
Electronic books.
van Rijn, Jan N.
Soares, Carlos.
Vanschoren, Joaquin.
Print version: Brazdil, Pavel. Metalearning. Cham : Springer International Publishing AG, c2022. 9783030670238
ProQuest (Firm)
https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6893332 Click to View
language English
format eBook
author Brazdil, Pavel.
spellingShingle Brazdil, Pavel.
Metalearning : Applications to Automated Machine Learning and Data Mining.
Cognitive Technologies Series
author_facet Brazdil, Pavel.
van Rijn, Jan N.
Soares, Carlos.
Vanschoren, Joaquin.
author_variant p b pb
author2 van Rijn, Jan N.
Soares, Carlos.
Vanschoren, Joaquin.
author2_variant r j n v rjn rjnv
c s cs
j v jv
author2_role Contributor
Contributor
Contributor
author_sort Brazdil, Pavel.
title Metalearning : Applications to Automated Machine Learning and Data Mining.
title_sub Applications to Automated Machine Learning and Data Mining.
title_full Metalearning : Applications to Automated Machine Learning and Data Mining.
title_fullStr Metalearning : Applications to Automated Machine Learning and Data Mining.
title_full_unstemmed Metalearning : Applications to Automated Machine Learning and Data Mining.
title_auth Metalearning : Applications to Automated Machine Learning and Data Mining.
title_new Metalearning :
title_sort metalearning : applications to automated machine learning and data mining.
series Cognitive Technologies Series
series2 Cognitive Technologies Series
publisher Springer International Publishing AG,
publishDate 2022
physical 1 online resource (349 pages)
edition 2nd ed.
isbn 9783030670245
9783030670238
callnumber-first Q - Science
callnumber-subject Q - General Science
callnumber-label Q334-342
callnumber-sort Q 3334 3342
genre Electronic books.
genre_facet Electronic books.
url https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6893332
illustrated Not Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 006 - Special computer methods
dewey-full 006.31
dewey-sort 16.31
dewey-raw 006.31
dewey-search 006.31
oclc_num 1301265010
work_keys_str_mv AT brazdilpavel metalearningapplicationstoautomatedmachinelearninganddatamining
AT vanrijnjann metalearningapplicationstoautomatedmachinelearninganddatamining
AT soarescarlos metalearningapplicationstoautomatedmachinelearninganddatamining
AT vanschorenjoaquin metalearningapplicationstoautomatedmachinelearninganddatamining
status_str n
ids_txt_mv (MiAaPQ)5006893332
(Au-PeEL)EBL6893332
(OCoLC)1301265010
carrierType_str_mv cr
hierarchy_parent_title Cognitive Technologies Series
is_hierarchy_title Metalearning : Applications to Automated Machine Learning and Data Mining.
container_title Cognitive Technologies Series
author2_original_writing_str_mv noLinkedField
noLinkedField
noLinkedField
marc_error Info : Unimarc and ISO-8859-1 translations identical, choosing ISO-8859-1. --- [ 856 : z ]
_version_ 1792331062333931520
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>07185nam a22004693i 4500</leader><controlfield tag="001">5006893332</controlfield><controlfield tag="003">MiAaPQ</controlfield><controlfield tag="005">20240229073845.0</controlfield><controlfield tag="006">m o d | </controlfield><controlfield tag="007">cr cnu||||||||</controlfield><controlfield tag="008">240229s2022 xx o ||||0 eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9783030670245</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="z">9783030670238</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)5006893332</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL6893332</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1301265010</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">MiAaPQ</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">MiAaPQ</subfield><subfield code="d">MiAaPQ</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">Q334-342</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">006.31</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Brazdil, Pavel.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Metalearning :</subfield><subfield code="b">Applications to Automated Machine Learning and Data Mining.</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">2nd ed.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham :</subfield><subfield code="b">Springer International Publishing AG,</subfield><subfield code="c">2022.</subfield></datafield><datafield 
tag="264" ind1=" " ind2="4"><subfield code="c">Ã2022.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (349 pages)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="1" ind2=" "><subfield code="a">Cognitive Technologies Series</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Intro -- Preface -- Contents -- Part I Basic Concepts and Architecture -- 1 Introduction -- 1.1 Organization of the Book -- 1.2 Basic Concepts and Architecture (Part I) -- 1.3 Advanced Techniques and Methods (Part II) -- 1.4 Repositories of Experimental Results (Part III) -- References -- 2 Metalearning Approaches for Algorithm Selection I (Exploiting Rankings) -- 2.1 Introduction -- 2.2 Different Forms of Recommendation -- 2.3 Ranking Models for Algorithm Selection -- 2.4 Using a Combined Measure of Accuracy and Runtime -- 2.5 Extensions and Other Approaches -- References -- 3 Evaluating Recommendations of Metalearning/AutoML Systems -- 3.1 Introduction -- 3.2 Methodology for Evaluating Base-Level Algorithms -- 3.3 Normalization of Performance for Base-Level Algorithms -- 3.4 Methodology for Evaluating Metalearning and AutoML Systems -- 3.5 Evaluating Recommendations by Correlation -- 3.6 Evaluating the Effects of Recommendations -- 3.7 Some Useful Measures -- References -- 4 Dataset Characteristics (Metafeatures) -- 4.1 Introduction -- 4.2 Data Characterization Used in Classification Tasks -- 4.3 Data Characterization Used in Regression 
Tasks -- 4.4 Data Characterization Used in Time Series Tasks -- 4.5 Data Characterization Used in Clustering Tasks -- 4.6 Deriving New Features from the Basic Set -- 4.7 Selection of Metafeatures -- 4.8 Algorithm-Specific Characterization and Representation Issues -- 4.9 Establishing Similarity Between Datasets -- References -- 5 Metalearning Approaches for Algorithm Selection II -- 5.1 Introduction -- 5.2 Using Regression Models in Metalearning Systems -- 5.3 Using Classification at Meta-level for the Prediction of Applicability -- 5.4 Methods Based on Pairwise Comparisons -- 5.5 Pairwise Approach for a Set of Algorithms -- 5.6 Iterative Approach of Conducting Pairwise Tests -- 5.7 Using ART Trees and Forests.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">5.8 Active Testing -- 5.9 Non-propositional Approaches -- References -- 6 Metalearning for Hyperparameter Optimization -- 6.1 Introduction -- 6.2 Basic Hyperparameter Optimization Methods -- 6.3 Bayesian Optimization -- 6.4 Metalearning for Hyperparameter Optimization -- 6.5 Concluding Remarks -- References -- 7 Automating Workflow/Pipeline Design -- 7.1 Introduction -- 7.2 Constraining the Search in Automatic Workflow Design -- 7.3 Strategies Used in Workflow Design -- 7.4 Exploiting Rankings of Successful Plans (Workflows) -- References -- Part II Advanced Techniques and Methods -- 8 Setting Up Configuration Spaces and Experiments -- 8.1 Introduction -- 8.2 Types of Configuration Spaces -- 8.3 Adequacy of Configuration Spaces for Given Tasks -- 8.4 Hyperparameter Importance and Marginal Contribution -- 8.5 Reducing Configuration Spaces -- 8.6 Configuration Spaces in Symbolic Learning -- 8.7 Which Datasets Are Needed? 
-- 8.8 Complete versus Incomplete Metadata -- 8.9 Exploiting Strategies from Multi-armed Bandits to Schedule Experiments -- 8.10 Discussion -- References -- 9 Combining Base-Learners into Ensembles -- 9.1 Introduction -- 9.2 Bagging and Boosting -- 9.3 Stacking and Cascade Generalization -- 9.4 Cascading and Delegating -- 9.5 Arbitrating -- 9.6 Meta-decision Trees -- 9.7 Discussion -- References -- 10 Metalearning in Ensemble Methods -- 10.1 Introduction -- 10.2 Basic Characteristics of Ensemble Systems -- 10.3 Selection-Based Approaches for Ensemble Generation -- 10.4 Ensemble Learning (per Dataset) -- 10.5 Dynamic Selection of Models (per Instance) -- 10.6 Generation of Hierarchical Ensembles -- 10.7 Conclusions and Future Research -- References -- 11 Algorithm Recommendation for Data Streams -- 11.1 Introduction -- 11.2 Metafeature-Based Approaches -- 11.3 Data Stream Ensembles -- 11.4 Recurring Meta-level Models.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">11.5 Challenges for Future Research -- References -- 12 Transfer of Knowledge Across Tasks -- 12.1 Introduction -- 12.2 Background, Terminology, and Notation -- 12.3 Learning Architectures in Transfer Learning -- 12.4 A Theoretical Framework -- References -- 13 Metalearning for Deep Neural Networks -- 13.1 Introduction -- 13.2 Background and Notation -- 13.3 Metric-Based Metalearning -- 13.4 Model-Based Metalearning -- 13.5 Optimization-Based Metalearning -- 13.6 Discussion and Outlook -- References -- 14 Automating Data Science -- 14.1 Introduction -- 14.2 Defining the Current Problem/Task -- 14.3 Identifying the Task Domain and Knowledge -- 14.4 Obtaining the Data -- 14.5 Automating Data Preprocessing and Transformation -- 14.6 Automating Model and Report Generation -- References -- 15 Automating the Design of Complex Systems -- 15.1 Introduction -- 15.2 Exploiting a Richer Set of Operators -- 15.3 Changing the Granularity by Introducing New Concepts -- 15.4 Reusing New 
Concepts in Further Learning -- 15.5 Iterative Learning -- 15.6 Learning to Solve Interdependent Tasks -- References -- Part III Organizing and Exploiting Metadata -- 16 Metadata Repositories -- 16.1 Introduction -- 16.2 Organizing the World Machine Learning Information -- 16.3 OpenML -- References -- 17 Learning from Metadata in Repositories -- 17.1 Introduction -- 17.2 Performance Analysis of Algorithms per Dataset -- 17.3 Performance Analysis of Algorithms across Datasets -- 17.4 Effect of Specific Data/Workflow Characteristics on Performance -- 17.5 Summary -- References -- 18 Concluding Remarks -- 18.1 Introduction -- 18.2 Form of Metaknowledge Used in Different Approaches -- 18.3 Future Challenges -- References -- Index.</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on publisher supplied metadata and other sources.</subfield></datafield><datafield tag="590" ind1=" " ind2=" "><subfield code="a">Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. 
</subfield></datafield><datafield tag="655" ind1=" " ind2="4"><subfield code="a">Electronic books.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">van Rijn, Jan N.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Soares, Carlos.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Vanschoren, Joaquin.</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Print version:</subfield><subfield code="a">Brazdil, Pavel</subfield><subfield code="t">Metalearning</subfield><subfield code="d">Cham : Springer International Publishing AG,c2022</subfield><subfield code="z">9783030670238</subfield></datafield><datafield tag="797" ind1="2" ind2=" "><subfield code="a">ProQuest (Firm)</subfield></datafield><datafield tag="830" ind1=" " ind2="0"><subfield code="a">Cognitive Technologies Series</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6893332</subfield><subfield code="z">Click to View</subfield></datafield></record></collection>