LEADER 10880nam 22008773 450
001 9910548277503321
005 20250628110046.0
010 $a3-030-67024-4
035 $a(CKB)5590000000896787
035 $a(MiAaPQ)EBC6893332
035 $a(Au-PeEL)EBL6893332
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/79344
035 $a(PPN)260826111
035 $a(ODN)ODN0010171413
035 $a(oapen)doab79344
035 $a(OCoLC)1301265010
035 $a(EXLCZ)995590000000896787
100 $a20220321d2022 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aMetalearning $eApplications to Automated Machine Learning and Data Mining
205 $a2nd ed.
210 $aCham$cSpringer Nature$d2022
210 1$aCham :$cSpringer International Publishing AG,$d2022.
210 4$d©2022.
215 $a1 online resource (349 pages)
225 1 $aCognitive Technologies
311 08$a3-030-67023-6
327 $aIntro -- Preface -- Contents -- Part I Basic Concepts and Architecture -- 1 Introduction -- 1.1 Organization of the Book -- 1.2 Basic Concepts and Architecture (Part I) -- 1.3 Advanced Techniques and Methods (Part II) -- 1.4 Repositories of Experimental Results (Part III) -- References -- 2 Metalearning Approaches for Algorithm Selection I (Exploiting Rankings) -- 2.1 Introduction -- 2.2 Different Forms of Recommendation -- 2.3 Ranking Models for Algorithm Selection -- 2.4 Using a Combined Measure of Accuracy and Runtime -- 2.5 Extensions and Other Approaches -- References -- 3 Evaluating Recommendations of Metalearning/AutoML Systems -- 3.1 Introduction -- 3.2 Methodology for Evaluating Base-Level Algorithms -- 3.3 Normalization of Performance for Base-Level Algorithms -- 3.4 Methodology for Evaluating Metalearning and AutoML Systems -- 3.5 Evaluating Recommendations by Correlation -- 3.6 Evaluating the Effects of Recommendations -- 3.7 Some Useful Measures -- References -- 4 Dataset Characteristics (Metafeatures) -- 4.1 Introduction -- 4.2 Data Characterization Used in Classification Tasks -- 4.3 Data Characterization Used in Regression Tasks -- 4.4 Data Characterization Used in Time Series Tasks
-- 4.5 Data Characterization Used in Clustering Tasks -- 4.6 Deriving New Features from the Basic Set -- 4.7 Selection of Metafeatures -- 4.8 Algorithm-Specific Characterization and Representation Issues -- 4.9 Establishing Similarity Between Datasets -- References -- 5 Metalearning Approaches for Algorithm Selection II -- 5.1 Introduction -- 5.2 Using Regression Models in Metalearning Systems -- 5.3 Using Classification at Meta-level for the Prediction of Applicability -- 5.4 Methods Based on Pairwise Comparisons -- 5.5 Pairwise Approach for a Set of Algorithms -- 5.6 Iterative Approach of Conducting Pairwise Tests -- 5.7 Using ART Trees and Forests.
327 $a5.8 Active Testing -- 5.9 Non-propositional Approaches -- References -- 6 Metalearning for Hyperparameter Optimization -- 6.1 Introduction -- 6.2 Basic Hyperparameter Optimization Methods -- 6.3 Bayesian Optimization -- 6.4 Metalearning for Hyperparameter Optimization -- 6.5 Concluding Remarks -- References -- 7 Automating Workflow/Pipeline Design -- 7.1 Introduction -- 7.2 Constraining the Search in Automatic Workflow Design -- 7.3 Strategies Used in Workflow Design -- 7.4 Exploiting Rankings of Successful Plans (Workflows) -- References -- Part II Advanced Techniques and Methods -- 8 Setting Up Configuration Spaces and Experiments -- 8.1 Introduction -- 8.2 Types of Configuration Spaces -- 8.3 Adequacy of Configuration Spaces for Given Tasks -- 8.4 Hyperparameter Importance and Marginal Contribution -- 8.5 Reducing Configuration Spaces -- 8.6 Configuration Spaces in Symbolic Learning -- 8.7 Which Datasets Are Needed?
-- 8.8 Complete versus Incomplete Metadata -- 8.9 Exploiting Strategies from Multi-armed Bandits to Schedule Experiments -- 8.10 Discussion -- References -- 9 Combining Base-Learners into Ensembles -- 9.1 Introduction -- 9.2 Bagging and Boosting -- 9.3 Stacking and Cascade Generalization -- 9.4 Cascading and Delegating -- 9.5 Arbitrating -- 9.6 Meta-decision Trees -- 9.7 Discussion -- References -- 10 Metalearning in Ensemble Methods -- 10.1 Introduction -- 10.2 Basic Characteristics of Ensemble Systems -- 10.3 Selection-Based Approaches for Ensemble Generation -- 10.4 Ensemble Learning (per Dataset) -- 10.5 Dynamic Selection of Models (per Instance) -- 10.6 Generation of Hierarchical Ensembles -- 10.7 Conclusions and Future Research -- References -- 11 Algorithm Recommendation for Data Streams -- 11.1 Introduction -- 11.2 Metafeature-Based Approaches -- 11.3 Data Stream Ensembles -- 11.4 Recurring Meta-level Models.
327 $a11.5 Challenges for Future Research -- References -- 12 Transfer of Knowledge Across Tasks -- 12.1 Introduction -- 12.2 Background, Terminology, and Notation -- 12.3 Learning Architectures in Transfer Learning -- 12.4 A Theoretical Framework -- References -- 13 Metalearning for Deep Neural Networks -- 13.1 Introduction -- 13.2 Background and Notation -- 13.3 Metric-Based Metalearning -- 13.4 Model-Based Metalearning -- 13.5 Optimization-Based Metalearning -- 13.6 Discussion and Outlook -- References -- 14 Automating Data Science -- 14.1 Introduction -- 14.2 Defining the Current Problem/Task -- 14.3 Identifying the Task Domain and Knowledge -- 14.4 Obtaining the Data -- 14.5 Automating Data Preprocessing and Transformation -- 14.6 Automating Model and Report Generation -- References -- 15 Automating the Design of Complex Systems -- 15.1 Introduction -- 15.2 Exploiting a Richer Set of Operators -- 15.3 Changing the Granularity by Introducing New Concepts -- 15.4 Reusing New Concepts in Further Learning -- 15.5 Iterative Learning -- 15.6 Learning to
Solve Interdependent Tasks -- References -- Part III Organizing and Exploiting Metadata -- 16 Metadata Repositories -- 16.1 Introduction -- 16.2 Organizing the World Machine Learning Information -- 16.3 OpenML -- References -- 17 Learning from Metadata in Repositories -- 17.1 Introduction -- 17.2 Performance Analysis of Algorithms per Dataset -- 17.3 Performance Analysis of Algorithms across Datasets -- 17.4 Effect of Specific Data/Workflow Characteristics on Performance -- 17.5 Summary -- References -- 18 Concluding Remarks -- 18.1 Introduction -- 18.2 Form of Metaknowledge Used in Different Approaches -- 18.3 Future Challenges -- References -- Index.
330 $aThis open access book covers metalearning, one of the fastest-growing areas of research in machine learning: the study of principled methods to obtain efficient models and solutions by adapting machine learning and data mining processes. This adaptation usually exploits information from past experience on other tasks, and the adaptive processes themselves can involve machine learning approaches. A related and currently very active area, automated machine learning (AutoML), is concerned with automating machine learning processes. Metalearning and AutoML can help AI systems learn to control the application of different learning methods and acquire new solutions faster, without unnecessary interventions from the user. This book offers a comprehensive and thorough introduction to almost all aspects of metalearning and AutoML, covering the basic concepts and architecture, evaluation, datasets, hyperparameter optimization, ensembles and workflows, and also how this knowledge can be used to select, combine, compose, adapt and configure both algorithms and models to yield faster and better solutions to data mining and data science problems. It can thus help developers build systems that improve themselves through experience. This book is a substantial update of the first edition published in 2009.
It includes 18 chapters, more than twice as many as the previous edition, which enabled the authors to cover the most relevant topics in greater depth and to incorporate an overview of recent research in each area. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining, data science and artificial intelligence. ; Metalearning is the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning and data mining processes. While the variety of machine learning and data mining techniques now available can, in principle, provide good model solutions, a methodology is still needed to guide the search for the most appropriate model in an efficient way. Metalearning provides one such methodology, allowing systems to become more effective through experience. This book discusses several approaches to obtaining knowledge concerning the performance of machine learning and data mining algorithms. It shows how this knowledge can be reused to select, combine, compose and adapt both algorithms and models to yield faster, more effective solutions to data mining problems. It can thus help developers improve their algorithms and also develop learning systems that can improve themselves. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining and artificial intelligence.
410 0$aCognitive Technologies
606 $aArtificial intelligence$2bicssc
606 $aData mining$2bicssc
606 $aMachine learning$2bicssc
606 $aAprenentatge automàtic$2thub
606 $aMineria de dades$2thub
608 $aLlibres electrònics$2thub
610 $aMetalearning
610 $aAutomating Machine Learning (AutoML)
610 $aMachine Learning
610 $aArtificial Intelligence
610 $aalgorithm selection
610 $aalgorithm recommendation
610 $aalgorithm configuration
610 $ahyperparameter optimization
610 $aautomating the workflow/pipeline design
610 $ametalearning in ensemble construction
610 $ametalearning in deep neural networks
610 $atransfer learning
610 $aalgorithm recommendation for data streams
610 $aautomating data science
610 $aOpen Access
615 7$aArtificial intelligence
615 7$aData mining
615 7$aMachine learning
615 7$aAprenentatge automàtic.
615 7$aMineria de dades.
676 $a006.31
686 $aCOM004000$aCOM021030$2bisacsh
700 $aBrazdil$b Pavel$01214572
701 $avan Rijn$b Jan N$01214573
701 $aSoares$b Carlos$0961096
701 $aVanschoren$b Joaquin$01214574
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910548277503321
996 $aMetalearning$92804517
997 $aUNINA