Data mining with decision trees [electronic resource] : theory and applications / Lior Rokach, Oded Maimon |
Author | Rokach, Lior |
Publication/distribution | Singapore : World Scientific, c2008 |
Physical description | 1 online resource (263 p.) |
Discipline | 006.312 |
Other authors (persons) | Maimon, Oded Z. |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Data mining
Decision trees |
Genre/form | Electronic books. |
ISBN |
1-281-91179-8
9786611911799
981-277-172-7 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Preface; Contents; 1. Introduction to Decision Trees; 1.1 Data Mining and Knowledge Discovery; 1.2 Taxonomy of Data Mining Methods; 1.3 Supervised Methods; 1.3.1 Overview; 1.4 Classification Trees; 1.5 Characteristics of Classification Trees; 1.5.1 Tree Size; 1.5.2 The hierarchical nature of decision trees; 1.6 Relation to Rule Induction; 2. Growing Decision Trees; 2.0.1 Training Set; 2.0.2 Definition of the Classification Problem; 2.0.3 Induction Algorithms; 2.0.4 Probability Estimation in Decision Trees; 2.0.4.1 Laplace Correction; 2.0.4.2 No Match
2.1 Algorithmic Framework for Decision Trees; 2.2 Stopping Criteria; 3. Evaluation of Classification Trees; 3.1 Overview; 3.2 Generalization Error; 3.2.1 Theoretical Estimation of Generalization Error; 3.2.2 Empirical Estimation of Generalization Error; 3.2.3 Alternatives to the Accuracy Measure; 3.2.4 The F-Measure; 3.2.5 Confusion Matrix; 3.2.6 Classifier Evaluation under Limited Resources; 3.2.6.1 ROC Curves; 3.2.6.2 Hit Rate Curve; 3.2.6.3 Qrecall (Quota Recall); 3.2.6.4 Lift Curve; 3.2.6.5 Pearson Correlation Coefficient; 3.2.6.6 Area Under Curve (AUC); 3.2.6.7 Average Hit Rate; 3.2.6.8 Average Qrecall; 3.2.6.9 Potential Extract Measure (PEM); 3.2.7 Which Decision Tree Classifier is Better?; 3.2.7.1 McNemar's Test; 3.2.7.2 A Test for the Difference of Two Proportions; 3.2.7.3 The Resampled Paired t Test; 3.2.7.4 The k-fold Cross-validated Paired t Test; 3.3 Computational Complexity; 3.4 Comprehensibility; 3.5 Scalability to Large Datasets; 3.6 Robustness; 3.7 Stability; 3.8 Interestingness Measures; 3.9 Overfitting and Underfitting; 3.10 "No Free Lunch" Theorem; 4. Splitting Criteria; 4.1 Univariate Splitting Criteria; 4.1.1 Overview; 4.1.2 Impurity-based Criteria; 4.1.3 Information Gain; 4.1.4 Gini Index; 4.1.5 Likelihood Ratio Chi-squared Statistics; 4.1.6 DKM Criterion; 4.1.7 Normalized Impurity-based Criteria; 4.1.8 Gain Ratio; 4.1.9 Distance Measure; 4.1.10 Binary Criteria; 4.1.11 Twoing Criterion; 4.1.12 Orthogonal Criterion; 4.1.13 Kolmogorov-Smirnov Criterion; 4.1.14 AUC Splitting Criteria; 4.1.15 Other Univariate Splitting Criteria; 4.1.16 Comparison of Univariate Splitting Criteria; 4.2 Handling Missing Values
5. Pruning Trees; 5.1 Stopping Criteria; 5.2 Heuristic Pruning; 5.2.1 Overview; 5.2.2 Cost Complexity Pruning; 5.2.3 Reduced Error Pruning; 5.2.4 Minimum Error Pruning (MEP); 5.2.5 Pessimistic Pruning; 5.2.6 Error-Based Pruning (EBP); 5.2.7 Minimum Description Length (MDL) Pruning; 5.2.8 Other Pruning Methods; 5.2.9 Comparison of Pruning Methods; 5.3 Optimal Pruning; 6. Advanced Decision Trees; 6.1 Survey of Common Algorithms for Decision Tree Induction; 6.1.1 ID3; 6.1.2 C4.5; 6.1.3 CART; 6.1.4 CHAID; 6.1.5 QUEST; 6.1.6 Reference to Other Algorithms; 6.1.7 Advantages and Disadvantages of Decision Trees; 6.1.8 Oblivious Decision Trees; 6.1.9 Decision Trees Inducers for Large Datasets; 6.1.10 Online Adaptive Decision Trees |
Record no. | UNINA-9910450810803321 |
Find it at: Univ. Federico II |
|
Data mining with decision trees [electronic resource] : theory and applications / Lior Rokach, Oded Maimon |
Author | Rokach, Lior |
Publication/distribution | Singapore : World Scientific, c2008 |
Physical description | 1 online resource (263 p.) |
Discipline | 006.312 |
Other authors (persons) | Maimon, Oded Z. |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Data mining
Decision trees |
ISBN |
1-281-91179-8
9786611911799
981-277-172-7 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Preface; Contents; 1. Introduction to Decision Trees; 1.1 Data Mining and Knowledge Discovery; 1.2 Taxonomy of Data Mining Methods; 1.3 Supervised Methods; 1.3.1 Overview; 1.4 Classification Trees; 1.5 Characteristics of Classification Trees; 1.5.1 Tree Size; 1.5.2 The hierarchical nature of decision trees; 1.6 Relation to Rule Induction; 2. Growing Decision Trees; 2.0.1 Training Set; 2.0.2 Definition of the Classification Problem; 2.0.3 Induction Algorithms; 2.0.4 Probability Estimation in Decision Trees; 2.0.4.1 Laplace Correction; 2.0.4.2 No Match
2.1 Algorithmic Framework for Decision Trees; 2.2 Stopping Criteria; 3. Evaluation of Classification Trees; 3.1 Overview; 3.2 Generalization Error; 3.2.1 Theoretical Estimation of Generalization Error; 3.2.2 Empirical Estimation of Generalization Error; 3.2.3 Alternatives to the Accuracy Measure; 3.2.4 The F-Measure; 3.2.5 Confusion Matrix; 3.2.6 Classifier Evaluation under Limited Resources; 3.2.6.1 ROC Curves; 3.2.6.2 Hit Rate Curve; 3.2.6.3 Qrecall (Quota Recall); 3.2.6.4 Lift Curve; 3.2.6.5 Pearson Correlation Coefficient; 3.2.6.6 Area Under Curve (AUC); 3.2.6.7 Average Hit Rate; 3.2.6.8 Average Qrecall; 3.2.6.9 Potential Extract Measure (PEM); 3.2.7 Which Decision Tree Classifier is Better?; 3.2.7.1 McNemar's Test; 3.2.7.2 A Test for the Difference of Two Proportions; 3.2.7.3 The Resampled Paired t Test; 3.2.7.4 The k-fold Cross-validated Paired t Test; 3.3 Computational Complexity; 3.4 Comprehensibility; 3.5 Scalability to Large Datasets; 3.6 Robustness; 3.7 Stability; 3.8 Interestingness Measures; 3.9 Overfitting and Underfitting; 3.10 "No Free Lunch" Theorem; 4. Splitting Criteria; 4.1 Univariate Splitting Criteria; 4.1.1 Overview; 4.1.2 Impurity-based Criteria; 4.1.3 Information Gain; 4.1.4 Gini Index; 4.1.5 Likelihood Ratio Chi-squared Statistics; 4.1.6 DKM Criterion; 4.1.7 Normalized Impurity-based Criteria; 4.1.8 Gain Ratio; 4.1.9 Distance Measure; 4.1.10 Binary Criteria; 4.1.11 Twoing Criterion; 4.1.12 Orthogonal Criterion; 4.1.13 Kolmogorov-Smirnov Criterion; 4.1.14 AUC Splitting Criteria; 4.1.15 Other Univariate Splitting Criteria; 4.1.16 Comparison of Univariate Splitting Criteria; 4.2 Handling Missing Values
5. Pruning Trees; 5.1 Stopping Criteria; 5.2 Heuristic Pruning; 5.2.1 Overview; 5.2.2 Cost Complexity Pruning; 5.2.3 Reduced Error Pruning; 5.2.4 Minimum Error Pruning (MEP); 5.2.5 Pessimistic Pruning; 5.2.6 Error-Based Pruning (EBP); 5.2.7 Minimum Description Length (MDL) Pruning; 5.2.8 Other Pruning Methods; 5.2.9 Comparison of Pruning Methods; 5.3 Optimal Pruning; 6. Advanced Decision Trees; 6.1 Survey of Common Algorithms for Decision Tree Induction; 6.1.1 ID3; 6.1.2 C4.5; 6.1.3 CART; 6.1.4 CHAID; 6.1.5 QUEST; 6.1.6 Reference to Other Algorithms; 6.1.7 Advantages and Disadvantages of Decision Trees; 6.1.8 Oblivious Decision Trees; 6.1.9 Decision Trees Inducers for Large Datasets; 6.1.10 Online Adaptive Decision Trees |
Record no. | UNINA-9910784996003321 |
Find it at: Univ. Federico II |
|
Data mining with decision trees : theory and applications / Lior Rokach, Oded Maimon |
Author | Rokach, Lior |
Edition | [1st ed.] |
Publication/distribution | Singapore : World Scientific, c2008 |
Physical description | 1 online resource (263 p.) |
Discipline | 006.312 |
Other authors (persons) | Maimon, Oded Z. |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Data mining
Decision trees |
ISBN |
1-281-91179-8
9786611911799
981-277-172-7 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Preface; Contents; 1. Introduction to Decision Trees; 1.1 Data Mining and Knowledge Discovery; 1.2 Taxonomy of Data Mining Methods; 1.3 Supervised Methods; 1.3.1 Overview; 1.4 Classification Trees; 1.5 Characteristics of Classification Trees; 1.5.1 Tree Size; 1.5.2 The hierarchical nature of decision trees; 1.6 Relation to Rule Induction; 2. Growing Decision Trees; 2.0.1 Training Set; 2.0.2 Definition of the Classification Problem; 2.0.3 Induction Algorithms; 2.0.4 Probability Estimation in Decision Trees; 2.0.4.1 Laplace Correction; 2.0.4.2 No Match
2.1 Algorithmic Framework for Decision Trees; 2.2 Stopping Criteria; 3. Evaluation of Classification Trees; 3.1 Overview; 3.2 Generalization Error; 3.2.1 Theoretical Estimation of Generalization Error; 3.2.2 Empirical Estimation of Generalization Error; 3.2.3 Alternatives to the Accuracy Measure; 3.2.4 The F-Measure; 3.2.5 Confusion Matrix; 3.2.6 Classifier Evaluation under Limited Resources; 3.2.6.1 ROC Curves; 3.2.6.2 Hit Rate Curve; 3.2.6.3 Qrecall (Quota Recall); 3.2.6.4 Lift Curve; 3.2.6.5 Pearson Correlation Coefficient; 3.2.6.6 Area Under Curve (AUC); 3.2.6.7 Average Hit Rate; 3.2.6.8 Average Qrecall; 3.2.6.9 Potential Extract Measure (PEM); 3.2.7 Which Decision Tree Classifier is Better?; 3.2.7.1 McNemar's Test; 3.2.7.2 A Test for the Difference of Two Proportions; 3.2.7.3 The Resampled Paired t Test; 3.2.7.4 The k-fold Cross-validated Paired t Test; 3.3 Computational Complexity; 3.4 Comprehensibility; 3.5 Scalability to Large Datasets; 3.6 Robustness; 3.7 Stability; 3.8 Interestingness Measures; 3.9 Overfitting and Underfitting; 3.10 "No Free Lunch" Theorem; 4. Splitting Criteria; 4.1 Univariate Splitting Criteria; 4.1.1 Overview; 4.1.2 Impurity-based Criteria; 4.1.3 Information Gain; 4.1.4 Gini Index; 4.1.5 Likelihood Ratio Chi-squared Statistics; 4.1.6 DKM Criterion; 4.1.7 Normalized Impurity-based Criteria; 4.1.8 Gain Ratio; 4.1.9 Distance Measure; 4.1.10 Binary Criteria; 4.1.11 Twoing Criterion; 4.1.12 Orthogonal Criterion; 4.1.13 Kolmogorov-Smirnov Criterion; 4.1.14 AUC Splitting Criteria; 4.1.15 Other Univariate Splitting Criteria; 4.1.16 Comparison of Univariate Splitting Criteria; 4.2 Handling Missing Values
5. Pruning Trees; 5.1 Stopping Criteria; 5.2 Heuristic Pruning; 5.2.1 Overview; 5.2.2 Cost Complexity Pruning; 5.2.3 Reduced Error Pruning; 5.2.4 Minimum Error Pruning (MEP); 5.2.5 Pessimistic Pruning; 5.2.6 Error-Based Pruning (EBP); 5.2.7 Minimum Description Length (MDL) Pruning; 5.2.8 Other Pruning Methods; 5.2.9 Comparison of Pruning Methods; 5.3 Optimal Pruning; 6. Advanced Decision Trees; 6.1 Survey of Common Algorithms for Decision Tree Induction; 6.1.1 ID3; 6.1.2 C4.5; 6.1.3 CART; 6.1.4 CHAID; 6.1.5 QUEST; 6.1.6 Reference to Other Algorithms; 6.1.7 Advantages and Disadvantages of Decision Trees; 6.1.8 Oblivious Decision Trees; 6.1.9 Decision Trees Inducers for Large Datasets; 6.1.10 Online Adaptive Decision Trees |
Record no. | UNINA-9910824725103321 |
Find it at: Univ. Federico II |
|
Machine Learning for Data Science Handbook : Data Mining and Knowledge Discovery Handbook / edited by Lior Rokach, Oded Maimon, Erez Shmueli |
Author | Rokach, Lior |
Edition | [3rd ed. 2023.] |
Publication/distribution | Cham : Springer International Publishing : Imprint: Springer, 2023 |
Physical description | 1 online resource (975 pages) |
Discipline | 006.312 |
Other authors (persons) |
Maimon, Oded
Shmueli, Erez |
Topical subject |
Machine learning
Artificial intelligence
Data mining
Information storage and retrieval systems
Machine Learning
Artificial Intelligence
Data Mining and Knowledge Discovery
Information Storage and Retrieval
Mineria de dades
Aprenentatge automàtic |
Genre/form | Electronic books |
ISBN | 3-031-24628-4 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note | Introduction to Knowledge Discovery and Data Mining -- Preprocessing Methods -- Data Cleansing: A Prelude to Knowledge Discovery -- Handling Missing Attribute Values -- Geometric Methods for Feature Extraction and Dimensional Reduction - A Guided Tour -- Dimension Reduction and Feature Selection -- Discretization Methods -- Outlier Detection -- Supervised Methods -- Supervised Learning -- Classification Trees -- Bayesian Networks -- Data Mining within a Regression Framework -- Support Vector Machines -- Rule Induction -- Unsupervised Methods -- A survey of Clustering Algorithms -- Association Rules -- Frequent Set Mining -- Constraint-based Data Mining -- Link Analysis -- Soft Computing Methods -- A Review of Evolutionary Algorithms for Data Mining -- A Review of Reinforcement Learning Methods -- Neural Networks For Data Mining -- Granular Computing and Rough Sets - An Incremental Development -- Pattern Clustering Using a Swarm Intelligence Approach -- Using Fuzzy Logic in Data Mining -- Supporting Methods -- Statistical Methods for Data Mining -- Logics for Data Mining -- Wavelet Methods in Data Mining -- Fractal Mining - Self Similarity-based Clustering and its Applications -- Visual Analysis of Sequences Using Fractal Geometry -- Interestingness Measures - On Determining What Is Interesting -- Quality Assessment Approaches in Data Mining -- Data Mining Model Comparison -- Data Mining Query Languages -- Advanced Methods -- Mining Multi-label Data -- Privacy in Data Mining -- Meta-Learning - Concepts and Techniques -- Bias vs Variance Decomposition for Regression and Classification -- Mining with Rare Cases -- Data Stream Mining -- Mining Concept-Drifting Data Streams -- Mining High-Dimensional Data -- Text Mining and Information Extraction -- Spatial Data Mining -- Spatio-temporal clustering -- Data Mining for Imbalanced Datasets: An Overview -- Relational Data Mining -- Web Mining -- A Review of Web Document Clustering Approaches -- Causal Discovery
-- Ensemble Methods in Supervised Learning -- Data Mining using Decomposition Methods -- Information Fusion - Methods and Aggregation Operators -- Parallel and Grid-Based Data Mining – Algorithms, Models and Systems for High-Performance KDD -- Collaborative Data Mining -- Organizational Data Mining -- Mining Time Series Data -- Applications -- Multimedia Data Mining -- Data Mining in Medicine -- Learning Information Patterns in Biological Databases - Stochastic Data Mining -- Data Mining for Financial Applications -- Data Mining for Intrusion Detection -- Data Mining for CRM -- Data Mining for Target Marketing -- NHECD - Nano Health and Environmental Commented Database -- Software -- Commercial Data Mining Software -- Weka - A Machine Learning Workbench for Data Mining. |
Record no. | UNINA-9910739470003321 |
Find it at: Univ. Federico II |
|
Pattern classification using ensemble methods [electronic resource] / Lior Rokach |
Author | Rokach, Lior |
Publication/distribution | Singapore ; Hackensack, NJ : World Scientific, c2010 |
Physical description | 1 online resource (242 p.) |
Discipline | 621.389/28 |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Pattern recognition systems
Algorithms
Machine learning |
Genre/form | Electronic books. |
ISBN |
1-282-75785-7
9786612757853
981-4271-07-1 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Contents; Preface; 1. Introduction to Pattern Classification; 1.1 Pattern Classification; 1.2 Induction Algorithms; 1.3 Rule Induction; 1.4 Decision Trees; 1.5 Bayesian Methods; 1.5.1 Overview; 1.5.2 Naïve Bayes; 1.5.2.1 The Basic Naïve Bayes Classifier; 1.5.2.2 Naïve Bayes Induction for Numeric Attributes; 1.5.2.3 Correction to the Probability Estimation; 1.5.2.4 Laplace Correction; 1.5.2.5 No Match; 1.5.3 Other Bayesian Methods; 1.6 Other Induction Methods; 1.6.1 Neural Networks; 1.6.2 Genetic Algorithms; 1.6.3 Instance-based Learning; 1.6.4 Support Vector Machines
2. Introduction to Ensemble Learning; 2.1 Back to the Roots; 2.2 The Wisdom of Crowds; 2.3 The Bagging Algorithm; 2.4 The Boosting Algorithm; 2.5 The AdaBoost Algorithm; 2.6 No Free Lunch Theorem and Ensemble Learning; 2.7 Bias-Variance Decomposition and Ensemble Learning; 2.8 Occam's Razor and Ensemble Learning; 2.9 Classifier Dependency; 2.9.1 Dependent Methods; 2.9.1.1 Model-guided Instance Selection; 2.9.1.2 Basic Boosting Algorithms; 2.9.1.3 Advanced Boosting Algorithms; 2.9.1.4 Incremental Batch Learning; 2.9.2 Independent Methods; 2.9.2.1 Bagging; 2.9.2.2 Wagging; 2.9.2.3 Random Forest and Random Subspace Projection; 2.9.2.4 Non-Linear Boosting Projection (NLBP); 2.9.2.5 Cross-validated Committees; 2.9.2.6 Robust Boosting; 2.10 Ensemble Methods for Advanced Classification Tasks; 2.10.1 Cost-Sensitive Classification; 2.10.2 Ensemble for Learning Concept Drift; 2.10.3 Reject Driven Classification; 3. Ensemble Classification; 3.1 Fusion Methods; 3.1.1 Weighting Methods; 3.1.2 Majority Voting; 3.1.3 Performance Weighting; 3.1.4 Distribution Summation; 3.1.5 Bayesian Combination; 3.1.6 Dempster-Shafer; 3.1.7 Vogging; 3.1.8 Naïve Bayes; 3.1.9 Entropy Weighting; 3.1.10 Density-based Weighting; 3.1.11 DEA Weighting Method; 3.1.12 Logarithmic Opinion Pool; 3.1.13 Order Statistics; 3.2 Selecting Classification; 3.2.1 Partitioning the Instance Space; 3.2.1.1 The K-Means Algorithm as a Decomposition Tool; 3.2.1.2 Determining the Number of Subsets; 3.2.1.3 The Basic K-Classifier Algorithm; 3.2.1.4 The Heterogeneity Detecting K-Classifier (HDK-Classifier); 3.2.1.5 Running-Time Complexity; 3.3 Mixture of Experts and Meta Learning; 3.3.1 Stacking; 3.3.2 Arbiter Trees; 3.3.3 Combiner Trees; 3.3.4 Grading; 3.3.5 Gating Network
4. Ensemble Diversity; 4.1 Overview; 4.2 Manipulating the Inducer; 4.2.1 Manipulation of the Inducer's Parameters; 4.2.2 Starting Point in Hypothesis Space; 4.2.3 Hypothesis Space Traversal; 4.3 Manipulating the Training Samples; 4.3.1 Resampling; 4.3.2 Creation; 4.3.3 Partitioning; 4.4 Manipulating the Target Attribute Representation; 4.4.1 Label Switching; 4.5 Partitioning the Search Space; 4.5.1 Divide and Conquer; 4.5.2 Feature Subset-based Ensemble Methods; 4.5.2.1 Random-based Strategy; 4.5.2.2 Reduct-based Strategy; 4.5.2.3 Collective-Performance-based Strategy; 4.5.2.4 Feature Set Partitioning |
Record no. | UNINA-9910455562503321 |
Find it at: Univ. Federico II |
|
Pattern classification using ensemble methods [electronic resource] / Lior Rokach |
Author | Rokach, Lior |
Publication/distribution | Singapore ; Hackensack, NJ : World Scientific, c2010 |
Physical description | 1 online resource (242 p.) |
Discipline | 621.389/28 |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Pattern recognition systems
Algorithms
Machine learning |
ISBN |
1-282-75785-7
9786612757853
981-4271-07-1 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Contents; Preface; 1. Introduction to Pattern Classification; 1.1 Pattern Classification; 1.2 Induction Algorithms; 1.3 Rule Induction; 1.4 Decision Trees; 1.5 Bayesian Methods; 1.5.1 Overview; 1.5.2 Naïve Bayes; 1.5.2.1 The Basic Naïve Bayes Classifier; 1.5.2.2 Naïve Bayes Induction for Numeric Attributes; 1.5.2.3 Correction to the Probability Estimation; 1.5.2.4 Laplace Correction; 1.5.2.5 No Match; 1.5.3 Other Bayesian Methods; 1.6 Other Induction Methods; 1.6.1 Neural Networks; 1.6.2 Genetic Algorithms; 1.6.3 Instance-based Learning; 1.6.4 Support Vector Machines
2. Introduction to Ensemble Learning; 2.1 Back to the Roots; 2.2 The Wisdom of Crowds; 2.3 The Bagging Algorithm; 2.4 The Boosting Algorithm; 2.5 The AdaBoost Algorithm; 2.6 No Free Lunch Theorem and Ensemble Learning; 2.7 Bias-Variance Decomposition and Ensemble Learning; 2.8 Occam's Razor and Ensemble Learning; 2.9 Classifier Dependency; 2.9.1 Dependent Methods; 2.9.1.1 Model-guided Instance Selection; 2.9.1.2 Basic Boosting Algorithms; 2.9.1.3 Advanced Boosting Algorithms; 2.9.1.4 Incremental Batch Learning; 2.9.2 Independent Methods; 2.9.2.1 Bagging; 2.9.2.2 Wagging; 2.9.2.3 Random Forest and Random Subspace Projection; 2.9.2.4 Non-Linear Boosting Projection (NLBP); 2.9.2.5 Cross-validated Committees; 2.9.2.6 Robust Boosting; 2.10 Ensemble Methods for Advanced Classification Tasks; 2.10.1 Cost-Sensitive Classification; 2.10.2 Ensemble for Learning Concept Drift; 2.10.3 Reject Driven Classification; 3. Ensemble Classification; 3.1 Fusion Methods; 3.1.1 Weighting Methods; 3.1.2 Majority Voting; 3.1.3 Performance Weighting; 3.1.4 Distribution Summation; 3.1.5 Bayesian Combination; 3.1.6 Dempster-Shafer; 3.1.7 Vogging; 3.1.8 Naïve Bayes; 3.1.9 Entropy Weighting; 3.1.10 Density-based Weighting; 3.1.11 DEA Weighting Method; 3.1.12 Logarithmic Opinion Pool; 3.1.13 Order Statistics; 3.2 Selecting Classification; 3.2.1 Partitioning the Instance Space; 3.2.1.1 The K-Means Algorithm as a Decomposition Tool; 3.2.1.2 Determining the Number of Subsets; 3.2.1.3 The Basic K-Classifier Algorithm; 3.2.1.4 The Heterogeneity Detecting K-Classifier (HDK-Classifier); 3.2.1.5 Running-Time Complexity; 3.3 Mixture of Experts and Meta Learning; 3.3.1 Stacking; 3.3.2 Arbiter Trees; 3.3.3 Combiner Trees; 3.3.4 Grading; 3.3.5 Gating Network
4. Ensemble Diversity; 4.1 Overview; 4.2 Manipulating the Inducer; 4.2.1 Manipulation of the Inducer's Parameters; 4.2.2 Starting Point in Hypothesis Space; 4.2.3 Hypothesis Space Traversal; 4.3 Manipulating the Training Samples; 4.3.1 Resampling; 4.3.2 Creation; 4.3.3 Partitioning; 4.4 Manipulating the Target Attribute Representation; 4.4.1 Label Switching; 4.5 Partitioning the Search Space; 4.5.1 Divide and Conquer; 4.5.2 Feature Subset-based Ensemble Methods; 4.5.2.1 Random-based Strategy; 4.5.2.2 Reduct-based Strategy; 4.5.2.3 Collective-Performance-based Strategy; 4.5.2.4 Feature Set Partitioning |
Record no. | UNINA-9910780894103321 |
Find it at: Univ. Federico II |
|
Pattern classification using ensemble methods / Lior Rokach |
Author | Rokach, Lior |
Edition | [1st ed.] |
Publication/distribution | Singapore ; Hackensack, NJ : World Scientific, c2010 |
Physical description | 1 online resource (242 p.) |
Discipline | 621.389/28 |
Series | Series in machine perception and artificial intelligence |
Topical subject |
Pattern recognition systems
Algorithms
Machine learning |
ISBN |
1-282-75785-7
9786612757853
981-4271-07-1 |
Format | Printed material |
Bibliographic level | Monograph |
Language of publication | eng |
Contents note |
Contents; Preface; 1. Introduction to Pattern Classification; 1.1 Pattern Classification; 1.2 Induction Algorithms; 1.3 Rule Induction; 1.4 Decision Trees; 1.5 Bayesian Methods; 1.5.1 Overview; 1.5.2 Naïve Bayes; 1.5.2.1 The Basic Naïve Bayes Classifier; 1.5.2.2 Naïve Bayes Induction for Numeric Attributes; 1.5.2.3 Correction to the Probability Estimation; 1.5.2.4 Laplace Correction; 1.5.2.5 No Match; 1.5.3 Other Bayesian Methods; 1.6 Other Induction Methods; 1.6.1 Neural Networks; 1.6.2 Genetic Algorithms; 1.6.3 Instance-based Learning; 1.6.4 Support Vector Machines
2. Introduction to Ensemble Learning; 2.1 Back to the Roots; 2.2 The Wisdom of Crowds; 2.3 The Bagging Algorithm; 2.4 The Boosting Algorithm; 2.5 The AdaBoost Algorithm; 2.6 No Free Lunch Theorem and Ensemble Learning; 2.7 Bias-Variance Decomposition and Ensemble Learning; 2.8 Occam's Razor and Ensemble Learning; 2.9 Classifier Dependency; 2.9.1 Dependent Methods; 2.9.1.1 Model-guided Instance Selection; 2.9.1.2 Basic Boosting Algorithms; 2.9.1.3 Advanced Boosting Algorithms; 2.9.1.4 Incremental Batch Learning; 2.9.2 Independent Methods; 2.9.2.1 Bagging; 2.9.2.2 Wagging; 2.9.2.3 Random Forest and Random Subspace Projection; 2.9.2.4 Non-Linear Boosting Projection (NLBP); 2.9.2.5 Cross-validated Committees; 2.9.2.6 Robust Boosting; 2.10 Ensemble Methods for Advanced Classification Tasks; 2.10.1 Cost-Sensitive Classification; 2.10.2 Ensemble for Learning Concept Drift; 2.10.3 Reject Driven Classification; 3. Ensemble Classification; 3.1 Fusion Methods; 3.1.1 Weighting Methods; 3.1.2 Majority Voting; 3.1.3 Performance Weighting; 3.1.4 Distribution Summation; 3.1.5 Bayesian Combination; 3.1.6 Dempster-Shafer; 3.1.7 Vogging; 3.1.8 Naïve Bayes; 3.1.9 Entropy Weighting; 3.1.10 Density-based Weighting; 3.1.11 DEA Weighting Method; 3.1.12 Logarithmic Opinion Pool; 3.1.13 Order Statistics; 3.2 Selecting Classification; 3.2.1 Partitioning the Instance Space; 3.2.1.1 The K-Means Algorithm as a Decomposition Tool; 3.2.1.2 Determining the Number of Subsets; 3.2.1.3 The Basic K-Classifier Algorithm; 3.2.1.4 The Heterogeneity Detecting K-Classifier (HDK-Classifier); 3.2.1.5 Running-Time Complexity; 3.3 Mixture of Experts and Meta Learning; 3.3.1 Stacking; 3.3.2 Arbiter Trees; 3.3.3 Combiner Trees; 3.3.4 Grading; 3.3.5 Gating Network
4. Ensemble Diversity; 4.1 Overview; 4.2 Manipulating the Inducer; 4.2.1 Manipulation of the Inducer's Parameters; 4.2.2 Starting Point in Hypothesis Space; 4.2.3 Hypothesis Space Traversal; 4.3 Manipulating the Training Samples; 4.3.1 Resampling; 4.3.2 Creation; 4.3.3 Partitioning; 4.4 Manipulating the Target Attribute Representation; 4.4.1 Label Switching; 4.5 Partitioning the Search Space; 4.5.1 Divide and Conquer; 4.5.2 Feature Subset-based Ensemble Methods; 4.5.2.1 Random-based Strategy; 4.5.2.2 Reduct-based Strategy; 4.5.2.3 Collective-Performance-based Strategy; 4.5.2.4 Feature Set Partitioning |
Record no. | UNINA-9910826382103321 |
Find it at: Univ. Federico II |
|