Record no.
UNINA9910878981203321

Author
Villmann, Thomas

Title
Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond : Proceedings of the 15th International Workshop, WSOM+ 2024, Mittweida, Germany, July 10-12, 2024

Publication/distribution
Cham : Springer, 2024
©2024

ISBN

Edition
[1st ed.]

Physical description
1 online resource (240 pages)

Series
Lecture Notes in Networks and Systems ; v.1087

Other authors (persons)
Kaden, Marika
Geweniger, Tina
Schleif, Frank-Michael

Language of publication

Format
Printed material

Bibliographic level
Monograph

Contents note
Intro -- Preface -- Organization -- Contents -- New Cloth Unto an Old Garment: SOM for Regeneration Learning -- 1 Introduction -- 2 Proposed Framework -- 3 Complex Data Regeneration with ERGSOM -- 3.1 Stages 1 and 3: Custom -CVAE Model -- 3.2 Stage 2: ERGSOM-Based Methods -- 4 Discussion -- 5 Conclusions -- References -- Unsupervised Learning-Based Data Collection Planning with Dubins Vehicle and Constrained Data Retrieving Time -- 1 Introduction -- 2 Problem Statement -- 3 Decoupled Sampling-Based Solution -- 4 Growing Self-organizing Array for the CEDTSP-TC -- 5 Empirical Evaluation -- 6 Conclusion -- References -- Hyperbox-GLVQ Based on Min-Max-Neurons -- 1 Introduction -- 2 Generalized Learning Vector Quantization Using Hyperbox-Prototypes -- 2.1 Min-Max Neurons -- 2.2 Standard GLVQ -- 2.3 Hyperbox-GLVQ -- 3 Experimental Results -- 4 Conclusion and Outlook -- References -- Sparse Clustering with K-Means - Which Penalties and for Which Data? -- 1 Introduction -- 2 Sparse K-Means -- 2.1 First Formulation of the Problem -- 2.2 Second Formulation of the Problem -- 2.3 Discussion and Hyperparameter Tuning -- 3 Numerical Illustrations -- 3.1 Synthetic Data - Scenario A |
-- 3.2 Synthetic Data - Scenario B -- 4 The vimpclust R-Package -- 5 Conclusion and Perspectives -- References -- Is t-SNE Becoming the New Self-organizing Map? Similarities and Differences -- Pursuing the Perfect Projection: A Projection Pursuit Framework for Deep Learning -- 1 Introduction -- 2 Projection Pursuit Framework -- 3 Related Work -- 4 Experiments -- 4.1 Principal Component Curves -- 4.2 Reduced Neural-Additive Models -- 4.3 DPP Preserves Group Structure -- 5 Discussion -- References -- Generalizing Self-organizing Maps: Large-Scale Training of GMMs and Applications in Data Science -- 1 Introduction -- 1.1 Related Work -- 1.2 Contributions -- 2 Theoretical Contributions. |
2.1 Relation Between GMM and SOM Training -- 2.2 Analysis of SOM Capabilities -- 2.3 Advantages of GMMs w.r.t SOMs -- 3 Experiments -- 3.1 Data Visualization with SOMs, Energy-Based SOMs and GMMs -- 3.2 Conditional and Unconditional Sampling from GMMs -- 3.3 Sampling with MFA Instances -- 3.4 Outlier Detection Experiments with MFA -- 3.5 MFA: Generative Classification -- 4 Discussion and Outlook -- References -- A Self-Organizing UMAP for Clustering -- 1 Introduction -- 2 Background -- 2.1 The Self-Organizing Map -- 2.2 UMAP -- 3 SOUMAP Methodology -- 4 Experiment Descriptions -- 4.1 Datasets, Parameterization, and Design -- 4.2 Quality Measures -- 5 Results -- 6 Conclusions and Future Work -- References -- Knowledge Integration in Vector Quantization Models and Corresponding Structured Covariance Estimation -- 1 Introduction -- 2 Vector Quantization by Self-Organizing Maps and Neural Gas -- 2.1 Adaptive Covariance Learning in SOM/NG -- 3 Integration of Structural Data Knowledge into Adaptive Covariance Learning -- 4 Exemplary Experiments Using Gene Expression Data -- 4.1 Data Setting and Knowledge Structure -- 4.2 Experiments -- 5 Conclusions -- References -- Exploring Data Distributions in Machine Learning Models with SOMs -- 1 Introduction -- 2 Materials -- 3 Methods -- 4 Results -- 4.1 Regression Models and Feature Relevance Analysis -- 4.2 Exploratory Analysis of the Training Dataset -- 4.3 Exploratory Analysis of the Validation Set -- 5 Discussion -- References -- Interpretable Machine Learning in Endocrinology: A Diagnostic Tool in Primary Aldosteronism -- 1 Introduction, Background and Motivation -- 2 The Data -- 3 Machine Learning Analysis -- 3.1 Generalized Matrix Relevance Learning Vector Quantization -- 3.2 Workflow -- 4 Results and Discussion -- 4.1 Healthy Controls vs. Primary Aldosteronism -- 4.2 Unilateral PA vs. Bilateral PA. |
4.3 KCNJ5 vs Non-KCNJ5 Samples -- 5 Conclusion and Outlook -- References -- The Beauty of Prototype Based Learning -- Setting Vector Quantizer Resolution via Density Estimation Theory -- 1 Introduction -- 2 VQ Theory -- 3 Existing Methods to Select M -- 4 Variable Kernel Density Estimation -- 4.1 L2-Error for vKDEs -- 5 Methodology -- 5.1 Motivation -- 5.2 An Equivalent Sample Size Analysis for M -- 5.3 Practical Considerations -- 6 Experiments and Discussion -- 7 Conclusions and Further Work -- References -- Practical Approaches to Approximate Dominant Eigenvalues in Large Matrices -- 1 Introduction -- 2 Eigenvalue Approximation Techniques -- 2.1 Exact Approaches -- 2.2 von Mises/Power-Iteration -- 2.3 Gershgorin Circles -- 3 Eigenvalue Estimation at Large Scale -- 3.1 Nyström Approximation -- 3.2 Nyström Approximation for Similarities -- 3.3 von Mises Iteration using Nyström -- 3.4 Estimating the Gershgorin Circle Bounds at Large Scale -- 4 Experiments -- 5 Conclusions -- References -- Enhancing LDA Method by the Use of Feature Maximization -- 1 Introduction -- 2 Corpus and Methods -- 3 Results -- 4 Conclusion -- References -- Explaining Neural Networks - Deep and Shallow -- References -- |
FairGLVQ: Fairness in Partition-Based Classification -- 1 Introduction -- 2 Problem Setup -- 2.1 Fair Machine Learning -- 2.2 Learning Vector Quantization Models -- 2.3 Related Work -- 3 Method -- 3.1 Fairness for General Partition-Based Models -- 3.2 Fair Hebbian Learning, Pseudo-classes, and FairGLVQ -- 3.3 Algorithm Design -- 4 Experiments -- 4.1 Synthetic Data -- 4.2 Real-World Benchmarks -- 5 Conclusion and Discussion -- References -- About Interpretable Learning Rules for Vector Quantizers - A Methodological Approach -- 1 Introduction -- 2 Oja's Learning Rule - A Input-Output Relation View. |
3 Competition Functions - An Objective View for Vector Quantization -- 4 A Learning Rule for Vector Quantization -- 4.1 The Generic Rule in the Objective View -- 4.2 Decision Making and Output Generation of VQ as an Input-Output Relation -- 4.3 The Generic Rule According to Oja's View -- 5 Interpreting and Explaining the Generic Learning Rule -- 5.1 Gain and Shift -- 5.2 The Decision Function -- 5.3 Pitfalls -- 6 Summary and Outlook -- References -- Precision and Recall Reject Curves -- 1 Introduction -- 2 Related Work -- 3 Prototype-Based Classification -- 4 Global Reject Option -- 5 Evaluation of Reject Options Using Reject Curves -- 6 Experiments -- 7 Conclusion -- References -- K Minimum Enclosing Balls for Outlier Detection -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Stochastic Gradient Descent -- 3.2 Alternating Optimization -- 3.3 Kernel Approach -- 3.4 Prediction -- 4 Experiments and Discussion -- 5 Conclusion -- References -- Probabilistic Models with Invariance -- 1 Introduction -- 1.1 Motivation: Complex Image Data for CL -- 1.2 Related Work -- 1.3 GMMs with Invariance (iGMMs) -- 1.4 Contributions -- 2 Methods -- 2.1 Data Augmentation in General -- 2.2 Transformations for Data Augmentation -- 2.3 Data and Preprocessing -- 2.4 GMMS with Invariance: iGMMs -- 2.5 GMM-IGMM Hierarchies -- 3 Experiments -- 3.1 Preliminary Experiment: Naive Application of iGMMs -- 3.2 Formation of Simple/complex Cells in GMM-IGMM Hierarchies -- 3.3 Explicitly Measuring Invariance -- 4 Discussion -- References -- Optimizing YOLOv5 for Green AI: A Study on Model Pruning and Lightweight Networks -- 1 Introduction -- 2 Related Work -- 2.1 LabelImg -- 2.2 YOLO -- 2.3 Pretrained Models and Fine-Tuning -- 3 Preliminary -- 3.1 Pruning -- 3.2 Lightweight Network -- 4 Experiment -- 4.1 Dataset -- 4.2 Training, Pruning and Fine-Tuning -- 4.3 Backbone Replacement. |
5 Discussion -- 6 Conclusions -- References -- Process Phase Monitoring in Industrial Manufacturing Processes with a Hybrid Unsupervised Learning Strategy -- 1 Introduction -- 2 Methodologies -- 2.1 Self-organizing Maps -- 2.2 Unified Distance Matrix -- 2.3 Instantaneous Topological Map -- 3 Hybrid Unsupervised Learning Strategy -- 3.1 Mechanism for SOM Map Size Adaptation -- 3.2 Automated Segmentation of the U-Matrix -- 4 Experiments -- 4.1 Laboratory Batch Process and Data Acquisition -- 4.2 Discovering Unknown Process Phases -- 5 Conclusions -- References -- Knowledge Management in SMEs: Applying Link Prediction for Assisted Decision Making -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 4 Results -- 5 Conclusion -- References -- Author Index. |