Record Nr.
UNISA996464382703316

Title
Advances in intelligent data analysis XIX : 19th International Symposium on Intelligent Data Analysis, IDA 2021, Porto, Portugal, April 26-28, 2021 : proceedings / Pedro Henriques Abreu [and three others] (editors)

Publication/distribution
Cham, Switzerland : Springer, [2021]
©2021

ISBN

Physical description
1 online resource (xvi, 454 pages)

Series
Lecture notes in computer science ; 12695

Discipline

Subjects
Pattern recognition systems
Mathematical statistics
Mathematical statistics - Data processing

Language of publication
English

Format
Printed material

Bibliographic level
Monograph

Bibliography note
Includes bibliographical references and index.

Contents note
Intro -- Preface -- Organization -- Contents -- Modeling with Neural Networks -- Hyperspherical Weight Uncertainty in Neural Networks -- 1 Introduction -- 2 Background: On Gaussian Distributions -- 3 Hypersphere Bayesian Neural Networks -- 4 Results -- 4.1 Non-linear Regression -- 4.2 Image Classification -- 4.3 Measuring Uncertainty -- 4.4 Active Learning Using Uncertainty Quantification -- 4.5 Variational Auto-encoders -- 5 Conclusion -- References -- Partially Monotonic Learning for Neural Networks -- 1 Introduction -- 2 Related Work -- 3 Monotonicity -- 4 Partially Monotonic Learning -- 4.1 Loss Function -- 5 Evaluation -- 5.1 Datasets -- 5.2 Methodology -- 5.3 Monotonic Features Extraction -- 5.4 Models -- 5.5 Monotonicity Analysis -- 6 Conclusion and Future Work -- References -- Multiple-manifold Generation with an Ensemble GAN and Learned Noise Prior -- 1 Introduction -- 2 Related Work -- 3 Model -- 4 Experiments -- 4.1 Disconnected Manifolds -- 4.2 CelebA+Photo -- 4.3 Complex-But-Connected Image Dataset -- 4.4 CIFAR -- 5 Discussion -- References -- Simple, Efficient and Convenient Decentralized Multi-task Learning for Neural Networks -- 1 Introduction -- 2 The Method -- 2.1 Intuition |
-- 2.2 Description -- 3 Theoretical Analysis -- 4 Experiments -- 4.1 Setting -- 4.2 Results -- 5 Related Work -- 6 Conclusion -- References -- Deep Hybrid Neural Networks with Improved Weighted Word Embeddings for Sentiment Analysis -- 1 Introduction -- 2 Related Work -- 2.1 Sentiment Analysis -- 2.2 Vector Representation -- 3 Proposed Model -- 3.1 Embedding Layer -- 3.2 Convolution Layer -- 3.3 Max-Pooling and Dropout Layer -- 3.4 LSTM Layer -- 3.5 Fully-Connected Layer -- 3.6 Output Layer -- 4 Experiments and Results -- 4.1 Dataset Description -- 4.2 Parameters -- 4.3 Evaluation Metrics -- 4.4 Results and Discussion -- 5 Conclusion -- References. |
Explaining Neural Networks by Decoding Layer Activations -- 1 Introduction -- 2 Method and Architecture -- 3 Theoretical Motivation of ClaDec -- 4 Assessing Interpretability and Fidelity -- 5 Evaluation -- 5.1 Qualitative Evaluation -- 5.2 Quantitative Evaluation -- 6 Related Work -- 7 Conclusions -- References -- Analogical Embedding for Analogy-Based Learning to Rank -- 1 Introduction -- 2 Analogy-Based Learning to Rank -- 3 Related Work -- 4 Analogical Embedding -- 4.1 Training the Embedding Network -- 4.2 Constructing Training Examples -- 5 Experiments -- 5.1 Data and Experimental Setup -- 5.2 Case Study 1: Analysing the Embedding Space -- 5.3 Case Study 2: Performance of able2rank -- 6 Conclusion -- References -- HORUS-NER: A Multimodal Named Entity Recognition Framework for Noisy Data -- 1 Introduction -- 2 Methodology and Features -- 3 Experimental Setup -- 4 Results and Discussion -- 5 Related Work -- 6 Conclusion -- References -- Modeling with Statistical Learning -- Incremental Search Space Construction for Machine Learning Pipeline Synthesis -- 1 Introduction -- 2 Preliminary and Related Work -- 3 DSWIZARD Methodology -- 3.1 Incremental Pipeline Structure Search -- 3.2 Hyperparameter Optimization -- 3.3 Meta-Learning -- 4 Experiments -- 4.1 Experiment Setup -- 4.2 Experiment Results -- 5 Conclusion -- References -- Adversarial Vulnerability of Active Transfer Learning -- 1 Introduction -- 2 Related Work -- 3 Attacking Active Transfer Learning -- 3.1 Threat Model -- 3.2 Feature Collision Attack -- 4 Implementation and Results -- 4.1 Active Transfer Learner Setup -- 4.2 Feature Collision Results -- 4.3 Impact on the Model -- 4.4 Hyper Parameters and Runtime -- 4.5 Adversarial Retraining Defense -- 5 Conclusion and Future Work -- References -- Revisiting Non-specific Syndromic Surveillance -- 1 Introduction. |
2 Non-specific Syndromic Surveillance -- 2.1 Problem Definition -- 2.2 Evaluation -- 3 Machine Learning Algorithms -- 3.1 Data Mining Surveillance System (DMSS) -- 3.2 What Is Strange About Recent Events? (WSARE) -- 3.3 Eigenevent -- 3.4 Anomaly Detection Algorithms -- 4 Basic Statistical Approaches -- 5 Experiments and Results -- 5.1 Evaluation Setup -- 5.2 Preliminary Evaluation -- 5.3 Results -- 6 Conclusion -- References -- Gradient Ascent for Best Response Regression -- 1 Introduction -- 2 Best Response Regression -- 2.1 Shortcomings of the Approach by Ben-Porat and Tennenholtz -- 3 Notation -- 4 Gradient Ascent Approach -- 5 Experiments -- 6 Conclusions -- References -- Intelligent Structural Damage Detection: A Federated Learning Approach -- 1 Introduction -- 2 Background -- 2.1 Autoencoder Deep Neural Network -- 3 Federated Learning Augmented with Tensor Data Fusion for SHM -- 3.1 Data Structure -- 3.2 Problem Formulation in Federated Learning -- 3.3 Tensor Data Fusion -- 3.4 The Client-Server Learning Phase -- 4 Related Work -- 5 Experimental Results -- 5.1 Data Collection -- 5.2 Results and Discussions -- 6 Conclusions -- References -- Composite Surrogate for Likelihood-Free Bayesian Optimisation in High-Dimensional Settings of Activity-Based Transportation Models -- 1 Introduction -- 2 |
Materials and Methods -- 2.1 Preday ABM -- 2.2 Bayesian Optimisation for Likelihood-Free Inference -- 2.3 Limitations of BOLFI for Calibrating Preday ABM -- 3 BOLFI with Composite Surrogate Model -- 4 Results -- 5 Summary and Conclusions -- References -- Active Selection of Classification Features -- 1 Introduction -- 2 Related Work -- 3 Utility-Based Active Selection of Classification Features -- 3.1 Unsupervised, Imputation Variance-Based Variant (U-ASCF) -- 3.2 Supervised, Probabilistic Selection Variant (S-ASCF) -- 4 Experimental Results. |
4.1 Comparative Results -- 4.2 Case Study -- 5 Conclusion -- References -- Feature Selection for Hierarchical Multi-label Classification -- 1 Introduction -- 2 Feature Selection -- 2.1 ReliefF -- 2.2 Information Gain -- 3 Related Work -- 4 Applying Feature Selection in HMC -- 4.1 Binary Relevance -- 4.2 Label Powerset -- 4.3 Our Proposal -- 5 Methodology -- 5.1 Datasets -- 5.2 Base Classifier -- 5.3 Evaluation Measures -- 6 Experiments and Discussion -- 7 Conclusion and Future Work -- References -- Bandit Algorithm for both Unknown Best Position and Best Item Display on Web Pages -- 1 Introduction -- 2 Related Work -- 3 Recommendation Setting -- 4 PB-MHB Algorithm -- 4.1 Sampling w.r.t. the Posterior Distribution -- 4.2 Overall Complexity -- 5 Experiments -- 5.1 Datasets -- 5.2 Competitors -- 5.3 Results -- 6 Conclusion -- References -- Performance Prediction for Hardware-Software Configurations: A Case Study for Video Games -- 1 Introduction -- 2 Learning Problem -- 3 Learning Model -- 3.1 Learning from Imprecise Observations -- 3.2 Enforcing Monotonicity Using a Penalty Term -- 3.3 Combined Loss -- 4 Case Study: Predicting FPS in Video Games -- 4.1 Dataset -- 4.2 Modeling Imprecise Observations -- 4.3 Experimental Design -- 4.4 Results -- 5 Related Work -- 6 Conclusion -- References -- AVATAR-Automated Feature Wrangling for Machine Learning -- 1 Introduction -- 2 Related Work -- 3 Data Wrangling for Machine Learning -- 3.1 Problem Statement -- 3.2 A Language for Feature Wrangling -- 3.3 Generating Arguments -- 4 Machine Learning for Feature Wrangling -- 4.1 Prune -- 4.2 Select -- 4.3 Evaluate -- 4.4 Wrangle -- 5 Evaluation -- 5.1 Wrangling New Features -- 5.2 Comparison with Humans -- 6 Conclusion and Future Work -- References -- Modeling Language and Graphs. |
Semantically Enriching Embeddings of Highly Inflectable Verbs for Improving Intent Detection in a Romanian Home Assistant Scenario -- 1 Introduction -- 2 Related Work -- 3 Home Assistant Scenario and Challenges -- 4 Proposed Solution -- 5 Empirical Evaluations -- 5.1 Experimental Setup -- 5.2 Results and Discussions -- 6 Conclusions, Limitations, and Further Work -- Appendix A Confusion matrices and histograms -- References -- BoneBert: A BERT-based Automated Information Extraction System of Radiology Reports for Bone Fracture Detection and Diagnosis -- 1 Introduction -- 2 Related Works -- 2.1 Rule-Based Approaches -- 2.2 Machine Learning Approaches -- 2.3 Hybrid Approaches -- 3 Methodology -- 3.1 Dataset -- 3.2 Information Extraction -- 3.3 Training and Evaluation -- 4 Experiments -- 4.1 Assertion Classification -- 4.2 Named Entity Recognition -- 5 Discussion -- 6 Conclusion -- References -- Linking the Dynamics of User Stance to the Structure of Online Discussions -- 1 Introduction -- 2 Related Work -- 3 The Dynamics of User Stance and Dataset -- 4 Forecast User Stance Dynamics -- 4.1 A Supervised Machine Learning Problem -- 4.2 Predictive Features -- 4.3 Learning Stance in Twitter -- 4.4 Predictive Setup -- 5 Results -- 6 Conclusion -- References -- Unsupervised Methods for the Study of Transformer Embeddings -- 1 Introduction -- 2 Related Work -- 3 Unsupervised Methods for Layer Analysis -- 3.1 Matrix and Vector Representation of Layers -- 3.2 |
Measuring the Correlations Between Layers -- 3.3 Clustering Layers -- 3.4 Interpreting Layers -- 4 Experiments -- 4.1 Datasets and Models Used -- 4.2 Investigating the Correlations Between Layers -- 4.3 Identifying Clusters of Layers -- 4.4 Qualitative Interpretation -- 4.5 Quantitative Interpretation Using Dimension Reduction -- 4.6 Results Validation Using a Clustering Performance Metric -- 5 Conclusion. |
References.