
Artificial neural networks and machine learning -- ICANN 2022 : 31st international conference on artificial neural networks, Bristol, UK, September 6-9, 2022, proceedings, Part IV / edited by Elias Pimenidis [and four others]




Title: Artificial neural networks and machine learning -- ICANN 2022 : 31st international conference on artificial neural networks, Bristol, UK, September 6-9, 2022, proceedings, Part IV / edited by Elias Pimenidis [and four others]
Publication: Cham, Switzerland : Springer, [2022]
©2022
Physical description: 1 online resource (817 pages)
Discipline: 006.3
Topical subject: Artificial intelligence
Machine learning
Secondary responsibility (person): Pimenidis, Elias
Bibliography note: Includes bibliographical references and index.
Contents note: Intro -- Preface -- Organization -- Contents - Part IV -- Analysing the Predictivity of Features to Characterise the Search Space -- 1 Introduction -- 2 Related Work -- 3 Landscape Features -- 4 Experimental Results -- 4.1 Feature Exploratory Analysis -- 4.2 Operator Classification -- 5 Conclusions and Future Work -- References -- Boosting Feature-Aware Network for Salient Object Detection -- 1 Introduction -- 2 Related Work -- 3 Proposed Model -- 3.1 Overall Framework -- 3.2 Edge Guidance Sub-network -- 3.3 Object Sub-network -- 3.4 Loss Function -- 4 Experimental Results -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Comparison with the State-of-the-Arts -- 4.4 Ablation Studies -- 5 Conclusion -- References -- Continual Learning Based on Knowledge Distillation and Representation Learning -- 1 Introduction -- 2 Related Works -- 2.1 Class Incremental Learning -- 2.2 Beta-VAE -- 2.3 Knowledge Distillation -- 3 Model and Methodology -- 3.1 KRCL Model -- 3.2 KRCL Loss Function -- 3.3 Model Parameters and Update Rules -- 4 Experimental Comparison -- 4.1 Benchmark Datasets -- 4.2 Baseline Methods -- 4.3 Network Architecture -- 4.4 Evaluation Metrics -- 4.5 Experimental Results and Analysis -- 5 Conclusions and Future Works -- References -- Deep Feature Learning for Medical Acoustics -- 1 Introduction -- 2 The Considered Frontends -- 2.1 Mel-filterbanks -- 2.2 LEAF -- 2.3 nnAudio -- 3 Models -- 3.1 EfficientNet -- 3.2 VGG -- 4 Datasets -- 4.1 Respiratory Dataset -- 4.2 Heartbeat Dataset -- 5 Experiments -- 5.1 Pre-processing -- 5.2 System Parameterization -- 6 Results -- 6.1 Test 1 - Respiratory -- 6.2 Test 2 - Heartbeat -- 6.3 Overall -- 7 Conclusion -- References -- Feature Fusion Distillation -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Feature Fusion Module -- 3.2 Asymmetric Switch Function.
3.3 Total Loss Function -- 4 Experiments -- 4.1 Image Classification (CIFAR-100) -- 4.2 Image Classification (ImageNet-1K) -- 4.3 Object Detection -- 4.4 Semantic Segmentation -- 5 Ablation Study -- 6 Conclusion -- A Margin Value -- References -- Feature Recalibration Network for Salient Object Detection -- 1 Introduction -- 2 Proposed Method -- 2.1 Consistency Recalibration Module -- 2.2 Multi-source Feature Recalibration Module -- 2.3 Loss Function -- 3 Experiments -- 3.1 Datasets and Evaluation Metrics -- 3.2 Implementation Details -- 3.3 Comparison with the State-of-the-Art -- 3.4 Ablation Studies -- 4 Conclusion -- References -- Feature Selection for Trustworthy Regression Using Higher Moments -- 1 Introduction -- 2 Trustworthy Regression -- 3 Feature Relevance -- 3.1 Feature Relevance for Classification -- 3.2 Feature Relevance for (MSE-)Regression -- 4 Feature Selection Methods -- 5 On the Relation of Relevance Notions -- 6 Application: Moment Feature Relevance -- 7 Empirical Evaluation -- 8 Conclusion -- References -- Fire Detection Based on Improved-YOLOv5s -- 1 Introduction -- 2 Method -- 2.1 Data Collection and Preprocessing -- 2.2 Network Model -- 2.3 Cosine Annealing + Warm-Up -- 2.4 Label Smoothing -- 2.5 Multi-scale -- 3 Result -- 3.1 Evaluation Index Calculation Formula -- 3.2 Results Presentation -- 4 Discussion -- 5 Conclusion -- References -- Heterogeneous Graph Neural Network for Multi-behavior Feature-Interaction Recommendation -- 1 Introduction -- 2 Methodology -- 2.1 Heterogeneous Bipartite Graph -- 2.2 User-Features Interaction -- 2.3 Graph Neural Network Aggregation Layer -- 2.4 Prediction Layer -- 2.5 Model Training -- 2.6 Complexity Analysis -- 3 Experiment -- 3.1 Dataset Description -- 3.2 Experimental Settings -- 3.3 Overall Performance -- 3.4 Model Analysis -- 4 Conclusion -- References.
JointFusionNet: Parallel Learning Human Structural Local and Global Joint Features for 3D Human Pose Estimation -- 1 Introduction -- 2 Related Works -- 2.1 3D Human Pose Estimation -- 2.2 Global-Local Features Fusion -- 3 Method -- 3.1 Inspiring Pattern of Human Pose -- 3.2 Global and Local Features Fusion -- 4 Experiments -- 4.1 Datasets, Evaluation Metrics and Details -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Cross-dataset Results on 3DPW -- 4.4 Visualization and Explanation -- 4.5 Ablation Study -- 5 Conclusion -- References -- Multi-scale Feature Extraction and Fusion for Online Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 2.1 Traditional Knowledge Distillation -- 2.2 Online Knowledge Distillation -- 2.3 Multi-scale Feature -- 3 Proposed Method -- 3.1 Problem Definition -- 3.2 MFEF Framework -- 3.3 Loss Function -- 4 Experiment -- 4.1 Experiment Settings -- 4.2 Experiment Results -- 5 Conclusion -- References -- Multi-scale Vertical Cross-layer Feature Aggregation and Attention Fusion Network for Object Detection -- 1 Introduction -- 2 Related Work -- 3 Architecture of Proposed Network -- 3.1 Multi-scale Vertical Cross-Layer Feature Aggregation Network -- 3.2 Attention Fusion Module -- 3.3 Anchor Optimization Strategy -- 4 Experiment -- 4.1 Implementation Details -- 4.2 Comparison with Other Methods -- 4.3 Ablation Study -- 5 Conclusion -- References -- Multi-spectral Dynamic Feature Encoding Network for Image Demoiréing -- 1 Introduction -- 2 Proposed Method -- 2.1 Overall Network Architecture -- 2.2 DCT and Channel Attention -- 2.3 Multi-spectral Channel Attention (MSCA) -- 2.4 Multi-spectral Dynamic Feature Encoding (MSDFE) -- 2.5 Loss Function -- 3 Experiments -- 3.1 Datasets and Training Details -- 3.2 Comparison with State-of-the-Arts -- 3.3 Visual Results -- 3.4 Model Parameters -- 4 Ablation Study.
4.1 Network Branches -- 4.2 Multi-spectral Dynamic Feature Encoding -- 5 Conclusion -- References -- Ranking Feature-Block Importance in Artificial Multiblock Neural Networks -- 1 Introduction -- 2 Block Importance Ranking Methods -- 2.1 Composite Strategy -- 2.2 Knock-In Strategy -- 2.3 Knock-Out Strategy -- 3 Experiments -- 3.1 Simulation Experiment -- 3.2 Real-World Experiment -- 4 Discussion -- 5 Conclusion -- References -- Robust Sparse Learning Based Sensor Array Optimization for Multi-feature Fusion Classification -- 1 Introduction -- 2 The Proposed Method -- 2.1 F,1 Norm Regularization Term -- 2.2 Sensor Selection Model -- 2.3 Model Optimization -- 2.4 Complexity Analysis -- 3 Experiment -- 3.1 Data Sets -- 3.2 Experiments Settings -- 3.3 Comparison of Classification Accuracy -- 4 Conclusion and Future Work -- References -- Stimulates Potential for Knowledge Distillation -- 1 Introduction -- 2 Related Literature -- 2.1 Knowledge Distillation -- 2.2 Normalization -- 3 Approach -- 3.1 Residual-Based Local Feature Normalization -- 3.2 Local Feature Normalized Extraction -- 3.3 How to Use Structure -- 4 Experiment -- 4.1 Experiments on CIFAR-10 -- 4.2 Experiments on CIFAR-100 -- 4.3 Ablation Experiments -- 5 Conclusion -- References -- Adaptive Compatibility Matrix for Superpixel-CRF -- 1 Introduction -- 2 Related Work -- 2.1 CRF and Superpixel-CRF -- 2.2 Compatibility Function -- 3 Preliminary -- 4 Adaptive Compatibility Matrix -- 5 Apply Adaptive Compatibility Matrix to Superpixel CRF -- 5.1 Binary Class -- 5.2 Multi-class -- 6 Experiments -- 7 Conclusions -- References -- BERT-Based Scientific Paper Quality Prediction -- 1 Introduction -- 2 BERT -- 3 Proposed Quality Prediction of Scientific Papers -- 3.1 Dataset of Scientific Papers -- 3.2 Quality Classification of Papers -- 3.3 BERT-Based Model of Quality Prediction of Scientific Papers.
4 Experimental Results -- 4.1 Training in the Pre-training Phase on Abstracts from S2ORC -- 4.2 Training in the Fine-Tuning Phase -- 4.3 The Test Accuracy of Prediction of the Trained Model -- 4.4 Detailed Analysis of the Prediction -- 5 Conclusions -- References -- Effective ML-Block and Weighted IoU Loss for Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Box Regression Loss -- 2.2 One-Stage Object Detectors -- 3 Approach -- 3.1 Weighted IoU Loss -- 3.2 MobileLight Block -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Ablation Studies -- 4.3 Evaluation on PASCAL VOC -- 4.4 Evaluation on COCOmini -- 5 Conclusion -- References -- FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing -- 1 Introduction -- 2 Related Work -- 3 Proposed Approach -- 4 Datasets and Detailed Model Implementations -- 4.1 Data Description -- 4.2 Performance Evaluation -- 4.3 Parameters for Switching -- 4.4 Model Description and Hyper-parameters -- 5 Results -- 6 Conclusion -- References -- Reject Options for Incremental Regression Scenarios -- 1 Introduction -- 2 Problem Setting -- 3 Rejection Models -- 3.1 Drift Rejection -- 3.2 Local Outlier Probabilities Rejector -- 3.3 Baseline Rejection -- 4 Experiments -- 4.1 Chaotic Time Series Data -- 4.2 Real World Data -- 4.3 RMSE-Reject Curves -- 4.4 Chaotic Data Experiment -- 4.5 Real World Data Experiment -- 5 Results -- 5.1 Chaotic Data Results -- 5.2 Real World Data Results -- 5.3 Tabular Evaluation -- 6 Conclusion -- References -- Stream-Based Active Learning with Verification Latency in Non-stationary Environments -- 1 Introduction -- 2 Related Work -- 3 Proposed Active Learning Framework -- 3.1 Proposed Utility Estimator: PRopagate Labels -- 3.2 Proposed Budget Strategy: Dynamic Budget Allocation -- 4 Experimental Setup -- 5 Results and Discussion.
6 Conclusion and Future Work.
Authorized title: Artificial neural networks and machine learning -- ICANN 2022
ISBN: 3-031-15937-3
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910592989803321
Available at: Univ. Federico II
Series: Lecture Notes in Computer Science