Pattern Recognition and Computer Vision [electronic resource] : 6th Chinese Conference, PRCV 2023, Xiamen, China, October 13–15, 2023, Proceedings, Part I / edited by Qingshan Liu, Hanzi Wang, Zhanyu Ma, Weishi Zheng, Hongbin Zha, Xilin Chen, Liang Wang, Rongrong Ji
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (XIV, 513 p. 159 illus., 152 illus. in color.)
Discipline 006
Series Lecture Notes in Computer Science
Topical subject Image processing - Digital techniques
Computer vision
Artificial intelligence
Application software
Computer networks
Computer systems
Machine learning
Computer Imaging, Vision, Pattern Recognition and Graphics
Artificial Intelligence
Computer and Information Systems Applications
Computer Communication Networks
Computer System Implementation
Machine Learning
ISBN 981-9984-29-7
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part I -- Action Recognition -- Learning Bottleneck Transformer for Event Image-Voxel Feature Fusion Based Classification -- 1 Introduction -- 2 Related Work -- 3 Our Proposed Approach -- 3.1 Overview -- 3.2 Network Architecture -- 4 Experiment -- 4.1 Dataset and Evaluation Metric -- 4.2 Implementation Details -- 4.3 Comparison with Other SOTA Algorithms -- 4.4 Ablation Study -- 4.5 Parameter Analysis -- 5 Conclusion -- References -- Multi-scale Dilated Attention Graph Convolutional Network for Skeleton-Based Action Recognition -- 1 Introduction -- 2 Related Works -- 2.1 Attention Mechanism -- 2.2 Lightweight Models -- 3 Method -- 3.1 Multi-Branch Fusion Module -- 3.2 Semantic Information -- 3.3 Graph Convolution Module -- 3.4 Time Convolution Module -- 4 Experiment -- 4.1 Dataset -- 4.2 Experimental Details -- 4.3 Ablation Experiment -- 4.4 Comparison with State-of-the-Art -- 5 Action Visualization -- 6 Conclusion -- References -- Auto-Learning-GCN: An Ingenious Framework for Skeleton-Based Action Recognition -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 GCN-Based Skeleton Processing -- 3.2 The AL-GCN Module -- 3.3 The Attention Correction and Jump Model -- 3.4 Multi-stream Gaussian Weight Selection Algorithm -- 4 Experimental Results and Analysis -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Compared with the State-of-the-Art Methods -- 4.4 Ablation Study -- 4.5 Visualization -- 5 Conclusion -- References -- Skeleton-Based Action Recognition with Combined Part-Wise Topology Graph Convolutional Networks -- 1 Introduction -- 2 Related Work -- 2.1 Skeleton-Based Action Recognition -- 2.2 Partial Graph Convolution in Skeleton-Based Action Recognition -- 3 Methods -- 3.1 Preliminaries -- 3.2 Part-Wise Spatial Modeling -- 3.3 Part-Wise Spatio-Temporal Modeling.
3.4 Model Architecture -- 4 Experiments -- 4.1 Datasets -- 4.2 Training Details -- 4.3 Ablation Studies -- 4.4 Comparison with the State-of-the-Art -- 5 Conclusion -- References -- Segmenting Key Clues to Induce Human-Object Interaction Detection -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Key Features Segmentation-Based Module -- 3.2 Key Features Learning Encoder -- 3.3 Spatial Relationships Learning Graph-Based Module -- 3.4 Training and Inference -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Implementation Results -- 4.3 Ablation Study -- 4.4 Qualitative Results -- 5 Conclusion -- References -- Lightweight Multispectral Skeleton and Multi-stream Graph Attention Networks for Enhanced Action Prediction with Multiple Modalities -- 1 Introduction -- 2 Related Work -- 2.1 Skeleton-Based Action Recognition -- 2.2 Dynamic Graph Neural Network -- 3 Methods -- 3.1 Spatial Embedding Component -- 3.2 Temporal Embedding Component -- 3.3 Action Prediction -- 4 Experiments and Discussion -- 4.1 NTU RGB+D Dataset -- 4.2 Experiments Setting -- 4.3 Evaluation of Human Action Recognition -- 4.4 Ablation Study -- 4.5 Visualization -- 5 Conclusion -- References -- Spatio-Temporal Self-supervision for Few-Shot Action Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Few-Shot Action Recognition -- 2.2 Self-supervised Learning (SSL)-Based Few-Shot Learning -- 3 Method -- 3.1 Problem Definition -- 3.2 Spatio-Temporal Self-supervision Framework -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Studies -- 5 Conclusions -- References -- A Fuzzy Error Based Fine-Tune Method for Spatio-Temporal Recognition Model -- 1 Introduction -- 2 Related Work -- 2.1 Spatio-Temporal (3D) Convolution Networks -- 2.2 Clips Selection and Features Aggregation -- 3 Proposed Method -- 3.1 Problem Definition.
3.2 Fuzzy Target -- 3.3 Fine Tune Loss Function -- 4 Experiment -- 4.1 Datasets and Implementation Details -- 4.2 Performance Comparison -- 4.3 Discussion -- 5 Conclusion -- References -- Temporal-Channel Topology Enhanced Network for Skeleton-Based Action Recognition -- 1 Introduction -- 2 Proposed Method -- 2.1 Network Architecture -- 2.2 Temporal-Channel Focus Module -- 2.3 Dynamic Channel Topology Attention Module -- 3 Experiments -- 3.1 Datasets and Implementation Details -- 3.2 Ablation Study -- 3.3 Comparison with the State-of-the-Art -- 4 Conclusion -- References -- HFGCN-Based Action Recognition System for Figure Skating -- 1 Introduction -- 2 Figure Skating Hierarchical Dataset -- 3 Figure Skating Action Recognition System -- 3.1 Data Preprocessing -- 3.2 Multi-stream Generation -- 3.3 Hierarchical Fine-Grained Graph Convolutional Neural Network (HFGCN) -- 3.4 Decision Fusion Module -- 4 Experiments and Results -- 4.1 Experimental Environment -- 4.2 Experiment Results and Analysis -- 5 Conclusion -- References -- Multi-modal Information Processing -- Image Priors Assisted Pre-training for Point Cloud Shape Analysis -- 1 Introduction -- 2 Proposed Method -- 2.1 Problem Setting -- 2.2 Overview Framework -- 2.3 Multi-task Cross-Modal SSL -- 2.4 Objective Function -- 3 Experiments and Analysis -- 3.1 Pre-training Setup -- 3.2 Downstream Tasks -- 3.3 Ablation Study -- 4 Conclusion -- References -- AMM-GAN: Attribute-Matching Memory for Person Text-to-Image Generation -- 1 Introduction -- 2 Related Work -- 2.1 Text-to-image Generative Adversarial Network -- 2.2 GANs for Person Image -- 3 Method -- 3.1 Feature Extraction -- 3.2 Multi-scale Feature Fusion Generator -- 3.3 Real-Result-Driven Discriminator -- 3.4 Objective Functions -- 4 Experiment -- 4.1 Dataset -- 4.2 Implementation -- 4.3 Evaluation Metrics -- 4.4 Quantitative Evaluation.
4.5 Qualitative Evaluation -- 4.6 Ablation Study -- 5 Conclusion -- References -- RecFormer: Recurrent Multi-modal Transformer with History-Aware Contrastive Learning for Visual Dialog -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminaries -- 3.2 Model Architecture -- 3.3 Training Objectives -- 4 Experimental Setup -- 4.1 Dataset -- 4.2 Baselines -- 4.3 Evaluation Metric -- 4.4 Implementation Details -- 5 Results and Analysis -- 5.1 Main Results -- 5.2 Ablation Study -- 5.3 Attention Visualization -- 6 Conclusion -- References -- KV Inversion: KV Embeddings Learning for Text-Conditioned Real Image Action Editing -- 1 Introduction -- 2 Background -- 2.1 Text-to-Image Generation and Editing -- 2.2 Stable Diffusion Model -- 3 KV Inversion: Training-Free KV Embeddings Learning -- 3.1 Task Setting and Reason of Existing Problem -- 3.2 KV Inversion Overview -- 4 Experiments -- 4.1 Comparisons with Other Concurrent Works -- 4.2 Ablation Study -- 5 Limitations and Conclusion -- References -- Enhancing Text-Image Person Retrieval Through Nuances Varied Sample -- 1 Introduction -- 2 Relataed Work -- 2.1 Text-Image Retrieval -- 2.2 Text-Image Person Retrieval -- 3 Method -- 3.1 Feature Extraction and Alignment -- 3.2 Nuanced Variation Module -- 3.3 Image Text Matching Loss -- 3.4 Hard Negative Metric Loss -- 4 Experiment -- 4.1 Datasets and Evaluation Setting -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Study -- 5 Conclusion -- References -- Unsupervised Prototype Adapter for Vision-Language Models -- 1 Introduction -- 2 Related Work -- 2.1 Large-Scale Pre-trained Vision-Language Models -- 2.2 Adaptation Methods for Vision-Language Models -- 2.3 Self-training with Pseudo-Labeling -- 3 Method -- 3.1 Background -- 3.2 Unsupervised Prototype Adapter -- 4 Experiments -- 4.1 Image Recognition -- 4.2 Domain Generalization.
4.3 Ablation Study -- 5 Conclusion -- References -- Multimodal Causal Relations Enhanced CLIP for Image-to-Text Retrieval -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 Overview -- 3.2 MCD: Multimodal Causal Discovery -- 3.3 MMC-CLIP -- 3.4 Image-Text Alignment -- 4 Experiments -- 4.1 Datasets and Settings -- 4.2 Results on MSCOCO -- 4.3 Results on Flickr30K -- 4.4 Ablation Studies -- 5 Conclusion -- References -- Exploring Cross-Modal Inconsistency in Entities and Emotions for Multimodal Fake News Detection -- 1 Introduction -- 2 Related Works -- 2.1 Single-Modality Fake News Detection -- 2.2 Multimodal Fake News Detection -- 3 Methodology -- 3.1 Feature Extraction -- 3.2 Cross-Modal Contrastive Learning -- 3.3 Entity Consistency Learning -- 3.4 Emotional Consistency Learning -- 3.5 Multimodal Fake News Detector -- 4 Experiments -- 4.1 Experimental Configurations -- 4.2 Overall Performance -- 4.3 Ablation Studies -- 5 Conclusion -- References -- Deep Consistency Preserving Network for Unsupervised Cross-Modal Hashing -- 1 Introduction -- 2 The Proposed Method -- 2.1 Problem Definition -- 2.2 Deep Feature Extraction and Hashing Learning -- 2.3 Features Fusion and Similarity Matrix Construction -- 2.4 Hash Code Fusion and Reconstruction -- 2.5 Objective Function -- 3 Experiments -- 3.1 Datasets and Baselines -- 3.2 Implementation Details -- 3.3 Results and Analysis -- 4 Conclusion -- References -- Learning Adapters for Text-Guided Portrait Stylization with Pretrained Diffusion Models -- 1 Introduction -- 2 Related Work -- 2.1 Text-to-Image Diffusion Models -- 2.2 Control of Pretrained Diffusion Model -- 2.3 Text-Guided Portrait Stylizing -- 3 Method -- 3.1 Background and Preliminaries -- 3.2 Overview of Our Method -- 3.3 Portrait Stylization with Text Prompt -- 3.4 Convolution Adapter -- 3.5 Adapter Optimization -- 4 Experiments.
4.1 Implementation Settings.
Record Nr. UNISA-996587868103316
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here
Pattern Recognition and Computer Vision [electronic resource] : 6th Chinese Conference, PRCV 2023, Xiamen, China, October 13–15, 2023, Proceedings, Part VIII / edited by Qingshan Liu, Hanzi Wang, Zhanyu Ma, Weishi Zheng, Hongbin Zha, Xilin Chen, Liang Wang, Rongrong Ji
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (XIV, 513 p. 157 illus., 152 illus. in color.)
Discipline 006
Series Lecture Notes in Computer Science
Topical subject Image processing - Digital techniques
Computer vision
Artificial intelligence
Application software
Computer networks
Computer systems
Machine learning
Computer Imaging, Vision, Pattern Recognition and Graphics
Artificial Intelligence
Computer and Information Systems Applications
Computer Communication Networks
Computer System Implementation
Machine Learning
ISBN 981-9985-43-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part VIII -- Neural Network and Deep Learning I -- A Quantum-Based Attention Mechanism in Scene Text Detection -- 1 Introduction -- 2 Related Work -- 2.1 Attention Mechanism -- 2.2 Revisit Quantum-State-based Mapping -- 3 Approach -- 3.1 QSM-Based Channel Attention (QCA) Module and QSM-Based Spatial Attention (QSA) Module -- 3.2 Quantum-Based Convolutional Attention Module (QCAM) -- 3.3 Adaptive Channel Information Transfer Module (ACTM) -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Performance Comparison -- 4.3 Ablation Study -- 5 Discussion and Conclusion -- References -- NCMatch: Semi-supervised Learning with Noisy Labels via Noisy Sample Filter and Contrastive Learning -- 1 Introduction -- 2 Related Work -- 2.1 Semi-supervised Learning -- 2.2 Self-supervised Contrastive Learning -- 2.3 Learning with Noisy Labels -- 3 Method -- 3.1 Preliminaries -- 3.2 Overall Framework -- 3.3 Noisy Sample Filter (NSF) -- 3.4 Semi-supervised Contrastive Learning (SSCL) -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental for SSL -- 4.3 Experimental for SSLNL -- 4.4 Ablation Study -- 5 Conclusion -- References -- Data-Free Low-Bit Quantization via Dynamic Multi-teacher Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminaries -- 3.2 More Insight on 8-Bit Quantized Models -- 3.3 Dynamic Multi-teacher Knowledge Distillation -- 4 Experiments -- 4.1 Experimental Setups -- 4.2 Comparison with Previous Data-Free Quantization Methods -- 4.3 Ablation Studies -- 5 Conclusion -- References -- LeViT-UNet: Make Faster Encoders with Transformer for Medical Image Segmentation -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 Architecture of LeViT-UNet -- 3.2 LeViT as Encoder -- 3.3 CNNs as Decoder -- 4 Experiments and Results -- 4.1 Dataset -- 4.2 Implementation Details.
4.3 Experiment Results on Synapse Dataset -- 4.4 Experiment Results on ACDC Dataset -- 5 Conclusion -- References -- DUFormer: Solving Power Line Detection Task in Aerial Images Using Semantic Segmentation -- 1 Introduction -- 2 Related Work -- 2.1 Vision Transformer -- 2.2 Semantic Segmentation -- 3 Proposed Architecture -- 3.1 Overview -- 3.2 Double U Block (DUB) -- 3.3 Power Line Aware Block (PLAB) -- 3.4 BiscSE Block -- 3.5 Loss Function -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Comparative Experiments -- 4.3 Ablation Experiments -- 5 Conclusion -- References -- Space-Transform Margin Loss with Mixup for Long-Tailed Visual Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Mixup and Its Space Transformation -- 2.2 Long-Tailed Learning with Mixup -- 2.3 Re-balanced Loss Function Modification Methods -- 3 Method -- 3.1 Space Transformation in Mixup -- 3.2 Space-Transform Margin Loss Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementations Details -- 4.3 Main Results -- 4.4 Feature Visualization and Analysis of STM Loss -- 4.5 Ablation Study -- 5 Conclusion -- References -- A Multi-perspective Squeeze Excitation Classifier Based on Vision Transformer for Few Shot Image Classification -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Problem Definition -- 3.2 Meta-Training Phase -- 3.3 Meta-test Phase -- 4 Experimental Results -- 4.1 Datasets and Training Details -- 4.2 Evaluation Results -- 4.3 Ablation Study -- 5 Conclusion -- References -- ITCNN: Incremental Learning Network Based on ITDA and Tree Hierarchical CNN -- 1 Introduction -- 2 Proposed Network -- 2.1 Network Structure -- 2.2 ITDA -- 2.3 Branch Route -- 2.4 Training Strategies -- 2.5 Optimization Strategies -- 3 Experiments and Results -- 3.1 Experiment on Classification -- 3.2 Experiment on CIL -- 4 Conclusion -- References.
Periodic-Aware Network for Fine-Grained Action Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Skeleton-Based Action Recognition -- 2.2 Periodicity Estimation of Videos -- 2.3 Squeeze and Excitation Module -- 3 Method -- 3.1 3D-CNN Backbone -- 3.2 Periodicity Feature Extraction Module -- 3.3 Periodicity Fusion Module -- 4 Experiment -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Ablation Study -- 4.4 Comparison with State-of-the-Art Methods -- 5 Conclusion -- References -- Learning Domain-Invariant Representations from Text for Domain Generalization -- 1 Introduction -- 2 Related Work -- 2.1 Domain Generalization -- 2.2 CLIP in Domain Generalization -- 3 Method -- 3.1 Problem Formulation -- 3.2 Text Regularization -- 3.3 CLIP Representations -- 4 Experiments and Results -- 4.1 Datasets and Experimental Settings -- 4.2 Comparison with Existing DG Methods -- 4.3 Ablation Study -- 5 Conclusions -- References -- TSTD:A Cross-modal Two Stages Network with New Trans-decoder for Point Cloud Semantic Segmentation -- 1 Introduction -- 2 Related Works -- 2.1 Image Transformers -- 2.2 Point Cloud Transformer -- 2.3 Joint 2D-3D Network -- 3 Method -- 3.1 Overall Architecture -- 3.2 2D-3D Backprojection -- 3.3 Trans-Decoder -- 4 Experiments -- 4.1 Dataset and Metric -- 4.2 Performance Comparison -- 4.3 Ablation Experiment -- 5 Conclusion -- References -- NeuralMAE: Data-Efficient Neural Architecture Predictor with Masked Autoencoder -- 1 Introduction -- 2 Related Work -- 2.1 Neural Architecture Performance Predictors -- 2.2 Generative Self-supervised Learning -- 3 Method -- 3.1 Overall Framework -- 3.2 Pre-training -- 3.3 Fine-Tuning -- 3.4 Multi-head Attention-Masked Transformer -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Experiments on NAS-Bench-101 -- 4.3 Experiments on NAS-Bench-201 -- 4.4 Experiments on NAS-Bench-301.
4.5 Ablation Study -- 5 Conclusion -- References -- Co-regularized Facial Age Estimation with Graph-Causal Learning -- 1 Introduction -- 2 Method -- 2.1 Problem Formulation -- 2.2 Ordinal Decision Mapping -- 2.3 Bilateral Counterfactual Pooling -- 3 Experiments -- 3.1 Datasets and Evaluation Settings -- 3.2 Comparison with State-of-the-Art Methods -- 3.3 Ablation Study -- 3.4 Performance Under Out-of-Distribution Settings -- 3.5 Qualitative Results -- 4 Conclusion -- References -- Online Distillation and Preferences Fusion for Graph Convolutional Network-Based Sequential Recommendation -- 1 Introduction -- 2 Method -- 2.1 Graph Construction -- 2.2 Collaborative Learning -- 2.3 Feature Fusion -- 3 Experiment -- 3.1 Experimental Setup -- 3.2 Experimental Results -- 3.3 Ablation Studies -- 4 Conclusion -- References -- Grassmann Graph Embedding for Few-Shot Class Incremental Learning -- 1 Introduction -- 2 Related Work -- 3 The Proposed Method -- 3.1 Problem Definition -- 3.2 Overview -- 3.3 Grassmann Manifold Embedding -- 3.4 Graph Structure Preserving on Grassmann Manifold -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Comparison with State-of-the-Art Methods -- 5 Conclusion -- References -- Global Variational Convolution Network for Semi-supervised Node Classification on Large-Scale Graphs -- 1 Introduction -- 2 Related Work -- 3 Proposed Methods -- 3.1 Positive Pointwise Mutual Information on Large-Scale Graphs -- 3.2 Global Variational Aggregation -- 3.3 Variational Convolution Kernels -- 4 Experiments -- 4.1 Comparison Experiments -- 4.2 Ablation Study -- 4.3 Runtime Study -- 5 Conclusion -- References -- Frequency Domain Distillation for Data-Free Quantization of Vision Transformer -- 1 Introduction -- 2 Related Work -- 2.1 Vision Transformer (ViT) -- 2.2 Network Quantization -- 3 Preliminaries -- 3.1 Quantizer.
3.2 Fast Fourier Transform (FFT) and Frequency Domain -- 4 Method -- 4.1 Our Insights -- 4.2 Frequency Domain Distillation -- 4.3 The Overall Pipeline -- 5 Experimentation -- 5.1 Comparison Experiments -- 5.2 Ablation Study -- 6 Conclusions -- References -- An ANN-Guided Approach to Task-Free Continual Learning with Spiking Neural Networks -- 1 Introduction -- 2 Related Works -- 2.1 Image Generation in SNNs -- 2.2 Continual Learning -- 3 Preliminary -- 3.1 The Referee Module: WGAN -- 3.2 The Player Module: FSVAE -- 4 Methodology -- 4.1 Problem Setting -- 4.2 Overview of Our Model -- 4.3 Adversarial Similarity Expansion -- 4.4 Precise Pruning -- 5 Experimental Results -- 5.1 Dataset Setup -- 5.2 Classification Tasks Under TFCL -- 5.3 The Impact of Different Thresholds and Buffer Sizes -- 5.4 ANN and SNN Under TFCL -- 6 Conclusion -- References -- Multi-adversarial Adaptive Transformers for Joint Multi-agent Trajectory Prediction -- 1 Introduction -- 2 Related Works -- 2.1 Multi-agent Trajectory Prediction -- 2.2 Domain Adaptation -- 3 Proposed Method -- 3.1 Encoder: Processing Multi-aspect Data -- 3.2 Decoder: Generating Multi-modal Trajectories -- 3.3 Adaptation: Learning Doamin Invaint Feature -- 3.4 Loss Function -- 4 Experiments -- 4.1 Dataset -- 4.2 Problem Setting -- 4.3 Evaluation Metrics -- 4.4 Implementation Details -- 4.5 Quantitative Analysis -- 4.6 Ablation Study -- 5 Conclusion -- References -- Enhancing Open-Set Object Detection via Uncertainty-Boxes Identification -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Preliminary -- 3.2 Baseline Setup -- 3.3 Pseudo Proposal Advisor -- 3.4 Uncertainty-Box Detection -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Comparison with Other Methods -- 4.3 Ablation Studies -- 4.4 Visualization and Qualitative Analysis -- 5 Conclusions -- References.
Interventional Supervised Learning for Person Re-identification.
Record Nr. UNISA-996587868003316
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here
Perinatal, Preterm and Paediatric Image Analysis [electronic resource] : 8th International Workshop, PIPPI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 12, 2023, Proceedings / edited by Daphna Link-Sourani, Esra Abaci Turk, Christopher Macgowan, Jana Hutter, Andrew Melbourne, Roxane Licandro
Edition [1st ed. 2023.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Physical description 1 online resource (128 pages)
Discipline 618.9200754
Series Lecture Notes in Computer Science
Topical subject Image processing - Digital techniques
Computer vision
Artificial intelligence
Application software
Computer Imaging, Vision, Pattern Recognition and Graphics
Artificial Intelligence
Computer and Information Systems Applications
ISBN 3-031-45544-4
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents -- Fetal Brain Image Analysis -- FetMRQC: Automated Quality Control for Fetal Brain MRI -- 1 Introduction -- 2 Methodology -- 2.1 Data -- 2.2 Manual QA of Fetal MRI Stacks -- 2.3 IQMs Extraction and Learning -- 3 Results and Discussion -- 4 Conclusion -- References -- A Deep Learning Approach for Segmenting the Subplate and Proliferative Zones in Fetal Brain MRI -- 1 Introduction -- 2 Methods -- 2.1 Cohort, Datasets and Preprocessing -- 2.2 Neuroanatomical Parcellation of Transient Regions -- 2.3 Automated Segmentation of Transient Regions -- 2.4 Qualitative and Quantitative Analyses of Transient Regions -- 3 Results and Discussion -- 3.1 Qualitative Analysis of Transient Regions in Atlas Space -- 3.2 Quantitative Analyses of Transient Regions in Subject Space -- 3.3 Quantitative Comparison of Transient Volumes in Fetuses with Ventriculomegaly and Controls -- 4 Conclusion and Future Work -- References -- Combined Quantitative T2* Map and Structural T2-Weighted Tissue-Specific Analysis for Fetal Brain MRI: Pilot Automated Pipeline -- 1 Introduction -- 2 Methods -- 2.1 Datasets, Acquisition and Pre-processing -- 2.2 Automated 3D T2* Fetal Brain Reconstruction in T2w Space -- 2.3 Automated 3D T2* Fetal Brain Tissue Segmentation -- 2.4 Analysis of Brain Development in Combined T2w+T2* Datasets -- 2.5 Implementation Details -- 3 Experiments and Results -- 3.1 Automated 3D T2* Fetal Brain Reconstruction in T2w Space -- 3.2 Automated 3D T2* Fetal Brain Tissue Segmentation -- 3.3 Analysis of Brain Development in T2w+T2* Datasets -- 4 Discussion and Conclusions -- References -- Quantitative T2 Relaxometry in Fetal Brain: Validation Using Modified FaBiaN Fetal Brain MRI Simulator -- 1 Introduction -- 2 Methodology -- 2.1 Quantitative T2 Measurement Framework for Fetal MRI -- 2.2 Overview of the FaBiAN Phantom.
2.3 The Fetal Brain Model -- 2.4 Modelling Fetal Motion -- 2.5 Modelling Signals with Slice Profiles -- 2.6 Sampling K-Space -- 2.7 Modelling the Signals for the Dictionary -- 2.8 Simulated Experiments -- 2.9 Fetal Brain Measurements -- 3 Results -- 3.1 Simulated Fetal MRI -- 3.2 Reconstruction of T2 Maps from Simulated Fetal Data -- 3.3 Fetal Measurements -- 4 Discussion -- 5 Conclusion -- References -- Fetal Cardiac Image Analysis -- Towards Automatic Risk Prediction of Coarctation of the Aorta from Fetal CMR Using Atlas-Based Segmentation and Statistical Shape Modelling -- 1 Introduction -- 1.1 Contributions -- 2 Methods -- 2.1 Dataset Description -- 2.2 Automated Segmentation -- 2.3 Statistical Shape Analysis -- 3 Results -- 3.1 Segmentation -- 3.2 Statistical Shape Analysis -- 4 Discussion -- 5 Conclusion -- References -- The Challenge of Fetal Cardiac MRI Reconstruction Using Deep Learning -- 1 Introduction -- 2 Methods -- 3 Results -- 4 Discussion -- 5 Conclusion -- References -- Placental and Cervical Image Analysis -- Consistency Regularization Improves Placenta Segmentation in Fetal EPI MRI Time Series -- 1 Introduction -- 2 Methods -- 2.1 Consistency Regularization Loss -- 2.2 Siamese Neural Network -- 2.3 Implementation Details -- 3 Experiments -- 3.1 Dataset -- 3.2 Baseline Methods -- 3.3 Evaluation -- 3.4 Results -- 4 Limitations and Future Work -- 5 Conclusions -- References -- Visualization and Quantification of Placental Vasculature Using MRI -- 1 Introduction -- 1.1 MRI Acquisitions and Differing Contrasts -- 2 Methods -- 2.1 Data Acquisition -- 2.2 Image Quantification Metrics -- 2.3 Quantification of Vessel Segmentation -- 2.4 Statistics -- 2.5 Segmentation Performance Evaluation -- 3 Results -- 3.1 Validation from Micro-CT -- 4 Discussion -- 5 Conclusion -- References.
The Comparison Analysis of the Cervical Features Between Second-and Third-Trimester Pregnancy in Ultrasound Images Using eXplainable AI -- 1 Introduction -- 2 Method -- 2.1 eXplainable Artificial Intelligence(XAI) - CAM Based Methods -- 2.2 Deep Neural Network Model for Classification Task -- 2.3 Dataset and Preprocessing -- 2.4 Experimental Design -- 3 Results -- 3.1 Comparison of Heatmap Between Second- And Third-Trimester -- 3.2 Difference in Heatmap with and Without Fetal Head -- 3.3 Cross Validation Using Another Institution -- 4 Discussion and Conclusion -- References -- Infant Video Analysis -- Automatic Infant Respiration Estimation from Video: A Deep Flow-Based Algorithm and a Novel Public Benchmark -- 1 Introduction -- 2 Related Work -- 3 AIR-125: An Annotated Infant Respiration Dataset -- 4 Methodology -- 4.1 AirFlowNet Architecture -- 4.2 Spectral Bandpass Loss -- 5 Evaluation and Results -- 5.1 Experimental Setup -- 5.2 Results and Analysis -- 6 Conclusion -- References -- Author Index.
Record Nr. UNISA-996558468803316
Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here
Perinatal, Preterm and Paediatric Image Analysis [electronic resource] : 8th International Workshop, PIPPI 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 12, 2023, Proceedings / edited by Daphna Link-Sourani, Esra Abaci Turk, Christopher Macgowan, Jana Hutter, Andrew Melbourne, Roxane Licandro
Edition [1st ed. 2023.]
Publication/distribution Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Physical description 1 online resource (128 pages)
Discipline 618.9200754
Series Lecture Notes in Computer Science
Topical subject Image processing - Digital techniques
Computer vision
Artificial intelligence
Application software
Computer Imaging, Vision, Pattern Recognition and Graphics
Artificial Intelligence
Computer and Information Systems Applications
ISBN 3-031-45544-4
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents -- Fetal Brain Image Analysis -- FetMRQC: Automated Quality Control for Fetal Brain MRI -- 1 Introduction -- 2 Methodology -- 2.1 Data -- 2.2 Manual QA of Fetal MRI Stacks -- 2.3 IQMs Extraction and Learning -- 3 Results and Discussion -- 4 Conclusion -- References -- A Deep Learning Approach for Segmenting the Subplate and Proliferative Zones in Fetal Brain MRI -- 1 Introduction -- 2 Methods -- 2.1 Cohort, Datasets and Preprocessing -- 2.2 Neuroanatomical Parcellation of Transient Regions -- 2.3 Automated Segmentation of Transient Regions -- 2.4 Qualitative and Quantitative Analyses of Transient Regions -- 3 Results and Discussion -- 3.1 Qualitative Analysis of Transient Regions in Atlas Space -- 3.2 Quantitative Analyses of Transient Regions in Subject Space -- 3.3 Quantitative Comparison of Transient Volumes in Fetuses with Ventriculomegaly and Controls -- 4 Conclusion and Future Work -- References -- Combined Quantitative T2* Map and Structural T2-Weighted Tissue-Specific Analysis for Fetal Brain MRI: Pilot Automated Pipeline -- 1 Introduction -- 2 Methods -- 2.1 Datasets, Acquisition and Pre-processing -- 2.2 Automated 3D T2* Fetal Brain Reconstruction in T2w Space -- 2.3 Automated 3D T2* Fetal Brain Tissue Segmentation -- 2.4 Analysis of Brain Development in Combined T2w+T2* Datasets -- 2.5 Implementation Details -- 3 Experiments and Results -- 3.1 Automated 3D T2* Fetal Brain Reconstruction in T2w Space -- 3.2 Automated 3D T2* Fetal Brain Tissue Segmentation -- 3.3 Analysis of Brain Development in T2w+T2* Datasets -- 4 Discussion and Conclusions -- References -- Quantitative T2 Relaxometry in Fetal Brain: Validation Using Modified FaBiaN Fetal Brain MRI Simulator -- 1 Introduction -- 2 Methodology -- 2.1 Quantitative T2 Measurement Framework for Fetal MRI -- 2.2 Overview of the FaBiAN Phantom.
2.3 The Fetal Brain Model -- 2.4 Modelling Fetal Motion -- 2.5 Modelling Signals with Slice Profiles -- 2.6 Sampling K-Space -- 2.7 Modelling the Signals for the Dictionary -- 2.8 Simulated Experiments -- 2.9 Fetal Brain Measurements -- 3 Results -- 3.1 Simulated Fetal MRI -- 3.2 Reconstruction of T2 Maps from Simulated Fetal Data -- 3.3 Fetal Measurements -- 4 Discussion -- 5 Conclusion -- References -- Fetal Cardiac Image Analysis -- Towards Automatic Risk Prediction of Coarctation of the Aorta from Fetal CMR Using Atlas-Based Segmentation and Statistical Shape Modelling -- 1 Introduction -- 1.1 Contributions -- 2 Methods -- 2.1 Dataset Description -- 2.2 Automated Segmentation -- 2.3 Statistical Shape Analysis -- 3 Results -- 3.1 Segmentation -- 3.2 Statistical Shape Analysis -- 4 Discussion -- 5 Conclusion -- References -- The Challenge of Fetal Cardiac MRI Reconstruction Using Deep Learning -- 1 Introduction -- 2 Methods -- 3 Results -- 4 Discussion -- 5 Conclusion -- References -- Placental and Cervical Image Analysis -- Consistency Regularization Improves Placenta Segmentation in Fetal EPI MRI Time Series -- 1 Introduction -- 2 Methods -- 2.1 Consistency Regularization Loss -- 2.2 Siamese Neural Network -- 2.3 Implementation Details -- 3 Experiments -- 3.1 Dataset -- 3.2 Baseline Methods -- 3.3 Evaluation -- 3.4 Results -- 4 Limitations and Future Work -- 5 Conclusions -- References -- Visualization and Quantification of Placental Vasculature Using MRI -- 1 Introduction -- 1.1 MRI Acquisitions and Differing Contrasts -- 2 Methods -- 2.1 Data Acquisition -- 2.2 Image Quantification Metrics -- 2.3 Quantification of Vessel Segmentation -- 2.4 Statistics -- 2.5 Segmentation Performance Evaluation -- 3 Results -- 3.1 Validation from Micro-CT -- 4 Discussion -- 5 Conclusion -- References.
The Comparison Analysis of the Cervical Features Between Second-and Third-Trimester Pregnancy in Ultrasound Images Using eXplainable AI -- 1 Introduction -- 2 Method -- 2.1 eXplainable Artificial Intelligence(XAI) - CAM Based Methods -- 2.2 Deep Neural Network Model for Classification Task -- 2.3 Dataset and Preprocessing -- 2.4 Experimental Design -- 3 Results -- 3.1 Comparison of Heatmap Between Second- And Third-Trimester -- 3.2 Difference in Heatmap with and Without Fetal Head -- 3.3 Cross Validation Using Another Institution -- 4 Discussion and Conclusion -- References -- Infant Video Analysis -- Automatic Infant Respiration Estimation from Video: A Deep Flow-Based Algorithm and a Novel Public Benchmark -- 1 Introduction -- 2 Related Work -- 3 AIR-125: An Annotated Infant Respiration Dataset -- 4 Methodology -- 4.1 AirFlowNet Architecture -- 4.2 Spectral Bandpass Loss -- 5 Evaluation and Results -- 5.1 Experimental Setup -- 5.2 Results and Analysis -- 6 Conclusion -- References -- Author Index.
Record Nr. UNINA-9910751388203321
Cham : Springer Nature Switzerland : Imprint: Springer, 2023
Printed material
Find it at: Univ. Federico II
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part I / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (525 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-19-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Agents/Decision Theory -- Data Mining and Knowledge Discovery -- (Deep) Reinforcement Learning -- Generative AI -- Graph Learning -- Healthcare and Wellbeing -- Knowledge Representation and Reasoning.
Record Nr. UNINA-9910760261003321
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. Federico II
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part II / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (515 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-22-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Machine Learning/Deep Learning -- Natural Language Processing -- Optimization -- Responsible AI/Explainable AI.
Record Nr. UNINA-9910760256803321
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. Federico II
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part III / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (514 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-25-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part III -- Vision and Perception -- A Multi-scale Densely Connected and Feature Aggregation Network for Hyperspectral Image Classification -- 1 Introduction -- 2 Proposed Method -- 2.1 Spectral-Spatial Feature Extraction Module -- 2.2 Multi-scale Feature Extraction Module -- 2.3 Multi-level Feature Aggregation Module -- 3 Experiment and Analysis -- 3.1 Dataset Description and Experiment Setup -- 3.2 Experiment Results and Analysis -- 3.3 Parametric Analysis -- 3.4 Ablation Experiments -- 4 Conclusion -- References -- A-ESRGAN: Training Real-World Blind Super-Resolution with Attention U-Net Discriminators -- 1 Introduction and Motivation -- 2 Related Work -- 2.1 GANs-Based Blind SR Methods -- 2.2 Discriminator Models -- 3 Method -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Testsets and Experiment Settings -- 4.3 Comparing with the State-of-the-Arts -- 4.4 Attention Block Analysis -- 4.5 Multi-scale Discriminator Analysis -- 4.6 Ablation Study -- 5 Conclusions -- References -- AI-Based Intelligent-Annotation Algorithm for Medical Segmentation from Ultrasound Data -- 1 Introduction -- 1.1 Contributions -- 1.2 Related Work -- 2 Methodology -- 2.1 Workflow -- 2.2 Adaptive Polygon Tracking (APT) Model -- 2.3 Historical Storage-Based Quantum-Inspired Evolutionary Network (HQIE) -- 2.4 Mathematical Model-Based Contour Detection -- 3 Experiment Setup and Results -- 3.1 Databases -- 3.2 Performance on the Testing Dataset Disturbed by Noise -- 3.3 Ablation Study -- 3.4 Comparison with State-Of-The-Art (SOTA) Models -- 4 Conclusion -- References -- An Automatic Fabric Defect Detector Using an Efficient Multi-scale Network -- 1 Introduction -- 2 Related Work -- 3 Proposed Model EMSD -- 3.1 LSC-Darknet -- 3.2 DCSPPF -- 3.3 LSG-PAFPN -- 3.4 Detection Head -- 4 Experiments -- 4.1 Setup -- 4.2 Datasets.
4.3 Evaluation Metrics -- 4.4 Comparison Experiment Results -- 4.5 Ablation Experiments -- 4.6 Visualization of Detection Results -- 5 Conclusion -- References -- An Improved Framework for Pedestrian Tracking and Counting Based on DeepSORT -- 1 Introduction -- 2 FR-DeepSort for Pedestrian Tracking and Counting -- 2.1 The FR-DeepSORT Framework -- 2.2 Pedestrian Tracking -- 2.3 Pedestrian Counting -- 3 Experiments -- 3.1 Analysis of Pedestrian Tracking Results -- 3.2 Analysis of Pedestrian Counting Results -- 4 Conclusion -- References -- Bootstrap Diffusion Model Curve Estimation for High Resolution Low-Light Image Enhancement -- 1 Introduction -- 2 Related Work -- 2.1 Learning-Based Methods in LLIE -- 2.2 Diffusion Models -- 3 Methodology -- 3.1 Curve Estimation for High Resolution Image -- 3.2 Bootstrap Diffusion Model for Better Curve Estimation -- 3.3 Denoising Module for Real Low-Light Image -- 4 Experiments -- 4.1 Datasets Settings -- 4.2 Comparison with SOTA Methods on Paired Data -- 4.3 Comparison with SOTA Methods on Unpaired Data -- 4.4 Ablation Study -- 5 Conclusion and Limitation -- References -- CoalUMLP: Slice and Dice! A Fast, MLP-Like 3D Medical Image Segmentation Network -- 1 Introduction -- 2 Method -- 2.1 Overview -- 2.2 Multi-scale Axial Permute Encoder -- 2.3 Masked Axial Permute Decoder -- 2.4 Semantic Bridging Connections -- 3 Experiment -- 3.1 Dataset -- 3.2 Implement Details -- 3.3 Comparison with SOTA -- 3.4 Ablation Study -- 4 Conclusion -- References -- Enhancing Interpretability in CT Reconstruction Using Tomographic Domain Transform with Self-supervision -- 1 Introduction -- 2 Methodology -- 2.1 Radon Transform in CT Imaging -- 2.2 CT Reconstruction Using Tomographic Domain Transform with Self-supervision -- 3 Experimental Results -- 3.1 Datasets and Experimental Settings -- 3.2 Comparison Experiments -- 4 Conclusion.
References -- Feature Aggregation Network for Building Extraction from High-Resolution Remote Sensing Images -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Transformer Encoder -- 3.2 Feature Aggregation Module -- 3.3 Feature Refinement via Difference Elimination Module and Receptive Field Block -- 3.4 Dual Attention Module for Enhanced Feature Interactions -- 3.5 Fusion Decoder and Loss Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with Other State-of-the-Art Methods -- 4.4 Ablation Study -- 5 Conclusion -- References -- Image Quality Assessment Method Based on Cross-Modal -- 1 Introduction -- 2 Related Work -- 2.1 Deep Learning-Based Image Quality Assessment -- 2.2 Cross-Modal Techniques -- 3 Methods -- 3.1 Exploring the Feasibility of Cross-Modal Models -- 3.2 Image Quality Score Assessment Based on Cross-Modality -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Details -- 4.3 Evaluation Metrics -- 4.4 Feasibility Research -- 4.5 Comparison Experiments -- 4.6 Ablation Experiments -- 5 Conclusion -- 6 Outlook -- References -- KDED: A Knowledge Distillation Based Edge Detector -- 1 Introduction -- 2 Related Work -- 2.1 Label Problems in Edge Detection -- 2.2 Knowledge Distillation -- 3 Method -- 3.1 Compact Twice Fusion Network for Edge Detection -- 3.2 Knowledge Distillation Based on Label Correction -- 3.3 Sample Balance Loss -- 4 Experiments -- 4.1 Datasets and Implementation -- 4.2 Comparison with the State-of-the-Art Methods -- 4.3 Ablation Study -- 5 Conclusion -- References -- Multiple Attention Network for Facial Expression Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Real-Time Classification Networks -- 2.2 Attention Mechanism -- 3 Methodology -- 3.1 Multi-branch Stack Residual Network -- 3.2 Transitional Attention Network -- 3.3 Appropriate Cascade Structure.
4 Experiments -- 4.1 Implementation Details -- 4.2 Ablation Studies -- 4.3 Comparision with Previous Results -- 5 Conclusion -- References -- PMT-IQA: Progressive Multi-task Learning for Blind Image Quality Assessment -- 1 Introduction -- 2 Related Works -- 3 Methods -- 3.1 Overview of the Proposed Model -- 3.2 Multi-scale Semantic Feature Extraction -- 3.3 Progressive Multi-Task Image Quality Assessment -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Performance Evaluation -- 4.3 Ablation Study -- 5 Conclusion -- References -- Reduced-Resolution Head for Object Detection -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 Motivation and Analysis -- 3.2 Reduced-Resolution Head for Object Detection -- 4 Experiments -- 4.1 Ablation Study -- 4.2 Applied to Other Detectors -- 5 Conclusion -- References -- Research of Highway Vehicle Inspection Based on Improved YOLOv5 -- 1 Introduction -- 2 Related Work -- 2.1 YOLOv5 Model -- 2.2 The Improvement of YOLOv5 -- 3 Method -- 3.1 Ghostnet-C -- 3.2 GSConv+Slim-Neck -- 3.3 CAS Attention Mechanism -- 4 Experiment and Metrics -- 4.1 Experimental Environment and Data Set -- 4.2 Metrics -- 4.3 Experiment and Experimental Analysis -- 5 Conclusion -- References -- STN-BA: Weakly-Supervised Few-Shot Temporal Action Localization -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Feature Extractor -- 3.2 Similarity Generator -- 3.3 Video-Level Classifier -- 3.4 Localization and Boundary-Check Algorithm -- 4 Experiment -- 4.1 Experiment Setup -- 4.2 Main Experimental Results -- 4.3 Ablation Experiment -- 4.4 Generalization Test -- 5 Conclusion -- References -- SVFNeXt: Sparse Voxel Fusion for LiDAR-Based 3D Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Voxel-Based 3D Detectors -- 2.2 Fusion-Based 3D Detectors -- 2.3 Transformer-Based 3D Detectors -- 3 SVFNeXt for 3D Object Detection.
3.1 Dynamic Distance-Aware Cylindrical Voxelization -- 3.2 Foreground Centroid-Voxel Selection-Query-Fusion -- 3.3 Object-Aware Center-Voxel Transformer -- 3.4 Loss Functions -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Main Results -- 4.4 Ablation Study -- 5 Conclusion -- References -- Traffic Sign Recognition Model Based on Small Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Data Augmentation -- 2.2 Loss Function -- 2.3 Deep Learning For Small Object Detection -- 3 Method -- 3.1 FlexCut Data Augmentation -- 3.2 Keypoint-Based PIoU Loss Function -- 3.3 The Proposed YOLOv5T -- 4 Experiments -- 4.1 Dataset -- 4.2 Experimental Analysis -- 5 Conclusion -- References -- A Multi-scale Multi-modal Multi-dimension Joint Transformer for Two-Stream Action Classification -- 1 Introduction -- 2 The Proposed Method -- 2.1 Training Schemes -- 3 Experiments -- 3.1 Experimental Setups -- 3.2 Results and Discussions -- 3.3 Visualizations -- 4 Conclusions -- References -- Adv-Triplet Loss for Sparse Attack on Facial Expression Recognition -- 1 Introduction -- 2 Method -- 2.1 Problem Definition -- 2.2 Adv-Triplet Loss Function -- 2.3 Adv-Triplet Loss Search Attack -- 3 Experiments and Results -- 3.1 Sparsity Evaluation -- 3.2 Invisibility Evaluation -- 4 Conclusion -- References -- Credible Dual-X Modality Learning for Visible and Infrared Person Re-Identification -- 1 Introduction -- 2 Methodology -- 2.1 Overview -- 2.2 Dual-X Module -- 2.3 Uncertainty Estimation Algorithm -- 3 Experiment and Analysis -- 3.1 Experimental Settings -- 3.2 Ablation Study -- 3.3 Comparison with State-of-the-Art Methods -- 4 Conclusion -- References -- Facial Expression Recognition in Online Course Using Light-Weight Vision Transformer via Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 3 Method -- 4 Experiments Results -- 5 Conclusion.
References.
Record Nr. UNINA-9910760258703321
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. Federico II
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part III / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (514 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-25-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Organization -- Contents - Part III -- Vision and Perception -- A Multi-scale Densely Connected and Feature Aggregation Network for Hyperspectral Image Classification -- 1 Introduction -- 2 Proposed Method -- 2.1 Spectral-Spatial Feature Extraction Module -- 2.2 Multi-scale Feature Extraction Module -- 2.3 Multi-level Feature Aggregation Module -- 3 Experiment and Analysis -- 3.1 Dataset Description and Experiment Setup -- 3.2 Experiment Results and Analysis -- 3.3 Parametric Analysis -- 3.4 Ablation Experiments -- 4 Conclusion -- References -- A-ESRGAN: Training Real-World Blind Super-Resolution with Attention U-Net Discriminators -- 1 Introduction and Motivation -- 2 Related Work -- 2.1 GANs-Based Blind SR Methods -- 2.2 Discriminator Models -- 3 Method -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Testsets and Experiment Settings -- 4.3 Comparing with the State-of-the-Arts -- 4.4 Attention Block Analysis -- 4.5 Multi-scale Discriminator Analysis -- 4.6 Ablation Study -- 5 Conclusions -- References -- AI-Based Intelligent-Annotation Algorithm for Medical Segmentation from Ultrasound Data -- 1 Introduction -- 1.1 Contributions -- 1.2 Related Work -- 2 Methodology -- 2.1 Workflow -- 2.2 Adaptive Polygon Tracking (APT) Model -- 2.3 Historical Storage-Based Quantum-Inspired Evolutionary Network (HQIE) -- 2.4 Mathematical Model-Based Contour Detection -- 3 Experiment Setup and Results -- 3.1 Databases -- 3.2 Performance on the Testing Dataset Disturbed by Noise -- 3.3 Ablation Study -- 3.4 Comparison with State-Of-The-Art (SOTA) Models -- 4 Conclusion -- References -- An Automatic Fabric Defect Detector Using an Efficient Multi-scale Network -- 1 Introduction -- 2 Related Work -- 3 Proposed Model EMSD -- 3.1 LSC-Darknet -- 3.2 DCSPPF -- 3.3 LSG-PAFPN -- 3.4 Detection Head -- 4 Experiments -- 4.1 Setup -- 4.2 Datasets.
4.3 Evaluation Metrics -- 4.4 Comparison Experiment Results -- 4.5 Ablation Experiments -- 4.6 Visualization of Detection Results -- 5 Conclusion -- References -- An Improved Framework for Pedestrian Tracking and Counting Based on DeepSORT -- 1 Introduction -- 2 FR-DeepSort for Pedestrian Tracking and Counting -- 2.1 The FR-DeepSORT Framework -- 2.2 Pedestrian Tracking -- 2.3 Pedestrian Counting -- 3 Experiments -- 3.1 Analysis of Pedestrian Tracking Results -- 3.2 Analysis of Pedestrian Counting Results -- 4 Conclusion -- References -- Bootstrap Diffusion Model Curve Estimation for High Resolution Low-Light Image Enhancement -- 1 Introduction -- 2 Related Work -- 2.1 Learning-Based Methods in LLIE -- 2.2 Diffusion Models -- 3 Methodology -- 3.1 Curve Estimation for High Resolution Image -- 3.2 Bootstrap Diffusion Model for Better Curve Estimation -- 3.3 Denoising Module for Real Low-Light Image -- 4 Experiments -- 4.1 Datasets Settings -- 4.2 Comparison with SOTA Methods on Paired Data -- 4.3 Comparison with SOTA Methods on Unpaired Data -- 4.4 Ablation Study -- 5 Conclusion and Limitation -- References -- CoalUMLP: Slice and Dice! A Fast, MLP-Like 3D Medical Image Segmentation Network -- 1 Introduction -- 2 Method -- 2.1 Overview -- 2.2 Multi-scale Axial Permute Encoder -- 2.3 Masked Axial Permute Decoder -- 2.4 Semantic Bridging Connections -- 3 Experiment -- 3.1 Dataset -- 3.2 Implement Details -- 3.3 Comparison with SOTA -- 3.4 Ablation Study -- 4 Conclusion -- References -- Enhancing Interpretability in CT Reconstruction Using Tomographic Domain Transform with Self-supervision -- 1 Introduction -- 2 Methodology -- 2.1 Radon Transform in CT Imaging -- 2.2 CT Reconstruction Using Tomographic Domain Transform with Self-supervision -- 3 Experimental Results -- 3.1 Datasets and Experimental Settings -- 3.2 Comparison Experiments -- 4 Conclusion.
References -- Feature Aggregation Network for Building Extraction from High-Resolution Remote Sensing Images -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Transformer Encoder -- 3.2 Feature Aggregation Module -- 3.3 Feature Refinement via Difference Elimination Module and Receptive Field Block -- 3.4 Dual Attention Module for Enhanced Feature Interactions -- 3.5 Fusion Decoder and Loss Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with Other State-of-the-Art Methods -- 4.4 Ablation Study -- 5 Conclusion -- References -- Image Quality Assessment Method Based on Cross-Modal -- 1 Introduction -- 2 Related Work -- 2.1 Deep Learning-Based Image Quality Assessment -- 2.2 Cross-Modal Techniques -- 3 Methods -- 3.1 Exploring the Feasibility of Cross-Modal Models -- 3.2 Image Quality Score Assessment Based on Cross-Modality -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Details -- 4.3 Evaluation Metrics -- 4.4 Feasibility Research -- 4.5 Comparison Experiments -- 4.6 Ablation Experiments -- 5 Conclusion -- 6 Outlook -- References -- KDED: A Knowledge Distillation Based Edge Detector -- 1 Introduction -- 2 Related Work -- 2.1 Label Problems in Edge Detection -- 2.2 Knowledge Distillation -- 3 Method -- 3.1 Compact Twice Fusion Network for Edge Detection -- 3.2 Knowledge Distillation Based on Label Correction -- 3.3 Sample Balance Loss -- 4 Experiments -- 4.1 Datasets and Implementation -- 4.2 Comparison with the State-of-the-Art Methods -- 4.3 Ablation Study -- 5 Conclusion -- References -- Multiple Attention Network for Facial Expression Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Real-Time Classification Networks -- 2.2 Attention Mechanism -- 3 Methodology -- 3.1 Multi-branch Stack Residual Network -- 3.2 Transitional Attention Network -- 3.3 Appropriate Cascade Structure.
4 Experiments -- 4.1 Implementation Details -- 4.2 Ablation Studies -- 4.3 Comparision with Previous Results -- 5 Conclusion -- References -- PMT-IQA: Progressive Multi-task Learning for Blind Image Quality Assessment -- 1 Introduction -- 2 Related Works -- 3 Methods -- 3.1 Overview of the Proposed Model -- 3.2 Multi-scale Semantic Feature Extraction -- 3.3 Progressive Multi-Task Image Quality Assessment -- 4 Experiment -- 4.1 Experimental Setup -- 4.2 Performance Evaluation -- 4.3 Ablation Study -- 5 Conclusion -- References -- Reduced-Resolution Head for Object Detection -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 Motivation and Analysis -- 3.2 Reduced-Resolution Head for Object Detection -- 4 Experiments -- 4.1 Ablation Study -- 4.2 Applied to Other Detectors -- 5 Conclusion -- References -- Research of Highway Vehicle Inspection Based on Improved YOLOv5 -- 1 Introduction -- 2 Related Work -- 2.1 YOLOv5 Model -- 2.2 The Improvement of YOLOv5 -- 3 Method -- 3.1 Ghostnet-C -- 3.2 GSConv+Slim-Neck -- 3.3 CAS Attention Mechanism -- 4 Experiment and Metrics -- 4.1 Experimental Environment and Data Set -- 4.2 Metrics -- 4.3 Experiment and Experimental Analysis -- 5 Conclusion -- References -- STN-BA: Weakly-Supervised Few-Shot Temporal Action Localization -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Feature Extractor -- 3.2 Similarity Generator -- 3.3 Video-Level Classifier -- 3.4 Localization and Boundary-Check Algorithm -- 4 Experiment -- 4.1 Experiment Setup -- 4.2 Main Experimental Results -- 4.3 Ablation Experiment -- 4.4 Generalization Test -- 5 Conclusion -- References -- SVFNeXt: Sparse Voxel Fusion for LiDAR-Based 3D Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Voxel-Based 3D Detectors -- 2.2 Fusion-Based 3D Detectors -- 2.3 Transformer-Based 3D Detectors -- 3 SVFNeXt for 3D Object Detection.
3.1 Dynamic Distance-Aware Cylindrical Voxelization -- 3.2 Foreground Centroid-Voxel Selection-Query-Fusion -- 3.3 Object-Aware Center-Voxel Transformer -- 3.4 Loss Functions -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Main Results -- 4.4 Ablation Study -- 5 Conclusion -- References -- Traffic Sign Recognition Model Based on Small Object Detection -- 1 Introduction -- 2 Related Work -- 2.1 Data Augmentation -- 2.2 Loss Function -- 2.3 Deep Learning For Small Object Detection -- 3 Method -- 3.1 FlexCut Data Augmentation -- 3.2 Keypoint-Based PIoU Loss Function -- 3.3 The Proposed YOLOv5T -- 4 Experiments -- 4.1 Dataset -- 4.2 Experimental Analysis -- 5 Conclusion -- References -- A Multi-scale Multi-modal Multi-dimension Joint Transformer for Two-Stream Action Classification -- 1 Introduction -- 2 The Proposed Method -- 2.1 Training Schemes -- 3 Experiments -- 3.1 Experimental Setups -- 3.2 Results and Discussions -- 3.3 Visualizations -- 4 Conclusions -- References -- Adv-Triplet Loss for Sparse Attack on Facial Expression Recognition -- 1 Introduction -- 2 Method -- 2.1 Problem Definition -- 2.2 Adv-Triplet Loss Function -- 2.3 Adv-Triplet Loss Search Attack -- 3 Experiments and Results -- 3.1 Sparsity Evaluation -- 3.2 Invisibility Evaluation -- 4 Conclusion -- References -- Credible Dual-X Modality Learning for Visible and Infrared Person Re-Identification -- 1 Introduction -- 2 Methodology -- 2.1 Overview -- 2.2 Dual-X Module -- 2.3 Uncertainty Estimation Algorithm -- 3 Experiment and Analysis -- 3.1 Experimental Settings -- 3.2 Ablation Study -- 3.3 Comparison with State-of-the-Art Methods -- 4 Conclusion -- References -- Facial Expression Recognition in Online Course Using Light-Weight Vision Transformer via Knowledge Distillation -- 1 Introduction -- 2 Related Work -- 3 Method -- 4 Experiments Results -- 5 Conclusion.
References.
Record Nr. UNISA-996565870003316
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part II / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (515 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-22-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Machine Learning/Deep Learning -- Natural Language Processing -- Optimization -- Responsible AI/Explainable AI.
Record Nr. UNISA-996565870103316
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here
PRICAI 2023: Trends in Artificial Intelligence [electronic resource] : 20th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2023, Jakarta, Indonesia, November 15–19, 2023, Proceedings, Part I / edited by Fenrong Liu, Arun Anand Sadanandan, Duc Nghia Pham, Petrus Mursanto, Dickson Lukose
Author Liu Fenrong
Edition [1st ed. 2024.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Physical description 1 online resource (525 pages)
Discipline 006.3
Other authors (Persons) Sadanandan, Arun Anand
Pham, Duc Nghia
Mursanto, Petrus
Lukose, Dickson
Series Lecture Notes in Artificial Intelligence
Topical subject Artificial intelligence
Computers
Computer engineering
Computer networks
Application software
Image processing - Digital techniques
Computer vision
Artificial Intelligence
Computing Milieux
Computer Engineering and Networks
Computer and Information Systems Applications
Computer Imaging, Vision, Pattern Recognition and Graphics
ISBN 981-9970-19-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Agents/Decision Theory -- Data Mining and Knowledge Discovery -- (Deep) Reinforcement Learning -- Generative AI -- Graph Learning -- Healthcare and Wellbeing -- Knowledge Representation and Reasoning.
Record Nr. UNISA-996565869903316
Liu Fenrong  
Singapore : Springer Nature Singapore : Imprint: Springer, 2024
Printed material
Find it at: Univ. di Salerno
OPAC: Check availability here