
Computer vision - ECCV 2022. Part XXXIX : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022 : proceedings / Shai Avidan [and four others]




Title: Computer vision - ECCV 2022. Part XXXIX : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022 : proceedings / Shai Avidan [and four others]
Publication: Cham, Switzerland : Springer, [2022]
©2022
Physical description: 1 online resource (785 pages)
Dewey classification: 006.4
Topical subjects: Pattern recognition systems
Computer vision
Secondary responsibility: Avidan, Shai
Contents note: Intro -- Foreword -- Preface -- Organization -- Contents - Part XXXIX -- Lane Detection Transformer Based on Multi-frame Horizontal and Vertical Attention and Visual Transformer Module -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Lane Shape Model -- 3.2 The MHVA Module -- 3.3 The VT Module -- 3.4 FFNs -- 3.5 Loss Function -- 4 Experiments -- 4.1 Dataset -- 4.2 Evaluation Metrics -- 4.3 Implementation Details -- 4.4 Results -- 4.5 Ablation Study -- 5 Conclusions -- References -- ProposalContrast: Unsupervised Pre-training for LiDAR-Based 3D Object Detection -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Overview of ProposalContrast -- 3.2 Region Proposal Encoding Module -- 3.3 Joint Optimization of Inter-proposal Discrimination and Inter-cluster Separation -- 4 Experimental Results -- 4.1 Pre-training Settings -- 4.2 Transfer Learning Settings and Results -- 4.3 Ablation Study -- 5 Conclusion -- References -- PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map -- 1 Introduction -- 2 Background -- 2.1 Problem Formulation of Trajectory Forecasting -- 2.2 Contrastive Learning -- 3 Method -- 3.1 Trajectory-Map Contrastive Learning (TMCL) -- 3.2 Map Contrastive Learning (MCL) -- 3.3 Training Objective -- 4 Experiments -- 4.1 Dataset and Implementation Details -- 4.2 Comparison Experiments -- 4.3 Data Efficiency -- 4.4 Scalability Analysis -- 4.5 Analysis -- 5 Related Works -- 5.1 Scene Representation in Trajectory Forecasting -- 5.2 Self-supervised Learning in Trajectory Forecasting -- 6 Discussion and Limitations -- 7 Conclusion -- References -- Master of All: Simultaneous Generalization of Urban-Scene Segmentation to All Adverse Weather Conditions -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Softmax Multi-class Normalized Cut Loss -- 3.2 Class Imbalance Re-weighting.
3.3 Sample Importance Weighting -- 3.4 MALL-Sample -- 3.5 MALL-Domain -- 4 Experiments -- 4.1 Datasets and Evaluation Criteria -- 4.2 Improvement on SOTA Daytime Models -- 4.3 Improvement on Domain Generalization Models -- 4.4 Improvement on Unsupervised Domain Adaptation Models -- References -- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds -- 1 Introduction -- 2 Related Work -- 2.1 Segmentation Networks for LiDAR Point Clouds -- 2.2 Label-Efficient 3D Semantic Segmentation -- 3 Method -- 3.1 Pilot Study: What Should We Pay Attention To? -- 3.2 Overview -- 3.3 Pre-segmentation -- 3.4 Annotation Policy and Training Labels -- 3.5 Contrastive Prototype Learning -- 3.6 Multi-scan Distillation -- 4 Experiments -- 4.1 Comparison on SemanticKITTI -- 4.2 Comparison on nuScenes -- 4.3 Ablation Study -- 4.4 Analysis of Pre-segmentation and Labeling -- 4.5 Analysis of Multi-scan Distillation -- 5 Conclusion and Future Work -- References -- Visual Cross-View Metric Localization with Dense Uncertainty Estimates -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Baseline Cross-View Regression -- 3.2 Proposed Method -- 4 Experiments -- 4.1 Datasets -- 4.2 Evaluation Metrics -- 4.3 Hyper-parameters and Ablation Study -- 4.4 Generalization in the Same Area/Across Areas -- 4.5 Generalization Across Time -- 5 Conclusion -- References -- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Main Architecture Design -- 3.2 V2X-Vision Transformer -- 4 Experiments -- 4.1 V2XSet: An Open Dataset for V2X Cooperative Perception -- 4.2 Experimental Setup -- 4.3 Quantitative Evaluation -- 4.4 Qualitative Evaluation -- 4.5 Ablation Studies -- 5 Conclusion -- References -- DevNet: Self-supervised Monocular Depth Learning via Density Volume Construction.
1 Introduction -- 2 Related Works -- 2.1 Supervised Monocular Depth Learning -- 2.2 Self-supervised Depth Learning -- 2.3 Neural Rendering -- 3 Density Volume Rendering Based Depth Learning -- 3.1 Preliminary -- 3.2 Framework -- 3.3 Regularizations -- 3.4 Training Loss -- 3.5 Discussions -- 4 Experiments -- 4.1 Depth Estimation -- 4.2 Odometry Estimation -- 4.3 Ablation Study -- 4.4 Generalization -- 5 Conclusion -- References -- Action-Based Contrastive Learning for Trajectory Prediction -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Formulation -- 3.2 Multi-modal Trajectory Prediction -- 3.3 Action-Based Contrastive Learning Framework -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Setup -- 4.3 Trajectory Prediction Results -- 4.4 Ablation Study -- 5 Discussion and Conclusions -- References -- Radatron: Accurate Detection Using Multi-resolution Cascaded MIMO Radar -- 1 Introduction -- 2 Related Work -- 3 Background on mmWave MIMO Radar -- 4 Method -- 4.1 Radar Signal Processing -- 4.2 Radatron's Network Design -- 5 Radatron Dataset -- 6 Evaluation and Experiments -- 6.1 Performance Against Baselines -- 6.2 Radatron's Performance -- 6.3 Qualitative Results -- 7 Limitations and Discussion -- References -- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Problem Statement -- 3.2 Overview -- 3.3 Pseudo Low-beam Data Generation -- 3.4 Distillation from High-beam Data -- 3.5 Progressive Knowledge Transfer -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Cross-Dataset Adaptation -- 4.3 Single-Dataset Adaptation -- 4.4 Ablation Studies -- 4.5 Further Analysis -- 5 Conclusion -- References -- Efficient Point Cloud Segmentation with Geometry-Aware Sparse Networks -- 1 Introduction -- 2 Related Work -- 3 Approach -- 3.1 Prerequisite.
3.2 Sparse Feature Encoder -- 3.3 Sparse Geometry Feature Enhancement -- 3.4 Deep Sparse Supervision -- 3.5 Final Prediction -- 4 Experiments -- 4.1 Datasets -- 4.2 Results On SemanticKITTI -- 4.3 Results On Nuscenes -- 4.4 Ablation Study -- 5 Conclusion -- References -- FH-Net: A Fast Hierarchical Network for Scene Flow Estimation on Real-World Point Clouds -- 1 Introduction -- 2 Related Works -- 3 Methodology -- 3.1 Network Architectures -- 3.2 Hierarchical Loss -- 3.3 FH-Net Families -- 3.4 Data Augmentation -- 4 Real-World Dataset Creation -- 5 Experiments -- 5.1 Experimental Settings -- 5.2 Results on Lidar-KITTI -- 5.3 Results on SF-Waymo -- 5.4 Results on No-lidar-scanned Dataset -- 5.5 Ablation Studies -- 6 Conclusion -- References -- SpatialDETR: Robust Scalable Transformer-Based 3D Object Detection From Multi-view Camera Images With Global Cross-Sensor Attention -- 1 Introduction -- 2 Related Work -- 2.1 Transformer-Based Object Detection -- 2.2 Prior-Guided Attention -- 2.3 Positional Encodings -- 3 Method -- 3.1 Overall Architecture -- 3.2 Revisiting Attention In DETR -- 3.3 Geometric Positional Encoding -- 3.4 Spatially-Aware Attention -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Comparison To Existing Works -- 4.3 Ablations And Analysis -- 5 Conclusion -- References -- Pixel-Wise Energy-Biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Training Set -- 3.2 Pixel-Wise Energy-Biased Abstention Learning (PEBAL) -- 3.3 Training and Inference -- 4 Experiment -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Evaluation Measures -- 4.4 Comparison on Anomaly Segmentation Benchmarks -- 4.5 Ablation Study -- 4.6 Outlier Samples and Computational Efficiency -- 5 Conclusions and Discussions -- References.
Rethinking Closed-Loop Training for Autonomous Driving -- 1 Introduction -- 2 Related Work -- 3 Learning Neural Planners in Closed-Loop -- 3.1 Preliminaries on RL -- 3.2 Planning with TRAVL -- 3.3 Efficient Learning with Counterfactual Rollouts -- 4 Large Scale Closed-Loop Benchmark Dataset -- 5 Experiments -- 6 Conclusion -- References -- SLiDE: Self-supervised LiDAR De-snowing Through Reconstruction Difficulty -- 1 Introduction -- 2 Related Work -- 2.1 LiDAR Point Cloud De-noising -- 2.2 Self-supervised Image De-noising -- 2.3 Semi-supervised Learning -- 3 Proposed Method -- 3.1 Input Representation -- 3.2 Self-supervised Learning Framework -- 3.3 Point Reconstruction with Multiple Hypotheses -- 3.4 Post-processing -- 3.5 Semi-supervised Learning -- 4 Experimental Result -- 4.1 Point-wise Evaluation on Synthetic Data -- 4.2 Qualitative Evaluation on Real-world Data -- 5 Conclusion -- References -- Generative Meta-Adversarial Network for Unseen Object Navigation -- 1 Introduction -- 2 Related Work -- 3 Unseen Object Navigation -- 3.1 Task Definition -- 3.2 A3C Baseline Model -- 4 Generative Meta-Adversarial Network -- 4.1 Feature Generator -- 4.2 Environmental Meta Discriminator -- 5 Experiments -- 5.1 Experiment Setup -- 5.2 Methods for Comparison -- 5.3 Evaluation Results -- 5.4 Comparisons with the Related Works -- 6 Conclusions -- References -- Object Manipulation via Visual Target Localization -- 1 Introduction -- 2 Related Works -- 3 Object Displacement -- 4 Manipulation via Visual Object Location Estimation -- 4.1 Estimating Relative 3D Object Location -- 4.2 Conditional Segmentation -- 4.3 Policy Network -- 5 Experiments -- 5.1 How Well Does m-VOLE Work? -- 5.2 How Robust Is m-VOLE to Noise? -- 5.3 Why Conditional Segmentation? -- 5.4 Can m-VOLE Do Zero-Shot Manipulation? -- 6 Conclusion -- References.
MoDA: Map Style Transfer for Self-supervised Domain Adaptation of Embodied Agents.
Authorized title: Computer Vision – ECCV 2022
ISBN: 3-031-19842-5
Format: Print material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910620199303321
Held at: Univ. Federico II
Series: Lecture notes in computer science.