Title: | Computer vision - ECCV 2022 : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, proceedings. Part XX / Shai Avidan [and four others], editors
Publication: | Cham, Switzerland : Springer, [2022]
©2022
Physical description: | 1 online resource (815 pages)
Discipline (Dewey): | 006.37
Topical subjects: | Computer vision
Pattern recognition systems
Secondary responsibility (person): | Avidan, Shai
Bibliography note: | Includes bibliographical references and index.
Contents note: | Intro -- Foreword -- Preface -- Organization -- Contents - Part XX -- tSF: Transformer-Based Semantic Filter for Few-Shot Learning -- 1 Introduction -- 2 Related Work -- 3 Transformer-Based Semantic Filter (tSF) -- 3.1 Related Transformer -- 3.2 tSF Methodology -- 3.3 Discussions -- 4 tSF for Few-Shot Classification -- 4.1 Problem Definition -- 4.2 PatchProto Framework with tSF -- 5 tSF for Few-Shot Segmentation and Detection -- 6 Experiments -- 6.1 Few-Shot Classification -- 6.2 Few-Shot Semantic Segmentation -- 6.3 Few-Shot Object Detection -- 7 Conclusions -- References -- Adversarial Feature Augmentation for Cross-domain Few-Shot Classification -- 1 Introduction -- 2 Related Work -- 3 Proposed Method -- 3.1 Preliminaries -- 3.2 Network Architecture -- 3.3 Feature Augmentation -- 3.4 Adversarial Feature Augmentation -- 4 Experiment -- 4.1 Implementation -- 4.2 Experimental Setting -- 4.3 Results on Benchmarks -- 4.4 Ablation Experiments -- 4.5 Results of Base Classes and Novel Classes -- 4.6 Comparison with Fine-Tuning -- 5 Conclusions -- References -- Constructing Balance from Imbalance for Long-Tailed Image Recognition -- 1 Introduction -- 2 Related Work -- 2.1 Long-Tailed Recognition -- 2.2 Normalizing Flows -- 3 Approach -- 3.1 Preliminaries -- 3.2 Overview -- 3.3 Gaussian Mixture Flow Filter -- 3.4 Cluster-Aided Classifier -- 4 Experiment -- 4.1 Datasets -- 4.2 Baselines -- 4.3 Implementation Details -- 4.4 Results -- 4.5 Visualization -- 4.6 Ablation Study -- 5 Conclusions -- References -- On Multi-Domain Long-Tailed Recognition, Imbalanced Domain Generalization and Beyond -- 1 Introduction -- 2 Related Work -- 3 Domain-Class Transferability Graph -- 4 What Makes for Good Representations in MDLT? -- 4.1 Divergent Label Distributions Hamper Transferable Features -- 4.2 Transferability Statistics Characterize Generalization.
4.3 A Loss that Bounds the Transferability Statistics -- 4.4 Calibration for Data Imbalance Leads to Better Transfer -- 5 What Makes for Good Classifiers in MDLT? -- 6 Benchmarking MDLT -- 6.1 Main Results -- 6.2 Understanding the Behavior of BoDA on MDLT -- 7 Beyond MDLT: (Imbalanced) Domain Generalization -- 8 Conclusion -- References -- Few-Shot Video Object Detection -- 1 Introduction -- 2 Related Work -- 3 Proposed Method -- 3.1 Overview -- 3.2 Few-Shot Video Object Detection Network -- 4 FSVOD-500 Dataset -- 5 Experiments -- 5.1 Comparison with Other Methods -- 5.2 Ablation Studies -- 5.3 Advantages of Temporal Matching -- 5.4 Object Indexing in Massive Videos -- 6 Conclusion -- References -- Worst Case Matters for Few-Shot Recognition -- 1 Introduction -- 2 Related Work -- 3 The Worst-case Accuracy and Its Surrogate -- 3.1 Existing and Proposed Metrics -- 3.2 Implicitly Optimizing the Worst-Case Accuracy -- 3.3 The Bias-Variance Tradeoff in the Few-Shot Scenario -- 3.4 Reducing Variance: Stability Regularization -- 3.5 Reducing Bias: Adaptability Calibration -- 3.6 Reducing Both Variance and Bias: Model Ensemble -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Comparing with State-of-the-art Methods -- 4.3 Relationships: , , -3, ACC1 and ACCm -- 4.4 Ablation Analyses -- 5 Conclusions -- References -- Exploring Hierarchical Graph Representation for Large-Scale Zero-Shot Image Classification -- 1 Introduction -- 2 Related Work -- 2.1 Zero-/Few-Shot Learning -- 2.2 Large-Scale Graphical Zero-Shot Learning -- 2.3 Visual Representation Learning from Semantic Supervision -- 3 Method -- 3.1 Problem Definition -- 3.2 HGR-Net: Large-Scale ZSL with Hierarchical Graph Representation Learning -- 4 Experiments -- 4.1 Datasets and the Hierarchical Structure -- 4.2 Implementation Details -- 4.3 Large-Scale ZSL Performance -- 4.4 Ablation Studies.
4.5 Qualitative Results -- 4.6 Low-shot Classification on Large-Scale Dataset -- 5 Conclusions -- References -- Doubly Deformable Aggregation of Covariance Matrices for Few-Shot Segmentation -- 1 Introduction -- 2 Related Work -- 2.1 Few-Shot Segmentation -- 2.2 Gaussian Process -- 3 Methodology -- 3.1 Preliminaries: Gaussian Process -- 3.2 Hard Example Mining-based GP Kernel Learning -- 3.3 Doubly Deformable 4D Transformer for Cost Volume Aggregation -- 4 Experiments -- 4.1 Experiment Settings -- 4.2 Results and Analysis -- 4.3 Ablation Study and Analysis -- 5 Conclusions -- References -- Dense Cross-Query-and-Support Attention Weighted Mask Aggregation for Few-Shot Segmentation -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Setup -- 3.2 DCAMA Framework for 1-Shot Learning -- 3.3 Extension to n-Shot Inference -- 4 Experiments and Results -- 4.1 Comparison with State of the Art -- 4.2 Ablation Study -- 5 Conclusion -- References -- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning -- 1 Introduction -- 2 Related Work -- 2.1 Meta-Learning for Few-Shot Classification -- 2.2 Unsupervised Meta-Learning -- 3 In-Depth Analysis of Clustering-Based Unsupervised Methods -- 4 Our Approach -- 4.1 Clustering-Friendly Feature Embedding -- 4.2 Progressive Evaluation Mechanism -- 5 Experiments -- 5.1 Datasets and Implementation Details -- 5.2 Ablation Study -- 5.3 Comparison with Other Algorithms -- 6 Conclusion -- References -- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition -- 1 Introduction -- 2 Related Work -- 3 CLASTER -- 3.1 Problem Definition -- 3.2 Visual-Semantic Representation -- 3.3 CLASTER Representation -- 3.4 Loss Function -- 3.5 Optimization with Reinforcement Learning -- 4 Implementation Details -- 5 Experimental Analysis -- 5.1 Datasets -- 5.2 Ablation Study.
5.3 Results on ZSL -- 5.4 Results on GZSL -- 5.5 Results on TruZe -- 5.6 Analysis of the RL Optimization -- 6 Conclusion -- References -- Few-Shot Class-Incremental Learning for 3D Point Cloud Objects -- 1 Introduction -- 2 Related Work -- 3 Few-Shot Class-Incremental Learning for 3D -- 3.1 Model Overview -- 3.2 Microshapes -- 3.3 Training Pipeline -- 4 Experiments -- 4.1 Main Results -- 4.2 Ablation Studies -- 4.3 Beyond FSCIL -- 5 Conclusions -- References -- Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions -- 1 Introduction -- 2 Related Work -- 3 Problem Setting -- 4 Methodology -- 4.1 Standard Semi-supervised Few-Shot Learning -- 4.2 Mutual-Information for Unlabeled Data Handling -- 4.3 Mitigate CF by Optimal Transport -- 4.4 Overall Learning Objective -- 5 Experiments -- 5.1 Benefit of Using Unlabeled Data in SETS -- 5.2 Comparison to Meta-Learning -- 5.3 Comparison to Continual Learning -- 5.4 Ablation Study -- 6 Conclusion -- References -- DnA: Improving Few-Shot Transfer Learning with Low-Rank Decomposition and Alignment -- 1 Introduction -- 2 Related Works -- 3 Method -- 3.1 Overview -- 3.2 Basic Alignment Step -- 3.3 Decomposition Before Alignment -- 4 Experiments -- 4.1 Settings -- 4.2 DnA Improves Few-Shot Performance -- 4.3 Ablation Study -- 5 Conclusion -- References -- Learning Instance and Task-Aware Dynamic Kernels for Few-Shot Learning -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Problem Formulation -- 3.2 Model Overview -- 3.3 Dynamic Kernel Generator -- 3.4 Dynamic Kernel -- 4 Experiments -- 4.1 Few-Shot Classification -- 4.2 Few-Shot Detection -- 4.3 Ablation Study -- 5 Conclusion -- References -- Open-World Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding -- 1 Introduction -- 2 Related Work -- 2.1 Zero-Shot Semantic Segmentation.
2.2 Vision-Language Pre-training -- 3 Method -- 3.1 ViL-Seg Framework -- 3.2 Vision-Based and Cross-Modal Contrasting -- 3.3 Online Clustering of Visual Embeddings -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Comparison with Other Methods -- 4.3 Ablation Analysis of ViL-Seg -- 5 Conclusion -- References -- Few-Shot Classification with Contrastive Learning -- 1 Introduction -- 2 Related Work -- 2.1 Few-Shot Learning -- 2.2 Contrastive Learning -- 2.3 Few-Shot Learning with Contrastive Learning -- 3 Method -- 3.1 Preliminary -- 3.2 Overview -- 3.3 Pre-training -- 3.4 Meta-training -- 4 Experiments -- 4.1 Datasets and Setup -- 4.2 Main Results -- 4.3 Ablation Study -- 4.4 Further Analysis -- 5 Conclusions -- References -- Time-rEversed DiffusioN tEnsor Transformer: A New TENET of Few-Shot Object Detection -- 1 Introduction -- 2 Related Works -- 3 Background -- 4 Proposed Approach -- 4.1 Extracting Representations for Support and Query RoIs -- 4.2 Transformer Relation Head -- 4.3 Pipeline Details (Fig.1 (Bottom)) -- 5 Experiments -- 5.1 Comparisons with the State of the Art -- 5.2 Hyper-parameter and Ablation Analysis -- 6 Conclusions -- References -- Self-Promoted Supervision for Few-Shot Transformer -- 1 Introduction -- 2 Related Work -- 3 Empirical Study of ViTs for Few-Shot Classification -- 4 Self-Promoted Supervision for Few-Shot Classification -- 4.1 Meta-Training -- 4.2 Meta-Tuning -- 5 Experiments -- 5.1 Comparison on Different ViTs -- 5.2 Comparison Among Different Few-shot Learning Frameworks -- 5.3 Comparison with State-of-The-Arts -- 5.4 Ablation Study -- 6 Conclusion -- References -- Few-Shot Object Counting and Detection -- 1 Introduction -- 2 Related Work -- 3 Proposed Approach -- 3.1 Feature Extraction and Feature Aggregation -- 3.2 The Encoder-Decoder Transformer -- 3.3 The Two-Stage Training Strategy.
4 New Datasets for Few-Shot Counting and Detection.
Authorized title: | Computer Vision – ECCV 2022
ISBN: | 3-031-20044-6 |
Format: | Printed material
Bibliographic level: | Monograph
Language of publication: | English
Record no.: | 9910619268003321
Held at: | Univ. Federico II