
Record No.

UNISA996495567903316

Title

Computer vision - ECCV 2022. Part XII : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022 : proceedings / Shai Avidan [and four others]

Publication/distribution/printing

Cham, Switzerland : Springer, [2022]

©2022

ISBN

3-031-19775-5

Physical description

1 online resource (813 pages)

Series

Lecture Notes in Computer Science

Discipline

006.4

Subjects

Pattern recognition systems

Computer vision

Language of publication

English

Format

Electronic resource

Bibliographic level

Monograph

Contents note

Intro -- Foreword -- Preface -- Organization -- Contents - Part XII -- Explicit Model Size Control and Relaxation via Smooth Regularization for Mixed-Precision Quantization -- 1 Introduction -- 2 Related Work -- 3 Motivation and Preliminaries -- 4 Methodology -- 4.1 Explicit Model Size Control Using Surfaces of Constant-Size Neural Networks -- 4.2 Smooth Bounded Regularization as a Booster for Quantized Training -- 4.3 Regularizers for Bit-Width Stabilization -- 4.4 Algorithm Overview -- 5 Experiments -- 5.1 Ablation Study -- 5.2 Comparison with Existing Studies -- 6 Conclusions -- References -- BASQ: Branch-wise Activation-clipping Search Quantization for Sub-4-bit Neural Networks -- 1 Introduction -- 2 Related Works -- 2.1 Low-bit Quantization -- 2.2 Neural Architecture Search -- 3 Preliminary -- 4 Branch-wise Activation-clipping Search Quantization -- 4.1 Search Space Design -- 4.2 Search Strategy -- 5 Block Structure for Low-bit Quantization -- 5.1 New Building Block -- 5.2 Flexconn: A Flexible Block Skip Connection for Fully Skip-Connected Layers -- 6 Experiments -- 6.1 Evaluation with MobileNet-v2 and MobileNet-v1 -- 6.2 Evaluation with ResNet-18 -- 6.3 Ablation Study -- 7 Conclusion -- References -- You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding -- 1 Introduction -- 2 Background -- 2.1 Rounding Schemes -- 2.2 Stochastic Rounding in Low-Precision DNN Training -- 3 A Generator-Free Framework for Low-Precision DNN Training -- 3.1 Framework Overview -- 3.2 Source of Random Numbers -- 3.3 Methods of Random Number Extraction -- 3.4 Obtaining Random Numbers with Arbitrary Distribution -- 4 Results -- 4.1 Experiments Setup -- 4.2 Accuracy of Low-precision DNN Training -- 4.3 Comparison of Different Extraction Sources -- 4.4 Randomness Tests and Hardware Saving.

4.5 Discussion and Future Works -- 5 Conclusion -- References -- Real Spike: Learning Real-Valued Spikes for Spiking Neural Networks -- 1 Introduction -- 2 Related Work -- 2.1 Learning Algorithms of SNNs -- 2.2 SNN Programming Frameworks -- 2.3 Convolutions -- 3 Materials and Methodology -- 3.1 Explicitly Iterative Leaky Integrate-and-Fire Model -- 3.2 Real Spike -- 3.3 Re-parameterization -- 3.4 Analysis and Discussions -- 3.5 Extensions of Real Spike -- 4 Experiment -- 4.1 Ablation Study for Real Spike -- 4.2 Ablation Study for Extensions of Real Spike -- 4.3 Comparison with the State-of-the-Art -- 5 Conclusions -- References -- FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks -- 1 Introduction -- 2 Related Work -- 2.1 Performance on Heterogeneous (Non-IID) Data -- 2.2 Personalization -- 2.3 Communication Cost -- 2.4 Lottery Ticket Hypothesis -- 3 FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks -- 3.1 Personalization -- 3.2 Smaller Memory Footprint/Faster Pruning -- 4 Experiments -- 4.1 Experiment Setup -- 4.2 Evaluation -- 5 Conclusion -- References -- Theoretical Understanding of the Information Flow on Continual Learning Performance -- 1 Introduction -- 2 Problem Formulation -- 3 Continual Learning Performance Study -- 3.1 Performance Analysis -- 3.2 Forgetting Analysis -- 3.3 A Bound on EOt Using Optimal Freezing Mask -- 3.4 Key Components to Prove Theorems -- 4 Related Work -- 5 Experimental Evidence -- 5.1 How Do Task Sensitive Layers Affect Performance? -- 6 Discussion -- 7 Conclusion -- References -- Exploring Lottery Ticket Hypothesis in Spiking Neural Networks -- 1 Introduction -- 2 Related Work -- 2.1 Spiking Neural Networks -- 2.2 Lottery Ticket Hypothesis -- 3 Drawing Winning Tickets from SNN -- 3.1 Lottery Ticket Hypothesis -- 3.2 Early-Bird Tickets -- 3.3 Early-Time Tickets.

4 Experimental Results -- 4.1 Implementation Details -- 4.2 Winning Tickets in SNNs -- 4.3 Transferred Winning Tickets from ANN -- 4.4 Finding Winning Tickets from Initialization -- 4.5 Performance Comparison with Previous Works -- 4.6 Observation on the Number of Spikes -- 5 Conclusion -- References -- On the Angular Update and Hyperparameter Tuning of a Scale-Invariant Network -- 1 Introduction -- 2 Preliminaries -- 3 Common Feature of Good Hyperparameter Combinations -- 3.1 Measuring Updates of a Scale-Invariant Network -- 3.2 Observation -- 4 Angular Update -- 5 Experiments -- 5.1 Scale-Invariant Network -- 5.2 Unmodified Network -- 6 Efficient Hyperparameter Search -- 7 Conclusion -- References -- LANA: Latency Aware Network Acceleration -- 1 Introduction -- 1.1 Related Work -- 2 Method -- 2.1 Candidate Pretraining Phase -- 2.2 Operation Selection Phase -- 3 Experiments -- 3.1 EfficientNet and ResNeST Derivatives -- 3.2 Analysis -- 4 Conclusion -- References -- RDO-Q: Extremely Fine-Grained Channel-Wise Quantization via Rate-Distortion Optimization -- 1 Introduction -- 2 Related Works -- 3 Approach -- 3.1 Formulation -- 3.2 Optimizing Channel-Wise Bit Allocation -- 3.3 Choice of Quantizer -- 3.4 Improving Performance on Hardware -- 3.5 Discussion -- 4 Experiments -- 4.1 Parameter Settings -- 4.2 Quantization Results -- 4.3 Performance on Hardware Platforms -- 4.4 Time Cost -- 4.5 Distributions of Bit Rate Across Layers -- 4.6 Discussion of Additivity Property -- 4.7 Implementation Details of Optimization -- 5 Conclusion -- References -- U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture Search -- 1 Introduction -- 2 Related Work -- 3 Modeling Resource Utilization in Inference Platforms -- 3.1 Dataflows on Hardware Accelerators -- 3.2 Proposed Utilization Model -- 4 Proposed NAS Framework.

4.1 Approximation of the Utilization Function -- 4.2 Multi-objective Loss Function -- 4.3 NAS Algorithm -- 5 Experiments -- 5.1 CIFAR10 Experiments -- 5.2 ImageNet100 Experiments -- 6 Conclusion -- References -- PTQ4ViT: Post-training Quantization for Vision Transformers with Twin Uniform Quantization -- 1 Introduction -- 2 Background and Related Work -- 2.1 Vision Transformer -- 2.2 Quantization -- 3 Method -- 3.1 Base PTQ for Vision Transformer -- 3.2 Twin Uniform Quantization -- 3.3 Hessian Guided Metric -- 3.4 PTQ4ViT Framework -- 4 Experiments -- 4.1 Experiment Settings -- 4.2 Results on ImageNet Classification Task -- 4.3 Ablation Study -- 5 Conclusion -- A Appendix -- A.1 Derivation of Hessian guided metric -- References -- Bitwidth-Adaptive Quantization-Aware Neural Network Training: A Meta-Learning Approach -- 1 Introduction -- 2 Related Work -- 3 Meta-Learning Based Bitwidth-Adaptive QAT -- 3.1 Bitwidth Adaptation Scenario -- 3.2 Bitwidth-Class Joint Adaptation Scenario -- 3.3 Implementation -- 4 Evaluation -- 4.1 Experiments on the Bitwidth Adaptation Scenario -- 4.2 Experiments on the Bitwidth-Class Joint Adaptation Scenario -- 5 Discussion -- 6 Conclusion -- References -- Understanding the Dynamics of DNNs Using Graph Modularity -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Preliminary -- 3.2 Dynamic Graph Construction -- 4 Experiments -- 4.1 Understanding the Evolution of Communities -- 4.2 Modularity Curves of Adversarial Samples -- 4.3 Modularity During Training Time -- 4.4 Ablation Study -- 5 Application Scenarios of Modularity -- 5.1 Representing the Difference of Various Layers -- 5.2 Guiding Layer Pruning with Modularity -- 6 Conclusion and Future Work -- References -- Latent Discriminant Deterministic Uncertainty -- 1 Introduction -- 2 Related Work -- 2.1 Uncertainty Estimation for Computer Vision Tasks.

2.2 Prototype Learning in DNNs -- 3 Latent Discriminant Deterministic Uncertainty (LDU) -- 3.1 DUM Preliminaries -- 3.2 Discriminant Latent Space -- 3.3 LDU Optimization -- 3.4 Addressing Feature Collapse -- 3.5 LDU And Epistemic/Aleatoric Uncertainty -- 4 Experiments -- 4.1 Classification Experiments -- 4.2 Semantic Segmentation Experiments -- 4.3 Monocular Depth Experiments -- 5 Discussions and Conclusions -- References -- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Counterfactual Problem Formulation: Preliminaries -- 3.2 Counterfactuals with a Semantic Consistency Constraint -- 3.3 Using Multiple Distractor Images Through a Semantic Constraint -- 4 Experiments -- 4.1 Implementation Details and Evaluation Setup -- 4.2 State-of-the-Art Comparison -- 4.3 Ablation Studies -- 4.4 Online Evaluation Through Machine Teaching -- 5 Towards Language-Based Counterfactual Explanations -- 6 Conclusion and Future Work -- References -- HIVE: Evaluating the Human Interpretability of Visual Explanations -- 1 Introduction -- 2 Related Work -- 3 HIVE Design Principles -- 3.1 Falsifiable Hypothesis Testing -- 3.2 Cross-method Comparison -- 3.3 Human-Centered Evaluation -- 3.4 Generalizability and Scalability -- 4 HIVE Study Design -- 5 Experiments -- 5.1 Experimental Details -- 5.2 The Issue of Confirmation Bias -- 5.3 Objective Assessment of Interpretability -- 5.4 A Closer Examination of Prototype-Based Models -- 5.5 Subjective Evaluation Of Interpretability -- 5.6 Interpretability-Accuracy Tradeoff -- 6 Conclusion -- References -- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks -- 1 Introduction -- 2 Related Works -- 3 Methodology: BayesCap - Bayesian Identity Cap -- 3.1 Problem Formulation -- 3.2 Preliminaries: Uncertainty Estimation.

3.3 Constructing BayesCap.