Title: | Computer Vision – ECCV 2022. Part XIX : 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings / Shai Avidan [and four others]
Publication: | Cham, Switzerland : Springer, [2022]
©2022
Physical description: | 1 online resource (817 pages)
Discipline (Dewey): | 006.4
Topical subject: | Pattern recognition systems
Computer vision
Person (secondary responsibility): | Avidan, Shai
Contents note: | Intro -- Foreword -- Preface -- Organization -- Contents - Part XIX
Learning Mutual Modulation for Self-supervised Cross-Modal Super-Resolution -- 1 Introduction -- 2 Related Work -- 2.1 Cross-Modal SR -- 2.2 Modulation Networks -- 2.3 Image Filtering -- 2.4 Cycle-Consistent Learning -- 3 Method -- 3.1 Mutual Modulation -- 3.2 Cycle-Consistent Self-supervised Learning -- 4 Experiments -- 4.1 Experimental Settings -- 4.2 Evaluation on Benchmark Depth Data -- 4.3 Ablation Study -- 4.4 Validation on Real-World DEM and Thermal -- 5 Conclusions -- References
Spectrum-Aware and Transferable Architecture Search for Hyperspectral Image Restoration -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Spectrum-Aware Search Space -- 3.2 Noise Level Independent Search Algorithm -- 3.3 Difference with Existing Works -- 4 Experiments -- 4.1 Datasets and Implementation Details -- 4.2 Denoising Results -- 4.3 Imaging Reconstruction Results -- 4.4 Understanding of STAS for HSIs -- 4.5 Transferability -- 5 Conclusions -- References
Neural Color Operators for Sequential Image Retouching -- 1 Introduction -- 2 Related Works -- 3 Neural Color Operator -- 4 Automatic Sequential Image Retouching -- 4.1 Problem Setup -- 4.2 Strength Predictor -- 4.3 Loss Function and Training -- 5 Experiments -- 5.1 Comparison and Results -- 5.2 Ablation Study -- 6 Conclusion and Limitations -- References
Optimizing Image Compression via Joint Learning with Denoising -- 1 Introduction -- 2 Related Work -- 2.1 Image Denoising -- 2.2 Lossy Image Compression -- 2.3 Joint Solutions -- 3 Problem Specification -- 3.1 Selection of Datasets -- 3.2 Noise Synthesis -- 4 Method -- 4.1 Network Design -- 4.2 Rate-Distortion Optimization -- 4.3 Training Strategy -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 Rate-Distortion Performance -- 5.3 Efficiency Performance -- 5.4 Qualitative Results -- 6 Conclusion -- References
Restore Globally, Refine Locally: A Mask-Guided Scheme to Accelerate Super-Resolution Networks -- 1 Introduction -- 2 Related Work -- 2.1 Single Image Super-Resolution -- 2.2 Lightweight Super-Resolution -- 2.3 Accelerating Super-Resolution Networks -- 3 Our Mask-Guided Acceleration Scheme -- 3.1 Motivation and Overview -- 3.2 Base Network for Global Super-Resolution -- 3.3 Mask Prediction and Feature Patch Selection -- 3.4 Refine Network for Local Enhancement -- 3.5 Training Strategy -- 4 Experiments -- 4.1 Implementation Details -- 4.2 Datasets and Metrics -- 4.3 Comparison Results -- 4.4 Ablation Study -- 5 Conclusion -- References
Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution -- 1 Introduction -- 2 Related Work -- 3 Motivation and Challenges -- 4 Our Method -- 4.1 Framework with Adaptive SR Block -- 4.2 Per-Layer Width Search with Mask Layer for C1 and C2 -- 4.3 Speed Prediction with Speed Model for C3 -- 4.4 Depth Search with Aggregation Layer for C4 -- 4.5 Training Loss -- 5 Compiler Awareness with Speed Model -- 6 Experiments -- 6.1 Experimental Settings -- 6.2 Experimental Results -- 6.3 Ablation Study -- 7 Conclusion -- References
Modeling Mask Uncertainty in Hyperspectral Image Reconstruction -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Preliminaries -- 3.2 Mask Uncertainty -- 3.3 Graph-Based Self-tuning Network -- 3.4 Bilevel Optimization -- 4 Experiments -- 4.1 HSI Reconstruction Performance -- 4.2 Model Discussion -- 5 Conclusions -- References
Perceiving and Modeling Density for Image Dehazing -- 1 Introduction -- 2 Related Works -- 2.1 Single Image Dehazing -- 2.2 Attention Mechanism -- 3 Proposed Methods -- 3.1 Implicit Perception of Haze Density - Separable Hybrid Attention Mechanism and Its Variants -- 3.2 Shallow Layers -- 3.3 Explicit Model of Haze Density - Haze Density Encoding Matrix -- 3.4 Deep Layers -- 4 Experiments -- 4.1 Datasets and Metrics -- 4.2 Loss Function -- 4.3 Ablation Study -- 5 Compare with SOTA Methods -- 5.1 Implementation Details -- 5.2 Qualitative and Quantitative Results on Benchmarks -- 6 Conclusion -- References
Stripformer: Strip Transformer for Fast Image Deblurring -- 1 Introduction -- 2 Related Work -- 3 Proposed Method -- 3.1 Feature Embedding Block (FEB) -- 3.2 Intra-SA and Inter-SA Blocks -- 3.3 Loss Function -- 4 Experiments -- 4.1 Datasets and Implementation Details -- 4.2 Experimental Results -- 4.3 Ablation Studies -- 5 Conclusions -- References
Deep Fourier-Based Exposure Correction Network with Spatial-Frequency Interaction -- 1 Introduction -- 2 Related Work -- 2.1 Exposure Correction -- 2.2 Fourier Transform in Neural Networks -- 3 Method -- 3.1 Motivation and Background -- 3.2 Deep Fourier-Based Exposure Correction Network -- 3.3 Spatial-Frequency Interaction Block -- 4 Experiment -- 4.1 Settings -- 4.2 Performance Evaluation -- 4.3 Ablation Studies -- 4.4 Extensions on Other Image Enhancement Tasks -- 5 Conclusion -- References
Frequency and Spatial Dual Guidance for Image Dehazing -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Motivation -- 3.2 Frequency and Spatial Dual-Guidance Network -- 3.3 Amplitude Guided Phase Module -- 3.4 Global Guided Local Module -- 3.5 Frequency and Spatial Dual Supervision Losses -- 4 Experiments -- 4.1 Experiment Setup -- 4.2 Comparison with State-of-the-Art Methods -- 4.3 Ablation Studies -- 5 Conclusion -- References
Towards Real-World HDRTV Reconstruction: A Data Synthesis-Based Approach -- 1 Introduction -- 2 Related Work -- 3 Motivation -- 4 Learning-Based SDRTV Data Synthesis -- 4.1 Conditioned Two-Stream Network -- 4.2 Hybrid Tone Mapping Prior Loss -- 4.3 Adversarial Loss -- 5 Experimental Results -- 5.1 Experimental Settings -- 5.2 Generalize to Labeled Real-World SDRTVs -- 5.3 Generalize to Unlabeled Real-World SDRTVs -- 5.4 The Quality of Synthesized SDRTVs -- 5.5 Ablation -- 6 Conclusion -- References
Learning Discriminative Shrinkage Deep Networks for Image Deconvolution -- 1 Introduction -- 2 Related Work -- 3 Revisiting Deep Unrolling-Based Methods -- 4 Proposed Method -- 4.1 Network Architecture -- 5 Experimental Results -- 5.1 Datasets and Implementation Details -- 5.2 Quantitative Evaluation -- 5.3 Qualitative Evaluation -- 5.4 Ablation Study -- 5.5 Execution Time Analysis -- 5.6 Limitations -- 6 Conclusion -- References
KXNet: A Model-Driven Deep Neural Network for Blind Super-Resolution -- 1 Introduction -- 2 Related Work -- 2.1 Non-Blind Single Image Super-Resolution -- 2.2 Blind Single Image Super-Resolution -- 3 Blind Single Image Super-Resolution Model -- 3.1 Model Formulation -- 3.2 Model Optimization -- 4 Blind Super-Resolution Unfolding Network -- 4.1 Network Module Design -- 4.2 Network Training -- 5 Experimental Results -- 5.1 Details Descriptions -- 5.2 Experiments on Synthetic Data -- 5.3 More Analysis and Verification -- 5.4 Inference Speed -- 5.5 Experiments on Real Images -- 6 Conclusion -- References
ARM: Any-Time Super-Resolution Method -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Preliminary -- 3.2 ARM Supernet Training -- 3.3 ARM Supernet Inference -- 4 Experiments -- 4.1 Settings -- 4.2 Main Results -- 4.3 Computation Cost Analysis -- 4.4 Ablation Study -- 5 Conclusion -- References
Attention-Aware Learning for Hyperparameter Prediction in Image Processing Pipelines -- 1 Introduction -- 2 Related Work -- 3 Image Processing Pipelines -- 4 Method -- 4.1 Framework -- 4.2 Attention-aware Parameter Prediction Network -- 5 Experiments -- 5.1 Settings -- 5.2 ISP Hyperparameter Prediction for Object Detection -- 5.3 ISP Hyperparameter Prediction for Image Segmentation -- 5.4 ISP Hyperparameter Prediction for Human Viewing -- 5.5 Ablation Study -- 6 Conclusions and Future Work -- References
RealFlow: EM-Based Realistic Optical Flow Dataset Generation from Videos -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 RealFlow Framework -- 3.2 Realistic Image Pair Rendering -- 4 Experiments -- 4.1 Datasets -- 4.2 Implementation Details -- 4.3 Comparison with Existing Methods -- 4.4 Ablation Study -- 5 Conclusions -- References
Memory-Augmented Model-Driven Network for Pansharpening -- 1 Introduction -- 2 Related Work -- 2.1 Traditional Methods -- 2.2 Deep Learning Based Methods -- 3 Proposed Approach -- 3.1 Model Formulation -- 3.2 Memory-Augmented Model-Driven Network -- 3.3 Network Training -- 4 Experiments -- 4.1 Datasets and Evaluation Metrics -- 4.2 Implementation Details -- 4.3 Comparison with SOTA Methods -- 4.4 Ablation Study -- 5 Conclusions -- References
All You Need Is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines -- 1 Introduction -- 2 Related Work -- 2.1 Camera Image Signal Processing (ISP) Pipeline -- 2.2 Adversarial Attack Methods -- 2.3 Defense Methods -- 3 Sensor Image Formation -- 4 Raw Image Domain Defense -- 4.1 F Operator: Image-to-RAW Mapping -- 4.2 G Operator: Learned ISP -- 4.3 S Operator: Conventional ISP -- 4.4 Operator Training -- 5 Experiments and Analysis -- 5.1 Experimental Setup -- 5.2 Assessment -- 5.3 RAW Distribution Analysis -- 5.4 Robustness to Hyperparameter and Operator Deviations -- 6 Conclusion -- References
Ghost-free High Dynamic Range Imaging with Context-Aware Transformer -- 1 Introduction -- 2 Related Work -- 2.1 HDR Deghosting Algorithms -- 2.2 Vision Transformers.
Authorized title: | Computer Vision – ECCV 2022
ISBN: | 3-031-19800-X |
Format: | Printed material
Bibliographic level: | Monograph
Publication language: | English
Record no.: | 996500065403316
Held by: | Univ. di Salerno
OPAC: | Check availability here