Face Analysis under Uncontrolled Conditions : From Face Detection to Expression Recognition
Author Belmonte Romain
Publication/distribution Newark : John Wiley & Sons, Incorporated, 2022
Physical description 1 online resource (312 pages)
Other authors (Persons) Allaert Benjamin
Subject genre/form Electronic books.
ISBN 1-394-17385-7
1-394-17383-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover -- Title Page -- Copyright Page -- Contents -- Preface -- Part 1. Facial Landmark Detection -- Introduction to Part 1 -- Chapter 1. Facial Landmark Detection -- 1.1. Facial landmark detection in still images -- 1.1.1. Generative approaches -- 1.1.2. Discriminative approaches -- 1.1.3. Deep learning approaches -- 1.1.4. Handling challenges -- 1.1.5. Summary -- 1.2. Extending facial landmark detection to videos -- 1.2.1. Tracking by detection -- 1.2.2. Box, landmark and pose tracking -- 1.2.3. Adaptive approaches -- 1.2.4. Joint approaches -- 1.2.5. Temporal constrained approaches -- 1.2.6. Summary -- 1.3. Discussion -- 1.4. References -- Chapter 2. Effectiveness of Facial Landmark Detection -- 2.1. Overview -- 2.2. Datasets and evaluation metrics -- 2.2.1. Image and video datasets -- 2.2.2. Face preprocessing and data augmentation -- 2.2.3. Evaluation metrics -- 2.2.4. Summary -- 2.3. Image and video benchmarks -- 2.3.1. Compiled results on 300W -- 2.3.2. Compiled results on 300VW -- 2.4. Cross-dataset benchmark -- 2.4.1. Evaluation protocol -- 2.4.2. Comparison of selected approaches -- 2.5. Discussion -- 2.6. References -- Chapter 3. Facial Landmark Detection with Spatio-temporal Modeling -- 3.1. Overview -- 3.2. Spatio-temporal modeling review -- 3.2.1. Hand-crafted approaches -- 3.2.2. Deep learning approaches -- 3.2.3. Summary -- 3.3. Architecture design -- 3.3.1. Coordinate regression networks -- 3.3.2. Heatmap regression networks -- 3.4. Experiments -- 3.4.1. Datasets and evaluation protocols -- 3.4.2. Implementation details -- 3.4.3. Evaluation on SNaP-2DFe -- 3.4.4. Evaluation on 300VW -- 3.4.5. Comparison with existing models -- 3.4.6. Qualitative results -- 3.4.7. Properties of the networks -- 3.5. Design investigations -- 3.5.1. Encoder-decoder -- 3.5.2. Complementarity between spatial and temporal information.
3.5.3. Complementarity between local and global motion -- 3.6. Discussion -- 3.7. References -- Conclusion to Part 1 -- Part 2. Facial Expression Analysis -- Introduction to Part 2 -- Chapter 4. Extraction of Facial Features -- 4.1. Introduction -- 4.2. Face detection -- 4.2.1. Point-of-interest detection algorithms -- 4.2.2. Face alignment approaches -- 4.2.3. Synthesis -- 4.3. Face normalization -- 4.3.1. Dealing with head pose variations -- 4.3.2. Dealing with facial occlusions -- 4.3.3. Synthesis -- 4.4. Extraction of visual features -- 4.4.1. Facial appearance features -- 4.4.2. Facial geometric features -- 4.4.3. Facial dynamics features -- 4.4.4. Facial segmentation models -- 4.4.5. Synthesis -- 4.5. Learning methods -- 4.5.1. Classification versus regression -- 4.5.2. Fusion model -- 4.5.3. Synthesis -- 4.6. Conclusion -- 4.7. References -- Chapter 5. Facial Expression Modeling -- 5.1. Introduction -- 5.2. Modeling of the affective state -- 5.2.1. Categorical modeling -- 5.2.2. Dimensional modeling -- 5.2.3. Synthesis -- 5.3. The challenges of facial expression recognition -- 5.3.1. The variation of the intensity of the expressions -- 5.3.2. Variation of facial movement -- 5.3.3. Synthesis -- 5.4. The learning databases -- 5.4.1. Improvement of learning data -- 5.4.2. Comparison of learning databases -- 5.4.3. Synthesis -- 5.5. Invariance to facial expression intensities -- 5.5.1. Macro-expression -- 5.5.2. Micro-expression -- 5.5.3. Synthesis -- 5.6. Invariance to facial movements -- 5.6.1. Pose variations (PV) and large displacements (LD) -- 5.6.2. Synthesis -- 5.7. Conclusion -- 5.8. References -- Chapter 6. Facial Motion Characteristics -- 6.1. Introduction -- 6.2. Characteristics of the facial movement -- 6.2.1. Local constraint of magnitude and direction -- 6.2.2. Local constraint of the motion distribution.
6.2.3. Motion propagation constraint -- 6.3. LMP -- 6.3.1. Local consistency of the movement -- 6.3.2. Consistency of local distribution -- 6.3.3. Coherence in the propagation of the movement -- 6.4. Conclusion -- 6.5. References -- Chapter 7. Micro- and Macro-Expression Analysis -- 7.1. Introduction -- 7.2. Definition of a facial segmentation model -- 7.3. Feature vector construction -- 7.3.1. Motion features vector -- 7.3.2. Geometric features vector -- 7.3.3. Features fusion -- 7.4. Recognition process -- 7.5. Evaluation on micro- and macro-expressions -- 7.5.1. Learning databases -- 7.5.2. Micro-expression recognition -- 7.5.3. Macro-expressions recognition -- 7.5.4. Synthesis of experiments on micro- and macro-expressions -- 7.6. Same expression with different intensities -- 7.6.1. Data preparation -- 7.6.2. Fractional time analysis -- 7.6.3. Analysis on a different time frame -- 7.6.4. Synthesis of experiments on activation segments -- 7.7. Conclusion -- 7.8. References -- Chapter 8. Towards Adaptation to Head Pose Variations -- 8.1. Introduction -- 8.2. Learning database challenges -- 8.3. Innovative acquisition system (SNaP-2DFe) -- 8.4. Evaluation of face normalization methods -- 8.4.1. Does the normalization preserve the facial geometry? -- 8.4.2. Does normalization preserve facial expressions? -- 8.5. Conclusion -- 8.6. References -- Conclusion to Part 2 -- List of Authors -- Index -- EULA.
Record no. UNINA-9910595599403321
Find it here: Univ. Federico II
Face analysis under uncontrolled conditions : from face detection to expression recognition / Romain Belmonte and Benjamin Allaert
Author Belmonte Romain
Publication/distribution Hoboken, NJ : John Wiley & Sons, Inc., [2022]
Physical description 1 online resource (312 pages)
Discipline 006.42
Topical subject Human face recognition (Computer science)
Image processing
ISBN 1-394-17385-7
1-394-17383-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record no. UNINA-9910643860303321
Find it here: Univ. Federico II
Face analysis under uncontrolled conditions : from face detection to expression recognition / Romain Belmonte and Benjamin Allaert
Author Belmonte Romain
Publication/distribution Hoboken, NJ : John Wiley & Sons, Inc., [2022]
Physical description 1 online resource (312 pages)
Discipline 006.42
Series Sciences. Image. Information seeking in images and videos
Topical subject Human face recognition (Computer science)
ISBN 1-394-17385-7
1-394-17383-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record no. UNINA-9910830441903321
Find it here: Univ. Federico II