MoCap for artists : workflow and techniques for motion capture / Midori Kitagawa and Brian Windsor
Author Kitagawa, Midori <1963- >
Edition [1st edition]
Published Amsterdam ; Boston : Elsevier/Focal Press, 2008
Physical description 1 online resource (231 p.)
Classification 006.6/96
Other authors (persons) Windsor, Brian
Topical subjects Computer animation
Motion - Computer simulation
Three-dimensional imaging
Genre/form subject Electronic books.
ISBN 1-136-13965-6
1-136-13966-4
1-281-30730-0
9786611307301
0-08-087794-X
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note MoCap for Artists: Workflow and Techniques for Motion Capture; Copyright; Contents; Acknowledgments; Introduction; Chapter 1: An Overview and History of Motion Capture; 1.1 About This Book; 1.2 History of Mocap; 1.2.1 Early attempts; 1.2.2 Rotoscoping; 1.2.3 Beginning of digital mocap; 1.3 Types of Mocap; 1.3.1 Optical mocap systems; 1.3.2 Magnetic mocap systems; 1.3.3 Mechanical mocap systems; Chapter 2: Preproduction; 2.1 Importance of Preproduction; 2.2 Pre-capture Planning; 2.2.1 Script; 2.2.2 Storyboard; 2.2.3 Shot list; 2.2.4 Animatic; 2.3 Preparation for Capture; 2.3.1 Talent
2.3.2 Marker sets; 2.3.2.1 What are the system limitations?; 2.3.2.2 What kind of motion will be captured?; 2.3.2.3 Know the anatomy; 2.3.3 Capture volume; 2.3.4 Shot list; 2.3.5 Capture schedule; 2.3.6 Rehearsals; 2.3.7 Props; 2.3.8 Suits and markers; Chapter 3: Pipeline; 3.1 Setting up a Skeleton for a 3D Character; 3.2 Calibrations; 3.2.1 System calibration; 3.2.2 Subject calibration; 3.3 Capture Sessions; 3.3.1 Audio and video references; 3.3.2 Organization; 3.3.3 Preventing occlusions; 3.4 Cleaning Data; 3.5 Editing Data; 3.6 Applying Motions to a 3D Character
3.7 Rendering and Post-production; Chapter 4: Cleaning and Editing Data; 4.1 Cleaning Marker Data; 4.1.1 Types of data; 4.1.1.1 Optical marker data (translational data); 4.1.1.2 Translational and rotational data; 4.1.1.3 Skeletal data; 4.1.2 What to clean and what not?; 4.1.2.1 What not to clean?; 4.1.2.2 What to clean?; 4.1.3 Labeling/identifying; 4.1.4 Data cleaning methods; 4.1.4.1 Eliminating gaps; 4.1.4.2 Eliminating spikes; 4.1.4.3 Rigid body; 4.1.4.4 Filters; 4.1.5 When to stop?; 4.2 Applying Marker Data to the Skeleton; 4.2.1 Actor; 4.2.2 Skeleton; 4.2.3 Character
Chapter 5: Skeletal Editing; 5.1 Retargeting; 5.1.1 Reducing need for retargeting; 5.1.2 Scaling a skeleton; 5.1.3 Fixing foot sliding; 5.1.4 Working on the spine; 5.2 Blending Motions; 5.2.1 Selecting a blending point; 5.2.2 Matching positions; 5.2.3 Dealing with less than ideal cases; 5.3 Inverse Kinematics; 5.4 Floor Contact; 5.5 Rigid Body; 5.6 Looping Motion; 5.6.1 Getting motion ready; 5.6.2 Setting up the loop; 5.6.2.1 Walking down the z-axis; 5.6.2.2 Taking out the translation; 5.7 Poses; 5.7.1 Deciding what to use; 5.7.2 Creating a pose; 5.7.3 Key-framing a pose
Chapter 6: Data Application - Intro Level: Props; 6.1 A Stick with Two Markers; 6.1.1 When it fails: Occlusion; 6.1.2 When it fails: Rotation; 6.2 A Stick with Three Markers; 6.2.1 Three markers with equal distances; 6.2.2 Three markers on a single straight line; 6.2.3 Placement of three markers that works; 6.3 Flexible Objects; Chapter 7: Data Application - Intermediate Level: Decomposing and Composing Motions; 7.1 Mapping Multiple Motions; 7.1.1 Decomposing and composing upper and lower body motions; 7.1.2 Synchronizing upper and lower body motions; 7.2 Balance; 7.3 Breaking Motion Apart
7.3.1 When you don't need all the motion
Record no. UNINA-9910451104603321
Available at: Univ. Federico II
Opac: Check availability here
Record no. UNINA-9910784829503321 (duplicate catalogue record; description identical to the record above)
Available at: Univ. Federico II
Opac: Check availability here
Record no. UNINA-9910799978403321 (duplicate catalogue record; description identical to the record above)
Available at: Univ. Federico II
Opac: Check availability here
Record no. UNINA-9910810096303321 (duplicate catalogue record; identical to the record above, with the additional classification 006.696)
Available at: Univ. Federico II
Opac: Check availability here
Modern machine learning techniques and their applications in cartoon animation research / Jun Yu, Dacheng Tao
Author Yu, Jun
Published Piscataway, New Jersey : IEEE Press, c2013
Physical description 1 online resource (210 p.)
Classification 006.6/96
006.696
Other authors (persons) Tao, Dacheng <1978->
Series IEEE Press Series on Systems Science and Engineering
Topical subjects Computer animation
Machine learning
ISBN 1-299-44909-3
1-118-55998-3
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Preface xi -- 1 Introduction 1 -- 1.1 Perception 2 -- 1.2 Overview of Machine Learning Techniques 2 -- 1.2.1 Manifold Learning 3 -- 1.2.2 Semi-supervised Learning 5 -- 1.2.3 Multiview Learning 8 -- 1.2.4 Learning-based Optimization 9 -- 1.3 Recent Developments in Computer Animation 11 -- 1.3.1 Example-Based Motion Reuse 11 -- 1.3.2 Physically Based Computer Animation 26 -- 1.3.3 Computer-Assisted Cartoon Animation 33 -- 1.3.4 Crowd Animation 42 -- 1.3.5 Facial Animation 51 -- 1.4 Chapter Summary 60 -- 2 Modern Machine Learning Techniques 63 -- 2.1 A Unified Framework for Manifold Learning 65 -- 2.1.1 Framework Introduction 65 -- 2.1.2 Various Manifold Learning Algorithm Unifying 67 -- 2.1.3 Discriminative Locality Alignment 69 -- 2.1.4 Discussions 71 -- 2.2 Spectral Clustering and Graph Cut 71 -- 2.2.1 Spectral Clustering 72 -- 2.2.2 Graph Cut Approximation 76 -- 2.3 Ensemble Manifold Learning 81 -- 2.3.1 Motivation for EMR 81 -- 2.3.2 Overview of EMR 81 -- 2.3.3 Applications of EMR 84 -- 2.4 Multiple Kernel Learning 86 -- 2.4.1 A Unified Multiple Kernel Learning Framework 87 -- 2.4.2 SVM with Multiple Unweighted-Sum Kernels 89 -- 2.4.3 QCQP Multiple Kernel Learning 89 -- 2.5 Multiview Subspace Learning 90 -- 2.5.1 Approach Overview 90 -- 2.5.2 Technique Details 90 -- 2.5.3 Alternative Optimization Used in PA-MSL 93 -- 2.6 Multiview Distance Metric Learning 94 -- 2.6.1 Motivation for MDML 94 -- 2.6.2 Graph-Based Semi-supervised Learning 95 -- 2.6.3 Overview of MDML 95 -- 2.7 Multi-task Learning 98 -- 2.7.1 Introduction of Structural Learning 99 -- 2.7.2 Hypothesis Space Selection 100 -- 2.7.3 Algorithm for Multi-task Learning 101 -- 2.7.4 Solution by Alternative Optimization 102 -- 2.8 Chapter Summary 103 -- 3 Animation Research: A Brief Introduction 105 -- 3.1 Traditional Animation Production 107 -- 3.1.1 History of Traditional Animation Production 107 -- 3.1.2 Procedures of Animation Production 108 -- 3.1.3 Relationship Between Traditional 
Animation and Computer Animation 109.
3.2 Computer-Assisted Systems 110 -- 3.2.1 Computer Animation Techniques 111 -- 3.3 Cartoon Reuse Systems for Animation Synthesis 117 -- 3.3.1 Cartoon Texture for Animation Synthesis 118 -- 3.3.2 Cartoon Motion Reuse 120 -- 3.3.3 Motion Capture Data Reuse in Cartoon Characters 122 -- 3.4 Graphical Materials Reuse: More Examples 124 -- 3.4.1 Video Clips Reuse 124 -- 3.4.2 Motion Captured Data Reuse by Motion Texture 126 -- 3.4.3 Motion Capture Data Reuse by Motion Graph 127 -- 3.5 Chapter Summary 129 -- 4 Animation Research: Modern Techniques 131 -- 4.1 Automatic Cartoon Generation with Correspondence Construction 131 -- 4.1.1 Related Work in Correspondence Construction 132 -- 4.1.2 Introduction of the Semi-supervised Correspondence Construction 133 -- 4.1.3 Stroke Correspondence Construction via Stroke Reconstruction Algorithm 138 -- 4.1.4 Simulation Results 141 -- 4.2 Cartoon Characters Represented by Multiple Features 146 -- 4.2.1 Cartoon Character Extraction 147 -- 4.2.2 Color Histogram 148 -- 4.2.3 Hausdorff Edge Feature 148 -- 4.2.4 Motion Feature 150 -- 4.2.5 Skeleton Feature 151 -- 4.2.6 Complementary Characteristics of Multiview Features 153 -- 4.3 Graph-based Cartoon Clips Synthesis 154 -- 4.3.1 Graph Model Construction 155 -- 4.3.2 Distance Calculation 155 -- 4.3.3 Simulation Results 156 -- 4.4 Retrieval-based Cartoon Clips Synthesis 161 -- 4.4.1 Constrained Spreading Activation Network 162 -- 4.4.2 Semi-supervised Multiview Subspace Learning 165 -- 4.4.3 Simulation Results 168 -- 4.5 Chapter Summary 173 -- References 174 -- Index 195.
Record no. UNINA-9910139027603321
Available at: Univ. Federico II
Opac: Check availability here
Record no. UNISA-996202754503316 (duplicate catalogue record; description identical to the record above)
Available at: Univ. di Salerno
Opac: Check availability here
Record no. UNINA-9910830655403321 (duplicate catalogue record; description identical to the record above)
Available at: Univ. Federico II
Opac: Check availability here
MPEG-4 facial animation [electronic resource] : the standard, implementation and applications / edited by Igor S. Pandzic and Robert Forchheimer
Published Chichester ; Hoboken, NJ : J. Wiley, c2002
Physical description 1 online resource (329 p.)
Classification 006.6/96
621.388
Other authors (persons) Pandzic, Igor S.
Forchheimer, Robert
Topical subjects MPEG (Video coding standard)
Computer animation - Standards
Face perception - Data processing - Standards
Facial expression - Computer simulation - Standards
Digital video
ISBN 1-280-27007-1
9786610270071
0-470-33904-7
0-470-85461-8
0-470-85462-6
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note MPEG-4 Facial Animation: The Standard, Implementation and Applications; Contents; List of Contributors; Author Biographies; Foreword; Preface; PART 1 BACKGROUND; 1 The Origins of the MPEG-4 Facial Animation Standard; Abstract; 1.1 Introduction; 1.2 The Need for Parameterization; 1.3 The Ideal Parameterization; 1.4 Is MPEG-4 FA up to the Ideal?; 1.4.1 Conclusion; 1.5 Brief History of Facial Control Parameterization; 1.6 The Birth of the Standard; Acknowledgments; References; PART 2 THE STANDARD; 2 Face Animation in MPEG-4; Abstract; 2.1 Introduction; 2.2 Specification and Animation of Faces
2.2.1 MPEG-4 Face Model in Neutral State; 2.2.2 Face Animation Parameters; 2.2.3 Face Model Specification; 2.3 Coding of Face Animation Parameters; 2.3.1 Arithmetic Coding of FAPs; 2.3.2 DCT Coding of FAPs; 2.3.3 FAP Interpolation Tables; 2.4 Integration of Face Animation and Text-to-Speech Synthesis; 2.5 Integration with MPEG-4 Systems; 2.6 MPEG-4 Profiles for Face Animation; 2.7 Conclusion; References; Annex; 3 MPEG-4 Face Animation Conformance; 3.1 Introduction; 3.2 MPEG Conformance Principles; 3.3 MPEG-4 Profile Architecture; 3.4 The Minimum Face; 3.5 Graphics Profiles
3.6 Conformance Testing; 3.7 Summary; PART 3 IMPLEMENTATIONS; 4 MPEG-4 Facial Animation Framework for the Web and Mobile Applications; Abstract; 4.1 Introduction; 4.2 The Facial Animation Player; 4.3 Producing Animatable Face Models; 4.4 The Facial Motion Cloning Method; 4.4.1 Interpolation from 2-D Triangle Mesh; 4.4.2 Normalizing the Face; 4.4.3 Computing Facial Motion; 4.4.4 Aligning Source and Target Ace; 4.4.5 Mapping Facial Motion; 4.4.6 Antialiasing; 4.4.7 Treating the Lip Region; 4.4.8 Treating Eyes, Teeth, Tongue and Global Motion; 4.4.9 Facial Motion Cloning Results
4.5 Producing Facial Animation Content; 4.6 Conclusion; Acknowledgments; References; 5 The Facial Animation Engine; 5.1 Introduction; 5.2 The FAE Block Diagram; 5.3 The Face Model; 5.3.1 Mesh Geometry Description; 5.3.2 Mesh Semantics Description; 5.3.3 The Model Authoring Tool; 5.3.4 Sample Face Models; 5.4 The Mesh Animation Block; 5.4.1 Animation Results; 5.5 The Mesh Calibration Block; 5.5.1 Multilevel Calibration with RBF; 5.5.2 Calibration with Texture; 5.5.3 Calibration Results; 5.6 The Mesh Simplification Block; 5.6.1 Iterative Edge Contraction and Quadric Error Metric
5.6.2 Simplification of MPEG-4 Animated Faces; 5.6.3 Simplification with Textures; 5.6.4 Simplification Results; 5.7 The FAP Decoding Block; 5.7.1 FAP Interpolation; 5.8 The Audio Decoding Block; 5.9 The Implementation; 5.9.1 Performances; References; 6 Extracting MPEG-4 FAPs from Video; 6.1 Introduction; 6.2 Methods for Detection and Tracking of Faces; 6.3 Active and Statistical Models of Faces; 6.3.1 The Active Appearance Model Search Algorithm; 6.3.2 Training for Active Appearance Model Search; 6.4 An Active Model for Face Tracking; 6.4.1 Analysis - Synthesis; 6.4.2 Collecting Training Data
6.4.3 Tracking a Face with the Active Model
Record Nr. UNINA-9910142530603321
Chichester ; Hoboken, NJ : J. Wiley, c2002
Printed material
You can find it here: Univ. Federico II
MPEG-7 audio and beyond [electronic resource] : audio content indexing and retrieval / Hyoung-Gook Kim, Nicolas Moreau, Thomas Sikora
Author: Kim Hyoung-Gook
Publication/distribution/printing: Chichester, West Sussex, England ; Hoboken, NJ, USA : J. Wiley, c2005
Physical description: 1 online resource (305 p.)
Classification: 006.6/96
006.696
Other authors (Persons): Moreau, Nicolas
Sikora, Thomas
Topical subject: MPEG (Video coding standard)
Multimedia systems
Sound - Recording and reproducing - Digital techniques - Standards
Genre/form subject: Electronic books.
ISBN 1-280-33982-9
9786610339822
0-470-09336-6
0-470-09335-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: MPEG-7 Audio and Beyond; Contents; List of Acronyms; List of Symbols; 1 Introduction; 1.1 Audio Content Description; 1.2 MPEG-7 Audio Content Description - An Overview; 1.2.1 MPEG-7 Low-Level Descriptors; 1.2.2 MPEG-7 Description Schemes; 1.2.3 MPEG-7 Description Definition Language (DDL); 1.2.4 BiM (Binary Format for MPEG-7); 1.3 Organization of the Book; 2 Low-Level Descriptors; 2.1 Introduction; 2.2 Basic Parameters and Notations; 2.2.1 Time Domain; 2.2.2 Frequency Domain; 2.3 Scalable Series; 2.3.1 Series of Scalars; 2.3.2 Series of Vectors; 2.3.3 Binary Series; 2.4 Basic Descriptors
2.4.1 Audio Waveform; 2.4.2 Audio Power; 2.5 Basic Spectral Descriptors; 2.5.1 Audio Spectrum Envelope; 2.5.2 Audio Spectrum Centroid; 2.5.3 Audio Spectrum Spread; 2.5.4 Audio Spectrum Flatness; 2.6 Basic Signal Parameters; 2.6.1 Audio Harmonicity; 2.6.2 Audio Fundamental Frequency; 2.7 Timbral Descriptors; 2.7.1 Temporal Timbral: Requirements; 2.7.2 Log Attack Time; 2.7.3 Temporal Centroid; 2.7.4 Spectral Timbral: Requirements; 2.7.5 Harmonic Spectral Centroid; 2.7.6 Harmonic Spectral Deviation; 2.7.7 Harmonic Spectral Spread; 2.7.8 Harmonic Spectral Variation; 2.7.9 Spectral Centroid
2.8 Spectral Basis Representations; 2.9 Silence Segment; 2.10 Beyond the Scope of MPEG-7; 2.10.1 Other Low-Level Descriptors; 2.10.2 Mel-Frequency Cepstrum Coefficients; References; 3 Sound Classification and Similarity; 3.1 Introduction; 3.2 Dimensionality Reduction; 3.2.1 Singular Value Decomposition (SVD); 3.2.2 Principal Component Analysis (PCA); 3.2.3 Independent Component Analysis (ICA); 3.2.4 Non-Negative Matrix Factorization (NMF); 3.3 Classification Methods; 3.3.1 Gaussian Mixture Model (GMM); 3.3.2 Hidden Markov Model (HMM); 3.3.3 Neural Network (NN); 3.3.4 Support Vector Machine (SVM)
3.4 MPEG-7 Sound Classification; 3.4.1 MPEG-7 Audio Spectrum Projection (ASP) Feature Extraction; 3.4.2 Training Hidden Markov Models (HMMs); 3.4.3 Classification of Sounds; 3.5 Comparison of MPEG-7 Audio Spectrum Projection vs. MFCC Features; 3.6 Indexing and Similarity; 3.6.1 Audio Retrieval Using Histogram Sum of Squared Differences; 3.7 Simulation Results and Discussion; 3.7.1 Plots of MPEG-7 Audio Descriptors; 3.7.2 Parameter Selection; 3.7.3 Results for Distinguishing Between Speech, Music and Environmental Sound; 3.7.4 Results of Sound Classification Using Three Audio Taxonomy Methods
3.7.5 Results for Speaker Recognition; 3.7.6 Results of Musical Instrument Classification; 3.7.7 Audio Retrieval Results; 3.8 Conclusions; References; 4 Spoken Content; 4.1 Introduction; 4.2 Automatic Speech Recognition; 4.2.1 Basic Principles; 4.2.2 Types of Speech Recognition Systems; 4.2.3 Recognition Results; 4.3 MPEG-7 SpokenContent Description; 4.3.1 General Structure; 4.3.2 SpokenContentHeader; 4.3.3 SpokenContentLattice; 4.4 Application: Spoken Document Retrieval; 4.4.1 Basic Principles of IR and SDR; 4.4.2 Vector Space Models; 4.4.3 Word-Based SDR
4.4.4 Sub-Word-Based Vector Space Models
Record Nr. UNINA-9910143709903321
You can find it here: Univ. Federico II
MPEG-7 audio and beyond [electronic resource] : audio content indexing and retrieval / Hyoung-Gook Kim, Nicolas Moreau, Thomas Sikora
Author: Kim Hyoung-Gook
Publication/distribution/printing: Chichester, West Sussex, England ; Hoboken, NJ, USA : J. Wiley, c2005
Physical description: 1 online resource (305 p.)
Classification: 006.6/96
006.696
Other authors (Persons): Moreau, Nicolas
Sikora, Thomas
Topical subject: MPEG (Video coding standard)
Multimedia systems
Sound - Recording and reproducing - Digital techniques - Standards
ISBN 1-280-33982-9
9786610339822
0-470-09336-6
0-470-09335-8
Format: Printed material
Bibliographic level: Monograph
Language of publication: eng
Contents note: MPEG-7 Audio and Beyond; Contents; List of Acronyms; List of Symbols; 1 Introduction; 1.1 Audio Content Description; 1.2 MPEG-7 Audio Content Description - An Overview; 1.2.1 MPEG-7 Low-Level Descriptors; 1.2.2 MPEG-7 Description Schemes; 1.2.3 MPEG-7 Description Definition Language (DDL); 1.2.4 BiM (Binary Format for MPEG-7); 1.3 Organization of the Book; 2 Low-Level Descriptors; 2.1 Introduction; 2.2 Basic Parameters and Notations; 2.2.1 Time Domain; 2.2.2 Frequency Domain; 2.3 Scalable Series; 2.3.1 Series of Scalars; 2.3.2 Series of Vectors; 2.3.3 Binary Series; 2.4 Basic Descriptors
2.4.1 Audio Waveform; 2.4.2 Audio Power; 2.5 Basic Spectral Descriptors; 2.5.1 Audio Spectrum Envelope; 2.5.2 Audio Spectrum Centroid; 2.5.3 Audio Spectrum Spread; 2.5.4 Audio Spectrum Flatness; 2.6 Basic Signal Parameters; 2.6.1 Audio Harmonicity; 2.6.2 Audio Fundamental Frequency; 2.7 Timbral Descriptors; 2.7.1 Temporal Timbral: Requirements; 2.7.2 Log Attack Time; 2.7.3 Temporal Centroid; 2.7.4 Spectral Timbral: Requirements; 2.7.5 Harmonic Spectral Centroid; 2.7.6 Harmonic Spectral Deviation; 2.7.7 Harmonic Spectral Spread; 2.7.8 Harmonic Spectral Variation; 2.7.9 Spectral Centroid
2.8 Spectral Basis Representations; 2.9 Silence Segment; 2.10 Beyond the Scope of MPEG-7; 2.10.1 Other Low-Level Descriptors; 2.10.2 Mel-Frequency Cepstrum Coefficients; References; 3 Sound Classification and Similarity; 3.1 Introduction; 3.2 Dimensionality Reduction; 3.2.1 Singular Value Decomposition (SVD); 3.2.2 Principal Component Analysis (PCA); 3.2.3 Independent Component Analysis (ICA); 3.2.4 Non-Negative Matrix Factorization (NMF); 3.3 Classification Methods; 3.3.1 Gaussian Mixture Model (GMM); 3.3.2 Hidden Markov Model (HMM); 3.3.3 Neural Network (NN); 3.3.4 Support Vector Machine (SVM)
3.4 MPEG-7 Sound Classification; 3.4.1 MPEG-7 Audio Spectrum Projection (ASP) Feature Extraction; 3.4.2 Training Hidden Markov Models (HMMs); 3.4.3 Classification of Sounds; 3.5 Comparison of MPEG-7 Audio Spectrum Projection vs. MFCC Features; 3.6 Indexing and Similarity; 3.6.1 Audio Retrieval Using Histogram Sum of Squared Differences; 3.7 Simulation Results and Discussion; 3.7.1 Plots of MPEG-7 Audio Descriptors; 3.7.2 Parameter Selection; 3.7.3 Results for Distinguishing Between Speech, Music and Environmental Sound; 3.7.4 Results of Sound Classification Using Three Audio Taxonomy Methods
3.7.5 Results for Speaker Recognition; 3.7.6 Results of Musical Instrument Classification; 3.7.7 Audio Retrieval Results; 3.8 Conclusions; References; 4 Spoken Content; 4.1 Introduction; 4.2 Automatic Speech Recognition; 4.2.1 Basic Principles; 4.2.2 Types of Speech Recognition Systems; 4.2.3 Recognition Results; 4.3 MPEG-7 SpokenContent Description; 4.3.1 General Structure; 4.3.2 SpokenContentHeader; 4.3.3 SpokenContentLattice; 4.4 Application: Spoken Document Retrieval; 4.4.1 Basic Principles of IR and SDR; 4.4.2 Vector Space Models; 4.4.3 Word-Based SDR
4.4.4 Sub-Word-Based Vector Space Models
Record Nr. UNINA-9910830304103321
You can find it here: Univ. Federico II