
Development of Multimodal Interfaces: Active Listening and Synchrony [electronic resource] : Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers / edited by Anna Esposito, Nick Campbell, Carl Vogel, Amir Hussain, Anton Nijholt




Title: Development of Multimodal Interfaces: Active Listening and Synchrony [electronic resource] : Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers / edited by Anna Esposito, Nick Campbell, Carl Vogel, Amir Hussain, Anton Nijholt
Publication: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2010
Edition: 1st ed. 2010.
Physical description: 1 online resource (XVII, 446 p., 193 illus.)
Discipline (Dewey): 005.437
4.019
Topical subject: User interfaces (Computer systems)
Application software
Computer engineering
Multimedia information systems
Computers and civilization
Optical data processing
User Interfaces and Human Computer Interaction
Information Systems Applications (incl. Internet)
Computer Engineering
Multimedia Information Systems
Computers and Society
Image Processing and Computer Vision
Secondary responsibility: Esposito, Anna
Campbell, Nick
Vogel, Carl
Hussain, Amir
Nijholt, Anton
General notes: Bibliographic Level Mode of Issuance: Monograph
Contents note: Spacing and Orientation in Co-present Interaction -- Group Cohesion, Cooperation and Synchrony in a Social Model of Language Evolution -- Pointing Gestures and Synchronous Communication Management -- How an Agent Can Detect and Use Synchrony Parameter of Its Own Interaction with a Human? -- Accessible Speech-Based and Multimodal Media Center Interface for Users with Physical Disabilities -- A Controller-Based Animation System for Synchronizing and Realizing Human-Like Conversational Behaviors -- Generating Simple Conversations -- Media Differences in Communication -- Towards Influencing of the Conversational Agent Mental State in the Task of Active Listening -- Integrating Emotions in the TRIPLE ECA Model -- Manipulating Stress and Cognitive Load in Conversational Interactions with a Multimodal System for Crisis Management Support -- Sentic Computing: Exploitation of Common Sense for the Development of Emotion-Sensitive Systems -- Face-to-Face Interaction and the KTH Cooking Show -- Affect Listeners: Acquisition of Affective States by Means of Conversational Systems -- Nonverbal Synchrony or Random Coincidence?
How to Tell the Difference -- Biometric Database Acquisition Close to “Real World” Conditions -- Optimizing Phonetic Encoding for Viennese Unit Selection Speech Synthesis -- Advances on the Use of the Foreign Language Recognizer -- Challenges in Speech Processing of Slavic Languages (Case Studies in Speech Recognition of Czech and Slovak) -- Multiple Feature Extraction and Hierarchical Classifiers for Emotions Recognition -- Emotional Vocal Expressions Recognition Using the COST 2102 Italian Database of Emotional Speech -- Microintonation Analysis of Emotional Speech -- Speech Emotion Modification Using a Cepstral Vocoder -- Analysis of Emotional Voice Using Electroglottogram-Based Temporal Measures of Vocal Fold Opening -- Effects of Smiling on Articulation: Lips, Larynx and Acoustics -- Neural Basis of Emotion Regulation -- Automatic Meeting Participant Role Detection by Dialogue Patterns -- Linguistic and Non-verbal Cues for the Induction of Silent Feedback -- Audiovisual Tools for Phonetic and Articulatory Visualization in Computer-Aided Pronunciation Training -- Gesture Duration and Articulator Velocity in Plosive-Vowel-Transitions -- Stereo Presentation and Binaural Localization in a Memory Game for the Visually Impaired -- Pathological Voice Analysis and Classification Based on Empirical Mode Decomposition -- Disfluencies and the Perspective of Prosodic Fluency -- Subjective Tests and Automatic Sentence Modality Recognition with Recordings of Speech Impaired Children -- The New Italian Audio and Video Emotional Database -- Spoken Dialogue in Virtual Worlds.
Summary/abstract: This volume brings together, through a peer-review process, the advanced research results obtained by the European COST Action 2102: Cross-Modal Analysis of Verbal and Nonverbal Communication, primarily discussed for the first time at the Second COST 2102 International Training School on "Development of Multimodal Interfaces: Active Listening and Synchrony" held in Dublin, Ireland, March 23-27, 2009. The school was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org) in the domain of Information and Communication Technologies (ICT) for disseminating the advances of the research activities developed within the COST Action 2102: "Cross-Modal Analysis of Verbal and Nonverbal Communication" (cost2102.cs.stir.ac.uk). COST Action 2102, in its third year of life, brought together about 60 European and 6 overseas scientific laboratories whose aim is to develop interactive dialogue systems and intelligent virtual avatars graphically embodied in a 2D and/or 3D interactive virtual world, capable of interacting intelligently with the environment, other avatars, and particularly with human users.
Authorized title: Development of Multimodal Interfaces: Active Listening and Synchrony
ISBN: 1-280-38622-3
9786613564146
3-642-12397-X
Format: Printed material
Bibliographic level: Monograph
Publication language: English
Record no.: 9910481954703321
Held at: Univ. Federico II
Series: Information Systems and Applications, incl. Internet/Web, and HCI ; 5967