Title: Human Action Recognition with Depth Cameras / by Jiang Wang, Zicheng Liu, Ying Wu
Edition: 1st ed. 2014
Published: Cham : Springer International Publishing : Imprint: Springer, 2014
Series: SpringerBriefs in Computer Science, ISSN 2191-5768
Description: 1 online resource (65 p.)
ISBN: 3-319-04561-X (electronic); 3-319-04560-1 (print)
DOI: 10.1007/978-3-319-04561-0
Other identifiers: (CKB)2550000001199647; (EBL)1697738; (OCoLC)881165953; (SSID)ssj0001178434; (PQKBManifestationID)11746887; (PQKBTitleCode)TC0001178434; (PQKBWorkID)11168860; (PQKB)11044722; (MiAaPQ)EBC1697738; (DE-He213)978-3-319-04561-0; (PPN)176109404; (EXLCZ)992550000001199647
Notes: Description based upon print version of record. Includes bibliographical references at the end of each chapter and index.
Contents: Introduction -- Learning Actionlet Ensemble for 3D Human Action Recognition -- Random Occupancy Patterns -- Conclusion.

Summary: Action recognition is an enabling technology for many real-world applications, such as human-computer interaction, surveillance, video retrieval, retirement-home monitoring, and robotics. In the past decade, it has attracted a great deal of interest in the research community. Recently, the commoditization of depth sensors has generated much excitement about action recognition from depth data, and the new sensor technology has enabled many applications that were not feasible before. On one hand, action recognition becomes far easier with depth sensors; on the other, the drive to recognize more complex actions presents new challenges. One crucial aspect of action recognition is extracting discriminative features. Depth maps have completely different characteristics from RGB images, so directly applying features designed for RGB images does not work. Complex actions usually involve complicated temporal structures, human-object interactions, and person-person contacts, and new machine learning algorithms need to be developed to learn these complex structures.

This work enables readers to quickly familiarize themselves with the latest research in depth-sensor-based action recognition and to gain a deeper understanding of recently developed techniques. It will be of great use to both researchers and practitioners interested in human action recognition with depth sensors. The text focuses on feature representation and machine learning algorithms for action recognition from depth sensors. After presenting a comprehensive overview of the state of the art in action recognition from depth data, the authors provide in-depth descriptions of their recently developed feature representations and machine learning techniques, including lower-level depth and skeleton features, higher-level representations that model temporal structure and human-object interactions, and feature selection techniques for occlusion handling.

Subjects: Optical data processing; Biometrics (Biology); User interfaces (Computer systems)
Subject codes: Image Processing and Computer Vision (https://scigraph.springernature.com/ontologies/product-market-codes/I22021); Biometrics (https://scigraph.springernature.com/ontologies/product-market-codes/I22040); User Interfaces and Human Computer Interaction (https://scigraph.springernature.com/ontologies/product-market-codes/I18067)
Dewey classification: 006
Authors: Wang, Jiang (http://id.loc.gov/vocabulary/relators/aut); Liu, Zicheng (http://id.loc.gov/vocabulary/relators/aut); Wu, Ying (http://id.loc.gov/vocabulary/relators/aut)
Cataloging source: MiAaPQ
Record ID: 9910299059703321 (UNINA)