LEADER 04688nam 22006975 450
001 9910299059703321
005 20200630061445.0
010   $a3-319-04561-X
024 7 $a10.1007/978-3-319-04561-0
035   $a(CKB)2550000001199647
035   $a(EBL)1697738
035   $a(OCoLC)881165953
035   $a(SSID)ssj0001178434
035   $a(PQKBManifestationID)11746887
035   $a(PQKBTitleCode)TC0001178434
035   $a(PQKBWorkID)11168860
035   $a(PQKB)11044722
035   $a(MiAaPQ)EBC1697738
035   $a(DE-He213)978-3-319-04561-0
035   $a(PPN)176109404
035   $a(EXLCZ)992550000001199647
100   $a20140125d2014 u| 0
101 0 $aeng
135   $aur|n|---|||||
181   $ctxt
182   $cc
183   $acr
200 10$aHuman Action Recognition with Depth Cameras /$fby Jiang Wang, Zicheng Liu, Ying Wu
205   $a1st ed. 2014.
210  1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2014.
215   $a1 online resource (65 p.)
225 1 $aSpringerBriefs in Computer Science,$x2191-5768
300   $aDescription based upon print version of record.
311   $a3-319-04560-1
320   $aIncludes bibliographical references at the end of each chapter and index.
327   $aIntroduction -- Learning Actionlet Ensemble for 3D Human Action Recognition -- Random Occupancy Patterns -- Conclusion.
330   $aAction recognition is an enabling technology for many real-world applications, such as human-computer interaction, surveillance, video retrieval, retirement home monitoring, and robotics. In the past decade it has attracted a great deal of interest in the research community, and the recent commoditization of depth sensors has generated much excitement about action recognition from depth data. New depth sensor technology has enabled many applications that were not feasible before. On one hand, action recognition becomes far easier with depth sensors; on the other hand, the drive to recognize more complex actions presents new challenges. One crucial aspect of action recognition is extracting discriminative features. Depth maps have completely different characteristics from RGB images, so directly applying features designed for RGB images does not work. Complex actions usually involve complicated temporal structures, human-object interactions, and person-person contacts, and new machine learning algorithms need to be developed to learn these structures. This work enables readers to quickly familiarize themselves with the latest research in depth-sensor-based action recognition and to gain a deeper understanding of recently developed techniques. It will be of great use to both researchers and practitioners interested in human action recognition with depth sensors. The text focuses on feature representation and machine learning algorithms for action recognition from depth sensors. After presenting a comprehensive overview of the state of the art in action recognition from depth data, the authors provide in-depth descriptions of their recently developed feature representations and machine learning techniques, including lower-level depth and skeleton features, higher-level representations that model temporal structure and human-object interactions, and feature selection techniques for occlusion handling.
410  0$aSpringerBriefs in Computer Science,$x2191-5768
606   $aOptical data processing
606   $aBiometrics (Biology)
606   $aUser interfaces (Computer systems)
606   $aImage Processing and Computer Vision$3https://scigraph.springernature.com/ontologies/product-market-codes/I22021
606   $aBiometrics$3https://scigraph.springernature.com/ontologies/product-market-codes/I22040
606   $aUser Interfaces and Human Computer Interaction$3https://scigraph.springernature.com/ontologies/product-market-codes/I18067
615  0$aOptical data processing.
615  0$aBiometrics (Biology).
615  0$aUser interfaces (Computer systems).
615 14$aImage Processing and Computer Vision.
615 24$aBiometrics.
615 24$aUser Interfaces and Human Computer Interaction.
676   $a006
700   $aWang$b Jiang$4aut$4http://id.loc.gov/vocabulary/relators/aut$0652518
702   $aLiu$b Zicheng$4aut$4http://id.loc.gov/vocabulary/relators/aut
702   $aWu$b Ying$4aut$4http://id.loc.gov/vocabulary/relators/aut
801 0 $bMiAaPQ
801 1 $bMiAaPQ
801 2 $bMiAaPQ
906   $aBOOK
912   $a9910299059703321
996   $aHuman Action Recognition with Depth Cameras$92276665
997   $aUNINA