LEADER 01377nas 2200481- 450
001 9910141178103321
005 20240912213021.0
011 $a2047-2382
035 $a(OCoLC)805421471
035 $a(CKB)2670000000140894
035 $a(CONSER)--2013201503
035 $a(DE-599)ZDB2662506-4
035 $a(MiFhGG)5YYE
035 $a(MiAaPQ)2040243
035 $a(EXLCZ)992670000000140894
100 $a20120808a20119999 o-- a
101 0 $aeng
135 $aur|||||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aEnvironmental evidence
210 1$a[London] :$cBioMed Central,$d[2011]-
215 $a1 online resource
300 $aRefereed/Peer-reviewed
531 $aENVIRON EVID
531 10$aEnviron Evid
606 $aEnvironmental management$vPeriodicals
606 $aEnvironmental management$2fast$3(OCoLC)fst00913186
606 $aEnvironment
606 $aEnvironmental Policy
608 $aPeriodical
608 $aPeriodicals.$2fast
615 0$aEnvironmental management
615 7$aEnvironmental management.
615 12$aEnvironment
615 22$aEnvironmental Policy
676 $a333.7205
712 02$aCollaboration for Environmental Evidence,
906 $aJOURNAL
912 $a9910141178103321
996 $aEnvironmental evidence$92104895
997 $aUNINA

LEADER 02554oam 2200457zu 450
001 9910141067203321
005 20241212220147.0
010 $a9781457721854
010 $a1457721856
010 $a9781457721847
010 $a1457721848
035 $a(CKB)2670000000131685
035 $a(SSID)ssj0000669988
035 $a(PQKBManifestationID)12276044
035 $a(PQKBTitleCode)TC0000669988
035 $a(PQKBWorkID)10715839
035 $a(PQKB)11325784
035 $a(NjHacI)992670000000131685
035 $a(EXLCZ)992670000000131685
100 $a20160829d2011 uy
101 0 $aeng
135 $aur|||||||||||
181 $ctxt
182 $cc
183 $acr
200 10$a2011 IEEE International Symposium on Mixed and Augmented Reality
210 31$a[Place of publication not identified]$cIEEE$d2011
215 $a1 online resource
300 $aBibliographic Level Mode of Issuance: Monograph
311 08$a9781457721830
311 08$a145772183X
330 $aWe present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR); in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.
606 $aAugmented reality$vCongresses
615 0$aAugmented reality
676 $a006.8
702 $aIEEE Staff
801 0$bPQKB
906 $aPROCEEDING
912 $a9910141067203321
996 $a2011 IEEE International Symposium on Mixed and Augmented Reality$92308208
997 $aUNINA
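
Note: the abstract in field 330 of the second record describes pose tracking by aligning each live depth frame to a growing global surface model with a coarse-to-fine iterative closest point (ICP) algorithm. The following is a minimal illustrative sketch of the basic ICP idea only, assuming a simple point-to-point formulation with brute-force nearest-neighbour matching and a closed-form Kabsch/SVD update; the function names (best_rigid_transform, icp) are invented for this sketch and it is not the projective point-to-plane GPU implementation the abstract refers to.

import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # repair an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, model, iters=20):
    """Align the live point cloud `src` to the global `model` cloud, returning (R, t)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours stand in for the projective data association
        # a real-time system would use; fine for a few hundred points.
        dists = np.linalg.norm(cur[:, None, :] - model[None, :, :], axis=2)
        matches = model[dists.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t                      # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

The design point the abstract makes is about what plays the role of `model`: tracking against points raised from the fused global surface, rather than against the previous frame, is what limits drift compared with frame-to-frame tracking.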