LEADER 00824nam0-2200277 --450
001 9910309528103321
005 20190301123755.0
010 $a978-88-430-6374-1
100 $a20190301d2013----kmuy0itay5050 ba
101 2 $aita$aspa
102 $aIT
105 $a 001yy
200 1 $a<>libro dei dodici sapienti$ela formazione del re nella Castiglia del 13. secolo$fa cura di Gaetano Lalomia
210 $aRoma$cCarocci$d2013
215 $a143 p.$d18 cm
225 1 $aBiblioteca medievale$v140
300 $aTraduzione italiana a fronte.
676 $a868.1$v22
702 1$aLalomia,$bGaetano
801 0$aIT$bUNINA$gREICAT$2UNIMARC
901 $aBK
912 $a9910309528103321
952 $a868.1 LAL 1$bBibl.2019$fFLFBC
959 $aFLFBC
996 $aLibro dei dodici sapienti$91546182
997 $aUNINA

LEADER 04196nam 22005775 450
001 9910253941203321
005 20200703085423.0
010 $a3-319-61807-5
024 7 $a10.1007/978-3-319-61807-4
035 $a(CKB)3710000001631547
035 $a(DE-He213)978-3-319-61807-4
035 $a(MiAaPQ)EBC5014247
035 $a(PPN)203852478
035 $a(EXLCZ)993710000001631547
100 $a20170831d2017 u| 0
101 0 $aeng
135 $aurnn|008mamaa
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aMultimodal Analysis of User-Generated Multimedia Content /$fby Rajiv Shah, Roger Zimmermann
205 $a1st ed. 2017.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2017.
215 $a1 online resource (XXII, 263 p. 63 illus., 42 illus. in color.)
225 1 $aSocio-Affective Computing,$x2509-5706 ;$v6
311 $a3-319-61806-7
330 $aThis book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn about the ways multimodal analysis of UGC can augment semantics and sentics understanding and it helps in addressing several multimedia analytics problems from social media such as event detection and summarization, tag recommendation and ranking, soundtrack recommendation, lecture video segmentation, and news video uploading. Readers will discover how the derived knowledge structures from multimodal information are beneficial for efficient multimedia search, retrieval, and recommendation. However, real-world UGC is complex, and extracting the semantics and sentics from only multimedia content is very difficult because suitable concepts may be exhibited in different representations. Moreover, due to the increasing popularity of social media websites and advancements in technology, it is now possible to collect a significant amount of important contextual information (e.g., spatial, temporal, and preferential information). Thus, there is a need to analyze the information of UGC from multiple modalities to address these problems. A discussion of multimodal analysis is presented followed by studies on how multimodal information is exploited to address problems that have a significant impact on different areas of society (e.g., entertainment, education, and journalism). Specifically, the methods presented exploit the multimedia content (e.g., visual content) and associated contextual information (e.g., geo-, temporal, and other sensory data). The reader is introduced to several knowledge bases and fusion techniques to address these problems. This work includes future directions for several interesting multimedia analytics problems that have the potential to significantly impact society. The work is aimed at researchers in the multimedia field who would like to pursue research in the area of multimodal analysis of UGC.
410 0$aSocio-Affective Computing,$x2509-5706 ;$v6
606 $aNeurosciences
606 $aData mining
606 $aSemantics
606 $aCognitive psychology
606 $aNeurosciences$3https://scigraph.springernature.com/ontologies/product-market-codes/B18006
606 $aData Mining and Knowledge Discovery$3https://scigraph.springernature.com/ontologies/product-market-codes/I18030
606 $aSemantics$3https://scigraph.springernature.com/ontologies/product-market-codes/N39000
606 $aCognitive Psychology$3https://scigraph.springernature.com/ontologies/product-market-codes/Y20060
615 0$aNeurosciences.
615 0$aData mining.
615 0$aSemantics.
615 0$aCognitive psychology.
615 14$aNeurosciences.
615 24$aData Mining and Knowledge Discovery.
615 24$aSemantics.
615 24$aCognitive Psychology.
676 $a612.8
700 $aShah$b Rajiv$4aut$4http://id.loc.gov/vocabulary/relators/aut$0917382
702 $aZimmermann$b Roger$4aut$4http://id.loc.gov/vocabulary/relators/aut
906 $aBOOK
912 $a9910253941203321
996 $aMultimodal Analysis of User-Generated Multimedia Content$92056939
997 $aUNINA
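
Each field line in the records above follows the same plain-text convention: a three-character tag, optional indicator characters, then subfields introduced by '$' and a one-letter code. The following Python sketch is only an illustration of that layout, not part of the catalogue data and not a library or pymarc API; the function name parse_unimarc_field and the simple '$'-splitting are assumptions made for this example.

    def parse_unimarc_field(line):
        """Hypothetical helper: split a plain-text UNIMARC field line, as
        displayed above, into (tag, indicators, [(subfield code, value), ...])."""
        head, _, rest = line.partition("$")   # head = 'TAG [indicators] ', rest = subfield data
        tag = head[:3]                        # three-character field tag, e.g. '225'
        indicators = head[3:].strip()         # indicator characters, possibly empty
        subfields = [(chunk[0], chunk[1:])    # first character after '$' is the subfield code
                     for chunk in rest.split("$") if chunk]
        return tag, indicators, subfields

    # Example with the series field from the second record:
    print(parse_unimarc_field("225 1 $aSocio-Affective Computing,$x2509-5706 ;$v6"))
    # ('225', '1', [('a', 'Socio-Affective Computing,'), ('x', '2509-5706 ;'), ('v', '6')])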