LEADER 04196nam 22005775 450
001 9910253941203321
005 20200703085423.0
010   $a3-319-61807-5
024 7 $a10.1007/978-3-319-61807-4
035   $a(CKB)3710000001631547
035   $a(DE-He213)978-3-319-61807-4
035   $a(MiAaPQ)EBC5014247
035   $a(PPN)203852478
035   $a(EXLCZ)993710000001631547
100   $a20170831d2017 u| 0
101 0 $aeng
135   $aurnn|008mamaa
181   $ctxt$2rdacontent
182   $cc$2rdamedia
183   $acr$2rdacarrier
200 10$aMultimodal Analysis of User-Generated Multimedia Content /$fby Rajiv Shah, Roger Zimmermann
205   $a1st ed. 2017.
210 1 $aCham :$cSpringer International Publishing :$cImprint: Springer,$d2017.
215   $a1 online resource (XXII, 263 p. 63 illus., 42 illus. in color.)
225 1 $aSocio-Affective Computing,$x2509-5706 ;$v6
311   $a3-319-61806-7
330   $aThis book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn about the ways multimodal analysis of UGC can augment semantics and sentics understanding and it helps in addressing several multimedia analytics problems from social media such as event detection and summarization, tag recommendation and ranking, soundtrack recommendation, lecture video segmentation, and news video uploading. Readers will discover how the derived knowledge structures from multimodal information are beneficial for efficient multimedia search, retrieval, and recommendation. However, real-world UGC is complex, and extracting the semantics and sentics from only multimedia content is very difficult because suitable concepts may be exhibited in different representations. Moreover, due to the increasing popularity of social media websites and advancements in technology, it is now possible to collect a significant amount of important contextual information (e.g., spatial, temporal, and preferential information). Thus, there is a need to analyze the information of UGC from multiple modalities to address these problems. A discussion of multimodal analysis is presented followed by studies on how multimodal information is exploited to address problems that have a significant impact on different areas of society (e.g., entertainment, education, and journalism). Specifically, the methods presented exploit the multimedia content (e.g., visual content) and associated contextual information (e.g., geo-, temporal, and other sensory data). The reader is introduced to several knowledge bases and fusion techniques to address these problems. This work includes future directions for several interesting multimedia analytics problems that have the potential to significantly impact society. The work is aimed at researchers in the multimedia field who would like to pursue research in the area of multimodal analysis of UGC.
410 0 $aSocio-Affective Computing,$x2509-5706 ;$v6
606   $aNeurosciences
606   $aData mining
606   $aSemantics
606   $aCognitive psychology
606   $aNeurosciences$3https://scigraph.springernature.com/ontologies/product-market-codes/B18006
606   $aData Mining and Knowledge Discovery$3https://scigraph.springernature.com/ontologies/product-market-codes/I18030
606   $aSemantics$3https://scigraph.springernature.com/ontologies/product-market-codes/N39000
606   $aCognitive Psychology$3https://scigraph.springernature.com/ontologies/product-market-codes/Y20060
615 0 $aNeurosciences.
615 0 $aData mining.
615 0 $aSemantics.
615 0 $aCognitive psychology.
615 14$aNeurosciences.
615 24$aData Mining and Knowledge Discovery.
615 24$aSemantics.
615 24$aCognitive Psychology.
676   $a612.8
700   $aShah$b Rajiv$4aut$4http://id.loc.gov/vocabulary/relators/aut$0917382
702   $aZimmermann$b Roger$4aut$4http://id.loc.gov/vocabulary/relators/aut
906   $aBOOK
912   $a9910253941203321
996   $aMultimodal Analysis of User-Generated Multimedia Content$92056939
997   $aUNINA

LEADER 03000nam 2200745 a 450
001 9910961663803321
005 20250624222411.0
010   $a1135607613
010   $a0838466907
010   $a1283882434
010   $a1135607621
010   $a1282375466
010   $a9786612375460
010   $a1410615723
024 7 $a10.4324/9781410615725
035   $a(CKB)1000000000244781
035   $a(EBL)261419
035   $a(OCoLC)437168056
035   $a(SSID)ssj0000264493
035   $a(PQKBManifestationID)11205312
035   $a(PQKBTitleCode)TC0000264493
035   $a(PQKBWorkID)10292229
035   $a(PQKB)11469846
035   $a(MiAaPQ)EBC261419
035   $a(MiAaPQ)EBC5121817
035   $a(Au-PeEL)EBL261419
035   $a(CaPaEBR)ebr10130724
035   $a(CaONFJC)MIL419493
035   $a(OCoLC)936815397
035   $a(OCoLC)70684824
035   $a(Au-PeEL)EBL5121817
035   $a(CaONFJC)MIL237546
035   $a(OCoLC)1027171869
035   $a(OCoLC)1325902544
035   $a(FINmELB)ELB154537
035   $a(EXLCZ)991000000000244781
100   $a20050415d2006 uy 0
101 0 $aeng
135   $aur|||||||||||
181   $ctxt$2rdacontent
182   $cc$2rdamedia
183   $acr$2rdacarrier
200 10$aUnderstanding language teaching $efrom method to post-method /$fB. Kumaravadivelu
205   $a1st ed.
210   $aMahwah, N.J. $cLawrence Erlbaum Associates$d2006
215   $a1 online resource (269 pages)
225 1 $aESL and applied linguistics professional series
300   $aDescription based upon print version of record.
311 1 $a0805856765
311 1 $a0805851763
320   $aIncludes bibliographical references (pages 227-243) and index.
327   $aPreface : the pattern which connects -- Part One. Language, learning, and teaching -- Part Two. Language teaching methods -- Part Three. Postmethod perspectives.
330   $aThis book traces the historical development of major language teaching methods in terms of theoretical principles and classroom procedures, and provides a critical evaluation of each. Drawing from seminal, foundational texts and from critical commentaries made by various scholars, Kumaravadivelu examines the profession's current transition from method to postmethod and, in the process, elucidates the relationship between theory, research, and practice. The chief objective is to help readers see the pattern that connects language, learning, teaching methods, and postmethod perspectives.
410 0 $aESL and applied linguistics professional series.
606   $aLanguage and languages$xStudy and teaching
615 0 $aLanguage and languages$xStudy and teaching.
676   $a418/.0071
700   $aKumaravadivelu$b B.$f1948-$01045301
801 0 $bMiAaPQ
801 1 $bMiAaPQ
801 2 $bMiAaPQ
906   $aBOOK
912   $a9910961663803321
996   $aUnderstanding language teaching$94397445
997   $aUNINA