LEADER 03872oam 2200577 450
001 9910299277703321
005 20231006024129.0
010 $a3-319-73004-5
024 7 $a10.1007/978-3-319-73004-2
035 $a(CKB)4100000002485407
035 $a(DE-He213)978-3-319-73004-2
035 $a(MiAaPQ)EBC6315244
035 $a(MiAaPQ)EBC5591543
035 $a(Au-PeEL)EBL5591543
035 $a(OCoLC)1027057165
035 $a(PPN)224638327
035 $a(EXLCZ)994100000002485407
100 $a20180205d2018 u| 0
101 0 $aeng
135 $aurnn#008mamaa
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aIntroduction to deep learning $efrom logical calculus to artificial intelligence /$fby Sandro Skansi
205 $a1st edition.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2018.
215 $a1 online resource (XIII, 191 p. 38 illus.)
225 1 $aUndergraduate Topics in Computer Science,$x1863-7310
311 0 $a3-319-73003-7
327 $aFrom Logic to Cognitive Science -- Mathematical and Computational Prerequisites -- Machine Learning Basics -- Feed-forward Neural Networks -- Modifications and Extensions to a Feed-forward Neural Network -- Convolutional Neural Networks -- Recurrent Neural Networks -- Autoencoders -- Neural Language Models -- An Overview of Different Neural Network Architectures -- Conclusion.
330 $aThis textbook presents a concise, accessible and engaging first introduction to deep learning, offering a wide range of connectionist models which represent the current state-of-the-art. The text explores the most popular algorithms and architectures in a simple and intuitive style, explaining the mathematical derivations in a step-by-step manner. The content coverage includes convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks and autoencoders. Numerous examples in working Python code are provided throughout the book, and the code is also supplied separately at an accompanying website. Topics and features: Introduces the fundamentals of machine learning, and the mathematical and computational prerequisites for deep learning; Discusses feed-forward neural networks, and explores the modifications to these which can be applied to any neural network; Examines convolutional neural networks, and the recurrent connections to a feed-forward neural network; Describes the notion of distributed representations, the concept of the autoencoder, and the ideas behind language processing with deep learning; Presents a brief history of artificial intelligence and neural networks, and reviews interesting open research problems in deep learning and connectionism. This clearly written and lively primer on deep learning is essential reading for graduate and advanced undergraduate students of computer science, cognitive science and mathematics, as well as fields such as linguistics, logic, philosophy, and psychology. Dr. Sandro Skansi is an Assistant Professor of Logic at the University of Zagreb and Lecturer in Data Science at University College Algebra, Zagreb, Croatia.
410 0$aUndergraduate Topics in Computer Science,$x1863-7310
606 $aMachine learning
606 $aPattern recognition
606 $aNeural networks (Computer science)
606 $aCoding theory
606 $aInformation theory
615 0$aMachine learning.
615 0$aPattern recognition.
615 0$aNeural networks (Computer science).
615 0$aCoding theory.
615 0$aInformation theory.
676 $a004
700 $aSkansi$b Sandro$0910927
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910299277703321
996 $aIntroduction to Deep Learning$92039138
997 $aUNINA