LEADER 03955nam 22006015 450
001 9910337657903321
005 20200629154409.0
010 $a3-319-99223-6
024 7 $a10.1007/978-3-319-99223-5
035 $a(CKB)4100000007102927
035 $a(MiAaPQ)EBC5567617
035 $a(DE-He213)978-3-319-99223-5
035 $a(PPN)231464746
035 $a(EXLCZ)994100000007102927
100 $a20181023d2019 u| 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aEmbedded Deep Learning$b[electronic resource] $eAlgorithms, Architectures and Circuits for Always-on Neural Network Processing /$fby Bert Moons, Daniel Bankman, Marian Verhelst
205 $a1st ed. 2019.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2019.
215 $a1 online resource (216 pages)
311 $a3-319-99222-8
327 $aChapter 1 Embedded Deep Neural Networks -- Chapter 2 Optimized Hierarchical Cascaded Processing -- Chapter 3 Hardware-Algorithm Co-optimizations -- Chapter 4 Circuit Techniques for Approximate Computing -- Chapter 5 ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing -- Chapter 6 BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing -- Chapter 7 Conclusions, contributions and future work.
330 $aThis book covers algorithmic and hardware implementation techniques that enable embedded deep learning. The authors describe synergetic design approaches at the application, algorithm, computer-architecture, and circuit levels that help reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning. Gives a wide overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices; Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy (applications, algorithms, hardware architectures, and circuits), supported by real silicon prototypes; Elaborates on how to design efficient Convolutional Neural Network processors, exploiting parallelism and data reuse, sparse operations, and low-precision computations; Supports the introduced theory and design concepts with four real silicon prototypes, whose implementations and achieved performance are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
606 $aElectronic circuits
606 $aSignal processing
606 $aImage processing
606 $aSpeech processing systems
606 $aElectronics
606 $aMicroelectronics
606 $aCircuits and Systems$3https://scigraph.springernature.com/ontologies/product-market-codes/T24068
606 $aSignal, Image and Speech Processing$3https://scigraph.springernature.com/ontologies/product-market-codes/T24051
606 $aElectronics and Microelectronics, Instrumentation$3https://scigraph.springernature.com/ontologies/product-market-codes/T24027
615 0$aElectronic circuits.
615 0$aSignal processing.
615 0$aImage processing.
615 0$aSpeech processing systems.
615 0$aElectronics.
615 0$aMicroelectronics.
615 14$aCircuits and Systems.
615 24$aSignal, Image and Speech Processing.
615 24$aElectronics and Microelectronics, Instrumentation.
676 $a370.285
700 $aMoons$b Bert$4aut$4http://id.loc.gov/vocabulary/relators/aut$01000347
702 $aBankman$b Daniel$4aut$4http://id.loc.gov/vocabulary/relators/aut
702 $aVerhelst$b Marian$4aut$4http://id.loc.gov/vocabulary/relators/aut
906 $aBOOK
912 $a9910337657903321
996 $aEmbedded Deep Learning$92296067
997 $aUNINA