Record type: BOOK (UNINA, record 9910337657903321; 2296067)
MARC leader: 03928nam 22006015 450; record created 2018-10-23, last updated 2020-06-29
ISBN: 3-319-99223-6 (electronic); 3-319-99222-8 (print)
DOI: 10.1007/978-3-319-99223-5
Other identifiers: (CKB)4100000007102927; (MiAaPQ)EBC5567617; (DE-He213)978-3-319-99223-5; (PPN)231464746; (EXLCZ)994100000007102927
Language: English
Content / media / carrier type: text / computer / online resource (RDA)

Title: Embedded Deep Learning: Algorithms, Architectures and Circuits for Always-on Neural Network Processing / by Bert Moons, Daniel Bankman, Marian Verhelst
Edition: 1st ed. 2019.
Published: Cham : Springer International Publishing : Imprint: Springer, 2019.
Description: 1 online resource (216 pages)

Contents:
Chapter 1. Embedded Deep Neural Networks
Chapter 2. Optimized Hierarchical Cascaded Processing
Chapter 3. Hardware-Algorithm Co-optimizations
Chapter 4. Circuit Techniques for Approximate Computing
Chapter 5. ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing
Chapter 6. BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing
Chapter 7. Conclusions, Contributions and Future Work

Summary: This book covers algorithmic and hardware implementation techniques that enable embedded deep learning. The authors describe synergetic design approaches at the application, algorithm, computer-architecture, and circuit levels that reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning. The book:
Gives a broad overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices;
Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy (applications, algorithms, hardware architectures, and circuits), supported by real silicon prototypes;
Elaborates on how to design efficient Convolutional Neural Network processors that exploit parallelism and data reuse, sparse operations, and low-precision computations;
Supports the introduced theory and design concepts with four real silicon prototypes, whose implementations and measured performance are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.

Subjects: Electronic circuits; Signal processing; Image processing; Speech processing systems; Electronics; Microelectronics
Springer subject classifications:
Circuits and Systems: https://scigraph.springernature.com/ontologies/product-market-codes/T24068
Signal, Image and Speech Processing: https://scigraph.springernature.com/ontologies/product-market-codes/T24051
Electronics and Microelectronics, Instrumentation: https://scigraph.springernature.com/ontologies/product-market-codes/T24027
Dewey classification: 370.285

Authors:
Moons, Bert, author (http://id.loc.gov/vocabulary/relators/aut) [1000347]
Bankman, Daniel, author (http://id.loc.gov/vocabulary/relators/aut)
Verhelst, Marian, author (http://id.loc.gov/vocabulary/relators/aut)