LEADER 04325nam 2200541 450 001 9910580173103321 005 20221223131426.0 010 $a1-4842-8149-7 024 7 $a10.1007/978-1-4842-8149-9 035 $a(MiAaPQ)EBC7020107 035 $a(Au-PeEL)EBL7020107 035 $a(CKB)23971653200041 035 $aEBL7020107 035 $a(AU-PeEL)EBL7020107 035 $a(OCoLC)1332779497 035 $a(OCoLC-P)1332779497 035 $a(CaSebORM)9781484281499 035 $a(PPN)266357261 035 $a(EXLCZ)9923971653200041 100 $a20221223d2022 uy 0 101 0 $aeng 135 $aurcnu|||||||| 181 $ctxt$2rdacontent 182 $cc$2rdamedia 183 $acr$2rdacarrier 200 10$aAutomated deep learning using neural network intelligence $edevelop and design PyTorch and TensorFlow models using Python /$fIvan Gridin 210 1$aNew York, New York :$cApress L. P.,$d[2022] 210 4$d©2022 215 $a1 online resource (396 pages) 300 $aIncludes index. 311 08$aPrint version: Gridin, Ivan Automated Deep Learning Using Neural Network Intelligence Berkeley, CA : Apress L. P.,c2022 9781484281482 327 $aChapter 1: Introduction to Neural Network Intelligence -- Chapter 2: Hyperparameter Optimization -- Chapter 3: Hyperparameter Optimization Under Shell -- Chapter 4: Multi-Trial Neural Architecture Search -- Chapter 5: One-Shot Neural Architecture Search -- Chapter 6: Model Pruning -- Chapter 7: NNI Recipes. 330 $aOptimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development. The first chapters of this book cover the basics of NNI toolkit usage and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem using NNI, and know how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results.
The book dives into optimization tuners and the search algorithms they are based on: evolution search, annealing search, and the Bayesian optimization approach. Neural Architecture Search is covered, and you will learn how to develop deep learning models from scratch. Multi-trial and one-shot search approaches to automatic neural network design are presented. The book teaches you how to construct a search space and launch an architecture search using state-of-the-art exploration strategies: Efficient Neural Architecture Search (ENAS) and Differentiable Architecture Search (DARTS). You will learn how to automate the construction of a neural network architecture for a particular problem and dataset. The book also covers model compression and feature engineering methods that are essential in automated deep learning, and it includes performance techniques that allow the creation of large-scale distributed training platforms using NNI. After reading this book, you will know how to use the full toolkit of automated deep learning methods. The techniques and practical examples presented in this book will allow you to bring your neural network routines to a higher level. What You Will Learn: Know the basic concepts of optimization tuners, search space, and trials -- Apply different hyper-parameter optimization algorithms to develop effective neural networks -- Construct new deep learning models from scratch -- Execute the automated Neural Architecture Search to create state-of-the-art deep learning models -- Compress the model to eliminate unnecessary deep learning layers.
606 $aDeep learning (Machine learning) 606 $aNeural networks (Computer science) 606 $aPython (Computer program language) 615 0$aDeep learning (Machine learning) 615 0$aNeural networks (Computer science) 615 0$aPython (Computer program language) 676 $a733 700 $aGridin$b Ivan$01246255 801 0$bMiAaPQ 801 1$bMiAaPQ 801 2$bMiAaPQ 906 $aBOOK 912 $a9910580173103321 996 $aAutomated Deep Learning Using Neural Network Intelligence$92889821 997 $aUNINA