LEADER 01092nam0-22003371i-450-
001 990000828030403321
005 20001010
035 $a000082803
035 $aFED01000082803
035 $a(Aleph)000082803FED01
035 $a000082803
100 $a20001010d--------km-y0itay50------ba
101 0 $aita
105 $ay-------001yy
200 1 $aAnalyse Matricielle des Reseaux Electriques$fpar P. Le Corbeiller, traduit par G. Lehr, preface de l'edition francaise par A. Mauduit
210 $aParis$cDunod$d1954
215 $aXI, 124 p.$d21 cm
300 $aL'edition originale de cet ouvrage a paru sous le titre de : Matrix Analysis of Electric Networks
610 0 $aAnalyse Matricielle des Reseaux Electriques
676 $a512
700 1$aLe Corbeiller,$bPhilippe$040483
702 1$aLehr,$bGeorges
702 1$aMauduit,$bA.
801 0$aIT$bUNINA$gRICA$2UNIMARC
901 $aBK
912 $a990000828030403321
952 $a02 21 A 4$b1407$fFINBN
959 $aFINBN
996 $aAnalyse Matricielle des Reseaux Electriques$9347358
997 $aUNINA
DB $aING01

LEADER 01257cam a2200253Ia 4500
001 991002376159707536
008 051104s20052005sw aaa b 001 0 eng d
020 $a9179160514
035 $ab13517818-39ule_inst
040 $aDip.to Beni Culturali$bita
100 1 $aSourvinou-Inwood, Christiane$0180387
245 10$aHylas, the nymphs, Dionysos and others :$bmyth, ritual, ethnicity : Martin P. Nilsson Lecture on Greek Religion, delivered 1997 at the Swedish Institute at Athens /$cby Christiane Sourvinou-Inwood
260 $aStockholm :$bDistributor, Paul Åström,$c2005
300 $a421 p. :$bill. ;$c25 cm.
490 1 $aSkrifter utgivna av Svenska institutet i Athen. Series in 8o ;$v19 =$aActa Instituti Atheniensis Regni Sueciae. Series in 8o ;$v19,$x0081-9921
504 $aContiene bibliografia: pp. [395]-410
650 4$aMitologia greca
830 0$aSkrifter utgivna av Svenska institutet i Athen.$p8o ;$v19
907 $a.b13517818$b02-04-14$c24-04-07
912 $a991002376159707536
945 $aLE001 Per S 35 2005$g1$i2001000161980$lle001$op$pE90.00$q-$rl$s- $t0$u0$v0$w0$x0$y.i1443250x$z24-04-07
996 $aHylas, the Nymphs, Dionysos and others$91114932
997 $aUNISALENTO
998 $ale001$b24-04-07$cm$da $e-$feng$gsw $h0$i0

LEADER 02204nam 2200649Ia 450
001 9910785847903321
005 20230921010230.0
010 $a3-11-082694-1
024 7 $a10.1515/9783110826944
035 $a(CKB)2670000000252042
035 $a(EBL)935878
035 $a(SSID)ssj0000559860
035 $a(PQKBManifestationID)11353419
035 $a(PQKBTitleCode)TC0000559860
035 $a(PQKBWorkID)10568992
035 $a(PQKB)11599486
035 $a(MiAaPQ)EBC935878
035 $a(WaSeSS)Ind00011903
035 $a(DE-B1597)50858
035 $a(OCoLC)979906856
035 $a(DE-B1597)9783110826944
035 $a(Au-PeEL)EBL935878
035 $a(CaPaEBR)ebr10599644
035 $a(OCoLC)843635327
035 $a(EXLCZ)992670000000252042
100 $a20760224e19752010 uy 0
101 0 $aeng
135 $aurcn|||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 14$aThe histories /$fAgathias ; transl. with an introd. and short explanatory notes by Joseph D. Frendo
205 $aReprint 2010
210 1$aBerlin :$cWalter de Gruyter,$d1975.
215 $a1 online resource (xiii, 170 pages)
225 1 $aCorpus fontium historiae Byzantinae ;$vv. 2 A :$aSeries Berolinensis
300 $aOriginally published in 1594 under title: De imperio et rebus gestis Iustiniani Imperatoris libri quinque.
311 0 $a3-11-003357-7
320 $aIncludes bibliographical references and index.
327 $tFront matter --$tPreface --$tBook 1 --$tBook 2 --$tBook 3 --$tBook 4 --$tBook 5 --$tIndex of proper names
330 $aThe Histories
410 0$aCorpus fontium historiae Byzantinae ;$v2 A.
410 0$aCorpus fontium historiae Byzantinae.$pSeries Berolinensis.
606 $aGoths$zItaly
607 $aByzantine Empire$xHistory$yJustinian I, 527-565
615 0$aGoths
676 $a949.5/01
686 $aFK 13401$2rvk
700 $aAgathias$fd. 582.$01542160
701 $aFrendo$b Joseph D$0168417
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910785847903321
996 $aThe histories$93794650
997 $aUNINA

LEADER 06880nam 2200481 450
001 9910830539803321
005 20231207234445.0
010 $a1-394-20563-5
010 $a1-394-20561-9
035 $a(MiAaPQ)EBC30970336
035 $a(Au-PeEL)EBL30970336
035 $a(DLC) 2023047936
035 $a(DLC) 2023047937
035 $a(EXLCZ)9929038584500041
100 $a20231207d2024 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aDemystifying Deep Learning $eAn Introduction to the Mathematics of Neural Networks /$fDouglas J. Santry
205 $aFirst edition.
210 1$aHoboken, New Jersey :$cJohn Wiley & Sons, Inc.,$d[2024]
210 4$d©2024
215 $a1 online resource (259 pages)
300 $aIncludes index.
311 08$aPrint version: Santry, Douglas J. Demystifying Deep Learning Newark : John Wiley & Sons, Incorporated, c2023
320 $aIncludes bibliographical references and index.
327 $aCover -- Title Page -- Copyright -- Contents -- About the Author -- Acronyms -- Chapter 1 Introduction -- 1.1 AI/ML - Deep Learning? -- 1.2 A Brief History -- 1.3 The Genesis of Models -- 1.3.1 Rise of the Empirical Functions -- 1.3.2 The Biological Phenomenon and the Analogue -- 1.4 Numerical Computation - Computer Numbers Are Not Real -- 1.4.1 The IEEE 754 Floating Point System -- 1.4.2 Numerical Coding Tip: Think in Floating Point -- 1.5 Summary -- 1.6 Projects -- Chapter 2 Deep Learning and Neural Networks -- 2.1 Feed-Forward and Fully-Connected Artificial Neural Networks -- 2.2 Computing Neuron State -- 2.2.1 Activation Functions -- 2.3 The Feed-Forward ANN Expressed with Matrices -- 2.3.1 Neural Matrices: A Convenient Notation -- 2.4 Classification -- 2.4.1 Binary Classification -- 2.4.2 One-Hot Encoding -- 2.4.3 The Softmax Layer -- 2.5 Summary -- 2.6 Projects -- Chapter 3 Training Neural Networks -- 3.1 Preparing the Training Set: Data Preprocessing -- 3.2 Weight Initialization -- 3.3 Training Outline -- 3.4 Least Squares: A Trivial Example -- 3.5 Backpropagation of Error for Regression -- 3.5.1 The Terminal Layer (Output) -- 3.5.2 Backpropagation: The Shallower Layers -- 3.5.3 The Complete Backpropagation Algorithm -- 3.5.4 A Word on the Rectified Linear Unit (ReLU) -- 3.6 Stochastic Sine -- 3.7 Verification of a Software Implementation -- 3.8 Summary -- 3.9 Projects -- Chapter 4 Training Classifiers -- 4.1 Backpropagation for Classifiers -- 4.1.1 Likelihood -- 4.1.2 Categorical Loss Functions -- 4.2 Computing the Derivative of the Loss -- 4.2.1 Initiate Backpropagation -- 4.3 Multilabel Classification -- 4.3.1 Binary Classification -- 4.3.2 Training A Multilabel Classifier ANN -- 4.4 Summary -- 4.5 Projects -- Chapter 5 Weight Update Strategies -- 5.1 Stochastic Gradient Descent -- 5.2 Weight Updates as Iteration and Convex Optimization.
327 $a5.2.1 Newton's Method for Optimization -- 5.3 RPROP+ -- 5.4 Momentum Methods -- 5.4.1 AdaGrad and RMSProp -- 5.4.2 ADAM -- 5.5 Levenberg-Marquardt Optimization for Neural Networks -- 5.6 Summary -- 5.7 Projects -- Chapter 6 Convolutional Neural Networks -- 6.1 Motivation -- 6.2 Convolutions and Features -- 6.3 Filters -- 6.4 Pooling -- 6.5 Feature Layers -- 6.6 Training a CNN -- 6.6.1 Flatten and the Gradient -- 6.6.2 Pooling and the Gradient -- 6.6.3 Filters and the Gradient -- 6.7 Applications -- 6.8 Summary -- 6.9 Projects -- Chapter 7 Fixing the Fit -- 7.1 Quality of the Solution -- 7.2 Generalization Error -- 7.2.1 Bias -- 7.2.2 Variance -- 7.2.3 The Bias-Variance Trade-off -- 7.2.4 The Bias-Variance Trade-off in Context -- 7.2.5 The Test Set -- 7.3 Classification Performance -- 7.4 Regularization -- 7.4.1 Forward Pass During Training -- 7.4.2 Forward Pass During Normal Inference -- 7.4.3 Backpropagation of Error -- 7.5 Advanced Normalization -- 7.5.1 Batch Normalization -- 7.5.2 Layer Normalization -- 7.6 Summary -- 7.7 Projects -- Chapter 8 Design Principles for a Deep Learning Training Library -- 8.1 Computer Languages -- 8.2 The Matrix: Crux of a Library Implementation -- 8.2.1 Memory Access and Modern CPU Architectures -- 8.2.2 Designing Matrix Computations -- 8.2.2.1 Convolutions as Matrices -- 8.3 The Framework -- 8.4 Summary -- 8.5 Projects -- Chapter 9 Vistas -- 9.1 The Limits of ANN Learning Capacity -- 9.2 Generative Adversarial Networks -- 9.2.1 GAN Architecture -- 9.2.2 The GAN Loss Function -- 9.3 Reinforcement Learning -- 9.3.1 The Elements of Reinforcement Learning -- 9.3.2 A Trivial RL Training Algorithm -- 9.4 Natural Language Processing Transformed -- 9.4.1 The Challenges of Natural Language -- 9.4.2 Word Embeddings -- 9.4.3 Attention -- 9.4.4 Transformer Blocks -- 9.4.5 Multi-Head Attention -- 9.4.6 Transformer Applications.
327 $a9.5 Neural Turing Machines -- 9.6 Summary -- 9.7 Projects -- Appendix A Mathematical Review -- A.1 Linear Algebra -- A.1.1 Vectors -- A.1.2 Matrices -- A.1.3 Matrix Properties -- A.1.4 Linear Independence -- A.1.5 The QR Decomposition -- A.1.6 Least Squares -- A.1.7 Eigenvalues and Eigenvectors -- A.1.8 Hadamard Operations -- A.2 Basic Calculus -- A.2.1 The Product Rule -- A.2.2 The Chain Rule -- A.2.3 Multivariable Functions -- A.2.4 Taylor Series -- A.3 Advanced Matrices -- A.4 Probability -- Glossary -- References -- Index -- EULA.
330 $a"Artificial Neural Networks (ANNs) are an incredibly successful subfield of artificial intelligence (AI). ANNs are everywhere, and their introduction to the world is accelerating as new applications for ANNs are launched. No profession is exempt: medicine, law, financial services and science. The robot revolution threatened blue collar jobs in the 1970s. The AI revolution threatens white collar jobs. ANNs are successfully helping medical doctors detect and predict disease. Language comprehension ANNs based on transformers are reading legal contracts and making recommendations. Scientists use ANNs to understand experimental data, model protein folding and model hurricanes - and it is just beginning. AI is on the agenda, in the news (for good reasons - and bad), discussed by think tanks and government policy makers. The AI they are usually discussing is based on ANNs. ANN techniques are specializing as they adapt to natural language processing, image recognition, problem solving and generative applications, but they still share certain canonical properties."--$cProvided by publisher.
606 $aDeep learning (Machine learning)
615 0$aDeep learning (Machine learning)
676 $a006.310151
700 $aSantry$b Douglas J.$01634567
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910830539803321
996 $aDemystifying Deep Learning$93974839
997 $aUNINA
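
For anyone working with dumps like the ones above programmatically, here is a minimal Python sketch of how a single field line can be split into its tag, indicators, and subfields. It assumes the plain-text convention used in this printout, where "$" marks each subfield delimiter (a real ISO 2709 export uses a control character instead); the helper name parse_field and the returned tuple layout are illustrative, not part of any MARC library.

    # Minimal sketch under the assumptions above: split one field line from
    # the plain-text dumps into (tag, indicators, subfields). "$" is taken
    # to be the subfield delimiter exactly as printed; parse_field is a
    # hypothetical helper, not an API of any real MARC library.
    from typing import List, Tuple

    def parse_field(line: str) -> Tuple[str, str, List[Tuple[str, str]]]:
        head, _, rest = line.partition("$")     # head = tag + indicators
        tag = head[:3]                          # e.g. "210"
        indicators = head[3:].strip()           # e.g. "1", "14", or "" if none
        subfields = [(chunk[0], chunk[1:])      # code letter, then its value
                     for chunk in rest.split("$") if chunk]
        return tag, indicators, subfields

    print(parse_field("210 $aParis$cDunod$d1954"))
    # -> ('210', '', [('a', 'Paris'), ('c', 'Dunod'), ('d', '1954')])

The same split works for fields with indicators, e.g. parse_field("700 1$aLe Corbeiller,$bPhilippe$040483") yields tag "700", indicator "1", and subfields a, b, and 0 (the authority record number).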