Explainable AI: Interpreting, Explaining and Visualizing Deep Learning [electronic resource] / edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller
Edition | [1st ed. 2019.]
Publisher | Cham : Springer International Publishing : Imprint: Springer, 2019
Physical description | 1 online resource (XI, 439 p. 152 illus., 119 illus. in color.)
Classification | 006.32
Series | Lecture Notes in Artificial Intelligence
Topical subjects |
Artificial intelligence
Optical data processing
Computers
Computer security
Computer organization
Artificial Intelligence
Image Processing and Computer Vision
Computing Milieux
Systems and Data Security
Computer Systems Organization and Communication Networks
ISBN | 3-030-28954-0 |
Format | Printed material
Bibliographic level | Monograph
Language of publication | eng
Contents note | Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
Record no. | UNISA-996466320103316
Held at | Univ. di Salerno
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning / edited by Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller
Edition | [1st ed. 2019.]
Publisher | Cham : Springer International Publishing : Imprint: Springer, 2019
Physical description | 1 online resource (XI, 439 p. 152 illus., 119 illus. in color.)
Classification | 006.32 ; 006.3
Series | Lecture Notes in Artificial Intelligence
Topical subjects |
Artificial intelligence
Optical data processing
Computers
Computer security
Computer organization
Artificial Intelligence
Image Processing and Computer Vision
Computing Milieux
Systems and Data Security
Computer Systems Organization and Communication Networks
ISBN | 3-030-28954-0 |
Format | Printed material
Bibliographic level | Monograph
Language of publication | eng
Contents note | Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
Record no. | UNINA-9910349299503321
Held at | Univ. Federico II
Neural Networks: Tricks of the Trade [electronic resource] / edited by Grégoire Montavon, Geneviève Orr, Klaus-Robert Müller
Edition | [2nd ed. 2012.]
Publisher | Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2012
Physical description | 1 online resource (XII, 769 p. 223 illus.)
Classification | 006.32
Series | Theoretical Computer Science and General Issues
Topical subjects |
Computer science
Artificial intelligence
Algorithms
Pattern recognition systems
Dynamics
Nonlinear theories
Application software
Theory of Computation
Artificial Intelligence
Automated Pattern Recognition
Applied Dynamical Systems
Computer and Information Systems Applications
ISBN | 3-642-35289-8 |
Format | Printed material
Bibliographic level | Monograph
Language of publication | eng
Contents note | Introduction -- Preface on Speeding Learning -- 1. Efficient BackProp -- Preface on Regularization Techniques to Improve Generalization -- 2. Early Stopping — But When? -- 3. A Simple Trick for Estimating the Weight Decay Parameter -- 4. Controlling the Hyperparameter Search in MacKay's Bayesian Neural Network Framework -- 5. Adaptive Regularization in Neural Network Modeling -- 6. Large Ensemble Averaging -- Preface on Improving Network Models and Algorithmic Tricks -- 7. Square Unit Augmented, Radially Extended, Multilayer Perceptrons -- 8. A Dozen Tricks with Multitask Learning -- 9. Solving the Ill-Conditioning in Neural Network Learning -- 10. Centering Neural Network Gradient Factors -- 11. Avoiding Roundoff Error in Backpropagating Derivatives -- 12. Transformation Invariance in Pattern Recognition – Tangent Distance and Tangent Propagation -- 13. Combining Neural Networks and Context-Driven Search for On-line, Printed Handwriting Recognition in the Newton -- 14. Neural Network Classification and Prior Class Probabilities -- 15. Applying Divide and Conquer to Large Scale Pattern Recognition Tasks -- Preface on Tricks for Time Series -- 16. Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions -- 17. How to Train Neural Networks -- Preface on Big Learning in Deep Neural Networks -- 18. Stochastic Gradient Descent Tricks -- 19. Practical Recommendations for Gradient-Based Training of Deep Architectures -- 20. Training Deep and Recurrent Networks with Hessian-Free Optimization -- 21. Implementing Neural Networks Efficiently -- Preface on Better Representations: Invariant, Disentangled and Reusable -- 22. Learning Feature Representations with K-Means -- 23. Deep Big Multilayer Perceptrons for Digit Recognition -- 24. A Practical Guide to Training Restricted Boltzmann Machines -- 25. Deep Boltzmann Machines and the Centering Trick -- 26. Deep Learning via Semi-supervised Embedding -- Preface on Identifying Dynamical Systems for Forecasting and Control -- 27. A Practical Guide to Applying Echo State Networks -- 28. Forecasting with Recurrent Neural Networks: 12 Tricks -- 29. Solving Partially Observable Reinforcement Learning Problems with Recurrent Neural Networks -- 30. 10 Steps and Some Tricks to Set up Neural Reinforcement Controllers.
Record no. | UNISA-996466274203316
Held at | Univ. di Salerno