Advanced Applied Deep Learning : Convolutional Neural Networks and Object Detection / by Umberto Michelucci
| Author | Michelucci, Umberto |
| Edition | [1st ed. 2019.] |
| Published | Berkeley, CA : Apress, 2019 |
| Physical description | 1 online resource (XVIII, 285 p., 88 illus., 28 illus. in color) |
| Dewey classification | 006.3 |
| Subjects | Artificial intelligence; Python (Computer program language); Open source software; Artificial Intelligence; Python; Open Source |
| ISBN | 9781484249765; 1484249763 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Publication language | eng |
| Contents | Chapter 1: Introduction and Development Environment Setup -- Chapter 2: TensorFlow: advanced topics -- Chapter 3: Fundamentals of Convolutional Neural Networks -- Chapter 4: Advanced CNNs and Transfer Learning -- Chapter 5: Cost functions and style transfer -- Chapter 6: Object classification - an introduction -- Chapter 7: Object localization - an implementation in Python -- Chapter 8: Histology Tissue Classification. |
| Record no. | UNINA-9910349527903321 |
| Held at | Univ. Federico II |
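As a purely illustrative companion to the record above (not code from the book), the sketch below shows the kind of small convolutional classifier covered in the chapters on CNN fundamentals. The MNIST dataset, layer sizes, and training settings are assumptions made only for this demonstration.

```python
# Hypothetical illustration only: a small convolutional classifier in Keras.
# Dataset, layer sizes, and training settings are assumptions for this sketch.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel axis, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```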
Applied Deep Learning : A Case-Based Approach to Understanding Deep Neural Networks / by Umberto Michelucci
| Author | Michelucci, Umberto |
| Edition | [1st ed. 2018.] |
| Published | Berkeley, CA : Apress, 2018 |
| Physical description | 1 online resource (425 pages) |
| Dewey classification | 006.31 |
| Subjects | Artificial intelligence; Python (Computer program language); Open source software; Computer programming; Big data; Artificial Intelligence; Python; Open Source; Big Data |
| ISBN | 9781484237908; 1484237900 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Publication language | eng |
| Contents | Chapter 1: Introduction -- Chapter 2: Single Neurons -- Chapter 3: Fully connected Neural Network with more neurons -- Chapter 4: Neural networks error analysis -- Chapter 5: Dropout technique -- Chapter 6: Hyper parameters tuning -- Chapter 7: Tensorflow and optimizers (Gradient descent, Adam, momentum, etc.) -- Chapter 8: Convolutional Networks and image recognition -- Chapter 9: Recurrent Neural Networks -- Chapter 10: A practical COMPLETE example from scratch (put everything together) -- Chapter 11: Logistic regression implemented from scratch in Python without libraries. |
| Variant titles | Case-based approach to understanding neural networks |
| Record no. | UNINA-9910300755003321 |
| Held at | Univ. Federico II |
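Chapter 11 in the contents note above mentions implementing logistic regression from scratch. The sketch below is not the book's implementation; it is a minimal, hedged illustration of the same idea using NumPy and plain gradient descent, with synthetic data, learning rate, and epoch count chosen only for the example.

```python
# Hedged sketch only: logistic regression trained with plain gradient descent.
# Synthetic data and hyper-parameters are assumptions, not the book's code.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)             # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)    # cross-entropy gradient w.r.t. weights
    grad_b = np.mean(p - y)            # cross-entropy gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The same loop generalizes to mini-batch or stochastic updates by slicing `X` and `y` before each gradient step.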
Applied deep learning with TensorFlow 2 : learn to implement advanced deep learning techniques with Python / Umberto Michelucci
| Author | Michelucci, Umberto |
| Edition | [2nd ed.] |
| Published | New York, NY : Apress, [2022] |
| Physical description | 1 online resource (397 pages) |
| Dewey classification | 006.32 |
| Series | ITpro collection |
| Subjects | Python (Computer program language); Machine learning; Neural networks (Computer science) |
| ISBN | 1-5231-5107-2; 1-4842-8020-2 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Publication language | eng |
| Contents |
Intro -- Table of Contents -- About the Author -- About the Contributing Author -- About the Technical Reviewer -- Acknowledgments -- Foreword -- Introduction -- Chapter 1: Optimization and Neural Networks -- A Basic Understanding of Neural Networks -- The Problem of Learning -- A First Definition of Learning -- [Advanced Section] Assumption in the Formulation -- A Definition of Learning for Neural Networks -- Constrained vs. Unconstrained Optimization -- [Advanced Section] Reducing a Constrained Problem to an Unconstrained Optimization Problem -- Absolute and Local Minima of a Function -- Optimization Algorithms -- Line Search and Trust Region -- Steepest Descent -- The Gradient Descent Algorithm -- Choosing the Right Learning Rate -- Variations of GD -- Mini-Batch GD -- Stochastic GD -- How to Choose the Right Mini-Batch Size -- [Advanced Section] SGD and Fractals -- Exercises -- Conclusion -- Chapter 2: Hands-on with a Single Neuron -- A Short Overview of a Neuron's Structure -- A Short Introduction to Matrix Notation -- An Overview of the Most Common Activation Functions -- Identity Function -- Sigmoid Function -- Tanh (Hyperbolic Tangent) Activation Function -- ReLU (Rectified Linear Unit) Activation Function -- Leaky ReLU -- The Swish Activation Function -- Other Activation Functions -- How to Implement a Neuron in Keras -- Python Implementation Tips: Loops and NumPy -- Linear Regression with a Single Neuron -- The Dataset for the Real-World Example -- Dataset Splitting -- Linear Regression Model -- Keras Implementation -- The Model's Learning Phase -- Model's Performance Evaluation on Unseen Data -- Logistic Regression with a Single Neuron -- The Dataset for the Classification Problem -- Dataset Splitting -- The Logistic Regression Model -- Keras Implementation -- The Model's Learning Phase -- The Model's Performance Evaluation.
Conclusion -- Exercises -- References -- Chapter 3: Feed-Forward Neural Networks -- A Short Review of Network's Architecture and Matrix Notation -- Output of Neurons -- A Short Summary of Matrix Dimensions -- Example: Equations for a Network with Three Layers -- Hyper-Parameters in Fully Connected Networks -- A Short Review of the Softmax Activation Function for Multiclass Classifications -- A Brief Digression: Overfitting -- A Practical Example of Overfitting -- Basic Error Analysis -- Implementing a Feed-Forward Neural Network in Keras -- Multiclass Classification with Feed-Forward Neural Networks -- The Zalando Dataset for the Real-World Example -- Modifying Labels for the Softmax Function: One-Hot Encoding -- The Feed-Forward Network Model -- Keras Implementation -- Gradient Descent Variations Performances -- Comparing the Variations -- Examples of Wrong Predictions -- Weight Initialization -- Adding Many Layers Efficiently -- Advantages of Additional Hidden Layers -- Comparing Different Networks -- Tips for Choosing the Right Network -- Estimating the Memory Requirements of Models -- General Formula for the Memory Footprint -- Exercises -- References -- Chapter 4: Regularization -- Complex Networks and Overfitting -- What Is Regularization -- About Network Complexity -- ℓp Norm -- ℓ2 Regularization -- Theory of ℓ2 Regularization -- Keras Implementation -- ℓ1 Regularization -- Theory of ℓ1 Regularization and Keras Implementation -- Are the Weights Really Going to Zero? -- Dropout -- Early Stopping -- Additional Methods -- Exercises -- References -- Chapter 5: Advanced Optimizers -- Available Optimizers in Keras in TensorFlow 2.5 -- Advanced Optimizers -- Exponentially Weighted Averages -- Momentum -- RMSProp -- Adam -- Comparison of the Optimizers' Performance -- Small Coding Digression -- Which Optimizer Should You Use?. 
Chapter 6: Hyper-Parameter Tuning -- Black-Box Optimization -- Notes on Black-Box Functions -- The Problem of Hyper-Parameter Tuning -- Sample Black-Box Problem -- Grid Search -- Random Search -- Coarse to Fine Optimization -- Bayesian Optimization -- Nadaraya-Watson Regression -- Gaussian Process -- Stationary Process -- Prediction with Gaussian Processes -- Acquisition Function -- Upper Confidence Bound (UCB) -- Example -- Sampling on a Logarithmic Scale -- Hyper-Parameter Tuning with the Zalando Dataset -- A Quick Note about the Radial Basis Function -- Exercises -- References -- Chapter 7: Convolutional Neural Networks -- Kernels and Filters -- Convolution -- Examples of Convolution -- Pooling -- Padding -- Building Blocks of a CNN -- Convolutional Layers -- Pooling Layers -- Stacking Layers Together -- An Example of a CNN -- Conclusion -- Exercises -- References -- Chapter 8: A Brief Introduction to Recurrent Neural Networks -- Introduction to RNNs -- Notation -- The Basic Idea of RNNs -- Why the Name Recurrent -- Learning to Count -- Conclusion -- Further Readings -- Chapter 9: Autoencoders -- Introduction -- Regularization in Autoencoders -- Feed-Forward Autoencoders -- Activation Function of the Output Layer -- ReLU -- Sigmoid -- The Loss Function -- Mean Square Error -- Binary Cross-Entropy -- The Reconstruction Error -- Example: Reconstructing Handwritten Digits -- Autoencoder Applications -- Dimensionality Reduction -- Equivalence with PCA -- Classification -- Classification with Latent Features -- The Curse of Dimensionality: A Small Detour -- Anomaly Detection -- Model Stability: A Short Note -- Denoising Autoencoders -- Beyond FFA: Autoencoders with Convolutional Layers -- Implementation in Keras -- Exercises -- Further Readings -- Chapter 10: Metric Analysis -- Human-Level Performance and Bayes Error. A Short Story About Human-Level Performance -- Human-Level Performance on MNIST -- Bias -- Metric Analysis Diagram -- Training Set Overfitting -- Test Set -- How to Split Your Dataset -- Unbalanced Class Distribution: What Can Happen -- Datasets with Different Distributions -- k-fold Cross Validation -- Manual Metric Analysis: An Example -- Exercises -- References -- Chapter 11: Generative Adversarial Networks (GANs) -- Introduction to GANs -- Training Algorithm for GANs -- A Practical Example with Keras and MNIST -- A Note on Training -- Conditional GANs -- Conclusion -- Appendix A: Introduction to Keras -- Some History -- Understanding the Sequential Model -- Understanding Keras Layers -- Setting the Activation Function -- Using Functional APIs -- Specifying Loss Functions and Metrics -- Putting It All Together and Training -- Modeling evaluate() and predict () -- Using Callback Functions -- Saving and Loading Models -- Saving Your Weights Manually -- Saving the Entire Model -- Conclusion -- Appendix B: Customizing Keras -- Customizing Callback Classes -- Example of a Custom Callback Class -- Custom Training Loops -- Calculating Gradients -- Custom Training Loop for a Neural Network -- Index. |
| Record no. | UNINA-9910556881503321 |
| Held at | Univ. Federico II |
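The contents note above lists "How to Implement a Neuron in Keras" and "Linear Regression with a Single Neuron". Purely as an illustration of that listed topic, and not as code taken from the book, here is a minimal single-neuron linear regression in Keras; the synthetic data and optimizer settings are assumptions.

```python
# Hedged sketch only: a single Dense unit in Keras fitting y = w*x + b.
# Synthetic data and training settings are assumptions for this example.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 3.0 * X[:, 0] + 0.5 + rng.normal(scale=0.05, size=256).astype("float32")

# One Dense unit with no activation is exactly a linear model y = w*x + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)

w, b = model.layers[0].get_weights()
print(f"learned weight ~ {w[0, 0]:.2f}, bias ~ {b[0]:.2f}")  # close to 3.0 and 0.5
```

Swapping the loss for binary cross-entropy and adding a sigmoid activation turns the same single neuron into the logistic-regression setup also listed in the contents.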
Fundamental Mathematical Concepts for Machine Learning in Science
| Author | Michelucci, Umberto |
| Edition | [1st ed.] |
| Published | Cham : Springer International Publishing AG, 2024 |
| Physical description | 1 online resource (259 pages) |
| ISBN | 3-031-56431-6 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Publication language | eng |
| Contents |
Intro -- Preface -- Acknowledgements -- Contents -- Acronyms -- Chapter 1 Introduction -- 1.1 Introduction -- 1.2 Choice of Topics -- 1.3 Prerequisites -- 1.4 Book Structure -- 1.5 About This Book -- 1.6 Warnings, Info and Examples -- 1.7 Optional and Advanced Material -- 1.8 Further Exploration and Reading -- 1.9 References -- 1.10 Let us Start -- References -- Chapter 2 Machine Learning: History and Terminology -- 2.1 Brief History of Machine Learning -- 2.2 Machine Learning in Science -- 2.3 Types of Machine Learning -- References -- Chapter 3 Calculus and Optimisation for Machine Learning -- 3.1 Motivation -- 3.2 Concept of Limit -- 3.3 Derivative and its Properties -- 3.4 Partial Derivative -- 3.5 Gradient -- 3.6 Extrema of a Function -- 3.7 Optimisation for Machine Learning -- 3.8 Introduction to Optimisation for Neural Networks -- 3.8.1 First Definition of Learning -- 3.8.2 Constrained vs. Unconstrained Optimisation -- 3.9 Optimization Algorithms -- 3.9.1 Line Search and Trust Region Approaches -- 3.9.2 Steepest Descent -- 3.9.3 Additional Directions for the Line Search Approach -- 3.9.4 The Gradient Descent Algorithm -- 3.9.5 Choosing the Right Learning Rate -- 3.9.6 Variations of Gradient Descent -- 3.9.6.1 Mini-batch Gradient Descent -- 3.9.6.2 Stochastic Gradient Descent -- 3.9.7 How to Choose the Right Mini-batch Size -- 3.9.8 Stochastic Gradient Descent and Fractals -- 3.10 Conclusions -- References -- Chapter 4 Linear Algebra -- 4.1 Motivation -- 4.2 Vectors -- 4.2.1 Geometrical Interpretation of Vectors -- 4.2.2 Norm of Vectors -- 4.2.3 Dot Product -- 4.2.4 Cross Product -- 4.3 Matrices -- 4.3.1 Sum, Subtraction and Transpose -- 4.3.2 Multiplication of Matrices and Vectors -- 4.3.3 Inverse and Trace -- 4.3.4 Determinant -- 4.3.5 Matrix Calculus and Linear Regression -- 4.4 Relevance for Machine Learning.
4.5 Eigenvectors and Eigenvalues -- 4.6 Principal Component Analysis -- 4.6.1 Basis of a Vector Space -- 4.6.2 Definition of a Vector Space -- 4.6.3 Linear Transformations (maps) -- 4.6.4 PCA Formalisation -- 4.6.5 Covariance Matrix -- 4.6.6 Overview of Assumptions -- 4.6.7 PCA with Eigenvectors and Eigenvalues -- 4.6.8 One Implementation Limitation -- References -- Chapter 5 Statistics and Probability for Machine Learning -- 5.1 Motivation -- 5.2 Random Experiments and Variables -- 5.3 Algebra of Sets -- 5.4 Probability -- 5.4.1 Relative Frequency Interpretation of Probability -- 5.4.2 Probability as a Set Function -- 5.4.3 Axiomatic Definition of Probability -- 5.4.4 Properties of Probability Functions -- 5.5 The Softmax Function -- 5.5.1 Softmax Range of Applications -- 5.6 Some Theorems about Probability Functions -- 5.7 Conditional Probability -- 5.8 Bayes Theorem -- 5.9 Bayes Error -- 5.10 Naïve Bayes Classifier -- 5.11 Distribution Functions -- 5.11.1 Cumulative Distribution Function (CDF) -- 5.11.2 Probability Density (PDF) and Mass Functions (PMF) -- 5.12 Expected Values and its Properties -- 5.13 Variance and its Properties -- 5.13.1 Properties -- 5.14 Normal Distribution -- 5.15 Other Distributions -- 5.16 The MSE and its Distribution -- 5.16.1 Moment Generating Functions -- 5.16.2 Central Limit Theorem -- 5.17 Central Limit Theorem without Mathematics -- References -- Chapter 6 Sampling Theory (a.k.a. Creating a Dataset Properly) -- 6.1 Introduction -- 6.2 Research Questions and Hypotheses -- 6.2.1 Research Questions -- 6.2.2 Hypothesis -- 6.2.3 Relevance of Hypothesis and Research Questions in Machine Learning -- 6.3 Survey Populations -- 6.4 Survey Samples -- 6.4.1 Non-probability Sampling -- 6.4.2 Probability Sampling -- 6.5 Stratification and Clustering -- 6.6 Random Sampling without Replacement -- 6.7 Random Sampling with Replacement. 6.8 Bootstrapping -- 6.9 Random Stratified Sampling -- 6.10 Sampling in Machine Learning -- References -- Chapter 7 Model Validation and Selection -- 7.1 Introduction -- 7.2 Bias-Variance Tradeoff -- 7.3 Bias-Variance Tradeoff - a Mathematical Discussion -- 7.4 High-Variance Low-Bias regime -- 7.5 Low-Variance High-Bias regime -- 7.6 Overfitting and Underfitting -- 7.7 The Simple Split Approach (a.k.a. Hold-out Approach) -- 7.8 Data Leakage -- 7.8.1 Data Leakage with Correlated Observations -- 7.8.2 Stratified Sampling -- 7.9 Monte Carlo Cross-Validation -- 7.10 Monte-Carlo Cross Validation with Bootstrapping -- 7.11 k-Fold Cross Validation -- 7.12 The Leave-One-Out Approach -- 7.13 Choosing the Cross-Validation Approach -- 7.14 Model Selection -- 7.14.1 Model Selection with Supervised Learning -- 7.14.2 Model Selection with Unsupervised Learning -- 7.15 Qualitative Criteria for Model Selection -- References -- Chapter 8 Unbalanced Datasets and Machine Learning Metrics -- 8.1 Introduction -- 8.2 A Simple Example -- 8.3 Approaches to Deal with Unbalanced Datasets -- 8.3.1 Oversampling -- 8.3.2 (Random) Undersampling -- 8.4 Synthetic Minority Oversampling TEchnique (SMOTE) -- 8.5 Summary of Methods for Dealing with Unbalanced Datasets -- 8.6 Important Metrics -- 8.6.1 The Notion of Metric -- 8.6.1.1 .The MSE is a Metric -- 8.6.1.2 . 
1 − is a Metric -- 8.6.2 Confusion Matrix -- 8.6.3 Sensitivity and Specificity -- 8.6.4 Precision -- 8.6.5 F-score -- 8.6.6 Balanced Accuracy -- 8.6.7 Receiver Operating Characteristic (ROC) Curve -- 8.6.8 Probability Interpretation of the AUC -- References -- Chapter 9 Hyper-parameter Tuning -- 9.1 Introduction -- 9.2 Black-box Optimisation -- 9.2.1 Notes on Black-box Functions -- 9.3 The Problem of Hyper-parameter Tuning -- 9.4 Sample Black-box Problem -- 9.4.1 Grid Search -- 9.4.2 Random Search -- 9.4.3 Coarse to Fine Optimisation -- 9.4.4 Sampling on a Logarithmic Scale -- 9.5 Overview of Approaches for Hyper-parameter Tuning -- References -- Chapter 10 Feature Importance and Selection -- 10.1 Introduction -- 10.2 Feature Importance Taxonomy -- 10.2.1 Filter Methods -- 10.2.2 Wrapper Methods -- 10.2.3 Embedded Methods -- 10.3 Forward Feature Selection -- 10.3.1 Forward Feature Selection Practical Example -- 10.4 Backward Feature Elimination -- 10.5 Permutation Feature Importance -- 10.6 Information Content Elimination -- 10.7 Summary -- 10.8 SHapley Additive exPlanations (SHAP) -- References -- Index. |
| Record no. | UNINA-9910861099003321 |
| Held at | Univ. Federico II |
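Chapter 4 in the contents note above covers PCA with eigenvectors and eigenvalues. The following is a minimal illustrative sketch of that idea in NumPy, not code from the book; the simulated two-dimensional data is an assumption made purely for the demonstration.

```python
# Hedged sketch only: PCA via eigen-decomposition of the covariance matrix.
# The simulated 2-D data is an assumption made for this demonstration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 1.5], [1.5, 1.0]], size=500)

X_centered = X - X.mean(axis=0)            # PCA assumes centred data
cov = np.cov(X_centered, rowvar=False)     # 2 x 2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh handles symmetric matrices

order = np.argsort(eigvals)[::-1]          # sort components by explained variance
components = eigvecs[:, order]
explained = eigvals[order] / eigvals.sum()

scores = X_centered @ components           # coordinates in the principal basis
print("projected data shape:", scores.shape)
print("leading principal component:", np.round(components[:, 0], 3))
print("explained variance ratio:  ", np.round(explained, 3))
```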
Statistics for Scientists : A Concise Guide for Data-driven Research / by Umberto Michelucci
| Author | Michelucci, Umberto |
| Edition | [1st ed. 2025.] |
| Published | Cham : Springer Nature Switzerland : Imprint: Springer, 2025 |
| Physical description | 1 online resource |
| Dewey classification | 519.5 |
| Subjects | Mathematical statistics - Data processing; Computer science - Mathematics; Mathematical statistics; Statistics; Statistics and Computing; Probability and Statistics in Computer Science; Statistics in Engineering, Physics, Computer Science, Chemistry and Earth Sciences |
| ISBN | 9783031781476; 9783031781469 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Publication language | eng |
| Contents | Introduction to Statistics -- Types of Data -- Data Collection Methods (Sampling Theory) -- Measures of Central Tendency -- Measures of Dispersion -- Measures of Positions -- Outliers -- Introduction to Distributions -- Skewness, Kurtosis and Modality -- Data Visualisation -- Confidence Intervals -- Hypothesis Testing -- Correlation and Linear Regression -- Statistical Project - Steps and Process -- Appendix A - Partitioning of the Ordinary Least Square Variance -- Appendix B - Big-O and Little-o Notation. |
| Record no. | UNINA-9911016076003321 |
| Held at | Univ. Federico II |
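The contents note above lists a chapter on confidence intervals. As a hedged illustration of that topic only, not material from the book, here is a minimal t-based 95% confidence interval for a sample mean using NumPy and SciPy; the simulated measurements are an assumption.

```python
# Hedged sketch only: a 95% t-based confidence interval for a sample mean.
# The simulated measurements are an assumption made for this demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(loc=10.0, scale=2.0, size=30)   # 30 simulated measurements

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))     # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)     # two-sided 95% critical value

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mean = {mean:.2f}, 95% CI = [{lower:.2f}, {upper:.2f}]")
```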