Adversarial machine learning : attack surfaces, defence mechanisms, learning theories in artificial intelligence / Aneesh Sreevallabh Chivukula [and four others]
Author Sreevallabh Chivukula, Aneesh
Edition [1st ed. 2023.]
Publication/distribution/printing Cham, Switzerland : Springer Nature Switzerland AG, [2023]
Physical description 1 online resource (314 pages)
Discipline 005.8
Topical subject Computer security
Deep learning (Machine learning)
ISBN 3-030-99772-3
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Adversarial Machine Learning -- Adversarial Deep Learning -- Security and Privacy in Adversarial Learning -- Game-Theoretical Attacks with Adversarial Deep Learning Models -- Physical Attacks in the Real World -- Adversarial Defense Mechanisms -- Adversarial Learning for Privacy Preservation.
Record no. UNINA-9910678248603321
Find it here: Univ. Federico II
Adversarial machine learning : attack surfaces, defence mechanisms, learning theories in artificial intelligence / Aneesh Sreevallabh Chivukula [and four others]
Author Sreevallabh Chivukula, Aneesh
Edition [1st ed. 2023.]
Publication/distribution/printing Cham, Switzerland : Springer Nature Switzerland AG, [2023]
Physical description 1 online resource (314 pages)
Discipline 005.8
Topical subject Computer security
Deep learning (Machine learning)
ISBN 3-030-99772-3
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Adversarial Machine Learning -- Adversarial Deep Learning -- Security and Privacy in Adversarial Learning -- Game-Theoretical Attacks with Adversarial Deep Learning Models -- Physical Attacks in the Real World -- Adversarial Defense Mechanisms -- Adversarial Learning for Privacy Preservation.
Record no. UNISA-996547955603316
Find it here: Univ. di Salerno
Applied deep learning : tools, techniques, and implementation / Paul Fergus, Carl Chalmers
Author Fergus, Paul
Publication/distribution/printing Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (355 pages)
Discipline 006.31
Series Computational Intelligence Methods and Applications
Topical subject Deep learning (Machine learning)
ISBN 3-031-04420-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Acknowledgements -- Contents -- List of Figures -- List of Tables -- Part I: Introduction and Overview -- Chapter 1: Introduction -- 1.1 Artificial Intelligence, Machine Learning, Deep Learning -- 1.1.1 Artificial Intelligence -- 1.1.2 Machine Learning -- 1.1.3 Deep Learning -- 1.1.4 How they Come Together -- 1.2 Artificial Intelligence Is Driving Innovation -- 1.2.1 Transforming Healthcare -- 1.2.2 Protecting Wildlife -- 1.2.3 Securing the Environment -- 1.3 Tools, Frameworks and Hardware -- 1.3.1 Building Intelligent Applications -- 1.3.2 Python, Notebooks and Environments -- 1.3.3 Pre-Processing -- 1.3.4 Machine Learning -- 1.3.5 Deep Learning -- 1.3.6 Inferencing -- 1.4 How this Book Is Organised -- 1.5 Who Should Read this Book -- 1.6 Summary -- References -- Part II: Foundations of Machine Learning -- Chapter 2: Fundamentals of Machine Learning -- 2.1 What Is Machine Learning? -- 2.1.1 Formal and Non-Formal Definition -- 2.1.2 How AI and Machine Learning Differs from Conventional Software Development -- 2.1.2.1 Rewriting the Rules -- 2.1.2.2 Intelligent Decision Making -- 2.2 Machine Learning Tribes -- 2.2.1 Connectionists -- 2.2.2 Evolutionists -- 2.2.3 Bayesians -- 2.2.4 Symbolists -- 2.2.5 Analogists -- 2.3 Data Management -- 2.3.1 Data Types and Data Objects -- 2.3.1.1 Numerical -- 2.3.1.2 Textual -- 2.3.1.3 Categorical -- 2.3.1.4 Timeseries -- 2.3.2 Data Structure -- 2.3.2.1 Data Objects -- 2.3.3 Datasets -- 2.3.4 Exploratory Data Analysis -- 2.3.4.1 What Is Exploratory Data Analysis -- 2.3.4.2 Data Distributions -- 2.3.4.3 Validate Assumptions -- 2.3.4.4 Feature Engineering -- 2.4 Learning Problems -- 2.4.1 Supervised Machine Learning -- 2.4.2 Semi-Supervised Machine Learning -- 2.4.3 Un-Supervised Machine Learning -- 2.4.4 Regression -- 2.4.5 Reinforcement Learning -- 2.5 Evaluating Machine Learning Models.
2.6 Summary -- References -- Chapter 3: Supervised Learning -- 3.1 Basic Concepts -- 3.2 Supervised Learning Tasks -- 3.2.1 Data Extraction -- 3.2.2 Data Preparation -- 3.2.2.1 Data Size -- 3.2.2.2 Missing Data -- 3.2.2.3 Textual Data -- One Hot Encoding -- 3.2.2.4 Value Ranges (Normalisation and Scaling) -- Scaling -- Normalisation -- Standardisation -- 3.2.2.5 Distribution -- 3.2.2.6 Class Balance -- 3.2.2.7 Correlation Between Features -- 3.2.3 Feature Engineering -- 3.2.3.1 Feature Selection -- 3.2.3.2 Dimensionality Reduction -- 3.2.4 Selecting a Training Algorithm -- 3.3 Supervised Algorithms -- 3.3.1 Linear Regression -- 3.3.2 Logistic Regression -- 3.3.3 Linear Discriminate Analysis -- 3.3.4 Support Vector Machine -- 3.3.5 Random Forest -- 3.3.6 Naive Bayes -- 3.3.7 K-Nearest Neighbours -- 3.4 Summary -- References -- Chapter 4: Un-Supervised Learning -- 4.1 Basic Concepts -- 4.2 Clustering -- 4.2.1 Hierarchical Clustering -- 4.2.2 K-Means -- 4.2.3 Mixture Models -- 4.2.4 DBSCAN -- 4.2.5 Optics Algorithm -- 4.3 Principal Component Analysis -- 4.4 Association Rule Mining -- 4.5 Summary -- References -- Chapter 5: Performance Evaluation Metrics -- 5.1 Introduction to Model Evaluation -- 5.1.1 Evaluation Challenges -- 5.1.2 Taxonomy of Classifier Evaluation Metrics -- 5.2 Classification Accuracy -- 5.3 Train, Test and Validation Sets -- 5.4 Underfitting and Overfitting -- 5.5 Supervised Learning Evaluation Metrics -- 5.5.1 Confusion Matrix -- 5.5.1.1 Accuracy -- 5.5.1.2 Precision -- 5.5.1.3 Recall (Sensitivity) -- 5.5.1.4 Specificity -- 5.5.1.5 False Positive Rate -- 5.5.1.6 F1-Score -- 5.5.2 Receiver Operating Characteristic -- 5.5.3 Regression Metrics -- 5.5.3.1 Mean Square Error (MSE) -- 5.5.3.2 MAE -- 5.5.3.3 R2 (Coefficient of Determination) -- 5.6 Probability Scoring Methods -- 5.6.1 Log Loss Score -- 5.6.2 Brier Score.
5.7 Cross-Validation -- 5.7.1 Challenge of Evaluating Classifiers -- 5.7.2 K-Fold Cross-Validation -- 5.8 Un-Supervised Learning Evaluation Metric -- 5.8.1 Elbow Method -- 5.8.2 Davies-Bouldin Index -- 5.8.3 Dunn Index -- 5.8.4 Silhouette Coefficient -- 5.9 Summary -- References -- Part III: Deep Learning Concepts and Techniques -- Chapter 6: Introduction to Deep Learning -- 6.1 So what's the Difference Between DL and ML? -- 6.2 Introduction to Deep Learning -- 6.3 Artificial Neural Networks -- 6.3.1 Perceptrons -- 6.3.2 Neural Networks -- 6.3.3 Activation Functions -- 6.3.4 Multi-Class Classification Considerations -- 6.3.5 Cost Functions and Optimisers -- 6.3.6 Backpropagation -- 6.3.7 The Vanishing Gradient -- 6.3.8 Weight Initialisation -- 6.3.9 Regularisation -- 6.4 Convolutional Neural Networks -- 6.4.1 Image Filters and Kernels -- 6.4.2 Convolutional Layers -- 6.4.3 Pooling Layers -- 6.4.4 Transfer Learning -- 6.5 Summary -- References -- Chapter 7: Image Classification and Object Detection -- 7.1 Hardware Accelerated Deep Learning -- 7.1.1 Training and Associated Hardware -- 7.1.1.1 Development Systems -- 7.1.1.2 Training Systems -- 7.1.1.3 Inferencing Systems -- 7.1.2 Tensor Processing Unit (TPU) -- 7.1.3 Other Hardware Considerations -- 7.2 Object Recognition -- 7.2.1 Image Classification -- 7.2.2 Object Detection -- 7.2.3 Semantic Segmentation -- 7.2.4 Object Segmentation -- 7.3 Model Architectures -- 7.3.1 Single Shot Detector (SSD) -- 7.3.2 YOLO Family -- 7.3.3 R-CNN -- 7.3.4 Fast-RCNN -- 7.3.5 Faster-RCNN -- 7.3.6 EfficientNet -- 7.3.7 Comparing Architectures -- 7.3.7.1 Key Findings -- 7.3.7.2 Most Accurate -- 7.3.7.3 Fastest -- 7.4 Evaluation Metrics -- 7.4.1 Confidence Score -- 7.4.2 Intersection over Union -- 7.4.3 Mean Average Precision (mAP) -- 7.5 Summary -- References.
Chapter 8: Deep Learning Techniques for Time Series Modelling -- 8.1 Introduction to Time-Series Data -- 8.2 Recurrent Neural Network -- 8.2.1 Developing RNNs for Time Series Forecasting -- 8.3 Long-Term Short-Term Memory -- 8.4 Gated Recurrent Unit -- 8.5 One Dimensional Convolutional Neural Network -- 8.6 Summary -- References -- Chapter 9: Natural Language Processing -- 9.1 Introduction to Natural Language Processing -- 9.1.1 Tokenisation -- 9.1.2 Stemming -- 9.1.3 Lemmatization -- 9.1.4 Stop Words -- 9.1.5 Phrase Matching and Vocabulary -- 9.2 Text Classification -- 9.2.1 Text Feature Extraction -- 9.3 Sentiment Analysis -- 9.4 Topic Modelling -- 9.4.1 Latent Semantic Analysis (LSA) -- 9.4.2 Latent Dirichlet Allocation -- 9.4.3 Non-negative Matrix Factorization -- 9.5 Deep Learning for NLP -- 9.5.1 Word Embeddings -- 9.5.2 Word Embedding Algorithms -- 9.5.2.1 Embedding Layer -- 9.5.2.2 Word2Vec -- 9.5.2.3 GloVe -- 9.5.2.4 Natural Language Understanding and Generation -- 9.6 Real-World Applications -- 9.6.1 Chat Bots -- 9.6.2 Smart Speakers -- 9.7 Summary -- References -- Chapter 10: Deep Generative Models -- 10.1 Autoencoders -- 10.1.1 Autoencoder Basics -- 10.1.2 Autoencoder for Dimensionality Reduction -- 10.1.3 Autoencoder for Images -- 10.1.4 Stacked Autoencoders -- 10.1.5 Generative Adversarial Networks (GANS) -- 10.1.5.1 GANs Network Architectures -- 10.2 Summary -- References -- Chapter 11: Deep Reinforcement Learning -- 11.1 What Is Reinforcement Learning? -- 11.2 Reinforcement Learning Definitions -- 11.3 Domain Selection for Reinforcement Learning -- 11.4 State-Action Pairs and Complex Probability Distributions of Reward -- 11.5 Neural Networks and Reinforcement Learning -- 11.6 The Deep Reinforcement Learning Process -- 11.7 Practical Applications of Deep Reinforcement Learning -- 11.8 Summary -- References.
Part IV: Enterprise Machine Learning -- Chapter 12: Accelerated Machine Learning -- 12.1 Introduction -- 12.1.1 CPU/GPU Based Clusters -- 12.2 CPU Accelerated Computing -- 12.2.1 Distributed Accelerated Computing Frameworks -- 12.2.1.1 Local Vs Distributed -- 12.2.1.2 Benefits of Scaling Out -- 12.2.1.3 Hadoop -- 12.2.1.4 Apache Spark -- 12.3 Introduction to DASK -- 12.3.1 DASK Arrays -- 12.3.2 Scikit Learn and DASK Integration (DASK ML) -- 12.3.3 Scikit Learn Joblib -- 12.4 GPU Computing -- 12.4.1 Introduction to GPU Hardware -- 12.4.2 Introduction to NVIDIA Accelerated Computing -- 12.4.3 CUDA -- 12.4.4 CUDA Accelerated Computing Libraries -- 12.5 RAPIDS -- 12.5.1 cuDF Analytics -- 12.5.2 cuML Machine Learning -- 12.5.3 cuGraph Graph Analytics -- 12.5.4 Apache Arrow -- 12.6 Summary -- References -- Chapter 13: Deploying and Hosting Machine Learning Models -- 13.1 Introduction to Deployment -- 13.1.1 Why Is Model Deployment Important -- 13.1.2 Enabling MLOps -- 13.1.3 MLOps Frameworks -- 13.1.4 MLOps Application Programmable Interfaces API's -- 13.2 Preparing a Model -- 13.2.1 Model Formats -- 13.2.1.1 ProtoBuf (pb) -- 13.2.1.2 ONNX (.ONNX) -- 13.2.1.3 Keras h5 (.h5) -- 13.2.1.4 TensorFlow SavedModel Format -- 13.2.1.5 Scikit-Learn (.pkl) -- 13.2.1.6 IOS Platform (.mlmodel) -- 13.2.1.7 PyTorch (.pt) -- 13.2.2 Freezing and Exporting Models -- 13.2.3 Model Optimisation -- 13.2.4 Deploying the TFLite Model and Undertaking Inference -- 13.3 Web Deployment -- 13.3.1 Flask -- 13.3.2 Why Use Flask -- 13.3.3 Working and Developing in Flask -- 13.4 Summary -- References -- Chapter 14: Enterprise Machine Learning Serving -- 14.1 Docker -- 14.1.1 What Is Docker -- 14.1.2 Working with Docker -- 14.1.2.1 Using Docker -- 14.1.2.2 What's a Container -- 14.1.2.3 Docker Run -- 14.1.2.4 Container Lifecycle -- 14.1.2.5 Building Custom Dockers -- 14.1.3 Docker Compose.
14.1.4 Docker Volume and Mount.
Record no. UNISA-996483161603316
Find it here: Univ. di Salerno
Applied deep learning : tools, techniques, and implementation / Paul Fergus, Carl Chalmers
Author Fergus, Paul
Publication/distribution/printing Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (355 pages)
Discipline 006.31
Series Computational Intelligence Methods and Applications
Topical subject Deep learning (Machine learning)
ISBN 3-031-04420-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Acknowledgements -- Contents -- List of Figures -- List of Tables -- Part I: Introduction and Overview -- Chapter 1: Introduction -- 1.1 Artificial Intelligence, Machine Learning, Deep Learning -- 1.1.1 Artificial Intelligence -- 1.1.2 Machine Learning -- 1.1.3 Deep Learning -- 1.1.4 How they Come Together -- 1.2 Artificial Intelligence Is Driving Innovation -- 1.2.1 Transforming Healthcare -- 1.2.2 Protecting Wildlife -- 1.2.3 Securing the Environment -- 1.3 Tools, Frameworks and Hardware -- 1.3.1 Building Intelligent Applications -- 1.3.2 Python, Notebooks and Environments -- 1.3.3 Pre-Processing -- 1.3.4 Machine Learning -- 1.3.5 Deep Learning -- 1.3.6 Inferencing -- 1.4 How this Book Is Organised -- 1.5 Who Should Read this Book -- 1.6 Summary -- References -- Part II: Foundations of Machine Learning -- Chapter 2: Fundamentals of Machine Learning -- 2.1 What Is Machine Learning? -- 2.1.1 Formal and Non-Formal Definition -- 2.1.2 How AI and Machine Learning Differs from Conventional Software Development -- 2.1.2.1 Rewriting the Rules -- 2.1.2.2 Intelligent Decision Making -- 2.2 Machine Learning Tribes -- 2.2.1 Connectionists -- 2.2.2 Evolutionists -- 2.2.3 Bayesians -- 2.2.4 Symbolists -- 2.2.5 Analogists -- 2.3 Data Management -- 2.3.1 Data Types and Data Objects -- 2.3.1.1 Numerical -- 2.3.1.2 Textual -- 2.3.1.3 Categorical -- 2.3.1.4 Timeseries -- 2.3.2 Data Structure -- 2.3.2.1 Data Objects -- 2.3.3 Datasets -- 2.3.4 Exploratory Data Analysis -- 2.3.4.1 What Is Exploratory Data Analysis -- 2.3.4.2 Data Distributions -- 2.3.4.3 Validate Assumptions -- 2.3.4.4 Feature Engineering -- 2.4 Learning Problems -- 2.4.1 Supervised Machine Learning -- 2.4.2 Semi-Supervised Machine Learning -- 2.4.3 Un-Supervised Machine Learning -- 2.4.4 Regression -- 2.4.5 Reinforcement Learning -- 2.5 Evaluating Machine Learning Models.
2.6 Summary -- References -- Chapter 3: Supervised Learning -- 3.1 Basic Concepts -- 3.2 Supervised Learning Tasks -- 3.2.1 Data Extraction -- 3.2.2 Data Preparation -- 3.2.2.1 Data Size -- 3.2.2.2 Missing Data -- 3.2.2.3 Textual Data -- One Hot Encoding -- 3.2.2.4 Value Ranges (Normalisation and Scaling) -- Scaling -- Normalisation -- Standardisation -- 3.2.2.5 Distribution -- 3.2.2.6 Class Balance -- 3.2.2.7 Correlation Between Features -- 3.2.3 Feature Engineering -- 3.2.3.1 Feature Selection -- 3.2.3.2 Dimensionality Reduction -- 3.2.4 Selecting a Training Algorithm -- 3.3 Supervised Algorithms -- 3.3.1 Linear Regression -- 3.3.2 Logistic Regression -- 3.3.3 Linear Discriminate Analysis -- 3.3.4 Support Vector Machine -- 3.3.5 Random Forest -- 3.3.6 Naive Bayes -- 3.3.7 K-Nearest Neighbours -- 3.4 Summary -- References -- Chapter 4: Un-Supervised Learning -- 4.1 Basic Concepts -- 4.2 Clustering -- 4.2.1 Hierarchical Clustering -- 4.2.2 K-Means -- 4.2.3 Mixture Models -- 4.2.4 DBSCAN -- 4.2.5 Optics Algorithm -- 4.3 Principal Component Analysis -- 4.4 Association Rule Mining -- 4.5 Summary -- References -- Chapter 5: Performance Evaluation Metrics -- 5.1 Introduction to Model Evaluation -- 5.1.1 Evaluation Challenges -- 5.1.2 Taxonomy of Classifier Evaluation Metrics -- 5.2 Classification Accuracy -- 5.3 Train, Test and Validation Sets -- 5.4 Underfitting and Overfitting -- 5.5 Supervised Learning Evaluation Metrics -- 5.5.1 Confusion Matrix -- 5.5.1.1 Accuracy -- 5.5.1.2 Precision -- 5.5.1.3 Recall (Sensitivity) -- 5.5.1.4 Specificity -- 5.5.1.5 False Positive Rate -- 5.5.1.6 F1-Score -- 5.5.2 Receiver Operating Characteristic -- 5.5.3 Regression Metrics -- 5.5.3.1 Mean Square Error (MSE) -- 5.5.3.2 MAE -- 5.5.3.3 R2 (Coefficient of Determination) -- 5.6 Probability Scoring Methods -- 5.6.1 Log Loss Score -- 5.6.2 Brier Score.
5.7 Cross-Validation -- 5.7.1 Challenge of Evaluating Classifiers -- 5.7.2 K-Fold Cross-Validation -- 5.8 Un-Supervised Learning Evaluation Metric -- 5.8.1 Elbow Method -- 5.8.2 Davies-Bouldin Index -- 5.8.3 Dunn Index -- 5.8.4 Silhouette Coefficient -- 5.9 Summary -- References -- Part III: Deep Learning Concepts and Techniques -- Chapter 6: Introduction to Deep Learning -- 6.1 So what's the Difference Between DL and ML? -- 6.2 Introduction to Deep Learning -- 6.3 Artificial Neural Networks -- 6.3.1 Perceptrons -- 6.3.2 Neural Networks -- 6.3.3 Activation Functions -- 6.3.4 Multi-Class Classification Considerations -- 6.3.5 Cost Functions and Optimisers -- 6.3.6 Backpropagation -- 6.3.7 The Vanishing Gradient -- 6.3.8 Weight Initialisation -- 6.3.9 Regularisation -- 6.4 Convolutional Neural Networks -- 6.4.1 Image Filters and Kernels -- 6.4.2 Convolutional Layers -- 6.4.3 Pooling Layers -- 6.4.4 Transfer Learning -- 6.5 Summary -- References -- Chapter 7: Image Classification and Object Detection -- 7.1 Hardware Accelerated Deep Learning -- 7.1.1 Training and Associated Hardware -- 7.1.1.1 Development Systems -- 7.1.1.2 Training Systems -- 7.1.1.3 Inferencing Systems -- 7.1.2 Tensor Processing Unit (TPU) -- 7.1.3 Other Hardware Considerations -- 7.2 Object Recognition -- 7.2.1 Image Classification -- 7.2.2 Object Detection -- 7.2.3 Semantic Segmentation -- 7.2.4 Object Segmentation -- 7.3 Model Architectures -- 7.3.1 Single Shot Detector (SSD) -- 7.3.2 YOLO Family -- 7.3.3 R-CNN -- 7.3.4 Fast-RCNN -- 7.3.5 Faster-RCNN -- 7.3.6 EfficientNet -- 7.3.7 Comparing Architectures -- 7.3.7.1 Key Findings -- 7.3.7.2 Most Accurate -- 7.3.7.3 Fastest -- 7.4 Evaluation Metrics -- 7.4.1 Confidence Score -- 7.4.2 Intersection over Union -- 7.4.3 Mean Average Precision (mAP) -- 7.5 Summary -- References.
Chapter 8: Deep Learning Techniques for Time Series Modelling -- 8.1 Introduction to Time-Series Data -- 8.2 Recurrent Neural Network -- 8.2.1 Developing RNNs for Time Series Forecasting -- 8.3 Long-Term Short-Term Memory -- 8.4 Gated Recurrent Unit -- 8.5 One Dimensional Convolutional Neural Network -- 8.6 Summary -- References -- Chapter 9: Natural Language Processing -- 9.1 Introduction to Natural Language Processing -- 9.1.1 Tokenisation -- 9.1.2 Stemming -- 9.1.3 Lemmatization -- 9.1.4 Stop Words -- 9.1.5 Phrase Matching and Vocabulary -- 9.2 Text Classification -- 9.2.1 Text Feature Extraction -- 9.3 Sentiment Analysis -- 9.4 Topic Modelling -- 9.4.1 Latent Semantic Analysis (LSA) -- 9.4.2 Latent Dirichlet Allocation -- 9.4.3 Non-negative Matrix Factorization -- 9.5 Deep Learning for NLP -- 9.5.1 Word Embeddings -- 9.5.2 Word Embedding Algorithms -- 9.5.2.1 Embedding Layer -- 9.5.2.2 Word2Vec -- 9.5.2.3 GloVe -- 9.5.2.4 Natural Language Understanding and Generation -- 9.6 Real-World Applications -- 9.6.1 Chat Bots -- 9.6.2 Smart Speakers -- 9.7 Summary -- References -- Chapter 10: Deep Generative Models -- 10.1 Autoencoders -- 10.1.1 Autoencoder Basics -- 10.1.2 Autoencoder for Dimensionality Reduction -- 10.1.3 Autoencoder for Images -- 10.1.4 Stacked Autoencoders -- 10.1.5 Generative Adversarial Networks (GANS) -- 10.1.5.1 GANs Network Architectures -- 10.2 Summary -- References -- Chapter 11: Deep Reinforcement Learning -- 11.1 What Is Reinforcement Learning? -- 11.2 Reinforcement Learning Definitions -- 11.3 Domain Selection for Reinforcement Learning -- 11.4 State-Action Pairs and Complex Probability Distributions of Reward -- 11.5 Neural Networks and Reinforcement Learning -- 11.6 The Deep Reinforcement Learning Process -- 11.7 Practical Applications of Deep Reinforcement Learning -- 11.8 Summary -- References.
Part IV: Enterprise Machine Learning -- Chapter 12: Accelerated Machine Learning -- 12.1 Introduction -- 12.1.1 CPU/GPU Based Clusters -- 12.2 CPU Accelerated Computing -- 12.2.1 Distributed Accelerated Computing Frameworks -- 12.2.1.1 Local Vs Distributed -- 12.2.1.2 Benefits of Scaling Out -- 12.2.1.3 Hadoop -- 12.2.1.4 Apache Spark -- 12.3 Introduction to DASK -- 12.3.1 DASK Arrays -- 12.3.2 Scikit Learn and DASK Integration (DASK ML) -- 12.3.3 Scikit Learn Joblib -- 12.4 GPU Computing -- 12.4.1 Introduction to GPU Hardware -- 12.4.2 Introduction to NVIDIA Accelerated Computing -- 12.4.3 CUDA -- 12.4.4 CUDA Accelerated Computing Libraries -- 12.5 RAPIDS -- 12.5.1 cuDF Analytics -- 12.5.2 cuML Machine Learning -- 12.5.3 cuGraph Graph Analytics -- 12.5.4 Apache Arrow -- 12.6 Summary -- References -- Chapter 13: Deploying and Hosting Machine Learning Models -- 13.1 Introduction to Deployment -- 13.1.1 Why Is Model Deployment Important -- 13.1.2 Enabling MLOps -- 13.1.3 MLOps Frameworks -- 13.1.4 MLOps Application Programmable Interfaces API's -- 13.2 Preparing a Model -- 13.2.1 Model Formats -- 13.2.1.1 ProtoBuf (pb) -- 13.2.1.2 ONNX (.ONNX) -- 13.2.1.3 Keras h5 (.h5) -- 13.2.1.4 TensorFlow SavedModel Format -- 13.2.1.5 Scikit-Learn (.pkl) -- 13.2.1.6 IOS Platform (.mlmodel) -- 13.2.1.7 PyTorch (.pt) -- 13.2.2 Freezing and Exporting Models -- 13.2.3 Model Optimisation -- 13.2.4 Deploying the TFLite Model and Undertaking Inference -- 13.3 Web Deployment -- 13.3.1 Flask -- 13.3.2 Why Use Flask -- 13.3.3 Working and Developing in Flask -- 13.4 Summary -- References -- Chapter 14: Enterprise Machine Learning Serving -- 14.1 Docker -- 14.1.1 What Is Docker -- 14.1.2 Working with Docker -- 14.1.2.1 Using Docker -- 14.1.2.2 What's a Container -- 14.1.2.3 Docker Run -- 14.1.2.4 Container Lifecycle -- 14.1.2.5 Building Custom Dockers -- 14.1.3 Docker Compose.
14.1.4 Docker Volume and Mount.
Record no. UNINA-9910584485003321
Find it here: Univ. Federico II
Automated deep learning using neural network intelligence : develop and design PyTorch and TensorFlow models using Python / Ivan Gridin
Author Gridin, Ivan
Publication/distribution/printing New York, New York : Apress L. P., [2022]
Physical description 1 online resource (396 pages)
Discipline 733
Topical subject Deep learning (Machine learning)
Neural networks (Computer science)
Python (Computer program language)
ISBN 1-4842-8149-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Chapter 1: Introduction to Neural Network Intelligence -- Chapter 2: Hyperparameter Optimization -- Chapter 3: Hyperparameter Optimization Under Shell -- Chapter 4: Multi-Trial Neural Architecture Search -- Chapter 5: One-Shot Neural Architecture Search -- Chapter 6: Model Pruning -- Chapter 7: NNI Recipes.
Record no. UNINA-9910580173103321
Find it here: Univ. Federico II
Bioinformatics and medical applications : big data using deep learning algorithms / edited by A. Suresh
Publication/distribution/printing Hoboken, NJ : John Wiley & Sons, Inc., [2022]
Physical description 1 online resource (343 pages)
Discipline 005.7
Topical subject Bioinformatics
Big data
Deep learning (Machine learning)
Genre/form subject Electronic books.
ISBN 1-119-79267-3
1-119-79266-5
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910555168103321
Find it here: Univ. Federico II
Bioinformatics and medical applications : big data using deep learning algorithms / edited by A. Suresh [and four others]
Publication/distribution/printing Hoboken, NJ : Wiley, ©2022
Physical description 1 online resource (343 pages)
Discipline 005.7
Topical subject Medical informatics
Bioinformatics
Big data
Deep learning (Machine learning)
ISBN 1-119-79267-3
1-119-79266-5
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Probabilistic Optimization of Machine Learning Algorithms for Heart Disease Prediction / Jaspreet Kaur, Bharti Joshi, Rajashree Shedge -- Cancerous Cells Detection in Lung Organs of Human Body: IoT-Based Healthcare 4.0 Approach / Rohit Rastogi, DK Chaturvedi, Sheelu Sagar, Neeti Tandon, Mukund Rastogi -- Computational Predictors of the Predominant Protein Function: SARS-CoV-2 Case / Carlos Polanco, Manlio F. Márquez, Gilberto Vargas-Alarcón -- Deep Learning in Gait Abnormality Detection: Principles and Illustrations / Saikat Chakraborty, Sruti Sambhavi, Anup Nandy -- Broad Applications of Network Embeddings in Computational Biology, Genomics, Medicine, and Health / Akanksha Jaiswar, Devender Arora, Manisha Malhotra, Abhimati Shukla, Nivedita Rai -- Heart Disease Classification Using Regional Wall Thickness by Ensemble Classifier / J Prakash, Kumar B Vinoth, R Sandhya -- Deep Learning for Medical Informatics and Public Health / K Aditya Shastry, H A Sanjay, M Lakshmi, N Preetham -- An Insight Into Human Pose Estimation and Its Applications / Shambhavi Mishra, Janamejaya Channegowda, Kasina Jyothi Swaroop -- Brain Tumor Analysis Using Deep Learning: Sensor and IoT-Based Approach for Futuristic Healthcare / Rohit Rastogi, DK Chaturvedi, Sheelu Sagar, Neeti Tandon, Akshit Rajan Rastogi -- Study of Emission From Medicinal Woods to Curb Threats of Pollution and Diseases: Global Healthcare Paradigm Shift in 21st Century / Rohit Rastogi, Mamta Saxena, Devendra Kr Chaturvedi, Sheelu Sagar, Neha Gupta, Harshit Gupta, Akshit Rajan Rastogi, Divya Sharma, Manu Bhardwaj, Pranav Sharma -- An Economical Machine Learning Approach for Anomaly Detection in IoT Environment / N Ambika -- Indian Science of Yajna and Mantra to Cure Different Diseases: An Analysis Amidst Pandemic With a Simulated Approach / Rohit Rastogi, Mamta Saxena, Devendra Kumar Chaturvedi, Mayank Gupta, Puru Jain, Rishabh Jain, Mohit Jain, Vishal Sharma, Utkarsh Sangam, Parul Singhal, Priyanshi Garg -- Collection and Analysis of Big Data From Emerging Technologies in Healthcare / K Nagashri, D S Jayalakshmi, J Geetha -- A Complete Overview of Sign Language Recognition and Translation Systems / Kasina Jyothi Swaroop, Janamejaya Channegowda, Shambhavi Mishra -- Index.
Record no. UNINA-9910830347903321
Find it here: Univ. Federico II
Brain-computer interface : using deep learning applications / edited by M. G. Sumithra [and four others]
Publication/distribution/printing Hoboken, New Jersey : John Wiley & Sons, [2023]
Physical description 1 online resource (323 pages)
Discipline 616.8047547
Topical subject Brain-computer interfaces
Deep learning (Machine learning)
ISBN 1-119-85765-1
1-119-85764-3
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover -- Title Page -- Copyright Page -- Contents -- Preface -- Chapter 1 Introduction to Brain-Computer Interface: Applications and Challenges -- 1.1 Introduction -- 1.2 The Brain - Its Functions -- 1.3 BCI Technology -- 1.3.1 Signal Acquisition -- 1.3.1.1 Invasive Methods -- 1.3.1.2 Non-Invasive Methods -- 1.3.2 Feature Extraction -- 1.3.3 Classification -- 1.3.3.1 Types of Classifiers -- 1.4 Applications of BCI -- 1.5 Challenges Faced During Implementation of BCI -- References -- Chapter 2 Introduction: Brain-Computer Interface and Deep Learning -- 2.1 Introduction -- 2.1.1 Current Stance of P300 BCI -- 2.2 Brain-Computer Interface Cycle -- 2.3 Classification of Techniques Used for Brain-Computer Interface -- 2.3.1 Application in Mental Health -- 2.3.2 Application in Motor-Imagery -- 2.3.3 Application in Sleep Analysis -- 2.3.4 Application in Emotion Analysis -- 2.3.5 Hybrid Methodologies -- 2.3.6 Recent Notable Advancements -- 2.4 Case Study: A Hybrid EEG-fNIRS BCI -- 2.5 Conclusion, Open Issues and Future Endeavors -- References -- Chapter 3 Statistical Learning for Brain-Computer Interface -- 3.1 Introduction -- 3.1.1 Various Techniques to BCI -- 3.1.1.1 Non-Invasive -- 3.1.1.2 Semi-Invasive -- 3.1.1.3 Invasive -- 3.2 Machine Learning Techniques to BCI -- 3.2.1 Support Vector Machine (SVM) -- 3.2.2 Neural Networks -- 3.3 Deep Learning Techniques Used in BCI -- 3.3.1 Convolutional Neural Network Model (CNN) -- 3.3.2 Generative DL Models -- 3.4 Future Direction -- 3.5 Conclusion -- References -- Chapter 4 The Impact of Brain-Computer Interface on Lifestyle of Elderly People -- 4.1 Introduction -- 4.2 Diagnosing Diseases -- 4.3 Movement Control -- 4.4 IoT -- 4.5 Cognitive Science -- 4.6 Olfactory System -- 4.7 Brain-to-Brain (B2B) Communication Systems -- 4.8 Hearing -- 4.9 Diabetes -- 4.10 Urinary Incontinence -- 4.11 Conclusion -- References.
Chapter 5 A Review of Innovation to Human Augmentation in Brain-Machine Interface - Potential, Limitation, and Incorporation of AI -- 5.1 Introduction -- 5.2 Technologies in Neuroscience for Recording and Influencing Brain Activity -- 5.2.1 Brain Activity Recording Technologies -- 5.2.1.1 A Non-Invasive Recording Methodology -- 5.2.1.2 An Invasive Recording Methodology -- 5.3 Neuroscience Technology Applications for Human Augmentation -- 5.3.1 Need for BMI -- 5.3.1.1 Need of BMI Individuals for Re-Establishing the Control and Communication of Motor -- 5.3.1.2 Brain-Computer Interface Noninvasive Research at Wadsworth Center -- 5.3.1.3 An Interface of Berlin Brain-Computer: Machine Learning-Dependent of User-Specific Brain States Detection -- 5.4 History of BMI -- 5.5 BMI Interpretation of Machine Learning Integration -- 5.6 Beyond Current Existing Methodologies: Nanomachine Learning BMI Supported -- 5.7 Challenges and Open Issues -- 5.8 Conclusion -- References -- Chapter 6 Resting-State fMRI: Large Data Analysis in Neuroimaging -- 6.1 Introduction -- 6.1.1 Principles of Functional Magnetic Resonance Imaging (fMRI) -- 6.1.2 Resting State fMRI (rsfMRI) for Neuroimaging -- 6.1.3 The Measurement of Fully Connected and Construction of Default Mode Network (DMN) -- 6.2 Brain Connectivity -- 6.2.1 Anatomical Connectivity -- 6.2.2 Functional Connectivity -- 6.3 Better Image Availability -- 6.3.1 Large Data Analysis in Neuroimaging -- 6.3.2 Big Data rfMRI Challenges -- 6.3.3 Large rfMRI Data Software Packages -- 6.4 Informatics Infrastructure and Analytical Analysis -- 6.5 Need of Resting-State MRI -- 6.5.1 Cerebral Energetics -- 6.5.2 Signal to Noise Ratio (SNR) -- 6.5.3 Multi-Purpose Data Sets -- 6.5.4 Expanded Patient Populations -- 6.5.5 Reliability -- 6.6 Technical Development -- 6.7 rsfMRI Clinical Applications.
6.7.1 Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD) -- 6.7.2 Fronto-Temporal Dementia (FTD) -- 6.7.3 Multiple Sclerosis (MS) -- 6.7.4 Amyotrophic Lateral Sclerosis (ALS) and Depression -- 6.7.5 Bipolar -- 6.7.6 Schizophrenia -- 6.7.7 Attention Deficit Hyperactivity Disorder (ADHD) -- 6.7.8 Multiple System Atrophy (MSA) -- 6.7.9 Epilepsy/Seizures -- 6.7.10 Pediatric Applications -- 6.8 Resting-State Functional Imaging of Neonatal Brain Image -- 6.9 Different Groups in Brain Disease -- 6.10 Learning Algorithms for Analyzing rsfMRI -- 6.11 Conclusion and Future Directions -- References -- Chapter 7 Early Prediction of Epileptic Seizure Using Deep Learning Algorithm -- 7.1 Introduction -- 7.2 Methodology -- 7.3 Experimental Results -- 7.4 Taking Care of Children with Seizure Disorders -- 7.5 Ketogenic Diet -- 7.6 Vagus Nerve Stimulation (VNS) -- 7.7 Brain Surgeries -- 7.8 Conclusion -- References -- Chapter 8 Brain-Computer Interface-Based Real-Time Movement of Upper Limb Prostheses Topic: Improving the Quality of the Elderly with Brain-Computer Interface -- 8.1 Introduction -- 8.1.1 Motor Imagery Signal Decoding -- 8.2 Literature Survey -- 8.3 Methodology of Proposed Work -- 8.3.1 Proposed Control Scheme -- 8.3.2 One Versus All Adaptive Neural Type-2 Fuzzy Inference System (OVAANT2FIS) -- 8.3.3 Position Control of Robot Arm Using Hybrid BCI for Rehabilitation Purpose -- 8.3.4 Jaco Robot Arm -- 8.3.5 Scheme 1: Random Order Positional Control -- 8.4 Experiments and Data Processing -- 8.4.1 Feature Extraction -- 8.4.2 Performance Analysis of the Detectors -- 8.4.3 Performance of the Real Time Robot Arm Controllers -- 8.5 Discussion -- 8.6 Conclusion and Future Research Directions -- References -- Chapter 9 Brain-Computer Interface-Assisted Automated Wheelchair Control Management-Cerebro: A BCI Application -- 9.1 Introduction.
9.1.1 What is a BCI? -- 9.2 How Do BCI's Work? -- 9.2.1 Measuring Brain Activity -- 9.2.1.1 Without Surgery -- 9.2.1.2 With Surgery -- 9.2.2 Mental Strategies -- 9.2.2.1 SSVEP -- 9.2.2.2 Neural Motor Imagery -- 9.3 Data Collection -- 9.3.1 Overview of the Data -- 9.3.2 EEG Headset -- 9.3.3 EEG Signal Collection -- 9.4 Data Pre-Processing -- 9.4.1 Artifact Removal -- 9.4.2 Signal Processing and Dimensionality Reduction -- 9.4.3 Feature Extraction -- 9.5 Classification -- 9.5.1 Deep Learning (DL) Model Pipeline -- 9.5.2 Architecture of the DL Model -- 9.5.3 Output Metrics of the Classifier -- 9.5.4 Deployment of DL Model -- 9.5.5 Control System -- 9.5.6 Control Flow Overview -- 9.6 Control Modes -- 9.6.1 Speech Mode -- 9.6.2 Blink Stimulus Mapping -- 9.6.3 Text Interface -- 9.6.4 Motion Mode -- 9.6.5 Motor Arrangement -- 9.6.6 Imagined Motion Mapping -- 9.7 Compilation of All Systems -- 9.8 Conclusion -- References -- Chapter 10 Identification of Imagined Bengali Vowels from EEG Signals Using Activity Map and Convolutional Neural Network -- 10.1 Introduction -- 10.1.1 Electroencephalography (EEG) -- 10.1.2 Imagined Speech or Silent Speech -- 10.2 Literature Survey -- 10.3 Theoretical Background -- 10.3.1 Convolutional Neural Network -- 10.3.2 Activity Map -- 10.4 Methodology -- 10.4.1 Data Collection -- 10.4.2 Pre-Processing -- 10.4.3 Feature Extraction -- 10.4.4 Classification -- 10.5 Results -- 10.6 Conclusion -- Acknowledgment -- References -- Chapter 11 Optimized Feature Selection Techniques for Classifying Electrocorticography Signals -- 11.1 Introduction -- 11.1.1 Brain-Computer Interface -- 11.2 Literature Study -- 11.3 Proposed Methodology -- 11.3.1 Dataset -- 11.3.2 Feature Extraction Using Auto-Regressive (AR) Model and Wavelet Transform -- 11.3.2.1 Auto-Regressive Features -- 11.3.2.2 Wavelet Features -- 11.3.2.3 Feature Selection Methods.
11.3.2.4 Information Gain (IG) -- 11.3.2.5 Clonal Selection -- 11.3.2.6 An Overview of the Steps of the CLONALG -- 11.3.3 Hybrid CLONALG -- 11.4 Experimental Results -- 11.4.1 Results of Feature Selection Using IG with Various Classifiers -- 11.4.2 Results of Optimizing Support Vector Machine Using CLONALG Selection -- 11.5 Conclusion -- References -- Chapter 12 BCI - Challenges, Applications, and Advancements -- 12.1 Introduction -- 12.1.1 BCI Structure -- 12.2 Related Works -- 12.3 Applications -- 12.4 Challenges and Advancements -- 12.5 Conclusion -- References -- Index -- EULA.
Record no. UNINA-9910830951703321
Find it here: Univ. Federico II
Case-based reasoning research and development : 30th International Conference, ICCBR 2022, Nancy, France, September 12-15, 2022, proceedings / Mark T. Keane, Nirmalie Wiratunga (editors)
Publication/distribution/printing Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (420 pages)
Discipline 153.43
Series Lecture notes in computer science. Lecture notes in artificial intelligence
Topical subject Case-based reasoning
Expert systems (Computer science)
Deep learning (Machine learning)
ISBN 3-031-14923-8
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Organization -- Invited Talks -- Seeing Through Black Boxes with Human Vision: Deep Learning and Explainable AI in Medical Image Applications -- Case-Based Reasoning for Clinical Decisions That Are Computer-Aided, Not Automated -- Towards More Cognitively Appealing Paradigms in Case-Based Reasoning -- Contents -- Explainability in CBR -- Using Case-Based Reasoning for Capturing Expert Knowledge on Explanation Methods -- 1 Introduction -- 2 Background -- 3 Case-Based Elicitation -- 3.1 Case Structure -- 3.2 Case Base Acquisition -- 4 CBR Process -- 5 Evaluation and Discussion -- 6 Conclusions -- References -- A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations -- 1 Introduction -- 2 Related Work -- 2.1 What Are Good Counterfactual Explanations? -- 2.2 Perturbation-Based Approaches -- 2.3 Instance-Based Approaches -- 2.4 Instance-Based Shortcomings -- 3 Good Counterfactuals in Multi-class Domains -- 3.1 Reusing the kNN Explanation Cases -- 3.2 Validating Candidate Counterfactuals -- 3.3 Discussion -- 4 Evaluation -- 4.1 Methodology -- 4.2 Results -- 5 Conclusions -- References -- How Close Is Too Close? The Role of Feature Attributions in Discovering Counterfactual Explanations -- 1 Introduction -- 2 Related Work -- 3 DisCERN -- 3.1 Nearest-Unlike Neighbour -- 3.2 Feature Ordering by Feature Attribution -- 3.3 Substitution-Based Adaptation -- 3.4 Integrated Gradients for DisCERN -- 3.5 Bringing the NUN Closer -- 4 Evaluation -- 4.1 Datasets -- 4.2 Experiment Setup -- 4.3 Performance Measures for Counterfactual Explanations -- 5 Results -- 5.1 A Comparison of Feature Attribution Techniques -- 5.2 A Comparison of Counterfactual Discovery Algorithms -- 5.3 Impact of Bringing NUN Closer -- 6 Conclusions -- References -- Algorithmic Bias and Fairness in Case-Based Reasoning.
1 Introduction -- 2 Related Research -- 2.1 Bias in ML -- 2.2 Bias in CBR -- 2.3 Metric Learning -- 3 FairRet: Eliminating Bias with Metric Learning -- 3.1 Bias and The Similarity Knowledge Container -- 3.2 A Metric Learning Approach -- 3.3 Multi-objective Particle Swarm Optimization -- 4 Results -- 4.1 Dealing with Underestimation Bias -- 4.2 Outcome Distortion -- 4.3 Retrieval Overlap -- 5 Conclusions -- References -- "Better" Counterfactuals, Ones People Can Understand: Psychologically-Plausible Case-Based Counterfactuals Using Categorical Features for Explainable AI (XAI) -- 1 Introduction -- 2 Background: Computation and Psychology of Counterfactuals -- 2.1 User Studies of Counterfactual XAI: Mixed Results -- 3 Study 1: Plotting Counterfactuals that have Categoricals -- 3.1 Results and Discussion -- 4 Transforming Case-Based Counterfactuals, Categorically -- 4.1 Case-Based Counterfactual Methods: CB1-CF and CB2-CF -- 4.2 Counterfactuals with Categorical Transforms #1: Global Binning -- 4.3 Counterfactuals with Categorical Transforms #2: Local Direction -- 5 Study 2: Evaluating CAT-CF Methods -- 5.1 Method: Data and Procedure -- 5.2 Results and Discussion: Counterfactual Distance -- 6 Conclusions -- References -- Representation and Similarity -- Extracting Case Indices from Convolutional Neural Networks: A Comparative Study -- 1 Introduction -- 2 Potential Feature Extraction Points in CNNs -- 3 Related Work -- 4 Three Structure-Based Feature Extraction Methods -- 4.1 Post-convolution Feature Extraction -- 4.2 Post-dense Feature Extraction -- 4.3 Multi-net Feature Extraction -- 5 Evaluation -- 5.1 Hypotheses -- 5.2 Test Domain and Test Set Selection -- 5.3 Testbed System -- 5.4 Accuracy Testing and Informal Upper Bound -- 6 Results and Discussion -- 6.1 Comparative Performance -- 6.2 Discussion -- 7 Ramifications for Interpretability.
8 Conclusions and Future Work -- References -- Exploring the Effect of Recipe Representation on Critique-Based Conversational Recommendation -- 1 Introduction -- 2 Background -- 2.1 Diversity in Recommender Systems -- 2.2 Critique-Based Conversational Recommender Systems -- 2.3 Diversity in Recipe Recommenders -- 3 DiversityBite Framework: Recommend, Review, Revise -- 3.1 Adaptive Diversity Goal Approach -- 4 Evaluation -- 4.1 Case Base -- 4.2 Implementation: DGF, AGD, and Diversity Scoring -- 4.3 Simulation Study: Incorporating Diversity in Critique -- 4.4 User Study: Comparing Different Recipe Representations -- 5 Conclusion -- References -- Explaining CBR Systems Through Retrieval and Similarity Measure Visualizations: A Case Study -- 1 Introduction -- 2 Related Work -- 3 SupportPrim CBR System -- 3.1 Data -- 3.2 Case Representation and Similarity Modeling -- 3.3 Case Base and Similarity Population -- 4 Explanatory Case Base Visualizations -- 4.1 Accessing the CBR System's Model -- 4.2 Visualization of Retrievals -- 4.3 Visualization of the Similarity Scores for Individual Case Comparisons -- 5 Experiments -- 6 Discussion -- 7 Conclusion -- References -- Adapting Semantic Similarity Methods for Case-Based Reasoning in the Cloud -- 1 Introduction -- 2 Related Work -- 2.1 Clood CBR -- 2.2 Ontologies in CBR -- 2.3 Retrieval with Word Embedding -- 2.4 Serverless Function Benefits and Limitations -- 3 Semantic Similarity Metrics in a Microservices Architecture -- 3.1 Clood Similarity Functions Overview -- 3.2 Similarity Table -- 3.3 Word Embedding Based Similarity -- 3.4 Ontology-Based Similarity Measure -- 4 Implementation of Semantic Similarity Measures on Clood Framework -- 4.1 Word Embedding Similarity on Clood -- 4.2 Ontology-Based Similarity on Clood -- 5 Evaluation of Resource Impact -- 5.1 Experiment Setup -- 5.2 Result and Discussion.
6 Conclusion -- References -- Adaptation and Analogical Reasoning -- Case Adaptation with Neural Networks: Capabilities and Limitations -- 1 Introduction -- 2 Background -- 3 NN-CDH for both Classification and Regression -- 3.1 General Model of Case Adaptation -- 3.2 1-Hot/1-Cold Nominal Difference -- 3.3 Neural Network Structure of NN-CDH -- 3.4 Training and Adaptation Procedure -- 4 Evaluation -- 4.1 Systems Being Compared -- 4.2 Assembling Case Pairs for Training -- 4.3 Data Sets -- 4.4 Artificial Data Sets -- 5 Conclusion -- References -- A Deep Learning Approach to Solving Morphological Analogies -- 1 Introduction -- 2 The Problem of Morphological Analogy -- 3 Proposed Approach -- 3.1 Classification, Retrieval and Embedding Models -- 3.2 Training and Evaluation -- 4 Experiments -- 4.1 Data -- 4.2 Refining the Training Procedure -- 4.3 Performance Comparison with State of the Art Methods -- 4.4 Distance of the Expected Result -- 4.5 Case Analysis: Navajo and Georgian -- 5 Conclusion and Perspectives -- References -- Theoretical and Experimental Study of a Complexity Measure for Analogical Transfer -- 1 Introduction -- 2 Reminder on Complexity-Based Analogy -- 2.1 Notations -- 2.2 Ordinal Analogical Principle: Complexity Definition -- 2.3 Ordinal Analogical Inference Algorithm -- 3 Theoretical Property of the Complexity Measure: Upper Bound -- 3.1 General Case -- 3.2 Binary Classification Case -- 4 Algorithmic Optimisation -- 4.1 Principle -- 4.2 Proposed Optimized Algorithm -- 5 Experimental Study -- 5.1 Computational Cost -- 5.2 Correlation Between Case Base Complexity and Performance -- 5.3 Correlation Between Complexity and Task Difficulty -- 6 Conclusion and Future Works -- References -- Graphs and Optimisation -- Case-Based Learning and Reasoning Using Layered Boundary Multigraphs -- 1 Introduction -- 2 Background and Related Work.
3 Boundary Graphs and Labeled Boundary Multigraphs -- 3.1 Boundary Graphs -- 3.2 Labeled Boundary Multigraphs -- 3.3 Discussion -- 4 Empirical Evaluation -- 4.1 Experimental Set-Up -- 4.2 Classical Benchmark Data Sets -- 4.3 Scaling Analysis -- 5 Conclusion -- References -- Particle Swarm Optimization in Small Case Bases for Software Effort Estimation -- 1 Introduction -- 2 Related Work -- 3 Software Effort Estimation of User Stories -- 4 CBR Approach -- 4.1 Case Representation -- 4.2 Similarity -- 4.3 Adaptation -- 4.4 Weight Optimization with PSO -- 5 Experiments -- 5.1 Experimental Data -- 5.2 Experiment 1 -- 5.3 Experiment 2 -- 5.4 Discussion of Results -- 6 Conclusion -- References -- MicroCBR: Case-Based Reasoning on Spatio-temporal Fault Knowledge Graph for Microservices Troubleshooting -- 1 Introduction -- 2 Related Work -- 3 Background and Motivation -- 3.1 Background with Basic Concepts -- 3.2 Motivation -- 4 Troubleshooting Framework -- 4.1 Framework Overview -- 4.2 Spatio-Temporal Fault Knowledge Graph -- 4.3 Fingerprinting the Fault -- 4.4 Case-Based Reasoning -- 5 Evaluation -- 5.1 Evaluation Setup -- 5.2 Q1. Comparative Experiments -- 5.3 Q2. Ablation Experiment -- 5.4 Q3. Efficiency Experiments -- 5.5 Q4. Case Studies and Learned Lessons -- 6 Conclusion -- References -- GPU-Based Graph Matching for Accelerating Similarity Assessment in Process-Oriented Case-Based Reasoning -- 1 Introduction -- 2 Foundations and Related Work -- 2.1 Semantic Workflow Graph Representation -- 2.2 State-Space Search by Using A* -- 2.3 Related Work -- 3 AMonG: A*-Based Graph Matching on Graphic Processing Units -- 3.1 Overview and Components -- 3.2 Parallel Graph Matching -- 4 Experimental Evaluation -- 4.1 Experimental Setup -- 4.2 Experimental Results -- 4.3 Discussion and Further Considerations -- 5 Conclusion and Future Work.
References.
Record no. UNINA-9910586636203321
Find it here: Univ. Federico II
Case-based reasoning research and development : 30th International Conference, ICCBR 2022, Nancy, France, September 12-15, 2022, proceedings / Mark T. Keane, Nirmalie Wiratunga (editors)
Publication/distribution/printing Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (420 pages)
Discipline 153.43
Series Lecture notes in computer science. Lecture notes in artificial intelligence
Topical subject Case-based reasoning
Expert systems (Computer science)
Deep learning (Machine learning)
ISBN 3-031-14923-8
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Organization -- Invited Talks -- Seeing Through Black Boxes with Human Vision: Deep Learning and Explainable AI in Medical Image Applications -- Case-Based Reasoning for Clinical Decisions That Are Computer-Aided, Not Automated -- Towards More Cognitively Appealing Paradigms in Case-Based Reasoning -- Contents -- Explainability in CBR -- Using Case-Based Reasoning for Capturing Expert Knowledge on Explanation Methods -- 1 Introduction -- 2 Background -- 3 Case-Based Elicitation -- 3.1 Case Structure -- 3.2 Case Base Acquisition -- 4 CBR Process -- 5 Evaluation and Discussion -- 6 Conclusions -- References -- A Few Good Counterfactuals: Generating Interpretable, Plausible and Diverse Counterfactual Explanations -- 1 Introduction -- 2 Related Work -- 2.1 What Are Good Counterfactual Explanations? -- 2.2 Perturbation-Based Approaches -- 2.3 Instance-Based Approaches -- 2.4 Instance-Based Shortcomings -- 3 Good Counterfactuals in Multi-class Domains -- 3.1 Reusing the kNN Explanation Cases -- 3.2 Validating Candidate Counterfactuals -- 3.3 Discussion -- 4 Evaluation -- 4.1 Methodology -- 4.2 Results -- 5 Conclusions -- References -- How Close Is Too Close? The Role of Feature Attributions in Discovering Counterfactual Explanations -- 1 Introduction -- 2 Related Work -- 3 DisCERN -- 3.1 Nearest-Unlike Neighbour -- 3.2 Feature Ordering by Feature Attribution -- 3.3 Substitution-Based Adaptation -- 3.4 Integrated Gradients for DisCERN -- 3.5 Bringing the NUN Closer -- 4 Evaluation -- 4.1 Datasets -- 4.2 Experiment Setup -- 4.3 Performance Measures for Counterfactual Explanations -- 5 Results -- 5.1 A Comparison of Feature Attribution Techniques -- 5.2 A Comparison of Counterfactual Discovery Algorithms -- 5.3 Impact of Bringing NUN Closer -- 6 Conclusions -- References -- Algorithmic Bias and Fairness in Case-Based Reasoning.
1 Introduction -- 2 Related Research -- 2.1 Bias in ML -- 2.2 Bias in CBR -- 2.3 Metric Learning -- 3 FairRet: Eliminating Bias with Metric Learning -- 3.1 Bias and The Similarity Knowledge Container -- 3.2 A Metric Learning Approach -- 3.3 Multi-objective Particle Swarm Optimization -- 4 Results -- 4.1 Dealing with Underestimation Bias -- 4.2 Outcome Distortion -- 4.3 Retrieval Overlap -- 5 Conclusions -- References -- "Better" Counterfactuals, Ones People Can Understand: Psychologically-Plausible Case-Based Counterfactuals Using Categorical Features for Explainable AI (XAI) -- 1 Introduction -- 2 Background: Computation and Psychology of Counterfactuals -- 2.1 User Studies of Counterfactual XAI: Mixed Results -- 3 Study 1: Plotting Counterfactuals that have Categoricals -- 3.1 Results and Discussion -- 4 Transforming Case-Based Counterfactuals, Categorically -- 4.1 Case-Based Counterfactual Methods: CB1-CF and CB2-CF -- 4.2 Counterfactuals with Categorical Transforms #1: Global Binning -- 4.3 Counterfactuals with Categorical Transforms #2: Local Direction -- 5 Study 2: Evaluating CAT-CF Methods -- 5.1 Method: Data and Procedure -- 5.2 Results and Discussion: Counterfactual Distance -- 6 Conclusions -- References -- Representation and Similarity -- Extracting Case Indices from Convolutional Neural Networks: A Comparative Study -- 1 Introduction -- 2 Potential Feature Extraction Points in CNNs -- 3 Related Work -- 4 Three Structure-Based Feature Extraction Methods -- 4.1 Post-convolution Feature Extraction -- 4.2 Post-dense Feature Extraction -- 4.3 Multi-net Feature Extraction -- 5 Evaluation -- 5.1 Hypotheses -- 5.2 Test Domain and Test Set Selection -- 5.3 Testbed System -- 5.4 Accuracy Testing and Informal Upper Bound -- 6 Results and Discussion -- 6.1 Comparative Performance -- 6.2 Discussion -- 7 Ramifications for Interpretability.
8 Conclusions and Future Work -- References -- Exploring the Effect of Recipe Representation on Critique-Based Conversational Recommendation -- 1 Introduction -- 2 Background -- 2.1 Diversity in Recommender Systems -- 2.2 Critique-Based Conversational Recommender Systems -- 2.3 Diversity in Recipe Recommenders -- 3 DiversityBite Framework: Recommend, Review, Revise -- 3.1 Adaptive Diversity Goal Approach -- 4 Evaluation -- 4.1 Case Base -- 4.2 Implementation: DGF, AGD, and Diversity Scoring -- 4.3 Simulation Study: Incorporating Diversity in Critique -- 4.4 User Study: Comparing Different Recipe Representations -- 5 Conclusion -- References -- Explaining CBR Systems Through Retrieval and Similarity Measure Visualizations: A Case Study -- 1 Introduction -- 2 Related Work -- 3 SupportPrim CBR System -- 3.1 Data -- 3.2 Case Representation and Similarity Modeling -- 3.3 Case Base and Similarity Population -- 4 Explanatory Case Base Visualizations -- 4.1 Accessing the CBR System's Model -- 4.2 Visualization of Retrievals -- 4.3 Visualization of the Similarity Scores for Individual Case Comparisons -- 5 Experiments -- 6 Discussion -- 7 Conclusion -- References -- Adapting Semantic Similarity Methods for Case-Based Reasoning in the Cloud -- 1 Introduction -- 2 Related Work -- 2.1 Clood CBR -- 2.2 Ontologies in CBR -- 2.3 Retrieval with Word Embedding -- 2.4 Serverless Function Benefits and Limitations -- 3 Semantic Similarity Metrics in a Microservices Architecture -- 3.1 Clood Similarity Functions Overview -- 3.2 Similarity Table -- 3.3 Word Embedding Based Similarity -- 3.4 Ontology-Based Similarity Measure -- 4 Implementation of Semantic Similarity Measures on Clood Framework -- 4.1 Word Embedding Similarity on Clood -- 4.2 Ontology-Based Similarity on Clood -- 5 Evaluation of Resource Impact -- 5.1 Experiment Setup -- 5.2 Result and Discussion.
6 Conclusion -- References -- Adaptation and Analogical Reasoning -- Case Adaptation with Neural Networks: Capabilities and Limitations -- 1 Introduction -- 2 Background -- 3 NN-CDH for both Classification and Regression -- 3.1 General Model of Case Adaptation -- 3.2 1-Hot/1-Cold Nominal Difference -- 3.3 Neural Network Structure of NN-CDH -- 3.4 Training and Adaptation Procedure -- 4 Evaluation -- 4.1 Systems Being Compared -- 4.2 Assembling Case Pairs for Training -- 4.3 Data Sets -- 4.4 Artificial Data Sets -- 5 Conclusion -- References -- A Deep Learning Approach to Solving Morphological Analogies -- 1 Introduction -- 2 The Problem of Morphological Analogy -- 3 Proposed Approach -- 3.1 Classification, Retrieval and Embedding Models -- 3.2 Training and Evaluation -- 4 Experiments -- 4.1 Data -- 4.2 Refining the Training Procedure -- 4.3 Performance Comparison with State of the Art Methods -- 4.4 Distance of the Expected Result -- 4.5 Case Analysis: Navajo and Georgian -- 5 Conclusion and Perspectives -- References -- Theoretical and Experimental Study of a Complexity Measure for Analogical Transfer -- 1 Introduction -- 2 Reminder on Complexity-Based Analogy -- 2.1 Notations -- 2.2 Ordinal Analogical Principle: Complexity Definition -- 2.3 Ordinal Analogical Inference Algorithm -- 3 Theoretical Property of the Complexity Measure: Upper Bound -- 3.1 General Case -- 3.2 Binary Classification Case -- 4 Algorithmic Optimisation -- 4.1 Principle -- 4.2 Proposed Optimized Algorithm -- 5 Experimental Study -- 5.1 Computational Cost -- 5.2 Correlation Between Case Base Complexity and Performance -- 5.3 Correlation Between Complexity and Task Difficulty -- 6 Conclusion and Future Works -- References -- Graphs and Optimisation -- Case-Based Learning and Reasoning Using Layered Boundary Multigraphs -- 1 Introduction -- 2 Background and Related Work.
3 Boundary Graphs and Labeled Boundary Multigraphs -- 3.1 Boundary Graphs -- 3.2 Labeled Boundary Multigraphs -- 3.3 Discussion -- 4 Empirical Evaluation -- 4.1 Experimental Set-Up -- 4.2 Classical Benchmark Data Sets -- 4.3 Scaling Analysis -- 5 Conclusion -- References -- Particle Swarm Optimization in Small Case Bases for Software Effort Estimation -- 1 Introduction -- 2 Related Work -- 3 Software Effort Estimation of User Stories -- 4 CBR Approach -- 4.1 Case Representation -- 4.2 Similarity -- 4.3 Adaptation -- 4.4 Weight Optimization with PSO -- 5 Experiments -- 5.1 Experimental Data -- 5.2 Experiment 1 -- 5.3 Experiment 2 -- 5.4 Discussion of Results -- 6 Conclusion -- References -- MicroCBR: Case-Based Reasoning on Spatio-temporal Fault Knowledge Graph for Microservices Troubleshooting -- 1 Introduction -- 2 Related Work -- 3 Background and Motivation -- 3.1 Background with Basic Concepts -- 3.2 Motivation -- 4 Troubleshooting Framework -- 4.1 Framework Overview -- 4.2 Spatio-Temporal Fault Knowledge Graph -- 4.3 Fingerprinting the Fault -- 4.4 Case-Based Reasoning -- 5 Evaluation -- 5.1 Evaluation Setup -- 5.2 Q1. Comparative Experiments -- 5.3 Q2. Ablation Experiment -- 5.4 Q3. Efficiency Experiments -- 5.5 Q4. Case Studies and Learned Lessons -- 6 Conclusion -- References -- GPU-Based Graph Matching for Accelerating Similarity Assessment in Process-Oriented Case-Based Reasoning -- 1 Introduction -- 2 Foundations and Related Work -- 2.1 Semantic Workflow Graph Representation -- 2.2 State-Space Search by Using A* -- 2.3 Related Work -- 3 AMonG: A*-Based Graph Matching on Graphic Processing Units -- 3.1 Overview and Components -- 3.2 Parallel Graph Matching -- 4 Experimental Evaluation -- 4.1 Experimental Setup -- 4.2 Experimental Results -- 4.3 Discussion and Further Considerations -- 5 Conclusion and Future Work.
References.
Record no. UNISA-996485668603316
Find it here: Univ. di Salerno