Explainable Artificial Intelligence (XAI) : Concepts, Enabling Tools, Technologies and Applications
| Author | Raj, Pethuru |
| Edition | [1st ed.] |
| Publisher | Stevenage : Institution of Engineering & Technology, 2023 |
| Physical description | 1 online resource (465 pages) |
| Discipline | 006.3 |
| Other authors (Persons) |
Köse, Utku
Sakthivel, Usha
Nagarajan, Susila
Asirvadam, Vijanth Sagayan |
| Series | Computing and Networks Series |
| Topical subject |
Artificial intelligence
Machine learning |
| ISBN |
1-83724-425-1
1-5231-6305-4
1-83953-696-9 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Table of contents |
Intro -- Title -- Copyright -- Contents -- About the editors -- Preface -- 1 An overview of past and present progressions in XAI -- 1.1 Introduction -- 1.2 Background study -- 1.2.1 Key-related ideas of XAI -- 1.3 Overview of XAI -- 1.4 History of XAI -- 1.5 Top AI patterns -- 1.6 Conclusion -- References -- 2 Demystifying explainable artificial intelligence (EAI) -- 2.1 Introduction -- 2.1.1 An overview of artificial intelligence -- 2.1.2 Introduction to explainable AI -- 2.2 Concept of XAI -- 2.3 Explainable AI (EAI) architecture -- 2.4 Learning techniques -- 2.5 Demystifying EAI methods -- 2.5.1 Clever Hans -- 2.5.2 Different users and goals in EAI -- 2.5.3 EAI as quality assurance -- 2.6 Implementation: how to create explainable solutions -- 2.6.1 Method taxonomy -- 2.6.2 Rules - intrinsic local explanations -- 2.6.3 Prototypes -- 2.6.4 Learned representation -- 2.6.5 Partial dependence plot - global post-hoc explanations -- 2.6.6 Feature attribution (importance) -- 2.7 Applications -- 2.8 Conclusion -- References -- 3 Illustrating the significance of explainable artificial intelligence (XAI) -- 3.1 Introduction -- 3.2 The growing power of AI -- 3.3 The challenges and concerns of AI -- 3.4 About the need for AI explainability -- 3.5 The importance of XAI -- 3.6 The importance of model interpretation -- 3.6.1 Model transparency -- 3.6.2 Start with interpretable algorithms -- 3.6.3 Standard techniques for model interpretation -- 3.6.4 ROC curve -- 3.6.5 Focus on feature importance -- 3.6.6 Partial dependence plots (PDPs) -- 3.6.7 Global surrogate models -- 3.6.8 Criteria for ML model interpretation methods -- 3.7 Briefing feature importance scoring methods -- 3.8 Local interpretable model-agnostic explanations (LIMEs) -- 3.9 SHAP explainability algorithm -- 3.9.1 AI trust with symbolic AI.
3.10 The growing scope of XAI for the oil and gas industry -- 3.10.1 XAI for the oil and gas industry -- 3.11 Conclusion -- Bibliography -- 4 Inclusion of XAI in artificial intelligence and deep learning technologies -- 4.1 Introduction -- 4.2 What is XAI? -- 4.3 Why is XAI important? -- 4.4 How does XAI work? -- 4.5 Role of XAI in machine learning and deep learning algorithm -- 4.6 Applications of XAI in machine learning in deep learning -- 4.7 Difference between XAI and AI -- 4.8 Challenges in XAI -- 4.9 Advantages of XAI -- 4.10 Disadvantages of XAI -- 4.11 Future scope of XAI -- 4.12 Conclusion -- References -- 5 Explainable artificial intelligence: tools, platforms, and new taxonomies -- 5.1 Introduction -- 5.2 ML-based systems and awareness -- 5.3 Challenges of the time -- 5.3.1 Requirement of explainability -- 5.3.2 Impact of high-stake decisions -- 5.3.3 Concerns of society -- 5.3.4 Regulations and interpretability issue -- 5.4 State-of-the-art approaches -- 5.5 Assessment approaches -- 5.6 Drivers for XAI -- 5.6.1 Tools and frameworks -- 5.7 Discussion -- 5.7.1 For researchers outside of computer science: taxonomies -- 5.7.2 Taxonomies and reviews focusing on specific aspects -- 5.7.3 Fresh perspectives on taxonomy -- 5.7.4 Taxonomy levels at new levels -- 5.8 Conclusion -- References -- 6 An overview of AI platforms, frameworks, libraries, and processes -- 6.1 Introduction to AI -- 6.2 Role of AI in the 21st century -- 6.2.1 The 2000s -- 6.2.2 The 2010s -- 6.2.3 The future -- 6.3 How AI transformed the world -- 6.3.1 Transportation -- 6.3.2 Finance -- 6.3.3 Healthcare -- 6.3.4 Intelligent cities -- 6.3.5 Security -- 6.4 AI process -- 6.5 TensorFlow -- 6.5.1 Installation -- 6.5.2 TensorFlow basics -- 6.6 Scikit learn -- 6.6.1 Features -- 6.6.2 Installation -- 6.6.3 Scikit modeling -- 6.6.4 Data representation in scikit -- 6.7 Keras. 
6.7.1 Features -- 6.7.2 Building a model in Keras -- 6.7.3 Applications of Keras -- 6.8 Open NN -- 6.8.1 Application -- 6.8.2 RNN -- 6.9 Theano -- 6.9.1 An overview -- 6.10 Why go for Theano Python library? -- 6.10.1 PROS -- 6.10.2 CONS -- 6.11 Basics of Theano -- 6.11.1 Subtracting two scalars -- 6.11.2 Adding two scalars -- 6.11.3 Adding two matrices -- 6.11.4 Logistic function -- References -- 7 Quality framework for explainable artificial intelligence (XAI) and machine learning applications -- 7.1 Introduction -- 7.2 Background -- 7.3 Integrated framework for AI applications development -- 7.4 AI systems characteristics vs. SE best practices -- 7.4.1 Explainable AI characteristics -- 7.5 ML lifecycle (model, data-oriented, and data analytics-oriented lifecycle) -- 7.6 AI/ML requirements engineering -- 7.7 Software effort estimation for AMD, RL, and NLP systems -- 7.7.1 Modified COCOMO model for AI, ML, and NLP applications and apps -- 7.8 Software engineering framework for AI and ML (SEF4 AI and ML) applications -- 7.9 Reference Architecture for AI & ML -- 7.10 Evaluation of Reference Architecture (REF) for AI & ML: explainable Chatbot case study -- 7.11 Conclusions and further research -- References -- 8 Methods for explainable artificial intelligence -- 8.1 Preliminarily study -- 8.2 Importance of XAI for human-interpretable models -- 8.3 Overview of XAI techniques -- 8.4 Taxonomy of popular XAI methods -- 8.4.1 Backpropagation-based methods -- 8.4.2 Perturbation methods -- 8.4.3 Influence methods -- 8.4.4 Knowledge extraction -- 8.4.5 Concept methods -- 8.4.6 Visualization methods -- 8.4.7 Example-based explanation -- 8.5 Conclusion -- References -- 9 Knowledge representation and reasoning (KRR) -- 9.1 Introduction -- 9.2 Methodology -- 9.2.1 Reference model -- 9.2.2 Ontologies -- 9.2.3 Knowledge graphs.
9.2.4 Semantic web technologies -- 9.2.5 ML -- 9.2.6 Tools and techniques -- 9.3 Results and discussion -- 9.3.1 Case study: using different techniques for representing medical knowledge [7] -- 9.3.2 Case study: using different techniques for representing academic knowledge [8] -- 9.3.3 Case study: using different techniques for representing farmer knowledge [9] -- 9.3.4 Case study: social media knowledge representation techniques [10] -- 9.3.5 Case study: using different techniques for representing cyber security knowledge [11] -- 9.4 Conclusion and future work -- References -- 10 Knowledge visualization: AI integration with 360-degree dashboards -- 10.1 Introduction -- 10.2 Information visualization vs. knowledge visualization -- 10.3 Knowledge visualization in design thinking -- 10.4 Visualization in transferring knowledge -- 10.5 The knowledge visualization model -- 10.5.1 Knowledge visualization framework -- 10.6 Formats and examples of knowledge visualization -- 10.6.1 Conceptual diagrams -- 10.6.2 Visual metaphors -- 10.6.3 Knowledge animation -- 10.6.4 Knowledge maps -- 10.6.5 Knowledge domain visualization -- 10.7 Types and usage of knowledge visualization tools -- 10.8 Knowledge visualization templates -- 10.8.1 Mind maps -- 10.8.2 Swimlane diagrams -- 10.8.3 Matrix diagrams -- 10.8.4 Flowcharts -- 10.8.5 Concept maps -- 10.8.6 Funnel charts or diagrams -- 10.9 Visualization in machine learning -- 10.9.1 Decision trees -- 10.9.2 Decision graph -- 10.10 Conclusion -- References -- 11 Empowering machine learning with knowledge graphs for the semantic era -- 11.1 Introduction -- 11.2 Tending towards digitally transformed enterprises -- 11.3 The emergence of KGs -- 11.4 Briefing the concept of KGs -- 11.5 Formalizing KGs -- 11.6 Creating custom KGs -- 11.7 Characterizing KGs -- 11.8 Use cases of KGs -- 11.9 ML and KGs. 11.10 KGs for explainable and responsible AI -- 11.11 Stardog enterprise KG platform -- 11.12 What CANNOT be considered a KG? 
-- 11.13 Conclusion -- Bibliography -- 12 Enterprise knowledge graphs using ensemble learning and data management -- 12.1 Introduction -- 12.2 Current ensemble model learning -- 12.2.1 Bagging -- 12.2.2 Boosting -- 12.2.3 Random Forest -- 12.3 Related work and literature review -- 12.4 Methodology -- 12.4.1 Enhanced ensemble model framework -- 12.4.2 Training and testing datasets -- 12.4.3 Enhanced ensemble model and algorithm -- 12.5 Experimental setup and enterprise dataset -- 12.5.1 Ensemble models performance evaluation using enterprise knowledge graph -- 12.5.2 Tree classification as knowledge graph -- 12.6 Result and discussion -- 12.7 Conclusion -- References -- 13 Illustrating graph neural networks (GNNs) and the distinct applications -- 13.1 Introduction -- 13.2 Briefing the distinctions of graphs -- 13.3 The challenges -- 13.4 ML algorithms -- 13.5 DL algorithms -- 13.6 The emergence of GNNs -- 13.7 Demystifying DNNs on graph data -- 13.8 GNNs: the applications -- 13.9 The challenges for GNNs -- 13.10 Conclusion -- Bibliography -- 14 AI applications-computer vision and natural language processing -- 14.1 Object recognition -- 14.2 AI-powered video analytics -- 14.3 Contactless payments -- 14.4 Foot tracking -- 14.5 Animal detection -- 14.6 Airport facial recognition -- 14.7 Autonomous driving -- 14.8 Video surveillance -- 14.9 Healthcare medical detection -- 14.10 Computer vision in agriculture -- 14.10.1 Drone-based crop monitoring -- 14.10.2 Yield analysis -- 14.10.3 Smart systems for crop grading and sorting -- 14.10.4 Automated pesticide spraying -- 14.10.5 Phenotyping -- 14.10.6 Forest information -- 14.11 Computer vision in transportation -- 14.11.1 Safety and driver assistance -- 14.11.2 Traffic control. 14.11.3 Driving autonomous vehicles. |
| Record no. | UNINA-9911007174203321 |
| Find it at: Univ. Federico II |
Hybrid PID Based Predictive Control Strategies for WirelessHART Networked Control Systems / / by Sabo Miya Hassan, Rosdiazli Ibrahim, Nordin Saad, Kishore Bingi, Vijanth Sagayan Asirvadam
| Author | Hassan, Sabo Miya |
| Edition | [1st ed. 2020.] |
| Publisher | Cham : Springer International Publishing : Imprint : Springer, 2020 |
| Physical description | 1 online resource (xvi, 154 pages) |
| Discipline | 629.8 |
| Series | Studies in Systems, Decision and Control |
| Topical subject |
Automatic control
Telecommunication
Computational intelligence
Control and Systems Theory
Communications Engineering, Networks
Computational Intelligence |
| ISBN | 3-030-47737-1 |
| Format | Printed material |
| Bibliographic level | Monograph |
| Language of publication | eng |
| Table of contents | Introduction -- Filtered Predictive PI Controller for WirelessHART Networked Systems -- WirelessHART Networked Set-point Weighted Controllers -- Hybrid APSO–Spiral Dynamic Algorithm -- Hybrid ABFA-APSO Algorithm -- Comparison WirelessHART Networked systems. |
| Record no. | UNINA-9910483039303321 |
| Find it at: Univ. Federico II |