Choice Computing: Machine Learning and Systemic Economics for Choosing / by Parag Kulkarni
Author Kulkarni Parag
Edition [1st ed. 2022.]
Publication/distribution/printing Singapore : Springer Nature Singapore : Imprint: Springer, 2022
Physical description 1 online resource (254 pages)
Discipline 006.31
Series Intelligent Systems Reference Library
Topical subject Computational intelligence
Machine learning
Computer science - Mathematics
Computational Intelligence
Machine Learning
Mathematics of Computing
ISBN 9789811940590
9789811940583
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Introduction -- Decoding Choosing -- ML of Choosing: Architecting Intelligent Choice Framework -- Machine Learning of Choice Economics -- Co-operative Choosing: Machines and Humans Thinking Together to Choose the Right Way -- Choice Architecture – Machine Learning Framework -- Artificial Consciousness and Choosing (Towards Conscious Choice Machines) -- Choice Computing and Creativity -- Experimental Choice Computing and Choice Learning Through Real-Life Stories -- Beyond Choice Computing.
Record no. UNINA-9910590053703321
Available at: Univ. Federico II
OPAC: Check availability here
Reinforcement and systemic machine learning for decision making / Parag Kulkarni
Author Kulkarni Parag
Publication/distribution/printing Hoboken [New Jersey] : John Wiley & Sons, c2012
Physical description 1 online resource (311 p.)
Discipline 006.3/1
006.31
Series IEEE Press Series on Systems Science and Engineering
Topical subject Reinforcement learning
Machine learning
Decision making
ISBN 1-282-13449-3
9786613807076
1-118-27155-6
1-118-27153-X
1-118-26650-1
Classification TEC008000
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Preface xv -- Acknowledgments xix -- About the Author xxi -- 1 Introduction to Reinforcement and Systemic Machine Learning 1 -- 1.1. Introduction 1 -- 1.2. Supervised, Unsupervised, and Semisupervised Machine Learning 2 -- 1.3. Traditional Learning Methods and History of Machine Learning 4 -- 1.4. What Is Machine Learning? 7 -- 1.5. Machine-Learning Problem 8 -- 1.6. Learning Paradigms 9 -- 1.7. Machine-Learning Techniques and Paradigms 12 -- 1.8. What Is Reinforcement Learning? 14 -- 1.9. Reinforcement Function and Environment Function 16 -- 1.10. Need of Reinforcement Learning 17 -- 1.11. Reinforcement Learning and Machine Intelligence 17 -- 1.12. What Is Systemic Learning? 18 -- 1.13. What Is Systemic Machine Learning? 18 -- 1.14. Challenges in Systemic Machine Learning 19 -- 1.15. Reinforcement Machine Learning and Systemic Machine Learning 19 -- 1.16. Case Study Problem Detection in a Vehicle 20 -- 1.17. Summary 20 -- 2 Fundamentals of Whole-System, Systemic, and Multiperspective Machine Learning 23 -- 2.1. Introduction 23 -- 2.2. What Is Systemic Machine Learning? 27 -- 2.3. Generalized Systemic Machine-Learning Framework 30 -- 2.4. Multiperspective Decision Making and Multiperspective Learning 33 -- 2.5. Dynamic and Interactive Decision Making 43 -- 2.6. The Systemic Learning Framework 47 -- 2.7. System Analysis 52 -- 2.8. Case Study: Need of Systemic Learning in the Hospitality Industry 54 -- 2.9. Summary 55 -- 3 Reinforcement Learning 57 -- 3.1. Introduction 57 -- 3.2. Learning Agents 60 -- 3.3. Returns and Reward Calculations 62 -- 3.4. Reinforcement Learning and Adaptive Control 63 -- 3.5. Dynamic Systems 66 -- 3.6. Reinforcement Learning and Control 68 -- 3.7. Markov Property and Markov Decision Process 68 -- 3.8. Value Functions 69 -- 3.8.1. Action and Value 70 -- 3.9. Learning an Optimal Policy (Model-Based and Model-Free Methods) 70 -- 3.10. Dynamic Programming 71 -- 3.11. Adaptive Dynamic Programming 71 -- 3.12. Example: Reinforcement Learning for Boxing Trainer 75.
3.13. Summary 75 -- 4 Systemic Machine Learning and Model 77 -- 4.1. Introduction 77 -- 4.2. A Framework for Systemic Learning 78 -- 4.3. Capturing the Systemic View 86 -- 4.4. Mathematical Representation of System Interactions 89 -- 4.5. Impact Function 91 -- 4.6. Decision-Impact Analysis 91 -- 4.7. Summary 97 -- 5 Inference and Information Integration 99 -- 5.1. Introduction 99 -- 5.2. Inference Mechanisms and Need 101 -- 5.3. Integration of Context and Inference 107 -- 5.4. Statistical Inference and Induction 111 -- 5.5. Pure Likelihood Approach 112 -- 5.6. Bayesian Paradigm and Inference 113 -- 5.7. Time-Based Inference 114 -- 5.8. Inference to Build a System View 114 -- 5.9. Summary 118 -- 6 Adaptive Learning 119 -- 6.1. Introduction 119 -- 6.2. Adaptive Learning and Adaptive Systems 119 -- 6.3. What Is Adaptive Machine Learning? 123 -- 6.4. Adaptation and Learning Method Selection Based on Scenario 124 -- 6.5. Systemic Learning and Adaptive Learning 127 -- 6.6. Competitive Learning and Adaptive Learning 140 -- 6.7. Examples 146 -- 6.8. Summary 149 -- 7 Multiperspective and Whole-System Learning 151 -- 7.1. Introduction 151 -- 7.2. Multiperspective Context Building 152 -- 7.3. Multiperspective Decision Making and Multiperspective Learning 154 -- 7.4. Whole-System Learning and Multiperspective Approaches 164 -- 7.5. Case Study Based on Multiperspective Approach 167 -- 7.6. Limitations to a Multiperspective Approach 174 -- 7.7. Summary 174 -- 8 Incremental Learning and Knowledge Representation 177 -- 8.1. Introduction 177 -- 8.2. Why Incremental Learning? 178 -- 8.3. Learning from What Is Already Learned. . . 180 -- 8.4. Supervised Incremental Learning 191 -- 8.5. Incremental Unsupervised Learning and Incremental Clustering 191 -- 8.6. Semisupervised Incremental Learning 196 -- 8.7. Incremental and Systemic Learning 199 -- 8.8. Incremental Closeness Value and Learning Method 200 -- 8.9. Learning and Decision-Making Model 205 -- 8.10. Incremental Classification Techniques 206.
8.11. Case Study: Incremental Document Classification 207 -- 8.12. Summary 208 -- 9 Knowledge Augmentation: A Machine Learning Perspective 209 -- 9.1. Introduction 209 -- 9.2. Brief History and Related Work 211 -- 9.3. Knowledge Augmentation and Knowledge Elicitation 215 -- 9.4. Life Cycle of Knowledge 217 -- 9.5. Incremental Knowledge Representation 222 -- 9.6. Case-Based Learning and Learning with Reference to Knowledge Loss 224 -- 9.7. Knowledge Augmentation: Techniques and Methods 224 -- 9.8. Heuristic Learning 228 -- 9.9. Systemic Machine Learning and Knowledge Augmentation 229 -- 9.10. Knowledge Augmentation in Complex Learning Scenarios 232 -- 9.11. Case Studies 232 -- 9.12. Summary 235 -- 10 Building a Learning System 237 -- 10.1. Introduction 237 -- 10.2. Systemic Learning System 237 -- 10.3. Algorithm Selection 242 -- 10.4. Knowledge Representation 244 -- 10.5. Designing a Learning System 245 -- 10.6. Making System to Behave Intelligently 246 -- 10.7. Example-Based Learning 246 -- 10.8. Holistic Knowledge Framework and Use of Reinforcement Learning 246 -- 10.9. Intelligent Agents-Deployment and Knowledge Acquisition and Reuse 250 -- 10.10. Case-Based Learning: Human Emotion-Detection System 251 -- 10.11. Holistic View in Complex Decision Problem 253 -- 10.12. Knowledge Representation and Data Discovery 255 -- 10.13. Components 258 -- 10.14. Future of Learning Systems and Intelligent Systems 259 -- 10.15. Summary 259 -- Appendix A: Statistical Learning Methods 261 -- Appendix B: Markov Processes 271 -- Index 281.
Record no. UNINA-9910139690703321
Available at: Univ. Federico II
OPAC: Check availability here
Reinforcement and systemic machine learning for decision making / Parag Kulkarni
Author Kulkarni Parag
Physical description 1 online resource (312 pages)
Discipline 006.3/1
006.31
Series IEEE Press Series on Systems Science and Engineering
Topical subject Reinforcement learning
Machine learning
Decision making
ISBN 1-282-13449-3
9786613807076
1-118-27155-6
1-118-27153-X
1-118-26650-1
Classification TEC008000
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Preface xv -- Acknowledgments xix -- About the Author xxi -- 1 Introduction to Reinforcement and Systemic Machine Learning 1 -- 1.1. Introduction 1 -- 1.2. Supervised, Unsupervised, and Semisupervised Machine Learning 2 -- 1.3. Traditional Learning Methods and History of Machine Learning 4 -- 1.4. What Is Machine Learning? 7 -- 1.5. Machine-Learning Problem 8 -- 1.6. Learning Paradigms 9 -- 1.7. Machine-Learning Techniques and Paradigms 12 -- 1.8. What Is Reinforcement Learning? 14 -- 1.9. Reinforcement Function and Environment Function 16 -- 1.10. Need of Reinforcement Learning 17 -- 1.11. Reinforcement Learning and Machine Intelligence 17 -- 1.12. What Is Systemic Learning? 18 -- 1.13. What Is Systemic Machine Learning? 18 -- 1.14. Challenges in Systemic Machine Learning 19 -- 1.15. Reinforcement Machine Learning and Systemic Machine Learning 19 -- 1.16. Case Study Problem Detection in a Vehicle 20 -- 1.17. Summary 20 -- 2 Fundamentals of Whole-System, Systemic, and Multiperspective Machine Learning 23 -- 2.1. Introduction 23 -- 2.2. What Is Systemic Machine Learning? 27 -- 2.3. Generalized Systemic Machine-Learning Framework 30 -- 2.4. Multiperspective Decision Making and Multiperspective Learning 33 -- 2.5. Dynamic and Interactive Decision Making 43 -- 2.6. The Systemic Learning Framework 47 -- 2.7. System Analysis 52 -- 2.8. Case Study: Need of Systemic Learning in the Hospitality Industry 54 -- 2.9. Summary 55 -- 3 Reinforcement Learning 57 -- 3.1. Introduction 57 -- 3.2. Learning Agents 60 -- 3.3. Returns and Reward Calculations 62 -- 3.4. Reinforcement Learning and Adaptive Control 63 -- 3.5. Dynamic Systems 66 -- 3.6. Reinforcement Learning and Control 68 -- 3.7. Markov Property and Markov Decision Process 68 -- 3.8. Value Functions 69 -- 3.8.1. Action and Value 70 -- 3.9. Learning an Optimal Policy (Model-Based and Model-Free Methods) 70 -- 3.10. Dynamic Programming 71 -- 3.11. Adaptive Dynamic Programming 71 -- 3.12. Example: Reinforcement Learning for Boxing Trainer 75.
3.13. Summary 75 -- 4 Systemic Machine Learning and Model 77 -- 4.1. Introduction 77 -- 4.2. A Framework for Systemic Learning 78 -- 4.3. Capturing the Systemic View 86 -- 4.4. Mathematical Representation of System Interactions 89 -- 4.5. Impact Function 91 -- 4.6. Decision-Impact Analysis 91 -- 4.7. Summary 97 -- 5 Inference and Information Integration 99 -- 5.1. Introduction 99 -- 5.2. Inference Mechanisms and Need 101 -- 5.3. Integration of Context and Inference 107 -- 5.4. Statistical Inference and Induction 111 -- 5.5. Pure Likelihood Approach 112 -- 5.6. Bayesian Paradigm and Inference 113 -- 5.7. Time-Based Inference 114 -- 5.8. Inference to Build a System View 114 -- 5.9. Summary 118 -- 6 Adaptive Learning 119 -- 6.1. Introduction 119 -- 6.2. Adaptive Learning and Adaptive Systems 119 -- 6.3. What Is Adaptive Machine Learning? 123 -- 6.4. Adaptation and Learning Method Selection Based on Scenario 124 -- 6.5. Systemic Learning and Adaptive Learning 127 -- 6.6. Competitive Learning and Adaptive Learning 140 -- 6.7. Examples 146 -- 6.8. Summary 149 -- 7 Multiperspective and Whole-System Learning 151 -- 7.1. Introduction 151 -- 7.2. Multiperspective Context Building 152 -- 7.3. Multiperspective Decision Making and Multiperspective Learning 154 -- 7.4. Whole-System Learning and Multiperspective Approaches 164 -- 7.5. Case Study Based on Multiperspective Approach 167 -- 7.6. Limitations to a Multiperspective Approach 174 -- 7.7. Summary 174 -- 8 Incremental Learning and Knowledge Representation 177 -- 8.1. Introduction 177 -- 8.2. Why Incremental Learning? 178 -- 8.3. Learning from What Is Already Learned. . . 180 -- 8.4. Supervised Incremental Learning 191 -- 8.5. Incremental Unsupervised Learning and Incremental Clustering 191 -- 8.6. Semisupervised Incremental Learning 196 -- 8.7. Incremental and Systemic Learning 199 -- 8.8. Incremental Closeness Value and Learning Method 200 -- 8.9. Learning and Decision-Making Model 205 -- 8.10. Incremental Classification Techniques 206.
8.11. Case Study: Incremental Document Classification 207 -- 8.12. Summary 208 -- 9 Knowledge Augmentation: A Machine Learning Perspective 209 -- 9.1. Introduction 209 -- 9.2. Brief History and Related Work 211 -- 9.3. Knowledge Augmentation and Knowledge Elicitation 215 -- 9.4. Life Cycle of Knowledge 217 -- 9.5. Incremental Knowledge Representation 222 -- 9.6. Case-Based Learning and Learning with Reference to Knowledge Loss 224 -- 9.7. Knowledge Augmentation: Techniques and Methods 224 -- 9.8. Heuristic Learning 228 -- 9.9. Systemic Machine Learning and Knowledge Augmentation 229 -- 9.10. Knowledge Augmentation in Complex Learning Scenarios 232 -- 9.11. Case Studies 232 -- 9.12. Summary 235 -- 10 Building a Learning System 237 -- 10.1. Introduction 237 -- 10.2. Systemic Learning System 237 -- 10.3. Algorithm Selection 242 -- 10.4. Knowledge Representation 244 -- 10.5. Designing a Learning System 245 -- 10.6. Making System to Behave Intelligently 246 -- 10.7. Example-Based Learning 246 -- 10.8. Holistic Knowledge Framework and Use of Reinforcement Learning 246 -- 10.9. Intelligent Agents-Deployment and Knowledge Acquisition and Reuse 250 -- 10.10. Case-Based Learning: Human Emotion-Detection System 251 -- 10.11. Holistic View in Complex Decision Problem 253 -- 10.12. Knowledge Representation and Data Discovery 255 -- 10.13. Components 258 -- 10.14. Future of Learning Systems and Intelligent Systems 259 -- 10.15. Summary 259 -- Appendix A: Statistical Learning Methods 261 -- Appendix B: Markov Processes 271 -- Index 281.
Record no. UNINA-9910831085703321
Available at: Univ. Federico II
OPAC: Check availability here
Reinforcement and systemic machine learning for decision making / Parag Kulkarni
Author Kulkarni Parag
Publication/distribution/printing Hoboken, N.J. : Wiley : IEEE Press : Systems, Man, & Cybernetics Society, c2012
Physical description 1 online resource (312 pages)
Discipline 006.3/1
Series IEEE Press series on systems science and engineering
Topical subject Reinforcement learning
Machine learning
Decision making
ISBN 1-282-13449-3
9786613807076
1-118-27155-6
1-118-27153-X
1-118-26650-1
Classification TEC008000
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Preface xv -- Acknowledgments xix -- About the Author xxi -- 1 Introduction to Reinforcement and Systemic Machine Learning 1 -- 1.1. Introduction 1 -- 1.2. Supervised, Unsupervised, and Semisupervised Machine Learning 2 -- 1.3. Traditional Learning Methods and History of Machine Learning 4 -- 1.4. What Is Machine Learning? 7 -- 1.5. Machine-Learning Problem 8 -- 1.6. Learning Paradigms 9 -- 1.7. Machine-Learning Techniques and Paradigms 12 -- 1.8. What Is Reinforcement Learning? 14 -- 1.9. Reinforcement Function and Environment Function 16 -- 1.10. Need of Reinforcement Learning 17 -- 1.11. Reinforcement Learning and Machine Intelligence 17 -- 1.12. What Is Systemic Learning? 18 -- 1.13. What Is Systemic Machine Learning? 18 -- 1.14. Challenges in Systemic Machine Learning 19 -- 1.15. Reinforcement Machine Learning and Systemic Machine Learning 19 -- 1.16. Case Study Problem Detection in a Vehicle 20 -- 1.17. Summary 20 -- 2 Fundamentals of Whole-System, Systemic, and Multiperspective Machine Learning 23 -- 2.1. Introduction 23 -- 2.2. What Is Systemic Machine Learning? 27 -- 2.3. Generalized Systemic Machine-Learning Framework 30 -- 2.4. Multiperspective Decision Making and Multiperspective Learning 33 -- 2.5. Dynamic and Interactive Decision Making 43 -- 2.6. The Systemic Learning Framework 47 -- 2.7. System Analysis 52 -- 2.8. Case Study: Need of Systemic Learning in the Hospitality Industry 54 -- 2.9. Summary 55 -- 3 Reinforcement Learning 57 -- 3.1. Introduction 57 -- 3.2. Learning Agents 60 -- 3.3. Returns and Reward Calculations 62 -- 3.4. Reinforcement Learning and Adaptive Control 63 -- 3.5. Dynamic Systems 66 -- 3.6. Reinforcement Learning and Control 68 -- 3.7. Markov Property and Markov Decision Process 68 -- 3.8. Value Functions 69 -- 3.8.1. Action and Value 70 -- 3.9. Learning an Optimal Policy (Model-Based and Model-Free Methods) 70 -- 3.10. Dynamic Programming 71 -- 3.11. Adaptive Dynamic Programming 71 -- 3.12. Example: Reinforcement Learning for Boxing Trainer 75.
3.13. Summary 75 -- 4 Systemic Machine Learning and Model 77 -- 4.1. Introduction 77 -- 4.2. A Framework for Systemic Learning 78 -- 4.3. Capturing the Systemic View 86 -- 4.4. Mathematical Representation of System Interactions 89 -- 4.5. Impact Function 91 -- 4.6. Decision-Impact Analysis 91 -- 4.7. Summary 97 -- 5 Inference and Information Integration 99 -- 5.1. Introduction 99 -- 5.2. Inference Mechanisms and Need 101 -- 5.3. Integration of Context and Inference 107 -- 5.4. Statistical Inference and Induction 111 -- 5.5. Pure Likelihood Approach 112 -- 5.6. Bayesian Paradigm and Inference 113 -- 5.7. Time-Based Inference 114 -- 5.8. Inference to Build a System View 114 -- 5.9. Summary 118 -- 6 Adaptive Learning 119 -- 6.1. Introduction 119 -- 6.2. Adaptive Learning and Adaptive Systems 119 -- 6.3. What Is Adaptive Machine Learning? 123 -- 6.4. Adaptation and Learning Method Selection Based on Scenario 124 -- 6.5. Systemic Learning and Adaptive Learning 127 -- 6.6. Competitive Learning and Adaptive Learning 140 -- 6.7. Examples 146 -- 6.8. Summary 149 -- 7 Multiperspective and Whole-System Learning 151 -- 7.1. Introduction 151 -- 7.2. Multiperspective Context Building 152 -- 7.3. Multiperspective Decision Making and Multiperspective Learning 154 -- 7.4. Whole-System Learning and Multiperspective Approaches 164 -- 7.5. Case Study Based on Multiperspective Approach 167 -- 7.6. Limitations to a Multiperspective Approach 174 -- 7.7. Summary 174 -- 8 Incremental Learning and Knowledge Representation 177 -- 8.1. Introduction 177 -- 8.2. Why Incremental Learning? 178 -- 8.3. Learning from What Is Already Learned. . . 180 -- 8.4. Supervised Incremental Learning 191 -- 8.5. Incremental Unsupervised Learning and Incremental Clustering 191 -- 8.6. Semisupervised Incremental Learning 196 -- 8.7. Incremental and Systemic Learning 199 -- 8.8. Incremental Closeness Value and Learning Method 200 -- 8.9. Learning and Decision-Making Model 205 -- 8.10. Incremental Classification Techniques 206.
8.11. Case Study: Incremental Document Classification 207 -- 8.12. Summary 208 -- 9 Knowledge Augmentation: A Machine Learning Perspective 209 -- 9.1. Introduction 209 -- 9.2. Brief History and Related Work 211 -- 9.3. Knowledge Augmentation and Knowledge Elicitation 215 -- 9.4. Life Cycle of Knowledge 217 -- 9.5. Incremental Knowledge Representation 222 -- 9.6. Case-Based Learning and Learning with Reference to Knowledge Loss 224 -- 9.7. Knowledge Augmentation: Techniques and Methods 224 -- 9.8. Heuristic Learning 228 -- 9.9. Systemic Machine Learning and Knowledge Augmentation 229 -- 9.10. Knowledge Augmentation in Complex Learning Scenarios 232 -- 9.11. Case Studies 232 -- 9.12. Summary 235 -- 10 Building a Learning System 237 -- 10.1. Introduction 237 -- 10.2. Systemic Learning System 237 -- 10.3. Algorithm Selection 242 -- 10.4. Knowledge Representation 244 -- 10.5. Designing a Learning System 245 -- 10.6. Making System to Behave Intelligently 246 -- 10.7. Example-Based Learning 246 -- 10.8. Holistic Knowledge Framework and Use of Reinforcement Learning 246 -- 10.9. Intelligent Agents-Deployment and Knowledge Acquisition and Reuse 250 -- 10.10. Case-Based Learning: Human Emotion-Detection System 251 -- 10.11. Holistic View in Complex Decision Problem 253 -- 10.12. Knowledge Representation and Data Discovery 255 -- 10.13. Components 258 -- 10.14. Future of Learning Systems and Intelligent Systems 259 -- 10.15. Summary 259 -- Appendix A: Statistical Learning Methods 261 -- Appendix B: Markov Processes 271 -- Index 281.
Record no. UNINA-9910877710103321
Available at: Univ. Federico II
OPAC: Check availability here
Reverse Hypothesis Machine Learning : A Practitioner's Perspective / by Parag Kulkarni
Author Kulkarni Parag
Edition [1st ed. 2017.]
Publication/distribution/printing Cham : Springer International Publishing : Imprint: Springer, 2017
Physical description 1 online resource (XVI, 138 p. 61 illus., 9 illus. in color.)
Discipline 006.31
Series Intelligent Systems Reference Library
Topical subject Computational intelligence
Knowledge management
Machinery
Management
Industrial management
Electronics
Microelectronics
Computational Intelligence
Knowledge Management
Machinery and Machine Elements
Innovation/Technology Management
Electronics and Microelectronics, Instrumentation
ISBN 3-319-55312-7
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Pattern Apart -- Understanding Machine Learning Opportunities -- Systemic Machine Learning -- Reinforcement and Deep Reinforcement Machine Learning -- Creative Machine Learning -- Co-operative and Collective learning for Creative Machine Learning -- Building Creative Machines with Optimal Machine Learning and Creative Machine Learning Applications -- Conclusion – Learning Continues.
Record no. UNINA-9910254341403321
Available at: Univ. Federico II
OPAC: Check availability here