Advanced Computing, Networking and Informatics - Volume 1 : Advanced Computing and Informatics Proceedings of the Second International Conference on Advanced Computing, Networking and Informatics (ICACNI-2014) / edited by Malay Kumar Kundu, Durga Prasad Mohapatra, Amit Konar, Aruna Chakraborty
Edition [1st ed. 2014.]
Publication/distribution Cham : Springer International Publishing : Imprint: Springer, 2014
Physical description 1 online resource (717 p.)
Discipline 004
Series Smart Innovation, Systems and Technologies
Topical subject Computational intelligence
Artificial intelligence
Computational Intelligence
Artificial Intelligence
ISBN 3-319-07353-2
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Advanced Computing and Informatics -- Machine Learning -- Pattern Analysis and Recognition -- Image Analysis -- Fuzzy Set Theoretic Analysis -- Document Analysis -- Biometric and Biological Data Analysis -- Data and Web Mining -- e-Learning and e-Commerce -- Ontological Analysis -- Human-Computer Interfacing -- Swarm and Evolutionary Computing.
Record Nr. UNINA-9910299752903321
Available at: Univ. Federico II
Advanced Computing, Networking and Informatics - Volume 2 : Wireless Networks and Security Proceedings of the Second International Conference on Advanced Computing, Networking and Informatics (ICACNI-2014) / edited by Malay Kumar Kundu, Durga Prasad Mohapatra, Amit Konar, Aruna Chakraborty
Edition [1st ed. 2014.]
Publication/distribution Cham : Springer International Publishing : Imprint: Springer, 2014
Physical description 1 online resource (617 p.)
Discipline 621.384
Series Smart Innovation, Systems and Technologies
Topical subject Computational intelligence
Artificial intelligence
Computational Intelligence
Artificial Intelligence
ISBN 3-319-07350-8
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Vehicular, Ad hoc and Sensor Networks -- Communication Technologies -- Network Routing -- Data Hiding and Cryptography -- Cloud Computing -- Efficient Architecture and Computing -- Innovative Technologies and Applications.
Record Nr. UNINA-9910299752603321
Available at: Univ. Federico II
Call admission control in mobile cellular networks / Sanchita Ghosh and Amit Konar
Author Ghosh Sanchita
Edition [1st ed. 2013.]
Publication/distribution Berlin ; Heidelberg : Springer, c2013
Physical description 1 online resource (XII, 236 p.)
Discipline 621.384
Other authors (persons) Konar Amit
Series Studies in computational intelligence
Soggetto topico Cell phone systems
Wireless communication systems - Management
ISBN 3-642-30997-6
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note An Overview of Call Admission Control in Mobile Cellular Networks -- An Overview of Computational Intelligence Algorithms -- Automatic call Management in a Cellular Mobile Network by Fuzzy Threshold Logic -- An Evolutionary Approach to Velocity and Traffic Sensitive Call Admission Control -- Call Admission Control Using Bio-geography Based Optimization.
Record Nr. UNINA-9910437899103321
Available at: Univ. Federico II
Cognitive modeling of human memory and learning : a non-invasive brain-computer interfacing approach / Lidia Ghosh, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering, Amit Konar, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering, Pratyusha Rakshit, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering
Author Ghosh Lidia
Publication/distribution Hoboken, New Jersey : Wiley, [2020]
Physical description 1 online resource (275 pages)
Discipline 153.12
Topical subject Memory
ISBN 1-119-70591-6
1-119-70587-8
1-119-70592-4
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Chapter 1: Introduction to Human Memory and Learning Models -- 1.1 Introduction 2 -- 1.2 Philosophical Contributions to Memory Research 4 -- 1.2.1 Atkinson and Shiffrin's Model 4 -- 1.2.2 Tveter's Model 6 -- 1.2.3 Tulving's model 6 -- 1.2.4 The Parallel and Distributed Processing (PDP) Approach 8 -- 1.2.5 Procedural and Declarative Memory 9 -- 1.3 Brain-theoretic Interpretation of Memory Formation 11 -- 1.3.1 Coding for Memory 11 -- 1.3.2 Memory Consolidation 13 -- 1.3.3 Location of stored Memories 16 -- 1.3.4 Isolation of Information in Memory 16 -- 1.4 Cognitive Maps 17 -- 1.5 Neural Plasticity 18 -- 1.6 Modularity 19 -- 1.7 The cellular Process behind STM Formation 20 -- 1.8 LTM Formation 21 -- 1.9 Brain Signal Analysis in the Context of Memory and Learning 22 -- 1.9.1 Association of EEG alpha and theta band with memory performances 22 -- 1.9.2 Oscillatory beta and gamma frequency band activation in STM performance 26 -- 1.9.3 Change in EEG band power with changing working memory load 26 -- 1.9.4 Effects of Electromagnetic field on the EEG response of Working Memory 29 -- 1.9.5 EEG Analysis to discriminate focused attention and WM performance 30 -- 1.9.6 EEG power changes in memory repetition effect 31 -- 1.9.7 Correlation between LTM Retrieval and EEG features 34 -- 1.9.8 Impact of math anxiety on WM response: An EEG study 37 -- 1.10 Memory Modelling by Computational Intelligence Techniques 38 -- 1.11 Scope of the Book 43 -- References 47 -- Chapter 2: Working Memory Modeling Using Inverse Fuzzy Relational Approach -- 2.1 Introduction 56 -- 2.2 Problem Formulation and Approach 59 -- 2.2.1 Independent Component Analysis as a Source Localization Tool 61 -- 2.2.2 Independent Component Analysis vs Principal Component Analysis 62 -- 2.2.3 Feature Extraction 63 -- 2.2.4 Phase 1: WM Modeling 64 -- 2.2.4.1 Step I: WM modeling of subject using EEG signals during full face encoding and recall from specific part of same face 65 -- 2.2.4.2 Step II: WM modeling of subject using EEG signals during full face encoding and recall from all parts of same face 68.
2.2.5 Phase 2: WM Analysis 69 -- 2.2.6 Finding Max-Min Compositional of Weight Matrix 70 -- 2.3 Experimental Results and Performance Analysis 75 -- 2.3.1 Experimental Set-up 75 -- 2.3.2 Source Localization using e-LORETA 78 -- 2.3.3 Pre-processing 79 -- 2.3.4 Selection of EEG Features 80 -- 2.3.5 WM Model Consistency across Partial Face Stimuli 81 -- 2.3.6 Inter-person Variability of Weight Matrix W 85 -- 2.3.7 Variation in Imaging Attributes 87 -- 2.3.8 Comparative Analysis with existing Fuzzy Inverse Relations 87 -- 2.4 Discussion 88 -- 2.5 Conclusion 89 -- References 90 -- Chapter 3: Short-Term Memory Modeling in Shape-Recognition Task by Type-2 Fuzzy Deep Brain Learning -- 3.1 Introduction 98 -- 3.2 System Overview 101 -- 3.3 Brain Functional Mapping using Type-2 Fuzzy DBLN 107 -- 3.3.1 Overview of Type-2 Fuzzy Sets 107 -- 3.3.2 Type-2 Fuzzy Mapping and Parameter Adaptation by Perceptron-like Learning 108 -- 3.3.2.1 Construction of the Proposed Interval Type-2 Fuzzy Membership Function 109 -- 3.3.2.2 Construction of IT2FS Induced Mapping Function 110 -- 3.3.2.3 Secondary Membership Function Computation of Proposed GT2FS 112 -- 3.3.2.4 Proposed General Type-2 Fuzzy Mapping 114 -- 3.3.3 Perceptron-like Learning for Weight Adaptation 115 -- 3.3.4 Training of the Proposed Shape-Reconstruction Algorithm 116 -- 3.3.5 The Test Phase of the Memory Model 118 -- 3.4 Experiments and Results 118 -- 3.4.1 Experimental Set-up 118 -- 3.4.2 Experiment 1: Validation of the STM Model with respect to Error Metric 121 -- 3.4.3 Experiment 2: Similar Encoding by a Subject for Similar Input Object-Shapes 122 -- 3.4.4 Experiment 3: Study of Subjects' Learning Ability with Increasing Complexity in Object Shape 123 -- 3.4.5 Experiment 4: Convergence Time of the Weight Matrix G for Increased Complexity of the Input Shape Stimuli 124 -- 3.4.6 Experiment 5: Abnormality in G matrix for the subjects with Brain Impairment 125 -- 3.5 Biological Implications 126 -- 3.6 Performance Analysis 128.
3.6.1 Performance Analysis of the Proposed T2FS Methods 128 -- 3.6.2 Computational Performance Analysis of the Proposed T2FS Methods 130 -- 3.6.3 Statistical Validation using Wilcoxon Signed-Rank Test 130 -- 3.6.4 Optimal Parameter Selection and Robustness Study 131 -- 3.7 Conclusions 133 -- References 135 -- Chapter 4: EEG Analysis for Subjective Assessment of Motor Learning Skill in Driving Using Type-2 Fuzzy Reasoning -- 4.1 Introduction 142 -- 4.2 System Overview 144 -- 4.2.1 Rule Design to determine the degree of learning 145 -- 4.2.2 Single Trial Detection of Brain Signals 148 -- 4.2.2.1 Feature Extraction 149 -- 4.2.2.2 Feature Selection 149 -- 4.2.2.3 Classification 150 -- 4.2.3 Type-2 Fuzzy Reasoning 151 -- 4.2.4 Training and Testing of the Classifiers 151 -- 4.3 Determining Type and Degree of Learning by Type-2 Fuzzy Reasoning 151 -- 4.3.1 Preliminaries on IT2FS and GT2FS 153 -- 4.3.2 Proposed Reasoning Method 1: CIT2FS based Reasoning 153 -- 4.3.3 Computation of Percentage Normalized Degree of Learning 155 -- 4.3.4 Optimal λ Selection in IT2FS Reasoning 156 -- 4.3.5 Proposed Reasoning Method 2: Triangular Vertical Slice Based CGT2FS Reasoning 156 -- 4.3.6 Proposed Reasoning Method 3: CGT2FS Reasoning with Gaussian Secondary Membership Function (MF) 158 -- 4.4 Experiments and Results 162 -- 4.4.1 The Experimental set-up 162 -- 4.4.2 Stimulus Presentation 163 -- 4.4.3 Experiment 1: Source Localization using eLORETA 163 -- 4.4.4 Experiment 2: Validation of the Rules 164 -- 4.4.5 Experiment 3: Pre-processing and Artifact Removal using ICA 165 -- 4.4.6 Experiment 4: N400 Old/New Effect Observation over the Successive Trials 167 -- 4.4.7 Experiment 5: Selection of the Discriminating EEG Features using PCA 168 -- 4.5 Performance Analysis and Statistical Validation 169 -- 4.5.1 Performance Analysis of the LSVM Classifiers 169 -- 4.5.2 Robustness Study 170 -- 4.5.3 Performance Analysis of the Proposed T2FS Reasoning Methods 170 -- 4.5.4 Computational Performance Analysis of the Proposed T2FS Reasoning Methods 171.
4.5.5 Statistical Validation using Wilcoxon Signed-Rank Test 172 -- 4.6 Conclusion 173 -- References 173 -- Chapter 5: EEG Analysis to Decode Human Memory Responses in Face Recognition Task Using Deep LSTM Network -- 5.1 Introduction 182 -- 5.2 CSP Modeling 186 -- 5.2.1 The Standard CSP Algorithm 186 -- 5.2.2 The Proposed CSP Algorithm 187 -- 5.3 Proposed LSTM Classifier with Attention Mechanism 189 -- 5.4 Experiment and Results 195 -- 5.4.1 The Experimental Set-up 195 -- 5.4.2 Experiment 1: Activated Brain Region Selection using eLORETA 196 -- 5.4.3 Experiment 2: Detection of the ERP signals associated with the familiar and unfamiliar face discrimination 198 -- 5.4.4 Experiment 3: Performance Analysis of the Proposed CSP algorithm as a Feature extraction Technique 199 -- 5.4.5 Experiment 4: Performance Analysis of the Proposed LSTM based Classifier 201 -- 5.4.6 Experiment 5: Classifier Performance Analysis with varying EEG Time-Window Length 202 -- 5.4.7 Statistical Validation of the Proposed LSTM Classifier using McNemar's Test 203 -- 5.5 Conclusions 204 -- References 204 -- Chapter 6: Cognitive Load Assessment in Motor Learning Tasks by Near-Infrared Spectroscopy Using Type-2 Fuzzy Sets -- 6.1 Introduction 214 -- 6.2 Principles and Methodologies 216 -- 6.2.1 Normalization of Raw Data 217 -- 6.2.2 Pre-processing 218 -- 6.2.3 Feature Extraction 218 -- 6.2.4 Training Instance Generation for Offline Training 219 -- 6.2.5 Feature Selection using Evolutionary Algorithm 219 -- 6.2.6 Classifier Training and Testing 221 -- 6.3 Classifier Design 221 -- 6.3.1 Preliminaries of IT2FS and GT2FS 221 -- 6.3.2 IT2FS Induced Classifier Design 222 -- 6.3.3 GT2FS Induced Classifier Design 228 -- 6.4 Experiments and Results 230 -- 6.4.1 Experimental Set-up 230 -- 6.4.2 Participants 232 -- 6.4.3 Stimulus Presentation for Online Classification 232 -- 6.4.4 Experiment 1: Demonstration of decreasing Cognitive Load with increasing Learning Epochs for similar stimulus 233 -- 6.4.5 Experiment 2: Automatic Extraction of Discriminating fNIRs features 234.
6.4.6 Experiment 3: Optimal Parameter Setting of Feature Selection and Classifier Units 235 -- 6.5 Biological Implications 237 -- 6.6 Performance Analysis 239 -- 6.6.1 Performance Analysis of the proposed IT2FS and GT2FS Classifier 239 -- 6.6.2 Statistical Validation of the Classifiers using McNemar's Test 242 -- 6.7 Conclusion 243 -- References 243 -- Chapter 7: Conclusions and Future Directions of Research on BCI based Memory and Learning -- 7.1 Self-Review of the Works Undertaken in the Book 250 -- 7.2 Limitations of EEG BCI-Based Memory Experiments 252 -- 7.3 Further Scope of Future Research on Memory and Learning 253 -- References.
Record Nr. UNINA-9910555012703321
Available at: Univ. Federico II
Cognitive modeling of human memory and learning : a non-invasive brain-computer interfacing approach / Lidia Ghosh, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering, Amit Konar, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering, Pratyusha Rakshit, Artificial Intelligence Lab., Dept. of Electronics and Tele-Communication Engineering
Author Ghosh Lidia
Publication/distribution Hoboken, New Jersey : Wiley, [2020]
Physical description 1 online resource (275 pages)
Discipline 153.12
Topical subject Memory
ISBN 1-119-70591-6
1-119-70587-8
1-119-70592-4
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Chapter 1: Introduction to Human Memory and Learning Models -- 1.1 Introduction 2 -- 1.2 Philosophical Contributions to Memory Research 4 -- 1.2.1 Atkinson and Shiffrin's Model 4 -- 1.2.2 Tveter's Model 6 -- 1.2.3 Tulving's model 6 -- 1.2.4 The Parallel and Distributed Processing (PDP) Approach 8 -- 1.2.5 Procedural and Declarative Memory 9 -- 1.3 Brain-theoretic Interpretation of Memory Formation 11 -- 1.3.1 Coding for Memory 11 -- 1.3.2 Memory Consolidation 13 -- 1.3.3 Location of stored Memories 16 -- 1.3.4 Isolation of Information in Memory 16 -- 1.4 Cognitive Maps 17 -- 1.5 Neural Plasticity 18 -- 1.6 Modularity 19 -- 1.7 The cellular Process behind STM Formation 20 -- 1.8 LTM Formation 21 -- 1.9 Brain Signal Analysis in the Context of Memory and Learning 22 -- 1.9.1 Association of EEG alpha and theta band with memory performances 22 -- 1.9.2 Oscillatory beta and gamma frequency band activation in STM performance 26 -- 1.9.3 Change in EEG band power with changing working memory load 26 -- 1.9.4 Effects of Electromagnetic field on the EEG response of Working Memory 29 -- 1.9.5 EEG Analysis to discriminate focused attention and WM performance 30 -- 1.9.6 EEG power changes in memory repetition effect 31 -- 1.9.7 Correlation between LTM Retrieval and EEG features 34 -- 1.9.8 Impact of math anxiety on WM response: An EEG study 37 -- 1.10 Memory Modelling by Computational Intelligence Techniques 38 -- 1.11 Scope of the Book 43 -- References 47 -- Chapter 2: Working Memory Modeling Using Inverse Fuzzy Relational Approach -- 2.1 Introduction 56 -- 2.2 Problem Formulation and Approach 59 -- 2.2.1 Independent Component Analysis as a Source Localization Tool 61 -- 2.2.2 Independent Component Analysis vs Principal Component Analysis 62 -- 2.2.3 Feature Extraction 63 -- 2.2.4 Phase 1: WM Modeling 64 -- 2.2.4.1 Step I: WM modeling of subject using EEG signals during full face encoding and recall from specific part of same face 65 -- 2.2.4.2 Step II: WM modeling of subject using EEG signals during full face encoding and recall from all parts of same face 68.
2.2.5 Phase 2: WM Analysis 69 -- 2.2.6 Finding Max-Min Compositional of Weight Matrix 70 -- 2.3 Experimental Results and Performance Analysis 75 -- 2.3.1 Experimental Set-up 75 -- 2.3.2 Source Localization using e-LORETA 78 -- 2.3.3 Pre-processing 79 -- 2.3.4 Selection of EEG Features 80 -- 2.3.5 WM Model Consistency across Partial Face Stimuli 81 -- 2.3.6 Inter-person Variability of Weight Matrix W 85 -- 2.3.7 Variation in Imaging Attributes 87 -- 2.3.8 Comparative Analysis with existing Fuzzy Inverse Relations 87 -- 2.4 Discussion 88 -- 2.5 Conclusion 89 -- References 90 -- Chapter 3: Short-Term Memory Modeling in Shape-Recognition Task by Type-2 Fuzzy Deep Brain Learning -- 3.1 Introduction 98 -- 3.2 System Overview 101 -- 3.3 Brain Functional Mapping using Type-2 Fuzzy DBLN 107 -- 3.3.1 Overview of Type-2 Fuzzy Sets 107 -- 3.3.2 Type-2 Fuzzy Mapping and Parameter Adaptation by Perceptron-like Learning 108 -- 3.3.2.1 Construction of the Proposed Interval Type-2 Fuzzy Membership Function 109 -- 3.3.2.2 Construction of IT2FS Induced Mapping Function 110 -- 3.3.2.3 Secondary Membership Function Computation of Proposed GT2FS 112 -- 3.3.2.4 Proposed General Type-2 Fuzzy Mapping 114 -- 3.3.3 Perceptron-like Learning for Weight Adaptation 115 -- 3.3.4 Training of the Proposed Shape-Reconstruction Algorithm 116 -- 3.3.5 The Test Phase of the Memory Model 118 -- 3.4 Experiments and Results 118 -- 3.4.1 Experimental Set-up 118 -- 3.4.2 Experiment 1: Validation of the STM Model with respect to Error Metric 121 -- 3.4.3 Experiment 2: Similar Encoding by a Subject for Similar Input Object-Shapes 122 -- 3.4.4 Experiment 3: Study of Subjects' Learning Ability with Increasing Complexity in Object Shape 123 -- 3.4.5 Experiment 4: Convergence Time of the Weight Matrix G for Increased Complexity of the Input Shape Stimuli 124 -- 3.4.6 Experiment 5: Abnormality in G matrix for the subjects with Brain Impairment 125 -- 3.5 Biological Implications 126 -- 3.6 Performance Analysis 128.
3.6.1 Performance Analysis of the Proposed T2FS Methods 128 -- 3.6.2 Computational Performance Analysis of the Proposed T2FS Methods 130 -- 3.6.3 Statistical Validation using Wilcoxon Signed-Rank Test 130 -- 3.6.4 Optimal Parameter Selection and Robustness Study 131 -- 3.7 Conclusions 133 -- References 135 -- Chapter 4: EEG Analysis for Subjective Assessment of Motor Learning Skill in Driving Using Type-2 Fuzzy Reasoning -- 4.1 Introduction 142 -- 4.2 System Overview 144 -- 4.2.1 Rule Design to determine the degree of learning 145 -- 4.2.2 Single Trial Detection of Brain Signals 148 -- 4.2.2.1 Feature Extraction 149 -- 4.2.2.2 Feature Selection 149 -- 4.2.2.3 Classification 150 -- 4.2.3 Type-2 Fuzzy Reasoning 151 -- 4.2.4 Training and Testing of the Classifiers 151 -- 4.3 Determining Type and Degree of Learning by Type-2 Fuzzy Reasoning 151 -- 4.3.1 Preliminaries on IT2FS and GT2FS 153 -- 4.3.2 Proposed Reasoning Method 1: CIT2FS based Reasoning 153 -- 4.3.3 Computation of Percentage Normalized Degree of Learning 155 -- 4.3.4 Optimal λ Selection in IT2FS Reasoning 156 -- 4.3.5 Proposed Reasoning Method 2: Triangular Vertical Slice Based CGT2FS Reasoning 156 -- 4.3.6 Proposed Reasoning Method 3: CGT2FS Reasoning with Gaussian Secondary Membership Function (MF) 158 -- 4.4 Experiments and Results 162 -- 4.4.1 The Experimental set-up 162 -- 4.4.2 Stimulus Presentation 163 -- 4.4.3 Experiment 1: Source Localization using eLORETA 163 -- 4.4.4 Experiment 2: Validation of the Rules 164 -- 4.4.5 Experiment 3: Pre-processing and Artifact Removal using ICA 165 -- 4.4.6 Experiment 4: N400 Old/New Effect Observation over the Successive Trials 167 -- 4.4.7 Experiment 5: Selection of the Discriminating EEG Features using PCA 168 -- 4.5 Performance Analysis and Statistical Validation 169 -- 4.5.1 Performance Analysis of the LSVM Classifiers 169 -- 4.5.2 Robustness Study 170 -- 4.5.3 Performance Analysis of the Proposed T2FS Reasoning Methods 170 -- 4.5.4 Computational Performance Analysis of the Proposed T2FS Reasoning Methods 171.
4.5.5 Statistical Validation using Wilcoxon Signed-Rank Test 172 -- 4.6 Conclusion 173 -- References 173 -- Chapter 5: EEG Analysis to Decode Human Memory Responses in Face Recognition Task Using Deep LSTM Network -- 5.1 Introduction 182 -- 5.2 CSP Modeling 186 -- 5.2.1 The Standard CSP Algorithm 186 -- 5.2.2 The Proposed CSP Algorithm 187 -- 5.3 Proposed LSTM Classifier with Attention Mechanism 189 -- 5.4 Experiment and Results 195 -- 5.4.1 The Experimental Set-up 195 -- 5.4.2 Experiment 1: Activated Brain Region Selection using eLORETA 196 -- 5.4.3 Experiment 2: Detection of the ERP signals associated with the familiar and unfamiliar face discrimination 198 -- 5.4.4 Experiment 3: Performance Analysis of the Proposed CSP algorithm as a Feature extraction Technique 199 -- 5.4.5 Experiment 4: Performance Analysis of the Proposed LSTM based Classifier 201 -- 5.4.6 Experiment 5: Classifier Performance Analysis with varying EEG Time-Window Length 202 -- 5.4.7 Statistical Validation of the Proposed LSTM Classifier using McNemar's Test 203 -- 5.5 Conclusions 204 -- References 204 -- Chapter 6: Cognitive Load Assessment in Motor Learning Tasks by Near-Infrared Spectroscopy Using Type-2 Fuzzy Sets -- 6.1 Introduction 214 -- 6.2 Principles and Methodologies 216 -- 6.2.1 Normalization of Raw Data 217 -- 6.2.2 Pre-processing 218 -- 6.2.3 Feature Extraction 218 -- 6.2.4 Training Instance Generation for Offline Training 219 -- 6.2.5 Feature Selection using Evolutionary Algorithm 219 -- 6.2.6 Classifier Training and Testing 221 -- 6.3 Classifier Design 221 -- 6.3.1 Preliminaries of IT2FS and GT2FS 221 -- 6.3.2 IT2FS Induced Classifier Design 222 -- 6.3.3 GT2FS Induced Classifier Design 228 -- 6.4 Experiments and Results 230 -- 6.4.1 Experimental Set-up 230 -- 6.4.2 Participants 232 -- 6.4.3 Stimulus Presentation for Online Classification 232 -- 6.4.4 Experiment 1: Demonstration of decreasing Cognitive Load with increasing Learning Epochs for similar stimulus 233 -- 6.4.5 Experiment 2: Automatic Extraction of Discriminating fNIRs features 234.
6.4.6 Experiment 3: Optimal Parameter Setting of Feature Selection and Classifier Units 235 -- 6.5 Biological Implications 237 -- 6.6 Performance Analysis 239 -- 6.6.1 Performance Analysis of the proposed IT2FS and GT2FS Classifier 239 -- 6.6.2 Statistical Validation of the Classifiers using McNemar's Test 242 -- 6.7 Conclusion 243 -- References 243 -- Chapter 7: Conclusions and Future Directions of Research on BCI based Memory and Learning -- 7.1 Self-Review of the Works Undertaken in the Book 250 -- 7.2 Limitations of EEG BCI-Based Memory Experiments 252 -- 7.3 Further Scope of Future Research on Memory and Learning 253 -- References.
Record Nr. UNINA-9910828417703321
Available at: Univ. Federico II
Emotion recognition : a pattern analysis approach / edited by Amit Konar, Aruna Chakraborty
Author Konar Amit
Edition [1st ed.]
Publication/distribution Hoboken, New Jersey : John Wiley & Sons, Inc., 2015
Physical description 1 online resource (548 p.)
Discipline 004.01/9
Soggetto topico Human-computer interaction
Artificial intelligence
Emotions - Computer simulation
Pattern recognition systems
Context-aware computing
ISBN 1-118-91060-5
1-118-91056-7
1-118-91061-3
Classification TEC008060 COM016000
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record Nr. UNINA-9910132308703321
Available at: Univ. Federico II
Emotion recognition : a pattern analysis approach / edited by Amit Konar, Aruna Chakraborty
Author Konar Amit
Edition [1st ed.]
Publication/distribution Hoboken, New Jersey : John Wiley & Sons, Inc., 2015
Physical description 1 online resource (548 p.)
Discipline 004.01/9
Soggetto topico Human-computer interaction
Artificial intelligence
Emotions - Computer simulation
Pattern recognition systems
Context-aware computing
ISBN 1-118-91060-5
1-118-91056-7
1-118-91061-3
Classification TEC008060 COM016000
Format Printed material
Bibliographic level Monograph
Language of publication eng
Record Nr. UNINA-9910815686603321
Available at: Univ. Federico II
Multi-agent coordination : a reinforcement learning approach / Arup Kumar Sadhu, Amit Konar
Author Sadhu Arup Kumar
Publication/distribution Hoboken, NJ : Wiley : IEEE Press, 2021
Physical description 1 PDF
Discipline 006.31
Soggetto topico Reinforcement learning
Multiagent systems
ISBN 1-119-69902-9
1-119-69899-5
1-119-69905-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note PREFACE -- ACKNOWLEDGEMENT -- CHAPTER 1 INTRODUCTION: MULTI-AGENT COORDINATION BY REINFORCEMENT LEARNING AND EVOLUTIONARY ALGORITHMS 1 -- 1.1 INTRODUCTION 2 -- 1.2 SINGLE AGENT PLANNING 3 -- 1.2.1 Terminologies used in single agent planning 4 -- 1.2.2 Single agent search-based planning algorithms 9 -- 1.2.2.1 Dijkstra's algorithm 10 -- 1.2.2.2 A* (A-star) Algorithm 12 -- 1.2.2.3 D* (D-star) Algorithm 14 -- 1.2.2.4 Planning by STRIPS-like language 16 -- 1.2.3 Single agent reinforcement learning 16 -- 1.2.3.1 Multi-Armed Bandit Problem 17 -- 1.2.3.2 Dynamic programming and Bellman equation 19 -- 1.2.3.3 Correlation between reinforcement learning and Dynamic programming 20 -- 1.2.3.4 Single agent Q-learning 20 -- 1.2.3.5 Single agent planning using Q-learning 23 -- 1.3 MULTI-AGENT PLANNING AND COORDINATION 24 -- 1.3.1 Terminologies related to multi-agent coordination 24 -- 1.3.2 Classification of multi-agent system 25 -- 1.3.3 Game theory for multi-agent coordination 27 -- 1.3.3.1 Nash equilibrium (NE) 30 -- 1.3.3.1.1 Pure strategy NE (PSNE) 31 -- 1.3.3.1.2 Mixed strategy NE (MSNE) 33 -- 1.3.3.2 Correlated equilibrium (CE) 36 -- 1.3.3.3 Static game examples 37 -- 1.3.4 Correlation among RL, DP, and GT 39 -- 1.3.5 Classification of MARL 39 -- 1.3.5.1 Cooperative multi-agent reinforcement learning 41 -- 1.3.5.1.1 Static 41 -- Independent Learner (IL) and Joint Action Learner (JAL) 41 -- Frequency maximum Q-value (FMQ) heuristic 44 -- 1.3.5.1.2 Dynamic 46 -- Team-Q 46 -- Distributed-Q 47 -- Optimal Adaptive Learning 50 -- Sparse cooperative Q-learning (SCQL) 52 -- Sequential Q-learning (SQL) 53 -- Frequency of the maximum reward Q-learning (FMRQ) 53 -- 1.3.5.2 Competitive multi-agent reinforcement learning 55 -- 1.3.5.2.1 Minimax-Q Learning 55 -- 1.3.5.2.2 Heuristically-accelerated multi-agent reinforcement learning 56 -- 1.3.5.3 Mixed multi-agent reinforcement learning 57 -- 1.3.5.3.1 Static 57 -- Belief-based Learning rule 57 -- Fictitious play 57 -- Meta strategy 58 -- Adapt When Everybody is Stationary, Otherwise Move to Equilibrium (AWESOME) 60.
Hyper-Q 62 -- Direct policy search based 63 -- Fixed learning rate 63 -- Infinitesimal Gradient Ascent (IGA) 63 -- Generalized Infinitesimal Gradient Ascent (GIGA) 65 -- Variable learning rate 66 -- Win or Learn Fast-IGA (WoLF-IGA) 66 -- GIGA-Win or Learn Fast (GIGA-WoLF) 66 -- 1.3.5.3.2 Dynamic 67 -- Equilibrium dependent 67 -- Nash-Q Learning 67 -- Correlated-Q Learning (CQL) 68 -- Asymmetric-Q Learning (AQL) 68 -- Friend-or-Foe Q-learning 70 -- Negotiation-based Q-learning 71 -- MAQL with equilibrium transfer 74 -- Equilibrium independent 76 -- Variable learning rate 76 -- Win or Learn Fast Policy hill-climbing (WoLF-PHC) 76 -- Policy Dynamic based Win or Learn Fast (PD-WoLF) 78 -- Fixed learning rate 78 -- Non-Stationary Converging Policies (NSCP) 78 -- Extended Optimal Response Learning (EXORL) 79 -- 1.3.6 Coordination and planning by MAQL 80 -- 1.3.7 Performance analysis of MAQL and MAQL-based coordination 81 -- 1.4 COORDINATION BY OPTIMIZATION ALGORITHM 83 -- 1.4.1 Particle Swarm Optimization (PSO) Algorithm 84 -- 1.4.2 Firefly Algorithm (FA) 87 -- 1.4.2.1 Initialization 87 -- 1.4.2.2 Attraction to Brighter Fireflies 87 -- 1.4.2.3 Movement of Fireflies 88 -- 1.4.3 Imperialist Competitive Algorithm (ICA) 89 -- 1.4.3.1 Initialization 89 -- 1.4.3.2 Selection of Imperialists and Colonies 89 -- 1.4.3.3 Formation of Empires 89 -- 1.4.3.4 Assimilation of Colonies 90 -- 1.4.3.5 Revolution 91 -- 1.4.3.6 Imperialistic Competition 91 -- 1.4.3.6.1 Total Empire Power Evaluation 91 -- 1.4.3.6.2 Reassignment of Colonies and Removal of Empire 92 -- 1.4.3.6.3 Union of Empires 92 -- 1.4.4 Differential evolutionary (DE) algorithm 93 -- 1.4.4.1 Initialization 93 -- 1.4.4.2 Mutation 93 -- 1.4.4.3 Recombination 93 -- 1.4.4.4 Selection 93 -- 1.4.5 Offline optimization 94 -- 1.4.6 Performance analysis of optimization algorithms 94 -- 1.4.6.1 Friedman test 94 -- 1.4.6.2 Iman-Davenport test 95 -- 1.5 SCOPE OF THE Book 95 -- 1.6 SUMMARY 98 -- References 98 -- CHAPTER 2 IMPROVE CONVERGENCE SPEED OF MULTI-AGENT Q-LEARNING FOR COOPERATIVE TASK-PLANNING 107.
2.1 INTRODUCTION 108 -- 2.2 LITERATURE REVIEW 112 -- 2.3 PRELIMINARIES 114 -- 2.3.1 Single agent Q-learning 114 -- 2.3.2 Multi-agent Q-learning 115 -- 2.4 PROPOSED MULTI-AGENT Q-LEARNING 118 -- 2.4.1 Two useful properties 119 -- 2.5 PROPOSED FCMQL ALGORITHMS AND THEIR CONVERGENCE ANALYSIS 120 -- 2.5.1 Proposed FCMQL algorithms 120 -- 2.5.2 Convergence analysis of the proposed FCMQL algorithms 121 -- 2.6 FCMQL-BASED COOPERATIVE MULTI-AGENT PLANNING 122 -- 2.7 EXPERIMENTS AND RESULTS 123 -- 2.8 CONCLUSIONS 130 -- 2.9 SUMMARY 131 -- 2.10 APPENDIX 2.1 131 -- 2.11 APPENDIX 2.2 135 -- References 152 -- CHAPTER 3 CONSENSUS Q-LEARNING FOR MULTI-AGENT COOPERATIVE PLANNING 157 -- 3.1 INTRODUCTION 158 -- 3.2 PRELIMINARIES 159 -- 3.2.1 Single agent Q-learning 159 -- 3.2.2 Equilibrium-based multi-agent Q-learning 160 -- 3.3 CONSENSUS 161 -- 3.4 PROPOSED CONSENSUS Q-LEARNING AND PLANNING 162 -- 3.4.1 Consensus Q-learning 162 -- 3.4.2 Consensus-based multi-robot planning 164 -- 3.5 EXPERIMENTS AND RESULTS 165 -- 3.5.1 Experimental setup 165 -- 3.5.2 Experiments for CoQL 165 -- 3.5.3 Experiments for consensus-based planning 166 -- 3.6 CONCLUSIONS 168 -- 3.7 SUMMARY 168 -- References 168 -- CHAPTER 4 AN EFFICIENT COMPUTING OF CORRELATED EQUILIBRIUM FOR COOPERATIVE Q-LEARNING BASED MULTI-AGENT PLANNING 171 -- 4.1 INTRODUCTION 172 -- 4.2 SINGLE-AGENT Q-LEARNING AND EQUILIBRIUM BASED MAQL 175 -- 4.2.1 Single Agent Q learning 175 -- 4.2.2 Equilibrium based MAQL 175 -- 4.3 PROPOSED COOPERATIVE MULTI-AGENT Q-LEARNING AND PLANNING 176 -- 4.3.1 Proposed schemes with their applicability 176 -- 4.3.2 Immediate rewards in Scheme-I and -II 177 -- 4.3.3 Scheme-I induced MAQL 178 -- 4.3.4 Scheme-II induced MAQL 180 -- 4.3.5 Algorithms for scheme-I and II 182 -- 4.3.6 Constraint QL-I/ QL-II(C ... 183 -- 4.3.7 Convergence 183 -- Multi-agent planning 185 -- 4.4 COMPLEXITY ANALYSIS 186 -- 4.4.1 Complexity of Correlated Q-Learning 187 -- 4.4.1.1 Space Complexity 187.
4.4.1.2 Time Complexity 187 -- 4.4.2 Complexity of the proposed algorithms 188 -- 4.4.2.1 Space Complexity 188 -- 4.4.2.2 Time Complexity 188 -- 4.4.3 Complexity comparison 189 -- 4.4.3.1 Space complexity 190 -- 4.4.3.2 Time complexity 190 -- 4.5 SIMULATION AND EXPERIMENTAL RESULTS 191 -- 4.5.1 Experimental platform 191 -- 4.5.1.1 Simulation 191 -- 4.5.1.2 Hardware 192 -- 4.5.2 Experimental approach 192 -- 4.5.2.1 Learning phase 193 -- 4.5.2.2 Planning phase 193 -- 4.5.3 Experimental results 194 -- 4.6 CONCLUSION 201 -- 4.7 SUMMARY 202 -- 4.8 APPENDIX 203 -- References 209 -- CHAPTER 5 A MODIFIED IMPERIALIST COMPETITIVE ALGORITHM FOR MULTI-AGENT STICK- CARRYING APPLICATION 213 -- 5.1 INTRODUCTION 214 -- 5.2 PROBLEM FORMULATION FOR MULTI-ROBOT STICK-CARRYING 219 -- 5.3 PROPOSED HYBRID ALGORITHM 222 -- 5.3.1 An Overview of Imperialist Competitive Algorithm (ICA) 222 -- 5.3.1.1 Initialization 222 -- 5.3.1.2 Selection of Imperialists and Colonies 223 -- 5.3.1.3 Formation of Empires 223 -- 5.3.1.4 Assimilation of Colonies 223 -- 5.3.1.5 Revolution 224 -- 5.3.1.6 Imperialistic Competition 224 -- 5.3.1.6.1 Total Empire Power Evaluation 225 -- 5.3.1.6.2 Reassignment of Colonies and Removal of Empire 225 -- 5.3.1.6.3 Union of Empires 226 -- 5.4 AN OVERVIEW OF FIREFLY ALGORITHM (FA) 226 -- 5.4.1 Initialization 226 -- 5.4.2 Attraction to Brighter Fireflies 226 -- 5.4.3 Movement of Fireflies 227 -- 5.5 PROPOSED IMPERIALIST COMPETITIVE FIREFLY ALGORITHM 227 -- 5.5.1 Assimilation of Colonies 229 -- 5.5.1.1 Attraction to Powerful Colonies 230 -- 5.5.1.2 Modification of Empire Behavior 230 -- 5.5.1.3 Union of Empires 230 -- 5.6 SIMULATION RESULTS 232 -- 5.6.1 Comparative Framework 232 -- 5.6.2 Parameter Settings 232 -- 5.6.3 Analysis on Explorative Power of ICFA 232 -- 5.6.4 Comparison of Quality of the Final Solution 233 -- 5.6.5 Performance Analysis 233 -- 5.7 COMPUTER SIMULATION AND EXPERIMENT 240 -- 5.7.1 Average total path deviation (ATPD) 240 -- 5.7.2 Average Uncovered Target Distance (AUTD) 241.
5.7.3 Experimental Setup in Simulation Environment 241 -- 5.7.4 Experimental Results in Simulation Environment 242 -- 5.7.5 Experimental Setup with Khepera Robots 244 -- 5.7.6 Experimental Results with Khepera Robots 244 -- 5.8 CONCLUSION 245 -- 5.9 SUMMARY 247 -- 5.10 APPENDIX 5.1 248 -- References 249 -- CHAPTER 6 CONCLUSIONS AND FUTURE DIRECTIONS 255 -- 6.1 CONCLUSIONS 256 -- 6.2 FUTURE DIRECTIONS 257.
Record Nr. UNINA-9910554865903321
Available at: Univ. Federico II
Multi-agent coordination : a reinforcement learning approach / Arup Kumar Sadhu, Amit Konar
Author Sadhu Arup Kumar
Publication/distribution Hoboken, NJ : Wiley : IEEE Press, 2021
Physical description 1 PDF
Discipline 006.31
Soggetto topico Reinforcement learning
Multiagent systems
ISBN 1-119-69902-9
1-119-69899-5
1-119-69905-3
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note PREFACE -- ACKNOWLEDGEMENT -- CHAPTER 1 INTRODUCTION: MULTI-AGENT COORDINATION BY REINFORCEMENT LEARNING AND EVOLUTIONARY ALGORITHMS 1 -- 1.1 INTRODUCTION 2 -- 1.2 SINGLE AGENT PLANNING 3 -- 1.2.1 Terminologies used in single agent planning 4 -- 1.2.2 Single agent search-based planning algorithms 9 -- 1.2.2.1 Dijkstra's algorithm 10 -- 1.2.2.2 A* (A-star) Algorithm 12 -- 1.2.2.3 D* (D-star) Algorithm 14 -- 1.2.2.4 Planning by STRIPS-like language 16 -- 1.2.3 Single agent reinforcement learning 16 -- 1.2.3.1 Multi-Armed Bandit Problem 17 -- 1.2.3.2 Dynamic programming and Bellman equation 19 -- 1.2.3.3 Correlation between reinforcement learning and Dynamic programming 20 -- 1.2.3.4 Single agent Q-learning 20 -- 1.2.3.5 Single agent planning using Q-learning 23 -- 1.3 MULTI-AGENT PLANNING AND COORDINATION 24 -- 1.3.1 Terminologies related to multi-agent coordination 24 -- 1.3.2 Classification of multi-agent system 25 -- 1.3.3 Game theory for multi-agent coordination 27 -- 1.3.3.1 Nash equilibrium (NE) 30 -- 1.3.3.1.1 Pure strategy NE (PSNE) 31 -- 1.3.3.1.2 Mixed strategy NE (MSNE) 33 -- 1.3.3.2 Correlated equilibrium (CE) 36 -- 1.3.3.3 Static game examples 37 -- 1.3.4 Correlation among RL, DP, and GT 39 -- 1.3.5 Classification of MARL 39 -- 1.3.5.1 Cooperative multi-agent reinforcement learning 41 -- 1.3.5.1.1 Static 41 -- Independent Learner (IL) and Joint Action Learner (JAL) 41 -- Frequency maximum Q-value (FMQ) heuristic 44 -- 1.3.5.1.2 Dynamic 46 -- Team-Q 46 -- Distributed-Q 47 -- Optimal Adaptive Learning 50 -- Sparse cooperative Q-learning (SCQL) 52 -- Sequential Q-learning (SQL) 53 -- Frequency of the maximum reward Q-learning (FMRQ) 53 -- 1.3.5.2 Competitive multi-agent reinforcement learning 55 -- 1.3.5.2.1 Minimax-Q Learning 55 -- 1.3.5.2.2 Heuristically-accelerated multi-agent reinforcement learning 56 -- 1.3.5.3 Mixed multi-agent reinforcement learning 57 -- 1.3.5.3.1 Static 57 -- Belief-based Learning rule 57 -- Fictitious play 57 -- Meta strategy 58 -- Adapt When Everybody is Stationary, Otherwise Move to Equilibrium (AWESOME) 60.
Hyper-Q 62 -- Direct policy search based 63 -- Fixed learning rate 63 -- Infinitesimal Gradient Ascent (IGA) 63 -- Generalized Infinitesimal Gradient Ascent (GIGA) 65 -- Variable learning rate 66 -- Win or Learn Fast-IGA (WoLF-IGA) 66 -- GIGA-Win or Learn Fast (GIGA-WoLF) 66 -- 1.3.5.3.2 Dynamic 67 -- Equilibrium dependent 67 -- Nash-Q Learning 67 -- Correlated-Q Learning (CQL) 68 -- Asymmetric-Q Learning (AQL) 68 -- Friend-or-Foe Q-learning 70 -- Negotiation-based Q-learning 71 -- MAQL with equilibrium transfer 74 -- Equilibrium independent 76 -- Variable learning rate 76 -- Win or Learn Fast Policy hill-climbing (WoLF-PHC) 76 -- Policy Dynamic based Win or Learn Fast (PD-WoLF) 78 -- Fixed learning rate 78 -- Non-Stationary Converging Policies (NSCP) 78 -- Extended Optimal Response Learning (EXORL) 79 -- 1.3.6 Coordination and planning by MAQL 80 -- 1.3.7 Performance analysis of MAQL and MAQL-based coordination 81 -- 1.4 COORDINATION BY OPTIMIZATION ALGORITHM 83 -- 1.4.1 Particle Swarm Optimization (PSO) Algorithm 84 -- 1.4.2 Firefly Algorithm (FA) 87 -- 1.4.2.1 Initialization 87 -- 1.4.2.2 Attraction to Brighter Fireflies 87 -- 1.4.2.3 Movement of Fireflies 88 -- 1.4.3 Imperialist Competitive Algorithm (ICA) 89 -- 1.4.3.1 Initialization 89 -- 1.4.3.2 Selection of Imperialists and Colonies 89 -- 1.4.3.3 Formation of Empires 89 -- 1.4.3.4 Assimilation of Colonies 90 -- 1.4.3.5 Revolution 91 -- 1.4.3.6 Imperialistic Competition 91 -- 1.4.3.6.1 Total Empire Power Evaluation 91 -- 1.4.3.6.2 Reassignment of Colonies and Removal of Empire 92 -- 1.4.3.6.3 Union of Empires 92 -- 1.4.4 Differential evolutionary (DE) algorithm 93 -- 1.4.4.1 Initialization 93 -- 1.4.4.2 Mutation 93 -- 1.4.4.3 Recombination 93 -- 1.4.4.4 Selection 93 -- 1.4.5 Offline optimization 94 -- 1.4.6 Performance analysis of optimization algorithms 94 -- 1.4.6.1 Friedman test 94 -- 1.4.6.2 Iman-Davenport test 95 -- 1.5 SCOPE OF THE Book 95 -- 1.6 SUMMARY 98 -- References 98 -- CHAPTER 2 IMPROVE CONVERGENCE SPEED OF MULTI-AGENT Q-LEARNING FOR COOPERATIVE TASK-PLANNING 107.
2.1 INTRODUCTION 108 -- 2.2 LITERATURE REVIEW 112 -- 2.3 PRELIMINARIES 114 -- 2.3.1 Single agent Q-learning 114 -- 2.3.2 Multi-agent Q-learning 115 -- 2.4 PROPOSED MULTI-AGENT Q-LEARNING 118 -- 2.4.1 Two useful properties 119 -- 2.5 PROPOSED FCMQL ALGORITHMS AND THEIR CONVERGENCE ANALYSIS 120 -- 2.5.1 Proposed FCMQL algorithms 120 -- 2.5.2 Convergence analysis of the proposed FCMQL algorithms 121 -- 2.6 FCMQL-BASED COOPERATIVE MULTI-AGENT PLANNING 122 -- 2.7 EXPERIMENTS AND RESULTS 123 -- 2.8 CONCLUSIONS 130 -- 2.9 SUMMARY 131 -- 2.10 APPENDIX 2.1 131 -- 2.11 APPENDIX 2.2 135 -- References 152 -- CHAPTER 3 CONSENSUS Q-LEARNING FOR MULTI-AGENT COOPERATIVE PLANNING 157 -- 3.1 INTRODUCTION 158 -- 3.2 PRELIMINARIES 159 -- 3.2.1 Single agent Q-learning 159 -- 3.2.2 Equilibrium-based multi-agent Q-learning 160 -- 3.3 CONSENSUS 161 -- 3.4 PROPOSED CONSENSUS Q-LEARNING AND PLANNING 162 -- 3.4.1 Consensus Q-learning 162 -- 3.4.2 Consensus-based multi-robot planning 164 -- 3.5 EXPERIMENTS AND RESULTS 165 -- 3.5.1 Experimental setup 165 -- 3.5.2 Experiments for CoQL 165 -- 3.5.3 Experiments for consensus-based planning 166 -- 3.6 CONCLUSIONS 168 -- 3.7 SUMMARY 168 -- References 168 -- CHAPTER 4 AN EFFICIENT COMPUTING OF CORRELATED EQUILIBRIUM FOR COOPERATIVE Q-LEARNING BASED MULTI-AGENT PLANNING 171 -- 4.1 INTRODUCTION 172 -- 4.2 SINGLE-AGENT Q-LEARNING AND EQUILIBRIUM BASED MAQL 175 -- 4.2.1 Single Agent Q learning 175 -- 4.2.2 Equilibrium based MAQL 175 -- 4.3 PROPOSED COOPERATIVE MULTI-AGENT Q-LEARNING AND PLANNING 176 -- 4.3.1 Proposed schemes with their applicability 176 -- 4.3.2 Immediate rewards in Scheme-I and -II 177 -- 4.3.3 Scheme-I induced MAQL 178 -- 4.3.4 Scheme-II induced MAQL 180 -- 4.3.5 Algorithms for scheme-I and II 182 -- 4.3.6 Constraint QL-I/ QL-II(C ... 183 -- 4.3.7 Convergence 183 -- Multi-agent planning 185 -- 4.4 COMPLEXITY ANALYSIS 186 -- 4.4.1 Complexity of Correlated Q-Learning 187 -- 4.4.1.1 Space Complexity 187.
4.4.1.2 Time Complexity 187 -- 4.4.2 Complexity of the proposed algorithms 188 -- 4.4.2.1 Space Complexity 188 -- 4.4.2.2 Time Complexity 188 -- 4.4.3 Complexity comparison 189 -- 4.4.3.1 Space complexity 190 -- 4.4.3.2 Time complexity 190 -- 4.5 SIMULATION AND EXPERIMENTAL RESULTS 191 -- 4.5.1 Experimental platform 191 -- 4.5.1.1 Simulation 191 -- 4.5.1.2 Hardware 192 -- 4.5.2 Experimental approach 192 -- 4.5.2.1 Learning phase 193 -- 4.5.2.2 Planning phase 193 -- 4.5.3 Experimental results 194 -- 4.6 CONCLUSION 201 -- 4.7 SUMMARY 202 -- 4.8 APPENDIX 203 -- References 209 -- CHAPTER 5 A MODIFIED IMPERIALIST COMPETITIVE ALGORITHM FOR MULTI-AGENT STICK- CARRYING APPLICATION 213 -- 5.1 INTRODUCTION 214 -- 5.2 PROBLEM FORMULATION FOR MULTI-ROBOT STICK-CARRYING 219 -- 5.3 PROPOSED HYBRID ALGORITHM 222 -- 5.3.1 An Overview of Imperialist Competitive Algorithm (ICA) 222 -- 5.3.1.1 Initialization 222 -- 5.3.1.2 Selection of Imperialists and Colonies 223 -- 5.3.1.3 Formation of Empires 223 -- 5.3.1.4 Assimilation of Colonies 223 -- 5.3.1.5 Revolution 224 -- 5.3.1.6 Imperialistic Competition 224 -- 5.3.1.6.1 Total Empire Power Evaluation 225 -- 5.3.1.6.2 Reassignment of Colonies and Removal of Empire 225 -- 5.3.1.6.3 Union of Empires 226 -- 5.4 AN OVERVIEW OF FIREFLY ALGORITHM (FA) 226 -- 5.4.1 Initialization 226 -- 5.4.2 Attraction to Brighter Fireflies 226 -- 5.4.3 Movement of Fireflies 227 -- 5.5 PROPOSED IMPERIALIST COMPETITIVE FIREFLY ALGORITHM 227 -- 5.5.1 Assimilation of Colonies 229 -- 5.5.1.1 Attraction to Powerful Colonies 230 -- 5.5.1.2 Modification of Empire Behavior 230 -- 5.5.1.3 Union of Empires 230 -- 5.6 SIMULATION RESULTS 232 -- 5.6.1 Comparative Framework 232 -- 5.6.2 Parameter Settings 232 -- 5.6.3 Analysis on Explorative Power of ICFA 232 -- 5.6.4 Comparison of Quality of the Final Solution 233 -- 5.6.5 Performance Analysis 233 -- 5.7 COMPUTER SIMULATION AND EXPERIMENT 240 -- 5.7.1 Average total path deviation (ATPD) 240 -- 5.7.2 Average Uncovered Target Distance (AUTD) 241.
5.7.3 Experimental Setup in Simulation Environment 241 -- 5.7.4 Experimental Results in Simulation Environment 242 -- 5.7.5 Experimental Setup with Khepera Robots 244 -- 5.7.6 Experimental Results with Khepera Robots 244 -- 5.8 CONCLUSION 245 -- 5.9 SUMMARY 247 -- 5.10 APPENDIX 5.1 248 -- References 249 -- CHAPTER 6 CONCLUSIONS AND FUTURE DIRECTIONS 255 -- 6.1 CONCLUSIONS 256 -- 6.2 FUTURE DIRECTIONS 257.
Record Nr. UNINA-9910812040703321
Available at: Univ. Federico II
Principles in Noisy Optimization : Applied to Multi-agent Coordination / by Pratyusha Rakshit, Amit Konar
Author Rakshit Pratyusha
Edition [1st ed. 2018.]
Publication/distribution Singapore : Springer Singapore : Imprint: Springer, 2018
Physical description 1 online resource (379 pages)
Discipline 006.3
Series Cognitive Intelligence and Robotics
Topical subject Artificial intelligence
Mathematical optimization
Computers
Artificial Intelligence
Optimization
Theory of Computation
ISBN 981-10-8642-7
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Chapter 1. Foundations in Evolutionary Optimization Algorithms -- Chapter 2. Multi-agent Coordination -- Chapter 3. Evolutionary Algorithms in Presence of Noise -- Chapter 4. Learning based Noisy Optimization -- Chapter 5. Noisy Coordination in Multi-objective Settings -- Chapter 6. Integrating Principles of Noisy Optimization with Evolutionary Optimization -- Chapter 7. Conclusion and Future Direction.
Record Nr. UNINA-9910299315803321
Available at: Univ. Federico II