
Record No.

UNINA9911019934403321

Author

Wong, Kelvin K. L.

Title

Cybernetical Intelligence : Engineering Cybernetics with Machine Intelligence

Publication/distribution

Newark : John Wiley & Sons, Incorporated, 2023

©2024

ISBN

9781394217496

9781394217519

Edition

[1st ed.]

Physical description

1 online resource (433 pages)

Discipline

006.3/1

Subjects

Machine learning

Cybernetics

Neural networks (Computer science)

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

Contents note

Cover -- Title Page -- Copyright Page -- Contents -- Chapter 1 Artificial Intelligence and Cybernetical Learning -- 1.1 Artificial Intelligence Initiative -- 1.2 Intelligent Automation Initiative -- 1.2.1 Benefits of IAI -- 1.3 Artificial Intelligence Versus Intelligent Automation -- 1.3.1 Process Discovery -- 1.3.2 Optimization -- 1.3.3 Analytics and Insight -- 1.4 The Fourth Industrial Revolution and Artificial Intelligence -- 1.4.1 Artificial Narrow Intelligence -- 1.4.2 Artificial General Intelligence -- 1.4.3 Artificial Super Intelligence -- 1.5 Pattern Analysis and Cognitive Learning -- 1.5.1 Machine Learning -- 1.5.1.1 Parametric Algorithms -- 1.5.1.2 Nonparametric Algorithms -- 1.5.2 Deep Learning -- 1.5.2.1 Convolutional Neural Networks in Advancing Artificial Intelligence -- 1.5.2.2 Future Advancement in Deep Learning -- 1.5.3 Cybernetical Learning -- 1.6 Cybernetical Artificial Intelligence -- 1.6.1 Artificial Intelligence Control Theory -- 1.6.2 Information Theory -- 1.6.3 Cybernetic Systems -- 1.7 Cybernetical Intelligence Definition -- 1.8 The Future of Cybernetical Intelligence -- Summary -- Exercise Questions -- Further Reading -- Chapter 2 Cybernetical Intelligent Control -- 2.1 Control Theory and Feedback Control Systems -- 2.2 Maxwell's Analysis of Governors -- 2.3 Harold Black -- 2.4 Nyquist and Bode -- 2.5 Stafford Beer -- 2.5.1 Cybernetic Control -- 2.5.2 Viable Systems Model -- 2.5.3 Cybernetics Models of Management -- 2.6 James Lovelock -- 2.6.1 Cybernetic Approach to Ecosystems -- 2.6.2 Gaia Hypothesis -- 2.7 Macy Conference -- 2.8 McCulloch-Pitts -- 2.9 John von Neumann -- 2.9.1 Discussions on Self-Replicating Machines -- 2.9.2 Discussions on Machine Learning -- Summary -- Exercise Questions -- Further Reading -- Chapter 3 The Basics of Perceptron -- 3.1 The Analogy of Biological and Artificial Neurons.

3.1.1 Biological Neurons and Neurodynamics -- 3.1.2 The Structure of Neural Network -- 3.1.3 Encoding and Decoding -- 3.2 Perception and Multilayer Perceptron -- 3.2.1 Back Propagation Neural Network -- 3.2.2 Derivative Equations for Backpropagation -- 3.3 Activation Function -- 3.3.1 Sigmoid Activation Function -- 3.3.2 Hyperbolic Tangent Activation Function -- 3.3.3 Rectified Linear Unit Activation Function -- 3.3.4 Linear Activation Function -- Summary -- Exercise Questions -- Further Reading -- Chapter 4 The Structure of Neural Network -- 4.1 Layers in Neural Network -- 4.1.1 Input Layer -- 4.1.2 Hidden Layer -- 4.1.3 Neurons -- 4.1.4 Weights and Biases -- 4.1.5 Forward Propagation -- 4.1.6 Backpropagation -- 4.2 Perceptron and Multilayer Perceptron -- 4.3 Recurrent Neural Network -- 4.3.1 Long Short-Term Memory -- 4.4 Markov Neural Networks -- 4.4.1 State Transition Function -- 4.4.2 Observation Function -- 4.4.3 Policy Function -- 4.4.4 Loss Function -- 4.5 Generative Adversarial Network -- Summary -- Exercise Questions -- Further Reading -- Chapter 5 Backpropagation Neural Network -- 5.1 Backpropagation Neural Network -- 5.1.1 Forward Propagation -- 5.2 Gradient Descent -- 5.2.1 Loss Function -- 5.2.2 Parameters in Gradient Descent -- 5.2.3 Gradient in Gradient Descent -- 5.2.4 Learning Rate in Gradient Descent -- 5.2.5 Update Rule in Gradient Descent -- 5.3 Stopping Criteria -- 5.3.1 Convergence and Stopping Criteria -- 5.3.2 Local Minimum and Global Minimum -- 5.4 Resampling Methods -- 5.4.1 Cross-Validation -- 5.4.2 Bootstrapping -- 5.4.3 Monte Carlo Cross-Validation -- 5.5 Optimizers in Neural Network -- 5.5.1 Stochastic Gradient Descent -- 5.5.2 Root Mean Square Propagation -- 5.5.3 Adaptive Moment Estimation -- 5.5.4 AdaMax -- 5.5.5 Momentum Optimization -- Summary -- Exercise Questions -- Further Reading.

Chapter 6 Application of Neural Network in Learning and Recognition -- 6.1 Applying Backpropagation to Shape Recognition -- 6.2 Softmax Regression -- 6.3 K-Binary Classifier -- 6.4 Relational Learning via Neural Network -- 6.4.1 Graph Neural Network -- 6.4.2 Graph Convolutional Network -- 6.5 Cybernetics Using Neural Network -- 6.6 Structure of Neural Network for Image Processing -- 6.7 Transformer Networks -- 6.8 Attention Mechanisms -- 6.9 Graph Neural Networks -- 6.10 Transfer Learning -- 6.11 Generalization of Neural Networks -- 6.12 Performance Measures -- 6.12.1 Confusion Matrix -- 6.12.2 Receiver Operating Characteristic -- 6.12.3 Area Under the ROC Curve -- Summary -- Exercise Questions -- Further Reading -- Chapter 7 Competitive Learning and Self-Organizing Map -- 7.1 Principle of Competitive Learning -- 7.1.1 Step 1: Normalized Input Vector -- 7.1.2 Step 2: Find the Winning Neuron -- 7.1.3 Step 3: Adjust the Network Weight Vector and Output Results -- 7.2 Basic Structure of Self-Organizing Map -- 7.2.1 Properties of Self-Organizing Map -- 7.3 Self-Organizing Mapping Neural Network Algorithm -- 7.3.1 Step 1: Initialize Parameter -- 7.3.2 Step 2: Select Inputs and Determine Winning Nodes -- 7.3.3 Step 3: Affect Neighboring Neurons -- 7.3.4 Step 4: Adjust Weights -- 7.3.5 Step 5: Judging the End Condition -- 7.4 Growing Self-Organizing Map -- 7.5 Time Adaptive Self-Organizing Map -- 7.5.1 TASOM-Based Algorithms for Real Applications -- 7.6 Oriented and Scalable Map -- 7.7 Generative Topographic Map -- Summary -- Exercise Questions -- Further Reading -- Chapter 8 Support Vector Machine -- 8.1 The Definition of Data Clustering -- 8.2 Support Vector and Margin -- 8.3 Kernel Function -- 8.3.1 Linear Kernel -- 8.3.2 Polynomial Kernel -- 8.3.3 Radial Basis Function -- 8.3.4 Laplace Kernel -- 8.3.5 Sigmoid Kernel.

8.4 Linear and Nonlinear Support Vector Machine -- 8.5 Hard Margin and Soft Margin in Support Vector Machine -- 8.6 I/O of Support Vector Machine -- 8.6.1 Training Data -- 8.6.2 Feature Matrix and Label Vector -- 8.7 Hyperparameters of Support Vector Machine -- 8.7.1 The C Hyperparameter -- 8.7.2 Kernel Coefficient -- 8.7.3 Class Weights -- 8.7.4 Convergence Criteria -- 8.7.5 Regularization -- 8.8 Application of Support Vector Machine -- 8.8.1 Classification -- 8.8.2 Regression -- 8.8.3 Image Classification -- 8.8.4 Text Classification -- Summary -- Exercise Questions -- Further Reading -- Chapter 9 Bio-Inspired Cybernetical Intelligence -- 9.1 Genetic Algorithm -- 9.2 Ant Colony Optimization -- 9.3 Bees Algorithm -- 9.4 Artificial Bee Colony Algorithm -- 9.5 Cuckoo Search -- 9.6 Particle Swarm Optimization -- 9.7 Bacterial Foraging Optimization -- 9.8 Gray Wolf Optimizer -- 9.9 Firefly Algorithm -- Summary -- Exercise Questions -- Further Reading -- Chapter 10 Life-Inspired Machine Intelligence and Cybernetics -- 10.1 Multi-Agent AI Systems -- 10.1.1 Game Theory -- 10.1.2 Distributed Multi-Agent Systems -- 10.1.3 Multi-Agent Reinforcement Learning -- 10.1.4 Evolutionary Computation and Multi-Agent Systems -- 10.2 Cellular Automata -- 10.3 Discrete Element Method -- 10.3.1 Particle-Based Simulation of Biological Cells and Tissues -- 10.3.2 Simulation of Microbial Communities and Their Interactions -- 10.3.3 Discrete Element Method-Based Modeling of Biological Fluids and Soft Materials -- 10.4 Smoothed Particle Hydrodynamics -- 10.4.1 SPH-Based Simulations of Biomimetic Fluid Dynamic -- 10.4.2 SPH-Based Simulations of Bio-Inspired Engineering Applications -- Summary -- Exercise Questions -- Further Reading -- Chapter 11 Revisiting Cybernetics and Relation to Cybernetical Intelligence -- 11.1 The Concept and Development of Cybernetics.

11.1.1 Attributes of Control Concepts -- 11.1.2 Research Objects and Characteristics of Cybernetics -- 11.1.3 Development of Cybernetical Intelligence -- 11.2 The Fundamental Ideas of Cybernetics -- 11.2.1 System Idea -- 11.2.2 Information Idea -- 11.2.3 Behavioral Idea -- 11.2.4 Cybernetical Intelligence Neural Network -- 11.3 Cybernetic Expansion into Other Fields of Research -- 11.3.1 Social Cybernetics -- 11.3.2 Internal Control-Related Theories -- 11.3.3 Software Control Theory -- 11.3.4 Perceptual Cybernetics -- 11.4 Practical Application of Cybernetics -- 11.4.1 Research on the Control Mechanism of Neural Networks -- 11.4.2 Balance Between Internal Control and Management Power Relations -- 11.4.3 Software Markov Adaptive Testing Strategy -- 11.4.4 Task Analysis Model -- Summary -- Exercise Questions -- Further Reading -- Chapter 12 Turing Machine -- 12.1 Behavior of a Turing Machine -- 12.1.1 Computing with Turing Machines -- 12.2 Basic Operations of a Turing Machine -- 12.2.1 Reading and Writing to the Tape -- 12.2.2 Moving the Tape Head -- 12.2.3 Changing States -- 12.3 Interchangeability of Program and Behavior -- 12.4 Computability Theory -- 12.4.1 Complexity Theory -- 12.5 Automata Theory -- 12.6 Philosophical Issues Related to Turing Machines -- 12.7 Human and Machine Computations -- 12.8 Historical Models of Computability -- 12.9 Recursive Functions -- 12.10 Turing Machine and Intelligent Control -- Summary -- Exercise Questions -- Further Reading -- Chapter 13 Entropy Concepts in Machine Intelligence -- 13.1 Relative Entropy of Distributions -- 13.2 Relative Entropy and Mutual Information -- 13.3 Entropy in Performance Evaluation -- 13.4 Cross-Entropy Softmax -- 13.5 Calculating Cross-Entropy -- 13.6 Cross-Entropy as a Loss Function -- 13.7 Cross-Entropy and Log Loss -- 13.8 Application of Entropy in Intelligent Control.

13.8.1 Entropy-Based Control.

Summary/abstract

"Cybernetics is a field of study concerned with the understanding of control and communication systems in both natural and artificial systems. It deals with the study of feedback mechanisms, control systems, and information processing, and has had a significant influence on the development of artificial intelligence and computer science. Neural networks, on the other hand, are a type of machine learning algorithm that are modeled after the structure and function of the human brain. Neural networks are capable of learning and making predictions or decisions based on input data, and are widely used in a range of applications, including image recognition, speech recognition, natural language processing, and game playing. The combination of cybernetics and neural networks provides a powerful framework for understanding and developing artificial intelligence systems that can perform a wide range of tasks and make decisions based on data. Cybernetical Intelligence provides a comprehensive overview of these technologies, covering their history, mathematical foundations, different types, applications, and challenges, and is a valuable resource for anyone interested in understanding and working with artificial intelligence and cybernetics"--