LEADER 04131nam 22005535 450
001 9910678248603321
005 20251113200524.0
010 $a9783030997724
010 $a3030997723
024 7 $a10.1007/978-3-030-99772-4
035 $a(MiAaPQ)EBC7209965
035 $a(Au-PeEL)EBL7209965
035 $a(CKB)26240711500041
035 $a(DE-He213)978-3-030-99772-4
035 $a(PPN)269097619
035 $a(EXLCZ)9926240711500041
100 $a20230216d2023 u| 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aAdversarial Machine Learning $eAttack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence /$fby Aneesh Sreevallabh Chivukula, Xinghao Yang, Bo Liu, Wei Liu, Wanlei Zhou
205 $a1st ed. 2023.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2023.
215 $a1 online resource (314 pages)
311 08$a9783030997717
311 08$a3030997715
320 $aIncludes bibliographical references.
327 $aAdversarial Machine Learning -- Adversarial Deep Learning -- Security and Privacy in Adversarial Learning -- Game-Theoretical Attacks with Adversarial Deep Learning Models -- Physical Attacks in the Real World -- Adversarial Defense Mechanisms -- Adversarial Learning for Privacy Preservation.
330 $aA critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision; natural language processing; and cybersecurity with regard to multidimensional, textual and image data, sequence data, and temporal data. In turn, we assess the robustness properties of deep learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game theoretical adversarial deep learning algorithms. The state-of-the-art in adversarial perturbation-based privacy protection mechanisms is also reviewed. We propose new adversary types for game theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications. In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications so as to deconstruct the contemporary adversarial deep learning designs. Given its scope, the book will be of interest to Adversarial Machine Learning practitioners and Adversarial Artificial Intelligence researchers whose work involves the design and application of Adversarial Deep Learning.
606 $aArtificial intelligence
606 $aData protection
606 $aArtificial Intelligence
606 $aData and Information Security
615 0$aArtificial intelligence.
615 0$aData protection.
615 14$aArtificial Intelligence.
615 24$aData and Information Security.
676 $a005.8
700 $aSreevallabh Chivukula$b Aneesh$01345837
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910678248603321
996 $aAdversarial machine learning$93373037
997 $aUNINA