LEADER 04028nam 2200481 450
001 9910678248603321
005 20230524165429.0
010 $a3-030-99772-3
024 7 $a10.1007/978-3-030-99772-4
035 $a(MiAaPQ)EBC7209965
035 $a(Au-PeEL)EBL7209965
035 $a(CKB)26240711500041
035 $a(DE-He213)978-3-030-99772-4
035 $a(PPN)269097619
035 $a(EXLCZ)9926240711500041
100 $a20230524d2023 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aAdversarial machine learning $eattack surfaces, defence mechanisms, learning theories in artificial intelligence /$fAneesh Sreevallabh Chivukula [and four others]
205 $a1st ed. 2023.
210 1$aCham, Switzerland :$cSpringer Nature Switzerland AG,$d[2023]
210 4$d©2023
215 $a1 online resource (314 pages)
311 08$aPrint version: Sreevallabh Chivukula, Aneesh Adversarial Deep Learning in Cybersecurity Cham : Springer International Publishing AG,c2023 9783030997717
320 $aIncludes bibliographical references.
327 $aAdversarial Machine Learning -- Adversarial Deep Learning -- Security and Privacy in Adversarial Learning -- Game-Theoretical Attacks with Adversarial Deep Learning Models -- Physical Attacks in the Real World -- Adversarial Defense Mechanisms -- Adversarial Learning for Privacy Preservation.
330 $aA critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision, natural language processing, and cybersecurity with regard to multidimensional, textual and image data, sequence data, and temporal data. In turn, we assess the robustness properties of deep learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game-theoretical adversarial deep learning algorithms.
The state of the art in adversarial perturbation-based privacy protection mechanisms is also reviewed. We propose new adversary types for game-theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications. In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design, and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications, so as to deconstruct contemporary adversarial deep learning designs. Given its scope, the book will be of interest to adversarial machine learning practitioners and adversarial artificial intelligence researchers whose work involves the design and application of adversarial deep learning.
606 $aComputer security
606 $aDeep learning (Machine learning)
615 0$aComputer security.
615 0$aDeep learning (Machine learning)
676 $a005.8
700 $aSreevallabh Chivukula$b Aneesh$01345837
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910678248603321
996 $aAdversarial machine learning$93373037
997 $aUNINA