LEADER 03369nam 2200529 450
001 996547963703316
005 20230614053059.0
010 $a9789811968143$b(electronic bk.)
010 $z9789811968136
024 7 $a10.1007/978-981-19-6814-3
035 $a(MiAaPQ)EBC7242968
035 $a(Au-PeEL)EBL7242968
035 $a(DE-He213)978-981-19-6814-3
035 $a(OCoLC)1378610667
035 $a(PPN)269657614
035 $a(EXLCZ)9926557874900041
100 $a20230614d2023 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aMachine Learning Safety /$fXiaowei Huang, Gaojie Jin, and Wenjie Ruan
205 $a1st ed. 2023.
210 1$aSingapore :$cSpringer Nature Singapore Pte Ltd.,$d[2023]
210 4$d©2023
215 $a1 online resource (319 pages)
225 0 $aArtificial Intelligence: Foundations, Theory, and Algorithms Series
311 08$aPrint version: Huang, Xiaowei Machine Learning Safety Singapore : Springer,c2023 9789811968136
320 $aIncludes bibliographical references.
327 $a1. Introduction -- 2. Safety of Simple Machine Learning Models -- 3. Safety of Deep Learning -- 4. Robustness Verification of Deep Learning -- 5. Enhancement to Robustness and Generalization -- 6. Probabilistic Graph Model -- A. Mathematical Foundations -- B. Competitions.
330 $aMachine learning algorithms allow computers to learn without being explicitly programmed. Their application is now spreading to highly sophisticated tasks across multiple domains, such as medical diagnostics or fully autonomous vehicles. While this development holds great potential, it also raises new safety concerns, as machine learning has many specificities that make its behaviour prediction and assessment very different from that for explicitly programmed software systems. This book addresses the main safety concerns with regard to machine learning, including its susceptibility to environmental noise and adversarial attacks. Such vulnerabilities have become a major roadblock to the deployment of machine learning in safety-critical applications. The book presents up-to-date techniques for adversarial attacks, which are used to assess the vulnerabilities of machine learning models; formal verification, which is used to determine if a trained machine learning model is free of vulnerabilities; and adversarial training, which is used to enhance the training process and reduce vulnerabilities. The book aims to improve readers' awareness of the potential safety issues regarding machine learning models. In addition, it includes up-to-date techniques for dealing with these issues, equipping readers with not only technical knowledge but also hands-on practical skills.
410 0$aArtificial Intelligence: Foundations, Theory, and Algorithms,$x2365-306X
606 $aComputer security
606 $aMachine learning$xSafety measures
615 0$aComputer security.
615 0$aMachine learning$xSafety measures.
676 $a005.8
700 $aHuang$b Xiaowei$01355348
702 $aJin$b Gaojie
702 $aRuan$b Wenjie
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
912 $a996547963703316
996 $aMachine Learning Safety$93359461
997 $aUNISA