LEADER 01744nam 2200397z- 450
001 9910346719303321
005 20210211
010 $a1000037166
035 $a(CKB)4920000000094508
035 $a(oapen)https://directory.doabooks.org/handle/20.500.12854/55364
035 $a(oapen)doab55364
035 $a(EXLCZ)994920000000094508
100 $a20202102d2013 |y 0
101 0 $aeng
135 $aurmn|---annan
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 00$aOrbital Effects in Spaceborne Synthetic Aperture Radar Interferometry
210 $cKIT Scientific Publishing$d2013
215 $a1 online resource (XVIII, 139 p.)
225 1 $aSchriftenreihe des Studiengangs Geodäsie und Geoinformatik / Karlsruher Institut für Technologie, Studiengang Geodäsie und Geoinformatik
311 08$a3-7315-0134-1
330 $aThis book reviews and investigates orbit-related effects in synthetic aperture radar interferometry (InSAR). The translation of orbit inaccuracies into error signals in the interferometric phase is concisely described; estimation and correction approaches are discussed and evaluated, with special focus on network adjustment of redundantly estimated baseline errors. Moreover, the effect of relative motion of the orbit reference frame is addressed.
606 $aPhysics$2bicssc
610 $abaseline errors
610 $aInSAR
610 $aorbit correction
610 $aorbit errors
610 $areference frame
615 7$aPhysics
700 $aBähr$b Hermann$4auth$0158761
906 $aBOOK
912 $a9910346719303321
996 $aOrbital Effects in Spaceborne Synthetic Aperture Radar Interferometry$93038234
997 $aUNINA
LEADER 04826nam 22006375 450
001 9910865259403321
005 20250807152951.0
010 $a3-031-57389-7
024 7 $a10.1007/978-3-031-57389-7
035 $a(MiAaPQ)EBC31357846
035 $a(Au-PeEL)EBL31357846
035 $a(CKB)32169719800041
035 $a(OCoLC)1436832861
035 $a(DE-He213)978-3-031-57389-7
035 $a(EXLCZ)9932169719800041
100 $a20240529d2024 u| 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aBackdoor Attacks against Learning-Based Algorithms /$fby Shaofeng Li, Haojin Zhu, Wen Wu, Xuemin (Sherman) Shen
205 $a1st ed. 2024.
210 1$aCham :$cSpringer Nature Switzerland :$cImprint: Springer,$d2024.
215 $a1 online resource (161 pages)
225 1 $aWireless Networks,$x2366-1445
311 08$a3-031-57388-9
327 $aIntroduction -- Literature Review of Backdoor Attacks -- Invisible Backdoor Attacks in Image Classification Based Network Services -- Hidden Backdoor Attacks in NLP Based Network Services -- Backdoor Attacks and Defense in FL -- Summary and Future Directions.
330 $aThis book introduces a new type of data poisoning attack, dubbed the backdoor attack. In a backdoor attack, an attacker trains the model with poisoned data to obtain a model that performs well on normal inputs but behaves incorrectly on inputs containing crafted triggers. Backdoor attacks can occur in many scenarios where the training process is not entirely controlled, such as using third-party datasets, training on third-party platforms, or directly calling models provided by third parties. Because of the enormous threat that backdoor attacks pose to model supply chain security, they have received widespread attention from academia and industry. This book focuses on exploiting backdoor attacks in three types of DNN applications: image classification, natural language processing, and federated learning. Based on the observation that DNN models are vulnerable to small perturbations, this book demonstrates that steganography and regularization can be adopted to enhance the invisibility of backdoor triggers. Based on image similarity measurement, this book presents two metrics to quantitatively measure the invisibility of backdoor triggers. The invisible trigger design scheme introduced in this book achieves a balance between the invisibility and the effectiveness of backdoor attacks. In the natural language processing domain, it is difficult to design and insert a general backdoor in a manner imperceptible to humans. Any corruption of the textual data (e.g., misspelled words or randomly inserted trigger words/sentences) must retain context-awareness and readability to human inspectors. This book introduces two novel hidden backdoor attacks, targeting three major natural language processing tasks (toxic comment detection, neural machine translation, and question answering), depending on whether the targeted NLP platform accepts raw Unicode characters. The emerging distributed training framework, federated learning, has advantages in preserving users' privacy and has been widely used in electronic medical applications; however, it also faces threats from backdoor attacks. This book presents a novel backdoor detection framework for FL-based e-Health systems. We hope this book provides insights into understanding backdoor attacks in different types of learning-based algorithms, including computer vision, natural language processing, and federated learning. The systematic principles in this book also offer valuable guidance on defending future learning-based algorithms against backdoor attacks.
410 0$aWireless Networks,$x2366-1445
606 $aComputer networks
606 $aWireless communication systems
606 $aMobile communication systems
606 $aMachine learning
606 $aComputer Communication Networks
606 $aWireless and Mobile Communication
606 $aMachine Learning
615 0$aComputer networks.
615 0$aWireless communication systems.
615 0$aMobile communication systems.
615 0$aMachine learning.
615 14$aComputer Communication Networks.
615 24$aWireless and Mobile Communication.
615 24$aMachine Learning.
676 $a004.6
700 $aLi$b Shaofeng$01742564
701 $aZhu$b Haojin$01741900
701 $aWu$b Wen$01738862
701 $aShen$b Xuemin (Sherman)$0720658
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910865259403321
996 $aBackdoor Attacks Against Learning-Based Algorithms$94169311
997 $aUNINA