LEADER 01137nam1 2200373 450
001 990003089970203316
005 20080403112552.0
035 $a000308997
035 $aUSA01000308997
035 $a(ALEPH)000308997USA01
100 $a20080402d--------km-y0itay50------ba
101 $aeng
102 $aUS
105 $a||||||||001yy
200 1 $aPhysical methods of chemistry$fedited by Bryant W. Rossiter and Roger C. Baetzold
205 $a2 ed.
210 $aNew York [etc.]$cJohn Wiley & Sons
215 $av.$d24 cm
463 \1$1001990003089980203316$12001 $a<> Determination of thermodynamic properties
606 0 $aChimica$xEsperimenti
606 0 $aRicerche di laboratorio
676 $a542
801 0$aIT$bsalbc$gISBD
912 $a990003089970203316
951 $a542 PHY/$c542
959 $aBK
969 $aSCI
979 $aANGELA$b90$c20080402$lUSA01$h1114
979 $aANGELA$b90$c20080403$lUSA01$h1107
979 $aANGELA$b90$c20080403$lUSA01$h1111
979 $aANGELA$b90$c20080403$lUSA01$h1125
996 $aPHYSICAL methods of chemistry$9117436
997 $aUNISA

LEADER 04711nam 22006495 450
001 9910746982603321
005 20250628110040.0
010 $a9781484296912
010 $a1484296915
024 7 $a10.1007/978-1-4842-9691-2
035 $a(CKB)5850000000446695
035 $a(DE-He213)978-1-4842-9691-2
035 $a(MiAaPQ)EBC30882798
035 $a(Au-PeEL)EBL30882798
035 $a(PPN)272919977
035 $a(OCoLC)1403550971
035 $a(Perlego)4515747
035 $a(ODN)ODN0010187193
035 $a(EXLCZ)995850000000446695
100 $a20231003d2023 u| 0
101 0 $aeng
135 $aurnn|008mamaa
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aData Parallel C++ $eProgramming Accelerated Systems Using C++ and SYCL /$fby James Reinders, Ben Ashbaugh, James Brodman, Michael Kinsner, John Pennycook, Xinmin Tian
205 $a2nd ed. 2023.
210 $d2023
210 1$aBerkeley, CA :$cApress :$cImprint: Apress,$d2023.
215 $a1 online resource (XXX, 630 p. 329 illus., 294 illus. in color.)
311 08$a9781484296905
311 08$a1484296907
327 $aChapter 1: Introduction -- Chapter 2: Where Code Executes -- Chapter 3: Data Management and Ordering the Uses of Data -- Chapter 4: Expressing Parallelism -- Chapter 5: Error Handling -- Chapter 6: Unified Shared Memory -- Chapter 7: Buffers -- Chapter 8: Scheduling Kernels and Data Movement -- Chapter 9: Local Memory and Work-group Barriers -- Chapter 10: Defining Kernels -- Chapter 11: Vector and Math Arrays -- Chapter 12: Device Information and Kernel Specialization -- Chapter 13: Practical Tips -- Chapter 14: Common Parallel Patterns -- Chapter 15: Programming for GPUs -- Chapter 16: Programming for CPUs -- Chapter 17: Programming for FPGAs -- Chapter 18: Libraries -- Chapter 19: Memory Model and Atomics -- Chapter 20: Backend Interoperability -- Chapter 21: Migrating CUDA Code -- Epilogue.
330 $a"This book, now in its second edition, is the premier resource to learn SYCL 2020 and is the ONLY book you need to become part of this community." -- Erik Lindahl, GROMACS and Stockholm University. Learn how to accelerate C++ programs using data parallelism and SYCL. This open access book enables C++ programmers to be at the forefront of this exciting and important development that is helping to push computing to new levels. This updated second edition is full of practical advice, detailed explanations, and code examples to illustrate key topics. SYCL enables access to parallel resources in modern accelerated heterogeneous systems. Now, a single C++ application can use any combination of devices (including GPUs, CPUs, FPGAs, and ASICs) that are suitable to the problems at hand. This book teaches data-parallel programming using C++ with SYCL and walks through everything needed to program accelerated systems. The book begins by introducing data parallelism and foundational topics for effective use of SYCL. Later chapters cover advanced topics, including error handling, hardware-specific programming, communication and synchronization, and memory model considerations. All source code for the examples used in this book is freely available on GitHub. The examples are written in modern SYCL and are regularly updated to ensure compatibility with multiple compilers. You Will Learn How to: Accelerate C++ programs using data-parallel programming -- Use SYCL and C++ compilers that support SYCL -- Write portable code for accelerators that is vendor and device agnostic -- Optimize code to improve performance for specific accelerators -- Be poised to benefit as new accelerators appear from many vendors.
606 $aCompilers (Computer programs)
606 $aMakerspaces
606 $aCompilers and Interpreters
606 $aMaker
615 0$aCompilers (Computer programs)
615 0$aMakerspaces.
615 14$aCompilers and Interpreters.
615 24$aMaker.
676 $a005.45
686 $aCOM051010$aCOM067000$2bisacsh
700 $aReinders$b James$4aut$4http://id.loc.gov/vocabulary/relators/aut$0851755
702 $aAshbaugh$b Ben$4aut$4http://id.loc.gov/vocabulary/relators/aut
702 $aBrodman$b James$4aut$4http://id.loc.gov/vocabulary/relators/aut
702 $aKinsner$b Michael$4aut$4http://id.loc.gov/vocabulary/relators/aut
702 $aPennycook$b John$4aut$4http://id.loc.gov/vocabulary/relators/aut
702 $aTian$b Xinmin$4aut$4http://id.loc.gov/vocabulary/relators/aut
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910746982603321
996 $aData Parallel C++$91901797
997 $aUNINA