Astounding wonder [electronic resource] : imagining science and science fiction in interwar America / John Cheng
Author | Cheng, John
Edition | [1st ed.]
Publication/distribution/printing | Philadelphia : University of Pennsylvania Press, c2012
Physical description | 1 online resource (401 p.)
Discipline | 813/.0876209
Topical subject |
Science fiction, American - History and criticism
Science fiction - Periodicals - History
Literature and science - United States - History - 20th century
Science in popular culture - United States
Genre/form subject | Electronic books.
ISBN |
1-283-89840-3
0-8122-0667-3
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note | pt. I. Circulation -- pt. II. Reading -- pt. III. Practice.
Record no. | UNINA-9910463583003321
Available at: Univ. Federico II
Astounding wonder [electronic resource] : imagining science and science fiction in interwar America / John Cheng
Author | Cheng, John
Edition | [1st ed.]
Publication/distribution/printing | Philadelphia : University of Pennsylvania Press, c2012
Physical description | 1 online resource (401 p.)
Discipline | 813/.0876209
Topical subject |
Science fiction, American - History and criticism
Science fiction - Periodicals - History
Literature and science - United States - History - 20th century
Science in popular culture - United States
Uncontrolled subject |
American History
American Studies
Cultural Studies
Literature
ISBN |
1-283-89840-3
0-8122-0667-3
Classification | HU 1818
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note | pt. I. Circulation -- pt. II. Reading -- pt. III. Practice.
Record no. | UNINA-9910788680403321
Available at: Univ. Federico II
Astounding wonder : imagining science and science fiction in interwar America / John Cheng
Author | Cheng, John
Edition | [1st ed.]
Publication/distribution/printing | Philadelphia : University of Pennsylvania Press, c2012
Physical description | 1 online resource (401 p.)
Discipline | 813/.0876209
Topical subject |
Science fiction, American - History and criticism
Science fiction - Periodicals - History
Literature and science - United States - History - 20th century
Science in popular culture - United States
Uncontrolled subject |
American History
American Studies
Cultural Studies
Literature
ISBN |
1-283-89840-3
0-8122-0667-3
Classification | HU 1818
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note | pt. I. Circulation -- pt. II. Reading -- pt. III. Practice.
Record no. | UNINA-9910826313803321
Available at: Univ. Federico II
Professional CUDA C Programming [electronic resource]
Author | Cheng, John
Publication/distribution/printing | Hoboken : Wiley, 2014
Physical description | 1 online resource (527 p.)
Discipline |
004.35
004/.35
Other authors (persons) |
Grossman, Max
McKercher, Ty
Topical subject |
Computer architecture
Multiprocessors
Parallel processing (Electronic computers)
Parallel programming (Computer science)
Engineering & Applied Sciences
Computer Science
Genre/form subject | Electronic books.
ISBN | 1-118-73927-2
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note |
Cover; Title Page; Copyright; Contents;
Chapter 1 Heterogeneous Parallel Computing with CUDA; Parallel Computing; Sequential and Parallel Programming; Parallelism; Computer Architecture; Heterogeneous Computing; Heterogeneous Architecture; Paradigm of Heterogeneous Computing; CUDA: A Platform for Heterogeneous Computing; Hello World from GPU; Is CUDA C Programming Difficult?; Summary;
Chapter 2 CUDA Programming Model; Introducing the CUDA Programming Model; CUDA Programming Structure; Managing Memory; Organizing Threads; Launching a CUDA Kernel; Writing Your Kernel; Verifying Your Kernel; Handling Errors; Compiling and Executing; Timing Your Kernel; Timing with CPU Timer; Timing with nvprof; Organizing Parallel Threads; Indexing Matrices with Blocks and Threads; Summing Matrices with a 2D Grid and 2D Blocks; Summing Matrices with a 1D Grid and 1D Blocks; Summing Matrices with a 2D Grid and 1D Blocks; Managing Devices; Using the Runtime API to Query GPU Information; Determining the Best GPU; Using nvidia-smi to Query GPU Information; Setting Devices at Runtime; Summary;
Chapter 3 CUDA Execution Model; Introducing the CUDA Execution Model; GPU Architecture Overview; The Fermi Architecture; The Kepler Architecture; Profile-Driven Optimization; Understanding the Nature of Warp Execution; Warps and Thread Blocks; Warp Divergence; Resource Partitioning; Latency Hiding; Occupancy; Synchronization; Scalability; Exposing Parallelism; Checking Active Warps with nvprof; Checking Memory Operations with nvprof; Exposing More Parallelism; Avoiding Branch Divergence; The Parallel Reduction Problem; Divergence in Parallel Reduction; Improving Divergence in Parallel Reduction; Reducing with Interleaved Pairs; Unrolling Loops; Reducing with Unrolling; Reducing with Unrolled Warps; Reducing with Complete Unrolling; Reducing with Template Functions; Dynamic Parallelism; Nested Execution; Nested Hello World on the GPU; Nested Reduction; Summary;
Chapter 4 Global Memory; Introducing the CUDA Memory Model; Benefits of a Memory Hierarchy; CUDA Memory Model; Memory Management; Memory Allocation and Deallocation; Memory Transfer; Pinned Memory; Zero-Copy Memory; Unified Virtual Addressing; Unified Memory; Memory Access Patterns; Aligned and Coalesced Access; Global Memory Reads; Global Memory Writes; Array of Structures versus Structure of Arrays; Performance Tuning; What Bandwidth Can a Kernel Achieve?; Memory Bandwidth; Matrix Transpose Problem; Matrix Addition with Unified Memory; Summary;
Chapter 5 Shared Memory and Constant Memory; Introducing CUDA Shared Memory; Shared Memory; Shared Memory Allocation; Shared Memory Banks and Access Mode; Configuring the Amount of Shared Memory; Synchronization; Checking the Data Layout of Shared Memory; Square Shared Memory; Rectangular Shared Memory; Reducing Global Memory Access; Parallel Reduction with Shared Memory; Parallel Reduction with Unrolling; Parallel Reduction with Dynamic Shared Memory
Record no. | UNINA-9910458434103321
Available at: Univ. Federico II
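Among the opening topics in the contents note above is "Hello World from GPU". As a rough illustrative sketch of that idea (this is not code taken from the book; the kernel name helloFromGPU is an assumption), a minimal CUDA program that launches a kernel printing from GPU threads might look like this:

#include <cstdio>
#include <cuda_runtime.h>

// Kernel executed on the GPU: each thread prints its own index.
__global__ void helloFromGPU(void)
{
    printf("Hello World from GPU thread %d!\n", threadIdx.x);
}

int main(void)
{
    printf("Hello World from CPU!\n");

    // Launch the kernel with one block of 10 threads.
    helloFromGPU<<<1, 10>>>();

    // Wait for the GPU to finish (and flush device-side printf output).
    cudaDeviceSynchronize();
    cudaDeviceReset();
    return 0;
}

A file like this would typically be compiled with nvcc (for example, nvcc hello.cu -o hello) and run on a machine with a CUDA-capable GPU.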
Professional CUDA C Programming [electronic resource]
Author | Cheng, John
Publication/distribution/printing | Hoboken : Wiley, 2014
Physical description | 1 online resource (527 p.)
Discipline |
004.35
004/.35
Other authors (persons) |
Grossman, Max
McKercher, Ty
Topical subject |
Computer architecture
Multiprocessors
Parallel processing (Electronic computers)
Parallel programming (Computer science)
Engineering & Applied Sciences
Computer Science
ISBN | 1-118-73927-2
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note |
Cover; Title Page; Copyright; Contents;
Chapter 1 Heterogeneous Parallel Computing with CUDA; Parallel Computing; Sequential and Parallel Programming; Parallelism; Computer Architecture; Heterogeneous Computing; Heterogeneous Architecture; Paradigm of Heterogeneous Computing; CUDA: A Platform for Heterogeneous Computing; Hello World from GPU; Is CUDA C Programming Difficult?; Summary;
Chapter 2 CUDA Programming Model; Introducing the CUDA Programming Model; CUDA Programming Structure; Managing Memory; Organizing Threads; Launching a CUDA Kernel; Writing Your Kernel; Verifying Your Kernel; Handling Errors; Compiling and Executing; Timing Your Kernel; Timing with CPU Timer; Timing with nvprof; Organizing Parallel Threads; Indexing Matrices with Blocks and Threads; Summing Matrices with a 2D Grid and 2D Blocks; Summing Matrices with a 1D Grid and 1D Blocks; Summing Matrices with a 2D Grid and 1D Blocks; Managing Devices; Using the Runtime API to Query GPU Information; Determining the Best GPU; Using nvidia-smi to Query GPU Information; Setting Devices at Runtime; Summary;
Chapter 3 CUDA Execution Model; Introducing the CUDA Execution Model; GPU Architecture Overview; The Fermi Architecture; The Kepler Architecture; Profile-Driven Optimization; Understanding the Nature of Warp Execution; Warps and Thread Blocks; Warp Divergence; Resource Partitioning; Latency Hiding; Occupancy; Synchronization; Scalability; Exposing Parallelism; Checking Active Warps with nvprof; Checking Memory Operations with nvprof; Exposing More Parallelism; Avoiding Branch Divergence; The Parallel Reduction Problem; Divergence in Parallel Reduction; Improving Divergence in Parallel Reduction; Reducing with Interleaved Pairs; Unrolling Loops; Reducing with Unrolling; Reducing with Unrolled Warps; Reducing with Complete Unrolling; Reducing with Template Functions; Dynamic Parallelism; Nested Execution; Nested Hello World on the GPU; Nested Reduction; Summary;
Chapter 4 Global Memory; Introducing the CUDA Memory Model; Benefits of a Memory Hierarchy; CUDA Memory Model; Memory Management; Memory Allocation and Deallocation; Memory Transfer; Pinned Memory; Zero-Copy Memory; Unified Virtual Addressing; Unified Memory; Memory Access Patterns; Aligned and Coalesced Access; Global Memory Reads; Global Memory Writes; Array of Structures versus Structure of Arrays; Performance Tuning; What Bandwidth Can a Kernel Achieve?; Memory Bandwidth; Matrix Transpose Problem; Matrix Addition with Unified Memory; Summary;
Chapter 5 Shared Memory and Constant Memory; Introducing CUDA Shared Memory; Shared Memory; Shared Memory Allocation; Shared Memory Banks and Access Mode; Configuring the Amount of Shared Memory; Synchronization; Checking the Data Layout of Shared Memory; Square Shared Memory; Rectangular Shared Memory; Reducing Global Memory Access; Parallel Reduction with Shared Memory; Parallel Reduction with Unrolling; Parallel Reduction with Dynamic Shared Memory
Record no. | UNINA-9910791157403321
Available at: Univ. Federico II
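The contents note of this record lists "Summing Matrices with a 2D Grid and 2D Blocks" under Chapter 2. A minimal sketch of that addressing scheme (not the book's own code; the kernel name sumMatrixOnGPU2D and the 32x32 block shape are assumptions) maps one thread to one matrix element:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Element-wise sum of two nx-by-ny matrices stored in row-major order.
// Each thread computes one output element, addressed via a 2D grid of 2D blocks.
__global__ void sumMatrixOnGPU2D(const float *A, const float *B, float *C,
                                 int nx, int ny)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;   // column index
    int iy = blockIdx.y * blockDim.y + threadIdx.y;   // row index
    if (ix < nx && iy < ny) {
        int idx = iy * nx + ix;                       // flattened 1D index
        C[idx] = A[idx] + B[idx];
    }
}

int main(void)
{
    const int nx = 1 << 10, ny = 1 << 10;             // 1024 x 1024 matrices
    const size_t nBytes = (size_t)nx * ny * sizeof(float);

    // Allocate and initialize host data.
    float *h_A = (float *)malloc(nBytes);
    float *h_B = (float *)malloc(nBytes);
    float *h_C = (float *)malloc(nBytes);
    for (int i = 0; i < nx * ny; i++) { h_A[i] = 1.0f; h_B[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, nBytes);
    cudaMalloc(&d_B, nBytes);
    cudaMalloc(&d_C, nBytes);
    cudaMemcpy(d_A, h_A, nBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, nBytes, cudaMemcpyHostToDevice);

    // 2D grid of 2D blocks covering the whole matrix.
    dim3 block(32, 32);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    sumMatrixOnGPU2D<<<grid, block>>>(d_A, d_B, d_C, nx, ny);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one element.
    cudaMemcpy(h_C, d_C, nBytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected 3.0)\n", h_C[0]);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    free(h_A); free(h_B); free(h_C);
    return 0;
}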
Professional CUDA C Programming
Author | Cheng, John
Edition | [1st ed.]
Publication/distribution/printing | Hoboken : Wiley, 2014
Physical description | 1 online resource (527 p.)
Discipline |
004.35
004/.35
Other authors (persons) |
Grossman, Max
McKercher, Ty
Topical subject |
Computer architecture
Multiprocessors
Parallel processing (Electronic computers)
Parallel programming (Computer science)
Engineering & Applied Sciences
Computer Science
ISBN | 1-118-73927-2
Format | Printed material
Bibliographic level | Monograph
Publication language | eng
Contents note |
Cover; Title Page; Copyright; Contents;
Chapter 1 Heterogeneous Parallel Computing with CUDA; Parallel Computing; Sequential and Parallel Programming; Parallelism; Computer Architecture; Heterogeneous Computing; Heterogeneous Architecture; Paradigm of Heterogeneous Computing; CUDA: A Platform for Heterogeneous Computing; Hello World from GPU; Is CUDA C Programming Difficult?; Summary;
Chapter 2 CUDA Programming Model; Introducing the CUDA Programming Model; CUDA Programming Structure; Managing Memory; Organizing Threads; Launching a CUDA Kernel; Writing Your Kernel; Verifying Your Kernel; Handling Errors; Compiling and Executing; Timing Your Kernel; Timing with CPU Timer; Timing with nvprof; Organizing Parallel Threads; Indexing Matrices with Blocks and Threads; Summing Matrices with a 2D Grid and 2D Blocks; Summing Matrices with a 1D Grid and 1D Blocks; Summing Matrices with a 2D Grid and 1D Blocks; Managing Devices; Using the Runtime API to Query GPU Information; Determining the Best GPU; Using nvidia-smi to Query GPU Information; Setting Devices at Runtime; Summary;
Chapter 3 CUDA Execution Model; Introducing the CUDA Execution Model; GPU Architecture Overview; The Fermi Architecture; The Kepler Architecture; Profile-Driven Optimization; Understanding the Nature of Warp Execution; Warps and Thread Blocks; Warp Divergence; Resource Partitioning; Latency Hiding; Occupancy; Synchronization; Scalability; Exposing Parallelism; Checking Active Warps with nvprof; Checking Memory Operations with nvprof; Exposing More Parallelism; Avoiding Branch Divergence; The Parallel Reduction Problem; Divergence in Parallel Reduction; Improving Divergence in Parallel Reduction; Reducing with Interleaved Pairs; Unrolling Loops; Reducing with Unrolling; Reducing with Unrolled Warps; Reducing with Complete Unrolling; Reducing with Template Functions; Dynamic Parallelism; Nested Execution; Nested Hello World on the GPU; Nested Reduction; Summary;
Chapter 4 Global Memory; Introducing the CUDA Memory Model; Benefits of a Memory Hierarchy; CUDA Memory Model; Memory Management; Memory Allocation and Deallocation; Memory Transfer; Pinned Memory; Zero-Copy Memory; Unified Virtual Addressing; Unified Memory; Memory Access Patterns; Aligned and Coalesced Access; Global Memory Reads; Global Memory Writes; Array of Structures versus Structure of Arrays; Performance Tuning; What Bandwidth Can a Kernel Achieve?; Memory Bandwidth; Matrix Transpose Problem; Matrix Addition with Unified Memory; Summary;
Chapter 5 Shared Memory and Constant Memory; Introducing CUDA Shared Memory; Shared Memory; Shared Memory Allocation; Shared Memory Banks and Access Mode; Configuring the Amount of Shared Memory; Synchronization; Checking the Data Layout of Shared Memory; Square Shared Memory; Rectangular Shared Memory; Reducing Global Memory Access; Parallel Reduction with Shared Memory; Parallel Reduction with Unrolling; Parallel Reduction with Dynamic Shared Memory
Record no. | UNINA-9910826497103321
Available at: Univ. Federico II
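Chapter 5 in the contents note covers "Parallel Reduction with Shared Memory". The sketch below illustrates that pattern under stated assumptions (the kernel name reduceSmem and the block size of 512 are illustrative choices, not drawn from the book): each block stages its slice of the input in shared memory, performs an in-place tree reduction, and the host adds up the per-block partial sums.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define BLOCK_SIZE 512

// Each block reduces BLOCK_SIZE input elements to one partial sum, staging
// data in shared memory and halving the active stride at every step.
__global__ void reduceSmem(const int *g_idata, int *g_odata, unsigned int n)
{
    __shared__ int smem[BLOCK_SIZE];
    unsigned int tid = threadIdx.x;
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;

    // Load one element per thread into shared memory (0 if out of range).
    smem[tid] = (idx < n) ? g_idata[idx] : 0;
    __syncthreads();

    // In-place tree reduction within the block.
    for (unsigned int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            smem[tid] += smem[tid + stride];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum; the host adds up the partials.
    if (tid == 0)
        g_odata[blockIdx.x] = smem[0];
}

int main(void)
{
    const unsigned int n = 1 << 20;                          // 1M elements
    const unsigned int blocks = (n + BLOCK_SIZE - 1) / BLOCK_SIZE;

    int *h_in = (int *)malloc(n * sizeof(int));
    int *h_partial = (int *)malloc(blocks * sizeof(int));
    for (unsigned int i = 0; i < n; i++) h_in[i] = 1;        // expected sum: n

    int *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, blocks * sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

    reduceSmem<<<blocks, BLOCK_SIZE>>>(d_in, d_out, n);
    cudaMemcpy(h_partial, d_out, blocks * sizeof(int), cudaMemcpyDeviceToHost);

    long long sum = 0;
    for (unsigned int i = 0; i < blocks; i++) sum += h_partial[i];
    printf("GPU sum = %lld (expected %u)\n", sum, n);

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_partial);
    return 0;
}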