
Record No.

UNINA9910143173803321

Author

Lastovetsky, Alexey, 1957-

Title

Parallel computing on heterogeneous networks / Alexey Lastovetsky

Publication/distribution

Hoboken, New Jersey : Wiley-Interscience, 2003

©2003

ISBN

1-280-36605-2

9786610366057

0-470-34948-4

0-471-45718-3

0-471-65416-7

Physical description

1 online resource (440 p.)

Series

Wiley Series on Parallel and Distributed Computing

Classification (Dewey)

005.2/75

Subjects

Internetworking (Telecommunication)

Parallel programming (Computer science)

Heterogeneous computing

Computer networks

Electronic books.

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

Description based upon print version of record.

Bibliography note

Includes bibliographical references and index.

Contents note

PARALLEL COMPUTING ON HETEROGENEOUS NETWORKS; CONTENTS; Acknowledgments; Introduction

PART I EVOLUTION OF PARALLEL COMPUTING

1. Serial Scalar Processor; 1.1. Serial Scalar Processor and Programming Model; 1.2. Basic Program Properties

2. Vector and Superscalar Processors; 2.1. Vector Processor; 2.2. Superscalar Processor; 2.3. Programming Model; 2.4. Optimizing Compilers; 2.5. Array Libraries; 2.5.1. Level 1 BLAS; 2.5.2. Level 2 BLAS; 2.5.3. Level 3 BLAS; 2.5.4. Sparse BLAS; 2.6. Parallel Languages; 2.6.1. Fortran 90; 2.6.2. The C[] Language; 2.7. Memory Hierarchy and Parallel Programming Tools; 2.8. Summary

3. Shared Memory Multiprocessors; 3.1. Shared Memory Multiprocessor Architecture and Programming Models; 3.2. Optimizing Compilers; 3.3. Thread Libraries; 3.3.1. Operations on Threads; 3.3.2. Operations on Mutexes; 3.3.3. Operations on Condition Variables; 3.3.4. Example of MT Application: Multithreaded Dot Product; 3.4. Parallel Languages; 3.4.1. Fortran 95; 3.4.2. OpenMP; 3.5. Summary

4. Distributed Memory Multiprocessors; 4.1. Distributed Memory Multiprocessor Architecture: Programming Model and Performance Models; 4.2. Message-Passing Libraries; 4.2.1. Basic MPI Programming Model; 4.2.2. Groups and Communicators; 4.2.3. Point-to-Point Communication; 4.2.4. Collective Communication; 4.2.5. Environmental Management; 4.2.6. Example of an MPI Application: Parallel Matrix-Matrix Multiplication; 4.3. Parallel Languages; 4.4. Summary

5. Networks of Computers: Architecture and Programming Challenges; 5.1. Processors Heterogeneity; 5.1.1. Different Processor Speeds; 5.1.2. Heterogeneity of Machine Arithmetic; 5.2. Ad Hoc Communication Network; 5.3. Multiple-User Decentralized Computer System; 5.3.1. Unstable Performance Characteristics; 5.3.2. High Probability of Resource Failures; 5.4. Summary

PART II PARALLEL PROGRAMMING FOR NETWORKS OF COMPUTERS WITH MPC AND HMPI

6. Introduction to mpC; 6.1. First mpC Programs; 6.2. Networks; 6.3. Network Type; 6.4. Network Parent; 6.5. Synchronization of Processes; 6.6. Network Functions; 6.7. Subnetworks; 6.8. A Simple Heterogeneous Algorithm Solving an Irregular Problem; 6.9. The RECON Statement: A Language Construct to Control the Accuracy of the Underlying Model of Computer Network; 6.10. A Simple Heterogeneous Algorithm Solving a Regular Problem; 6.11. Principles of Implementation; 6.11.1. Model of a Target Message-Passing Program; 6.11.2. Mapping of the Parallel Algorithm to the Processors of a Heterogeneous Network; 6.12. Summary

7. Advanced Heterogeneous Parallel Programming in mpC; 7.1. Interprocess Communication; 7.2. Communication Patterns; 7.3. Algorithmic Patterns; 7.4. Underlying Models and the Mapping Algorithm; 7.4.1. Model of a Heterogeneous Network of Computers; 7.4.2. The Mapping Algorithm; 7.5. Summary

8. Toward a Message-Passing Library for Heterogeneous Networks of Computers

Summary/abstract

New approaches to parallel computing are being developed that make better use of the heterogeneous cluster architecture. Provides a detailed introduction to parallel computing on heterogeneous clusters. All concepts and algorithms are illustrated with working programs that can be compiled and executed on any cluster. The algorithms discussed have practical applications in a range of real-life parallel computing problems, such as the N-body problem, portfolio management, and the modeling of oil extraction.