Linux clustering with CSM and GPFS / [Stephen Hochstetler, Bob Beringer]
Edition [3rd ed.]
Published/distributed/printed Austin, TX : IBM, International Technical Support Organization, 2004
Physical description xxii, 316 p. : ill.
Discipline 004/.35
Other authors (Persons) Hochstetler, Stephen
Beringer, Bob
Series IBM redbooks
Topical subject Parallel computers
Computer network architectures
File organization (Computer science)
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Front cover -- Contents -- Figures -- Tables -- Notices -- Trademarks -- Preface -- The team that wrote this redbook -- Become a published author -- Comments welcome -- Summary of changes -- December 2003, Third Edition -- Part 1 Fundamentals -- Chapter 1. Clustering concepts and general overview -- 1.1 What is a cluster -- 1.2 Cluster types -- 1.2.1 High availability -- 1.2.2 High performance computing -- 1.2.3 Horizontal scaling -- 1.3 Beowulf clusters -- 1.4 Linux, open source, and clusters -- 1.5 IBM Linux clusters -- 1.5.1 xSeries custom-order clusters -- 1.5.2 The IBM eServer Cluster 1350 -- 1.6 Cluster logical structure -- 1.6.1 Cluster node types and xSeries offerings -- 1.7 Other cluster hardware components -- 1.7.1 Networks -- 1.7.2 Storage -- 1.7.3 Terminal servers -- 1.7.4 Keyboard, video, and mouse switches -- 1.8 Cluster software -- Chapter 2. New Linux cluster offering from IBM: Cluster 1350 -- 2.1 Product overview -- 2.2 Hardware -- 2.2.1 Racks -- 2.2.2 Cluster nodes -- 2.2.3 Remote Supervisor Adapters -- 2.2.4 External storage -- 2.2.5 Networking -- 2.2.6 Terminal servers -- 2.2.7 Hardware console: Keyboard, video, and mouse (KVM) -- 2.3 Software -- 2.3.1 Linux operating system -- 2.3.2 IBM Cluster Systems Management (CSM) for Linux -- 2.3.3 General Parallel File System for Linux -- 2.3.4 Other software considerations -- 2.4 Services -- 2.4.1 Installation planning services -- 2.4.2 On-site installation of the IBM eServer Cluster 1350 -- 2.4.3 Warranty service and support -- 2.4.4 Project support services -- 2.4.5 Installation and customization -- 2.4.6 Continuing support services -- 2.5 Summary -- Chapter 3. Introducing Cluster Systems Management for Linux -- 3.1 IBM Cluster Systems Management overview -- 3.2 CSM architecture -- 3.2.1 Resource Monitoring and Control subsystem -- 3.2.2 CSM components -- 3.2.3 Security in CSM.
3.3 CSM monitoring -- 3.3.1 How CSM monitors a system -- 3.3.2 Resource Managers -- 3.3.3 Predefined conditions -- 3.3.4 Responses -- 3.3.5 Associating conditions and responses -- 3.3.6 Creating new conditions and responses -- 3.4 CSM management components -- 3.4.1 Node and group management commands -- 3.4.2 Controlling the hardware -- 3.4.3 Using DSH to run commands remotely -- 3.4.4 Configuration File Manager (CFM) -- 3.5 CSM hardware requirements -- 3.5.1 Minimum hardware requirements -- 3.6 Software requirements to run CSM -- 3.6.1 IBM CSM software packages -- 3.6.2 Third party software components -- 3.7 Quick installation process overview -- 3.8 CSM futures -- 3.9 Summary -- Chapter 4. Introducing General Parallel File System for Linux -- 4.1 Introduction to GPFS -- 4.1.1 GPFS terms and definitions -- 4.1.2 What is new in GPFS for Linux Version 1.3 -- 4.1.3 GPFS advantages -- 4.2 GPFS architecture -- 4.2.1 GPFS components -- 4.2.2 GPFS Network Shared Disk considerations -- 4.2.3 GPFS global management functions -- 4.2.4 Disk storage used in GPFS -- 4.2.5 Data and metadata replication capability -- 4.2.6 GPFS and applications -- 4.2.7 Scenario and operation example -- 4.3 GPFS requirements -- 4.3.1 Hardware requirements -- 4.3.2 Software requirements -- 4.4 Summary -- Part 2 Implementation and administration -- Chapter 5. Cluster installation and configuration with CSM -- 5.1 Planning the installation -- 5.1.1 Before you begin -- 5.1.2 Develop a network plan -- 5.1.3 Develop a hardware resources plan -- 5.1.4 Develop a plan to update your hardware -- 5.1.5 Develop your security plan -- 5.1.6 Installation media -- 5.1.7 Documenting the cluster configuration -- 5.2 Configuring the management server -- 5.2.1 Red Hat Linux 7.3 installation -- 5.2.2 Install additional Red Hat Linux 7.3 packages -- 5.2.3 Install Red Hat Linux 7.3 updates.
5.2.4 NTP configuration -- 5.2.5 Fix syslogd -- 5.2.6 Domain Name System (DNS) configuration -- 5.2.7 Install Terminal Server -- 5.2.8 System Management hardware configuration -- 5.2.9 Configuring environment variables -- 5.2.10 Deciding which remote shell protocol to use -- 5.2.11 Installing the CSM core package -- 5.2.12 Running the CSM installms script -- 5.2.13 Install the license -- 5.2.14 Verify the CSM installation on the management node -- 5.3 CSM installation on compute and storage nodes -- 5.3.1 BIOS settings for compute and storage nodes -- 5.3.2 Preparing to run the definenode command -- 5.3.3 Running the definenode script -- 5.3.4 Verify that rpower works -- 5.3.5 Customize the KickStart template (optional) -- 5.3.6 Running the csmsetupks script -- 5.3.7 Running the installnode script -- 5.3.8 Verifying compute and storage node installation -- 5.3.9 Configuring NTP on your compute and storage nodes -- 5.4 Special considerations for storage node installation -- 5.5 Summary -- Chapter 6. Cluster management with CSM -- 6.1 Changing the nodes in your cluster -- 6.1.1 Replacing nodes -- 6.1.2 Adding new nodes using the full installation process -- 6.1.3 Adding new nodes using the CSM only installation process -- 6.1.4 Removing nodes -- 6.1.5 Changing host names of nodes -- 6.2 Remote controlling nodes -- 6.2.1 Power control -- 6.2.2 Console access -- 6.2.3 Node availability monitor -- 6.2.4 Hardware status and management -- 6.3 Node groups -- 6.4 Running commands on the nodes -- 6.4.1 Distributed shell (dsh) -- 6.4.2 Distributed command execution manager (DCEM) -- 6.5 Configuration File Manager (CFM) -- 6.6 Software maintenance system (SMS) -- 6.7 Event monitoring -- 6.7.1 RMC components -- 6.7.2 Activating condition responses -- 6.7.3 Deactivating condition responses -- 6.7.4 Creating your own conditions and responses -- 6.7.5 RMC audit log.
6.8 Backing up CSM -- 6.9 Uninstalling CSM -- Chapter 7. GPFS installation and configuration -- 7.1 Basic steps to install GPFS -- 7.2 GPFS planning -- 7.2.1 Network implementation -- 7.2.2 Documentation -- 7.3 Preparing the environment -- 7.3.1 Nodes preparation -- 7.3.2 Prerequisite software -- 7.3.3 Prepare kernel source file for GPFS and Myrinet adapter -- 7.3.4 Time synchronization -- 7.3.5 Setting the remote command environment -- 7.3.6 Myrinet adapter installation -- 7.3.7 Prepare external storage for GPFS -- 7.3.8 Setting PATH for the GPFS command -- 7.4 GPFS installation -- 7.4.1 Installing the source files -- 7.4.2 Building the GPFS open source portability layer -- 7.5 Creating the GPFS cluster -- 7.5.1 Creating the GPFS nodes descriptor file -- 7.5.2 Defining the GPFS cluster -- 7.6 Creating the GPFS nodeset -- 7.7 Starting GPFS -- 7.8 Disk definitions -- 7.8.1 GPFS nodeset with NSD network attached servers -- 7.8.2 GPFS nodeset with direct attached disks -- 7.9 Exporting a GPFS file system using NFS -- 7.10 GPFS shutdown -- 7.11 Summary -- Chapter 8. Managing the GPFS cluster -- 8.1 Adding and removing disks from GPFS -- 8.1.1 Adding a new disk to an existing GPFS file system -- 8.1.2 Deleting a disk in an active GPFS file system -- 8.1.3 Replacing a failing disk in an existing GPFS file system -- 8.2 Removing all GPFS file systems and configuration -- 8.3 Access Control Lists (ACLs) -- 8.4 GPFS logs and traces -- 8.4.1 GPFS logs -- 8.4.2 Trace facility -- 8.5 Troubleshooting: Some possible GPFS problems -- 8.5.1 Authorization problems -- 8.5.2 Connectivity problems -- 8.5.3 NSD disk problems -- 8.6 Gather information before contacting Support Center -- Chapter 9. Migrating xCat clusters to CSM -- 9.1 xCAT overview -- 9.2 Migrating xCAT clusters to CSM -- 9.2.1 Using xcat2csm -- 9.2.2 Edit the generated files.
9.2.3 Importing the files into CSM -- 9.3 xCAT and CSM co-existence -- Part 3 Appendixes -- Appendix A. SRC and RSCT -- SRC and RSCT components overview -- System Resource Controller (SRC) -- Subsystem components -- Reliable Scalable Cluster Technology (RSCT) -- Topology Services subsystem -- Group Services (GS) subsystem -- Appendix B. Common facilities -- DNS server -- Package description -- DNS installation -- DNS configuration -- Starting the DNS server -- Testing the DNS server -- BIND logging -- Other features -- OpenSSH -- Package description -- OpenSSH authentication methods -- Update the file /etc/hosts -- Key generation for the root user -- Generation of authorized_keys file -- Distribution of the authorized_keys file to the other nodes -- Ensuring all nodes know each other -- Verification of the SSH configuration -- Additional information and trouble shooting -- Appendix C. Migrating to GPFS 1.3 from earlier versions -- Migration steps -- Appendix D. Planning worksheets -- CSM planning worksheets -- Management node TCP/IP attributes worksheets -- Compute node TCP/IP attributes worksheet -- Node attributes worksheets -- GPFS Planning worksheets -- File system descriptions -- Network File Shared descriptions -- Glossary -- Abbreviations and acronyms -- Related publications -- IBM Redbooks -- Other publications -- Online resources -- How to get IBM Redbooks -- Help from IBM -- Index -- Back cover.
Record no. UNINA-9910813344503321
Available at: Univ. Federico II
Managing IBM e-server Cluster 1600 [electronic resource] : power recipes for PSSP 3.4 / [Christian A. Schmidt ... [et al.]]
Edition [1st ed.]
Published/distributed/printed Poughkeepsie, NY : IBM International Technical Support Organization, 2002
Physical description xviii, 278 p. : ill.
Discipline 004/.35
Other authors (Persons) Schmidt, Christian A.
Series Redbooks
Topical subject Parallel computers
IBM RISC System/6000 computers
File organization (Computer science)
Genre/form subject Electronic books.
Format Print material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910454606003321
Available at: Univ. Federico II
Managing IBM e-server Cluster 1600 [electronic resource] : power recipes for PSSP 3.4 / [Christian A. Schmidt ... [et al.]]
Edition [1st ed.]
Published/distributed/printed Poughkeepsie, NY : IBM International Technical Support Organization, 2002
Physical description xviii, 278 p. : ill.
Discipline 004/.35
Other authors (Persons) Schmidt, Christian A.
Series Redbooks
Topical subject Parallel computers
IBM RISC System/6000 computers
File organization (Computer science)
Format Print material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910782028603321
Available at: Univ. Federico II
Managing IBM e-server Cluster 1600 : power recipes for PSSP 3.4 / [Christian A. Schmidt ... [et al.]]
Edition [1st ed.]
Published/distributed/printed Poughkeepsie, NY : IBM International Technical Support Organization, 2002
Physical description xviii, 278 p. : ill.
Discipline 004/.35
Other authors (Persons) Schmidt, Christian A.
Series Redbooks
Topical subject Parallel computers
IBM RISC System/6000 computers
File organization (Computer science)
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Front cover -- Contents -- Figures -- Tables -- Notices -- Trademarks -- Preface -- The team that wrote this redbook -- Special notice -- Comments welcome -- Chapter 1. Introduction -- 1.1 Cluster concepts -- 1.2 Cluster components -- 1.2.1 Hardware -- 1.2.2 Software -- 1.3 Cluster management -- 1.3.1 Benefits of clustering -- Chapter 2. Managing hardware -- 2.1 Overview of hardware changes -- 2.2 Frames -- 2.3 Adding frames to a cluster -- 2.3.1 Adding frames to the cluster -- 2.3.2 Adding nodes to a cluster -- 2.4 Deleting frames/nodes from a cluster -- 2.4.1 Deleting frames -- 2.4.2 Deleting nodes -- 2.4.3 System firmware and microcodes -- 2.5 Accessing hardware information -- 2.5.1 Accessing the node information from Vital Product Data (VPD) -- 2.5.2 Accessing information in SDR -- 2.6 Hardware control -- 2.6.1 Hardware control tools -- 2.6.2 Using the spmon command for hardware control -- 2.6.3 Using the hmcmds command -- 2.6.4 Using Hardware Perspective -- 2.7 Monitoring hardware -- 2.7.1 Monitoring SP frames -- 2.7.2 Monitoring cluster nodes -- 2.7.3 Monitoring SP Switch boards -- 2.7.4 Some words on using Tivoli for cluster management -- Chapter 3. Network Installation Management -- 3.1 Network Installation Management -- 3.2 NIM in the cluster environment -- 3.3 The setup_server script -- 3.3.1 The setup_server script flow -- 3.3.2 The NIM wrappers -- 3.3.3 Creating a NIM client -- 3.3.4 Deleting a NIM client -- 3.4 The lpp_source object -- 3.4.1 Creating new lpp_source -- 3.4.2 Checking the lpp_source object -- 3.4.3 Updating the lpp_source object -- 3.5 The spot object -- 3.5.1 Checking the spot object -- 3.5.2 Checking the SPOT log file -- 3.5.3 Updating the spot object -- 3.5.4 Updating with PTFs -- 3.6 Isolating NIM problems -- 3.6.1 Checking NIM log files -- 3.6.2 Checking NIM configuration files -- 3.6.3 The c_sh_lib file.
3.7 Getting NIM information from ODM -- Chapter 4. Customizing a node in a cluster -- 4.1 Overview -- 4.2 Customization -- 4.2.1 When you need to customize -- 4.2.2 /etc/inittab changes on the CWS -- 4.2.3 /etc/inittab changes on the SP node -- 4.2.4 What happens on the node during normal bootup -- 4.3 Customizing a node with or without rebooting -- 4.4 Isolating problems during node customization -- 4.4.1 The customization script files -- 4.4.2 Isolating problems with pssp_script and psspfb_script -- 4.4.3 Meaning of the three digit codes -- 4.4.4 Hints and tips -- 4.5 Customization scenario -- Chapter 5. Disk configuration -- 5.1 Overview -- 5.2 Physical configuration -- 5.3 Disk terminology -- 5.4 Disk management -- 5.4.1 Selecting an installation disk -- 5.4.2 Installing a node without mirroring -- 5.4.3 Installing a node with mirroring -- 5.4.4 Initiating mirroring on a node already installed without it -- 5.4.5 Discontinuing root volume group mirroring -- 5.4.6 Creating an alternate rootvg -- Chapter 6. Network configuration -- 6.1 SPLAN Ethernet -- 6.1.1 Supported Ethernet adapters and their placement -- 6.1.2 IP label convention -- 6.1.3 Replacing an SP Ethernet adapter on a node -- 6.2 Switch network -- 6.2.1 Benefits of a Switch network -- 6.2.2 SP Switch2 -- 6.2.3 Switch IP network and addressing -- 6.2.4 Setting up the Switch2 -- 6.2.5 Configuring a cluster of switched and unswitched nodes -- 6.3 Other networks -- 6.3.1 Configuring additional adapters -- 6.3.2 Deleting or replacing a network adapter -- Chapter 7. Backup and restore -- 7.1 General backup solutions -- 7.2 How to backup the rootvg volume group -- 7.2.1 Backing up the system on the CWS to tape -- 7.2.2 Backing up the system to the CD-ROM -- 7.3 Backing up the spdata volume group -- 7.4 Node backup -- 7.5 How to verify the backup -- 7.6 How to restore the control workstation.
7.6.1 Restoring the system -- 7.6.2 Restoring the spdatavg -- 7.7 Restoring the cluster node -- 7.8 Managing the PSSP databases -- 7.8.1 Backing up the SDR database -- 7.8.2 Restoring the SDR database -- 7.8.3 Backing up the NIM database -- 7.8.4 Restoring the NIM database -- 7.8.5 Backing up the Kerberos database -- 7.8.6 Restoring the Kerberos database -- Chapter 8. Managing Cluster events -- 8.1 Managing events -- 8.2 Event Management subsystem concepts -- 8.2.1 Resource variables -- 8.2.2 Resource IDs -- 8.2.3 Event expressions -- 8.2.4 Rearm expressions -- 8.3 Security considerations for the Event Perspective -- 8.3.1 How to define new conditions -- 8.3.2 How to define event and rearm event actions -- 8.3.3 How to take action -- 8.4 Using the Event Perspective -- 8.4.1 Starting Event Perspective -- 8.4.2 Viewing an event definition -- 8.4.3 Registering an event definition -- 8.4.4 Checking event notification -- 8.4.5 Checking the rearm event notification -- 8.4.6 Unregistering event definition -- 8.5 Using the haemqvar command -- 8.5.1 Listing resource variables -- 8.5.2 Getting an explanation of resource variable -- 8.5.3 Getting the value of a resource variable -- 8.6 Using the pmandef command -- 8.6.1 The pmandefaults file -- 8.6.2 The pman internal environment variables -- 8.6.3 Subscribing an event -- 8.6.4 Testing events -- 8.6.5 Unsubscribing the event -- Chapter 9. Cluster administration -- 9.1 Node group -- 9.1.1 Starting up nodes by group -- 9.1.2 Shutting down nodes by group -- 9.1.3 Managing nodes using the node group -- 9.1.4 Managing nodes using a working collective -- 9.2 File collection technology -- 9.2.1 Getting information -- 9.2.2 Checking status -- 9.2.3 Checking resident files -- 9.2.4 Checking the file collection server -- 9.2.5 Checking the last updated time and date -- 9.2.6 Updating files managed by file collection.
9.2.7 Changing the update cycle -- 9.2.8 Update sequence -- 9.2.9 Checking log files -- 9.3 Log files -- 9.3.1 Getting authorization -- 9.3.2 Collecting AIX error logs -- 9.3.3 Collecting BSD syslog logs -- 9.3.4 Collecting other logs -- 9.3.5 Eliminating increased log files -- 9.3.6 Monitoring log files -- 9.4 Time synchronization -- 9.4.1 How NTP works -- 9.4.2 Getting information -- 9.4.3 Changing NTP time server -- 9.4.4 Monitoring NTP -- 9.4.5 Changing system time -- 9.5 The automounter -- 9.5.1 Getting information -- 9.5.2 Changing home directory server and path -- 9.5.3 Stop using automounter -- 9.5.4 Checking the logs -- Chapter 10. Cluster security -- 10.1 Authentication and authorization methods -- 10.1.1 Enabling authentication -- 10.1.2 Selecting authorization methods -- 10.1.3 Listing authentication methods -- 10.1.4 Authentication daemons -- 10.1.5 Kerberos authenticated-applications -- 10.2 Configuring the Kerberos system -- 10.2.1 Backing up the Kerberos system -- 10.2.2 Unconfiguring the Kerberos system -- 10.2.3 Restoring the Kerberos system -- 10.3 Enabling restricted root access -- 10.4 Secure remote command process -- 10.4.1 Enabling secure remote command process -- 10.4.2 An example scenario - using openSSH -- 10.5 Firewalled RS/6000 SP system -- Chapter 11. Documentation -- 11.1 PSSP documentation -- 11.2 Man page -- 11.2.1 Installing the SP man page -- 11.2.2 Using the man pages -- 11.3 PDF -- 11.3.1 Installing the PDF files -- 11.3.2 Reading PDF files -- 11.4 HTML files -- 11.4.1 Installing HTML files -- 11.4.2 Reading HTML files -- 11.5 RS/6000 SP Resource center -- 11.5.1 Installing the SP Resource Center -- 11.5.2 Running the Resource Center -- 11.5.3 Reading online documentation -- 11.5.4 Customizing the Resource Center -- Appendix A. SP-specific LED/LCD values -- Other LED/LCD codes -- Related publications.
IBM Redbooks -- Other resources -- Referenced Web sites -- How to get IBM Redbooks -- IBM Redbooks collections -- Abbreviations and acronyms -- Index -- Back cover.
Record no. UNINA-9910809817303321
Available at: Univ. Federico II
Massively Parallel Computing Systems, 1994 Conference
Published/distributed/printed [Place of publication not identified] : IEEE Computer Society Press, 1994
Physical description 1 online resource (xiv, 655 pages) : illustrations
Discipline 004.3
Topical subject Parallel computers
Parallel processing (Electronic computers)
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Architecture -- Communications -- Partitioning & mapping -- MPP, status of the art and future trends: the industry point of view -- Workshop on Parallel and Distributed Spatial Data Structures (P-SPADS) -- Adequacy: algorithms-massively parallel computers -- Image processing -- Algorithms -- Programming paradigms -- Performance & reliability evaluation -- Memory structures.
Record no. UNISA-996198229103316
Available at: Univ. di Salerno
Massively Parallel Computing Systems, 1994 Conference
Published/distributed/printed [Place of publication not identified] : IEEE Computer Society Press, 1994
Physical description 1 online resource (xiv, 655 pages) : illustrations
Discipline 004.3
Topical subject Parallel computers
Parallel processing (Electronic computers)
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Architecture -- Communications -- Partitioning & mapping -- MPP, status of the art and future trends: the industry point of view -- Workshop on Parallel and Distributed Spatial Data Structures (P-SPADS) -- Adequacy: algorithms-massively parallel computers -- Image processing -- Algorithms -- Programming paradigms -- Performance & reliability evaluation -- Memory structures.
Record no. UNINA-9910872744303321
Available at: Univ. Federico II
Network and parallel computing : 19th IFIP WG 10.3 International Conference, NPC 2022, Jinan, China, September 24-25, 2022, proceedings / Shaoshan Liu, Xiaohui Wei, editors
Published/distributed/printed Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (360 pages)
Discipline 004.6
Series Lecture notes in computer science
Topical subject Computer networks
Parallel computers
Parallel processing (Electronic computers)
ISBN 3-031-21395-5
Format Print material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Organization -- Contents -- Architecture -- A Routing-Aware Mapping Method for Dataflow Architectures -- 1 Introduction -- 2 Background and Related Works -- 2.1 Dataflow Architecture -- 2.2 Related Works -- 3 Motivation -- 4 Our Method -- 5 Evaluation -- 5.1 Methodology -- 5.2 Performance Improvement -- 5.3 Energy Saving -- 5.4 Scalability -- 5.5 Compilation Time -- 6 Conclusion -- References -- Optimizing Winograd Convolution on GPUs via Partial Kernel Fusion -- 1 Introduction -- 2 Background -- 2.1 Implementations of Convolution -- 2.2 Winograd Convolution -- 2.3 NVIDIA GPU Architecture and Tensor Cores -- 3 Related Work -- 4 Methodology -- 4.1 Optimizing EWMM Stage -- 4.2 PKF (Partial Kernel Fusion) -- 5 Implementation and Experiment -- 5.1 Implementation PKF on TVM -- 5.2 Experiment -- 6 Conclusion -- References -- Adaptive Low-Cost Loop Expansion for Modulo Scheduling -- 1 Introduction -- 2 Expanded Modulo Scheduling -- 2.1 Data Dependence Graph -- 2.2 Expansion Count and Iteration Interval -- 2.3 Scheduling -- 2.4 Resolving Expansion Faults -- 2.5 Completing the MRT -- 3 Performance Evaluation -- 3.1 Target Architecture -- 3.2 Adaptation of EMS -- 3.3 Experiment Setup -- 3.4 Experiment Results -- 4 Conclusion -- References -- SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning -- 1 Introduction -- 2 Background -- 2.1 Dataflows in the Systolic Array -- 2.2 Sparsity -- 3 SADD Architecture -- 3.1 Group-Structure-Maintained Compression -- 3.2 The SIS and SWS -- 3.3 The Performance of SIS and SWS with Different GEMM Sizes -- 3.4 The SDD and SADD -- 4 Experimental Results -- 4.1 Experimental Setup -- 4.2 Performance Comparison of Different Dataflows -- 4.3 Comparison of the SADD and the TPU -- 4.4 Scalability Analysis -- 4.5 Hardware Cost Analysis -- 5 Related Work -- 6 Conclusion.
References -- CSR&RV: An Efficient Value Compression Format for Sparse Matrix-Vector Multiplication -- 1 Introduction -- 2 The Compressed Sparse Row and Repetition Value Format -- 2.1 CSR&RV Representation -- 2.2 SpMV Algorithm -- 3 Experimental Results -- 3.1 Performance Comparison -- 3.2 Memory Overhead -- 3.3 Pre-processing -- 4 Conclusion -- References -- Rgs-SpMM: Accelerate Sparse Matrix-Matrix Multiplication by Row Group Splitting Strategy on the GPU -- 1 Introduction -- 2 Related Work and Motivation -- 3 Rgs-SpMM Design -- 3.1 Data Organization in Rgs-SpMM -- 3.2 Row Group Splitting -- 4 Experiment Evaluations -- 4.1 Overall Performance -- 4.2 Analysis of Results -- 5 Conclusion -- References -- Cloud Computing -- Interference-aware Workload Scheduling in Co-located Data Centers -- 1 Introduction -- 2 Related Work -- 3 Interference-aware Solution -- 3.1 Performance Interference Metric -- 3.2 Performance Interference Model Based on Linear Regression -- 3.3 Interference-aware Workload Scheduling -- 4 Experiment and Evaluation -- 4.1 Prediction Accuracy of Performance Interference Models -- 4.2 Evaluation of Scheduling Strategies on Throughput -- 5 Conclusion -- References -- FaaSPipe: Fast Serverless Workflows on Distributed Shared Memory -- 1 Introduction -- 2 Related Work -- 3 Design and Implementation -- 3.1 The PipeFunc Programming Model -- 3.2 System Architecture of FaaSPipe -- 3.3 Intra-workflow Memory Sharing -- 3.4 Full-Duplex Memory Transfer -- 4 Evaluation -- 4.1 Distributed Word Count -- 4.2 LightGBM -- 4.3 Efficiency of FaaSPipe vs. Faasm -- 5 Conclusion -- References -- TopKmer: Parallel High Frequency K-mer Counting on Distributed Memory -- 1 Introduction -- 2 Background -- 2.1 Parallel K-mer Counting -- 2.2 Heavy Hitters -- 3 TopKmer Counter -- 3.1 Multi-layer Hash Table -- 3.2 Insert -- 3.3 Query.
4 Parallel K-Mer Counting Framework -- 5 Results -- 5.1 Experiment Setup -- 5.2 Quality of Counting -- 5.3 Performance Comparison -- 5.4 Scaling Capability -- 5.5 Time Consumption Analysis -- 6 Conclusion -- References -- Flexible Supervision System: A Fast Fault-Tolerance Strategy for Cloud Applications in Cloud-Edge Collaborative Environments -- 1 Introduction -- 2 Flexible Supervision System Architecture -- 3 Fault Detection and Fault-Tolerance Strategy -- 4 Experimental Evaluation -- 5 Related Work -- 6 Conclusions and Future Work -- References -- Adjust: An Online Resource Adjustment Framework for Microservice Programs -- 1 Introduction -- 2 A QoS Awareness Framework for Microservices -- 2.1 Microservice Analyzer (MSA) -- 2.2 Microservice Prediction Model (MSPM) -- 2.3 Microservice Performance Guarantor (MSPG) -- 3 Evaluation -- 3.1 Performance Guarantee -- 3.2 Resource Re-collection -- 4 Conclusion -- References -- Cloud-Native Server Consolidation for Energy-Efficient FaaS Deployment -- 1 Introduction -- 2 Key Design Considerations -- 3 DAC Design -- 3.1 System Overview -- 3.2 Function Classifier -- 3.3 Consolidation Controller -- 4 Evaluation -- 4.1 Methodologies -- 4.2 Evaluation Results -- 5 Conclusion -- References -- Deep Learning -- NeuProMa: A Toolchain for Mapping Large-Scale Spiking Convolutional Neural Networks onto Neuromorphic Processor -- 1 Introduction -- 2 Background -- 2.1 Neuromorphic Processor -- 2.2 Spiking Convolutional Neural Network -- 3 Related Work -- 4 NeuProMa -- 4.1 Splitting -- 4.2 Partitioning -- 4.3 Mapping -- 5 Experiment Setup -- 5.1 Experiment Platform -- 5.2 Evaluated SCNNs -- 6 Experiment Results -- 6.1 Splitting Performance -- 6.2 Partitioning and Mapping Performance -- 7 Conclusion -- References -- Multi-clusters: An Efficient Design Paradigm of NN Accelerator Architecture Based on FPGA -- 1 Introduction.
2 Background -- 2.1 Design Patterns of Accelerator -- 2.2 Related Work -- 3 Overall Method -- 3.1 Division Method -- 3.2 Architecture Design -- 3.3 Design Space Exploration -- 3.4 Scheduling Strategy -- 4 Experiment -- 4.1 Experiment Setup -- 4.2 Comparison with CPU and GPU -- 4.3 Comparison with Previous FPGA Accelerators -- 5 Conclusion -- References -- TrainFlow: A Lightweight, Programmable ML Training Framework via Serverless Paradigm -- 1 Introduction -- 2 Background and Challenges -- 2.1 Distributed ML Training -- 2.2 Challenges -- 3 TrainFlow Design -- 3.1 Overview -- 3.2 Serverless Process Model and Training Basics -- 3.3 Programmability Extension with Event-Driven Hook -- 4 Implementation -- 5 Evaluation -- 5.1 Availability -- 5.2 Programmability -- 6 Related Work -- 6.1 Classic ML Training -- 6.2 Serverless ML Training -- 7 Conclusion -- References -- DRP:Discrete Rank Pruning for Neural Network -- 1 Introduction -- 2 Related Work -- 2.1 Compression Techniques -- 2.2 Sparse Method -- 2.3 Structured Pruning -- 3 Consideration Bias Sparsity -- 4 Discrete Rank Pruning -- 5 Experiment and Evaluation -- 5.1 Datasets and Network Models -- 5.2 Implementation -- 5.3 Results and Analysis on CBS -- 5.4 Results and Analysis on DRP -- 6 Conclusion -- References -- TransMigrator: A Transformer-Based Predictive Page Migration Mechanism for Heterogeneous Memory -- 1 Introduction -- 2 TransMigrator -- 2.1 Design of Neural Network -- 2.2 Page Migration -- 3 Evaluation and Analysis -- 3.1 Trace Collection -- 3.2 Network Training -- 3.3 Migration Simulation -- 3.4 Access Time -- 3.5 Energy Consumption -- 3.6 Network Overhead -- 4 Related Work -- 5 Conclusion -- References -- Hardware Acceleration for 1D-CNN Based Real-Time Edge Computing -- 1 Introduction -- 2 Background -- 2.1 State-of-the-Art CNN Accelerators -- 2.2 CNN in Real-Time Computing.
3 Proposed Architecture for 1D-CNN -- 3.1 Data Reuse -- 3.2 Accelerated 1D-CNN Architecture -- 3.3 Compiler for 1D-CNN Architecture Generation -- 4 Results -- 4.1 Setup -- 4.2 Evaluations of Power, Latency and Bandwidth -- 4.3 Comparative Analysis -- 5 Conclusion -- References -- Emerging Applications -- DLC: An Optimization Framework for Full-State Quantum Simulation -- 1 Introduction -- 2 Background and Related Work -- 2.1 Quantum States and Quantum Circuits -- 2.2 Full-State Quantum Simulator -- 2.3 Related Work -- 3 Framework Overview -- 4 CPU-GPU Locality Enhancement -- 4.1 Data Dependency Analysis -- 4.2 CPU-GPU Locality Enhancement -- 5 Communication Optimization Among Multi-GPU -- 5.1 Challenges of Multi-GPU -- 5.2 Communication Scheme -- 5.3 Optimization of Communication -- 6 Performance Evaluation -- 6.1 Environment Setup -- 6.2 Performance on Single Node -- 6.3 Performance on Multiple Nodes -- 7 Conclusion -- References -- Approximation Algorithms for Reliability-Aware Maximum VoI on AUV-Aided Data Collections -- 1 Introduction -- 2 Related Works -- 2.1 AUV-Aided Data Collection -- 2.2 Orienteering Problem and Variants -- 3 System Model and Problem Definition -- 3.1 System Model -- 3.2 Problem Definition -- 4 Approximation Algorithm for the Path Finding Problem -- 4.1 Approximation Algorithm for the Path Finding Problem Without Real-Time VoI Decay -- 4.2 Approximation Algorithm for the Path Finding Problem with Real-Time VoI Decay -- 5 Simulation and Performance Evaluation -- 6 Conclusion -- References -- CCSBD: A Cost Control System Based on Blockchain and DRG Mechanism -- 1 Introduction -- 2 Related Work -- 3 System Design -- 3.1 System Overview -- 3.2 Medical Evidence-Based Classification Model -- 3.3 Contract Strategy for Clinical Data Sharing -- 4 Evaluation -- 5 Discussion -- 6 Conclusion -- References.
Number of UAVs and Mission Completion Time Minimization in Multi-UAV-Enabled IoT Networks.
Record Nr. UNINA-9910633930803321
Cham, Switzerland : Springer, [2022]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
Network and parallel computing : 19th IFIP WG 10.3 International Conference, NPC 2022, Jinan, China, September 24-25, 2022 proceedings. / / Shaoshan Liu, Xiaohui Wei, editors
Pubbl/distr/stampa Cham, Switzerland : Springer, [2022]
Descrizione fisica 1 online resource (360 pages)
Disciplina 004.6
Collana Lecture notes in computer science
Soggetto topico Computer networks
Parallel computers
Parallel processing (Electronic computers)
ISBN 3-031-21395-5
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Organization -- Contents -- Architecture -- A Routing-Aware Mapping Method for Dataflow Architectures -- 1 Introduction -- 2 Background and Related Works -- 2.1 Dataflow Architecture -- 2.2 Related Works -- 3 Motivation -- 4 Our Method -- 5 Evaluation -- 5.1 Methodology -- 5.2 Performance Improvement -- 5.3 Energy Saving -- 5.4 Scalability -- 5.5 Compilation Time -- 6 Conclusion -- References -- Optimizing Winograd Convolution on GPUs via Partial Kernel Fusion -- 1 Introduction -- 2 Background -- 2.1 Implementations of Convolution -- 2.2 Winograd Convolution -- 2.3 NVIDIA GPU Architecture and Tensor Cores -- 3 Related Work -- 4 Methodology -- 4.1 Optimizing EWMM Stage -- 4.2 PKF (Partial Kernel Fusion) -- 5 Implementation and Experiment -- 5.1 Implementation PKF on TVM -- 5.2 Experiment -- 6 Conclusion -- References -- Adaptive Low-Cost Loop Expansion for Modulo Scheduling -- 1 Introduction -- 2 Expanded Modulo Scheduling -- 2.1 Data Dependence Graph -- 2.2 Expansion Count and Iteration Interval -- 2.3 Scheduling -- 2.4 Resolving Expansion Faults -- 2.5 Completing the MRT -- 3 Performance Evaluation -- 3.1 Target Architecture -- 3.2 Adaptation of EMS -- 3.3 Experiment Setup -- 3.4 Experiment Results -- 4 Conclusion -- References -- SADD: A Novel Systolic Array Accelerator with Dynamic Dataflow for Sparse GEMM in Deep Learning -- 1 Introduction -- 2 Background -- 2.1 Dataflows in the Systolic Array -- 2.2 Sparsity -- 3 SADD Architecture -- 3.1 Group-Structure-Maintained Compression -- 3.2 The SIS and SWS -- 3.3 The Performance of SIS and SWS with Different GEMM Sizes -- 3.4 The SDD and SADD -- 4 Experimental Results -- 4.1 Experimental Setup -- 4.2 Performance Comparison of Different Dataflows -- 4.3 Comparison of the SADD and the TPU -- 4.4 Scalability Analysis -- 4.5 Hardware Cost Analysis -- 5 Related Work -- 6 Conclusion.
References -- CSR&RV: An Efficient Value Compression Format for Sparse Matrix-Vector Multiplication -- 1 Introduction -- 2 The Compressed Sparse Row and Repetition Value Format -- 2.1 CSR&RV Representation -- 2.2 SpMV Algorithm -- 3 Experimental Results -- 3.1 Performance Comparison -- 3.2 Memory Overhead -- 3.3 Pre-processing -- 4 Conclusion -- References -- Rgs-SpMM: Accelerate Sparse Matrix-Matrix Multiplication by Row Group Splitting Strategy on the GPU -- 1 Introduction -- 2 Related Work and Motivation -- 3 Rgs-SpMM Design -- 3.1 Data Organization in Rgs-SpMM -- 3.2 Row Group Splitting -- 4 Experiment Evaluations -- 4.1 Overall Performance -- 4.2 Analysis of Results -- 5 Conclusion -- References -- Cloud Computing -- Interference-aware Workload Scheduling in Co-located Data Centers -- 1 Introduction -- 2 Related Work -- 3 Interference-aware Solution -- 3.1 Performance Interference Metric -- 3.2 Performance Interference Model Based on Linear Regression -- 3.3 Interference-aware Workload Scheduling -- 4 Experiment and Evaluation -- 4.1 Prediction Accuracy of Performance Interference Models -- 4.2 Evaluation of Scheduling Strategies on Throughput -- 5 Conclusion -- References -- FaaSPipe: Fast Serverless Workflows on Distributed Shared Memory -- 1 Introduction -- 2 Related Work -- 3 Design and Implementation -- 3.1 The PipeFunc Programming Model -- 3.2 System Architecture of FaaSPipe -- 3.3 Intra-workflow Memory Sharing -- 3.4 Full-Duplex Memory Transfer -- 4 Evaluation -- 4.1 Distributed Word Count -- 4.2 LightGBM -- 4.3 Efficiency of FaaSPipe vs. Faasm -- 5 Conclusion -- References -- TopKmer: Parallel High Frequency K-mer Counting on Distributed Memory -- 1 Introduction -- 2 Background -- 2.1 Parallel K-mer Counting -- 2.2 Heavy Hitters -- 3 TopKmer Counter -- 3.1 Multi-layer Hash Table -- 3.2 Insert -- 3.3 Query.
4 Parallel K-Mer Counting Framework -- 5 Results -- 5.1 Experiment Setup -- 5.2 Quality of Counting -- 5.3 Performance Comparison -- 5.4 Scaling Capability -- 5.5 Time Consumption Analysis -- 6 Conclusion -- References -- Flexible Supervision System: A Fast Fault-Tolerance Strategy for Cloud Applications in Cloud-Edge Collaborative Environments -- 1 Introduction -- 2 Flexible Supervision System Architecture -- 3 Fault Detection and Fault-Tolerance Strategy -- 4 Experimental Evaluation -- 5 Related Work -- 6 Conclusions and Future Work -- References -- Adjust: An Online Resource Adjustment Framework for Microservice Programs -- 1 Introduction -- 2 A QoS Awareness Framework for Microservices -- 2.1 Microservice Analyzer (MSA) -- 2.2 Microservice Prediction Model (MSPM) -- 2.3 Microservice Performance Guarantor (MSPG) -- 3 Evaluation -- 3.1 Performance Guarantee -- 3.2 Resource Re-collection -- 4 Conclusion -- References -- Cloud-Native Server Consolidation for Energy-Efficient FaaS Deployment -- 1 Introduction -- 2 Key Design Considerations -- 3 DAC Design -- 3.1 System Overview -- 3.2 Function Classifier -- 3.3 Consolidation Controller -- 4 Evaluation -- 4.1 Methodologies -- 4.2 Evaluation Results -- 5 Conclusion -- References -- Deep Learning -- NeuProMa: A Toolchain for Mapping Large-Scale Spiking Convolutional Neural Networks onto Neuromorphic Processor -- 1 Introduction -- 2 Background -- 2.1 Neuromorphic Processor -- 2.2 Spiking Convolutional Neural Network -- 3 Related Work -- 4 NeuProMa -- 4.1 Splitting -- 4.2 Partitioning -- 4.3 Mapping -- 5 Experiment Setup -- 5.1 Experiment Platform -- 5.2 Evaluated SCNNs -- 6 Experiment Results -- 6.1 Splitting Performance -- 6.2 Partitioning and Mapping Performance -- 7 Conclusion -- References -- Multi-clusters: An Efficient Design Paradigm of NN Accelerator Architecture Based on FPGA -- 1 Introduction.
2 Background -- 2.1 Design Patterns of Accelerator -- 2.2 Related Work -- 3 Overall Method -- 3.1 Division Method -- 3.2 Architecture Design -- 3.3 Design Space Exploration -- 3.4 Scheduling Strategy -- 4 Experiment -- 4.1 Experiment Setup -- 4.2 Comparison with CPU and GPU -- 4.3 Comparison with Previous FPGA Accelerators -- 5 Conclusion -- References -- TrainFlow: A Lightweight, Programmable ML Training Framework via Serverless Paradigm -- 1 Introduction -- 2 Background and Challenges -- 2.1 Distributed ML Training -- 2.2 Challenges -- 3 TrainFlow Design -- 3.1 Overview -- 3.2 Serverless Process Model and Training Basics -- 3.3 Programmability Extension with Event-Driven Hook -- 4 Implementation -- 5 Evaluation -- 5.1 Availability -- 5.2 Programmability -- 6 Related Work -- 6.1 Classic ML Training -- 6.2 Serverless ML Training -- 7 Conclusion -- References -- DRP: Discrete Rank Pruning for Neural Network -- 1 Introduction -- 2 Related Work -- 2.1 Compression Techniques -- 2.2 Sparse Method -- 2.3 Structured Pruning -- 3 Consideration Bias Sparsity -- 4 Discrete Rank Pruning -- 5 Experiment and Evaluation -- 5.1 Datasets and Network Models -- 5.2 Implementation -- 5.3 Results and Analysis on CBS -- 5.4 Results and Analysis on DRP -- 6 Conclusion -- References -- TransMigrator: A Transformer-Based Predictive Page Migration Mechanism for Heterogeneous Memory -- 1 Introduction -- 2 TransMigrator -- 2.1 Design of Neural Network -- 2.2 Page Migration -- 3 Evaluation and Analysis -- 3.1 Trace Collection -- 3.2 Network Training -- 3.3 Migration Simulation -- 3.4 Access Time -- 3.5 Energy Consumption -- 3.6 Network Overhead -- 4 Related Work -- 5 Conclusion -- References -- Hardware Acceleration for 1D-CNN Based Real-Time Edge Computing -- 1 Introduction -- 2 Background -- 2.1 State-of-the-Art CNN Accelerators -- 2.2 CNN in Real-Time Computing.
3 Proposed Architecture for 1D-CNN -- 3.1 Data Reuse -- 3.2 Accelerated 1D-CNN Architecture -- 3.3 Compiler for 1D-CNN Architecture Generation -- 4 Results -- 4.1 Setup -- 4.2 Evaluations of Power, Latency and Bandwidth -- 4.3 Comparative Analysis -- 5 Conclusion -- References -- Emerging Applications -- DLC: An Optimization Framework for Full-State Quantum Simulation -- 1 Introduction -- 2 Background and Related Work -- 2.1 Quantum States and Quantum Circuits -- 2.2 Full-State Quantum Simulator -- 2.3 Related Work -- 3 Framework Overview -- 4 CPU-GPU Locality Enhancement -- 4.1 Data Dependency Analysis -- 4.2 CPU-GPU Locality Enhancement -- 5 Communication Optimization Among Multi-GPU -- 5.1 Challenges of Multi-GPU -- 5.2 Communication Scheme -- 5.3 Optimization of Communication -- 6 Performance Evaluation -- 6.1 Environment Setup -- 6.2 Performance on Single Node -- 6.3 Performance on Multiple Nodes -- 7 Conclusion -- References -- Approximation Algorithms for Reliability-Aware Maximum VoI on AUV-Aided Data Collections -- 1 Introduction -- 2 Related Works -- 2.1 AUV-Aided Data Collection -- 2.2 Orienteering Problem and Variants -- 3 System Model and Problem Definition -- 3.1 System Model -- 3.2 Problem Definition -- 4 Approximation Algorithm for the Path Finding Problem -- 4.1 Approximation Algorithm for the Path Finding Problem Without Real-Time VoI Decay -- 4.2 Approximation Algorithm for the Path Finding Problem with Real-Time VoI Decay -- 5 Simulation and Performance Evaluation -- 6 Conclusion -- References -- CCSBD: A Cost Control System Based on Blockchain and DRG Mechanism -- 1 Introduction -- 2 Related Work -- 3 System Design -- 3.1 System Overview -- 3.2 Medical Evidence-Based Classification Model -- 3.3 Contract Strategy for Clinical Data Sharing -- 4 Evaluation -- 5 Discussion -- 6 Conclusion -- References.
Number of UAVs and Mission Completion Time Minimization in Multi-UAV-Enabled IoT Networks.
Record Nr. UNISA-996500061303316
Cham, Switzerland : Springer, [2022]
Materiale a stampa
Lo trovi qui: Univ. di Salerno
Opac: Controlla la disponibilità qui
PACT '10 : proceedings of the Nineteenth International Conference on Parallel Architectures and Compilation Techniques : September 11-15, 2010, Vienna, Austria
Autore Salapura Valentina
Pubbl/distr/stampa [Place of publication not identified] : Association for Computing Machinery, 2010
Descrizione fisica 1 online resource (596 p.)
Disciplina 004/.35
Collana ACM Conferences
Soggetto topico Parallel computers
Computer architecture
Parallel processing (Electronic computers)
Compiling (Electronic computers)
Compilers (Computer programs)
Engineering & Applied Sciences
Computer Science
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Altri titoli varianti PACT '10
Record Nr. UNINA-9910138611403321
Salapura Valentina  
[Place of publication not identified] : Association for Computing Machinery, 2010
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui
PACT '10 : proceedings of the Nineteenth International Conference on Parallel Architectures and Compilation Techniques : September 11-15, 2010, Vienna, Austria
Autore Salapura Valentina
Pubbl/distr/stampa [Place of publication not identified] : Association for Computing Machinery, 2010
Descrizione fisica 1 online resource (596 p.)
Disciplina 004/.35
Collana ACM Conferences
Soggetto topico Parallel computers
Computer architecture
Parallel processing (Electronic computers)
Compiling (Electronic computers)
Compilers (Computer programs)
Engineering & Applied Sciences
Computer Science
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Altri titoli varianti PACT '10
Record Nr. UNISA-996279845103316
Salapura Valentina  
[Place of publication not identified] : Association for Computing Machinery, 2010
Materiale a stampa
Lo trovi qui: Univ. di Salerno
Opac: Controlla la disponibilità qui
