High Performance Computing By Parallel Processing

Parallel processing, the method of having many small tasks solve one large problem, has emerged as a key enabling technology in modern computing. In recent years the number of transistors in microprocessors and other hardware components has increased dramatically, but the cost of building such high-end machines has risen as well. Parallel processing has therefore been widely adopted both for high-performance scientific computing and for more general-purpose applications that demand higher performance, lower cost, and sustained productivity. There are many ways of achieving parallel processing; the most economical and most readily feasible is distributed computing. In the simplest sense, this is the simultaneous use of multiple compute resources to solve a single computational problem (a short sketch in C follows the list below):

• To be run using multiple CPUs
• A problem is broken into discrete parts that can be solved concurrently
• Each part is further broken down to a series of instructions
• Instructions from each part execute simultaneously on different CPUs
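To make this breakdown concrete, the following is a minimal C sketch (not from the original text) that splits an array sum into discrete parts, each computed concurrently in its own thread; the problem size, the data, and the number of parts are illustrative.

#include <pthread.h>
#include <stdio.h>

#define NPARTS 4          /* number of discrete parts / CPUs (illustrative) */
#define N      1000000    /* total problem size (illustrative)              */

static double data[N];
static double partial[NPARTS];

/* Each thread solves one discrete part of the problem concurrently. */
static void *sum_part(void *arg)
{
    long id = (long)arg;
    long lo = id * (N / NPARTS);
    long hi = (id == NPARTS - 1) ? N : lo + (N / NPARTS);
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void)
{
    pthread_t tid[NPARTS];

    for (long i = 0; i < N; i++)            /* fill the problem with sample data */
        data[i] = 1.0;

    /* Break the problem into parts and run them simultaneously. */
    for (long id = 0; id < NPARTS; id++)
        pthread_create(&tid[id], NULL, sum_part, (void *)id);

    double total = 0.0;
    for (long id = 0; id < NPARTS; id++) {  /* wait for each part and combine */
        pthread_join(tid[id], NULL);
        total += partial[id];
    }
    printf("total = %f\n", total);
    return 0;
}

Compiled with a pthreads-aware compiler (e.g. cc -pthread), each thread corresponds to one "discrete part" in the list above.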

Assigning each CPU a concurrent part of the problem can reduce execution time substantially. The achievable gain is described by Amdahl's Law, which states that the potential program speedup is determined by the fraction of code (P) that can be parallelized:

    speedup = 1 / (1 - P)
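As a quick arithmetic check of this formula (a hypothetical sketch, not part of the original text), the fragment below evaluates the speedup for a few illustrative values of P; note that P = 1 would divide by zero, which corresponds to the "infinite in theory" case listed next.

#include <stdio.h>

/* Amdahl's Law: speedup = 1 / (1 - P), where P is the parallelizable fraction. */
static double amdahl_speedup(double p)
{
    return 1.0 / (1.0 - p);
}

int main(void)
{
    double cases[] = { 0.0, 0.5, 0.9 };   /* illustrative values of P */
    for (int i = 0; i < 3; i++)
        printf("P = %.1f  ->  speedup = %.1f\n", cases[i], amdahl_speedup(cases[i]));
    /* prints: P = 0.0 -> 1.0, P = 0.5 -> 2.0, P = 0.9 -> 10.0 */
    return 0;
}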

• If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).
• If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).
• If 50% of the code can be parallelized, the maximum speedup = 2, meaning the code will run twice as fast.

Thus, by parallelizing the concurrent parts of a program, execution can be sped up significantly on the same computer configuration. The processing of the instructions is commonly managed by the Parallel Virtual Machine (PVM), a software tool for parallel networking of computers. It is designed to let a network of heterogeneous Unix/Windows machines act as a single distributed parallel processor: a message-passing interface synchronizes the data processing on the individual computers that pair up, forming a far more economical "supercomputer".
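To illustrate the message-passing style described above, here is a rough master/worker sketch assuming the classic PVM 3 C API (pvm3.h, pvm_spawn, pvm_send, pvm_recv); the spawned task name, the message tags, and the worker's "computation" are illustrative placeholders, and error handling is omitted.

#include <stdio.h>
#include "pvm3.h"

#define MSG_WORK   1   /* illustrative message tags */
#define MSG_RESULT 2
#define NWORKERS   4

int main(void)
{
    int tids[NWORKERS];

    pvm_mytid();                             /* enroll this process in the virtual machine */

    if (pvm_parent() == PvmNoParent) {
        /* Master: spawn worker copies of this program across the virtual machine.
           "worker_task" stands for the name of this same executable. */
        pvm_spawn("worker_task", NULL, PvmTaskDefault, "", NWORKERS, tids);

        for (int i = 0; i < NWORKERS; i++) { /* send each worker its part of the problem */
            int part = i;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&part, 1, 1);
            pvm_send(tids[i], MSG_WORK);
        }
        for (int i = 0; i < NWORKERS; i++) { /* collect the partial results */
            int result;
            pvm_recv(-1, MSG_RESULT);
            pvm_upkint(&result, 1, 1);
            printf("got result %d\n", result);
        }
    } else {
        /* Worker: receive a part, compute on it, send the answer back to the master. */
        int part, result;
        pvm_recv(pvm_parent(), MSG_WORK);
        pvm_upkint(&part, 1, 1);
        result = part * part;                /* stand-in for the real computation */
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&result, 1, 1);
        pvm_send(pvm_parent(), MSG_RESULT);
    }

    pvm_exit();                              /* leave the virtual machine */
    return 0;
}

In this pattern every machine runs the same executable: the copy with no PVM parent acts as the master, and the spawned copies act as workers, which is how the paired-up computers behave as a single distributed parallel processor.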

RAHUL R., 4th Sem, Computer Science, 1MV07CS079
