ABSTRACT

Sequential programs are time-consuming to execute. Today, multi-core architectures have major advantages over single-core architectures. According to Moore's law, the number of transistors in a dense integrated circuit doubles about every two years. However, because of factors such as power dissipation, limited scalability, and reliability, the performance scaling of single-core architectures has slowed; hence there is a need for multi-core processors. Multi-core processors have paved the way to increase application performance through the benefits of parallelization. Parallelization is breaking a problem into independent parts so that each processing element can execute its part of the algorithm concurrently with the others. This paper surveys the different methods used in parallelization and analyses them.

1. INTRODUCTION

Parallel programming is not a new concept. There are several ways to parallelize a program containing input/output operations, but one of the most important challenges is to parallelize loops. The goal is to use the resources of a multiprocessor system efficiently without entirely rewriting the code. This process is called parallelization. Depending on the operations and data dependencies involved, a piece of code can be parallel in three ways:

• Control parallelism
• Data parallelism
• Functional parallelism
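To make the loop-parallelization challenge above concrete, here is a minimal sketch of data parallelism in C using an OpenMP parallel-for directive. The array size N, the scale() function, and the choice of OpenMP are illustrative assumptions made for this sketch, not methods taken from the surveyed work.

    #include <stdio.h>

    #define N 1000000

    /* Hypothetical per-element operation, used only for illustration. */
    static double scale(double x) {
        return 2.0 * x + 1.0;
    }

    int main(void) {
        static double a[N], b[N];

        /* Every iteration is independent of the others, so OpenMP can
           distribute the iterations across the available cores. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            b[i] = scale(a[i]);
        }

        printf("b[0] = %f\n", b[0]);
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the iterations of the loop are spread across the cores. Because no iteration depends on another, this is an instance of data parallelism; control and functional parallelism instead distribute different operations or different functions across processing elements.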

Parallel computing is a form of computation in which many calculations are carried out simultaneously. It works on the principle that large problems can very often be divided into smaller ones, which can then be solved at the same time. Parallel processing has been used for many years, mainly in high-performance computing. Parallel programming is good practice for solving computationally intensive problems in various fields; in operations research, for instance, solving maximization problems with the simplex method is an area where parallel algorithms are being developed. The primary reasons for using parallel computing are:

• Decreased execution time
• Lower memory utilization
• The ability to solve computations over large volumes of data
• Support for concurrency
• The ability to take advantage of non-local resources

Multi-core processors offer explicit support for executing multiple threads in parallel and thus reduce idle time. The main factor motivating the design of parallel algorithms for multi-core systems is performance.
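As a small illustration of this point, the following sketch creates two POSIX threads that a multi-core processor can run at the same time. The worker() function, the fixed thread count of two, and the artificial workload are assumptions made only for this example.

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical workload: each thread independently computes a sum. */
    static void *worker(void *arg) {
        long id = (long)arg;
        long sum = 0;
        for (long i = 0; i < 100000000; i++) {
            sum += i % (id + 2);
        }
        printf("thread %ld finished, sum = %ld\n", id, sum);
        return NULL;
    }

    int main(void) {
        pthread_t threads[2];

        /* On a multi-core processor both threads can execute at the same
           time, so neither core sits idle while the other works. */
        for (long t = 0; t < 2; t++) {
            pthread_create(&threads[t], NULL, worker, (void *)t);
        }
        for (int t = 0; t < 2; t++) {
            pthread_join(threads[t], NULL);
        }
        return 0;
    }

Built with gcc -pthread, the two threads can finish in roughly the time one of them would take alone when two cores are free, which is the performance motivation described above.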
