HIGH PERFORMANCE COMPUTING

Technical Seminar Report on

"High Performance Computing"

Under the guidance of
Mr. R.K. Shial

Submitted by
JYOTIMAN PRUSTY
CS200118022


INTRODUCTION

• A computing environment or computer hardware system built for massive, fast computation.
• Sometimes referred to as supercomputing, or a supercomputer.
• Goal: the maximum amount of computation in the minimum amount of time.
• Used in scientific and engineering fields.
• Performance above 1 gigaflop.
• High cost.


PARALLEL COMPUTING


A collection of processing elements that communicate and cooperate to solve large problems more quickly than a single processor can.

TYPES OF PARALLELISM

 Overt
• Parallelism is visible to the programmer.
• May be difficult to program correctly.
• Large improvements in performance.

 Covert
• Parallelism is not visible to the programmer.
• The compiler is responsible for parallelism.
• Easy to program.
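Overt parallelism can be sketched in a few lines with Python's standard-library multiprocessing module: the programmer explicitly splits the work into chunks and assigns them to worker processes. The function names and the chunking scheme below are illustrative, not from the slides.

```python
# Overt (explicit) parallelism: the programmer divides the work by hand.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- the work assigned to one process."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Explicitly split [0, n) into one chunk per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the sequential sum(range(1_000_000)),
    # but the chunks are computed by separate processes.
    print(parallel_sum(1_000_000))
```

The explicit chunking is exactly what makes this "overt": getting the bounds and the reduction right is the programmer's burden, which is also why overt parallelism is harder to program correctly.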


PROCESSORS

GENERAL CLASSIFICATION

 Vector
• Large rows of data are operated on simultaneously.

 Scalar
• Data is operated on in a sequential fashion.

ARCHITECTURAL CLASSIFICATIONS

 SISD Machines
• Conventional single-processor computers.
• The various circuits of the CPU are split into functional units arranged in a pipeline.
• Each functional unit operates on the result of the previous functional unit during a clock cycle.
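The vector/scalar distinction can be illustrated as two programming styles. Plain Python lists do not run on vector hardware, so this is only a sketch of the models, not real vectorization: the scalar version touches one element per step, while the vector version expresses the whole-row operation as a single step (which vector lanes would execute simultaneously).

```python
# Illustration of the two models (not actual hardware vectorization).

def scalar_add(a, b):
    # Scalar style: one element per "cycle", strictly in sequence.
    out = []
    for i in range(len(a)):
        out.append(a[i] + b[i])
    return out

def vector_add(a, b):
    # Vector style: the whole-row operation is expressed as one step;
    # on vector hardware the lanes would operate simultaneously.
    return [x + y for x, y in zip(a, b)]

assert scalar_add([1, 2, 3], [4, 5, 6]) == vector_add([1, 2, 3], [4, 5, 6])
```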


PROCESSORS (cont.)

 SIMD Machines
• A single control CPU directs a large collection of subordinate processors.
• Each processor has its own local memory.
• Each processor runs the same program.
• Each processor processes a different data stream.
• Each subordinate either executes the current instruction or sits idle.


PROCESSORS (cont.)

 MISD Machines
• Multiple instruction streams act on a single stream of data.
• No practical machine in this class has been constructed.

 MIMD Machines
• Multiple processors, each with its own or shared memory.
• Each processor can run the same or a different program.
• Each processor processes a different data stream.
• Processors can work synchronously or asynchronously.
• Processors can be either tightly or loosely coupled.
• There are shared-memory systems and distributed-memory systems.
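A minimal MIMD sketch, again using standard-library multiprocessing as a stand-in for real multiprocessor hardware: two independent processes run different programs on different data streams and are loosely coupled only through a queue. All names here are illustrative.

```python
# MIMD sketch: different programs, different data streams, loose coupling.
from multiprocessing import Process, Queue

def summed(data):
    return ("sum", sum(data))   # one "program": total of its stream

def maxed(data):
    return ("max", max(data))   # a different "program": max of its stream

def run(fn, data, q):
    q.put(fn(data))             # report the result back to the parent

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=run, args=(summed, [1, 2, 3], q)),
             Process(target=run, args=(maxed, [7, 4, 9], q))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Asynchronous workers: results arrive in either order.
    print(dict(q.get() for _ in procs))
```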


MEMORY ORGANIZATION


 SHARED MEMORY
• One common memory block shared by all processors.
• One common bus connecting all processors.
• Alternatively, a (complex) interconnection network connects the processors to the shared-memory modules.

[Fig-1 and Fig-2: shared-memory organizations]
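The shared-memory model can be sketched with multiprocessing's shared objects: a Value stands in for the common memory block, and its lock stands in for bus arbitration, serializing the processors' accesses. The names are illustrative, not from the slides.

```python
# Shared-memory sketch: all workers read and write one common cell.
from multiprocessing import Process, Value

def add_to_shared(counter, amount):
    with counter.get_lock():      # serialize access to the shared word
        counter.value += amount

if __name__ == "__main__":
    counter = Value("i", 0)       # the one common memory cell
    workers = [Process(target=add_to_shared, args=(counter, 10))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)          # 40: every worker saw the same memory
```

Without the lock, the increments could interleave and lose updates, which is the classic hazard of programming a shared memory block.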


MEMORY ORGANIZATION (cont.)

 DISTRIBUTED MEMORY
• Each processor has private memory.
• The contents of private memory can be accessed only by that processor.
• In general, such machines can be scaled to thousands of processors.
• Requires special programming techniques.

[Fig-3: a distributed-memory system]
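The special programming technique required here is message passing: a process's private data is visible to others only if it is explicitly sent. A minimal sketch, with a multiprocessing Pipe standing in for the network link between nodes (names are illustrative):

```python
# Distributed-memory sketch: private data, explicit messages.
from multiprocessing import Process, Pipe

def worker(conn, private_data):
    # private_data lives only in this process's address space;
    # the parent can see the result only via an explicit message.
    conn.send(sum(private_data))
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child, [1, 2, 3, 4]))
    p.start()
    print(parent.recv())          # 10 -- received as a message, not shared
    p.join()
```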



MEMORY ORGANIZATION (cont.)

 VIRTUAL SHARED MEMORY
• Objective: the scalability of distributed memory combined with the programming convenience of shared memory.
• A global address space is mapped onto physically distributed memory.
• Data moves between processors on demand.

[Fig-4: virtual shared memory]


PARALLEL PROGRAMMING CONCEPTS

A process is an instance of a program or subprogram executing autonomously on a processor.
 Processes can be considered either running or blocked.
 A common model is single-program, multiple-data (SPMD).
 In shared-memory programming, processes have a parent.
 In distributed-memory programming, message passing is used.
 Data parallelism: a data structure is distributed among the processes, and the individual processes execute the same instructions on their parts of the data structure.
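Data parallelism as described above can be sketched directly: one data structure is split among the workers, and every worker runs the same instructions on its own part before the results are reassembled. The chunking and function names are illustrative.

```python
# Data-parallel sketch: same instructions, different parts of one structure.
from multiprocessing import Pool

def double_chunk(chunk):
    # The identical instruction stream applied to each part of the data.
    return [x * 2 for x in chunk]

def data_parallel_double(data, workers=2):
    # Distribute the data structure among the processes...
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(len(chunks)) as pool:
        parts = pool.map(double_chunk, chunks)
    # ...then reassemble the per-process results.
    return [x for part in parts for x in part]

if __name__ == "__main__":
    print(data_parallel_double([1, 2, 3, 4, 5]))   # [2, 4, 6, 8, 10]
```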


AREAS OF HPC USE

 Weather forecasting
 Earthquake prediction
 River solution simulation
 Air pollution modeling
 Aircraft simulations
 Gene sequencing and mapping
 Artificial intelligence
 Computer graphics
 Geophysical/petroleum modeling
 Database applications


DISADVANTAGES

 Higher cost.
 Greater complexity.
 Individual processor speed is not the primary concern.
 High overhead, and difficulty of writing correct programs.
 The number of bugs increases.


CONCLUSION

Present supercomputing trends indicate that the supercomputer of tomorrow will work faster, solve even more complex problems, and provide wider access to supercomputing power. HPC has played a crucial role in the advancement of knowledge and the quality of life.


Thank you!!!
