Processor Design

  • November 2019


CHAPTER 2: INSIDE THE PC Page#1

Processor Design

RISC: Pronounced "risk," RISC is an acronym for Reduced Instruction Set Computer, a type of microprocessor that recognizes a relatively limited number of instructions.

Advantages:
• Reduced instruction set computers can execute their instructions very fast because the instructions are so simple.
• RISC chips require fewer transistors, which makes them cheaper to design and produce.
• By making the hardware simpler, RISC architectures put a greater burden on the software.
• A new microprocessor can be developed and tested more quickly if one of its aims is to be less complicated.
• Operating system and application programmers who use the microprocessor's instructions find it easier to develop code with a smaller instruction set.
• The simplicity of RISC allows more freedom in choosing how to use the space on a microprocessor.
• High-level language compilers produce more efficient code than formerly because they have always tended to use the smaller set of instructions found in a RISC computer.

CISC: Pronounced "sisk," CISC stands for Complex Instruction Set Computer.
• Most personal computers use a CISC architecture, in which the CPU supports as many as two hundred instructions.
• Most processors in mainframe computers and PCs have a CISC design.
• A CISC computer's machine language offers programmers a wide variety of instructions from which to choose:
  1. Add
  2. Multiply
  3. Compare
  4. Move data, and so on.
• CISC computers reflect the evolution of increasingly sophisticated machine languages.
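The contrast between the two designs can be sketched with a toy example: a CISC machine might multiply two values in memory with a single complex instruction, while a RISC machine does the same work as several simple load/operate/store steps. This is a minimal illustration, not real assembly; the mnemonics in the comments (MULT, LOAD, MUL, STORE) are hypothetical.

```python
# Toy model of one memory-to-memory CISC multiply versus the
# equivalent sequence of simpler RISC instructions.
memory = {"a": 6, "b": 7, "result": 0}
registers = {}

# CISC style: one complex instruction does everything.
# MULT result, a, b
memory["result"] = memory["a"] * memory["b"]
cisc_count = 1

# RISC style: the same work as simple load/operate/store steps.
registers["r1"] = memory["a"]                         # LOAD  r1, a
registers["r2"] = memory["b"]                         # LOAD  r2, b
registers["r3"] = registers["r1"] * registers["r2"]   # MUL   r3, r1, r2
memory["result"] = registers["r3"]                    # STORE result, r3
risc_count = 4

print(memory["result"], cisc_count, risc_count)  # → 42 1 4
```

Each RISC step is trivial, which is what makes the hardware simpler and faster per instruction; the cost, as the text notes, is that more work shifts to the software (here, four instructions instead of one).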


Parallel Processing

• Parallel processing is the simultaneous use of more than one CPU to execute a program; the concept of using multiple processors in the same computer is known as parallel processing.
• Ideally, parallel processing makes a program run faster because there are more engines (CPUs) running it.
• In practice, it is often difficult to divide a program in such a way that separate CPUs can execute different portions without interfering with each other.
• With single-CPU computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.
• Note that parallel processing differs from multitasking, in which a single CPU executes several programs at once.
• Most computers have just one CPU, but some models have several, and there are even computers with thousands of CPUs. Examples:
  1. Supercomputers
  2. Mainframes
• Parallel processing on such a large scale is referred to as MPP (Massively Parallel Processing). These super-fast supercomputers have sufficient computing capacity to attack applications that are beyond the reach of computers with traditional designs.
• Parallel processing introduced:
  1. Multiprogramming
  2. Multiprocessing
• Parallel processing is also called parallel computing.
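The division-of-work idea described above can be sketched in Python using a pool of worker processes: the problem (here, counting primes) is split into independent chunks so that separate CPUs can each work on a portion without interfering with the others. The chunk boundaries and the `count_primes` helper are illustrative choices, not anything prescribed by the text.

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately simple)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    # Split the range into chunks so each CPU works on a separate portion.
    chunks = [(0, 2500), (2500, 5000), (5000, 7500), (7500, 10000)]
    with Pool(processes=4) as pool:
        partials = pool.map(count_primes, chunks)
    print(sum(partials))  # → 1229 (primes below 10000)
```

This example works only because the chunks are truly independent; when portions of a program must share intermediate results, dividing the work cleanly becomes much harder, which is exactly the difficulty the text points out.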


Cache Memory

• A cache (pronounced "cash") is a place to store something temporarily.
• The files you automatically request by looking at a Web page are stored on your hard disk in a cache subdirectory under the directory for your browser (for example, Internet Explorer).
• When you return to a page you've recently looked at, the browser can get it from the cache rather than from the original server, saving you time and sparing the network some additional traffic.
• You can usually vary the size of your cache, depending on your particular browser.
• Computers include caches at several levels of operation, including cache memory and a disk cache.

Cache memory is random access memory (RAM) that a computer's microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming read from the larger main memory.

In addition to cache memory, RAM itself can be thought of as a cache for hard disk storage, since all of RAM's contents come from the hard disk: initially when you turn your computer on and load the operating system (you are loading it into RAM), and later as you start new applications and access new data. RAM can also contain a special area called a disk cache that holds the data most recently read in from the hard disk.

A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.

