Student ID: 2486313

Cache Memory Simulator

Introduction: This assignment studies the working of a cache memory simulator from both a quantitative and a qualitative perspective. Cache is the simplest cost-effective way to achieve high-speed memory, and its performance is extremely vital for high-speed computers. Simulation is a very popular way to study and evaluate computer architectures, providing an acceptable estimate of performance before a system is built. For the past two decades, a primary means of cache memory analysis has been the use of traces of memory access patterns to drive simulators that determine the miss rate of different cache designs.
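The trace-driven approach described above can be sketched in a few lines. The code below is a minimal illustration, not the actual simulator used in this assignment: it replays a list of word addresses against a direct-mapped cache and counts hits and misses.

```python
# Minimal trace-driven, direct-mapped cache sketch (illustrative only;
# not the simulator used in this assignment).

def simulate_direct_mapped(trace, capacity, block_size):
    """Replay a list of word addresses and count hits and misses."""
    num_blocks = capacity // block_size
    # Each line holds the tag of the block currently cached there (or None).
    lines = [None] * num_blocks
    hits = misses = 0
    for addr in trace:
        block = addr // block_size          # strip the block offset
        index = block % num_blocks          # line this block must occupy
        tag = block // num_blocks           # identifies the block in that line
        if lines[index] == tag:
            hits += 1
        else:
            misses += 1
            lines[index] = tag              # direct map: no choice of victim
    return hits, misses

# A nested loop over a small array produces a repetitive access trace,
# similar in spirit to the nested-loop workloads used in the tables below.
trace = [i for _ in range(24) for i in range(16)]
hits, misses = simulate_direct_mapped(trace, capacity=8, block_size=1)
```

With a capacity of 8 words and 16 distinct addresses, every line is overwritten before it is reused, so this particular trace never hits: the same conflict behaviour the first table illustrates.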

London South Bank University

Page1 of 10


Technical/observation sheets are attached below.

Replacement: LRU   Placement: Direct Map   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   37    144    26     107     144    74
16        1      Fast   128   144    89     16      144    11
32        1      Fast   128   144    89     16      144    11

[Bar chart: hit and miss percentages for the three capacities above]

Observation: If the number of blocks per set is one, the cache is said to be direct-mapped. If a cache is neither direct-mapped nor fully associative, it is called set-associative. When we change the loop size to 16, 8, the hit percentage decreases by 1% and the miss ratio increases by 1%. Therefore, hits and misses depend on both the loop size and the block size.

Replacement: LRU   Placement: Fully Associative   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     11      53     21
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21

[Bar chart: hit and miss percentages for the three capacities above]

Observation: If the number of sets is one, the cache is called fully associative, because all tags must be checked to determine whether a reference misses. Typically, fetch sizes of 8, 16, or 32 words work best, depending on the memory characteristics. A loop buffer is a small, very high-speed memory maintained by the instruction fetch stage of the pipeline, containing the n most recently fetched instructions in sequence. When we change the nested loop to outer 12, inner 6, with block size 4 and cache capacity 8, the hit percentage decreases by 26% and the miss ratio increases by 26%.
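A fully-associative lookup with LRU replacement, as used in the runs above, can be sketched as follows (an illustrative model, not the assignment's simulator): every resident tag is a candidate on each lookup, and on a miss the least recently used block is evicted.

```python
from collections import OrderedDict

# Fully-associative cache with LRU replacement (illustrative sketch).
# An OrderedDict keeps resident blocks in recency order: the first
# entry is the least recently used, the last the most recently used.

class FullyAssociativeLRU:
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.resident = OrderedDict()       # block tag -> None

    def access(self, block_tag):
        """Return True on a hit, False on a miss."""
        if block_tag in self.resident:      # any line may match the tag
            self.resident.move_to_end(block_tag)  # mark most recently used
            return True
        if len(self.resident) >= self.num_blocks:
            self.resident.popitem(last=False)     # evict the LRU block
        self.resident[block_tag] = None
        return False

cache = FullyAssociativeLRU(num_blocks=8)
results = [cache.access(a) for a in [0, 1, 2, 0, 1, 2]]  # second pass hits
```

Because placement is unrestricted, blocks never conflict until the cache is actually full, which is why the fully-associative runs avoid the conflict misses seen in the direct-mapped tables.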

Replacement: LRU   Placement: Direct Map   Loop: Nested, size 24, 16, 8


Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   103   144    72     41      144    28
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

[Bar chart: hit and miss percentages for the three capacities above]

Observation: If the number of blocks per set is one, the cache is said to be direct-mapped. If a cache is neither direct-mapped nor fully associative, it is called set-associative. Typically, fetch sizes of 8, 16, or 32 words work best, depending on the memory characteristics. A loop buffer is a small, very high-speed memory maintained by the instruction fetch stage of the pipeline, containing the n most recently fetched instructions in sequence. Comparing the direct-mapped and fully-associative results at block size 4, the hit ratio increased by 47% and the miss ratio by 7%. Therefore, increasing the block size produces only minor changes in hits and misses.
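The improvement at block size 4 comes from spatial locality: one miss fetches the next three words as well. A small sketch (illustrative, not the assignment's tool) shows how a word address splits into tag, index, and offset for a direct-mapped cache:

```python
# Decompose a word address for a direct-mapped cache (illustrative).
# With block_size = 4 and capacity = 16 words there are 4 lines, so
# addresses 0-3 share one block, 4-7 the next, and so on: a miss on
# address 0 brings in addresses 1-3 too.

def decompose(addr, capacity, block_size):
    num_lines = capacity // block_size
    offset = addr % block_size        # word within the block
    block = addr // block_size
    index = block % num_lines         # cache line the block maps to
    tag = block // num_lines          # disambiguates blocks sharing a line
    return tag, index, offset

# Sequential addresses 0..7 with block_size 4 touch only two distinct
# (tag, index) pairs, i.e. only two compulsory misses for eight accesses.
blocks = {decompose(a, 16, 4)[:2] for a in range(8)}
```

This is why the nested-loop traces, which sweep addresses sequentially, hit much more often once the block size grows from 1 to 4.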

Replacement: LRU   Placement: Fully Associative   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   103   144    72     41      144    28
16        4      Fast   140   144    97     4       144    3
32        4      Fast   140   144    97     4       144    3

[Bar chart: hit and miss percentages for the three capacities above]

Observation: If the number of sets is one, the cache is called fully associative, because all tags must be checked to determine whether a reference misses. Typically, fetch sizes of 8, 16, or 32 words work best, depending on the memory characteristics. Comparing the direct-mapped and fully-associative results at block size 4, the hit ratio increased by 47% and the miss ratio by 7%. Therefore, increasing the block size produces only minor changes in hits and misses.

Replacement: Random   Placement: Direct Map   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     11      53     21
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21

[Bar chart: hit and miss percentages for the three capacities above]

Observation: Some designs incorporate a static RAM cache per memory bank, which can effectively double the peak data bandwidth. When we change the loop size to 12, 6, the hit percentage increases by 1% and the miss ratio decreases by 9%. Random placement seems to work best with the 12, 6 loop combination.

Replacement: Random   Placement: Fully Associative   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         1      Fast   42    53     79     14      53     26
16        1      Fast   42    53     79     11      53     21
32        1      Fast   42    53     79     11      53     21

[Bar chart: hit and miss percentages for the three capacities above]

Observation: After changing the loop size to outer 12, inner 6, the hit percentage increased by 7% and the miss ratio decreased by 12%. Hence, changing the loop size produces significant changes in the results.
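Random replacement, used in the runs above, simply evicts an arbitrary resident block on a miss. A minimal fully-associative sketch (illustrative only; the victim choice is the only difference from LRU):

```python
import random

# Fully-associative cache with random replacement (illustrative sketch).
# On a miss with a full cache, an arbitrary resident block is evicted,
# so unseeded runs over the same trace can give different hit counts.

class FullyAssociativeRandom:
    def __init__(self, num_blocks, seed=None):
        self.num_blocks = num_blocks
        self.resident = set()
        self.rng = random.Random(seed)   # a seed makes a run repeatable

    def access(self, block_tag):
        """Return True on a hit, False on a miss."""
        if block_tag in self.resident:
            return True
        if len(self.resident) >= self.num_blocks:
            # sorted() gives a stable ordering so the seed fully
            # determines which victim is chosen.
            victim = self.rng.choice(sorted(self.resident))
            self.resident.remove(victim)
        self.resident.add(block_tag)
        return False

cache = FullyAssociativeRandom(num_blocks=8, seed=0)
misses = sum(not cache.access(a) for a in [i % 12 for i in range(36)])
```

Random replacement needs no bookkeeping between accesses, which is its appeal over LRU; the price is that it may evict a block that is about to be reused, so results vary with the access pattern, as the tables show.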

Replacement: Random   Placement: Direct Map   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   50    53     94     3       53     6
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

[Bar chart: hit and miss percentages for the three capacities above]


Observation: After selecting a different loop size, the hit count decreased by 30% and the miss ratio increased by 6%.

Replacement: Random   Placement: Fully Associative   Loop: Nested, size 24, 16, 8

Capacity  Block  Speed  Hits  Total  Hit %  Misses  Total  Miss %
8         4      Fast   46    53     87     9       53     17
16        4      Fast   50    53     94     3       53     6
32        4      Fast   50    53     94     3       53     6

[Bar chart: hit and miss percentages for the three capacities above]

Observation: Comparing fully-associative random with direct-mapped random, there is only a minor difference between the two readings: the hit ratio decreased by 7%, while the average miss ratio is 9.6%.

Conclusion: The cache simulator is a flexible, multilateral tool developed to help designers in the middle of the design cycle make cache configuration decisions that best meet the performance goals of the target processor. It is an event-driven, timing-sensitive simulator based on the Latency Effects cache timing model. It can easily be configured to model various multilateral cache configurations using its library of cache-state and data-movement routines, and it can be joined to a wide range of event-driven processor simulators. We showed implementations of two different cache configurations, direct-mapped and fully associative, and their resulting performance. The simulator provides many statistics that help explain the performance of potential cache configurations when running target workloads. Information about hit, miss, and delayed-hit ratios describes a program's memory access characteristics, while block-size and reuse information describes the actual data usage within each program. These statistics can be used both to explain the performance of each cache configuration and to drive the development of future cache designs that better handle the reference streams presented by the target workloads.


