Storage Subsystem Design for Datawarehousing
Array, Drive and RAID Selection

Krishna Manoharan
[email protected]
http://dsstos.blogspot.com


Again, what is this about?

An attempt to show how to design a storage subsystem for a Datawarehouse environment from a physical perspective. It is aimed at conventional environments using standard devices, such as Fibre Channel SAN arrays hosting Oracle databases. The presentation will demonstrate the array and drive selection process using real-life examples. You will be in for a few surprises!


Enterprise Business Intelligence (EBI)

Most companies have multiple Oracle instances (such as an ODS and a DW), with an ETL engine (Informatica) and a reporting tool (Business Objects), all rolled into an Enterprise Business Intelligence (EBI) environment. The ODS is the Operational Data Store (a near real-time copy of the company's transactional data) and the DW is the Datawarehouse (a collection of aggregated corporate data). The ETL engine (such as Informatica) transforms and loads data contained in the ODS into the DW. The reporting engine (such as Business Objects) reports off data from both the ODS and the DW. This presentation covers the storage design for the DW. The typical size of a DW is around 5-10TB for a large software company. Though the typical enterprise warehouse is small in size, it is by no means any less busy.

Enterprise Business Intelligence (EBI) – contd.

[Architecture diagram: Transaction Systems (HR, Online Sales, Click Stream, ERP, CRM) are replicated one way into the ODS in the database layer; the ETL engine extracts from the ODS and loads the DW; the reporting engine serves reports to users from both the ODS and the DW.]

Datawarehousing and the Storage Subsystem

One of the biggest factors affecting performance in Datawarehousing is the storage subsystem. Once the environment is live, it becomes difficult to change the storage subsystem or the layers within it. So it is important to design, size and configure the storage subsystem appropriately for any Datawarehousing environment.

What is the Storage Subsystem?

The physical components of a conventional storage subsystem are the host system (CPU, memory, PCI interface), the SAN fabric (SAN switches) and the array (front end ports to the host, CPU, cache and drives).

[Diagram: System → SAN Fabric → SAN Switch → Array front end ports (Port 1..Port n) → Array CPU/Cache → Drives.]

In this presentation, we talk about the Array component of the storage subsystem.

IO Fundamentals

Reads are data transfers from the storage subsystem; writes are data transfers to the storage subsystem. IO, in the simplest of terms, is a combination of reads and writes. Reads and writes can be random or sequential.

IO Fundamentals – contd.

Whether an IO stream is random or sequential is determined (meaningfully) at the array level.

Random Reads
  • Random Read Hit – if the data is present in the array cache, the read is serviced at wire speed.
  • Random Read Miss – if the data is not in the array cache, the read hits the drives directly. This is the slowest IO operation.

Sequential Reads
  • The first few reads are treated by the array as random. If the incoming requests are judged to be sequential, data is pre-fetched from the drives in fixed sized chunks and stored in cache.
  • Subsequent reads are met from cache at wire speed to the requestor.

Random/Sequential Writes
  • Normally staged to cache and then written to disk. They occur at wire speed to the requestor.

IO Metrics

Key IO metrics are:
  • IOPS – the number of IO requests per second issued by the application.
  • IO Size – the size of the IO requests as issued by the application.
  • Latency – the time taken to complete a single IO operation (IOP); also called response time.
  • Bandwidth – the bandwidth that the IO operations are expected to consume.

[Diagram: IO requests of mixed sizes (16K, 264K, 1024K) flow from source to destination through a pipe of fixed capacity. Bandwidth is the total capacity of the pipe; bandwidth capabilities are fixed.]
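To make these metrics concrete, here is a minimal sketch (my own illustration, not part of the original deck) of how IOPS, IO size, bandwidth and latency relate; the sizes are the example request sizes from the diagram above.

```python
# Illustrative only: relationship between IOPS, IO size, bandwidth and latency.

def bandwidth_mb_per_sec(iops: float, io_size_kb: float) -> float:
    """Bandwidth consumed by a stream of IOs: IOPS x IO size."""
    return iops * io_size_kb / 1024.0

def max_iops_per_stream(latency_ms: float) -> float:
    """With one outstanding IO, a stream completes at most 1/latency IOs per second."""
    return 1000.0 / latency_ms

print(bandwidth_mb_per_sec(4000, 16))    # ~62.5 MB/sec of small-block (16K) traffic
print(bandwidth_mb_per_sec(200, 1024))   # ~200 MB/sec of large-block (1024K) traffic
print(max_iops_per_stream(10))           # 100 IOPS for a single thread at 10 ms per IO
```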

Datawarehousing Storage Challenges

Storage design in a corporate environment is typically storage centric, based on capacity requirements rather than application requirements. When applied to Datawarehousing, this results in a sub-standard user experience, as Datawarehousing is heavily dependent on IO performance.


Profiling Datawarehousing IO – Reads

From an IO performance perspective, the array capabilities along with the RAID and drive configuration determine read performance in a Datawarehouse.

  • Normally, in a conventional DW, you would notice many reports running against the same set of objects by different users for different requirements at the same time.
  • Since the size of the DW is not very big (~5-10TB), and hence the objects are relatively small, it is a normal tendency to place these objects on the same set of spindles (also given the fact that today's drives are geared for capacity, not performance).
  • Due to the high concurrency of the requests, about 60% of these read requests end up as Random Read Misses at the array. A Random Read Miss is the slowest operation on an array and requires such reads to be met from the disks.
  • Such random reads can be big (1MB sized IOPs) or as small as the DB block size. To accommodate both, throughput and latency need to be taken into consideration.
  • A high degree of random concurrency (along with write intensive loads) to a single set of disks will absolutely kill your user experience.

[Diagram: users issue reads against a typical DW (less than 10TB) whose database objects (Object1..Objectn) all reside on the same set of spindles.]

Profiling Datawarehousing IO – Writes

From an IO performance perspective, cache sizing and cache reservation, along with the RAID and disk configuration, determine write performance in a Datawarehouse.

  • In a typical DW, different write operations (ETL loads, user temp tablespace writes) occur at different intervals, 24*7*365.
  • These writes can be direct path or conventional.
  • Fast loading of data is important to be able to present the latest information to your customer. Normally, loads are driven by rigid SLAs.
  • In an array environment, writes are staged to cache on the array and then written to disks.
  • Write performance therefore depends on the size of the cache and the speed at which data can be written to disks. If your cache overflows (the array cannot keep up with the writes), you will see an immediate spike in write response times and a corresponding impact on your write operation.
  • The speed at which data can be written to disks depends on drive busyness.
  • A combination of reads and writes occurring simultaneously to a single set of spindles will result in a poor user experience. This normally happens when you place objects on the same set of spindles without regard to their usage patterns.

[Diagram: users (temp tablespace writes) and the ETL engine write into the cache (memory) on the array, which destages to the database objects (Object1..Objectn) of the DW (typically less than 10TB).]

Profiling Datawarehousing IO – Summary

To summarize (in storage terminology), Enterprise Datawarehousing is an environment that is:
  • Performance driven, not just capacity driven.
  • Read and write intensive (typically a 70:30 ratio).
  • Small (KB) to large (> 1MB) IOP sizes, for both reads and writes.
  • Latency sensitive, with IO operations that can consume a significant amount of bandwidth.

In order to make these requirements more meaningful, you need to put numbers against each of these terms (IOPS, bandwidth and latency) so that a solution can be designed to meet them.

Starting the Design

Okay I get the idea, so where do I begin?


Storage Subsystem Design Process

[Flowchart: Requirements gathering phase – collect stats from Oracle, the system and the storage; correlate the stats and summarize/forward project (or, if stats are not available, document requirements as best as you can). Infrastructure design phase – identify suitable system(s), SAN switches and array, then select the RAID level and drives.]

If you have an existing Warehouse, collect stats from all sources and correlate them to ensure you are reading them correctly. If not, you would have to document your requirements based on an understanding of how your environment will be used and proceed to the design phase.

Storage Subsystem Design Requirements – contd.

If using data from an existing Warehouse, do a forward projection, using these stats as raw data, for your design requirements. The existing IO subsystem will have affected the quality of the stats you gathered, and you need to factor this in. Separate out reads and writes along with the IO size, and document your average and peak numbers at the Oracle level:
  • Anticipated IOPS – number of IO requests/sec.
  • Anticipated IO request size – IO request sizes as issued by the application for different operations.
  • Acceptable latency per IO request.
  • Anticipated bandwidth as consumed by the IOPS.
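As a hedged illustration of the forward projection step, the sketch below is not from the original deck; the measured values, growth factor and headroom are hypothetical placeholders. It simply scales measured peak stats by an expected growth factor and adds an allowance for stats that were depressed by the existing (possibly saturated) IO subsystem.

```python
# Hypothetical forward projection of measured Oracle-level peak IO stats.
# The measured numbers, growth factor and headroom below are placeholders, not real data.

measured_peaks = {
    "multiblock_read_iops": 1400,
    "singleblock_read_iops": 3600,
    "multiblock_write_iops": 380,
    "singleblock_write_iops": 450,
}

growth_factor = 1.4        # expected data/workload growth over the design horizon
bottleneck_headroom = 1.1  # allowance for stats held down by the current IO subsystem

projected = {name: round(value * growth_factor * bottleneck_headroom)
             for name, value in measured_peaks.items()}

for name, value in projected.items():
    print(f"{name}: {value} IOPS")
```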

A Real World approach to the Design

In order to make the design process more realistic, let us look at the requirements for a DW at a large software company and use them to build a suitable storage subsystem.

Requirements for a typical Corporate DW (Assuming 5TB in size)

Performance
The requirements below are as seen by Oracle. These are today's requirements; it is expected that as the database grows, the performance requirements will scale accordingly. Read peaks need not coincide with write peaks, and the same applies to multiblock versus single block traffic.

Requirement                     | Multi Block Reads                | Single Block Reads | Multi Block Writes               | Single Block Writes | Total
Acceptable Latency/IO Request   | <= 30ms                          | <= 10ms            | < 20ms                           | < 5ms               | 5ms to 30ms
Expected IO Request Size        | >16KB <= 1MB (avg IOP size 764K) | 16KB               | >16KB <= 1MB (avg IOP size 512K) | 16KB                | 16KB to 1MB
IOPS (IO Requests/Sec), Average | 1200 IOPS                        | 4000 IOPS          | 400 IOPS                         | 450 IOPS            | 6050 IOPS
IOPS (IO Requests/Sec), Peak    | 2000 IOPS                        | 5200 IOPS          | 525 IOPS                         | 650 IOPS            | 8375 IOPS
Bandwidth, Average              | 918 MB/sec (764K sized IOPs)     | 62.5 MB/sec        | 200 MB/sec (512K sized IOPs)     | 7 MB/sec            | 1.2 GB/sec
Bandwidth, Peak                 | 1492 MB/sec (764K sized IOPs)    | 81.2 MB/sec        | 262.5 MB/sec (512K sized IOPs)   | 10 MB/sec           | 1.8 GB/sec
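As a quick sanity check of the table above, the peak bandwidth figures follow directly from peak IOPS multiplied by the average IOP size. A small sketch of that arithmetic, using the table's own numbers (not part of the original deck):

```python
# Reproduce the peak bandwidth column: peak IOPS x average IOP size (KB).
peaks = [
    ("Multi block reads", 2000, 764),
    ("Single block reads", 5200, 16),
    ("Multi block writes", 525, 512),
    ("Single block writes", 650, 16),
]

total_mb = 0.0
for name, iops, size_kb in peaks:
    mb_per_sec = iops * size_kb / 1024.0
    total_mb += mb_per_sec
    print(f"{name}: {mb_per_sec:.1f} MB/sec")

print(f"Total peak: {total_mb / 1024:.1f} GB/sec")  # ~1.8 GB/sec, matching the table
```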

Requirements for a typical EDW (Assuming 5TB in size) – contd.

Capacity
The database is 5TB in size (data/index). Provide 10TB of usable space on Day 1 to ensure that sufficient space is available for future growth (filesystems at 50% capacity). Scale the performance requirements appropriately for 10TB.

Misc
IO from redo/archive/backup is not included in the above requirements. The storage subsystem needs to be able to handle a 1024K IO request size to prevent IO fragmentation.

Conventional Storage thinking

Let us look at a typical Corporate Storage Design as a response to the requirements.


Requirements to Array & Drive Capabilities

The below would be a typical response to the requirements. However, as we shall see, implementing it as below would result in failure.

Feature                | Requirement                               | Recommended                                               | Notes
Net Bandwidth Consumed | 1.8 GB/sec (Today), 3.6 GB/sec (Tomorrow) | 1 Hitachi Modular Array AMS1000                           | The AMS1000 has 8 x 4Gb front end ports for a total of 4 GB/sec of bandwidth
IOPS                   | > 8375 IOPS                               | 146GB, 15K RPM drives; 165 drives; 10TB usable (RAID 10)  | Drive specs of the 146GB, 15K RPM drive show that the requirements can easily be met
Latency                | 5ms to 30ms                               | (as above)                                                |
Capacity               | 10TB                                      | (as above)                                                |
Max IO Size            | 1024K                                     | 1024K                                                     | The AMS1000 supports a 1024K IOP size
Cache                  | Determines write performance              | 16GB                                                      | Maximum cache is 16GB
Raid Levels            | -                                         | RAID10                                                    | RAID10 offers the best performance
Stripe Width           | 1024K                                     | 512K                                                      | 512K is the maximum offered by the array

Storage Subsystem Design Process - Array

What is required is a more thorough analysis of all the components of the storage subsystem, so that the requirements can be fitted appropriately to the solution. We start with the Array. It is the most vital part of the solution and is not easily replaceable.


Storage Array – Enterprise or Modular?

Arrays come in different configurations – Modular, Enterprise, JBOD etc. Modular arrays are inexpensive and easy to manage; they provide good value for money. Enterprise arrays are extremely expensive and offer a lot more functionality geared towards enterprise needs, such as WAN replication, virtualization and vertical growth capabilities. As I will show later on, vertical scaling of an array is not really conducive to performance; adding more modular arrays is a cheaper, more flexible option. For this presentation, I am using the Hitachi Modular Series AMS1000 as an example.

Typical Modular Array (simplified)

[Diagram: servers connect through a SAN switch to the array's front end ports (Port 1, Port 2, ... Port x); inside the array sit the management CPU, RAID controllers, cache, disk controllers and drives.]

Conventional array specs include:
  • Number/speed of host ports (ports available for the host system to connect to).
  • Size of cache.
  • Maximum number of drives.
  • Number of RAID controllers.
  • Number of backend loops for drive connectivity.

Oracle requirements to Array Specs

Unfortunately, array specs as provided by the vendor do not allow us to match them with Oracle requirements (apart from capacity). So you need to ask your array vendor some questions that are relevant to your requirements.


Array Specs – contd. (Questions for the Array Vendor)

1. Front end ports to the host – Are these ports full speed? What queue depth can they sustain? What is the maximum IO size that a port can accept?
2. Cache – Can we manipulate the cache reservation policy between reads and writes?
3. What is the bandwidth available between these components (front end ports, cache, RAID controllers, disk controllers, drives)?
4. How many CPUs in all?
5. Drives – How many drives can this array sustain before consuming the entire bandwidth of the array? What are the optimal RAID configurations?

The HDS AMS1000 (Some questions answered)

  • Are these ports full speed? – Only 4 out of the 8 are full speed, for a peak speed of 2048 MB/sec.
  • What is the queue depth they can sustain? – 512 per port.
  • Maximum IO size that the port can accept? – 1024K.
  • Can we manipulate the cache reservation policy between reads and writes? – No.
  • What is the bandwidth available between these components? – Effective bandwidth is 1066 MB/sec.
  • How many CPUs in all? – 2 (1 CPU per RAID controller, per the diagram).
  • How many drives can this array sustain before consuming the entire bandwidth of the array? – Depends on drive performance.
  • Optimal RAID configurations? – RAID 1 or RAID 10. For RAID 5 there is not enough CPU/cache.
  • RAID 10 – 64K stripe width by default, up to 512K with CPM (a licensed feature).

[Diagram: the front end ports feed 4 Tachyon chips (1066 MB/sec) into the 2 RAID controllers and cache; the disk controllers use another 4 Tachyon chips, with 2048 MB/sec (simplex) to the drives.]

Analyzing the HDS AMS1000

  • Regardless of internal capabilities, you cannot exceed 1066 MB/sec of net throughput (reads and writes).
  • The limited cache (16GB) and the inability to manipulate the cache reservation mean that faster and smaller drives are required to complete writes in time.
  • The 1066 MB/sec limit and the backend architecture restrict the number of drives that can be sustained by this array (see the rough sketch below).
  • The limited number of CPUs and the limited cache rule out RAID 5 as a viable option.
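To illustrate how quickly the 1066 MB/sec ceiling constrains the drive count, here is a rough sketch of my own (not from the deck), using the sustained drive transfer rates quoted later in the drive specs. It simply asks how few drives streaming sequentially would saturate the array's effective internal bandwidth.

```python
# Rough illustration: drives needed to saturate the AMS1000's effective bandwidth.
array_effective_mb_per_sec = 1066.0          # AMS1000 effective internal bandwidth
drive_sustained_mb_per_sec = (73.0, 125.0)   # sustained rate quoted for a 15K RPM drive

for rate in drive_sustained_mb_per_sec:
    drives = array_effective_mb_per_sec / rate
    print(f"At {rate:.0f} MB/sec per drive, ~{drives:.0f} streaming drives saturate the array")
```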

Matching the AMS1000 to our requirements

Feature                | Requirement                               | AMS1000 Capability                                                             | Recommendation                                                                      | Notes
Net Bandwidth Consumed | 1.8 GB/sec (Today), 3.6 GB/sec (Tomorrow) | 1066 MB/sec (theoretical), 750 MB/sec (realistic)                              | 5 arrays (min), 8 arrays (recommended)                                              | 1 AMS1000 = 750 MB/sec; 5 AMS1000 = 3.6 GB/sec; 8 AMS1000 = 5.8 GB/sec
IOPS                   | > 8375 IOPS                               | Depends on the type of IO operation, RAID/drive performance and drive capacity | Need to simulate the requirements along with various drive and RAID configurations  |
Latency                | 5ms to 30ms                               | (as above)                                                                     | (as above)                                                                          |
Capacity               | 10TB                                      | (as above)                                                                     | (as above)                                                                          |
Scalability            | Future growth                             | (as above)                                                                     | (as above)                                                                          |
Max IO Size            | 1024K                                     | 1024K                                                                          | 1024K                                                                               | 1024K is supported on the AMS1000
Cache                  | Determines write performance              | 16GB                                                                           | 16GB (max)                                                                          | Cache is preset at 50% reads / 50% writes
Raid Levels            | -                                         | RAID 0, RAID 1, RAID 10, RAID 5                                                | RAID 1 and RAID 10                                                                  | Not enough CPU for RAID 5
Stripe Width           | 1024K                                     | 64K, 256K and 512K                                                             | Test to determine stripe width                                                      | Beyond 64K requires an additional license feature

HDS AMS1000 – Conclusions

  • Bandwidth requirements – we would need a minimum of 5 arrays to meet today's plus future requirements (the arithmetic is sketched below).
  • The physical hard drives and the RAID configuration determine the storage capacity and the remaining performance requirements (IOPS/latency).
  • Testing various combinations of drives and RAID levels will determine how the desired requirements (IOPS/latencies) can be met.
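A minimal sketch of the array-count arithmetic behind this conclusion, using the realistic 750 MB/sec per-array figure from the table above (the code itself is my illustration, not part of the deck):

```python
import math

# Arrays needed to carry the anticipated bandwidth at ~750 MB/sec realistic per array.
# The deck sizes the solution for the future (3.6 GB/sec) requirement.
per_array_mb_per_sec = 750.0

for label, required_gb_per_sec in (("Today", 1.8), ("Tomorrow", 3.6)):
    required_mb_per_sec = required_gb_per_sec * 1024
    arrays = math.ceil(required_mb_per_sec / per_array_mb_per_sec)
    print(f"{label}: {required_gb_per_sec} GB/sec -> {arrays} AMS1000 arrays")
```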

Storage Subsystem Design Process – The Drives

Now that we have established Array capabilities, we can move on to the Drive Selection.


Hard Drives

Regardless of how capable your array is, the choice of drives will ultimately decide performance, since all IO eventually gets passed down to the physical hard drives. The performance characteristics (throughput, IOPS and latency) vary depending on the type of IO request and the drive busyness.

Hard Drives – FC, SATA or SAS?

  • The choice is limited by the selection of array.
  • The drive interface speed (2Gb/4Gb etc.) is not relevant, as the bottleneck is the media and not the interface.
  • SAS is a more robust protocol than FC, with native support for dynamic failover. SAS is a switched, serial, point to point architecture, whereas FC is an arbitrated loop at the backend.
  • The IDE equivalent of SAS is SATA. SATA offers larger capacities at slower speeds.
  • For an Enterprise DW with stringent IO requirements, SAS would be the ideal choice (if the array supports SAS). The faster the drives, the better the overall performance.

Hard Drives – Capacities – Is bigger better?

What capacity should I pick? Bigger drives can store more objects, which results in more concurrent requests and thus a busier drive.

[Diagram: 146GB, 300GB and 450GB drives holding progressively more objects (Object1..Object8); all supposedly offer the same performance of 167 random IOPS at an 8K IO size and 73-125 MB/sec sustained.]

But if you compare IOPS/GB, the true picture is revealed:
  • 146GB drive = 1.14 IOPS/GB
  • 300GB drive = 0.55 IOPS/GB
  • 450GB drive = 0.37 IOPS/GB
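The IOPS/GB figures above come straight from dividing the quoted 167 random IOPS by the drive capacity; a one-liner sketch of that arithmetic (my own, not from the deck; the printed values round to roughly the figures on the slide):

```python
# IOPS per GB for drives that all deliver ~167 random 8K IOPS regardless of capacity.
drive_iops = 167
for capacity_gb in (146, 300, 450):
    print(f"{capacity_gb}GB drive: {drive_iops / capacity_gb:.2f} IOPS/GB")
```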

Performance of Drives vis-à-vis Active Surface Usage

As free space on the drive is consumed, performance starts to degrade. Smaller drives are a better choice for Enterprise Warehousing.

[Chart: random 8K IOPS of a 15K RPM drive (y-axis 0-300) versus % active surface usage (25% to 100%), showing IOPS falling as more of the surface is used.]

Hard Drive Specs

Hard drive specs from manufacturers typically include:
  • Capacity – 146GB, 300GB, 450GB etc.
  • Speed – 7.2K/10K/15K RPM
  • Interface type/speed – SAS/FC/SATA, 2/3/4 Gb/sec
  • Internal cache – 16MB
  • Average latency – 2 ms
  • Sustained transfer rate – 73-125 MB/sec

Oracle requirements to Disk Specs

Unfortunately, disk specs as provided by the vendor do not allow us to match them with Oracle requirements (apart from capacity). Also, hard drives are always used in a RAID configuration (in an array). So you need to test various RAID configurations and arrive at conclusions that are relevant to your requirements.

RAID, Raid Groups & LUNs

RAID is essentially a method to improve drive performance by splitting requests between multiple drives to reduce drive busyness, while providing redundancy at the same time.

  • Raid Groups are sets of disks in the array in pre-defined combinations (RAID 1, RAID 5, RAID 10 etc.).
  • LUNs are carved out from Raid Groups on the array.
  • Host systems see LUNs as individual disks (presented by the array).

[Diagram: the array contains Raid Group 1 (LUN 3, LUN 4, LUN 5) and Raid Group 2 (LUN 1, LUN 2), presented to the host systems.]

RAID Levels – RAID 1

  • Reads can be serviced from either drive, which helps reduce drive busyness.
  • Writes require 2 IOPs (overwrite the existing data on both mirrors).
  • Minimal CPU utilization during routine/recovery operations.
  • Not cache intensive.
  • Since a traditional RAID 1 LUN is a single mirrored pair (1 data drive + 1 mirror), it requires combining multiple such LUNs on the system to create big volumes.

[Diagram: RAID 1 – a data drive and its mirror.]

RAID Levels – RAID 5

  • Reads are split across drives depending on the size of the IO request and the stripe width, which helps reduce drive busyness.
  • Depending on the stripe width, a request may be split between drives; each additional IO operation consumes bandwidth within the array.
  • Writes require 4 IOPs (retrieve data and parity into cache, update data and parity in cache, then write both back to disk); a small sketch of this penalty arithmetic follows below.
  • High write penalty and hence cache intensive.
  • CPU intensive due to the parity calculation; high CPU overhead during recovery.
  • The bigger the RAID group (more drives), the higher the penalty (especially during recovery).

[Diagram: RAID 5 – data and rotating parity striped across drives (DATA1, DATA2, DATA3, PARITY; DATA4, DATA5, PARITY, DATA6).]
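A small sketch (my own illustration, not from the deck) of how the write penalty translates host writes into backend drive IOs, using the penalty factors from these slides and the peak write IOPS from the requirements table:

```python
# Backend drive IOs generated per host write for the RAID levels discussed here.
WRITE_PENALTY = {"RAID 1": 2, "RAID 10": 2, "RAID 5": 4}

def backend_write_iops(host_write_iops: int, raid_level: str) -> int:
    """Host writes multiplied by the RAID level's write penalty."""
    return host_write_iops * WRITE_PENALTY[raid_level]

# Peak write IOPS from the requirements: 525 multiblock + 650 single block.
host_writes = 525 + 650
for level in ("RAID 1", "RAID 10", "RAID 5"):
    print(f"{level}: {backend_write_iops(host_writes, level)} backend write IOs/sec")
```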

RAID Levels – RAID 10

  • Combines RAID 1 and RAID 0.
  • Writes require 2 IOPs (overwrite the existing data on both mirrors).
  • Reads can be serviced from either of the mirrored drives, and are split across drives depending on the size of the IO request and the stripe width.
  • Offers the same advantages as RAID 1, plus striping (scaling across multiple drives).

[Diagram: RAID 10 – DATA1..DATA4 striped across drives, each drive with a mirror.]

With a bigger stripe width, an IO request can be met from within a single drive. Traditionally, modular arrays have only been able to offer a 64K stripe width (on a single disk), which means that an IO request exceeding 64K must be split across drives. Splitting across drives means more IOPs and more backend capacity consumed (overall array and drive busyness). Newer arrays (AMS2500) offer up to a 512K stripe width. You can also combine RAID 1 on the array with striping on the system (volume manager) to overcome the array stripe width limitation, as the sketch below illustrates.
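A minimal sketch of why the stripe width matters for large Oracle multiblock reads: the number of drive IOs generated by one request is roughly the request size divided by the stripe width. This is my own simplified model (it ignores alignment), not part of the deck.

```python
import math

# Drive IOs generated by a single request for different stripe widths (rough model):
# ceil(request size / stripe width).
request_kb = 1024  # a 1024K Oracle multiblock read
for stripe_kb in (64, 256, 512, 1024):
    drive_ios = math.ceil(request_kb / stripe_kb)
    print(f"{stripe_kb}K stripe width: {drive_ios} drive IO(s) per 1024K request")
```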

Drive and RAID – Initial Conclusions

  • Since the AMS1000 supports only FC/SATA drives, we will use FC drives. We will test using 146GB, 15K RPM drives.
  • RAID 5 is not an option due to its high write penalty.
  • RAID 10 on the array is not an option, as the array can offer only a 512K stripe width. Our preference is a 1024K stripe width, so that a single 1024K multiblock IO request from Oracle can (at best) be met from a single drive.
  • This leaves us with only RAID 1 on the array. We can test RAID 1 and RAID 10 (striping on the system) under various conditions.

RAID Level Performance Requirements

The intent is to identify individual drive performance (in a RAID configuration).

This will allow us to determine the number of drives that will be required to meet our requirements. We will simulate peak reads/writes to identify a worst case scenario.

Test Methodology to determine Drive performance

We will simulate Oracle traffic for 20 minutes using VxBench. We will test on a subset (400GB) of the 5TB expected data volume.

Reads
Operations      | Type         | IO Size | IOPS     | IOs in 20 minutes
Multiblock IOP  | Asynchronous | 784K    | 156 IOPS | 187,200
Single Block OP | Synchronous  | 16K     | 406 IOPS | 487,200

Writes
Operations      | Type         | IO Size | IOPS     | IOs in 20 minutes
Multiblock IOP  | Asynchronous | 512K    | 41 IOPS  | 49,200
Single Block OP | Synchronous  | 16K     | 51 IOPS  | 61,200

We will generate the required IOPS and measure the latency and consumed bandwidth.
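The "Expectation" column in the results that follow can be derived from this mix; a quick sketch of that arithmetic (my own illustration, using only the numbers in the tables above):

```python
# Expected aggregate IOPS and bandwidth for the simulated 20-minute Oracle mix.
mix = [
    # (operation, io_size_kb, iops)
    ("multiblock reads", 784, 156),
    ("single block reads", 16, 406),
    ("multiblock writes", 512, 41),
    ("single block writes", 16, 51),
]

total_iops = sum(iops for _, _, iops in mix)
total_mb_per_sec = sum(iops * size_kb / 1024.0 for _, size_kb, iops in mix)

print(f"Expected IOPS: {total_iops}")                        # 654 IOPS
print(f"Expected bandwidth: {total_mb_per_sec:.0f} MB/sec")  # ~147 MB/sec
```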

And the results are...

RAID 1 – 4 concat volumes across 4 RAID 1 LUNs (8 drives, 68% active surface area/drive, 400 GB of data)
  • IOPS: expected 654, actual 567
  • Bandwidth: expected 147 MB/sec, actual 141 MB/sec
  • Latency: expected 5ms to 30ms, actual 46 ms

RAID 1 – 8 concat volumes across 8 RAID 1 LUNs (16 drives, 33% active surface area/drive, 400 GB of data)
  • IOPS: expected 654, actual 642
  • Bandwidth: expected 147 MB/sec, actual 142 MB/sec
  • Latency: expected 5ms to 30ms, actual 15 ms

RAID 10 – 2 stripe volumes across 4 RAID 1 LUNs (RAID 0 on the system, RAID 1 on the array; 8 drives, 68% active surface area/drive, 400 GB of data)
  • IOPS: expected 654, actual 509
  • Bandwidth: expected 147 MB/sec, actual 136 MB/sec
  • Latency: expected 5ms to 30ms, actual 87 ms

RAID 10 – 4 stripe volumes across 8 RAID 1 LUNs (RAID 0 on the system, RAID 1 on the array; 16 drives, 33% active surface area/drive, 400 GB of data)
  • IOPS: expected 654, actual 626
  • Bandwidth: expected 147 MB/sec, actual 142 MB/sec
  • Latency: expected 5ms to 30ms, actual 25 ms

Notes: the RAID 1 tests ran on a Linux host with the NOOP elevator and VxVM volumes; the RAID 10 tests ran on the same host with VxVM volumes striped at a 1MB stripe width.

Drive and RAID conclusions

  • RAID 1 on a Linux host outperforms the RAID 10 combination (RAID 0 on the system over RAID 1 on the array).
  • To meet our requirements, the usable surface area cannot exceed 33% of a single 146GB, 15K RPM FC drive.
  • For 10TB (Day 1 plus future growth), we would need 410 drives of 146GB, 15K RPM. A rough derivation is sketched below.
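A rough sketch of the drive-count arithmetic (my own reconstruction from the deck's numbers; it assumes RAID 1 mirroring and the 33% usable-surface limit found in testing):

```python
import math

# Approximate drive count for 10TB usable under RAID 1 with 33% active surface per drive.
drive_capacity_gb = 146
usable_fraction = 0.33        # active surface limit from the RAID tests
required_usable_gb = 10_000   # 10TB usable (Day 1 + growth)

usable_per_drive_gb = drive_capacity_gb * usable_fraction        # ~48-49 GB per drive
mirrored_pairs = math.ceil(required_usable_gb / usable_per_drive_gb)
drives = mirrored_pairs * 2   # RAID 1: two drives per mirrored pair

print(f"Usable per drive: {usable_per_drive_gb:.0f} GB")
print(f"Drives required: {drives} (close to the 410 quoted in the deck, plus ~40 spares)")
```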

Match requirements to Array and Drive capabilities

Now that we have established both Array and Drive capabilities, we can finally match these to our requirements.


Requirements to Array & Drive Capabilities

Feature                 | Requirement                               | Typical Storage Design            | Actual Minimum Requirement                                                                     | Recommended                                                                | Notes
Net Bandwidth Consumed  | 1.8 GB/sec (Today), 3.6 GB/sec (Tomorrow) | 1 Hitachi Modular Array AMS1000   | 5 AMS1000 arrays                                                                               | 8 AMS1000 arrays is preferable                                             | 1 AMS1000 = 750 MB/sec; 5 AMS1000 = 3.6 GB/sec; 8 AMS1000 = 5.8 GB/sec
IOPS, Latency, Capacity | > 8375 IOPS; 5ms to 30ms; 10TB            | 146GB, 15K RPM drives; 165 drives | 146GB, 15K RPM drives; 450 drives (410 + 40 spares); 90 drives/array; 2TB/array (usable space) | 450 drives (410 + 40 spares); 60 drives/array; 1.3TB/array (usable space)  | 410 drives of 146GB, 15K RPM are needed to meet the performance and capacity requirements; 450 drives including spares
Max IO Size             | 1024K                                     | 1024K                             | 1024K                                                                                          | -                                                                          | The AMS1000 can meet the required 1024K IOP size
Cache                   | Determines write performance              | 16GB                              | 16GB                                                                                           | -                                                                          | Maximum cache is 16GB
Raid Levels             | -                                         | RAID10                            | RAID1                                                                                          | RAID1                                                                      | RAID1 (on a Linux system) performed better than RAID10
Stripe Width            | 1024K                                     | 512K                              | NA                                                                                             | -                                                                          | -

Final Thoughts

  • If we had followed the capacity method of allocating storage to the instance, a single AMS1000 would have been sufficient. But as we discovered, we require at least 5 arrays to meet the requirements.
  • Similarly, the initial recommendation was 165 drives of 146GB. However, we determined that a minimum of 410 drives is required to meet the performance requirements. Out of the 146GB of available capacity in each drive, only 49GB is really usable.
  • RAID 1 outperforming RAID 10 is a surprise, but this may not be the case on all platforms. The choice of operating system, volume management and other configuration aspects do influence the final outcome.

The Future is Bright

As always, low price does not equal low cost. If you design the environment appropriately, you will spend more initially, but the rewards are plentiful. Modular arrays are continuously improving, and the new AMS2500 from Hitachi has an internal bandwidth capability of 8GB/sec (simplex), so a single AMS2500 would suffice for our needs from a bandwidth perspective. Solid state devices appear to be gaining momentum in the mainstream market and, hopefully, within the next 2 years HDDs will be history.

Questions ?

