SAN Disk Metrics


Slide 1

SAN Disk Metrics: Measured on Sun Ultra & HP PA-RISC Servers and StorageWorks MAs & EVAs, using iozone V3.152

Slide 2

Current Situation
UNIX external storage has moved to SAN
Oracle data file sizes: 1 to 36 GB (R&D)
Oracle servers are overwhelmingly Sun "entry level"
HPQ StorageWorks: 24 MAs, 2 EVAs
2Q03 SAN LUN restructuring uses RAID 5 only, but Oracle DBAs continue to request RAID 1+0
A roadmap for the future is required

Slide 3

Purpose Of Filesystem Benchmarks
Find best performance: storage, server, HW choices, OS, and filesystem
Find best price/performance and contain costs
Replace "feelings" with factual analysis
Continue Abbott UNIX benchmarks: filesystem, disk, and SAN benchmarking started in 1999

Slide 4

Goals
Measure current capabilities
Find bottlenecks
Find best price/performance
Set cost expectations for customers
Provide a menu of configurations
Find the simplest configuration
Satisfy Oracle DBA expectations
Harmonize Abbott Oracle filesystem configuration
Create a road map for data storage

Slide 5

Preconceptions
UNIX SysAdmins: RAID 1+0 does not dramatically outperform RAID 5; distribute busy filesystems among LUNs; at least 3+ LUNs should be used for Oracle
DBAs: RAID 1+0 is required for production; I paid for it, so I should get it; filesystem expansion on demand

Slide 6

Oracle Server Resource Needs in 3D (chart with axes CPU, Memory, and I/O; labels: Web serving: small, integrated system; Database/CRM/ERP: storage)

Slide 7

Sun Servers for Oracle Databases
Sun UltraSPARC UPA Bus Entry Level Servers:
Ultra 2, 2x300 MHz UltraSPARC-II, Sbus, 2 GB
220R, 2x450 MHz UltraSPARC-II, PCI, 2 GB
420R, 4x450 MHz UltraSPARC-II, PCI, 4 GB
Enterprise Class Sun UPA Bus Servers:
E3500, 4x400 MHz UltraSPARC-II, UPA, Sbus, 8 GB
Sun UltraSPARC Fireplane (Safari) Entry Level Servers:
280R, 2x750 MHz UltraSPARC-III, Fireplane, PCI, 8 GB
480R, 4x900 MHz UltraSPARC-III, Fireplane, PCI, 32 GB
V880, 8x900 MHz UltraSPARC-III, Fireplane, PCI, 64 GB
Other UNIX:
HP L1000, 2x450 MHz PA-RISC, Astro, PCI, 1024 MB

Slide 8

Oracle UNIX Filesystems
Cooperative standard between UNIX SysAdmins and R&D DBAs: 8 filesystems in 3 LUNs
/exp/array.1/oracle/<instance>: binaries & config
/exp/array.2-6/oradb/<instance>: data, index, temp, etc.
/exp/array.7/oraarch/<instance>: archive logs
/exp/array.8/oraback/<instance>: export, backup (RMAN)
Basic LUN usage: LUN 1: array.1-3; LUN 2: array.4-6; LUN 3: array.7-8 (initially on the "far" Storage Node)
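A minimal /etc/vfstab-style sketch of how the mount points above could map onto the first LUN; the device names and slice layout are placeholders, since the slides give only the mount points.

    # Hypothetical Solaris /etc/vfstab entries for LUN 1 (array.1-3); devices are placeholders
    /dev/dsk/c2t0d0s1  /dev/rdsk/c2t0d0s1  /exp/array.1  ufs  2  yes  logging
    /dev/dsk/c2t0d0s2  /dev/rdsk/c2t0d0s2  /exp/array.2  ufs  2  yes  logging
    /dev/dsk/c2t0d0s3  /dev/rdsk/c2t0d0s3  /exp/array.3  ufs  2  yes  logging
    # LUN 2 (array.4-6) and LUN 3 (array.7-8) follow the same pattern on their own devices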

Slide 9

StorageWorks SAN Storage Nodes
StorageWorks: DEC -> Compaq -> HPQ; a traditional DEC shop; initial SAN equipment vendor
Brocade switches sold under the StorageWorks brand
Only vendor with complete UNIX coverage (2000): Sun, HP, SGI, Tru64 UNIX, Linux; EMC, Hitachi, etc. couldn't match the UNIX coverage
Enterprise Modular Array (MA), a "Stone Soup" SAN: buy the controller, then 2 to 6 disk shelves, then disks; 2-3 disk shelf configs led to problem RAIDsets, which were finally reconfigured in 2Q2003
Enterprise Virtual Array (EVA): next generation

Slide 10

MA 8000

Slide 11

EVA

Slide 12

2Q03 LUN Restructuring (2nd Gen SAN)
"Far" LUNs pulled back to the "near" Data Center
6 disk, 6 shelf MA RAID 5 RAIDsets
LUNs are partitioned from RAIDsets and sized as multiples of the disk size
Multiple LUNs come from different RAIDsets; busy filesystems are distributed among LUNs
Server and Storage Node SAN fabric connections are mated to a common switch
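As a worked illustration of the sizing rule above, assuming the 6 x 36 GB SCSI disk RAIDset described on Slide 27 (the specific LUN split is hypothetical):

    # usable RAID 5 capacity = (disks - 1) x disk size = (6 - 1) x 36 GB = 180 GB
    # LUNs are then carved in multiples of the 36 GB disk size, for example:
    echo "36 + 72 + 72" | bc    # 180 (GB): three LUNs from one RAIDset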

Slide 13

Results – Generalizations
Read performance is the server performance baseline: a basic measure of system bus, memory/cache, & HBA, and a good assessment of differing server I/O potential
Random write shows the largest variations in performance: filesystem & Storage Node selection are the dominant variables
Memory & cache are important: processor cache, system I/O buffers, and virtual memory each support performance at different data stream sizes
More hardware, OS, & filesystem choices remain to be evaluated

Slide 14

IOZONE Benchmark Utility
File operations: sequential write & re-write, sequential read & re-read, random read & random write
Others are available: record rewrite, read backwards, read strided, fread/fwrite, pread/pwrite, aio_read/aio_write
File & record sizes: ranges or individual sizes may be specified
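A representative iozone invocation covering the operations above; the target path and sizes are illustrative, not taken from the deck.

    # -i selects tests: 0 = write/re-write, 1 = read/re-read, 2 = random read/write
    # -s sets the file size, -r the record size, -f the test file on the filesystem under test
    iozone -i 0 -i 1 -i 2 -s 1g -r 64k -f /exp/array.2/iozone.tmp
    # or sweep a range of file and record sizes automatically, up to a 4 GB file:
    iozone -a -g 4g -f /exp/array.2/iozone.tmp -b results.xls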

Slide 15

IOZONE – Output: UFS Seq Read

Slide 16

IOZONE – UFS Sequential Read

Slide 17

IOZONE – UFS Random Read

Slide 18

IOZONE – UFS Sequential Write

Slide 19

IOZONE – UFS Random Write

Slide 20

Results – Server Memory
Cache influences small data stream performance
Memory (I/O buffers and virtual memory) influences larger data stream performance
Large data streams require large memory; beyond this limit => synchronous performance

Slide 21

Results – Server I/O Potential
System bus: Sun UPA replaced by SunFire
Peripheral bus: PCI versus Sbus (older Sun only)
Sbus: peak bandwidth (25 MHz/64-bit) ~200 MB/sec; actual throughput ~50-60 MB/sec (~25+%)
PCI (Peripheral Component Interconnect): peak bandwidth (66 MHz/64-bit) ~530 MB/sec; actual throughput ~440 MB/sec (~80+%)
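The peak figures above follow directly from clock rate times bus width (64 bits = 8 bytes); a quick check:

    # peak bandwidth = bus clock x bus width
    echo "25 * 8" | bc    # Sbus: 25 MHz x 8 bytes = 200 MB/sec peak
    echo "66 * 8" | bc    # PCI:  66 MHz x 8 bytes = 528 MB/sec peak (~530 MB/sec on the slide)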

Slide 22

Server – Sun, UPA, SBus

Slide 23

Server – Sun Enterprise, Gigaplane/UPA, SBus

Slide 24

Server – Sun, UPA, PCI

Slide 25

Server – HP, Astro Chipset, PCI

Slide 26

Server – Sun, Fireplane, PCI

Slide 27

Results – MA versus EVA
MA RAID 1+0 & RAID 5 versus EVA RAID 5
Sequential write: EVA RAID 5 is 30-40% faster than MA RAID 1+0, and up to 2x faster than MA RAID 5
Random write: EVA RAID 5 is 10-20% slower than MA RAID 1+0, and up to 4x faster than MA RAID 5
Servers were SunFire 480Rs, using UFS+logging
EVA: 12 x 72 GB FCAL disk RAID 5 partitioned LUN; MA: 6 x 36 GB SCSI disk RAIDset

Slide 28

RAID 0 RAID 1

Slide 29

RAID 3 RAID 5

Slide 30

RAID 1+0 RAID 0+1

Slide 31

Results – MA RAIDsets
Best: 3 mirror, 6 shelf RAID 1+0
3 mirror RAID 1+0 on 2 shelves yields only 80% of the 6 shelf version
2 disk mirror (2 shelves) yields 50%

Slide 32

Results – MA RAIDsets
Best: 3 mirror, 6 shelf RAID 1+0
6 disk, 6 shelf RAID 5: sequential write 75-80%; random write 25-50% (2 to 4 times slower)
3 disk, 3 shelf RAID 5: sequential write 40-60%; random write 25-60%; can beat 6 disk RAID 5 on random write

Slide 33

Results – LUNs from Partitions
3 simultaneous writers to partitions of the same RAIDset: write performance (sequential or random) is less than half of the no-contention performance
No control test was performed with 3 servers writing to 3 different RAIDsets of the same Storage Node
Where is the bottleneck? RAIDset, SCSI channels, or controllers?
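A minimal sketch of the contention test described above: three concurrent iozone sequential-write runs against filesystems carved from the same RAIDset (paths and sizes are illustrative).

    n=1
    for fs in /exp/array.2 /exp/array.3 /exp/array.4; do
        # -i 0 = sequential write/re-write; each writer gets its own test file and log
        iozone -i 0 -s 2g -r 64k -f $fs/iozone.tmp > /tmp/iozone.$n.log 2>&1 &
        n=`expr $n + 1`
    done
    wait    # then compare per-writer throughput against a single-writer baseline run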

Slide 34

Results – Fabric Locality
In production, "far" LUNs underperform: monitoring "sar" disk data, "far" LUN filesystems are 4 to 10 times slower
Fabric-based management interrupts are drawn into the server when any LUNs are not local
This round of testing did not show wide variations in performance whether the server was connected to its Storage Node's SAN switch or 3/4 hops away
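For reference, a sketch of the kind of per-device monitoring referred to above; the intervals and counts are arbitrary.

    # Solaris per-device activity; watch the wait/service-time columns for the "far" LUN devices
    sar -d 5 12      # 12 samples at 5-second intervals
    iostat -x 5      # alternative extended per-device view, including service times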

Slide 35

Results – UFS Options
Logging (the journaling UFS filesystem): recommended on large filesystems to avoid long-running "fsck"
Under Solaris 8, logging imposes a 10% write performance penalty; Solaris 9 advertises a considerably more efficient logging algorithm
Forcedirectio: no useful testing without an Oracle workload
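Both options are ordinary UFS mount options on Solaris; a quick sketch with a placeholder device and mount point:

    # enable UFS logging on a filesystem
    mount -F ufs -o logging /dev/dsk/c2t0d0s6 /exp/array.2
    # bypass the page cache, as one would for an Oracle-workload test
    mount -F ufs -o forcedirectio /dev/dsk/c2t0d0s6 /exp/array.2
    # either option can be made persistent in the mount-options field of /etc/vfstab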

Slide 36

Results – UFS Tuning
bufhwm: default 2% of memory, max 20% of memory; extending the I/O buffer effect improves write performance on moderately large files
ufs:ufs_LW & ufs:ufs_HW: Solaris 7 & 8 defaults are 256K & 384K bytes; Solaris 9 defaults are 8M & 16M bytes; more data is held in system buffers before being flushed
fsflush() effect on "sar" data: large service times
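These tunables are set in /etc/system and take effect after a reboot; the values below are illustrative, not recommendations from the deck (bufhwm is specified in KB, ufs_LW/ufs_HW in bytes).

    set bufhwm=20480          # raise the buffer cache high-water mark to 20 MB
    set ufs:ufs_LW=8388608    # deferred-write low-water mark, 8 MB
    set ufs:ufs_HW=16777216   # deferred-write high-water mark, 16 MB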

Slide 37

Results – VERITAS VxFS
Outstanding write performance; VxFS was tested only on MA 6-disk RAID 5
Versus UFS on MA 6-disk RAID 5: sequential write VxFS is 15 times faster; random write VxFS is 40 times faster
Versus UFS on MA 6-disk RAID 1+0: sequential write VxFS is 10 times faster; random write VxFS is 10 times faster
Versus UFS on EVA 12-disk RAID 5: sequential write VxFS is 7 times faster; random write VxFS is 12 times faster
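For context, creating and mounting a VxFS filesystem on Solaris looks like the following; the disk group, volume, and mount point names are placeholders.

    # make a VxFS filesystem on an existing VxVM volume, then mount it
    mkfs -F vxfs /dev/vx/rdsk/oradg/oravol01
    mount -F vxfs /dev/vx/dsk/oradg/oravol01 /exp/array.2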

Slide 38

Results – Random Write
Hardware-only Storage Node performance: MA RAID 1+0 = EVA RAID 5; EVA RAID 5 pro rata cost is similar to MA RAID 5; RAID 1+0 is not cost effective
An improved filesystem is your choice: order-of-magnitude better performance, and less expensive
Server memory is still important for large data streams

Slide 39

Random Write: UFS, MA, RAID 5

Slide 40

Random Write: UFS, MA, RAID 1+0

Slide 41

Random Write: UFS, EVA, RAID 5

Slide 42

Random Write: VxFS, MA, RAID 5

Slide 43

Closer Look: VxFS versus UFS
Graphical comparison: Sun servers given RAID 5 LUNs
Configurations: UFS EMA, UFS EVA, VxFS EMA, VxFS EVA
File operations: sequential read, random read, sequential write, random write

Slide 44

Sequential Read

Slide 45

Random Read

Slide 46

Sequential Write

Slide 47

Random Write

Slide 48

Results – VERITAS VxFS
Biggest performance gains; everything else is of secondary importance
Memory overhead for VxFS dominates sequential write of small files; needs further investigation
VxFS & EVA RAID 1+0 not measured: don't mention what you don't want to offer

Slide 49

Implications – VERITAS VxFS
Where is the bottleneck? Changes at the Storage Node give modest increases in performance; changes inside the server dramatically increase performance
The bottleneck is in the server, not the SAN
The relative cost is just good fortune: changing the filesystem is much less expensive

Slide 50

Results – Bottom Line
Bottleneck identified: it's the server, not storage
VERITAS VxFS: use it on UNIX servers
RAID 1+0 is not cost effective: VxFS is much less expensive (Tier 1 servers), and server memory is less expensive than mirrored disk
Operating system I/O buffers: configure them as large as possible

Slide 51

Price & Performance: Cost Of Computing
