SAN Disk Metrics Measured on Sun Ultra & HP PA-RISC Servers, StorageWorks MAs & EVAs, utilizing iozone V3.152Slide 2
Current Situation UNIX External Storage has moved to SAN Oracle Data File Sizes: 1 to 36 GB (R&D) Oracle Servers are overwhelmingly Sun "Entry Level" HPQ StorageWorks: 24 MAs, 2 EVAs 2Q03 SAN LUN rebuilding using RAID 5 only Oracle DBAs keep requesting RAID 1+0 Roadmap for future - requiredSlide 3
Purpose Of Filesystem Benchmarks Find Best Performance Storage, Server, HW choices, OS, and Filesystem Find Best Price/Performance Contain Costs Replace "Feelings" with Factual Analysis Continue Abbott UNIX Benchmarks Filesystems, Disks, and SAN Benchmarking started in 1999Slide 4
Goals Measure Current Capabilities Find Bottlenecks Find Best Price/Performance Set Cost Expectations For Customers Provide a Menu of Configurations Find Simplest Configuration Satisfy Oracle DBA Expectations Harmonize Abbott Oracle Filesystem Configuration Create a Road Map for Data StorageSlide 5
Preconceptions UNIX SysAdmins RAID 1+0 does not overwhelmingly outperform RAID 5 Distribute Busy Filesystems among LUNs At least 3 LUNs should be used for Oracle DBAs RAID 1+0 is Required for Production I Paid For It, So I Should Get It Filesystem Expansion On DemandSlide 6
Oracle Server Resource Needs in 3D: CPU, Memory, I/O Web serving: small, integrated system (CPU) Database/CRM/ERP: StorageSlide 7
Sun Servers for Oracle Databases Sun UltraSPARC UPA Bus Entry Level Servers Ultra 2, 2x300 MHz Ultra SPARC-II, Sbus, 2 GB 220R, 2x450 MHz Ultra SPARC-II, PCI, 2 GB 420R, 4x450 MHz Ultra SPARC-II, PCI, 4 GB Enterprise Class Sun UPA Bus Servers E3500, 4x400 MHz Ultra SPARC-II, UPA, Sbus, 8 GB Sun UltraSPARC Fireplane (Safari) Entry Level Servers 280R, 2x750 MHz Ultra SPARC-III, Fireplane, PCI, 8 GB 480R, 4x900 MHz Ultra SPARC-III, Fireplane, PCI, 32 GB V880, 8x900 MHz Ultra SPARC-III, Fireplane, PCI, 64 GB Other UNIX HP L1000, 2x450 PA-RISC, Astro, PCI, 1024 MBSlide 8
Oracle UNIX Filesystems Cooperative Standard among UNIX and R&D DBAs 8 Filesystems in 3 LUNs /exp/array.1/oracle/<instance> binaries & config /exp/array.2-6/oradb/<instance> data, index, temp, etc… /exp/array.7/oraarch/<instance> archive logs /exp/array.8/oraback/<instance> export, backup (RMAN) Basic LUN Usage Lun1: array.1-3 Lun2: array.4-6 Lun3: array.7-8 (Initially on "far" Storage Node)Slide 9
StorageWorks SAN Storage Nodes StorageWorks: DEC -> Compaq -> HPQ A traditional DEC Shop Initial SAN equipment vendor Brocade Switches sold under the StorageWorks brand Only vendor with complete UNIX coverage (2000) Sun, HP, SGI, Tru64 UNIX, Linux EMC, Hitachi, etc… couldn't match UNIX coverage Enterprise Modular Array (MA) – "Stone Soup" SAN Buy the controller, then 2 to 6 disk shelves, then disks 2-3 disk shelf configs led to problem RAIDsets, which were finally reconfigured in 2Q2003 Enterprise Virtual Array (EVA) – Next GenerationSlide 10
MA 8000Slide 11
2Q03 LUN Restructuring – 2nd Gen SAN "Far" LUNs pulled back to "near" Data Center 6 disk, 6 shelf MA RAID 5 RAIDsets LUNs are partitioned from RAIDsets LUNs are sized as multiples of disk size Multiple LUNs from different RAIDsets Busy filesystems are distributed among LUNs Server and Storage Node SAN Fabric Connections mated to common switchSlide 13
Results – Generalizations Read Performance - Server Performance Baseline Basic Measure of System Bus, Memory/Cache, & HBA Good assessment of disparate server I/O potential Random Write - Largest Variations in Performance Filesystem & Storage Node Selection Dominant Variables Memory & Cache – Important Processor Cache, System I/O Buffers, Virtual Memory All affect performance at different data stream sizes More Hardware, OS, & Fsys choices to be assessedSlide 14
IOZONE Benchmark Utility File Operations Sequential Write & Re-write Sequential Read & Re-read Random Read & Random Write Others are available: record rewrite, read backwards, read strided, fread/fwrite, pread/pwrite, aio_read/aio_write File & Record Sizes Ranges or individual sizes may be specifiedSlide 15
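As a command-line sketch: the exact flags used for these benchmarks are not recorded on the slides, so the options below are standard iozone switches shown purely as an assumed illustration.

```shell
# Assumed illustration only -- the slides do not record the flags actually used.
# -a               automatic mode: sweep the tested file and record sizes
# -g 2g            cap the maximum file size at 2 GB
# -i 0 -i 1 -i 2   select tests: write/re-write, read/re-read, random read/write
# -f <path>        place the test file on the filesystem under test
# -b <file>        write spreadsheet-format output for graphing
iozone -a -g 2g -i 0 -i 1 -i 2 -f /exp/array.2/iozone.tmp -b results.wks
```

The `-b` spreadsheet output is what makes surface plots like the following slides straightforward to produce.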
IOZONE – Output: UFS Seq ReadSlide 16
IOZONE – UFS Sequential ReadSlide 17
IOZONE – UFS Random ReadSlide 18
IOZONE – UFS Sequential WriteSlide 19
IOZONE – UFS Random WriteSlide 20
Results – Server Memory Cache Affects small data stream performance Memory - I/O buffers and virtual memory Affects larger data stream performance Large Data Streams require Large Memory Past this limit => Synchronous executionSlide 21
Results – Server I/O Potential System Bus Sun: UPA replaced by SunFire Peripheral Bus: PCI versus Sbus Sbus (Older Sun only) Peak Bandwidth (25 MHz/64-bit) ~200 MB/sec Actual Throughput ~50-60 MB/sec (~25+%) PCI (Peripheral Component Interconnect) Peak Bandwidth (66 MHz/64-bit) ~530 MB/sec Actual Throughput ~440 MB/sec (~80+%)Slide 22
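The bus-efficiency percentages above are simply actual throughput divided by peak bandwidth; a quick check using the slide's approximate figures reproduces them:

```shell
# Ratio of actual throughput to peak bandwidth, from the slide's figures:
# Sbus ~55 of ~200 MB/sec, PCI (66 MHz/64-bit) ~440 of ~530 MB/sec.
awk 'BEGIN { printf "Sbus: %d%%  PCI: %d%%\n", 55/200*100, 440/530*100 }'
# prints: Sbus: 27%  PCI: 83%
```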
Server – Sun, UPA, SBusSlide 23
Server – Sun Enterprise, Gigaplane/UPA, SBusSlide 24
Server – Sun, UPA, PCISlide 25
Server – HP, Astro Chipset, PCISlide 26
Server – Sun, Fireplane, PCISlide 27
Results – MA versus EVA MA RAID 1+0 & RAID 5 versus EVA RAID 5 Sequential Write EVA RAID 5 is 30-40% faster than MA RAID 1+0 EVA RAID 5 is up to 2x faster than MA RAID 5 Random Write EVA RAID 5 is 10-20% slower than MA RAID 1+0 EVA RAID 5 is up to 4x faster than MA RAID 5 Servers were SunFire 480Rs, using UFS+logging. EVA: 12 x 72 GB FCAL Disk RAID 5 partitioned LUN MA: 6 x 36 GB SCSI Disk RAIDsetSlide 28
RAID 0 RAID 1Slide 29
RAID 3 RAID 5Slide 30
RAID 1+0 RAID 0+1Slide 31
Results – MA RAIDsets Best: 3 mirror, 6 shelf RAID 1+0 3 mirror RAID 1+0 on 2 shelves yields only 80% of the 6 shelf version 2 disk mirror (2 shelves) yields halfSlide 32
Results – MA RAIDsets Best: 3 mirror, 6 shelf RAID 1+0 6 disk, 6 shelf RAID 5: Sequential Write: 75-80% Random Write: 25-50% (2 to 4 times slower) 3 disk, 3 shelf RAID 5: Sequential Write: 40-60% Random Write: 25-60% Can beat 6 disk RAID 5 on random writeSlide 33
Results – LUNs from Partitions 3 Simultaneous Writers Partitions of same RAIDset Write performance (sequential or random) Less than half of no-contention performance No control test performed: 3 servers writing to 3 different RAIDsets of the same Storage Node Where is the Bottleneck? RAIDset, SCSI channels, or Controllers?Slide 34
Results – Fabric Locality In production, "far" LUNs underperform Monitoring "sar" disk data, "far" LUN filesystems are 4 to 10 times slower. Fabric-based service interruptions are drawn into the server when any LUNs are not local. This round of testing did not show wide variations in performance whether the server was connected to its Storage Node's SAN Switch, or 3/4 hops away.Slide 35
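The "sar" comparison above boils down to the ratio of average service times (avserv) between near and far LUNs. A minimal sketch, using hypothetical numbers invented to mirror the observed 4-10x slowdown (not actual measurements from these systems):

```shell
# Hypothetical "sar -d"-style data -- invented to illustrate the far-LUN
# slowdown; only the avserv (average service time, ms) column is used.
cat > sar_d_sample.txt <<'EOF'
device  %busy  avque  r+w/s  blks/s  avwait  avserv
ssd10   12     0.3    45     720     0.0     6.2
ssd42   78     4.1    44     704     18.5    48.0
EOF
# ssd10 = "near" LUN, ssd42 = "far" LUN; slowdown = far avserv / near avserv
awk 'NR==2 {near=$7} NR==3 {far=$7} END {printf "%.1fx slower\n", far/near}' sar_d_sample.txt
# prints: 7.7x slower
```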
Results – UFS Options Logging The journaling UFS Filesystem Advised on large filesystems to avoid long-running "fsck". Under Solaris 8, logging imposes a 10% write performance penalty. Solaris 9 advertises that its logging algorithm is considerably more efficient. Forcedirectio No useful testing without an Oracle workloadSlide 36
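For reference, UFS logging is enabled with a mount option; a sketch, where the device and mount point are placeholders rather than the benchmark systems' actual configuration:

```shell
# Placeholder device/mount point -- not the actual benchmark configuration.
mount -F ufs -o remount,logging /dev/dsk/c2t0d0s6 /exp/array.2
# or persistently, in the options field of the /etc/vfstab entry:
# /dev/dsk/c2t0d0s6  /dev/rdsk/c2t0d0s6  /exp/array.2  ufs  2  yes  logging
```

The `forcedirectio` option goes in the same options field, which is why it is easy to test once a representative Oracle workload exists.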
Results – UFS Tuning Bufhwm: Default 2% of memory, Max 20% of memory Extends I/O Buffer effect; improves write performance on moderately large files Ufs:ufs_LW & ufs:ufs_HW Solaris 7 & 8: 256K & 384K bytes Solaris 9: 8M & 16M bytes More data is held in system buffers before being flushed. Fsflush() effect on "sar" data: large service timesSlide 37
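These tunables are set in /etc/system. A sketch: the parameter names are the standard Solaris ones, but the values below are illustrative, not the settings used in these tests.

```
* Illustrative values only -- not the settings used in these benchmarks.
* Raise the buffer cache high-water mark (bufhwm is in KB).
set bufhwm=8192
* Raise the UFS write-throttle low/high water marks (bytes),
* matching the Solaris 9 defaults of 8M/16M.
set ufs:ufs_LW=8388608
set ufs:ufs_HW=16777216
```

Changes to /etc/system take effect at the next reboot.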
Results – VERITAS VxFS Outstanding Write Performance VxFS (only on MA 6-disk RAID 5) versus: UFS on MA 6-disk RAID 5 Sequential Write VxFS is 15 times faster Random Write VxFS is 40 times faster UFS on MA 6-disk RAID 1+0 Sequential Write VxFS is 10 times faster Random Write VxFS is 10 times faster UFS on EVA 12-disk RAID 5 Sequential Write VxFS is 7 times faster Random Write VxFS is 12 times fasterSlide 38
Results – Random Write Hardware-only Storage Node Performance MA 1+0 = EVA RAID 5 EVA RAID 5 pro rata cost similar to MA RAID 5 RAID 1+0 is Not Cost Effective Improved Filesystem is Your Choice Order-of-Magnitude Better Performance Less expensive Server Memory Still Is Important for Large Data StreamsSlide 39
Random Write: UFS, MA, RAID 5Slide 40
Random Write: UFS, MA, RAID 1+0Slide 41
Random Write: UFS, EVA, RAID 5Slide 42
Random Write: VxFS, MA, RAID 5Slide 43
Closer Look: VxFS versus UFS Graphical Comparison: Sun Servers given RAID 5 LUNs UFS EMA UFS EVA VxFS EMA VxFS EVA File Operations Sequential Read Random Read Sequential Write Random WriteSlide 44
Sequential ReadSlide 45
Random ReadSlide 46
Sequential WriteSlide 47
Random WriteSlide 48
Results – VERITAS VxFS Biggest Performance gains Everything else is of secondary importance Memory Overhead for VxFS Dominates Sequential Write of small files Needs further investigation VxFS & EVA RAID 1+0 not measured Don't mention what you don't want to sellSlide 49
Implications – VERITAS VxFS Where is the Bottleneck? Changes at Storage Node Modest Increases in Performance Changes inside Server Dramatically Increase Performance The Bottleneck is in the Server, not the SAN The relative cost is just good fortune Changing the filesystem is much less expensiveSlide 50
Results – Bottom Line Bottleneck Identified It's the Server, not Storage VERITAS VxFS Use it on UNIX Servers RAID 1+0 is Not Cost Effective VxFS is much less expensive – Tier 1 servers Server Memory is cheaper than Mirrored Disk Operating System I/O Buffers Configure as large as practicalSlide 51
Price & Performance Cost Of Computing