To Cache or Not to Cache? RAID 6 or RAID 10? With LSI MegaRAID CacheCade SSD Cache Drives
Submitted by Justin on Sat, 08/30/2014 - 21:21
I was curious: is it worthwhile having SSD cache drives in front of a SAS array? I also wanted to see the performance difference between RAID 6 and RAID 10. I built my Poor Man's Fibre Channel SAN and have been using it pretty successfully, but I wondered just how much an SSD cache would add, or whether RAID 6 would suit me better than RAID 10. There are no guarantees with any of this and your results may differ, but here are mine. It is also worth noting that the cache drives are SATA SSDs, not SAS SSDs. Maybe one of these days I will have enough money to buy a couple of SAS SSDs, or be able to beg or borrow a pair.
Let's look at my configuration:
4 - 2TB 7.2K 6G SAS drives in RAID 10 (Mirror/Stripe)
8 - 450GB 15K 6G SAS drives in RAID 10 (Mirror/Stripe)
2 - 250GB SSD disks in RAID 1 (Mirror) for LSI MegaRAID CacheCade
Disks presented via 2 - 4Gb Fibre Channel connections (using multipath) to the Windows 2012 R2 server
2 separate Brocade 200E Fibre Channel switches (one for each port)
LSI MegaRAID 9260-8i 512MB SAS RAID controller with CacheCade key
Benchmarks were done with ATTO Disk Benchmark v2.47, which is a free tool.
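For reference, here is roughly how the CacheCade device in the configuration above gets built. This is a minimal sketch using LSI's storcli utility; the controller number and enclosure:slot IDs are assumptions, not my actual values, so adjust for your own hardware (MegaCli or the MegaRAID Storage Manager GUI can do the same job):

    # Create a RAID 1 CacheCade virtual drive from the two SSDs
    # (controller 0, enclosure 252, slots 6 and 7 are assumed placeholders)
    storcli /c0 add vd cachecade type=raid1 drives=252:6-7 wb

    # Confirm the CacheCade VD shows up alongside the regular virtual drives
    storcli /c0/vall show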
What I found out, starting with the obvious:
It is hard to improve on the performance of 8 - 450GB 15K 6G SAS drives in RAID 10 (Mirror/Stripe) (1.6TB), so there is really no benefit to putting an SSD cache in front of them. In fact, it actually degraded performance, which was not a surprise at all:
[Benchmark screenshots: Without SSD Cache | With SSD Cache]
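If you want to repeat the with/without comparison, you don't have to rebuild anything between runs: a CacheCade pool can be attached to or detached from a virtual drive on the fly. A sketch with storcli, assuming virtual drive 0 on controller 0:

    storcli /c0/v0 set ssdcaching=on    # run the cached benchmark pass
    storcli /c0/v0 set ssdcaching=off   # detach the cache for the uncached pass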
Now let's look a little deeper at the 15K drives and do the same thing with 6 drives in RAID 6 (the same amount of storage, 1.6TB). I did not know what to expect from this, so I ran the test a couple of times to be sure. With no cache, reads and writes seemed slightly better than the RAID 10 runs. With caching, writes were comparable but reads were slow. Here are the results:
[Benchmark screenshots: Without SSD Cache | With SSD Cache]
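For completeness, the 6-drive RAID 6 virtual drive for a run like this is created along the same lines, again a storcli sketch with assumed controller and enclosure:slot numbers:

    # RAID 6 across six of the 15K SAS drives (slots 0-5 are placeholders)
    storcli /c0 add vd type=raid6 drives=252:0-5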
Now let's look at the same thing with 8 drives in RAID 6 (the same number of disks, but more storage: 2.45TB). Again, I did not know what to expect. Without a cache, writes were comparable to the 6-drive results above, but we lost some read performance; I am not really sure why. With a cache, we actually got the opposite: reads were better, writes were not... so I added a third test using Write Through (no write caching). I ran this test more than a couple of times just to be sure, because the results did not make sense, but here they are:
[Benchmark screenshots: Without SSD Cache | With SSD Cache (Write Back) | With SSD Cache (Write Through)]
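A note on the Write Back vs. Write Through runs: no rebuild is needed for these either. On these controllers the per-virtual-drive controller write cache policy can be flipped with storcli (virtual drive 0 on controller 0 assumed below), while the CacheCade pool's own write policy is the wb/wt parameter given when the cachecade VD is created, as in the earlier sketch:

    storcli /c0/v0 set wrcache=wt   # Write Through: writes acknowledged only after hitting disk
    storcli /c0/v0 set wrcache=wb   # Write Back: writes acknowledged from controller cache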
Where the cache did make a difference is on the 4 - 2TB 7.2K 6G SAS drives in RAID 10 (Mirror/Stripe). I did not have enough disks to test this with RAID 6, so I was only able to test RAID 10:
[Benchmark screenshots: Without SSD Cache | With SSD Cache]
While I have not yet run this test on SATA drives, I will throw a couple in shortly and run the same test to see how they perform. I expect that is where I will see the most benefit.
Other observations:
While I did not think to capture the results, I did notice that when I was doing a large copy (large VHDX files) from the 15K SAS drives in RAID 10 to the 7.2K SAS drives with caching enabled on the 7.2K array, it copied incredibly fast... until the cache drives filled up, and then it was incredibly slow while the array caught up, which just makes sense.
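That cliff is easy to estimate on the back of an envelope. A sketch with made-up round numbers (none of these figures are from my runs): if the copy fills a 250GB cache at roughly 400 MB/s while the 7.2K array destages at roughly 150 MB/s, the cache gains data at the difference, about 250 MB/s, so the fast phase lasts on the order of:

    # seconds of fast copy = cache size / (inflow - destage rate), all numbers assumed
    echo $(( 250 * 1024 / (400 - 150) ))   # about 1024 seconds before the cache is full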
Stay tuned... next weekend, if I have the time, I will pit the SSD drives against an equal number of 15K SAS drives and see the outcome!