Calling all RAID 0 gurus: I need RAID 0 help

I have a couple of questions, as I'm fairly new to setting up a RAID with a more serious card (Promise FastTrak SX4-M vs. the Intel ICH5R). I have a three-drive setup (3x Seagate Barracuda 160GB) and my operating system is going on it; I never paid much attention to the details until now.

The Promise card gives me the option of a 16, 32, or 64KB "stripe block". I formatted the array with a 64KB stripe size and ran HDTach, which gave me an average of about 110MB/s. That felt a little slow, since a single 400GB Seagate Barracuda achieves close to 70MB/s, so I went back to see if I had configured anything improperly. I don't think I did, as the Promise does not offer many options (stripe block: 16, 32, or 64; Fast Init: Y/N).

Also, the drives are detected as 159,999MB x 3 = 479,997MB, yet in Windows the array appears as a 444GB drive in My Computer, and when I right-click the drive and select Properties, it shows a capacity of 477 rather than 479. On my Intel RAID, when I had just two drives, the capacity showed up as 320 (2x 160).

Cliffs (Questions):
Should my 3-drive RAID 0 configuration be faster than 110MB/s? Are more drives in a RAID 0 array better for speed?
How do I interpret the 16, 32, and 64KB "stripe block" options the card gives me, and which should I choose for an OS drive?
How do I explain the missing space described above?

FYI - I am not setting up this RAID for redundancy; all I want out of it is speed. I have separate drives that I regularly back up to. In addition, I have adequate cooling on all drives and have never experienced a failure since aiming fans directly at them...
 
Why is everyone asking for gurus?
Let me give you a good piece of advice: never trust a guy who says he is a guru or knows everything about something.

I have never operated a RAID0 array in my life, but I can still try to answer your questions.

That RAID card is PCI, isn't it? If so, you won't be getting much more speed out of it. The standard 32-bit/33MHz PCI bus has a theoretical maximum transfer rate of 133MB/s, and real-world throughput is lower than that once bus overhead is accounted for. I'd guess your 110MB/s is about the practical limit.
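If you want to sanity-check that ceiling, here's a rough back-of-the-envelope calculation. The 33MHz clock and 32-bit width are the standard PCI figures; the ~80% efficiency factor is just an assumption to account for bus overhead, not a measured number:

```python
# Rough bandwidth ceiling for a card on plain 32-bit/33MHz PCI.
BUS_MHZ = 33.33      # PCI clock, in millions of cycles per second
BUS_BYTES = 4        # 32-bit bus transfers 4 bytes per cycle

theoretical = BUS_MHZ * BUS_BYTES   # ~133 MB/s theoretical peak
practical = theoretical * 0.80      # assumed ~80% efficiency after overhead

print(f"Theoretical PCI peak:  {theoretical:.0f} MB/s")
print(f"Assumed practical cap: {practical:.0f} MB/s")   # ~107 MB/s
```

That lands right around the 110MB/s HDTach reported, which is why adding more drives to the array won't help on this bus.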

Stripe size tells you how big the chunks are in which data is spread across the disks. A 16KB stripe means the first 16KB of your data goes on the first disk, the next 16KB on the second, the next 16KB on the third, and the 16KB after that back on the first.
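To make that layout concrete, here's a small sketch of the round-robin arithmetic a RAID 0 controller effectively performs (the function and variable names are mine for illustration, not anything from the Promise firmware):

```python
# Map a logical byte offset in a RAID 0 array to (disk index, offset on that disk).
STRIPE_SIZE = 64 * 1024   # 64KB stripe, as configured on the card
NUM_DISKS = 3             # three drives in the array

def locate(logical_offset):
    stripe_number = logical_offset // STRIPE_SIZE    # which stripe overall
    disk = stripe_number % NUM_DISKS                 # round-robin disk choice
    stripe_on_disk = stripe_number // NUM_DISKS      # how deep into that disk
    offset_on_disk = stripe_on_disk * STRIPE_SIZE + logical_offset % STRIPE_SIZE
    return disk, offset_on_disk

for kb in (0, 64, 128, 192):   # first four 64KB stripes
    print(f"{kb:>3}KB -> disk {locate(kb * 1024)[0]}")
# 0KB -> disk 0, 64KB -> disk 1, 128KB -> disk 2, 192KB -> disk 0 again
```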

For single-tasking systems, a smaller stripe size is generally better because it spreads a single process's disk access across as many disks as possible, giving you the best response times. For (very) multitasking systems, a larger stripe size is generally better because one disk request then uses only part of the array, leaving the rest of the drives free to serve other requests.
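You can see the tradeoff by counting how many disks a single request touches at each stripe size. The request sizes below are just illustrative examples:

```python
# How many of the 3 disks does one request touch, per stripe size?
import math

NUM_DISKS = 3

def disks_touched(request_kb, stripe_kb):
    # A request spanning N stripes hits min(N, NUM_DISKS) distinct disks
    # (assuming the request starts on a stripe boundary, for simplicity).
    stripes = math.ceil(request_kb / stripe_kb)
    return min(stripes, NUM_DISKS)

for stripe_kb in (16, 32, 64):
    for request_kb in (16, 64, 256):
        n = disks_touched(request_kb, stripe_kb)
        print(f"stripe {stripe_kb:>2}KB, request {request_kb:>3}KB -> {n} disk(s)")
```

With 16KB stripes, even a 64KB read engages all three disks; with 64KB stripes, that same read stays on one disk, leaving the other two free for other requests.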

The drive size numbers differing is probably a combination of the 1000-vs-1024 issue (drive makers count 1GB as 1,000,000,000 bytes, while Windows counts it as 1,073,741,824 bytes), rounding, RAID housekeeping (the controller may reserve space on the disks for information about the array), and filesystem overhead.
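As a rough check on the 1000-vs-1024 part alone (this deliberately ignores controller metadata and filesystem overhead, so it won't match the Windows numbers exactly):

```python
# Convert the BIOS-reported decimal megabytes to the binary GB Windows displays.
reported_mb = 159_999 * 3                  # 479,997 MB, as the card reports it
bytes_total = reported_mb * 1000 * 1000    # drive makers use decimal megabytes

binary_gb = bytes_total / (1024 ** 3)      # Windows divides by 1024^3
print(f"{reported_mb} decimal MB ~= {binary_gb:.0f} binary GB")   # ~447 GB
```

That gets you from 479,997MB down to roughly 447GB on units alone; the remaining few GB between that and the 444GB My Computer shows would come from the other factors.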
 