Building a Monster, need advice for storage options!


gexamb

Hey guys, need your help once again.

I have been given a $4K budget to build the baddest machine on the block. This machine's duties will be:

1. Act as our training PC for MS Cert simulations of Server 2008. All instances of the server will be running on top of VMware.
2. Act as our disaster recovery backup server just in case sh*t hits the fan and we need our network up and running. Mind you, the current server is a beefy rackmount Dell running 8 VMs of Server 2003 on top of ESXi.

Now this is the build I have in mind:

Core i7-920 2.66GHz
EVGA X58 mobo
Corsair Dominator DDR3 RAM, 2 x (3 x 2GB) = 12GB
Antec Quattro 1000W PSU
Intel G2 160GB SSD (OS)
CoolerMaster Cosmos case
2 x EVGA GTX 260 Core 216 video cards
3ware 9650SE-4LPML RAID card
4 x WD Caviar Black 1TB drives

The real question here is what would be the best solution for storage?

What we had in mind was to put the four 1TB drives in RAID 10, giving us 2TB of usable storage space running off the 3ware RAID card, all in the same system instead of in a separate NAS.
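
Just to sanity-check my math, here is a quick sketch of usable space for the common RAID levels with four 1TB drives (generic arithmetic only, nothing specific to the 3ware card):
Code:
def usable_tb(level, drives=4, size_tb=1):
    # Generic capacity math; assumes all drives are the same size.
    if level == "RAID 0":    # striping only, no redundancy
        return drives * size_tb
    if level == "RAID 1":    # everything mirrored into one drive's worth of space
        return size_tb
    if level == "RAID 5":    # one drive's worth of space lost to parity
        return (drives - 1) * size_tb
    if level == "RAID 10":   # striped mirrors: half the raw capacity
        return (drives // 2) * size_tb
    raise ValueError(level)

for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 10"):
    print(f"{level}: {usable_tb(level)} TB usable")
# RAID 10 -> 2 TB usable out of 4 TB raw, which is where the 2TB figure comes from.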

Now, would it make enough of a difference to run the storage portion of the system as a separate NAS, or would it be fine to sort of emulate a NAS device within the same system?

So far I have hit about $3600 with my build. If I were to go with the NAS option, I can easily build one myself; I have tons of spare PCs out of commission with enough case space to fit everything in there. It all comes down to whether the storage should be an external NAS or internal RAID.

Note: I would be using FreeNAS to run the NAS if that were the option chosen.
 
Don't forget that a NAS is attached via the network. Make sure that is a DEDICATED interface that is not accessible to the client systems.

Networking choices (today) are 1Gb Ethernet, FireWire (400 vs. 800), and the newest USB 3.0 devices (if you can find them).
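
As a rough comparison (best-case line rates with zero protocol overhead; real-world throughput will be lower), moving 2TB over each of those interfaces looks like this:
Code:
# Best-case time to move 2 TB over each attachment option.
# Line rates only -- no protocol overhead, so real transfers take longer.

interfaces_mbps = {
    "FireWire 400":     400,
    "FireWire 800":     800,
    "Gigabit Ethernet": 1000,
    "USB 3.0":          5000,
}

data_bits = 2 * 10**12 * 8   # 2 TB expressed in bits

for name, mbps in interfaces_mbps.items():
    hours = data_bits / (mbps * 10**6) / 3600
    print(f"{name}: ~{hours:.1f} hours for 2 TB")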

Also consider that software RAID is not the best choice; look for a PCI card which implements RAID in hardware.
This was once the domain of Adaptec / SCSI, but now you might find iSCSI devices.
(Oh yeah; hot-swap is almost always a SCSI feature.)

Lastly, IMO, I would be careful as to how much data sits 'under one head'. Consider:
disk platters * cylinders * bytes per track becomes a big number, for both reliability and performance.
  • the electronics can 'switch heads' faster than they can seek the arm
  • the electronics can switch heads faster than waiting for rotational delay (latency)
So I would choose HDs with more platters over ones with fewer,
fewer cylinders to reduce arm seeks,
and higher RPM over slower to reduce latency (rough numbers below).
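
To put rough numbers on the latency side (the seek times here are illustrative guesses, not any particular drive's spec sheet):
Code:
# Average rotational latency is half a revolution; seek times below are
# illustrative only, not figures from any particular drive's datasheet.

def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm   # half a revolution, in milliseconds

for rpm, seek_ms in ((5400, 12.0), (7200, 9.0), (10000, 4.5)):
    latency = avg_rotational_latency_ms(rpm)
    print(f"{rpm} RPM: ~{latency:.1f} ms latency + ~{seek_ms} ms avg seek"
          f" = ~{latency + seek_ms:.1f} ms per random access")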

So: you have an attachment choice, a controller choice, and HD geometry considerations.
 
Well, if I were to set up the storage internally, I was going to add a 3ware RAID card to handle the 4 drives. I would never depend on onboard RAID. If I were to set up my own NAS, I would still use the same RAID card.

So the connection for the internal option would be through the PCI bus, and the card would present the RAID array as a SCSI device (or something similar), making it a separate entity just as a NAS is a separate device seen on the network.

If I were to go the NAS route, I would most likely use a 1Gb NIC connection. I don't need anything fancy for the storage connection, as this is a last-resort backup server application.

Regarding the drives, are you saying that the ones I chose have too few platters? What drives would you recommend I look into?

Thank you for the responses. I appreciate it very much.
 
No, I did not analyse your choices, but gave you guidelines so you can do so yourself :)

If you insist on a NAS connection via Ethernet, then be sure the layout is like
Code:
client.access----[nic#1] Server System [nic#2] --- new ext storage
This will isolate the clients from the NAS and ensure you have total control of the media usage.
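
One way to express that isolation (example addressing only; use whatever fits your LAN) is to give the storage link its own private subnet, so client traffic has no route to it:
Code:
# Illustrative addressing for the dual-NIC layout above; the subnets are examples.
from ipaddress import ip_network

client_lan   = ip_network("192.168.1.0/24")  # nic#1: where clients reach the server
storage_link = ip_network("10.10.10.0/30")   # nic#2: server <-> storage, point-to-point

# Disjoint subnets (with no routing between them) keep clients off the storage link.
print("Subnets overlap?", client_lan.overlaps(storage_link))          # False
print("Storage link hosts:", [str(h) for h in storage_link.hosts()])  # server + NAS only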

If, however, you WANT clients to have direct access to the NAS, then that is not the approach.
 
Well, if it does come down to a disaster, then this backup server will probably have to be accessible directly by the clients, because we will have many virtual servers running, with different shares on each server for different users.

Thanks, jobeard, for the help. I just realized that you helped me out on a few threads a couple of years back. Nice to know you're still around. Thanks.
 
You're welcome :)

However, I would still make all the storage private to the server for management and control reasons.
  • The shares can be defined as needed, and with a common system attachment (the server) you can administer them all in one place.
  • You need backups anyway, and those would be better controlled from one point as well.
  • You also get a quick idea of which systems are connected via the Sessions view (Admin Tools -> Computer Management -> Shared Folders).
For any server, you need to plan for the case when it goes down and needs to be recovered for the client population (virtual or otherwise).
For critical service delivery, you ought to avoid any single point of failure
(draw out the connections between the services and then, one by one, imagine that any one box melts to the ground).

Redundancy is critical in such cases. If the business case just can't afford that initial outlay in cash, then you need to estimate the immediate cash hit to replace box[x], the time to get it delivered to the facility, and the time to get it running. Notice that the cash WILL be spent somewhere down the line, at the expense of delaying service restoration.
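
A rough way to frame that estimate (every figure below is a placeholder; plug in your own numbers):
Code:
# Back-of-the-envelope cost of replacing a failed box after the fact.
# All figures are placeholders -- substitute your own estimates.

replacement_cost = 3500    # immediate cash hit for the replacement box ($)
delivery_days    = 3       # time to get it delivered to the facility
rebuild_days     = 1       # time to get it installed and running
lost_service_day = 2000    # estimated cost of delayed service per day ($)

days_down  = delivery_days + rebuild_days
total_cost = replacement_cost + days_down * lost_service_day
print(f"{days_down} days of delayed restoration, ~${total_cost:,} total hit")
# Compare that against the price of buying the redundant box up front.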
 