Hi guys,
Has any of you had experience with the following, and is there an explanation or, better, a way to solve it? Some days ago I finished building my new NAS server. Now I have what seems to be very slow file sharing performance that I'd like to improve.
- Locally, if I log on, I can copy/paste a 1.5Gbyte file within the same directory and reach 40Mbyte/s full duplex (40Mbyte/s read, 40Mbyte/s write). Nice.
- If I download the same file using HTTP I reach a download speed of 35Mbyte/s. Nice.
- If I copy the same file to my client's hard disk using Windows file sharing I reach a speed of 30Mbyte/s (some overhead, OK, but still nice for a NAS).
- If I run iPerf from the client to the server in duplex mode it does 30-35Mbyte/s in both directions simultaneously (roughly as sketched right after this list).
- If I run NetCPS from the client to the server it performs 30-40Mbyte/s.
- Now, if I copy/paste the same 1.5Gbyte file within the same directory, but from the client, over a mapped network drive (net use Z: \\192.168.2.1\data), I only reach 12-14Mbyte/s. Not nice. :dead:
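For reference, the duplex iPerf run was something like this (exact flags, window size and duration from memory, just an example):

  rem on the server (192.168.2.1)
  iperf -s -w 64K

  rem on the client; -d does the dual test, sending in both directions at once
  iperf -c 192.168.2.1 -d -w 64K -t 30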
Server and client are connected directly with a Cat5e crossover cable (properly wired for gigabit). The server uses a Supermicro PDSME motherboard with an onboard Intel(R) PRO/1000 PM, 1Gbyte of memory, and an Areca 1120 array controller in RAID5 with four Seagate Barracuda 250GB disks @ 7200rpm. The client is not new, but has a 2.8GHz Intel CPU and an Intel(R) PRO/1000 GT network card in a 32-bit PCI slot. For now, both are running in a test setup with Windows Server 2003, no firewalls, no other network connections, and no antivirus. The network cards report they successfully negotiate 1Gb FD. CPU load is 10% on the server and 35% on the client.
I've tried forcing the cards to 1Gb, making the server a DC and the client a member of the domain (to prevent credential conflicts), and changing the TcpWindowSize on both sides (additionally setting Tcp1323Opts to 0 to allow manual setting of TcpWindowSize). None of these attempts had any effect at all. When I installed IPX (as a protocol check) the performance was slightly slower. NetBEUI boosted my network copy/paste to 14-16Mbyte/s, which is still very slow. I've read some threads about similarly slow 100Mbit FD performance where setting it to 100Mbit HD solved it, but my drivers don't offer 1000Mbit HD.
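For completeness, the registry changes were along these lines on both machines, followed by a reboot (the 65535 window value is just an example of what I tried):

  rem values live under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
  reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpWindowSize /t REG_DWORD /d 65535 /f
  rem Tcp1323Opts = 0 disables RFC 1323 window scaling/timestamps so the manual TcpWindowSize is used
  reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 0 /f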
The only out-of-the-ordinary thing I have noticed is that during the copy/paste file transfer the client shows about 250 page reads/s and 0 page writes/s. I would expect an equal number, either both 0 or both 250, if Windows uses paging to get the file contents into memory and back out to the network drive. Could this be the source of the problem?
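(I'm reading those numbers from Perfmon on the client. From a command prompt it should be roughly this, if I have the counter names right:)

  typeperf "\Memory\Page Reads/sec" "\Memory\Page Writes/sec" -si 1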
Can anyone say:
- Why my file sharing throughput is so much slower than the other kinds of file transfer, while there seems to be no bottleneck to account for the drop? I accept some overhead, but this seems too much (applies to making use of the available bandwidth).
- How I can verify the actual TcpWindowSize in use by the file sharing TCP connections? (similar)
- How I can further boost the gigabit link in general? I know the quality of the cable matters, and iPerf currently reports somewhere around 280Mbit. The drivers report a 75% link quality, whatever cable I use. Would a well-made Cat6, double-shielded, super-twisted, laser-protected, environmentally friendly and politically correct cable do the trick? (applies to reaching a higher bandwidth)
BTW: I do realize that at the moment the server drives are a bottleneck at 40Mbyte/s simultaneous read/write, and that the client's PCI bus at 132Mbyte/s half duplex may be a bottleneck too, but not yet at 40Mbyte/s transfers. I intend to scale this server with more and faster disks over the next few years and grow my client with it.
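(Rough math for that last point: 32 bits x 33MHz = 132Mbyte/s theoretical on the client's PCI bus, shared half duplex between everything on it, so even with real-world efficiency well below that it should still leave headroom for a single 40Mbyte/s stream through the NIC.)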