OCZ Vertex 2 Pro 100GB SSD Review

Julio Franco

OCZ's first SandForce-based SSD will be the Vertex 2 Pro, which is set for release this March. The Vertex 2 Pro uses the SF-1500 controller with MLC memory and is intended to provide enterprise-level reliability and performance. As such, the Vertex 2 Pro is not going to be an affordable SSD; its primary focus is performance.

Read the full review at:
https://www.techspot.com/review/242-ocz-vertex2-pro-ssd/

Please leave your feedback here.
 
I am really interested in what the SF-1200 controller can do, and will eagerly await reviews of a drive using it. It is too bad that we couldn't see a direct comparison with the Intel M & E versions, as I suspect that is the decision most users and companies would be making at this price point.
 
Thanks for the great news in the review. I expect more regularly priced SSDs to drop in price some as these hit the channel, so the review is much appreciated.
 
My rough understanding is that the SandForce controller reduces write amplification on uncompressed (i.e. compressible) files and minimizes write cycles by computing the most efficient place to put files and when to write them. It falters at reducing write amplification when dealing with files that are already "dense," such as DV, MP3, or zipped files.
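As a back-of-the-envelope illustration of what that means (the compression ratios below are made up for the example, not SandForce's actual figures):

# Rough sketch of the write-amplification idea; the ratios below are
# invented for illustration and are not SandForce's actual numbers.

def write_amplification(host_bytes, nand_bytes):
    """WA = bytes physically written to NAND / bytes the host asked to write."""
    return nand_bytes / host_bytes

host_write = 100 * 1024**2  # the host writes 100 MB

# Compressible data (text, logs): the controller can shrink it before it
# ever touches the NAND, so WA can drop below 1.0.
print(write_amplification(host_write, host_write * 0.5))   # -> 0.5

# Already-"dense" data (MP3, video, zip): no compression headroom, so the
# drive writes at least the full payload plus its normal overhead.
print(write_amplification(host_write, host_write * 1.1))   # -> 1.1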

But my biggest question is where it is storing this unwritten data and these computational decisions. If it's just going to grab a little system RAM, that's fine, but I'm guessing it's still leaving pages dirty, and unlike many SSDs it's going to lack a capacitor to flush the cache. And we all know there's no capacitor on system RAM.

I can knock write amplification way back on any drive by using SuperSpeed's Super Cache and setting it to never flush its cache (built from system RAM) unless it receives a shutdown or stop-cache command. The same goes for any better RAID controller with a BBU. And then it hardly matters if the pages sit dirty, since the BBU lasts for days.

You guys need to do some real reviews of these drives by yanking the power or deliberately hanging the system and seeing how SandForce controllers deal with that.
 

Here is what benchmarkreviews.com had to say on this:

Another benefit of SandForce's architecture is that the SSD keeps information on the NAND grid and removes the need for a separate cache buffer DRAM module. The result is a faster transaction, albeit at the expense of total storage capacity. The 128GB model measures only 100GB once formatted, with an estimated 20GB dedicated to transaction space.

link: http://benchmarkreviews.com/index.php?option=com_content&task=view&id=444&Itemid=60
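Just running the quoted capacities as arithmetic (treating both as decimal gigabytes; the split between spare blocks and transaction space is their estimate, not mine):

# Spare-area arithmetic on the quoted capacities (decimal GB assumed).
raw_nand = 128       # GB of flash physically on the drive
user_capacity = 100  # GB exposed to the operating system

reserved = raw_nand - user_capacity
print(f"Reserved area: {reserved} GB")                       # 28 GB
print(f"Over-provisioning: {reserved / user_capacity:.0%}")  # 28%

The raw difference comes out to 28 GB, so the ~20 GB transaction-space estimate in the quote would presumably be only part of what's held back.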

So if I interpret them correctly, what they chose to do with the current incarnation of the architecture is reserve some of the primary memory capacity for I/O transaction management. This mirrors the design of enterprise database management systems (DBMSs), which use a separate "transaction log" to manage transaction data. The guiding principle behind this design is referred to as ACID, which stands for atomicity, consistency, isolation, and durability. Using a segmented-off portion of the primary memory contributes to the durability and isolation properties.

In plain English, if the system gets interrupted, whether by power loss or a crash, it can read from its transaction space the next time it initializes and "resume" where it left off. That is what makes it durable. But I'm not an expert on this; perhaps a SandForce engineer should elaborate!
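For what it's worth, the generic transaction-log pattern I'm thinking of looks something like this toy sketch (the DBMS idea in miniature, not anything from SandForce's firmware):

# Toy write-ahead log: the generic DBMS durability pattern, not SandForce firmware.
import json
import os

LOG = "wal.log"     # stands in for the reserved "transaction space"
DATA = "data.json"  # stands in for the main store

def write(key, value, data):
    # 1. Record the intent in the log first and force it to stable storage...
    with open(LOG, "a") as f:
        f.write(json.dumps({"key": key, "value": value}) + "\n")
        f.flush()
        os.fsync(f.fileno())
    # 2. ...then apply the change to the main store.
    data[key] = value
    with open(DATA, "w") as f:
        json.dump(data, f)

def recover():
    """After a crash, replay the log so no logged write is lost."""
    data = {}
    if os.path.exists(DATA):
        with open(DATA) as f:
            data = json.load(f)
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                entry = json.loads(line)
                data[entry["key"]] = entry["value"]
    return data

# Usage: replay whatever is in the log, then keep writing.
store = recover()
write("answer", 42, store)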

Cheers,

Timex
 
I know I'm a little late to read this review, but I'm glad I did, because a few days ago I bought an OCZ Vertex without doing much reading or research. After reading this article and many like it about OCZ SSDs, I feel much better about my purchase.
 
Some benchmarks are useless when used with a SandForce-based SSD.

ATTO uses (if you didn't change anything in the settings) a "00000" pattern by default. Obviously, such a pattern can be compressed by a SandForce-based SSD almost perfectly, and reads/writes go through the roof. This has little to do with real-world experience, as you won't be writing streams of zeros.
The same should be true for FC-Test. It writes dummy files that can be compressed nearly perfectly (I couldn't verify this with FC-Test, but as far as I know they are just zero-filled dummy files).
The same is even true for IOMeter: its 1GB test file can be compressed with 7-Zip down to a few hundred kB.

As you can see, you can't rely on every benchmark when testing an SF-based SSD. The author of AS SSD Benchmark has confirmed that his benchmark uses a random (incompressible) data pattern, and the same is true for CrystalDiskMark, so those benchmarks can be trusted.
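You can see the effect in a few lines of Python, with zlib standing in for whatever compression the controller applies internally (SandForce hasn't documented the details):

# How compressible different benchmark payloads are; zlib stands in for
# whatever compression the SandForce controller applies internally.
import os
import zlib

size = 1024 * 1024              # 1 MB test buffer

zero_fill = bytes(size)         # ATTO-style repeating zero pattern
rand_data = os.urandom(size)    # AS SSD / CrystalDiskMark-style random data

for name, buf in [("zero pattern", zero_fill), ("random data", rand_data)]:
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of its original size")

# zero pattern: a tiny fraction of a percent -> the drive barely has to write anything
# random data:  ~100%                        -> the drive has to write every byte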


Someone at XtremeSystems also ran a test with his Vertex LE. Look at the results:
http://www.abload.de/img/sequential_write_teste0mi.png

Complete thread:
http://www.xtremesystems.org/forums/showthread.php?t=249040


So to assess the performance of the upcoming SandForce SSDs, reviewers NEED to adapt their testing methods to benchmark the "real" performance, not some never-to-be-seen numbers generated with dummy files.
 
I don't know why it's not compared to the Intel M drive; aren't those the two drives buyers would most be considering?
 