Samsung announces the PM1733 PCIe 4.0, the industry's highest-performing SSD

Shawn Knight

The big picture: Samsung said the drive will be available this quarter in both form factors but didn't mention pricing. Given its massive capacities and enterprise-class positioning, buyers can expect to shell out a small fortune for the drive.

Samsung on Friday announced the PM1733 PCIe Gen4 Solid State Drive (SSD), an enterprise-class drive that offers the “highest performance of any SSD on the market today.”

Built using fifth-generation 512Gb TLC V-NAND, the PM1733 will be offered in both U.2 (Gen4 x4) and HHHL (Gen4 x8) form factors with maximum capacities of 30.72TB and 15.36TB, respectively. It is backward compatible with the older PCIe 3.0 interface, though running it on that platform will severely hinder overall performance.

Speaking of performance, Samsung said the drive delivers sequential reads of 8.0GB/s and random reads of up to 1,500,000 IOPS. Additional specs haven't yet been revealed, although Samsung did say the drive will have double the throughput of current Gen3 SSDs.
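Those figures line up with the PCIe link math. Here is a quick back-of-the-envelope check (a sketch based on published PCIe per-lane transfer rates, not anything from Samsung) of why 8.0GB/s essentially requires a Gen4 link, and why a PCIe 3.0 slot would choke the drive:

```python
# Rough usable bandwidth per PCIe lane, after 128b/130b encoding overhead.
# PCIe 3.0 runs at 8 GT/s per lane, PCIe 4.0 at 16 GT/s; both use 128b/130b.
def lane_bandwidth_gb_s(transfer_rate_gt: float) -> float:
    return transfer_rate_gt * (128 / 130) / 8  # bits -> bytes, GB/s per lane

for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    for lanes in (4, 8):
        bw = lane_bandwidth_gb_s(rate) * lanes
        print(f"{gen} x{lanes}: ~{bw:.2f} GB/s usable")

# PCIe 4.0 x4 works out to ~7.88 GB/s usable, right at the quoted 8.0 GB/s,
# and the x8 HHHL card has ~15.75 GB/s of headroom. A PCIe 3.0 x4 slot caps
# the link itself near ~3.94 GB/s -- the "severely hinder" scenario above.
```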

Interestingly enough, the PM1733 also features dual port capabilities “to support storage as well as server applications.”

In related news, Samsung has also provided its full line-up of RDIMM and LRDIMM dynamic random access memory (DRAM) for AMD's EPYC 7002 series processors, the second-generation chips AMD announced earlier this week.


 
in maximum capacities of 30.72TB and 15.36TB, respectively

:cold_sweat::cold_sweat:

That's a lot of data to lose at once!!

These are enterprise drives; there will be a robust backup system in place for this data. I wouldn't really be that worried about it.

If this were in the consumer space, then you would be on to something.
 
Um, max drive speed is not the only bottleneck when rebuilding RAID 5, 6, etc. arrays. What do you consider "not bad"?

Sorry, I don't understand your overall point. At 30TB, a rebuild at 8GB/s works out to around an hour (without checks), so call it 2-3 hours with some tolerance; rough math below. That's faster than typical M.2 or SATA SSDs.

RAID setups are usually at least 10TB; how else are you going to rebuild/restore them quicker?

Your standard M.2 drives only do around 2GB/s to 4GB/s (unless the OP is a typo and it's really 8Gb/s)...
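For what it's worth, here's the raw arithmetic behind that estimate (a best-case sketch that ignores parity computation, verification passes, and controller overhead, all of which stretch real rebuild times):

```python
# Best-case rebuild time: capacity / sustained sequential throughput.
# Ignores parity math, verification, and controller overhead, so real
# rebuilds take considerably longer than these floors.
def rebuild_hours(capacity_tb: float, throughput_gb_s: float) -> float:
    seconds = (capacity_tb * 1000) / throughput_gb_s  # TB -> GB
    return seconds / 3600

for name, speed in [("PM1733 @ 8 GB/s", 8.0),
                    ("typical M.2 @ 3 GB/s", 3.0),
                    ("SATA SSD @ 0.55 GB/s", 0.55)]:
    print(f"30.72TB on {name}: ~{rebuild_hours(30.72, speed):.1f} h minimum")
```

That's just over an hour best case for the PM1733; double or triple it for checks and you land in the 2-3 hour range.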
 

Have you ever done RAID restores on enterprise systems, even ones with SSD drives? If you have, I don't see how you can treat drive speed as the only bottleneck. And what do you mean by "how else are we going to restore"?

Purchase several for a test, build a RAID 5 or 6 array including hot spare(s), fill it with random data, then yank one drive from the array and see how long it takes to rebuild. Come back with a log of the results.
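If anyone actually runs that test on Linux software RAID, here's a minimal way to capture the requested log (a sketch assuming an md array, where rebuild progress is exposed in /proc/mdstat; hardware RAID controllers report progress through their own vendor tools instead):

```python
import re
import time

# Poll /proc/mdstat and log Linux md rebuild progress with timestamps.
# Assumes a software RAID array that is currently recovering.
PROGRESS = re.compile(r"(recovery|resync)\s*=\s*([\d.]+)%")

with open("rebuild.log", "w") as log:
    while True:
        with open("/proc/mdstat") as f:
            match = PROGRESS.search(f.read())
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        if match:
            log.write(f"{stamp} {match.group(1)} {match.group(2)}%\n")
            log.flush()
        else:
            log.write(f"{stamp} no rebuild in progress\n")
            break
        time.sleep(60)
```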
 