Hi Neeyik,
Yes, technically. However, that does not explain the huge discrepancy.
MOSFETs are made using virtually the same materials, fab equipment, process technology
(masking, lithography, implanting, etching, wash, rinse, repeat) and design tools,
yet the best any SDRAM vendor can do is ~500 MHz. They use folded bit lines to attain
higher frequencies. Capacitors tied to Vcc/2 rather than GND switch faster, and I would bet they do this too.
True, they use extremely tiny capacitors to save space and cost, but this causes an
increase in latency unless one can cut the parasitic capacitance of the bit lines.
That only explains part of it; the rest has to be in the transistor itself. Meanwhile,
CPU makers manage to get their passive components (capacitors, resistors, vias, et al.)
and transistors (the passives are fundamentally alike; only the transistor is
significantly different) to slew at 10 to 12 times the rate
(~1,100% faster), as if by some kind of newfound magic!
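To put a rough number on the bit-line latency point above: the access delay is dominated by the RC time constant of the line. A minimal sketch, using assumed resistance and capacitance values chosen purely for illustration (not measured figures for any real DRAM process):

```python
# Bit-line access delay is dominated by the RC time constant of the line.
# The R and C values below are illustrative assumptions only.
def rc_delay_ns(resistance_ohms, capacitance_farads):
    """Time constant tau = R * C, returned in nanoseconds."""
    return resistance_ohms * capacitance_farads * 1e9

# Assumed: ~10 kilo-ohm effective bit-line resistance, ~100 fF parasitic capacitance.
tau = rc_delay_ns(10e3, 100e-15)
print(f"bit-line RC time constant ~ {tau:.1f} ns")  # ~1.0 ns
```

Cut either the resistance or the parasitic capacitance and the delay falls in direct proportion, which is why trimming bit-line capacitance matters so much for latency.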
The claim regarding total cache per core is inaccurate.
For example, total cache per core: 80486 8 KB, P1 32 KB, P2 128 KB, P3 512 KB, P4 1 MB,
i5 1st-3rd gen 1 MB, i5 4th-6th gen 1.5 MB, i5 7th-9th gen 2.25 MB, and Ryzen 3600X 6 MB.
If AMD doubles the cache at 5 nm, the hit rate may go from 95% to ~97%, but the lookup
time roughly doubles too, so the average throughput gain is cut in half, for a net gain of
almost none (law of diminishing returns). If instead they go to 5 nm with the same caches
and optimize nearly every other item as much as they can (likely a gain of over 10%, based
on past history), then they may turn the clock (CLK) up to ~5.5 GHz while staying within
the same heat envelope. The net increase in the first case is at most 2%; in the second,
>30%. Best approximations.
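The diminishing-returns argument above can be sketched with the standard average-memory-access-time formula. The cycle counts and hit rates below are assumptions chosen only to illustrate the trade-off, not figures for any actual AMD part:

```python
# Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
# All numbers are illustrative assumptions, not measured CPU figures.
def amat(hit_time_cycles, hit_rate, miss_penalty_cycles):
    return hit_time_cycles + (1.0 - hit_rate) * miss_penalty_cycles

base   = amat(4, 0.95, 100)   # today's cache: 4-cycle lookup, 95% hit rate
bigger = amat(8, 0.97, 100)   # doubled cache: lookup doubles, hit rate ~97%

print(base, bigger)  # 9.0 vs 11.0 cycles
```

With these assumed penalties, doubling the cache while doubling its lookup latency actually loses ground, which is exactly the law-of-diminishing-returns point: the extra 2% hit rate cannot pay for the slower lookup.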
By cheaper, I must assume you meant the higher cost of switching substrates from SiO2
to GaAs? After ~40 years of research, scientists at MIT recently discovered that by plating an
ordinary SiO2 wafer with a very thin layer of graphene and then flowing pure gallium and
then pure arsenic over it, they can match or beat the cost of silicon dioxide; the question is
how long before it is perfected and ready for mass production. With 5 nm production now
certain and 3 nm nearly so, we will have much faster CPUs soon. As before, I am just trying to
see how SDRAMs can be improved, to catch up and hopefully keep up. HBM would be
better than DDR5 (denser, faster, and lower power), were it not for its persistently high price.
DDR5-5200 sounds impressive, except that 90% of that data is waste, so it is effectively
more like DDR-520. It was done out of desperation: they could not make QDR work after
all, and they still cannot seem to get the charge pump to toggle states any quicker.
Even today, the typical compiled x86 program executes only 8 to 10 instructions
before it is forced to branch. Despite all of Intel's PR to the contrary, x86 cores still do not
do branch prediction and out-of-order execution too well, so they just dump more and more
unused data into the bit bucket. They also turned up the word clock (WCK), but the
latency (CL) goes up right along with it (so they must add a zillion T-states, i.e. >23),
for a tiny net gain, unless hyperbole counts?
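Taking the ~90% waste estimate above at face value (it is an estimate from this discussion, not a measurement), the arithmetic behind the "more like DDR-520" remark is straightforward:

```python
# Effective transfer rate if most fetched data is discarded.
# The 90% waste fraction is the estimate from the text, assumed here for illustration.
rated_mt_s = 5200        # DDR5-5200 rated transfers per second (millions)
useful_fraction = 0.10   # assumed: only ~10% of fetched data is actually used
effective = rated_mt_s * useful_fraction
print(f"effective rate ~ {effective:.0f} MT/s")  # ~520 MT/s
```

The rated number only describes raw bus transfers; how much of that stream is useful depends entirely on how often prefetched data gets thrown away after a branch.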
As CPUs, GPUs, PCIe, SSDs, and USB all become so swift, SDRAM will be the growing chokepoint.
Memory makers spend $25+ billion a year on new fabs and R&D; it seems as if they could afford
to create a much more elegant solution? For perspective, Applied Materials estimates it will cost
$10+ billion just to develop the first 450 mm prototype. You are quite right, it is all so complex
and costly nowadays. They did do a great job of upping density and longevity,
cutting power use and error rates, and lowering the cost per bit stored!
JEDEC has hinted that its DDR5 specification may finally support QDR, by about Christmas.
Thank You, dennis