Intel announces 'game-changing' 3D transistors, demos Ivy Bridge


Faster-switching transistors at lower voltage = higher clocks.
From Anand:
And the Intel PDF (here).
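For a rough sense of the voltage/clock trade, here's a minimal sketch: dynamic switching power scales roughly as P ≈ C·V²·f, so transistors that switch reliably at lower voltage free up power headroom that can be spent on frequency. The capacitance and voltage figures below are illustrative assumptions, not Intel numbers:

```python
# Back-of-envelope: dynamic power scales as P ~ C * V^2 * f.
# All figures below are illustrative assumptions, not Intel specs.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic (switching) power in watts."""
    return c_eff * voltage**2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance (farads), assumed

planar  = dynamic_power(C_EFF, 1.0, 3.4e9)  # e.g. ~1.0 V at 3.4 GHz
trigate = dynamic_power(C_EFF, 0.8, 3.4e9)  # same clock at ~0.8 V

print(f"planar:   {planar:.2f} W")
print(f"tri-gate: {trigate:.2f} W ({trigate / planar:.0%} of planar)")
# 0.8^2 / 1.0^2 = 0.64 -> ~36% less dynamic power at the same clock,
# headroom that can instead be spent on higher frequency.
```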

How far Intel goes likely depends on how hard they have to push. Since approximately 20% of 2600Ks are capable of 5.0-5.7GHz outright, and could have effectively been binned at 4GHz+ stock, it isn't unreasonable to assume IB is capable of that and then some.
 
Can they just use the shrink ray on the transistors/die directly? That'll get the process size down quick...
 
red1776 said:
I didn't ask that correctly.
Would the aforementioned attributes of the tri-gate afford huge OCs, or do the qualities of the surrounding materials keep frequency capabilities to about what they are now? I am trying to get a handle on what impact this is going to have. In other words, is this the big breakthrough that has been talked about for years, or do they keep working on carbon nanotubes?

Imagine going from Conroe straight to Nehalem, and then some, with Penryn never being in the picture. With this new technology, Intel is now ~3 years ahead of AMD.
 
The video does give a nice dumbed-down overview. But then you can't help but think, "A decade to think of that...?"
 
stewi0001 said:
So what should I do, guys, for my computer overhaul: AMD or Intel?

Guess that depends on when you want to build it. Sandy Bridges are pretty f'ing awesome right now. I wouldn't get anything until next year. AMD if you are on a super budget.
 
WOW, and to think my AMD 965 still crushes everything I throw at it. This should be interesting and massive overkill, so count me in.
 
....And a cell tower for the mountain.

They'd get proficient fairly quickly if they texted "Goodnight" to each other at the end of every show.
 
Guest said:
WOW, and to think my AMD 965 still crushes everything I throw at it. This should be interesting and massive overkill, so count me in.

Download Sony Vegas or Maya and you'll be retracting that statement.
 
The video does give a nice dumbed-down overview. But then you can't help but think, "A decade to think of that...?"

I'll add some lateral thinking into the equation.

Intel and Micron are already in the process of incorporating FinFET (the so-called 3D transistor) into NAND
High-density memory (RAM) using TSVs now seems a reality (i.e. Samsung, along with Intel et al.)
Intel has been exploring how to get ultra-low-voltage DDR incorporated into CPUs for 4+ years.

GPGPU (Larrabee/Knights Corner/Knights Ferry), APUs (AMD's Fusion) and CPUs with IGPs are all memory-dependent, if not memory-constrained. It doesn't seem unreasonable to assume that Intel is looking at adding DDR (or GDDR) directly to the CPU package. I would also assume that the processor having access to RAM on-die is going to add to bandwidth availability and hugely decrease latency.
I would think that having access to 1-2GB of high-speed RAM has the potential to be somewhat of a game changer for CPUs with (and probably without) integrated graphics, especially in a mobile/ultraportable package. A next-gen ultra-low-voltage Atom seems to be high on Intel's list of things-to-do for the mobile space. Adding in a reasonable IGP and an integral memory stack seems like a fairly neat solution if power usage/heat generation can be kept low enough.
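To put very rough numbers on the bandwidth side of that argument, here's a back-of-envelope sketch. The 512-bit on-package bus width is purely an assumption for illustration, not anything Intel has announced:

```python
# Rough peak-bandwidth comparison: conventional DIMM interface vs. a
# hypothetical wide on-package memory bus. All figures are assumptions.

def peak_bandwidth_gbs(bus_bits, transfer_rate_mt_s):
    """Peak bandwidth in GB/s = (bus width in bytes) * (MT/s) / 1000."""
    return (bus_bits / 8) * transfer_rate_mt_s / 1000

dimm   = peak_bandwidth_gbs(128, 1600)  # dual-channel DDR3-1600
on_pkg = peak_bandwidth_gbs(512, 1600)  # assumed 512-bit on-package bus

print(f"dual-channel DDR3-1600: {dimm:.1f} GB/s")
print(f"512-bit on-package:     {on_pkg:.1f} GB/s")
# Same DRAM speed, 4x the wires: the short on-package traces (and TSV
# stacking) are what make such wide buses practical in the first place.
```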

Maybe all this tech is unconnected... but I'm thinking it's probably not. It's just a matter of whether I'm joining up the dots correctly. If so, then this 3D/FinFET tech allows for a reasonable shot at a much more complete SoC (system on a chip).
Food for thought?
 
Download Sony Vegas or Maya and you'll be retracting that statement.

Right...
or try a little rendering in 3ds Max while you're at it.

Maybe all this tech is unconnected... but I'm thinking it's probably not. It's just a matter of whether I'm joining up the dots correctly. If so, then this 3D/FinFET tech allows for a reasonable shot at a much more complete SoC (system on a chip).
Food for thought?

That is where this is heading. What it appears to have solved is the signaling problem below the (up until now) theoretical limit of 16nm transistors. Intel announced a roadmap that spelled out 14nm (process P1272) for 2013 and 10nm (process P1274) for 2015. With the ability to add as many 'stabilizing' or 'control' fins through the gate as needed, and, as you say, access to 2GB of memory on-die, does this mean that the on-die memory will be the 'active' memory and off-chip memory will be 'storage' until the process gets refined and we see 4, 8, 16, 32GB on-die?
Would it also be a fair assumption that with these running at sub-1.0V, even the substrate/depletion area will be able to shrink with it? This seems to be one of the rare game-changing breakthroughs that has no downside or compromise.
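As a quick back-of-envelope on how node shrinks could feed that on-die capacity question: ideal geometric scaling puts density at the inverse square of feature size (real processes scale less cleanly, so treat this as the textbook approximation only):

```python
# Ideal (geometric) density scaling between process nodes: density
# scales roughly with the inverse square of the feature size.
# Real scaling is messier; this is just the textbook approximation.

nodes_nm = [22, 14, 10]
base = nodes_nm[0]
for n in nodes_nm:
    print(f"{n}nm: ~{(base / n) ** 2:.1f}x the transistor density of {base}nm")
# 22nm -> 14nm: ~2.5x; 22nm -> 10nm: ~4.8x. If a 1-2GB stack is viable
# at 22nm, the 4/8/16GB-on-die question is largely a matter of nodes.
```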
 
...access to 2GB of memory on-die, does this mean that the on-die memory will be the 'active' memory and off-chip memory will be 'storage' until the process gets refined and we see 4, 8, 16, 32GB on-die?
The memory will certainly be active. IMO (and from what I've read) the prime candidate is GDDR to feed the graphics. Having the vRAM on package has a lot of upside (latency, dedicated graphics memory), and I sincerely doubt that the options for volatile memory on-die wouldn't include system RAM.
Storage RAM on-die would take some considerable time. It doesn't, however, discount an on-(mother)board implementation of Intel's Turbo Memory (Braidwood), which was shelved (albeit as an add-in card/motherboard slot).
Would it also be a fair assumption that with these running at sub-1.0V, even the substrate/depletion area will be able to shrink with it? This seems to be one of the rare game-changing breakthroughs that has no downside or compromise.
I think that is well underway from the articles I've seen. Hand-in-glove as it were.
 