Read the whole story
Nice. When they come out, let's see the performance benchmarks for good evidence.
Fascinating, best use of 3D technology to date.
If I'm following this correctly, we're going from one to three connections per gate, which means eight possible states per transistor instead of two. If so, that would have far-reaching implications for computer design principles, which have up to this point been based on two-state Boolean algebra. Need to let this one sink in for a while; it's game-changing to say the least.
I hope that prices go down since "New 22nm tri-gate wafers should be cheaper to produce".
If I'm following this correctly, Intel has developed a shrink ray? That will win them the coolest tech of 2011 award for sure.
And that shrink ray runs off their new 22nm 3D transistor chip!
I didn't know the third dimension was technology?
P.S. It's not a TV.
Does this mean I won't need glasses to use it?
wait! how can you run the shrink ray off the 3D chip if you need the shrink ray to invent the ....oh you minx!
You do not understand that correctly. They just called it tri-gate. It does not mean there are three gates on the transistor.
It's been widely theorized that around 14nm is the shrink limit for transistors while maintaining a reliable signal. It would appear that before this we were close to the end of Moore's law. Any educated guesses as to how much longer this will extend it?
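As a rough back-of-the-envelope check (assuming the usual ~0.7x linear shrink per process generation, and taking the theorized 14nm floor at face value), there isn't much conventional runway left:

```python
# Each process generation historically shrinks linear feature size
# by about 0.7x (halving transistor area). Starting from Intel's
# 22nm node, count the full generations left before the theorized
# 14nm reliability floor is reached.
SHRINK = 0.7       # approximate linear scaling factor per generation
FLOOR_NM = 14.0    # theorized signal-reliability limit (assumption)

node_nm = 22.0
nodes = [node_nm]
while node_nm * SHRINK >= FLOOR_NM:
    node_nm *= SHRINK
    nodes.append(round(node_nm, 1))

print(nodes)  # → [22.0, 15.4]: only one more full shrink before the floor
```

On a roughly two-year cadence that's only a few more years of plain scaling, which is presumably why the nanotube/graphene question keeps coming up.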
cool, but if this 3D CPU discovery was patented by Intel, which any logical company would do, what does this mean for AMD..?
Yes I know I was trying to make a funny, sounded funnier before I typed it.
It will be interesting to see what kind of thermal limits are imposed by the increased transistor density.
Working 22nm silicon in May would augur well for the usual January (2012) launch period for Intel's mainstream platform.
Probably depends on:
1. If the next technology is ready for commercial use, and
2. How aggressive the competition between Intel, AMD and ARM is, and
3. Whether x86 is still the dominant ISA (or whether parallelization and GPGPU is a player in the future of client computing)
Fab building/refurbishment and lithography (and other process) tools will require huge monetary investment, as far as I'm aware, once foundries hit 16nm-11nm, so it probably stands to reason that unless nanotubes/graphene/whatever are ready to go by 2015, the process nodes will slow as return on investment and the cost of new etching tools/fabs reach equilibrium.
It means that AMD comes up with the same thing under a slightly different process, under a different name, like Intel did when AMD came out with Hyper-Transport. Then Intel sues AMD over the course of the next decade.
Or..."CPU" and Intel become interchangeable and low-end CPUs start at $200. Then AMD's CPU division gets thrown on the scrap heap right on top of Cyrix/IBM.
Just being a pessimist I guess....I think Intel is more afraid of the litigation from being a 'monopoly' than AMD actually surviving.
37%, and I thought SNB was an improvement. <grumble> If these aren't socket 1155 I will stand on that tiny little !"£$%
The big drawback of selling Global Foundries to the guys from Abu Dhabi is that AMD is now at the tender mercies of whatever GloFo come up with (or don't). While AMD is a customer of GloFo, it isn't the only one, and possibly not even the most lucrative one.
AMD is fabless- they go where GloFo says they'll go.
Another way to look at it (if you're a cynic, that is):
GloFo doesn't keep pace with Intel -> Intel retains its process lead -> Intel retains or increases market share -> AMD's stock falls -> Mubadala/ATIC (GloFo's owners/AMD shareholder) snap up the remaining AMD shares at decreased value and launch a takeover.
AMD won't go under. Their cross-licensing of x86-64 and other IP with Intel will keep them viable. The only thing likely up for debate is whose pockets AMD's profits end up in.
....well....at least they won't be fabless anymore! :rolleyes:
So what? Let them have the ultra-low end to keep non-monopoly status? Or does Intel get busted up as well?
This may be naive, but doesn't AMD design the architecture and go to GloFo with schematics and say "manufacture this"?
See my edit above.
Where AMD (or Intel itself) ends up in the market is likely dependent upon whether x86 is still needed down the track. If RISC/parallel processing makes the leap, then it's not beyond the realms of possibility that virtually anyone able to stump up for an ARM licence or similar could be a player in future. Quite the opposite of the virtual hegemony we have now.
Whether RISC has what it takes to satisfy the market is another matter. That depends on a concerted act of will from people who aren't Intel or AMD.
x86 isn't going anywhere.
Don't worry, AMD will think of something, just like in the good ol' days. Besides, they still have access to some of Intel's patents, so they are not out of this yet.
So with that kind of voltage control, voltage reduction, and near-zero leakage, could these things be 5+GHz right out of the gate as well? (pun intended)
Am I understanding this correctly?
As I understand it, Intel seems to be playing for clocks not too dissimilar to Sandy Bridge with a much lower TDP, rather than a similar TDP (~95W) and huge core frequency. This, I think, would make sense if Intel is looking at 8+ core parts (consumer desktop), easing binning requirements and scaling for entry-level/mainstream quad cores (I don't think there are supposed to be dual-core IBs). Keeping clock rate in check to a degree will also allow Intel to align similarly clocked parts, and their friendlier TDPs, for the mobile sectors, not to mention keeping some differentiation between IB and Sandy Bridge-E, which is now probably odds-on to receive consumer versions of the 8-core Xeon E5s already publicised.
So, I guess a lot will come down to how well IB comes out of the oven, both at stock and in turbo mode.
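The clocks-versus-TDP trade-off described above follows from the standard dynamic-power relation P ≈ C·V²·f: power scales linearly with frequency but with the square of voltage, so a process that runs the same clock at a lower supply voltage buys a large power budget. A minimal sketch with hypothetical numbers (the voltages and clock below are illustrative assumptions, not Intel figures):

```python
# Dynamic CMOS switching power scales roughly as P = C * V^2 * f.
def dynamic_power(cap, volts, freq_hz):
    """Relative dynamic switching power (arbitrary units)."""
    return cap * volts ** 2 * freq_hz

# Hypothetical operating points: same capacitance and clock, but the
# 22nm tri-gate part is assumed to run at a lower supply voltage.
baseline = dynamic_power(1.0, 1.0, 3.4e9)  # 32nm-like point (assumed 1.0V)
trigate  = dynamic_power(1.0, 0.8, 3.4e9)  # same clock at an assumed 0.8V

print(f"power ratio: {trigate / baseline:.2f}")  # → power ratio: 0.64
```

A ~36% dynamic-power saving at the same clock can be spent on higher frequency, more cores, or a lower TDP; the reading above suggests Intel would take the core-count/TDP route.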
I didn't ask that correctly.
Would the aforementioned attributes of the tri-gate afford huge OCs, or do the qualities of the surrounding materials keep frequency capabilities to about what they are now? I am trying to get a handle on what impact this is going to have. In other words, is this the big breakthrough that has been talked about for years, or do they keep working on carbon nanotubes?