Weekend tech reading: Llano GPU 325% faster than Sandy Bridge

By the way, Ivy Bridge prototypes are being circulated for testing etc. right now, so the 'good idea' has in reality come to pass.
True enough. The scale of Intel's roadmap probably isn't apparent until you see that Intel started work on incorporating FinFETs (3D tri-gate transistors) in 2007 - around the same time Intel debuted the Wolfdale and Yorkfield Core 2 Duo/Quad, and eighteen months before Nehalem's introduction.
Hardly surprising that some people (including the people who came up with FinFETs) believe Intel's process is, in some cases, five years ahead of the competition.

[source]
 
Indeed. Another interesting bit is that Intel is planning to put Atom on a Tick-Tock cycle as well in about a year's time. I don't know how this was missed in the news, but it would further indicate that Intel is speeding ahead of the competition, mainly due to AMD's slow pace of innovation.

Another dimension to this debate is that by next year Intel will be launching its own SoC for mobiles. If they can offer such a solution with graphics performance not too far behind, say, a T2/3, that will make things interesting, as they do have the tools to pull this off while staying within the required power envelopes.
 
This is going to seem like a dumb question, but I'm going to ask anyway. I read that the way FinFETs work (or the reason they work) is that the additional surface area of the tri-gate allows more current when on... and more control (less leakage) when off. Soooo...

1) Does this mean that the electrons are one deep (so to speak) while running along the gate?
2) Is the next move (evolution) of these FinFETs one with more fins, for more surface area and control?
To put it another way, as manufacturing control improves, will these gates look like microscopic CPU heatsinks?
 
Julio said:
Interesting claim from AMD. If so, that would mean the death of a range of budget graphics cards, including their own.
Only for AMD systems. AMD's plan is to take market share from Intel at the low end thanks to Llano. If it succeeds beyond its wildest dreams, then there would be no place for its budget graphics cards. I'm pretty sure AMD is willing to take that risk. :)
 
Regarding the Android security fix, it's good to know that Google has managed to fix it on the server side. There was a lot of noise about users of older versions of Android being abandoned; luckily, that's not the case.
 
A partial answer to your first question: Indium Gallium Arsenide (InGaAs) FinFETs with fins ranging from 100-200 nm in length leak less current and reduce short-channel effects. The technique used in constructing InGaAs FinFETs is called atomic layer deposition (ALD), in which an insulating film of aluminium oxide is deposited over the transistor fins in multiple layers, each layer just one atom thick.

One correction to my own assumption: Soitec started sampling 300 mm SOI wafers some time ago with a 12 nm-thick silicon layer; processing consumes 7 nm of that, taking it down to 5 nm. This should mean AMD may remain competitive in the short term (although Intel thinks UTB-SOI is not fast enough for them, hence their decision to go with FinFET, or rather Tri-Gate ;), in the first place).
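Just to make the wafer numbers above concrete, here's a trivial back-of-envelope sketch (the values are the ones quoted above; the variable names are my own):

```python
# Soitec SOI wafer thinning, per the figures quoted above:
# a 12 nm silicon layer minus 7 nm consumed in processing
# leaves a 5 nm ultra-thin body.
start_nm = 12     # as-sampled SOI silicon layer thickness
consumed_nm = 7   # silicon consumed during processing
remaining_nm = start_nm - consumed_nm
print(remaining_nm)  # 5
```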

Unfortunately I am at work and have to step away from here, but hopefully the best man to answer this, i.e. DBZ, will look into it and a) answer your questions, and b) correct us wherever we are wrong. :D

Edit:
I forgot to add a useful link in this regard; anyway, here it is. :eek:
 
... Is the next move (evolution) of these FinFETs one with more fins, for more surface area and control?
To put it another way, as manufacturing control improves, will these gates look like microscopic CPU heatsinks?
You mean that the fin would be bifurcated (or further subdivided)? I don't think so, from what I understand of what Intel is looking at when they go smaller than 14nm: multi-gate, yes, but not a gate that is further branched (at least from what I gather from Intel's public docs - this one is a PDF; Fig. 6, page 3).
How much smaller than 14nm the III-V materials are aimed at, I do not know - that's probably a question best asked of a chip architect, or at least someone with a better grounding in EUV lithography limits and μarchitecture.
 
Thanks for the links to the slides, DBZ. Just a small issue though: the link to the PDF somehow didn't come through.
 
Getting back to Llano...
AMD Italy has the official specs up.
Just in case they get taken down, here's a screengrab...
amd11.jpg
 
Thanks DBZ. I think there is some massive ****-up involved here: when I opened the AMD Italy link it asked for a username/password, but even though I don't have one, it still opened the site. :D

Anyway, there is one thing intriguing me here: do these TDP figures include the GPU's TDP as well, or do they simply cover the CPU part of the die? :confused:
 
I got the Akamai username/password popup as well. I think AMD takes their partnership/sponsorship a step too far sometimes (running ads during driver installs being a prime example). Anyhow...
The TDPs are CPU+GPU. They line up pretty much exactly with Sandy Bridge mobile (quelle surprise) at 35-45W. Independent benchmarking for power usage, battery life, and CPU and GPU performance (hopefully) shouldn't be too far away now that AMD has shown its hand with the SKU range.
 
Indeed, if they stay within this power envelope and can compete with Intel (and I am sure they will surpass them when it comes to GPU performance), it will be great.
 
An 'okay-ish' CPU married to a 'reasonably good low-end IGP' is not a bad mix, at least in the budget-oriented mobile segment. Seeing those battery times was a pleasant surprise, especially since I can get roughly 4.45-5.0 hours of battery life on this DV6 (i7 Q2630 + 8GB + 6770M); although I must admit, not when using the discrete GPU. :D
 