Weekend tech reading: Llano GPU 325% faster than Sandy Bridge

May 22, 2011, 4:09 PM

AMD - Llano offers 325% better graphics performance than Sandy Bridge AMD continues to ride the GPU performance wave ahead of the launch of its new mid-range Llano processor. Recently leaked documents from the company point to the Llano-based Fusion A series offering up to 325% better graphics performance than equivalent Intel processors based on Sandy Bridge. Nordic Hardware
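A note on the arithmetic, since "X% better" claims are ambiguous (the reading below is an assumption, not something the leaked documents are known to clarify): taken literally, 325% better means 4.25 times the Sandy Bridge score,

\[ \mathrm{Llano} = \mathrm{SB} \times (1 + 3.25) = 4.25 \times \mathrm{SB} \]

whereas "Llano scores 325% of Sandy Bridge" would mean a 3.25x advantage; the leaked figure could be read either way.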

Jack Wolf, who did the math behind computers, dies at 76 Jack Keil Wolf, an engineer and computer theorist whose mathematical reasoning about how best to transmit and store information helped shape the digital innards of computers and other devices that power modern society, died on May 12 at his home in the La Jolla section of San Diego. He was 76. NYT

Firefox 5 beta arrives, quietly Mozilla officially activated its beta channel on Friday, providing the first beta version that comes out of its accelerated release cycle. Don’t expect revolutionary changes. Following a first “fake” beta build (5.0b1) that was posted on May 2, Mozilla has moved the second build (5.0b2) into the public beta channel. ConceivablyTech

Apple alumni don't fall far from the tree After selling mobile ad startup Quattro Wireless to Apple in late 2009, Lars Albright took a job helping the iPhone maker work with its community of mobile app developers. He noticed that programmers were having trouble keeping users glued to their apps. Voilà: business opportunity. Bloomberg

IT's future: Bring your own PC-tablet-phone to work CIOs should buckle up and brace themselves for a future of flexible IT as employees will be routinely bringing in their own machines and expecting the business to support them, says Tony Henderson, head of communications at UK tech sector trade body Intellect. Silicon.com

Guild Wars 2 interview We’ve already had a lengthy chat with Guild Wars 2 designers Jon Peters and Eric Flannum about how the game’s progressing but the ArenaNet devs were also kind enough to impart to us some new information on a brand new character class, the engineer. Strategy Informer

Editorial: Why Half-life 3 isn't coming soon Are you waiting for Half-life: Episode 3? Or maybe you're thinking Valve's ditched the episodic format altogether (and you'd probably be right). Regardless, don't hold your breath for a sign of life from Gordon Freeman any time soon. IGN

Google silently patches Android authentication flaw Google is implementing a server-side fix to address the authentication flaw that allows third-parties to access Android user data... eWeek

Q&A: How today's tech alienates the elderly On Silver Surfer's Day, a UK academic has blamed unnecessarily complicated user interfaces for putting older people off joining the Government-backed Race Online. PC Pro

User Comments: 41

Julio Franco, TechSpot Editor, said:

Interesting claim from AMD. If so, that would mean death to a range of budget graphics cards, including their own.

H3llion, TechSpot Paladin, said:

typo?

H3llion, TechSpot Paladin, said:

artix said:

typo?

Should be CPU not GPU afaik ^^

H3llion, TechSpot Paladin, said:

artix said:

artix said:

typo?

Should be CPU not GPU afaik ^^

Or nvm, thought Llano was a new AMD CPU series .... (wtb edit, delete button)

Guest said:

Well, it's an APU: a CPU + GPU.

It really should be "Llano APU 325% faster..."

Looks like Sandy Bridge got owned

http://www.youtube.com/watch?v=mdPi4GPEI74

Xero07 said:

GPU is right, since it's the GPU component that is 325% faster, not the entire APU or the CPU portion.

dividebyzero, trainee n00b, said:

Interesting claim from AMD. If so, that would mean death to a range of budget graphics cards, including their own.

Yes, AMD's balance sheet is going to be very dependent upon how much business they pick up from OEMs which presently use Nvidia low-end graphics - especially the huge Asian (read: China) markets.

Can't help but think that revenue is likely to be up - if they gain a larger share in the low-end (Brazos) and lower-mainstream (Llano) markets - but margins and the profit line could take a hit. I'd assume that Zambezi will cannibalize/replace AMD's current lineup, so it would seem that they really need Interlagos to make a dent in Intel's stranglehold on the server market* to turn the corner substantially.

*[source]

Guest said:

Everyone who knows anything about Intel chipsets knows they perform like crap compared to AMD/NVIDIA solutions. Larrabee was always going to fail unless you like brilliant-looking graphics at 0.5 fps. It took them several chipsets to get HDMI decoding right (they claimed G35 and G45 did it, but both had non-functioning or incomplete implementations).

It just needs more press like this so mainstream takes notice.

gwailo247, TechSpot Chancellor, said:

Meh, leaked document.

Leak a chip or two for actual testing.

Jurassic4096 said:

AMD has already proved to us that the GPU solution in Llano will leave anything else integrated today far behind, but we are starting to worry about the lack of material around the actual CPU portion of the chip.

^That's my thought. So the question is, are you really that excited that they mixed old ish with new ish? It's early on, and eventually they'll pair it with a CPU built on a new microarchitecture, but for now this does not interest me. Then again, neither do laptops.

venomblade said:

"AMD - Llano offers 325% better graphcis performance than Sandy Bridge "

Not sure how much it matters, but it should be "graphics".

DokkRokken said:

Considering Llano is the mainstream option, AMD making little noise about the CPU portion does make sense. Joe Blow, who has a Core 2 with a 5400 RPM HD, wouldn't notice much of a difference when it comes to viewing email or YouTube.

But then again, he likely wouldn't push Llano's potential graphics power, either. It'll most likely come down to power consumption and platform cost. Not sure if AMD has nailed the latter.

Bulldozer is where raw processing power will be a factor.

red1776, Omnipotent Ruler of the Universe, said:

Interesting claim from AMD. If so, that would mean death to a range of budget graphics cards, including their own.

Enter the 985G Hybrid chipset!

LinkedKube, TechSpot Project Baby, said:

It's about time for AMD to shine. The four-year Intel/AMD power struggle is just about on schedule for a switch.

red1776, Omnipotent Ruler of the Universe, said:

It's about time for AMD to shine. The four-year Intel/AMD power struggle is just about on schedule for a switch.

you don't think 'tri-gate' is going to leave AMD in the dust?

dividebyzero, trainee n00b, said:

Enter the 985G Hybrid chipset!

How about Llano + integrated or embedded graphics + discrete graphics?... That's like three or more bites of the cherry. Should keep the investors happy.

red1776, Omnipotent Ruler of the Universe, said:

How about Llano + integrated or embedded graphics + discrete graphics?... That's like three or more bites of the cherry. Should keep the investors happy.

Right! And then let's throw a Lucid on there so you can run a GTX 260, an HD 3850, and an HD 6990 with it. Seeee... I should be running the place.

dividebyzero, trainee n00b, said:

Right! And then let's throw a Lucid on there so you can run a GTX 260, an HD 3850, and an HD 6990...

...at lower framerates than using one card on its own!

red1776, Omnipotent Ruler of the Universe, said:

...at lower framerates than using one card on its own!

...Now stop messing about with facts!

dividebyzero, trainee n00b, said:

Sorry...

I lost my exuberance for dealing with three (or four) different graphics driver sets on one system.

Oooh, almost missed this gem... though I smell a set-up...

It's about time for AMD to shine. The four-year Intel/AMD power struggle is just about on schedule for a switch.

Like your optimism.

With AMD, IBM et al now realising - belatedly - that gate-first is pretty much a dead end, they now need to re-jig for gate-last (where Intel seems to have a clear run to 16/14nm thanks to FinFETs). Given that Intel and TSMC have pretty much all the experience with gate-last, wouldn't it be just a little bit surprising if Intel didn't already have IP and patents covering the best ways through the process (and not just FinFETs)?

For AMD to get out from under is going to require some serious R&D investment. Intel will more than likely remain ahead in performance per mm² of die size and performance per watt (thanks in part to using a smaller process) for the foreseeable future, so AMD will need to beat Intel at the architecture level, because they certainly won't be doing it on clock speed or TDP.

As for Llano, it does provide a real opportunity for AMD, although I suspect that beating SB by a whopping margin is somewhat of a hollow victory - Intel aren't exactly known as the gold standard in graphics. While AMD tout the gaming qualities of Llano, I'm sure Intel will be expounding on the "good enough" principle of low-res gaming and pointing to the fact that Llano is still based on the K10 architecture. AMD will surely reply with "How much computing power do you need in a mainstream CPU?"

rinse, repeat

red1776, Omnipotent Ruler of the Universe, said:

With AMD, IBM et al now realising - belatedly - that gate-first is pretty much a dead end, they now need to re-jig for gate-last (where Intel seems to have a clear run to 16/14nm thanks to FinFETs). Given that Intel and TSMC have pretty much all the experience with gate-last, wouldn't it be just a little bit surprising if Intel didn't already have IP and patents covering the best ways through the process (and not just FinFETs)?

So if I understand this correctly, gate-last avoids some undesired effects that the heating process has on the high-k hafnium transistors in the areas of leakage, conductivity, etc. How much of an architectural rewrite is this change to gate-last, exactly? (Assuming AMD adopts a tri-gate of its own.) It almost seems that GloFo's changing from gate-first to gate-last may be a rising 'tide' type of scenario? ...or am I way off the mark here?

dividebyzero, trainee n00b, said:

So if I understand this correctly, gate-last avoids some undesired effects that the heating process has on the high-k hafnium transistors in the areas of leakage, conductivity, etc.

Aye. By quite a margin, it would seem.

[Dave Kanter] [Thomas Hoffmann]

How much of an architectural rewrite is this change to gate-last, exactly?

Wouldn't have a clue, tbh. Haven't seen anyone switch from gate-first to gate-last on an architecture.

(Assuming AMD adopts a tri-gate of its own.) It almost seems that GloFo's changing from gate-first to gate-last may be a rising 'tide' type of scenario? ...or am I way off the mark here?

Samsung had already jumped ship. IBM and the rest, I think, compared their ramp to 32nm with Intel's - who had basically launched the Nehalem architecture change, quickly morphed it into Lynnfield, whilst simultaneously introducing the same arch on 32nm with Clarkdale. Intel don't seem to have had any problems (manufacturing-wise) with Clarkdale, Gulftown or Sandy Bridge.

LinkedKube, TechSpot Project Baby, said:

red1776 said:

It's about time for AMD to shine. The four-year Intel/AMD power struggle is just about on schedule for a switch.

you don't think 'tri-gate' is going to leave AMD in the dust?

It's a good idea on paper, or rather, I should say, in a video.

Archean, TechSpot Paladin, said:

By the way, Ivy Bridge prototypes are being circulated for testing etc. right now. So the 'good idea' has, in reality, come to pass.

Anyway, on the GPU side of things, it is good to see AMD showing some progress; this should at least force Intel to improve its IGP enough to be there or thereabouts. I think entry-level GPUs are a total waste of money, as they don't offer compelling enough performance for a lot more power consumption; hence, elimination of this segment should be good for consumers (read: a lot less deception from OEMs).

Google being a bit 'deceptive' when it comes to patching up their own holes?

dividebyzero, trainee n00b, said:

By the way, Ivy Bridge prototypes are being circulated for testing etc. right now. So the 'good idea' has, in reality, come to pass.

True enough. The scale of Intel's roadmap is probably not that apparent until you see that Intel started work on incorporating FinFETs (3D tri-gate transistors) in 2007 - around the same time as Intel debuted the Wolfdale and Yorkfield Core 2 Duo/Quad, and eighteen months before Nehalem's introduction.

Hardly surprising that some people (including the people who came up with FinFETs) believe Intel's process is, in some cases, five years ahead of the competition.

[source]

Archean, TechSpot Paladin, said:

Indeed. Well, another interesting bit is that Intel is planning to put Atom on a tick-tock cycle as well in about a year's time; I don't know how this was missed in the news, but it would further indicate that Intel is speeding ahead of the competition, mainly due to AMD's slow pace of innovation.

Another dimension to this debate is that, by next year, Intel will be launching its own SoC for mobiles; now if they can offer such a solution with graphics performance not too far behind that of, say, Tegra 2/3, that will make things interesting, as they do have the tools to do this and stay within the required power envelopes.

red1776, Omnipotent Ruler of the Universe, said:

This is going to seem like a dumb question, but I'm going to ask anyway. I read that the reason for FinFETs is that the additional surface area of the tri-gate allows more current when on... and more control (less leakage) when off. Sooooo...

1) Does this mean that the electrons are one deep (so to speak) while running along the gate?

2) Is the next move (evolution) of these FinFETs to add fins on the fins, for more surface area and control?

To put it another way, as manufacturing control improves, will these gates look like microscopic CPU heatsinks?
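A rough sketch of the geometry argument behind this question, using the standard tri-gate approximation rather than any Intel-specific figures (the symbols below are generic, not from the thread): the gate wraps three sides of each fin, so the effective channel width per fin is

\[ W_{\mathrm{eff}} \approx 2H_{\mathrm{fin}} + W_{\mathrm{fin}} \]

and on-current scales roughly with \( n \cdot W_{\mathrm{eff}} \) for \( n \) parallel fins. Taller fins, or more fins in parallel, buy more drive current per unit of planar footprint, which is exactly the heatsink-like direction the question points at.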

ET3D, TechSpot Paladin, said:

Julio said:

Interesting claim from AMD. If so, that would mean death to a range of budget graphics cards, including their own.

Only for AMD systems. AMD's plan is to take market share from Intel at the low end thanks to Llano. If it succeeds beyond its wildest dreams, then there would be no place for its budget graphics cards. I'm pretty sure AMD is willing to take that risk.

ET3D, TechSpot Paladin, said:

Regarding the Android security fix, it's good to know that Google has managed to fix it on the server side. There was so much noise about users of older versions of Android being abandoned, and luckily that's not the case.

Archean, TechSpot Paladin, said:

A partial answer to your first question might be that indium gallium arsenide (InGaAs) FinFETs with fins ranging from 100-200 nm in length leak less current and reduce short-channel effects; the technique used in the construction of InGaAs FinFETs is called atomic layer deposition (ALD). This technique deposits an insulating film of aluminum oxide over the transistor fins in multiple layers, each layer just one atom thick.

One correction to my own assumption: Soitec started sampling 300 mm SOI wafers some time ago, with a silicon layer 12nm thick that is taken down to 5nm after processing (which uses 7nm); this should mean AMD may remain competitive in the short term (although Intel think UTB-SOI is not fast enough for them, hence the decision to go with FinFET, or rather tri-gate, in the first place).

Unfortunately I am at work and have to step away, but hopefully the best man to answer here, i.e. DBZ, will look into it and a) answer your questions, and b) correct us wherever we are wrong.

Edit:

I forgot to add a useful link in this regard; anyway, here it is. :o
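A back-of-envelope on the ALD point above, assuming the roughly 1 Å-per-cycle growth rate commonly quoted for Al2O3 ALD (a figure from the general literature, not from this thread): a 5 nm insulating film would take on the order of

\[ \frac{5\ \mathrm{nm}}{0.1\ \mathrm{nm/cycle}} = 50\ \mathrm{cycles} \]

which is why ALD can coat 100-200 nm fins so conformally and with such fine thickness control.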

dividebyzero, trainee n00b, said:

...is the next move (evolution) of these FinFETs to add fins on the fins, for more surface area and control?

To put it another way, as manufacturing control improves, will these gates look like microscopic CPU heatsinks?

You mean that the fin would be bifurcated (or further)? I don't think so, from what I understand of what Intel is looking at when they go smaller than 14nm. Multi-gate, but not a gate that is further branched (at least from what I gather from Intel's public docs - this one is a PDF: Fig. 6, page 3).

How much smaller than 14nm the III-V is aimed at I do not know - that's probably a question best asked of a chip architect, or at least someone with a better grounding in EUV lithography limits and architecture.

dividebyzero, trainee n00b, said:

Updating the discussion on chip process, Research@Intel Day has seen Intel talk up foundry process at the 8nm node, advanced lithography and a host of related matters that we've briefly touched on here.

pdf here

And a selection from the slide deck for your perusal...

[link]

[link]

[link]

Archean, TechSpot Paladin, said:

Thanks for the links to the slides, DBZ. Just a small issue though: the link to the PDF somehow wasn't made.

dividebyzero, trainee n00b, said:

Oops..

Fixed.

dividebyzero, trainee n00b, said:

Getting back to Llano...

AMD Italy have the official specs up.

Just in case they get taken down, here's a screengrab...

Archean, TechSpot Paladin, said:

Thanks DBZ. I think there is some massive ****-up involved here: when I opened the AMD Italy link it asked for a username/password, but even though I don't have one, it still opened the site.

Anyway, there is one thing intriguing me here, i.e. do these TDP figures include the GPU's TDP as well, or do they simply cover the CPU part of the die?

dividebyzero, trainee n00b, said:

I got the Akamai password/name popup also. I think AMD take their partnership/sponsorship a step too far sometimes (running ads during driver installs being a prime example). Anyhow...

The TDPs are CPU+GPU. They fit pretty much exactly with Sandy Bridge mobile (quelle surprise) at 35-45W. Independent benchmarking of power usage, battery life, and CPU and GPU performance (hopefully) shouldn't be too far away now that AMD have shown their hand with the SKU range.

Archean, TechSpot Paladin, said:

Indeed. If they stay within this power envelope and can compete with Intel (and I am sure they will surpass them when it comes to GPU performance), it will be great.

dividebyzero, trainee n00b, said:

First mobile Llano review up at Tom's Hardware. I don't think the desktop parts are going to set the world alight, but the mobile segment looks promising.

Route44, TechSpot Ambassador, said:

Llano's APU graphics blow away the Intel HD 3000, but the A8 quad falls behind a dual-core Sandy Bridge in applications, etc.

Link: http://techreport.com/articles.x/21099

Archean, TechSpot Paladin, said:

An 'okayish' CPU married with a 'reasonably good low-end IGP': not a bad mix, at least in the budget-oriented mobile segment. Seeing those battery times was a pleasant surprise, especially since I can get roughly 4.45-5.0 hours of battery time on this DV6 (i7-2630QM + 8GB + 6770M); although, I must admit, not when using the discrete GPU.
