JPR: Intel, AMD gain GPU share at Nvidia's expense

Matthew DeCarlo


Jon Peddie Research has published the graphics market's first quarter results, showing a 10.3% spike in shipments. JPR said the rebound was welcome after a weak holiday quarter, especially considering PC shipments were down 5% in the same period. GPU makers saw unusually low seasonal demand in the fourth quarter, and at the time JPR described the 7.8% year-on-year decline as unimpressive and disappointing.

JPR's enthusiasm for the unusual increase between the fourth and first quarters was tempered by a cautious outlook for the second quarter. The outfit noted that the average change between the holiday season and the first quarter is -4%, suggesting that vendors might be stocking up on parts. That could negatively affect the industry's second quarter performance as system builders bleed off their existing inventory.

| Vendor | Q1 2011 Share | Q4 2010 Share | Qtr-Qtr Growth | Q1 2010 Share | Yr-Yr Growth |
|--------|---------------|---------------|----------------|---------------|--------------|
| AMD    | 24.8%         | 24.2%         | 13.3%          | 21.5%         | 15.4%        |
| Intel  | 54.4%         | 52.5%         | 14.2%          | 49.6%         | 9.7%         |
| Nvidia | 20.0%         | 22.5%         | -1.7%          | 28.0%         | -28.4%       |
| Matrox | 0.05%         | 0.1%          | -11.8%         | 0.1%          | -16.6%       |
| SiS    | 0.0%          | 0.0%          | 0.0%           | 0.2%          | -            |
| VIA/S3 | 0.7%          | 0.8%          | 1.2%           | 0.7%          | 5.3%         |
| Total  | 100.0%        | 100.0%        | 10.3%          | 100.0%        |              |

In all, more than 125 million discrete and integrated graphics chips were shipped in the first quarter. According to the stats, Intel and AMD enjoyed gains at the expense of Nvidia. Intel celebrated its fifth quarter of selling processors with integrated graphics cores, or "Embedded Processor Graphics" (EPG), and saw a 9.7% year-on-year increase. AMD fared even better with a 15.4% boost, undoubtedly thanks to its Fusion chips.

During the same period, Nvidia's share slipped 28.4% from the year-ago quarter, further cementing the company's third-place position. Intel led the pack, controlling 54.4% of the market, well ahead of AMD's 24.8% cut. Nvidia trailed with a share of 20.0%. The remaining fraction of a percent was largely represented by Matrox and VIA/S3, who had a combined share of 0.75%, while SiS was entirely off the map this quarter.
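Note that the growth columns in the table track unit shipments rather than share points, so each vendor's quarterly growth follows from its share change combined with the 10.3% growth of the overall market. A quick sketch of that cross-check (using the rounded shares from the table, so small discrepancies versus JPR's exact figures are expected):

```python
# Rough cross-check of the table's quarter-over-quarter growth column.
# Growth is in unit shipments, so it follows from the change in share
# combined with the 10.3% growth of the overall market.

TOTAL_Q_GROWTH = 0.103  # overall market, Q4 2010 -> Q1 2011

def shipment_growth(q1_share, q4_share, total_growth=TOTAL_Q_GROWTH):
    """Approximate vendor unit growth from share change and market growth."""
    return (q1_share / q4_share) * (1 + total_growth) - 1

for vendor, q1, q4 in [("AMD", 24.8, 24.2), ("Intel", 54.4, 52.5), ("Nvidia", 20.0, 22.5)]:
    print(f"{vendor}: {shipment_growth(q1, q4):+.1%}")
# Prints roughly +13.0%, +14.3% and -2.0% -- close to the 13.3%, 14.2%
# and -1.7% reported, with the gaps down to rounding of the shares.
```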


 
That's a pretty interesting statistic. While Team Red and Team Green are battling it out, most people are playing for Intel (Team Blue?).
 
AMD's Fusion is still better than Intel's embedded graphics in the i3 and low-end i5. And cheaper at it too
 
That's a pretty interesting statistic. While Team Red and Team Green are battling it out, most people are playing for Intel (Team Blue?).
And pretty much always have done.
Intel's IGPs, and lately the on-die Clarkdale and on-CPU Sandy Bridge graphics, get sold in vast numbers to OEMs for entry-level to lower-mainstream systems. Between Intel's hardball (and sometimes illegal) march to market share and AMD's almost childlike approach to marketing, it's hardly surprising that Intel holds the numbers it does, even taking into account that Intel's discrete GPU market share presently sits at 0%.
AMD's Fusion is still better than Intel's embedded graphics in the i3 and low-end i5. And cheaper at it too
And if people bought a CPU primarily for its graphics ability then AMD might be on top of the world.
Couple of points to note:
1. Fusion actually seems to be cannibalizing AMD's own low-end discrete graphics market (down 3% yoy - AMD's Q1 CC transcript >>here<<)
2. As for Fusion being better than Intel's HD 2000... that kind of depends on what parameters you are looking at. I wouldn't say that these gaming benchmarks would enhance your argument. Zacate and Ontario might give Atom the run-around, but Zacate/Ontario vs. Core i3 is a mismatch... unless you're talking about unreleased parts (Llano/Sabine) and some crystal-ball gazing.
 
dividebyzero said:
And pretty much always have done.
Intel's IGPs, and lately the on-die Clarkdale and on-CPU Sandy Bridge graphics, get sold in vast numbers to OEMs for entry-level to lower-mainstream systems. Between Intel's hardball (and sometimes illegal) march to market share and AMD's almost childlike approach to marketing, it's hardly surprising that Intel holds the numbers it does, even taking into account that Intel's discrete GPU market share presently sits at 0%.

Yep, both of my parents' Dells came with onboard Intel graphics, although I did sneak an old 8800 GTS into my mom's computer. Not that she ever noticed.
 
dividebyzero said:
And if people bought a CPU primarily for its graphics ability then AMD might be on top of the world.
Couple of points to note:
1. Fusion actually seems to be cannibalizing AMD's own low-end discrete graphics market (down 3% yoy - AMD's Q1 CC transcript >>here<<)
2. As for Fusion being better than Intel's HD 2000... that kind of depends on what parameters you are looking at. I wouldn't say that these gaming benchmarks would enhance your argument. Zacate and Ontario might give Atom the run-around, but Zacate/Ontario vs. Core i3 is a mismatch... unless you're talking about unreleased parts (Llano/Sabine) and some crystal-ball gazing.

That's true. I was going more along the lines of the 890GX and the unreleased parts =P. I really didn't take the HD 2000 or HD 3000 into account =P. Thanks
 
What Z said about Intel dominating with Sandy Bridge, and thus dominating with their HD 3000 graphics, is right on. Other than AMD's Zacate with the E-350 and its HD 6310 on-die graphics, nothing is challenging Intel right now, especially in the laptop industry. Sell enough Sandy Bridge chips and you are selling their new HD 3000 graphics as well.

I am in the market for a new Lenovo laptop, but unless I pay considerably more money for the Nvidia option I'm stuck with the Intel HD 3000, which is less than thrilling.
 
DBZ, I think things are playing out in the direction we were talking about a few months ago. Having used a Sandy Bridge-based notebook with discrete graphics for a (little) while now, I can tell you that the IGP isn't nearly as bad as it used to be in the old days (the best part is that battery life of about 4:45+ is phenomenal for a quad-core notebook). I think as soon as AMD brings out something competitive in the mobile arena, nVidia's presence at least at the low end of the market will diminish even more. In the longer run, however, I think nVidia will probably focus more on the other segments of mobile computing, i.e. phones/tablets etc., which is a pretty smart move IMO. Interestingly enough, even here the competition is heating up; case in point, Samsung's Exynos platform offers slightly better performance than Tegra 2 in some situations (e.g. accelerated decoding for multiple multimedia codecs and formats), and if this trend of phone makers developing their own platforms continues, that will spell trouble for nVidia.
 
I think there comes a time when integrated graphics (whether by chipset or CPU) start gaining performance relative to new discrete cards, but I wouldn't see IGP platforms eating into the mainstream (desktop) for quite some time. Low-end discrete graphics, as you say, will surely end up evaporating, but for the short/medium term the cheap Dell/HP OEM generic system is still with us, and growing in developing markets. Nvidia's groundwork and aggressive pricing keep these OEMs in close partnership... for now. 1920x1080 is now (or soon will be) the predominant screen resolution, and that represents just too many pixels to push (at good IQ) for both the integrated hardware (core speed, shaders, performance/watt) and the available frame buffer.
Intel is probably on the right track with on-silicon GDDR (Ivy Bridge or Haswell) and has the process advantage (smaller node, 3D transistors, lower power requirements etc.), while AMD have the proven GPU technology but are in reality more than a full process node behind.

In the laptop segment I think discrete graphics should all but disappear except for desktop/workstation replacements, and that probably represents less of a concern for Nvidia IF their Project Denver SoC comes to fruition.

Nvidia seem to think that real-time ray-traced gaming is 3-4 years away, at which point you could ask: what do you need after photorealistic gaming? The answer is probably not much. Once gaming (assuming it ever moves away from DirectX 9) reaches this level, then I think we are close to moving full circle: GPUs taking on CPU attributes, and CPUs having fully integrated graphics (their own on-die GDDR memory etc.).

As for Nvidia in the handheld/phone market, their acquisition of Icera and their working relationship with Microsoft probably mean that they are well aware of how cutthroat the market is. A failure to make inroads there probably makes Nvidia a pretty good buyout target for a company that needs a quick boost into the mobile and GPGPU markets ;)
 
Nvidia seem to think that real-time ray-traced gaming is 3-4 years away, at which point you could ask: what do you need after photorealistic gaming? The answer is probably not much. Once gaming (assuming it ever moves away from DirectX 9) reaches this level, then I think we are close to moving full circle: GPUs taking on CPU attributes, and CPUs having fully integrated graphics (their own on-die GDDR memory etc.).

Are we going to see Larrabee surface again?
 
Are we going to see Larrabee surface again?
My guess is yes. Intel might have cancelled Larrabee 2 (or was it 3?), but as far as I'm aware the GPGPU project was never cancelled, only the retail (desktop) discrete card... and I've heard and read that the latter might not be completely set in stone either.
Whether Intel have the ability to go toe-to-toe with Nvidia and AMD is another matter entirely... although IP and knowledgeable staff probably aren't hard to come by if you have the kind of financial resources that Intel have. A lot would likely depend upon what (if any) GPU IP sharing/cross-licensing contracts are in place between Intel and AMD (from the $1.25bn settlement in 2009) and Intel/Nvidia (from the $1.5bn settlement in January 2011).
 
Intel is probably on the right track with on-silicon GDDR (Ivy Bridge or Haswell) and has the process advantage (smaller node, 3D transistors, lower power requirements etc.), while AMD have the proven GPU technology but are in reality more than a full process node behind.

I think on-silicon GDDR is a good idea, but I am not sure how this will play out with regard to battery life, because that is very important for most users. Perhaps the power they may save from going to a smaller node/3D transistors etc. compensates for this, but I can't find anything about it. Anyway, if nVidia eventually goes belly up in 3-5 years' time, I wonder who will show up at the door to buy it...
 
My guess is yes. Intel might have cancelled Larrabee 2 (or was it 3?), but as far as I'm aware the GPGPU project was never cancelled, only the retail (desktop) discrete card... and I've heard and read that the latter might not be completely set in stone either.
Whether Intel have the ability to go toe-to-toe with Nvidia and AMD is another matter entirely... although IP and knowledgeable staff probably aren't hard to come by if you have the kind of financial resources that Intel have. A lot would likely depend upon what (if any) GPU IP sharing/cross-licensing contracts are in place between Intel and AMD (from the $1.25bn settlement in 2009) and Intel/Nvidia (from the $1.5bn settlement in January 2011).


The last one canceled was Larrabee 3, I think, but it has been revised/renamed so many times and in so many different incarnations that it's hard to tell. It hasn't been able to keep pace with current graphics demands (appearance-wise). The last 'demo' was actually a bit of an embarrassment for them (an old scene from Half-Life, I think), with one synchronous action of rolling waves running at not-too-impressive frame rates, although they said the whole scene was ray traced.
Maybe some cross-licensing with AMD for the GPU multi-threading tech will help Larrabee along. I wonder if real-time ray tracing will be dropped from the project, as it seems to be its real choking point. But then with the HD 6000 and the Fermi series, that is fast becoming the only functional difference on the GPU side, isn't it?
 
@Archean
Not sure I follow.
For instance, the HD 2000/HD 3000 graphics in Sandy Bridge use DDR3 running at a nominal 1.5-1.65V over a 64-bit memory bus.
Samsung's (for example) GDDR5 runs at 1.35V and can be paired with a 64, 128, 192, 256, 320, 384 or 512-bit (and larger) memory bus.
You might also take into consideration that for any given clock rate GDDR5 has twice the bandwidth of DDR3, and while the introduction of DDR4 at 1.05-1.1V would lower power draw relative to DDR3 (and double bandwidth), the same process will likely be applied to GDDR5's successor... performance and voltage margin restored.
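If you want to sanity-check the "twice the bandwidth" part, the back-of-envelope math looks like this (the clock and bus-width figures below are purely illustrative, not tied to any particular part):

```python
# Peak memory bandwidth, back-of-envelope. DDR3 transfers 2 bits per pin per
# memory clock, GDDR5 transfers 4, hence twice the bandwidth at the same clock.
# Numbers below are illustrative examples only, not any specific product.

def peak_bandwidth_gbs(mem_clock_mhz, transfers_per_clock, bus_width_bits):
    """Theoretical peak bandwidth in GB/s."""
    return mem_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbs(800, 2, 64))    # DDR3-1600 on a 64-bit bus:  ~12.8 GB/s
print(peak_bandwidth_gbs(800, 4, 64))    # GDDR5, same clock and bus:  ~25.6 GB/s
print(peak_bandwidth_gbs(800, 4, 256))   # GDDR5 on a 256-bit bus:    ~102.4 GB/s
```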

@red
I think (personally) that rasterization in its present form is probably playing out the string. Nvidia have had ray tracing up and running for some time already, and thanks to some revenue opportunities (and risk sharers) they seem to be getting through their debugging and proof-of-concept suites in fairly good order. There is also the next stage of rasterization plus adaptive tessellation to consider (micropolygon rendering) (crappy html >here<, primo pdf >here<)... so all in all, it's like Intel bringing their T-ball bat to face Nolan Ryan.

This paper on Decoupled Sampling for Graphics Pipelines (also presented at Siggraph) is also worth a once-over (pdf link on the site).
Both micropolygon rendering and decoupled sampling have had heavy investment from Nvidia (along with a few other studies and technologies), and I think it's a certainty that Nvidia wouldn't be pushing these advances if they weren't looking to shape (or reshape) games rendering. Nvidia are obviously working steadily towards a very heavily GPU-compute future as far as games are concerned, which ties in with their GPGPU aspirations. It will be interesting to see whether AMD tries to foot it with Nvidia or whether they take a different path. I would think that if they choose the former then it will call for a fundamental change in design philosophy, certainly a new architecture, and some serious software development or third-party funding.
 
Okay, let me re-phrase, and please correct me if I am wrong. The way I understood this is that Intel plans to place GDDR memory on-board, i.e. separate from the main system memory; hence it will add to the power requirements/draw, which will surely have an impact on overall battery life. However, if there are reasonable power savings from going to a 22nm/3D transistor setup, they may save enough power to offset this drawback, and overall battery life may remain unchanged from Sandy Bridge or, in the best scenario, improve somewhat.
 
You could probably look at it from two viewpoints:
The first is that having dedicated GDDR memory should allow for less aggressive timings and lower-voltage system RAM, since the GDDR will take care of most of the heavy lifting, bandwidth-wise, that the RAM would normally be used for. You could also factor in shorter traces, a simpler VRM, lower latency and less demand on system RAM.
The second point is probably that GDDR on-die will become a necessity as more performance is required of the GPU. If core speed (voltage, amperage, heat) becomes restrictive due to the need to keep the package within a thermal limit and set TDP, then the onus falls upon shader pipeline count, memory bus width and memory speed to provide the performance gains... the last two being heavily favoured by GDDR.
Remember that 3D "tri-gate" transistors are already in use with memory ICs, so whatever gains they bring to the CPU/APU are gains for memory as well.
 