Intel won't benchmark GPUs with more than 768 shaders and 3GB memory

mongeese

Staff
Flashback: Intel was poised to be bravo six, going dark. Their covert GPU development program was confirmed. It took one burst for them to capture nearly a dozen industry specialists from other companies. They hunkered down. They went radio silent for a year. Then, they hired a marketing team.

That marketing team flipped the paradigm and introduced an unprecedented level of transparency into the development process. (I’m personally grateful to Intel’s marketing team for giving me so much to write about.) Their ploy has often worked. Early promises of ray tracing and a 10nm production node extinguished concerns rooted in Intel’s CPU strife. An early leak promising a GPU with 4096 cores quickly impressed, and only last week, photos of the largest GPU in development sparked a wave of new curiosity. But promises made years ago are quickly forgotten, and intangible specifications raise concerns over accuracy.

It's been five months since their last press event. In that period, we’ve seen only one Intel GPU make the rounds. Designed with software developers in mind, the Xe DG1 SDV won't impress consumers. As the only visible product, however, and given that it has plenty of RGB, that role has fallen upon it. Yesterday’s SiSoftware leak dispelled rumors suggesting it would have 128 EUs: it has 96 (the difference was an error that counted the CPU’s integrated graphics as part of the discrete solution). That’s a mediocre 768 shaders/cores. The database entry also showed it operating at 1.5 GHz, paired with 3GB of memory.
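For a rough sense of scale, here’s a back-of-the-envelope sketch in Python. The 8 FP32 ALUs per EU and 2 FLOPs per ALU per clock (fused multiply-add) are assumptions about Xe-LP on my part, not figures from the leak:

# Rough theoretical FP32 peak for the leaked DG1 SDV configuration.
# Assumed (not from the leak): 8 FP32 ALUs per EU, 2 FLOPs per ALU per clock (FMA).
eus = 96                 # from the SiSoftware entry
alus_per_eu = 8          # 96 EUs x 8 = 768 "shaders"/cores
clock_ghz = 1.5          # from the SiSoftware entry
shaders = eus * alus_per_eu
tflops = shaders * clock_ghz * 2 / 1000
print(f"{shaders} shaders, ~{tflops:.1f} TFLOPS FP32 theoretical peak")
# -> 768 shaders, ~2.3 TFLOPS FP32 theoretical peak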

That’s by no means an insufficient performance bracket for a device intended for developers. Released as the right consumer-oriented product, in laptops, say, as Intel demonstrated back in January, it may be perfectly competitive. But I will pose this question to Intel’s marketing team: why should anyone care?

A year ago, when this device was leaked alongside three Xe HP (high performance) GPUs with stunningly beautiful core counts, the leak as a whole was a good sign. That only the least impressive of the products has manifested is not a great sign. I’d almost say that last week’s photo of what turned out to be the largest of the previously leaked GPUs was an attempt to distract from that fact. What was very promising five months ago doesn’t look so rosy anymore.

Intel, choose: stop putting RGB on your developer cards and pretending gamers have something to look forward to soon, or outright give us something to be hopeful for.


 
Nothing is more dangerous to a company's reputation than an over-promising marketing team... remember "Poor Volta"?
 
It may be worth holding back with the pitchforks until more data is available, as some of the Gen 12 results in the SiSoft Sandra database strongly suggest there is a lot of work still to be done with the drivers.

For example, take this Gen 12 GP (GPU) Processing result:


Half-float GP Compute = n/a
Single-float GP Compute = 2496.32Mpix/s
Double-float GP Compute = 0.30Mpix/s
Quad-float GP Compute = n/a

It is possible that the chip has zero support for FP16 or FP128 calculations but this is extremely unlikely, as (a) all previous Intel iGPUs do support these and (b) FP16 is needed for games. Anyway, compare the scores to a random HD Graphics 610 (Gen 9.5 - 96 shaders) result I pulled out of the database:


Half-float GP Compute = 399.43Mpix/s
Single-float GP Compute = 216.38Mpix/s
Double-float GP Compute = 55.27Mpix/s
Quad-float GP Compute = 2.90Mpix/s

The FP32 value of the Gen 12 is more than 10 times the Gen 9.5 one, despite it not having 10 times more shader units, so clock speed differences must be coming into play here. An equivalent-shader-count GPU would be the GeForce GTX 1050, and here's one such result:


Half-float GP Compute = 2672.56Mpix/s
Single-float GP Compute = 2672.61Mpix/s
Double-float GP Compute = 100.34Mpix/s
Quad-float GP Compute = 3.96Mpix/s

If, and it's a Godzilla-sized if, the FP32 value for the Gen 12 is representative of its capabilities, then compute-wise it's on par with Pascal. So - only 2 generations behind where AMD and Nvidia will be by the time it's out :)
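For what it's worth, a quick Python sketch of theoretical peak FP32 throughput (shaders x clock x 2 FLOPs per clock) roughly supports that reading. The clock speeds below are typical published figures I'm assuming, not values pulled from these specific Sandra entries:

# Theoretical FP32 peaks: shaders * clock (GHz) * 2 FLOPs per clock (FMA).
# Clocks are typical/boost figures, not taken from these Sandra entries.
gpus = {
    "Xe DG1 SDV (Gen 12)": (768, 1.50),
    "HD Graphics 610 (Gen 9.5)": (96, 1.05),
    "GeForce GTX 1050": (640, 1.46),
}
for name, (shaders, clock_ghz) in gpus.items():
    print(f"{name}: ~{shaders * clock_ghz * 2 / 1000:.2f} TFLOPS FP32")
# The Gen 12 peak (~2.30) works out to roughly 11x the Gen 9.5 one (~0.20): 8x the
# shaders times ~1.4x the clock, which lines up with the ~11.5x gap in the
# single-float scores above, and a little ahead of the GTX 1050's ~1.87.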
 
Why would Intel cater to you? They have billions, you have nothing. Be real. Journalists will flock to their presentations whenever they deign to hold one. Your "demands" just might work against you.

That's just my unbiased pessimistic observation.

Given historical examples of journalistic power, you are vastly underestimating the fourth estate.
 
It means the following:
The first iteration of Xe goes into Tiger Lake mobile, plus a discrete GPU used for software tweaking, game engine optimizations, press, etc.
The second iteration, that is DG2, is still in development. As far as I know, it will be 7nm based, which, for all intents and purposes, probably isn't ready yet. So I would expect leaks on other higher-end products at the end of this year at the earliest.
 
It may be worth holding back with the pitchforks until more data is available, as some of the Gen 12 results in the SiSoft Sandra database strongly suggest there is a lot of work still to be done with the drivers.

For example, take this Gen 12 GP (GPU) Processing result:


Half-float GP Compute = n/a
Single-float GP Compute = 2496.32Mpix/s
Double-float GP Compute = 0.30Mpix/s
Quad-float GP Compute = n/a

It is possible that the chip has zero support for FP16 or FP128 calculations but this is extremely unlikely, as (a) all previous Intel iGPUs do support these and (b) FP16 is needed for games. Anyway, compare the scores to a random HD Graphics 610 (Gen 9.5 - 96 shaders) result I pulled out of the database:


Half-float GP Compute = 399.43Mpix/s
Single-float GP Compute = 216.38Mpix/s
Double-float GP Compute = 55.27Mpix/s
Quad-float GP Compute = 2.90Mpix/s

The FP32 value of the Gen 12 is more than 10 times the Gen 9.5 one, despite it not having 10 times more shader units, so clock speed differences must be coming into play here. An equivalent-shader-count GPU would be the GeForce GTX 1050, and here's one such result:


Half-float GP Compute = 2672.56Mpix/s
Single-float GP Compute = 2672.61Mpix/s
Double-float GP Compute = 100.34Mpix/s
Quad-float GP Compute = 3.96Mpix/s

If, and it's a Godzilla-sized if, the FP32 value for the Gen 12 is representative of its capabilities, then compute-wise it's on par with Pascal. So - only 2 generations behind where AMD and Nvidia will be by the time it's out :)
Not really true. Gen 12, the one inside Tiger Lake, will be out in August/September this year, with ~GTX 1050 performance. It might be behind Nvidia's upcoming Ampere GPUs, but AMD is behind it. Tiger Lake should match or exceed Renoir parts at 15W this year, at least in compute power. FPS in games will be a different matter because game engines aren't yet optimized for Intel GPUs, but with time they will be.
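As a rough cross-check of the compute claim, here's a theoretical-peak comparison in Python. The Tiger Lake clock is purely my assumption (it isn't confirmed); the Renoir and GTX 1050 figures are published boost clocks:

# Theoretical FP32 peaks: shaders * clock (GHz) * 2 FLOPs per clock (FMA).
parts = {
    "Tiger Lake Gen 12 (96 EU)": (768, 1.35),   # clock assumed, not confirmed
    "Renoir Vega 8 (Ryzen 7 4800U)": (512, 1.75),
    "GeForce GTX 1050": (640, 1.46),
}
for name, (shaders, clock_ghz) in parts.items():
    print(f"{name}: ~{shaders * clock_ghz * 2 / 1000:.2f} TFLOPS FP32")
# ~2.07 vs ~1.79 vs ~1.87 TFLOPS, so "match or exceed Renoir in compute" is at
# least plausible on paper, with real-world results depending on drivers.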
 
Given historical examples of journalistic power, you are vastly underestimating the fourth estate.

Yeah, you are equating journalistic power with this one author. I am not saying he doesn't matter, everybody matters; I am saying he is one, not all. He is presenting his demands to Intel, and I don't think Intel cares what he wants. If he thinks they should, fine, that's his opinion. I stand by what I've said.

edit: Personally, I don't think Intel has a GPU worth any article, if that helps with where "I stand". I even welcomed the F CPU series with open arms. That's how good their graphics of any sort are to me. But that's my opinion.
 
Not really true. Gen 12, the one inside Tiger Lake, will be out in August/September this year, with ~GTX 1050 performance. It might be behind Nvidia's upcoming Ampere GPUs, but AMD is behind it. Tiger Lake should match or exceed Renoir parts at 15W this year, at least in compute power. FPS in games will be a different matter because game engines aren't yet optimized for Intel GPUs, but with time they will be.
But is it really going to be *inside* TGL, or a separate chip on the same package, so basically an 'integrated' dGPU?

Asking because all the pictures so far show two separate dies. Does anyone know?
 
Yeah, you are equating journalistic power with this one author. I am not saying he doesn't matter, everybody matters; I am saying he is one, not all. He is presenting his demands to Intel, and I don't think Intel cares what he wants. If he thinks they should, fine, that's his opinion. I stand by what I've said.

edit: Personally, I don't think Intel has a GPU worth any article, if that helps with where "I stand". I even welcomed the F CPU series with open arms. That's how good their graphics of any sort are to me. But that's my opinion.

I don't necessarily agree with where you stand precisely, but I do understand what you are saying and agree in principle. I do appreciate the well-articulated response.
 
Why would Intel cater to you? They have billions, you have nothing. Be real. Journalists will flock to their presentations whenever they deign to hold one. Your "demands" just might work against you.

That's just my unbiased pessimistic observation.
If Intel believe that TechSpot is representative of a significant portion of their potential consumer base, then they'd do well to be interested in what it has to say. And you might argue that the TechSpot reader base is small, but one could also argue that TechSpot readers represent a helluva lot of mindshare, since I can't be the only one of us who makes PC purchasing decisions for all of my extended family and friends. Sorry for the random capital letters, it's my bloody Danish autocorrect.
 
Yeah, you are equating journalistic power with this one author. I am not saying he doesn't matter, everybody matters; I am saying he is one, not all. He is presenting his demands to Intel, and I don't think Intel cares what he wants. If he thinks they should, fine, that's his opinion. I stand by what I've said.

edit: Personally, I don't think Intel has a GPU worth any article, if that helps with where "I stand". I even welcomed the F CPU series with open arms. That's how good their graphics of any sort are to me. But that's my opinion.
I think you're roughly right. I don't think Intel is particularly interested in what I have to say (and nor should they be), but I know that they do read my articles, and maybe I'll end up as a dot point in a report.

However, I'm not so arrogant as to put too much stock in my own opinions. Do I think I know more than Intel's marketing team? No way, absolutely not. But I have written dozens of articles on Intel's GPUs and read possibly hundreds, and I've also read thousands of comments over the past few years. I very closely observe public opinion of Intel and their GPUs (not that comments necessarily reflect the majority) and I've seen and recorded shifts over time on many platforms, including Reddit and Twitter. I believe Intel's GPUs are currently perceived as neutral or negative by most serious technology enthusiasts. I firmly believe Intel knows this.

This article isn't aimed at Intel by any means (like, I could just email them y'know) but if Intel are going to take away anything from this they should know that we know that they're in trouble.

As an aside, they had a large presentation on Xe, one that could have been a reveal, cancelled about two months ago because of the coronavirus. I think that cancellation has cost them mindshare and positive associations, and I think they know that as well. I suspect they'll try a large marketing push as soon as it is viable IF they intend to sell GPUs to gamers this year.
 
The question is, do they really need the high-performance gaming market?
The OEMs will eat up anything they produce anyhow, and if you look at OEM systems, the 1650-and-below level seems to be where the volume is and data center / HPC where the money is.

Personally, I think at least their short-term goal is exactly this: the ability to offer complete bundles to OEMs (both desktop and laptop) and thus gain a larger share of the BOM.

As a positive side effect for Intel, this would weaken Nvidia and probably push it to fight AMD even more aggressively in the DIY / gaming market, depriving both of income.
 
The OEMs will eat up anything they produce anyhow, and if you look at OEM systems, the 1650-and-below level
Intel's last discrete era used that exact same strategy. It clearly didn't work, and it failed for the same problem Intel has right now: OEMs will know they will only be able to shift cheap/budget units if the Xe brand has something to shout about, whether it's a USP or a 'legendary status' SKU. In terms of the former, there doesn't look to be anything it could offer here that isn't already available from AMD/Nvidia.

The latter is feasible though: Intel could release a 'Titan-like' card, ridiculously expensive but with equally ridiculous performance. It doesn't matter if it barely sells, doesn't fit most cases, or needs a 1 kW PSU. If its performance sets it above the competition, then the Xe brand will get the right kind of coverage it needs.

As things stand right now, they've got an uphill battle against the names of Radeon and GeForce, and against their own integrated GPUs.
 
Intel are a smart company. They are currently making more money than at any point in their history because they can provide well-supported, enterprise-grade solutions, even though, at the same time, they have lost the performance-per-dollar battle to AMD in consumer silicon.

But that doesn't matter. The future doesn't look very bright for personal machines at this point. We are looking to cloud services to start providing our compute needs. Movie streaming is a thing, Android One is a thing, soon even game streaming will be commonplace, Office 365, etc. Phones and tablets are starting to replace laptops; I could go on.

I don't think Intel are going to make a powerful consumer-grade GPU because I don't think there is much future in that market. Certainly I would give up my expensive, cumbersome rig if I could just stream all my games via an iPad or something. In the end, a graphics card will be an expensive luxury compared to a gaming subscription service.

And this fits Intel's strategy: small GPUs for laptops and OEM parts, and enormous GPUs for the data center to provide cloud services. Remember, Intel is a business. They aren't trying to woo the enthusiasts; they are just making money. They missed out on the mobile boom, but they aren't going to miss out on the cloud revolution.

Oh, and we've learnt that if you put RGB on your product you'll get more press attention. So I expect everyone to do that going forward...
 
Intel's last discrete era used that exact same strategy. It clearly didn't work, and it failed for the same problem Intel has right now: OEMs will know they will only be able to shift cheap/budget units if the Xe brand has something to shout about, whether it's a USP or a 'legendary status' SKU. In terms of the former, there doesn't look to be anything it could offer here that isn't already available from AMD/Nvidia.

The latter is feasible though: Intel could release a 'Titan-like' card, ridiculously expensive but with equally ridiculous performance. It doesn't matter if it barely sells, doesn't fit most cases, or needs a 1 kW PSU. If its performance sets it above the competition, then the Xe brand will get the right kind of coverage it needs.

As things stand right now, they've got an uphill battle against the names of Radeon and GeForce, and against their own integrated GPUs.
Wasn't their last attempt actually aiming to offer the top gaming card? If someone buys a Dell desktop and goes for a discrete GPU like a 1050 or perhaps a 1650, would they care if it came with an Intel card instead, as long as it offered better multi-monitor support than Intel's iGPU and allowed for the same low-end gaming?
Intel, on the other hand, could offer OEMs pretty much a complete package with associated rebates.

Same for non-gaming laptops. Why not replace all those low-end Nvidia GPUs with their own? Not sexy, but we're talking about millions of units here, and the removal of non-Intel stickers.
 