Intel Xe DG1 GPU is shipping and will release this year

mongeese

Bottom line: At Intel’s recent earnings report, CEO Bob Swan confirmed that the development of Xe GPUs is progressing well. Their low-power, budget option, codenamed DG1, is already shipping and could be available for the holiday season. Intel's high-performance gaming graphics chip is still in the early phases of testing but is looking good thus far.

After a 20-year hiatus, Intel is returning to the discrete graphics card market. They've divided their GPUs, collectively marketed as Xe, into four categories: low-power (LP), high-performance gaming (HPG), high-performance (HP), and high-performance compute (HPC).

The DG1 is their current LP product. It goes by a few names: as a developer card it was called the DG1 SDV, and in laptops it's been known as both the Iris Xe Max and Iris Xe Graphics. They're all the same chip, though: a 96 EU (execution unit, each equivalent to eight shaders) GPU built on Intel's 10nm SuperFin process. Leaks in various databases show it clocked at up to 1.5 GHz and paired with up to 3 GB of memory.
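For a rough sense of what those leaked figures imply, here's a quick back-of-the-envelope calculation. It assumes each EU's eight ALUs can each do one FP32 fused multiply-add (two FLOPs) per clock - an assumption based on the numbers above, not an official Intel specification:

```python
# Back-of-the-envelope DG1 estimate based on the leaked figures above.
# The FLOPs-per-clock assumption is ours, not an official Intel spec.
eus = 96            # execution units
alus_per_eu = 8     # "equal to eight shaders" per EU
clock_ghz = 1.5     # highest clock seen in database leaks
flops_per_alu = 2   # one fused multiply-add per clock (assumed)

shaders = eus * alus_per_eu
tflops = shaders * flops_per_alu * clock_ghz / 1000

print(f"Shader count: {shaders}")                    # 768
print(f"Peak FP32 estimate: ~{tflops:.2f} TFLOPS")   # ~2.30 TFLOPS
```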

The architecture of the DG1 (LP) versus a theoretical DG2 (HPG) model. Each blue square represents one execution unit. To learn more about the features of Intel's Xe architecture, check out our Intel Xe Graphics Preview v2.0.

Swan commented that the DG1 will "be in systems from multiple OEMs later in Q4," which suggests that the card might only be available in pre-built desktops.

Swan also officially commented on the DG2 for the first time, describing it as an HPG product that "will take our discrete graphics capability up the stack into the enthusiast segment." According to information Intel accidentally leaked, the DG2 models could have between 128 and 512 EUs, but it's possible that Intel has changed things up since then.


 
DG2 may surprise us. I am looking forward to it.

Same. I'm also pleasantly surprised by Intel's commitment to low-end and low-to-mid-tier gaming GPUs, segments that Nvidia and AMD have pretty much abandoned. I'd like to see these companies return to launching GPUs in these segments as part of their current lineups, and with the same feature set as high-end models.
 
I'm starting to feel sorry for Intel. Firstly, they are about to get a jolly good rogering from the Ryzen 5000 series, and now they decide to try competing with nVidia... Given AMD have spent years trying to catch up with nVidia in the GPU market, I just can't see this going well, but you never know! Keep your fingers crossed and your choccy starfish puckered!
 
Same. I'm also pleasantly surprised by Intel's commitment to low-end and low-to-mid-tier gaming GPUs, segments that Nvidia and AMD have pretty much abandoned. I'd like to see these companies return to launching GPUs in these segments as part of their current lineups, and with the same feature set as high-end models.
They abandoned it because it's a pointless segment at the moment; the low-end stuff Nvidia and AMD release is barely any faster than what's built into Intel CPUs.

There are plenty of videos on the subject; GPUs in the sub-$100 range make no sense.

 
They abandoned it because it's a pointless segment at the moment; the low-end stuff Nvidia and AMD release is barely any faster than what's built into Intel CPUs.

There are plenty of videos on the subject; GPUs in the sub-$100 range make no sense.

Yeah, but the point is: who's to blame for that? It's because AMD and Nvidia keep selling GPUs with 10+ year-old technology in the $50-$100 price range. It wasn't always like this, though. Remember the Radeon 9200? The GeForce 6200? The 8400 GS? They were all current-gen budget cards in their time, with the same feature set as higher-end cards (or nearly so, minus a few things), and they offered pretty interesting bang for buck.

It became a pointless segment because AMD and Nvidia's business model made it pointless on purpose.
 
Yeah, but the point is: who's to blame for that? It's because AMD and Nvidia keep selling GPUs with 10+ year-old technology in the $50-$100 price range. It wasn't always like this, though. Remember the Radeon 9200? The GeForce 6200? The 8400 GS? They were all current-gen budget cards in their time, with the same feature set as higher-end cards (or nearly so, minus a few things), and they offered pretty interesting bang for buck.

It became a pointless segment because AMD and Nvidia's business model made it pointless on purpose.
No, it became pointless because integrated GPUs actually started doing more than 2D and can compete with GPUs in that range.

Sure, Nvidia and AMD could lower pricing across the board, but why would they? What incentive do they have to lower prices across their entire product stack so they can randomly start selling decent GPUs in the $50-$120 range?
 
No, it became pointless because integrated GPUs actually started doing more than 2D and can compete with GPUs in that range.

Sure, Nvidia and AMD could lower pricing across the board, but why would they? What incentive do they have to lower prices across their entire product stack so they can randomly start selling decent GPUs in the $50-$120 range?

Not really. When it comes to integrated graphics, only AMD's APUs offer acceptable performance for light gaming; nothing else on the market does.

You are coming off as a corporate apologist and conveniently ignored most of what I said in my previous comment, so we're done here and I won't be replying further.
 
Not really. When it comes to integrated graphics, only AMD's APUs offer acceptable performance for light gaming; nothing else on the market does.

You are coming off as a corporate apologist and conveniently ignored most of what I said in my previous comment, so we're done here and I won't be replying further.
Here's a video; I'll let someone else explain it to you:

 
AMD and Nvidia keep selling GPUs with 10+ year-old technology in the $50-$100 price range.
Part of the reason we're not seeing $50 RDNA or Turing graphics cards is that the GPUs themselves are a lot more expensive to manufacture than a six-year-old, Kepler-design 28nm chip. There wouldn't be much of a profit margin in using the latest process node to churn out little GPUs.

Of course, there is some merit to the argument that AMD, for example, could design a very small Navi 14 - cutting the components right back to make it a similar size to the likes of the GeForce GT 710's GPU (which is just 90 square mm). This is the chip in question:

[Image: Navi 14 die shot]


This is just under 160 square mm, so one would need to roughly halve it. The obvious things to reduce would be the number of CUs (and thus the SPs, TMUs, ROPs, etc.) and the memory controllers; the rest of the chip would have to stay.

So what would you have? Halving this chip's CUs would leave you with a 12 CU (768 SPs, 48 TMUs, 16 ROPs) GPU on a 64-bit memory bus. That's better than something like a Radeon RX 550 (although the memory is about the same), but it would eat into AMD's N7 wafer allocation at TSMC - capacity that's already under heavy demand from Ryzen and Navi processors.
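To make that halving arithmetic explicit, here's a quick sketch. The per-CU figures (64 SPs and 4 TMUs per compute unit) are standard RDNA numbers rather than anything from this thread, and the result is purely a thought experiment, not a real product:

```python
# The "half a Navi 14" thought experiment from the post above.
# Per-CU figures are standard RDNA numbers; this is illustrative only.
navi14_cus = 24        # compute units in the full Navi 14 die (~158 mm^2)
navi14_bus_bits = 128  # memory bus width
navi14_rops = 32       # render output units
sps_per_cu = 64        # stream processors per CU
tmus_per_cu = 4        # texture units per CU

half_cus = navi14_cus // 2          # 12 CUs
half_bus = navi14_bus_bits // 2     # 64-bit bus
half_rops = navi14_rops // 2        # 16 ROPs
half_sps = half_cus * sps_per_cu    # 768 SPs
half_tmus = half_cus * tmus_per_cu  # 48 TMUs

print(half_cus, half_sps, half_tmus, half_rops, half_bus)
# 12 768 48 16 64 -- matching the 12 CU / 768 SP / 48 TMU / 16 ROP / 64-bit figure
```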

And since this would have to be sold for less than $100, the tiny profit margin and limited wafer capacity don't give AMD much incentive to do it. Far better to churn out cheap old processors, on nodes with far less demand, to serve the sub-$100 market.
 
Neeyik, that makes sense. Thank you for the very informative reply.

Since manufacturers seem to be having it tougher and tougher on that front, would you agree that we are approaching the physical limits of silicon chip miniaturization? Then again, lots of people were already saying this 30 years ago when chips were around 1000nm.
 
Doesn't the used market cater for sub-$100 GPUs?

Though I don't think it's fair to bring the used market into the equation (personally, I never buy used parts - too risky), if money is the only issue then yes, it does.

There are other benefits to budget-segment GPUs, however, that have always made them interesting in my opinion: their small board size and lower heat output (with some models getting by with passive cooling), plus reduced power draw without the need for auxiliary power connectors. The efficiency of some models is really interesting.
 
Though I don't think it's fair to bring the used market into the equation (personally, I never buy used parts - too risky), if money is the only issue then yes, it does.

There are other benefits to budget-segment GPUs, however, that have always made them interesting in my opinion: their small board size and lower heat output (with some models getting by with passive cooling), plus reduced power draw without the need for auxiliary power connectors. The efficiency of some models is really interesting.

Places like eBay have strong buyer protection, so it's not as risky anymore.
 
Since manufacturers seem to be having it tougher and tougher on that front, would you agree that we are approaching the physical limits of silicon chip miniaturization? Then again, lots of people were already saying this 30 years ago when chips were around 1000nm.
We are approaching the limits, but they're still some way off. TSMC are currently working on improving N7, while developing N5 and N3 at the same time. The latter is targeted to have a die density 3 times that of N7, although it won't be ready for volume production for another 2 to 3 years.

Samsung are also working on similar improvements:

[Image: Samsung Foundry Forum 2019 roadmap slide]


Nvidia's GA102 is made on the 8LPP node, so if they plan on sticking with Samsung, there's clear scope for future monolithic designs to continue the current trend of 'more of everything.'

What kind of chip could one have with 3 times more logic density than seen in the GA102? For the same sized die, that would give you a transistor count of over 80 billion (the GA102 is 28.3b, the GA100 is 54.2b), so even though we're not going to get anywhere near that level anytime soon, it shows that the limits are still comfortably some distance away.
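As a quick check on that 80-billion figure, using only the transistor counts quoted above and the claimed 3x density improvement:

```python
# Projecting a GA102-sized die at three times the logic density.
ga102_bn = 28.3    # GA102 transistor count, billions (quoted above)
ga100_bn = 54.2    # GA100, for comparison
density_gain = 3   # claimed density improvement over the current node class

projected_bn = ga102_bn * density_gain
print(f"Same-size die at 3x density: ~{projected_bn:.1f} billion transistors")
# ~84.9 billion -- i.e. "over 80 billion"
```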
 
After nVidia cancelled the most anticipated GPUs, presumably because of a lack of threat from AMD, I really hope Intel brings something to the table, since this nonsense cannot continue...
 
Typically, initial runs on an entirely new manufacturing process will involve small chips - you get a better sense of the wafer yields this way, as the bin distribution will be larger and have more entries in it. Jumping straight in with hulking big dies will produce the opposite results.
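That reasoning can be sketched with the usual Poisson defect-density yield model; the defect rate and die sizes below are purely illustrative assumptions, not figures from TSMC or from this thread:

```python
import math

# Classic Poisson yield model: defect-free fraction = exp(-defect_density * die_area).
# Defect rate and die sizes are illustrative assumptions, not real foundry data.
defect_density = 0.5   # defects per cm^2, typical of an immature node
dies = {
    "small die (~100 mm^2)": 1.0,   # area in cm^2
    "large die (~600 mm^2)": 6.0,
}

for name, area_cm2 in dies.items():
    good_fraction = math.exp(-defect_density * area_cm2)
    print(f"{name}: ~{good_fraction:.0%} defect-free")
# small die: ~61%, large die: ~5% -- hence small chips go first on a new process
```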

For example, TSMC's N7 first churned out chips in mid-2016, but the yields were pretty poor; low-volume production commenced in 2017 and high-volume in early 2018, although for fewer than 20 products. By 2019, though, over 100 products had been taped out on that node, with AMD's Navi 10 being one of them.

So while TSMC is certainly planning for volume production within 2 years, it's more likely to be 3 before we see any large chips on the process. Much depends on how well everything goes; it is another EUV FinFET technology (same as N7+) and N6 is planned for volume production this year, so it might run ahead of expectations.
 