Intel reveals Xe-HPG DG2 gaming GPU, confirms 512 execution units

midian182

What just happened? Rumors surrounding Intel's upcoming Xe-HPG PC gaming graphics card have been circulating for a while, and now we've got our first look at it. Well, the GPU. Company senior vice president Raja Koduri tweeted a photo of the chip, which uses the codename DG2, confirming that at least one of the SKUs will have 512 EUs.

Koduri tweeted that Intel has been testing the GPU at its Folsom, California lab, adding that game and driver optimization is the next step for the team. "Xe-HPG (DG2) real candy – very productive time at the Folsom lab couple of weeks ago," Koduri wrote. "lots of game and driver optimization work ahead for @gfxlisa's [Lisa Pearce's] software team. They are all very excited and a little scared."

Intel's Iris Xe DG1 GPU arrived earlier this year as an add-in card for pre-built desktop systems. With its 80 Execution Units and 4 GB of LPDDR4X offering 68 GB/s of bandwidth, it hasn't exactly set the world on fire; benchmarks suggest it can't even keep up with the $79, four-year-old Radeon RX 550.

The DG2, on the other hand, looks to offer something a lot more enticing. Koduri's photo confirms at least one SKU with 512 EUs, while leaks point to 384, 256, 196, and 128 EU variants as well. Each Xe EU contains eight shader cores, so the flagship works out to 4,096 shaders. The GPU shown here is expected to land somewhere between the RTX 3070 and RTX 3080, performance-wise.

Rumored Intel Xe DG2 mobile specifications

|                   | SKU 1     | SKU 2     | SKU 3   | SKU 4   | SKU 5   |
|-------------------|-----------|-----------|---------|---------|---------|
| EUs               | 512       | 384       | 256     | 196     | 128     |
| Boost Clock       | 1,100 MHz | 600 MHz   | 450 MHz | ?       | ?       |
| Turbo Clock       | 1,800 MHz | 1,400 MHz | ?       | ?       | ?       |
| Memory Capacity   | 16 GB     | 12 GB     | 8 GB    | 4 GB    | 4 GB    |
| Memory Speed      | 16 Gbps   | 16 Gbps   | 16 Gbps | 16 Gbps | 16 Gbps |
| Memory Type       | GDDR6     | GDDR6     | GDDR6   | GDDR6   | GDDR6   |
| Bus Width         | 256-bit   | 192-bit   | 128-bit | 64-bit  | 64-bit  |
| TDP (exc. memory) | 100 W     | ?         | ?       | ?       | ?       |
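
For what it's worth, those figures are easy to sanity-check. Below is a minimal Python sketch that takes the table's rumored numbers at face value; the eight-ALUs-per-EU figure matches Intel's Xe architecture, while the EU counts, bus widths (including 64-bit for both low-end SKUs), and 16 Gbps GDDR6 speed are all unconfirmed leaks.

```python
# Hypothetical sanity check of the rumored DG2 specs in the table above.
# 8 FP32 ALUs per EU matches Intel's Xe architecture; everything else is rumor.
ALUS_PER_EU = 8
MEM_SPEED_GBPS = 16  # rumored GDDR6 speed, per pin

skus = [  # (EUs, bus width in bits), per the leaked table
    (512, 256),
    (384, 192),
    (256, 128),
    (196, 64),
    (128, 64),
]

for i, (eus, bus_bits) in enumerate(skus, start=1):
    shaders = eus * ALUS_PER_EU
    bandwidth = MEM_SPEED_GBPS * bus_bits / 8  # Gbps per pin * pins / 8 = GB/s
    print(f"SKU {i}: {shaders} shaders, {bandwidth:.0f} GB/s")
```

Running that gives 4,096 shaders and 512 GB/s for the flagship, down to 1,024 shaders and 128 GB/s for the smallest SKU.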

Replying to another tweet, Koduri said Intel is looking at technologies such as FidelityFX Super Resolution (AMD's answer to Nvidia's DLSS) to improve the performance of its Xe-HPG PC graphics cards.

Intel is turning to TSMC to manufacture the DG2 GPUs, which are set to arrive in late 2021 or early 2022; Intel's Pete Brubaker said the card "is right around the corner." Whether graphics card availability and pricing will be any better than the current shambles by then remains to be seen.


 
512 EUs at 1.8 GHz would equate to a peak FP32 throughput of 14.75 TFLOPS - about the same as an RTX 3080 in mixed INT/FP32 processing, or an overclocked RTX 3060 in full FP32 mode. So that would be pretty reasonable.
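
For reference, the arithmetic behind that 14.75 TFLOPS figure, as a quick Python sketch (the 1.8 GHz clock is the rumored turbo clock from the table above, and eight FP32 ALUs per EU is the Xe architecture layout):

```python
# Peak FP32 throughput = EUs x ALUs per EU x 2 FLOPs per FMA x clock (GHz).
eus = 512
alus_per_eu = 8   # FP32 ALUs in each Xe EU
clock_ghz = 1.8   # rumored turbo clock

tflops = eus * alus_per_eu * 2 * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # -> 14.75 TFLOPS
```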

And if one assumes DG2 follows the same structure as DG1, it also equates to 256 TMUs and 128 ROPs - the latter will probably end up lower than that, but you never know. Either way, this would put it on par with an RTX 3080 for those metrics. Again, so far, so good.
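
The scaling assumption there, spelled out (a sketch; the ratios of 1 TMU per 2 EUs and 1 ROP per 4 EUs are inferred from the comment's own numbers, not confirmed DG2 figures):

```python
# Scale DG1-style TMU/ROP-per-EU ratios up to a 512 EU part.
eus = 512
tmus = eus // 2    # 1 TMU per 2 EUs -> 256
rops = eus // 4    # 1 ROP per 4 EUs -> 128
print(tmus, rops)  # 256 128
```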

But 512 GB/s of memory bandwidth? That's definitely not enough. The Radeon RX 6800 XT gets away with it thanks to its Infinity Cache, but Intel doesn't seem to have anything like that up its sleeve. Having said that, DG1 packs 4 MB of L3 cache, which is a huge amount for a low-end GPU, so there's hope yet.
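
And the 512 GB/s figure itself just falls out of the rumored bus width and memory speed; for context against Nvidia's published specs (a quick sketch):

```python
# Theoretical bandwidth = memory speed (Gbps per pin) x bus width (bits) / 8.
dg2_bw = 16 * 256 / 8  # 512 GB/s (rumored 16 Gbps GDDR6 on a 256-bit bus)
rtx_3070_bw = 448      # 256-bit, 14 Gbps GDDR6
rtx_3080_bw = 760      # 320-bit, 19 Gbps GDDR6X
print(dg2_bw, rtx_3070_bw, rtx_3080_bw)
```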

Intel could well have a decent product on their hands here, as long as the drivers and related software aren't completely borked at launch.
 
I find the low-end models particularly interesting. These could be great for upgrading older / office-type PCs. Right now, there isn't anything modern or interesting in that category, and the modern media de/encoder could help older PCs stay relevant for longer.

If they support FSR (I think Intel will) and FSR turns out to be at least decent, this should also allow for reasonable budget gaming.
 
Manufactured by TSMC... well, that will be another paper launch in my opinion, where only reviewers will manage to get their hands on it. Moreover, what about mining? Let's hope it won't be a strong miner GPU.
Was actually just thinking the same about how it will perform for mining lol.
 
I'm sure they'll be great for mining. After all, that's exactly the reason Intel wants in on the gfx market now. They want some of that miner money. Graphics cards aren't gaming chips that can mine. At this point they are mining chips that can game.
 
But 512 GB/s of memory bandwidth? That's definitely not enough. The Radeon RX 6800 XT gets away with it thanks to its Infinity Cache, but Intel doesn't seem to have anything like that up its sleeve. Having said that, DG1 packs 4 MB of L3 cache, which is a huge amount for a low-end GPU, so there's hope yet.

The RTX 3070 is 448 GB/s.

It might perform somewhere between the 3070 and 3080.
 
Manufactured by TSMC... well, that will be another paper launch in my opinion, where only reviewers will manage to get their hands on it. Moreover, what about mining? Let's hope it won't be a strong miner GPU.

They needed the 7nm work done by a competent manufacturer, i.e. TSMC, as Intel can barely get 10nm working in its own fabs.
 
They needed the 7nm work done by a competent manufacturer, i.e. TSMC, as Intel can barely get 10nm working in its own fabs.
But if they really just wanted to jump on the mining wagon, then why not do it on 14nm? ETH mining needs fast memory / frame buffer anyway... and doesn't give a f*ck about the GPU core.
 
But if they really just wanted to jump on the mining wagon, then why not do it on 14nm? ETH mining needs fast memory / frame buffer anyway... and doesn't give a f*ck about the GPU core.

I don't think they would have that much spare capacity... if they did, they wouldn't still have Rocket Lake shortages three months after launch!

https://wccftech.com/exclusive-dema...-intel-rocket-lake-due-to-substrate-shortage/

Also, the power consumption would be as bad as Turing.
 
TSMC? Well, any dreams of cheap cards flooding the market are firmly slapped down right there.

There can't be any flooding, because chip production was hit by the fake covid pandemic, so it will take 3-4 years to recover. Good luck to those who wanted to have fun playing games...
 