Intel Xe DG2 GPUs could be "around the corner," or they could be months away

mongeese

Staff
Why it matters: Intel’s Discrete Graphics 2 (DG2) GPUs have been bobbing on the horizon for years. Now, while demand is insatiable and rival manufacturers have nothing new close to ready, is the optimal time for Intel to start shipping their GPUs.

On Thursday, Intel’s Pete Brubaker said that "DG2 is right around the corner" in a tweeted advertisement for an opening on Intel’s development team. The listing is for a 'Senior Game Developer Relations Engineer' who can help game developers optimize their titles for new hardware. It’s a job that needs to start months before the launch of a new architecture, and Intel is only now hiring for it, which already puts Brubaker’s comment at odds with reality.

Igor Wallossek at Igor’s Lab says that Intel has already approached at least Asus and MSI to open negotiations. But he also says he’s been told by trusted sources that Intel will begin manufacturing the smaller DG2 chips in November or December, and the larger chips in January or February of 2022. A timeline like that would break Intel’s promise that DG2 would launch this year, but Wallossek has a better track record for accuracy than Intel’s infamous multi-year forecasts.

The report published by Igor’s Lab also includes some new specifications of the DG2 stack.

Intel’s repeatedly confirmed that they’re using 512, 384, 256, 192, and 128 EU models, where each EU is roughly as performant as eight cores (shaders). Here are some newly leaked details of the clock speeds and memory configurations of the laptop versions.

Rumored Intel Xe DG2 mobile specifications

|                   | SKU 1    | SKU 2    | SKU 3    | SKU 4   | SKU 5   |
|-------------------|----------|----------|----------|---------|---------|
| EUs               | 512      | 384      | 256      | 192     | 128     |
| Boost Clock       | 1100 MHz | 600 MHz  | 450 MHz  | ?       | ?       |
| Turbo Clock       | 1800 MHz | 1400 MHz | ?        | ?       | ?       |
| Memory Capacity   | 16 GB    | 12 GB    | 8 GB     | 4 GB    | 4 GB    |
| Memory Speed      | 16 Gbps  | 16 Gbps  | 16 Gbps  | 16 Gbps | 16 Gbps |
| Memory Type       | GDDR6    | GDDR6    | GDDR6    | GDDR6   | GDDR6   |
| Bus Width         | 256-bit  | 192-bit  | 128-bit  | 64-bit  | 64-bit  |
| TDP (exc. memory) | 100 W    | ?        | ?        | ?       | ?       |

The second anniversary of the leak that first revealed DG2’s 512, 256, and 128 EU configurations is two months away. When I wrote about that leak, I commented that "a 512 EU graphics core operating at 1800 MHz would reach 14.7 TFLOPS, a little more than an RTX 2080 Ti." Two years ago that was thrilling, and while it’s still superb performance for a mobile GPU, it’s no longer flagship performance.
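For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope sketch in Python. It assumes eight shaders per EU (as described above) and two FP32 operations per shader per clock (one fused multiply-add), which is the usual convention for these peak-throughput figures; real-world performance will depend on the final Xe-HPG architecture.

```python
# Back-of-the-envelope FP32 throughput estimate for the rumored DG2 SKUs.
# Assumes 8 shaders (ALUs) per EU and 2 FLOPs per shader per clock (one FMA),
# which is the convention behind the 14.7 TFLOPS figure quoted above.

def peak_tflops(eus: int, clock_mhz: float,
                shaders_per_eu: int = 8, flops_per_clock: int = 2) -> float:
    """Theoretical peak FP32 throughput in TFLOPS."""
    return eus * shaders_per_eu * flops_per_clock * clock_mhz * 1e6 / 1e12

print(f"512 EUs @ 1800 MHz: {peak_tflops(512, 1800):.1f} TFLOPS")  # ~14.7
print(f"128 EUs @ 1800 MHz: {peak_tflops(128, 1800):.1f} TFLOPS")  # ~3.7
```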

I don’t like to discredit products before they’re released, and of course, Xe is an entirely new architecture whose performance can’t be easily predicted. But I’m worried that Intel is missing their moment. It’s clear, for example, that Rocket Lake would've been a more competitive product if Intel hadn’t delayed it.

Unless it could reach absurdly high clock speeds, a 512 EU GPU with midrange GDDR6 memory couldn't compete with an RTX 3090 on the desktop. But if Intel could bring something with similar specifications to those above to the laptop market before Nvidia was ready, then they could begin dethroning the leather-clad chef and his cooks.
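To put a number on "absurdly high," here is a rough sketch, under the same eight-shaders-per-EU, two-FLOPs-per-clock assumption as above, of the clock a 512 EU part would need to match the roughly 36 TFLOPS of peak FP32 throughput an RTX 3090 offers.

```python
# Rough check of the "absurdly high clock speeds" claim: what clock would a 512 EU
# part need to match an RTX 3090's roughly 36 TFLOPS of peak FP32 throughput?
# Assumes 8 shaders per EU and 2 FLOPs per shader per clock, as in the sketch above.

TARGET_TFLOPS = 36.0        # approximate RTX 3090 peak FP32 throughput
SHADERS = 512 * 8           # 512 EUs x 8 shaders per EU

required_ghz = TARGET_TFLOPS * 1e12 / (SHADERS * 2) / 1e9
print(f"Required clock: ~{required_ghz:.1f} GHz")  # ~4.4 GHz, far beyond typical GPU clocks
```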

Intel has a brief window of opportunity to enter and rattle the GPU market. Now we’ll watch and see if they’ll be ready in time.


 
"14.7 TFLOPS, a little more than an RTX 2080 Ti, ...no longer flagship performance". Does anyone care about flagship performance in our current situation? If it matches an 1070 or 1080GTX, it`s cheap, that would mean plenty of supply and no or limited mining performance then Intel will strike gold.
 
"14.7 TFLOPS, a little more than an RTX 2080 Ti, ...no longer flagship performance". Does anyone care about flagship performance in our current situation? If it matches an 1070 or 1080GTX, it`s cheap, that would mean plenty of supply and no or limited mining performance then Intel will strike gold.
Probably not. Neither AMD nor Nvidia produces GPUs as fast as they could. Both know very well that the crypto bubble will break, and that means tons of used GPUs on the market. It has happened twice and will happen once again. It's much harder to sell next-generation products (RX 7000/RTX 4000?) if the market is flooded with previous-generation products.
 
We need a fresh contender.
 
Probably not. Neither AMD nor Nvidia produces GPUs as fast as they could. Both know very well that the crypto bubble will break, and that means tons of used GPUs on the market. It has happened twice and will happen once again. It's much harder to sell next-generation products (RX 7000/RTX 4000?) if the market is flooded with previous-generation products.

I think it's less about selling new product and more about chip shortages. GloFo kinda shot themselves in the foot. TSMC has several fabs being built in Arizona, but those won't be done until later. That leaves the existing TSMC and Samsung fabs for anything not Intel. That means AMD CPUs, GPUs, chips for Chevy, Ford, and Dodge, and every smartphone that is built. We also have the rollout of 5G phones like crazy now. Memory modules, SSDs being the norm over HDDs now, etc. We've moved so much stuff to being computer-driven that we've outrun our suppliers.

As far as top GPU performance goes, after seeing the 6900 XT benchmarks the other day, I'd say AMD is back in the game there.
 
Like I said on a different site: no way DG2 512 EU will come in 2022!

It will come this year, otherwise it will be a fail on multiple counts. And Intel knows that... also this is not their CPU division - different team, different architecture and different process node.

I'm expecting the entire lineup, or at least the big DG2, to come by the end of Q3, or at the latest the end of Q4.
 
I think it's less about selling new product and more about chip shortages. GloFo kinda shot themselves in the foot. TSMC has several fabs being built in Arizona, but those won't be done until later. That leaves the existing TSMC and Samsung fabs for anything not Intel. That means AMD CPUs, GPUs, chips for Chevy, Ford, and Dodge, and every smartphone that is built. We also have the rollout of 5G phones like crazy now. Memory modules, SSDs being the norm over HDDs now, etc. We've moved so much stuff to being computer-driven that we've outrun our suppliers.

As far as top GPU performance goes, after seeing the 6900 XT benchmarks the other day, I'd say AMD is back in the game there.
Yeah, it's partly about manufacturing capacity. But AMD can still choose what to make with its limited 7nm wafers. They could make more GPUs, but instead they make something else, because they know from experience what happens when crypto suddenly loses popularity. Same with Nvidia. They also know that when cryptomining is no longer profitable, the market will be flooded with used cards.

Making more GPUs is a low priority for both AMD and Nvidia. They both know crypto can crash very quickly.
 
Oh good grief, those memory bus widths. If Intel wants to compete, they need to bring those widths up. 64-bit is unacceptable and 128-bit is pathetic. 128-bit needs to be the starting point and 512-bit the upper end if they actually want to compete with AMD and Nvidia.
 
I think it means Nvidia will have to up their hardware encoders. I think a lot of people were disappointed that Nvidia made no improvements in the 3000 series. How is it that AMD can do so much on so many fronts? Nvidia did not care enough about end consumers (they are like Intel: they will only give more if pressured), and AMD's encoding is not good.
However, Intel's is much better than Nvidia's, so Nvidia needs to work hard on this. But will they care?
 
Oh good grief, those memory bus widths. If Intel wants to compete, they need to bring those widths up. 64-bit is unacceptable and 128-bit is pathetic. 128-bit needs to be the starting point and 512-bit the upper end if they actually want to compete with AMD and Nvidia.
Do note that those are claimed specifications for the laptop versions. Nvidia's GTX 1650 mobile chips are all 128-bit, for example; the RTX 3050 mobile is also expected to be the same. And while 64-bits seems far too narrow, that's the same width as single channel system DRAM.
 
Do note that those are claimed specifications for the laptop versions. Nvidia's GTX 1650 mobile chips are all 128-bit, for example; the RTX 3050 mobile is also expected to be the same.
Yeah, it's still pathetic.
And while 64-bits seems far too narrow, that's the same width as single channel system DRAM.
Thank You for illustrating my point...
 
Intel had the chance to get into the GPU market over 10 years ago. They've wasted so much time and money that I think this will be a massive flop, especially if the crypto boom crashes again.
 
Intel had the chance to get into the GPU market over 10 years ago. They've wasted so much time and money that I think this will be a massive flop, especially if the crypto boom crashes again.
Not this time, if they launch this year.

It will only flop if they delay and launch against RTX 4000 and RDNA 3.
 
Probably not. Neither AMD nor Nvidia produces GPUs as fast as they could. Both know very well that the crypto bubble will break, and that means tons of used GPUs on the market. It has happened twice and will happen once again. It's much harder to sell next-generation products (RX 7000/RTX 4000?) if the market is flooded with previous-generation products.

When the market was flooded with 10-series products, as soon as everyone saw the new prices and performance of the 20 series, people bought all the 1080 Tis. It's much easier to sell next-generation products if they are significantly better and priced fairly compared to the previous generation. The 30 series is a big jump in performance without a jump in price; if they had been available, they would have sold tons of them, while any old 20-series stock would have sat on store shelves or in warehouses gathering dust.
 
Yeah, it's still pathetic.

Thank You for illustrating my point...
I would disagree that I have illustrated your point. A PCI Express 4.0 x16 interface is 16 bits wide, so at 16 GT/s that gives a peak bandwidth of 32 GB/s. I don't think anyone is arguing that 16 bits of width is unacceptable in that scenario, and I would suggest that the situation is similar for what's being claimed for the DG2 lineup.

The table shows that the 64-bit SKUs will apparently be sporting the same 16 Gbps GDDR6 as the 256-bit ones, so (if correct) the peak memory bandwidths will range from 128 GB/s to 512 GB/s. For the most basic model, this is roughly 60% more bandwidth than a GeForce MX450 and 150% more than what the integrated GPUs in the likes of the Core i5-11400 have access to.

It is 20% less than that of a GeForce GTX 1650 Mobile offering, though SKU 5 seems to be targeted as the next step up from an integrated GPU. The one in the i5-11400 only has 32 EUs, so at 128 EUs, SKU 5 is a substantial improvement.

So while a 64-bit memory bus might seem to be awful, the use of fast GDDR6 (16 Gbps is the fastest produced by Samsung) easily mitigates this. It also means that only two DRAM modules will be required, helping to keep costs down. Whether that reduced cost is passed on to the consumer is another matter entirely, though.
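To make those bandwidth figures concrete, here is a small sketch of the arithmetic, using the rumored bus widths and the 16 Gbps GDDR6 speed from the table above (all of which, again, are unconfirmed leaks).

```python
# Peak memory bandwidth = bus width (bits) x data rate (Gbps per pin) / 8 bits per byte.
# Bus widths and the 16 Gbps GDDR6 speed come from the rumored table above; treat them as unconfirmed.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

for width in (256, 192, 128, 64):
    print(f"{width:>3}-bit @ 16 Gbps GDDR6: {bandwidth_gbs(width, 16):.0f} GB/s")
# 256-bit -> 512 GB/s, 192-bit -> 384 GB/s, 128-bit -> 256 GB/s, 64-bit -> 128 GB/s

# For comparison: PCIe 4.0 x16 is 16 lanes at 16 GT/s, roughly 32 GB/s each way
# (ignoring encoding overhead), and single-channel DDR4-3200 system memory is
# 64 bits at 3.2 Gbps per pin, i.e. 25.6 GB/s.
print(f"PCIe 4.0 x16: ~{bandwidth_gbs(16, 16):.0f} GB/s")
print(f"Single-channel DDR4-3200: {bandwidth_gbs(64, 3.2):.1f} GB/s")
```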
 
When the market was flooded with 10-series products, as soon as everyone saw the new prices and performance of the 20 series, people bought all the 1080 Tis. It's much easier to sell next-generation products if they are significantly better and priced fairly compared to the previous generation. The 30 series is a big jump in performance without a jump in price; if they had been available, they would have sold tons of them, while any old 20-series stock would have sat on store shelves or in warehouses gathering dust.
Exactly. You're right. But, as you said, IF the 30 series had been widely available, no one would have bought the 20 series. From Nvidia's point of view that makes sense because... right, it makes no sense. The obvious solution to the problem is to limit 30-series availability.
 