Intel introduces gamer-focused Xe HPG microarchitecture

Shawn Knight

In a nutshell: Intel hasn't forgotten about hardcore gamers with its upcoming Xe graphics architecture. During its Architecture Day presentation on Thursday, the chipmaker revealed a fourth microarchitecture in the Xe graphics family that's purpose-built for serious gamers. Unfortunately, we'll have to wait until next year to see how it stacks up against offerings from competitors AMD and Nvidia.

In development since 2018, the Xe HPG microarchitecture is said to leverage the best aspects of the three designs already in progress – the graphics efficiency of Xe LP, the scalability of Xe HP and the compute efficiency of Xe HPC – to create a design optimized specifically for enthusiast gamers.

According to Intel, it’ll come with several new graphics features, including ray tracing support. Xe HPG will also pack a memory subsystem based on GDDR6, reducing overall cost. In comparison, Xe HP and Xe HPC are based on HBM, which is more expensive.

Raja Koduri, Intel’s chief GPU architect, joked that he still has the scars on his back from trying to bring expensive memory subsystems like HBM to gaming – a subtle jab at his former employer, AMD.

Tom’s Hardware further notes that Intel is also removing FP64 (64-bit floating point) support on Xe HPG, or at the very least, scaling it way back. Tensor cores could also get the boot, which would give the chipmaker more space for gaming-specific features.
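To see why shedding FP64 is a reasonable trade for a gaming part, consider the toy numpy sketch below (illustrative only, not anything Intel has published): games render fine in single precision, while long-running scientific accumulations drift visibly without double precision.

```python
# Toy illustration (not from Intel) of why games tolerate FP32 while
# scientific compute wants FP64: naive single-precision accumulation
# drifts visibly over a long running sum. Values are made up for the demo.
import numpy as np

n = 10_000_000
vals = np.full(n, 0.1)

fp32_sum = np.cumsum(vals.astype(np.float32))[-1]  # sequential FP32 accumulation
fp64_sum = np.cumsum(vals)[-1]                     # same accumulation in FP64

print(f"FP32 running sum: {fp32_sum:,.2f}")  # drifts well away from 1,000,000
print(f"FP64 running sum: {fp64_sum:,.2f}")  # effectively 1,000,000.00
```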

Key to developing Xe HPG in parallel with its other projects, Koduri said, was leveraging IP from the external foundry ecosystem. This means Xe HPG won’t slow down other projects that rely on Intel’s limited 10nm SuperFin capacity.

Early examples of Xe HPG are already being tested in Intel’s labs, we’re told, with plans to ship out to gamers sometime in 2021.


 
Tensor cores could also get the boot, which would give the chipmaker more space for gaming-specific features.
Intel will have almost certainly talked to all the larger developers and tried to gauge their interest in Nvidia's DLSS. If there's more uptake on its use, I wouldn't be surprised to see tensor cores remain, as there's nothing really proprietary to what Nvidia are doing in the technology (other than the neural network used).
 
Depending on when in 2021 Intel releases its gaming GPU, it may be going up against an Ampere refresh, or even Hopper and RDNA3. That would be a tough matchup.
 
Intel will have almost certainly talked to all the larger developers and tried to gauge their interest in Nvidia's DLSS. If there's more uptake on its use, I wouldn't be surprised to see tensor cores remain, as there's nothing really proprietary to what Nvidia are doing in the technology (other than the neural network used).

I've been impressed with the progress of AI-based image upscaling techniques in the last few years. The only problem I see, and it's a very big one, is the way it's implemented right now for graphics card drivers. In the case of DLSS, having to manually implement it per title is too much of a hassle; it's something that should be usable on nearly all titles. The quality of DLSS is good enough that it's nothing to scoff at, but it does make some sacrifices compared to non-real-time AI models. It should also be noted that Nvidia adds a slight sharpening filter to DLSS output to try to make up for those sacrifices.

Hopefully that problem can be cracked or an alternative solution can be found.
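
To make the "upscale, then sharpen" idea concrete, here's a minimal Python/Pillow sketch. This is not Nvidia's actual DLSS pipeline (which reconstructs detail with a trained neural network on tensor cores); the filenames, scale factor, and filter settings are all made up for illustration.

```python
# Minimal "upscale, then sharpen" sketch: a stand-in for the idea above,
# NOT Nvidia's DLSS, which infers new pixels with a neural network.
from PIL import Image, ImageFilter

def upscale_and_sharpen(path: str, scale: int = 2) -> Image.Image:
    img = Image.open(path)
    # Conventional bicubic upscale; DLSS would reconstruct detail instead.
    up = img.resize((img.width * scale, img.height * scale), Image.BICUBIC)
    # Light unsharp mask, analogous to the slight sharpening pass applied
    # to DLSS output to compensate for lost high-frequency detail.
    return up.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))

if __name__ == "__main__":
    # Hypothetical input/output files, purely for demonstration.
    upscale_and_sharpen("frame_720p.png").save("frame_1440p.png")
```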
 
But can it defeat the 2080Ti and 3000 series?

It doesn't need to. All it needs is to make money for Intel.

Intel and AMD are diversified in ways that Nvidia is only now trying to catch up to. So Nvidia is the only company that *needs* to have the performance crown to create sales from the halo effect. For Intel and AMD, it would be nice to be fastest but they have many other sources of income.
 
I was thinking of their attempt to get in on the x86 CPU market, when Intel denied them a license. If true, they have always wanted to diversify and tried to do so a long time ago.
 
That sucks, I didn't know about that. More competition in the CPU market would be great. Too bad they couldn't have bought or teamed up with Via (was it?), whose x86 license is currently being used by a Chinese CPU manufacturer.
 
I was thinking of their attempt to get in on the x86 CPU market, when Intel denied them a license. If true, they have always wanted to diversify and tried to do so a long time ago.

Couldn't they have done it through reverse engineering, just like AMD and others did back in the 286 - 486 days? If there's anyone with the R&D resources to do it, it would've been Nvidia.
 
I couldn't say. I guess anything is possible. Maybe they were not interested in starting from scratch, even if they did have the R&D to do so.
 
Do we really need HBM memory in the new graphics cards? I still remember when AMD first introduced HBM video cards, and they were nowhere near as impressive as Nvidia's lineup.
If HBM memory is not the core component responsible for a significant boost in performance, I would rather support manufacturers going for steady improvement priced competitively.
And I wish Intel the best of luck. 2.5 video card makers are always better than 1.5.
 
Do we really need HBM memory in the new graphic cards?
It's not going to be used in the general consumer cards, as mentioned in the news report: HBM will only be on the HP/HPC models, i.e. the 'Quadro' and 'Tesla' equivalents of Intel's lineup. The consumer 'GeForce'-style chips will be fitted with GDDR6, or possibly GDDR6X by the time they appear.
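
For a rough sense of the raw bandwidth trade-off, here's a back-of-the-envelope Python calculation. Intel hasn't disclosed Xe HPG's memory configuration, so the data rates and bus widths below are typical GDDR6/HBM2 figures assumed purely for illustration:

```python
# Back-of-the-envelope peak bandwidth for the two memory types discussed.
# All figures are typical/assumed values, not disclosed Xe HPG specs.

def peak_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth (GB/s) = per-pin data rate (Gb/s) * bus width (bits) / 8."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

gddr6 = peak_bandwidth_gb_s(14, 256)   # 14 Gb/s pins on a 256-bit bus
hbm2 = peak_bandwidth_gb_s(2, 2048)    # 2 Gb/s pins on two 1024-bit stacks

print(f"GDDR6, 256-bit @ 14 Gb/s: {gddr6:.0f} GB/s")  # 448 GB/s
print(f"HBM2, 2048-bit @ 2 Gb/s:  {hbm2:.0f} GB/s")   # 512 GB/s
```

So comparable bandwidth is achievable either way; HBM's advantages are power and board area, which is why it tends to land in the compute products rather than cost-sensitive gaming cards.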
 
So, it was Koduri's fault for bringing in HBM? And he's joking about that? AMD should have fired him right away for such an expensive failure. He stalled the development of AMD GPUs for at least 3 years. Was he maybe on Nvidia's payroll?
 
Couldn't they have done it through reverse engineering, just like AMD and others did back in the 286 - 486 days? If there's anyone with the R&D resources to do it, it would've been Nvidia.

They could, but they wouldn't need to start from scratch: a patent is only valid for a limited time. Nvidia could legally copy the Pentium III, as those patents have expired; patents are only valid for 20 years. It's the R&D cost, certain copyright restrictions, and certain newer instructions that prevent it.
 
It is my understanding that copying a CPU is not the problem; getting a license to use the x86 instruction set is.

Probably the only reason AMD was able to get a license was the x86-64 extension that AMD contributed to Intel's x86. Intel gave AMD a handshake out of a shared interest in combining x86 with the 64-bit extensions.
 
It doesn't need to. All it needs is to make money for Intel.

Intel and AMD are diversified in ways that Nvidia is only now trying to catch up to. So Nvidia is the only company that *needs* to have the performance crown to create sales from the halo effect. For Intel and AMD, it would be nice to be fastest but they have many other sources of income.
Also, Intel, as well as AMD, Nvidia and all the other tech companies, are not speculators on the stock exchange or Forex; they are tech companies. Their purpose is to advance the tech; money comes as a consequence.
 