Intel Xe Graphics Preview: What we know (and what we don't)

Whaaa..?
NO, Nvidia's new chip is already being worked on. It takes years to design a chip; Nvidia can't simply react to a new card on the market and come out with something in response. It doesn't work like that.

Nvidia is all done making GPUs for right now and is instead waiting another year before releasing their new Ampere chip. There is nothing left in Turing for gamers except excessive prices and marketing gimmicks...

Nvidia's greed has left the market wide open for AMD and Intel to claim.

You really believe that Nvidia decided to release the latest generation (Turing) on TSMC's 12nm instead of TSMC's 7nm because they are "all done making GPUs for right now"? Come on, man.

Nvidia hasn't moved to TSMC's 7nm because they don't have to. They have no competition at the top (ahem, AMD), and their offerings at 12nm have profit margins that are just fine.

What if AMD's 5700 XT had actually beaten the 2080 Ti? Nvidia could just do a die shrink of Turing to 7nm and easily beat AMD again. But AMD's newest offering, the 5700 XT at 7nm, can't even compete with Nvidia at 12nm, so Nvidia has zero incentive to do this.

Now if Intel's Xe is actually faster than RTX 2080 Ti, you can bet your booty that the 7nm die shrink of Turing will be out there pronto to one-up Intel. This is business chess 101.

Firstly, Nvidia's shareholders dictate that margins must be, at a very minimum, 60%, but I believe Nvidia could be pushing 80%. At 12nm, the prices they charge versus the cost of the wafers allow these large margins. The RX 5700 XT's 251 mm² 7nm chip performs in between the 12nm RTX 2070 at 445 mm² and the RTX 2070 Super at 545 mm² (about 474 mm² when adjusted for the TU104's disabled cores).
So AMD, on the far more expensive 7nm wafers, offers a chip that is roughly half the size in area, with the same performance, at a slightly cheaper price - and it is a transitional design (Navi 10); Navi 20 will be the full architectural leap. If Nvidia made a 7nm RTX 3070/Super right now, it couldn't be in the 450-550 mm² range, as that would be far too expensive while yields are low and wafer prices haven't come down yet (which is why Ryzen uses chiplets - chiplets increase yield). They could get 2070/Super performance at 7nm in the 200 mm² range, but AMD is already doing exactly that. At this point in time, adjusted for die size, cost and performance, Nvidia and AMD are equal, which is a massive jump for AMD, as Vega was poor and way behind Nvidia.
Also, the Tensor and RT cores don't take up a large percentage of the die, so taking them out won't improve things a lot - and they won't be taken out, as Jensen has gone down this path and he can't U-turn now. Navi 20 will be competition for the RTX 2080 Ti, but I believe it's not top of AMD's list, with the server market, desktop CPUs, laptops, consoles and the Navi deal with Samsung being higher up, as they make more money.
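To put some rough numbers on that wafer-economics argument, here is a minimal back-of-the-envelope sketch in Python. The wafer prices, card price and bill-of-materials figure below are made-up placeholders for illustration only; the only inputs taken from the comment above are the die areas.

```python
import math

# Back-of-the-envelope sketch. The wafer prices, card price and BOM are
# ASSUMED placeholder values, not published figures; only the die areas
# come from the discussion above.

WAFER_PRICE_12NM = 4000.0   # assumed $ per 300 mm wafer (hypothetical)
WAFER_PRICE_7NM = 9000.0    # assumed $ per 300 mm wafer (hypothetical)
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # ~70,686 mm² of raw silicon


def silicon_cost(die_area_mm2: float, wafer_price: float) -> float:
    """Rough silicon cost per die: wafer price scaled by the fraction of the
    wafer the die occupies (ignores edge loss and defect yield)."""
    return wafer_price * die_area_mm2 / WAFER_AREA_MM2


def gross_margin(selling_price: float, unit_cost: float) -> float:
    """Gross margin = (price - cost) / price."""
    return (selling_price - unit_cost) / selling_price


print(f"251 mm² die at 7nm : ~${silicon_cost(251, WAFER_PRICE_7NM):.0f} of silicon")
print(f"545 mm² die at 12nm: ~${silicon_cost(545, WAFER_PRICE_12NM):.0f} of silicon")
print(f"545 mm² die at 7nm : ~${silicon_cost(545, WAFER_PRICE_7NM):.0f} of silicon")
print(f"Margin on a $499 card with a $150 BOM: {gross_margin(499, 150):.0%}")
```

Even with these invented numbers, the shape of the argument comes through: a 545 mm² die at 7nm wafer prices costs roughly twice the silicon of the same die at 12nm, while a ~250 mm² 7nm die lands in the same ballpark as the big 12nm part.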
 
Firstly, Nvidia's shareholders dictate that margins must be, at a very minimum, 60%, but I believe Nvidia could be pushing 80%.
It's probably not that high - take the TU104 chip, for example: there are 12 fundamental models off that one design (GeForce RTX 2080 Max-Q, 2080 Mobile, 2080 Super, 2080, 2070 Super; Quadro RTX 4000, 4000 Max-Q, 4000 Mobile, 5000, 5000 Max-Q, 5000 Mobile; Tesla T4), along with clock variants. These could certainly come from high-yielding wafers, but it's more likely that many come from lower-yielding ones - take the Quadro RTX 4000: it has 75% as many active CUDA cores as the 2080 Super.
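As a toy illustration of how spinning one die into many bins props up the effective yield, here is a quick sketch; the bin fractions are invented purely for this example and are not real TU104 yield data.

```python
# Toy die-harvesting example. The bin fractions are INVENTED for illustration;
# they are not real TU104 yield figures.
bins = {
    "fully enabled (2080 Super tier)": 0.55,
    "partially disabled (2080 / 2070 Super tier)": 0.30,
    "heavily cut down (Quadro RTX 4000 tier, ~75% of cores)": 0.10,
    "unsellable": 0.05,
}

# Every bin except "unsellable" turns a flawed die into a product,
# so the effective sellable yield is far higher than the perfect-die yield.
sellable = sum(share for name, share in bins.items() if name != "unsellable")
print(f"Effective sellable yield: {sellable:.0%}")
for name, share in bins.items():
    print(f"  {share:.0%}  {name}")
```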

Also, the Tensor and RT cores don't take up a large percentage of the die, so taking them out won't improve things a lot - and they won't be taken out, as Jensen has gone down this path and he can't U-turn now.
Except they have - the TU116 has no Tensor cores or RT cores, so Nvidia are clearly happy to remove them for the right market. Agreed that this isn't going to happen for their top-end Ampere models, though.

Navi 20 will be competition for the RTX 2080 Ti, but I believe it's not top of AMD's list, with the server market, desktop CPUs, laptops, consoles and the Navi deal with Samsung being higher up, as they make more money.
AMD have gone up against a lot of Intel's sectors, including high-end enthusiast, with the Zen architecture, so you never know - these times are a-changing!
 

When you have a smaller die, you innately get more dies per wafer, which by itself means a higher yield of dies per wafer. On top of that, smaller dies are less likely to contain a defect, so a larger fraction of them are functional - an even greater effective yield.

That is the process that is fueling Moore's law.
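A minimal sketch of the two effects described above, using the common dies-per-wafer approximation and a simple Poisson defect model. The defect density is an assumed illustrative value, not a foundry figure; the die areas are the ones mentioned earlier in the thread.

```python
import math

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY_PER_MM2 = 0.001   # assumed ~0.1 defects/cm²; illustrative only


def dies_per_wafer(die_area_mm2: float) -> int:
    """Common approximation: wafer area divided by die area, minus an
    edge-loss term for the partial dies around the rim."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))


def defect_free_fraction(die_area_mm2: float) -> float:
    """Poisson yield model: probability that a die contains zero defects."""
    return math.exp(-die_area_mm2 * DEFECT_DENSITY_PER_MM2)


for area in (251, 445, 545):   # die areas mentioned in the thread, in mm²
    candidates = dies_per_wafer(area)
    good_fraction = defect_free_fraction(area)
    print(f"{area} mm²: {candidates} candidate dies/wafer, "
          f"{good_fraction:.0%} defect-free -> ~{candidates * good_fraction:.0f} good dies")
```

With the same assumed defect density, the 251 mm² die ends up with roughly three times as many good dies per wafer as the 545 mm² die - both because more candidates fit on the wafer and because a larger fraction of them are defect-free.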
 
I think the versatile concurrent execution approach will continue for all big GPUs.
The smaller, gaming-targeted chips show that if the GPU becomes too slow overall for ray tracing, both the RT cores and the Tensor cores get changed: the RT cores are removed and the Tensor cores are replaced by dedicated FP16 ALUs.
This happens because, in the target market of such a chip, the Tensor cores would only be useful in combination with the RT pipeline (e.g. for denoising), so they become useless.
Additionally, DLSS is mostly advertised by Nvidia in combination with ray tracing, which shows the direction of both the marketing and the development.

As the requirements of ray tracing are met by smaller and smaller chips, I think the line below which these features get removed will move down over time to even smaller chips; that seems logical.
 
An article about Intel discrete graphics without a single mention of Larrabee... pure amateurism!
And not really worth mentioning anyway: the project was cancelled completely after just 2 years (and almost a decade ago) and none of its architectural design choices made their way into the likes of the current Gen series. The only thing really learnt from Larrabee was to keep their mouth shut about claims of revolutionary designs and performance until a product was actually finished and on the table. Well...maybe they haven’t quite learned that just yet.
 
Guys, you forgot that Intel has tons of money, more than NVIDIA and AMD combined.
When you have money, you can do anything.
IF Intel takes the GPU market seriously, it can beat Nvidia and AMD. You forgot this is the giant INTEL.
 
For all its wealth, Intel is not immune to making poor decisions, especially with regard to engineering design - e.g. the NetBurst architecture and the Itanium processor series. At the same time, they're also quite stubborn about sticking with such decisions, working at them until they either work as well as possible (so in the case of NetBurst, tweaking the architecture but mostly just pushing the clock speed to silly levels) or until they finally make a profit (which took 8 years for Itanium). While Xe is unlikely to be similar to these examples, architecturally at least, they have almost no experience of competing in the professional/compute GPU market, and they don't interact with game developers to anything like the extent that Nvidia and AMD currently do.

Intel's biggest problem is that people are going to expect the Xe models to be at least as good as, if not better than, the same price-range models from AMD and Nvidia; if they're not, then the only option they've got is to go cheap, which just isn't the Intel way of doing things. In which case, who would buy the third-fastest GPU on the market?
 
The cores in Intel's GPUs have been similar to the Stream Processors that AMD has used since 2006, at the least; so it's unlikely to be due to AMD staff moving to Intel.

Being CPU manufacturers, they seem to have roughly the same idea of how a GPU should function, while Nvidia's approach is a bit different.
 