Microsoft reveals Xbox Series X internal design and specs

What makes you say that? Even though we don't know the full structure of the GPU part of the SoC yet, the information we do know doesn't strike me as particularly mid-range (e.g. 3328 shader units and 560 GB/s of bandwidth to 10 GB of GDDR6).

The 7nm+ node shrink alone would put a 5700 XT-sized die a bit under the Xbox Series X's compute unit count; I'm not taking into account any potential changes to the architecture itself, just the node. Currently 6-8 GB is mid-range, but it would not be far-fetched to see 8-10 GB become mid-range for the upcoming generation of cards.
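As a rough sanity check of the numbers being thrown around, here's a minimal sketch of the peak-FP32 arithmetic, assuming the usual 64 ALUs per CU, one FMA (two FLOPs) per ALU per clock, and the publicly quoted clocks:

```python
# Back-of-envelope peak FP32 figures for the two GPUs being compared.
# Assumes 64 ALUs per CU and one FMA (2 FLOPs) per ALU per clock.

def peak_fp32_tflops(compute_units: int, clock_ghz: float, alus_per_cu: int = 64) -> float:
    """Theoretical peak FP32 throughput in TFLOPs."""
    return compute_units * alus_per_cu * 2 * clock_ghz / 1000

print(f"RX 5700 XT (40 CUs @ 1.905 GHz boost): {peak_fp32_tflops(40, 1.905):.2f} TFLOPs")  # ~9.75
print(f"Xbox Series X (52 CUs @ 1.825 GHz):    {peak_fp32_tflops(52, 1.825):.2f} TFLOPs")  # ~12.15
```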
 
Even if it had a low price, I wouldn't buy it...

1. I'm a PC master race gamer... I stopped playing consoles mid-PS1 era. I got an Xbox wired gamepad to use on my PC, but I still struggle to find a game I'd want to play with it. K/M all the way.

2. I need a PC and I don't have enough money to keep a PC and a console.

3. I'm a PC builder. Losing the (almost) endless parts combinations a PC gives me would be a deal breaker.

... Still, if I could run ANY Win32 app on an Xbox, use it with a keyboard/mouse, or, may I say, use it like an ultra-compact computer, then I would embrace it.
 
Apologies - I'd misread your comment. The architect was referring to the BVH calculations: those alone, done via compute shaders, would have required an equivalent of 13 TFLOPs of processing, but since the XBSX GPU has dedicated units solely for such work, the rest of the processing can take place in parallel with the BVH work. This is why he then went on to say that for ray tracing work, the GPU offers an equivalent total of 25 TFLOPs (13 TFLOPs of BVH acceleration + 12 TFLOPs of FP32 shaders).
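To make the arithmetic behind that explicit (taking the Series X's stated peak of roughly 12.15 TFLOPs of FP32 as the shader figure), the point is that the BVH traversal alone would slightly more than saturate the whole shader array if it had to be done in compute:

$$\frac{13\ \text{TFLOPs (BVH-equivalent)}}{12.15\ \text{TFLOPs (peak FP32)}} \approx 1.07, \qquad 12\ \text{TFLOPs (shading)} + 13\ \text{TFLOPs (BVH)} \approx 25\ \text{TFLOPs equivalent}$$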

So a 1080 Ti has a claimed peak throughput of 1.2 billion rays per second from its 11.3 TFLOPs of FP32, while an RTX 2060 has a claimed peak of 5 billion rays per second from its 6.5 TFLOPs of FP32 and 30 RT cores. The two architectures aren't the same, but Nvidia did offer this image as a comparison of how the differences affect game code using DXR:

[Image: Nvidia slide breaking down the DXR workload for one Metro Exodus frame on GeForce GTX vs RTX]


Now an RTX 2080 has around 10 TFLOPs of FP32, but you can see from the image that the Turing architecture permits concurrent FP32 and INT32 processing, which reduced the frame time by roughly a third (which would equate to roughly 50% higher peak ray throughput). The use of the RT cores then drops the frame time further.
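As a purely illustrative example (the 30 ms and 15 ms figures are hypothetical, not read off the Nvidia slide), running the INT32 work concurrently with the FP32 work changes the frame time from the sum of the two workloads to roughly the larger of the two:

$$T_{\text{serial}} = T_{\text{FP32}} + T_{\text{INT32}} = 30\,\text{ms} + 15\,\text{ms} = 45\,\text{ms}, \qquad T_{\text{concurrent}} \approx \max(T_{\text{FP32}}, T_{\text{INT32}}) = 30\,\text{ms}$$

That's a third off the frame time, and since rays processed per second scales with the inverse of frame time, it works out to $45/30 = 1.5\times$, i.e. roughly 50% higher throughput.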

We don't know what changes AMD have made to the SIMD structures in RDNA2, beyond what has been publicly stated, but GCN (all versions) and RDNA don't have separate integer and float SIMD units: the shader units work in whichever data format they're instructed to. It's possible that AMD have changed this, but I suspect not, preferring instead to increase the SIMD count. So if one assumes some level of equivalence between shader usage in RDNA2 and Pascal in DXR, minus any use of the BVH units, then at first glance it would indeed seem that the XBSX GPU is perhaps only going to land somewhere between a 1080 Ti and an RTX 2060 when it comes to ray tracing.
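To put a rough number on that, here's a back-of-envelope sketch using the claimed peak figures quoted earlier and the Series X's ~12.15 TFLOPs of FP32; the big assumption is that shader-based ray throughput scales linearly with FP32 throughput, which is a simplification rather than anything measured:

```python
# Back-of-envelope scaling of the claimed peak ray rates quoted above.
# Big assumption: shader-only ray throughput scales linearly with FP32
# TFLOPs across architectures - a simplification, not a benchmark.

gtx_1080_ti_tflops, gtx_1080_ti_grays = 11.3, 1.2  # shader-only DXR
rtx_2060_grays = 5.0                               # with 30 RT cores helping
xbsx_tflops = 12.15                                # Series X peak FP32

# Shader-only estimate for the XBSX, ignoring its BVH hardware entirely:
shader_only = gtx_1080_ti_grays / gtx_1080_ti_tflops * xbsx_tflops
print(f"Shader-only estimate: ~{shader_only:.1f} GRays/s")  # ~1.3 GRays/s

# The dedicated BVH/intersection units are what would lift that figure
# toward the RTX 2060's claimed 5 GRays/s - hence "somewhere between the two".
```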

However, the DXR API employed on the console is optimised heavily for that platform, and should open up even more performance. Games using the first implementation of DXR have used a fairly brute-force approach, relying on raw GPU performance more than anything else. As developers become more in tune with the programming nuances and differences in the DXR pipeline, compared to the graphics and compute pipelines in Direct3D, we'll see better performance in the use of ray tracing full stop. The actual peak ray throughput won't matter as much.


My friend...
-"We don't know what changes AMD have done to the SIMD structures in RDNA2, beyond what has been publicly stated, but GCN (all versions) and RDNA don't have separate integer and float SIMD units: the shader units work in one data format as per instructed. It's possible that AMD have changed this, but I suspect not "

Yes, we do know, because during the PS5 live stream he said they can with rdna2. As a matter of fact, I believe he said that is the strength of rdna2... its ability to be configured how you want it.
 
-"Yes, we do know, because during the PS5 live stream he said they can with rdna2. As a matter of fact, I believe he said that is the strength of rdna2... its ability to be configured how you want it."
In the video linked in that news article, the only facts given about the GPU or RDNA2 were as follows:

  • 36 CUs
  • Variable clock rate for a fixed power consumption
  • Clock capped to a maximum of 2.23 GHz
  • 10.28 TFLOPs of peak FP32 throughput (quick check just below this list)
  • Primitive shaders are now part of the geometry engine (back again after being nixed in Vega)
  • Ray tracing is supported (which we knew from the XBSX details)
  • Transistor count of the CUs is 62% higher than that of the PS4 CUs
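
A quick check of the clock and throughput figures above, assuming the standard 64 ALUs per CU and one FMA (two FLOPs) per ALU per clock:

$$36\ \text{CUs} \times 64\ \text{ALUs} \times 2\ \tfrac{\text{FLOPs}}{\text{clock}} \times 2.23\ \text{GHz} \approx 10.28\ \text{TFLOPs}$$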

Even combining the above with the information gleaned from the XBSX details, nothing explicitly says whether or not the SIMD32 units have been configured to be INT32-only or FP32-only, which is what the part of my message you quoted refers to.

However, the fact that both Sony and Microsoft have made a big deal over backwards compatibility strongly points to the CUs, and the overall shader engine structure, in RDNA2 being generally the same as those in RDNA; that architecture's structure is designed to ensure that code written for GCN (especially the old version in the likes of the PS4) is not disadvantaged by the changes made in the newer design.

We do know from the Xbox details that the SIMD32 units support more data formats, specifically low-precision integers (INT4/INT8) - in Turing, this work is done by the Tensor units, not by the FP32 or INT32 shader units. So AMD have improved the overall flexibility of the SIMD32 units in the CUs, but all the evidence thus far says they haven't followed Nvidia's route of having separate SIMD32 units for different data formats:

RDNA/RDNA2
INT4, INT8, INT32, FP16, FP32, FP64 calculations = all done via the SIMD32 units

Turing
FP16, FP32 calculations = done via the FP32 shader units
FP64 = done by the FP64 units
INT32 = done via the INT32 shader units
INT4, INT8 = done by the Tensor units
 
Again, you are using rdna1 whitepapers to describe rdna2....
rdna2 has 5 patents and is a 100% wholly different uArch than rdna1, which was a mismatched hybrid, using what they could bastardize from gcn to make things work.

Btw, did you watch the SONY livestream...?
 
-"Again, you are using rdna1 whitepapers to describe rdna2...."
No, I'm using the information that's freely available to the public pertaining to the custom GPUs used in the next PlayStation and Xbox.

-"rdna2 has 5 patents and is a 100% wholly different uArch than rdna1"
100%? Not in the least bit - RDNA2 is an evolution of RDNA, just as GCN 5.1 was an evolution of 5.0, 4.0, 3.0, etc. Every chip has patents - hundreds of them; some owned by the hardware vendors themselves, many others on licence. It would be a logistical disaster and commercial suicide for any GPU manufacturer to completely re-architect an entire GPU design.

But hey - if I'm wrong, I'm wrong; but as you're the one stating this, please expand on your reasoning (and/or provide sources for the information) to supply the understanding that you're alluding is lacking.
 
Neeyik, you are wrong about rdna2.

Just so you know, rdna2 was developed BEFORE rdna1. (It is rdna1 that became a hybrid of next-gen and GCN, to meet marketing needs.)

That is why Navi was late and had to go for a re-spin, delaying its release, and why AMD gave us the Radeon VII instead. AMD's next-gen architecture (rdna2) was already done, but it was NOT designed for plain 7nm; it was designed for TSMC's advanced high-frequency 7nm node.

And rdna1 does not share rdna2's 5 patents.... because it is a fully different uArch.
 