Microsoft reveals Xbox Series X internal design and specs

nanoguy

In brief: Microsoft's new Xbox Series X console is a beast, courtesy of the biggest generational leap in SoC power and new APIs. The company worked with AMD to build a console that can rival some of the most powerful PC gaming rigs in GPU performance and game loading times.

Today, Microsoft revealed the full specifications and design details of its upcoming Xbox Series X console, after a series of leaks and teasers that made PC gamers a little envious. And that's because the new Xbox isn't just chock-full of powerful AMD hardware: it also has expandable NVMe storage and ray tracing support, while maintaining excellent backwards compatibility with older titles.

Microsoft is using a custom-designed, 8-core AMD Zen 2 CPU clocked at 3.8 GHz (3.6 GHz with SMT enabled) paired with an RDNA 2-class GPU that can achieve 12 teraflops through its 52 compute units clocked at 1.825 GHz. As expected, this is all built on TSMC's 7nm process node and tuned for power efficiency and silent operation.
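As a sanity check on that 12-teraflop figure, here's the standard FP32 arithmetic, assuming RDNA-style CUs with 64 shaders each and two operations per clock:

# Rough FP32 throughput check for the quoted Series X GPU specs
compute_units = 52
shaders_per_cu = 64          # assumed, matching RDNA (Navi 10) CUs
ops_per_clock = 2            # one fused multiply-add counts as two FLOPs
clock_ghz = 1.825

tflops = compute_units * shaders_per_cu * ops_per_clock * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")   # ~12.15 TFLOPS, i.e. the advertised 12 TF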

But more importantly, the company has installed 16 GB of GDDR6 RAM along with a 1 TB custom NVMe SSD that can achieve raw speeds of up to 2.4 GB/s - a first for consoles. The RAM comprises 10 GB of fast memory for the GPU and 6 GB of slightly slower memory, of which 2.5 GB is reserved for the OS.
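Put another way, a rough tally of what that split leaves for games, going only by the figures above:

# Rough tally of the memory split described above
gpu_optimal_gb = 10.0          # faster pool, available to games
standard_gb = 6.0              # slower pool
os_reserved_gb = 2.5           # reserved from the slower pool
assert gpu_optimal_gb + standard_gb == 16.0

print(gpu_optimal_gb + (standard_gb - os_reserved_gb))   # 13.5 GB left over for games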

There's also an expansion slot that takes a proprietary 1 TB Seagate SSD card for those who need additional space for more of their frequently played games.

And if that isn't enough, there's also support for two additional USB 3.2 external storage drives.

To achieve this compact vertical design, Microsoft split the hardware across two mainboards and placed a single 130 mm fan at the top to keep the new console cool. Air is drawn in at the bottom and pulled up through a large heatsink, an arrangement the company says works best for the hot SoC inside the Series X.

Microsoft has a performance target of 4K at 60 fps, or up to 120 fps - even cutscenes now run at 60 fps for smoother transitions. Digital Foundry saw Gears 5 benchmarks that show the Xbox Series X delivering similar performance to an RTX 2080, with The Coalition's Mike Rayner saying there's still room for improvement.

There's DirectX ray tracing support at an equivalent compute performance of over 25 teraflops (12 teraflops of which is FP32 shader compute), with Microsoft and AMD building the ray tracing acceleration into the GPU's standard compute units rather than adding separate cores the way Nvidia does with its RT cores.

Ray tracing aside, Microsoft wanted the Xbox Series X to be fast at loading games, too. The company built a system called Velocity Architecture, which makes 100 GB of game assets on the SSD act like "extended memory." Whenever a game needs some of that data, a dedicated hardware decompression block can feed it at over 6 GB/s, which is especially useful for open-world games like Assassin's Creed Odyssey and Red Dead Redemption 2.
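Taken together with the 2.4 GB/s raw SSD figure above, that implies roughly a 2.5x boost in effective I/O throughput - a quick back-of-the-envelope check:

# Effective I/O gain from hardware decompression (figures from the article)
raw_ssd_gbs = 2.4          # raw NVMe read speed, GB/s
decompressed_gbs = 6.0     # "over 6 GB/s" out of the decompression block

print(decompressed_gbs / raw_ssd_gbs)   # ~2.5x more effective throughput than raw reads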

That means that developers won't have to set aside a significant performance budget for I/O operations, as that is covered through the Velocity Architecture along with DirectStorage and Sampler Feedback Streaming, which Microsoft says will come to PC users in the near future.

Another interesting feature is Quick Resume, which caches data from RAM to the NVMe SSD to make switching between three or four games as fast as possible. The game state will be preserved even after a reboot or a system update, but it'll also eat a sizeable chunk of storage space, depending on the games that are being cached.

Other things that should make a visible difference in the overall gaming experience are mesh shading, variable rate shading, and variable refresh rate for HDMI 2.1 displays. These allow developers to use the GPU compute units more efficiently, avoid screen tearing, and reduce input latency.

Backwards compatibility wasn't an afterthought, as Microsoft wanted to use the additional power in the Series X to add an HDR mode to older games, even those from an era when HDR wasn't a thing. The company says SDR games will be enhanced with the help of a reconstruction technique that doesn't require developers to put in any effort.

Existing games made for older Xbox consoles will run at full 4K thanks to an improved version of the Heutchy Method used in previous generations.

The new Xbox Series X is full of improvements in every department, but if there's one thing that is still a mystery, it's the price.


 
Because of many factors, I do not anticipate seeing these available anywhere for less than a grand regardless of the suggested retail.
 
The hardware-accelerated decompression technology sounds interesting. Hope it makes it to Zen 3.

Because of many factors, I do not anticipate seeing these available anywhere for less than a grand regardless of the suggested retail.

As MS and Sony are buying components in the millions, it stands to reason they're getting much better prices than what we see at retail, with key components like the SoC probably priced at manufacturing cost plus a small markup.

So they could still pull it off.
 
Because of many factors, I do not anticipate seeing these available anywhere for less than a grand regardless of the suggested retail.
If they came in between $500 and $600, would you buy it? I'm PC master race 100%, but the more I read about this thing, the more it intrigues me. This might be the first console I buy since the 360.
 
If they came in between $500 and $600, would you buy it? I'm PC master race 100%, but the more I read about this thing, the more it intrigues me. This might be the first console I buy since the 360.
I'm a mixed gamer: I play mainly older games on PC and the few newer games that interest me on my OG Xbox One. I chose not to upgrade to the newer systems from M$ and Sony over the last 6 years, but I'm definitely grabbing one of these, probably at some point in 2021. Knowing that all the 360 and OG Xbox titles will stay backwards compatible, as well as all Xbox One titles working, makes this one of the better moves I've seen on consoles.
 
I said elsewhere it looks very expensive to build, at least for a console. The spine of the machine is a CNC-milled piece of aluminium, for goodness' sake! Look at the start of assembly in the video.

The SSD expansion slot takes custom, bespoke drives akin to the old memory cards. That means squeezing more bucks out of your pocket. It has two mainboards, sucks down the best part of 300 watts by the looks of it, and has the heavy-duty cooling to match.

It'll be up to Microsoft what kind of losses they are prepared to swallow on this hardware to get it to a price consumers will accept.
 
Because of many factors, I do not anticipate seeing these available anywhere for less than a grand regardless of the suggested retail.

I predict that this prediction will be wrong, without a doubt. 0% chance they will charge $1,000 or more for a console.

It'll be around $600 or less. The video card in this system is mid-range.


If they came in between $500 and $600, would you buy it? I'm PC master race 100%, but the more I read about this thing, the more it intrigues me. This might be the first console I buy since the 360.

If the prices of GPUs remain as ridiculous as they currently are, I can imagine a lot of people buying these consoles or just waiting.

It's actually nice these consoles are coming out; hopefully they'll show people how inflated GPU prices are.

Very intrigued by ray tracing running on the shader cores and its performance. It would be ideal if that turned out to be good.
 
I said elsewhere it looks very expensive to build, at least for a console. The spine of the machine is a CNC-milled piece of aluminium, for goodness' sake! Look at the start of assembly in the video.

The SSD expansion slot takes custom, bespoke drives akin to the old memory cards. That means squeezing more bucks out of your pocket. It has two mainboards, sucks down the best part of 300 watts by the looks of it, and has the heavy-duty cooling to match.

It'll be up to Microsoft what kind of losses they are prepared to swallow on this hardware to get it to a price consumers will accept.
They lose money on the One X. Phil said so himself. I think they don't mind. Also, Microsoft has more money than God himself; I doubt it'll hurt them at all.
 
They lose money on the One X. Phil said so himself. I think they don't mind. Also, Microsoft has more money than God himself; I doubt it'll hurt them at all.

Most consoles of this nature have been loss leaders, but only to a certain extent. The PS3 lost a huge amount and Sony swore off that level of subsidising, so the PS4 was more or less break-even at launch, with manufacturing costs falling slowly over time.

It's one thing losing $20 or $30 per machine (or as Spencer put it, we made no money on X1X) and another taking a hit to the tune of $100 per machine to make it palatable for consumers.

Sony sold 18 million PS4 machines in the first year. Even for Microsoft the balance sheet can look very ugly if you greenlight a $100 subsidy (or more) and then proceed to sell >15 million units at that price in year one. You do the math.
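Spelling out that math under the post's own assumptions (a $100 per-unit subsidy and 15 million units, both illustrative):

# The scale of a hardware subsidy, using the illustrative numbers from this post
subsidy_per_unit = 100           # assumed dollars lost per console
units_year_one = 15_000_000      # ">15 million units" in year one

print(subsidy_per_unit * units_year_one / 1e9)   # 1.5 -> a $1.5 billion hit in year one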

I expect Microsoft to take a hit but to what extent and how much the MSRP settles at is the last big remaining question.
 
#1 User-defined storage needs to be a standard - just like it was on the PS3 and PS4. SSDs are getting cheaper, although it will be a while before 4 TB drops below $400 (right now 1 TB is about $100). Thing is, more suppliers can rise to meet SSD demand over time, and prices will drop.

#2 It will be a while before we see true gameplay from big-budget titles. I can't see myself even considering buying until I see exclusive games I want.

#3 No one asked for ray tracing.
What we wanted was stable 4K gaming at 144-240 Hz.

Nvidia gave us Ray Tracing instead.

Now it's a standard everyone is trying to live up to, but Nvidia is dictating the goalpost positioning.
 
an RDNA 2-class GPU that can achieve 12 teraflops through its 52 compute units clocked at 1.825 GHz
That's an interesting snippet of information, because with it one can begin a little estimation of what's going on inside the GPU design:

XBSX
52 CUs @ 1.825 GHz = 12 TFLOPS FP32

So if one assumes that the RDNA 2 GPU in the XBSX has the same CU structure as those in the RX 5700 XT, then:

RX 5700 XT
40 CUs @ 1.905 GHz = 9.754 TFLOPS FP32
40 CUs @ 1.825 GHz = 9.344 TFLOPS FP32
50 CUs @ 1.825 GHz = 11.68 TFLOPS FP32
52 CUs @ 1.825 GHz = 12.15 TFLOPS FP32

Now that's a little over 12 TFLOPS, so the XBSX's figure is likely to be just an approximate/rounded one. The justification for this reasoning is that (a) the clock rate given was very specific, and (b) any adjustment to the CU structure would affect the calculation above quite considerably.
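A small sketch of the arithmetic above, assuming Navi 10-style CUs (64 shaders per CU, two FP32 ops per clock):

# FP32 TFLOPS = CUs x 64 shaders x 2 ops per clock x clock (GHz) / 1000, assuming Navi 10-style CUs
def tflops(cus, clock_ghz, shaders_per_cu=64):
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

for cus, clock in [(40, 1.905), (40, 1.825), (50, 1.825), (52, 1.825)]:
    print(f"{cus} CUs @ {clock} GHz = {tflops(cus, clock):.3f} TFLOPS")
# 52 CUs @ 1.825 GHz gives ~12.15 TFLOPS, matching the quoted "12 teraflops"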

So we can be reasonably safe in assuming that the XBSX GPU has Navi 10-like CUs, which leads nicely into a bit of thinking through the rest of the structure. 10 GB of GDDR6 points to a 320-bit memory bus, as Navi 10 supports 8 GB of GDDR6 through a 256-bit bus, and Nvidia's RTX 2080 Ti supports 11 GB of GDDR6 on a 352-bit bus:

256-bit bus = 8 GB GDDR6
288-bit bus = 9 GB GDDR6
320-bit bus = 10 GB GDDR6
352-bit bus = 11 GB GDDR6

In other words, a single 32-bit controller supports 1 GB of GDDR6 RAM. In AMD's Navi chip, these 32-bit controllers are paired up into a single 64-bit controller per 10 CUs.

But here we have 52 CUs and five 64-bit memory controllers, compared to Navi 10's 40 CUs and four 64-bit memory controllers. In the case of the latter, the CUs are paired as Dual Compute Units, and 5 of these are grouped into an Asynchronous Compute Engine. Two of these blocks are grouped into a Shader Engine.

52 isn't divisible by 5, so if the memory controller guesstimate is right, then the XBSX GPU doesn't follow the Navi 10 layout at all - unless it actually does but has 13 Dual Compute Units per ACE and a memory controller just magically floating around... umm, I don't think so.

I suspect that the GPU is actually a 60 CU processor (2 SEs with 3 ACEs each; every ACE has 10 CUs), with six 64-bit memory controllers, but with 8 CUs and 1 MC disabled - either by design, by chip binning, or a combination of both. This would mean that a lot of GPUs that fail to meet the full 60 CU design are eligible for use in the XBSX; it would also mean that AMD have a 60 CU GPU available for the desktop graphics card market too!

We'll see :)
 
it would also mean that AMD have a 60 CU GPU available for the desktop graphics card market too!

We'll see :)

They should have something of that size. The 5700 XT at only 251 mm² is actually a pretty small chip; this one is 360 mm², which means they now have confidence in mass-producing larger 7nm chips. Bigger than the Radeon VII (331 mm²) is possible, but with a vastly more efficient design.

On paper it would take the fight to the RTX 2080 Ti.

In practice it could yet again be about 12 months too late for the high-end party, because Nvidia most likely have several 7nm cards in the pipeline that will obliterate the RTX 2080 Ti.

Still, high end isn't everything. Priced keenly, AMD can make gains.
 
This is from Eurogamer/Digital Foundry:
But up until now at least, the focus has been on the GPU, where Microsoft has delivered 12 teraflops of compute performance via 3328 shaders allocated to 52 compute units (from 56 in total on silicon, four disabled to increase production yield) running at a sustained, locked 1825MHz. Once again, Microsoft stresses the point that frequencies are consistent on all machines, in all environments. There are no boost clocks with Xbox Series X.
Source
So 52 CUs on a 56 CU design, with 4 disabled (probably through binning). Total number of SIMD32 units is 3328, so that's 64 units per CU. A full Navi 10 chip has 2560 SIMD32 units in 40 CUs, which also gives the same SIMD per CU ratio - so no change there.

The 5 memory controllers throw in a bit of a curveball, to be honest, as the normal Navi design has 1 MC paired with 1 ACE, which would suggest the XBSX GPU has 5 ACEs. But neither 52 nor 56 is divisible by 5, which means either the ACEs don't have the same number of CUs in them (which wouldn't be done) or the MCs are independent of the ACEs - given that AMD are keen on expanding the use of Infinity Fabric across all their products, this is a distinct possibility.

So what is 56 divisible by? The factors of 56 are 1, 2, 4, 7, 8, 14, 28, and 56. The 4 ACEs in Navi 10 pack 10 CUs each, and I suspect that AMD wouldn't want to alter that too much, so the XBSX GPU possibly has 8 ACEs with 7 CUs apiece; it could just as well be 4 ACEs with 14 CUs each. It'll be interesting to see what the split actually looks like.
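A quick sketch of both checks above - the shaders-per-CU ratio and the even splits of a 56 CU die (the engine groupings noted in the comments are just the possibilities mentioned here, nothing confirmed):

# Shaders per CU, and the even ways a 56 CU die could be divided (per the reasoning above)
print(3328 / 52, 2560 / 40)                        # 64.0 64.0 -> same ratio as Navi 10
factors = [n for n in range(1, 57) if 56 % n == 0]
print(factors)                                     # [1, 2, 4, 7, 8, 14, 28, 56]
# 8 engines x 7 CUs or 4 engines x 14 CUs are the even splits discussed above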

Going back to the memory system, it's far more complex than it first appears:
Microsoft's solution for the memory sub-system saw it deliver a curious 320-bit interface, with ten 14gbps GDDR6 modules on the mainboard - six 2GB and four 1GB chips. How this all splits out for the developer is fascinating.

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."
So there are 10 memory chips in total, but they're split into two pools: one being 12 GB in size (the six 2 GB chips), the other being the remaining 4 GB (the four 1 GB chips). But then Microsoft says the speeds are split 10/6 rather than 12/4, even though all the RAM runs at 14 Gbps. That clearly shows the 560 GB/s memory is on a 320-bit bus, but what about the 336 GB/s memory? That value would equate to a 192-bit bus at 14 Gbps, or to the same 320-bit bus being accessed only once every two clock cycles, or to the memory slowing down for those accesses. But then the article says that games can be allocated all 10 GB of the optimal memory and up to 3.5 GB of the standard memory. I guess Microsoft knows what it's doing here, but it seems an awfully complicated way of getting around things.
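For what it's worth, the bandwidth figures do line up with a 320-bit/192-bit reading, if one assumes the slower 6 GB pool is striped across only the six 2 GB chips:

# GDDR6 bandwidth = bus width (bits) / 8 x 14 Gbps per pin
def bandwidth_gbs(bus_width_bits, gbps_per_pin=14):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gbs(320))   # 560.0 -> the 10 GB "GPU optimal" pool, striped across all ten chips
print(bandwidth_gbs(192))   # 336.0 -> the 6 GB pool, assuming only the six 2 GB chips (6 x 32-bit)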

However, for me, the most impressive specification detail is that the SoC is just 360 square millimetres - that's an 8-core CPU with an effectively 56 CU GPU (with adjusted SIMD32 units for low-precision integer operations, and BVH acceleration units). There's some serious engineering going on there to pack all that in.
 
So if one assumes that the RDNA-2 GPU in the XBSX has the same CU structure as those in the RX 5700 XT [...] we can be reasonably safe in assuming that the XBSX GPU has Navi 10-like CUs [...] it would also mean that AMD have a 60 CU GPU available for the desktop graphics card market too!

We'll see :)

Your error is trying to compare RDNA 2 to RDNA 1...

 
One thing got me a bit concerned.
Yesterday, Andrew Goossen from Microsoft stated:
“Without hardware acceleration, this work could have been done in the shaders but would have consumed over 13 TFLOPs alone. For the Xbox Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing.”

Here's the thing: a GTX 1080 Ti, which doesn't have hardware RT, can do 1.2 gigarays per second. That card, at 1,900 MHz, does 13.6 TFLOPs.

If we go by Andrew Goossen's statement, then the ray tracing cores in the Xbox Series X will only do around 1.2 GR/s.
That seems very low. Even the RTX 2060 claims about 5 GR/s while being the weakest card of the whole RTX line.

Am I missing something, or are the Xbox Series X's ray tracing capabilities really that limited?
 
The 1080 Ti has to use shaders to do BVH calculations; RTX cards and the XBSX GPU have separate, dedicated hardware for these.
 
The 1080 Ti has to use shaders to do BVH calculations; RTX cards and the XBSX GPU have separate, dedicated hardware for these.

I don't think you understand me. The chief architect of the XBSX stated that the ray tracing capabilities of the console are the equivalent of just 13 TFLOPs if the work were done in the shader units. And yes, I know that the XBSX has RT cores. But MS didn't directly state the specs regarding ray tracing performance.
But 13 TFLOPs would only be capable of producing 1.2 gigarays per second.
 
The power supply of this thing is 300W. That means that an 8C/16T CPU running at 3.8 GHz combined with a GPU with 52 CUs running at 1.825 GHz should be using approximately 200W. The implications for PC graphics cards based on RDNA 2 are HUGE.
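A rough budget under assumed numbers - the CPU and platform figures below are guesses for illustration, not Microsoft specs:

# Rough power budget; the CPU and "everything else" figures are assumptions, not official numbers
psu_watts = 300                # per the post above
cpu_watts_guess = 60           # guess for an 8C/16T Zen 2 at console clocks
rest_watts_guess = 40          # guess for GDDR6, SSD, fan, I/O and conversion losses

print(psu_watts - cpu_watts_guess - rest_watts_guess)   # ~200 W left for the 52 CU GPU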
 
The power supply of this thing is 300W. That means that an 8C/16T CPU running at 3.8 GHz combined with a GPU with 52 CUs running at 1.825 GHz should be using approximately 200W. The implications for PC graphics cards based on RDNA 2 are HUGE.
Isn't it actually less, considering that it only uses the small plug (don't know what it's called in English) vs the larger ones used by PC PSUs?
 
It'll be up to Microsoft what kind of losses they are prepared to swallow on this hardware to get it to a price consumers will accept.

Most of the Xbox Division's profit comes from Live Gold, Game Pass, and accessories. My bet is on a $599 MSRP, with MS taking a hit on hardware.
 
I don't think you understand me. The chief architect of the XBSX stated that the ray tracing capabilities of the console are the equivalent of just 13 TFLOPs if the work were done in the shader units. And yes, I know that the XBSX has RT cores. But MS didn't directly state the specs regarding ray tracing performance.
But 13 TFLOPs would only be capable of producing 1.2 gigarays per second.

The numbers Nvidia touts for its gigaray performance should come with some major caveats, as all marketing numbers do. More technical info here:
I'll quote a key part

"RT Cores provide a speedup of 1.78x, 1.53x and 1.47x respectively compared to the pure CUDA version "

In effect, by offloading work worth 13 TFLOPs, the effective RT speedup of AMD's new hardware is over 2x (25 TFLOPs equivalent vs 12 TFLOPs of shaders). So comparing actual ray tracing benchmark numbers of RT cores vs pure CUDA, and then looking at AMD's GPU in the Xbox Series X, AMD's RT solution looks better on paper. The RT Minecraft demo did not look bad either.
 
Of course they are different measurements. But they are comparable. That's why Nvidia and Microsoft compared them.

No, not really. Just because a company compares them on a marketing slide doesn't make them a good comparison. Actual performance numbers, like the ones I provided above, are a better comparison.
 
I don't think you understand me. The chief architect of the XBSX stated that the ray tracing capabilities of the console are the equivalent of just 13 TFLOPs if the work were done in the shader units. And yes, I know that the XBSX has RT cores. But MS didn't directly state the specs regarding ray tracing performance.
But 13 TFLOPs would only be capable of producing 1.2 gigarays per second.
Apologies - I'd misread your comment. The architect was referring to the BVH calculations: those alone, done via compute shaders, would have required an equivalent of 13 TFLOPs of processing, but since the XBSX GPU has dedicated units solely for such work, the rest of the processing can take place in parallel to the BVH work. This is why he then went on to say that for ray tracing work, the GPU offers an equivalent total of 25 TFLOPs (13 TFLOPS of BVH acceleration + 12 TFLOPS of FP32 shaders).

So a 1080 Ti has a claimed peak throughput of 1.2 billion rays per second, from its 11.3 TFLOPs of FP32; the likes of the RTX 2060 has a claimed peak of 5 billion rays per second, from its 6.5 TFLOPs of FP32 and 30 RT cores. The two architectures aren't the same, but Nvidia did offer this image as a comparison of how the differences affect game code using DXR:

[Image: Nvidia's frame-time breakdown of a single Metro Exodus DXR frame on GeForce GTX vs RTX hardware]


Now an RTX 2080 has around 10 TFLOPs of FP32, but you can see from the image that the Turing architecture permits parallel FP32 and INT32 processing, which reduced the frame time by roughly a third (which would equate to roughly 33% higher peak ray throughput). The use of the RT cores then drops the frame time further.

We don't know what changes AMD have made to the SIMD structures in RDNA 2, beyond what has been publicly stated, but GCN (all versions) and RDNA don't have separate integer and float SIMD units: the shader units work in one data format as instructed. It's possible that AMD have changed this, but I suspect not, preferring to increase the SIMD count instead. So if one assumes some level of equivalence between shader usage in RDNA 2 and Pascal under DXR, minus any use of BVH units, then at first glance it would indeed seem that the XBSX GPU is perhaps only going to land somewhere between a 1080 Ti and an RTX 2060 when it comes to ray tracing.
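One crude way to frame that "somewhere between" guess is to scale the claimed ray throughputs by FP32 alone - ignoring the BVH units entirely, so treat it strictly as a lower bound:

# Naive rays-per-TFLOP scaling using the claimed figures above; ignores the BVH units entirely,
# so treat the result as a crude floor rather than a prediction
ti_tflops, ti_gigarays = 11.3, 1.2      # GTX 1080 Ti: shader-only ray tracing
xbsx_tflops = 12.0

print(xbsx_tflops * ti_gigarays / ti_tflops)   # ~1.27 GR/s if the XBSX scaled like a 1080 Ti
# The RTX 2060 claims ~5 GR/s from 6.5 TFLOPs thanks to its RT cores; the XBSX's BVH units
# should similarly push the real figure well above this shader-only floor.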

However, the DXR API employed on the console is optimised heavily for that platform, and should open up even more performance. Games using the first implementation of DXR have used a fairly brute-force approach, relying on raw GPU performance more than anything else. As developers become more in tune with the programming nuances and differences in the DXR pipeline, compared to the graphics and compute pipelines in Direct3D, we'll see better performance in the use of ray tracing full stop. The actual peak ray throughput won't matter as much.
 
The video card in this system is mid-range.
What makes you say that? Even though we don't know the full structure of the GPU part of the SoC yet, the information we do have doesn't strike me as being particularly mid-range (i.e. 3328 SIMD32 units and 560 GB/s access to 10 GB of GDDR6).
 