Seagate unveils 1TB Xbox Series X expansion card for more storage and better performance

It's always difficult to estimate with any accuracy precisely how fast an individual console component is, not least because it isn't individual: it's an integrated system being compared to a dedicated PC GPU. All of the variable factors you bring into the mix are valid.

It extends beyond raw performance well into the realm of feature sets, APIs and other updated custom hardware.

I would say raw shading performance is clearly better than a 5700 XT and at least on par with an RTX 2080, but not as fast overall as a 2080 Ti. Ray tracing performance is another matter: despite Microsoft demoing it, the performance doesn't look amazing. The die is too small, and it's confirmed there is no dedicated acceleration for DLSS-style techniques, for example.

A big deficit consoles with unified memory must deal with is shared bandwidth, where the GPU has to compete for memory bandwidth with the rest of the system. Xbox Series X looks to address that with separate buses.

It's a fast machine taken as a whole package. But then again, the RTX 2080 and even the Ti will be old news by the time the console makes it onto shelves, let alone by the time it has a significant installed user base.

It's always that thing where console gamers claim it'll be faster than the 'average' PC. Yes it will, but that doesn't mean there aren't already millions of PC gamers with 2080 and 2080 Ti cards. We're talking about graphics cards that are nearly 18 months old already and due for replacement in another six!

Total current install base of Xbox Series X: Zero. If Microsoft shift an ambitious 15 million in the first year, you're still looking at way more PC gamers with equal or better hardware at that point.

I had this same discussion at the Xbox One X reveal in 2017. The machine has barely sold, as I predicted then: only a few million units in well over two years, 5-10 million at most. Even by the time it launched there were countless people with a GTX 1070 or better, which is definitely faster.

I think you're overestimating what kind of hardware PC gamers are buying. Most people are rocking RTX 2060 to RTX 2070 Super class cards with an R5 3600 CPU, if that. Those with a graphics card faster than a 2070 Super are a very small portion of users. Unless hardware equivalent to the XSX drops under the $300 range for both the GPU and the CPU individually, I don't see the majority of PC gamers having that kind of performance for now.

As for RT, I'm not sure what was going on in the video. But according to a user on Resetera, the XSX RT capability is better than a 2080 Ti's, although I have no idea whether his calculations/estimates are correct at all:

"We know the RDNA2 design has one RT core in each of its TMUs, and we know that AMD designs have 4 TMUs per compute unit. So for an Xbox Series X with 52 CUs at 1825 MHz you've got 52 * 4 * 1825 MHz = 379.6 billion intersections/second; with a 10-deep BVH you divide that 379.6 billion figure by 10 to get 37.96 gigarays/second.

Do that same math for a 2060, or a 2080 Ti, or any other Turing GPU. Turing has one RT core per SM, so for a 2060 you have 30 SMs and an official boost clock of 1680 MHz, for 50.4 billion intersections, or 5.04 gigarays/sec; Nvidia quotes 5 gigarays.
For a 2080 Ti you have 68 RT units at 1545 MHz, for roughly 10.5 gigarays, which Nvidia also quotes."
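
To make that arithmetic easier to follow, here's a minimal sketch in Python that reproduces the quoted back-of-the-envelope estimate. The one-intersection-test-per-RT-unit-per-clock rate and the 10-deep BVH divisor are assumptions lifted from the quote itself, not official figures.

```python
# Minimal sketch reproducing the quoted "gigarays" estimate.
# Assumptions (from the quote, not official specs): one intersection test per
# RT unit per clock, and every ray costs a 10-level BVH descent.

def gigarays_estimate(rt_units: int, clock_mhz: float, bvh_depth: int = 10) -> float:
    """Rays per second, in gigarays, following the quoted methodology."""
    intersections_per_sec = rt_units * clock_mhz * 1e6  # one test per unit per clock
    return intersections_per_sec / bvh_depth / 1e9      # rays/s expressed in gigarays

# Xbox Series X per the quote: 52 CUs * 4 TMUs/CU = 208 RT units at 1825 MHz
print(gigarays_estimate(52 * 4, 1825))  # ~37.96
# RTX 2060: 30 SMs (one RT core each) at its official 1680 MHz boost clock
print(gigarays_estimate(30, 1680))      # ~5.04
# RTX 2080 Ti: 68 RT cores at 1545 MHz
print(gigarays_estimate(68, 1545))      # ~10.5
```

Bear in mind these are theoretical upper bounds under the quote's own assumptions, not measured throughput.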


 

I'm not overestimating anything; I know more people buy midrange cards than high-end ones. What I'm estimating is how many people own an Xbox Series X today, or in six months, or probably much longer than that, say over the next 12 months.

None.

So PC is already miles ahead in install base and has another year to grow it with newer generation cards.

As for the console's ray tracing performance, I am highly skeptical of the claims that it is as good as or better than an RTX 2080 Ti. Nvidia don't measure gigarays purely by intersection tests; their RT cores also handle BVH traversal, so those calculations are useless against a specific Nvidia-defined metric.

They might be showing path tracing demos like Minecraft, but those run at only 1080p and are literally just a visual demo, with nothing else running inside the 'game' at all.

You have been able to run a fully path traced Quake 2 at 1440p and close to 60 FPS on an RTX 2080 Ti for a year now, with double ray bounces at full resolution, including diffuse and specular.

No: even if we go by what Microsoft claim as 13 teraflops of pure hardware ray tracing (25 'teraflops equivalent' in total, 12 on the shaders, hence 13 on the dedicated hardware), an RTX 2080 has considerably more dedicated performance; Nvidia say around 23 teraflops on the RT cores. That would put the console's RT hardware at best on par with the non-Super RTX 2060.
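
To lay that comparison out explicitly, here's a quick sketch using only the figures claimed above; the 25/12/13 split and the 23-teraflop RT core number are taken from those claims, not independently verified.

```python
# Comparing the claimed RT "teraflops equivalent" figures from the post above.
# Every number here is a quoted claim, not a measurement.

xsx_total_claimed = 25.0   # Microsoft's combined shader + RT "equivalent" figure (TFLOPS)
xsx_shader = 12.0          # Series X shader compute (TFLOPS)
xsx_rt_equivalent = xsx_total_claimed - xsx_shader   # 13.0 attributed to the RT hardware

rtx2080_rt_claimed = 23.0  # figure the post attributes to Nvidia's RT cores

print(xsx_rt_equivalent)                       # 13.0
print(rtx2080_rt_claimed / xsx_rt_equivalent)  # ~1.77x in the RTX 2080's favour, on these claims
```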

This makes complete sense when you look at the die size. There simply isn't physical room on it for a huge number of dedicated ray tracing transistors. One look at the die should tell you that's the case, unless AMD have miraculously beaten Nvidia's RT core efficiency by an order of magnitude. You can bet on that if you like; I wouldn't.
 