AMD's next-gen PlayStation/Xbox chip moves closer to final version

You just came in here with this elitist mentality for no reason at all.

Lol, in case you haven't noticed, we have a few names here who often do the same "elitist" thing: *uantumPhysics, *44GHZ Gaming, *ausagemeat.

Typically the statements start with: "I have XXX", "To me price is not a problem", etc.

Amuses me every single time rofl.
 
Both Xbox One X and PS4 Pro lost my attention and I went and built a gaming PC and bought 2 gaming laptops.

I sincerely doubt this "next gen" console will have the power of an i9 Extreme with a 2080 Ti, or even an i7 with a 1080.

I doubt "ray tracing" will be a priority in the next-gen consoles.

And I KNOW they won't have 32GB of DDR4.

I'd love to benchmark these things vs. my desktop.

The overwhelming odds are that the next-gen console will considerably beat a GTX 1080.

The CPU is the bottleneck on the Xbox One X. The GPU already slightly outperforms a GTX 1070 when it is not CPU-limited (as shown by Digital Foundry in the initial tests with Forza).

And this APU will certainly have more CUs, by a large margin. Last time, the PS4 Pro doubled the PS4's CU count in three years; this will be four years. If they cap out at the same CU count as Vega (64), they will easily hit that at the same rate of increase. Navi may not even have that cap, so 64 CUs is a low estimate. Early indications point to a Navi part with only 20 CUs performing as well as a Vega 56.

https://www.google.com/amp/s/amp.hothardware.com/news/amd-radeon-rx-7nm-navi-gpu-benchmarks-leaked

If this is true, a Navi GPU with 64 CUs would be unbelievably fast. Going on teraflops alone, 64 CUs at 1.8 GHz is 14.7 teraflops. That is more than twice as fast as the Xbox One X before accounting for architectural differences, and if this other leak is true, it would be more than three times as fast. This is well beyond GTX 1080 territory, and 24 GB of RAM is not needed to get to 4K, especially not on a console. This could well be approaching RTX 2080 Ti performance.
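For reference, here is the back-of-the-envelope math behind that 14.7 figure, a minimal sketch in Python using the standard GCN formula (64 shaders per CU, 2 FLOPs per shader per clock); the 64 CU / 1.8 GHz combination is the speculated configuration above, not a confirmed spec:

```python
# Back-of-the-envelope GCN FP32 peak: CUs x 64 shaders/CU x 2 FLOPs/clock x clock (GHz) -> TFLOPS.
# The 64 CU / 1.8 GHz combo is the speculated figure from the post above, not a confirmed spec.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(gcn_tflops(40, 1.172))  # Xbox One X (Scorpio): ~6.0 TFLOPS
print(gcn_tflops(64, 1.8))    # speculated 64 CU Navi @ 1.8 GHz: ~14.7 TFLOPS
```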

If a console with RTX 2080 performance or above launches for $500, and it will likely be above that, I am ditching PC. Especially since, with a proper CPU, it will never fall below 60 fps again. That is the main factor for me, and graphics are well enough ensured to be comparable to PC, thanks to the strong GPU and to diminishing returns.

Could you link those benchmarks showing the XBOX One X GPU being faster than a GTX 1070 ? I'm very curious about your claim, since the X's GPU is more or less an RX 580, which is slower than a GTX 1070.

Navi is still based on GCN so the 64CU cap probably isn't going away.

Yeah, uhm, going by TFLOPS is not a good way to judge graphics performance. If we compare the latest from both AMD and NVIDIA, we have the Radeon VII with theoretical FP32 performance of 13.44 TFLOPS and the RTX 2070 with 7.465 TFLOPS. The difference is substantial and yet both have similar gaming performance. So I'd be cautious about using theoretical peaks to judge real-world gaming performance.
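Spelled out, both of those peak numbers come from the same simple formula (shader count times 2 FLOPs per clock times rated boost clock), which is exactly why they can diverge so much from real-world results:

```latex
\[
P_{\mathrm{FP32}} = N_{\mathrm{shaders}} \times 2\,\mathrm{FLOPs/clock} \times f_{\mathrm{boost}}
\]
\[
\text{Radeon VII: } 3840 \times 2 \times 1.75\,\mathrm{GHz} \approx 13.4\ \mathrm{TFLOPS},
\qquad
\text{RTX 2070: } 2304 \times 2 \times 1.62\,\mathrm{GHz} \approx 7.5\ \mathrm{TFLOPS}
\]
```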

I sincerely doubt we'll get an RTX 2080-topping GPU in a console for $500. I mean, AMD currently sells the Radeon VII, with performance in the ballpark of an RTX 2070, for $699. So you'd have a faster GPU than that + 8-core Zen 2 CPU + RAM + HDD/SSD + controller for less than the RVII alone ?

Then there's power consumption to consider. The RVII draws a lot of power. More than the whole XBOX One X. A machine with a more powerful GPU and all the other components would require even more juice. I don't think Microsoft or Sony are willing to raise their power consumption that high (if at all).

Uhm, if developers get faster hardware to work with, they are going to increase visual fidelity. What makes you think games will never drop below 60fps ?

Ok where to start.

https://www.eurogamer.net/articles/...pio-is-console-hardware-pushed-to-a-new-level

"From what I've seen so far, there is some evidence that Scorpio's true 4K performance could pose a challenge to the likes of Nvidia's GTX 1070 and AMD's Fury X-class hardware. I've seen Microsoft's new console running a Forza Motorsport 6-level experience locked to 4K60 on the equivalent to PC's ultra settings - cranking up the quality presets to obscene levels was one of the first things developer Turn 10 did when confronted with the sheer amount of headroom it had left after a straight Xbox One port. Out of interest, we tested Forza 6 Apex with similar settings at 4K on GTX 1060, 1070 and 1080. Frames were dropped on GTX 1060 (and a lot of them when wet weather conditions kicked in), while GTX 1070 held firm with only the most intense wet weather conditions causing performance dips. Only GTX 1080 held completely solid in all test cases. It's only one data point, and the extent to which the code is comparable at all is debatable, but it certainly doesn't harm Scorpio's credentials: Forza 6 Apex received plenty of praise for the quality of its PC port."

The primary bottleneck on framerate has clearly been the CPU; Forza was clearly selected because it is not CPU-limited.

Moving forward:

"Navi is still based on GCN so the 64CU cap probably isn't going away."

Which is exactly why I said they will likely hit the cap and went with 64. Given the increases so far, that is a conservative estimate just looking at history; otherwise they would go beyond 64.

"Uhm, if developers get faster hardware to work with, they are going to increase visual fidelity. What makes you think games will never drop below 60fps"

Because a 12-14 teraflop GPU in a console would not be GPU-bottlenecked even at 4K. The 6-teraflop Xbox One X hits 4K at 30 fps, and in some games at 60 fps, at roughly medium settings, and again, the CPU is the main block. A GPU this powerful would definitely have no issue with fidelity at this point, and 60 fps is their focus. The CPU would never be an issue if it is an 8-core Ryzen Zen 2 with 60 fps as the baseline. Literally never, and for more reasons as well: CPUs were stagnant for six years or more before this switch, and now that we finally have competition, a Zen 2 will handle all games at 60 with no drawbacks, because it removes the CPU bottleneck entirely. Changes to CPUs, even in the PC market, will not make a difference for a long time at this point. We only moved past four cores in the last few years, and the hardware cycle is slowing down, not speeding up. It's common sense. They will hit 60 fps on nearly every game. It won't even be an issue, and they are unlikely to max out the GPU unless they do something crazy. They could run ultra settings and ray trace at 14 teraflops with Navi.

Also, consider that a 6-teraflop GPU did what it did with Forza, with headroom to spare. The console nearly matches a 1080 when the CPU block is taken out of the picture. That means optimization is real, buddy. If this Navi part hits 64 CUs at an 1800 MHz clock, we are talking about something that, in a PC, would compete with an RTX 2080, and that is not an opinion; that is before considering any architectural improvements. And consider that Vega was the first generation that did not bring one, and per the leaks that was because AMD focused on Navi at Vega's expense. I'm going to guarantee Navi has a huge architectural gain, similar to the RX 480. That being the case, the RTX 2080 Ti is likely the raw PC hardware comparison, not the optimized one, and the bare minimum is RTX 2080-comparable hardware, again before considering optimization. Anyone with a brain knows that will far outperform an RTX 2080 Ti once you account for optimization. Now that the CPU bottleneck is gone, there will be no problem hitting 4K 60 fps on consoles.

I'm calling it: not one game will miss it. Think I'm wrong? This is linked to my Facebook. Come back when the consoles release. I'm actually betting that Microsoft used the phrase "high frame rate gaming" rather than "60" for this reason. Variable 90 fps is what they will go for. Mark it.

"Then there's power consumption to consider. RVII draws a lot of power. More than the whole XBOX One X. A machine with a more powerfull GPU and all the other components would require even more juice. Don't think Microsoft or Sony are willing to raise their power consumption that hight (if at all)."

You are saying this because the Xbox One and PS4 made power the top concern, and you are disregarding that Navi 10 is expected to have a 150-watt power envelope, and that is the part expected to beat a GTX 1080. This is going to be a move like the RX 480: a huge power-efficiency gain. You are also forgetting that even Vega, undervolted, was quite powerful; AMD just couldn't push the clock speed past a certain point on the curve. 1.8 GHz at 150 watts shouldn't be that hard to do if they pull an RX 480 here, and they could. The shrink this time is to 7 nm: they already did 1172 MHz with the Xbox One X on 16 nm, up from 853 MHz at 28 nm, and this node jump is an even greater shrink, to well under half. The Xbox One X is clocked almost 40% higher than the Xbox One, and that shrink wasn't even a full halving. That means a Navi GPU with no clock-speed or efficiency gains from the architecture could reach 1.6 GHz with a similar increase to last time. 1.8 GHz is not outside the scope of reality.
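As a rough sketch of that scaling argument (the "almost 40%" figure is just 853 MHz to 1172 MHz; applying the same ratio again is the speculative extrapolation being made here, not a known spec):

```python
# Sketch of the clock-scaling extrapolation above: apply the same relative uplift the
# Xbox One X got over the Xbox One (853 -> 1172 MHz) to a hypothetical next-gen part.
# Pure speculation-by-extrapolation, not a known spec.
xbox_one_mhz = 853
xbox_one_x_mhz = 1172
uplift = xbox_one_x_mhz / xbox_one_mhz   # ~1.37, i.e. almost 40% higher
projected_mhz = xbox_one_x_mhz * uplift  # ~1610 MHz, i.e. roughly 1.6 GHz
print(round(uplift, 2), round(projected_mhz))
```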

I believe you, like others, are basically treating the Xbox One and PS4 as the norm. They aren't. Those were garbage consoles when they released, and the Xbox One X was moderate at best. They were the exception to the rule, not the rule.

"I sincerely doubt we'll get RTX2080 topping GPU in a console for 500$. I mean, AMD currently sells Radeon VIIs with performance in the ballpark of an RTX2070 for 699$. So you'd have a faster GPU than that + 8core Zen2 CPU + RAM + HDD/SSD + controller for less than the RVII alone ?"

You're forgetting that the Xbox One used the most powerful mobile GPU, slightly modified, and so did the PS4. They were concerned with power. You're also forgetting that Nvidia has the GTX 1060 so well optimized that the laptop part is within 10% of the desktop part. In other words, power envelopes are getting better, AMD included; they just pushed Vega to clock speeds it was poorly suited for, and when downclocked it had really great power usage. You are also forgetting that halving the die size halves the silicon required, which saves money. And to tie it together: the 7970M was a $699 part for laptops that allowed GPU upgrades. You are using traditional cost methods, not console ones. The consoles have consistently used the most powerful, or second most powerful GPU out. The Xbox One used the most powerful mobile part, a modified 7970M, instead of a desktop one. The Xbox 360 used a modified version of the most powerful desktop GPU out when it launched. Trying to state that they won't use a powerful GPU is absurd. It is a question of two things: how much has low-power silicon evolved (1.6 GHz is the minimum at equal standards), and how far are they willing to move toward desktop parts? Can they do a middle ground with a minimal increase?

They've already put parts that expensive in their consoles, the Xbox One included.

RTX 2080 is the minimum performance increase they will allow, especially considering the Xbox One X is already GTX 1070 level.

You really believe they will wait three years from the Xbox One X just to release a console that is, say, 60% more powerful, not even twice as powerful?

You're out of your mind. The Xbox One X was five times more powerful in three years. While we should expect that pace to slow down, you're talking about not even doubling in four years. I'm sorry, the next console will, at bare minimum, have twice the power of a GTX 1070 as its target. RTX 2080 performance is coming at a pure hardware level; at a delivered-performance level, RTX 2080 Ti minimum, due to optimization. And if they do RTX 2080 Ti hardware, then things will get really exciting.
 
we do still have Xbox One X playing games at 4K on a glorified RX580 and that's worth consideration (especially with that weak CPU).

Ughhh sorry, I am so tired of this - it's not a 580. The 580 doesn't have a 384-bit bus and 2560 SPs. It is easily a solid 30% better than a 580.

The closest comparison would be a 590 with insanely overclocked memory (and if you look at charts, that would place it next to a 1070 or better). The 1070 does 4K30 most of the time, just like the XBX.
 

Yes, and then there's that.

The GPU in the Xbox One X is not an RX 580 at all.
 

I asked for benchmarks, you gave me an article talking about a paid-for marketing meeting with Microsoft… Phrases like "Forza Motorsport 6-level experience" and "similar settings" should raise red flags for any rational person. What game did you actually test ? Was it the same on both systems ? What were the exact settings used ? We don't know. Even the author himself states: "It's only one data point, and the extent to which the code is comparable at all is debatable (…)". You took all of that and decided it's proof enough that the XONEX is faster than a 1070…

CPUs limit framerates only at lower resolutions. The higher the resolution, the less impact the CPU has on performance. At 4K the bottleneck is going to be the GPU, not the CPU.

You seem to not understand computer hardware. 14 TFLOPS of THEORETICAL performance doesn't mean much. It's not some grand achievement of computer tech. It most definitely doesn't mean a GPU capable of that number is omnipotent. It can and will struggle, faster than you think. Hell, we already have GPUs with 14 TFLOPS of performance and none of them are able to guarantee a stable 4K60 experience in TODAY'S titles. Yet you think that future, more advanced and thus more demanding titles are going to be no problem ? 4K, ultra, RT and all that at 60fps ? That is just wishful thinking.

Once again you bring up the CPU so I’m going to repeat myself. At 4K the GPU is going to be the bottleneck. Adding a faster CPU won’t have a big impact (if any) on performance. Have you seen even a single review/benchmark exploring how gaming performance scales with resolution and which components are the limiting factor ?

First it’s 1070, now it’s a 1080. Well, at least you are consistent as the X is slower than both of them.

Read what you wrote and think about it for a minute. You stated that a Navi-based GPU with around 1080 levels of performance is going to be a 150W part. Also according to you, the GPU in the PS5 (also Navi-based) is going to be a RTX2080Ti killer. Naturally it’s going to need more power, right ?

For reference, the XONEX GPU is rated at 150W. The XONE was rated at just 95W. That's nearly 60% more power for the X. Going by what you have said, the PS5's GPU is gonna have to be rated higher than the X's. The question is how far MS and Sony are willing to push power consumption on their systems. If we apply the same 60% bump, we get to around 240W for the GPU alone. That's quite high.
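Spelling that arithmetic out (GPU power ratings as quoted above; the projection simply applies the same ratio a second time):

```python
# The 60% figure and the ~240 W projection, spelled out (GPU power ratings as quoted above;
# the projection just applies the same ratio a second time).
xone_gpu_w, xonex_gpu_w = 95, 150
bump = xonex_gpu_w / xone_gpu_w      # ~1.58, i.e. nearly 60% more
projected_w = xonex_gpu_w * bump     # ~237 W for the next GPU alone
print(round(bump, 2), round(projected_w))
```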

To put things in perspective - the Radeon VII is a 7nm GPU that still needs 295W of power to barely outpace an RTX 2070. Sure, you can downvolt it and improve power efficiency, but you will lose performance. Since we are aiming at above-RTX-2080-Ti performance levels, we need to gain performance, not lose it. A Polaris-style efficiency jump is unlikely to happen this time around. The RVII is already on the same process node, and the node shrink is how Polaris gained most of its efficiency compared to Hawaii. Navi is still GCN-based, just like Vega 20. The arch refinements by AMD would have to be pretty major to reach your goal.

You will not get a perfect ½ scaling going a full node down. Even if I grant you that perfect scaling, you claim that this GPU will have (among others) 64CUs (24 more than X). How in the world is it going to be 50% of the size of the X’s chip ? XONE and XONEX GPUs have almost identical die size. Expecting next gen to have GPUs half the size AND performing 2-3x better is beyond optimistic.
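To put rough numbers on that, granting the generous assumptions (perfect 2x density from the node shrink, and GPU area scaling linearly with CU count, which is only a crude stand-in):

```python
# Rough area check of the "half the die size" claim, granting perfect 2x density from the
# node shrink and treating GPU area as roughly proportional to CU count (a crude stand-in).
scorpio_cus, speculated_cus = 40, 64
density_gain = 2.0  # the generous "perfect scaling" assumption granted above
relative_area = (speculated_cus / scorpio_cus) / density_gain
print(relative_area)  # 0.8 -> ~80% of Scorpio's GPU area, not 50%
```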

No, they haven't used the most powerful GPUs available. The XONE GPU is slower than a GTX 650 Ti that launched a year before the console. It most definitely did not use a 7970M, which had 1280 SPs vs 768 SPs in the XONE. Even if it did, the 7970M was a midrange chip. Mobile GPUs at that time were not the equivalent of their desktop variants. A desktop 7970 wiped the floor with the 7970M, and the XONE had even less power than that. The XONEX is a step forward GPU-wise and can be considered midrange. Powerful they are not.

XONE – 2013, XONEX – 2017 = 4 years.

Like I said, the XONE had a wimpy GPU, so it wasn't hard to make a big step forward in that department. It didn't come for free though. As stated before, power draw had to increase by nearly 60%. To make a similar step in performance again could mean an even greater increase in power draw, as it tends to skyrocket once you reach a certain point. Then we have the old, low-power Jaguar cores replaced by full-fat Zen 2 cores. 7nm or not, they are going to need some juice to give you the performance you expect from them. So now you need a beefier power supply and a higher-end cooling solution to cope with the extra heat. Is it doable ? Yeah, but all of that costs money. 7nm is supposed to be very expensive, for example. Expecting next-gen consoles to have the kind of technical prowess you expect while still costing only $500 seems a bit naive.

But maybe I'm wrong about all of it. I wouldn't mind if I was. If consoles stepped up this much it would push the industry forward, and that's good news for everyone. We could see some spectacular advances not only in visual fidelity but also in AI, in-game physics, the scale of game worlds etc. I'm down for all that. I'd rather not get my hopes up though. The XONEX, touted as a 4K system, can't even guarantee native 1080p60 in all games. To get from that to 4K60 without issues seems too good to be true.
 
Ughhh sorry, I am so tired of this - it's not a 580. The 580 doesn't have a 384-bit bus and 2560 SPs. It is easily a solid 30% better than a 580.

The closest comparison would be a 590 with insanely overclocked memory (and if you look at charts, that would place it next to a 1070 or better). The 1070 does 4K30 most of the time, just like the XBX.

You can be tired but that doesn't change the fact that XONEX GPU is basically an RX580 performance-wise. Yes it has a wider memory bus and a handful of additional SPs. What you fail to mention though is that it has a lower clock. That alone negates having more SPs. The extra memory bandwidth will help, especially at higher resolutions, but it will not boost performance by 30%.

Let's examine just how ridiculous your claim is. 30% boost from RX580 would place it at Vega 56 level. Vega 56 has 3584 SPs which is significantly more than Scorpio (X's GPU). Not only that, but thanks to HBM2, it also has higher memory bandwidth (409.6 GB/s vs 326.4 GB/s). Do you still think Scorpio is "easily" 30% more powerful than the RX580 ?
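Putting rough numbers on the clock-versus-shader-count point, using the same peak-throughput math discussed earlier in the thread (published shader counts and typical rated boost clocks; real games land lower, but the relative picture is what matters):

```python
# Same peak-throughput math as earlier in the thread (shaders x 2 FLOPs/clock x clock),
# applied to the parts being compared. Clocks are typical rated boost clocks.
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(peak_tflops(2304, 1.340))  # RX 580:          ~6.2 TFLOPS
print(peak_tflops(2560, 1.172))  # Scorpio (XONEX): ~6.0 TFLOPS
print(peak_tflops(3584, 1.471))  # Vega 56:         ~10.5 TFLOPS
```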

The level of mental gymnastics you need to apply to reconcile such an absurd statement with reality is astounding.
 


"CPUs limit framerates only at lower resolutions. The higher the resolution, the less impact the CPU has on performance. At 4K the bottleneck is going to be the GPU, not the CPU."

Not when going for 4K 60fps. You are incorrect about this. Many games on the Xbox One X are CPU-limited in getting to 60fps.

As for benchmarks: there are no games where you can run the exact same test on Xbox One X and Windows, due to the OS. Forza is about as close as you're going to get. Forza ended up being well optimized for PC, and the Xbox's GPU performed well enough to outperform a GTX 1070.

The Xbox One X GPU is definitely GTX 1070 level.

"You seem to not understand computer hardware. 14TFLOPS of THEORETICAL performance, doesn't mean much. It's not some grand achievement of computer tech. It most definitely doesn't mean a GPU capable of that number is omnipotent. It can and will struggle, faster than you think. Hell, we already have GPUs with 14TFLOPS of performance and none of them are able to guarantee stable 4K60 experience in TODAY'S titles. Yet you think that future, more advanced thus demanding titles are going to be no problem ? 4K, ultra, RT and all that at 60fps ? That is just wishful thinking."

Yes it does, and it is not just theoretical. It's a matter of an equation, and these numbers do generally scale quite well. Even disregarding that, if it scaled only as well as the Radeon VII does, it would still be RTX 2080 level performance, and at lower clocks: it would have 4 additional CUs, and the clock should be very similar, per my post. However, we are talking about a highly customized GPU. For example: the PS4 was close to a 7970M, 1.84 vs 2.1 teraflops, but it had 8 ACE units instead of 2, which ended up being a big deal. As for 14-teraflop GPUs in the present day: yes, they do struggle, but you did not put that in perspective against the Radeon VII, nor did you account for console optimization when you made that statement about struggling. And the original debate is whether the next Xbox, or the PS5, will have RTX 2080 level performance. You are actually arguing it won't even match the GTX 1080, which is so unbelievably stupid, and I'm sorry, I have to call it as such; it is mind-bogglingly inaccurate and ignores all common sense.

"No, they haven't used the most powerful GPUs availible. XONE GPU is slower than a GTX650Ti that launched a year before the console. Even if it did, the 7970M was a midrange chip. Mobile GPUs at that time were not an equivalent of desktop variants. A desktop 7970 wiped the floor with 7970M and XONE had even less power than that. XONEX is a step forward GPU-wise which can be considered midrange. Powerful they are not."

WRONG. And I already proved this. The Xbox 360 had a modified R500; it was the equivalent of an X1800 GPU and had 240 GFLOPS, as opposed to (I believe) 273; I would have to go look it up.

The Xbox One had a mobile variant of the second most powerful GPU, the 7870M, and the PS4 had the 7970M, and those were NOT MIDRANGE GPUS FOR LAPTOPS. They were the most expensive, most powerful available. You just compared a desktop GPU to a mobile one, and ignored that they shot for tight power envelopes with the Xbox One because of the red-ring issue. The GPUs they put in there, while weak next to a desktop component, were expensive as hell to buy in a typical laptop setup. This is why I made the laptop-to-laptop comparison rather than desktop-to-laptop when I mentioned the Xbox One's original release. This is also why I used the power envelopes, at gains equal to the last few generations, to set a baseline expectation, which is clearly 1.6 GHz. You also ignored the cost when you called these midrange.

They can, and will, put the most powerful or second most powerful GPUs out into their consoles; the question is how much power envelopes have evolved. The RX 480 had a huge increase there, and Vega was clock-limited at the peak but mighty power efficient at lower clocks. We should expect that curve to improve, not get worse, with a die shrink like this. At equal rates it would be quite capable of 1.6 GHz; that's math. From there, if they do 1.8 GHz, this is not an opinion but a fact: a 64 CU, 1.8 GHz part would be, hardware-wise, bare minimum RTX 2080 level, proven by the Radeon VII. When looking at whether they will use a 64 CU part, we look at what has happened so far: a doubling of CUs in three years, on both consoles, and exceeding that to some degree. If that happened again they would exceed 64, but there is a limit, so 64 CUs is nearly confirmed. 1.6 GHz is a minimum on the GPU. That part will be RTX 2080 territory, and there is no way it will struggle against a GTX 1080.

"Expecting next-gen consoles to have the kind of technical prowess you expect them to have while still costing only 500$ seems a bit naive."

No, it's not. Stating they won't is a bit naive, for the reasons of both CU count and clock speed covered above. The absolute bare minimum is 1.6 GHz with 64 CUs. Compare that to a VII, 60 CUs at 1.8 GHz, and the difference will be moot even with no architectural benefits whatsoever and no increase in power efficiency or power budget.

"But maybe I'm wrong about all of it. I wouldn't mind if I was. If consoles stepped up this much it would push the industry forward and that's good news for everyone. We could see some spectacular advances not only in visual fidelity but also AI, in-game physics, scale of game worlds etc. I'm down for all that. I rather not get my hopes up though. The XONEX touted as a 4K system can't even claim to guarantee native 1080p60 in all games. To get from that to 4K60 without issues seems too good to be true."

They technically didn't step down; they spent the same amount but bought a power-efficient GPU before GPUs were ready for that. It was a stupid move, but it made sense to some degree; I'm still mad about it too, though. It was not until the GTX 1060 that mobile parts performed in line with desktop ones. Before then it was nearly always around half. You can easily fact-check that.
 

Which games are those ?

The main (overwhelmingly so) reason Scorpio has trouble at 4K is that it's just too weak of a GPU to handle the stress. The same goes for an RX 580 (or a 1070, or a 1080, or even a 1080 Ti). No matter what kind of CPU you pair it with, it's not going to suddenly become a 4K GPU, period. We can go over 4K gaming comparisons showing CPUs that cost anywhere from $65 to over $1000 having little to no effect, but I assumed that was common knowledge at this point. I guess it's not.

You can't use marketing stunts from MS as proof of anything. Even if the game being run on the Scorpio was exactly the same as Forza 6 Apex on PC, that is still only one game. Please explain why Forza is the best comparison ? There are lots of multi-platform games. Many are well optimized for PC. Why not compare other titles ? Is it because Forza games tend to favor AMD cards, making them look better than they are in general (like an RX 470 beating a GTX 1060 6GB) ? The mere fact Microsoft chose that specific game to market its upcoming console should tell you it's most likely a best-case scenario. Every company does that. They create an environment to make their product look as good as it can. Examples: game companies use higher-fidelity demos to hype their upcoming titles. AMD does GPU comparisons based on how the experience feels. That's where the feels-per-second meme comes from. NVIDIA claims Turing is 4x faster than Pascal. And on, and on. Are you really that gullible ? Or are you choosing to accept this BS tactic just this time because it reaffirms your beliefs ? Either way, it's sad.

Yes, it is theoretical. That's what the equation calculates – peak throughput of the SPs under ideal circumstances. It doesn't take into account any other part of the GPU that might become a bottleneck preventing it from achieving its theoretical peak. It's like calculating a car's top speed without taking into account drag or tire friction. Once again, look at a simple comparison - RTX 2070 vs RVII. The former has half the rated FP32 performance of the latter. In real-world scenarios, though, the RVII isn't anywhere near twice as fast. It doesn't get any more obvious than that.

You might say that it’s OK if we stick to one vendor. Again, no it’s not. Example: The GTX1080 is rated at 8.873 TFLOPS, the RTX2070 at 7.465 TFLOPS. By your logic we should conclude that the GTX1080 is the faster GPU. It is not.

So once again, blindly going by theoretical peak numbers is bad practice. The difference between theoretical performance a GPU should have and what it actually achieves in real world is too big to make any solid conclusions. That is why most reasonable people say: wait for actual benchmarks/reviews. Don’t believe the hype, marketing BS, or theories of how it will perform based purely on speculation.

What does it even mean that I didn’t put 14tflops GPUs in perspective to RVII ?

No no, you stated the PS5 will have more power than RTX2080Ti. Don’t try to weasel your way out of that claim saying the original debate was about RTX2080 levels of performance.

Either point to where I claimed that neither XBOX+ nor PS5 will match 1080 in performance or admit that you’re a liar.

You didn't prove anything. At best you gave me one example of consoles using the best GPU around - the X360. I will not refute your claim, as I simply don't know enough about its GPU to challenge that statement. One example does not prove there's a consistent trend.

So when the X360 uses a GPU that is the equivalent of the fastest graphics cards of its time, it's OK to point that out, but when consoles use mediocre mobile GPUs I'm suddenly supposed to disregard the existence of much faster chips ? Talk about moving the goalposts, wow. You haven't mentioned the 1X or the PS4 Pro. What happened, couldn't spin a narrative about their GPUs being the most powerful at the time ?

The point is that you claimed, and I quote: "The consoles have consistently used the most powerful, or second most powerful GPU out." Now you're trying to add caveats to that statement, like the price or MS/Sony's focus on power efficiency etc. You're like Jensen Huang saying Turing is 4x faster than Pascal… but after you enable DLSS… and RTX. Again, sad.

Oh and btw, the 7970M wasn’t even the best GPU in laptops. GTX 680M was faster and then the GTX 680MX added 15-20% on top of that later the same year. XONE launched a year later with a much weaker GPU than the 7970M. Even the PS4 had a cut-down version of the 7970M. So even if I overlooked your attempt at BS’ing your way out of an argument, you’d still be wrong.

I’m saying it’s unlikely to have the kind of hardware you talk about in a 500$ box, not that it won’t happen. You have a nasty habit of putting words in people’s mouths.

The reason I have doubts is that essentially you'd have to take the RVII, increase its performance by more than 40% while at the same time lowering its power draw to something console-viable. What MS/Sony are willing to accept in terms of power consumption is anyone's guess, but I'm sure it's way lower than the VII's current 295W rating. All that on the same process node and without a new arch. That's a pretty tall order.
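Just to put rough numbers on why that's a tall order – a back-of-envelope sketch where the console GPU power budgets are purely my assumption (there's no official figure), and the 295W and 40% come from the paragraph above:

```python
# Back-of-envelope: required perf/W gain over the Radeon VII.
# Assumptions: the target GPU is ~40% faster than the RVII (295W board power),
# and a console can realistically give its GPU somewhere around 150-200W.
rvii_power_w = 295
target_uplift = 1.40

for gpu_budget_w in (150, 175, 200):
    needed = target_uplift * rvii_power_w / gpu_budget_w
    print(f"{gpu_budget_w}W GPU budget -> ~{needed:.1f}x the RVII's perf per watt")
```

Roughly doubling perf per watt on the same node, without a new architecture, is the part I can't see happening.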

I guess they can do what NVIDIA did with Kepler and just cut all of the unnecessary "fat" from the GPU and only keep "the gaming stuff". But how viable that is with GCN, I don't know.

They went from using the best GPU at the time in the X360 to mediocre mobile parts on the XONE. That’s the very definition of a step down.
 
You can be tired, but that doesn't change the fact that the XONEX GPU is basically an RX580 performance-wise. Yes, it has a wider memory bus and a handful of additional SPs. What you fail to mention, though, is that it has a lower clock. That alone negates having more SPs. The extra memory bandwidth will help, especially at higher resolutions, but it will not boost performance by 30%.

Let's examine just how ridiculous your claim is. A 30% boost over the RX580 would place it at Vega 56 level. Vega 56 has 3584 SPs, which is significantly more than Scorpio (the X's GPU). Not only that, but thanks to HBM2 it also has higher memory bandwidth (409.6 GB/s vs 326.4 GB/s). Do you still think Scorpio is "easily" 30% more powerful than the RX580?
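For anyone who wants the raw numbers behind this, here's a small sketch with the specs as published (the Scorpio figures are the ones Microsoft gave for the One X GPU; the desktop clocks are reference boost clocks):

```python
# Published specs, to the best of my knowledge:
#   name: (stream processors, clock in GHz, memory bandwidth in GB/s)
gpus = {
    "Scorpio (One X)": (2560, 1.172, 326.4),
    "RX 580":          (2304, 1.340, 256.0),
    "Vega 56":         (3584, 1.471, 409.6),
}

for name, (sps, ghz, bw) in gpus.items():
    tflops = sps * 2 * ghz / 1000.0  # peak FP32
    print(f"{name}: {sps} SPs @ {ghz:.3f} GHz -> {tflops:.1f} TFLOPS, {bw:.1f} GB/s")
```

On paper Scorpio and the RX580 are within a few percent of each other, while Vega 56 sits in a different class entirely.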

The level of mental gymnastics you need to apply to reconcile such an absurd statement with reality is astounding.

Polaris is heavily bandwidth starved - many people have tested Polaris at various overclocks and shown almost linear performance scaling on an RX 590 as you overclock the memory (I found similar things even with an RX 470). At 1800p-2160p it is directly linear, and those are the resolutions the XBX targets for its 4K/30Hz output.

Furthermore, you really need to fact-check clock speeds. At stock the 580 has a measly 7% higher base clock than the XBX, and meanwhile the XBX has 11% more stream processors and TMUs. Its core performance in games is definitely closer to a 590 than a 580.

Oh, and being as strong as a 1070 is not the same as being a Vega 56, lol. Please do some research:


In fact, Vega 56 is sometimes as much as 30% stronger than a 1070, and usually ~15% better if you remove some outlier results and use the latest games. So yes, it would be ridiculous to say the XBX is as strong as Vega - luckily I didn't say that...

Calling the XBX a 580 would be like calling a 1070 a 580, lol. Now please go read a book. I actually don't even like XBOX, but you are making me talk about it because I dislike ignorance even more.
 
Found a few RX590 reviews including OC runs. Let’s see what we can learn:

1 - https://www.pcgamer.com/amd-radeon-rx-590-review/

Memory was OCed by 12.5% (8Gbps to 9Gbps), core was OCed by less than 7%. Test was done on 14 games. Average bump at 1440p Ultra – less than 9%. Average bump at 4K Ultra – 11.5%. Keep in mind those bumps were achieved with a core OC on top of the memory OC. Neither result shows linear scaling.

2 - https://www.guru3d.com/articles_pages/radeon_rx_590_powercolor_red_devil_review,28.html

Same memory and core OC as last time. Test was done on 4 games at 1440p plus the 3DMark Time Spy benchmark. Average bump in games – 9%. Bump in Time Spy – just over 5%. No linear scaling here either.

3 - https://www.techpowerup.com/reviews/Sapphire/Radeon_RX_590_Nitro_Plus/35.html

Same memory and core OC as last time. Test was unfortunately done only in the Unigine Heaven benchmark. Still, none of the other sites used this benchmark, so let's check it out anyway. The result is a 4% bump. Clearly not linear scaling.

4 - https://hexus.net/tech/reviews/graphics/124211-amd-radeon-rx-590/?page=13

Same memory and core OC as last time. Test was done on 2 games (Far Cry 5 and Shadow of the Tomb Raider) plus the 3DMark Time Spy benchmark, all at 1440p. Bump in FC5 – 3.5%, SOTR – 4.5%, Time Spy – just over 4.5%. Nope, still no linear scaling.

There doesn’t seem to be proof of your claim.
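To make "linear scaling" concrete, here's a quick sketch dividing each of the gains quoted above by the 12.5% memory overclock (a ratio of roughly 1.0 would be linear; I'm treating "just over 5%" and similar phrasings as their round numbers):

```python
# How close is each reported gain to linear scaling with the ~12.5% memory OC?
# Gains are the figures quoted from the four reviews above (most runs also
# included a small core OC, which if anything flatters these ratios).
memory_oc_pct = 12.5

reported_gains_pct = {
    "PCGamer, 1440p average": 9.0,
    "PCGamer, 4K average": 11.5,
    "Guru3D, games average": 9.0,
    "Guru3D, Time Spy": 5.0,
    "TechPowerUp, Heaven": 4.0,
    "Hexus, Far Cry 5": 3.5,
    "Hexus, SOTR": 4.5,
    "Hexus, Time Spy": 4.5,
}

for test, gain in reported_gains_pct.items():
    print(f"{test}: {gain / memory_oc_pct:.2f}x of linear")
```

Only the 4K PCGamer run gets anywhere near 1.0, and even that includes a core OC.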

It is so funny you told me to fact-check clock speeds :D Sure, the base clock of the RX580 is only 7% higher than Scorpio's. There's just one thing – retail RX580s don't really sit at their base clock during gaming. Do you want to know what clock an RX580 actually maintains during gameplay? Let's find out!

1 - https://techreport.com/review/31754/amd-radeon-rx-580-and-radeon-rx-570-graphics-cards-reviewed/2 – quote: "we observed a solid 1411 MHz boost speed from our sample in our tests"

That would make it 20% higher than Scorpio.

2 - https://www.guru3d.com/articles_pages/msi_radeon_rx_570_and_580_mech_2_8g_oc_review,7.html – quote: "A duration test shows the RX580 card running at that ~ 1380MHz marker continuously"

That’s almost 18% higher.

3 - https://www.tomshardware.com/reviews/asrock-phantom-gaming-x-radeon-rx580-8g-oc,5601-5.html – The card with the worst cooling I found, and it still managed to run at around 1300MHz (no quote this time, you need to check their table).

That gives us 11%.

4 - https://www.techpowerup.com/reviews/ASRock/RX_580_Phantom_Gaming_X/35.html – Here we can read why the ASRock card from the previous review performed so much worse than the rest. Even so, in this test it managed an average of around 1340MHz.

That’s over 14% higher.

So nice try with the cherry-picked RX580 base clock. In the real world you can expect 18-20% higher clocks than Scorpio with better implementations, and an 11-14% boost with cheaper, worse-cooled cards. That's enough to not only make up for the deficit in SPs (like I originally said) but actually surpass Scorpio's computational performance. And that's just the RX580 – the RX590 adds another ~300MHz to the core clock and pushes the advantage even further. Saying Scorpio's core perf is closer to the RX590 is, well, ignorant. I thought you disliked ignorance, hm.
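Putting the shader counts and the sustained clocks from those reviews together – a rough sketch using Scorpio's published 2560 SPs at 1172MHz against the RX580 clocks quoted above:

```python
# Raw shader throughput = SPs * 2 FLOPs per clock * sustained clock (MHz).
def tflops(sps, mhz):
    return sps * 2 * mhz / 1e6

scorpio = tflops(2560, 1172)  # published Scorpio figures
print(f"Scorpio: {scorpio:.2f} TFLOPS")

# Sustained RX580 clocks quoted from the reviews above.
for label, mhz in [("RX580, well cooled (~1411MHz)", 1411),
                   ("RX580, typical (~1340-1380MHz)", 1360),
                   ("RX580, worst cooled (~1300MHz)", 1300)]:
    card = tflops(2304, mhz)
    print(f"{label}: {card:.2f} TFLOPS ({(card / scorpio - 1) * 100:+.1f}% vs Scorpio)")
```

Either way the two land within a few percent of each other in raw compute, which leaves memory bandwidth as the only meaningful difference.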

OK, I stand corrected on V56 performance. Did not know V56 gained this much. I was wrong comparing it to 1070.


If you propose to disregard outliers, then don't use said outliers to further your point, OK? Try to be consistent with the rules you set up.

No, you didn't say the Scorpio is as powerful as V56, I did (see above). You did, however, state it's on par with or even faster than a 1070 ("and if you look at charts, that would place it next to a 1070 or better"). That puts it at a 15% deficit (less if you seriously think it's faster than a 1070) compared to V56 on average. Don't you think that's still overly optimistic? The V56 has a substantial advantage in every metric: 40% more SPs, 25% higher bandwidth, 50% more ROPs, 40% more TMUs and a higher core clock to boot. That's some serious difference in computational power to overcome.

No, it wouldn't be the same. Benchmarks show that a 1070 is on average 30% faster than an RX580. Scorpio is barely any better than an RX580 in computational power, or even slightly weaker if we take actual real-world clocks. Scorpio has 27.5% higher bandwidth, but as the reviews show, performance doesn't really scale with bandwidth all that well. Where is this 30+% gain coming from, then?
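One last bit of arithmetic, assuming compute is roughly a wash per the clock comparison above, so the entire claimed gap has to come from Scorpio's bandwidth advantage:

```python
# If raw compute is roughly equal, the claimed lead must come from bandwidth alone.
claimed_gain_pct = 30.0          # the "1070 or better" / RX590-class claim
bandwidth_advantage_pct = 27.5   # Scorpio 326.4 GB/s vs RX580 256 GB/s

required_scaling = claimed_gain_pct / bandwidth_advantage_pct
print(f"Required scaling with bandwidth: {required_scaling:.2f}x of linear")
```

The RX590 memory-OC results above landed at roughly 0.3-0.9x of linear, so better-than-linear scaling (1.09x) is not a realistic expectation.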
 