PlayStation 5 Pro will be bigger, faster, and better using the same CPU

A console will be much more focused on implementing dual-issue than PC games, where studios rarely have programmers willing to optimize the code.
This is complete and utter nonsense.
First, dual-issue instructions aren't a feature you can just choose to implement whenever you want; they're a niche feature that benefits only a few use cases, namely tasks that are compute-heavy while also using low-precision data. That's why their applications are mostly reserved for a few workstation/datacenter tasks, and in games they provide almost zero benefit outside of a few compute shaders here and there. The reason dual-issue doesn't affect gaming isn't that devs just weren't willing to implement it; it's that most tasks involved in 3D rendering (regardless of whether it's a PC or a console) literally cannot benefit from it.
Second, dual-issue instructions are something you'd have to implement at the engine level, not on a per-game basis. Every engine today is multiplatform, and every engine that can support them (for the few shaders that can benefit) already does. Whatever benefit this feature offers to consoles will also apply to PC by default, because it's the same engine running both versions of the game.
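To make the distinction concrete, here's a rough sketch in plain C++ standing in for shader code (function names are made up for illustration): dual-issue can only pair independent operations, and most shading math forms dependency chains.

```cpp
#include <cstddef>

// Dual-issue friendly: two independent streams of FP32 adds. A compiler
// targeting RDNA 3 could, in principle, pair each iteration's two adds
// into a single VOPD instruction (v_dual_add_f32).
void independent_adds(float* a, float* b, const float* x, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        a[i] += x[i]; // stream 1
        b[i] += x[i]; // stream 2, independent of stream 1
    }
}

// Not dual-issue friendly: each step depends on the previous result,
// which is the shape much shading math takes, so there is nothing to pair.
float dependent_chain(float v, const float* k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        v = v * k[i] + 1.0f; // serial dependency chain
    return v;
}
```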
 
Irrelevant, since the Xbox Series S is the baseline level of support for this console generation. Devs are always going to target the lowest common denominator; it makes the most financial sense.
Wasn't it recently reported that some devs at GDC were discussing whether it's even worth developing for and supporting Xbox at all? Sounds like sales are really bad.

The Series S is a non-issue if you don't bother releasing your title on it in the first place...
 
This is complete and utter nonsense.
First, dual-issue instructions aren't a feature you can just choose to implement whenever you want; they're a niche feature that benefits only a few use cases, namely tasks that are compute-heavy while also using low-precision data. That's why their applications are mostly reserved for a few workstation/datacenter tasks, and in games they provide almost zero benefit outside of a few compute shaders here and there. The reason dual-issue doesn't affect gaming isn't that devs just weren't willing to implement it; it's that most tasks involved in 3D rendering (regardless of whether it's a PC or a console) literally cannot benefit from it.
Second, dual-issue instructions are something you'd have to implement at the engine level, not on a per-game basis. Every engine today is multiplatform, and every engine that can support them (for the few shaders that can benefit) already does. Whatever benefit this feature offers to consoles will also apply to PC by default, because it's the same engine running both versions of the game.
It sounds as if the studios are focused on providing the best possible optimization. In the real world, they just want to deliver the result as quickly as possible, making the process as generic and as low-effort as possible.

It needs to be manually implemented because, as Chips and Cheese's detailed analysis has already shown, generic compiler code fails to favor dual-issue when it should.

"We only see convincing dual issue behavior with FP32 adds, where the compiler emitted v_dual_add_f32 instructions. The mixed INT32 and FP32 addition test saw some benefit because the FP32 adds were dual issued, but could not generate VOPD instructions for INT32 due to a lack of VOPD instructions for INT32 operations. Fused multiply add, which is used to calculate a GPU’s headline TFLOPs number, saw very few dual issue instructions emitted. Both architectures can execute 16-bit operations at double rate, though that’s unrelated to RDNA 3’s new dual issue capability"

"I’m guessing RDNA 3’s dual issue mode will have limited impact. It relies heavily on the compiler to find VOPD possibilities, and compilers are frustratingly stupid at seeing very simple optimizations. For example, the FMA test above uses one variable for two of the inputs, which should make it possible for the compiler to meet dual issue constraints. But obviously, the compiler didn’t make it happen."

Humans will be much better at seeing dual issue opportunities than a compiler can ever hope to. Wave64 mode is another opportunity, but it works by pushing heavy scheduling responsibility onto the compiler. AMD is probably aware that compiler technology is not up to the task and will not get there anytime soon.
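As a rough illustration, the FMA pattern they describe (one variable reused across inputs, which in principle should satisfy the dual-issue operand constraints, yet reportedly wasn't paired) might look something like this hypothetical C++ stand-in for shader code:

```cpp
#include <cstddef>

// Two independent FMA streams sharing the scalar k as an operand. Per the
// quoted analysis, a pattern like this should satisfy VOPD's operand
// constraints and be pairable into dual-issue FMAs on RDNA 3, yet the
// compiler emitted very few dual issue instructions for it.
void fma_throughput_test(float* acc0, float* acc1, float k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        acc0[i] = acc0[i] * k + k; // FMA candidate, stream 0
        acc1[i] = acc1[i] * k + k; // FMA candidate, stream 1
    }
}
```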
 
It sounds as if the studios are focused on providing the best possible optimization. In the real world, they just want to deliver the result as quickly as possible, making the process as generic and as low-effort as possible.
That's not how any of this works. The devs who work on a game and struggle with crunch aren't the same devs who build engines. Even within the same company those are two completely different roles that require completely different qualifications; and that's not to mention the fact that the vast majority of studios don't even use in-house engines to begin with.

It needs to be manually implemented because, as Chips and Cheese's detailed analysis has already shown, generic compiler code fails to favor dual-issue when it should.
Again, that's not how any of this works. "Generic compiler code" is not a thing. When you use an engine to make a game, that engine will have its own compiler.
Second, you cannot "manually" add something to a program. If a compiler isn't producing the result you want, you need to change the compiler.
Third, Chips and Cheese's microbenchmarking has nothing whatsoever to do with how game engines (or 3D rendering at all) work. Most of it is OpenCL/compute testing, and the few Vulkan tests they do are completely synthetic max-throughput tests.

"We only see convincing dual issue behavior with FP32 adds, where the compiler emitted v_dual_add_f32 instructions. The mixed INT32 and FP32 addition test saw some benefit because the FP32 adds were dual issued, but could not generate VOPD instructions for INT32 due to a lack of VOPD instructions for INT32 operations. Fused multiply add, which is used to calculate a GPU’s headline TFLOPs number, saw very few dual issue instructions emitted. Both architectures can execute 16-bit operations at double rate, though that’s unrelated to RDNA 3’s new dual issue capability"
The whole point of my comment was to explain to you that 3D rendering largely does not benefit from things like FMA and dual FP. Again, it's not something you can just choose to implement. FMA is a niche instruction you use in the very few specific situations where you need to multiply two numbers and add a third in a single operation. In 3D rendering you never need that, so FMA instructions will do jack squat for you. Same goes for dual-issue instructions: they're useful if and only if you need to run two different instructions on the same data AND can accept a loss in precision. Most 3D rendering tasks cannot accept a loss in precision to begin with, and in the few that can, you rarely need to run two instructions on the same data, leaving very few cases (like some compute shaders) that can actually benefit. That's not how computing works; you cannot use whatever instruction you want whenever you want. Instructions are made for specific use cases and you have to use the appropriate ones.
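For reference, a minimal stand-in (the function and its name here are made up) showing what an FMA computes:

```cpp
#include <cmath>

// fma(a, b, c) computes a*b + c as a single instruction with one rounding
// step at the end, instead of a rounded multiply followed by a rounded add.
float weighted_sum(float weight, float sample, float base) {
    return std::fma(weight, sample, base); // weight * sample + base
}
```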

Humans will be much better at seeing dual issue opportunities than a compiler can ever hope to.
Irrelevant, because without a compiler doing it you simply cannot have those things implemented in an executable. You might improve the compilers used to build the engine, and you might improve engine compilers used to build the game executables. But none of this means consoles will have an advantage over PC, because again every engine is multiplatform and any improvements made to those compilers would also benefit PC, without the game devs having to do anything about it. Your claim that consoles will benefit more from this feature is a delusion from someone who has no idea what they're talking about, because A) those features already largely do not benefit 3D rendering tasks, and B) those features cannot be implemented in engines in a way that works on consoles but not on PC.
 
Good stuff.

Will be trading in my original PS5 at my local EB Games store to offset some of the cost of the PS5 Pro.
 
That's not how any of this works. The devs who work on a game and struggle with crunch aren't the same devs who build engines. Even within the same company those are two completely different roles that require completely different qualifications; and that's not to mention the fact that the vast majority of studios don't even use in-house engines to begin with.


Again, that's not how any of this works. "Generic compiler code" is not a thing. When you use an engine to make a game, that engine will have its own compiler.
Second, you cannot "manually" add something to a program. If a compiler isn't producing the result you want, you need to change the compiler.
Third, Chips and Cheese's microbenchmarking has nothing whatsoever to do with how game engines (or 3D rendering at all) work. Most of it is OpenCL/compute testing, and the few Vulkan tests they do are completely synthetic max-throughput tests.


The whole point of my comment was to explain to you that 3D rendering largely does not benefit from things like FMA and dual FP. Again, it's not something you can just choose to implement. FMA is a niche instruction you use in the very few specific situations where you need to multiply two numbers and add a third in a single operation. In 3D rendering you never need that, so FMA instructions will do jack squat for you. Same goes for dual-issue instructions: they're useful if and only if you need to run two different instructions on the same data AND can accept a loss in precision. Most 3D rendering tasks cannot accept a loss in precision to begin with, and in the few that can, you rarely need to run two instructions on the same data, leaving very few cases (like some compute shaders) that can actually benefit. That's not how computing works; you cannot use whatever instruction you want whenever you want. Instructions are made for specific use cases and you have to use the appropriate ones.


Irrelevant, because without a compiler doing it you simply cannot have those things implemented in an executable. You might improve the compilers used to build the engine, and you might improve engine compilers used to build the game executables. But none of this means consoles will have an advantage over PC, because again every engine is multiplatform and any improvements made to those compilers would also benefit PC, without the game devs having to do anything about it. Your claim that consoles will benefit more from this feature is a delusion from someone who has no idea what they're talking about, because A) those features already largely do not benefit 3D rendering tasks, and B) those features cannot be implemented in engines in a way that works on consoles but not on PC.
In your fictional world, things operate differently from reality. In the real world, shaders can be tailored to enhance performance on specific architectures, and GPU manufacturers supply drivers containing customized assembly code with finely tuned optimizations. Relying solely on an engine and compiler for optimization is dumb and problematic, and that's particularly evident with UE5's "box of tricks".

There are benchmarks in Starfield that demonstrate the use of dual-issue, along with detailed explanations of how RDNA3 outperforms other architectures in the game.
 
PS6 is years from coming out; probably at least 3 more. The PS5 Pro will probably sell.

I bet more people buy it than you'd think, and I'm sure more than a few in the comments will end up buying one.
 
In your fictional world, things operate differently from reality. In the real world, shaders can be tailored to enhance performance on specific architectures
1) That is done on PC too, genius.
2) No amount of tailoring will ever make FMA and dual-issue instructions useful for 3D rendering, because 3D rendering cannot benefit in any meaningful way from those instructions. Those instructions were not created for 3D rendering tasks.

and GPU manufacturers supply drivers containing customized assembly code with finely tuned optimizations.
And what does that have to do with game devs? It's not them who work on these driver optimizations.
Also, driver optimizations aren't "assembly code", 99% of the time they're just rewritten shaders.
Also, driver optimizations have literally nothing whatsoever to do with dual-issue instructions. Because, again, games largely don't use them. Because, again, games largely do not benefit from them.

Relying solely on an engine and compiler for optimization is dumb and problematic, and that's particularly evident with UE5's "box of tricks".
Consoles don't get driver updates with game-specific optimizations like PC GPUs do. In-engine optimizations are literally the only vehicle for optimization that console games have. Claiming that engine optimization is "problematic" is just grotesque ignorance, it's literally the default way every game dev works on optimization.

There are benchmarks in Starfield that demonstrate the use of dual-issue, along with detailed explanations of how RDNA3 outperforms other architectures in the game.
Except that is complete bullshit, since RDNA 3 is not the first GPU architecture that supports dual-issue instructions, and Nvidia GPUs had that feature long before AMD GPUs did. RDNA 3 GPUs outperform Ada and Ampere GPUs (all of which also support dual-issue instructions), but Ada and Ampere GPUs do NOT outperform Turing GPUs (which don't support dual-issue), as you can see on the Starfield GPU test done right here on TechSpot. The RX 7600 doesn't outperform its RDNA 2 counterparts (with no dual-issue support) either.
So Starfield may run better on AMD architectures (just like there are other games that run better on Nvidia architectures), but the one thing we can say with absolute certainty is that this advantage AMD archs have in it has nothing whatsoever to do with dual-issue instruction support.
The fact you just declared "there are benchmarks" instead of linking or pointing to them speaks volumes.
 
1) That is done on PC too, genius.
2) No amount of tailoring will ever make FMA and dual-issue instructions useful for 3D rendering, because 3D rendering cannot benefit in any meaningful way from those instructions. Those instructions were not created for 3D rendering tasks.


And what does that have to do with game devs? It's not them who work on these driver optimizations.
Also, driver optimizations aren't "assembly code", 99% of the time they're just rewritten shaders.
Also, driver optimizations have literally nothing whatsoever to do with dual-issue instructions. Because, again, games largely don't use them. Because, again, games largely do not benefit from them.


Consoles don't get driver updates with game-specific optimizations like PC GPUs do. In-engine optimizations are literally the only vehicle for optimization that console games have. Claiming that engine optimization is "problematic" is just grotesque ignorance, it's literally the default way every game dev works on optimization.


Except that is complete bullshit, since RDNA 3 is not the first GPU architecture that supports dual-issue instructions, and Nvidia GPUs had that feature long before AMD GPUs did. RDNA 3 GPUs outperform Ada and Ampere GPUs (all of which also support dual-issue instructions), but Ada and Ampere GPUs do NOT outperform Turing GPUs (which don't support dual-issue), as you can see on the Starfield GPU test done right here on TechSpot. The RX 7600 doesn't outperform its RDNA 2 counterparts (with no dual-issue support) either.
So Starfield may run better on AMD architectures (just like there are other games that run better on Nvidia architectures), but the one thing we can say with absolute certainty is that this advantage AMD archs have in it has nothing whatsoever to do with dual-issue instruction support.
The fact you just declared "there are benchmarks" instead of linking or pointing to them speaks volumes.
First of all, where did you get the idea that only FMA benefits from dual-issue? Other architectures also support dual-issue, each exposing it for different operations depending on the case and the code quality. However, RDNA3 accomplishes this in a single cycle.

Low-level APIs were introduced to enable programming directly on hardware with optimization similar to consoles. However, due to many developers being either mediocre or pressured by tight deadlines, this optimization often doesn't occur. GPU manufacturers then need to invest hundreds of millions in their software teams to ensure smooth operation, releasing drivers regularly. If engines and compilers could magically bring good optimization, we wouldn't see Intel GPUs struggling to run games correctly (gaining 100-200% performance improvements with driver updates) or AMD GPUs underperforming in little-known titles.

Not seeking information on your own and expecting everything to be handed to you really speaks volumes about your level of laziness.
 
First of all, where did you get the idea that only FMA benefits from dual-issue?
Literally never said that anywhere in any of my comments.
I mentioned FMA because, just like dual-issue instructions, it is another example of a feature that does not benefit 3D rendering.

Other architectures also support dual-issue
FMA is not an architecture. It's an instruction, it stands for "fused multiply add".

However, RDNA3 accomplishes this in a single cycle.
Every architecture that supports it accomplishes it in a single cycle. That's the whole point of dual-issue instructions, issuing two instructions on the same data in a single cycle. That's what Ampere and Ada Lovelace also do, because that's what dual-issue instructions are.

Low-level APIs were introduced to enable programming directly on hardware with optimization similar to consoles.
Completely incorrect.
1) DirectX 12 and Vulkan are not "low-level APIs", they are only less abstracted than DirectX 11 and OpenGL. They're still not the same as the low-level APIs consoles use, and they are NOT "programming directly on hardware".
2) This has nothing whatsoever to do with dual-issue instructions, because you don't need those APIs (DirectX 12 and Vulkan) to use dual-issue instructions.

However, due to many developers being either mediocre or pressured by tight deadlines, this optimization often doesn't occur. GPU manufacturers then need to invest hundreds of millions in their software teams to ensure smooth operation, releasing drivers regularly.
Again, this has nothing whatsoever to do with the subject at hand. This has been a thing for GPU manufacturers for two decades now, since long before dual-issue instructions existed.
You are so desperately scrambling to save face here after being schooled that your comments are devolving into complete non-sequiturs that have no relationship whatsoever with what we were discussing (your nonsensical claim that consoles will benefit from dual-issue instructions while PCs won't).

If engines and compilers could magically bring good optimization, we wouldn't see Intel GPUs struggling to run games correctly (gaining 100-200% performance improvements with driver updates) or AMD GPUs underperforming in little-known titles.
You are literally incapable of comprehending what I'm saying.
I never said that, you dimwit. YOU are the one who claimed "optimizing games at engine level is problematic". Which, again, is a grotesquely ignorant statement, because optimization at engine level is literally the only form of optimization that console games have. Console devs don't get access to console firmware, and consoles don't get driver updates where the GPU manufacturer optimizes games themselves like PCs do. Literally the only thing console devs have control over is in-engine code, and (sometimes) the engine itself.
You are tripping over yourself here trying to talk about driver optimizations, which again have nothing whatsoever to do with what we're discussing, because consoles don't get driver updates like PCs do and thus support for things like dual-issue instructions has nothing whatsoever to do with driver updates.

Not seeking information on your own and expecting everything to be handed to you really speaks volumes about your level of laziness.
Absolutely hilarious for you to end your comment like this after typing all this nonsense. Zero self-awareness.
 
PS6 is years from coming out; probably at least 3 more. The PS5 Pro will probably sell.

I bet more people buy it than you'd think, and I'm sure more than a few in the comments will end up buying one.

PS6 is 2028.
PS5 Pro is a mid-gen release - PS5 came out in 2020.

Xbox can update their console in 2026 all they want, but PS will still sell better. Impressive hardware means nothing without good games. Xbox mostly relies on multiplatform titles and MS cares most about Game Pass sub numbers. They speak about Game Pass subs ALL THE TIME. It is easy to understand what their Xbox focus will be going forward.

Rumours also claim that the 2026 Xbox won't have a physical drive at all. Game Pass focus, once again.

PS5 Pro will have a 1-2 year head start before a new Xbox comes out, and PS6 will be ready for 2028, leapfrogging it again. I don't even think Sony cares about Xbox at this point. Xbox sales are minuscule and Microsoft doesn't even want to talk sales numbers - only shipped numbers - which means SITTING IN A WAREHOUSE. Xbox sales are terrible outside of the US, and Xbox is part of student bundles here, which could explain it.

PS5 Pro will sell in the xx millions for sure; GTA6 alone will sell tons of PS5 Pros. GTA 6 is the biggest game release in years and GTA in general is a SYSTEM SELLER.
 
Literally never said that anywhere in any of my comments.
I mentioned FMA because, just like dual-issue instructions, it is another example of a feature that does not benefit 3D rendering.


FMA is not an architecture. It's an instruction, it stands for "fused multiply add".


Every architecture that supports it accomplishes it in a single cycle. That's the whole point of dual-issue instructions, issuing two instructions on the same data in a single cycle. That's what Ampere and Ada Lovelace also do, because that's what dual-issue instructions are.


Completely incorrect.
1) DirectX 12 and Vulkan are not "low-level APIs", they are only less abstracted than DirectX 11 and OpenGL. They're still not the same as the low-level APIs consoles use, and they are NOT "programming directly on hardware".
2) This has nothing whatsoever to do with dual-issue instructions, because you don't need those APIs (DirectX 12 and Vulkan) to use dual-issue instructions.


Again, this has nothing whatsoever to do with the subject at hand. This has been a thing for GPU manufacturers for two decades now, since long before dual-issue instructions existed.
You are so desperately scrambling to save face here after being schooled that your comments are devolving into complete non-sequiturs that have no relationship whatsoever with what we were discussing (your nonsensical claim that consoles will benefit from dual-issue instructions while PCs won't).


You are literally incapable of comprehending what I'm saying.
I never said that, you dimwit. YOU are the one who claimed "optimizing games at engine level is problematic". Which, again, is a grotesquely ignorant statement, because optimization at engine level is literally the only form of optimization that console games have. Console devs don't get access to console firmware, and consoles don't get driver updates where the GPU manufacturer optimizes games themselves like PCs do. Literally the only thing console devs have control over is in-engine code, and (sometimes) the engine itself.
You are tripping over yourself here trying to talk about driver optimizations, which again have nothing whatsoever to do with what we're discussing, because consoles don't get driver updates like PCs do and thus support for things like dual-issue instructions has nothing whatsoever to do with driver updates.


Absolutely hilarious for you to end your comment like this after typing all this nonsense. Zero self-awareness.
No, you weren't even able to understand what I said. Your inability to interpret text shows that you have no way of understanding the subject you're diving into.

If you're not completely overcome by laziness you should read the article @neeyik wrote and learn something. https://www.techspot.com/article/2570-gpu-architectures-nvidia-intel-amd/

Plus:
Certain common instructions can execute with full 1-per-cycle throughput even in wave64 mode, thanks to RDNA 3's dual 32-wide execution units.

 
No, you weren't even able to understand what I said. Your inability to interpret text shows that you have no way of understanding the subject you're diving into.
Brother, you can't just "I'm rubber you're glue" me. I have provided lengthy explanations of exactly why you're wrong point by point, and pointed you to my sources. Your initial comment was a blatantly incorrect statement, followed by a lot of jargon salad that has nothing to do with the subject of your initial comment (dual-issue instructions and how they work).
You are literally a child who cannot handle being told they're wrong.

If you're not completely overcome by laziness you should read the article @neeyik wrote and learn something. https://www.techspot.com/article/2570-gpu-architectures-nvidia-intel-amd/
I did read that article last year. Nowhere does it disprove anything I have said in this thread. In fact, that article only mentions dual-issue instructions exactly once, and doesn't go into any detail about what exactly they are or do. There is nothing to be learned about this subject in that article.
This is, once again, you just throwing some unrelated nonsense into the discussion hoping not to get called out on it, in your desperate attempt to save face.

Plus:
Certain common instructions can execute with full 1-per-cycle throughput even in wave64 mode, thanks to RDNA 3's dual 32-wide execution units.

Again, this is how we know you have no clue what you're talking about. You can literally look at actual benchmarks of Starfield (https://www.techspot.com/review/2731-starfield-gpu-benchmark/) and see that RDNA 3 cards (with dual-issue) DO NOT outperform comparable RDNA 2 cards (without dual-issue). The RX 7600 (supports dual-issue) sits in between the RX 6650 XT and the RX 6700, exactly like it does on every other game.
On that same benchmark, you can also see that Nvidia GPUs that support dual-issue instructions DO NOT outperform Nvidia GPUs that don't support it. You can see the RTX 3070 and the RTX 2080 Ti perform the same, exactly like they do on every other game. That benchmark shows that, for both AMD and Nvidia, the support for dual-issue instructions provides no advantage whatsoever.

Also, it's again hilarious that you start your reply by telling me I'm "unable to interpret text", when you literally are unable to understand the meaning of your own quote from Chips and Cheese. Chips and Cheese is not proving your point here. Saying "certain instructions can execute in dual-mode on RDNA" does not mean that dual-issue instructions have any relevant impact in game performance (because they don't, the Starfield benchmark I linked shows exactly that). I said myself a few replies back that some shaders here and there, particularly some compute shaders, can benefit from this feature. But they are such a small portion of the entire graphics pipeline that their impact is insignificant and its effects on actual performance/framerate are basically null. Which is what you can see in the benchmarks, GPUs that support dual-issue don't perform any better than their counterparts that don't support it.

But all of this is beside the point. It doesn't even matter that dual-issue is an irrelevant feature for 3D rendering. Even if it were relevant, implementing support for dual-issue instructions would need to be an engine feature, and since every engine is multiplatform that support would apply equally to consoles and PC. So even if dual-issue were relevant for rendering, which it isn't, your initial statement that "consoles will be much more focused on implementing dual-issue" would still be wrong.
 
If I do decide to upgrade from my PS4 Pro, I think I'll give it a few more years for the PS6... it will depend on reviews I suppose, but none of the PS5 games really intrigue me enough to buy a new console.
Same boat... have a PS4 Pro; the PS5 is the first Playstation model I haven't owned. PS5 Pro will be the second. There's just been nothing I consider as motivation for me to Gen up. No new games, no features I don't think I could easily live without... Maybe I'll feel differently about the 6th Gen... but the only upcoming games I'm really excited for are ES6 and Civ7...

Clearly to each their own... but the industry seems to be running out of ideas fast.
 
Brother, you can't just "I'm rubber you're glue" me. I have provided lengthy explanations of exactly why you're wrong point by point, and pointed you to my sources. Your initial comment was a blatantly incorrect statement, followed by a lot of jargon salad that has nothing to do with the subject of your initial comment (dual-issue instructions and how they work).
You are literally a child who cannot handle being told they're wrong.


I did read that article last year. Nowhere does it disprove anything I have said in this thread. In fact, that article only mentions dual-issue instructions exactly once, and doesn't go into any detail about what exactly they are or do. There is nothing to be learned about this subject in that article.
This is, once again, you just throwing some unrelated nonsense into the discussion hoping not to get called out on it, in your desperate attempt to save face.


Again, this is how we know you have no clue what you're talking about. You can literally look at actual benchmarks of Starfield (https://www.techspot.com/review/2731-starfield-gpu-benchmark/) and see that RDNA 3 cards (with dual-issue) DO NOT outperform comparable RDNA 2 cards (without dual-issue). The RX 7600 (supports dual-issue) sits in between the RX 6650 XT and the RX 6700, exactly like it does on every other game.
On that same benchmark, you can also see that Nvidia GPUs that support dual-issue instructions DO NOT outperform Nvidia GPUs that don't support it. You can see the RTX 3070 and the RTX 2080 Ti perform the same, exactly like they do on every other game. That benchmark shows that, for both AMD and Nvidia, the support for dual-issue instructions provides no advantage whatsoever.

Also, it's again hilarious that you start your reply by telling me I'm "unable to interpret text", when you literally are unable to understand the meaning of your own quote from Chips and Cheese. Chips and Cheese is not proving your point here. Saying "certain instructions can execute in dual-mode on RDNA" does not mean that dual-issue instructions have any relevant impact in game performance (because they don't, the Starfield benchmark I linked shows exactly that). I said myself a few replies back that some shaders here and there, particularly some compute shaders, can benefit from this feature. But they are such a small portion of the entire graphics pipeline that their impact is insignificant and its effects on actual performance/framerate are basically null. Which is what you can see in the benchmarks, GPUs that support dual-issue don't perform any better than their counterparts that don't support it.

But all of this is beside the point. It doesn't even matter that dual-issue is an irrelevant feature for 3D rendering. Even if it were relevant, implementing support for dual-issue instructions would need to be an engine feature, and since every engine is multiplatform that support would apply equally to consoles and PC. So even if dual-issue were relevant for rendering, which it isn't, your initial statement that "consoles will be much more focused on implementing dual-issue" would still be wrong.

In this discussion I believe you are both lost. But allow me to participate: the 7600's registers are too limited to realize the advantages inherent to RDNA3's dual-issue FP32 per cycle (that's what it says in the TechSpot article) that the larger chips in the family have.

As for FMA not being used in games, I would say that such a statement is a demonstration of ignorance. In fragment shaders, instructions such as FMA, ADD, and SUB are used for lighting calculations, texture sampling, and other shading operations.

Floating-point arithmetic operations are rounded according to one of the four rounding modes in effect. However, FMA (Fused Multiply-Add) instructions bypass the rounding process and associated rounding error detection on the multiplication operation, using the intermediate result directly for addition. While this optimization provides a performance boost, it can yield different results compared to using separate multiply and add instructions due to the absence of rounding during intermediate steps.
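A small self-contained C++ demo of that rounding difference, with hand-picked values (compile with contraction disabled, e.g. -ffp-contract=off on GCC/Clang, so the compiler doesn't fuse the separate path on its own):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // a*a = 1 + 2^-22 + 2^-46 exactly; the last term doesn't fit in a
    // float's 24-bit significand, so a rounded multiply drops it.
    float a = 1.0f + 0x1p-23f;
    float c = -(1.0f + 0x1p-22f);

    float separate = a * a + c;         // product rounded first: result is 0
    float fused    = std::fma(a, a, c); // single rounding: result is 2^-46

    std::printf("separate: %g\nfused:    %g\n", separate, fused);
    return 0;
}
```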

Well, it's noteworthy that the PS5's proprietary API and shader language were designed to maximize the potential of its specific GPU for developers. In contrast, DX12 was created to enable consistent coding across Xbox and PC platforms. This suggests to me that the PS5 API is likely to offer greater efficiency.
 
But allow me to participate: the 7600's registers are too limited to realize the advantages inherent to RDNA3's dual-issue FP32 per cycle (that's what it says in the TechSpot article) that the larger chips in the family have.
Completely baseless speculation. If you compare the 7900 XT to the 6950 XT, it is 24% faster in TechSpot's Starfield benchmark, and 21% faster on TechPowerUp's performance summary pages in their reviews (looking at 1440p numbers). Comparing Ada/Ampere to Turing in Starfield vs other games also shows the advantage GPUs with dual-issue support have is little to none.
If you want to participate, at least read what you're replying to properly first and make sure you understand what is written in it, because what you replied to already says the above.

As for FMA not being used in games, I would say that such a statement is a demonstration of ignorance. In fragment shaders, instructions such as FMA, ADD, and SUB are used for lighting calculations, texture sampling, and other shading operations.
Spare me the pedantry. I didn't say FMA instructions are never used in 3D rendering, I said FMA instructions do not benefit 3D rendering to any notable extent. Just like dual-issue instructions, it can be used for minor optimizations of small portions of the graphics pipeline, but the impact it has on the performance of the overall pipeline is minuscule.
The point here is that someone making the claim that "consoles will get a huge boost from dual-issue instructions which PCs will not get" is completely ridiculous for multiple reasons, as ridiculous as someone making the same claim about FMA instructions would be.
 
I wonder why Sony didn't go with Zen3; it's much more efficient than Zen2. It has higher IPC and draws less power while also clocking higher, not that it would make that much of a difference at 4K and above.

Also, I'm pretty sure Sony's PSSR is an implementation of AMD's next-gen FSR based on tensor cores.

Looking at the specs, it seems to me the GPU is some sort of RDNA3/4 hybrid or pure RDNA4, seeing that it promises a huge uplift in RT and AI upscaling.
 
I wonder why Sony didn't go with Zen3; it's much more efficient than Zen2. It has higher IPC and draws less power while also clocking higher, not that it would make that much of a difference at 4K and above.

Also, I'm pretty sure Sony's PSSR is an implementation of AMD's next-gen FSR based on tensor cores.

Looking at the specs, it seems to me the GPU is some sort of RDNA3/4 hybrid or pure RDNA4, seeing that it promises a huge uplift in RT and AI upscaling.
Cause they don't switch CPUs mid-generation... only GPUs... they'll save that for PS6 - although hopefully that will have Zen4 or Zen5...

Remember, every game made for the PS5 Pro still has to play on the PS5... make the Pro too much faster and there's no real benefit other than making it cost more.
 
Hoo-boy, a lotta flamewar threads. I'm only here because I'm baffled by this phrase in the article:

"Update (04/16/18):"

 
Guys, hear me out: the PS5 never got proper reviews on tech sites like this one or from YouTubers (influencers)... Sony just has too much money and too many lawyers for anyone to speak up, so let me tell you the truth: the PS5 is by far the worst console Sony ever spewed out, and one of the worst consoles in history. Locking users out of their own save files even for always-offline games, and removing the web browser, should have destroyed the company for good if people knew what they were about; let's not forget backwards compatibility, or the lack thereof... No performance upgrades can fix the issues of corporate greed and capitalist infinite growth.
 
Guys, hear me out: the PS5 never got proper reviews on tech sites like this one or from YouTubers (influencers)... Sony just has too much money and too many lawyers for anyone to speak up, so let me tell you the truth: the PS5 is by far the worst console Sony ever spewed out, and one of the worst consoles in history. Locking users out of their own save files even for always-offline games, and removing the web browser, should have destroyed the company for good if people knew what they were about; let's not forget backwards compatibility, or the lack thereof... No performance upgrades can fix the issues of corporate greed and capitalist infinite growth.
The PS4 also locks you out of your save files… and the original PlayStation didn't have a web browser… not to mention only the PS2 had backwards compatibility…

There are only 3 consoles… and Nintendo doesn't really count as they kind of have their own parallel market going… so they only really compete with the Xbox…

Does the Xbox really do any of those things you're complaining the PS5 doesn't do?
 