Sony reveals PlayStation 5 specifications, including support for internal NVMe SSD upgrades

We are talking 300% better ray tracing than the 2080ti...

Definitely not. 300 percent better isn't a performance metric anyone will recognise.

A dozen teraflops or so of ray tracing hardware on the faster Series X versus several dozen on a 2080Ti is the realistic comparison. Not to mention the 2080Ti has more memory and bandwidth all to itself, not shared with the rest of the system.

Ray tracing is a thing. Just not much of it with much precision on these consoles.

$100 difference? Can you please show me?

Dreamcast/GameCube launched at $199 whereas PS2 launched at $299.

Xbox was very late to the party and also $299. It didn't sell well so Microsoft actually dropped the price to $199 a month before PS2 was cut to the same figure in NA.

Sega Saturn was $399 and PlayStation was $299. This was the most famous price announcement in the history of consoles: it was caught on video at E3 1995, and the audience gasped, realising Sega were boned.
 
I think the relevant part for PC users is the option to upgrade storage with a fast PCIe 4.0 NVMe drive.

This should finally mean more and faster drives for PC users. Nice!
 
That's subjective.

Full stop.

----------------------------
But on a serious note, who cares? Software is where it's at, and anyone can argue that PC has better "exclusives" than either lol

No it's not my friend...

You can love Crackdown 3 but you have to know that God Of War is a better game even if you have fun playing a sh*tty game...

The same example applies to State of Decay 2 and Spider-Man or Horizon... no matter if you like it or not, deep down you know which one is the better game.
 
Totally feels like Sony are trying to get a mic drop moment on price. Waiting. I think they are banking heavily on Microsoft talking about their fastest console and then pricing it at $549.

So they can nip in and say $449 and come out looking pretty good, despite clearly losing out a bit in performance at the start of this generation.
Then they'll release a PS5 Pro after a while, completely negate those small savings, and still push a system that's a bit underwhelming.

Sony set a precedent with the PS4 Pro and they know cheapskate gamers will chomp at the bit. I don't like what they'll do, but from a business perspective... it's smart.
 
No it's not my friend...

You can love Crackdown 3 but you have to know that God Of War is a better game even if you have fun playing a sh*tty game...

The same example applies to State of Decay 2 and Spider-Man or Horizon... no matter if you like it or not, deep down you know which one is the better game.
Do you know what subjective means?
Because the ironic part is that you're using subjectivity to argue it's not subjective lol
 
Definitely not. 300 percent better isn't a performance metric anyone will recognise.

A dozen teraflops or so of ray tracing hardware on the faster Series X versus several dozen on a 2080Ti is the realistic comparison. Not to mention the 2080Ti has more memory and bandwidth all to itself, not shared with the rest of the system.

Ray tracing is a thing. Just not much of it with much precision on these consoles.

Dreamcast/GameCube launched at $199 whereas PS2 launched at $299.

Xbox was very late to the party and also $299. It didn't sell well so Microsoft actually dropped the price to $199 a month before PS2 was cut to the same figure in NA.

Sega Saturn was $399 and PlayStation was $299. This was the most famous price announcement in the history of consoles: it was caught on video at E3 1995, and the audience gasped, realising Sega were boned.

Read what you typed...
That 300% isn't something people will recognize...?

Secondly, you don't use teraflops to compute ray tracing, as ray tracing can now be done in parallel alongside raster. And if you watched the livestream you will see that the 2080Ti's "ray tracing" is horrible, as nobody would PAY FOR THAT. Because when you use ray tracing on a 2080Ti, it slows down gameplay by more than 60%.

You simply do not know what you are talking about... because nobody is really using the 2080Ti as a reference for how to do ray tracing... because the 2080Ti can't really accelerate ray tracing. RDNA2 can...

Nobody cares how much past consoles have cost; we are talking about a new breed of consoles. People's cell phones are $500 ~ $1,200 and they buy them every few years. Expect these new consoles (that you use for 5+ years) to cost more than a cheesy cellphone.
 
Read what you typed...
That 300% isn't something people will recognize...?

Secondly, you don't use teraflops to compute ray tracing, as ray tracing can now be done in parallel alongside raster. And if you watched the livestream you will see that the 2080Ti's "ray tracing" is horrible, as nobody would PAY FOR THAT. Because when you use ray tracing on a 2080Ti, it slows down gameplay by more than 60%.

You simply do not know what you are talking about... because nobody is really using the 2080Ti as a reference for how to do ray tracing... because the 2080Ti can't really accelerate ray tracing. RDNA2 can...

Nobody cares how much past consoles have cost; we are talking about a new breed of consoles. People's cell phones are $500 ~ $1,200 and they buy them every few years. Expect these new consoles (that you use for 5+ years) to cost more than a cheesy cellphone.

AMD outdid Nvidia on the ray tracing from the looks of it. Nvidia's choice of RT-only cores just isn't a good use of silicon. And I'm sure Nvidia will follow, while still using RT cores until it phases them out outright.

RDNA uses the TMUs (Texture Mapping Units). They can be switched between workloads. Each unit has its own pipeline. Each CU has 4 TMUs. So AMD clearly did ray tracing smarter. And we will probably see more TMUs in a CU as time goes on and RT becomes more commonplace.

Nvidia will no doubt be switching to this method in a few years. We already know they will not with their next generation. But Nvidia's RT cores are wasted silicon when not used. So I'd like to see them move to AMD's method, or go the chiplet route and make a die for RT cores.
 
In regards to the ray tracing argument, as I noted before, there is a difference between being able to do ray tracing and being able to do so at a playable speed.

What we're seeing here, much like 4k120, is that while the console CAN do the job, it will be rarely used for performance reasons.

These should be good enough for 4k60 though, which is what most of us care about.
 
RDNA uses the TMUs (Texture Mapping Units). They can be switched between workloads. Each unit has its own pipeline. Each CU has 4 TMUs. So AMD clearly did ray tracing smarter. And we will probably see more TMUs in a CU as time goes on and RT becomes more commonplace.
This assumes AMD has implemented their patented 'Texture Processor Based Ray Tracing Acceleration' system in RDNA2, but I think this is a safe assumption. It's not the TMUs per se that are doing the work you're referring to: instead they're part of an ASIC that AMD labelled as a Texture Processor in the patent.

[Image: amdrt.jpg - pipeline diagram from AMD's texture processor based ray tracing patent]
The patent provides a summary of the steps involved:

"A texture processor based ray tracing accelerator method and system are described. The system includes a shader, texture processor (TP) and cache, which are interconnected. The TP includes a texture address unit (TA), a texture cache processor (TCP), a filter pipeline unit and a ray intersection engine. The shader sends a texture instruction which contains ray data and a pointer to a bounded volume hierarchy (BVH) node to the TA. The TCP uses an address provided by the TA to fetch BVH node data from the cache. The ray intersection engine performs ray-BVH node type intersection testing using the ray data and the BVH node data. The intersection testing results and indications for BVH traversal are returned to the shader via a texture data return path. The shader reviews the intersection results and the indications to decide how to traverse to the next BVH node."

Note that the pipeline also makes it clear that there are various implementations of how these various parts might be structured in the chip. Compared to Nvidia's RT cores, there are some obvious differences: in RTX chips, the management of the BVH access (addressing, cache control) is handled directly by the RT core, whereas here, that functionality is handled by the elements of the texture processor (many of which have always existed - what we call TMUs have always had a texture addressing unit, a texture cache controller, and a texture filter unit; they've always been quite fixed in their functionality, but it shouldn't take too much to alter these for BVHs). But essentially, it's not hugely different to Nvidia's approach: sampling units are used to fetch ray and BVH data, and a separate ASIC is used to calculate the ray-primitive intersection. All of this is done concurrently to the shaders, in both AMD and Nvidia chips.

The physical implementation of AMD's approach is still uncertain, unfortunately - in RTX chips, there is one RT core per SM, whereas the patent suggests that AMD's method has an intersection engine per texture processor. GCN/RDNA chips have a texture 'block' that comprises 4 texture filter units (2 per SIMD32 block) and 16 texture mapping units per compute unit. Texture data is sampled from the L0 cache, and there is a separate unit for handling texture send/receive from the cache.

I can't imagine that AMD would decrease the amount of texturing capability per CU (even though the CU count in RDNA2 is higher), so this suggests to me that there is indeed one intersection engine per CU. Of course, since we know nothing at all about the actual physical construction of AMD's or Nvidia's intersection engines, it's not possible to compare them (at least, yet).
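To make the flow in that patent summary a bit more concrete, here's a rough C++ sketch of the shader-driven traversal loop it describes. Every name and type here is mine, invented for illustration (none of this is AMD's actual hardware interface): the shader issues the texture-style instruction, the intersection engine tests the ray against the BVH node, and the shader decides where to traverse next.

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: all types and names are made up to mirror the patent
// summary above, not AMD's real ISA or driver interfaces.
struct Ray       { float origin[3]; float dir[3]; float t_max; };
struct HitResult {
    bool hit;                               // did this node test produce a primitive hit?
    float t;                                // hit distance
    std::vector<std::uint32_t> next_nodes;  // child BVH nodes worth traversing
};

// Stand-in for the texture processor path: in hardware the TA/TCP fetch the BVH
// node through the texture cache and the ray intersection engine runs the test.
HitResult IntersectionEngine(const Ray& ray, std::uint32_t bvh_node)
{
    (void)bvh_node;
    return HitResult{false, ray.t_max, {}};   // stub: no hit, no children to visit
}

// Shader side: the shader owns the traversal loop and only offloads the node
// fetch + intersection test, unlike Nvidia's RT core, which walks the whole BVH
// in fixed-function hardware.
HitResult TraceRay(const Ray& ray, std::uint32_t root_node)
{
    HitResult closest{false, ray.t_max, {}};
    std::vector<std::uint32_t> stack{root_node};   // shader-managed traversal stack

    while (!stack.empty()) {
        const std::uint32_t node = stack.back();
        stack.pop_back();

        // "Texture instruction": ray data + BVH node pointer go out, intersection
        // results and traversal hints come back via the texture data return path.
        HitResult result = IntersectionEngine(ray, node);

        if (result.hit && result.t < closest.t)
            closest = result;                      // keep the nearest hit

        for (std::uint32_t child : result.next_nodes)
            stack.push_back(child);                // shader decides how to continue
    }
    return closest;
}
```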
 
Read what you typed...
That 300% isn't something people will recognize...?

Secondly, you don't use teraflops to compute ray tracing, as ray tracing can now be done in parallel alongside raster. And if you watched the livestream you will see that the 2080Ti's "ray tracing" is horrible, as nobody would PAY FOR THAT. Because when you use ray tracing on a 2080Ti, it slows down gameplay by more than 60%.

You simply do not know what you are talking about... because nobody is really using the 2080Ti as a reference for how to do ray tracing... because the 2080Ti can't really accelerate ray tracing. RDNA2 can...

Nobody cares how much past consoles have cost; we are talking about a new breed of consoles. People's cell phones are $500 ~ $1,200 and they buy them every few years. Expect these new consoles (that you use for 5+ years) to cost more than a cheesy cellphone.

Read what you typed! 300 percent more than something you didn't define a figure for is definitely not something people will recognize.

You do realize Microsoft defined Series X's own ray tracing performance in terms of teraflops, right? :poop: They said it themselves. 13 teraflops compute. That was the figure they dished out.

They used an industry-standard compute measure, teraflops, that people recognize. If you honest to god think that AMD's ray tracing is so insanely more efficient than Nvidia's that they can extract three times the performance from far less die area, less memory bandwidth and less available memory, then I really can't help you here. Be realistic. Yes, I know Series X is 7nm and Turing is 12nm, but even extrapolation shows they have less dedicated ray tracing hardware on there.

There were multiple technical slides that detail an RTX2080's (NOT the faster Ti) compute performance on RT cores. On the dedicated RT cores alone Nvidia state 23 teraflops. 10 teraflops float we know for the 2080, 23 TF for the RT cores alone.

[Image: Nvidia slide - Metro Exodus DXR frame breakdown comparing GTX and RTX, showing RT core and DLSS workloads]

Please put this concept to bed that Xbox Series X somehow obliterates an RTX2080Ti at ray tracing. It doesn't even touch a normal 2080.
 
Read what you typed! 300 percent more than something you didn't define a figure for is definitely not something people will recognize.

You do realize Microsoft defined Series X's own ray tracing performance in terms of teraflops, right? :poop: They said it themselves. 13 teraflops compute. That was the figure they dished out.

They used an industry-standard compute measure, teraflops, that people recognize. If you honest to god think that AMD's ray tracing is so insanely more efficient than Nvidia's that they can extract three times the performance from far less die area, less memory bandwidth and less available memory, then I really can't help you here. Be realistic. Yes, I know Series X is 7nm and Turing is 12nm, but even extrapolation shows they have less dedicated ray tracing hardware on there.

There were multiple technical slides that detail an RTX2080's (NOT the faster Ti) compute performance on RT cores. On the dedicated RT cores alone Nvidia state 23 teraflops. 10 teraflops float we know, 23TF RT cores alone.


Please put this concept to bed that Xbox Series X somehow obliterates an RTX2080Ti at ray tracing.

We'll have to see how AMD's upcoming cards perform.

But from the sounds of it, AMD pulled off some pretty good numbers.
 
We'll have to see how AMD's upcoming cards perform.

But from the sounds of it, AMD pulled off some pretty good numbers.

The RTX2080Ti is a freaking near-19-billion-transistor GPU (the entire Series X die with CPU and cache is 'only' 15Bn). Nvidia threw the kitchen sink at the RTX2080Ti and it's still not enough.

It won't be enough on these consoles either, assuming you want more than a few nice looking little puddles here and there on your native 4K 60FPS titles. I'm not saying the consoles can't do it, I'm saying they won't do a lot of it for factors including simple primary performance.

I am sure AMD have well leveraged the advantages of 7nm to improve the efficiency of RDNA2 to the point they have a pretty streamlined pipeline for ray tracing. However, on a console like Xbox Series X, where the whole system fights for resources, it won't be delivering truckloads of ray bounces and intersections. This isn't some secret sauce.

RTX has been accused of being pointless, and it's been noted that it has a severe performance penalty. While there are cases to be made for these arguments, people should now be starting to get an idea of how much dedicated performance you need to make complex ray tracing work in a modern video game on the bleeding edge of rasterization.

I think it will take flight with whatever Nvidia manage to push out the door later this year on 7nm. This should come before Series X launches, so we should know soon enough.
 
Back during the Xbox One vs. PS4 early days I would have trolled you and fought you with vicious and violent words.

NOW: I'm forced to agree with you.

My Xbox and Xbox 360 made me extremely loyal to Xbox.

But Xbox One was so underwhelming in terms of exclusives that when the time came to buy an Xbox One X, I went out and bought gaming PCs/laptops instead.

Now that I look back on the PS3's and PS4's libraries, I have to say that Sony does a way better job of pleasing its fanbase than Xbox does.

I may cancel Xbox Live this year and I've been with them for over 10 years.

If I see no must-have exclusives, then they've lost me.
Even then, Sony had GT, Xbox didn't. So PS always had the edge.
 
Would AMD be competing with itself when it comes to PC gaming hardware below next-gen console spec though?
As a PC enthusiast, all this noise means we will get a higher lowest common denominator for game development, which equates to better ports. Games that will finally plateau into trailer-quality visuals. While the next-gen spec is impressive, it took them 10 years to get flash storage in there. Facepalm.
 
Another advantage people seem to be missing is that the PS5 doesn't have a split memory pool. Can someone way smarter than me explain why the XSX has fast and slow memory? I don't see any advantage whatsoever.
 
Another advantage people seem to be missing is that the PS5 doesn't have a split memory pool. Can someone way smarter than me explain why the XSX has fast and slow memory? I don't see any advantage whatsoever.

It's been done because of cost. You can either use 1GB or 2GB GDDR6 32-bit chips. 14Gbps is the most common cost-efficient speed of the available modules.

If you go like Sony and have 8 chips on the board x 2GB each x 32 bit you get 16GB on a 256 bit memory bus. That works out to 448GB/s of bandwidth, like an RX 5700XT.

Problem for Series X is that the GPU is clearly bigger and faster than a 5700XT or PS5's setup. It needs more memory bandwidth.

They have gone up to a wider 320 bit memory bus to get more bandwidth. That means they needed 10 chips x 32 bit.

If you use 10 x 2GB chips you have 20GB of memory. 20GB of GDDR6 is very, very expensive for a console, not to mention hot and power-hungry. They obviously thought: no chance. 10 x 1GB chips means 10GB, and that's not enough memory for a next-gen console.

So we'll split it: 6 x 2GB modules and 4 x 1GB modules give us 16GB, with the 10 modules total we wanted to get the 320-bit bus and 560GB/s. That's 25 percent more bandwidth for the Series X GPU over a 5700XT or PS5.

For the memory controllers this means they have to be 'shared' on some of the memory chips. The GPU has the use of all 4 of the 1GB chips, and 1GB (half) from each of the other 6 x 2GB chips. So 10 + 6.

However, it leaves 6GB only able to be accessed on a 192-bit width (6 chips x 32-bit = 192-bit), so that 6GB of memory has 336GB/s maximum.

It was a trade-off. Either find a bunch of extra money on an already expensive machine to make the system 20GB of GDDR6 with all the extra caveats, or reduce the amount of RAM and 'overlap' usage of the memory controllers, which also saves money.

It seems a reasonable plan. What it ultimately means is that the GPU is limited to 10GB of RAM if you don't want a bandwidth penalty from pulling from the other pool.

10GB is still a lot of video RAM, and 6GB should be enough for everything else.
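If anyone wants to sanity-check those figures, the bandwidth numbers are just bus width times per-pin data rate. Here's a quick back-of-the-envelope sketch (my own worked example, assuming 14Gbps GDDR6 on every chip as described above):

```cpp
#include <cstdio>

// GDDR6 bandwidth in GB/s = bus width (bits) * data rate (Gbps per pin) / 8 bits per byte.
double BandwidthGBs(int bus_width_bits, double gbps_per_pin)
{
    return bus_width_bits * gbps_per_pin / 8.0;
}

int main()
{
    const double rate = 14.0;  // Gbps per pin, the common cost-efficient GDDR6 speed

    // PS5 / RX 5700 XT style: 8 chips x 32-bit = 256-bit bus
    std::printf("256-bit bus: %.0f GB/s\n", BandwidthGBs(8 * 32, rate));    // 448 GB/s

    // Series X GPU-optimal pool: all 10 chips x 32-bit = 320-bit bus
    std::printf("320-bit bus: %.0f GB/s\n", BandwidthGBs(10 * 32, rate));   // 560 GB/s

    // Series X 'standard' 6GB: only the six 2GB chips, 6 x 32-bit = 192-bit
    std::printf("192-bit bus: %.0f GB/s\n", BandwidthGBs(6 * 32, rate));    // 336 GB/s

    // Capacity split: 6 x 2GB + 4 x 1GB = 16GB total (10GB GPU-optimal + 6GB standard)
    std::printf("Total capacity: %d GB\n", 6 * 2 + 4 * 1);                  // 16 GB
    return 0;
}
```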
 