4 Years of AMD RDNA: Another Zen or a New Bulldozer?

1440p to 2160p is a higher image fidelity uplift than stupid RT.
Not only do I disagree, the logic isn't even there. Screen size and how far you sit from the screen matter. Very close to a 4K screen? Absolutely true, the resolution bump will be very noticeable. A 27-inch screen half a metre or more away from you? The difference certainly isn't night and day, and path tracing in Cyberpunk is definitely a better fidelity upgrade than the resolution bump in that scenario.
Using DLSS is even more retarded because the downgrade in image fidelity is worse than the RT uplift.
Ah, one of "those" people who have clearly never used it but pretend they have. There are plenty of articles and comparisons out there that disprove the "downgrade" claim and usually show how it can look better than native. Games use their own TAA implementations these days; DLSS and FSR are just advanced versions of those. My own experience? At 1440p, anything lower than Quality mode does noticeably degrade image quality, but in Quality mode I notice absolutely no difference other than a better framerate and less shimmering, just a more stable image than the native TAA solution normally gives.
Not to mention that you paid for a 2160p GPU, not a 1440p GPU. At 1440p, the 4090 is barely 15% faster than an XTX.
Again, I'm talking specifically about RT here: the 4090 is substantially quicker at RT than the 7900 XTX even at 1440p.
Lastly, Sony is using AMD hardware and creates the most stunning games in the business. I still can't believe that TLOUP1 on PC is a PS4 game. It looks and runs amazingly. I run the game at 70-75 FPS at 2160p maxed on an XTX.

Not to mention that God of War Ragnarok, the best looking game to date, DOES NOT host ray tracing.

The water in TLOUP2 during the crossing of the harbor during a storm is absolutely breathtaking.
You really know very little about game development. To get to those quality levels, it took hundreds, potentially thousands, of people many, MANY years to fake it that well.

It does look amazing. No one is disputing that modern developers are able to make some incredibly good-looking games; we have become masters of fake lighting because we've been doing it for as long as rasterization has existed (decades).

Here's a question for you then: how would you keep progress going on graphics quality? You've mentioned God of War and TLOU. Moving into the future, let's say these developers would like to raise the bar again and put more fidelity into their next game. How do you propose we keep progressing?

Instead of inventing 40,000 rpm HDDs we moved to SSDs, which fundamentally changed the way storage worked for the better. Just food for thought.
 
Tbf, I played Control with Quality DLSS at 1080p and, apart from the odd artifact in cutscenes, the game looked better with DLSS on than off, with a 50% boost in performance, which meant RT could be turned on. Control looks significantly better with RT on.
 
Yep, people who say "native is better" are people who don't have DLSS as an option to even try it, or who are against "AI fakery", not realising they're usually looking at a TAA solution anyway...
 
RT is the way forward for games.
You might not like it; interestingly, the people who don't like this fact are the loudest in the comment sections.

There's no getting around the fact that, to improve the graphics in games any further, we not only need to find better ways of lighting scenes, we need to be able to develop games in a much more efficient manner.

RT does both of these. There are multiple interviews across multiple developers who all agree that RT not only increases the realism of the lighting (although we got so good at faking it that the difference can be hard to notice sometimes), it is also much quicker to work with.

Before, you'd put a bulb on the ceiling, then spend ages getting the lighting right and making sure it interacted with everything, dynamically if required. With RT? You literally just place a bulb there, choose how bright, how big, its direction (or lack of one) and colour, and let RT do the rest. No need to bake anything in, no need to fake anything or add bogus light sources, no light bleeding through walls, no fake glow effects; RT just works.
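To make that workflow concrete, here's a minimal, purely illustrative Python sketch of the idea: the artist only supplies a light's position and intensity, and a shadow ray decides whether each point is lit, with nothing baked. The sphere-only scene and every function name here are my own simplifications, not any engine's actual API.

```python
import numpy as np

def ray_sphere_t(origin, direction, center, radius):
    """Distance along a unit-direction ray to the nearest sphere hit, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def shade_point(point, normal, light_pos, light_intensity, occluders):
    """Direct lighting from one placed 'bulb', decided entirely by a shadow ray."""
    to_light = light_pos - point
    dist = float(np.linalg.norm(to_light))
    wi = to_light / dist

    # Shadow ray: if any occluder sits between the point and the bulb,
    # the point is simply dark -- nothing baked, no light bleeding through walls.
    for center, radius in occluders:
        t = ray_sphere_t(point + 1e-4 * normal, wi, center, radius)
        if t is not None and t < dist:
            return np.zeros(3)

    # Lambert term plus inverse-square falloff, driven only by the light the
    # artist placed; move the light and everything updates automatically.
    cos_term = max(float(np.dot(normal, wi)), 0.0)
    return light_intensity * cos_term / (dist * dist)

# A point on the floor, lit by a bulb 3 m above it, with a sphere in the way:
floor_point = np.array([0.0, 0.0, 0.0])
up = np.array([0.0, 1.0, 0.0])
bulb = np.array([0.0, 3.0, 0.0])
blocker = [(np.array([0.0, 1.5, 0.0]), 0.5)]
print(shade_point(floor_point, up, bulb, np.array([10.0, 10.0, 9.0]), blocker))  # in shadow
print(shade_point(floor_point, up, bulb, np.array([10.0, 10.0, 9.0]), []))       # lit
```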

Unfortunately, there's this sentiment at the moment that we're heading into a future where everything is "fake", with DLSS and FSR mainly being blamed for faking everything. That's just not true at all: game engines are now giving us the most realistic and "true" frames and pixels we've ever seen, thanks to technologies such as RT.

The problem, however, is that running these "true" frames at a similar framerate to what we're used to in "older" or "fake" games requires exponentially more GPU resources. So, to mitigate this, game developers have been using tech like TAA and denoisers to mask imperfections. This has been going on since well before RT, mind you: not only have they had no choice but to "fake" lighting until now, it's been hard enough to run on modern hardware that they've had to find ways to lower the processing burden on GPUs.

FSR and particularly DLSS could be considered advanced implementations of TAA. Why many people have such a hatred for a technology that is better than what they were already using is beyond me.
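For anyone wondering what "an advanced implementation of TAA" means in practice, here's a heavily simplified Python sketch of the temporal accumulation both TAA and these upscalers are built on: blend the current frame into a reprojected history buffer, with a neighbourhood clamp to limit ghosting. The 3x3 clamp window and the blend weight are illustrative assumptions; real TAA adds jittering and motion-vector reprojection, and DLSS/FSR add upscaling (and, for DLSS, a learned model) on top.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def temporal_accumulate(current, history, alpha=0.1):
    """One step of TAA-style temporal accumulation.

    current : (H, W, 3) colour of the newly rendered (jittered) frame
    history : (H, W, 3) previously accumulated result, already reprojected
              to this frame's camera
    alpha   : how much of the new frame to trust each step
    """
    # Neighbourhood clamp: restrict the history to the range of colours seen
    # in a 3x3 window of the current frame, a common trick to limit ghosting
    # when the reprojected history no longer matches the scene.
    lo = minimum_filter(current, size=(3, 3, 1))
    hi = maximum_filter(current, size=(3, 3, 1))
    clamped = np.clip(history, lo, hi)

    # Exponential moving average over frames: shimmer and aliasing that flicker
    # from frame to frame average away, which is the "more stable image" effect.
    return alpha * current + (1.0 - alpha) * clamped
```

Run that every frame and the output converges to a much steadier image than any single jittered frame, which is the core trick the upscalers then refine.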

DLSS 3.5 Ray Reconstruction is in its first iteration; even so, it has shone a light on the amount of work denoisers are doing in games to mask and blend the scenes you see onscreen.

If RT isn't the future, what is? Continue to fake everything? To increase fidelity and lighting quality, give the development teams 20 years to create a single, very well faked level?

Anyway, to bring it back to AMD: they will need to compete with Nvidia on RT performance. I do think game engines are not being optimised for RDNA. It's been pretty impressive what devs have pulled off on console (the latest Spider-Man on PS5 uses ray tracing in all modes, including the 60 fps mode), but it seems that on desktop, less optimisation effort has gone into RDNA RT.


How much is RT doing for those who paid $899 for an EVGA 2080 4 years ago...? Why does every RT moment feature the fabled $1,700 4090 and not mainstream cards at $299..?

Nobody is going to disagree with you saying RT is coming... but you and they have been beating the drum for 5 years and it's getting tiring, because ray tracing costs too much and players won't subsidize their performance for console players who have a PC. (Ada Lovelace wasn't designed for gaming.)
 
I only moved away from my 1080 Ti at the start of this year; I've been "beating the RT drum" anyway because it has to start somewhere.

There's a good reason I never jumped on the bandwagon with the 20 or 30 series GPUs: I personally didn't think performance was good enough. Even in the 40 series, I only consider the 4090 powerful enough for truly flawless RT performance, by my own standards that is.

I'm aware not everyone here is into Star Citizen, but they showed off their RT at their recent CitizenCon event: their engine can run RT in two different modes, hardware and software rendering. I believe Lumen in UE5 does something similar, as does the latest CryEngine, so GPUs that don't have hardware acceleration can still perform well. Sure, the RT isn't as accurate, but it's definitely a good stopgap until hardware catches up.

I totally agree with you that RT performance hasn't been great on PC. Consoles seem to be getting more and more out of it, and Spider-Man 2 looks absolutely amazing. But I think RT won't fully replace fake raster lighting for at least another 5-10 years; we need another console generation to come about before it truly happens.
 
Without reading the article, my opinion is this: it definitely isn't as successful as Zen was and became, but it also isn't nearly as bad as Bulldozer was.
 
I do not remember Bulldozer; I never used AMD back then. But I definitely remember multiple reviews trashing it to pieces.
 
It seems like they need an opportunity to branch out into new, bigger fields of what they could do, but in a way that keeps the same rhythm: innovative, margin-effective and in line with demand.

So what seemed to us like what they were always doing, just gradual steps in performance, was actually a response to the choices they had, with lots of complexity and choices going on behind the scenes based on that freaking chip yield rate.

No margin, no products; no products, no desire to push performance, as the space is too limited to bother.

It's all about the continuity of this flow: fabrication process versus margins, power and enough die space.

It is weird that AMD is as good a company as Intel and Nvidia, if not much better, yet with prospects and chances that aren't so fruitful.

If only AMD had the AI demand and the profits to pursue more. They don't, and that is stressing them, but it seems the pressure translates into more talent going into the chip itself.

I read the article. I liked how it portrayed AMD as being good but very, very bound by constraints. It seems that one vendor doing well really squeezes what the competition can do.

AMD can do high end, but it would be in vain because of Nvidia's market share.

This article would have been even better if it had spiced things up with what games will get in the future, what the new tech will improve the most, and what will keep us interested in general.

OK, it paints a somewhat bleak future, like all the AMD articles seem to. Cool story.
 
The issue is horrible SW support (driver bugs), and also CUDA being as dominant as it is. That is in part because of NV shenanigans, but also because of how painful the AMD counterpart is (again, SW).
Actually, AMD mainly carries the stigma of being worse. A couple of months ago I read an in-depth comparison of AMD vs Nvidia drivers over a one-year timespan, and they were on par in both driver problems and time to release fixes.

5nm is up to 50% denser than 7nm...
True and false. As said in the article, many things don't scale well anymore (mainly signalling and SRAM), so even though many structures can be made smaller, if they then stop working and you have to scale them back up again, you effectively get less real-world benefit from the smaller process.
I never cared about RT, I probably never will
There were many things I did not care about at the moment they came out, but do now.
Like I never cared for first-gen NVMe, as striped SATA SSDs were just as fast for half the price; now I am using an NVMe-only PC. Times change.
Real-time ray tracing (RT) in games is flawed from the beginning, mainly because GPUs lack the performance to effectively handle real RT, and they never will.
That's mainly because no one is willing to pay $5K+ for a monolithic GPU.
But chiplets and 3D stacking are likely to break the mold, and we have only seen the first baby steps of it: the ~600mm² die of a flagship chip costs something like 5 times more than ten 60mm² chips.
And there is still a lot of room for growth on the interposer side, like Intel is doing now with their glass interposers, which could give them around 2x growth in power headroom.
Most of us couldn't care less who holds the halo GPU performance crown, because we are not willing to pay four figures for a GPU. We care about what we can get within our budgets.
You and I may think that way, but most buyers of graphics cards only know: Nvidia good but expensive, AMD bad but cheap.

And they keep paying top price for the halo product. I was lucky that my GTX 980 died just before the release of the RTX 3080, so I sort of had to pay €700 for that damn card, not knowing it would rise in price by 5x months later. Still one of the worst and best buys I've ever made! ^_^
 
What type of calculation did you do to come to the conclusion that 10 x 60mm² chips are 5x cheaper than a single 600mm² chip?

Let's consider the following parameters:
Process: 5nm
Cost per wafer: $17,000
Chips: 600mm² (4090) and 60mm² (a theoretical GCD).

Assuming 5nm has a similar yield to 7nm (a defect density of only 0.09/cm²), we would get about 50 usable chips for the 4090 at a cost of $340 each. The GCD would yield about 600 usable chips at $28.33 each; 10 x $28.33 = $283.33, so the savings are about 20%, and once you add the additional assembly steps the savings would be even smaller, maybe about 15%.
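Just to replay that arithmetic in one place, here's a tiny Python sketch. The wafer price, defect density and usable-die counts are the figures from this comment; the Poisson yield factor is shown only for context, and packaging/assembly costs are ignored.

```python
import math

WAFER_COST = 17_000.0   # assumed 5nm wafer price used above
DEFECT_DENSITY = 0.09   # defects per cm^2, also from above

def poisson_yield(die_area_mm2):
    """Fraction of dies expected to be defect-free: exp(-area * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

def cost_per_usable_die(usable_dies_per_wafer):
    return WAFER_COST / usable_dies_per_wafer

# Usable-die counts as stated above: ~50 per wafer at 600 mm^2, ~600 at 60 mm^2.
monolithic = cost_per_usable_die(50)       # ~$340 for one 600 mm^2 die
chiplets = 10 * cost_per_usable_die(600)   # ~$283 for ten 60 mm^2 dies

print(f"yield factor: {poisson_yield(600):.2f} (600 mm^2) vs {poisson_yield(60):.2f} (60 mm^2)")
print(f"monolithic ${monolithic:,.2f} vs ten chiplets ${chiplets:,.2f} "
      f"-> ~{1 - chiplets / monolithic:.0%} silicon saving before assembly")
```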
 
Bigger chips have a much higher chance of being defective; the same defect density affects the two chip sizes very differently.

Economy of scale is a huge part of this.
 
If one 600mm² chip is dead because of a defect, then most of the time 9 out of 10 of the 60mm² chips will still be okay, meaning roughly 85-90% less wasted silicon than with big monolithic chips.

Next to that, smaller chips are easier to develop and debug. They are a little less efficient than monolithic chips, but as with Zen, the trade-off is mostly worth it.
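For what it's worth, that 85-90% figure roughly falls out of a simple Poisson yield model with the 0.09 defects/cm² quoted earlier in the thread; a minimal, illustrative sketch:

```python
import math

DEFECT_DENSITY = 0.09  # defects per cm^2, figure quoted earlier in the thread

def scrap_fraction(die_area_mm2):
    """Expected share of dies (and so silicon) lost to defects, Poisson model."""
    return 1.0 - math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

mono = scrap_fraction(600.0)     # ~42% of big monolithic dies scrapped
chiplet = scrap_fraction(60.0)   # ~5% of small chiplets scrapped
print(f"reduction in wasted silicon: {1 - chiplet / mono:.0%}")   # ~87%
```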
 
As I said, of course there is a cost reduction but not to the point of being 5x cheaper.
 
It will be, if they don't go to chiplets, because the memory, the memory bus and many other things in chips aren't going to scale anymore.

We're at the point of diminishing returns, where there is no point anymore in only going smaller. Yes, there will be incremental steps, but wider is now the way forward.

So IMHO chiplets are the only way forward for now, because not moving to the next node (which AMD is doing on some designs) and getting to the same point with more chiplets on older, mature nodes is the better solution for the cost/performance ratio.

Also, I don't think we will see the kind of 165% jump we got going from the RTX 3090 to the 4090 repeated with the 4090 to the 5090.
 