AMD Radeon RX 9070 expected to start at $479, available as soon as late January

So if that rumor is true, it will be around 600-650 euros in Spain. If the performance lands between a 7900 GRE and a 7900 XTX, it does seem like an upgrade to me.
 
Yeah... and what about Mr. Trump's tariffs? Plus (especially) greedy retailers. In reality, $479 becomes $549... becomes $600. In the Philippines, all computer-related retailers are price gougers, all year round. I expect to see this card selling for the equivalent of $650 here.
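For what it's worth, the $479-to-$600 progression is roughly what you get by stacking a mid-teens tariff on top of a typical retailer margin. A minimal sketch, assuming purely illustrative rates of ~15% and ~9% (neither is a confirmed figure):

```python
# Back-of-envelope street-price estimate.
# The tariff and markup rates are illustrative assumptions, not confirmed figures.
msrp = 479

tariff = 0.15          # hypothetical import tariff
retail_markup = 0.09   # hypothetical retailer margin on top

after_tariff = msrp * (1 + tariff)
street_price = after_tariff * (1 + retail_markup)

print(f"after tariff:  ${after_tariff:.0f}")                     # ~$551
print(f"street price:  ${street_price:.0f}")                     # ~$600
print(f"total markup:  {street_price / msrp - 1:.0%} over MSRP")  # ~25%
```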
 
At this point I am starting to believe AMD's marketing team is intentionally incompetent. FSR 4 seems to be significantly better than its predecessor, but it was only quietly shown at CES, so it didn't take any thunder away from Nvidia's DLSS 4.0 frame-gen smoke and mirrors.

Luckily gamers don't need to pay a premium for multi frame generation. Lossless Scaling 3.0 dropped 20x frame generation (goes straight for the jugular and competes with DLSS 10) 🤣
Lossless Scaling 3 released with frame generation up to x20: "No Shrooms Required" - VideoCardz.com https://search.app/S5CPRQnf6i42auDn8
 
My predictions:

AMD catches up to DLSS upscaling
AMD catches up to NVIDIA's (and Intel's, tbh) raytracing by simply dedicating more die area to it - it was never a question of if they could but when they would. It's still of questionable value, but some games are really starting to push it and they can't afford reviews calling it out as a weak point anymore.
AMD, despite claiming it wants to be price competitive, will still shoot themselves in the foot by initially charging too much. If they're waiting this long to release prices, it sounds like they want to undercut NVIDIA by a little rather than sell at low margins to gain market share. This will lead to poor reviews, which hurts them in the long run, but they always have short-term profit for the first few weeks/months in mind.

NVIDIA will sell stupidly well initially - so I hope they have a large initial supply. Everyone that believed Jensen's quadrupled frame numbers will buy up the first batch. Then, as third-party reviews come out declaring NVIDIA cards worse bang for buck but still having CUDA up their sleeve, they'll sell at a more normal rate.

Ideally, AMD is delaying pricing information this long because they're pressuring partners to drop prices as low as possible and seeing how little they're willing to settle for. But I'm pretty sure it's AMD being AMD and just waiting to undercut NVIDIA by 10%, whilst NVIDIA offers CUDA and lower power draw (and more fake frames).

So basically I'm hoping for AMD to charge very little, gain massive market share, and give developers a new 'base level' to aim for rather than GTX 1060s. But realistically it's probably another 20% more performance for 10% more money. Both AMD pulling the GRE off the market (which I take as a sign that it's too price competitive with the new cards) and them waiting too long to announce prices make me think they're not really aiming to gain market share.
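To put that last scenario in numbers: a hypothetical "+20% performance for +10% price" step only improves performance per dollar by about 9%, which is why it wouldn't shift market share much. A quick sketch with those assumed figures:

```python
# Value of an assumed "+20% performance for +10% price" generational step.
perf_gain = 1.20    # assumed relative performance vs. the previous card
price_gain = 1.10   # assumed relative price vs. the previous card

perf_per_dollar_gain = perf_gain / price_gain - 1
print(f"perf-per-dollar improvement: {perf_per_dollar_gain:.1%}")  # ~9.1%
```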
 
My predictions:

AMD catches up to DLSS upscaling
AMD catches up to NVIDIA's (and Intel's, tbh) raytracing by simply dedicating more die area to it - it was never a question of if they could but when they would. It's still of questionable value, but some games are really starting to push it and they can't afford reviews calling it out as a weak point anymore.
AMD, despite claiming it wants to be price competitive, will still shoot themselves in the foot by initially charging too much. If they're waiting this long to release prices, it sounds like they want to undercut NVIDIA by a little rather than sell at low margins to gain market share. This will lead to poor reviews, which hurts them in the long run, but they always have short-term profit for the first few weeks/months in mind.

NVIDIA will sell stupidly well initially - so I hope they have a large initial supply. Everyone that believed Jensen's quadrupled frame numbers will buy up the first batch. Then, as third-party reviews come out declaring NVIDIA cards worse bang for buck but still having CUDA up their sleeve, they'll sell at a more normal rate.

Ideally, AMD is delaying pricing information this long because they're pressuring partners to drop prices as low as possible and seeing how little they're willing to settle for. But I'm pretty sure it's AMD being AMD and just waiting to undercut NVIDIA by 10%, whilst NVIDIA offers CUDA and lower power draw (and more fake frames).

So basically I'm hoping for AMD to charge very little, gain massive market share, and give developers a new 'base level' to aim for rather than GTX 1060s. But realistically it's probably another 20% more performance for 10% more money. Both AMD pulling the GRE off the market (which I take as a sign that it's too price competitive with the new cards) and them waiting too long to announce prices make me think they're not really aiming to gain market share.
My prediction is that both AMD and Nvidia will make RT far less punishing on hardware. It is insane that it takes a $1k GPU to display it. The technology is still very immature. It makes perfect sense to push optimization more than hardware.
Also, they might start selling dedicated RT cards, since many motherboards still come with extra PCIe slots that go unused 95% of the time.
 
Yeah... and what about Mr. Trump's tariffs? Plus (especially) greedy retailers. In reality, $479 becomes $549... becomes $600. In the Philippines, all computer-related retailers are price gougers, all year round. I expect to see this card selling for the equivalent of $650 here.
Sounds like an excellent place to open a computer store. Then you could enjoy all those profits, or be like a modern Robin Hood and sell them at cost (or somewhere in the middle).
 
I wonder if AMD will be aggressive enough with its pricing to force Nvidia to reduce their prices, and back and forth... or will they just cartel the market hand in hand under a rainbow again...
This is the perfect time for AMD to be aggressive and grab some midrange market share. Nvidia is making so much AI money they would react less to price cuts (less willing to sacrifice margins).

BUT will AMD do it? Historically, they haven't, and they are also making good money with enterprise CPUs (and the 9800X3D) and AI, so they may be unwilling to cut their other margins too much. Hope springs eternal, so I give it a 30% chance of good pricing and a 3% chance of aggressive pricing (at launch).
 
My prediction is that both AMD and Nvidia will make RT far less punishing on hardware. It is insane that it takes a $1k GPU to display it. The technology is still very immature. It makes perfect sense to push optimization more than hardware.
Also, they might start selling dedicated RT cards, since many motherboards still come with extra PCIe slots that go unused 95% of the time.
I wouldn’t mind this one bit. Having a sister card for the AI processing and RT would be cool. It would also save the buyers money for only features they need, and reduce the cost of regular GPUs.
 
My prediction is that both AMD and Nvidia will make RT far less punishing on hardware. It is insane that it takes a $1k GPU to display it. The technology is still very immature. It makes perfect sense to push optimization more than hardware.
Also, they might start selling dedicated RT cards, since many motherboards still come with extra PCIe slots that go unused 95% of the time.
But can that even be done? As far as I understand it, that's part of the rendering pipeline that all happens on the graphics card.

Sending it back and forth during that, I imagine, would add massive latency and probably involves serious bandwidth requirements as well. It's not like sound cards, where it's a whole separate process that can be abstracted. Or even like the dedicated PhysX cards of many years ago.

I might be completely wrong due to a lack of understanding but I don't know if it's something that can be abstracted easily to have a whole different device handle it.
And even if it could be it might not make financial sense.
 
Where are the facts? I haven't seen any reviews yet.
[Attached image: v6zDbWw.jpeg]


The fact is this piece of information from Nvidia's CES 2025 presentation. Beyond that, there are still no facts; the only fact is what was presented. What reviews and independent benchmarks? My comment is valid because it points to what Nvidia presented. Did you not watch the presentation?
 
One of the reasons why I just haven't been interested in AMD products is that even with a powerhouse like a 7900 XTX, the image quality in many advanced titles more often than not comes out worse than a lower-end Nvidia card with DLSS enabled. FSR 3.1 is just poor: a soft, blurry, unstable mess where everything looks like a console game.
You mean like when I turn DLSS on, on my Nvidia card and it makes everything blurry?

Last time I tried it was with version 3.7. I thought maybe it just looks bad when pixel peeping and I would not notice it in motion.
So I tried playing for ten minutes but ended up disabling it. It was bad.

Maybe it was because I play at 1440p. Perhaps it looks sharper at 4K. Also I only tried the Quality mode. After seeing how soft it looked I did not even bother with other modes. FSR3 NativeAA was better and sharper but screwed up small details such as falling leaves and fire embers.
So I ended up using neither.

At this point I am starting to believe AMD's marketing team is intentionally incompetent.
Some time ago I came up with a conspiracy theory in my head about AMD's GPU division incompetence:

Instead of trying to compete with Nvidia, they intentionally fumble things. This allows Nvidia to dominate and dictate prices, and it helps AMD raise their own prices while Nvidia gets the bulk of the criticism. This way they don't have to put too much money into R&D and can enjoy higher margins on their cards while not being in a price war with Nvidia. I'm almost done believing it's unintentional incompetence year after year.
But can that even be done? As far as I understand it, that's part of the rendering pipeline that all happens on the graphics card.

Sending it back and forth during that, I imagine, would add massive latency and probably involves serious bandwidth requirements as well. It's not like sound cards, where it's a whole separate process that can be abstracted. Or even like the dedicated PhysX cards of many years ago.

I might be completely wrong due to a lack of understanding but I don't know if it's something that can be abstracted easily to have a whole different device handle it.
And even if it could be it might not make financial sense.
That is my understanding as well. While we do have much faster PCIe 5.0 these days (back then it was 2.0, I believe), the overhead in latency and the energy cost of running a separate card is just too much. Besides, consumer platforms have largely done away with the second x16 slot, at least electrically. At best there is a second x8 slot on the board, and that only on some select ATX boards.
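A rough back-of-envelope supports this: even PCIe 5.0 x16 (~64 GB/s nominal) is an order of magnitude slower than on-card VRAM, so shuttling per-frame scene data to a separate RT card and back would eat most or all of a 16.7 ms budget at 60 fps. A minimal sketch, where the per-frame payload size is a purely illustrative assumption:

```python
# Back-of-envelope: shuttling per-frame RT data to a hypothetical separate card.
# The 0.5 GB payload is an illustrative assumption; bandwidths are nominal figures.
frame_budget_ms = 1000 / 60        # ~16.7 ms per frame at 60 fps
payload_gb = 0.5                   # assumed per-frame G-buffer/BVH traffic

links_gbps = {
    "PCIe 2.0 x16": 8.0,           # ~8 GB/s nominal
    "PCIe 5.0 x16": 64.0,          # ~64 GB/s nominal
    "on-card VRAM": 800.0,         # typical GDDR6 bandwidth
}

for name, gbps in links_gbps.items():
    round_trip_ms = 2 * payload_gb / gbps * 1000   # data out, results back
    print(f"{name}: ~{round_trip_ms:.1f} ms round trip "
          f"({round_trip_ms / frame_budget_ms:.0%} of a 60 fps frame)")
```

Under those made-up numbers, even the fastest consumer link barely fits in a single frame before any actual RT work has been done, while on-card memory handles it in a fraction of a millisecond.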
 
You mean like when I turn DLSS on, on my Nvidia card and it makes everything blurry?

Last time I tried it was with version 3.7. I thought maybe it just looks bad when pixel peeping and I would not notice it in motion.
So I tried playing for ten minutes but ended up disabling it. It was bad.
Perhaps you have other issues and have not seen the image quality comparisons against TAA or FSR, of which many have been demonstrated over the years.
 
Perhaps you have other issues and have not seen the image quality comparisons against TAA or FSR, of which many have been demonstrated over the years.
Not sure what "issues" I could possibly have, aside from running at 1440p, where upscaling (yes, even DLSS) does not look as good, or being unusually sensitive to the lack of sharpness.

I have tried both TAA and FSR and they are worse than DLSS (even softer/blurrier). So I'm not disputing the fact that DLSS is better. But DLSS is not flawless. Those who believe that DLSS is better than native are deluding themselves. Only DLAA and FSR Native AA can achieve that.

Anyway, that is my experience in the four games where I've tried DLSS Quality so far, on my RTX 2080 Ti at 1440p 165 Hz:
Death Stranding Director's Cut.
Rise of the Tomb Raider.
Shadow of the Tomb Raider.
Ghost of Tsushima.
 
Not sure what "issues" I could possibly have, aside from running at 1440p, where upscaling (yes, even DLSS) does not look as good, or being unusually sensitive to the lack of sharpness.
DLSS has sharpness filter settings. As do all these other upscaling methods.
 
DLSS has sharpness filter settings. As do all these other upscaling methods.
In the games I tried, I did not see this setting present. Not even all new games have it. I know I could probably have manually forced sharpening through the control panel, but I did not bother doing that to fine-tune it for each and every game.

In my eyes, if I have to manually adjust sharpness from outside the game or do DLL swaps per game, then I cannot consider this a user-friendly, mainstream approach. If the game already ships the newest and best DLL and has a sharpness filter in its settings menu, then that's much better.

I did try frame generation too in GoT. Only FSR3, as Nvidia in their infinite greed did not bother enabling DLSS 3 FG on the 20 series. With Reflex enabled it was manageable, though I still noticed the rise in input lag. I made sure my base framerate was 60-ish before enabling it.
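That lines up with how interpolation-based frame generation works: the inserted frame can't be built until the next real frame exists, so at a ~60 fps base you pay roughly one extra real-frame interval of latency (~17 ms) on top of the normal pipeline, which Reflex only partly offsets. A rough, illustrative sketch:

```python
# Rough, illustrative estimate of latency added by interpolation-based frame gen.
# Ignores render queue, display, and driver effects; numbers are not measurements.
base_fps = 60
base_frame_ms = 1000 / base_fps   # ~16.7 ms between real frames

# The interpolated frame needs the *next* real frame before it can be shown,
# so presentation lags by roughly one real-frame interval.
added_latency_ms = base_frame_ms
print(f"~{added_latency_ms:.1f} ms extra at a {base_fps} fps base framerate")
```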
 
[Attached image: v6zDbWw.jpeg]


The fact is this piece of information from Nvidia's CES 2025 presentation. Beyond that, there are still no facts; the only fact is what was presented. What reviews and independent benchmarks? My comment is valid because it points to what Nvidia presented. Did you not watch the presentation?
Those are not facts. You don't even know the meaning of the words you use. I don't watch presentations; that is beneath me.
 
[Attached image: v6zDbWw.jpeg]


The fact is this piece of information from Nvidia's CES 2025 presentation. Beyond that, there are still no facts; the only fact is what was presented. What reviews and independent benchmarks? My comment is valid because it points to what Nvidia presented. Did you not watch the presentation?
What's up?
 