Nvidia gets ready for 7nm Ampere and 5nm next-gen Hopper, places orders with TSMC, Samsung

midian182

Rumor mill: Next week will finally see Nvidia confirm its Ampere architecture, and it seems the company has put in a massive order with TSMC for its 7nm node technology. Looking beyond Ampere to the next-generation 5nm Hopper family, Nvidia is lightening the load on TSMC by handing some of the work over to Samsung.

According to reports in The China Times and DigiTimes, Nvidia has been one of TSMC's main customers for the 7nm process node, which seems to confirm that Ampere GPUs, including those used in consumer graphics cards such as the RTX 3080, will be based on this manufacturing process. Considering that rival AMD has been on 7nm since the launch of EPYC Rome last August, this makes sense.

Following the release of Ampere, Nvidia’s next architecture will be Hopper. As one might expect, this will see a shrink from 7nm to 5nm. But reports claim the process will not entirely be handled by the Taiwan Semiconductor Manufacturing Company.

DigiTimes writes that in addition to reserving slots in TSMC's 5nm production capacity for 2021, Nvidia will be handing some of the work over to Samsung. Specifically, the Korean giant will produce lower-end Ampere graphics cards, which could use its 7nm EUV or 8nm process nodes.

Last month, we heard that Nvidia was placing orders with TSMC for a mystery 5nm product, which is likely based on Hopper.

Samsung did complete its 5nm EUV development last year and said in its Q1 earnings report that production of the 5nm EUV process will begin in the second quarter of 2020. Nvidia is reportedly in negotiations with the company over whether it can take some 5nm orders, too.

Nvidia CEO Jensen Huang will lead the company's keynote at the GPU Technology Conference (GTC) on May 14, which carries the tagline "Get Amped."

It’s expected that Ampere’s consumer graphics cards will arrive in the third quarter, which lines up with rumors of Nvidia AIB partners clearing stock to prepare for the RTX 3000 launch in Q3.


 
How much are we betting Nvidia inflates the 3080's price over what the 2080 cost at launch? Honestly, they couldn't give a crap about gamers; the prices they charge for mid-range used to be high-end prices eight years ago. I honestly hope they lose market share this next decade because they've become so greedy it's unreal. Btw, I can afford their cards, I just don't see why I should pay nearly double what something cost not that long ago. It's not justified.
 
How much are we betting Nvidia inflates the 3080's price over what the 2080 cost at launch? Honestly, they couldn't give a crap about gamers; the prices they charge for mid-range used to be high-end prices eight years ago. I honestly hope they lose market share this next decade because they've become so greedy it's unreal. Btw, I can afford their cards, I just don't see why I should pay nearly double what something cost not that long ago. It's not justified.
I don't think nVidia is going to price-gouge this time around. We have Cyberpunk 2077 coming this fall and many gamers are going to be upgrading their systems for it; I know I'm doing my first complete new build since 2012. Now, I have always been an nVidia customer, I just find their ownership experience to be more pleasant, everything from interesting new tech to play with down to just liking the look and feel of their software better.

For this cycle I am seriously considering waiting to see what AMD has to offer, and many people, like you, feel that way. nVidia isn't stupid; they know that people are tired of them bumping the price of a high-performance card by $100 a generation. I remember when the 8800GTX came out and cost $500-550, and everyone thought that was absurd. Now we're seeing three times that.

Anyway, I'm starting to get off topic. My point is that nVidia knows they're about to milk the cow dry, releasing their cards alongside a much-anticipated videogame and a new console generation. You can buy a new Xbox for the cost of a single 2080 Ti. I predict nVidia will cut prices by at least $100 per card on their high end, and then the fan base will give them tons of praise for lowering prices or something.
 
Not a good time to upgrade now:

1. Intel is still on Skylake-based "14nm+++++++" (not even joking) chips that draw too much power and run too hot.
2. Only AMD platforms offer PCIe 4.0 mobos. PCIe 3.0 mobos are bad for future-proofing.
3. DDR5 is coming in 2022 or earlier.

Upgrade now, and you will be obsolete in 2022.

I too feel that NVIDIA will inflate prices for this coming gen. They basically have zero competition at 2070-and-above levels of performance, and AMD's competition in the lower product tiers is insufficient.
 
Not a good time to upgrade now:

1. Intel is still on Skylake-based "14nm+++++++" (not even joking) chips that draw too much power and run too hot.
2. Only AMD platforms offer PCIe 4.0 mobos. PCIe 3.0 mobos are bad for future-proofing.
3. DDR5 is coming in 2022 or earlier.

Upgrade now, and you will be obsolete in 2022.

I too feel that NVIDIA will inflate prices for this coming gen. They basically have zero competition at 2070-and-above levels of performance, and AMD's competition in the lower product tiers is insufficient.

I don't disagree, but if you upgrade in 2022 you'll be obsolete in 2024. Gotta just make the jump at some point. If one were to insist on an Intel CPU, I'd recommend waiting.
 
How much are we betting Nvidia inflates the 3080's price over what the 2080 cost at launch? Honestly, they couldn't give a crap about gamers; the prices they charge for mid-range used to be high-end prices eight years ago. I honestly hope they lose market share this next decade because they've become so greedy it's unreal. Btw, I can afford their cards, I just don't see why I should pay nearly double what something cost not that long ago. It's not justified.

Yes, please don't buy any Nvidia product ever; I don't want to wait for 3080 Ti stock to arrive...
 
The best time to buy is at the start of a new generation and a new node. TSMC's 7nm ensures a big leap over current Pascal cards; whether it is EUV or not will determine how big.

The highest end cards will without doubt be as expensive as ever, but like I said elsewhere both Nvidia and AMD have to moderate their prices a little in light of new console launches. If a console is $500 and the GPU is $500 it needs to be quite a bit faster than an entire system.

I'm looking for $400-450 for something close to what is an RTX2080Ti now, perhaps even faster at least in ray tracing terms.
 
Not a good time to upgrade now:

1. Intel is still on Skylake-based "14nm+++++++" (not even joking) chips that draw too much power and run too hot.
2. Only AMD platforms offer PCIe 4.0 mobos. PCIe 3.0 mobos are bad for future-proofing.
3. DDR5 is coming in 2022 or earlier.

Upgrade now, and you will be obsolete in 2022.

I too feel that NVIDIA will inflate prices for this coming gen. They basically have zero competition at 2070-and-above levels of performance, and AMD's competition in the lower product tiers is insufficient.
I'd argue that the decision to upgrade needs to be made on a case-by-case basis. I'm still running a 1700X with a 1070 Ti and Cyberpunk 2077 is right around the corner. AMD 3000 series prices are stabilizing/dropping a little with new cards coming this fall. So, for someone like me with an aging system, a must-have game coming out, and a 1440p high-refresh display on the shopping list, an upgrade is almost necessary.
 
I'm looking for $400-450 for something close to what is an RTX2080Ti now, perhaps even faster at least in ray tracing terms.
You might have to wait another generation before that happens. $400 is roughly 1/3 the retail cost of a 2080 Ti, and that falls into the price range of a 2060 Super. The GPU in this card is physically about 1/2 to 3/4 the chip of the one in the 2080 Ti: 2176 vs 4352 shaders (1/2), 136 vs 272 TMUs (1/2), 64 vs 88 ROPs (~3/4), and a 256-bit vs 352-bit memory bus (~3/4). So the $400 GPU you're looking for is going to need to be about twice the size of the 2060 Super in terms of component count (or double the clock speed).

If we go back through previous generations of GeForce models and look at the ones that were 1/3 to 1/2 the launch price of the very best consumer SKU in that series (i.e. the Ti cards, rather than Titans), you get the following component counts (shaders, TMUs, ROPs, memory bus width):

$350 RTX 2060 = 1920, 120, 48, 192
$300 GTX 1060 = 1280, 80, 48, 192
$200 GTX 960 = 1024, 64, 32, 128
$250 GTX 760 = 1152, 96, 32, 256
$230 GTX 660 = 960, 80, 24, 192

You obviously can't directly compare performance on component count alone, but if you look at how the RTX 2060 fares against the likes of the 1060 and 960:


You can see that there's a gap of 1 to 2 generations before we see an overall doubling in performance, depending on the test used.

In any case, Nvidia is certainly not going to release a graphics card for $400 to $450 any time soon that's near the performance of the current absolute best. AMD or Intel could well do so, though.
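For anyone who wants to sanity-check those ratios, here's a quick back-of-the-envelope Python sketch using the unit counts quoted above. It only reproduces the 1/2 and ~3/4 figures; it's not a performance model, and the scale-up noted at the end is just the naive reciprocal of each ratio.

```python
# Rough comparison of RTX 2060 Super vs RTX 2080 Ti unit counts
# (shaders, TMUs, ROPs, memory bus width in bits), as quoted above.
specs_2060_super = {"shaders": 2176, "tmus": 136, "rops": 64, "bus_bits": 256}
specs_2080_ti = {"shaders": 4352, "tmus": 272, "rops": 88, "bus_bits": 352}

for key, value in specs_2060_super.items():
    ratio = value / specs_2080_ti[key]
    print(f"{key:>8}: {ratio:.2f} of the 2080 Ti")

# shaders: 0.50, tmus: 0.50, rops: 0.73, bus_bits: 0.73
# i.e. matching the 2080 Ti would naively need roughly 1.4-2x the 2060 Super's
# unit counts (or equivalent clock/architecture gains); counts don't map 1:1 to fps.
```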
 
You might have to wait another generation before that happens. $400 is roughly 1/3 the retail cost of a 2080 Ti, and that falls into the price range of a 2060 Super. The GPU in this card is physically about 1/2 to 3/4 the chip of the one in the 2080 Ti: 2176 vs 4352 shaders (1/2), 136 vs 272 TMUs (1/2), 64 vs 88 ROPs (~3/4), and a 256-bit vs 352-bit memory bus (~3/4). So the $400 GPU you're looking for is going to need to be about twice the size of the 2060 Super in terms of component count (or double the clock speed).

If we go back through previous generations of GeForce models and look at the ones that were 1/3 to 1/2 the launch price of the very best consumer SKU in that series (i.e. the Ti cards, rather than Titans), you get the following component counts (shaders, TMUs, ROPs, memory bus width):

$350 RTX 2060 = 1920, 120, 48, 192
$300 GTX 1060 = 1280, 80, 48, 192
$200 GTX 960 = 1024, 64, 32, 128
$250 GTX 760 = 1152, 96, 32, 256
$230 GTX 660 = 960, 80, 24, 192

You obviously can't directly compare performance on component count alone, but if you look at how the RTX 2060 fares against the likes of the 1060 and 960:


You can see that there's a gap of 1 to 2 generations before we see an overall doubling in performance, depending on the test used.

In any case, Nvidia is certainly not going to release a graphics card for $400 to $450 any time soon that's near the performance of the current absolute best. AMD or Intel could well do so, though.

The thing is, if they don't bring 2080Ti performance to the RTX3070 at minimum, who is really going to buy those cards when the Xbox Series X will have that performance for probably less? I have a PC with an overclocked Radeon VII and I need a better GPU to go with my 4K monitor, BUT I also have a decent 4K TV with very good picture quality, way better than my monitor, so my next GPU might be a console. And since I already have a game library for my Xbox One X, I don't have to start from 0 : - )
 
The thing is, if they don't bring 2080Ti performance to the RTX3070 at minimum, who is really going to buy those cards when the Xbox Series X will have that performance for probably less?
Well an RTX 3070 being similar-ish to a 2080 Ti is a possibility, as we can see that a 2070 Super is not too far off a 1080 Ti in some tests:


But the Super version appeared some time after the standard 2070, and almost certainly only appeared because of the 5700 XT.

Don't forget that the Xbox One was $399 to $499 at launch in 2013, when the GeForce GTX 780 Ti and Radeon R9 290X were $699 and $549 respectively. The One X launched at $499 and the GTX 1080 Ti at $699 (it didn't stay at that price for long, though). Top-end GPUs are definitely more expensive than consoles, or certainly have been in the past.

The Series X and PS5 are looking very good indeed, but let's just see what 2020 brings out in terms of GPUs...
 
Well an RTX 3070 being similar-ish to a 2080 Ti is a possibility, as we can see that a 2070 Super is not too far off a 1080 Ti in some tests:


But the Super version appeared some time after the standard 2070, and almost certainly only appeared because of the 5700 XT.

Don't forget that the Xbox One was $399 to $499 at launch in 2013, when the GeForce GTX 780 Ti and Radeon R9 290X were $699 and $549 respectively. The One X launched at $499 and the GTX 1080 Ti at $699 (it didn't stay at that price for long, though). Top-end GPUs are definitely more expensive than consoles, or certainly have been in the past.

The Series X and PS5 are looking very good indeed, but let's just see what 2020 brings out in terms of GPUs...

The difference this time around is that the upcoming consoles won't be an underpowered mess like the PS4 and especially the Xbox One were. Paying $549 for an R9 290X wasn't a bad deal when that card plays most games maxed out at 1080p to this day, and at the time you could do some 4K with it, while the $499 Xbox One was struggling to keep 30fps at 900p!! This time we are going to get 4K 60fps consoles, so I don't know why you would buy a PC, especially from scratch, over the consoles :)
 
I don't know why you would buy a PC, especially from scratch, over the consoles
Hasn't that always been a relevant question to ask, though, when a decent graphics card is the price of a console? Anyway... wandering too far off-topic now.

With only 8 more days to GTC, I've been mindful of how previous architectures were announced:

Kepler = March 2012 for architecture and models.

Maxwell = February 2014 for architecture and models.

Pascal = April 2016. First card was a Tesla model same month; first GeForce models appear in May.

Volta = May 2017. First card was Titan V in December.

Turing = August 2018. First card was a Quadro RTX; first GeForce models were in September.

Historically speaking, then, Turing was a glitch in the matrix when it comes to scheduling, so hopefully Ampere puts things back to how they were. If so, we could see the first cards in May/June, perhaps?
 
You might have to wait another generation before that happens. $400 is roughly 1/3 the retail cost of a 2080 Ti, and that falls into the price range of a 2060 Super. The GPU in this card is physically about 1/2 to 3/4 the chip of the one in the 2080 Ti: 2176 vs 4352 shaders (1/2), 136 vs 272 TMUs (1/2), 64 vs 88 ROPs (~3/4), and a 256-bit vs 352-bit memory bus (~3/4). So the $400 GPU you're looking for is going to need to be about twice the size of the 2060 Super in terms of component count (or double the clock speed).

If we go back through previous generations of GeForce models and look at the ones that were 1/3 to 1/2 the launch price of the very best consumer SKU in that series (i.e. the Ti cards, rather than Titans), you get the following component counts (shaders, TMUs, ROPs, memory bus width):

$350 RTX 2060 = 1920, 120, 48, 192
$300 GTX 1060 = 1280, 80, 48, 192
$200 GTX 960 = 1024, 64, 32, 128
$250 GTX 760 = 1152, 96, 32, 256
$230 GTX 660 = 960, 80, 24, 192

You obviously can't directly compare performance on component count alone, but if you look at how the RTX 2060 fares against the likes of the 1060 and 960:


You can see that there's a gap of 1 to 2 generations before we see an overall doubling in performance, depending on the test used.

In any case, Nvidia is certainly not going to release a graphics card for $400 to $450 any time soon that's near the performance of the current absolute best. AMD or Intel could well do so, though.

This depends on what I said: whether 7nm EUV is the process. If it's plain 7nm or 7nm+ then it'll be harder to hit those goals, but not impossible. Clocks matter as much as die sizes.

An RTX2080 Super is a fully unlocked TU104 die, with everything enabled. It's what, 16 percent off a 2080Ti @ 4K according to the TechSpot test? That's 545mm² versus the much larger and slower-clocked 754mm² TU102 2080Ti die. So the 2080 Super might have a die 28 percent smaller, but it's only 16 percent slower, and just 12 percent at 1440p, where it's less memory bandwidth constrained.

The TU102 is such a massive die that even in 2080Ti guise it's binned, with parts of the die disabled just to get the yields. Being so huge, it's only going to operate at slower clocks.

The way performance is gained moving from one node to another is through architectural efficiency gains (basically IPC) and raw shrinkage of the transistors to pack more on, but crucially also through clockspeed gains from switching the smaller transistors faster, assuming the node is a good one.

Let's say you take a 12nm RTX2080 Super TU104 die, shrink it to 7nm EUV and boost the clocks by 10 percent. You'll close the gap to 2080Ti to next to nothing. Faster memory would help further.

Taking said TU104 545mm² onto 7nm EUV, you'll end up with something much smaller than the current RTX2060 Super die, which is 445mm². It'll easily be sub-400mm². Will you get at least 10 percent better clocks and performance in line? Easily. If you didn't, the node is broken.

You should have OC RTX2080 Super performance on a die smaller than an RTX2060S, which is a $450 card. In short, 2080Ti performance (or very close to it) at this die size is realistic.

All this assumes straight shrinks of existing designs, i.e. Nvidia doing literally nothing beyond a straight shrink. They are of course revamping every design to find more IPC to boot.

The most recent direct comparison where you had this level of node leap was the 980Ti on 28nm to the GTX1070 on 16nm.

You ended up with Ti performance on a one-generation-newer $450 xx70-class card. The 980Ti was 601mm²; the GTX1070 was based on a 314mm² die, and that was with a chunk of it disabled...
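To put rough numbers on that straight-shrink argument, here's a small Python sketch. The die areas are the TU104/TU106 figures quoted above, but the density-gain factors are assumptions for illustration only; real 12nm-to-7nm scaling depends on the libraries used and on how much of the die is SRAM/IO, which shrink poorly.

```python
# Naive die-shrink estimate along the lines sketched above.
def shrunk_die_mm2(old_area_mm2: float, density_gain: float) -> float:
    """Area of a straight shrink, ignoring layout/IO overhead."""
    return old_area_mm2 / density_gain

tu104_12nm = 545.0  # RTX 2080 Super die, mm² (12nm)
tu106_12nm = 445.0  # RTX 2060 Super die, mm² (12nm), for comparison

for gain in (1.6, 2.0, 2.4):  # assumed 12nm -> 7nm density gains, for illustration
    print(f"density x{gain}: TU104 shrinks to ~{shrunk_die_mm2(tu104_12nm, gain):.0f} mm²")

# Even the conservative case lands well under the 445 mm² TU106, which is the
# post's point: 2080 Super-class silicon becomes a mid-size die on 7nm, and a
# ~10% clock bump would close most of the remaining gap to the 2080 Ti.
```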
 
You should have OC RTX2080 Super performance on a die smaller than an RTX2060S, which is a $450 card. In short, 2080Ti performance (or very close to it) at this die size is realistic.
Navi 10 has a transistor density of 41 million/mm², so if one assumes a similar value for an N7 TU104, then it would actually be nearer 330mm²! This would require Nvidia to increase their current density by around 65%, which is certainly possible, given that the GP104 was virtually double the density of the GM200.

However, there's a notable difference between architecturally turning a product that currently sits around $700+ into one that competes with a $1000 previous-generation model, and releasing a new model that does the same thing but is 35% cheaper.

It's all physically possible, but as things currently stand, there's not much incentive for Nvidia to do something like this. I don't think the Series X or the PS5 will influence Nvidia's pricing strategy too much, but big Navi and Xe will. Then again, AMD have already set out their stall by setting the launch price of the 5700 XT at $399; a 2080 Ti-esque version of Navi certainly isn't going to be released at just $50 more, unless they've got a mega-Navi in the wings that they can release for $1000 and that outperforms the TU102 by a large margin.
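For reference, the ~330mm² and ~65% figures fall straight out of the published transistor counts (TU104 is roughly 13.6 billion transistors on 545mm²; Navi 10 is roughly 10.3 billion on 251mm²). A quick Python sketch of the arithmetic, treating density as the only variable:

```python
# Reproducing the density-based die-size estimate above from published specs.
tu104_transistors = 13.6e9          # TU104 (RTX 2080 Super), ~13.6 billion transistors
tu104_area_12nm = 545.0             # mm² on TSMC 12nm
navi10_density = 10.3e9 / 251.0     # Navi 10: ~41 million transistors per mm² on N7

current_density = tu104_transistors / tu104_area_12nm       # ~25 million / mm²
area_at_navi_density = tu104_transistors / navi10_density   # hypothetical N7 TU104

print(f"TU104 today: {current_density / 1e6:.0f} M transistors/mm²")
print(f"At Navi 10 density: ~{area_at_navi_density:.0f} mm² "
      f"({navi10_density / current_density - 1:.0%} density increase needed)")

# -> roughly 330 mm² and a ~65% density uplift, in line with the figures quoted above.
```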
 
How much are we betting Nvidia inflates the 3080's price over what the 2080 cost at launch? Honestly, they couldn't give a crap about gamers; the prices they charge for mid-range used to be high-end prices eight years ago. I honestly hope they lose market share this next decade because they've become so greedy it's unreal. Btw, I can afford their cards, I just don't see why I should pay nearly double what something cost not that long ago. It's not justified.

That's because for the past 8 years there hasn't been much competition, so Nvidia could price their cards however the hell they wanted. Now, AMD has been a completely different company since CEO Lisa Su took the helm. Intel is pricing their 10700K at $300! That is unheard of! All because of competition from AMD's fantastic CPUs. AMD is now focusing on their video cards to compete with Nvidia, and Nvidia won't be able to charge those insane prices anymore.
 
The PS5 and Xbox Series X are the reasons Nvidia has to offer something beastly. But if you think Nvidia will compete on "value", you are mistaken. I think nVidia will design a monstrous chip (both in terms of performance and die size) for the highest tier and, again, say "Hey, here is our top performer for $1000++++, BUT here is something for $800, and then the other one below that for $600." I think their "mainstream" 3070 card will offer comparable performance to the consoles, hence the $600 price.
 
Navi 10 has a transistor density of 41 million/mm², so if one assumes a similar value for an N7 TU104, then it would actually be nearer 330mm²! This would require Nvidia to increase their current density by around 65%, which is certainly possible, given that the GP104 was virtually double the density of the GM200.

However, there's a notable difference between architecturally turning a product that currently sits around $700+ into one that competes with a $1000 previous-generation model, and releasing a new model that does the same thing but is 35% cheaper.

It's all physically possible, but as things currently stand, there's not much incentive for Nvidia to do something like this. I don't think the Series X or the PS5 will influence Nvidia's pricing strategy too much, but big Navi and Xe will. Then again, AMD have already set out their stall by setting the launch price of the 5700 XT at $399; a 2080 Ti-esque version of Navi certainly isn't going to be released at just $50 more, unless they've got a mega-Navi in the wings that they can release for $1000 and that outperforms the TU102 by a large margin.

Indeed, and Navi 10 is not EUV. There is a strong distinction between the current 7nm DUV in Navi and the 7nm EUV process, which is considerably denser, to the tune of about 20 percent.

I would estimate a straight unoptimized shrink of TU104 would come out around 350mm² give or take on 7nm DUV. Better clocks, better memory, fully unlocked part = 2080Ti performance on a chip way smaller than a 2060S.

I doubt that Nvidia will use 7nm EUV. I don't think it will be ready for that kind of production volume, or at a good price, until next year. AMD said they aren't using it yet. Perhaps for a refresh. We'll see.

Nvidia are going to want better margins on their 7nm products than these big die 12nm RTX cards. Despite all logic saying Nvidia will keep prices high, I don't think they can ignore the market.

Even Intel have had to acknowledge AMD pressure with the 10th gen prices. Nvidia see AMD competition, they see new consoles. They see fairly weak Turing sales. I have a positive feeling prices will be kept under control for these new GPUs.

AMD set the 5700XT's launch price at $399, but that was last year. The die itself is barely bigger (251mm² vs 232mm²) than that of a Polaris RX 580, which launched at $229 three years ago! They clearly have sweet margins on Navi. What stops the 5700XT being $250?

Nvidia are going to launch these cards and AMD will respond with price cuts they can comfortably afford. Big Navi will hit RTX2080Ti, I doubt it'll be anything like $1000 and neither will Nvidia's competing product.
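As a very rough illustration of that margin argument, here's a crude launch-price-per-mm²-of-die comparison in Python. It deliberately ignores wafer cost (7nm wafers are far pricier than 14nm ones), yields, memory, board and cooler costs, so treat it as a sketch of the reasoning rather than an actual margin calculation.

```python
# Crude "launch price per mm² of GPU die" comparison behind the margin argument above.
cards = {
    "RX 580 (Polaris, 14nm)": {"launch_usd": 229, "die_mm2": 232},
    "RX 5700 XT (Navi 10, 7nm)": {"launch_usd": 399, "die_mm2": 251},
}

for name, card in cards.items():
    price_per_mm2 = card["launch_usd"] / card["die_mm2"]
    print(f"{name}: ${price_per_mm2:.2f} per mm² of die")

# -> ~$0.99/mm² vs ~$1.59/mm²: the 5700 XT asks roughly 60% more per mm² of
# silicon, which is why the post argues AMD has room to cut once Ampere lands.
```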
 
Why can't 5700XT be $250?
Because Nvidia has annual revenues in excess of $10 billion (net income over $4b), despite their prices :)

Even Intel have had to acknowledge AMD pressure with the 10th gen prices. Nvidia see AMD competition, they see new consoles. They see fairly weak Turing sales.
AMD have been successful with their CPUs because (a) they're good and (b) they engineered a product design that could be easily scaled over the years; the fundamental layout of Intel's Skylake isn't particularly scalable (unless you want CPUs as long as a hot dog). In terms of their GPU redesign, that was very much done because of the consoles - making one chip and selling it in millions over several years is a better deal than aiming for the fickle and uncertain PC market only. Don't get me wrong: Navi is a great design, but it absolutely shouts 'console' (not in any kind of a bad way, though).

Now, given that Nvidia's console portfolio consists of their own Shield and the Switch (one a huge seller, the other...umm...not), neither of which uses anything special, there's no major competition for them here and no real reason for them to try to be competitive, as AMD have the rest of the console market sewn up. Certainly a new console, especially a super powerful one, is almost certainly going to hit PC and, to a lesser extent, graphics card sales, so this is going to be something they're keeping their eye on.

But I don't think either the consoles or Navi are worrying them too much, even with the relatively weak Turing sales. This is because RTX v2.0 is likely to be a significant improvement on what's currently possible, and this is all by design. What better way to make your new product look so much better than its predecessor than by having the first release of the new technology be a very mixed bag of fortunes (great visuals, awful performance, only works well on really expensive products)?

RT cores don't take up a huge amount of die space, less than 10% of an SM's overall area, so increasing the number of them is cheap, especially if you're transitioning to a smaller node. Current GPUs have more than enough compute capability for games; they just need more bandwidth (internally and externally) and a fancy new rendering technology. In previous years, this was hardware TnL, vertex and pixel shaders, tessellation, and so on; now it's all about ray tracing.

And for that, you need specialised ASICs, which of course, Nvidia already has. We know that AMD are following suit in the consoles, so they will in PC models too, as are Intel. Nvidia have a notable head start here, despite the sales and the iffy performance; this is why 'RTX on' has been marketed so much. Keep the consumer's eye on the brand label so that when the new batch of GPUs comes out, more people will assume that only RTX means ray tracing.

Nvidia are going to launch these cards and AMD will respond with price cuts they can comfortably afford. Big Navi will hit RTX2080Ti, I doubt it'll be $1000 and neither will Nvidia's competing product.
I hope you're right, I really do, but I fear that Nvidia fancy themselves a slice of the cake that Apple has made for itself.
 
Because Nvidia has annual revenues in excess of $10 billion (net income over $4b), despite their prices :)


AMD have been successful with their CPUs because (a) they're good and (b) they engineered a product design that could be easily scaled over the years; the fundamental layout of Intel's Skylake isn't particularly scalable (unless you want CPUs as long as a hot dog). In terms of their GPU redesign, that was very much done because of the consoles - making one chip and selling it in millions over several years is a better deal than aiming for the fickle and uncertain PC market only. Don't get me wrong: Navi is a great design, but it absolutely shouts 'console' (not in any kind of a bad way, though).

Now, given that Nvidia's console portfolio consists of their own Shield and the Switch (one a huge seller, the other...umm...not), neither of which uses anything special, there's no major competition for them here and no real reason for them to try to be competitive, as AMD have the rest of the console market sewn up. Certainly a new console, especially a super powerful one, is almost certainly going to hit PC and, to a lesser extent, graphics card sales, so this is going to be something they're keeping their eye on.

But I don't think either the consoles or Navi are worrying them too much, even with the relatively weak Turing sales. This is because RTX v2.0 is likely to be a significant improvement on what's currently possible, and this is all by design. What better way to make your new product look so much better than its predecessor than by having the first release of the new technology be a very mixed bag of fortunes (great visuals, awful performance, only works well on really expensive products)?

RT cores don't take up a huge amount of die space, less than 10% of an SM's overall area, so increasing the number of them is cheap, especially if you're transitioning to a smaller node. Current GPUs have more than enough compute capability for games; they just need more bandwidth (internally and externally) and a fancy new rendering technology. In previous years, this was hardware TnL, vertex and pixel shaders, tessellation, and so on; now it's all about ray tracing.

And for that, you need specialised ASICs, which of course, Nvidia already has. We know that AMD are following suit in the consoles, so they will in PC models too, as are Intel. Nvidia have a notable head start here, despite the sales and the iffy performance; this is why 'RTX on' has been marketed so much. Keep the consumer's eye on the brand label so that when the new batch of GPUs comes out, more people will assume that only RTX means ray tracing.


I hope you're right, I really do, but I fear that Nvidia fancy themselves a slice of the cake that Apple has made for itself.

If the RX580 could launch at $229, then sell for most of its life at more like $200, then 5700XT performance can also be $250 this year. AMD have it at $399 because they can, not because they have to. The market meant they reached 7nm first, and Nvidia were not going lower with their big 12nm dies, so $399 is where it sits.

That changes when Nvidia get to 7nm with these cards. The price of 5700XT performance will come way down and big Navi will go on top with the 2080Ti tier of performance. It won't be $50 more, it'll be $200 more. But still probably under $500.

I don't think the consoles worry Nvidia in terms of getting a piece of that market. They aren't interested much; the margins are too low for them to care. It's the fact that when new consoles launch, new GPU sales drop, because people will consider taking their $500 to a full console system rather than a new GPU if you're price gouging. You have to offer more GPU performance to the PC crowd for a $500 graphics card, otherwise they won't bite.

The RTX series has been a prime example. They have not sold great. Too little gain for too much money over Pascal. Debatable ray tracing benefits. If they want to push these technologies into the mainstream, they have to control the MSRP of next-gen cards.

Nvidia will feel the squeeze. Maybe not if they launch before Big Navi, but certainly after. Navi 10 forced price cuts and performance increases with the Super series, they aren't living in a bubble.

2020 will be a good year for gamers looking for upgrades.
 
I'd argue that the decision to upgrade needs to be made on a case-by-case basis. I'm still running a 1700X with a 1070 Ti and Cyberpunk 2077 is right around the corner. AMD 3000 series prices are stabilizing/dropping a little with new cards coming this fall. So, for someone like me with an aging system, a must-have game coming out, and a 1440p high-refresh display on the shopping list, an upgrade is almost necessary.

I'm probably an ideal case study for your thinking. I was rocking an R5 1600 and an X370 mobo in my system, along with 32 gigs of RAM, an RX480, a few M.2 drives, HDD data drives, etc.

I went the step approach - just bought an X570 motherboard and new gen 3600 CPU, did a brain transplant in my system keeping all the previous components, and now I'm up and screaming along with minimal effort and expense (less than $400, since I opted for a higher quality mobo to future proof myself a bit). Next step will be a new GPU and power supply, maybe bump up to RAM faster than 3200 if it makes sense, new SSDs as needed. You can do things in stages and always be leveraging current tech for different components, kind of a leapfrog approach - if you don't care that you aren't running "100% bleeding edge this is the absolute latest technology" all at once, that is. I gladly sacrifice a few % in fps for games here and there to eliminate punishing my wallet too severely. Tends to let you put more into the components you are interested in changing next, rather than trying to level out the expenses across an entire build and maybe diluting some of the more critical pieces in the process. Like the GPU - this new competition from Nvidia will push AMD to keep churning (as always), so I'll keep watching to see what the best option is when I'm ready to make the next upgrade jump.

And my old mobo / CPU assembly went to a good cause - upgraded the server my group of friends pitched in to build, so we can run some private game sessions a little more efficiently.
 
Like the GPU - this new competition from Nvidia will push AMD to keep churning (as always), so I'll keep watching to see what the best option is when I'm ready to make the next upgrade jump.

And my old mobo / CPU assembly went to a good cause - upgraded the server my group of friends pitched in to build, so we can run some private game sessions a little more efficiently.
I'd argue that AMD was never really "behind" in the GPU space. They, as a company, were behind and had to focus on making one really good product, and we got the ryzen series. With the 3000 series out and now the 4000 series coming soon, they can start dumping more money into graphics R&D.

AMD focused on the largest slice of the graphics market share and has been doing that for a long time. They mostly release mid-range cards that perform very well. We as enthusiasts think it's dumb that AMD doesn't have a 4K60 card, but that's a very small group of people gaming at those resolutions. Heck, the majority of PC gamers don't even have 1080p 120Hz monitors. The vast majority of gamers are still gaming at 1080p/60, and AMD's midrange does a fantastic job at that. I see many people who don't even have the peripherals to run more than 1080p/60 complaining that nVidia is so expensive and that AMD is behind.

Now, going back to the Vega 64 cards, they were objectively the faster cards. Raw compute power was vastly ahead of what nVidia was offering at the time. The only reason nVidia stole the show was that in the gaming world, developers spend more time optimizing for nVidia hardware since they have the largest PC market share.

But I would like to point out that AMD sells far more GPUs than nVidia, just not in the PC market. People forget about CONSOLE gaming (rightfully so, console plebs). AMD's affordable midrange strategy has put them at the center of everyone's living room because it makes them the go-to choice for console makers. Looking at the Xbox One X, you don't NEED a 2080ti to run 4K60. I run 4K60 in plenty of games on a 1070ti and have very few problems. Sure, I'm not running the highest settings, but I'm far from the lowest or even medium settings. I can notice graphics fidelity if I'm actually looking, but if I'm sitting there picking apart texture quality, am I really playing the game? How many of us cranked up Crysis to see the graphics but then turned them down and had a BLAST playing the game?

It's to the point where I tell people that your monitor should cost half as much as your graphics card if you want to see any difference. People who are buying $400-500 graphics cards aren't spending $400 on 1440p 144Hz monitors. If you have a $200 monitor, you have no business buying a $1000 graphics card.

Anyway, I'm getting away from what I originally wanted to talk about. Of which, I forget now, so I'm gonna post this and watch it get flamed. Also, I wrote this such that I'll be able to tell who didn't read the whole thing by their response, let the games begin!
 
Now, going back to the Vega 64 cards, they were objectively the faster cards. Raw compute power was vastly ahead of what nVidia was offering at the time. The only reason nVidia stole the show was that in the gaming world, developers spend more time optimizing for nVidia hardware since they have the largest PC market share.

The problem with Vega was that it came out a year after the GTX1080, was AT THE TIME not faster than it, had double the power draw, was in some cases as expensive as a GTX1080Ti, and AIB cards didn't arrive alongside the reference design. I own a Vega 64 Nitro+, which I keep as a spare card; it was a good 1440p card, but being this late to the party AMD should have made more of them, made sure better models were out, and sold it at a loss, and they wouldn't have got so much heat for it. The model I have was priced at £650 from what I can remember, only £50 cheaper than some 1080Tis that would always be faster (I got mine for £400 on Black Friday 2018 with 3 games). That was one of the worst product releases in AMD's history, in my opinion.
 
The problem with Vega was that it came out a year after the GTX1080, was AT THE TIME not faster than it, had double the power draw, was in some cases as expensive as a GTX1080Ti, and AIB cards didn't arrive alongside the reference design. I own a Vega 64 Nitro+, which I keep as a spare card; it was a good 1440p card, but being this late to the party AMD should have made more of them, made sure better models were out, and sold it at a loss, and they wouldn't have got so much heat for it. The model I have was priced at £650 from what I can remember, only £50 cheaper than some 1080Tis that would always be faster (I got mine for £400 on Black Friday 2018 with 3 games). That was one of the worst product releases in AMD's history, in my opinion.
The price and the "should have made more of them" issue were due to the cryptocurrency mining craze combined with the HBM2 shortage delaying the launch date. If you remember all that, nVidia suffered heavily price- and supply-wise, too.

Yeah, it was a horrible product release and all people see is AMD's name looking back at it, but essentially everything wrong with that launch was out of AMD's hands.

Now, I decided to do some digging and found something very interesting, meant as some food for thought. As far as "what is the better card" goes, I want to look at prices. The 1080 and 1080ti were still available, but in limited numbers at the time and above MSRP. 1080tis were going for around $800, but the Vega 64 was being resold in excess of $1000. Now, what do all these numbers mean? Well, something is only worth what people are willing to pay. The portion of the market with money to spend was spending it on AMD cards. As a gaming card it didn't do so well, but no one can say that it wasn't a sales success. Don't forget, AMD is a business.
 
Yeah, it was a horrible product release and all people see is AMD's name looking back at it, but essentially everything wrong with that launch was out of AMD's hands.

Well, I (and this is coming from an AMD fan, I do "rock" a 2700X + Radeon VII) no longer buy their excuses. Yes, there was a mining craze and they had issues with HBM2, but the Fury X was a bad product launch with shortages and bad pumps on the AIOs, the RX480 had a weak cooler and was limited to only a 6-pin connector, which caused power draw issues, Vega 64 was a total disaster, now it's Navi and their drivers, and don't get me started on the RX5600XT and the BIOS mess. At some point we have to look at the pattern here and acknowledge they are Fuc****** incompetent when it comes to GPU launches. This is why I kinda can't wait for RTX3000, and I might be jumping ship for a bit ?
 