takaozo
Except for the fact that a 1980 Lambo is still a Lambo. Can't say that for a GeForce3 Ti 500.
"tl;dr:
1. the chip isn't 2x more expensive to make"

That first estimate was a minimum. AMD's graph is a little too indistinct to draw any really accurate figures from, but there's enough margin of error to estimate the cost scaling to be as high as 2.5 times. If that were the case, then AD103 chips would be 74% more expensive to fabricate than TU104s (rough numbers below). The biggest unknown factor is just how much TSMC is charging for N4 production compared to N5 and N6.

"2. the RAM isn't 2x more expensive (let's not forget the price the 3090 with 24GB GDDR6X is currently selling at now)"

No, it's not twice the cost, obviously. The 4080 and 3090 Ti do use different density GDDR6X modules than the 3090 (16Gb vs 8Gb), though, and the former uses faster modules (24 Gbps vs 21 Gbps), so they will be more expensive. How much more so is another unknown.

"That leaves the cooler, PCB and other components price, which I also doubt are 2x more expensive (especially with the fairly decent power draw the card has)."

I'm assuming that you're going with the twice-as-expensive comment because the MSRP of the 4080 is 72% more than that of the 3080 ($1199 vs $699) and you estimate that the production/purchasing costs can't account for that increase. But R&D, staffing, buildings, marketing, advertising, software development, packaging, storage, distribution, import fees, licensing fees, etc. are all part of that price tag too. For all we know, Nvidia could well be experiencing much higher costs in all of those areas, but equally, they may not.
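To make the arithmetic behind that 74% figure concrete, here is a minimal back-of-the-envelope sketch. It assumes the publicly listed die sizes (TU104 at roughly 545 mm², AD103 at roughly 379 mm²) and treats the 2x-2.5x scaling read off AMD's chart as a per-mm² wafer-cost multiplier; yields, binning, and packaging are ignored, so treat it as illustrative only, not actual TSMC pricing.

```python
# Rough per-die cost scaling, ignoring yield and binning differences.
# Die sizes are approximate public figures; the wafer-cost multipliers are
# assumptions read off AMD's cost-per-mm^2 chart, not TSMC's price list.
TU104_MM2 = 545   # TSMC 12nm
AD103_MM2 = 379   # TSMC 4N

for wafer_cost_scale in (2.0, 2.5):
    relative_die_cost = wafer_cost_scale * AD103_MM2 / TU104_MM2
    print(f"{wafer_cost_scale}x cost per mm^2 -> AD103 ~{relative_die_cost - 1:.0%} dearer than TU104")
# 2.0x gives roughly +39%, 2.5x roughly +74%, in line with the 40%-74% range discussed above.
```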
* We usually score a product based on features, performance, value, competitors, innovation, etc.
* A 90/100 score doesn't mean everyone should buy it, but rather reflects where we believe the product slots among its direct competitors.
* It's up to the consumer to decide if they want it/can pay for it. If not, there are alternatives. You will see us scoring other products (GPUs or otherwise) lower if we think they don't perform where they should within their segment/intended market, if they are not well built, are buggy, etc.
* In the case of the RTX 4080, it's a very fast GPU, it's just too expensive for most. In terms of value it's not horrible, but it's not great either. As of writing, there is nothing else that delivers that level of performance (from the competition).

Looking forward to the score you give to the $1,000 RX 7900XTX that, according to AMD, is a 4080 competitor.
I would like to see the XTX about 20% ahead of the 4080, and the XT equal to it in raster performance. Only a few more days before reviews.
Right now we have no idea how it will perform, but the review should show that.
Relax dude. Lamborghinis aren't necessary, or a good value, either. But that doesn't mean they "shouldn't exist".
Stop projecting your frustrations with personal income on the rest of the world. If you can't afford it, sorry. But it doesn't mean a product is some sort of an abomination that should be banned. It just means it's not in your future for the moment. DEAL
No matter how outraged one is about the price of the new GeForce cards, there's no real need to get worked up about it or expect some kind of protest by reviewers and the like. There's a far simpler solution: just don't buy it. And if enough people feel the same way about the price tags, then Nvidia will get the message in the clearest possible way -- lower revenues and smaller margins, regardless of how much of a market leader they are or how good their products are at what they do.
All of that was already included in the price of the previous gen. Why would things like R&D suddenly make a huge difference in pricing this gen? (And the server side is usually the one with the highest profit margins and where most of the R&D goes.)
Thank you for at least coming right out and saying that the first cards of each generation will enjoy a permanent buff to their review score, because you won't come back and re-evaluate. First to market wins forever. The honesty is refreshing.
"All of that was already included in the price of the previous gen. Why would things like R&D suddenly make a huge difference in pricing this gen?"

The 'why' is anyone's guess, but Nvidia's financial statements show R&D costs in 2022 being double that of 2021. Their long-term debts are significantly higher too.

"And it doesn't matter that the RAM chips are more expensive, it's still 8GB less GDDR6X RAM. Nvidia is most likely actually saving a few bucks here, especially since they are using a smaller bus size. Is 16GB @ 22.4Gbps on a 256-bit bus really more expensive than 24GB @ 19.5Gbps on a 384-bit bus?"

At face value, one wouldn't expect 24 chips of slower GDDR6X to be cheaper than 8 chips of faster GDDR6X, but the chips aren't the same - the latter is double the density of the former (rough numbers below). It could be harder/more expensive for Micron to fabricate them, but it might not be. The point is we simply don't know.

"In fact, everything here is based on the process node being much more expensive. According to TSMC, the N5 process node has fewer defects than N7 had at the same point in its life. So yields should, in theory, not be bad."

Yes, in theory, and TSMC themselves have said that the N4 process has the same yields as N5. However, that's a same-die comparison. We don't know anything about what differences Nvidia has requested for the fabrication (older architectures, such as Pascal, used an NV-specific node), nor do we know how sensitive the die design is to defects. All we do know is that a full AD103 chip has 84 SMs, whereas those used in the 4080 have 76; it also has the full amount of L2 cache (64MB), so we're looking at dies that are coming out at the top end of the binning process (unlike the first 3080, which used fairly middling GA102 dies).

"With the 4080's die size being almost half the die size of the 3080/3090, the excuse of double the cost of a wafer also falls apart."

I never said that the AD103 was double the cost of the GA102. Read again: I said that it was 40% to 70% more than the cost of the TU104. The GA102 was made on Samsung's old 8nm node, for which we have no cost comparisons.

"We are, by the numbers, looking at a huge increase in price that isn't justified by the BOM and other production costs."

What numbers? We don't actually have any figures for any of these things - just guesses, estimates, and assumptions. But no matter: you and many other people believe that the price of the 4080/4090 isn't justifiable, and that's absolutely fine. I have no problem with such views, and if millions of people around the world don't buy any RTX 40 series card because of the price, then good on them.
I'm simply looking at this from a professional perspective. There's enough evidence around to strongly suggest that the design and manufacturing of graphics cards are notably more expensive than they were, say, four years ago. AMD made a point of highlighting this in the RX 7000 launch, and many slides in their presentations were dedicated to the point of designing everything to keep costs down. Nvidia has taken a different approach and has decided to pass those costs on to the buyer.
Time, and financial reports, will tell us which approach is better for the companies and consumers.
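For the memory point a few paragraphs up, here is a minimal sketch of the two configurations being argued about, using only the capacities, densities, bus widths, and speeds quoted in the thread; what Micron actually charges per chip remains unknown, as noted.

```python
# The two memory configurations being compared, with the figures quoted in
# the thread. Chip count follows from capacity / per-chip density (GDDR6X
# chips have a 32-bit interface; the 3090 runs two chips per channel in
# clamshell mode, the 4080 one per channel).
def gddr6x_config(capacity_gb, chip_density_gb, bus_bits, speed_gbps):
    chips = capacity_gb // chip_density_gb
    bandwidth_gb_s = speed_gbps * bus_bits / 8
    return chips, bandwidth_gb_s

print(gddr6x_config(24, 1, 384, 19.5))  # RTX 3090: (24 chips, 936.0 GB/s)
print(gddr6x_config(16, 2, 256, 22.4))  # RTX 4080: (8 chips, 716.8 GB/s)
```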
I think many people would say that they're features they don't personally care about enough to want to pay extra for. I currently have no intention of buying any 4xxx cards myself. But it seems unreasonable to cast the tech itself as irrelevant.

Upscaling is one of the best technologies/techniques to come out in a long time to help drive higher frame rates without having to sacrifice too much in the way of perceived image quality. If you're running on a low-res monitor, sure, it won't matter to you. For anyone running a high-refresh 4K monitor, it's fantastic and absolutely could be a deciding factor in which card you buy.

Ray tracing *was* "fluffy junk" when the performance hits weren't worth the visual trade-off. When (some) cards are now able to combine technologies to allow ray tracing and still produce great frame rates, why wouldn't you turn it on if you could (assuming you place enough value on how pretty things look)?

So yeah, I wouldn't write off technologies that *both* companies are actively pursuing and improving upon. Just don't spend outside of your own needs/means/wants when weighed against what's available.

I know I'm in the extreme minority. I'll be in the sub-1% group of gamers who purchase a 4090 or 4080. And that's something I'm perfectly cool with. But your point is absolutely the reason why. I am willing to spend that on a GPU because I want every single feature turned on. The performance of the 4080 is finally making RT at "Ultra" graphics settings acceptable in most games for my needs and wants. The 3080 did a decent job, but I still played with RT off in nearly every scenario. I absolutely hate the price of the 4080, but the massive performance uplift was enough for me to spend the dough.
Can't believe the score this GPU has gotten; so TechSpot is actually recommending to its readers that paying 70% more over the same kind of product (RTX 3080) is the way to go. As of today, PC gaming is a joke, and PC-oriented websites should be trying to do something about it.

I mean, the thing is they are not the same product. The 3080 (which I currently have) is barely comparable at all to the 4080. They are the same type of product, but not even close when it comes to performance.
Isn't that the case with almost every new GPU gen, though? The 3080 was much faster than the 2080 it replaced, but at the same MSRP.
You're not wrong. But the power consumption, thermals, and lack of true effectiveness at running RT at any resolution, never mind bleeding-edge resolutions, put the 3080 in a sub-par category. This is just my opinion. The MSRP of the original 3080 was incredible, and I only hoped that would continue. Alas, that does not seem like it will ever be the case moving forward. Honestly, though, Nvidia has simply replaced the product slot of what used to be the xx80 series with the xx70 series. I don't think Nvidia really intends to compare the xx80 series with previous xx80 series anymore. The comparisons with the 4080 now are against the 3090 and 3090 Ti. The 4070 model pricing will reflect that again, as it did with the 3080.
LMAO. Lamborghini? What are you smoking? This is a graphics card, not a luxury product. Nobody will admire you or give you "social clout" or whatever for having an RTX 4000 card.

And yes, it literally shouldn't exist. When put into the context where every GPU generation before this one both increased the maximum amount of performance available AND increased performance/$ in every segment at the same time (even the RTX 2000 series did it to some degree), the 4000 series, so far, is a huge failure. New GPUs shouldn't just infinitely increase in price every generation; they should replace the older GPUs in the same-ish price brackets, which is how the GPU market has operated for the past two decades.

Also, about the pathetic quip about personal income: the 4080 itself is irrelevant. $1000+ GPUs are a minuscule, insignificant portion of the GPU market. This product is inconsequential to the vast, overwhelming majority of the market. The issue highlighted in my comment is not about this product in particular; it's that the whole reason Nvidia is launching this particular product at a price that everyone, including themselves, knows is ridiculous is that this is their attempt to keep the RTX 3000 series at artificially inflated prices. That affects the market segments that are not irrelevant, like the RTX 3060 Nvidia is asking almost $400 for, or the RTX 3050 that Nvidia is asking a hilarious $300 for. Nvidia is launching 4000 series cards at ridiculous prices in an attempt to make leftover 3000 stock appear to be a better deal than it is. So even though the 4080 is an irrelevant product as far as actual market presence goes, it not receiving a proper $700/$800 or so price means it won't push the 3080 down, which won't push the 3070 down, which won't push the 3060 down, and so on. And THAT affects the products people actually buy.

For me, this is absolutely a luxury product. It's my hobby, and my only hobby. There is no logical reason for me to own a 4080 no matter how much I game. But I can afford it, and I want the best that I am willing to pay for. I'll admit, this pricing is more than I wanted to pay, and I am definitely at the ceiling for a "luxury" graphics card.
Does it bother you that Nvidia hasn't actually improved RT performance over Ampere?

When you put RT on for Ampere, you get around a 40-45% hit to performance.

When you put RT on for Ada, you get around a 40-45% hit to performance.

Nvidia made no strides in performance improvement with RT over Ampere. The only reason it appears to be better is simply because of the improved rasterization performance.

I'm just curious to see what you think about it.

Well, my initial thought would be that even if the hit is 40-45%, the overall performance now allows those RT'd games to be played at a resolution and framerate I would want in order to turn those features on. I'd say that my interest in these products really stops at the performance I can get in the games I play at the settings I want to play them. Sure, it would be phenomenal if the RT hit in this new gen was, say, 25%. I'd find that super interesting and impressive. But the price would be the same, if not more. If my interest in these products centered more on technological advancement in the industry and less on actual gaming, I'd have a bigger issue with the 40-45% RT hit not improving.
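To illustrate the point being debated here - that an unchanged relative RT hit means the apparent RT gains are really just raster gains - here is a minimal sketch. The frame rates are made-up placeholders, not benchmark results; only the ~45% hit figure comes from the discussion above.

```python
# If the percentage hit from enabling RT is the same on both architectures,
# the RT frame-rate ratio between them equals the raster ratio.
def rt_fps(raster_fps: float, rt_hit: float = 0.45) -> float:
    return raster_fps * (1 - rt_hit)

ampere_raster, ada_raster = 100.0, 150.0             # hypothetical raster numbers (Ada ~50% faster)
print(ada_raster / ampere_raster)                    # 1.5x in raster
print(rt_fps(ada_raster) / rt_fps(ampere_raster))    # still 1.5x with RT on
```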
One more bit about why I was willing to pay for a 4080 at this admittedly high price: when I built my first gaming PC about eight years ago, I purchased a used 1050 Ti 2GB. I traded that in for a 1060 3GB. I traded that in for a 2070 Ti (with a discount from a friend). Then I sold that for a 2080 Ti (with a discount from a friend). Then I sold that for the incredibly priced 3080 and made a profit. I will sell my 3080 to a friend for a reasonable price, most likely $500. So in my mind, I'm paying a more reasonable price to upgrade to the 4080. If I did not have an investment in the 3080, I would absolutely not buy the 4080. No way.
The RTX 3080 was 66% faster than the RTX 2080 for 12.5% LESS money ($700 vs $800 MSRP at launch, respectively).
The RTX 4080 is 49% faster than the RTX 3080 for 71% more money ($1200 vs $700).
Do you see where the problem is?
It goes even further. The 2080 was 37% faster than the 1080 for 33% more money ($800 vs $600), and that was widely regarded as a disappointing launch. The 1080, in turn, was 66% faster than the 980 for 10% more money ($600 vs $550). The 980 was 30% faster than the GTX 780 for 15% less money ($550 vs $650).
The 4000 series is the first time ever a new generation of GPUs has regressed in performance/$. Not even the 2000 series was this bad.
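Working those generational figures through explicitly, using only the uplift and launch-MSRP numbers quoted above, shows why the 4080 stands out; this is a sketch of the perf-per-dollar change implied by those numbers, not independent benchmark data.

```python
# Relative performance-per-dollar of each new card vs the one it replaced,
# using the uplift and launch-MSRP figures quoted in the comment above.
generations = [
    ("GTX 980  vs GTX 780",  1.30,  550,  650),
    ("GTX 1080 vs GTX 980",  1.66,  600,  550),
    ("RTX 2080 vs GTX 1080", 1.37,  800,  600),
    ("RTX 3080 vs RTX 2080", 1.66,  700,  800),
    ("RTX 4080 vs RTX 3080", 1.49, 1200,  700),
]
for name, perf_ratio, new_msrp, old_msrp in generations:
    perf_per_dollar = perf_ratio / (new_msrp / old_msrp)
    print(f"{name}: {perf_per_dollar - 1:+.0%} perf/$")
# Every step lands positive (the 2080 only barely) except the 4080, at roughly -13%.
```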
It was Nvidia's mistake not to call the GPUs differently...

They could have gone with the $1600 RTX 4000 Titan
$1200 RTX 4090
$900 RTX 4080 ... just to satisfy the AMD fanboys.

I'm being sarcastic, of course.

Coming off my previous comment, we can vote with our wallets.

I hope the next-gen Radeons do extremely well, so they become the better buys for less (and eventually Nvidia will be forced to reduce pricing, just as it used to be for the decade before mining was ever part of the equation).

I completely agree. This 4080 is not intended to be compared with the 3080. That's just a fact. This is the new xx90 series. I also agree with voting with your wallet. And it will happen. The 4080 is not going to sell well. Period.
I understand what you're saying. But what I'm pointing to is not pure speed but rather all of the other features possible with the 4080 over the 3080 vs the 3080 over the 2080. While the "pure" speed is somewhere around 50% faster, in many cases more, the ability to handle RT at 1440p and 4K, along with improved DLSS performance, power efficiency, and essentially the same or better thermal management, are definitely factors in the purchase decision for me.
I will never tell you the 4080 price is a better value than the 3080. The original 3080 MSRP was incredible at the time.