Nvidia is speed binning its GeForce RTX 2070

So, when AMD offers an R7 2700 and an R7 2700X, don't you think binning is involved too?

TechPowerUp reported this back in Sept. https://www.techpowerup.com/247660/...overclocking-forbidden-on-the-cheaper-variant

They also name the products differently. Binning isn't the bad part; the bad part is having two different-performing products under the exact same name. Nvidia calls these chips 2070 regardless of the bin. With the AMD example you provided, when you buy a 2700X you get a specific chip that's been tested to work at a certain frequency / power (the "binning" part). A 2700, on the other hand, will not have power characteristics and a max frequency that are as good. By contrast, the two different bins of the Nvidia chip are both sold and advertised as a 2070, meaning you can get lower performance than advertised. There is nothing to let the consumer know they may get lower performance, and Nvidia does not force a name change on AIBs for the two different chips.

Binning makes perfect sense and is in no way fishy.

read above.
 
They also name the products differently. Binning isn't the bad part; the bad part is having two different-performing products under the exact same name. Nvidia calls these chips 2070 regardless of the bin. With the AMD example you provided, when you buy a 2700X you get a specific chip that's been tested to work at a certain frequency / power (the "binning" part). A 2700, on the other hand, will not have power characteristics and a max frequency that are as good. By contrast, the two different bins of the Nvidia chip are both sold and advertised as a 2070, meaning you can get lower performance than advertised. There is nothing to let the consumer know they may get lower performance, and Nvidia does not force a name change on AIBs for the two different chips.

If you clicked the link I provided from Sept, you would have seen this:

"We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc."

Looks pretty straightforward to me: if you buy a base model that isn't overclocked you get a -300. If you buy an FE or an aftermarket overclock you get a -300A (or whatever chip it happens to be). Of course, the -300 might still overclock, just not as well as the -300A. Nothing nefarious, and they didn't hide the information, as they replied to TechPowerUp without special comment.

Yes, the product is still a 2070 (or 2080 or 2080 Ti), but if you buy the base model you get the cheaper chip. If you buy a higher model, pre-overclocked, you get a better chip, one binned to provide that overclock.
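
Side note: since the report says the two bins carry different device IDs / ASIC codes, a curious owner could in principle check which variant their card reports by reading the PCI device ID the driver exposes. Here is a minimal sketch, assuming an NVIDIA driver with nvidia-smi installed; the actual hex ID for each -300 / -300-A variant isn't listed here and would have to be looked up per model:

import subprocess

# Ask the NVIDIA driver for each GPU's name and PCI device ID.
# Per the TechPowerUp report quoted above, the -300 and -300-A bins of the
# same Turing GPU carry different device IDs, so the raw ID is enough to
# tell them apart once you know which hex value maps to which bin (that
# mapping is not listed here and would have to be looked up per model).
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,pci.device_id", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.strip().splitlines():
    name, device_id = [field.strip() for field in line.split(",")]
    print(f"{name}: PCI device ID {device_id}")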

Hmm, I wonder how aftermarket makers of AMD video cards decide which GPU to use on the "stock" version of a card versus an "overclocked" version? I'm no expert, but I see RX 580 video cards ranging from 1.257 to 1.405 GHz. I guess they use the same part-number (PN) GPU, but they must do something to decide if their OC will hold, eh? And look at that, they still call them all RX 580s!
 
If you clicked the link I provided from Sept, you would have seen this:

"We reached out to industry sources and confirmed that for Turing, NVIDIA is creating two device IDs per GPU to correspond to two different ASIC codes per GPU model (for example, TU102-300 and TU102-300-A for the RTX 2080 Ti). The Turing -300 variant is designated to be used on cards targeting the MSRP price point, while the 300-A variant is for use on custom-design, overclocked cards. Both are the same physical chip, just separated by binning, and pricing, which means NVIDIA pretests all GPUs and sorts them by properties such as overclocking potential, power efficiency, etc."

Looks pretty straightforward to me: if you buy a base model that isn't overclocked you get a -300. If you buy an FE or an aftermarket overclock you get a -300A (or whatever chip it happens to be). Of course, the -300 might still overclock, just not as well as the -300A. Nothing nefarious, and they didn't hide the information, as they replied to TechPowerUp without special comment.

Yes, the product is still a 2070 (or 2080 or 2080 Ti), but if you buy the base model you get the cheaper chip. If you buy a higher model, pre-overclocked, you get a better chip, one binned to provide that overclock.

Hmm, I wonder how aftermarket makers of AMD video cards decide which GPU to use on the "stock" version of a card versus an "overclocked" version? I'm no expert, but I see RX 580 video cards ranging from 1.257 to 1.405 GHz. I guess they use the same part-number (PN) GPU, but they must do something to decide if their OC will hold, eh? And look at that, they still call them all RX 580s!

How is the customer supposed to know the cheaper 2070s won't come with the performance advertised in benchmarks? Nothing you provided answers this question.

On the topic of the RX 580, the reference card / founders edition was benchmarked and advertised at base clocks. AMD didn't give reviewers higher-binned cards and then turn around and sell lower-binned cards at the advertised price. Any card you buy with a higher frequency is extra performance, not less, as in Nvidia's case.

Like I said before, binning isn't bad. It's the way Nvidia released and advertised the 2070 at a certain price and performance, only to turn around and sell the lower-binned model at the price advertised for what should have been "full" 2070 performance. If I make a car, advertise a certain speed and mileage at $29,999, and then turn around and sell you that same car with a worse engine that has reduced speed and mileage, that's wrong, and it's exactly what Nvidia has done here.

Nvidia could have easily avoided this issue by providing the lower-binned 2070 to the press or by indicating a price premium for the higher-binned chip. They did neither: they gave an MSRP for the 2070 based on the cheaper chip without notifying reviewers that the MSRP is for the cheaper bin and not the higher bin. This has been one of the worst, if not the worst, Nvidia launches ever.
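
To be fair, an owner can at least measure what they actually got after the fact. One rough way (a sketch only, assuming an NVIDIA driver with nvidia-smi available; it doesn't excuse the naming, it just checks your own card) is to log the sustained graphics clock under a gaming load and compare it with the clocks the review cards held:

import subprocess
import time

def current_graphics_clock_mhz() -> int:
    """Read the current graphics clock (in MHz) of the first GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=clocks.gr", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip().splitlines()[0])

# Sample once per second for a minute while a game or benchmark is running,
# then report the average sustained clock so it can be compared with the
# clocks the review samples held.
samples = []
for _ in range(60):
    samples.append(current_graphics_clock_mhz())
    time.sleep(1)

print(f"Average sustained graphics clock: {sum(samples) / len(samples):.0f} MHz")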
 
I'm certain ATi was doing this many years ago as well.
You could buy a cheap card and unlock extra pipes for a performance boost, as it was the same chip as the more expensive card, although it was hit and miss whether you got a clean OC and unlock or got artifacts. I was lucky and got a 15 fps boost in Doom 3 on high just from a hacked driver. The card is still going now in an old PC for my dad.

It's not shady; you get a cheaper card with less performance, so who honestly cares?
You're not paying $1,000 for a card that has less performance than a $1,000 card, are you, unless you are stupid.
They could, however, manufacture new chips for the lesser cards and push prices up, throwing away a load of binned chips that were perfectly adequate for use just because of everyone bitching about it, and then we'd have mid-range GPUs at twice the price. Nice one, internet.
 
I'm certain ATi was doing this many years ago as well.
You could buy a cheap card and unlock extra pipes for a performance boost, as it was the same chip as the more expensive card, although it was hit and miss whether you got a clean OC and unlock or got artifacts. I was lucky and got a 15 fps boost in Doom 3 on high just from a hacked driver. The card is still going now in an old PC for my dad.

It's not shady; you get a cheaper card with less performance, so who honestly cares?
You're not paying $1,000 for a card that has less performance than a $1,000 card, are you, unless you are stupid.
They could, however, manufacture new chips for the lesser cards and push prices up, throwing away a load of binned chips that were perfectly adequate for use just because of everyone bitching about it, and then we'd have mid-range GPUs at twice the price. Nice one, internet.

That's not even close to what's going on here.
 
How is this "fishy?" Do the specifications on the box not match what is being sold?
Are you saying that having two cards both called RTX 2070 while they have different chips with totally different clock speeds is not fishy? In my opinion this would justify calling one of the cards a 2070 and the other one a 2070 Ti.
Not to mention Nvidia's earlier practice with the GTX 1060, which had 3 GB, 5 GB and 6 GB models with even different numbers of CUDA cores. All three were called 1060, while it is obvious this was intentionally misleading.
So yeah, I think having two cards with the same name but with different chips and considerably different clock speeds is fishy.
 
How is the customer supposed to know the cheaper 2070s won't come with the performance advertised in benchmarks? Nothing you provided answers this question.

On the topic of the RX 580, the reference card / founders edition was benchmarked and advertised at base clocks. AMD didn't give reviewers higher-binned cards and then turn around and sell lower-binned cards at the advertised price. Any card you buy with a higher frequency is extra performance, not less, as in Nvidia's case.

Like I said before, binning isn't bad. It's the way Nvidia released and advertised the 2070 at a certain price and performance, only to turn around and sell the lower-binned model at the price advertised for what should have been "full" 2070 performance. If I make a car, advertise a certain speed and mileage at $29,999, and then turn around and sell you that same car with a worse engine that has reduced speed and mileage, that's wrong, and it's exactly what Nvidia has done here.

Nvidia could have easily avoided this issue by providing the lower-binned 2070 to the press or by indicating a price premium for the higher-binned chip. They did neither: they gave an MSRP for the 2070 based on the cheaper chip without notifying reviewers that the MSRP is for the cheaper bin and not the higher bin. This has been one of the worst, if not the worst, Nvidia launches ever.

Umm, you apparently don't know that the reviewers were all sent base models of the 2070 to review? Here, let me show you because you apparently missed that part of the reviews, probably due to your AMD blinders. The 2070 reviews were messed up because NVidia didn't ship out their own overclocked FE cards, but asked board partners to send the lower priced base models. That created a problem, as the board partners want to show off their special, more expensive cards, but hey, you keep plugging away at that.

https://www.guru3d.com/articles-pages/asus-turbo-geforce-rtx-2070-8gb-review,1.html

"Partly the core issue is that a week or so ago NVIDIA told the board partners to ship out a 499 USD product, as well they do need at least one RTX product to make some sense pricing wise."

All of the reviews complained that they were given a couple of days to do their reviews because NVidia did not sample FE cards, and they told their board partners to send the lowest-priced model for the initial review.
 
Here, let me show you because you apparently missed that part of the reviews, probably due to your AMD blinders.

If you have to make up reasons to dislike people, that's your prerogative, but this isn't a valid argument and it isn't even on topic.

Umm, you apparently don't know that the reviewers were all sent base models of the 2070 to review?

Patently false.

https://www.techradar.com/reviews/nvidia-geforce-rtx-2070
https://www.tomshardware.com/reviews/nvidia-geforce-rtx-2070-founders-edition,5851.html
https://www.pcgamer.com/geforce-rtx-2070-founders-edition-review/
https://www.pcworld.com/article/331...geforce-rtx-2070-founders-edition-review.html
https://www.ign.com/articles/2018/10/16/nvidia-geforce-rtx-2070-founders-edition-review
https://www.guru3d.com/articles-pages/nvidia-turing-geforce-2080-(ti)-architecture-review,1.html

Nvidia sent out Founders Edition cards to every major review outlet. There are only one or two that didn't do their initial review on the Founders Edition, because they received it later than the AIB cards in the mail.

The Founders Edition cards have the higher clock speeds of the better bin, not the lower bin.

The 2070 reviews were messed up because NVidia didn't ship out their own overclocked FE cards, but asked board partners to send the lower priced base models.


This statement is proven incorrect by the above links and by the following quote from TechSpot's own review:

"For whatever reason Nvidia thought it best not to send out samples until a few days before the release, thus we're not expecting our RTX 2070 Founders Edition sample until later this week."

https://www.techspot.com/review/1727-nvidia-geforce-rtx-2070/

As proved in the above links, Nvidia clearly did send out Founders Edition cards to all major review outlets. Just like in the past, the opinion of a card is often based on the Founders Edition.

All of the reviews complained that they were given a couple of days to do their reviews because NVidia did not sample FE cards, and they told their board partners to send the lowest-priced model for the initial review.

If you read the quote you provided, the AIBs shipping one low-priced product does not exclude Nvidia from shipping Founders Editions, or AIBs from shipping higher-priced products. Nowhere in that article is a statement made that supports your assumption that Nvidia did not sample FE cards; they clearly did, as shown in the links above. I would also like to see links to all of the reviews you claimed were complaining about not getting sampled FE cards. You need to back statements like that up with proof.
 
Are you saying that having two cards both called RTX 2070 while they have different chips with totally different clock speeds is not fishy? In my opinion this would justify calling one of the cards a 2070 and the other one a 2070 Ti.
Not to mention Nvidia's earlier practice with the GTX 1060, which had 3 GB, 5 GB and 6 GB models with even different numbers of CUDA cores. All three were called 1060, while it is obvious this was intentionally misleading.
So yeah, I think having two cards with the same name but with different chips and considerably different clock speeds is fishy.

Exactly. I don't know why it's so hard to give products with materially different performance a different name.
 
If you have to make up reasons to dislike people, that's your prerogative, but this isn't a valid argument and it isn't even on topic.

I don't dislike you; I just think that you are extremely biased.

You can look at all the reviews; the reviewers are saying they were asked to review the $500 versions of the cards. Amazingly enough, NVidia's FE card is not one of those, and the partners had to provide them. NVidia's direction to board partners to provide those cards came late, and the cards arrived late, which made it hard on reviewers to do the reviews. That's just a fact.

You can also link all the reviews you want, but the bulk of the reviews on embargo-lift day were of the cheap models: MSI Armour, EVGA Black, etc. Some outlets did their own thing based on their own decision, but the majority reviewed multiple cards.

I could link them and quote them, but I am done trying to explain it to you, as you just find something else to argue about.
 
I don't dislike you; I just think that you are extremely biased.

You can look at all the reviews; the reviewers are saying they were asked to review the $500 versions of the cards. Amazingly enough, NVidia's FE card is not one of those, and the partners had to provide them. NVidia's direction to board partners to provide those cards came late, and the cards arrived late, which made it hard on reviewers to do the reviews. That's just a fact.

You can also link all the reviews you want, but the bulk of the reviews on embargo-lift day were of the cheap models: MSI Armour, EVGA Black, etc. Some outlets did their own thing based on their own decision, but the majority reviewed multiple cards.

I could link them and quote them, but I am done trying to explain it to you, as you just find something else to argue about.

I've been on point this entire time. When I'm not, call me out on it. Otherwise, I can't force you to back up your side of the argument. You are making some pretty bold claims about what did and didn't happen with launch cards that need to be verified.
 
I've been on point this entire time. When I'm not, call me out on it. Otherwise, I can't force you to back up your side of the argument. You are making some pretty bold claims about what did and didn't happen with launch cards that need to be verified.

How many quotes would you like from reviewers? I will provide one; you can find the rest on your own, should you be so inclined.

https://www.hardwarecanucks.com/for...idia-geforce-rtx-2070-performance-review.html

"Now we’ve moved onto the RTX 2070 and things are being done a bit differently. You see, NVIDIA has put the burden of this launch upon the board partners’ collective shoulders by insisting they send out samples of cards which have the $500 “starting at” price. The only problem is that NVIDIA’s directive came pretty late in the game and as a result, every partner I spoke to was scrambling to switch production to lower cost cards. Speaking for myself, my samples arrived on a Friday morning with launch on a Tuesday and a flight to Beijing on Saturday afternoon. To say this was all last minute is being generous. "

I have read this, or versions of this, on more than one review site. The fact is that NVidia asked board partners to send out their $499 MSRP card to reviewers. I read this statement on OC3D, Guru3D, Gamers Nexus and heard it on others.

https://videocardz.com/78594/nvidia-geforce-rtx-2070-review-roundup

If you check this link you will find that a majority of the reviews are of the $499 versions from various board partners. Yes, there are a number that have the FE, and some that reviewed other, faster and higher-priced cards as well, but the fact cannot be disputed that NVidia was trying to get reviewers to review the "base" models first. In fact, 18 of the 31 reviews were of the "base" cards. I would add that since NVidia put out the call late, I imagine some reviewers decided to go ahead with what they had, since it really isn't incumbent on them to do what NVidia wants, and the added work wasn't worth their while.
 
How exactly does one compare the features of the new RTX cards when there isn't even a single game out yet that uses RT?

Turing is all marketing hype and little substance, just like AMD's Vega. Until there is widespread adoption of RT / DLSS, all those fancy new features and wasted die space do absolutely nothing for you.

"If the new cards did not have the advanced capabilities that no other card had you could compare them head to head in benchmarks on existing games and be done with it. That is not their main purpose so it's hardly relevant."

Um, yes, people do buy video cards to play existing games. That's literally how it's always worked. No one buys a card hoping it will start working, five years later, the way it should have on day one.
I use the tech in apps; gaming is not the primary use for everyone's computer.
 
But binning has been used for years by every manufacturer; this is nothing uncommon. Yes, it's running a different chip, but the performance bump is small, as is the price bump. You have a variety of GTX 1060s to buy, and their price and performance can vary a lot. Anyway, looking at average benchmarks from users shows the 2070 is about as powerful as a 1080 Ti, which makes this even better bang for your buck. I'd rather worry about GPUs not working.
Choice is always good. I guess some people would prefer companies just recycle chips that overperform the listed specs.
 
I use the tech in apps; gaming is not the primary use for everyone's computer.

We are talking about graphics cards, which are primarily intended for gaming. Nvidia isn't advertising these cards for better Excel performance.
 
We are talking about graphics cards, which are primarily intended for gaming. Nvidia isn't advertising these cards for better Excel performance.
They are for use in creative design software, where they can shave hours off of production time. They're not really useful where graphics are not being rendered, since rendering graphics is one of the main uses of graphics cards.
 
They are for use in creative design software, where they can shave hours off of production time. They're not really useful where graphics are not being rendered, since rendering graphics is one of the main uses of graphics cards.

You might have a point if these were professional cards, but they are not. Movies like Avatar were rendered on Quadro cards, not consumer cards. Nvidia is advertising these for gaming because that will by far be their primary use. Otherwise, Nvidia has other product lines better suited to that usage.
 
You might have a point if these were professional cards, but they are not. Movies like Avatar were rendered on Quadro cards, not consumer cards. Nvidia is advertising these for gaming because that will by far be their primary use. Otherwise, Nvidia has other product lines better suited to that usage.
Not every small, one-person studio has the budget of a large production company making Avatar. Lol
Currently the best use is rendering in real time to create 3D images and video for advertising. That's where I get the most use right now. As time goes on, more games will make use of ray tracing or DLSS or both, so usage will shift more toward gaming as that happens. For me, building a PC today is all about having a PC that will work well in 3, 4, 5, even 7 years, not just one for today. I always like to future-proof when I build a new PC, and these cards will help you do that today.
I understand that not everyone can afford to buy technology for tomorrow that isn't fully used today, but to say that it's not something they would want if they could afford it is just sour grapes.
My previous comment was in response to your comment that these cards were not being made to run Excel. The cards are being made for applications that render graphics.
 