Nvidia reveals more RTX 3060 specs ahead of next week's release

Stoly

Posts: 84   +47
I know memory subsystems aren't created equal, and I understand that this card has a slower bus and slower memory than the 8GB 3060 Ti, meaning the memory bandwidth will be higher on the 8GB card.

But surely if a game uses more than 8GB, then having 8GB of faster memory isn't going to give you as good performance as 12GB of slower memory?

That looks like a catch-22.

If a game requires more than 8 GB of memory, the RTX 3060 may not be able to handle it anyway, as that situation is more likely at 1440p and above. That said, Ampere doesn't seem to benefit that much from higher bandwidth.

I don't think the 3060 would be faster than the Ti, but it could certainly come close in a few cases.
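For anyone wanting to sanity-check the bandwidth comparison, the published specs work out as follows (a quick sketch in Python; the bus widths and per-pin data rates are the figures Nvidia lists for each card):

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps).
# RTX 3060: 192-bit bus, 15 Gbps GDDR6; RTX 3060 Ti: 256-bit bus, 14 Gbps GDDR6.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Return peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_3060 = peak_bandwidth_gbs(192, 15.0)     # 360.0 GB/s
rtx_3060_ti = peak_bandwidth_gbs(256, 14.0)  # 448.0 GB/s

print(f"RTX 3060:    {rtx_3060:.0f} GB/s")
print(f"RTX 3060 Ti: {rtx_3060_ti:.0f} GB/s")
```

So the Ti's wider bus more than offsets its slightly slower memory chips: roughly 24% more peak bandwidth despite the lower data rate.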
 

brucek

Posts: 769   +1,063
TechSpot Elite
I ask because this is the third or so post reminding us plebs that the Nvidia gods are about to bestow upon us the gift that is the 3060, and considering what Nvidia did to Steve, I keep wondering: why so much Nvidia *ss-kissing?
I see two articles from two different authors, an earlier one with the official release date and a second one when additional details about the card leaked out early. While I bet CNN felt free to skip both pieces of news, I can't imagine many blogs catering to PC gaming enthusiasts did.
 

brucek

Posts: 769   +1,063
TechSpot Elite
That's the part that many don't get; it's the same with their hate toward scalpers.

Everyone points fingers at them, but continues buying from them.

At the same time, they don't blame the retailers that are selling to them.

But as you said, for the same reasons, retailers don't care who they sell to; they just want that merchandise moved and paid for.

It sucks for us customers, but from a business point of view, all of them win.

All we have to do is simply not buy from scalpers, and at least that will be one problem we don't have to worry about.

At this point, though, I do wonder how many parts would need to be manufactured to satisfy both the miners and the gamers.
Again, of all the parties, only the scalper is breaking rules and potentially committing (non-prosecuted) crimes. Most retailers have terms and conditions which prohibit bots and sales of more than one per household. Scalpers who use bots anyway, and provide fake identities and/or addresses, are in violation of at minimum those rules and, depending on the circumstances, potentially multiple laws around wire fraud and other identity protections.

The manufacturer is free to sell to whom they choose, and the retailer is free to set their terms too. It is the scalper who is not respecting the terms and thus not respecting the market. They deserve the hate they get, and while I have to agree there are plenty of more pressing crimes, I'd love to see some US attorney pick out a couple of worst-case offenders and try throwing the book at them: wire fraud for the fake identities, and the Computer Fraud and Abuse Act for the bots used in violation of terms and conditions. I'm not sure whether both would land, although if the jury were full of people like me, they might. Either way, the message would get sent that these could quickly become activities that do not pay.
 

neeyik

Posts: 1,839   +2,151
Staff member
But surely if a game uses more than 8GB then having 8GB of faster memory isn’t going to give you as good performance as 12GB of slower memory?
It can be, due to the high number of passes that take place in a modern game. Each one will write and subsequently read frame data out of the VRAM, and the allocation of memory for this is more important than the asset buffers (e.g. vertex, index, constant, texture). Thus you'd want frame resources to stay put and assets to get hoofed out.

DRAM latency is pretty poor, even with GDDR6, so any read that results in a complete cache miss (i.e. the data is not in the shared memory, L1, or L2) will stall badly. Since this is all well understood, the drivers and the architecture of the GPU are designed to have more than enough threads on the go to mask stalls caused by memory accesses. And where it just can't be helped, a wider bus means more accesses can take place in parallel, so threads are stalled for a shorter amount of time.

However, if a thread needs such data and it's not in the local memory, the stall will be a far more serious one, although it will only impact a fairly small section of the overall render time. That said, it is very dependent on the game, the rendering within it, and the settings used.

For example, Doom Eternal uses single large memory pools for vertices and textures, and streams data in and out of them as required throughout numerous passes. Ordinarily it wouldn't be a massive problem if there wasn't enough room in local memory for them, but because the renderer accesses this information very frequently, its performance is hit a lot harder than that of other games.

Another aspect to note is that in Ampere, the ROPs are no longer tied to the L2 cache blocks and memory controllers. In Turing and all previous architectures, each ROP cluster was directly connected to a 512 kB L2 cache block, which in turn was connected to a memory controller. So the size of the memory bus dictated the number of ROPs (and thus the performance of all reads/writes).

In Ampere, the ROPs are part of the GPCs, so the bus width and ROP count are independent of each other. This is why the RTX 3060 Ti has 80 ROPs in total (16 more than a 2080 Super), so coupled with its greater memory bandwidth, it'll have better read/write performance at all times than the 3060. And even in titles where the latter's larger memory footprint would be expected to come into play, neither model is really aimed at 4K max-quality-settings.
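The change can be put into numbers (a rough sketch; the per-unit figures of 8 ROPs per 32-bit memory controller on Turing and 16 ROPs per GPC on Ampere are inferred from the counts quoted above, with the 3060 Ti assumed to have five GPCs' worth of ROPs active):

```python
# Turing (and earlier): ROPs scale with the memory bus -- 8 ROPs per
# 32-bit memory controller, each controller paired with a 512 kB L2 slice.
def turing_rops(bus_width_bits: int) -> int:
    return bus_width_bits // 32 * 8

# Ampere: ROPs live in the GPCs instead -- 16 ROPs per GPC -- so the
# ROP count no longer depends on bus width at all.
def ampere_rops(gpc_count: int) -> int:
    return gpc_count * 16

print(turing_rops(256))  # 2080 Super (256-bit): 64 ROPs
print(ampere_rops(5))    # 3060 Ti (5 active GPCs): 80 ROPs
print(ampere_rops(3))    # 3060 (3 GPCs): 48 ROPs
```

Under the old Turing scheme, a 256-bit card could never have had more than 64 ROPs; decoupling the count from the bus is what lets the 3060 Ti reach 80.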
 

Shadowboxer

Posts: 1,448   +1,046
It can be, due to the high number of passes that take place in a modern game. Each one will write and subsequently read frame data out of the VRAM, and the allocation of memory for this is more important than the asset buffers (e.g. vertex, index, constant, texture). Thus you'd want frame resources to stay put and assets to get hoofed out.
Thank you! That makes sense.
 

McMurdeR

Posts: 259   +243
Not really.

Mining is an investment and, as such, subject to a cost-benefit calculation.

What am I going to do with my mining-only GPUs after the mining craze ends? I can't sell them to other miners, and I can't sell them to gamers either because they lack display outputs.

Whereas I can always sell my regular gaming GPUs, used for mining, to you after the craze as "light gaming, never overclocked" at 75% of MSRP, thus recouping a very substantial part of my original investment.

Even more ridiculous is the fact that some miners buy GPUs in bulk directly from importers at bulk prices, and will even make a profit selling those GPUs after the mining craze at close to MSRP.

Mining isn't a sustainable market, so it's unlikely we'll see Nvidia marketing many products directly at it. In any case, that's where all the gaming product is going at the moment, so it's a mystery why either AMD or Nvidia currently bother to market products to gamers at all when the cards are selling so fast they don't even make it to the shelves.
Notwithstanding the mining disruption, the gaming market is itself a very stable and growing one, which Nvidia is unlikely to walk away from.
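The resale argument in the quoted post boils down to simple arithmetic. Here is a sketch of it (all prices are hypothetical placeholders for illustration, not real market figures):

```python
# Effective cost of a GPU once you factor in what it can be resold for.
# A gaming card used for mining recoups much of its price second-hand;
# a mining-only card (no display outputs) has almost no resale market.

def net_cost(purchase_price: float, resale_price: float) -> float:
    """What the card actually ends up costing after selling it on."""
    return purchase_price - resale_price

msrp = 400.0
gaming_card = net_cost(msrp, 0.75 * msrp)  # resold at 75% of MSRP
bulk_card = net_cost(340.0, 380.0)         # bought in bulk below MSRP, sold near MSRP
mining_only = net_cost(msrp, 0.0)          # effectively no resale value

print(gaming_card)  # 100.0 -> only a quarter of MSRP actually spent
print(bulk_card)    # -40.0 -> a net profit on the hardware itself
print(mining_only)  # 400.0 -> the full cost is sunk
```

Which is exactly why miners prefer ordinary gaming cards over mining-only SKUs: the resale value turns most of the purchase price into a refundable deposit.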
 

Markoni35

Posts: 1,025   +410
Is there anything ordinary gamers can do to help ethereum miners lose their shirts?
If GameStop can be bid up to $300, maybe there's a reverse version for them...

Yes. I can buy Bitcoin. Then it will immediately plummet, within a few minutes, from $50,000 to $500, and miners will lose all incentive.

But for this to work I need you to collect at least $5,000 for me to invest. You invest your fortune, I invest my misfortune. Don't worry, I've got plenty.
 

poohbear

Posts: 614   +518
Just FYI, Bitcoin mining uses ASICs; it's Ethereum and other cryptocurrencies that use GPUs for mining. Ethereum has actually grown at a much faster rate than Bitcoin this year, but all the headlines are on Bitcoin (151% YTD for Ethereum vs. 78% YTD for Bitcoin).

My Ethereum-mining friend is dying to get an RTX 3090 but can't find any on the market. One of those mines more than five of the 2017 Radeon cards he managed to buy second-hand.