Gigabyte accidentally leaks 12GB RTX 4070 and 8GB RTX 4060

AlphaX

Forward-looking: Details about the upcoming RTX 4070 appear to have been leaked by Gigabyte. The leak also includes some information on the RTX 4060, though that part has left many users somewhat disappointed.

Gigabyte Control Center is a utility that allows users to manage their various Gigabyte hardware. One of the more prominent product lines that takes advantage of this software is the company's graphics cards. Gigabyte is usually quick to update the program to enable support for the latest GPUs, but this time it may have been too quick.

For Control Center's latest update, version 23.03.02.01, the release notes had your standard bug fixes: clearing issues with RGB, settings not applying, etc. However, at the bottom of the page, there was one more entry in the "Support" category of changes. According to Gigabyte, this version added support for the RTX 4070 Aero OC 12 GB (GV-N4070AERO OC-12GD) and the RTX 4060 Gaming OC 8 GB (GV-N4060GAMING OC-8GD).

Gigabyte clearly made a mistake here, as the line denoting support for these new graphics cards was recently removed. However, this single accidental note appears to have confirmed small bits of information regarding Nvidia's next two Ada Lovelace GPUs.

Details about the soon-to-launch RTX 4070 were bound to leak eventually, and Gigabyte appears to have leaked the memory capacity of the card. The RTX 4070 will have 12 GB of (likely GDDR6X) memory, an improvement over the RTX 3070 which featured only 8 GB of GDDR6. This means the RTX 4070 will boast the same memory setup as its bigger brother, the RTX 4070 Ti.

More interesting, but also more disappointing, is the leaked information about the RTX 4060. Last generation, the RTX 3060 featured 12 GB of GDDR6 memory, more than almost every other card in the Ampere family. This time around, Nvidia appears set to be much stingier with memory. Gigabyte's leak indicates the RTX 4060 will include only 8 GB, four gigabytes less than the previous generation.

While the RTX 4070's increased memory capacity is a welcome addition, it is a bit disappointing to see that Nvidia's upcoming RTX 4060 actually has less memory than its predecessor. Due to this change, the RTX 4060 could be a risky card for gamers who intend to play at 4K, simply because of memory constraints.


 
With how expensive these GPUs are getting, and with how poor their generational improvements and specs are, we really badly need 10-year warranties on these things; not just a paper warranty, but an enforced warranty with true consumer protections in place.

But then again, I guess people can grab consoles for a fraction of the price, and those do have warranties and massive improvements per generation.
 
With how expensive these GPUs are getting, and with how poor their generational improvements and specs are, we really badly need 10-year warranties on these things; not just a paper warranty, but an enforced warranty with true consumer protections in place.

But then again, I guess people can grab consoles for a fraction of the price, and those do have warranties and massive improvements per generation.
I wouldn't even be gaming on PC if it wasn't for trainers and mods.

PS5 and Xbox will be getting the same great games (and more) for years to come, games that will just work, all for $500. Meanwhile, the PC folks will be dropping almost two grand on cards that need to make up frames; their actual party trick is "imagination," which is pretty sad.

And yeah, you don't have to buy high end, but even the middle ground is expensive and still may not outlast these consoles. The current consoles are honestly so damn good it's unfair when you really look at them.
 
A 192-bit bus in some cases kills the 4K performance of the 4070 Ti, and 8 GB on a 128-bit bus in some cases kills the 1440p performance of the 6600 XT. The 4060 will be a 1080p GPU for 500-600 dollars. Are those sheep seriously going to buy it because, you know, DLSS and RT? Like, how stupid are people, actually?
 
I wouldn't even be gaming on PC if it wasn't for trainers and mods.

PS5 and Xbox will be getting the same great games (and more) for years to come, games that will just work, all for $500. Meanwhile, the PC folks will be dropping almost two grand on cards that need to make up frames; their actual party trick is "imagination," which is pretty sad.

And yeah, you don't have to buy high end, but even the middle ground is expensive and still may not outlast these consoles. The current consoles are honestly so damn good it's unfair when you really look at them.
Tbf, the 6700 XT, which is one of the most reasonably priced cards ATM (and still far too expensive imo, although I did get one), isn't far from the consoles in performance. Depending on what the 7600 XT and 7700 (XT) look like price/performance-wise, they might be a good entry point for gamers.
But indeed, as bad as the PS4 looked compared to PCs during its early years, that's how good the consoles look this time around.

As someone who does a lot more on the PC than gaming and prefers mouse and keyboard for anything that isn't a racing or fighting game, I'll stick to the PC despite the prices. But I do hope AMD, and especially Nvidia, get their heads out of their asses and drop their prices.

Until the success of ChatGPT, I did think that would happen, as PC gamers would be the #1 customer again instead of miners. But now we're second-rate customers again, with the data centers being far more profitable and gamers expected to pay high margins as well.
 
Tbf, the 6700 XT, which is one of the most reasonably priced cards ATM (and still far too expensive imo, although I did get one), isn't far from the consoles in performance.

???

The 6700 XT is a good 30% or so faster than the consoles.

In the tests Digital Foundry did, in the vast majority of games the consoles perform somewhere between an RTX 2070 and an RTX 2070 Super. In current-gen terms, that means RTX 3060 and RX 6600 XT ballpark. In comparison, the 6700 XT performs close to the RTX 2080 Ti.

And the other guy above talking about "consoles are great while PC users will be spending $2000" is completely clueless. If you want console quality, all you need GPU-wise is a ~$250 6600 XT.
 
Does Gigabyte's accidental listing of a 4060 card with 8 GB really mean that will be the only option, or just the only one they leaked? We still don't know about Nvidia's plans at this point.

Anyway, the 4060 is a 1080p- and 1440p-class gaming card, not 4K. I mean, you'll be able to get a lot of titles to run at that resolution, but it will be with reduced features.
 
Does Gigabyte's accidental listing of a 4060 card with 8 GB really mean that will be the only option, or just the only one they leaked? We still don't know about Nvidia's plans at this point.

Anyway, the 4060 is a 1080p- and 1440p-class gaming card, not 4K. I mean, you'll be able to get a lot of titles to run at that resolution, but it will be with reduced features.
xx60 cards have always been 1080p60 cards since the 1060 arrived (that was also the requirement for VR back then, afaik), and seven years later they're still in the 1080p60 ballpark, which is pretty baffling. There is something wrong with how Nvidia sets this up.
 
A 128-bit bus seems very low on the 4060 :joy:
Even 1440p would probably be a problem.
It will probably not even match the 3060 Ti here.

I'd personally not buy a GPU (in 2023-2024) with less than a 192-bit bus and 12 GB of VRAM for 1440p.
 
I wouldn't even be gaming on PC if it wasn't for trainers and mods.

PS5 and Xbox will be getting the same great games (and more) for years to come, games that will just work, all for $500. Meanwhile, the PC folks will be dropping almost two grand on cards that need to make up frames; their actual party trick is "imagination," which is pretty sad.

And yeah, you don't have to buy high end, but even the middle ground is expensive and still may not outlast these consoles. The current consoles are honestly so damn good it's unfair when you really look at them.
I think you are deeply confused. A 350-400 euro card is much faster than any console. What 2k are you talking about?
 
A 128-bit bus seems very low on the 4060 :joy:
Even 1440p would probably be a problem.
It will probably not even match the 3060 Ti here.
A 4070 Ti with a 192-bit bus and 504 GB/s of global memory bandwidth has no problems keeping up with a 384-bit, 1008 GB/s 3090 Ti, though. The large, low latency L2 cache in Ada more than makes up for the reduction in memory controller count -- 48 MB vs 6 MB in the AD104 vs GA102. While not fully confirmed yet, the 4060 is likely to have 24 MB, which is still 20 MB more than that in the GA104 and it will be clocked a lot higher too (which will also offset the reduction in SM count, 4060 vs 3060 Ti).

What's particularly interesting to note is that a 128-bit bus means cards will only be able to sport 4 or 8 GDDR6 modules, using clamshell mode for the latter. Both Samsung and Micron only mass produce 16 Gb parts now, so all cards will either be 8 or 16 GB, but the slowest RAM they sell is 14 Gbps -- the same as that in the 3060 Ti. That means the 4060's memory bandwidth will be 224 GB/s, at the very least, up to a potential 320 GB/s if a vendor used 20 Gbps GDDR6.

Sure that's 50 to 71% of the 3060 Ti's bandwidth but just as with the 4070 Ti, the larger and faster L2 cache will make up for this.
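For anyone who wants to double-check those numbers, here's a minimal sketch of the arithmetic in Python, assuming the 3060 Ti's stock 256-bit, 14 Gbps configuration as the reference point:

```python
# GDDR6 bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps)
def gddr6_bandwidth(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

rtx_3060_ti  = gddr6_bandwidth(256, 14)  # 448 GB/s, the stock card
rtx_4060_min = gddr6_bandwidth(128, 14)  # 224 GB/s with 14 Gbps modules
rtx_4060_max = gddr6_bandwidth(128, 20)  # 320 GB/s with 20 Gbps modules

print(f"{rtx_4060_min:.0f}-{rtx_4060_max:.0f} GB/s, or "
      f"{rtx_4060_min / rtx_3060_ti:.0%}-{rtx_4060_max / rtx_3060_ti:.0%} of a 3060 Ti")
# -> 224-320 GB/s, or 50%-71% of a 3060 Ti
```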
 
A 4070 Ti with a 192-bit bus and 504 GB/s of global memory bandwidth has no problems keeping up with a 384-bit, 1008 GB/s 3090 Ti, though. The large, low latency L2 cache in Ada more than makes up for the reduction in memory controller count -- 48 MB vs 6 MB in the AD104 vs GA102. While not fully confirmed yet, the 4060 is likely to have 24 MB, which is still 20 MB more than that in the GA104 and it will be clocked a lot higher too (which will also offset the reduction in SM count, 4060 vs 3060 Ti).

What's particularly interesting to note is that a 128-bit bus means cards will only be able to sport 4 or 8 GDDR6 modules, using clamshell mode for the latter. Both Samsung and Micron only mass produce 16 Gb parts now, so all cards will either be 8 or 16 GB, but the slowest RAM they sell is 14 Gbps -- the same as that in the 3060 Ti. That means the 4060's memory bandwidth will be 224 GB/s, at the very least, up to a potential 320 GB/s if a vendor used 20 Gbps GDDR6.

Sure that's 50 to 71% of the 3060 Ti's bandwidth but just as with the 4070 Ti, the larger and faster L2 cache will make up for this.

There's a world of difference between 128-bit and 192-bit.
Name just one 128-bit card that performs well at 1440p.

Cache can help to some degree but it will still be a small bus with low bandwidth.

I will be VERY surprised if a 4060 with a 128-bit bus and 8 GB beats a 3060 Ti with a 256-bit bus and 8 GB.

However, we probably won't see 4060 before H2...
I doubt they will put the best GDDR6 modules on it. I expect 16-18 Gbps.

You mention the 4070 Ti and 3090 Ti, and yes, they perform almost the same at 1440p, but when you go to 4K the 3090 Ti performs 10% better; that is bandwidth bottlenecking for sure.
 
I'm really starting to think that Jensen considers his customers to be very easily impressed if this is what he's offering them.

Oh well, it's like I always say about nVidia products.... It's no skin off my nose because I won't be buying it! :laughing:
 
There's a world of difference between 128-bit and 192-bit.
Name just one 128-bit card that performs well at 1440p.

Cache can help to some degree but it will still be a small bus with low bandwidth.

I will be VERY surprised if a 4060 with a 128-bit bus and 8 GB beats a 3060 Ti with a 256-bit bus and 8 GB.

However, we probably won't see 4060 before H2...
I doubt they will put the best GDDR6 modules on it. I expect 16-18 Gbps.

You mention the 4070 Ti and 3090 Ti, and yes, they perform almost the same at 1440p, but when you go to 4K the 3090 Ti performs 10% better; that is bandwidth bottlenecking for sure.
The Radeon RX 6600 XT competes perfectly well against the GeForce RTX 3060, despite having a 128-bit bus (compared to a 192-bit bus) and 34% less memory bandwidth. Why? The 32 MB of L3 cache, that's why. It doesn't fare as well against the 3060 Ti (16% slower, on average, at 1440p) but that's because it has 35% less FP32 throughput than the Ti, compared to 17% less throughput against the standard 3060.

In GPUs, large, low latency cache, especially L2, helps enormously. The 3090 Ti and 4070 Ti have exactly the same FP32 throughput, as well as texel and pixel rates. The Ada card has half the global memory bandwidth of the 3090 Ti and yet is only 10% slower at 4K (Steve's testing showed it was 3% slower, on average).

The final specs of the 4060 are yet to be confirmed, but let's say that the general rumors of it being a 30 SM GPU are true. The 3060 Ti has 38 SMs, and given that Ada has the same architectural structure as Ampere, the 4060 will only need to be clocked 27% higher than the 3060 Ti to have the same baseline performance. The older model's boost clock is 1.665 GHz, so the 4060 will need to have a boost of 2.1 GHz. The 4070 Ti's boost is 2.6 GHz, and routinely exceeds this, and the mobile version of the 4060 has a boost of 2.37 GHz, so the desktop version will have no problem running 27% faster than the 3060 Ti.
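As a quick sanity check of that estimate, here's the same arithmetic in a short sketch; the 30 SM figure is the rumor mentioned above, not a confirmed spec:

```python
# Iso-architecture estimate: with the same per-SM structure,
# baseline throughput scales roughly with SM count * clock speed.
sm_3060_ti, boost_3060_ti = 38, 1.665   # official boost clock, GHz
sm_4060_rumored = 30                    # rumored SM count, not confirmed

required_scale = sm_3060_ti / sm_4060_rumored    # ~1.27
required_boost = boost_3060_ti * required_scale  # ~2.11 GHz

print(f"The 4060 needs roughly {required_scale - 1:.0%} higher clocks "
      f"(about {required_boost:.2f} GHz boost) to match a stock 3060 Ti.")
```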
 
???

The 6700 XT is a good 30% or so faster than the consoles.

In the tests Digital Foundry did, in the vast majority of games the consoles perform somewhere between an RTX 2070 and an RTX 2070 Super. In current-gen terms, that means RTX 3060 and RX 6600 XT ballpark. In comparison, the 6700 XT performs close to the RTX 2080 Ti.

And the other guy above talking about "consoles are great while PC users will be spending $2000" is completely clueless. If you want console quality, all you need GPU-wise is a ~$250 6600 XT.

You can't just look at hardware specs for gaming performance, though; consoles have a standard setup and are optimized entirely for games.

Arc's driver issues (and AMD's too) are just the tip of the iceberg, but they are clear issues for PC gaming. Then add the OS, other components, the specific game ports, and whatever storefront they're on, and things can add up.

Consoles should absolutely be part of the equation/competition. Not really sure why some people are so adamantly against that; if you don't want to ever play on a console, that's fine, but just discounting them as options makes zero sense. They're very strong this gen and are a much better value, hands down.
 
The Radeon RX 6600 XT competes perfectly well against the GeForce RTX 3060, despite having a 128-bit bus (compared to a 192-bit bus) and 34% less memory bandwidth. Why? The 32 MB of L3 cache, that's why. It doesn't fare as well against the 3060 Ti (16% slower, on average, at 1440p) but that's because it has 35% less FP32 throughput than the Ti, compared to 17% less throughput against the standard 3060.

In GPUs, large, low latency cache, especially L2, helps enormously. The 3090 Ti and 4070 Ti have exactly the same FP32 throughput, as well as texel and pixel rates. The Ada card has half the global memory bandwidth of the 3090 Ti and yet is only 10% slower at 4K (Steve's testing showed it was 3% slower, on average).

The final specs of the 4060 are yet to be confirmed, but let's say that the general rumors of it being a 30 SM GPU are true. The 3060 Ti has 38 SMs, and given that Ada has the same architectural structure as Ampere, the 4060 will only need to be clocked 27% higher than the 3060 Ti to have the same baseline performance. The older model's boost clock is 1.665 GHz, so the 4060 will need to have a boost of 2.1 GHz. The 4070 Ti's boost is 2.6 GHz, and routinely exceeds this, and the mobile version of the 4060 has a boost of 2.37 GHz, so the desktop version will have no problem running 27% faster than the 3060 Ti.
The 3060 and 6600 XT are both terribly slow for 1440p gaming and are more 1080p cards.

The 3060 Ti and 6700 XT are night-and-day faster than those at 1440p, and those have 256-bit/192-bit buses.

A 128-bit bus usually means 1080p. Cache won't change much. Cards are still low-end garbage when the bus is gimped hard.
 
The 3060 and 6600 XT are both terribly slow for 1440p gaming and are more 1080p cards.

The 3060 Ti and 6700 XT are night-and-day faster than those at 1440p, and those have 256-bit/192-bit buses.
The 3060 Ti also has 28% greater FP32 throughput, 27% more texel fill rate, 56% more pixel fill rate, and 33% more L2 cache than the 3060. The 6700 XT has 25% more FP32 throughput and texel fill rate, and 50% more L2 cache and 200% more L3 cache than the 6600 XT. One cannot simply point at the memory bus widths and say that this is the sole reason why they perform better.
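For reference, peak FP32 throughput figures like those come from 2 FLOPs per shader per clock times shader count times boost clock; here's a rough sketch using the published shader counts and boost clocks (the small differences from the percentages above come down to which clock you plug in):

```python
# Peak FP32 throughput (TFLOPS) = 2 * shader count * boost clock (GHz) / 1000
def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    return 2 * shaders * boost_ghz / 1000

rtx_3060    = fp32_tflops(3584, 1.777)  # ~12.7 TFLOPS
rtx_3060_ti = fp32_tflops(4864, 1.665)  # ~16.2 TFLOPS
rx_6600_xt  = fp32_tflops(2048, 2.589)  # ~10.6 TFLOPS
rx_6700_xt  = fp32_tflops(2560, 2.581)  # ~13.2 TFLOPS

print(f"3060 Ti vs 3060:    +{rtx_3060_ti / rtx_3060 - 1:.0%}")   # ~+27%
print(f"6700 XT vs 6600 XT: +{rx_6700_xt / rx_6600_xt - 1:.0%}")  # ~+25%
```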

Cache won't change much.
Repeating this once more -- the 4070 Ti has a memory bus half as wide as that of the 3090 Ti. The major metrics for GPU performance in gaming (FP32 throughput, texel and pixel fill rate) are exactly the same. The Ada card competes with the Ampere card because of the large L2 cache, not in spite of it.

The 4060 may well be no better than the 3060 Ti in gaming at 1080p/1440p -- I say this because we don't know what the exact specifications of that card are. We can estimate it from the 4060 Mobile specs but until the clock speeds and final configuration are confirmed, it's nothing more than speculation at this point.

That said, if the 4060 has a sufficient amount of L2 cache, clocked high enough too, then the use of a 128-bit wide bus will not be a limiting factor. There's more than enough evidence from the results of Ada, RDNA 2, and RDNA 3 cards to show this.

Cards are still low-end garbage when the bus is gimped hard.
That used to be the case, until AMD and Nvidia significantly altered the cache structures in their GPUs.
 
The 3060 Ti also has 28% greater FP32 throughput, 27% more texel fill rate, 56% more pixel fill rate, and 33% more L2 cache than the 3060. The 6700 XT has 25% more FP32 throughput and texel fill rate, and 50% more L2 cache and 200% more L3 cache than the 6600 XT. One cannot simply point at the memory bus widths and say that this is the sole reason why they perform better.


Repeating this once more -- the 4070 Ti has a memory bus half as wide as that of the 3090 Ti. The major metrics for GPU performance in gaming (FP32 throughput, texel and pixel fill rate) are exactly the same. The Ada card competes with the Ampere card because of the large L2 cache, not in spite of it.

The 4060 may well be no better than the 3060 Ti in gaming at 1080p/1440p -- I say this because we don't know what the exact specifications of that card are. We can estimate it from the 4060 Mobile specs but until the clock speeds and final configuration are confirmed, it's nothing more than speculation at this point.

That said, if the 4060 has a sufficient amount of L2 cache, clocked high enough too, then the use of a 128-bit wide bus will not be a limiting factor. There's more than enough evidence from the results of Ada, RDNA 2, and RDNA 3 cards to show this.


That used to be the case, until AMD and Nvidia significantly altered the cache structures in their GPUs.

Ada has better memory compression as well; at least this was mentioned in videos about the arch. But yeah, cache helps.

However, cards with 128-bit and 192-bit buses still struggle at 1440p and 2160p, respectively.

If bandwidth did not matter, why is Nvidia using a 384-bit bus for the flagship? They could up the cache and use modules of twice the capacity for identical total VRAM, but 504 GB/s is not great for 4K gaming regardless of cache.

The 4080 only uses a 256-bit bus but has faster memory than the 4090.

It does not change the fact that low-end cards get a 64-128-bit bus, and I would not personally buy or recommend any card with one.
 
However, cards with 128-bit and 192-bit buses still struggle at 1440p and 2160p, respectively.

If bandwidth did not matter, why is Nvidia using a 384-bit bus for the flagship? They could up the cache and use modules of twice the capacity for identical total VRAM, but 504 GB/s is not great for 4K gaming regardless of cache.

The 4080 only uses a 256-bit bus but has faster memory than the 4090.

It does not change the fact that low-end cards get a 64-128-bit bus, and I would not personally buy or recommend any card with one.
Of course, global memory bandwidth matters -- I'm not suggesting for one moment that it doesn't. However, the cache structures in Ada and RDNA 2/3 greatly reduce the load on the VRAM; without them, the large number of ALUs in the top-end chips would be far more impacted by cache misses and the enormous latency penalty with DRAM.

The AD102 in the RTX 4090, for example, has 16384 ALUs -- 52% more than the GA102 in the 3090 Ti. It needs the 72MB of L2 cache and the 384-bit bus running at 21 Gbps. This is also why AMD went with a 384-bit bus for the Navi 31 in the 7900 XTX, as it has double the ALU count of the Navi 21.

GDDR6X is only available in 16 Gb modules now, so a 256-bit bus would only permit 16 GB or 32 GB in clamshell mode. For the 4090, this would result in a smaller memory footprint than the 3090 and the professional cards that use the GA102, which were 24/48 GB -- for the market they're used in, the decrease wouldn't be acceptable.
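As a quick sketch of that module math (a 16 Gb module is 2 GB, with one module per 32-bit memory controller, doubled in clamshell mode):

```python
# Possible VRAM capacities for a given bus width with 16 Gb (2 GB) GDDR6/6X modules:
# one module per 32-bit controller, optionally doubled via clamshell mode.
def vram_options_gb(bus_width_bits: int, module_gb: int = 2) -> tuple[int, int]:
    modules = bus_width_bits // 32
    return modules * module_gb, modules * module_gb * 2  # (normal, clamshell)

print(vram_options_gb(128))  # (8, 16)  -> the 4060 configurations discussed above
print(vram_options_gb(256))  # (16, 32) -> short of the 24/48 GB the GA102 cards offer
print(vram_options_gb(384))  # (24, 48) -> matches the 4090 and the AD102 pro cards
```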

One can argue that a figure such as 504 GB/s isn't great for 4K gaming, but the evidence clearly shows that it is, when supported by an appropriate cache system. If one wishes to believe otherwise, then that's fine, but AMD, Intel, and Nvidia are all firmly set on the large-cache, narrower-memory-bus path in their GPU designs.
 
You can't just look at hardware specs for gaming performance, though; consoles have a standard setup and are optimized entirely for games.

Arc's driver issues (and AMD's too) are just the tip of the iceberg, but they are clear issues for PC gaming. Then add the OS, other components, the specific game ports, and whatever storefront they're on, and things can add up.

Consoles should absolutely be part of the equation/competition. Not really sure why some people are so adamantly against that; if you don't want to ever play on a console, that's fine, but just discounting them as options makes zero sense. They're very strong this gen and are a much better value, hands down.

That's not how any of this works. You can't pull the "consoles perform better than their specs suggest" nonsense when it's literally Digital Foundry testing actual, released games running in real time with an FPS counter on the screen, and telling you "this is how the consoles run at this resolution and these settings, and to match this on PC you need a 2070/2070 Super." There is no "magic optimization"; those are tests of the real performance of real games, and any "optimization" that went into them is already accounted for because you're seeing how the actual game runs.

Also, I'm not "adamant against consoles," I'm just correcting the person who said the 6700 XT is "almost as good as the consoles" by letting them know that the 6700 XT is actually a lot faster than the consoles, which is an objective, measurable fact.
 
That's not how any of this works. You can't pull the "consoles perform better than their specs suggest" nonsense when it's literally Digital Foundry testing actual, released games running in real time with an FPS counter on the screen, and telling you "this is how the consoles run at this resolution and these settings, and to match this on PC you need a 2070/2070 Super."
That makes zero sense but okay whatever, just link up what you're talking about and I'll read it when I have time.
 
That makes zero sense but okay whatever, just link up what you're talking about and I'll read it when I have time.
Go to YouTube and search "digital foundry pc vs consoles"; they have dozens of game tests done over the past two years. Console performance ranges from an RTX 2060 Super in the worst-case scenario (Watch Dogs Legion) to an RTX 2080 in the best-case scenario (Assassin's Creed Valhalla, Death Stranding), with the majority of games landing somewhere between the 2070 and 2070 Super.
 