"I do not see it a good investment to be buying a video card later this year with HDMI 2.0 and DisplayPort 1.4, when HDMI 2.1 has already been released and DisplayPort 2.0 is almost here. Looks like video card manufacturers want to suck you in for this one, so next year they will just update the ports and get your money a second time. They won't suck me into this."
Unless you need high refresh rates or HDR modes, actually own a monitor or TV that supports those features, and have a current GPU that can drive them playably, your point is null.
This is fortune-telling with very little to go on. I'm confident Nvidia will do more than fine, but AFAIK we know very little (at least, TechSpot mentions next to nothing).
Ampere is like a year away, dude...
It's possible that, like Intel with their 10nm node, Nvidia will have trouble getting high clocks. That might be one of the reasons they haven't talked much about it, and why they'll "presumably" focus on power efficiency. It should also mean they can fit more RT and Tensor cores into the GPU to make those features more viable.
Getting to play Cyberpunk 2077 with all the RT on will be enough to justify my 2080 Ti purchase, just like I bought the Titan X Maxwell when The Witcher 3 came out (the GTX 980 was stuttering like hell).
We'll probably not see the same jump in performance as Pascal (vs. Maxwell) with their first-gen GPUs on 7nm. I fully expect them to just refine Turing. I'll be happy if they manage 15-20% by just adding more cores, even though they'll prolly not increase the clocks. The increased complexity of Nvidia's GPUs makes the transition trickier.
Hopefully AMD manages to get something out for the high-end market by the end of the year or in early 2020, and TSMC's 7nm+ will not be delayed. AMD needs to introduce ray tracing in some shape or form together with the 7nm+ node (late 2020 or 2021?). I think Nvidia's 2nd-gen RT cores should finally be good enough to get more devs to use the feature (thanks, 1st-gen beta testers).
"Nice article! There are a few points that weren't entirely accurate. Navi is not a vector processing architecture. With GCN, AMD moved to scalar processing just like Nvidia, and that continues with Navi."
Arguably, they're both scalar-vector architectures, but AMD themselves class the CUs as vector processors.
"The article also lists the 2070 Super as having 48 SMs and being able to track 6,144 threads. It only has 40 SMs and can track 40,960 threads across the chip (same as the 5700 XT)."
Good catch about the SMs and the thread count; I'd misread the CUDA documentation. I'll amend the article now.
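For anyone who wants to check the math: Turing SMs keep up to 32 warps of 32 threads resident at a time, so (assuming I have those per-SM limits right) the 40,960 figure works out like this:

    # Sanity check on the thread figure above. Assumes Turing's
    # per-SM limits: 32 resident warps x 32 threads per warp.
    warps_per_sm = 32
    threads_per_warp = 32
    sm_count = 40  # RTX 2070 Super

    threads_per_sm = warps_per_sm * threads_per_warp  # 1,024
    total_threads = sm_count * threads_per_sm         # 40,960
    print(f"{total_threads:,} threads in flight across the chip")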
The biggest problem is that game developers are bribed by Nvidia. So now AMD has to bribe them too. Resources are spent on corruption instead of making chips better.
I mean, just think about this: everyone knows that AMD chips are better for cryptomining, which proves that AMD has faster hardware. It's just that cryptomining software is written to use it optimally (because you earn more money that way), while games are made to run faster for whoever bribed the developers more. And that's how Nvidia wins. If Nvidia were really faster, CUDA cores would outperform AMD's stream processors in cryptomining too.
That didn't happen; CUDA was obviously much slower in all the compute benchmarks. Since game calculations are nothing but massively parallel computing, yet Nvidia is somehow faster there, the only way that can happen is if code is deliberately made to run slower on AMD. In other words, bribery and corruption. That's why I don't support Nvidia: I don't want to support criminal behavior, even if it would bring me a few more FPS in criminally written games.
That's why I don't buy those games either. If I'm tempted to play such games, I'll get myself pirated versions, so I don't finance the corrupt game developers. Pirates seem to be more honest than they are.
"So it is official finally: Nvidia's architecture is better despite being on the older node and having like half of the chip wasted on ray tracing."
Well, a 1660 Ti (TU116) on 7nm would cannibalize the entire Navi lineup and almost all of the current Turing lineup. At a +25% perf uplift, a 7nm TU116 should be on par with the 5700 while consuming just 120W. However, Nvidia ain't gonna shoot themselves in the foot, so it might be a while before we see a die-shrunk TU116.
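Spelling out that napkin math (the index numbers are just my assumptions, not benchmarks):

    # Napkin math for the claim above; every number here is an
    # assumption from this post, not a measured result.
    tu116_perf = 1.00        # GTX 1660 Ti (TU116) as the baseline index
    shrink_uplift = 0.25     # assumed gain from a 7nm die shrink
    rx_5700_perf = 1.25      # RX 5700 taken as ~25% faster than a 1660 Ti

    shrunk_tu116 = tu116_perf * (1 + shrink_uplift)
    print(f"7nm TU116 index: {shrunk_tu116:.2f} vs RX 5700 index: {rx_5700_perf:.2f}")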
In other words, you are clueless.
Nvidia GPUs like the 1070 Ti and 1080 Ti were actually more profitable and had a higher return on investment, due to lower power consumption and being more versatile. Raw power isn't everything; AMD could say a lot about that, with their cards always having more TFLOPS than Nvidia's yet being slower in gaming.
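Case in point, using published boost-clock specs (peak FP32 = 2 FLOPs per FMA x shader count x clock; the specific card pairing is my example, not a benchmark):

    # Why raw TFLOPS don't map directly to gaming performance:
    # peak FP32 = 2 FLOPs per FMA x shader count x clock (GHz) / 1000.
    def tflops(shaders, boost_ghz):
        return 2 * shaders * boost_ghz / 1000

    vega_64 = tflops(4096, 1.546)    # ~12.7 TFLOPS
    gtx_1080 = tflops(2560, 1.733)   # ~8.9 TFLOPS
    print(f"Vega 64: {vega_64:.1f} TFLOPS vs GTX 1080: {gtx_1080:.1f} TFLOPS")
    # ...yet the two trade blows in actual games.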
People bought AMD cards at first because they were cheaper; then the situation changed.
As we now know, nobody cared about power consumption back then, did they? And since efficiency seems important to you, RDNA, now or in the future, should be right up your alley.
But Maxxi, you can't protect Nvidia anymore; Jensen can't control all the channels. The truth is out. Major discounts incoming for RTX cards soon.
AMD has a class act going, and it seems the entire gaming industry is on board.