Nvidia's upcoming Pascal GPU pictured with HBM 2.0

Scorpus


Nvidia's next-generation graphics architecture, codenamed 'Pascal', is expected to launch in the first half of 2016, bringing a large jump in performance that should impress PC gaming enthusiasts. Before Nvidia could detail the GPU publicly, however, a slide from the company's GTC Taiwan 2015 presentation leaked, giving us an early look at the design of the chip.

While we aren't getting a complete look at a board featuring Nvidia's Pascal GPU, the image from GTC Taiwan 2015 does show the die closely flanked by HBM 2.0 stacks. This second-generation high-bandwidth memory technology is expected to give Pascal memory bandwidth in the 1 TB/s range, double that of the HBM 1.0 AMD uses in its current-generation Fiji GPUs.
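For context, that figure squares with some simple math: each first-generation HBM stack offers 128 GB/s (a 1024-bit interface at an effective 1 Gbps per pin), so Fiji's four stacks total 512 GB/s. HBM 2.0 is specified to roughly double per-stack bandwidth, which would put four second-generation stacks at about 1 TB/s.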

Rumor has it that Nvidia will include a whopping 16 GB of HBM 2.0 with its top-end Pascal products, while the chip itself will feature up to 17 billion transistors. Nvidia will be able to cram that many transistors into a reasonable die size thanks to TSMC's 16nm FinFET+ manufacturing process, an effective die shrink compared to current 28nm technology.

Pascal is expected to support mixed floating-point precision as well, going beyond Kepler and Maxwell by adding FP16 execution alongside FP32 and FP64. Pascal can allegedly perform FP16 calculations at twice the rate of FP32, so if games are willing to sacrifice precision, Pascal can provide a speed boost.
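As a rough illustration of how such a 2x rate is typically exposed to developers, here is a minimal CUDA sketch using the packed-half intrinsics from the cuda_fp16.h header. The kernel and its names are hypothetical examples, and the doubled throughput is an assumption about the hardware, not confirmed Pascal behavior.

```cuda
#include <cuda_fp16.h>

// Hypothetical sketch (not confirmed Pascal code): a __half2 packs two
// 16-bit floats into one 32-bit register, so one __hmul2 call performs
// two FP16 multiplies. If the hardware executes both halves in a single
// instruction, FP16 throughput doubles relative to FP32.
// Requires a GPU with native FP16 arithmetic (compute capability 5.3+).
__global__ void scaleHalf2(const __half2 *in, __half2 *out,
                           __half2 factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hmul2(in[i], factor);  // two FP16 multiplies in one op
}
```

The appeal for games is that half-precision values also halve register and bandwidth pressure, which is where the "sacrifice precision for speed" trade-off above would come from.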

When Pascal launches in the first half of 2016, it will go head to head with AMD's upcoming Arctic Islands line, which is also expected to use HBM 2.0 and be built on a 16nm process. 2016 is shaping up to be an exciting year for graphics card launches, one where we might finally see significant performance gains over the previous generation.

Images courtesy of WCCFTech


 
Between this and Zen (yes, I know comparing GPUs to CPUs is like apples to oranges), 2016 will be a very interesting year. Then again, it may be as much of a letdown as Bulldozer and Maxwell were.
 
I'm so happy that I decided to skip the 900 series. Pascal will go very nicely with my 4K Philips BDM4065. My GTX 680 can play 4K at medium-high settings in the 40-50 fps range. Some games stutter because I only have 2 GB of VRAM, but it performs well enough to make the wait tolerable.
 
Seeing as I don't have the money I expected to have by the end of the year, it looks like I'll be skipping my new build until next year... good thing, since HBM 2.0 will be on both sides of the aisle? Let's gooooo (hope prices are not obscene) ~
 
16 GB sounds extremely expensive.

I agree. What's the point of going from 4 GB to 16 GB if most games can barely use up 4? 8 would be more than enough for years to come. 16 GB is total overkill.

I've been waiting for Pascal, but I'm afraid of what the price will end up being.
Pascal will follow the same pricing scheme as the current generation of Nvidia GPUs.

This only stands true if AMD releases a competitive product. If not, Nvidia is going to gouge the hell out of the market.
 
Between this and Zen (yes, I know comparing GPUs to CPUs is like apples to oranges), 2016 will be a very interesting year. Then again, it may be as much of a letdown as Bulldozer and Maxwell were.

Maxwell a letdown? It was cheaper, more energy-efficient and more powerful than Kepler. To me that was an almost impossible launch: hitting all three points at the same time.
 
I agree. What's the point of going from 4 GB to 16 GB if most games can barely use up 4? 8 would be more than enough for years to come. 16 GB is total overkill.

This only stands true if AMD releases a competitive product. If not, Nvidia is going to gouge the hell out of the market.
No, it doesn't. If Pascal doesn't meet the price/performance of current offerings, then no one will buy it. Everyone always says they'll price gouge, and it always ends up not being true. At most, prices will be inflated by 5-10% for the first 2-3 months, which is true of all releases because of high demand at launch. A 1060 isn't going to be $500 just because AMD doesn't have competitive offerings.
 
16 GB sounds extremely expensive.

I agree. What's the point of going from 4 GB to 16 GB if most games can barely use up 4? 8 would be more than enough for years to come. 16 GB is total overkill.

I've been waiting for Pascal, but I'm afraid of what the price will end up being.
Pascal will follow the same pricing scheme as the current generation of Nvidia GPUs.

This only stands true if AMD releases a competitive product. If not, Nvidia is going to gouge the hell out of the market.
They could be talking about Pascal Quadro cards, not just GeForce cards.
 
Arctic Islands will be produced on a 14nm process; Pascal will be made by TSMC on a 16nm process.
 
I can't see the "affordable" cards launching till next Jul/Aug, but who knows. Got my GTX 970 recently and I'm loving it. Also, with the current gen of consoles being weak sauce, I can't imagine devs pushing graphics that much over the next year or so. However, all hail the death of 28nm transistors; it's been nearly 5 fricking years!
 
By 2017, 4K gaming should be the new standard, and I hope there will be plenty of 4K monitors to choose from.
With the new Unreal Engine and the new cards coming, games will be able to achieve a stupendous level of graphics, with great levels of detail in scenes, textures and whatnot. After that, games should really improve physics to make everything behave realistically.
 
No, it doesn't. If Pascal doesn't meet the price/performance of current offerings, then no one will buy it. Everyone always says they'll price gouge, and it always ends up not being true. At most, prices will be inflated by 5-10% for the first 2-3 months, which is true of all releases because of high demand at launch. A 1060 isn't going to be $500 just because AMD doesn't have competitive offerings.

I was speaking about the scenario in which Nvidia's Pascal is awesome and does crush current offerings, as forecast.
 
I was speaking about the scenario in which Nvidia's Pascal is awesome and does crush current offerings, as forecast.
I wouldn't worry too much about Pascal vs current offerings. If history is any indicator, Pascal and AMD's Arctic Islands should be close in performance. That will determine the market segment lineup.
AFAIK, GP100 will (at least initially) be deployed exclusively as Tesla products in HPC/virtualization/visualization/neural net environments (the latter being a potentially high-dollar goldmine technology going forward). By the time a GeForce variant arrives, I would expect both AMD and Nvidia to be fairly competitive (as usual).
Arctic Islands will be produced on 14nm process, Pascal will be made by TSMC on a 16nm process.
Source? The latest rumour has TSMC's 16nm FF+ as the frontrunner (with GloFo tapped for 14nm LPP for Zen in late Q4 2016 and for entry/mainstream GPUs), probably because TSMC is already shipping large Pascal silicon, whereas Samsung's 14nm LPP has yet to deliver anything tangible and the current 14nm LPE seems to be running into some problems. Samsung was originally tasked with the bulk of Apple's A9 production, but even though Samsung's 14nm LPE A9 is slightly smaller than TSMC's 16nm FF (non-"+") part, the bulk of deliveries seems to favour TSMC. That doesn't augur well for 14nm LPP versus TSMC's 16nm FF+/FFC. At least Samsung's HBM2 is made on a mature (20nm) process.
 
I wouldn't worry too much about Pascal vs current offerings. If history is any indicator, Pascal and AMD's Arctic Islands should be close in performance. That will determine the market segment lineup.
AFAIK, GP100 will (at least initially) be deployed exclusively as Tesla products in HPC/virtualization/visualization/neural net environments (the latter being a potentially high-dollar goldmine technology going forward). By the time a GeForce variant arrives, I would expect both AMD and Nvidia to be fairly competitive (as usual).

Source? The latest rumour has TSMC's 16nm FF+ as the frontrunner, probably because TSMC is already shipping large Pascal silicon, whereas Samsung's 14nm LPP has yet to deliver anything tangible and the current 14nm LPE seems to be running into some problems. Samsung was originally tasked with the bulk of Apple's A9 production, but even though Samsung's 14nm LPE A9 is slightly smaller than TSMC's 16nm FF (non-"+") part, the bulk of deliveries seems to favour TSMC. That doesn't augur well for 14nm LPP versus TSMC's 16nm FF+/FFC. At least Samsung's HBM2 is made on a mature (20nm) process.

If AMD and Nvidia are neck and neck, AMD still loses. Nvidia will keep banging its "The way it's meant to be played" and GameWorks drum. Even if the cards were equal from a performance standpoint, Nvidia would win out in actual games. On top of that, no one is going to switch over to AMD just because they match Nvidia on price/performance. AMD absolutely has to beat Nvidia or find another rich donor, or else it'll go bankrupt.
 
If AMD and Nvidia are neck and neck, AMD still loses. Nvidia will keep banging its "The way it's meant to be played" and GameWorks drum. Even if the cards were equal from a performance standpoint, Nvidia would win out in actual games. On top of that, no one is going to switch over to AMD just because they match Nvidia on price/performance. AMD absolutely has to beat Nvidia or find another rich donor, or else it'll go bankrupt.
It's a non-issue. I've been watching GPU releases for 20 years, and everyone always says the same thing. Don't get yourself worked up over it.
 
It's a non-issue. I've been watching GPU releases for 20 years, and everyone always says the same thing. Don't get yourself worked up over it.

You're probably right. It's just that AMD has never been this vulnerable.
 