TechSpot

Meet Pascal, Nvidia's next-generation GPU that could render PCIe obsolete

By Shawn Knight
Mar 25, 2014
  1. Nvidia chief Jen-Hsun Huang revealed the successor to Maxwell during the company's annual GPU Technology Conference in San Jose, California. Named after 17th century French mathematician Blaise Pascal, the Pascal GPU is more than just an incremental update as it...

     
  2. veLa

    veLa TS Evangelist Posts: 708   +168

    Well this is very interesting no doubt, but let's see if these advantages actually translate into a massive real world performance jump rather than another incremental boost.
     
    Jad Chaar and ikesmasher like this.
  3. TomSEA

    TomSEA TechSpot Chancellor Posts: 2,559   +599

    Wow...that's a huge step forward. 12 times faster than PCIe and 1000 times the bandwidth? Sheesh...soon we'll have just one big block of silicon that does everything at the speed of light. ;)
     
    rpjkw11 likes this.
  4. Jad Chaar

    Jad Chaar TS Evangelist Posts: 6,477   +965

    Wow, real props to nVidia. I am really excited for this new tech.

    Anyone care to explain stacked DRAM to me in simple terms? dividebyzero?
     
  5. Revolution 11

    Revolution 11 TS Enthusiast Posts: 40

    This is cool new tech with interesting implications, but I see three problems/questions here. First, if this is a proprietary standard, then it is dead on arrival. Thunderbolt is superior to USB in everything except ubiquity (USB won because of low licensing fees and easy technical access), and likewise Nvidia's interconnect won't beat PCIe unless it is an open standard. The article mentions gaming, but why would motherboards incorporate NVLink unless any GPU manufacturer (and I mean AMD) can also use it?

    Second, does this only benefit servers and HPC? As far as I can tell, PCIe 3.0 is more than enough for gaming GPUs (see the rough bandwidth sketch after this post).

    Third, wouldn't you just make a massive SoC with the GPU and CPU on die together with the memory? You would not need add-on cards then. This is at most an intermediate step before we get to what TomSEA is referring to as a big block of silicon (BBOS).

    Question 2 makes me think this won't be big for desktop gamers. Question 3 makes me think this is a short-lived product. Question 1 makes me think this will fail on arrival if Nvidia chooses to keep it locked up tight.
     
    dikbozo likes this.
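    As a rough sketch of the bandwidth math behind question 2 above: the PCIe 3.0 figures follow from the spec's 8 GT/s per lane and 128b/130b encoding, while the NVLink multiplier is simply the 5x to 12x range quoted at the announcement, treated here as a claim rather than a measurement.

        // Back-of-envelope comparison; plain host code, nothing GPU-specific.
        #include <cstdio>

        int main() {
            // PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, per direction.
            double lane_GBps = 8.0 * (128.0 / 130.0) / 8.0;   // ~0.985 GB/s per lane
            double pcie3_x16 = 16.0 * lane_GBps;              // ~15.8 GB/s for an x16 slot
            printf("PCIe 3.0 x16: ~%.1f GB/s per direction\n", pcie3_x16);

            // Nvidia's announced claim puts NVLink at roughly 5x to 12x that figure.
            printf("NVLink (claimed): ~%.0f to %.0f GB/s\n", 5.0 * pcie3_x16, 12.0 * pcie3_x16);
            return 0;
        }

    A single gaming card rarely saturates even a PCIe 3.0 x8 link, so the extra headroom matters far more for multi-GPU servers and HPC than for desktops, which is essentially the poster's point.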
  6. mosu

    mosu TS Guru Posts: 422   +48

    I wonder which CPU NVLink will connect to... stacked DRAM = DRAM sandwich
     
  7. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    The stacked DRAM isn't any different from that touted for Nvidia's Volta architecture. At this stage I'm not sure if Volta is still going ahead, or if Pascal has superseded it. The GTC 2014 slides don't show it.
    Still working through the slides and technical papers myself at the moment (the presentation was timed awkwardly given the time difference in New Zealand), but there is a quick rundown of NVLINK at Videocardz.
    And yes, stacked DRAM is just a cheap way to effect higher density without using more real estate.
     
    Jad Chaar likes this.
  8. I think that “1000 times the bandwidth” is overstating it.


    Stacked RAM will make it easier to wire up the GPU to a bus width of 512, 1024, or 4096 bits, using a smaller footprint.


    How many bits on the demo board?
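    A hedged sketch of the bus-width arithmetic above: peak bandwidth is roughly bus width (bits) x per-pin data rate (Gbit/s) / 8. The per-pin rates below are illustrative assumptions, not announced Pascal or demo-board specs.

        #include <cstdio>

        // Theoretical peak bandwidth of a memory interface.
        double bandwidth_GBps(int bus_bits, double gbit_per_pin) {
            return bus_bits * gbit_per_pin / 8.0;
        }

        int main() {
            // A typical GDDR5 card of the era: 384-bit bus at ~7 Gbit/s per pin.
            printf("384-bit  @ 7 Gbps/pin: %6.0f GB/s\n", bandwidth_GBps(384, 7.0));
            // A wide stacked-DRAM interface can run each pin slower (and at lower
            // voltage) and still come out far ahead; ~2 Gbit/s per pin on a
            // 4096-bit interface already lands at about 1 TB/s.
            printf("1024-bit @ 1 Gbps/pin: %6.0f GB/s\n", bandwidth_GBps(1024, 1.0));
            printf("4096-bit @ 2 Gbps/pin: %6.0f GB/s\n", bandwidth_GBps(4096, 2.0));
            return 0;
        }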
     
  9. I find it impressive how Nvidia alone can make so many advances and compete with a company as big as AMD. What would happen if Intel bought Nvidia?
     
  10. Real-time ray tracing in games would be possible with that much bandwidth/power.
    Low voltage is a good thing too... Some really cool things will be possible with this tech.
     
  11. hahahanoobs

    hahahanoobs TS Evangelist Posts: 1,631   +432

    It's more than that.
    Volta would be able to deliver memory bandwidth of 1TB/s by stacking the card's RAM right on top of the GPU -- a technique Nvidia is currently calling "stacked DRAM." Along with greatly boosting performance, mounting RAM directly on the GPU should reduce the card's overall footprint.
     
  12. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Er... no. Stacking memory on top of a hot GPU would both fry the memory ICs and act as a very inefficient heatsink for the GPU. Nvidia's 3D (stacked) DRAM, which is just an Nvidia name for the Hybrid Memory Cube or the related HBM (High Bandwidth Memory), places the memory ICs on the same substrate, next to the GPU.
    An Nvidia slide if you doubt the veracity of what I'm saying.
    The bigger improvements are that it also decreases latency and voltage requirements, both due to shorter trace lengths.
     
    Last edited: Mar 25, 2014
  13. misor

    misor TS Evangelist Posts: 1,163   +197

    Right, since mobo and GPU makers will have to make way for PCIe 3.0 + DDR4 first, then NVLink + DDR4.
    More money for the mobo/GPU companies, more hype as the tech matures.
    As for early adopters: overtime and/or double jobs?

    Is machine learning another term for artificial intelligence development?
     
  14. hahahanoobs

    hahahanoobs TS Evangelist Posts: 1,631   +432

    Your link said on top, so forgive me. The Nvidia developer site says on the GPU package, which is what is shown in your slide.

    Being on-package also allows the VRMs to be closer to the GPU to aid power efficiency, in addition to the sheer (potential) boost in bandwidth. Point being, there are many advantages to this new architecture other than just density. I guess I was bothered by how you only mentioned density as an advantage.
     
  15. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Yup
    More a case of horses for courses. I didn't want to get bogged down in new memory architecture for what seemed a general inquiry, and prefer when possible to link to other Techspot content.
    In reality, stacked DRAM (whether HMC, HBM, or Wide I/O) is a fairly complex issue with little in the way of current concrete specification. With so much of it still theoretical at this stage (and its first implementation taking place on big iron), there didn't seem much point in delving into the pros and cons when a simple answer seemed to suffice... at the time.
     
    misor likes this.
  16. Mikymjr

    Mikymjr TS Enthusiast Posts: 121   +8

    Advancement in technology is great ^^
     
  17. theBest11778

    theBest11778 TS Addict Posts: 234   +69

    One upside is I can see mobo OEMs making motherboards with an NVLink slot and PCIe slots. Reminds me of the days of AGP, when boards still had PCI slots. The PCIe lanes on Intel CPUs aren't going anywhere anytime soon.
     
  18. A GPU is basically a floating-point math coprocessor, which is why distributed computing projects (SETI, Folding@home) use it, and why most supercomputers have Nvidia hardware in them. If Nvidia licensed the x86 architecture from Intel like AMD does, then Intel and AMD would be in serious trouble. Before Nvidia gets any bigger, Intel should buy it. Intel has tripled the on-chip graphics of the 4th-gen i3/i5/i7 chips, which for most applications eliminates the need for a GPU from AMD or Nvidia, and that is a major threat to them. Can't see discrete graphics existing in 2016.
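    As a rough illustration of the "floating-point coprocessor" idea above, here is a generic CUDA SAXPY sketch (nothing Pascal-specific; the sizes and launch parameters are arbitrary). The CPU sets up the data, then hands the bulk floating-point work to the GPU, one small multiply-add per thread:

        #include <cstdio>
        #include <cuda_runtime.h>

        // Each GPU thread computes y[i] = a*x[i] + y[i] for one element.
        __global__ void saxpy(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main() {
            const int n = 1 << 20;                      // ~1M elements
            float *x, *y;
            cudaMallocManaged(&x, n * sizeof(float));   // unified memory (CUDA 6+)
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // offload the FP math to the GPU
            cudaDeviceSynchronize();

            printf("y[0] = %.1f (expect 5.0)\n", y[0]);
            cudaFree(x);
            cudaFree(y);
            return 0;
        }

    Folding@home and SETI kernels are far more complex, but the division of labour is the same: the CPU orchestrates, and the GPU grinds through massively parallel floating-point work.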
     
  19. TechSpot, I know my comment will be off-topic, but could you bring us the news about the Nvidia GTX Titan Z? Thanks!
     
  20. I guess Nvidia is dropping Volta and replacing it with Pascal? Also, with this NVLink, does that mean the mobo must have an NVLink slot?
     
  21. Axiarus

    Axiarus TS Addict Posts: 218   +92

    Look up a video called "Xbox Fans: Stop. Talking. About. The. Cloud." on YouTube. Cloud computing is nowhere near possible.
     
  22. Revolution 11

    Revolution 11 TS Enthusiast Posts: 40

    AMD is not a big GPU company; it is a big CPU company that happens to compete better in GPUs. Nvidia is the bigger GPU company here, not AMD. And when it comes to CPUs, Intel is many times bigger than AMD.

    What is impressive is that AMD is still standing year after year vs. Intel products.
     
  23. Railman

    Railman TS Booster Posts: 708   +101

    I also think such a move would cause a great deal of fear of a monopoly in the electronics industry.
     
    • 3D Memory: Stacks DRAM chips into dense modules with wide interfaces, and brings them inside the same package as the GPU. This lets GPUs get data from memory more quickly – boosting throughput and efficiency – allowing us to build more compact GPUs that put more power into smaller devices. The result: several times greater bandwidth, more than twice the memory capacity and quadrupled energy efficiency.
    Source: Guru3D
     
  24. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    Just as an addition to my earlier post...according to a member at B3D, the stacked DRAM memory tech being used is Hynix's HBM (High Bandwidth Memory).
     
