What's the difference between DDR3 memory and GDDR5 memory?

  1. dividebyzero

    dividebyzero trainee n00b Posts: 4,812   +646

    Which is pretty minimal compared with the variation due to GPU limitations. The majority of games are GPU limited, not CPU limited.
    Of course, don't take my word for it. Here's Futuremark's 3DMark CPU and GPU scores (note the variations in performance), and if that isn't enough proof, there are plenty of actual game benchmarks showing GPU and CPU dependency here thanks to Steve's gaming benchmarks.

    TBH, I didn't get much past what I quoted in your post - the rest seems like some shoddy PR blurb bigging up consoles, which doesn't have a lot to do with the original topic (the difference between DDR3 and GDDR5 memory).
  2. TheVogon

    TheVogon Newcomer, in training

    You seem to post a lot of links and cut-and-pastes, but you don't seem to really understand the contents.

    The link you presented as 'pretty minimal' in fact shows over a 400% variation in 3DMark 11 scores due to CPU performance alone. The other links show similar spreads when CPU power is varied with the same GPU.

    Whether modern console games are CPU limited or GPU limited depends more on the relative hardware balance in question than on the game! Whilst you might be correct for high-spec gaming PCs, the next-gen consoles have relatively low-powered CPUs, so they might well be CPU limited. In that case, the CPU taking longer to reach data over a high-latency GDDR RAM setup will slow the console down. It is already known that the new Jaguar APUs are quite sensitive to memory latency.
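    In case it helps illustrate the latency-sensitivity point, here is a minimal pointer-chasing sketch in C. It is generic - nothing here is specific to Jaguar or GDDR5 - but it shows why a latency-bound workload slows down on higher-latency memory: every load depends on the previous one, so the core cannot hide the access time behind other work.

    /* Pointer-chasing latency sketch: each load depends on the previous one,
     * so out-of-order execution and prefetching cannot hide memory latency.
     * On memory with a higher access latency, the same loop simply takes longer. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024 / sizeof(size_t))   /* ~64 MB: far larger than any cache */

    int main(void)
    {
        size_t *chain = malloc(N * sizeof *chain);
        if (!chain) return 1;

        for (size_t i = 0; i < N; i++) chain[i] = i;
        /* Sattolo's algorithm: builds one large random cycle, defeating the prefetcher */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = ((size_t)rand() * ((size_t)RAND_MAX + 1) + (size_t)rand()) % i;
            size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
        }

        clock_t start = clock();
        size_t idx = 0;
        for (long hops = 0; hops < 10000000L; hops++)
            idx = chain[idx];                        /* serialised, latency-bound loads */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("~%.1f ns per dependent load (idx=%zu)\n", secs * 1e9 / 1e7, idx);
        free(chain);
        return 0;
    }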

    Sorry if you didn't understand a relatively basic technical comparison of the consoles. Unfortunately I can't type in crayon for you...
  3. TheVogon

    TheVogon Newcomer, in training

    I ran out of edit time, but I would also note that console-type multiplayer games tend to be heaviest on the CPU - for instance, BF3 64-player is often CPU constrained...
  4. cliffordcooley

    cliffordcooley TechSpot Paladin Posts: 5,810   +1,431

    Forgive me if I put more stock in DBZ, who has been around a lot longer than you. And that's not to mention he has proven his technical expertise on hundreds of occasions. I'm pretty sure he does understand the basics; now me, on the other hand, well... that's a different story.
  5. dividebyzero

    dividebyzero trainee n00b Posts: 4,812   +646

    Not really. You're comparing apples and oranges. For example, Metro Last Light:
    400% increase from the GTX 550 Ti to the GTX Titan. It may have escaped your notice that these cards are separated by one generation of architecture.
    313% increase from the Athlon II X2 to the Core i7 3960X. You want to tell me that the same gulf (one generation) exists between these two CPUs?

    Just for comparison's sake, since you're citing "the other links" - and of course you should bear in mind that most of the big variations in CPU benching are due to games optimized for more than the two cores of the lower-end processors (a rough sketch of how these percentages can be read follows the list):
    Bioshock Infinite: 156% variation in CPU, 550% variation in GPU
    SimCity: 400% variation in CPU, 294% variation in GPU (AI intensive)
    Tomb Raider: 30% variation in CPU, 2650% variation in GPU
    Crysis 3: 279% variation in CPU, 239% variation in GPU
    Far Cry 3: 94% variation in CPU, 412% variation in GPU
    Hitman Absolution: 294% variation in CPU, 436% variation in GPU
    CoD: Black Ops 2: 37% variation in CPU, 572% variation in GPU
    MoH: Warfighter: 28% variation in CPU, 217% variation in GPU
    Borderlands 2: 389% variation in CPU, 264% variation in GPU
    Max Payne 3: 88% variation in CPU, 219% variation in GPU
    Diablo III: 14% variation in CPU, 1767% variation in GPU
    Tribes: Ascend: 67% variation in CPU, 534% variation in GPU
    Mass Effect 3: 81% variation in CPU, 397% variation in GPU
    TESV: Skyrim: 273% variation in CPU, 250% variation in GPU (AI intensive)

    and of course the latest gaming posterboy...
    Battlefield 3: 3% variation in CPU, 640% variation in GPU
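    (For anyone wondering how a figure like "400% variation" can be read, here is a hypothetical sketch. It assumes "variation" means the spread between the slowest and fastest result in a given chart; the fps numbers below are made up purely for illustration.)

    /* "% variation" is assumed here to mean the spread between the slowest and
     * fastest result in a benchmark chart: (fastest / slowest - 1) * 100.       */
    #include <stdio.h>

    static double percent_variation(double slowest_fps, double fastest_fps)
    {
        return (fastest_fps / slowest_fps - 1.0) * 100.0;
    }

    int main(void)
    {
        /* made-up example: slowest CPU in a chart hits 30 fps, fastest hits 48 fps */
        printf("CPU variation: %.0f%%\n", percent_variation(30.0, 48.0));   /*  60% */
        /* made-up example: slowest GPU hits 12 fps, fastest hits 90 fps            */
        printf("GPU variation: %.0f%%\n", percent_variation(12.0, 90.0));   /* 650% */
        return 0;
    }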

    With console hardware being a static (non-upgradable) platform, any CPU-limited game falls back on the game developers and gaming partners to make the game playable. With PC gaming, the software isn't dependent upon a fixed hardware fit-out, so you can load as much game IQ into it as you like. Console games have to exist within the narrow confines of a certain hardware system. In one case the software sets the high bar... in the other the hardware dictates the level of software and game IQ used.

    I'd also note that I haven't actually seen any hard info regarding the memory controllers of the PS4 - I assume there should be four 64-bit (32 I/O) controllers if Sony's claims are correct. Then it comes down to AMD's pipeline - and of course the not inconsiderable input of AMD's Gaming Evolved program. Bearing in mind the limited game IQ and framerate requirements, AMD are going to be in some considerable trouble if the initial generation of games shows pipeline stalls.
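    Purely as a back-of-the-envelope sketch: assuming the speculated four 64-bit controllers (a 256-bit aggregate bus) and the widely reported 5.5 GT/s GDDR5 data rate - both assumptions rather than confirmed specs - the peak figure works out like this.

    /* Back-of-the-envelope peak bandwidth from bus width and data rate.
     * The 256-bit width and 5.5 GT/s figures are assumptions based on the
     * speculation above and contemporary reports, not confirmed specs.    */
    #include <stdio.h>

    int main(void)
    {
        double bus_bits      = 4 * 64;   /* four 64-bit controllers -> 256-bit bus (assumed) */
        double data_rate_gts = 5.5;      /* GDDR5 transfers per pin, in GT/s (assumed)       */

        /* bytes moved per transfer across the whole bus, times transfers per second */
        double peak_gbs = bus_bits / 8.0 * data_rate_gts;

        printf("Peak theoretical bandwidth: %.0f GB/s\n", peak_gbs);   /* 176 GB/s */
        return 0;
    }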

    Thanks for the crayon jibe also. You're wasted on tech forums. If I were you I'd consider approaching Dane Cook or Carlos Mencia for writing work..;)
  6. cliffordcooley

    cliffordcooley TechSpot Paladin Posts: 5,810   +1,431

    dividebyzero, your comment raises a question for me.

    I'm wondering: does TechSpot have a list, with links, of all the games they have reviewed?
  7. dividebyzero

    dividebyzero trainee n00b Posts: 4,812   +646

    On this page. Just hit the Gaming benchmarks button or scroll down past the GPU reviews.

    Just as an aside, I'd note that Steve's game benchmarks seem to be the only comprehensive ongoing collection on the net. Many sites tackle individual games they deem worthy, but very few go in-depth with both CPU and GPU benches, and even fewer (if any) do it on a regular basis.
    cliffordcooley likes this.
  8. Burty117

    Burty117 TechSpot Chancellor Posts: 2,490   +302

    Interesting that you say Microsoft's way of doing things "won't cripple graphics performance" - even with the eSRAM, the overall bandwidth is still lower than what Sony came up with, if this article is to be believed:
    http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/3
    And it will make games slightly more complex to program for - not by much, but still.
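    As a rough sketch of the numbers that article compares - using figures widely reported around launch, which should be treated as assumptions rather than confirmed specs - the peak main-memory bandwidths work out roughly like this, with the Xbox One's eSRAM pool (around 102 GB/s reported) covering only 32 MB.

    /* peak GB/s = (bus width in bytes) * (billions of transfers per second)
     * All figures below are assumptions taken from launch-era reporting.    */
    #include <stdio.h>

    static double peak_gbs(double bus_bits, double gtransfers_per_sec)
    {
        return bus_bits / 8.0 * gtransfers_per_sec;
    }

    int main(void)
    {
        /* Xbox One: DDR3-2133 on a 256-bit bus (plus the small 32 MB eSRAM pool) */
        printf("Xbox One DDR3 : %.1f GB/s main memory\n", peak_gbs(256, 2.133));
        /* PS4: GDDR5 at a reported 5.5 GT/s on a 256-bit bus */
        printf("PS4 GDDR5     : %.1f GB/s main memory\n", peak_gbs(256, 5.5));
        return 0;
    }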

    And the entire "the cloud will do lots of the processing work" line was complete, spoon-fed lies: since the online requirements have been removed, the console will be doing all the work. The word "cloud" was being used in place of "DRM".
  9. gohanrage

    gohanrage Newcomer, in training

    How did Sony get 8GB of GDDR5 and still sell the PS4 for $100 cheaper? Do you believe Sony is going to sell the console at a loss?
  10. jumpervii

    jumpervii Newcomer, in training

    DirectX 11.2 will allow graphics to use system memory. The cloud is used when online. It allows data to be processed on servers that are more powerful than both consoles. This takes the load off your console when dealing with AI, and any online multiplayer will be run on said servers rather than relying on one console being chosen, based on bandwidth, to host game sessions. Just because being online isn't required does not mean that it's no longer there. I do want you to consider that this is a discussion about the difference between GDDR5 and DDR3 and not a console superiority war. One more thing to note is the clock speed of the RAM.
     
  11. Burty117

    Burty117 TechSpot Chancellor Posts: 2,490   +302

    ooww come on! You know for a fact the "cloud" isn't going to be leveraged the way they keep telling us it will. Imagine how low your internet latency would have to be? And surely it would be relatively bandwidth dependent? I'm sorry, but it is utter bullsh*t; they are simply using the word "cloud" in place of "DRM" in the hope people would see it as a "benefit" and accept it. It will be nice, though, that games are hosted on an actual server.

    They are also using it to cover up the spec sheet being lower than expected. I'm happy to put money on this magnificent "cloud" not being anything special that actually decreases load times or cranks the graphics quality up. If you genuinely believe this will happen in the lifetime of the Xbox One, you're not thinking about the implications of implementing it on current internet infrastructure, and the fact that these services prove, over and over again, that under heavy load everything stops working. Imagine if Halo 5's AI were done on the servers, and some of the graphics (such as particles for explosions) were processed in the cloud, but someone else in your house is downloading something and your internet is all used up - the AI are then stupid and you simply don't get the explosion effects? Or the Xbox servers cannot cope with the load (the Halo series is popular, after all). This will not happen; they are simply saying it to entice you into 24-hour DRM lockdown (or at least they were, now it's to try and save face).

    Anyway, GDDR5 vs DDR3 is a hard topic to cover, as DBZ noted above; the architectures are very different from one another. When it comes to clock speeds, the Xbox One is running its DDR3 at 2133MHz I believe? Or 2400, depending on the source, unless there's an official one? While the PS4's isn't announced, so no idea xD

    Overall though, when it comes to GDDR5 and DDR3: GDDR5 has much better bandwidth but worse latency than DDR3. For graphics processing the latency isn't much of an issue, which is why modern graphics cards use GDDR5.
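    To illustrate the flip side of the pointer-chasing sketch earlier in the thread, here is a minimal streaming example: sequential, predictable reads can be prefetched and kept in flight, so throughput is limited by bandwidth rather than by the latency of any single access - which is the kind of access pattern GPUs mostly deal with. It's only a rough sketch; a proper bandwidth benchmark would use multiple streams and non-temporal accesses.

    /* Streaming sketch: sequential reads are prefetch-friendly, so the loop is
     * limited by memory bandwidth rather than by per-access latency.           */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (32 * 1024 * 1024)        /* 32M 8-byte elements = 256 MB per pass */

    int main(void)
    {
        unsigned long long *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = 1;

        clock_t start = clock();
        unsigned long long sum = 0;
        for (int pass = 0; pass < 8; pass++)
            for (size_t i = 0; i < N; i++) sum += a[i];   /* predictable access */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("~%.1f GB/s streamed (sum=%llu)\n",
               8.0 * N * sizeof *a / secs / 1e9, sum);
        free(a);
        return 0;
    }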
     
  12. brunogm

    brunogm Newcomer, in training

  13. dividebyzero

    dividebyzero trainee n00b Posts: 4,812   +646

    You're looking at some never-to-be-achieved "perfect scenario" (note that Hynix - along with Elpida and Samsung - lists a programmable range rather than a single best-case set of timings). The SiSoft link I posted earlier explains why. If you're looking at real life (i.e. including prediction/cache misses) then this is a better indicator:
    GPU (top memory speed 8GHz effective)
    CPU (system RAM)
    Notice the difference in actual latencies.
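    For reference, raw datasheet cycle counts convert to nanoseconds as CAS cycles divided by the command clock; measured end-to-end latency, as the links above show, adds controller and cache-miss overhead on top. The CL and clock figures below are assumed round numbers for illustration, not datasheet values.

    /* Absolute CAS latency in ns = CAS cycles / command clock (MHz) * 1000.
     * The CL and clock numbers below are assumed, illustrative values only -
     * the point is that more cycles at a faster clock can work out to a
     * similar absolute latency.                                              */
    #include <stdio.h>

    static double cas_ns(double cl_cycles, double clock_mhz)
    {
        return cl_cycles / clock_mhz * 1000.0;
    }

    int main(void)
    {
        printf("DDR3-2133, CL11 @ ~1066 MHz   : %.1f ns\n", cas_ns(11.0, 1066.0));
        printf("GDDR5 7 GT/s, CL18 @ ~1750 MHz: %.1f ns\n", cas_ns(18.0, 1750.0));
        return 0;
    }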
    Burty117 and hellokitty[hk] like this.

