What's the difference between DDR3 memory and GDDR5 memory?

Games spring to mind. Game and GPU performance is extremely CPU dependent - just look at the effect of CPU performance in Futuremark, for instance.
Which is pretty minimal compared with the variation due to GPU limitations. The majority of games are GPU limited, not CPU limited.
Of course, don't take my word for it. Here's Futuremark's 3DMark CPU and GPU scores (note the variations in performance), and if that isn't enough proof, there are plenty of actual game benchmarks showing GPU and CPU dependency here thanks to Steve's gaming benchmarks.

TBH, I didn't get much past what I quoted in your post - the rest seems like some shoddy PR blurb bigging up consoles, which doesn't have a lot to do with the original topic (the difference between DDR3 and GDDR5 memory).
 

You seem to post a lot of links and cut-and-pastes, but you don't seem to really understand the contents.

The link you presented as 'pretty minimal' in fact shows over a 400% variation in 3DMark 11 just due to CPU performance. The other links also show similar variations with CPU power, where the CPU is varied with the same GPU.

Whether modern console games are CPU limited or GPU limited depends more on the relative hardware balance in question than on the game! Whilst you might be correct for high-spec gaming PCs, the next-gen consoles have relatively low-powered CPUs, so they might well be CPU limited. And in that case, the CPU taking longer to reach data in a high-latency GDDR5 setup will slow the console down. It is already known that the new Jaguar APUs are quite sensitive to memory latency.

Sorry if you didn't understand a relatively basic technical comparison of the consoles. Unfortunately I can't type in crayon for you...
 
I ran out of edit time, but I would also note that console-type multiplayer games tend to be heaviest on the CPU - for instance, BF3 64-player is often CPU constrained...
 
Sorry if you didn't understand a relatively basic technical comparison of the consoles. Unfortunately I can't type in crayon for you...
Forgive me if I put more stock in DBZ, who has been around a lot longer than you. And that's not to mention he has proven his technical expertise on hundreds of occasions. I'm pretty sure he does understand the basics; now me, on the other hand, well... that's a different story.
 
The link you presented as 'pretty minimal' in fact shows over a 400% variation in 3DMark 11 just due to CPU performance. The other links also show similar variations with CPU power, where the CPU is varied with the same GPU.
Not really. You're comparing apples and oranges. For example, Metro Last Light:
400% increase from the GTX 550 Ti to the GTX Titan. It may have escaped your notice that these cards are separated by one generation of architecture.
313% increase from the Athlon II X2 to the Core i7 3960X. You want to tell me that the same gulf (one generation) exists between these two CPUs?

Just for comparison's sake, since you're citing "the other links" - and of course you should bear in mind that most of the big variations in CPU benching are due to games optimized for more than the two cores of the lower-end processors:
Bioshock Infinite: 156% variation in CPU, 550% variation in GPU
SimCity: 400% variation in CPU, 294% variation in GPU (AI intensive)
Tomb Raider: 30% variation in CPU, 2650% variation in GPU
Crysis 3: 279% variation in CPU, 239% variation in GPU
Far Cry 3: 94% variation in CPU, 412% variation in GPU
Hitman Absolution: 294% variation in CPU, 436% variation in GPU
CoD: Black Ops 2: 37% variation in CPU, 572% variation in GPU
MoH: Warfighter: 28% variation in CPU, 217% variation in GPU
Borderlands 2: 389% variation in CPU, 264% variation in GPU
Max Payne 3: 88% variation in CPU, 219% variation in GPU
Diablo III: 14% variation in CPU, 1767% variation in GPU
Tribes: Ascend: 67% variation in CPU, 534% variation in GPU
Mass Effect 3: 81% variation in CPU, 397% variation in GPU
TESV: Skyrim: 273% variation in CPU, 250% variation in GPU (AI intensive)

and of course the latest gaming posterboy...
Battlefield 3: 3% variation in CPU, 640% variation in GPU
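
For clarity, here's how I'm reading those "variation" figures - the spread between the slowest and fastest result for the same game when only one component is swapped. A minimal sketch (the frame rates are made-up placeholders, not numbers from the reviews):

```python
# Spread between the worst and best average frame rate, as a percentage.

def percent_variation(slowest_fps: float, fastest_fps: float) -> float:
    return (fastest_fps / slowest_fps - 1.0) * 100.0

# Hypothetical example: CPUs swapped, GPU held constant.
print(f"CPU variation: {percent_variation(42.0, 55.0):.0f}%")   # ~31%
# Hypothetical example: GPUs swapped, CPU held constant.
print(f"GPU variation: {percent_variation(18.0, 120.0):.0f}%")  # ~567%
```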

Whilst you might be correct for high-spec gaming PCs, the next-gen consoles have relatively low-powered CPUs, so they might well be CPU limited.
With console hardware being static (non-upgradable), any CPU-limited game falls back on the game developers and gaming partners to make the game playable. With PC gaming, the software isn't dependent upon the hardware fit-out, so you can load as much game IQ into it as you like. Console games have to exist within the narrow confines of a fixed hardware system. In one system the software sets the high bar... in the other the hardware dictates the level of software and game IQ used.

I'd also note that I haven't actually seen any hard info regarding the memory controllers of the PS4 - I assume there should be four 64-bit (32 I/O) controllers if Sony's claims are correct. Then it comes down to AMD's pipeline - and of course the not inconsiderable input of AMD's Gaming Evolved program. Bearing in mind the limited game IQ and framerate requirements, AMD are going to be in some considerable trouble if the initial generation of games shows pipeline stalls.

Thanks for the crayon jibe also. You're wasted on tech forums. If I were you I'd consider approaching Dane Cook or Carlos Mencia for writing work..;)
 
On this page. Just hit the Gaming benchmarks button or scroll down past the GPU reviews.

Just as an aside, I'd note that Steve's game benchmarks seem like the only comprehensive ongoing collection on the net. Many sites tackle individual games they deem worthy, but very few go in-depth with both CPU and GPU benches, and even fewer (if any) do it on a regular basis.
 
Sony have provided a powerful GPU architecture, but at the cost of crippling their CPU performance. Microsoft have provided a similar level of GPU memory performance, without crippling the CPU, by using a very fast on-GPU SRAM cache.

Plus Microsoft have provided 3x the on-console compute power, available on demand in the cloud... Xbox Live is upgrading from 15,000 physical servers to 300,000 physical servers to support this...

So the final GPU memory performance will likely be similar between the consoles; Sony will have a 50% advantage in shaders, but Microsoft will have an on-console CPU performance advantage - plus three times more compute power available in the cloud...

Interesting that you say Microsoft's way of doing things "won't cripple graphics performance"; even with the SRAM, the bandwidth is still lower overall than what Sony came up with, if this article is to be believed:
http://www.anandtech.com/show/6972/xbox-one-hardware-compared-to-playstation-4/3
And it will make games slightly more complex to program for - not by much, but still.

And the entire "the cloud will do lots of the processing work" line was a complete, spoon-fed lie; since the online requirement has been removed, the console will be doing all the work. The word "cloud" was simply used in place of "DRM".
 
How did Sony get 8GB of GDDR5 and still sell the PS4 for $100 cheaper? Do you believe Sony is going to sell the console at a loss?
 

DirectX 11.2 will allow graphics to use system memory. The cloud is used when online: it allows data to be processed on servers that are more powerful than both consoles. This takes the load off your console when dealing with AI, and any online multiplayer will be run on those servers instead of relying on one console, chosen based on bandwidth, to host game sessions. Just because being online isn't required does not mean that it's no longer there. I do want you to consider that this is a discussion about the difference between GDDR5 and DDR3, not a console superiority war. One more thing to note is the clock speed of the RAM.
 

Oh, come on! You know for a fact the "cloud" isn't going to be leveraged the way they keep telling us it will - imagine how low your internet latency would have to be, and surely it would be fairly bandwidth dependent too? I'm sorry, but it is utter bullsh*t; they are simply using the word "cloud" in place of "DRM" in the hope that people would see it as a "benefit" and accept it. It will be nice, though, that games are hosted on an actual server.

They are also using it to cover up the spec sheet being lower than expected. I'm happy to put money on this magnificent "cloud" not being anything special that actually decreases load times or cranks the graphics quality up. If you genuinely believe this will happen in the lifetime of the Xbox One, you're not thinking about the implications of implementing it on current internet infrastructure, or the fact that these services prove, over and over again, that everything stops working under heavy load. Imagine if Halo 5's AI were done on the servers, and some of the graphics (such as particles for explosions) were processed in the cloud, but someone else in your house is downloading something and your internet is all used up: the AI are then stupid and you simply don't get the explosion effects. Or the Xbox servers can't cope with the load (the Halo series is popular, after all). This will not happen; they are simply saying it to entice you into the 24-hour DRM lockdown (or at least they were - now it's to try and save face).

Anyway, GDDR5 vs DDR3 is a hard topic to cover, as DBZ noted above - the architectures are very different from one another. When it comes to clock speeds, the Xbox One is running its DDR3 at 2133MHz I believe (or 2400 depending on the source, unless there's an official figure?), while the PS4's isn't announced, so no idea xD

Overall though, when it comes to GDDR5 vs DDR3: GDDR5 has much better bandwidth but worse latency than DDR3. For graphics processing the latency isn't much of an issue, which is why modern graphics cards use GDDR5.
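
As a rough illustration of the bandwidth half of that trade-off: peak bandwidth is just transfers per second multiplied by the bus width in bytes. A minimal sketch - the data rates and bus widths below are the figures commonly reported for the two consoles at the time, so treat them as assumptions rather than official spec-sheet numbers:

```python
# Theoretical peak bandwidth = (transfers per second) x (bytes per transfer).

def peak_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int) -> float:
    return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# PS4: 8GB GDDR5, widely reported as 5500 MT/s on a 256-bit bus.
print(f"GDDR5, 5500 MT/s, 256-bit: {peak_bandwidth_gbs(5500, 256):.0f} GB/s")   # ~176
# Xbox One: 8GB DDR3-2133 on a 256-bit bus (plus the separate 32MB of ESRAM).
print(f"DDR3-2133, 256-bit:        {peak_bandwidth_gbs(2133, 256):.1f} GB/s")   # ~68.3
```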
 
Well, Hynix GDDR5 parts apparently have good timings:
http://www.cse.psu.edu/~juz138/files/islped209-zhao.pdf
Memory clock = 2.5GHz, tRAS = 22ns, tCL = 8ns, tRP = 8ns, tRC = 30ns, tRCD = 8ns, tRRD = 5ns; off-chip GDDR5, 2GB, bandwidth = 320GB/s
You're looking at a never-to-be-achieved "perfect scenario" (note that Hynix - along with Elpida and Samsung - lists a programmable range rather than a single best-case set of timings). The SiSoft link I posted earlier explains why. If you're looking at real life (i.e. including prediction/cache misses) then this is a better indicator:
GPU (top memory speed 8GHz effective)
CPU (system RAM)
Notice the difference in actual latencies.
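
To put those datasheet numbers in context, here's a back-of-the-envelope sketch (my own illustration, not taken from the paper) of the core DRAM delay they imply for a read that misses the currently open row - and that's before any memory-controller queuing or cache-miss overhead, which is what the measured results above include on top:

```python
# Approximate DRAM core latency for a row-miss read: close the open row (tRP),
# activate the new row (tRCD), then issue the column read (tCL).
# Timings taken from the Hynix GDDR5 figures quoted above.

def row_miss_latency_ns(t_rp_ns: float, t_rcd_ns: float, t_cl_ns: float) -> float:
    return t_rp_ns + t_rcd_ns + t_cl_ns

print(f"GDDR5 row-miss core latency: ~{row_miss_latency_ns(8, 8, 8):.0f} ns")
# Measured end-to-end latency is typically several times higher once the
# controller, arbitration and the cache hierarchy are included.
```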
 
Thanks for the information - I've wanted answers to this since I got my laptop three years ago and my netbook almost five years ago, when Best Buy, where I bought them, did not have any DDR3 RAM for me to upgrade the netbook.
 
Hi to all at TechSpot. I'm not a techie, but I noticed that there are two kinds of memory, DDR3 and GDDR5. My question is:
What's the difference between DDR3 memory and graphics GDDR5 memory, and if GDDR5 is in any way better, why is it not used for desktop computer memory instead of DDR3?
I know some apps are taking advantage of, or utilizing, graphics memory. Thanks for any explanation.

Here are the details:

As a RAM retailer I’m often asked about whether RAM of different speeds are compatible with certain makes and models of computers. I thought it'd be a good idea to write a small tutorial to explain how the different RAM speeds work, maybe this'll help make things clearer. If you find this guide helpful please remember to click the “Yes” button at the bottom – the more people who click that button the more this guide gets pushed up in the rankings, and the more it will be seen by other people.

Before we talk about the RAM speeds I’d like to talk about something else so I can use it as an analogy when we talk about the speeds. You may have noticed that the capacity of your RAM and your hard drives aren’t exactly what they were advertised at. For example, you might have bought 1GB of RAM only to install it and see it come up as 923MB. The memory business allows a 10% margin of error. A 1GB RAM module actually has room for 8,589,934,592 bits of information – that’s 8 Billion memory addresses. As you can imagine, it would be pretty hard to get a 100% score on a test with 8 Billion questions. They allow themselves to get a 90%, or an A grade if you’re in the US school system. If the module is within that 10% margin of error it’s considered good enough to sell.

Now for the speed conversation. Likewise, not all RAM modules can run at the full speed they were built to run at. There are four main groups of RAM technology: SDRAM, DDR1, DDR2, and DDR3. There are a few others, but we don’t need to go into them for the sake of this tutorial. Each of the four groups I mentioned are broken down into a few possible speeds:

SDRAM – 66Mhz, 100Mhz, 133Mhz
DDR1 – 266Mhz, 333Mhz, 400Mhz
DDR2 – 400Mhz, 533Mhz, 667Mhz, 800Mhz
DDR3 – 400Mhz, 533Mhz, 667Mhz, 800Mhz, 933Mhz, 1066Mhz

On each RAM module is a small chip that communicates with your computer. When it’s installed and you turn on your PC the computer asks the RAM “What speed are you?” The little chip replies with whatever information it’s been programmed to reply with. Each module is built with the hope of it being able to reach the maximum speed for its group, and after being built they go through a Quality Assurance test. That little chip is programmed to respond with information based on how high the module scored on its QA test, so if it runs at 800Mhz but has a lot of errors, but runs at 667Mhz without errors it'll be programmed to reply, "I'm 667Mhz."

There are a few nice things about this arrangement. First, say the manufacturer builds 5,000 DDR1 modules and 1,000 of them are marked 266Mhz and 1,000 are marked 333Mhz. If the manufacturer gets an order for 2,000 266Mhz modules they don’t have to build 1,000 more modules, they just take 1,000 of the other ones and reprogram that little chip to respond with “I’m 266Mhz”, label them as 266Mhz, and ship them out the door.

Another nice thing is that if you’re building a computer and you know what I’ve just explained then you know that you’re not necessarily restricted to one specific speed of RAM. If your PC uses DDR1 RAM it could possibly use all three speeds of DDR1. That's handy if you've got other RAM laying around. But there are a few considerations that can affect this.

Computer manufacturers plan in advance. Before a single computer was built designed for DDR3 RAM the computer manufacturers and RAM manufacturers sat down and planned everything out. Intel and AMD might not have a processor capable of using the faster RAM speeds yet. Or the motherboard manufacturers might not have built a motherboard capable of handling the faster RAM yet. But they know they’ll have those designed and produced by next year. So in the mean time they needed to figure out a way for the motherboards and CPUs they're building now to talk to each other and agree on what speed they're going to run at so they can use the slower and faster RAM modules.

So say you put together a computer whose CPU and/or motherboard can only run at 333Mhz but you put in RAM that runs at 400mhz. The Motherboard will ask the RAM “What speed are you?” The RAM replies, “I’m 400Mhz.” The motherboard says, “Okay, but we run at 333Mhz, you’ll have to slow down and run at our speed.” The reverse is also true. The Motherboard might be 333Mhz and you put in 266Mhz RAM. Then the motherboard and CPU will slow down to run at 266Mhz.

There are some really old computers – the ones that used SDRAM – that were built differently. The only thing I can think of is that the computer manufacturers didn’t plan things out as well back then, because some of those old computers won’t accept a module that's a different speed than the motherboard. Those PCs seem designed to ask what speed the RAM is and if they didn’t reply correctly the motherboard would stop and report an error. I guess they realized later they could program them to do what I described above. But I’ve never run into this problem with computers that use any of the DDR types of RAM.

So, having read all this you may still be having problems finding RAM that works in your computer. There are two other factors that are commonly talked about when it comes to RAM: chip density and CAS latency. I’ve written a separate guide on chip density, just look at my other guides (they're listed at the bottom of this page). As for CAS latency, I’ll be honest. I’ve been buying and selling RAM for 5 years and I still don’t completely understand what the CAS latency measures – but I can say that I’ve never had a single module of RAM returned that the problem came down to CAS latency. I used to sell both high and Low Density RAM, and when I did 75% of my returns come from people who didn’t read my listing completely and bought the wrong density. The other 25% of my returns came from people who bought a RAM module that was of a higher capacity than their computer could accept.

I guess that’s a good tip I can add in this guide too. If your computer’s specifications say it can handle 2GB of memory and you have 2 expansion slots – chances are your computer cannot use a 2GB module. It will probably only recognize 1GB in each expansion slot, and you’ll have to buy two 1GB modules to reach the 2GB total your computer can use. There are some computers that will accept it all in one expansion slot, but the overwhelming majority of them require you to use both slots. So if you really want to make sure your computer will be able to use the RAM, buy half the total possible for the computer and install that in each slot.
 
As a RAM retailer... For example, you might have bought 1GB of RAM only to install it and see it come up as 923MB.
As a RAM retailer you should know that 1GB of RAM is actually reported as 1024MB, not 923MB.

You seem to be confusing this with the decimalization of a gigabyte - a common cause of confusion with hard drive storage capacity - i.e. a gigabyte equalling 1,000,000,000 bytes rather than the IEC standard 1,073,741,824-byte measurement (gibibyte, or GiB). Please note that the governing body for memory standards, JEDEC, has always used a base-2 definition for RAM capacity. Unless you're a Mac (or maybe Linux) user, you'll see 1GB of RAM reported as 1024MB.
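
The arithmetic, for anyone following along (a quick illustrative sketch of the point above, not taken from either post):

```python
# The decimal-vs-binary gigabyte distinction in numbers. RAM capacities follow
# the binary (JEDEC) convention; hard drive marketing typically uses the
# decimal one, which is where the confusion usually comes from.

GIB = 1024 ** 3          # 1 GiB = 1,073,741,824 bytes (binary "gigabyte")
GB_DECIMAL = 1000 ** 3   # 1 GB  = 1,000,000,000 bytes (decimal gigabyte)

print(f"1 GiB = {GIB:,} bytes = {GIB // 1024**2} MiB")                        # 1024 MiB
print(f"1 decimal GB expressed in binary MiB = {GB_DECIMAL / 1024**2:.0f}")   # ~954 MiB
# So a 1GB module reports as 1024MB, and even a decimal "1GB" would report
# as roughly 954MB - neither of which gets you to 923MB.
```
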
A 1GB RAM module actually has room for 8,589,934,592 bits of information (which strangely enough when divided by eight to achieve bytes comes to 1,073,741,824)– that’s 8 Billion memory addresses.
Nope. The number of memory addresses available is the smaller of:
the number of bytes installed (in this case 1,073,741,824), or
2ᴺ (addresses 0 through 2ᴺ - 1, where N = the processor's address width, e.g. 32-bit or 64-bit).

I'd suggest some reading up on how memory actually works.
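
To illustrate that addressing point with actual numbers (a sketch assuming a flat, byte-addressable memory; the values are purely illustrative):

```python
# How many distinct byte addresses exist, and how many of them are usable,
# for a given installed capacity and processor address width.

def addressable_bytes(address_width_bits: int) -> int:
    """Number of distinct byte addresses an N-bit flat address can name."""
    return 2 ** address_width_bits

def usable_addresses(installed_bytes: int, address_width_bits: int) -> int:
    """Usable addresses = the smaller of installed bytes and the address space."""
    return min(installed_bytes, addressable_bytes(address_width_bits))

one_gib = 1024 ** 3
print(f"32-bit address space: {addressable_bytes(32) / 1024**3:.0f} GiB")        # 4 GiB
print(f"1 GiB module, 32-bit CPU: {usable_addresses(one_gib, 32):,} addresses")  # ~1.07 billion
# Either way, a 1GiB module has about 1.07 billion byte addresses - not 8 billion.
```
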
So say you put together a computer whose CPU and/or motherboard can only run at 333Mhz but you put in RAM that runs at 400mhz. The Motherboard will ask the RAM “What speed are you?” The RAM replies, “I’m 400Mhz.” The motherboard says, “Okay, but we run at 333Mhz, you’ll have to slow down and run at our speed.” The reverse is also true. The Motherboard might be 333Mhz and you put in 266Mhz RAM. Then the motherboard and CPU will slow down to run at 266Mhz.
What a load of tosh. Motherboards generally default to the lowest JEDEC profile (frequency and timings) of the RAM being used. The RAM can be run slower or faster by using the RAM dividers supplied by the motherboard's BIOS, or in some cases - notably Nvidia chipsets - the memory can be clocked independently of the system bus.
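
For what it's worth, here's a minimal sketch of the mechanism in question: the module's SPD chip advertises a set of JEDEC frequency/timing profiles, and the firmware settles on one that both the module and the memory controller can run (anything beyond that is the manual dividers/overclocking mentioned above). The profile numbers are illustrative, not taken from a real SPD dump, and which supported profile a board defaults to is a firmware policy choice:

```python
# Sketch of SPD/JEDEC profile selection at boot. Illustrative values only.
from dataclasses import dataclass

@dataclass
class JedecProfile:
    data_rate_mts: int   # e.g. a DDR3-1600 profile advertises 1600 MT/s
    cas_latency: int     # CAS latency in clock cycles for that profile

def pick_boot_profile(spd_profiles, controller_max_mts, conservative=True):
    """Pick a JEDEC profile both the module and the controller support.
    Whether firmware defaults to a conservative profile or the fastest
    supported one varies from board to board."""
    supported = [p for p in spd_profiles if p.data_rate_mts <= controller_max_mts]
    chooser = min if conservative else max
    return chooser(supported, key=lambda p: p.data_rate_mts)

# A hypothetical DDR3 module advertising three JEDEC profiles, paired with a
# memory controller that tops out at 1333 MT/s.
spd = [JedecProfile(1066, 7), JedecProfile(1333, 9), JedecProfile(1600, 11)]
print(pick_boot_profile(spd, controller_max_mts=1333))                      # DDR3-1066 profile
print(pick_boot_profile(spd, controller_max_mts=1333, conservative=False))  # DDR3-1333 profile
```
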
As for CAS latency, I’ll be honest. I’ve been buying and selling RAM for 5 years and I still don’t completely understand what the CAS latency measures
Really? And you're writing guides on the subject!
You might try any number of articles on memory timings.
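And since CAS latency is the sticking point: it's just the number of memory-clock cycles between a column-read command and the first data coming back, so converting it to an absolute time is one line of arithmetic. A small sketch, using common retail DDR3 speed grades purely as examples:

```python
# Convert a CAS latency in clock cycles to nanoseconds for DDR-type memory.
# The I/O clock runs at half the "effective" data rate (two transfers per
# clock), hence the division by two.

def cas_latency_ns(cas_cycles: int, data_rate_mts: int) -> float:
    io_clock_mhz = data_rate_mts / 2
    return cas_cycles / io_clock_mhz * 1000   # cycles / MHz -> nanoseconds

print(f"DDR3-1333 CL9 : {cas_latency_ns(9, 1333):.1f} ns")   # ~13.5 ns
print(f"DDR3-1600 CL11: {cas_latency_ns(11, 1600):.1f} ns")  # ~13.8 ns
# Faster grades usually carry higher CL counts, so the absolute latency in
# nanoseconds stays in much the same ballpark.
```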
 