Nvidia x86 processor rumors surface once again

GPU/CPU integration is upon us, and I can't imagine it will be long before this is a reality. If it means getting a smaller, cooler and more efficient machine, I welcome it. Most of the things people want to do with a PC are not graphically demanding; it will be enthusiasts and gamers who want a dedicated GPU.

I want a standard form factor for PCs that can be attached to the back of a monitor. Upgrading would mean changing the motherboard or small chips that can be clipped on and off. Miniaturization should include smaller card slots for upgrading. It would also be nice if future laptops had a standard form factor for each size, so we could keep a chassis we're happy with and just replace the internals. Currently that is hard because components have been shrinking every few years, but eventually the shrinkage will be small enough that this may become a reality.
 
Putting them together doesn't help.

Do we really want GPU + CPU 'unification'? I disagree with this eerily universal consensus that says it's the best way to go.

Ultimately, stuff like CUDA, OpenCL, etc. are supplementary technologies. They are basically frameworks/layers -- like DirectX and OpenGL -- that let developers take advantage of underutilized hardware in your computer. I really doubt they are intended to replace your 64-bit-extended CISC CPU in any shape or form.

Think about it: it sounds like an upgrade nightmare... but more importantly, we'd lose the specific performance gains of having an actual GPU. This isn't as simple as slapping a GPU on a CPU and getting the best of both worlds. They have to share the same space... the same physical architecture... I think that's what most people fail to see.

The reason GPUs perform so well on graphics operations is that they have a specific instruction set and a physical architecture focused on just processing graphics. Nvidia, ATI, etc. employ a minimalist approach, focusing on achieving more through optimization... even with smaller pipelines, minimal branch prediction, smaller caches and lower frequencies than CPUs. This works well for what they do, which is graphics processing.

However, CPUs are there to do *everything* and thus master nothing but versatility. Unlike the "more with less" approach GPU manufacturers have been taking, over the years CPU manufacturers have tried to do "more with more". They've developed huge caches, massive pipelines and complex branch prediction. It's less efficient, but it has worked well to meet the needs of compatibility and versatility.

We *are* starting to see CPUs become more simplified -- perhaps the gap between CPU and GPU is closing -- but I really don't see GPUs and CPUs being combined in any reasonable time frame (decades). For this to work, the way software is written and compiled for these platforms would need to be completely redesigned.
 