Putting them together doesn't help.
Do we really want GPU + CPU "unification"? I disagree with the near-universal consensus that this is the best way to go.
Ultimately, technologies like CUDA and OpenCL are supplementary. They are essentially frameworks/layers -- like DirectX and OpenGL -- that let developers take advantage of underutilized hardware in your computer. I really doubt they are intended to replace your 64-bit-extended CISC CPU in any way, shape or form.
Think about it: it sounds like an upgrade nightmare, but more importantly, we'd lose the specific performance gains of having a dedicated GPU. This isn't as simple as slapping a GPU onto a CPU and getting the best of both worlds -- they would have to share the same space and the same physical architecture. I think that's what most people fail to see.
The reason GPUs perform so well at graphics operations is that they have an instruction set and a physical architecture focused on just that: processing graphics. nVidia, ATI and the rest take a minimalist approach, achieving more through specialization -- even with smaller pipelines, minimal branch prediction, smaller caches and lower clock frequencies than CPUs. This works well for what they do, which is graphics processing.
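To illustrate the point, here's a toy sketch (in plain Python, just for shape -- real GPU code would be a shader or CUDA kernel) of the kind of workload GPUs are built for: the same arithmetic applied independently to every pixel, with no data-dependent branching, which is exactly what lets thousands of simple ALUs run in lockstep. The `brighten` function and its values are made up for illustration.

```python
# Hypothetical "graphics-style" workload: identical arithmetic per pixel,
# no branching -- the pattern that maps cleanly onto wide GPU hardware.

def brighten(pixels, factor):
    """Scale every pixel by the same factor, clamped to 255.

    Every element goes through the exact same instruction sequence,
    which is the lockstep execution a GPU expects.
    """
    return [min(int(p * factor), 255) for p in pixels]

scanline = [10, 128, 200, 255]
print(brighten(scanline, 1.5))  # -> [15, 192, 255, 255]
```

A CPU would happily run this too, of course -- the difference is that a GPU can run that per-pixel body on thousands of elements at once precisely because there's nothing in it to predict or speculate about.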
CPUs, however, are there to do *everything*, and thus master nothing but versatility. Unlike the "more with less" approach GPU manufacturers have been taking, CPU manufacturers have spent years doing "more with more": huge caches, massive pipelines and complex branch prediction. It's less efficient, but it has worked well to meet the needs of compatibility and versatility.
We *are* starting to see CPUs becoming more simplified -- perhaps the gap between CPU and GPU is closing -- but I really don't see GPUs and CPUs being combined in any reasonable time frame (decades). For that to work, the way software is written and compiled for these platforms would need to be completely redesigned.
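To make the "software needs to be redesigned" point concrete, here's a rough sketch of the two programming models, again in plain Python for shape only. "CPU style" is one thread walking the whole array in order; "GPU style" is a kernel body written for a single element and launched once per index. The `launch_kernel` helper is entirely hypothetical -- a stand-in for what a CUDA or OpenCL runtime actually does on real hardware.

```python
# CPU style: one core, one loop, sequential and in order.
def cpu_style(data):
    out = []
    for x in data:
        out.append(x * x)
    return out

# GPU style: the kernel body only ever sees "its" element.
def square_kernel(data, out, idx):
    out[idx] = data[idx] * data[idx]

# Hypothetical runtime stand-in: on real hardware every index
# would execute simultaneously, not in this sequential loop.
def launch_kernel(kernel, data):
    out = [0] * len(data)
    for idx in range(len(data)):
        kernel(data, out, idx)
    return out

data = [1, 2, 3, 4]
assert cpu_style(data) == launch_kernel(square_kernel, data)
```

Same result, completely different way of expressing the computation -- and that's the gap compilers and developers would have to bridge before a merged chip made sense.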