Anyone who has been purchasing PC components lately knows the rage that is dual core. Whether it is an Intel Core Duo inside a new laptop or an Athlon 64 X2 inside a new desktop workhorse, dual core has brought the benefits of dual-processor systems to just about anyone, at a far lower cost than a traditional two-socket setup. One question many have posed is whether multiple cores hold a technical advantage over systems with two physical processors. Because of the nature of SMP, each CPU must be able to see data modified by the other; in a traditional two-socket system, that exchange travels over the system bus and through RAM. Since the cores of a dual-core chip sit in the same package and don't necessarily need to go through RAM to communicate, it seemed reasonable to expect a performance gain. A very thorough examination at Xbit Labs shows this is not the case: on Intel and AMD dual-core processors with separate L2 caches, passing data between the cores happens at roughly the same speed as reading from RAM. Their ultimate conclusion:
None of the processors with separate caches tested in this review can perform fast data transfers between the cores. Intel’s Core Duo (Yonah) and Conroe, each with a shared L2 cache, are the only processors that ensure fast processing of the same data block by two cores, yet their speed is limited too when the common data are modified.
Ultimately, this is an architectural limitation, a legacy of Intel and AMD dual-core chips being derived from their dual-CPU designs. That doesn't mean future CPUs won't be enhanced to let cores read each other's caches directly; in fact, it makes sense that Intel and AMD may move in exactly that direction. Whether other companies' CPUs, such as IBM's, already do this is another story.