No one really cares about special-case "scenarios", only about actual, average everyday uses: gaming, compressing, converting videos, photoshopping. Also, you forgot the link, so I have no idea which review you're talking about.
Wow, troll much? Biased, unreasonable, aggressive... shall I go on? Dude, the 5820K (names, marketing, and chipset aside) is on par in price with the "consumer"-level Skylake i7, whose IGP is dead weight as soon as you put in a dGPU (you're giving me the reason in your third paragraph): you can get a similarly priced mobo + CPU combo with either option, and the DDR4 vs. DDR3 price argument no longer applies.
And it comes down to reading comprehension:
* "here": Techspot
* "handy review": look in the reviews section...
* "...from Conroe to Haswell": damn! Which of the reviews must be about that?! Oh, wait... it must be the "
Then and Now: Almost 10 Years of Intel CPUs Compared" review.
Do I seriously have to put an internal link? And I said "depending on the scenario" so you can make your own judgement based on whichever scenario interests you. "3-7 times" means at least 3 times and as much as 7 times, so the worst case is still roughly 3x the performance, far from "at best 100+%" (which would be barely 2x).
No, the comparison is not a fair one. CUDA only works on hardware with CUDA cores; OpenCL is more abstract and is supported on a much, much wider range of hardware.
First, you said I couldn't compare an API with an architecture. Then I showed you both are APIs (one proprietary, the other open) and you still say it's not a fair comparison, regardless of the supporting hardware.
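To make the "supported hardware" point concrete, here's a minimal sketch (assuming the standard OpenCL headers and an installed ICD loader; nothing here is vendor-specific) that lists every OpenCL device on a machine, whatever the brand. The equivalent CUDA enumeration can only ever return NVIDIA GPUs:

```cpp
// Minimal sketch: enumerate every OpenCL platform and device on the machine.
// The same binary lists AMD, NVIDIA and Intel devices alike, because the API
// is vendor-neutral; which vendors show up depends only on installed drivers.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char pname[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);

        cl_uint num_devices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        if (num_devices == 0) continue;
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char dname[256] = {0}, vendor[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_VENDOR, sizeof(vendor), vendor, nullptr);
            std::printf("%s -> %s (%s)\n", pname, dname, vendor);
        }
    }
    return 0;
}
```

Build it against any vendor's OpenCL SDK and it still reports the other vendors' hardware, which is the whole point of an open API; a CUDA build targets NVIDIA, full stop.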
If you didn't know, a major reason the 7000 series was out of stock so often at launch was that it crushed anything Nvidia had at bitcoin mining, which was just starting to take off at the time. That hasn't changed much: AMD cards are just plain better when it comes to compute.
Yep, that's the exact scenario I had in mind when talking about OpenCL and AMD users bringing it up; I'm just not going to exemplify and illustrate every single thing I mention. And then you back yourself into a corner: better at computing what? When both AMD and NVIDIA say they're DX12 compliant, they mean it at different feature levels [meaning they have hardware optimized for one thing or another]. Designing a "do-it-all" GPU would be very expensive in silicon [area] for all the modules covering anything you can think of; so each company gambles, focuses on supporting certain features in its hardware, and if software actually uses those features, the bet pays off.
NVIDIA cards can handle tessellation a lot better, while AMD focuses on general-purpose computing (kind of a co-processor). Let's put it this way: I could program an FPGA to do FP operations with the highest throughput on the market, running at just 800 MHz. Let's say it blows everything else out of the water. But to do so it sacrifices the integer ALU and everything else a CPU/GPU may have; it just does FP calculations, that's it. You could say that thing is the best at computing (just one group of operations among the broad possibilities of a processor); but it is absolutely incapable of working with integers, or using memory, or doing polynomial reductions (like an AES module can), or anything else you can think of.
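To illustrate the "compliant at different feature levels" point, here's a minimal sketch (Windows-only, assuming the D3D12 SDK headers are available; purely illustrative) that asks the driver which optional DX12 tiers the installed card actually supports:

```cpp
// Minimal sketch: "DX12 support" is really a bundle of optional feature levels
// and tiers, and two perfectly compliant GPUs can report different numbers here.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Default adapter at the baseline feature level every "DX12 card" must meet.
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No DX12-capable device found\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    // Tiers like these are where the vendors diverge in what silicon they spend on.
    std::printf("Resource binding tier:           %d\n", (int)opts.ResourceBindingTier);
    std::printf("Tiled resources tier:            %d\n", (int)opts.TiledResourcesTier);
    std::printf("Conservative rasterization tier: %d\n", (int)opts.ConservativeRasterizationTier);
    return 0;
}
```

Run that on two "DX12 cards" from the same era, one from each vendor, and you'll likely get different tiers back, which is exactly why "better at compute" only means something once you say what you're computing.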
So what you're trying to say is that, by not supporting CUDA, AMD is dividing the market? Wow, who would have thought that not doing the impossible would hurt them so much. No my friend, you have it backwards. Nvidia could optimize more for OpenCL but it won't. AMD cannot optimize for CUDA, a language designed for specific Nvidia hardware. I'm surprised you would even bring this up, seeing as AMD is constantly trying to standardize basic things like adaptive sync and GPGPU languages, the very things Nvidia customers seem so willing to pay for.
Nope, I'm not blaming AMD; I could be blaming NVIDIA, but I'm not blaming either. Simply put, one optimizing for CUDA and the other for OpenCL, one supporting G-SYNC and the other FreeSync, is what divides the market, and that is one possible outcome of competition. You can either compete on common ground (Intel and AMD both supporting x86, for example) or with things that are totally incompatible between competitors (chargers for Apple products, or Sony's Memory Sticks when the competition used SD, just to mention a couple).
Take the war of the currents: the purpose of both sides was the same, the way they produced and distributed electricity was totally different, and in the end one prevailed, with pros and cons on each side. That is an extreme case; NVIDIA and AMD aren't necessarily doing things completely differently, but they surely have several design differences in pursuit of the same purpose. In the end, the user is the one judging and buying; NVIDIA is not putting a gun to anyone's head or blocking the competitor like Intel did in the past. Even the blind test Tom's Hardware did between G-SYNC and FreeSync could be said to have favored the former, and the majority of users said they would pay extra money for the better experience; so that's not NVIDIA engaging in an anti-competitive practice, forcing people to feed its market share and buy proprietary technology.
If you're using Windows [even if pirated] or Mac OS X, you can't complain about this (proprietary vs. open, good company vs. evil company) when it's exactly the same scenario with Linux: CUDA is the proprietary API, OpenCL the open-source one, just with operating systems instead.