Intel: No socketed Skylake CPU with eDRAM, Broadwell not discontinued

Scorpus


There's good news and bad news for those of you wanting to purchase a socketed Intel processor with high-end integrated graphics. Let's start with the bad news.

As has been reported earlier and now confirmed by Intel, there are no plans to produce a socketed Skylake CPU with eDRAM. This essentially means there will be no flagship Skylake part for desktops with high-end Iris or Iris Pro integrated graphics, as both of these GPU families use eDRAM in Skylake SKUs.

eDRAM acts as L4 cache for the CPU and GPU, improving the performance of both. The extra cache is mostly beneficial when gaming on integrated graphics, but it can boost performance in other situations.

Intel will ship mobile CPUs in the near future with Iris Pro (GT3e/GT4e) graphics and 128 MB of eDRAM as part of the Skylake-H line, while Iris graphics and 64 MB of eDRAM are currently available in some mobile Skylake-U parts. On the desktop, however, Broadwell will remain the flagship part for integrated graphics until Intel launches a new line of CPUs.

The good news is that, contrary to some reports, Intel is not discontinuing its socketed Broadwell processors. The Core i7-5775C and i5-5675C only launched three months ago, and they will continue to be available for system builders wanting a high-end part with powerful integrated graphics.


 
So... as we already knew: Intel = stagnation.

Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.
 
So... as we already knew: Intel = stagnation.

Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.
Don't blame Intel, blame AMD instead. If they could offer chips to rival Intel's high-end ones, we wouldn't have to stomach Intel's annual minuscule upgrades.
 
So... as we already knew: Intel = stagnation.

Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.

Or you could factor in the 5820K, which costs a little bit more than the 6700K:
Q3'14 Intel Core i7 5820K (Haswell-E) 22 nm; cores: 6, cache: 15 MB, 3.3 GHz

And depending on the scenario, the i7-6700K can be 3-7 times faster than the Q6600. Source: a handy review here comparing CPUs from Conroe to Haswell.
 
So... as we already knew: Intel = stagnation.

Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.

Yeah, the Q6600 is a 95-watt chip and it's only 4 CPU cores and their supporting tech on the die. The 6700K is also a 95-watt chip, but with a memory controller, a PCIe controller and an integrated graphics chip that rivals the mid-tier cards that were available when the Q6600 launched. On top of that, your chip can now handle 8 threads at once and runs at a clock speed that's 1.6-2.0 GHz higher and is unlocked for easy overclocking, as well as a healthy IPC increase. All of this improvement in performance and efficiency can be had for around $125 less than what a Q6600 cost at its launch.
 
So... as we already knew: Intel = stagnation.

Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.
Don't blame Intel, blame AMD instead. If they could offer chips to rival Intel's high-end ones, we wouldn't have to stomach Intel's annual minuscule upgrades.

It's not like AMD can do anything either. Both Intel and Nvidia use anti-competitive practices to force AMD into where they are now. The only reason we get any illusion of competition in the CPU and GPU markets nowadays is that if Intel and Nvidia didn't come out with new products, ARM CPUs and GPUs would eventually be faster. Hell, at this rate it's likely.

AMD is 1/3rd the size it was when it acquired ATI. Right now, AMD can barely finance its day-to-day operations, let alone R&D. The last full GPU rollout for AMD was GCN 1.0, and that was quite some time ago. It's even worse when you figure that their last new CPU architecture was Bulldozer, even further back in time.

I love AMD, but if they don't get bought out or see some serious cash inflow, I don't see them making it to releasing Zen.
 
Both Intel and Nvidia use anti-competitive practices
That is a term I don't think I will ever understand.

Companies will either work together or compete. If you want two companies to compete and not create a monopoly, then that is exactly what these companies are doing. You call it anti-competitive practices when in fact competing is exactly what they are doing.
 
Both Intel and Nvidia use anti-competitive practices
That is a term I don't think I will ever understand.

Companies will either work together or compete. If you want two companies to compete and not create a monopoly, then that is exactly what these companies are doing. You call it anti-competitive practices when in fact competing is exactly what they are doing.

You obviously do not know the definition of anti-competitive practices. He means that Nvidia and Intel constantly cheat and connive their way into the top position; AMD competes on a product-to-product basis, while Intel and Nvidia forge benchmarks (e.g. the Pentium 4 debacle) and introduce programs like GameWorks that cripple performance on AMD counterparts.
 
while Intel and Nvidia forge benchmarks (e.g. the Pentium 4 debacle) and introduce programs like GameWorks that cripple performance on AMD counterparts
Which is nothing that AMD couldn't counter, if they would simply compete. You are simply stating nVidia and Intel shouldn't compete and make it so difficult for AMD. In other words you don't want them competing, you would rather the market be stagnant.

Oh, and when you say cripple you mean running code designed for nVidia on AMD cards. That is stupid! If you have an AMD card, run code designed for AMD, not nVidia.
 
You obviously do not know the definition of anti-competitive practices. He means that Nvidia and Intel constantly cheat and connive their way into the top position; AMD competes on a product-to-product basis, while Intel and Nvidia forge benchmarks (e.g. the Pentium 4 debacle) and introduce programs like GameWorks that cripple performance on AMD counterparts.

Ehm... I think GameWorks cripples both AMD and NVIDIA, AMD in a worse manner. But it's an additional load on the GPU. I'm not complaining, but if you load more work onto a GPU to enhance realism or simulate in real time, the performance will be worse than without it, no doubt.
 
You are simply stating nVidia and Intel shouldn't compete and make it so difficult for AMD. In other words you don't want them competing, you would rather the market be stagnant.

"Which is nothing that AMD couldn't counter, if they would simply compete."

It's funny that you say that AMD isn't competing, because we both agree. AMD cannot compete when they are being locked out, a la Intel's OEM cartel and Nvidia's GameWorks. Tell me, how is one supposed to sell CPUs when no OEM will buy them (remember the old Athlons?) or make higher-performing video cards when devs implement GameWorks black boxes that damage AMD's performance and prevent driver optimizations?

The GameWorks program isn't competing, nor does it move the market forward. In fact, Nvidia users actually get less performance thanks to it. It's already been proven that the GameWorks program does ridiculous things (insane amounts of tessellation) to give Nvidia an advantage that has nothing to do with how good a graphics card is.

"Ohh and when you say cripple you mean running code designed for nVidia on AMD cards. That is stupid! If you have an AMD card, run code designed for AMD not nVidia."

As if we have a choice. Last time I checked, GameWorks contracts explicitly prevent AMD from touching any code and do not allow AMD to partner up to make this fabled 2nd choice you seem to think AMD users have. Nvidia came out and said that in a video themselves.

Don't give me this "AMD doesn't want to compete", every business does. It's that they cannot.
 
It's not like AMD can do anything either. Both Intel and Nvidia use anti-competitive practices to force AMD into where they are now. The only reason we get any illusion of competition in the CPU and GPU markets nowadays is that if Intel and Nvidia didn't come out with new products, ARM CPUs and GPUs would eventually be faster. Hell, at this rate it's likely.

AMD is 1/3rd the size it was when it acquired ATI. Right now, AMD can barely finance its day-to-day operations, let alone R&D. The last full GPU rollout for AMD was GCN 1.0, and that was quite some time ago. It's even worse when you figure that their last new CPU architecture was Bulldozer, even further back in time.

I love AMD, but if they don't get bought out or see some serious cash inflow, I don't see them making it to releasing Zen.
I don't understand. What anti-competitive practices do Intel & Nvidia employ to force AMD into a corner?
To me it looks like AMD was badly managed and can count themselves lucky to still be in business.
 
Let me just start by saying that I agree with you. For the past 7 years, AMD has had poor management. That part is on AMD. The part that's not on them is what led them to change management so much, and we can blame a large chunk of their downward spiral on market practices by rivals.

Intel blocked AMD out of the market by controlling OEMs. Even though the Athlon II was much better than what Intel had when it first came out, AMD couldn't even give it away to OEMs. It was that bad. Now Intel doesn't even have to do this anymore; their processors are so dominant that OEMs wouldn't dare offer more AMD solutions than Intel.

Nvidia has a reputation for releasing drivers that optimize benchmark results without real-world gains (although we have not had any incidents lately), and GameWorks speaks for itself. Just this year we've seen a ridiculous number of GameWorks games where AMD's performance just drops off. Now it would be fine if these features were actually amazing and did improve the gaming industry, but we all know this is not the case. We've all seen these features implemented on AMD hardware before, working just fine without Nvidia's help. What GameWorks really does is prevent AMD from being able to properly market their cards. It wouldn't matter how good AMD's cards are, GameWorks would bring them down to a level lower than Nvidia's. How can that be fair? How can a 290X be beaten by a 960 in Project CARS? No, GameWorks isn't "competitive", it's just downright wrong.
 
As if we have a choice. Last time I checked, GameWorks contracts explicitly prevent AMD from touching any code and do not allow AMD to partner up to make this fabled 2nd choice you seem to think AMD users have. Nvidia came out and said that in a video themselves.
And where would we be if one country had to share (you know that will never happen) their national secrets, in order for all the others to compete in global warfare (aka: peace)? Seriously think about what you are saying.
 
And where would we be if one country had to share (you know that will never happen) their national secrets, in order for all the others to compete in global warfare (aka: peace)? Seriously think about what you are saying.

GameWorks has nothing to do with "national secrets"; it has to do with locking AMD out of the code and crippling them. You're going to sit here and tell me that Nvidia adding tessellation to every object in a game for no reason is equivalent to a national secret? Please, everything Nvidia has in GameWorks is openly available in other dev kits.

Here's a challenge for you: find one game where GameWorks adds more than it takes. If Nvidia's "national secret" is really so great, why doesn't it produce anything positive in any of the games it's in?
 
Nvidia has a reputation for releasing drivers that optimize benchmark results without real-world gains (although we have not had any incidents lately), and GameWorks speaks for itself. Just this year we've seen a ridiculous number of GameWorks games where AMD's performance just drops off. Now it would be fine if these features were actually amazing and did improve the gaming industry, but we all know this is not the case. We've all seen these features implemented on AMD hardware before, working just fine without Nvidia's help. What GameWorks really does is prevent AMD from being able to properly market their cards. It wouldn't matter how good AMD's cards are, GameWorks would bring them down to a level lower than Nvidia's. How can that be fair? How can a 290X be beaten by a 960 in Project CARS? No, GameWorks isn't "competitive", it's just downright wrong.

I totally agree with you when it comes to the Intel-AMD story and OEMs. What you're saying about GameWorks is not all that accurate; if the developer locks it, it's really bad for everyone. But if you can turn it on and off like in The Witcher 3... then I don't see an unfair advantage. Benchmarks compare all GPUs in those games without GameWorks and you see their true power.

The joke's on AMD: they dismissed PhysX back in 2009, saying its doom was imminent (see how things turned out), and it can run on AMD hardware, since XCOM with PhysX can run on the Xbox 360. Now, NVIDIA bought Ageia, a much better move than AMD buying ATi, and it wouldn't give its IP away for free just after that; would you, as a businessman?

Are CUDA and GameWorks fragmenting the market? Yes, as much as AMD pursuing OpenCL is; that's the point of competition: differ from your competitor and let the consumer decide whose offer is more appealing. It's like calling the brands dominant in their markets anti-competitive for keeping their secrets/IP to themselves. It won't happen with either Coca-Cola or Google any time soon. If a Pepsi had the exact same formula and process as a Coke, people would buy them indifferently and neither would be dominant; what would be the point in competing if their offers were the same and neither subjectively better than the other?
 
How, for the love of whatever is holy, is "AMD pursuing OpenCL" fragmenting the market? Is Nvidia unable to run OpenCL code just as well?
 
I totally agree with you when it comes to the Intel-AMD story and OEMs. What you're saying about GameWorks is not all that accurate; if the developer locks it, it's really bad for everyone. But if you can turn it on and off like in The Witcher 3... then I don't see an unfair advantage. Benchmarks compare all GPUs in those games without GameWorks and you see their true power.

The joke's on AMD: they dismissed PhysX back in 2009, saying its doom was imminent (see how things turned out), and it can run on AMD hardware, since XCOM with PhysX can run on the Xbox 360. Now, NVIDIA bought Ageia, a much better move than AMD buying ATi, and it wouldn't give its IP away for free just after that; would you, as a businessman?

Are CUDA and GameWorks fragmenting the market? Yes, as much as AMD pursuing OpenCL is; that's the point of competition: differ from your competitor and let the consumer decide whose offer is more appealing. It's like calling the brands dominant in their markets anti-competitive for keeping their secrets/IP to themselves. It won't happen with either Coca-Cola or Google any time soon. If a Pepsi had the exact same formula and process as a Coke, people would buy them indifferently and neither would be dominant; what would be the point in competing if their offers were the same and neither subjectively better than the other?

The Witcher 3 wasn't a GameWorks title, just so you know. Certain games will add Nvidia features, but that doesn't make them a GameWorks title. Another example of this is Borderlands 2.

I think what you were trying to say is that CUDA and OpenCL divide the market, but this argument is fundamentally flawed. CUDA is designed specifically to only support CUDA cores on Nvidia hardware. Compare that to OpenCL, which is only an API. AMD isn't locking it to their hardware, nor do I see them pushing devs into it. People who want to use it do, and those who don't, don't. CUDA isn't one of the anti-competitive points I have tried to make, nor do I think that CUDA is anti-competitive.
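To make that distinction concrete, here is a minimal, hypothetical sketch (my own illustration, not taken from Nvidia or AMD material): the same vector-add kernel written for both APIs. The kernel bodies are nearly identical; the difference is that the CUDA one is tied to NVIDIA's compiler and runtime, while the OpenCL one is plain source text that any vendor's driver can compile at runtime.

// Minimal sketch (illustrative only): the same vector-add via both APIs.
// The CUDA kernel is compiled ahead of time by nvcc and runs only on NVIDIA
// GPUs; the OpenCL kernel ships as a source string and is compiled at runtime
// (via clCreateProgramWithSource) for whatever device the installed driver
// exposes (NVIDIA, AMD, Intel, even CPUs).
#include <cstdio>
#include <cuda_runtime.h>

// CUDA version: NVIDIA-only.
__global__ void vec_add_cuda(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// OpenCL version of the same kernel, kept as source text; any vendor's OpenCL
// driver can build and run it (host-side setup omitted here for brevity).
static const char* vec_add_opencl =
    "__kernel void vec_add(__global const float* a, __global const float* b,\n"
    "                      __global float* c, const int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed memory keeps the host code short for this sketch.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vec_add_cuda<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);   // expect 3.0

    (void)vec_add_opencl;          // the OpenCL path would need its own host code
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

That runtime-compilation model is what lets OpenCL target the much wider range of hardware discussed below.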
 
I think what you were trying to say is that CUDA and OpenCL divide the market, but this argument is fundamentally flawed. CUDA is designed specifically to only support CUDA cores on Nvidia hardware. Compare that to OpenCL, which is only an API. AMD isn't locking it to their hardware, nor do I see them pushing devs into it. People who want to use it do, and those who don't, don't. CUDA isn't one of the anti-competitive points I have tried to make, nor do I think that CUDA is anti-competitive.

Uhm... yeah... CUDA is also an API, with a programming model, toolkit and the like, that runs on CUDA cores. If both CUDA and OpenCL are APIs with similar purposes... the comparison is pretty fair. We all know AMD is optimizing towards and promoting OpenCL, since they produce APUs and apparently their GPUs also do a better job at it than NVIDIA's. So software supporting one or the other runs better on one or the other company's hardware, thus dividing the market to a greater or lesser degree.
 
Uhm... yeah... CUDA is also an API, with a programming model, toolkit and the like, that runs on CUDA cores. If both CUDA and OpenCL are APIs with similar purposes... the comparison is pretty fair. We all know AMD is optimizing towards and promoting OpenCL, since they produce APUs and apparently their GPUs also do a better job at it than NVIDIA's. So software supporting one or the other runs better on one or the other company's hardware, thus dividing the market to a greater or lesser degree.

No, the comparison is not. CUDA only works on hardware with CUDA cores. OpenCL is thus more abstract and is supported on a much, much wider range of hardware.

The only way AMD is "promoting" OpenCL is by supporting it on their GPUs. The reason Nvidia cards don't do as well in OpenCL comes down to two things. One, Nvidia doesn't care. The only company dividing the market is Nvidia, which continuously chooses to force customers into its proprietary content; G-Sync is no special sauce and should have been open and available to everyone. Two, AMD hardware is much better when it comes to compute. If you didn't know, a major reason the 7000 series was out of stock so much at launch is that it crushed anything Nvidia had at bitcoin mining, which was just starting to take off at the time. It hasn't changed much; AMD cards are just plain better when it comes to compute.

"So software supporting one or the other runs better on one or the other company's hardware, thus dividing the market to a greater or lesser degree."

So what you're trying to say is that by not supporting CUDA, AMD is dividing the market? Wow, who ever thought that not doing something impossible would hurt them so much. No my friend, you have it backwards. Nvidia could optimize more for OpenCL, but it won't. AMD cannot optimize for CUDA, a language designed for specific Nvidia hardware. I'm surprised that you would even bring this up, seeing as AMD is constantly trying to standardize basic things like adaptive sync and a GPGPU language that Nvidia customers seem so willing to pay for.

I guess async compute will divide the market as well. Nvidia claimed full DX12 support, but as it turns out, they don't have hardware-level async compute. I'm sure Nvidia will have to use as many GameWorks titles as possible to minimize the advantage that AMD could have.
 
The only company dividing the market is Nvidia, which continuously chooses to force customers into its proprietary content; G-Sync is no special sauce and should have been open and available to everyone.
It is official, you are trying to change the definition of competing. You can't compete while holding hands.
Two, AMD hardware is much better when it comes to compute. If you didn't know, a major reason the 7000 series was out of stock so much at launch is that it crushed anything Nvidia had at bitcoin mining, which was just starting to take off at the time. It hasn't changed much; AMD cards are just plain better when it comes to compute.
It has been a few years since I was part of the project, but AMD sucked at Folding@Home. You say nVidia sucks at compute, but that really depends on the type of computing. Here you are praising compute while badgering Cuda, when they are two competing aspects. This makes you biased, plain and simple.
 
Q1'07 Intel Core 2 Quad Q6600 (Kentsfield) 65 nm, cores: 4, cache: 8 MB, 2.4 GHz
Q3'15 Intel Core i7-6700K (Skylake) 14 nm, cores: 4, cache: 8 MB, 4.0 GHz

Real-world application performance gain of at best +100% ... with 8 years of R&D.

Or you could factor in the 5820K, which costs a little bit more than the 6700K:
Q3'14 Intel Core i7 5820K (Haswell-E) 22 nm; cores: 6, cache: 15 MB, 3.3 GHz

No, I could not. We're talking about consumer-level stuff here. Or I could also factor in 18-core Xeons... which I don't.

And depending on the scenario, the i7-6700K can be 3-7 times faster than the Q6600. Source: a handy review here comparing CPUs from Conroe to Haswell.

No one really cares about special-case "scenarios", but actual average everyday uses - gaming, compressing, converting videos, Photoshopping. Also, you forgot the link, so I have no idea what review you're talking about.

Yeah, the Q6600 is a 95-watt chip and it's only 4 CPU cores and their supporting tech on the die. The 6700K is also a 95-watt chip, but with a memory controller, a PCIe controller and an integrated graphics chip that rivals the mid-tier cards that were available when the Q6600 launched. On top of that, your chip can now handle 8 threads at once and runs at a clock speed that's 1.6-2.0 GHz higher and is unlocked for easy overclocking, as well as a healthy IPC increase.

There are still only 4 cores to run them. Same goes for the iGPU, which is completely unusable for gaming purposes or in any actual pro graphics use. The Q6600 was also very easily overclockable. Also, the Skylake step back from Broadwell's 128 MB of eDRAM L4 cache is complete bulls**t from Intel. It's almost like "oh, we finally had some more progress... and now it's gone!!" Yes, if you are in some video editing business, of course new CPUs are way better, no question there... but they could/should be even better! :)
 
No one really cares about special-case "scenarios", but actual average everyday uses - gaming, compressing, converting videos, Photoshopping. Also, you forgot the link, so I have no idea what review you're talking about.

Wow, troll much? Biased, beyond any reason, aggressive... shall I go on? Dude, the 5820K (other than names, marketing, and chipset) is on par in price with the "consumer"-level Skylake i7, which has an IGP that won't work as soon as you put in a dGPU (you're giving me the reason in your third paragraph): you can have a similarly priced mobo + CPU combo with both options, and the DDR4 vs DDR3 price argument no longer applies.

And it comes down to reading comprehension:
* "here": Techspot
* "handy review": look in the reviews section...
* "...from Conroe to Haswell": damn! Which of the reviews must be about that?! Oh, wait... it must be the "Then and Now: Almost 10 Years of Intel CPUs Compared" review.

Do I seriously have to put in an internal link? And I said "depending on the scenario" so you can make your own judgement based on whatever scenario interests you. 3-7 times means "at least 3 times as fast; as much as 7 times as fast", so even the worst-case scenario is a 200% improvement; far from "at best +100%".

No, the comparison is not. CUDA only works on hardware with CUDA cores. OpenCL is thus more abstract and is supported on a much, much wider range of hardware.

First, you said I couldn't compare an API with an architecture. Then I show you both are APIs (one proprietary, the other open) and you still say it's not a fair comparison, regardless of the supporting hardware.

If you didn't know, a major reason the 7000 series was out of stock so much at launch is that it crushed anything Nvidia had at bitcoin mining, which was just starting to take off at the time. It hasn't changed much; AMD cards are just plain better when it comes to compute.

Yep, that's the exact scenario I had in mind when talking about OpenCL and AMD users; I'm just not going to exemplify and illustrate every single thing I mention. And then you get yourself into a corner: better at computing what? When both AMD and NVIDIA say they're DX12 compliant, they mean it at different feature levels [meaning they have hardware optimized for one thing or another]. Designing a "do-it-all" GPU would be very expensive in the silicon [area] needed for all the modules to do anything you can think of; so each company gambles and focuses on supporting certain features in their hardware, and if certain software uses that, it pays off.

NVIDIA cards can handle tessellation a lot better, while AMD focuses on general-purpose computing (kind of a co-processor). Let's put it this way: I could program an FPGA to do FP operations with the highest throughput in the market, running at just 800 MHz. Let's say it can blow everything else that exists out of the water. But to do so it sacrifices an ALU for integers and everything else a CPU/GPU may have; it just does that (FP calculations), that's it. You could say that thing is the best for computing (just one group of operations among the broad possibilities a processor can cover); but this thing is absolutely incapable of working with integers, or using memory, or doing polynomial reductions (like an AES module can), or any other thing you can think of.

So what you're trying to say is that by not supporting CUDA, AMD is dividing the market? Wow, who ever thought that not doing something impossible would hurt them so much. No my friend, you have it backwards. Nvidia could optimize more for OpenCL, but it won't. AMD cannot optimize for CUDA, a language designed for specific Nvidia hardware. I'm surprised that you would even bring this up, seeing as AMD is constantly trying to standardize basic things like adaptive sync and a GPGPU language that Nvidia customers seem so willing to pay for.

Nope, I'm not blaming AMD; I could be blaming NVIDIA, but I'm not blaming either. Simply, one optimizing for CUDA and the other for OpenCL, one supporting G-SYNC and the other FreeSync, is what divides the market, and that is one possible outcome of competition. You can either compete with common things (Intel and AMD supporting x86, for example) or with things that are totally incompatible between the competitors (chargers for Apple products, or the Sony Memory Sticks when the competitors used SD, just to mention a couple).

You have the war of the currents: the purpose of both sides was the same, the way they produced and distributed electricity was totally different, and in the end one prevailed, with the pros and cons for each side. That is an extreme case; NVIDIA and AMD aren't necessarily doing things completely differently, but they surely have several differences in their designs to achieve the same purpose. In the end, the user is the one judging and buying; NVIDIA is not putting a gun to their heads or blocking the competitor like Intel did in the past. Even the blind test Tom's Hardware did between G-SYNC and FreeSync could be said to have favored the former; the majority of users said they would pay extra money for the best experience, so that's not NVIDIA committing an anti-competitive practice by obliging them to feed its market share and buy proprietary technology.

If you're using Windows [even if pirated] or Mac OS X you can't be complaining about it (proprietary vs open; good vs evil company) when it's exactly the same scenario as with Linux (CUDA, proprietary API; OpenCL, open API) but with operating systems.
 
It is official, you are trying to change the definition of competing. You can't compete while holding hands.
It has been a few years since I was part of the project, but AMD sucked at Folding@Home. You say nVidia sucks at compute, but that really depends on the type of computing. Here you are praising compute while badgering Cuda, when they are two competing aspects. This makes you biased, plain and simple.

I'm not trying to change the definition of anything. Here you are posing a strawman argument that discusses nothing on point. FYI, just because something doesn't fit your abject definition of "compete" doesn't mean it's wrong.

Now please, I dare you to actually respond to the point made earlier instead of posting another strawman.

First, you said I couldn't compare an API with an architecture. Then I show you both are APIs (one proprietary, the other open) and you still say it's not a fair comparison, regardless of the supporting hardware.



Yep, that's the exact scenario I had in mind when talking about OpenCL and AMD users; I'm just not going to exemplify and illustrate every single thing I mention. And then you get yourself into a corner: better at computing what? When both AMD and NVIDIA say they're DX12 compliant, they mean it at different feature levels [meaning they have hardware optimized for one thing or another]. Designing a "do-it-all" GPU would be very expensive in the silicon [area] needed for all the modules to do anything you can think of; so each company gambles and focuses on supporting certain features in their hardware, and if certain software uses that, it pays off.

NVIDIA cards can handle tessellation a lot better, while AMD focuses on general-purpose computing (kind of a co-processor). Let's put it this way: I could program an FPGA to do FP operations with the highest throughput in the market, running at just 800 MHz. Let's say it can blow everything else that exists out of the water. But to do so it sacrifices an ALU for integers and everything else a CPU/GPU may have; it just does that (FP calculations), that's it. You could say that thing is the best for computing (just one group of operations among the broad possibilities a processor can cover); but this thing is absolutely incapable of working with integers, or using memory, or doing polynomial reductions (like an AES module can), or any other thing you can think of.



Nope, I'm not blaming AMD; I could be blaming NVIDIA, but I'm not blaming either. Simply, one optimizing for CUDA and the other for OpenCL, one supporting G-SYNC and the other FreeSync, is what divides the market, and that is one possible outcome of competition. You can either compete with common things (Intel and AMD supporting x86, for example) or with things that are totally incompatible between the competitors (chargers for Apple products, or the Sony Memory Sticks when the competitors used SD, just to mention a couple).

You have the war of the currents: the purpose of both sides was the same, the way they produced and distributed electricity was totally different, and in the end one prevailed, with the pros and cons for each side. That is an extreme case; NVIDIA and AMD aren't necessarily doing things completely differently, but they surely have several differences in their designs to achieve the same purpose. In the end, the user is the one judging and buying; NVIDIA is not putting a gun to their heads or blocking the competitor like Intel did in the past. Even the blind test Tom's Hardware did between G-SYNC and FreeSync could be said to have favored the former; the majority of users said they would pay extra money for the best experience, so that's not NVIDIA committing an anti-competitive practice by obliging them to feed its market share and buy proprietary technology.

If you're using Windows [even if pirated] or Mac OS X you can't be complaining about it (proprietary vs open; good vs evil company) when it's exactly the same scenario as with Linux (CUDA, proprietary API; OpenCL, open API) but with operating systems.

First Para,

Yes, you could say that both are APIs, but CUDA doesn't fit the definition of what everyone would call an API. You've got hardware APIs like DirectX, OpenGL, and OpenCL. You've also got more abstract APIs like Google Maps integration and social media interfaces. The big difference between these and the CUDA API is that CUDA only works on very specific hardware. Usually APIs allow programmers to reach a larger number of devices more easily. Nvidia's CUDA API doesn't do that. One could program for just as many Nvidia devices just fine without it.

Second Para,

Nvidia explicitly claimed FULL DX12 compliance. They were not just claiming a subset; they said they supported every feature. They even contrasted that with AMD, which openly admits to not having full DX12 support.

AMD cards have better double precision. The 390X beats the 980 Ti and the Titan X by a large margin.

Third Para,

I've got no problems with this. Each architecture is unique.

Fourth Para,

I agree with you here. Nvidia isn't nearly as bad as Intel was, that's for sure. Nvidia may not be holding a gun to people's heads, but they certainly raise eyebrows when you see negative patterns develop. I can't see any other explanation as to why nearly all GameWorks titles have a huge performance drop on AMD hardware. To GPU companies, games are what sell the cards. If you don't do well in games, you don't sell cards.

Last Para,

I don't think that analogy quite works out. It's more like Nvidia is a console OS and AMD is Linux. Nvidia's tech like CUDA only works on a very small selection of hardware. Hell, most of Nvidia's newest stuff like game streaming, CUDA, and G-Sync requires you to buy near-latest cards. G-Sync even requires a specific monitor.
 
Now please, I dare you to actually respond to the point made earlier instead of posting another strawman.
How about this: your anti-competitive practices claim is irrelevant. 90% of the market knew nothing of it. The greatest portion of the rest didn't care, so it can't be all that relevant. You can make up excuses for AMD all you wish; that doesn't make them true.
 