Why is Amazon building CPUs?

There is no efficiency gain in processors that can't even run the software consumers expect, and in the tests where they are compatible, they run much slower.

They are just increasing profit margins to please shareholders, nothing more. If you don't understand the difference between a business "making money in a healthy way, developing services and products that add value and quality, etc." and "pure greed, where the focus is profit regardless of the means: people don't need it, it's not new, it's not faster, but hey... I'll increase my margin by a few percent,"

then you won't understand what's to come when a machine takes your place. No matter what you do, a machine eventually can and will take your place, because the goal is profit. Period.
You're stuck in the Windows mindset. You might be surprised to know that the Amazon CPUs run Linux quite well. We're not talking about people who buy a Chromebook for school here; we are talking about enterprise-class applications. Price and performance always matter to these people. I know, because I work with people moving to the cloud every day.
As for your claims of lesser performance, Amazon claims that Graviton2 delivers 40% better price/performance than comparable x86, and the new generation, Graviton3, delivers 25% more compute than Graviton2.

What makes you think people don't want lower powered devices that cost less to operate? What fantasy world do you live in where that is true?

You might also want to read the recent article on the MacBook w/M2 performing as well as or better than a comparable Intel/Windows laptop.
I'm guessing you don't work in a major corporation, or if you do, you have no visibility into the inner financial workings of the company.

Companies like Amazon are "for profit" companies and as such they do attempt to earn good profits. However, no one sits in the boardroom and thinks about how they can screw customers over for more profit.
 
Production costs are pretty small for CPUs. For example, a 400mm2 CPU with an assumed 30% defect rate only costs around 300 dollars to manufacture. Compare that against AMD and Intel server CPU prices. Design costs are a different matter, but manufacturing alone is cheap.
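As a rough sketch of that estimate (only the die size and defect rate come from the claim above; the wafer size and wafer price are assumed round numbers):

```python
import math

# Assumptions for illustration: a 300 mm wafer and an assumed wafer price.
WAFER_DIAMETER_MM = 300
WAFER_COST_USD = 10_000       # assumed leading-edge wafer price, not a quoted figure
DIE_AREA_MM2 = 400            # die size from the post
DEFECT_RATE = 0.30            # 30% of dies assumed defective, per the post

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float) -> int:
    """Common approximation: wafer area / die area, minus edge losses."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

gross = dies_per_wafer(DIE_AREA_MM2, WAFER_DIAMETER_MM)
good = int(gross * (1 - DEFECT_RATE))
print(f"gross dies per wafer: {gross}, good dies: {good}")
print(f"manufacturing cost per good die: ${WAFER_COST_USD / good:.0f}")
```

Even with a considerably pricier wafer than assumed here, the raw manufacturing cost per good die stays in the low hundreds of dollars; packaging, test, and above all design and validation are what sit on top of that.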

The buyer is always right. Since Intel sells the most, power does not matter IRL. It only matters in your imagination.
Like I said, operational cost, cooling and power as an example, can dwarf the price of the CPU. Hence why companies like Amazon and Microsoft are working hard to find lower power solutions to go into the datacenter.

And you're right, the customer is right. You're wrong however, about Intel selling more. ARM outsells x86 by a HUGE margin. According to estimates, x86 does about 350-360 MILLION units a year. ARM did 6.7 BILLION in Q4 2020. Mostly mobile, but also other devices like routers, firewalls, switches, servers and more. Why isn't Intel in those types of deployments? POWER. I'd love to see your x86 in a mobile phone platform. It would have about 30 mins of battery life. LOL

Just because you can't see it, doesn't mean that people who build large datacenters aren't concerned about power consumption. They are very concerned and have been for decades. I've worked with various people at MS and AWS on building out their datacenter. Power was number 1 on their priority list.
 
Both Apple and Amazon are stuck on this, wasting TSMC's silicon and capacity by not being able to produce x86 CPUs. The truth is that these are inferior products in terms of performance per die size, compatibility, and even performance per watt in worst-case scenarios. Yes, I'm talking about performance on Linux itself.

Do you think Amazon knows this, or are they just making a gamble, like Intel has already made and failed at several times in other segments?

It's quite simple: these corporations have a complex ecosystem that customers are used to and are unlikely to leave, so they have the power to push these products onto their loyal consumers. But overall the products are not revolutionary at all, not efficient, and they are just wasting precious capacity at TSMC. .-.

c6a.4xlarge - The AMD EPYC 7003 "Milan" instance type, powered by an AMD EPYC 7R13 processor. The c6a.4xlarge instance was priced on-demand at $0.612 USD per hour.

c7g.4xlarge - The new Graviton3 instance type with Neoverse-V1 cores. The c7g.4xlarge on-demand pricing is currently at $0.58 USD per hour.
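For what it's worth, the price gap between those two instance types is only about 5% per hour; here's a quick sketch of the arithmetic (the relative-performance figure is a placeholder to swap for your own benchmark result, not a measured number):

```python
# On-demand prices quoted above (USD per hour)
c6a_price = 0.612   # c6a.4xlarge, AMD EPYC "Milan"
c7g_price = 0.58    # c7g.4xlarge, Graviton3

saving = 1 - c7g_price / c6a_price
print(f"c7g.4xlarge is {saving:.1%} cheaper per hour")

# Placeholder: relative performance of c7g vs c6a on *your* workload.
# 1.0 assumes equal performance; plug in a measured ratio instead.
relative_perf = 1.0
price_perf_gain = relative_perf * (c6a_price / c7g_price) - 1
print(f"price/performance advantage: {price_perf_gain:.1%}")
```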
 
The reality is, Linux (not counting Android running on the Linux kernel... which does count, but not for the point I'm making here) on ARM is mature and fully developed. I ran an Acer Chromebook with an Nvidia Tegra K1 (quad-core ARM, with a roughly GTX 650-speed GPU; an older version of what's in the Nintendo Switch). I had full desktop Ubuntu running on there... I had a full complement of software, including a working Nvidia driver (both OpenGL and CUDA), Android Studio, compilers, MySQL, Python, Java, PHP, Rust, etc., LibreOffice, Firefox & Chromium (I *think* there's a Chrome build for ARM now?)

For a server, I doubt there's any difference in availability of packages between x86-64 and ARM. In terms of optimizations, the video encoding/decoding libraries, math libraries, etc., have had ARM optimizations in them for years, and gcc and clang will happily spit out SIMD instructions for ARM just as they do for x86-64. I think it may come as a surprise (especially if one comes from a Windows background) how little difference it makes whether your Linux install is running on x86-64, ARM, MIPS, POWER, RISC-V, or whatever else; you're just limited by how many MIPS, how many MBs of RAM, what storage speed and capacity you've got, and how many watts you're pulling down for it.
 
Both Apple and Amazon are stuck on this, wasting TSMC's silicon and capacity by not being able to produce x86 CPUs. The truth is that these are inferior products in terms of performance per die size, compatibility, and even performance per watt in worst-case scenarios. Yes, I'm talking about performance on Linux itself.

snip
Do you have any sources for that statement?

This article, https://www.notebookcheck.net/Apple...U-efficiency-compared-to-the-M1.637834.0.html, seems to dispute your assertion that ARM is less efficient than x86. In Cinebench, the Apple M2 draws about 6 times less package power than a Core i7 in the single-core test and about 3 times less in the multi-core test. In efficiency, the M2 is about 5x better single-core and about 2x better multi-core compared to the i7.
 
Like I said, operational cost, cooling and power as an example, can dwarf the price of the CPU. Hence why companies like Amazon and Microsoft are working hard to find lower power solutions to go into the datacenter.
Just like it's supposedly important to be nature-friendly. Pay a visit to the nearest airport to see how well that actually goes :D
And you're right, the customer is right. You're wrong however, about Intel selling more. ARM outsells x86 by a HUGE margin. According to estimates, x86 does about 350-360 MILLION units a year. ARM did 6.7 BILLION in Q4 2020. Mostly mobile, but also other devices like routers, firewalls, switches, servers and more. Why isn't Intel in those types of deployments? POWER. I'd love to see your x86 in a mobile phone platform. It would have about 30 mins of battery life. LOL
I was talking about server CPUs. Intel just isn't interested because, again, manufacturers are afraid of vendor lock-in.

Power is not an issue. Just because ARM is present in low-power devices does not mean x86 could not match ARM on power consumption. Intel just has no real interest in very low-power parts now; for example, the Quark line was discontinued.
Just because you can't see it, doesn't mean that people who build large datacenters aren't concerned about power consumption. They are very concerned and have been for decades. I've worked with various people at MS and AWS on building out their datacenter. Power was number 1 on their priority list.
They say they are concerned, but when reality strikes, that interest turns out to be just words. It's exactly the same thing you see in politics, for example: having little debt is supposed to be very important, but in reality it's something else.
 
Just like it's supposedly important to be nature-friendly. Pay a visit to the nearest airport to see how well that actually goes :D

I was talking about server CPUs. Intel just isn't interested because, again, manufacturers are afraid of vendor lock-in.

Power is not an issue. Just because ARM is present in low-power devices does not mean x86 could not match ARM on power consumption. Intel just has no real interest in very low-power parts now; for example, the Quark line was discontinued.

They say they are concerned, but when reality strikes, that interest turns out to be just words. It's exactly the same thing you see in politics, for example: having little debt is supposed to be very important, but in reality it's something else.
Well, power is an issue. One reason Intel has no real interest in very low-power parts is that when they did pursue this path, they ended up with some Atom CPUs that were dreadfully slow but still used more power than the ARMs they were meant to compete with.

As for data centers -- yes, they are concerned. The big operational costs for a data center are going to be bandwidth and power, so a lower-power CPU cuts costs there. It lowers cooling costs, further cutting costs. And (if you were designing for it) you'd need 20% less UPS/generator capacity (or the UPS and generator fuel will last 20% longer). Or, going the other way, keep the same power, cooling, and power backups but cram 20% more cores into each 1U: the same operational expenses with 20% more cash coming in.
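A toy example of that rack-level math (the rack budget and per-server wattages are assumed round numbers, not measurements):

```python
# Assumed round numbers, purely for illustration
RACK_POWER_BUDGET_W = 12_000   # power that can be delivered to one rack
CORES_PER_SERVER = 64

builds = {
    "x86 build": 500,          # assumed watts per 1U server
    "lower-power build": 400,  # assumed 20% lower draw
}

for label, watts in builds.items():
    count = RACK_POWER_BUDGET_W // watts
    print(f"{label}: {count} servers/rack, {count * CORES_PER_SERVER} cores/rack")
```

With these made-up numbers, the lower-power build fits 30 servers instead of 24 in the same rack, roughly 25% more cores for the same power feed and cooling plant.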
 
Do you have any sources for that statement?

This article, https://www.notebookcheck.net/Apple...U-efficiency-compared-to-the-M1.637834.0.html, seems to dispute your assertion that ARM is less efficient than x86. In Cinebench, the Apple M2 draws about 6 times less package power than a Core i7 in the single-core test and about 3 times less in the multi-core test. In efficiency, the M2 is about 5x better single-core and about 2x better multi-core compared to the i7.
This chip is built on a much more efficient process, and it is relatively huge compared to its competitors, yet it still only ends up even in the fight with AMD's more efficient 7nm/6nm chips: https://www.phoronix.com/review/apple-m2-amd-ryzen/2


Intel and AMD don't use this strategy of building the biggest chip possible to chase efficiency in their APUs, because they don't have a market as large and loyal as Apple's, focused only on expensive, high-margin products. They offer products for all segments, and to reach that broad market with limited production capacity they have to play smarter, seeking efficiency through bigger advances in architecture and execution, and thus also getting better performance per transistor.
 
This chip is built on a much more efficient process, and it is relatively huge compared to its competitors, yet it still only ends up even in the fight with AMD's more efficient 7nm/6nm chips: https://www.phoronix.com/review/apple-m2-amd-ryzen/2


Intel and AMD don't use this strategy of building the biggest chip possible to chase efficiency in their APUs, because they don't have a market as large and loyal as Apple's, focused only on expensive, high-margin products. They offer products for all segments, and to reach that broad market with limited production capacity they have to play smarter, seeking efficiency through bigger advances in architecture and execution, and thus also getting better performance per transistor.
Huge compared to competitors? How so? According to Apple, the M2 die size is 141.7 mm2, while Intel CPUs are similar or larger at 153.6 mm2 (i7-8700), 180.3 mm2 (i9-9900) and 206.1 mm2 (i9-10900). PS: the M2 uses a 5nm process.

First you tell me that Intel/AMD dominate PC builds, and then you say Apple has a more "numerous and loyal" market. I think Native Americans call that speaking with a forked tongue. There are far more PCs using Intel than Apple M2, so I cannot accept that there is not a larger and very loyal PC market. The business use of PCs alone dwarfs Apple.

But, we're not talking about PCs here. We are talking about datacenter servers. Intel still has the lead in installations but ARM cores are being deployed every day and with Apple's M2 and other ARM architectures, I see that Intel/AMD will have to up their game to maintain their lead.

As for better performance per transistor, that's somewhat irrelevant when the M2 can perform close to or better than the AMD at a fraction of the power. And now we're back to Power and why it matters. You repeatedly ignore every reference article I've provided that shows that data center designers are very concerned about power. Just because they have implemented Intel or AMD CPUs doesn't mean they aren't trying to find better ways to reduce power consumption. Now that lower power alternatives are becoming available, they are starting to deploy them such as the Amazon Graviton processors.
 
Just like it's supposedly important to be nature-friendly. Pay a visit to the nearest airport to see how well that actually goes :D
Not sure how airports relate to cloud computing. It's somewhat of a non sequitur.
I was talking about server CPUs. Intel just isn't interested because, again, manufacturers are afraid of vendor lock-in.

Power is not an issue. Just because ARM is present in low-power devices does not mean x86 could not match ARM on power consumption. Intel just has no real interest in very low-power parts now; for example, the Quark line was discontinued.
Well, I hate to break it to you but Intel is, in fact, looking at better power utilization. You may want to read this article to see what Intel is doing to increase performance per watt, or how to reduce power consumption while maintaining performance.
They say they are concerned, but when reality strikes, that interest turns out to be just words. It's exactly the same thing you see in politics, for example: having little debt is supposed to be very important, but in reality it's something else.
You're confusing the desire to reduce power consumption with the availability of components to effect that strategy. You clearly have no visibility into what Amazon and Microsoft et al are doing to reduce power consumption in data centers above and beyond CPUs. They have been working on this problem ever since they began developing their Cloud offerings.
 
Huge compared to competitors? How so? According to Apple, the M2 die size is 141.7 mm2, while Intel CPUs are similar or larger at 153.6 mm2 (i7-8700), 180.3 mm2 (i9-9900) and 206.1 mm2 (i9-10900). PS: the M2 uses a 5nm process.

First you tell me that Intel/AMD dominate PC builds, and then you say Apple has a more "numerous and loyal" market. I think Native Americans call that speaking with a forked tongue. There are far more PCs using Intel than Apple M2, so I cannot accept that there is not a larger and very loyal PC market. The business use of PCs alone dwarfs Apple.

But, we're not talking about PCs here. We are talking about datacenter servers. Intel still has the lead in installations but ARM cores are being deployed every day and with Apple's M2 and other ARM architectures, I see that Intel/AMD will have to up their game to maintain their lead.

As for better performance per transistor, that's somewhat irrelevant when the M2 can perform close to or better than the AMD at a fraction of the power. And now we're back to Power and why it matters. You repeatedly ignore every reference article I've provided that shows that data center designers are very concerned about power. Just because they have implemented Intel or AMD CPUs doesn't mean they aren't trying to find better ways to reduce power consumption. Now that lower power alternatives are becoming available, they are starting to deploy them such as the Amazon Graviton processors.
I said big "Relatively" because of the different manufacturing processes, 5nm is more than 50% denser than the first generation 7nm. This chip in 7nm would be the size of an offboard GPU like the RX6700. lol

The M2 has twice as many transistors as a ryzen 6800u, this is an important metric to evaluate the efficiency of the design. In other words it is just a big chip, there is nothing revolutionary there, also it is not as efficient as you imagine it depends on the format and cooling of the MacBook, there are a huge performance loss due to thermal throttling.

I meant to be kind to Apple, in reality they have fanatical consumers who pay much more for products that offer no advantage whatsoever. They don't even consider using a windows PC.
 
Well, power is an issue. One reason Intel has no real interest in very low-power parts is that when they did pursue this path, they ended up with some Atom CPUs that were dreadfully slow but still used more power than the ARMs they were meant to compete with.
That is because first Atoms were generally very bad CPUs. Basically Atoms were beaten by underclocked desktop chip, both on performance and power consumption. Intel has never put much money on low power chips. Probably because customers are afraid of vendor lock-in, not problem with ARM.
As for data centers -- yes, they are concerned. The big operational costs for a data center are going to be bandwidth and power, so a lower-power CPU cuts costs there. It lowers cooling costs, further cutting costs. And (if you were designing for it) you'd need 20% less UPS/generator capacity (or the UPS and generator fuel will last 20% longer). Or, going the other way, keep the same power, cooling, and power backups but cram 20% more cores into each 1U: the same operational expenses with 20% more cash coming in.
More power-hungry chips need more power, so what? You can get electricity from the power outlet. It costs money? So what? There is always money for computers. I have seen over 1000 situations where lower-powered parts would have been cheaper and saved money on electricity, cooling, etc. But guess what? Nobody cares.

Because money is not an issue, neither is power consumption. Simple. You seem to think money is an issue with servers, but it's not. That's the difference between theory and reality.

Huge compared to competitors? How so? According to Apple, the M2 die size is 141.7 mm2, while Intel CPUs are similar or larger at 153.6 mm2 (i7-8700), 180.3 mm2 (i9-9900) and 206.1 mm2 (i9-10900). PS: the M2 uses a 5nm process.
Apple has not disclosed the M2 die size. Also, the M2 is missing support for the older instruction sets Apple used, and it lacks a memory controller that supports external memory. The comparison is pretty pointless.

As for better performance per transistor, that's somewhat irrelevant when the M2 can perform close to or better than the AMD at a fraction of the power. And now we're back to Power and why it matters. You repeatedly ignore every reference article I've provided that shows that data center designers are very concerned about power. Just because they have implemented Intel or AMD CPUs doesn't mean they aren't trying to find better ways to reduce power consumption. Now that lower power alternatives are becoming available, they are starting to deploy them such as the Amazon Graviton processors.
They already had an option for lower power consumption: AMD processors. Guess what? The majority still uses Intel. That pretty much invalidates your claims.

I ignore your "sources" because most of them are like this:

Amazon tells us how amazing their new chip is. It has low power consumption and that makes it the best choice for our company. We don't even sell it to others because it's so good.

Now, what would you expect them to say? That their new CPU simply sucks against the competition?? Of course Amazon promotes power consumption, because they have a chip whose only real advantage is power consumption. What else would you expect?

Because Intel has high power consumption but high single-core performance, do you expect Intel to promote power consumption? And because AMD has excellent performance per watt but worse single-thread performance than Intel (in some cases), do you expect AMD to promote single-core performance?

You use sources where a company promotes its OWN product and tells you how important the exact thing that product happens to be good at is. And now you question why I ignore them. Well, there's your answer above.
Not sure how airports relate to cloud computing. It's somewhat of a non sequitur.
Just an example of something that is supposed to be important, and that you see the media talking about all the time. But in reality, it isn't.

The same applies to this "low power consumption matters" idea. It is supposed to be important, but in reality, it isn't.
Well, I hate to break it to you but Intel is, in fact, looking at better power utilization. You may want to read this article to see what Intel is doing to increase performance per watt, or how to reduce power consumption while maintaining performance.
A new manufacturing process usually offers better performance at the same power consumption OR lower power consumption at the same performance. Otherwise there is not much point in developing new processes. Basically, Intel is saying their new processes do not totally suck. Did you expect them to say something else?
You're confusing the desire to reduce power consumption with the availability of components to effect that strategy. You clearly have no visibility into what Amazon and Microsoft et al are doing to reduce power consumption in data centers above and beyond CPUs. They have been working on this problem ever since they began developing their Cloud offerings.
Since AMD released Zen 2 Rome, Intel has had absolutely nothing to answer AMD with when it comes to power consumption and performance per watt. If you were correct, AMD would have sold every single Epyc they could manufacture. Since that isn't the case, it's very evident that power consumption is nowhere near as important as you are saying.
 
I said big "Relatively" because of the different manufacturing processes, 5nm is more than 50% denser than the first generation 7nm. This chip in 7nm would be the size of an offboard GPU like the RX6700. lol
Density is a good thing, especially when you can do that at a lower power draw. At the end of the day, does it really matter? Whether I have a million or a billion transistors on the die I don't care. I think what matters is what performance can I get and at what power draw? For data centers, this comes down to how many devices can I get in a rack without exceeding the maximum power I can deliver to the rack. It impacts the cost of cooling I need as well as the cost of having to build a new data center because I've maxed out on power.
The M2 has twice as many transistors as a ryzen 6800u, this is an important metric to evaluate the efficiency of the design. In other words it is just a big chip, there is nothing revolutionary there, also it is not as efficient as you imagine it depends on the format and cooling of the MacBook, there are a huge performance loss due to thermal throttling.
No one said it was "revolutionary", that's not the conversation at all. What was said is that ARM is more power efficient than x86. Consider that the M2 also is a SOC, which includes memory and other components that you usually might find on the motherboard. It's not really an apples to apple comparison (no pun intended).
Thermal throttling is not an issue, at least not for compute resources in a data center. There would be a huge performance loss if a Core i7 went into thermal throttling as well. It's a moot point.
I was trying to be kind to Apple; in reality they have fanatical consumers who pay much more for products that offer no advantage whatsoever. They won't even consider using a Windows PC.
Again, Apple really isn't the discussion here. I used M2 as an example of ARM efficiencies because that data was readily available. But, again, this conversation is about using ARM due to lower power requirements, not whether Apple is making the right kind of products or not. As for fanatics, for every Apple fanatic there are dozens of Intel fanatics, AMD fanatics, Windows fanatics, Linux fanatics and more. When it comes to paying more and getting less, you might find this article interesting.

I think some people would tell you that there are advantages to MacOS depending on what you're doing. Apple has focused on content creators and seems to have done a good job of that. For me, I have a MacBook Air, an ASUS gaming laptop, and I'm getting ready to build a desktop gaming rig. I also have an iPhone and an iPad, so maybe I'm a fanatic, or maybe I just enjoy the technology and see pros and cons to different computing platforms.
 
More power-hungry chips need more power, so what? You can get electricity from the power outlet. It costs money? So what? There is always money for computers. I have seen over 1000 situations where lower-powered parts would have been cheaper and saved money on electricity, cooling, etc. But guess what? Nobody cares.
You clearly have missed the point entirely. You can get electricity out of an outlet? Uh, no, not when your data center rack is at max power capacity. There is no more electricity to get.

It costs money, yes, and when you're competing for business a la Amazon, Microsoft, and Google, those costs matter. You cannot be competitive if your costs to operate are significantly higher than your competitor's. You will also spend more money building additional data centers when you've maxed out your power usage in the primary data center. That is a HUGE expense that NO ONE wants to incur.
Because money is not an issue, neither is power consumption. Simple. You seem to think money is an issue with servers, but it's not. That's the difference between theory and reality.
Money is NOT an issue? What universe do you live in? I work with major corporations every day doing financial analysis of moving from on-premise computing to cloud computing. You are clearly uninformed here. Money matters to the people who run these corporations.
Apple has not disclosed the M2 die size. Also, the M2 is missing support for the older instruction sets Apple used, and it lacks a memory controller that supports external memory. The comparison is pretty pointless.
I didn't bring up the size comparison. Someone else did, and sorry to burst your bubble, but Apple did in fact present the M2 die size. You can Google it if you like.
They already had an option for lower power consumption: AMD processors. Guess what? The majority still uses Intel. That pretty much invalidates your claims.
Sorry, but my claims are not invalidated by your speculation. People chose Intel for a reason, but you'll notice that there are a lot more AMD machines on the market today. Why? Because they offer lower cost and in many cases improved performance over Intel.
I ignore your "sources" because most of them are like this:

Amazon tells us how amazing their new chip is. It has low power consumption and that makes it the best choice for our company. We don't even sell it to others because it's so good.

Now, what would you expect them to say? That their new CPU simply sucks against the competition?? Of course Amazon promotes power consumption, because they have a chip whose only real advantage is power consumption. What else would you expect?

Because Intel has high power consumption but high single-core performance, do you expect Intel to promote power consumption? And because AMD has excellent performance per watt but worse single-thread performance than Intel (in some cases), do you expect AMD to promote single-core performance?

You use sources where a company promotes its OWN product and tells you how important the exact thing that product happens to be good at is. And now you question why I ignore them. Well, there's your answer above.
If the facts aren't true then show me that they aren't true. You use speculation again with nothing to back up your claims. Not all of the articles I referenced are from the manufacturer.
Just an example of something that is supposed to be important, and that you see the media talking about all the time. But in reality, it isn't.

The same applies to this "low power consumption matters" idea. It is supposed to be important, but in reality, it isn't.
You can keep saying that, but you will still be wrong. Power consumption matters to cloud providers. I don't know why you think it doesn't, but it does, and it has for years. You use anecdotal evidence to try to make a point, but where are your reference articles? You want an independent article? Here's one.
A new manufacturing process usually offers better performance at the same power consumption OR lower power consumption at the same performance. Otherwise there is not much point in developing new processes. Basically, Intel is saying their new processes do not totally suck. Did you expect them to say something else?
Since AMD released Zen 2 Rome, Intel has had absolutely nothing to answer AMD with when it comes to power consumption and performance per watt. If you were correct, AMD would have sold every single Epyc they could manufacture. Since that isn't the case, it's very evident that power consumption is nowhere near as important as you are saying.
So if new processes deliver better performance at the same power level, or lower power at the same performance, how can you say that power doesn't matter? You just contradicted yourself. If power doesn't matter, why develop new and more efficient processes? If power doesn't matter, why is Intel putting efficiency cores into their CPUs? Why do all manufacturers talk about TDP? And why has AMD's market share grown over the past 5 years? In Q1 2022, Intel's market share dropped 7% while AMD's rose 7%. Power doesn't matter? The market says differently, and you have, so far, not shown any evidence to the contrary.
 
No one said it was "revolutionary", that's not the conversation at all. What was said is that ARM is more power efficient than x86.
No, no, and no. The fact that there is one ARM SoC that is more efficient than some x86 SoCs does not make ARM more power-efficient than x86.
Again, Apple really isn't the discussion here. I used M2 as an example of ARM efficiencies because that data was readily available. But, again, this conversation is about using ARM due to lower power requirements, not whether Apple is making the right kind of products or not.
There is no such thing as "ARM is more efficient". ARM CPUs may be more efficient vs x86 because they are built to be efficient. Not that x86 CPU couldn't be more efficient than any ARM CPU.
You clearly have missed the point entirely. You can get electricity out of an outlet, uh no, not when your data center rack is at max power capacity. There is no more electricity to get.
Probably data center power delivery is predicted to be enough from beginning...
It cost money, yes, and when you're competing for business a la Amazon, Microsoft, Google, those cost matter. You cannot be competitive if you're cost to operate are significantly higher than your competitor. You will also spend more money building additional data centers when you've maxed out your power usage in the primary data center. That is a HUGE expense that NO ONE wants to incur.
Who says those big companies need to make money from data center business? They can make profits elsewhere too even if data center business is operating at loss.
Money is NOT an issue? What universe do you live in? I work with major corporations every day doing financial analysis of moving from on-premise computing to cloud computing. You are clearly uninformed here. Money matters to the people who run these corporations.
From what I have seen, money is never an issue when it comes to computers. There are always exceptions, but generally that's just how it is.
I didn't bring up the size comparison. Someone else did, and sorry to burst your bubble, but Apple did in fact present the M2 die size. You can Google it if you like.
They did not say how big it is, just an estimate. Apple likely modified the image. No official info about it exists.
Sorry, but my claims are not invalidated by your speculation. People chose Intel for a reason, but you'll notice that there are a lot more AMD machines on the market today. Why? Because they offer lower cost and in many cases improved performance over Intel.
Yeah, more. The problem is that we have seen how little power efficiency matters. Just look at Prescott vs Opteron. The difference was huge, and still Opteron sales were abysmal.

Considering the power consumption difference, AMD should be selling everything they can make, not just a bit more.

Btw, AMD is doing much better in the desktop market than in the server market. That means desktop users value power efficiency much more highly than server users. Right?
If the facts aren't true then show me that they aren't true. You use speculation again with nothing to back up your claims. Not all of the articles I referenced are from the manufacturer.
But the Graviton article is purely Amazon's advertising.
You can keep saying that, but you will still be wrong. Power consumption matters to cloud providers. I don't know why you think it doesn't, but it does, and it has for years. You use anecdotal evidence to try to make a point, but where are your reference articles? You want an independent article? Here's one.
OK, from the article:
Throw in Intel’s proprietary Hyperthreading technology and you have processors that can keep with Ryzen’s octa-core offerings.

Almost.

Physical cores always trump over virtual threading, so it should come as no surprise that the Ryzen series have better multi-core performance. That said, the difference is minimal for most applications, with the Tiger Lake CPUs boasting better single-core speeds.
WTF? Intel's "proprietary" Hyperthreading? That is nothing else than Symmetric Multi Threading. Something AMD has also. And AMD does it better than Intel.

That article seems to be made by bot. There is no name about who wrote it. And article is complete nonsense.

Your sources seem to be either ads or complete rubbish.

My references for what? CPU sales figures, for example, are very easy to find.
So if new processes deliver better performance at the same power level, or lower power at the same performance, how can you say that power doesn't matter? You just contradicted yourself. If power doesn't matter, why develop new and more efficient processes?
Because newer processes are almost always denser too. That means more transistors per mm2, which means more chips per wafer, which means lower manufacturing cost. Therefore it makes sense to develop newer processes even leaving aside power and performance gains. The power and performance improvements are in some ways just side effects of better density.
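A quick sketch of that density argument (the wafer cost, starting die count, and density gain are assumed round numbers; real wafer prices also rise on newer nodes, which this ignores):

```python
# Assumed round numbers for illustration only
WAFER_COST_USD = 10_000
DIES_PER_WAFER_OLD = 140   # assumed large die on the older node
DENSITY_GAIN = 1.5         # "more than 50% denser", per the earlier post

# Same transistor count per chip, so the die shrinks with density
# (ignoring edge effects and yield for simplicity).
dies_per_wafer_new = int(DIES_PER_WAFER_OLD * DENSITY_GAIN)

for label, dies in (("older node", DIES_PER_WAFER_OLD), ("newer node", dies_per_wafer_new)):
    print(f"{label}: {dies} dies/wafer -> ${WAFER_COST_USD / dies:.0f} per die")
```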
If power doesn't matter, why is Intel putting efficiency cores into their CPUs? Why do all manufacturers talk about TDP? And why has AMD's market share grown over the past 5 years? In Q1 2022, Intel's market share dropped 7% while AMD's rose 7%. Power doesn't matter? The market says differently, and you have, so far, not shown any evidence to the contrary.
Intel put in efficiency cores because they couldn't fit 16 performance cores; the CPU would have required liquid cooling. That is a cooling issue, not a power issue. TDP is mostly useless because both manufacturers allow it to be exceeded by a large margin.

AMD got more market share because AMD offered better performance too. On the server side, Intel has had nothing to answer AMD's 2019 Rome Epyc with. Since then AMD has released Milan, and Genoa is coming. That's just performance-wise; no need to even talk about efficiency.

However, looking at desktop CPU share, Intel has actually gained ground against AMD, despite Alder Lake running ultra hot vs AMD's offerings: https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-12th-gen/20.html

Compare the 12900K vs the 5950X; there is no competition. Even the i5-12600K (6P+4E cores) consumes more power than the 16-core 5950X. Power consumption matters? It does not seem so. That also answers why Intel added efficiency cores. Even with those, AMD runs much cooler.

So if your theory that "power consumption matters" held true, Intel would not be gaining CPU share on desktop. It does not seem so...
 
The reality is, Linux (not counting Android running on the Linux kernel... which does count, but not for the point I'm making here) on ARM is mature and fully developed. I ran an Acer Chromebook with an Nvidia Tegra K1 (quad-core ARM, with a roughly GTX 650-speed GPU; an older version of what's in the Nintendo Switch). I had full desktop Ubuntu running on there... I had a full complement of software, including a working Nvidia driver (both OpenGL and CUDA), Android Studio, compilers, MySQL, Python, Java, PHP, Rust, etc., LibreOffice, Firefox & Chromium (I *think* there's a Chrome build for ARM now?)

For a server, I doubt there's any difference in availability of packages between x86-64 and ARM. In terms of optimizations, the video encoding/decoding libraries, math libraries, etc., have had ARM optimizations in them for years, and gcc and clang will happily spit out SIMD instructions for ARM just as they do for x86-64. I think it may come as a surprise (especially if one comes from a Windows background) how little difference it makes whether your Linux install is running on x86-64, ARM, MIPS, POWER, RISC-V, or whatever else; you're just limited by how many MIPS, how many MBs of RAM, what storage speed and capacity you've got, and how many watts you're pulling down for it.

There are some differences; software we use at work, for instance, isn't always available on ARM (and we have better things to do than compile it ourselves), so we still use the x86_64 architecture for most of our EC2 fleet. There are also proprietary packages that don't support ARM, and until they do we can't migrate those. But we've made the switch to Graviton in several places and have noticed improvements. It's less an x86 vs ARM debate, though, than a lithography/supply-chain issue. m6g has been out for a while now, m6i/m6a (for Intel and AMD) have just recently rolled out, and m7g is already rolling out.
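For anyone weighing the same migration, here's a minimal sketch of checking which of those instance families are arm64 vs x86_64 via the EC2 API (it assumes boto3 is installed and AWS credentials plus a default region are configured; the instance-type list is just the families mentioned above):

```python
import boto3

ec2 = boto3.client("ec2")

# Families mentioned above; adjust to whatever you're evaluating.
candidates = ["m6g.large", "m6i.large", "m6a.large", "m7g.large"]

resp = ec2.describe_instance_types(InstanceTypes=candidates)
for itype in resp["InstanceTypes"]:
    arch = ", ".join(itype["ProcessorInfo"]["SupportedArchitectures"])
    print(f'{itype["InstanceType"]}: {arch}')
```

The same response also includes vCPU and memory details, which is handy if you want to fold instance price into a price/performance comparison like the one earlier in the thread.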
 
No, no, and no. The fact that there is one ARM SoC that is more efficient than some x86 SoCs does not make ARM more power-efficient than x86.

There is no such thing as "ARM is more efficient". ARM CPUs may be more efficient than x86 because they are built to be efficient. That doesn't mean an x86 CPU couldn't be more efficient than any ARM CPU.

The data center's power delivery is presumably planned to be sufficient from the beginning...

Who says those big companies need to make money from the data center business? They can make profits elsewhere even if the data center business operates at a loss.

From what I have seen, money is never an issue when it comes to computers. There are always exceptions, but generally that's just how it is.

They did not say how big it is, just an estimate. Apple likely modified the image. No official info about it exists.

Yeah, more. The problem is that we have seen how little power efficiency matters. Just look at Prescott vs Opteron. The difference was huge, and still Opteron sales were abysmal.

Considering the power consumption difference, AMD should be selling everything they can make, not just a bit more.

Btw, AMD is doing much better in the desktop market than in the server market. That means desktop users value power efficiency much more highly than server users. Right?

But the Graviton article is purely Amazon's advertising.

OK, from the article:

WTF? Intel's "proprietary" Hyper-Threading? That is nothing other than simultaneous multithreading (SMT), something AMD has as well. And AMD does it better than Intel.

That article seems to have been written by a bot. There is no byline saying who wrote it, and the article is complete nonsense.

Your sources seem to be either ads or complete rubbish.

My references for what? CPU sales figures, for example, are very easy to find.

Because newer processes are almost always denser too. That means more transistors per mm2, which means more chips per wafer, which means lower manufacturing cost. Therefore it makes sense to develop newer processes even leaving aside power and performance gains. The power and performance improvements are in some ways just side effects of better density.
LOL, you are making this too easy. One of the main points of designing denser processes is power reduction, or more processing at the same power level. Here's an article that will help you understand that.

Here's the highlight: (TLDR: Power matters)
1. More Power Efficient
In order to switch on or off, transistors require power. So, a lower nm transistor means there is less power required for it to work. When you look at all the transistors in a CPU, lower power consumption makes a huge difference overall. It makes your processor more power-efficient compared to a higher nm processor with larger transistors.

2. Less Cooling Required
Relating to the first point, when the transistors in your CPU consume less power, less heat is generated overall. So, your machine requires less cooling to keep working optimally.
Intel put in efficiency cores because they couldn't fit 16 performance cores; the CPU would have required liquid cooling. That is a cooling issue, not a power issue. TDP is mostly useless because both manufacturers allow it to be exceeded by a large margin.
Exactly, it would require much better cooling because it is using MORE POWER. Hence why they put in efficiency cores. They aren't "processing" efficient, they are power-efficient.
AMD got more market share because AMD offered better performance too. On the server side, Intel has had nothing to answer AMD's 2019 Rome Epyc with. Since then AMD has released Milan, and Genoa is coming. That's just performance-wise; no need to even talk about efficiency.
Once again, I hate to burst your bubble, but in server deployments Intel eats AMD's lunch every day. AMD is growing, but they have a lot of ground to cover before they catch up to Intel. Read more here.
However, looking at desktop CPU share, Intel has actually gained ground against AMD, despite Alder Lake running ultra hot vs AMD's offerings: https://www.techpowerup.com/review/intel-core-i9-12900k-alder-lake-12th-gen/20.html

Compare the 12900K vs the 5950X; there is no competition. Even the i5-12600K (6P+4E cores) consumes more power than the 16-core 5950X. Power consumption matters? It does not seem so. That also answers why Intel added efficiency cores. Even with those, AMD runs much cooler.

So if your theory that "power consumption matters" held true, Intel would not be gaining CPU share on desktop. It does not seem so...
No one is talking about desktop machines here. I already stated that wattage on a single desktop doesn't matter, but when you have millions of machines in a datacenter it's a different story.
 
LOL, you are making this too easy. One of the main points of designing denser processes is power reduction, or more processing at the same power level. Here's an article that will help you understand that.

Here's the highlight: (TLDR: Power matters)
1. More Power Efficient
In order to switch on or off, transistors require power. So, a lower nm transistor means there is less power required for it to work. When you look at all the transistors in a CPU, lower power consumption makes a huge difference overall. It makes your processor more power-efficient compared to a higher nm processor with larger transistors.

2. Less Cooling Required
Relating to the first point, when the transistors in your CPU consume less power, less heat is generated overall. So, your machine requires less cooling to keep working optimally.
Again, no. If transistors are smaller, meaning more transistors per unit area, that means more transistors per wafer, or in other words, more working chips of a given transistor count from each wafer. However, smaller transistors do not necessarily mean less power or higher performance. The only thing guaranteed with smaller transistors is more chips per wafer; the other gains are more like side effects.
Exactly, it would require much better cooling because it is using MORE POWER. Hence why they put in efficiency cores. They aren't "processing" efficient, they are power-efficient.
Even with efficiency cores, Intel chips are much hotter than AMD chips. So much for efficiency.
Once again, I hate to burst your bubble, but in server deployments Intel eats AMD's lunch every day. AMD is growing, but they have a lot of ground to cover before they catch up to Intel. Read more here.
Of course, because stupid buyers tend to buy Intel even when AMD is better everywhere. Again, Intel has absolutely nothing against even AMD's 2019 server CPU. It makes me wonder why anyone buys Intel before AMD is completely sold out of server chips. The article is a bit outdated, since Sapphire Rapids has been delayed into 2023.

Intel's best today https://ark.intel.com/content/www/u...atinum-8380-processor-60m-cache-2-30-ghz.html
AMD's best 2019 https://www.amd.com/en/products/cpu/amd-epyc-7742

No competition there. And that just proves that power efficiency is not important.
No one is talking about desktop machines here. I already stated that wattage on a single desktop doesn't matter, but when you have millions of machines in a datacenter it's a different story.
OK, I already proved nobody actually cares about server chip efficiency. Since efficiency on desktop chips is not important either, I think I proved my point.
 