How Arm Came to Dominate the Mobile Market, and It's Coming for More

This part of the article is plain wrong:

"A builder using a CISC-like system would be able to get more work done since their tools are more specialized and powerful. The RISC builder would still be able to get the job done, but it would take longer since their tools are much more basic and less powerful."

Nope. Not true. RISC instructions execute much faster, and complex CISC instructions can actually be a lot slower, especially in the case of a wrong branch prediction, since numerous, irregular-length instructions can take a lot more time to decode, and the decoders take up a lot more space on silicon. A RISC CPU has more internal super-fast registers, where the compiler can keep local variables, making tight-loop execution a lot faster.

Not to mention that CISC CPUs have lots of unused legacy instructions that just occupy space on the silicon, prolong development and testing, and serve no purpose. RISC has no extra instructions; it has just the ones that are needed.

So, the comparison is this: CISC can perform complex tasks by executing fewer instructions, each of which takes more time, while RISC will do the same job by executing more instructions, each of which takes less time. The end result is approximately the same, but RISC is simpler to design, cheaper, and can have lower energy consumption.

In other words, Greta would be happier with RISC CPUs.
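To make that comparison concrete, here's a minimal sketch in C. The assembly in the comments is an illustrative, simplified picture of typical x86 vs. ARM code generation for the same statement, not the exact output of any particular compiler.

```c
#include <stdio.h>

/* The same one-line operation compiles very differently in the two styles.
 *
 * CISC (x86) can fold the load, add, and store into one instruction:
 *     add dword ptr [rdi], 1    ; one variable-length instruction,
 *                               ; cracked into micro-ops internally
 *
 * RISC (ARM) expresses it as separate fixed-length instructions:
 *     ldr w1, [x0]              ; load
 *     add w1, w1, #1            ; add
 *     str w1, [x0]              ; store
 *
 * Fewer (but slower-to-decode) instructions per task on CISC; more
 * (but simple, fixed-length, easy-to-decode) instructions on RISC.
 */
static void increment(int *p) {
    *p += 1;
}

int main(void) {
    int x = 41;
    increment(&x);
    printf("%d\n", x); /* prints 42 */
    return 0;
}
```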
 
The article does a good job of a VERY simple explanation of RISC vs. CISC, but it is extremely wrong about how the two are approached in ARM vs. x86. Chips from both camps today incorporate both styles of computing, using both RISC- and CISC-style instructions. Apple's new chips based on ARM have the most CISC-like instructions of any ARM-based chip to date. And x86 also has extremely good parallel instruction handling over a RISC-style internal instruction set; it's why those chips do so well with multiple CPU cores and threads.
 
"Nope. Not true. RISC instructions execute much faster, and complex CISC instructions can actually be a lot slower ... In other words, Greta would be happier with RISC CPUs."

Um, you are wrong too. Given the same task, RISC and CISC instructions would compute in almost identical times, with RISC being only slightly faster on most instructions. CISC will handle complex tasks much faster, with less code, which saves RAM too, lowering power requirements. And both chips today have super-fast registers. And legacy instructions have zero impact on computing speed; RISC chips also have plenty of legacy instructions, and again, zero impact on performance.

ARM chips have lower power needs because of a shortened pipeline, not just the instructions used. Today's CISC chips can also draw super low power. This is done with the same method ARM uses: shutting down as much of the CPU as possible when fewer instructions are in flight. Intel and others just failed to saturate the market as well as ARM did, so they are seen less. So the CISC guys focus on high-power computing and build chips to satisfy that need.
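As a purely conceptual illustration of that power-gating idea, here's a toy model in C. The unit names and wattage numbers are made up for the sketch, not measurements of any real chip.

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy model of power gating: each execution unit burns power only
 * while it is active; gated-off units drop to a small leakage cost.
 * All names and numbers are illustrative, not real measurements. */
typedef struct {
    const char *name;
    double active_watts;
    double gated_watts;  /* leakage when powered down */
    bool busy;
} unit;

int main(void) {
    unit units[] = {
        { "integer ALU", 3.0, 0.1, true  },
        { "FP/SIMD",     8.0, 0.2, false },  /* idle: gated off */
        { "L2 slice",    2.0, 0.3, false },  /* idle: gated off */
    };
    double total = 0.0;
    for (unsigned i = 0; i < sizeof units / sizeof units[0]; i++)
        total += units[i].busy ? units[i].active_watts : units[i].gated_watts;
    printf("package draw this cycle: %.1f W\n", total); /* prints 3.5 W */
    return 0;
}
```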

What Does RISC and CISC Mean in 2020? | by Erik Engheim | The Startup | Medium

Also, this: Please note that both the CISC and RISC architectures are a bit dated. Nowadays, they could be considered as playing the role of the cornerstone for developing newer architectures. Today, the demarcation between these architectures is very blurred to say the least. This is because technologies have fused and evolved so that current architectures now share major characteristics of both CISC and RISC (Glaskowsky, 2017). For instance, the Intel family of processors, which started off with the CISC architecture, evolved from the Pentium series chips (i486 processors and newer) to become a hybrid architecture (Krad and Fida El-Din, 2007). To further buttress this point, a study carried out at the University of Wisconsin by Blem et al. (2015) showed, amongst other things, that Instruction Set Architecture (ISA) plays no significant role in today's processors and that ISA being RISC or CISC seems rather irrelevant.
From: Please note that both the CISC and RISC architectures are a bit dated Nowadays | Course Hero

So, considering that study, this article is outdated.
 
When the RISC concept was originally proposed, all instructions did take one cycle in the first processor designed under that paradigm. But that is no longer true, since the advantages of strict single-cycle execution wouldn't make up for the lack of hardware floating-point, which inherently takes multiple cycles.
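A rough way to see that gap on your own machine is a small C timing sketch comparing cheap integer adds against floating-point divides, which take many cycles on essentially every core. The loop count and the use of clock() are arbitrary choices for illustration; absolute timings will vary by CPU, compiler, and flags.

```c
#include <stdio.h>
#include <time.h>

/* Crude comparison: integer adds (typically 1 cycle) vs floating-point
 * divides (typically 10+ cycles of latency). Only the relative gap
 * between the two loops matters, not the absolute numbers. */
int main(void) {
    const long N = 100000000L;
    volatile long acc = 0;       /* volatile keeps the loops from being optimized away */
    volatile double facc = 1e9;

    clock_t t0 = clock();
    for (long i = 0; i < N; i++) acc += i;           /* cheap ALU op */
    clock_t t1 = clock();
    for (long i = 0; i < N; i++) facc /= 1.0000001;  /* multi-cycle FP op */
    clock_t t2 = clock();

    printf("int add loop: %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("fp  div loop: %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```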
 
I wonder what "x86" would look like if the CISC wrapper were removed, since all x86 CPUs have been RISC internally since the Pentium Pro.
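A toy way to picture that "RISC internally" point: the front end cracks one variable-length x86 instruction into simple, fixed-format micro-ops. The little C model below is purely conceptual; the micro-op names and the three-way split are my own illustration, not Intel's actual encoding.

```c
#include <stdio.h>

/* Conceptual model of what a post-Pentium-Pro front end does:
 * one CISC instruction like "add [mem], reg" is cracked into
 * simple micro-ops that the RISC-like core actually executes. */
typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    const char *desc;
} uop;

int main(void) {
    /* "add dword ptr [rdi], eax" becomes roughly: */
    uop decoded[] = {
        { UOP_LOAD,  "tmp <- mem[rdi]"  },
        { UOP_ADD,   "tmp <- tmp + eax" },
        { UOP_STORE, "mem[rdi] <- tmp"  },
    };
    for (unsigned i = 0; i < sizeof decoded / sizeof decoded[0]; i++)
        printf("uop %u: %s\n", i, decoded[i].desc);
    return 0;
}
```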
 
Listening to all the RISC vs. CISC debate is like listening to two old car guys arguing over which is better, 20-weight or 30-weight, when oils like 5W-30 and 0W-20 full synthetics have been available for decades and are far better in every way.

I'll tell you the one major benefit of ARM: it is not licensed by Intel. x86 is effectively a closed market, and Intel's greed will never allow otherwise. Apple's M1 is the first chip to really show somewhat competitive performance against x86, and it shows the potential future of ARM replacing x86.

At the same time, the M1 also shows ARM's weaknesses. It took Apple many years to get where it is today in chip design. The M1 also has a much larger die than either Intel's or AMD's comparable parts, its performance doesn't always translate to production apps, and its power consumption is in roughly the same ballpark. Anyone thinking ARM will give us 10900K performance in a 25-watt TDP is totally ignorant of how modern processors function. As it turns out, once you add the instruction-set extensions needed to get ARM to the performance we expect on the desktop, it pulls as much power and is just as large, because the legacy x86 baggage takes up a TINY amount of space.

Is ARM the future? Well, I think that depends on whether AMD can make a good ARM processor for mass-market adoption. If they made an AM4-compatible ARM processor so DIY builders and enthusiasts could play with it, and made the same chip available to developers, I could see ARM eventually coming to dominate the market. If AMD could get Zen 3 performance without x86, the only things holding back mass adoption would be backwards compatibility in Windows and a public build of Windows 10 on ARM.
 
" It took apple many years to get where it is today on chip design"

While true on the surface, it also took Intel a whopping long time to get where it is today. You seem to be forgetting that Intel invested billions over decades and dominated chip markets for years, and that, significant stumbles over the last few years notwithstanding, is what produced the landscape we have today.

By direct comparison, while ARM has been around for quite some time, its designs received very little money and development, and they were focused on very specific tasks: not M1-class devices, but mobile, with specific low-power applications as the best uses. So if you actually compare against the M1, development was extremely fast, measured from the time the decision was made to scale the ARM architecture up from mobile to a desktop class.

Also, the recent development of an ARM machine being (or having been) the fastest supercomputer in the world (at approximately the same power levels, I've read) demonstrates that the net results are equivalent; it's just a different path to the same destination.
 
Intel seems to have stood still while counting the Benjamins for the last five years...
What Nvidia owning ARM will do for its adoption and continued use, who knows?
I suspect many companies would like a more "open source" type of solution; maybe we will see some sort of adoption of RISC-V by some diehards.
And what will China and Russia cook up in their labs? Some major companies are still running COBOL programs on IBM mainframes... they could run better on a Pi 400, but...
 
Yeah ya ya, Blah bla blah

I've heard enough propaganda on what Arm can do

I'm just gonna relax and go play Jetfighter 4 on my 12-year-old Nehalem running Windows XP

Try running Jetfighter 4 on your M1 or even from a VM on a modern Windows Box

Try recording what you hear in Adobe Audition from an XP VM on an Arm M1 or even from an XP VM on a modern Windows 10 Box

I can run ALL of the software I've collected throughout the years on a native-boot XP PC

You can't do that on the M1 and you can't do that from a VM on a modern Windows 10 machine

OH, and I can do it faster than your fastest supercomputer because your supercomputer won't run my software

Let me know when your modern garbage can replace EVERYTHING I already have and then we'll talk

If I wanted to be limited to a locked-down spyware platform, I'd consider an M1 Arm chip, or I'd simply reboot my XP machine into a copy of Spyware Platform 10

Until then, I can run much more than either of your gimped systems

Blah bla BLAAAAAAH!
 
"Intel seems to have stood still while counting the Benjamins ... What Nvidia owning ARM will do for its adoption and continued use, who knows?"

I really love how people conveniently ignore how despicable Nvidia is as a company toward the industry, its partners, and its own customers.

I won't go into details, because the web is full of them for those who don't drink the Kool-Aid.

But I will say, Nvidia won't pay $40 billion for ARM just to advance the industry.

So pray they are not allowed to buy ARM.
 
In the M1's stellar single-thread performance per watt, it is hard to ignore the contribution of the 5 nm process, the monolithic design, and single-thread-per-core optimization. Not sure how much ARM contributes to this.
 
This. I'm getting really tired of all the pro-ARM propaganda in tech media, which seems to have been ramped up to 11 in 2020. Not only has it been annoying, it seems unnatural and fishy.
Exactly my thoughts. For decades people have been saying RISC was going to take over everything; news flash: it has only been useful in smartphones and tablets, where battery consumption is a thing. Performance per watt, sure, but that's the only contest it will ever win. It's laughable that people think it can take on high-end x86. The M1 is 5 nm, right? So it matches the performance of a budget 22 nm chip from over a decade ago... big whoop. The fact that it's locked down to Apple's OS means it's worthless for testing; anyone can build a half-assed chip architecture, optimize like hell for it, and get decent performance. The gaming consoles do it and get high-tier visuals from budget hardware. The only thing legendary here is people's misconceptions.
 
"Intel seems to have stood still while counting the Benjamins ... What Nvidia owning ARM will do for its adoption and continued use, who knows?"
The US$40 billion Huang spent to purchase ARM will prove to be cheap. Indeed, NVIDIA now has other (much bigger) plans and visions. Huang (the CEO) earned his master's degree from Stanford University; sincere congratulations to a USA immigrant. After Stanford he worked for AMD as a microprocessor designer and 'key executive', getting even smarter. Now Huang is drawing an annual salary of US$26 million at NVIDIA, along with stock options doubling that amount; he is a billionaire 12 times over.

Certainly, since the arrival of ARM, the entire "RTX 3000" effort is no longer Huang's primary business objective. His company went public in 1999, with shares priced at $19.69 each; a $2,000 initial investment, with reinvested dividends, would now have grown to $291,000. Both AMD and Intel will be in for the ride of their lives, coming soon to this theater thanks to Mr. Huang. His early, formative work at AMD taught him well how to move in the corporate American world. In a recent Taiwan Financial Times interview he remarked: "The ARM purchase was a major turning point for NVIDIA and a shift in our corporate culture. We are now standing at the threshold of growth heretofore unimagined and an opportunity to be one of the most technologically and progressively advanced companies in this world."
 
Many people here made some money by having purchased AMD stock and watching it jump from $40 to $90 in about 14 months. With that, I wonder how NVIDIA will fare, given the news we all know, and given what happened to Samsung when it first started out on its journey. FYI: the weapons division of Samsung was founded in 1977 as Samsung Techwin, which started out manufacturing jet engines. In India, TATA and MAHINDRA, Indian defence weapons suppliers, have been good customers of Samsung, because it is so huge that it can produce more products per day than all of those companies combined. Almost all Korean defence manufacturing companies are Samsung properties. Will NVIDIA ever possess the 'breadth of knowledge' that Samsung garnered over the years, given the fact that Taiwan isn't exactly a friend of China and strives to remain independent at all costs? Food for thought.
 