It's nice to see a mainstream commercial OS trying to function on a non-x86 platform. x86 is fundamentally unchanged since the 1980s. The advancements have been additions to the instruction set and extensions to the memory addressing that get more complex as we go, all in the name of backward compatibility. That I can still load a 25-year-old (or older) operating system on a modern computer is ridiculous.
Back in the '80s RISC chips seemed to have a performance advantage over x86. That changed when NexGen worked out how to decode x86 instructions into RISC-like uops and execute them in a RISC-like pipeline. Then came superscalar and out-of-order execution, and at that point RISC ceased to have an advantage.
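To make the uop idea concrete, here's a toy sketch (Python, names made up, nothing like real decoder hardware) of how a memory-destination x86 add gets cracked into simple load/add/store uops:

```python
# Toy sketch (made-up names): how a memory-destination x86 add
#   add [rbx+8], rax   ; read-modify-write on memory
# might be cracked into RISC-like micro-ops that a RISC-style pipeline can run.

def crack_add_mem_reg(base_reg, disp, src_reg):
    """Return a list of simple micro-ops for a memory-destination add."""
    addr = f"[{base_reg}+{disp}]"
    return [
        ("LOAD",  "tmp0", addr),              # load the memory operand
        ("ADD",   "tmp1", "tmp0", src_reg),   # plain register-register add
        ("STORE", addr, "tmp1"),              # write the result back
    ]

for uop in crack_add_mem_reg("rbx", 8, "rax"):
    print(uop)
```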
It makes sense, really. Look at a typical die photo: it is mostly cache memory. If 90% of your die is cache, it's hard to see how the overhead of decoding x86 instructions is all that significant.
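Back-of-the-envelope version of that argument (the percentages are guesses, just to show the shape of it):

```python
# Numbers are guesses, just to show why the decode "tax" stopped mattering.
cache_fraction  = 0.90   # roughly what a modern die photo shows
decode_fraction = 0.03   # suppose x86 decode is a few percent of the die

# even if the decoder were completely free, the die only shrinks by a few percent
print(f"best-case saving from a 'free' decoder: {decode_fraction:.0%}")
print(f"everything that isn't cache:            {1 - cache_fraction:.0%}")
```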
And x64 gets rid of segment limits for CS and DS; even FS and GS keep a segment base but no limit. So accesses through them can be handled with just an additional adder, or an additional uop.
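Rough sketch of what that extra adder is doing - the field names and numbers below are invented, it's just the classic base + index*scale + displacement sum with the FS/GS base folded in as one more term:

```python
# Sketch of the "one extra adder" point. Values are invented; on x64 the FS/GS
# base is simply one more addend, with no limit check to go with it.

def effective_address(base, index, scale, disp, seg_base=0):
    return base + index * scale + disp + seg_base

fs_base = 0x7FF0_0000          # pretend FS points at thread-local storage
ea = effective_address(base=0x1000, index=4, scale=8, disp=0x20, seg_base=fs_base)
print(hex(ea))
```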
Although this might be a poor result, at least someone is thinking about new architectures in terms of commercial computing. The only powerful new tech we have is in mainframes (Itanium, Z-Processors, etc.) and everyone else is stuck using a nearly 40-year-old base architecture onto which we keep gluing new stuff and pretending that makes the whole thing new.
Emulation and migration will always be an issue if we want to truly advance our technology, but we used to deal with this all the time. In the '80s I easily had four different architectures at home and regularly used about twice that. The difference is that nobody released a new machine without significant native applications. Hopefully we'll see this as a move towards opening the door to real hardware innovation.
I've got a fair few architectures in my office - x86, ARM, MIPS (in a router), PIC, AVR (dev boards). And if you look at WD, for example, they're going to move from ARM to RISC-V to cut down on the IP fees they pay. I've worked on projects that use cores from even more obscure sources. VHDL and Verilog make it easy to develop new architectures. Broadcom have a completely novel architecture designed by Sophie Wilson, the original ARM ISA designer.
Considering the price point, I'm wondering how this machine behaves under a non-Windows OS. We have a number of *nix kernels compiled for ARM that might make this a beautiful machine for someone wanting a portable workstation.
In a non-Windows OS the performance of a Snapdragon 835 will be pretty comparable to a Celeron N3450.
There's an interesting paper here that shows that ISAs do not matter very much - an ARM implemented within a 5 W power budget will have comparable performance to an x86 within the same budget.
https://www.extremetech.com/extreme...-or-mips-intrinsically-more-power-efficient/2
Which makes sense, right? If a CPU die is 90% cache, it's hard to see how changing the other 10% is going to change much. That definitely wasn't true back in the 80386 or 68000 days, when I remember reading that a SPARC in a gate array could outperform an 80386 in a highly optimised custom implementation, and that an ARM1 with about 25,000 transistors would far outperform a 68000 (the 68000 was so named because it had roughly 68,000 transistors). Both the ARM and the 68000 had an elegant instruction set, but the ARM executed most instructions in a single cycle because it was pipelined, and pipelining was possible because of the way the ARM ISA was designed.
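The pipelining point is really just a throughput argument; a crude sketch (ignoring hazards, memory stalls and all the real ARM1 subtleties):

```python
# Crude throughput sketch: with S pipeline stages overlapping, N instructions
# take N + S - 1 cycles instead of N * S, so once the pipe is full you retire
# roughly one instruction per cycle.

def cycles_unpipelined(n, stages=3):
    return n * stages

def cycles_pipelined(n, stages=3):
    return n + stages - 1

for n in (1, 10, 1000):
    print(n, cycles_unpipelined(n), cycles_pipelined(n))
```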
However, once you start converting x86 instructions to uops with a bunch of hardwired logic, that difference disappears. Sure, that decode logic was a significant part of the die area back in the 586 days, but it isn't now, and you can see that by looking at a die photo and seeing it's mostly cache.
Incidentally, one of the advantages x86 and x64 have over RISC is code density, so you can look at that instruction-to-uop decode logic as being a decompressor.
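A trivial example of what I mean by density - the byte counts are my own tallies for a toy sequence, so treat them as approximate:

```python
# Toy density comparison: a few x86 instructions with their encoded sizes
# (approximate tallies) vs a fixed 32-bit RISC encoding of the same sequence.
x86_bytes = {
    "push rbx":     1,   # 53
    "mov eax, 1":   5,   # B8 01 00 00 00
    "add eax, ebx": 2,   # 01 D8
    "ret":          1,   # C3
}
risc_bytes = {op: 4 for op in x86_bytes}   # one 32-bit word per instruction

print("x86 total :", sum(x86_bytes.values()), "bytes")   # 9
print("RISC total:", sum(risc_bytes.values()), "bytes")  # 16
# the uop decoder then effectively "decompresses" the denser x86 stream
```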
And x86/x64 has another advantage over ARM: historically the fastest ARM has been about as powerful as the slowest x86/x64 chip. That may change in the future, but right now x86/x64 pretty much owns the notebook/desktop/server market and ARM owns the phone/tablet one. People aren't running the same code on those two platforms. A Snapdragon 835 would perform very well in a phone but would suck in a notebook, as these results show.