Opinion: Edge computing could weaken the cloud

Bob O'Donnell


Ask anyone on the business side of the tech industry about the most important development they’ve witnessed over the last decade or so and they’ll invariably say the cloud. After all, it’s the continuously connected, intelligently managed, and nearly unlimited computing capabilities of the cloud that have enabled everything from consumer services like Netflix, to business applications like Salesforce, to social media platforms like Facebook, to online commerce giants like Amazon, to radically transform our business and personal lives. Plus, more than just the centralized storage and computing capabilities for which it’s best known, cloud computing models have also led to radical changes in how software applications are designed, built, managed, monetized and delivered. In short, the cloud has changed nearly everything in tech.

In that light, suggesting that something as powerful and omnipresent as the cloud could start to weaken may border on the naïve. And yet, there are growing signs—perhaps some “fog” on the cloud horizon?—which suggest that’s exactly what’s starting to happen. To be clear, cloud computing, and all the advancements it’s driven in products, services and processes, isn’t going away, but I do believe we’re starting to see a shift in some areas away from the cloud and towards the concept of edge computing.

In edge computing, certain tasks are done closer to the edge or end of the network on client devices, gateways, connected sensors, and other IoT (Internet of Things) gadgets, rather than on the large servers and other infrastructure elements that make up the cloud. From autonomous cars, to connected machines, to new devices like the Intel Movidius VPU (visual processing unit)-powered Google Clips smart camera, we’re seeing an enormous range of new edge computing clients start to hit the market.

While many of these devices are very different in terms of their capabilities, function and purpose, there are several characteristics that unite them. First, most of these devices are designed to take in, analyze, and react to real-time data from the environment around them. Leveraging a range of connected sensors, these edge devices ingest everything from location and temperature data to sound and images (and much more), and then compute an appropriate response, whether that be to slow a car down, provide a preventative maintenance warning, or take a picture when everyone in view is smiling.

The second shared characteristic involves the manner with which this real-time data is analyzed. While many of the edge computing devices have traditional computing components, such as CPUs or ARM-based microcontrollers, they all also have new and different types of processing components—from GPUs, to FPGAs (field programmable gate arrays), to DSPs (digital signal processors), to neural net accelerators, and beyond. In addition, many of these applications use machine learning or artificial intelligence algorithms to analyze the results. It turns out that this hybrid combination of traditional and “newer” types of computing is the most efficient mechanism for performing the new kinds of calculations these applications require.


The third unifying characteristic of edge computing devices gets to the heart of why these kinds of applications are being built independent from, or migrated (either partially or completely) away from, the cloud. They all require the kind of real-time performance, limited latency, and/or security and privacy guarantees that best come from on-device computing. Even with the promise of tremendous increases in broadband network speed and reductions in latency that 5G should bring, it’s never going to replace the kind of immediate response that an autonomous car needs when it “sees” and has to react to an obstacle in front of it. Similarly, if we ever want our interactions with personal assistant-powered devices (i.e., those using Alexa, Google Assistant, etc.) to move beyond single-question requests and into naturally flowing, multi-part conversations, some amount of intelligence and capability is going to have to be built into edge devices.

Beyond some of the technical requirements driving growth in edge computing, there are also some larger trends at work. With the tremendously fast growth of the cloud, the pendulum of computing had swung towards the side of centralized resources, much like the early era of mainframe-driven computing. With edge computing, we’re starting to see a new evolution of the client-server era that appeared after mainframes. As with that transition, the move to more distributed computing models doesn’t imply the end of centralized computing elements, but rather a broadening of possible applications. The truth is, edge computing is really about driving a hybrid computing model that combines aspects of the cloud with client-side computing to enable new kinds of applications that either aren’t well-suited to, or aren’t possible with, a cloud-only approach.


Exactly what some of these new edge applications turn out to be remains to be seen, but it’s clear that we’re at the dawn of an exciting new age for computing and tech in general. Importantly, it’s an era that’s going to drive the growth of new types of products and services, as well as shift the nexus of power amongst tech industry leaders. For those companies that can adapt to the new realities that edge computing models will start to drive over the next several years, it will be an exciting time. But for those that can’t—even if they seem nearly invincible today—the potential for becoming a footnote in history could end up being surprisingly real.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.


 
This is the way it will go in the future. The power of mobile computing is increasing to such an extent that server-side processing isn't needed anymore, only data. So we will be seeing the cloud platforms moving away from being business solutions to becoming more of just database servers.

Perhaps the most impressive mobile chip today is Apple's A11, with embedded support for AI computing.
 
Website usage of the CDN
(a content delivery network (CDN) is a system of distributed servers that delivers pages and other web content to a user based on the geographic location of the user, the origin of the webpage, and the content delivery server)
is currently the most prolific use of edge computing. Typically, static data and graphics are pushed to the edge for quick delivery, while the core of the application is still performed on the server host. Little maintenance is required due to the static nature of such distributed content.
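The static-offload pattern described above can be sketched in a few lines: the origin marks immutable assets as long-lived so edge servers can keep serving them without phoning home, while dynamic pages are never cached at the edge. The paths, extensions, and max-age value here are hypothetical examples, not any particular CDN's configuration.

```python
# Sketch: an origin server marks static assets as long-lived so CDN edge
# nodes can cache and serve them, while dynamic content always goes back
# to the origin host. Extensions and cache lifetimes are made-up examples.

STATIC_EXTENSIONS = {".css", ".js", ".png", ".jpg", ".woff2"}

def cache_headers(path: str) -> dict:
    """Return HTTP caching headers for a given request path."""
    if any(path.endswith(ext) for ext in STATIC_EXTENSIONS):
        # Static assets: edge servers may serve these for up to a year
        # without revalidating against the origin.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # Dynamic content: never stored at the edge.
    return {"Cache-Control": "no-store"}

print(cache_headers("/assets/logo.png"))  # edge-cacheable
print(cache_headers("/forum/reply"))      # always served by the origin
```

This is also why the "little maintenance" claim holds: once an asset is immutable, synchronization between edge nodes is a non-issue.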

An obvious problem would arise from distributing active content generation to the CDN, as keeping such content updated becomes a real burden and is prone to synchronization errors.

As recently announced, TechSpot has employed this technique for the performance gains it offers.
 
Do I understand correctly, that edge computing is using devices that, while connected to the internet, process significant data internally rather than transmitting much of the data to and from a server for processing?

Isn't that what a PC does????? If so, I have been engaged in edge computing for more than 2 decades now.
 
Do I understand correctly, that edge computing is using devices that, while connected to the internet, process significant data internally rather than transmitting much of the data to and from a server for processing?

Isn't that what a PC does????? If so, I have been engaged in edge computing for more than 2 decades now.
Yeah, they just coined a new, trendy name for the client-server computing model. The only change is some new processing units (NN accelerators, FPGAs, etc.) appearing in client devices - but this isn't the first time that's happened. Years ago GPUs were also a new invention, and integrated DSPs and sensors in mobile devices have been around for at least a decade (if not much more) now.
 
Do I understand correctly, that edge computing is using devices that, while connected to the internet, process significant data internally rather than transmitting much of the data to and from a server for processing?

Isn't that what a PC does????? If so, I have been engaged in edge computing for more than 2 decades now.
To engage investors you just need to rename some old technology or product. That's all you need today. And for people who don't know the fundamental concepts, it works perfectly every time.
 
Here's my prediction: powerful devices on the edge will cause a massive growth in cloud workloads. I cite Jevons paradox for my main reasoning here but, also, history. Since the availability of widely accessible internet, the emergence of new, powerful client devices has only increased demand for the corresponding server side and cloud. This next generation, I predict, will be no different. What will change, however, is the nature of the cloud. The cloud will no longer be primarily centralized data centers. The cloud will become n-tier, a gradient from core to edge. The shift of cloud to the edge will be massive — but it will be an edge cloud that works in conjunction with edge devices, not merely edge devices.
 
Edge computing is about off-loading work or resources to remote servers.
The user's desktop accesses Website X and references a CDN which has multiple locations registered in DNS. The response from the DNS will (when properly set up) direct the browser to the edge server that is geographically closest, to reduce lag time (in this case, for example, edge #103).

All kinds of resources can be distributed for remote access.
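The geo-steering described above can be sketched as picking the edge node with the smallest great-circle distance to the client. The hostnames and coordinates below are invented for illustration; real geo-aware DNS typically works from the resolver's IP, not exact coordinates.

```python
import math

# Hypothetical edge locations: hostname -> (latitude, longitude)
EDGE_SERVERS = {
    "edge101.cdn.example": (40.7, -74.0),   # New York
    "edge102.cdn.example": (51.5, -0.1),    # London
    "edge103.cdn.example": (35.7, 139.7),   # Tokyo
}

def haversine(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(client_location):
    """Mimic geo-aware DNS: answer with the closest edge hostname."""
    return min(EDGE_SERVERS,
               key=lambda host: haversine(EDGE_SERVERS[host], client_location))

print(nearest_edge((35.0, 135.0)))  # a client near Osaka gets the Tokyo edge
```

A production resolver would also weigh server load and link health, not just distance, but the "closest copy wins" idea is the same.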
 
The pics below will help in visualizing WHAT the cloud is and HOW it gets used:
There are multiple cloud implementations, not just one
[image: logos of the major cloud providers]
(psst: someone missed IBM here)

A very typical (commercial) use of the Cloud would look like
[image: diagram of a commercial cloud deployment]
where the company controls are INTERNAL and the consumer access is in the Cloud.

A block diagram of the components looks like
[image: block diagram of the cloud components]
 