Editor's take: One of the most exciting developments in the tech world is the advancement of edge computing. That is, if you can figure out what "the edge" really means. Nobody seems able to explain the edge in a concise, consistent manner, with nearly every tech vendor and industry prognosticator having their own view of what the "edge," and therefore edge computing, is. This is understandable, in part, because there are legitimate cases to be made for how far the edge extends away from the core network, making it reasonable to talk about things like the near edge, the far edge, and so on...

What does seem consistent throughout all of these discussions, though, is that edge computing is a new form of distributed computing, where compute resources are scattered across many different locations. Modern microservice-based, containerized software architectures fit nicely into this world of dispersed, but connected, intelligence.

The other point that seems relatively consistent across the many different versions and definitions of edge computing is that the resources available to be tapped into at the "edge" are significantly more varied than what has been available in the past. Sure, there will be lots of powerful x86 CPUs (in fact, even more choices than before, given the significant impact AMD has made and the rejuvenated competitiveness this challenge has brought to Intel), but there will be many other options as well.


Arm-powered CPUs from major cloud vendors, like the latest Graviton3 from AWS, and new server CPU options from companies like Ampere are becoming popular choices, too. Some have even suggested Arm-powered processors could become dominant in power-sensitive "far edge" applications like 5G cell towers for MEC (multi-access edge computing) implementations.

GPUs from Nvidia and AMD, along with an enormous range of dedicated AI processors from a whole host of both established and startup silicon companies, are also starting to make their presence felt in distributed computing environments, adding to the range of new computing resources available.

As powerful as this concept of seemingly unlimited computing resources may be, however, it does raise a significant practical question: how can developers build applications for the edge when they don't necessarily know what resources will be available at the various locations where their code will run?

Cloud computing enthusiasts may point out that a related version of this dilemma faced cloud developers in the past, and that software abstraction technologies were developed that essentially relieved software engineers of this burden. However, most cloud computing environments had a much smaller range of potential computing resources. Edge computing environments, on the other hand, will not only offer more choices, but also different options across related sites (such as all the towers in a cellular network). The end result will likely be one of the most heterogeneous targets for software applications that has ever existed.

Companies like Intel are working to solve some of these heterogeneity issues with software frameworks. oneAPI, for example, is Intel's effort to create tools that let developers write code that intelligently leverages the different capabilities of chips like CPUs, GPUs, FPGAs, AI accelerators and more, without needing to learn how to write software for each of them individually. Clearly, it's a step in the right direction. However, it still doesn't solve the bigger issue, because it's only designed for Intel chips.
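For a sense of what that looks like in practice, here is a minimal sketch in the DPC++/SYCL style that oneAPI is built around: the same kernel code gets dispatched to whatever device the runtime finds, whether that's a CPU, GPU, or another accelerator. This is an illustrative sketch rather than a complete oneAPI tutorial, and it assumes a oneAPI/SYCL toolchain is installed.

```cpp
// Minimal sketch of the device-agnostic style encouraged by oneAPI's
// DPC++/SYCL programming model: one kernel, any supported device.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Let the runtime pick whatever device is available (CPU, GPU, etc.).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        sycl::buffer<float> ba(a), bb(b), bc(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{ba, h, sycl::read_only};
            sycl::accessor B{bb, h, sycl::read_only};
            sycl::accessor C{bc, h, sycl::write_only};
            // The same vector-add kernel, regardless of the underlying chip.
            h.parallel_for(sycl::range<1>(1024), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffers go out of scope here and copy results back to the host vectors.
    std::cout << "c[0] = " << c[0] << "\n";
    return 0;
}
```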

What seem to be missing are two key standards that can help define and extend the range of edge computing. First, there needs to be a standardized way to query what resources are available (including chip and network types, capacity, network throughput, latency, etc.) and a standard protocol or messaging method for returning the results of that query.
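To make the idea a bit more tangible, here is a purely hypothetical sketch of what such a capability query and its response might look like. No such standard exists today; every type, field, and function name below is invented for illustration.

```cpp
// Hypothetical shape of a standardized edge-site capability report.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// One entry per compute resource discovered at an edge site.
struct EdgeResource {
    std::string kind;            // "x86-cpu", "arm-cpu", "gpu", "fpga", "ai-accelerator", ...
    std::string vendor;          // "Intel", "AMD", "Ampere", "Nvidia", ...
    std::uint64_t memory_bytes;  // memory attached to this resource
};

// Site-level report, including network characteristics, since edge
// workloads care about both compute and connectivity.
struct EdgeSiteCapabilities {
    std::string site_id;                   // e.g. a cell tower or micro data center
    std::vector<EdgeResource> resources;
    double uplink_mbps;                    // available network throughput
    double round_trip_ms;                  // latency measured from the requesting node
};

// Stand-in for the query itself; in practice this would be a network call
// returning a well-defined message (JSON, protobuf, or similar).
EdgeSiteCapabilities query_site_capabilities(const std::string& site_id) {
    return {site_id,
            {{"arm-cpu", "Ampere", 64ull << 30},
             {"ai-accelerator", "ExampleCo", 16ull << 30}},
            950.0, 4.2};
}

int main() {
    EdgeSiteCapabilities caps = query_site_capabilities("tower-0042");
    std::cout << caps.site_id << ": " << caps.resources.size()
              << " resources, " << caps.round_trip_ms << " ms RTT\n";
    return 0;
}
```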

Second, there needs to be a standard mechanism for interpreting those results and then either dynamically adjusting the application or providing the right kind of hardware abstraction layer that would allow the software to run on whatever type of distributed computing environment it finds itself in. By putting these two capabilities together, you could greatly enhance the ability to create a usable and shareable distributed computing environment.
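Continuing the same hypothetical, here is a sketch of that second piece: interpreting a capability report and adjusting which build of an application gets deployed to a given site. Again, the names and selection logic are invented purely for illustration.

```cpp
// Hypothetical sketch of adapting an application to whatever a
// standardized capability query reported for an edge site.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Decide which code path to deploy, given the resource kinds a site reported.
std::string choose_execution_path(const std::vector<std::string>& resource_kinds) {
    auto has = [&](const std::string& kind) {
        return std::find(resource_kinds.begin(), resource_kinds.end(), kind)
               != resource_kinds.end();
    };
    if (has("ai-accelerator")) return "npu-inference-binary";  // dedicated AI silicon
    if (has("gpu"))            return "gpu-inference-binary";  // GPU-accelerated build
    if (has("arm-cpu"))        return "arm64-cpu-binary";      // power-sensitive far edge
    return "x86-cpu-binary";                                   // generic fallback
}

int main() {
    // Example: a cell-tower site that reported an Arm CPU plus an AI accelerator.
    std::vector<std::string> tower = {"arm-cpu", "ai-accelerator"};
    std::cout << "Deploying: " << choose_execution_path(tower) << "\n";
    return 0;
}
```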


These are non-trivial tasks, however, and they would take a great deal of industry cooperation to accomplish. Nevertheless, they seem essential if we don't want edge computing to disintegrate into a convoluted mire of incompatible platforms.

One possible option is the development of a higher-level "meta" platform through which diverse types of hardware and software could communicate and coexist. To be clear, I am not referring to a "metaverse" but rather a higher-order software layer. At the same time, creating a metaverse-style digital world would undoubtedly require the unification, or at least the standardization, of different edge computing concepts in order to provide a consistent means of visualizing such a world across different devices.

In the same way that internet standards like IP and HTTPS provide a common way to transmit and present information, this metaplatform could potentially offer a common means of computing information across an intelligently connected but highly distributed set of resources.

Admittedly, at least part of this discussion may be a bit too theoretical to bring to life soon. But for edge computing to move beyond the interesting concept stage to the realm of compelling experience, a few of these points need to be addressed. If not, I'm concerned the real-world complexities of trying to integrate a highly diverse set of computing resources into a useful, powerful tool capable of running an exciting set of new applications could quickly become overwhelming. And that would be a real shame.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.

Image credit: upklyak