In context: One of the more intriguing topics driving evolution in the technology world is edge computing. After all, how can you not get excited about a concept that promises to bring distributed intelligence to a multitude of interconnected computing resources, all working together to achieve a singular goal?

Trying to distribute computing tasks across multiple locations and then coordinate those various efforts into a cohesive, meaningful whole is a lot harder than it first appears. This is particularly true when attempting to scale small proof-of-concept projects into full-scale production.

Issues like moving enormous amounts of data from location to location (which, ironically, edge computing was supposed to make unnecessary) and the overwhelming demands of labeling that data are just two of several factors that have conspired to make successful edge computing deployments the exception rather than the rule.

IBM Research has been working to help overcome some of these challenges for several years now. Recently, it has begun to see success in industrial environments like automobile manufacturing by taking a different approach to the problem. In particular, the company has been rethinking how data is analyzed at various edge locations and how AI models are shared with other sites.

At car manufacturing plants, for example, most companies have started to use AI-powered visual inspection models that help spot manufacturing flaws that may be difficult or too costly for humans to recognize. Proper use of tools like the Visual Inspection solution in IBM's Maximo Application Suite, with its goal of Zero D (zero defects or downtime), can both help car manufacturers save significant amounts of money by avoiding defects and keep manufacturing lines running as quickly as possible. Given the supply chain constraints that many auto companies have faced recently, that second point has become particularly critical.

The real trick, however, is getting to the Zero D aspect of the solution, because inconsistent results based on wrongly interpreted data can actually have the opposite effect, especially if that bad data ends up being propagated across multiple manufacturing sites through inaccurate AI models. To avoid costly and unnecessary production line shutdowns, it's critical to make sure that only the appropriate data is used to generate the AI models and that the models themselves are checked for accuracy on a regular basis, in order to avoid any flaws that wrongly labeled data might create.

This "recalibration" of the AI models is the essence of the secret sauce that IBM Research is bringing to manufacturers and in particular a major US automotive OEM. IBM is working on something they call Out of Distribution (OOD) Detection algorithms that can help determine if the data being used to refine the visual models is outside an acceptable range and might, therefore, cause the model to perform an inaccurate inference on incoming data. Most importantly, it's doing this work on an automated basis to avoid potential slowdowns that would occur from time-consuming human labelling efforts, as well as enable the work to scale across multiple manufacturing sites.

A byproduct of OOD Detection, called Data Summarization, is the ability to select only the most useful data for manual inspection, labeling, and model updates. In fact, IBM is targeting a 10-100x reduction in the amount of data traffic that many early edge computing deployments currently generate. In addition, by eliminating redundant data (near-identical images), this approach results in roughly 10x better utilization of the person-hours spent on manual inspection and labeling.
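Again, IBM hasn't described its Data Summarization method in detail, but the core idea of removing near-identical images before they reach human labelers can be sketched as a simple embedding-similarity filter. The embedding source and the similarity threshold here are assumptions for illustration only.

```python
# Minimal sketch of data summarization via near-duplicate filtering: keep an image
# only if it is sufficiently different (in embedding space) from everything already kept.
# The 0.98 cosine-similarity threshold is an assumed, illustrative value.
import numpy as np

def summarize(embeddings: np.ndarray, similarity_threshold: float = 0.98) -> list[int]:
    """Return indices of images worth sending for manual inspection and labeling."""
    # Normalize rows so a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, vec in enumerate(normed):
        # Keep the image only if it is not a near-duplicate of any already-kept image.
        if all(vec @ normed[j] < similarity_threshold for j in kept):
            kept.append(i)
    return kept
```

Only the kept images would go to human labelers, which is where the claimed 10x improvement in person-hours would come from.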

In combination with state-of-the-art techniques like OFA (Once For All) model architecture exploration, the company also hopes to reduce the size of the models by as much as 100x, enabling more efficient edge computing deployments. In conjunction with automation technologies designed to distribute these models and data sets more easily and accurately, this allows companies to create AI-powered edge solutions that can successfully scale from smaller POCs to full production deployments.
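OFA is a published research technique: train one large "super-network" once, then carve out smaller sub-networks tuned to each device. The sketch below is not the OFA library's actual API, but it illustrates the selection step of picking the best candidate sub-network that fits a given edge device's size budget; the size estimator and accuracy predictor are hypothetical stand-ins.

```python
# Minimal sketch of the Once-For-All (OFA) selection step: among candidate
# sub-network configurations carved from one trained super-network, pick the most
# accurate one that fits the target edge device's model-size budget.
# estimate_size_mb and predict_accuracy are hypothetical stand-ins for OFA's real
# cost model and learned accuracy predictor.
from dataclasses import dataclass

@dataclass
class SubnetConfig:
    depth: int          # layers kept per stage
    width_mult: float   # fraction of channels kept
    resolution: int     # input image resolution

def estimate_size_mb(cfg: SubnetConfig) -> float:
    """Rough parameter-count proxy for the sub-network's size (illustrative only)."""
    return 100.0 * cfg.width_mult ** 2 * cfg.depth / 4

def predict_accuracy(cfg: SubnetConfig) -> float:
    """Stand-in for a learned accuracy predictor over sub-network configurations."""
    return 0.85 + 0.01 * cfg.depth + 0.05 * cfg.width_mult + 0.0001 * cfg.resolution

def pick_subnet(candidates: list[SubnetConfig], size_budget_mb: float) -> SubnetConfig:
    """Choose the most accurate candidate that fits the device's size budget."""
    feasible = [c for c in candidates if estimate_size_mb(c) <= size_budget_mb]
    return max(feasible, key=predict_accuracy)
```

The practical point is that each plant or edge site could get a model sized to its own hardware without retraining from scratch, which is what would make that kind of size reduction feasible across scaled-out deployments.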

Efforts like the one being explored with a major US automotive OEM are an important step toward proving the viability of these solutions for markets like manufacturing. However, IBM also sees the opportunity to apply these concepts of refining AI models to many other industries, including telcos, retail, industrial automation, and even autonomous driving. The trick is to create solutions that work across the inevitable heterogeneity of edge computing and leverage the unique value that each edge computing site can produce on its own.

As edge computing evolves, it's clear that it's not necessarily about collecting and analyzing as much data as possible, but rather finding the right data and using it as wisely as possible.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.