Opinion: Internet of Things connections made easy

Bob O'Donnell


For long-time tech industry observers, many of the primary concepts behind business-focused Internet of Things (IoT) feel kind of old. After all, people have been connecting PCs and other computing devices to industrial, manufacturing, and process equipment for decades.

But there are two key developments that give IoT a critically important new role: real-time analysis of sensor-based data, sometimes called “edge” computing, and the communication and transfer of that data up the computing value chain.

In fact, enterprise IoT (and even some consumer-focused applications) is bringing new relevance and vigor to the concept of distributed computing, where different types of workloads are spread across a connected chain of computing devices, from the endpoint, to the edge, to the data center, and, most typically, to the cloud. Some people have started referring to this type of effort as “fog computing.”
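To make that division of labor concrete, here is a minimal Python sketch of the fog idea: raw readings are produced at the endpoint, filtered in real time at the edge, and only compact summaries travel up toward the data center or cloud. All of the function names, values, and thresholds here are illustrative, not part of any particular IoT platform.

```python
# A minimal sketch of the "fog" idea described above: raw sensor readings
# are filtered at the edge and only summaries travel up toward the cloud.
# All names and values here are illustrative, not tied to any real platform.

def endpoint_read() -> list[float]:
    """Endpoint: a sensor produces raw readings."""
    return [21.4, 21.5, 21.5, 35.2, 21.6]  # one anomalous spike

def edge_filter(readings: list[float], threshold: float = 30.0) -> dict:
    """Edge: analyze locally in real time, forward only what matters."""
    anomalies = [r for r in readings if r > threshold]
    return {"count": len(readings), "anomalies": anomalies}

def cloud_aggregate(summaries: list[dict]) -> int:
    """Cloud/data center: aggregate summaries from many edge nodes."""
    return sum(len(s["anomalies"]) for s in summaries)

summary = edge_filter(endpoint_read())
print(cloud_aggregate([summary]))  # -> 1 anomaly reported upstream
```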

Critical to that entire process are the communications links between the various elements. Early on, many of those connections were based on good old wired Ethernet, and many still are, but an increasing number are moving to wireless. Within organizations, WiFi has grown to play a key role, but because many IoT applications are geographically dispersed, the most important link is proving to be wide-area wireless, such as cellular.

A few proprietary standards, such as Sigfox and LoRa, have arisen to address specific needs and IoT applications by leveraging unlicensed radio spectrum (spectrum that does not require a license to use, in contrast to the licensed frequencies used for cellular phone and data service). However, it turns out that traditional cellular and LTE networks are well-suited to many IoT applications for several reasons, many of which are not well-known or understood.

First, in the often slower-moving world of industrial computing, there are still many live deployments of, and relatively heavy usage of, 2G networks. Yes, 2G. The reason is that many IoT applications generate tiny amounts of data and aren’t particularly time-sensitive, so the older, slower, cheaper networks still work.

Many telcos, however, are in the midst of upgrading their networks to 5G and faster versions of 4G LTE. As part of that process, many are shutting down their 2G networks so they can reclaim the radio frequencies used for 2G and redeploy them in their faster 4G and 5G networks. Being able to transition from those 2G networks to later cellular standards, therefore, is a practical, real-world requirement.


Second, there’s been a great deal of focus on creating low-cost and, most importantly, low power wide area networks that can address the connectivity and data requirements of IoT applications, but within a modern network environment.

The two best-known efforts are LTE Cat M1 (sometimes also called eMTC) and LTE Cat NB1 (sometimes also called NB-IoT, or Narrowband IoT), both of which were codified by the telecom industry standards body 3GPP (3rd Generation Partnership Project) as part of its Release 13 set of standards.

Essentially, these are variations on the well-known and widely deployed LTE standard (part of the 4G spec, if you’re keeping track) and offer different combinations of speed and power for different types of IoT applications. Cat M1 demands more power, but also supports basic voice calls and data transfer rates of up to 1 Mbps, versus no voice and 250 kbps for NB-IoT. On the power side, however, devices leveraging NB-IoT can run on a single battery for up to 10 years, a critical capability for IoT applications that rely on sensors in remote locations.
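To illustrate that trade-off, here is a brief, hypothetical Python sketch that picks between the two categories based on an application's requirements. The thresholds mirror the figures above (voice and roughly 1 Mbps for Cat M1, 250 kbps and multi-year battery life for NB-IoT); the class and function names are invented for illustration and are not drawn from any carrier or modem API.

```python
# Illustrative sketch (not a real carrier or modem API): choosing between
# LTE Cat M1 (eMTC) and Cat NB1 (NB-IoT) based on the trade-offs described
# above -- Cat M1 offers voice and up to ~1 Mbps at higher power, while
# NB-IoT tops out around 250 kbps but can run for years on a single battery.

from dataclasses import dataclass

@dataclass
class IoTRequirements:
    needs_voice: bool          # does the device need basic voice calls?
    peak_data_rate_kbps: int   # peak data rate requirement
    battery_life_years: float  # target life on a single battery

def pick_lte_category(req: IoTRequirements) -> str:
    """Return a suggested LTE IoT category for a given application profile."""
    if req.needs_voice or req.peak_data_rate_kbps > 250:
        # Only Cat M1 supports voice and rates up to roughly 1 Mbps.
        return "LTE Cat M1 (eMTC)"
    if req.battery_life_years > 5:
        # NB-IoT's lower power budget suits long-lived, battery-powered sensors.
        return "LTE Cat NB1 (NB-IoT)"
    # Either works; default to the lower-power option.
    return "LTE Cat NB1 (NB-IoT)"

# Example: a remote moisture sensor that reports a few bytes per hour.
sensor = IoTRequirements(needs_voice=False, peak_data_rate_kbps=10,
                         battery_life_years=10)
print(pick_lte_category(sensor))  # -> "LTE Cat NB1 (NB-IoT)"
```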

Even better, both technologies can be deployed alongside existing 4G networks through software-based upgrades to current cellular infrastructure. This is critically important for carriers, because it significantly reduces the cost of adding these technologies to their networks, making it much more likely they will do so.


In fact, it turns out both eMTC and NB-IoT networks can be run at the same time on existing cellular networks. In addition, if carriers choose to, they can start by deploying just one of the technologies and then either add or transition to the other. This point hasn’t been very clear to many in the industry because several major telcos have publicly spoken about deploying one technology or the other for IoT applications, implying that they chose one over the other. The truth is, the two network types are complementary.

Of course, to take advantage of that flexibility, organizations also need devices that can connect to these various networks and, in some cases, be upgraded to move from one type of network connection to another. Though it isn’t widely known, Qualcomm recently introduced a multimode modem specifically for IoT devices, the MDM9206, that supports not only Cat M1 and Cat NB1 but also eGPRS connections for 2G networks. Plus, it can be remotely upgraded or switched as IoT applications and network infrastructures evolve.
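As a rough illustration of that kind of flexibility (and emphatically not the MDM9206's actual interface), the following hypothetical Python sketch shows how multimode firmware might prefer the newer low-power LTE categories and fall back to eGPRS only where legacy 2G coverage is all that remains.

```python
# Hypothetical sketch of how multimode firmware might fall back across
# network types as infrastructure changes; this is NOT the MDM9206 API,
# just an illustration of the flexibility described above.

from typing import Iterable, Optional

# Preference order: newer low-power LTE categories first, 2G eGPRS as a
# last resort where legacy networks are still live.
PREFERRED_MODES = ["LTE Cat M1", "LTE Cat NB1", "eGPRS (2G)"]

def select_mode(available_networks: Iterable[str],
                preferences: list[str] = PREFERRED_MODES) -> Optional[str]:
    """Pick the most preferred mode that the local network actually offers."""
    available = set(available_networks)
    for mode in preferences:
        if mode in available:
            return mode
    return None  # no supported network in range

# Example: a carrier has retired 2G but runs both LTE IoT categories.
print(select_mode({"LTE Cat M1", "LTE Cat NB1"}))  # -> "LTE Cat M1"

# Example: an older rural site where only 2G coverage remains.
print(select_mode({"eGPRS (2G)"}))                 # -> "eGPRS (2G)"
```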

Like many core technologies, the world of communications between the billions of devices that are eventually expected to be part of the Internet of Things can be extremely complicated. Nevertheless, it’s important to clear up potential confusion over what kinds of networks we can expect to see used across our range of connected devices. It turns out those connections may be a bit easier than we thought.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.


 
In my own personal opinion, the creation of proprietary devices, protocols, etc. might seem like a good idea, and it might even give the particular maker an "edge" for a short period of time, but in the long run it diminishes user friendliness to the point that the majority fail. To their credit, Microsoft was the first to create and require registered protocols and insisted that anyone developing "things" to run on MS-based systems adhere to those same protocols. This came about because a little thing called "interrupts" kept crashing systems as programs fought over the same space shared by other programs, and once it was implemented, PCs became a lot better and crashed a lot less.

This single act created a LOT of stability across the entire PC universe. Mind you, that was back in the old days of DOS, before the internet; in fact, the concept of networking was just starting to be realized with AppleTalk. It is sad that over time this concept became obsolete. When HyperCard first appeared, the idea of an internet started getting a lot of talk, and it launched a million great ideas that eventually coagulated into the internet. Back in those days, hearing that almost 10 new internet sites were created a day was .... flabbergasting, and we all nearly killed ourselves trying to visit them each and every day. Now? Oh, don't even get me started ........

But the bottom line is that without some form of forced or agreed-upon standards there will always be unavoidable conflict and corresponding problems. God help the person, persons and/or organization that tries to put it back together .... but if we support their efforts and they are successful, we will all be that much better off in the long run ....
 

This scenario has been played out so many times, and you are correct. Interchangeable parts, ports, standards, or connections are absolutely crucial. Not only is it much easier for everyone to understand, but it's much easier to distribute and build for.

Paying telcos like Verizon or AT&T for wireless access just for my IoT devices seems very unappealing. Right now those companies charge full price for each additional device you add to your plan. You are looking at $20 for data alone per device. Even if it were just $10, that's $10 for the car, $10 for the fridge, $20 for the washer/dryer, etc.

While one advantage of these emerging technologies is that they can still be used on current 2G networks, it would be a whole heck of a lot better if they just got out of the way of the much faster and newer standards. It's similar to DSL. While it was great to get something better than dial-up that didn't require new lines, it's just a band-aid that won't fix the fact that these lines were never built with this use in mind in the first place.
 
Today, T H E issue with IoT is not connectivity, but security. With standard protocols, open ports, and default passwords, any script kiddie can hack your home.
 
You beat me to it! :) I was going to mention that.

Seems like every article I have seen on TS touting IoT as the next big fad fails to mention whether any IoT device makers are considering security at all. IMO, this is absolute insanity.

I guess "consultants" need to be paid for something even if it is suggesting that you open a gaping hole in your home network. :confused:
 
As connected products are increasingly integrated into everyday life, measures to address the security of Internet of Things (IoT) devices continue to evolve. Some of the latest initiatives include the following.

  • NTIA issues guidance
  • Internet of Things (IoT) Cybersecurity Improvement Act of 2017
  • ANSI introduces first independent cybersecurity standard


See Lexology.com for details.
 