VR in the clouds: Merging two of today's hottest computing trends

Bob O'Donnell


Two of the hottest technologies in tech today are virtual reality/augmented reality (VR/AR) and cloud computing. Together the two have garnered much of the attention in both consumer and industry publications, blogs and websites. They also share the appeal of being intriguing new opportunities that each create new kinds of applications and usage models.

But while they’ve been covered at length separately, there hasn’t been much discussion about combining the two. Of course, the primary reason for this is that there aren’t many companies actively trying to bring these two technologies together. Virtually all current VR/AR headset designs demand strong local computing and graphics power, and almost all cloud computing models are intentionally independent of the display capabilities of a connected client device.

Setting these and other technical challenges aside for the moment—I’ll address them in a bit—the possibilities of creating just such a technological mashup are appealing at many levels. First, one of the core challenges facing the adoption of VR and AR in the near term isn’t just the cost of the headset, but the cost and the processing and graphics requirements of the connected PC. In theory, a cloud-based computing model with access to the kinds of multi-core CPU horsepower now available in today’s cloud-based datacenters should be able to easily address the raw computing requirements for demanding VR/AR applications.

More importantly, hyperscale data centers are finally getting access to the kinds of GPU power they need to take on this task. Between AMD’s Radeon Pro Duo and its new FirePro S9300 X2 graphics engines, as well as the extensive range of Tesla datacenter GPU products from Nvidia (and some of its forthcoming Pascal-based offerings), the graphics horsepower necessary to do the kind of GPU compute that VR demands is also now here.


Second, it’s already clear that truly compelling VR/AR applications are going to demand yet more digital engine speed than even what the best of today’s computing “supercars” have to offer. Rather than forcing rapid and costly upgrades to the base computing devices used for VR/AR, a cloud-based service can more easily enhance the speed of its core infrastructure and pass those benefits along to users of the service.

Third, given the early days of VR/AR applications, we’re also likely to see rapid developments and major changes in the nature, function, UI and virtually all aspects of this software. Once again, cloud-based delivery of these quickly evolving applications would likely create a much better and easier experience for end users, whether it’s for consumer or business purposes.

Finally, VR and AR are the types of applications that won’t be used by most people on a regular basis. Instead, they’re the types of things that are likely to be used more occasionally, making them potentially well-suited to pay-as-you-go, service-based business models.

Despite these benefits, there are some clear hurdles to overcome before this conceptual mashup can be made real. The reason most VR/AR headsets use a wired connection to a host computer is that the speed at which the displays are updated is extremely important. Any slowdowns or latency can not only impede the overall experience, they can literally be sickness-inducing. Thankfully, technologies such as wired Thunderbolt 3 and new wireless 60 GHz-based 802.11ad WiFi (and, eventually, 802.11ay) should be able to allay most of these concerns in the not-too-distant future.
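To put the latency concern in concrete terms, here is a rough motion-to-photon budget sketch in Python. The 20 ms comfort target is a commonly cited rule of thumb, and the per-stage figures are illustrative assumptions, not measurements from any particular headset or cloud service:

```python
# Back-of-the-envelope motion-to-photon latency budget for cloud-rendered VR.
# All stage estimates below are illustrative assumptions.

def frame_time_ms(refresh_hz: float) -> float:
    """Time available to present one frame at a given display refresh rate."""
    return 1000.0 / refresh_hz

def remaining_budget_ms(comfort_target_ms: float, *stage_latencies_ms: float) -> float:
    """Comfort budget left over after accounting for each pipeline stage."""
    return comfort_target_ms - sum(stage_latencies_ms)

if __name__ == "__main__":
    target = 20.0                 # rule-of-thumb motion-to-photon comfort target
    render = 5.0                  # assumed GPU render time in the data center
    encode_decode = 4.0           # assumed video encode + client-side decode
    display = frame_time_ms(90)   # one 90 Hz refresh interval, about 11.1 ms

    # Whatever is left is all the round-trip network time the service can afford.
    network_allowance = remaining_budget_ms(target, render, encode_decode, display)
    print(f"Frame interval at 90 Hz: {display:.1f} ms")
    print(f"Round-trip network allowance: {network_allowance:.1f} ms")
```

Under these assumed numbers the network allowance comes out slightly negative, which illustrates why remote rendering of head-tracked frames is so hard and why local techniques (reprojection, caching, prediction) get so much attention.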

For certain applications, particularly gaming, network latency can also be an issue, but for most other applications, clever caching algorithms and fast local storage can overcome any connectivity-induced delays.

Potentially more challenging is the current lack of a single standard for delivering the AR/VR content to different headsets. At the moment, each headset vendor tends to use their own methods, making the cloud-based delivery of a single optimized video stream more difficult.

Nevertheless, the overall benefits of a cloud-based VR solution seem to outweigh these potential concerns. As a result, I expect we’ll see several efforts that essentially allow VR/AR capabilities to be delivered in a thin-client-style application delivery model over the next 12-18 months.

Of course, thinking of VR/AR headsets as the great savior of the thin-client computing model seems rather ironic. But given the lack of legacy client hardware and the wide-open greenfield opportunities for VR/AR that lie ahead, this could just be the perfect application for making an older idea look very fresh and new.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.

I don't follow the author's logic here. The obvious challenge with running VR via the cloud is the latency issue. He blithely dismisses it with this assumption:

"For certain applications, particularly gaming, network latency can also be an issue, but for most other applications, clever caching algorithms and fast local storage can overcome any connectivity-induced delays."

While I understand caching could smooth out inconsistent frame rates, the bottom line is that if I tilt my head left rather than right, the bottleneck will be rendering the frames and getting them down the pipe to my thin client at a 90/100 Hz refresh rate. Unless your caching can predict which way I'm going to turn my head, it's useless for solving this problem.
 