Bottom line: Microsoft and Nvidia's new partnership will make it much easier to access high-performance GPUs in the cloud. Researchers can spend less time configuring software and more time solving important problems.

A new collaborative effort to bring Nvidia's GPU Cloud to Microsoft Azure has been announced. Data scientists, developers and researchers can now take advantage of ready-to-run, GPU-accelerated software containers backed by Azure's GPU compute resources.

In total, 35 pre-configured, GPU-accelerated containers are available. Workloads can run on Tesla V100, Tesla P100, or Tesla P40 cards, with one, two, or four GPUs allocated per instance. Nvidia's containers work seamlessly across different Azure instance types, even those with differing numbers of GPUs.
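As a rough sketch of the workflow, pulling and running one of these pre-configured containers on an Azure GPU instance with Docker and the Nvidia container runtime looks something like the following. The image name and tag are illustrative; the actual names come from the NGC container registry.

```shell
# Log in to the Nvidia GPU Cloud registry.
# NGC uses the literal username "$oauthtoken" and an NGC API key as the password.
docker login nvcr.io

# Pull a pre-configured, GPU-accelerated framework image (tag is illustrative).
docker pull nvcr.io/nvidia/tensorflow:18.08-py3

# Run it interactively with GPU access via the Nvidia container runtime.
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.08-py3
```

Because all of the framework, CUDA, and driver-library dependencies ship inside the image, the same commands work regardless of which Azure GPU instance type is underneath.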

Other Azure offerings, such as Azure CycleCloud, can scale workloads to thousands of GPUs so that complex problems can be tackled in hours instead of years. Nvidia's K80 GPUs have proven effective for widely distributed tasks: in one demonstration, a cluster of 2,048 K80 GPUs sustained more than 25 GB/s of bandwidth between the file system and compute nodes.

Those who have worked with GPU-accelerated tasks in the cloud know that frameworks and libraries receive updates regularly, and each new release can break existing code and force time-consuming migrations. Nvidia GPU Cloud now offers easy access to pre-configured environments, which can mean significant time savings.
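One practical way these containers guard against surprise breakage is version pinning: referencing an explicit, dated image tag instead of a moving `latest` tag, so the environment only changes when you choose to change it. A minimal sketch (tags are illustrative):

```shell
# Risky: "latest" is a moving target, so tomorrow's pull may differ from today's
# and silently break code written against the current framework version.
docker pull nvcr.io/nvidia/pytorch:latest

# Safer: pin an explicit release tag so every pull reproduces the same
# framework, CUDA toolkit, and library versions.
docker pull nvcr.io/nvidia/pytorch:18.08-py3
```

Upgrading then becomes a deliberate step: change the pinned tag, re-test, and roll back by simply reverting to the previous tag if something breaks.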

Nvidia and Microsoft have committed to ensuring that containers are updated monthly with all necessary updates for customers.

Staying engaged with the developer community, Nvidia made more than 800 contributions to open source projects last year, with many more planned this year. Popular software such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch and Nvidia's TensorRT is well supported.