During VMworld 2015, NVIDIA announced the release of GRID 2.0. Extensive experience with the initial release of the GRID technology in a controlled lab environment has proven helpful in designing centralized, secured, graphically accelerated virtual workstations. While the initial technology worked as advertised, limitations inherent in the now-outdated GPU technology within the GRID K1 and K2 adapters have made this announcement exciting above and beyond the baseline gains from a refreshed architecture and a more power-efficient design.
The escalating model numbers of the next-generation parts make more sense than the prior naming standard. In the old paradigm, the GRID K1 adapter can support up to 16 sessions per GPU while the GRID K2 adapter can support up to 8 sessions per GPU. Although the GPUs and performance in the K2 were superior to those in the K1, stepping “up” a model would not normally decrease capabilities in a standard product stack. The GRID 2.0 naming convention follows the industry-standard practice of “bigger is better”.
The ability to provide CUDA compute capability to a guest without directly passing through an entire adapter is one of the most promising changes in this refresh. The improved density per adapter will help drive down the cost of providing virtual workstations with graphical acceleration. The primary caveat that still appears to remain in play is the inability to live-migrate the video memory of an in-use guest from its current host to a target host with available capacity. Hopefully, during the lifecycle of GRID 2.0, the engineers at NVIDIA and the Tier 1 hypervisor solution providers can work together either to find a way to enable this capability or to define the standards within GRID 3.0 that would finally facilitate a highly available virtual workstation.