Cloud Graphics: Could Your Next Smartphone or PC Use a Virtual GPU?

You’ve probably heard of OnLive, the cloud gaming service that was all the rage a few years ago but never really took off, and which came apart this August in a near-collapse that culminated in mass layoffs, the sale of the company and the departure of its CEO.

Whatever happens to OnLive or rivals like Gaikai, the idea remains intriguing: games and productivity applications run on servers and streamed to desktops or thin clients, with all the heavy processing handled in the cloud. In the old mainframe vs. distributed computing battle, think of it as the mainframe model resurgent.

(MORE: Did Cloud Gaming Service OnLive Really Just Fire Its Entire Staff?)

Nvidia’s throwing its silicon hat into the cloud computing ring with something it calls VGX, a “virtualized graphics” platform available as a high-end K2 or lower-end K1 board. The idea, Nvidia contends, is to offload virtual desktop infrastructure (VDI) graphics processing from CPUs to GPUs, something existing virtualization tools don’t do well, if at all.

In addition to souping up a VDI session on a PC, Nvidia is pitching this as a way to deliver serious GPU power to “any connected device,” by which we’re meant to infer tablets, smartphones and so forth.

But wait, you may be asking — doesn’t OnLive already do this? It does, sort of. Take OnLive Desktop, which the company announced back in January at CES 2012. It delivers desktop virtualization to PCs and tablets using the company’s cloud gaming technology, allowing you to pull up a Windows desktop virtually and, on the graphical side alone, fiddle with everything from “rich media” to “video, animation [and] slide transition” to full-on “PC games.”

So I’m a little confused when Nvidia refers to the VGX architecture as the “first cloud-based GPU.” Clearly it isn’t, if we’re counting the GPUs sitting in OnLive’s and Gaikai’s cloud servers, crunching 3D-intensive games and apps to serve relatively high-end, low-latency visuals to users. Aren’t those cloud-based GPUs, too?

Maybe I’m lost in marketing semantics. The VGX cards are actually versions of Nvidia’s Kepler architecture — used in the company’s 600-series video cards — optimized for cloud-computing systems. The entry-level K1, which supports up to 100 VDI users, comes with 768 CUDA cores (CUDA is Nvidia’s parallel computing platform, which lets programmers run general-purpose code on the GPU) and 16 GB of DDR3 memory, while the high-end K2, which supports up to 64 VDI users, has 3,072 CUDA cores and 8 GB of higher-performance GDDR5 memory. The former basically lets you give a large pool of VDI users reasonably good graphics performance, while the latter gives a smaller group a higher-end experience.
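For readers who have never touched CUDA, here’s a minimal, generic sketch of the kind of work those cores parallelize (an ordinary vector-addition program, not anything VGX-specific): each GPU thread handles one array element, and the runtime fans thousands of threads out across the card’s CUDA cores.

// Illustrative CUDA example: add two million-element arrays on the GPU.
// Each thread computes exactly one output element.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique index for this thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                      // ~1 million elements
    const size_t bytes = n * sizeof(float);

    // Host-side buffers
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers, plus copies of the input data onto the GPU
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // One thread per element, grouped into blocks of 256 threads
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);               // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Multiply that one tiny job across hundreds or thousands of cores and you get a sense of the raw parallel throughput Nvidia is proposing to carve up among VDI sessions.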

What about latency? Nvidia’s talking up “patent-pending remote display technology” that it claims “minimizes the lag traditionally associated with virtual desktop computing.”

If you’re wondering when to look for VGX powering Infinity Blade 3 on your Galaxy S4 or iPhone 6, it sounds like the VGX platform is aimed at enterprise VDI users (for now, anyway). According to Jeff Brown, general manager of the Professional Solutions Group at Nvidia, “With VGX K2 in the data center, designers and engineers who create the core intellectual property for their companies can now access their IP from any device and still enjoy workstation-class performance.”

MORE: Sony Bets On Cloud Gaming with Gaikai Purchase, but Don’t Expect Drastic Changes