Nvidia has used it to promote GPGPU computing, its growing presence in HPC (high-performance computing), new applications for big data, and now mobile. Long ago, Nvidia tired of the tit-for-tat competition with Intel, AMD, and several companies that have since disappeared. The company’s strategy has always been to change the game, write new rules, and buy the playing field. Thus, as software companies struggled to transition to the new age of multi-core/multi-chip computing, Nvidia developed CUDA, a slight fork in the code road that makes it easier to develop applications that can take advantage of Nvidia’s multi-core GPUs and heterogeneous computing. As mobile grew big enough to tempt us to write requiems for the PC, Nvidia introduced its Tegra line of mobile products.
Jen Hsun Huang set the tone in his keynote by talking about where the bottlenecks are and how Nvidia intends to overcome them. Nvidia has been talking about a unified memory plan for the CPU and GPU, but Huang surprised attendees with the announcement of the upcoming Pascal for 2015, which will follow Maxwell, which in turn succeeds Kepler. Nvidia has done some rearranging of the road map: in previous road map presentations, Huang had talked about Volta, but that chip has now been pushed further out on the road map.
Pascal addresses the bottleneck between the CPU and GPU with unified memory and a new interconnect, NVLink, and it also features 3D stacked DRAM. NVLink is a serial interconnect, similar to PCIe, that will be used for GPU-to-CPU and GPU-to-GPU communication. It will enable much faster bandwidth, five to 12 times that of PCIe, up to 80GB/sec. The Pascal GPU will also include 3D stacked DRAM with through-silicon vias (TSVs), which enable greater memory density with no additional room taken up on the PCB.
Although Huang said the GTC Conference is about everything but gaming, it’s inconceivable that an Nvidia event could happen without a big clanging transformer video of the latest graphics board. This one required earplugs. Nvidia showed off the upcoming GeForce GTX Titan Z, which is based on two GK110 GPUs, each with 6GB of frame buffer memory. Each GPU has 2,880 CUDA cores for a total of 5,760, with 12GB of frame buffer memory.
But really … cars
In the second half of 2013, Nvidia announced that its Kepler architecture had been extended to the Tegra line of low-power processors for mobile, bringing with it CUDA support and the ability to accelerate apps for automotive, the Internet of Things (IoT), robots, tablets, phones, and whatever magical thing comes along; the K1 chip itself arrived in January. The K1 includes 2GB of memory and support for USB 3.0, HDMI 1.4, Gigabit Ethernet, audio, SATA, miniPCIe, and an SD card slot.
At GTC, Nvidia officially announced the TK1 Developer Kit, codenamed Jetson. The company has actually had Jetson out there for some time, but now Nvidia is seeking to make it widely available to encourage tinkerers to start tinkering. The Developer Kit comes with a C/C++ toolkit for CUDA and Nvidia’s VisionWorks toolkit with support for cameras and sensors. In addition to CUDA, the SDK supports OpenGL 4.4. It’s available now to the public for pre-order for $192 from Nvidia, Micro Center, and Newegg. It can also be ordered through Avionic Design, SECO, and Zotac in Europe.
Nvidia has been showing off its UI Composer Studio for some time now. It’s an application that enables car designers to design, prototype, evaluate, and deploy digital instrument clusters and in-vehicle infotainment systems. Even if the manufacturers do not use the tool in their actual production, it makes a nifty demo to show off the promise of digital technology in the car.
Nvidia’s UI Composer Studio is a 3D content creation application for car dashboards. The company has been working with automotive companies to push the evolution of digital dashboards, and Tesla has put it into production with its spectacular center module. (Source: Nvidia)
Audi and Tesla are pretty far along in their work with Nvidia’s chips for automotive, and BMW is another customer. First, the applications are for infotainment systems, the low-hanging fruit for the companies building mobile application processors. The application isn’t so different from tablets and, in fact, Nvidia showed back-seat tablet companions for automotive infotainment, which could be used to control music, video, and other content from the back seat.
Next up comes assisted driving, and Nvidia is talking about the work it is doing with Audi. Jen Hsun Huang said, “The car of the future is going to be your smartest robot.” Andreas Reich, head of Audi pre-development, arrived on stage with a self-driving Audi. The first feature we’ll see is traffic assist: when you’re stuck in a horrible traffic jam, the car can take over the stop-and-go driving and maybe stave off insanity. Also on the way is self-parking, with systems that can process 120 million pixels per second.
Baby you can park my car: Audi’s computer vision can survey the surroundings for spaces in which the car can fit. There is little doubt the machine can do a better job of getting into tight spaces than a harried human.
Jen Hsun Huang is enthusiastic about the automotive industry. He said people ask, “Why do you want to be in the automotive industry? It’s not a very big industry, and it moves slowly.” Huang challenges the idea that the automotive industry of the future will be small, pointing out that subsystems in cars, including computer vision, infotainment, and digital dashboards, will use multiple processors, and some of them are going to be Nvidia’s.
“We're the only semiconductor company that serves all three markets,” Huang told investors at a later meeting.
During the keynote, Reich showed off a computer module in the self-driving car that arrived on the GTC stage, and he talked about how modules can be upgraded. In addition, the cars of the future will get regular upgrades to add and change features, easily done with digital dashboards and networking. Jen Hsun marveled at the upgrades his Tesla gets that have made it more responsive, even enabling the car door handles to move out to meet Jen Hsun’s hand.
Straightening out the investor community
Nvidia holds its investor day at GTC so it can expose investors to the wider world of Nvidia’s business. This year, Jen Hsun was adamant that the PC business is only a small part of Nvidia’s future. To put the company’s business in perspective, Huang pointed out that the PC business had a CAGR of 9%, which is better than the industry average because he’s counting specialty PCs for gaming and design. Nvidia’s mobile business has enjoyed a 48% CAGR, and cloud/HPC business has grown 64%. Yes, these are smaller businesses, and Nvidia’s mobile business has had challenges to overcome, yet Jen Hsun’s point is that the future for Nvidia is in the direction of these younger, rapidly evolving industries.
One hapless investor asked Jen Hsun how he intended to overcome the “challenges” of the PC business, and he pointedly told her Nvidia is not a chip company. They’re not depending on chips sold for computers. “We use a lot of chips,” he conceded, “but we’re not a chip company.” Rather, he preferred to talk about the company’s software development, their patented technology, and their ability to sell systems for virtualization, cloud-based computing, and cars. Nvidia just loves cars these days.
GameWorks
GTC comes just one week after GDC, the Game Developers Conference (hilarious misunderstandings ensue). At GDC, Nvidia talked about the advances in its freely available technologies, including PhysX and VisualFX. These technologies have been bundled up in middleware Nvidia calls GameWorks.
Nvidia is increasingly self-identifying as a software company, and GameWorks, along with tools like UI Composer Studio, makes the argument much more compelling. Nvidia’s Tony Tamasi, SVP of Content and Technology, heads the company’s software development, which covers tools, middleware, performance, technology, and research. That Tamasi is in charge is an indication of how important Nvidia thinks software development is to its future. GameWorks is only one part of the effort, but it does give Nvidia the chance to show what GPU compute really means. When you can see physics at work moving the complex geometries of water, or the random destruction of walls and flaming particles, you get that this is a job for as many processors as you can put to work.
Titanfall by Respawn Entertainment is the showcase game for the Xbox One, and the game developers are taking advantage of Nvidia’s GameWorks technology for physics and particle effects. (Source: Respawn)
At GTC and GDC, Nvidia demonstrated the latest tool in the GameWorks toy box, FlameWorks, which enables cinematic smoke, fire, and explosions. It includes a grid-based fluid simulator with a volume-rendering engine, and Nvidia says it supports user-defined emitters, force fields, and collision objects. Also new is PhysX FleX, a particle-based simulation technique that includes a highly parallel constraint solver that can take advantage of the GPU’s processors. At GTC, Jen Hsun unveiled a GameWorks demo featuring a giant blue whale breaching, scattering phosphorescent organisms and veils of cascading water. Nvidia’s GameWorks tools have been used in Call of Duty: Ghosts, where HairWorks was used to create more realistic characters and wolves; the developers also used Nvidia’s tools for turbulence and TXAA (anti-aliasing that reduces crawling and flickering during gameplay). Turbulence was also used for snow, steam, and fog effects in Batman: Arkham Origins.
Nvidia has a long history of exploring these kinds of technologies to enable better, more beautiful, bloody, frightening game effects, but Tamasi’s team is now putting more work into packaging these tools so they’re accessible to game developers. They’re being sold as developer tools with GeForce and Tegra.
Where there’s a pixel … Nvidia’s message at GTC 2014 is that Nvidia has a play just about everywhere a pixel gets to the screen. Nvidia and Audi are working on next-generation cars that will get more and more competent. Meanwhile, under the hood, advances in GPU/CPU performance, like the improvements coming with Tegra, will help drive those smarter cars as Nvidia pushes its architecture across its products. (Source: Jon Peddie Research)
End to end to end
Although cars were the big attention-getters at GTC 2014, and ever more impressive games graphics are emblematic of Nvidia, the advances being made in virtualization have the power to change computing for everyone. Nvidia was early with the introduction of its Grid virtualization technology. So early that most people didn’t understand how it would affect them, and most businesses were more concerned about security than they were about getting to computers in the back room or in the cloud. It’s amazing how fast that has been changing. Dell and HP are offering turnkey virtualization along with their PCs and servers. Companies like Citrix, VMware, Amazon, and Penguin are improving the path to implementation so that all customers have to do is sign the check.
At GTC, the company announced an alliance with VMware, which claims to be the largest player in enterprise virtualization, whereby people can access remote computer resources and share GPUs or, if more appropriate, get access to multiple GPUs in the cloud for rendering, analysis, simulation, visualization, big data processing, and all that. VMware’s primary business has been in helping build private cloud systems.
VMware and Nvidia are looking at wider markets as they expand VMware’s desktop as a service (DaaS) to public clouds, enabling cloud-hosted desktops on any device, anywhere. At GTC, VMware announced it is adding virtual GPU (vGPU) support to vSphere. vSphere was conceived as an enhanced suite of tools for cloud computing, and the company’s announcement that it is bringing Nvidia’s Grid technology to its Horizon DaaS platform is a pretty big deal.
In order for this to happen, two critical and unusual things involving closely guarded IP had to occur: VMware had to open up its source to Nvidia, and likewise, Nvidia had to open up its Grid source code to VMware. And somehow, without soul-crunching and deal-breaking legal “oversight,” the presidents of the two companies, Pat Gelsinger and Jen Hsun Huang, came to an agreement.
To make DaaS work with Grid, Nvidia and VMware embedded Nvidia’s microcode into VMware’s hypervisor; Nvidia hardware and software run through the VM stack.
Now, any enterprise VMware private cloud, or any VMware public cloud, will be able to employ, and deploy, virtual GPUs into their virtual systems anywhere in the world. VMware users will be able to have a dedicated GPU or a piece of a virtual GPU on any platform the enterprise is supporting with VMware. As a result, Nvidia is looking at a much larger pool of customers.
What Nvidia wanted to communicate above all else is that the company’s technology can be in everything you touch: your computer tools, your phone, running business in the cloud, accelerating discoveries in science, and in your car. The company has a pretty good force-field going. Nvidia has struggled to get its mobile processors in a significant number of phones, and the company has some impressive wins in the car industry, but they’ve got plenty of company fighting for a job under the hood or in the dashboard – it will be a while before we see if Nvidia’s vision of the car future makes the company a major automotive supplier. But, the company has a strong position in gaming, and it moved fast to gain the advantage in virtualization.
In 10 years, the tools we use and the machines we rely on are going to be completely different from what they are now, but Nvidia intends to be there in a big way.
Kathleen Maher is a contributing editor to CGW, a senior analyst at Jon Peddie Research, a Tiburon, California-based consultancy specializing in graphics and multimedia, and editor-in-chief of JPR’s “TechWatch.” She can be reached at Kathleen@jonpeddie.com.