SAN JOSE, CA - The Nvidia GTC conference kicked off in San Jose, and the first day proved to be quite impressive. In fact, I was won over during the first two hours, when Nvidia co-founder and CEO Jen-Hsun Huang delivered his keynote speech.
The overall theme, of course, was the need for GPUs to accelerate the kind of work conference attendees do - work far more complex than that of the typical computer user. In a nutshell, it requires more computing power than the typical PC delivers. And GTC is all about the GPU. Those who need accelerated computing have been embracing the technology: this year's GTC is double the size it was in 2012, when Nvidia revealed Kepler. And it seems that Nvidia keeps delivering.
NVIDIA UNIFIED SDK
During his keynote, Huang highlighted the Nvidia Unified SDK - essential tools for anyone doing GPU development. Although not entirely new (some of the SDKs were highlighted at SIGGRAPH 2015), there were some new components and kits. What is the point of the SDKs? Each contains tools for work in a specific area, such as game development or virtual reality. They help users avoid reinventing the wheel, so to speak, and focus their attention on other facets of development.
Think of the Nvidia SDK as a large umbrella with various SDKs under it, each containing tools and technologies geared to the requirements and challenges of its domain. They help developers create solutions for deep learning, accelerated computing, self-driving cars, design visualization, autonomous machines, gaming, and VR.
The Nvidia SDKs available include:
- GameWorks. Technologies such as PhysX, WaveWorks, FlameWorks, HairWorks, and more for creating realistic environments and images in games.
- DesignWorks. Technologies such as Iray for photoreal graphics, which must be accurate and physically simulated.
- ComputeWorks. Accelerated computing technologies for all industries. Components include cuDNN, a library of GPU-tuned deep neural network primitives (a toy sketch of the kind of operation it accelerates appears after this list); nvGraph, for graph analytics and insight into data; and IndeX, a platform for visualizing the massive data sets produced by supercomputer simulations. There is also CUDA 8. All will be available between now and June.
- VRWorks. All-new technologies Nvidia created for virtual reality.
- DriveWorks. Still in development, this is for autonomous driving applications. General release is expected in Q1 2017.
- Jetpack. This is for autonomous machines with embedded deep learning, such as drones and robots. It is architecturally compatible with the other Nvidia suites and supports the Jetson TX1, which can process 24 images per second and is energy efficient - suited to deep learning on images at very high frame rates in situations requiring inference and action.
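Most of these kits wrap GPU-tuned primitives behind an API. As a rough feel for the kind of primitive cuDNN accelerates, here is a plain-CPU 2D convolution in NumPy - a toy sketch of the operation, not any Nvidia API; the shapes and values are invented for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution (strictly, cross-correlation), the core
    operation cuDNN runs on the GPU for deep neural networks."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply the kernel against one image patch and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)              # toy single-channel input
edge = np.array([[1.0, 0.0, -1.0]] * 3)   # simple edge-detection filter
print(conv2d(image, edge).shape)          # -> (6, 6)
```

Libraries like cuDNN exist precisely so developers never hand-roll loops like these; the GPU versions run orders of magnitude faster.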
VIRTUAL REALITY
As Huang pointed out, VR is not just a new gadget; it's a brand-new computing platform. "Who doesn't want to be on the battlefield or chasing monsters down the hall?" he asked the audience. But VR entails so much more than gaming. It can be a virtual showroom for cars, or a realistic rendition of an architectural plan. It can take you places that are far too dangerous to visit otherwise, or places beyond your reach, like Mars (more on that later in the week).
To give us a look at where we can go with VR, Huang took us to the top of the world, Everest. He also took us to Mars - well, actually, Steve Wozniak did. The VR trip was initiated on stage in San Jose while Woz was off-site; the view through his VR headset was relayed back in real time for all to see on the stage screen in the hall.
Today, for VR to be successful, images have to be as realistic as possible. Sometimes, though, that is not enough: they have to be photoreal. This requires new rendering technology that follows photons as they bounce around the room, all of which has to then be physically simulated. This is where Iray comes in - actually, Iray VR.
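To get a feel for what "following photons" means, here is a toy Monte Carlo sketch. It is not Iray's algorithm - the probabilities and albedo below are invented for illustration - but it shows the basic idea: average many random light paths, each losing energy at every surface bounce.

```python
import random

def trace_path(albedo=0.7, emitted=1.0, p_hit_light=0.2, max_bounces=16):
    """Follow one 'photon' as it bounces. At each bounce the path either
    reaches the light source or loses energy to the surface it hit."""
    throughput = 1.0
    for _ in range(max_bounces):
        if random.random() < p_hit_light:  # path reaches the light
            return throughput * emitted
        throughput *= albedo               # energy absorbed at each bounce
    return 0.0                             # path never found the light

# Average many random paths to estimate the light arriving at one pixel.
samples = [trace_path() for _ in range(100_000)]
print(sum(samples) / len(samples))
```

A real renderer does this for millions of pixels with actual scene geometry, which is why a single photoreal frame can occupy a rack of GPUs.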
Iray VR, which will be available in June, is breakthrough technology for photorealistic rendering that lets architects and design professionals simulate their creations in VR with amazing accuracy. It involves lightfields and light probes - each probe produces a 4K render that takes one hour on a box of eight GPUs. For such a task, there is the Quadro M6000. To illustrate the power of this, Huang provided a look at Nvidia's future headquarters building.
Iray VR Lite is the little brother of Iray VR. If a person designs with an Iray-integrated product (like 3ds Max or Maya), it creates a photosphere that is raytraced.
AI
A large part of the address was spent on deep learning. Huang noted that 2015 was an amazing year for AI, with all the new algorithms enabling deep learning. In fact, he called it "the Big Bang of modern AI."
This is where the future got real. A computer network that picks up subtleties and identifies things more accurately than a human. A robot with reinforcement learning that can figure out how to screw on a bottle cap. And with deep learning, robots can learn by themselves.
In fact, Huang says deep learning has changed the old computing model, in which commands are typed into a box and domain experts write programs to solve each specific problem. Now we can use one general algorithm to solve problem after problem; you just need processing power (a minimal sketch of the idea follows). "It's a new computer model," he pointed out. And the model has not gone unnoticed by the industry.
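Here is that idea in miniature, with invented data: the same few lines of gradient-descent training fit two unrelated problems, and nothing changes but the examples fed in.

```python
import numpy as np

def train(X, y, lr=0.02, steps=2000):
    """One general algorithm: gradient descent on mean squared error."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the loss
        w -= lr * grad                      # step downhill
    return w

# Problem 1: recover one hidden linear rule from example data.
X1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 4.0]])
y1 = X1 @ np.array([3.0, 1.5])             # synthetic ground truth
print(train(X1, y1))                        # ~ [3.0, 1.5]

# Problem 2: a completely different rule - same training code.
X2 = np.array([[0.5, 1.0], [1.5, 0.5], [2.5, 2.0], [3.0, 1.0]])
y2 = X2 @ np.array([-1.0, 4.0])
print(train(X2, y2))                        # ~ [-1.0, 4.0]
```

Deep networks replace the linear model with millions of parameters, which is exactly why the approach is bottlenecked on processing power rather than on hand-written domain logic.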
Deep learning will be in every industry and every application, Huang predicted. For this, Nvidia offers the Tesla M40 (for training) and Tesla M4 (for inference). The M4 processes 20 images/sec/Watt; the M40, which is much larger, is built for speed. Both GPUs are aimed at hyperscale data centers.
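For a rough sense of what images/sec/Watt means in practice, multiply by the card's power draw to get raw throughput. The wattage below is a hypothetical figure for illustration, not an Nvidia spec.

```python
# Throughput-per-Watt is an efficiency figure, not a speed figure.
efficiency = 20        # images/sec/Watt, per the keynote
assumed_power_w = 50   # hypothetical power draw, for illustration only
print(efficiency * assumed_power_w, "images/sec")  # -> 1000 images/sec
```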
Presently, researchers are working with unsupervised learning; Facebook AI Research is one example. One demonstration that was very impressive focused on art: more than 20,000 images from the Romanticism period were pushed through a machine. When commanded to draw a landscape, the computer could determine which images were landscapes, and could combine commands - sunset, no clouds, and so forth - for the artwork it then generated.
"We want to train larger networks with more data," Huang said. This requires a supercomputer network. These have to be big. The GPU architecture dedicated to deep learning evolves around the Tesla P100, which contains a three-dimensional chip and three-dimensional transistors. The Tesla P100 GPU is the most advanced hyper scale data center accelerator built to date. The latest addition to the Tesla Accelerated Computing Platform, the P100 enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.
Not long ago, Nvidia decided to go all in on AI, and it needed a GPU architecture dedicated to accelerating deep learning. Huang said getting there involved five "miracles": the Pascal architecture, 16nm FinFET, CoWoS with HBM2, NVLink, and new AI algorithms. Servers built around the Tesla P100 are on the way, with offerings expected in Q1 2017 from IBM, HP, Cray, and Dell.
Nvidia also discussed the DGX-1, the world's first deep learning supercomputer, built to meet the computing demands of AI. The turnkey rack system boasts eight Tesla P100 GPUs, making it the densest computer of its kind: it delivers up to 170 teraFLOPS of half-precision (FP16) peak performance, akin to 250 CPU-based servers, and can process 1.33 billion images per day to train a network. It can do in two hours (with eight Pascal GPUs) what took four Maxwell GPUs 25 hours. The DGX-1 sells for $129,000. AI research partners, such as Massachusetts General Hospital, will be the first to receive it.
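Those benchmark numbers are easier to compare in GPU-hours. A back-of-the-envelope check, using only the figures from the keynote:

```python
# Wall-clock speedup vs. per-GPU speedup, from the keynote's figures.
maxwell_hours, maxwell_gpus = 25, 4
pascal_hours, pascal_gpus = 2, 8

print(maxwell_hours / pascal_hours)        # 12.5x faster wall-clock
per_gpu = (maxwell_hours * maxwell_gpus) / (pascal_hours * pascal_gpus)
print(per_gpu)                             # 6.25x more work per GPU-hour
```

So even after accounting for the doubled GPU count, each Pascal GPU is doing roughly six times the work of its Maxwell predecessor on this task.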
Speaking of deep learning, this is where self-driving cars come in. AI is coming to cars so they can sense, plan, and act, and there is a whole tool set for this (the DriveWorks SDK mentioned above).
So, in a nutshell, Huang's keynote highlighted the Nvidia SDK; Iray VR, rendering for photoreal VR; the Tesla P100, the most advanced GPU; the DGX-1, a deep learning supercomputer in a box; and HD mapping and AI driving. Quite a lot for the first two hours of the GTC conference.