SIGGRAPH 2024: Spotlighting the future of computer graphics
Kendra Ruczak
August 5, 2024

This summer, the world’s experts in computer graphics converged in Denver, Colorado, for the 51st annual SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference. Hosted by ACM (Association for Computing Machinery), the world’s largest educational and scientific computing society, the event showcased the next generation of advancements in the ever-evolving field of computer graphics and the multitude of industries it powers and influences.

Returning to Colorado for the first time since its inception at the University of Colorado Boulder half a century ago, the milestone conference united an international community of nearly 9,000 attendees from 76 countries: researchers, scientists, business professionals, filmmakers, artists, developers, students, and journalists gathered to share resources, catalyze innovation, and address challenges within the field.


Held from July 28th to August 1st at the Colorado Convention Center, the conference featured exhibits, demonstrations, and programming highlighting the application of computer graphics in animation, visual effects, design, gaming, research, education, and beyond. 

Nvidia founder/CEO Jensen Huang and Meta founder/CEO Mark Zuckerberg took the stage for a rare joint public appearance. In their keynote presentation, the pair of tech luminaries discussed the transformative potential of open source artificial intelligence. Other keynote presenters explored a multitude of new computer graphics applications and concepts ranging from the microscopic to the cosmic.


Conference chair Andrés Burbano of the Open University of Catalonia shared, “SIGGRAPH 2024 was a unique experience marked by rich content and vibrant interactions across the arts, science, and technology. The thought-provoking sessions and dynamic dialogues have truly fostered the idea of a desirable future of computer graphics and interactive techniques in the industry and the conference itself. Our commitment to international participation and engaging industry leaders enhances the discussions and cross-cultural exchange taking place at the conference, ultimately making our industry stronger and more diverse. We are proud to have brought together global perspectives, paving the way for the next 50 years of innovation in the field.”

Notable talking points from SIGGRAPH 2024 included:


Ethical Generative AI

As artificial intelligence becomes increasingly ubiquitous across all facets of everyday life, the concept of ethics has begun to dominate the conversation. With AI models requiring massive amounts of content for training, how will artists be credited and compensated for their contributions? 

Global creative platform Shutterstock addressed this concern head-on with the launch of the first ethical generative 3D application programming interface (API). Built on the multimodal Nvidia Edify generative AI architecture, the new API is trained exclusively on curated Shutterstock content, including more than half a million ethically sourced 3D models and over 650 million images with detailed metadata. Shutterstock is committed to implementing safeguards to ensure ethical compliance, legal integrity, and brand safety for all users of its generative AI technology.

“With our new generative 3D capabilities, studios and developers can revolutionize their pipelines, leveraging the only generative 3D service entirely trained on licensed data to ensure fair compensation for the original creators, who also have the option to opt out,” explained Dade Orgeron, VP of Innovation at Shutterstock. “What’s truly groundbreaking is that our generative technology transforms a traditionally lengthy process that usually takes hours into one that can be accomplished in minutes, dramatically reducing barriers to 3D creation and opening up new possibilities for enterprises across multiple industries.”
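
To make the shape of such a service concrete, here is a minimal sketch of how a client might call a generative 3D API over HTTP. The endpoint, request fields, and token below are invented placeholders, not Shutterstock’s published interface.

```python
import requests

# Hypothetical endpoint and request shape for a generative 3D API;
# every name here is illustrative, not Shutterstock's actual interface.
API_URL = "https://api.example.com/v1/generate-3d"

response = requests.post(
    API_URL,
    headers={"Authorization": "Bearer <your-api-token>"},
    json={
        "prompt": "weathered leather armchair, studio lighting",
        "format": "glb",  # a common interchange format for 3D assets
    },
    timeout=300,
)
response.raise_for_status()

# Assume the service returns the generated model in the response body
with open("armchair.glb", "wb") as f:
    f.write(response.content)
```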


Polygon Streaming

In today’s era of widespread remote work and collaboration, it can be difficult to efficiently describe ideas and share assets between teams. Instead of attempting to describe a new concept, it’s often more effective to share a visual representation. 

HTC Viverse’s new standalone polygon streaming service lets teams integrate high-fidelity, interactive 3D models into their workflows across a range of devices, from laptops and PCs to XR headsets. This device-agnostic approach makes next-generation immersive graphics and interactivity easier to access across platforms.

Combining server-side processing with client-side rendering, polygon streaming significantly reduces bandwidth and processing-power requirements. Rather than sending a whole asset up front, the service streams the model’s actual 3D data, transmitting only the polygon data for the sections a user is currently viewing, at a level of detail that corresponds to the viewer’s distance from the object.
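
The core selection logic can be sketched in a few lines. The Python below is an illustrative model of visibility- and distance-based level-of-detail (LOD) selection, with invented chunk and cutoff structures; it is not HTC Viverse’s actual protocol.

```python
import math

def chunks_to_stream(chunks, camera_pos, lod_cutoffs=(5.0, 20.0, 60.0)):
    """Decide which parts of a model to request from the server.

    chunks: list of dicts like {"id": "torso", "center": (x, y, z),
            "visible": True}, where "visible" is the result of
            frustum/occlusion culling on the client.
    Returns (chunk_id, lod) pairs; lod 0 is full detail.
    """
    to_fetch = []
    for chunk in chunks:
        if not chunk["visible"]:
            continue  # off-screen geometry is never transmitted
        d = math.dist(camera_pos, chunk["center"])
        # Each cutoff the viewer is beyond coarsens the mesh one level
        lod = sum(d > cutoff for cutoff in lod_cutoffs)
        to_fetch.append((chunk["id"], lod))
    return to_fetch
```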

Joseph Lin, HTC Viverse general manager, stated, "It's never been easier to develop 3D assets, but sharing them with others has remained a barrier. Polygon streaming opens up new possibilities for everyone, transforming any device so people can enjoy everything from interactive product showcases to collaboration and immersive virtual environments."


Digital Material Capture 

When it comes to realistic computer-generated imagery, the devil really is in the details. The minutiae of materials and textures are what make a digital asset look convincingly real to the human eye.

Back in 2019, HP and Adobe joined forces to form Project Captis, a venture based on the shared belief that digital materials are foundational to the digital creation ecosystem. Now, they hope to revolutionize the way materials are digitized with the announcement of a groundbreaking digital material capture solution.

The new HP Z Captis system is integrated with Adobe Substance 3D and powered by an embedded Nvidia Jetson AGX Xavier system-on-module with HP’s Capture Management software development kit. This allows artists to digitize material swatches or surfaces in mere minutes, capturing details at up to 8K resolution with a polarized and photometric computer vision system. Captured materials can then be integrated seamlessly into 3D workflows for real-time collaboration with enhanced efficiency.
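
HP and Adobe have not published the system’s internals, but photometric capture techniques generally recover surface detail by photographing a material under several known light directions and solving for per-pixel normals. The NumPy sketch below implements classic Lambertian photometric stereo as a general illustration of the idea, not the Z Captis pipeline.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from photos taken
    under known, varying light directions (Lambertian assumption).

    images:     (k, h, w) grayscale intensities from k lighting setups
    light_dirs: (k, 3) unit vectors pointing toward each light
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)              # (k, h*w)
    # Lambert's law: I = L @ (albedo * normal); solve per pixel
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)               # (h*w,)
    normals = g / np.maximum(albedo, 1e-8)           # unit-length normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```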


Generative Scheduling

One of the most prevalent fears surrounding AI is that it will someday take creative jobs away from humans. Fortunately, companies are embracing AI technology that improves the workflows of creative jobs rather than rendering those jobs obsolete.

Autodesk unveiled new advancements in generative AI that will allow artists to focus more on creativity while empowering teams to maximize efficiency. Designed to accelerate and streamline production planning workflows, new AI-powered Flow Generative Scheduling aims to keep creative projects running smoothly. 

Teams can now easily manage complex project variables such as deadlines, availability, and budgets while quickly comparing multiple schedule scenarios, evaluating tradeoffs, and creating resource-optimized and balanced schedules. 
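
Autodesk has not detailed the algorithm behind Flow Generative Scheduling, but the tradeoffs it juggles can be illustrated with a toy scheduler: the sketch below greedily assigns tasks, earliest deadline first, to the least-loaded artist and flags any task that would miss its deadline. All names and structures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours: int      # estimated effort
    deadline: int   # working day by which the task must finish

def draft_schedule(tasks, artists, hours_per_day=8):
    """Greedy toy scheduler: earliest deadline first, least-loaded artist."""
    load = {artist: 0 for artist in artists}   # booked hours per artist
    plan, at_risk = [], []
    for task in sorted(tasks, key=lambda t: t.deadline):
        artist = min(load, key=load.get)       # balance the workload
        load[artist] += task.hours
        finish_day = -(-load[artist] // hours_per_day)  # ceiling division
        plan.append((task.name, artist, finish_day))
        if finish_day > task.deadline:
            at_risk.append(task.name)          # surface the tradeoff
    return plan, at_risk
```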

“Artists’ time is the most valuable resource for our customers. Being able to bring them AI tools to augment their creative process unlocks a host of new possibilities,” explained Eric Bourque, VP of content creation, media, and entertainment at Autodesk. “They can spend more time iterating on their creative ideas, and less time on repetitive tasks.”


Immersive Work Environments

While AI first captured the world’s attention through easily accessible writing and image generation tools, employers can now harness this technology to help improve work environments. 

Nvidia announced that its Metropolis reference workflow for building interactive visual AI agents can now be paired with its new inference microservices (NIM) to assist developers in training physical machines and improving their handling of complex tasks.

The company’s OpenUSD (open-source Universal Scene Description) NIM microservices are now compatible with the world’s first generative AI models for OpenUSD, allowing developers to implement generative AI copilots and agents in USD workflows, supporting deep learning frameworks and broadening the possibilities for 3D worlds.
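
For context, OpenUSD scenes are authored through the open source pxr Python bindings; the minimal example below creates a stage containing a single sphere prim, the kind of scene-description step such copilots and agents would operate on. The file name and prim paths are arbitrary examples.

```python
from pxr import Usd, UsdGeom, Gf

# Author a minimal OpenUSD stage: one transform with a sphere beneath it
stage = Usd.Stage.CreateNew("example_scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")
ball = UsdGeom.Sphere.Define(stage, "/World/Ball")
ball.GetRadiusAttr().Set(2.0)
UsdGeom.XformCommonAPI(ball.GetPrim()).SetTranslate(Gf.Vec3d(0, 2, 0))
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()  # writes the human-readable .usda file
```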

Physical AI, which utilizes advanced simulations and learning methods, allows robots and other automated machines to more effectively perceive and navigate their surroundings. Nvidia has launched a new set of microservices designed to support physical AI capabilities for realistic animation, behavior, speech, translation, and vision. This new technology can be applied across a wide variety of industries to help create intelligent, immersive work environments. 


Humanoid Robotics Development

The widespread presence of robots was once the stuff of science fiction. Now, in the not-so-distant future, we may be interacting with humanoid robots in our day-to-day lives.

Nvidia announced that it will provide the world’s leading robot manufacturers and AI model developers with a suite of services, models, and platforms to develop, train, and build the next generation of humanoid robotics. 

Aiming to accelerate global robotics development, the company now offers NIM microservices and frameworks for simulation and learning, the OSMO orchestration service for running multi-stage robotics workloads, and an AI- and simulation-enabled teleoperation workflow that lets developers train robots using minimal human demonstration data.

“The next wave of AI is robotics and one of the most exciting developments is humanoid robots,” announced Nvidia founder and CEO Jensen Huang. “We’re advancing the entire Nvidia robotics stack, opening access for worldwide humanoid developers and companies to use the platforms, acceleration libraries and AI models best suited for their needs.”

More about SIGGRAPH 2024: s2024.siggraph.org