Research Library
2018

Zero to USD in 80 Days

Talk, SIGGRAPH 2018

Productions at DreamWorks starting with How To Train Your Dragon 3 will use Universal Scene Description (USD) [Pixar Animation Studios 2016] as the primary asset and shot representation across the production pipeline, from modeling to compositing. In this talk, we discuss the motivation for adopting USD at DreamWorks and our strategies for adoption on a highly constrained timeline – 80 working days from the initial discussion to having the first production-ready USD scenes. We review our methodology for organizing and planning an extensive USD integration, present details of our implementation, and discuss the successes and challenges encountered in the adoption process.



Synthesising Panoramas for Non-Planar Displays: A Camera Array Workflow

Talk, SIGGRAPH 2018

In this talk we present a production workflow to generate panoramic high-resolution images for location-based entertainment and other semi-immersive visualization environments. Typically the display screens at these installations are an integral part of the surrounding architecture and have arbitrary non-planar surfaces. Our workflow is designed to minimize the distortions caused by the screen shape and optimize rendering of the high-resolution images while leveraging our existing feature film pipeline, which uses a standard perspective linear-projection camera model.
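
As a rough sketch of the camera-array idea (hypothetical code, not the production workflow), the panorama can be covered by a ring of standard perspective cameras whose frusta tile 360 degrees, with each frustum widened slightly so adjacent renders overlap for blending:

```python
def camera_array_yaws(num_cameras, overlap_deg=5.0):
    """Yaw angle (degrees) for each camera in a ring covering a full
    360-degree panorama, plus the horizontal FOV each camera needs.
    The FOV is widened by overlap_deg so adjacent renders overlap,
    giving blend regions across tile seams."""
    span = 360.0 / num_cameras      # angular slice owned by each camera
    fov = span + overlap_deg        # widen the frustum for blending
    yaws = [i * span for i in range(num_cameras)]
    return yaws, fov

# Eight cameras, each rendering a 45-degree slice with 5 degrees of overlap.
yaws, fov = camera_array_yaws(8)
```

Each (yaw, FOV) pair would then configure one standard linear-projection camera in the existing film pipeline, and the resulting tiles are warped and blended onto the non-planar screen.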




LibEE 2 – Rich Authoring and Fast Evaluation

Paper, Digipro 2018

The Premo animation platform [Gong et al. 2014] developed by DreamWorks utilized LibEE v1 [Watt et al. 2012] for high-performance graph evaluation. The animator experience required fast evaluation, but did not require fast authoring or editing of the graph; LibEE v1, therefore, was never designed to support efficient edits. This paper presents an overview of how we developed LibEE v2 to enable fast authoring of character rigs while still maintaining or improving upon the speed of evaluation. Overall, LibEE v2 achieves a 100x speedup of authoring operations compared with LibEE v1.
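
The authoring-versus-evaluation trade-off can be illustrated with a toy dependency graph that caches node values and propagates dirty flags, so an edit invalidates only downstream nodes (an illustrative sketch only; LibEE's actual architecture is far more sophisticated):

```python
class Graph:
    """Toy dependency graph: nodes hold a compute function; an edit marks
    only the node and its downstream dependents dirty, so re-evaluation
    recomputes just what changed."""
    def __init__(self):
        self.value = {}   # node name -> cached value
        self.fn = {}      # node name -> (compute function, input names)
        self.deps = {}    # node name -> set of downstream dependents
        self.dirty = set()

    def add(self, name, func, inputs=()):
        self.fn[name] = (func, tuple(inputs))
        self.deps.setdefault(name, set())
        for i in inputs:
            self.deps.setdefault(i, set()).add(name)
        self._mark_dirty(name)

    def _mark_dirty(self, name):
        if name in self.dirty:
            return
        self.dirty.add(name)
        for d in self.deps.get(name, ()):
            self._mark_dirty(d)

    def set_input(self, name, value):
        # An authoring edit: replace the node's function, dirty downstream.
        self.fn[name] = ((lambda v=value: v), ())
        self._mark_dirty(name)

    def eval(self, name):
        if name in self.dirty:
            func, inputs = self.fn[name]
            self.value[name] = func(*(self.eval(i) for i in inputs))
            self.dirty.discard(name)
        return self.value[name]

g = Graph()
g.add("a", lambda: 2)
g.add("b", lambda a: a * 3, inputs=["a"])
first = g.eval("b")
g.set_input("a", 5)       # edit only touches "a" and its dependents
second = g.eval("b")
```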



Fast and Deep Deformation Approximations

ACM Transactions on Graphics 2018

Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character’s appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications.
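
As a hedged illustration of the approach (the layer sizes, weights, and function names here are invented, not the paper's), a small neural network can predict nonlinear per-vertex offsets from the pose parameters and add them to a cheap linearly skinned base mesh:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 pose parameters driving 100 vertices (300 offsets).
# In practice the weights would be trained offline against rig evaluations.
POSE_DIM, HIDDEN, NUM_VERTS = 4, 32, 100
W1 = rng.normal(0.0, 0.1, (POSE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, NUM_VERTS * 3))
b2 = np.zeros(NUM_VERTS * 3)

def approx_deformation(pose, skinned_verts):
    """Add an MLP-predicted nonlinear offset to linearly skinned vertices.
    The network stands in for the expensive rig corrections (bulges,
    wrinkles) that are too slow to evaluate procedurally at runtime."""
    h = np.tanh(pose @ W1 + b1)
    offsets = (h @ W2 + b2).reshape(NUM_VERTS, 3)
    return skinned_verts + offsets

pose = rng.normal(size=POSE_DIM)
base = rng.normal(size=(NUM_VERTS, 3))   # stand-in for skinned vertices
deformed = approx_deformation(pose, base)
```

The forward pass is a few dense matrix multiplies, which is why such an approximation can run within a real-time budget where the full rig cannot.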



Firefly Detection with Half Buffers

Paper, Digipro 2018

Fireflies, bright pixels seemingly out of place compared to neighboring pixels, are a common artifact in Monte Carlo ray traced images. They arise from low-probability events, and would be resolved in the limit as more samples are taken. However, these statistical anomalies are often so far out of the expected range that the time for them to converge, even barring numerical instabilities, is prohibitive. Aside from the general problem of fireflies marring a rendered image, their difference in color and variance values can cause problems for denoising solutions. For example, the distance calculation for non-local means filtering [Buades et al. 2005] presented in Rousselle et al. [2012] is not robust under extreme differences in variance.

This paper addresses removing these fireflies to both improve the rendered image on its own and make the available data more uniform for denoising solutions. It assumes a denoising framework that makes use of half buffers and pixel variance, such as that set forth in Rousselle et al. [2012]. The variance provides better data than the color channels for determining which pixels contain fireflies, while the half buffers provide some assurance that a detected firefly is not an expected highlight in the rendered image.
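
A minimal sketch of the half-buffer disagreement test (illustrative only; the threshold and details differ from the paper's method):

```python
import numpy as np

def detect_fireflies(half_a, half_b, rel_threshold=1.5):
    """Flag pixels as firefly candidates when the two half buffers
    disagree by a large relative factor: a true highlight appears in
    both halves, while a low-probability spike usually lands in only
    one of them. The ratio |a-b| / mean(a,b) approaches 2 when one
    half is dark and the other holds a spike."""
    mean = 0.5 * (half_a + half_b)
    diff = np.abs(half_a - half_b)
    # Epsilon avoids division-by-zero behavior in fully dark pixels.
    return diff > rel_threshold * (mean + 1e-6)

half_a = np.full((5, 5), 1.0)
half_b = np.full((5, 5), 1.0)
half_a[2, 3] = 100.0          # one-sample spike lands in one half only
mask = detect_fireflies(half_a, half_b)
```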


2017

Vectorized Production Path Tracing

Talk, High Performance Graphics 2017

This paper presents MoonRay, a high performance production rendering architecture using Monte Carlo path tracing developed at DreamWorks. MoonRay is the first production path tracer, to our knowledge, designed to fully leverage Single Instruction/Multiple Data (SIMD) vector units throughout. To achieve high SIMD efficiency, we employ Embree for tracing rays and vectorize the remaining compute-intensive components of the renderer: the integrator, the shading system and shaders, and the texturing engine. Queuing is used to help keep all vector lanes full and improve data coherency. We use the ISPC programming language to achieve improved performance across SSE, AVX/AVX2 and AVX512 instruction sets. Our system includes two functionally equivalent uni-directional CPU path tracing implementations: a C++ scalar depth-first version and an ISPC vectorized breadth-first wavefront version. Using side-by-side performance comparisons on complex production scenes and assets, we show our vectorized architecture, running on AVX2, delivers a 1.3× to 2.3× speed-up in overall render time, and up to 3×, 6×, and 4× speed-ups within the integration, shading, and texturing components, respectively.
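
The wavefront idea can be caricatured in a few lines: rays are queued and flushed through a shading kernel in fixed-size batches, so the vectorized kernel always operates on many rays at once (a toy sketch; MoonRay's actual queues and ISPC kernels are far more involved):

```python
import numpy as np

def shade_batch(dirs):
    """Stand-in vectorized shading kernel: operates on a whole batch of
    ray directions at once, the way a SIMD kernel fills vector lanes.
    Here, a toy Lambert term against the +Z axis."""
    return np.maximum(dirs[:, 2], 0.0)

def wavefront_render(rays, batch_size=8):
    """Breadth-first wavefront loop: rays are flushed through the kernel
    in fixed-size batches instead of being traced one ray at a time
    depth-first."""
    out = np.empty(len(rays))
    for start in range(0, len(rays), batch_size):
        batch = rays[start:start + batch_size]
        out[start:start + len(batch)] = shade_batch(batch)
    return out

rng = np.random.default_rng(1)
rays = rng.normal(size=(20, 3))
rays /= np.linalg.norm(rays, axis=1, keepdims=True)
radiance = wavefront_render(rays)
```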



Multi-species simulation of porous sand and water mixtures

Paper, SIGGRAPH 2017

We present a multi-species model for the simulation of gravity-driven landslides and debris flows with porous sand and water interactions. We use continuum mixture theory to describe the individual phases, in which each species individually obeys conservation of mass and momentum, and the species are coupled through a momentum exchange term. Water is modeled as a weakly compressible fluid and sand is modeled with an elastoplastic law whose cohesion varies with water saturation. We use a two-grid Material Point Method to discretize the governing equations. The momentum exchange term in the mixture theory is relatively stiff, and we use semi-implicit time stepping to avoid the associated small time steps. Our semi-implicit treatment is explicit in plasticity and preserves symmetry of force linearizations. We develop a novel regularization of the elastic part of the sand constitutive model that better mimics plasticity during the implicit solve, to prevent numerical cohesion artifacts that would otherwise occur. Lastly, we develop an improved return mapping for sand plasticity that prevents volume gain artifacts in the traditional Drucker-Prager model.
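
As a simplified, hypothetical sketch of a Drucker-Prager return mapping in a pressure-shear (p, q) representation (cohesionless for clarity; the paper's improved mapping additionally addresses the volume-gain problem):

```python
def drucker_prager_return(p, q, friction_alpha):
    """Project a trial stress state back onto the Drucker-Prager cone
    q <= alpha * p, with p >= 0 denoting compression and q >= 0 the
    shear-stress magnitude. A toy cohesionless return mapping."""
    if p < 0.0:
        # Tension: cohesionless sand offers no resistance, so project
        # to the cone apex (stress-free state).
        return 0.0, 0.0
    if q <= friction_alpha * p:
        return p, q                    # inside the cone: purely elastic
    return p, friction_alpha * p       # over yield: clamp shear to the cone
```

The naive projection above keeps the pressure fixed when clamping shear; handling the apex region and avoiding unphysical volume changes is exactly where the paper's improved mapping departs from this toy version.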



Lighting Grid Hierarchy for Self-illuminating Explosions

ACM Transactions on Graphics 2017

Rendering explosions with self-illumination is a challenging problem. Explosions contain animated volumetric light sources immersed in animated smoke that cast volumetric shadows, which play an essential role and are expensive to compute. We propose an efficient solution that redefines this problem as rendering with many animated lights by converting the volumetric lighting data into a large number of point lights. Focusing on temporal coherency to avoid flickering in animations, we introduce a lighting grid hierarchy for approximating the volumetric illumination at different resolutions. Using this structure, we can efficiently approximate the lighting at any point inside or outside of the explosion volume as a mixture of lighting contributions from all levels of the hierarchy.
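
A toy sketch of the hierarchy construction (hypothetical code): the emissive voxel grid is downsampled into coarser point-light grids, conserving total emitted power at every level, which is the key invariant that lets coarse levels stand in for distant lighting:

```python
import numpy as np

def light_hierarchy(emission, levels=3):
    """Build a pyramid of point-light grids from an emissive voxel grid.
    Each coarser level sums disjoint 2x2x2 blocks of the finer one, so
    total emitted power is identical at every level. Assumes a
    power-of-two cubic resolution for simplicity."""
    pyramid = [emission]
    for _ in range(levels - 1):
        e = pyramid[-1]
        e = e.reshape(e.shape[0] // 2, 2,
                      e.shape[1] // 2, 2,
                      e.shape[2] // 2, 2)
        pyramid.append(e.sum(axis=(1, 3, 5)))  # merge 2x2x2 blocks
    return pyramid

rng = np.random.default_rng(2)
emission = rng.random((8, 8, 8))        # stand-in emissive voxel grid
pyramid = light_hierarchy(emission)
```

A shading point would then draw most of its contribution from fine levels nearby and coarse levels far away, blending across levels for smoothness.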

