A Few SIGGRAPH 2019 Highlights


It’s been a while since SIGGRAPH 2019, but I’d like to share a few of the highlights:

Interactive Hand Pose Estimation using a Stretch-Sensing Soft Glove
https://igl.ethz.ch/projects/stretch-glove/

Using a new fabricated material of the authors’ design, this stretchy, skin-tight glove can track hand poses without being tethered to a machine. It requires no external cameras, since the tracking happens entirely in the material’s sensors. I’m excited to see the product develop, as it’s a promising interface for gesture-based controls and expression. I’ll keep up to date on the project; as it turns out, one of the principal collaborators works in my building.

Rodent: Generating Renderers without Writing a Generator
https://graphics.cg.uni-saarland.de/publications/perard-gayot-2019-siggraph-rodent.html

Rodent is a framework/language designed to generate optimized CPU and GPU code for renderers. Based on knowledge of the scene to be rendered, along with hand-coded annotations from the programmer to signal intent, it can, for example, optimize out conditional branches, inline more code than would usually be possible, and remove variability from the program to maximize how much use the program gets from the hardware. In a sense, Rodent helps you specialize what might initially be generic code into something perfectly tuned to the scene that needs to be rendered. I find this paper an interesting reminder that despite the need for higher-level languages, we still need to think about the specifics of the hardware and the use case, especially if we want optimal performance. Rodent is interesting in that it lets you write fairly high-level code, and yet, to get the most benefit, the programmer needs to know what to annotate and how to tell the compiler what is important in the rendering code. I wonder how we can strike a balance between general/high-level and specific/low-level frameworks in other domains. In other words, how can we enable customizability and programmability while still keeping ease of use and simplicity in mind?
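To make the idea of specialization concrete, here is a minimal, purely illustrative sketch in Python (not Rodent’s actual API or language): a generic shading routine branches on material type for every call, but once we know the scene contains only one material, we can hand back a branch-free variant, in the spirit of what a partial evaluator folds away automatically.

```python
def shade_generic(material, n_dot_l, albedo):
    # Generic renderer path: branches on the material type at every call.
    if material == "diffuse":
        return albedo * max(n_dot_l, 0.0)
    elif material == "mirror":
        return albedo  # a perfect mirror ignores the cosine term here
    else:
        raise ValueError(f"unknown material: {material}")

def specialize_shader(scene_materials):
    # If the scene is known to use only diffuse materials, return a
    # branch-free shader; otherwise fall back to the generic path.
    # (Hypothetical helper names; Rodent does this via compilation,
    # not runtime dispatch.)
    if scene_materials == {"diffuse"}:
        return lambda material, n_dot_l, albedo: albedo * max(n_dot_l, 0.0)
    return shade_generic

# Specialize once, up front, using scene knowledge.
shader = specialize_shader({"diffuse"})
```

The specialized `shader` computes the same result as the generic one for this scene, just without the per-call branch; Rodent’s contribution is doing this kind of folding at compile time, across a whole renderer.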

Remote Spatial Programming and Collaboration Using a Real-Time Volumetric Capture Space, PTC Reality Lab

This system allows for remote collaboration in which a remote environment is volumetrically captured in real time. One user can view it from a phone, while another can view the entire space from a desktop screen. Users program real industrial machines with a logical/visual programming language. I like that this project merges programming with remote, tangible objects in the real world, while also enabling collaboration. “AR programming” is an appealing area to explore because it allows for more than textual programming. How can we make more things connected and programmable beyond the IoT of today?

Mary and the Monster

My ticket to see FRL’s “Mary and the Monster Part 1”
A video snippet of Mary and the Monster shown during the prelude to the Electronic Theater

Animation and Production Talks

I enjoyed attending most of the animation/production talks by companies such as Disney/Pixar, Dreamworks, and Marvel. It’s simply fascinating to learn about all of the technical hurdles and solutions devised during the making of their films. Videos and photography were not allowed in most cases, but in any case, one of the most interesting sessions covered the making of Spider-Man: Into the Spider-Verse, which has a unique animation style and philosophy: stylized, no motion blur, animation on 2s. This creates a beautiful aesthetic with so-called “crunchy” character movement. I got to see a scrapped design for Gwen, as well as early animation tests that were used to help develop the characters.

From a poster

A Gwen Stacy figure

Dreamworks also showed the making of How to Train Your Dragon: The Hidden World. A custom waterfall-painting tool was developed to support the realization of the hidden world, which features many waterfalls. These waterfalls, though not 100% physically accurate simulations, did the job: they could be painted in real time, and if objects in the scene (e.g., rocks) blocked a waterfall, the water would part around them. I can definitely see such a real-time editor making artists’ lives easier, since it enables painting and visual placement as opposed to manual number crunching. The users could easily get the look and feel of the environments right.
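The details of Dreamworks’ tool weren’t shared, but the “parting around rocks” behavior can be illustrated with a toy sketch (entirely my own assumption, not their implementation): treat the falling water as points, and push any point that would land inside a circular rock out to the rock’s boundary.

```python
import math

def part_around_rock(particles, cx, cy, r):
    # Toy obstacle avoidance: any water particle inside the circular
    # rock at (cx, cy) with radius r is pushed radially to the nearest
    # point on the rock's boundary; everything else is left alone.
    out = []
    for x, y in particles:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if d < r:
            if d == 0.0:
                dx, dy, d = r, 0.0, r  # dead-center: pick an arbitrary push direction
            s = r / d
            x, y = cx + dx * s, cy + dy * s
        out.append((x, y))
    return out

# Particles near the rock get deflected to its surface; distant ones pass through.
deflected = part_around_rock([(0.1, 0.0), (5.0, 5.0)], 0.0, 0.0, 1.0)
```

A production tool would of course advect a full flow field rather than snap points, but the core idea is the same: geometry in the scene reshapes the painted water automatically, so the artist never touches the numbers.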

Marvel also presented a making-of session for Avengers: Endgame. In the display cases outside were various life-size Captain America, Captain Marvel, and Thor costumes, along with the characters’ respective weapons. Here is a life-sized Thanos glove:

–and a not-so-life-sized Woody

There are too many other things to show, but I’ll end this blog post with an image I took while visiting the Broad Museum for the first time–the weekend before SIGGRAPH:


inside Infinity Mirrored Room—The Souls of Millions of Light Years Away

What a beautiful scene this was–an interstellar plane on the edge of a galaxy, stretching towards infinity.

I look forward to what comes next.

Karl Rosenberg