Ups and Downs of Reactive + Procedural Animation
I’m going to take a break from posting about immersive media this week and talk about procedural animation. So, I’ve been making these audio-reactive projections for the rooms the Cave show will take place in at SIGGRAPH, as well as these LED floor visuals for a music festival that’s happening in a week.
The software I’ve been using is called TouchDesigner, “a node based visual programming language for real time interactive multimedia content” (Wikipedia), and I’ve been using it to learn GLSL. I’ve always been a very visual learner, and the node-based layout, along with the real-time feedback of the values at each node in a block of code, has made this journey an incredibly fun and intuitive one.
There was a while where I avoided diving too deeply into GLSL, because it just looks so damn confusing. Even once you understand what everything does, it’s still hard to picture what a piece of code will look like when it gets executed. Unlike most programming languages, which run sequentially, GLSL executes on every pixel of the screen in parallel. So in a sense, each instruction you give it is being sent to every single pixel on the screen at the same time. Having TouchDesigner visualize the values or images being passed through in real time really helped me understand what my code was doing. But this isn’t a post about GLSL, so I won’t dive any deeper into this rabbit hole.
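Still, just to make that parallel-execution point concrete, here’s roughly the smallest fragment shader I can write. This is a generic GLSL sketch, not anything from the actual project, and the uniform names (uResolution, uTime) are placeholders for whatever the host program, TouchDesigner or otherwise, actually feeds in. The thing to notice is that main() isn’t called once; it runs once for every pixel on the screen, all at the same time.

```glsl
#version 330

// Placeholder uniforms; the host app is assumed to bind these each frame.
uniform vec2  uResolution;  // output size in pixels
uniform float uTime;        // seconds since start

out vec4 fragColor;

void main() {
    // Every pixel runs this exact code in parallel; gl_FragCoord is the
    // only thing that differs, so each pixel computes its own color.
    vec2 uv = gl_FragCoord.xy / uResolution;

    // The same instructions, evaluated simultaneously across the whole
    // screen, produce a gradient that pulses over time.
    float pulse = 0.5 + 0.5 * sin(uTime);
    fragColor = vec4(uv.x * pulse, uv.y, 1.0 - uv.x, 1.0);
}
```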
Designing reactive procedural animation is an incredibly fun and yet frustrating experience. By setting up parameters for how pixels should behave and letting the system run off an external input (i.e. audio), I was often surprised by motions I hadn’t planned for or even thought of. It’s an amazing feeling when you’re testing the visuals and you see a pattern or some motion so beautiful, so complex and intricate, that you could never have imagined it, let alone keyframed it. But you may never be able to create it again, especially when you’re using a microphone as the input, since it can be near impossible to recreate those exact same patterns of inputs. I can’t describe the number of times I’ve sneezed or dropped something and my room has lit up with a pattern created from the sound, only for it to disappear within seconds. Then I’ll spend the next few hours trying to figure out what caused that type of motion so I can make it happen more often, and never get it exactly right.

Trying to “debug” a procedural piece of animation can be incredibly hard. You’ll see something that you really like (or really don’t), and figuring out which part of your animation system is driving it is like looking for a needle in a haystack while it’s being churned through one of those hay bundling machines.
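For a rough sense of what that “parameters plus an external input” setup can look like, here’s a hedged sketch (again with made-up names, not the project code): assume something on the CPU measures the microphone’s loudness each frame and hands it to the shader as a uniform, uAudioLevel. A sneeze spikes the level, the ring flares outward for a few frames, and then it collapses, which is exactly the kind of one-off moment I’m describing.

```glsl
#version 330

// Assumed inputs: the host measures mic loudness on the CPU (e.g. a
// smoothed RMS, normalized to 0..1) and binds it here every frame.
uniform vec2  uResolution;
uniform float uAudioLevel;

out vec4 fragColor;

void main() {
    // Center the coordinates so the ring sits in the middle of the screen.
    vec2 uv = (gl_FragCoord.xy - 0.5 * uResolution) / uResolution.y;

    // The audio level directly sets the ring's radius, so any spike in
    // sound instantly pushes the ring outward.
    float ring = abs(length(uv) - (0.2 + 0.4 * uAudioLevel));
    float glow = 0.01 / (ring + 0.01);  // thin bright ring with soft falloff
    fragColor = vec4(vec3(glow), 1.0);
}
```

In a real patch the level would usually be smoothed or given some momentum before it reaches the shader, and that’s exactly where the unplanned motion starts creeping in.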
At the same time, systems that let you feed a camera or audio in to drive animations are a great way to discover and explore new motions and imagery. Instead of animating the usual way, previsualizing a motion in your mind and then keyframing it, you start by asking what would look interesting if it moved. For example, take a triangle: what if, on each beat, it triggered a smaller triangle that scales down on every frame afterwards until it disappears? You end up with this pulsating triangle light that creates an “infinite” hallway effect, where each beat sends a triangular light pulse down the void (a rough shader sketch of this follows below).
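Here’s that triangle pulse sketched in plain GLSL, not the actual project code. It assumes beat detection happens on the CPU, and that the host passes in how many seconds ago each of the last few beats landed (uBeatAges is a made-up uniform). Each beat spawns a triangle outline that shrinks and fades as it ages, and a few of them alive at once stack into that receding-hallway look. The triangle distance function is Inigo Quilez’s well-known equilateral-triangle SDF.

```glsl
#version 330

// Assumed inputs: the host detects beats on the CPU and reports the age
// (in seconds) of each of the last 4 beats. Names are placeholders.
uniform vec2  uResolution;
uniform float uBeatAges[4];

out vec4 fragColor;

// Inigo Quilez's signed distance to an equilateral triangle of "radius" r.
float sdTriangle(vec2 p, float r) {
    const float k = sqrt(3.0);
    p.x = abs(p.x) - r;
    p.y = p.y + r / k;
    if (p.x + k * p.y > 0.0) p = vec2(p.x - k * p.y, -k * p.x - p.y) / 2.0;
    p.x -= clamp(p.x, -2.0 * r, 0.0);
    return -length(p) * sign(p.y);
}

void main() {
    vec2 uv = (gl_FragCoord.xy - 0.5 * uResolution) / uResolution.y;

    float glow = 0.0;
    for (int i = 0; i < 4; i++) {
        float age = uBeatAges[i];

        // Each triangle starts at a fixed size on its beat and shrinks
        // every frame afterwards; once it hits zero it's gone.
        float scale = 0.4 - 0.3 * age;
        if (scale <= 0.0) continue;

        // abs() keeps only the outline; older pulses are dimmer, which
        // reads as the pulse receding down the hallway.
        float d = abs(sdTriangle(uv, scale));
        glow += (0.004 / (d + 0.004)) * exp(-2.0 * age);
    }
    fragColor = vec4(vec3(glow), 1.0);
}
```

Shrinking instead of translating is what sells the tunnel: with a fixed camera, a triangle getting smaller and dimmer is basically indistinguishable from one flying away from you.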
Basically, making art with the intention of letting it surprise you.
(Will update with photos soon)