Decoding Noise
What sounds go into silence? Have you ever noticed that there are almost never moments of actual “silence” when you’re watching a film or TV show? (In fact, when Star Wars: The Last Jedi included a silent scene lasting about ten seconds, some movie theaters put up warning signs because so many people complained that the audio system was “broken”.)
When it comes to sound design in film, the ambiance layer of a mix encompasses all the sounds that establish a scene’s setting while staying out of the way of the dialogue and key sound effects that drive the story forward. So what sounds should go into the mix? First, you need enough individual sound effects to convey where the scene is supposed to take place. Is it a location in a city? What sorts of actions are happening? What are the signature sounds attached to that location? For example, if a scene is set inside a coffee shop on a busy corner in New York, you might start with some stereotypical coffee-shop sounds: glasses clinking, a low murmur of indistinguishable chatter, the door opening occasionally. What about outside the coffee shop? Cars honking, maybe some rain (if it’s a rainy day… but what kind of rain? Is it pouring? Are metal sheets or AC units breaking the fall of the rain? Different scenarios dictate the type of “rain” sound that gets used), and of course, random individuals yelling… (because it’s NY).

Now there are two lists of sounds, interior and exterior. If everything played at the same time at the same loudness, it wouldn’t feel right. So the sound mixer works with the sound designer to piece all the audio effects together into a cohesive soundscape. You pick and choose when sounds should enter and leave, where they come from, and the balance between the different files, to craft a soundscape that gives the audience enough information about the setting without becoming so overwhelming that the main dialogue and SFX get muddled.
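The layering and balancing described above can be sketched in a few lines of code. This is a minimal illustration, not a real mixing workflow: the `tone` function is a stand-in for recorded sound files, and the gain values are arbitrary choices invented for this example.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def tone(freq, dur, amp=1.0):
    """Placeholder for a recorded effect: a plain sine burst."""
    t = np.arange(int(SR * dur)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

# Hypothetical stand-ins for the interior/exterior lists
chatter = tone(300, 1.0)    # low murmur inside the shop
glasses = tone(2000, 1.0)   # clinking glasses
traffic = tone(120, 1.0)    # rumble from the street outside

# Per-layer gains: interior sounds sit forward, exterior sits back
mix = 0.5 * chatter + 0.3 * glasses + 0.15 * traffic

# Keep the summed mix from clipping
peak = np.max(np.abs(mix))
if peak > 1.0:
    mix = mix / peak
```

The gains are the interesting part: the mix is not the sounds themselves but the choices about how loudly each one is allowed to speak.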
On top of conveying the setting, the soundscape also carries important information about the emotional state of the scene. Is it supposed to be an intimate scene between a couple? Or is this an early-morning rush-hour montage? Is the main character filled with anxiety and rushing around? If so, maybe the ambiance layer has its higher frequencies ducked so the environment sounds a bit muffled and underwater, while the car horns are left as-is so they arrive in sudden, intrusive bursts. The sounds you pick and the way they’re mixed can be used to tell the audience what emotion they should be feeling.
To craft a scene’s soundscape, you also need to consider what each individual sound is like. Was it recorded close up or far away? How large was the room it was originally recorded in? How much echo or reverb is baked into the file? All of this needs to be noted because the sound designer/mixer has to make sure every file in the mix shares the same qualities of space (i.e., echo, reverb, and filtering): essentially, all the parts of a sound that convey information about the space it lives in. Three main effects built into most audio editing programs for altering a sound’s sense of space are volume (to convey distance), a parametric filter (which lets you boost or duck certain frequencies to create effects like hearing a sound from behind a door), and echo/reverb (which lets you tweak how large the space feels and what its walls and floor are made of). In traditional film, you mix to create the illusion of 3D space while the audience stares at a 2D screen. In VR, where the audience experiences the content in actual 3D space, the approach to sound design also has to adapt.
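Each of those three effects can be approximated with a few lines of DSP. These are deliberately crude sketches of the underlying ideas, not what a DAW’s plugins actually do; the function names and parameter values here are invented for illustration.

```python
import numpy as np

SR = 44100  # sample rate (Hz)

def lowpass(x, alpha=0.05):
    """One-pole low-pass: higher frequencies ducked,
    like a sound heard from behind a door."""
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def with_distance(x, distance_m):
    """Crude inverse-distance gain to push a sound farther away."""
    return x / max(distance_m, 1.0)

def simple_reverb(x, delay_s=0.05, decay=0.4, taps=4):
    """A few decaying echoes: the delay suggests room size,
    the decay suggests how absorbent the walls are."""
    d = int(SR * delay_s)
    out = np.concatenate([np.copy(x).astype(float), np.zeros(d * taps)])
    for k in range(1, taps + 1):
        out[k * d : k * d + len(x)] += (decay ** k) * x
    return out
```

Matching the “qualities of space” across files then amounts to running each one through the same chain with the same settings, so a dry studio recording and a roomy field recording can sit in the same scene.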
In 3D VR space, what you hear depends on where you are and which direction you’re looking. Since we exist in a 3D world, we’re highly efficient at pinpointing where sounds come from and what they are. So some questions to ask are: how can we use the soundscape to direct the audience’s attention? And how can we create a soundscape rich enough that one can explore it with their ears? (Especially now, with the limited FOV of current headsets, how can the soundscape be crafted to enrich the environment and keep the audience in a state of suspended disbelief?)
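The core of that direction-dependent mixing can be sketched with a pan law: given a sound source’s direction and the listener’s head yaw, a constant-power curve splits the signal between the left and right ears. Real VR audio engines use HRTFs and far more sophisticated spatialization; this sketch, with hypothetical function and parameter names, only shows how the mix shifts as the listener turns their head.

```python
import math

def pan_gains(source_azimuth_deg, listener_yaw_deg):
    """Constant-power stereo pan from the angle between
    the source and the listener's gaze direction."""
    rel = math.radians(source_azimuth_deg - listener_yaw_deg)
    pan = math.sin(rel)                # -1 = hard left, +1 = hard right
    theta = (pan + 1) * math.pi / 4   # 0 -> full left, pi/2 -> full right
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)
```

A source straight ahead lands equally in both ears; a source 90° to the right goes almost entirely to the right ear; and as the listener turns toward it, the gains converge again. That convergence is exactly the cue a sound designer can use to pull the audience’s gaze.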
With our iPhones and mobile wifi everywhere, it’s easy to walk around in our own little isolated bubbles blasting music. But have you ever stopped what you were doing, closed your eyes, and tried to explore your surroundings with your ears? How much information about a space can you get just by carefully listening to the noises of a city? What are you able to visualize in your head? It’s a fun little exercise you can do anywhere, anytime, to increase your sensitivity to the world of sounds constantly around us, and to slowly amass a library of soundscapes to draw upon when crafting one in a VR environment.
Image from Star Wars: The Last Jedi