
Sound Design For Video Games

Video games are becoming incredibly immersive. Rapid advances in technology make it possible for engineers to produce games with more detailed graphics and more sophisticated interactive abilities for the player. Aside from stunning visuals and game mechanics, in-game audio is a major part of creating an immersive environment that keeps players engaged. Audio for games is important, and corny or underwhelming sounds can make a player's gaming experience far less enjoyable. Just like audio engineers for film and TV, game sound designers spend many hours getting their sounds to sync with players' actions, respond to the sound source's environment, and sound exciting and realistic.

Creating In-Game Sounds

Unlike music producers who generally deal in musical sounds like drums and melodic instruments, video game sound designers deal with realistic sound samples or foley like doors creaking open, footsteps, punches, buildings crumbling, etc. Having an extensive sample library of realistic one-shots and ambient sounds can be very helpful for sound designers working to create an immersive virtual environment. Sure, you can take a basic sound from your sample library, add it to your game and call it a day, but that’s what makes a video game boring! Players want the sound of a game to be just as exciting as the gameplay. Great sound designers will take basic action sounds like punches and kicks and will layer, chop, filter, and distort them to create a sound that is unique and exciting. Of course, if a designer doesn’t have a sound they’re looking for in their sample library, foley recording is always an option. You can learn more about the art of foley in our previous blog post, Sound Design, Foley and FX in Horror Movies. When manipulating sounds like these, it is important for a sound designer to consider the context and setting of the game they’re working on. It would be strange to the player if a modern human character threw a punch that sounded as if it had been thrown by a futuristic robot.

Just like music producers, sound designers often incorporate synthesis concepts and use envelopes like ADSR while editing sounds. ADSR stands for Attack, Decay, Sustain, and Release.

As the diagram above demonstrates, attack, decay, sustain, and release refer to specific stages in the life of a sound. Attack is the time from the initial start of a sound to the sound's highest amplitude. Decay is the fall from that peak to its plateau, or 'normal volume'. Sustain is the length of time the sound holds that normal volume. Release is the time it takes for the sound to die off and return to an amplitude of zero.
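To make the four stages concrete, here is a minimal sketch (in Python, with a hypothetical `adsr_envelope` helper; all names and the simple linear ramps are assumptions for illustration) of how attack, decay, sustain, and release combine into an amplitude envelope:

```python
def adsr_envelope(attack, decay, sustain_level, sustain_time, release,
                  sample_rate=100):
    """Build a simple linear ADSR amplitude envelope as a list of gain values.

    Times are in seconds; sustain_level is a fraction of peak amplitude (0-1).
    """
    env = []
    # Attack: ramp from silence up to the peak amplitude (1.0)
    n = max(1, int(attack * sample_rate))
    env += [i / n for i in range(n)]
    # Decay: fall from the peak down to the sustain level
    n = max(1, int(decay * sample_rate))
    env += [1.0 - (1.0 - sustain_level) * (i / n) for i in range(n)]
    # Sustain: hold the 'normal volume'
    env += [sustain_level] * int(sustain_time * sample_rate)
    # Release: die off back to an amplitude of zero
    n = max(1, int(release * sample_rate))
    env += [sustain_level * (1 - i / n) for i in range(n)]
    return env

# A short, punchy sound: fast attack, quick decay, brief sustain, short tail
envelope = adsr_envelope(0.1, 0.1, 0.5, 0.2, 0.1)
```

Multiplying a raw sample by an envelope like this, point by point, is what reshapes a static recording into something snappier or softer.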

Every sound you encounter in the real world is an amalgamation of these attributes. Sound designers will adjust the ADSR of any given sound to give it more character and make it sound more natural. The short video below from BluethunderMUSIC does an excellent job of visualizing what it looks like to edit the ADSR of a sound and how editing these attributes affects a sound.

Middleware and Event Objects

Sound designers for video games use tools called 'middleware' to edit audio. Middleware tools are plugins that are compatible with game engines (the software used to create characters, environments, and other elements of video games) and give sound designers many more dimensions to work in. Traditional DAWs (like Ableton Live or Logic Pro X) only let users edit sound in a linear or 2D sense, while middleware lets users edit audio non-linearly. Middleware gives sound designers access to traditional audio editing tools like filters and aux sends, along with the ability to apply FX like reverb, delay, and distortion and to automate parameters on samples.

Middleware, most importantly, allows sound designers to handle audio in a dynamic and spatial sense. This means that any in-game event/action can trigger any audio event to happen. An audio event can be anything from the playback of a clip, to beginning the transition of music, to setting BPM, to adjusting reverb, etc. Traditional DAWs are built to output a final, mixed piece(s) of audio in the end, but audio middleware allows us to essentially choose the clips and apply parameters to them that allow us to dynamically play and adjust them at any given point during the game.

Audio middleware is also responsible for managing its own assets and resources while it runs alongside the other elements of the game engine, such as visuals and input. Most DAWs assume they have all of your system's resources available and will typically use whatever they can, but in middleware the usage of CPU, memory, and disk space is configurable, and some of it is even adjusted dynamically based on what is happening in the game.

Let’s say you want to add the sound of an explosion to a video game. Adding the audio to the game is as simple as uploading a sample of a desired explosion to a middleware program. But now that your audio is in the game, you need to determine what actions will trigger the sound. Wwise by Audiokinetic is a popular middleware program that will let you assign specific in-game actions to trigger the explosion by creating what is called an ‘Event Object’. By creating an Event Object with middleware, you give instructions to the game engine on when, where, and how to trigger any sort of sound throughout the game.
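Wwise events are authored in its own tooling rather than written by hand, but the underlying idea — game code posts a named event, and the audio layer decides which actions that event triggers — can be sketched with a toy example (every name here is made up for illustration, not Wwise's actual API):

```python
class AudioEventSystem:
    """Toy illustration of the middleware 'event object' idea: the game
    engine posts named events, and the audio layer maps each event to
    one or more audio actions (play a clip, duck a bus, set a parameter)."""

    def __init__(self):
        self._events = {}  # event name -> list of registered actions

    def register_event(self, name, action):
        """Attach an audio action (any callable) to a named event."""
        self._events.setdefault(name, []).append(action)

    def post_event(self, name):
        """Called by game code when something happens in-game.
        Runs every action registered for the event, in order."""
        return [action() for action in self._events.get(name, [])]

system = AudioEventSystem()
# One in-game event can trigger several audio actions at once
system.register_event("Play_Explosion", lambda: "play explosion_big.wav")
system.register_event("Play_Explosion", lambda: "duck music bus by -6 dB")
print(system.post_event("Play_Explosion"))
# → ['play explosion_big.wav', 'duck music bus by -6 dB']
```

The key design point is the indirection: the game engine only knows the event name, so the sound designer can change what the explosion sounds like — or what else happens alongside it — without touching game code.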

Below is a video from Eric Houchin, who thoroughly breaks down his process using Wwise to design and link sounds to actions in the video game Limbo.

Attenuation, Equalization, and Reverberation

Attenuation, equalization, and reverberation are three major attributes of a sound that designers take into consideration when creating sound effects for video games. Attenuation refers to the loss of energy from sound waves as they travel through a space. Let's take the sound of an explosion as an example again. If your character is standing close to something that explodes, the sound will be much louder than if you were standing farther away. This effect happens in the real world, and sound designers need to take it into consideration when programming their sounds. It would be quite disorienting if there were an explosion somewhere across the game map but it sounded as if it had happened right next to you.

Your character's distance from a sound source also demonstrates the need for proper equalization. Audio equalization is the adjustment and filtering of the many frequencies a sound can emit. Continuing with the explosion example, the closer your character stands to the source, the more middle and high frequencies you'll hear. The farther away your character is, the fewer middle and high frequencies you'll hear — mostly you'll hear the bass frequencies of the boom.

Reverberation is the reflection of sound off of objects in an environment over time. Reverberation can be affected by many things, including the size of the space and the type of material surrounding a sound source.
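As a rough sketch of how distance-based attenuation and equalization might be modeled, the helpers below map a listener's distance to a volume gain and a low-pass cutoff frequency. (The function names and the simple inverse-distance and linear curves are assumptions for illustration; real middleware lets designers draw these curves by hand.)

```python
import math

def distance_gain(distance, ref_distance=1.0, min_distance=0.1):
    """Inverse-distance attenuation: sound loses energy as it travels.
    Returns a linear gain factor (1.0 at the reference distance)."""
    d = max(distance, min_distance)  # avoid dividing by zero up close
    return ref_distance / d

def gain_to_db(gain):
    """Convert a linear gain factor to decibels for display/metering."""
    return 20 * math.log10(gain)

def lowpass_cutoff_hz(distance, near_cutoff=20000.0, far_cutoff=500.0,
                      max_distance=100.0):
    """Distant sources lose their mids and highs: map distance to a
    low-pass filter cutoff, so far-off explosions are mostly bass."""
    t = min(distance / max_distance, 1.0)
    return near_cutoff + t * (far_cutoff - near_cutoff)

# An explosion 10 units away is 20 dB quieter and noticeably duller
print(gain_to_db(distance_gain(10.0)))  # → -20.0 dB
print(lowpass_cutoff_hz(10.0))          # → 18050.0 Hz
```

In a real game these values would be recomputed every frame as the character moves, which is exactly the kind of dynamic, spatial control that middleware exposes.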

Middleware programs like Wwise (mentioned earlier) can help sound designers manipulate all of these important and complex factors with FX automations.

Below is a video from Riot Games that breaks down some more factors one needs to think about when designing sounds for video games. This video is a part of a larger series of videos from Riot Games that explore all the other facets of video game production.

Depending on the number of dimensions in a game, designing sounds can get complex. If you're looking to get into sound design for games, it wouldn't be a bad idea to dabble in music production first to gain a basic understanding of audio editing. However, players don't experience sound in games the same way you would when listening to music; many more factors, like environment, distance, and setting, are taken into account when game audio is built. Next time you're playing a video game, take a moment to appreciate the efforts made to make your gameplay experience as immersive as it can be.

***

To learn about courses at Electronic Music Collective please visit our course page. Email us at [email protected] or call 646-747-0144 for additional information. Follow Us on Twitter, Instagram, Facebook, and YouTube: @makemusicemc