GameSoundCon 2014 was held this week, October 7-8, at the historic Millennium Biltmore Hotel in Los Angeles, California.
The words GAME and CON may bring to mind visions of revellers in anime and superhero costumes, but that is miles off base from GameSoundCon. This annual meeting of game audio students and professionals is two days of intensive exposure and a deep dive into the many facets and challenges of integrating sound and music into video games. The atmosphere is friendly and questions are welcome. Networking is emphasised, and the speakers are accessible to participants during breaks as well as at a mixer that caps Day 1.
The conference runs four tracks of concentration simultaneously in separate rooms: Game Audio Essentials, Game Audio Pro, FMOD and Wwise. Each track offers different 60-minute sessions throughout the day, and attendees can sit in on any presentation that interests them. Speakers include Brian Schmidt, founder and Executive Director of GameSoundCon, discussing essential tech as well as trends in the industry; Paul Lipson, Audio Director at Microsoft Game Studios, presenting interactive composition; Richard Ludlow, Creative Director at Hexany Audio, discussing bids and contracts; and Perry R. Cook of Princeton University and Smule, speaking on synth concepts. Hands-on tutorials are presented by Stephan Schütze, Sound Designer/Director of Sound Librarian, for FMOD, and Simon Ashby, VP Products at Audiokinetic, for Wwise.
Much attention is devoted to the professional game audio tools FMOD and Wwise. These are programs that allow far more control over audio in an industry that, in most situations, has been limited to little more than "start sample here" code. New technology from iZotope and other companies now allows increasingly complex DSP sound manipulation to be performed during gameplay. These new options open up massive possibilities, especially as the memory allotted to audio in most games is still limited to only a few megabytes. However, they also require a way to program the behaviour and triggers for this sound, and this is where these middleware applications come in.
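To make the idea concrete, here is a toy sketch of what such middleware does conceptually: the game engine raises named events, and the audio layer decides which sounds play and how. All class and event names here are hypothetical illustrations, not the actual FMOD or Wwise APIs.

```python
class AudioEventMap:
    """Toy model of middleware: maps game events to sound behaviours."""

    def __init__(self):
        self._handlers = {}

    def register(self, event_name, handler):
        # Attach a sound behaviour (a function) to a named game event.
        self._handlers.setdefault(event_name, []).append(handler)

    def trigger(self, event_name, **params):
        # Called by the game engine; returns the sounds to start.
        return [handler(**params)
                for handler in self._handlers.get(event_name, [])]

engine = AudioEventMap()
# A door-close event plays a slam sample, scaled by how hard it shut.
engine.register("door_close", lambda **p: {"sample": "door_slam.wav",
                                           "volume": p.get("force", 1.0)})
print(engine.trigger("door_close", force=0.8))
```

The point of the indirection is that a sound designer can change what "door_close" sounds like without touching game code, which is exactly the separation FMOD and Wwise provide in production.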
One thing is clear: the creation of game music and audio is incredibly detailed and specialised. Search YouTube for game music composition techniques and phrases such as horizontal re-sequencing or vertical layering come up - what the heck? The latter is a technique where music is composed in multiple synchronised layers that can be split and combined in any combination to create multiple possibilities, and once you see it demonstrated you begin to appreciate the complexity of composing to such parameters. If you're interested, a good place to start is the YouTube channel GameMediaPR and the work of game music composer Winifred Phillips. She has written a book, A Composer's Guide to Game Music, which explores many of the techniques employed, and she explains them in a series of videos designed to accompany the book.
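The vertical technique described above can be sketched in a few lines: music is recorded as synchronised stems, and the mix engine enables whichever layers the current game state calls for. The layer names and intensity thresholds below are made up for illustration, not taken from any particular game.

```python
LAYERS = ["drums", "bass", "strings", "brass"]

def active_layers(intensity):
    """Map a 0.0-1.0 game-intensity value to the stems that should sound.

    Each layer fades in as the action escalates; because all stems share
    one tempo and key, any combination plays back as coherent music.
    """
    thresholds = {"drums": 0.0, "bass": 0.25, "strings": 0.5, "brass": 0.75}
    return [name for name in LAYERS if intensity >= thresholds[name]]

print(active_layers(0.3))  # quiet exploration: ['drums', 'bass']
print(active_layers(0.9))  # full combat: all four stems
```

A real implementation would crossfade layers rather than switch them, but the core idea is the same: the composer writes stackable parts, and gameplay decides the arrangement.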
When it comes to sound design and effects such as footsteps, ambience, weapons and creature sounds, FMOD and Wwise are used to create randomisation scripting the game audio engine can understand, ensuring repetition of sound FX is kept to a minimum. This is where things get even more complex: the game engine sends out event triggers (shots, characters moving, doors closing, etc.), which cause the sound engine to fire off scripts that randomise mix elements such as volume, pan, pitch and blend, eking out almost infinite possibilities from a small set of audio assets. There are a number of in-depth demos of FMOD and Wwise on YouTube if you want to explore this in more detail.
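The randomisation idea is simple enough to sketch directly. One footstep asset becomes many distinct-sounding playbacks by varying pitch, volume and pan within small ranges, and by never repeating the same sample back to back. The parameter ranges here are illustrative guesses, not engine defaults.

```python
import random

def footstep_event(samples, last_sample=None, rng=random):
    """Return randomised playback parameters for one footstep trigger."""
    # Exclude whichever sample just played, so identical steps never repeat.
    choices = [s for s in samples if s != last_sample] or samples
    return {
        "sample": rng.choice(choices),
        "pitch": rng.uniform(0.95, 1.05),   # +/- 5% pitch shift
        "volume": rng.uniform(0.8, 1.0),    # slight level variation
        "pan": rng.uniform(-0.2, 0.2),      # subtle stereo movement
    }

steps = ["step_a.wav", "step_b.wav", "step_c.wav"]
event = footstep_event(steps, last_sample="step_a.wav")
print(event)
```

With three source files and four randomised parameters, no two triggers are likely to sound identical, which is how a few hundred kilobytes of assets can cover hours of play.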
Back to GameSoundCon: kicking things off, an inspiring keynote address was given by Marty O'Donnell. He shared his musical and professional journey, from the excitement of early experiments syncing a Pro One to an 808, through writing unforgettable commercial jingles in the 1980s, to creating the audio for Riven (the sequel to Myst) and the hit Halo series.
One thing's for sure: there's a lot to know about the world of game sound and music.