Saturday, April 18, 2015

The Lowest 25%, Part 2: Achieving Space, Clarity, and Dynamics When Producing and Mixing

Doing less now makes more seem bigger later…it's true!

Last week, I talked about achieving fullness when producing and mixing in a studio setting. You can check out Part 1 of the series here. If you don't have enough in your cup, you won't be able to take a song to the places it needs to go. However, we are oftentimes given extra parts, allowing us many options for achieving and maximizing fullness. But even when a song is relatively high energy, it's just as important to make sure that your cup isn't 100% full 100% of the time.

Now, a full-on symphony is naturally going to be quite a bit more dynamic than, say, an R&B song, which in turn is going to be more dynamic than a pop punk anthem. Yet each has its place for ups and downs, fullness and space that take turns relieving and exciting listeners. In general, a sweet melody or soulful solo shines over a moment of great space, while more aggressive solos or three-part harmonies often find their places among sections that need to push fullness to the very brim. Of course, there are exceptions. For example, a cappella sections are those that feature voices—usually several—and nothing else.

So, what can you do with that overlooked 25% of the cup to balance your mix once you have all of the necessary elements for achieving fullness?

  • Reduce what's being played in the verses / down moments
  • Reduce how many instruments are playing in the verses / down moments
  • Keep volumes low
  • Use automation peaks for transitions
  • Craft EQ differently from main features
  • Don't be afraid of different reverb practices
Reduce what's being played in the verses

This is a bit more obvious, and a good musician or producer will already know to vary playing techniques or how much is being played between different sections of a track. However, this can be a difficult concept to grasp if you are a musician who is new to recording or playing in a live band. It can be even more difficult if you've been playing the same way for 30+ years and don't like change. People who come from solo performance backgrounds are used to having to sum an entire song up on one instrument, and they are used to playing more for the duration of the song.

Some practical examples from my "fullness 5" list would be:
  • changing full 8th- or 16th-note background guitar strumming patterns to palm mutes, whole notes, or individual note picking
  • playing whole notes or longer on pianos and keyboards, using fewer notes per chord, adding in very small licks or runs sparingly
  • holding out only one or two notes on a pad or organ
  • only including stabs or short riffs on synths
  • using harmonies halfway through the section or on certain words / phrases
The special thing about this is that, when combined with some of the other techniques, you may decide to do less throughout the song. For example, your verses and choruses both may have whole note filler parts that are connected by a more active pre-chorus.

Reduce how many instruments are playing in those spacious sections

If the previous section is difficult for some to grasp, this one might be impossible. However, in any good composition or production, nobody plays the entire time, with the exception of pop loops and drones. Having electric guitar two or three, Rhodes, or piano play less is good, but having them wait until halfway through the verse, the pre-chorus, and the chorus to play anything at all has the potential to be great. This is especially true when the main instruments (lead guitars, keys, or strings, bass, drums, etc.) of the song have reduced what they are playing.

Keep volumes low

In order for all of these extra "fullness 5" instruments to make up the bottom 25% of your cup, they have to stay in the bottom 25%! For the most part, these instruments should not be consciously heard. Did you try my suggestion from Part 1? Whether you are mixing a verse, chorus, tag, bridge, or otherwise, try lowering the volumes of your filler instruments until you can't really tell whether they are there or not. Go back and listen to your mix again, then mute any of those tracks when you get to the section in question. You should feel a big change in fullness / space, and with practice, you might even learn to hear and distinguish those instruments when you unmute the channel(s) in question.

If you have only one or two extra background instruments, or if you have several that barely play anything, you can get away with a few "feature" moments where they stand out a bit. Just remember, though, that the more you have, the less each part can do. Too many parts at too loud a volume can really over-thicken and muddy up a mix.

Use automation peaks for transitions

Background instruments can greatly shape the way transitions feel as a whole. In addition to layering in new instruments at distinct intervals, you can use tiered automation, steady automation, or a combination of the two to boost the impact of a downbeat.

Because your fillers are already at a lowered volume, you have a lot of dynamic range to play with. You might even pull one or two of them higher up in the mix for those few seconds. A B3 with the speed switch activated for the rotor in a Leslie cabinet or an aggressive synth are two examples of tracks that may pop during a transition. 

Then, just when your cup is close to spilling over, you can take a little sip out of it by lowering those volumes back down...Not to where they were before, though. You don't want them to disappear completely. You just want them to sound like they disappeared but not feel like they did. By keeping these instruments out of the way, you allow a greater clarity without losing fullness.
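
To make the idea concrete, here's a minimal sketch in Python of that kind of filler automation expressed as plain gain math. The signal, sample rate, and section timings are all made up for illustration; in practice you'd draw this curve on the filler channel's fader in your DAW.

    import numpy as np

    sr = 44100
    filler = 0.1 * np.random.randn(sr * 16)           # placeholder filler track, 16 seconds

    gain = np.full(len(filler), 0.25)                  # filler lives low in the mix
    ramp_start, downbeat = 6 * sr, 8 * sr              # two-second build into the downbeat
    gain[ramp_start:downbeat] = np.linspace(0.25, 0.9, downbeat - ramp_start)
    gain[downbeat:] = 0.35                             # settle slightly higher than before, not gone

    automated = filler * gain                          # the automated filler channel

The shape is the point: a short climb into the transition, then a drop that lands just a touch above where the part started.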

Remember, transitions don't just happen between sections. Sometimes they occur at the halfway mark, the two-thirds mark, or other points within a single section to give that section some more meat before transitioning to the next one.

Craft EQ differently from main features

The daddy of pre- and post-production clarity tools, EQ has the ability to totally transform the way audio sounds. You're going to want a fairly natural sound for your main instruments in typical situations, though there are many pop mixes that feature heavily EQ'd pianos, vocals, and other instruments. However, when you over-process tracks like this, you run the risk of a mix sounding too thin or too muddy.

With the bottom 25% of your cup, over-processing, and yes, even under-processing, can actually be used to your advantage in order to keep your fillers from competing with more important instruments in a similar frequency range.

Instruments like second acoustic guitar or keys might have all of the bottom end completely cut in order to keep from competing with the bass or mid-friendly instruments in a song. Likewise, an electric guitar or synth might have the high end automated out during the verse and sweep back in later on. Thinning out or muffling instruments like this is a great option when you need to clear up sonic space for a more important instrument, especially when both instruments are panned to a similar location.
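
As a rough illustration of that kind of carving, here's a short Python/SciPy sketch that high-passes a background keys part so its low end stays out of the bass's way. The track, cutoff, and filter order are assumptions for the example, not a recipe; any DAW EQ's high-pass band does the same job.

    import numpy as np
    from scipy.signal import butter, filtfilt

    sr = 44100
    keys = 0.1 * np.random.randn(sr * 4)               # placeholder background keys part

    # 4th-order high-pass around 200 Hz: everything below it is left to bass and kick
    b, a = butter(4, 200, btype='highpass', fs=sr)
    keys_thinned = filtfilt(b, a, keys)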

On the other hand, especially when dealing with synths, very natural sounding piano samples (or live performances), and orchestral recordings, you may do a lot less with the background EQ than you do with the EQ on the main instruments. Synths are so drastic, yet so particular in the tones they can create, that adjusting most of the billion knobs you get to play with changes the frequencies the synth produces as well. So, if you've found the perfect synth or pad tone and it isn't overbearing in any range, you may not cut anything and still have a beautiful sound when it's pulled down in the mix. Or, you may even boost the range you want to stick out the most and lower the output (which is essentially the same as cutting the things you didn't want). One example I often hear is people adding a little extra high end somewhere around 8 kHz or above to give their synths a bit of sparkle. Turning the channel down afterwards pulls that sparkle down with it, but it also clears out some of the mid and lower frequencies that may not have been right for the situation.

Don't be afraid of different reverb practices…

…especially if you are playing really big, spacey, ambient music!

You've all heard how reverb's done in the industry nowadays. You get your track sounding nice and create a send that receives some of your instrument into a very wet auxiliary reverb track. You never bothered to see what changing from pre-fader to post-fader does, and when your friend tried to explain it to you, it went right over your head.

Well, in orchestral music, reverb is so powerful that it can make low-end instruments feel like they've tripled or quadrupled in size. What? Reverb on the low end? In orchestral works, it's a requirement, not a crazy option. While you may not be adding reverb to the kick and bass of a pop song, most other instruments get reverb, and this is really where you can start to change the function or "appearance" of your fillers. All of a sudden, your keys start to sound like electric guitar, your electric guitar starts to sound like a pad, your organ widens up, and your choir starts to shimmer.

Some of the other reverb practices may cause your background instruments to become more present in a mix or lose some of their identifiability, as described above, but that's fine as long as the ones that stick out more don't get in the way of those important features. So, how do you achieve those tones?

First, pre-fader sending allows the reverb level (wet) and regular fader levels (dry) to be controlled separately. This means that you're going to hear a lot more reverb at lower send levels on pre-fader tracks. This also means that your main fader can be all the way down, and you will still hear the reverb through the aux channel. If you are looking to keep the digital sonic space relatively the same while making an instrument feel like it is moving forward or backward in that digital room, this is a great option to play with. Or, if you don't want more than a couple of reverb tracks and need a big, far away sound, pre-fader is for you.
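
If the pre-/post-fader distinction still feels abstract, here's a tiny Python sketch of the underlying gain math (the function name and numbers are mine, not any DAW's): the only difference is whether the send taps the signal before or after the channel fader.

    def send_levels(source, fader, send, pre_fader=True):
        """Return (dry_out, wet_feed) for one channel feeding a reverb aux."""
        dry_out = source * fader                       # what the channel fader passes to the mix
        wet_feed = source * send if pre_fader else dry_out * send
        return dry_out, wet_feed

    # Fader pulled all the way down: pre-fader still feeds the reverb, post-fader goes silent.
    print(send_levels(1.0, fader=0.0, send=0.5, pre_fader=True))    # (0.0, 0.5)
    print(send_levels(1.0, fader=0.0, send=0.5, pre_fader=False))   # (0.0, 0.0)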

Or, and this is my preference much of the time because of the amount of orchestral work I do, you can set up reverbs on aux tracks and change the output of your instruments directly to that aux instead of bussing them through an aux send. This is the same as putting the reverb plugin right on an instrument's channel, but it keeps you from having to put a separate reverb unit on every single track, which takes a lot of processing power. By changing the output, you can achieve the same effect with multiple instruments using one reverb unit. Like going post-fader, this still lets you control the wet (reverb) and dry (regular signal) levels. Be careful, though, not to add processing to that aux beyond the reverb unit itself (for example, the EQ tab within a convolution reverb unit), or the added effects will be applied to all of the instruments as well, since they are running directly into the aux track. The nice thing about sends is that you can edit the reverb tracks as you like without affecting the instruments.

So why do I prefer to place instruments fully in a reverb unit instead of just going post-fader? Well, they aren't quite the same. Going post-fader or even pre-fader often makes instruments feel like they are just moving closer to or further from the monitors on which you are listening. This isn't bad, but going fully through the reverb unit makes tracks feel like they are being placed inside a new room, not just a digital space with nice reverb. Since not all instruments are recorded with room mics, especially when you are working with orchestral samples, dry guitar tracks, or the recordings of under-budgeted amateurs, you can almost emulate different mic positions by going directly into the reverb. Nothing feels as close, up front, or in-your-face when you place a track in the reverb unit's room, especially when you compare two identical tracks at the same volume levels. So, this option isn't for every track, especially main ones that you want to use to punch listeners in the face. However, it's great for making tracks feel like they were recorded in a different space than they may have been originally.

Conclusion

Applying even a few of these techniques to mixes that give you trouble or that leave you unsure of what you can do with the excess background instruments will make huge differences in the clarity, dynamics, and space/fullness of any given section. Plus, some of these principles can carry over to the rest of your mix—your lead vocal, bass, lead acoustic/electric, piano, main synth, violin, drums or whatever else is your main focus.

If you haven't read Part 1, be sure to check it out, as it explains the "fullness 5" and ways to use the bottom 25% of your priorities to liven up a song without making it obvious you did anything differently.

Thanks for reading! Be on the lookout for the rest of this series, where I'll cover topics that can be applied to live mixing, like monitor volume, dealing with too much, and yes, having enough!

Friday, April 10, 2015

The Lowest 25%, Part 1: Getting More Fullness when Producing and Mixing

If you've been around the audio world for more than a few months, it's likely you've heard an analogy in which mixing gets compared to filling a cup. Somebody probably recommended filling that cup with kick (or drums) and bass first, warning you not to put in too much right off the bat for fear of your cup overflowing. Still, as you added other elements, power players, such as lead guitar, vocals, and the drums and bass, began to take a lot of your attention.

It's easy to set and forget the instruments and effects that hardly seem to be present already, so why bother dedicating much effort to the things that barely make it into the bottom 25% of your cup? Well, clarity, space (fullness), and dynamics are 3 of the most important areas any mixing engineer can master, and seeing how these things relate and working to achieve them can really add to the sense of unity a great musical group brings to the stage or studio.

Assuming you have at least a basic understanding of effects like EQ, reverb, and compression, this series of articles will focus on how you can use the least popular channels in your mix to enhance the clarity, fullness, and dynamics of those that make up the "attention-grabbing" 75% of a live or studio mix. Oftentimes, what we don't consciously notice—that quietest 25%—is what makes those featured parts sound better…or worse!

Part 1: Studio Mixing - You Must Have Enough in Your Cup

Whether you write pop songs, pump out beats, compose orchestral scores, or focus only on mixing, you've probably run into a situation where something just doesn't feel right. I regularly hear work from people who released it publicly even though it never felt full enough to them, and they have no idea how to fix the issue except by turning up their favorite channels more and hoping for the best.

Fullness is almost a trick category though. Think about it. If you were asked to bake 12 muffins and were given enough muffin mix for 6, it doesn't matter what you do. You'll end up with 6 muffins. Sure, you could spread it out over 12 cups, but you'll still end up with 6 muffins…just in 12 halves. In the same way, if you only have parts for 6 instruments, all of which play a vital role in the song, you can adjust the EQ over and over or automate the faders to be louder at the chorus, but you'll still only be hearing 6 instruments. This is like filling your cup 75% of the way with the important and "featured" instruments and nothing else.

However, it is possible to take more instruments and more parts and still make it sound like only 6 or 7 things are going on. It's kind of like being given enough muffin mix to bake 12 muffins when you only need 6. You could make 6 regular muffins, or you could pour a little extra batter on 1 or all of them to make them bigger. Maybe a bit drips into an empty cup and you get an extra bite-sized muffin, and whatever's left over can be thrown away or saved for another time.

So, it is vital to make sure the bottom 25% is even available before you can worry about using it.
If you're strictly a mixing engineer, this may call for asking the producer for some more tracks, but if you are also creating the music, here is what I call the fullness 5—instruments that make great fillers—and tips on how to appropriately use them:

  • Acoustic Guitar / Second Acoustic Guitar
  • Piano / Keyboards
  • Organ
  • Synths / Pads
  • Extra Electric Guitars
  • (Vocals)
Composer's note: if you are writing for orchestra and experience fullness problems, it might be time to start studying woodwinds. Odds are you know more about strings and brass. Also, if you are writing hybrid / Hollywood-style scores, synths or any of the other instruments on the fullness 5 list are great tools to use too.

The key with fullness parts like these is to fight the urges naïveté will present you with. They don't have to be at the same volumes as everything else in the mix, even if somebody played a killer organ lick or a great piano part. In fact, when they're used as fillers, it's best to turn these instruments down to just below the point where you can distinguish them—or to where you can just barely hear them when other instruments rest. If you have a hard time believing it makes any difference to have something that quiet in your mix, hit the mute button, and your ears will be surprised at the difference.

Acoustic Guitars

These magical instruments have decay, meaning their sound fades away after the strings are plucked, but they can add massive fullness to songs. This is even more so if you record a second acoustic guitar or double the first acoustic guitar.

I remember creating a song for which I had two acoustic guitars playing identical parts in some places and slightly different rhythms in others. I panned one somewhere to the left and the other somewhere to the right. Even though acoustic guitar was a "feature instrument," only one idea was originally written. Plus, I had organ and electric guitars as well. However, I recorded two tracks in case I needed a backup, and it was the second acoustic track that really added to the fullness. At one point, I had left it muted while working on the other tracks individually, and I thought, "Why is this song so weak all of a sudden?" Then I realized acoustic 2 was still muted, and I was thankful that I had recorded it.

Due to their decay, acoustic guitars can present different amounts of fullness depending on how they are played. If you strum out whole (or longer) notes, the beginning of each chord will be the most full, while if you maintain a steady eighth or sixteenth note rhythm, the sound won't fade away.
Just remember, if you are doubling a single acoustic track, you'll have to offset the timing a touch and maybe edit them slightly differently in order to avoid phasing issues.
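
For the doubled-copy case, here's a hypothetical Python sketch of that offset, assuming a mono acoustic take as a NumPy array at 44.1 kHz. A second real performance is always better, but even a straight copy benefits from a small nudge before the pair is panned apart.

    import numpy as np

    sr = 44100
    acoustic = 0.1 * np.random.randn(sr * 4)           # placeholder acoustic take
    offset = int(0.012 * sr)                           # ~12 ms nudge for the copy

    double = np.concatenate([np.zeros(offset), acoustic])[:len(acoustic)]
    left, right = acoustic, double                     # pan these apart in the mix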

Pianos and Keyboards

Whether it's jazz chords on a Wurlitzer, simple and steady triads on a grand piano, or just the 1 and 5 notes repeating on an old upright, keyboards are great at filling out space and blending in, even if the initial attack (striking) of the notes sticks out a bit. Plus, keys enhance the sound of guitars, bass, synths, or other "featured" keyboards when played properly.

Strong power chords can give distorted electric rhythm guitars more punch or bass guitars more tone, and they can give the impression that there is less decay on either. In addition, higher notes on a piano can fatten lead guitar and synths or blend the end of a guitar run into the beginning of a piano lick so seamlessly that you don't know when one instrument stopped and the other started (dovetailing).

Organ

Before there were synths and pads, there was organ…the mother of all non-decaying fillers. Because an organ can hold out a note at the same volume indefinitely, it can potentially be used for an entire song without anyone realizing it was there in the first place. The B3 is the go-to organ for most situations, and programs like Logic Pro and Pro Tools come with some pretty decent stock replicas should you not have access to one of these very pricy instruments.

My biggest tip here? The less you do, the less noticeable organ is…without sacrificing fullness. It has the ability to blend extremely well or stick out on a moment's notice, and it is just as common to hear it stabilizing an acoustic guitar or piano at the beginning of a song as it is not to include it at all until the final verse.

Because organs do not have decay, the low range can be used to beef up long bass notes, perhaps even better than a piano can. The mid and high-mid registers can do the same for guitars. Only in the absolute highest range does an organ have difficulty blending with anything but similarly pitched synths, but when you reduce its volume to be in that lowest 25%, it can add a great presence without being overbearing.

Synths and Pads

Considering pads are just synths edited well enough to sound calming or deep, this group easily offers the most variety. There's a practically unlimited number of synth types that can be created in any given interface due to the many parameters the user can control. In addition to being able to sound harsh or biting, techno-clubbish, 8-bit, spacey, swirly, stringy, brassy, or even like real acoustic instruments, synths and pads can be either decaying or non-decaying instruments. You get to choose!

Because of the high level of customization, it's very easy to match them to the vibe of most songs, and they require little arranging thought if you aren't great at playing music. Simply one note or one note at a time will do in many cases, and chords only add to the fullness. Pads blend a little more quickly than regular synths, and, while being louder may be appropriate for pads on a real ambient or spacey song, you might never know pads are in some of the other tunes they are featured in…

Electric Guitar Heaven

…Or, even more shockingly, you might think you are hearing a pad, when you are in fact hearing additional electric guitars. These instruments are surprisingly versatile—they can be confused with or help to enhance keyboards, synthesizers, other guitars, organs, and even vocals.

You can quickly overdo it when adding extra electric guitar tracks, but there are lots of possibilities for the many situations background EGs are used in. Simply doubling and panning a part may work for one song, while changing the tone works for another. Muted power chords, drawn-out open chords, or fast CAGED chords may be played in contrast to each other, or each chord could be picked one note at a time. Effects may be used to emulate those pad-like sounds, or ostinatos (short, repeating patterns) may focus on one or two notes.

Background electric guitar writing and mixing could be an entire article in itself, but, like the other instruments, not everything has to be heard distinctly…especially if there are other, more prominent guitars present.

Vocals

Not officially on my fullness-5 list, extra vocals are a great way to add fullness to a song. However, they tend to stick out and catch attention, even when in the background, because we humans love to hear the sound of our voices. Things like "Oohs," "Aahs," choirs, harmonies, doublings, and octave doublings all serve their purposes in the right settings, but they don't often get mixed so low that you couldn't recognize them as vocals.

Wrap-up

So remember, some of these things are probably going to be features in every track you mix. However, if they aren't doing anything interesting, they can be pulled back a bit in order for something else to shine (more on that in my discussion on clarity).

And, if a particular song doesn't have the oomph you want for that epic line, try adding a small amount of something new. Got a song that has two electric guitars, bass, drums, acoustic, pad and piano, yet a crescendo to the pre-chorus just doesn't add the fullness you wanted? Perhaps a quiet synth doubling the pad, an organ, and a Rhodes doubling the piano will help.

Thanks for reading! Be on the lookout for the rest of this series, where I'll cover topics like having too much in the cup, using reverb, monitor volume, and even having enough in the cup in live scenarios.

Tuesday, April 7, 2015

How Do You Use the Tube? An Interactive Conversation About Media Creators and YouTube.

Hey everyone,

I've been doing a lot of research on YouTube lately and thought it would be great to discuss our greatest uses for the massive video site. Sure, self-promotion is huge among game developers, filmmakers, composers, voice actors, famous reviewers, singers, bands, artists, and everyone else in the creative field, but if everyone only promoted their own content, YouTube would be dead. Its success is due to a highly interactive exchange of the arts and information through a visual format.

So, other than getting your own work out there, what constantly draws you back to the tube?

For me, I've often used it to find and study new music. In fact, this is the number one reason I sign in. I'm on it right now listening to a new OST! However, I also find reviews on games or musical equipment, demos of those games and compositional products, and trailers for films and other media to be quite useful.

In addition, especially when I first joined the community, I found it very helpful in connecting with others who shared similar interests and am friends with many of those people to this day. Plus, since I have quite a loyal fanbase, I recently started using it as a way to give back to the fans! In fact, for the month of April, I'm doing a giveaway that can be found on any of my videos with 1,000-101,000 views. Check it out!

Go ahead, leave a comment! Let us know what YouTube does for you :)

Monday, March 9, 2015

The Composer's Guide to Game Music: An Award-Winning Book by Game Scoring Icon Winifred Phillips

Hey friends, readers, composers, and all artists alike! As promised, I have read through the entire Composer's Guide (paraphrasing the title) by one of my role models, Winifred Phillips, and can now post a full review. I enjoy talking too much online, in one-on-one situations, and when I'm teaching, and a blog is kind of like all three of those situations, so I figured I'd review each chapter. OK, an overall review will be at the end too...and on Amazon. Enjoy!

Chapter 1

The hook. This is what artists commonly use to grasp the attention of those they are presenting a work to, and Winifred definitely hooked me in this chapter. Her writing style is sincere and friendly, quirky and humorous, and full of passion that connects with a game composer like myself. In addition, like any good teacher, Phillips shares a plethora of analogies and personal stories to paint vivid pictures of that which she hopes to convey to us as readers.

The great thing about this chapter is that it also gives people who aren't sure if they want to move into the field of game composing a little test they can assess their passions with. And that is also the biggest takeaway point. Love games!

Chapter 2

This chapter describes the essence of the book. It shows the book's approachable nature (and really that of Phillips, as she loves to engage with fans at conferences or via social media sites like twitter). Throughout the entire text, she continually presents information about the game scoring world that can benefit both complete newbies and experienced veterans. Because I myself lie more on the experienced side, I knew much of what she presented here, but it is always great to hear inspiring quotes, to be refreshed on where you came from, and to learn new ways of teaching old tricks to those who work under you.

In essence, learning the craft of game composing doesn't have to cost hundreds of thousands of dollars, but it will cost time, dedication and passion.

Though not mentioned in the book, I'd like to say that I read an article recently in which Mark Ruffalo stated that he had to audition over 600 times before his career as an actor launched. Do you think he did nothing in the meantime? Of course not. With each rejection, he tried not to take it to heart. Somebody else just happened to be better suited for the part, and he had to continue practicing so he could eventually convince directors he was that guy who was the best for the part. Winifred is telling us the same here. The takeaway here is always be learning, no matter how far along you are in your career.

Chapter 3

This was a very interesting chapter on the science of video game entertainment and how it attracts a following. It shows that before you consider being a game composer, you must understand games. Sure some of the biggest film names out there score games simply for the creative freedom, but there is something more that can be pulled from one who has a personal connection to every step of the video game experience. This particular chapter does a good job explaining how game composers can gently nudge a player toward a more fully immersive experience.

Takeaways will help you to understand the science behind player-game relationships.

Chapter 4

The story in this chapter was one of my favorites. I have not yet had the luxury of attending one of the biggest game festivals in fandom, so it was nice to imagine the scenes she described of raging fans going nuts over hearing their favorite game music performed live.

On the educational side of things, this chapter is extremely important for those who have not had any formal composition training or very good private training. I've had formal composition training, and I still learned more in-depth things, especially when it comes to the usage of the idée fixe. Winifred argues that it should indeed be considered an entity distinct from the leitmotif, contrary to the popular belief that the two are interchangeable with no real differences. I've heard both countless times in video games, and after reading, I would consider the idée fixe to be a very specific subcategory of the larger generalization of the leitmotif, even though neither side may agree. Again, because of in-depth gaming experience and the reading, I was able to conclude that there might even be two distinct types of idée fixe.

The takeaways here should help you to understand music itself better before incorporating its techniques into gaming.

Chapter 5

This chapter makes a slight diversion back to the science of games and the psychology of gamers. It is immensely useful if you aren't well-rounded in your experience of the different genres of music in general, the genres of video games, or the genres of music in video games, and it is equally useful if you don't have the mind of a producer. As mentioned earlier, not only will you study game and music types, but you will also see how different psychological mindsets associate with the various types of games out there.

This chapter also inadvertently encourages you to be a self-disciplined go-getter. In other words, to truly get the most from it, you'll have to do some side-by-side research. Phillips describes the different types of music associated with shooters or RPGs or platformers, but she doesn't necessarily tell you step by step what goes into a rock song or what goes into a fantasy score or how to create an electronic soundscape. What she does do, however, is provide a plethora of in-game examples you can refer to in order to study various effects and techniques. Of course, I just love listening to game music, despite having experience in most of those compositional fields, so I followed along with most of the soundtracks mentioned to really put myself in the world of what she was describing. Some of the scores were old friends, while others I had never heard, and all enhanced the reading greatly.

Even now, as I write this, I'm listening to the full score to Little Big Planet 2 because I've completed the list of OSTs I've compiled over the years and only got to listen to a few of that game's tracks while reading the book. Listen to game music every chance you get! Be familiar with various game and music types. Those are the takeaways.

Chapter 6

A sort of expansion on the previous chapter, this chapter focuses more on the music in games since the reader should have a better understanding of the player. It focuses on what exactly music can do in a game and how important it can even become in the marketing world outside the game. If I recall correctly, this is also where she mentions just how valuable of an asset the game composer really is. If you are brought on board early enough, chances are that teams will listen to your work as they create (an honor I've experienced once). It really does fire them up and inspire them to do even better work! And, boy is it great to hear them say that your music has affected the development of the project in a positive way.

Takeaways demonstrate the relationship between game pacing and music reflective of any given situation.

Chapter 7

Winifred has more experience on much bigger titles than do I, and I found this section to be greatly enlightening on the process of working with a studio that is planning to release a AAA game. There are so many things you must do to prepare yourself for a big job, and she gives a great list of items to request from developers to make sure that you have access to as much source material as possible to inspire your best work. It's also where she first gets into the materials a game composer might need. While the film industry is relying more and more on music technology, the game industry couldn't survive without it. So if you are unfamiliar or uncomfortable with technology, make sure you pay attention to the upcoming chapters.

Preparation, and the tools to achieve it, can form your takeaways here.

Chapter 8

There are so many types of music and audio titles, let alone members of other departments, on any given game, so this chapter introduces you to the ones you'll likely be working with. It's a relatively short chapter compared to the others, but it can open your eyes to other music and audio-related jobs in the industry if you are interested in managing musicians and sound people instead of just creating music.

The takeaways here will show you who to communicate with (perhaps if you are seeking your first gig), remind you to communicate early and often, and help you to understand the chain of command.

Chapter 9

This is an all-around enjoyable chapter. It finally gets into the different types of game tracks you might hear in a typical game—something I have been studying since I was a child. Nowadays, there are so many cool things you can score: battle sequences, cinematics, general exploration (overworld themes), game trailers, and more.

If you understand the difference between how the various game tracks function within a game, you've nailed the takeaways.

Chapter 10

While chapter 9 explores some of the types of tracks you'll see in the video game composing world, this chapter really gets into the heart of linear-style game music as well as what makes game music dynamic and interesting. A must read for the beginning and intermediate musician. Linear music is very common in projects that are smaller or have engine limitations, and a composer can expect to work with it a lot, especially when first starting out.

Even the more advanced game composer can take away pointers on how to draw the most out of linear music, particularly when looped. At the very least, we can be pushed to pull more out of our journey around each track.

However, not only are loops more difficult to make interesting, they are often the hardest to edit and make transition smoothly. In fact, this chapter was what inspired me to write my most recent article on an alternative game looping method.

This really should be considered as one of the most important chapters in the book.

Chapter 11

Unique only to the gaming industry, interactive music is explored here. If you are a true gamer, it is likely in my own musical opinion that interactive music is your favorite type of music to experience in a game…especially if you are a musician. This chapter offers great explanations and advice on what interactive music is and how it works.

Your takeaways may be more educational, but mine is that interactive music is so imperative to games that all game composers should have a profound knowledge of how it works. It's super fun!

Chapter 12

MIDI is surprisingly something that many composers, young and experienced alike, have difficulty with. Even then, many who have a general understanding of it don't really take it to its full potential. This chapter will give you a brief understanding of it as well as explain some of the advantages, but only practice, experience, and some very specific topical research will help you to get the most out of MIDI.

After MIDI, the chapter goes on to describe where video game music has tried to go and may one day go, despite the disadvantages of highly experimental styles. Education on generative music (and MIDI, if that's new for you) is a good takeaway from this chapter.

Chapter 13

This chapter is non-musical, and it is also so huge that an entire book could be written about the topic. In fact, some have already been written. As composers, especially for games, you must have gear. And that gear must be good! Winifred mentioned that she composes with the assistance of six computers, and my brain nearly exploded. How I'd love to have even two! Simply put, one machine can't handle all of the tasks you'll want it to do, no matter how strong it is. Not only that, but she gets into the types of software, plugins, controllers, boards, DAWs, libraries, and other gear you may need, though she doesn't advocate any particular brand here. That's OK though. You can always read my product reviews to understand each company specifically.

My personal takeaway from this chapter was that Phillips must use some EastWest equipment, because she mentions a company, which she leaves nameless, whose software she regularly curses to the skies but accepts because it creates some beautiful libraries you can't find anywhere else. I also own EastWest, and as you all may know from my reviews, the products are great, but the player, stability, size of samples, and operation are insane.

Your takeaway may be less silly and more practical, since the other section of the chapter deals with middleware, another thing all game composers should be comfortable operating. If you don't know what middleware is, read this chapter, then start practicing!

Chapter 14

The chapter of hope and frustration. Winifred shares her personal journey and shows us how she got into the game scoring industry as well as how she maintains her place in it. I am currently looking for that next boost to the top tier in my career and can say firsthand that it is indeed a lot of hard work. Even if you follow all of the tips, you'll have to be able to keep up with those tips and repeat many steps until you are satisfied with where you are. You may have to experiment with different ways of approaching each step until you perfect them or find something that is efficient. You will face rejection, not because you are bad, but because somebody else got there first or fit a particular project in the way the producers had hoped they would. Even if you do everything technically right, your timing could just be a little off, or it could just not work out. Phillips strives here to encourage you to keep on keeping on, and that is the final takeaway.

Conclusion

Overall, this is a wonderful book, and I believe classes could be developed in universities that specifically teach game composing using Winifred Phillips' guide as the text. It reaches readers of all ages and understandings of game scoring, and it can surely boost the EXP of the newb and advanced reader alike (level up, anyone?). It's light but useful. Comical but efficient. The least boring textbook you could hope to read. It covers every area of the game scoring world, gives a plethora of musical examples you can listen to while reading in order to fully capture the essence of her ideas, and offers additional resources you can use to further your understanding of specific topics.

Pick up a copy the next time you're online, which is now, or at the bookstore the next time you're out.

For more information on the game composing sensation that is Winifred Phillips, visit her site at www.winifredphillips.com.

Don't hesitate to email me with any questions you may have about the book or about composing in general. And remember, if you need a composer for your upcoming game project, visit www.natecombsmedia.com, or bypass me and go straight to Winifred if you think you can land her!

Thursday, February 26, 2015

Video Game Audio - Cool Way to Make a Perfectly Smooth and Seamless Loop

Introduction

If you work in the gaming industry—whether you're a programmer, a composer, a sound designer, a project director, or anyone else who may deal with game audio—it's almost a guarantee you've come in contact with loops. They drive so many aspects of the gaming experience that even if you aren't the person implementing the loops, chances are that some of the music or sounds you've worked on will be looped at some point in their games.

Because the gaming industry is fueled by highly interactive experiences, your loops most likely will not be handled in a linear format—that is, by copying and pasting or otherwise extending an audio event to match a fixed timeline—as would be the case for loops in film and television. Instead, the game's engine will trigger the looping of various audio files so they can repeat an endless number of times until something new is triggered to take their place.

For this reason, a smooth transition from the end of a single audio file (or chain of audio files) back to its beginning is imperative, but creating seamless loops remains one of the trickiest challenges of implementing game audio today. When a loop is created, exported, or formatted improperly, the result is an audible pop, a harsh click, a slight gap with no sound, or a lack of sound continuity (such as a cut-off reverb tail in a musical loop).


Current Methods of Making Seamless Loops

While there are a couple of common methods used to attempt to tackle these issues, none are totally perfect or can be used in any situation, and all may require multiple attempts to get an acceptable outcome. I'll review some of these common ways to work with loops and then share a solution for achieving "theoretically perfect" loops that my own experience and roadblocks led me to discover—a method that works extremely well for ambiences and environments and very well for certain ambient music tracks, though with the right amount of work, it could be applied to any loop.

Both sound designers and composers (and all other audio experts) should be familiar with the term "zero crossing." In short, it is the center line of a waveform—often represented as a straight horizontal line in DAWs. When the wave touches this line on its journey to the opposite side, it is at its quietest point: zero amplitude. By finding a point of zero crossing in a waveform and trimming the file to that point, you are able to end a loop with a reduced risk of popping or clicking. However, sometimes this means you have to change the length of the file, even if only slightly, which can be awkward for rhythmically driven tracks.
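
Here's a small Python sketch of that trim, assuming a mono NumPy array at 44.1 kHz (your editor or DAW does the same hunt for you): find the samples where the waveform changes sign and end the file at the last one.

    import numpy as np

    sr = 44100
    t = np.arange(sr * 2) / sr
    loop = 0.5 * np.sin(2 * np.pi * 110 * t)            # placeholder two-second tone

    # Indices of the last sample before each sign change (each zero crossing)
    crossings = np.where(np.signbit(loop[:-1]) != np.signbit(loop[1:]))[0]
    trimmed = loop[:crossings[-1] + 1]                   # cut the tail at the last crossing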

Other times, the zero crossings just don't line up like you'd hoped, and you have to rely on a fade of 5-10 ticks or samples to force out the click that would otherwise be present. Usually, a fade this small is not easily noticeable, but if the track is fairly rambunctious, a similar fade may be required at the beginning of the audio file. These "forced" zero crossings can leave a slight gap in sound, and they of course don't address reverb tails, the leftover echo-like sounds that simulate a realistic environment.

If you go into a big hall and shout "echo," you'll hear the reflections—your voice bouncing off of the walls and other surfaces—trail off slowly. If it all stopped the second you shut your mouth, you would easily notice that something was off. This same unnatural feeling happens when a loop is suddenly cut off at the end in order to keep the timing while transitioning back to the beginning.

It's not an issue while composing or designing certain sounds because any reverb units you have applied in the project are active and hold out their respective reverb tails when MIDI notes or audio files are played back. So, loops are seamless all the time, every time within the project. However, when you actually write or bounce down all of that information into a single track, the reverb is written along with everything else. There is no active unit gauging when each note is played and waiting to hold them out accordingly. Instead, the reverb of the final moments of the original session carries out beyond the length of the final sound and is written to your new exported file.

Now, in order to maintain that reverb tail's naturalness, you must cut the tail off your exported file and place it on a second track at the beginning of the loop. That poses a few problems in itself, and as renowned game composer Winifred Phillips notes in her book, The Composer's Guide to Game Music, when done improperly, it can be quite dissonant and displeasing to the listener (pp. 173-174). For this reason, she advises ending and starting loops with the same notes and making sure that reverb times are short enough (at least at the end of the track) to keep multiple notes from overlapping back onto the beginning.
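
A rough Python sketch of that tail edit, under the assumption that the bounced file is a mono NumPy array running the loop length plus one extra second of written-out reverb: split off the tail and let it ring over the start of the loop on a second layer.

    import numpy as np

    sr = 44100
    loop_len = 8 * sr
    bounce = 0.1 * np.random.randn(loop_len + sr)        # placeholder: 8 s loop + 1 s reverb tail

    body, tail = bounce[:loop_len], bounce[loop_len:]
    layered = body.copy()
    layered[:len(tail)] += tail                           # the tail now rings over the loop's start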

Of course, just as zero crossing fades don't cover reverb issues, reverb edits don't address pops and clicks. Often, the two techniques must be combined to make acceptable loops, and the common result still leaves that tiny, tiny gap in playback. It may not be noticeable in all tracks, but in highly reverberated, ambient, or "spacey" loops, even such a slight gap will tend to stick out like a sore thumb. Luckily for the modern game industry, multiple tracks often play at the same time (e.g., dialogue, three different nature tracks, background voices, clothing and other sfx, and music), so a split-second gap in any one loop may be covered by the busyness of the others. But this isn't always the case. Imagine you are playing a game in which the music is sparse and, because your character is standing still, the only other sound is the light ambience of a cave or an abandoned library. Gaps could ruin the experience here.

Another technique that is less preferable but is still present in the game world is to have an ending in the loop, which makes it extremely obvious when the piece starts over due to the intended silence. 

Still another technique is just leaving the file alone and doubling it, either in a DAW or directly in a middleware program like FMOD or WWise. Because middleware is designed for both the composers/sound designers and programmers to be able to share a common template and can integrate sounds into a game's engine, a programmer could then take over at any point. Regardless of who does the work, someone can set a second copy of the file to begin exactly when the first should start over, looping the second file only so that the reverb tail from the first is carried over but not printed on the original track. While that solves the issue of dissonance the first time a loop is played, it still leaves room for pops and presents a bigger issue of files potentially being twice the size. That can really add up considering game engines can only hold so much at once.


A Newer Method for Making Seamless Loops

Earlier, I mentioned that my method is "theoretically perfect," but because of reverb tails and certain melodic writing styles, it is not yet perfected for music…and since reverb tails are so important in original musical sessions, it may never be perfected for music. However, it does three other things that can be hard to come by in any other method (no pops or clicks, no gaps of silence, and no fades on the ends of the file), still making it a gem and a viable option for many musical tracks.

So, here is what I discovered along with its pros and musical cons:

When you are doing the regular editing of audio files in your DAW of choice, you sometimes need to cut or separate them into two or more pieces. However, if you don't move those pieces after they're split and initiate playback, you'll find that they flow together seamlessly, as if they were still a single file. I noticed this in the first song I ever edited, but it wasn't until the spring of 2014, when I was tasked with transforming some existing environment tracks for a game into loops, that I began to think bigger. 

These pesky tracks had all sorts of weird high points and low points, and nothing was ever consistent enough to just have them start and finish. The audio director knew this and said we were to follow the common method used by his team of layering them all with different starting points and ending points so that any time one faded out, the others would still be going. That way, no one would ever know that several individual tracks were constantly fading in and out. I still presented a version this way, but I remembered how important file size and the amount of files per project were to the development process. 

That's when the idea hit me! If regular audio tracks could play seamlessly even when cut in half, why couldn't loops work the same way? The logic, almost counterintuitively, went like this: rather than searching for near-zero crossing points that happen to line up at the loop point, why not loop anywhere by reversing the order of two split regions, regardless of where the waveform sits?

So, on my own time, I pulled up one of the files on my computer. I created a new project in my DAW, loaded it, then looped it. The change from end to beginning was abrupt. Then, I cut it in half. It still sounded like one single track, as I was expecting, with the exception of the abruptness of the loop point. I excitedly switched the order of the two halves, so that the second half played first and the first half immediately followed.

The result was the same! My file started in the middle of the environmental atmosphere, abruptly changed halfway through, and then looped back through the beginning as if it was a single file. That's all I needed to know for my experiments to begin.

I quickly undid everything and shortened the file on both ends so I could have some leftover room for crossfading. Then, I cut it more carefully, looking for a point closer to zero crossing and perhaps more importantly, listening for a low point in environmental activity (because unique sounds will cause clicks when interrupted, even if there is no huge spike in the waveform).

When I flipped the order of the two files this time, the change near the middle was still noticeable, but much less abrupt. At that point, I added a small crossfade. When an ambience is very consistent throughout, that's all that is required, and it worked on some of the tracks I played with later. But not here. The buzzing still felt different. This was a wild track with monkeys, birds, bugs, tree branches, and all sorts of things happening as it progressed. So, I made a large crossfade, which slowly and unnoticeably transformed the various buzzes, chirps, and calls into different buzzes, chirps, and calls. In fact, sounds like chirps and calls are never too big of a deal because the fades just make them seem quieter. That may even be desirable to further differentiate them from similar sounds.

The key here was getting the subtle change in the buzzing frequency to feel like it wasn't sudden. After all, the buzz had gradually risen in pitch throughout the original file, so my goal was to slowly lower it back to the first pitch I heard. A short fade couldn't do that, but a long one did the trick! By extending the length of the fade, the change in buzz pitch became much more natural, and the middle of my file was now seamless. So, when I played the track back a final time, the result was an unending loop that never popped or clicked, never had a moment of silence, and used crossfades in the center of the file as opposed to fade-ins and fade-outs on the ends.
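
Here's a minimal Python sketch of the whole idea, assuming a mono ambience as a NumPy array at 44.1 kHz (lengths and fade times are made up). The old end-to-beginning seam becomes the middle of the new file, where a long crossfade can hide it, and the new end-to-beginning seam is the old internal cut, which was already continuous.

    import numpy as np

    sr = 44100
    ambience = 0.1 * np.random.randn(20 * sr)            # placeholder environment track

    half = len(ambience) // 2
    first, second = ambience[:half], ambience[half:]

    fade_len = 2 * sr                                     # long fade for busy material
    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = 1.0 - fade_out

    # New order: second half, then first half, crossfaded where they now meet
    looped = np.concatenate([
        second[:-fade_len],
        second[-fade_len:] * fade_out + first[:fade_len] * fade_in,
        first[fade_len:],
    ])

When played on repeat, the last sample of the new file sits one sample before the point where the file begins, so the loop point itself needs no fade at all.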

The director was of course pleased with my innovation and allowed me to implement the techniques into the project. The result allowed us more consistent environments as well as the ability to reduce and condense files further when required.

That's why I believe this is sheer gold to game sound designers. Not only can you condense tracks that don't need to act independently into a single file, you don't necessarily have to worry about reverb tails either. Yes you could add your own reverbs to looped ambiences and atmospheres in your DAW, but then they are written forever. Developers have much greater control over those loops if you create them dry and allow the engine to trigger the appropriate reverbs through middleware. Then, if a swirling orb follows you out of a cave into a forest and finally through the front door of a small cottage, the reverb can automatically change to reflect those environments, and the same reverbs can be applied to all sfx for uniformity.

Either way, reverb tails are much less off-putting at the beginning of an ambience should you need to write them to the track. This differs from music because different instruments or instrument groups may all have different types of reverb and different amounts applied in order to best suit the composition. Often, ambient tracks only require a single reverb at a time to reflect the space in which they are sounding, as was outlined in the orb example above.


How this Method Relates to Music

If you are primarily a composer, perhaps you can see some issues that may arise. A lot of them pertain to looping in general, but let's address them all anyway and then explore some solutions.

The first and biggest concern: why would you cut your music in half and start in the middle? Great point. Don't. I wouldn't either. There are at least two ways around this, though one seems way too difficult and is untested. The other is to change the order of your composition before you export it. Pick a good spot to have as your "alternate" beginning, and then move everything that is before it so that it starts after the last bar of your intended ending. Bounce the file. Keep in mind that, just like with all of your other methods, this will never work if you are trying to loop a composition that doesn't start and end in a similar way, or if you are trying to make a shortened loop out of a longer piece. A loop has to be prepared to flow back into its beginning from the end, so two wildly different sections will end up just like my bug problem explained above.

By moving your track's beginning to the end before bouncing, you essentially get two reverb tails—the one you wanted at the beginning of your track (if you have been working with one of the common methods mentioned above) and the one that you still need for the middle of your piece (which is now temporarily at the end). Because you are working with loops, your DAW should have fixed tempo rates and a grid that perfectly aligns the sections to the tempo. It's cutting time! Cut where your piece should start and move the two sections of your newly bounced file back into the correct places. Now, you'll notice that the tail from the first half overlaps the second half, so you can move one of the two pieces to a second track and fade away any pops in order to blend them perfectly.
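
To make that concrete, here's a rough Python sketch under the same kind of assumptions as before (mono NumPy arrays, made-up lengths): the piece is bounced in the rearranged order—everything from the "alternate" beginning onward, then the original intro—so the intro's reverb tail is written at the very end, and restoring the order lets that tail spill over the start of the body.

    import numpy as np

    sr = 44100
    intro_len, body_len, tail_len = 4 * sr, 12 * sr, sr
    # Placeholder bounce: [body (alternate beginning onward) | intro | intro's reverb tail]
    bounce = 0.1 * np.random.randn(body_len + intro_len + tail_len)

    body = bounce[:body_len]                              # from the alternate beginning to the end
    intro_plus_tail = bounce[body_len:]                   # the real intro, with its tail written after it

    restored = np.zeros(intro_len + body_len)
    restored[:len(intro_plus_tail)] += intro_plus_tail    # intro first; its tail spills over...
    restored[intro_len:] += body                          # ...the start of the body, as intended

In a DAW you'd keep that overlapping piece on its own track, as described above, so any pop at its edge can be faded out without touching the rest of the loop.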

That brings us to a couple of rhythm-related issues: what if the track is highly rhythmic and ends with a downbeat on the one? Well, unfortunately my friend, you will have two hits on the one count overlapping each other. This shouldn't be an issue though, and here's why. If it's looping, you shouldn't really be bouncing an ending downbeat to begin with. It's fine for the soundtrack release so that fans know the song is over, but it would get cut from a loop. Remember how I said not to make a shortened loop out of a longer piece? That's the other reason why. Otherwise you will have extra information you can't get rid of, making crossfades useless. The only solutions, if you have a downbeat ending anyway, are to cut (or, more accurately, trim and fade juuuust right) the hit without the reverb tail, though I suppose you could get creative by cutting the other hit and leaving the tail, or by automating both to lower volumes if the hits aren't identical (phasing issues would otherwise occur), or to find a new way to end the piece.

The other rhythm- and percussion-related issue has to do with embellishments such as cymbal swells, chimes, bells, etc. that make this method, and loops in general, harder or impossible to work with. These ring out on their own in addition to whatever reverb tails are produced, and therefore must be able to sound at the beginning of the track. Again, if you loop playback while composing, you won't notice that your final cymbal swell is off, because the DAW's reverb will actively carry the tail over to the beginning of repeated loops while never playing it the first time through. As soon as you bounce, though, you'll have a reverb tail that also includes the peak of a cymbal swell. Moving that tail to the beginning of your piece under any method will be awkward, since you don't want half a cymbal swell the first time the track plays. If you really need such embellishments at the end of a piece, consider asking whether your development team can trigger that sound separately within the engine, though teams rarely agree to this. On a side note, this is also why loops don't usually feature upbeats, especially melodic upbeats: they have to be structured so differently, and melodic upbeat reverb is a nightmare to deal with outside of clever middleware usage.

Now that we are back around to reverb tails, let's discuss the one remaining issue they can cause: clicks or pops, but only the first time through. With other editing methods, the first loop is great, but subsequent loops are subjected to no tail, a brief silence, or pops and clicks. With this method, those subsequent passes are great, but the triggering of the initial music file may cause a pop or click because of the written reverb tail. This issue would still be present with other methods, but there you have the ability to add a fade just to the reverb tail because it sits on a separate track. You may or may not want to keep that tail separate from the technically seamless part of the loop for this reason (though you are then combining methods and giving yourself a lot more work, since that probably involves bouncing wet and dry versions of everything).

If you are able to simply draw out the reverb tail's initial click with an advanced DAW pencil tool, great. Or, since music, especially when played live, has all sorts of subtle noises in it, you may be able to leave a quick, quiet click, as the beginning of a track is the one place where listeners are usually unprepared to pay attention to the details. I've certainly heard such clicks before…only in those cases the subsequent loops still had the problem.

Don't fear, though; there are two other places for hope. A ridiculously small fade on the front end may be just enough to remove the click without making the file sound like it starts with a slight pause. However, I'm often not a fan of this approach because it doesn't work for every track out there.
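To show how short such a fade can be, here is a minimal sketch in Python, again assuming numpy and soundfile; the file names and fade length are placeholders, not a prescription.

```python
# Minimal sketch: a few-millisecond linear fade-in to hide the initial click.
# Assumes numpy and soundfile; file names and fade length are placeholders.
import numpy as np
import soundfile as sf

audio, sr = sf.read("seamless_loop.wav")
fade_len = int(0.005 * sr)                 # ~5 ms, too short to read as a pause

ramp = np.linspace(0.0, 1.0, fade_len)
if audio.ndim > 1:                         # broadcast across stereo channels
    ramp = ramp[:, None]
audio[:fade_len] *= ramp

sf.write("seamless_loop_faded.wav", audio, sr)
```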

You have less control over the other method, but it is the most ideal situation. Game engines will often fade tracks in to better segue between locations or gameplay states, such as combat and free exploration. If that's the case, the initial pop will be eliminated because the track will either fade up from silence or be crossfaded to "resume" from a later point in the loop, courtesy of the game engine. The really nice thing about this is that even dissonant reverbs become more acceptable at the beginning of the piece, because they will be gone before the track reaches full volume. Don't rely on engine fades to save a poorly executed piece, though. Only use them when they work with the vision of the game.

So, what about ambient music tracks? 

The issues listed above mostly appear with tracks that are highly structured, rhythmic, and melodic, or that have huge amounts of reverb copied over to the beginning. Luckily, structured, rhythmic tracks tend to be the easiest to loop with the common methods, while ambient tracks can be trickier to loop that way. But because ambient tracks sit closer to sound design elements and inhabit the "background" by nature, there is great potential for them to be arranged according to this new method with little difficulty or effect on the track. In addition, their ebbs, flows, swells, and quiet moments mean they can potentially start anywhere, and some even offer space for the reverb to decay significantly on its own before the next movement starts.

I was recently messing around with an ambient track consisting of various string chords to help further illustrate this point. I opened a new session at the correct tempo and imported the piece. I was then able to move chord progressions around and fade them into each other without any pop or click at the beginning of even the first playback, thanks to strings having a slightly delayed attack (the amount of time it takes for a sound to reach its intended volume after the note is initiated).
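For anyone who wants to try the same kind of rearranging outside a DAW, a simple overlapping fade between two regions might look like the sketch below. It assumes numpy and soundfile, the file names and overlap length are placeholders, and the generous overlap only works because of that slow string attack.

```python
# Hypothetical sketch: crossfade two string-chord regions into one ambient bed.
# Assumes numpy and soundfile; file names and overlap length are placeholders.
import numpy as np
import soundfile as sf

a, sr = sf.read("chord_region_a.wav")
b, _ = sf.read("chord_region_b.wav")

xfade = int(0.5 * sr)                      # half-second overlap suits slow attacks
fade_out = np.linspace(1.0, 0.0, xfade)
fade_in = np.linspace(0.0, 1.0, xfade)
if a.ndim > 1:                             # broadcast across stereo channels
    fade_out = fade_out[:, None]
    fade_in = fade_in[:, None]

out = np.concatenate([
    a[:-xfade],
    a[-xfade:] * fade_out + b[:xfade] * fade_in,   # blended seam
    b[xfade:],
])
sf.write("rearranged_ambience.wav", out, sr)
```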

Because "ambient" can be a vague stylistic term by nature, your ambient track may feature creepy sounds or spacey synths or woodwinds and horns or even warped percussion, so you may have to experiment on your own a bit to find the right spot to split the file. However, you may also find that there is no need for the piece to start from the originally intended beginning, thus saving you the task of having to bounce and flip the tracks again.


Some Final Notes
  • Users of Logic Pro X should be aware that its loop feature, by default, adds a few milliseconds to playback. I haven't changed mine, even though I primarily use that DAW for MIDI and scoring, but I'm pretty sure I recall seeing the option to turn this off in the preferences. Pro Tools and most other DAWs shouldn't encounter this issue. Part of the reason this option exists in the newest Logic Pro is that people often choose to humanize MIDI tracks, and some of the quantization options cause notes to sound slightly earlier or later than exactly on tick 1 of beat 1 of measure 1. With the extra few milliseconds, playback can anticipate notes that fall just outside of the actual looped region and still trigger them, so you can hear your loops as intended while composing.
  • This also means that you must watch out for humanized notes. The beginning of a track cannot have notes that start before the first measure, or they won't be included in a bounce; notes shouldn't be humanized to a position outside of the beginning or end of their regions, or they can be cut off; and wherever you intend to make your cut in the bounced file should respect both of these rules, as if that point really were the beginning of your track. In fact, it is good to keep notes at the beginning of a Logic score or region at least a few ticks after the first beat.
  • MP3s add time to the end of loops that some game engines can't account for, which will cause a slight pause in even the perfect loop. Unless your track is a one-shot, it's usually better to convert your WAV and AIFF files to Ogg Vorbis (see the conversion sketch after this list), unless the developers ask otherwise, will do the converting themselves, or plan to implement full-sized uncompressed files.
  • Always work with a grid and tempo information, and deal with bounced files in a separate project. If the final files aren't perfectly aligned to a grid, loops will become staggered.
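That conversion step can be done in many tools; here is one minimal sketch in Python, assuming the soundfile library is installed and its underlying libsndfile build includes Vorbis support. The file names are placeholders.

```python
# Hedged sketch: convert a bounced WAV loop to Ogg Vorbis.
# Assumes the third-party "soundfile" library with Vorbis support.
import soundfile as sf

data, sr = sf.read("seamless_loop.wav")
sf.write("seamless_loop.ogg", data, sr, format="OGG", subtype="VORBIS")
```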
Thanks for reading! If you'd like to hire Nate Combs to work on your game project, visit his website at www.natecombsmedia.com or contact him directly at natecombsmedia@gmail.com.

Friday, February 6, 2015

Dave 'Chuck' Bennett - Soon

David Chuck Bennett is an English singer / songwriter I've interviewed before. His indie and folk-style music once inspired a short film / music video hybrid, and he is back at it again! This time, he has gone for a more pensive visual, and his new single "Soon" is stripped down to just piano and light sampled accompaniment behind the vocals. However, that same indie folk spirit that inspires films is still very much present. The beautiful solo piano work and emotionally connective lyrics again draw listeners into a story, and the indie film community could benefit from either the song or the instrumental.

Finally, this piece marks Chuck's debut appearance in one of his music videos, so go on, enjoy "Soon."


Tuesday, February 3, 2015

New Changes to the Blog!

As of 2015, big growth has come to my company, and as a part of that growth, I have completely redone my main website. Now, this blog can be incorporated right into it! Isn't that exciting? Plus, the blog has proved really useful as a stage for musical artists to promote their work and connect with others, and we've even had a few famous people stop by.

The name of the blog will change just a bit and may change further as my company, which also works with voiceover artists, begins to absorb it. In addition, we want to provide another platform through which others can connect with and inquire of those with musical and voice talent. After all, they say that efficient marketing is everything, right? It's fun to share your music with people who get you but who are busy trying to do the same thing themselves. However, wouldn't it add to the fun to share your abilities with people who might become your fans or even hire you for a gig? The answer is yes.

So, I will continue to share my thoughts, research, and experience on topics I believe we can all benefit from, and you might start hearing people who work in other creative fields. Film, TV, and video game producers, that means you! Voice artists, say hello! Sound designers, kablam! Anyone and everyone who is involved with the production process and interacts with musicians regularly, tell us your thoughts and share your current projects.

Saturday, January 10, 2015

The North American Conference on Video Game Music with Winifred Phillips


You all remember Winifred Phillips, award-winning author of the book "A Composer's Guide to Game Music" and the musical mind behind hit games such as LittleBigPlanet 3, Assassin's Creed Liberation, God of War, and many more. Well, in just a few days—that is, January 17 & 18—she will be honored as the keynote speaker for this year's North American Conference on Video Game Music (press release below). This is a big deal, so be sure to congratulate her and offer her your thanks for doing so much good for the world of video games and for the world of music!

I got to do a short interview with Winifred, and here is what she had to say:

1) What is your wildest dream in video game music? Live concerts, best selling titles, teaching and awareness, something else?

All of the above!  Game music is a vibrant genre with an extraordinarily diverse musical vocabulary and a huge community of devoted fans.  There are many live concert series and bestselling sound track albums, and conferences such as the North American Conference on Video Game Music are a great step towards making the subject more available within academic institutions.

2) What advice would you give to others who look up to you and aspire to achieve similar goals?


I actually write a lot about that in my book, A Composer’s Guide to Game Music (The MIT Press), which was published last March. In my book, I discuss what steps an aspiring composer should take to secure gigs in the competitive field of game music composition.  It’s important for an aspiring composer to grab the attention of decision makers at a game developer or publisher.  After developing an excellent demo reel, the aspiring composer can begin researching what new development studios are forming, or what new game projects may be reaching the production stage. Timing is everything, so game composers have to stay alert and keep reaching out to potential clients. The subject can be pretty complex, and I discuss it much more thoroughly in my book.


My thoughts on the interview:


I have many wild dreams in video game music and other artistic areas as well, but my most outlandish one involves founding a massive festival / competition…on par with the World Cup or the Olympics.


I was a bit silly to ask the other question without being more specific, since Ms. Phillips gives a lifetime of advice in her book, but luckily, what I had hoped would happen, happened! Out of all of the advice she has to give, she chose to share getting the attention of decision makers and researching projects in production. I've heard both of these things time and time again in my own career and cannot stress enough how important they are. Of course, though they are two of the most difficult skills to establish at first, they are nothing without that excellent demo reel.


Thanks again to Winifred Phillips for her interview. You can learn more about her on her site at winifredphillips.com. 


I also got to interview William Gibbons, the conference organizer, and here is what he had to say about similar topics:


1) How do you believe the study of video game music could further the development of the rest of the music world?


I think the study of game music is important in a lot of ways, and to many different groups of people. Some of the most innovative and interesting compositional techniques happening in music today come from games. Technology and player expectations change so quickly that composers and audio designers really have to come up with solutions to new and unique problems constantly, and even those scholars and composers who don’t work on games can really learn from exploring how those problems get solved.


But most importantly, millions of people listen to game music every day, whether while playing the games or just listening to the soundtracks for enjoyment. I’m a big believer in being educated consumers of music, and learning a little about the music we enjoy listening to. That applies to us as scholars and composers, but also to our students and the people who read the articles and books we write.


2) What is your wildest dream in video game music? Live concerts, best selling titles, teaching and awareness, something else?


We’re already seeing game music take a much more prominent position in music culture. Live concerts of game music are selling out around the world, people are buying and listening to albums of original and remixed music, and sheet music for performers is even available for some games. For me as an educator, I’d like to see game music become more common in schools, both in performance and in classrooms, right alongside “classical” music, jazz, film music, and the other musics we teach.


3) What advice would you have to others who look up to you and aspire to achieve similar goals?


To any music students or professionals that want to study game music, I say go right ahead! There’s so much left to research for musicologists like me, or music theorists—we’ve really only started to scratch the surface of what’s there, and there’s a constant new supply of great new music to study (and enjoy). And for composers, studying at least the basics of game music is absolutely one of the smartest things you could do career wise.


My thoughts on the interview:


I completely agree! As someone who listens to game music a lot (perhaps even more than I have time to play the actual games), and who is always writing video game tunes, I can easily say that innovation and passion in studying and applying game scoring techniques lead to educational advancement and, quite frankly, creative freedom.


I am just finishing up my thesis semester in graduate school for film and game composing, and I am one of the lucky few students who has had teachers and courses present us with game music in class!





And now, the press release for the NACVGM:


Conference Brings Leading Game Music Scholars and Composers to Texas

Fort Worth, TX – Video game music has come a long way from bleeps and bloops. Today’s game soundtracks often equal film scores in quality, and this music is consumed in large amounts by millions of players every day: studies suggest that 58% of US citizens—and 97% of young adults—play video games, with an average weekly play time of around 8 hours. Concerts of game music regularly play to sellout audiences across the globe, as orchestras and bands cater to audiences eager to hear live versions of their favorite tunes.

Game music has also emerged as a major topic of academic study, and on January 17-18 many leading game-music scholars and composers from across the US and Canada will gather in Fort Worth, TX on the campus of TCU for the North American Conference on Video Game Music. This conference will feature two days of presentations and discussions on all aspects of music in games, including new composition techniques, approaches to the analysis of game music, and case studies of specific games.

The keynote address will be given by Winifred Phillips (Twitter: @winphillips), the award-winning composer for games including Assassin’s Creed: Liberation, God of War, Speed Racer, Total War Battles: KINGDOM, and six games in the popular LittleBigPlanet series, including LittleBigPlanet 3. Phillips is also the author of the bestselling book, A Composer's Guide to Game Music (The MIT Press, 2014), which recently was awarded the 2014 Global Music Award Gold Medal for an exceptional book in the field of music. (http://www.winifredphillips.com/composersguide)

More information about the conference is available at http://vgmconference.weebly.com.

For further information or interviews, please contact:

William Gibbons
Assistant Professor of Musicology
TCU School of Music
Email: william.gibbons@tcu.edu
Phone: 919.357.1769
Twitter: @musicillogical