Simply put: when working with MIDI instruments, control your note velocities and build some variety into your velocity range. You can stop reading here if you've got the strength to keep things under 70 for the majority of a song, but I still recommend at least peeking at the example.
***
MIDI velocity, the combined volume and tonal difference tied to how hard a note is played, has a range of 1-127: 1 is the quietest, and 127 is so strong it should and will ruin your songs. The average producer and under-trained composer do not know this. So often, I see people create music by recording a MIDI instrument and jacking the velocities all to 100 (which, by the way, is still a loud forte to fortissimo) or to 127 (even worse). This is bad. If you're doing this, stop.
That's a fantastic way to destroy your tone, especially if you are working with high-end sample libraries. It's far better to keep your average volumes between 30 and 60 rather than between 90 and 127. Then, if you have a solo instrument or want a slightly more aggressive tone, sure, 80 is OK. And look at that: you actually had room to go up! Even if you are writing epic Hollywood-style trailer music, there's no need to beat the crap out of 100% of your instruments for 5 straight minutes. And please, don't make every velocity exactly the same. Give your work the expression you already have in your head!
Not only will you achieve better, far more realistic tone by following this advice, but you will also be pushed to layer other instrument types and sections when you need more volume. This goes for pop recordings that use MIDI instruments just as much as it does for orchestral works.
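If your existing parts are already slammed into the 90-127 zone, you don't have to redraw every note by hand. Here's a minimal Python sketch using the mido library (the file names and exact target range are assumptions, so adjust to taste) that rescales an overcooked performance into that 30-60 zone while preserving the relative differences between notes:

```python
# pip install mido
import mido

LOW, HIGH = 30, 60  # target range for average volumes, per the advice above

mid = mido.MidiFile("overcooked.mid")  # hypothetical input file

for track in mid.tracks:
    for i, msg in enumerate(track):
        # A note_on with velocity 0 is really a note-off, so leave those alone
        if msg.type == "note_on" and msg.velocity > 0:
            # Map 1-127 down into LOW-HIGH, keeping relative dynamics intact
            scaled = LOW + (msg.velocity - 1) * (HIGH - LOW) / 126
            track[i] = msg.copy(velocity=round(scaled))

mid.save("tamed.mid")
```

Most DAWs can do the same thing with a velocity transform (Logic's MIDI Transform window, for instance); the point is the math, not the tool.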
That's it!
***
In case you need a little encouragement to kick your butt into gear and start using MIDI correctly, here's an example of a song I'm working on right now:
This is an oboe from the Vienna Instruments collection in Logic Pro. It is my solo instrument for the first main part of the song, yet that shade of green in the piano roll (Logic can color notes by velocity) represents a velocity of only around 55—and I haven't even set a wide range of dynamics or articulations yet!
And this is my cello section. It starts at 32 and builds up to 76 for the "big part." However, the real jump in volume comes from brass and woodwind doublings. So let's listen to a fully unmixed demo.
That's right! Other than a little batch reverb over each of the three instrument sections and the faders being set to about -15 dB for now (slightly higher for the oboes and double bass), this excerpt is completely unmixed. Because my velocities are set right, I'm going to have a really easy time creating dynamics, additional articulations, and the final mixdown.
Saturday, April 18, 2015
The Lowest 25%, Part 2: Achieving Space, Clarity, and Dynamics When Producing and Mixing
Doing less now makes more seem bigger later…it's true!
Last week, I talked about achieving fullness when producing and mixing in a studio setting. You can check out Part 1 of the series here. If you don't have enough in your cup, you won't be able to take a song to the places it needs to go. However, we are oftentimes given extra parts, allowing us many options for achieving and maximizing fullness. But, even when a song is relatively high energy, it's just as important to make sure that your cup isn't 100% full 100% of the time.
Now, a full-on symphony is going to be quite a bit more dynamic naturally than, say, an R&B song, which is going to be more dynamic than a pop-punk anthem. Yet each has its place for ups and downs, fullness and space that take turns relieving and exciting listeners. In general, a sweet melody or soulful solo shines over a moment of great space, while more aggressive solos or three-part harmonies often find their places among sections that need to push fullness to the very brim. Of course, there are exceptions. For example, a cappella sections feature voices—usually several—and nothing else.
So, what can you do with that overlooked 25% of the cup to balance your mix once you have all of the necessary elements for achieving fullness?
- Reduce what's being played in the verses / down moments
- Reduce how many instruments are playing in the verses / down moments
- Keep volumes low
- Use automation peaks for transitions
- Craft EQ differently from main features
- Don't be afraid of different reverb practices
Reduce what's being played in the verses
This is a bit more obvious, and a good musician or producer will already know to either change playing techniques or how much is being played between different sections of a track. However, this can be a difficult concept to grasp if you are a musician who is new to recording or playing in a live band. It can be even more difficult if you've been playing the same way for 30+ years and don't like change. People who come from solo performance backgrounds are used to having to sum an entire song up on one instrument, and they are used to playing more for the duration of the song.
Some practical examples from my "fullness 5" list would be:
- changing full 8th- or 16th-note background guitar strumming patterns to palm mutes, whole notes, or individual note picking
- playing whole notes or longer on pianos and keyboards, using fewer notes per chord, adding in very small licks or runs sparingly
- holding out only one or two notes on a pad or organ
- only including stabs or short riffs on synths
- using harmonies halfway through the section or on certain words / phrases
The special thing about this is that, when combined with some of the other techniques, you may decide to do less throughout the song. For example, your verses and choruses both may have whole note filler parts that are connected by a more active pre-chorus.
Reduce how many instruments are playing in those spacious sections
If the previous section is difficult for some to grasp, this one might be impossible. However, in any good composition or production, nobody plays the entire time, with the exception of pop loops and drones. Having electric guitars two and three, the Rhodes, or the piano play less is good, but having them wait until halfway through the verse, pre-chorus, or chorus to play anything at all has the potential to be great. This is especially true when the main instruments of the song (lead guitars, keys or strings, bass, drums, etc.) have reduced what they are playing.
Keep volumes low
In order for all of these extra "fullness 5" instruments to make up the bottom 25% of your cup, they have to stay in the bottom 25%! For the most part, the majority of these instruments should not be consciously heard. Did you try my suggestion from part 1? Whether you are mixing a verse, chorus, tag, bridge, or otherwise, try lowering the volumes of your filler instruments until you can't really tell if they are there or not. Go back and listen to your mix again, then mute any of those tracks when you get to the section in question. You should feel a big change in fullness / space, and if you practice, you might even be able to learn to hear and distinguish those instruments when you unmute the channel(s) in question.
If you have only one or two extra background instruments, or if you have several that barely play anything, you can get away with a few "feature" moments where they stand out a bit. Just remember, though, that the more you have, the less each part can do. Too many parts at too loud a volume can really over-thicken and muddy up a mix.
Use automation peaks for transitions
Background instruments can greatly manipulate the way transitions as a whole feel. In addition to being able to layer in new instruments at distinct intervals, you can use tiered, steady, or a combination of tiered and steady automation to boost the impact of a downbeat.
Because your fillers are already at a lowered volume, you have a lot of dynamic range to play with. You might even pull one or two of them higher up in the mix for those few seconds. A B3 with the speed switch activated for the rotor in a Leslie cabinet or an aggressive synth are two examples of tracks that may pop during a transition.
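If it helps to see the difference in numbers, here's a quick Python sketch (all values invented for illustration) of the automation points a steady ramp versus a tiered build would generate over a four-bar transition:

```python
def steady_ramp(start_db, end_db, bars):
    """One smooth, continuous rise across the whole build."""
    return [(bar, start_db + (end_db - start_db) * bar / bars)
            for bar in range(bars + 1)]

def tiered_ramp(start_db, step_db, bars):
    """Hold each level flat for a bar, then jump up a step on the barline."""
    points = []
    for bar in range(bars):
        level = start_db + step_db * bar
        points += [(bar, level), (bar + 1, level)]  # flat plateau, then jump
    return points

print(steady_ramp(-24, -12, 4))  # gradual: -24.0, -21.0, -18.0, -15.0, -12.0
print(tiered_ramp(-24, 4, 4))    # stepped: plateaus at -24, -20, -16, -12
```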
Just when your cup is close to spilling over, you can then take a little sip out of it by lowering those volumes back down... not to where they were before, though. You don't want them to disappear completely. You just want them to sound like they disappeared but not feel like they did. By keeping these instruments out of the way, you allow a greater clarity without losing fullness.
Remember, transitions don't just happen between sections. Sometimes, they occur at the halfway, two-thirds, or other points in a single section to give that section some more meat before transitioning to the next one.
Craft EQ differently from main features
The daddy of pre- and post-production clarity tools, EQ has the ability to totally transform the way audio sounds. You're going to want a fairly natural sound for your main instruments in typical situations, though there are many pop mixes that feature heavily EQ'd pianos, vocals, and other instruments. However, when you over-process tracks like this, you run the risk of a mix sounding too thin or too muddy.
With the bottom 25% of your cup, over-processing, and yes, even under-processing, can actually be used to your advantage to keep your fillers from competing with more important instruments in a similar frequency range.
Instruments like second acoustic guitar or keys might have all of the bottom end completely cut in order to keep from competing with the bass or mid-friendly instruments in a song. Likewise, an electric guitar or synth might have the high end automated out during the verse and sweep back in later on. Thinning out or muffling instruments like this is a great option when you need to clear up sonic space for a more important instrument, especially when both instruments are panned to a similar location.
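As a concrete sketch of that kind of bottom-end cut, here's a minimal Python example using the scipy and soundfile libraries (the file name and the 200 Hz cutoff are assumptions; pick your cutoff by ear against the bass):

```python
# pip install scipy soundfile
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, rate = sf.read("second_acoustic.wav")  # hypothetical filler track

# 4th-order high-pass at 200 Hz: clears out the low end so the filler
# stops fighting the bass, while leaving its mids and highs untouched
sos = butter(4, 200, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, audio, axis=0)

sf.write("second_acoustic_hp.wav", filtered, rate)
```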
On the other hand, especially when dealing with synths, very natural-sounding piano samples (or live performances), and orchestral recordings, you may do a lot less with the background EQ than you do with the EQ on the main instruments. Synths are drastic yet particular in the tones they create: nearly any of the billion knobs you get to play with adjusts the frequencies the synth produces as well. So, if you've found the perfect synth or pad tone and it isn't overbearing in any range, you may not cut anything and still have a beautiful sound when it's pulled down in the mix. Or, you may even boost the range you want to stick out the most and lower the output (which is essentially the same as cutting the things you didn't want). One example I often hear is people adding a little extra high end, somewhere around 8 kHz or above, to give their synths a bit of sparkle. Turning the output down afterwards pulls that sparkle down too, but it clears out some of the mid and lower frequencies that may not have been right for the situation.
Don't be afraid of different reverb practices…
…especially if you are playing really big, spacey, ambient music!
You've all heard how reverb's done in the industry nowadays. You get your track sounding nice and create a send that receives some of your instrument into a very wet auxiliary reverb track. You never bothered to see what changing from pre-fader to post-fader does, and when your friend tried to explain it to you, it went right over your head.
Well, in orchestral music, reverb is so powerful that it can make low-end instruments feel like they tripled or quadrupled in size. What? Reverb on the low end? In orchestral works, it's a requirement, not a crazy option. While you may not be adding reverb to the kick and bass of a pop song, most other instruments get reverb, and this is really where you can start to change the function or "appearance" of your fillers. All of a sudden, your keys start to sound like electric guitar, your electric guitar starts to sound like a pad, your organ widens up, and your choir starts to shimmer.
Some of the other reverb practices may cause your background instruments to be more present in a mix or lose identifiability, as you could see above, but that's fine if the ones that do stick out more don't get in the way of those important features. So, how do you achieve those tones?
First, pre-fader sending allows the reverb level (wet) and regular fader levels (dry) to be controlled separately. This means that you're going to hear a lot more reverb at lower send levels on pre-fader tracks. This also means that your main fader can be all the way down, and you will still hear the reverb through the aux channel. If you are looking to keep the digital sonic space relatively the same while making an instrument feel like it is moving forward or backward in that digital room, this is a great option to play with. Or, if you don't want more than a couple of reverb tracks and need a big, far away sound, pre-fader is for you.
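If the pre/post distinction still feels abstract, here's a toy model in plain Python. This is not any DAW's actual signal path, just the underlying gain math:

```python
def channel_output(sample, fader, send_level, pre_fader=True):
    """Return (dry_out, send_to_reverb) for one sample of audio."""
    dry = sample * fader
    # Pre-fader taps the signal before the fader; post-fader taps it after
    send = sample * send_level if pre_fader else dry * send_level
    return dry, send

# Fader pulled all the way down: pre-fader still feeds the reverb aux,
# while post-fader sends nothing at all
print(channel_output(1.0, fader=0.0, send_level=0.5, pre_fader=True))   # (0.0, 0.5)
print(channel_output(1.0, fader=0.0, send_level=0.5, pre_fader=False))  # (0.0, 0.0)
```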
Alternatively, and this is my preference much of the time because of the amount of orchestral work I do, you can set up reverbs on aux tracks and route the output of your instruments directly to that aux instead of bussing them through an aux send. This has the same effect as putting the reverb plugin right on an instrument's channel, but it keeps you from having to run a separate reverb unit on every single track, which takes a lot of processing power. By changing the output, you can achieve the same effect for multiple instruments with one reverb unit. Like going post-fader, this gives you the ability to control the wet (reverb) and dry (regular signal) levels. Be careful, though, not to process the aux beyond the reverb unit itself (for example, the EQ tab within a convolution reverb), or the added effects will be applied to all of the instruments running into that aux. The nice thing about sends is that you can process the reverb tracks however you like without affecting the instruments.
So why do I prefer to place instruments fully in a reverb unit instead of just going post-fader? Well, the two aren't quite the same. Going post-fader or even pre-fader often makes instruments feel like they are just moving closer to or further from the monitors you are listening on. This isn't bad, but going fully through the reverb unit makes tracks feel like they have been placed inside a new room, not just a digital space with nice reverb. Since not all instruments are recorded with room mics, especially when you are working with orchestral samples, dry guitar tracks, or the recordings of under-budgeted amateurs, you can almost emulate different mic positions by going directly into the reverb. Nothing feels as close, up front, or in-your-face when you place a track in the reverb unit's room, especially when you compare two identical tracks at the same volume levels. So, this option isn't for every track, especially main ones that you want to use to punch listeners in the face. However, it's great for making tracks feel like they were recorded in a different space than they originally were.
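And here's the same toy model extended to contrast a send with routing the output straight into the reverb aux. Again, this is only illustrative math, with the reverb reduced to a stand-in function:

```python
def send_routing(sample, fader, send_level, reverb):
    """Aux send: the dry path stays separate from the reverb path."""
    dry = sample * fader
    return dry + reverb(sample * send_level)

def direct_routing(sample, fader, reverb, mix=0.3):
    """Output routed into the reverb aux: the entire signal passes
    through the unit, so the track sits 'inside the room' and the
    wet/dry balance is set within the reverb itself."""
    return fader * ((1 - mix) * sample + mix * reverb(sample))

toy_reverb = lambda x: 0.8 * x  # stand-in; a real reverb smears over time
print(send_routing(1.0, fader=0.7, send_level=0.4, reverb=toy_reverb))
print(direct_routing(1.0, fader=0.7, reverb=toy_reverb))
```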
Conclusion
Applying even a few of these techniques to mixes that give you trouble or that leave you unsure of what you can do with the excess background instruments will make huge differences in the clarity, dynamics, and space/fullness of any given section. Plus, some of these principles can carry over to the rest of your mix—your lead vocal, bass, lead acoustic/electric, piano, main synth, violin, drums or whatever else is your main focus.
If you haven't read Part 1, be sure to check it out, as it explains the "fullness 5" and ways to use the bottom 25% of your priorities to liven up a song without making it obvious you did anything differently.
Thanks for reading! Be on the lookout for the rest of this series, where I'll cover topics that can be applied to live mixing, like monitor volume, dealing with too much, and yes, having enough!
Friday, April 10, 2015
The Lowest 25%, Part 1: Getting More Fullness when Producing and Mixing
If you've been around the audio world for more than a few months, it's likely you've heard an analogy in which mixing gets compared to filling a cup. Somebody probably recommended filling that cup with kick (or drums) and bass first, warning you not to put in too much right off the bat for fear of your cup overflowing. Still, as you added other elements, power players, such as lead guitar, vocals, and the drums and bass, began to take a lot of your attention.
It's easy to set and forget the instruments and effects that hardly seem to be present already, so why bother dedicating much effort to the things that barely make it into the bottom 25% of your cup? Well, clarity, space (fullness), and dynamics are 3 of the most important areas any mixing engineer can master, and seeing how these things relate and working to achieve them can really add to the sense of unity a great musical group brings to the stage or studio.
Assuming you have at least a basic understanding of effects like EQ, reverb, and compression, this series of articles will focus on how you can use the least popular channels in your mix to enhance the clarity, fullness, and dynamics of those that make up the "attention-grabbing" 75% of a live or studio mix. Oftentimes, what we don't consciously notice—that quietest 25%—is what makes those featured parts sound better…or worse!
Part 1: Studio Mixing - You Must Have Enough in Your Cup
Whether you write pop songs, pump out beats, compose orchestral scores, or focus only on mixing, you've probably run into a situation where something just doesn't feel right. I regularly hear work from people who released it feeling like it wasn't full enough, and they have no idea how to fix the issue except by turning up their favorite channels more and hoping for the best.
Fullness is almost a trick category though. Think about it. If you were asked to bake 12 muffins and were given enough muffin mix for 6, it doesn't matter what you do. You'll end up with 6 muffins. Sure, you could spread it out over 12 cups, but you'll still end up with 6 muffins…just in 12 halves. In the same way, if you only have parts for 6 instruments, all of which play a vital role in the song, you can adjust the EQ over and over or automate the faders to be louder at the chorus, but you'll still only be hearing 6 instruments. This is like filling your cup 75% of the way with the important and "featured" instruments and nothing else.
However, it is possible to take more instruments and more parts and still make it sound like only 6 or 7 things are going on. It's kind of like being given enough muffin mix to bake 12 muffins when you only need 6. You could make 6 regular muffins, or you could pour a little extra batter on 1 or all of them to make them bigger. Maybe a bit drips into an empty cup and you get an extra bite-sized muffin, and whatever's left over can be thrown away or saved for another time.
So, it is vital to make sure the bottom 25% is even available before you can worry about using it.
If you're strictly a mixing engineer, this may call for asking the producer for some more tracks, but if you are also creating the music, here is what I call the fullness 5—instruments that make great fillers—and tips on how to appropriately use them:
- Acoustic Guitar / Second Acoustic Guitar
- Piano / Keyboards
- Organ
- Synths / Pads
- Extra Electric Guitars
- (Vocals)
Composer's note: if you are writing for orchestra and experience fullness problems, it might be time to start studying woodwinds; odds are you know more about strings and brass. Also, if you are writing hybrid / Hollywood-style scores, synths or any of the other instruments on the fullness 5 list are great tools to use too.
The key with fullness parts like these is to fight the urges naïveté will present you with. They don't have to be at the same volumes as everything else in the mix, even if somebody played a killer organ lick or a great piano part. In fact, when used as fillers, turning these instruments down to just below the point where you can no longer distinguish them—or can just barely hear them when other instruments rest—is best. If you have a hard time believing it makes any difference to have something that quiet in your mix, hit the mute button, and your ears will be surprised at the difference.
Acoustic Guitars
These magical instruments have decay, meaning their sound fades away after the initial pluck, but they can add massive fullness to songs. This is even more so if you record a second acoustic guitar or double the first.
I remember creating a song for which I had two acoustic guitars playing identical parts in some places and slightly different rhythms in others. I panned one somewhere to the left and the other somewhere to the right. Even though acoustic guitar was a "feature instrument," only one idea was originally written, and I had organ and electric guitars as well. However, I recorded two tracks in case I needed a backup, and it was the second acoustic track that really added to the fullness. I had left it muted while working on the other, more distinguishable tracks, and at one point I asked, "Why is this song so weak all of a sudden?" Then I realized acoustic 2 was still muted, and I was thankful that I had recorded it.
Due to their decay, acoustic guitars can present different amounts of fullness depending on how they are played. If you strum out whole (or longer) notes, the beginning of each chord will be the most full, while if you maintain a steady eighth or sixteenth note rhythm, the sound won't fade away.
Just remember, if you are doubling a single acoustic track, you'll have to offset the timing a touch and maybe edit them slightly differently in order to avoid phasing issues.
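For anyone curious about the numbers behind that nudge, here's a rough Python sketch using numpy and soundfile (the 20 ms figure and file names are assumptions, and real doubles also benefit from slightly different edits or takes):

```python
# pip install numpy soundfile
import numpy as np
import soundfile as sf

audio, rate = sf.read("acoustic.wav")  # hypothetical mono take

offset = int(0.020 * rate)  # ~20 ms: reads as a double rather than as phasing

original = np.concatenate([audio, np.zeros(offset)])  # padded to match length
double = np.concatenate([np.zeros(offset), audio])    # same take, nudged late

# Pan the pair apart: original left-heavy, double right-heavy
stereo = np.column_stack([0.8 * original + 0.2 * double,
                          0.2 * original + 0.8 * double])
sf.write("doubled_acoustic.wav", stereo, rate)
```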
Pianos and Keyboards
Whether it's jazz chords on a Wurlitzer, simple and steady triads on a grand piano, or just the 1 and 5 notes repeating on an old upright, keyboards are great at filling out space and blending in, even if the initial attack (striking) of the notes sticks out a bit. Plus, keys enhance the sound of guitars, bass, synths, or other "featured" keyboards when played properly.
Strong power chords can give distorted electric rhythm guitars more punch or bass guitars more tone, and they can give the impression that there is less decay on either. In addition, higher notes on a piano can fatten lead guitar and synths or blend the end of a guitar run into the beginning of a piano lick so seamlessly that you don't know when one instrument stopped and the other started (dovetailing).
Organ
Before there were synths and pads, there was organ…the mother of all non-decaying fillers. Because an organ can hold out a note at the same volume indefinitely, it can potentially be used for an entire song without anyone realizing it was there in the first place. The B3 is the go-to organ for most situations, and programs like Logic Pro and Pro Tools come with some pretty decent stock replicas should you not have access to one of these very pricy instruments.
My biggest tip here? The less you do, the less noticeable organ is…without sacrificing fullness. It has the ability to blend extremely well or stick out on a moment's notice, and it is just as common to hear it stabilizing an acoustic guitar or piano at the beginning of a song as it is not to include it at all until the final verse.
Because organs do not have decay, the low range can be used to beef up long bass notes, perhaps even better than a piano can. The mid and high-mid registers can do the same for guitars. Only in the absolute highest range does an organ have difficulty blending with anything but similarly pitched synths, but when you reduce its volume to be in that lowest 25%, it can add a great presence without being overbearing.
Synths and Pads
Considering pads are just synths edited well enough to sound calming or deep, this group easily offers the most variety. There is literally an unlimited number of synth types that can be created in any given interface thanks to the many parameters the user can control. In addition to sounding harsh or biting, techno-clubbish, 8-bit, spacey, swirly, stringy, brassy, or even like real acoustic instruments, synths and pads can be either decaying or non-decaying instruments. You get to choose!
Because of the high level of customization, it's very easy to match them to the vibe of most songs, and they require little arranging thought if you aren't great at playing music. A single note, or one note at a time, will do in many cases, and chords only add to the fullness. Pads blend a little more quickly than regular synths, and, while a louder pad may be appropriate for a real ambient or spacey song, you might never know pads are in some of the other tunes that feature them…
Electric Guitar Heaven
…Or, even more shockingly, you might think you are hearing a pad, when you are in fact hearing additional electric guitars. These instruments are surprisingly versatile—they can be confused with or help to enhance keyboards, synthesizers, other guitars, organs, and even vocals.
You can quickly overdo it when adding extra electric guitar tracks, but there are lots of possibilities for the many situations background EGs are used in. Simply doubling and panning a part may work for one song, while changing the tone works for another. Muted power chords, drawn-out open chords, or fast CAGED chords may be played in contrast to each other, or each chord could be picked one note at a time. Effects may be used to emulate those pad-like sounds, or ostinatos (short, repeating patterns) may focus on one or two notes.
Background electric guitar writing and mixing could be an entire article in itself, but, like the other instruments, not everything has to be heard distinctly…especially if there are other, more prominent guitars present.
Vocals
Not officially on my fullness-5 list, extra vocals are a great way to add fullness to a song. However, they tend to stick out and catch attention, even when in the background, because we humans love to hear the sound of our voices. Things like "Oohs," "Aahs," choirs, harmonies, doublings, and octave doublings all serve their purposes in the right settings, but they don't often get mixed so low that you couldn't recognize them as vocals.
Wrap-up
So remember, some of these things are probably going to be features in every track you mix. However, if they aren't doing anything interesting, they can be pulled back a bit in order for something else to shine (more on that in my discussion on clarity).
And, if a particular song doesn't have the oomph you want for that epic line, try adding a small amount of something new. Got a song that has two electric guitars, bass, drums, acoustic, pad and piano, yet a crescendo to the pre-chorus just doesn't add the fullness you wanted? Perhaps a quiet synth doubling the pad, an organ, and a Rhodes doubling the piano will help.
Thanks for reading! Be on the lookout for the rest of this series, where I'll cover topics like having too much in the cup, using reverb, monitor volume, and even having enough in the cup in live scenarios.
Tuesday, April 7, 2015
How Do You Use the Tube? An Interactive Conversation About Media Creators and YouTube.
Hey everyone,
I've been doing a lot of research on YouTube lately and thought it would be great to discuss our greatest uses for the massive video site. Sure, self-promotion is huge among game developers, filmmakers, composers, voice actors, famous reviewers, singers, bands, artists, and everyone else in the creative field, but if everyone only promoted their own content, YouTube would be dead. Its success is due to a highly interactive exchange of the arts and information through a visual format.
So, other than getting your own work out there, what constantly draws you back to the tube?
For me, I've often used it to find and study new music. In fact, this is the number one reason I sign in. I'm on it right now listening to a new OST! However, I also find reviews on games or musical equipment, demos of those games and compositional products, and trailers for films and other media to be quite useful.
In addition, especially when I first joined the community, I found it very helpful in connecting with others who shared similar interests and am friends with many of those people to this day. Plus, since I have quite a loyal fanbase, I recently started using it as a way to give back to the fans! In fact, for the month of April, I'm doing a giveaway that can be found on any of my videos with 1,000-101,000 views. Check it out!
Go ahead, leave a comment! Let us know what YouTube does for you :)
Monday, March 9, 2015
The Composer's Guide to Game Music: An Award-Winning Book by Game Scoring Icon Winifred Phillips
Hey friends, readers, composers, and all artists alike! As promised, I have read through the entire Composer's Guide (paraphrasing the title) by one of my role models, Winifred Phillips, and can now post a full review. I enjoy talking too much online, in one-on-one situations, and when I'm teaching, and a blog is kind of like all three of those situations, so I figured I'd review each chapter. OK, an overall review will be at the end too...and on Amazon. Enjoy!
Chapter 1
The hook. This is what artists commonly use to grasp the attention of those they are presenting a work to, and Winifred definitely hooked me in this chapter. Her writing style is sincere and friendly, quirky and humorous, and full of passion that connects with a game composer like myself. In addition, like any good teacher, Phillips shares a plethora of analogies and personal stories to paint vivid pictures of that which she hopes to convey to us as readers.
The great thing about this chapter is that it also gives people who aren't sure if they want to move into the field of game composing a little test they can assess their passions with. And that is also the biggest takeaway point. Love games!
Chapter 2
This chapter describes the essence of the book. It shows the book's approachable nature (and really that of Phillips, who loves to engage with fans at conferences or via social media sites like Twitter). Throughout the entire text, she continually presents information about the game scoring world that can benefit both complete newbies and experienced veterans. Because I lie more on the experienced side, I knew much of what she presented here, but it is always great to hear inspiring quotes, to be refreshed on where you came from, and to learn new ways of teaching old tricks to those who work under you.
In essence, learning the craft of game composing doesn't have to cost hundreds of thousands of dollars, but it will cost time, dedication and passion.
Though not mentioned in the book, I'd like to say that I read an article recently in which Mark Ruffalo stated that he had to audition over 600 times before his career as an actor launched. Do you think he did nothing in the meantime? Of course not. With each rejection, he tried not to take it to heart; somebody else just happened to be better suited for the part, and he kept practicing so he could eventually convince directors that he was the best person for a role. Winifred is telling us the same here. The takeaway is to always be learning, no matter how far along you are in your career.
Chapter 3
This was a very interesting chapter on the science of video game entertainment and how it attracts a following. It shows that before you consider being a game composer, you must understand games. Sure, some of the biggest film names out there score games simply for the creative freedom, but there is something more that can be pulled from someone who has a personal connection to every step of the video game experience. This particular chapter does a good job explaining how game composers can gently nudge a player toward a more fully immersive experience.
Takeaways will help you to understand the science behind player-game relationships.
Chapter 4
The story in this chapter was one of my favorites. I have not yet had the luxury of attending one of the biggest game festivals in fandom, so it was nice to imagine the scenes she described of raging fans going nuts over hearing their favorite game music performed live.
On the educational side of things, this chapter is extremely important for those who have not had any formal composition training or very good private training. I've had formal composition training, and I still learned more in-depth things, especially when it comes to the usage of the idée fixe. Winifred argues that it should indeed be considered a distinct entity from the leitmotif, against the popular belief that the two are interchangeable with no real differences. I've heard both countless times in video games, and after reading, I would consider the idée fixe a very specific subcategory of the broader leitmotif, even though neither side may agree. Between in-depth gaming experience and the reading, I was even able to conclude that there might be two distinct types of idée fixe.
The takeaways here should help you to understand music itself better before incorporating its techniques into gaming.
Chapter 5
This chapter makes a slight diversion back to the science of games and the psychology of gamers. It is immensely useful if you aren't well-rounded in your experience of the different genres of music in general, the genres of video games, or the genres of music in video games, and it is equally useful if you don't have the mind of a producer. As mentioned earlier, not only will you study game and music types, but you will also see how different psychological mindsets associate with the various types of games out there.
This chapter also inadvertently encourages you to be a self-disciplined go-getter. In other words, to truly get the most from it, you'll have to do some side-by-side research. Phillips describes the different types of music associated with shooters or RPGs or platformers, but she doesn't necessarily tell you step by step what goes into a rock song or what goes into a fantasy score or how to create an electronic soundscape. What she does do, however, is provide a plethora of in-game examples you can refer to in order to study various effects and techniques. Of course, I just love listening to game music, despite having experience in most of those compositional fields, so I followed along with most of the soundtracks mentioned to really put myself in the world of what she was describing. Some of the scores were old friends, while others I had never heard, and all enhanced the reading greatly.
Even now, as I write this, I'm listening to the full score to Little Big Planet 2 because I've completed the list of OSTs I've compiled over the years and only got to listen to a few of that game's tracks while reading the book. Listen to game music every chance you get! Be familiar with various game and music types. Those are the takeaways.
Chapter 6
A sort of expansion on the previous chapter, this chapter focuses more on the music in games since the reader should have a better understanding of the player. It focuses on what exactly music can do in a game and how important it can even become in the marketing world outside the game. If I recall correctly, this is also where she mentions just how valuable of an asset the game composer really is. If you are brought on board early enough, chances are that teams will listen to your work as they create (an honor I've experienced once). It really does fire them up and inspire them to do even better work! And, boy is it great to hear them say that your music has affected the development of the project in a positive way.
Takeaways demonstrate the relationship between game pacing and music reflective of any given situation.
Chapter 7
Winifred has more experience on much bigger titles than do I, and I found this section to be greatly enlightening on the process of working with a studio that is planning to release a AAA game. There are so many things you must do to prepare yourself for a big job, and she gives a great list of items to request from developers to make sure that you have access to as much source material as possible to inspire your best work. It's also where she first gets into the materials a game composer might need. While the film industry is relying more and more on music technology, the game industry couldn't survive without it. So if you are unfamiliar or uncomfortable with technology, make sure you pay attention to the upcoming chapters.
Preparation, and the tools to achieve it, can form your takeaways here.
Chapter 8
There are so many music and audio job titles, let alone members of other departments, on any given game that this chapter introduces you to the people you'll likely be working with. It's a relatively short chapter compared to the others, but it can open your eyes to other music and audio-related jobs in the industry if you are interested in managing musicians and sound people instead of just creating music.
The takeaways here will show you who to communicate with (perhaps if you are seeking your first gig), remind you to communicate early and often, and help you to understand the chain of command.
Chapter 9
This is an all-around enjoyable chapter. It finally gets into the different types of game tracks you might hear in a typical game—something I have been studying since I was a child. Nowadays, there are so many cool things you can score: battle sequences, cinematics, general exploration (overworld themes), game trailers, and more.
If you understand the difference between how the various game tracks function within a game, you've nailed the takeaways.
Chapter 10
While chapter 9 explores some of the types of tracks you'll see in the video game composing world, this chapter really gets into the heart of linear-style game music, as well as what makes game music dynamic and interesting. It's a must-read for the beginning and intermediate musician. Linear music is very common in projects that are smaller or have engine limitations, and a composer can expect to work with it a lot, especially when first starting out.
Even the more advanced game composer can take away pointers on how to draw the most out of linear music, particularly when looped. At the very least, we can be pushed to pull more out of our journey around each track.
However, not only are loops more difficult to make interesting, they are often the hardest to edit and make transition smoothly. In fact, this chapter was what inspired me to write my most recent article on an alternative game looping method.
This really should be considered as one of the most important chapters in the book.
Chapter 11
Interactive music, unique to the gaming industry, is explored here. If you are a true gamer, it is likely (in my own musical opinion) that interactive music is your favorite type of music to experience in a game…especially if you are a musician. This chapter offers great explanations and advice on what interactive music is and how it works.
Your takeaways may be more educational, but mine is that interactive music is so imperative to games that all game composers should have a profound knowledge of how it works. It's super fun!
Chapter 12
MIDI is surprisingly something that many composers, young and experienced alike, have difficulty with. Even then, many who have a general understanding of it don't really take it to its full potential. This chapter will give you a brief understanding of it as well as explain some of the advantages, but only practice, experience, and some very specific topical research will help you to get the most out of MIDI.
After MIDI, the chapter goes on to describe where video game music has tried to go and may one day go, despite the disadvantages of highly experimental styles. Education on generative music (and on MIDI, if that's new for you) makes a good takeaway for this chapter.
Chapter 13
This chapter is non-musical and it is also so huge that an entire book could be written about the topic. In fact, some have already been written. As composers, especially for games, you must have gear. And, that gear must be good! Winifred mentioned that she composes with the assistance of six computers and my brain nearly exploded. How I'd love to have even two! Simply put one machine can't handle all of the tasks you'll want it to do, no matter how strong it is. Not only that, but she gets into the types of software, plugins, controllers, boards, DAWs, libraries, and other gear you may need, though she doesn't advocate any particular brand here. That's OK though. You can always read my product reviews to understand each company specifically.
My personal takeaway from this chapter was that Phillips must use some EastWest equipment because she says a company, whom she leaves nameless, has software she owns that she regularly curses to the skies, but must accept because that creates some beautiful libraries that you can't find anywhere else. I also own EastWest, and as you all may know from my reviews, the products are great, but the player, stability, size of samples, and operation are insane.
Your takeaway may be less silly and more practical, since the other section of the chapter deals with middleware, another thing all game composers should be comfortable operating. If you don't know what middleware is, read this chapter, then start practicing!
Chapter 14
The chapter of hope and frustration. Winifred shares her personal journey and shows us how she got into the game scoring industry as well as how she maintains it. I currently am looking for that next boost to the top tier in my career and can say firsthand that it is indeed a lot of hard work. Even if you follow all of the tips, you'll have to be able to keep up with those tips and repeat many steps until you are satisfied with where you are. You may have to experiment with different ways of approaching each step until you perfect them or find something that is efficient. You will face various rejection, not because you are bad, but because somebody else got there first or fit a particular project in the way the producers had hoped they would. Even if you do everything technically right, your timing could just be a little off or it could just not work out. Phillips strives here to encourage you to keep on keeping on, and that is the final takeaway.
Conclusion
Overall, this is a wonderful book, and I believe classes could be developed in universities that specifically teach game composing using Winifred Phillips' guide as the text. It reaches readers of all ages and understandings of game scoring, and can surely boost the EXP of the newb and andvanced reader alike (level up, anyone?). It's light but useful. Comical but efficient. The least boring textbook you could hope to read. It covers every area of the game scoring world and gives a plethora of musical examples you can listen to while reading in order to fully capture the essence of her ideas, and it gives additional resources you can use to further your understanding of specific topics.
Pick up a copy the next time you're online, which is now, or at the bookstore the next time you're out.
For more information on the game composing sensation that is Winifred Philips, visit her site at www.winifredphillips.com.
Don't hesitate to email me with any questions you may have about the book or about composing in general. And remember, if you need a composer for your upcoming game project, visit www.natecombsmedia.com, or bypass me and go straight to Winifred if you think you can land her!
Chapter 1
The hook. This is what artists commonly use to grasp the attention of those they're presenting a work to, and Winifred definitely hooked me in this chapter. Her writing style is sincere and friendly, quirky and humorous, and full of a passion that connects with a game composer like myself. In addition, like any good teacher, Phillips shares a plethora of analogies and personal stories to paint vivid pictures of what she hopes to convey to her readers.
The great thing about this chapter is that it also gives people who aren't sure whether they want to move into the field of game composing a little test they can use to assess their passion. And that is the biggest takeaway point. Love games!
Chapter 2
This chapter describes the essence of the book. It shows the book's approachable nature (and really that of Phillips herself, as she loves to engage with fans at conferences or via social media sites like Twitter). Throughout the entire text, she continually presents information about the game scoring world that can benefit both complete newbies and experienced veterans. Because I lie more on the experienced side, I knew much of what she presented here, but it is always great to hear inspiring quotes, to be refreshed on where you came from, and to learn new ways of teaching old tricks to those who work under you.
In essence, learning the craft of game composing doesn't have to cost hundreds of thousands of dollars, but it will cost time, dedication and passion.
Though not mentioned in the book, I'd like to add that I recently read an article in which Mark Ruffalo stated that he had to audition over 600 times before his acting career launched. Do you think he did nothing in the meantime? Of course not. With each rejection, he tried not to take it to heart. Somebody else just happened to be better suited for the part, and he had to continue practicing so he could eventually convince directors that he was the right guy for the part. Winifred is telling us the same here. The takeaway is to always be learning, no matter how far along you are in your career.
Chapter 3
This was a very interesting chapter on the science of video game entertainment and how it attracts a following. It shows that before you consider being a game composer, you must understand games. Sure, some of the biggest film names out there score games simply for the creative freedom, but there is something more that can be pulled from someone who has a personal connection to every step of the video game experience. This particular chapter does a good job explaining how game composers can gently nudge a player toward a more fully immersive experience.
Takeaways will help you to understand the science behind player-game relationships.
Chapter 4
The story in this chapter was one of my favorites. I have not yet had the luxury of attending one of the biggest game festivals in fandom, so it was nice to imagine the scenes she described of raging fans going nuts over hearing their favorite game music performed live.
On the educational side of things, this chapter is extremely important for those who have not had any formal composition training or very good private training. I've had formal composition training and I still learned more in-depth things, especially when it comes to the usage of the idée fixe. Winifred argues that it should indeed be considered a distinct entity from the leitmotif, against the popular belief that the two are interchangeable with no real differences. I've heard both countless times in video games, and after reading, I would consider the idée fixe a very specific subcategory of the larger generalization that is the leitmotif, even though neither side may agree. Between in-depth gaming experience and the reading, I was even able to conclude that there might be two distinct types of idée fixe.
The takeaways here should help you to understand music itself better before incorporating its techniques into gaming.
Chapter 5
This chapter makes a slight diversion back to the science of games and the psychology of gamers. It is immensely useful if you aren't well-rounded in your experience of the different genres of music in general, the genres of video games, or the genres of music in video games, and it is equally useful if you don't have the mind of a producer. As mentioned earlier, not only will you study game and music types, but you will also see how different psychological mindsets associate with the various types of games out there.
This chapter also inadvertently encourages you to be a self-disciplined go-getter. In other words, to truly get the most from it, you'll have to do some side-by-side research. Phillips describes the different types of music associated with shooters or RPGs or platformers, but she doesn't necessarily tell you step by step what goes into a rock song or what goes into a fantasy score or how to create an electronic soundscape. What she does do, however, is provide a plethora of in-game examples you can refer to in order to study various effects and techniques. Of course, I just love listening to game music, despite having experience in most of those compositional fields, so I followed along with most of the soundtracks mentioned to really put myself in the world of what she was describing. Some of the scores were old friends, while others I had never heard, and all enhanced the reading greatly.
Even now, as I write this, I'm listening to the full score to Little Big Planet 2 because I've completed the list of OSTs I've compiled over the years and only got to listen to a few of that game's tracks while reading the book. Listen to game music every chance you get! Be familiar with various game and music types. Those are the takeaways.
Chapter 6
A sort of expansion on the previous chapter, this one focuses more on the music in games, since the reader should now have a better understanding of the player. It covers what exactly music can do in a game and how important it can become in the marketing world outside the game. If I recall correctly, this is also where she mentions just how valuable an asset the game composer really is. If you are brought on board early enough, chances are that teams will listen to your work as they create (an honor I've experienced once). It really does fire them up and inspire them to do even better work! And boy, is it great to hear them say that your music has affected the development of the project in a positive way.
Takeaways here demonstrate the relationship between game pacing and music that reflects any given situation.
Chapter 7
Winifred has more experience on much bigger titles than I do, and I found this section to be greatly enlightening on the process of working with a studio that is planning to release a AAA game. There are so many things you must do to prepare yourself for a big job, and she gives a great list of items to request from developers to make sure you have access to as much source material as possible to inspire your best work. It's also where she first gets into the materials a game composer might need. While the film industry is relying more and more on music technology, the game industry couldn't survive without it. So if you are unfamiliar or uncomfortable with technology, make sure you pay attention to the upcoming chapters.
Preparation, and the tools to achieve it, form your takeaways here.
Chapter 8
There are so many music and audio titles on any given game, let alone members of other departments, so this chapter introduces you to the people you'll likely be working with. It's a relatively short chapter compared to the others, but it can open your eyes to other music- and audio-related jobs in the industry if you are interested in managing musicians and sound people instead of just creating music.
The takeaways here will show you who to communicate with (perhaps if you are seeking your first gig), remind you to communicate early and often, and help you to understand the chain of command.
Chapter 9
This is an all-around enjoyable chapter. It finally gets into the different types of game tracks you might hear in a typical game—something I have been studying since I was a child. Nowadays, there are so many cool things you can score: battle sequences, cinematics, general exploration (overworld themes), game trailers, and more.
If you understand the difference between how the various game tracks function within a game, you've nailed the takeaways.
Chapter 10
While chapter 9 explores some of the types of tracks you'll see in the video game composing world, this chapter really gets into the heart of linear-style game music, as well as what makes game music dynamic and interesting. It's a must-read for the beginning and intermediate musician. Linear music is very common in projects that are smaller or have engine limitations, and a composer can expect to work with it a lot, especially when first starting out.
Even the more advanced game composer can take away pointers on how to draw the most out of linear music, particularly when looped. At the very least, we can be pushed to pull more out of each trip around the track.
However, not only are loops more difficult to make interesting, they are often the hardest to edit and make transition smoothly. In fact, this chapter is what inspired me to write my most recent article on an alternative game looping method.
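To make that editing challenge concrete, here's a minimal sketch, entirely my own rather than the book's method, of the common crossfade fix: blending a track's tail into its head so the loop point doesn't pop. The sample rate, fade length, and synthetic test tone are placeholder assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100        # standard CD-quality rate
CROSSFADE_SECONDS = 0.5    # placeholder; tune the fade length by ear

def make_seamless_loop(audio: np.ndarray, fade_len: int) -> np.ndarray:
    """Blend the tail of `audio` into its head so playback can wrap smoothly.

    Returns a shorter buffer: the last `fade_len` samples are mixed into the
    first `fade_len` samples, so jumping from the new end back to sample 0
    is continuous.
    """
    head = audio[:fade_len]
    tail = audio[-fade_len:]
    fade_in = np.linspace(0.0, 1.0, fade_len)   # ramp up the head...
    fade_out = 1.0 - fade_in                    # ...while the tail ramps down
    blended = head * fade_in + tail * fade_out
    return np.concatenate([blended, audio[fade_len:-fade_len]])

if __name__ == "__main__":
    # Stand-in for a real stem: two seconds of a 220 Hz sine wave.
    t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
    tone = 0.5 * np.sin(2 * np.pi * 220 * t)
    loop = make_seamless_loop(tone, int(CROSSFADE_SECONDS * SAMPLE_RATE))
    print(f"original: {len(tone)} samples, loopable: {len(loop)} samples")
```

A linear, equal-gain fade like this is the simplest choice; if the blend seems to dip in volume, an equal-power curve is worth trying.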
This really should be considered one of the most important chapters in the book.
Chapter 11
Unique to the gaming industry, interactive music is explored here. If you are a true gamer, then, in my own musical opinion, interactive music is likely your favorite type of music to experience in a game…especially if you are a musician. This chapter offers great explanations and advice on what interactive music is and how it works.
Your takeaways may be more educational, but mine is that interactive music is so integral to games that all game composers should have a profound knowledge of how it works. It's super fun!
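To give a taste of what "how it works" can mean in practice, here's a toy Python sketch of one interactive technique, vertical layering, where synchronized stems fade in and out with the game's state. The stem names and intensity thresholds are invented for illustration; a real project would do this inside the engine or middleware.

```python
# Toy model of vertical layering: all stems play in sync, and the game's
# current "intensity" (0.0-1.0) decides which stems are audible.
STEM_THRESHOLDS = {
    "pads": 0.0,        # always present
    "percussion": 0.3,  # enters in light action
    "strings": 0.6,     # enters as tension builds
    "brass": 0.85,      # reserved for the biggest moments
}

def stem_gains(intensity: float, fade_width: float = 0.1) -> dict:
    """Map an intensity value to a 0.0-1.0 gain for each stem."""
    gains = {}
    for stem, threshold in STEM_THRESHOLDS.items():
        # Ramp each stem from silent to full as intensity passes its threshold.
        gain = (intensity - threshold) / fade_width
        gains[stem] = max(0.0, min(1.0, gain))
    return gains

if __name__ == "__main__":
    for intensity in (0.2, 0.5, 0.9):
        print(intensity, stem_gains(intensity))
```

Because every layer is the same length and stays in sync, the music can swell or thin out instantly without ever restarting.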
Chapter 12
MIDI is, surprisingly, something that many composers, young and experienced alike, have difficulty with. Even those who have a general understanding of it often don't take it to its full potential. This chapter will give you a brief understanding of MIDI as well as explain some of its advantages, but only practice, experience, and some very specific topical research will help you get the most out of it.
After MIDI, the chapter goes on to describe where video game music has tried to go and may one day go, despite the disadvantages of highly experimental styles. Education on generative music (and on MIDI, if that's new for you) is the main takeaway of this chapter.
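Since the chapter pairs MIDI with generative ideas, here's a small sketch of the two together: a script that writes a randomly generated pentatonic phrase to a standard MIDI file. It assumes the third-party mido package; the scale, note length, and velocity range are my own placeholders, not anything prescribed by the book.

```python
import random
from mido import Message, MidiFile, MidiTrack

SCALE = [57, 60, 62, 64, 67, 69]  # A minor pentatonic; any scale works
TICKS_PER_BEAT = 480

mid = MidiFile(ticks_per_beat=TICKS_PER_BEAT)
track = MidiTrack()
mid.tracks.append(track)

for _ in range(16):
    note = random.choice(SCALE)
    velocity = random.randint(40, 65)  # modest, varied velocities feel more human
    track.append(Message('note_on', note=note, velocity=velocity, time=0))
    track.append(Message('note_off', note=note, velocity=0, time=TICKS_PER_BEAT // 2))

mid.save('generated_phrase.mid')
print('wrote generated_phrase.mid')
```

Every run produces a different phrase, which is generative music at its absolute simplest.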
Chapter 13
This chapter is non-musical, and the topic is so huge that an entire book could be written about it. In fact, some already have been. As a composer, especially for games, you must have gear. And that gear must be good! Winifred mentions that she composes with the assistance of six computers, and my brain nearly exploded. How I'd love to have even two! Simply put, one machine can't handle all of the tasks you'll want it to do, no matter how strong it is. She also gets into the types of software, plugins, controllers, boards, DAWs, libraries, and other gear you may need, though she doesn't advocate any particular brand here. That's OK, though. You can always read my product reviews to understand each company specifically.
My personal takeaway from this chapter was that Phillips must use some EastWest equipment, because she says a company, which she leaves nameless, makes software that she regularly curses to the skies but must accept, because it powers some beautiful libraries you can't find anywhere else. I also own EastWest, and as you all may know from my reviews, the products are great, but the player, stability, sample sizes, and operation are insane.
Your takeaway may be less silly and more practical, since the other section of the chapter deals with middleware, another thing all game composers should be comfortable operating. If you don't know what middleware is, read this chapter, then start practicing!
Chapter 14
The chapter of hope and frustration. Winifred shares her personal journey and shows us how she got into the game scoring industry, as well as how she stays there. I am currently looking for that next boost to the top tier in my career and can say firsthand that it is indeed a lot of hard work. Even if you follow all of the tips, you'll have to keep up with them and repeat many steps until you are satisfied with where you are. You may have to experiment with different ways of approaching each step until you perfect them or find something that is efficient. You will face plenty of rejection, not because you are bad, but because somebody else got there first or fit a particular project the way the producers had hoped. Even if you do everything technically right, your timing could be a little off, or it could just not work out. Phillips strives here to encourage you to keep on keeping on, and that is the final takeaway.
Conclusion
Overall, this is a wonderful book, and I believe classes could be developed in universities that specifically teach game composing using Winifred Phillips' guide as the text. It reaches readers of all ages and levels of understanding of game scoring, and can surely boost the EXP of the newb and advanced reader alike (level up, anyone?). It's light but useful. Comical but efficient. The least boring textbook you could hope to read. It covers every area of the game scoring world, gives a plethora of musical examples you can listen to while reading in order to fully capture the essence of her ideas, and points to additional resources you can use to further your understanding of specific topics.
Pick up a copy the next time you're online, which is now, or at the bookstore the next time you're out.
For more information on the game composing sensation that is Winifred Phillips, visit her site at www.winifredphillips.com.
Don't hesitate to email me with any questions you may have about the book or about composing in general. And remember, if you need a composer for your upcoming game project, visit www.natecombsmedia.com, or bypass me and go straight to Winifred if you think you can land her!