A while back, I posted a project I did for my Electronic Music course. With the close of the fall semester, my final project is complete. (FINALLY.)
This particular song is a full-length track, mixed and mastered (as best as I know how), with actual recorded vocals.
It’s worth knowing a few details first:
The song was composed and written by Melissa. I took her original chord progression and elaborated on it. The reason it’s called the “Epic Fail Mix” is that every time I played her what I had made so far, she would just shake her head. Not that it’s bad, per se, just that it’s very different from her original folksy acoustic version.
Give it a listen — it’s about 5 minutes long; get ready for a bit of music whiplash about 3 minutes in. Details about the production are below the jump.
The Music Nerdery
I began with a very simple chord progression — I think it’s in a minor key (B minor), but I’m not entirely sure. I arpeggiated the chords and used the Matrix Pattern Sequencer to load up all the variations of the arpeggio, so I had the full chord progression locked up in easy-to-use patterns.
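If you’re curious, the arpeggio-to-pattern idea looks something like this as a toy Python sketch. (The chord voicings and progression here are made up, since I’m not even sure of the actual key; the point is just turning chord tones into repeatable step patterns.)

```python
# Chord tones as MIDI note numbers (assumed voicings, for illustration only)
CHORDS = {
    "Bm": [47, 50, 54],   # B2, D3, F#3
    "G":  [43, 47, 50],
    "D":  [50, 54, 57],
    "A":  [45, 49, 52],
}

def arpeggio(tones, steps=8):
    """Cycle chord tones up and back down to fill a fixed-length pattern,
    like loading one variation into a Matrix-style step sequencer."""
    updown = tones + tones[-2:0:-1]            # e.g. [47, 50, 54, 50]
    return [updown[i % len(updown)] for i in range(steps)]

progression = ["Bm", "G", "D", "A"]
patterns = [arpeggio(CHORDS[name]) for name in progression]
# arpeggio(CHORDS["Bm"]) -> [47, 50, 54, 50, 47, 50, 54, 50]
```

Each pattern can then be repeated or swapped freely, which is exactly why having the progression “locked up” this way is so convenient.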
The clicking rhythm in the background is actually a Camera shutter click I found on freesound.org. I loaded it up in ReCycle, sliced it up, and used Dr. Rex to rearrange the slices so that it made a kind of waltzy swing — whirrrrrr, click-click.
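The slice-and-rearrange trick is easier to see in code than to describe. A rough sketch, using a stand-in list instead of real audio (ReCycle actually slices at transients; equal-length slices keep the sketch simple):

```python
def slice_sample(audio, n_slices):
    """Split a list of samples into n equal-length slices."""
    size = len(audio) // n_slices
    return [audio[i * size:(i + 1) * size] for i in range(n_slices)]

def rearrange(slices, order):
    """Concatenate slices in the given playback order."""
    return [s for i in order for s in slices[i]]

shutter = list(range(12))                # stand-in for the shutter sample
slices = slice_sample(shutter, 4)        # whirr | click | click | tail
swing = rearrange(slices, [0, 0, 1, 2])  # whirrrrrr, click-click
```

Dr. Rex does the same thing interactively: each slice gets its own trigger note, so reordering the rhythm is just reordering notes in the sequencer.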
I borrowed an M-Audio Fast Track Pro and a nice Shure microphone from the music lab at school. The M-Audio allowed me to hook my laptop (with Reason 4) up to my Clavinova, which is awesome. Virtually no hum or noise, and great fidelity.
We recorded Melissa’s vocals (for the first movement) by having her listen to the chord progression for the first part and sing into the mic.
I imported those raw vocals into ReCycle, sliced them up into phrases (to make them easier to sync), then imported that rx2 file into Dr. Rex. Mapping the vocal slices to the music wasn’t too hard. It wasn’t exactly a slam-dunk, because Melissa isn’t a robot, but as long as the first note of each phrase matched up, the rest of the phrase synced pretty well.
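The syncing idea boils down to quantizing each phrase’s start time to the beat grid and letting the rest of the phrase play freely. A toy sketch, with made-up tempo and timings since I didn’t note the real ones:

```python
def snap_to_grid(phrase_starts, beat_len):
    """Quantize phrase start times (in seconds) to the nearest beat."""
    return [round(t / beat_len) * beat_len for t in phrase_starts]

beat = 60 / 120                      # 120 BPM -> 0.5 s per beat (assumed)
starts = [0.04, 2.02, 3.97]          # slightly loose human timing
snapped = snap_to_grid(starts, beat) # [0.0, 2.0, 4.0]
```

Only the downbeat of each phrase is forced onto the grid; everything after it keeps its natural timing, which is why the result doesn’t sound robotic.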
Later on, the second movement vocals were recorded in a similar way, except we used a click-track since I didn’t have that part of the song fully composed yet.
For her vocals in the first part, I used some effects (RV-7000 reverb, Scream with tape distortion to warm them up, a DDL delay unit, and a Chorus/Flanger to fill them out) to help bolster the vocals and thicken them up. The result sounded a lot cleaner and more fitting for the mood.
After that, it still felt like the beginning needed something, so I loaded up a strings patch in the NN-XT and tried tapping out some melodies while the song played. I knew in the back of my mind what I wanted… sort of. I grew up listening to a lot of classical, jazz, and swing, so sometimes, if I don’t consciously think about it, I have a hunch about what sort of melody a piece needs. Otherwise, I’m a blind man in a dark room — I don’t have much background in composition, so while something may SOUND good, I probably have no idea why.
I did a little reading and this sort of thing is called a “counter-melody”: a separate melody that complements the primary melody, but does not follow it exactly. I don’t know much about it beyond that, though.
I recorded three dubs of strings, took the parts of each that I liked, and automated the volume for each note so that it faded off. I realize I could have just used zero sustain and a long decay, but I wanted a little more control over the volume.
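For comparison, here’s a toy sketch of the two approaches (illustrative numbers only, nothing from the actual project): a fixed decay envelope fades every note identically, while drawn automation can take any shape per note.

```python
def decay_envelope(steps, rate=0.5):
    """Exponential decay: the same fixed fade applied to every note
    (the zero-sustain / long-decay approach)."""
    levels, level = [], 1.0
    for _ in range(steps):
        levels.append(level)
        level *= rate
    return levels

# Hand-drawn volume automation: any shape, tweakable per note
drawn_fade = [1.0, 0.9, 0.7, 0.3]

print(decay_envelope(4))   # [1.0, 0.5, 0.25, 0.125]
```

The envelope is less work, but every note gets the identical curve; automation trades effort for per-note control, which is the choice I made.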
The second movement is a bit more aggressive — it’s actually in a different time signature (Movement I is 6/8, Movement II is 4/4). The tempo is the same, but it sounds faster because of the meter change. I knew I wanted to do some kind of punkish theme for that part.
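The arithmetic behind the “sounds faster” effect is simple if you hold the eighth-note pulse constant. (The pulse rate below is just an example, not the song’s actual tempo.)

```python
# Keep the eighth-note pulse identical across the meter change
EIGHTHS_PER_MIN = 360

beats_6_8 = EIGHTHS_PER_MIN / 3   # 6/8: felt beats are dotted quarters (3 eighths)
beats_4_4 = EIGHTHS_PER_MIN / 2   # 4/4: felt beats are quarters (2 eighths)
# -> 120 vs 180 felt beats per minute: same pulse, 1.5x as many felt beats
```

Nothing about the underlying pulse speeds up; the listener just counts half again as many beats per minute, so the second movement feels driven.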
The vocals for the second movement were recorded to a click-track and processed in the same way. Instead of using tape distortion for warming, I used the Feedback setting — it gave her voice kind of a “tin-can” sound that’s common with some modern artists. It makes it sound nice and lo-fi.
Composing the melody for the second movement was really hard — the chord progression is similar but not exactly the same. I actually did the bass line first. There was a pretty basic bass guitar patch in the NN-XT that I used; I essentially took the rhythm Melissa uses when she plays this part on guitar and played only the root note of each chord. It was up-tempo enough that it sounded kind of like a punk bass line, and I improvised a little variation into it.
I tried several approaches to creating a nice, clean distorted-guitar sound (yes, I realize that’s oxymoronic) — first a Subtractor patch with a Scream 4, then a Thor with a few multi-oscillators — but nothing quite had that just-right sound. With some googling, I found discussion boards and YouTube videos where people used factory Combinator patches that sounded just like what I wanted. The patch I ended up deciding on is called “Hell Spawn Guitar” — it has a really awesome release effect (triggered when you let go of the key) that sounds like fingers sliding up the fretboard. It adds a nice authentic flair to the riff.
For the actual guitar melody, I picked notes from the scale that sounded good with the vocals, and made a very simple riff that didn’t dominate too strongly.
The drums were pretty simple — a basic rock kit in ReDrum with a Scream 4 laid over top. I only used two patterns, both basic rock beats.
One last note on the organization of the song in Reason. I’ve gotten in the habit of using a Combinator for every instrument, with a Line Mixer 6:2 and any digital effects tossed in. I also often use a Spider Audio splitter: I run the raw signal into it, then run the splits off into different digital effects (one for Scream, one for DDL, etc.), so I can use the 6:2 mixer to vary how much presence each effect has in the total mix. The old way, chaining one effect into the next, often results in a weird, unclean sound — artifacts from the effects themselves get processed too (i.e., running a Scream into a DDL causes the Scream distortion to be delayed as well).
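Here’s a toy sketch of why the parallel routing sounds cleaner. The “effects” are stand-in functions, not models of the real Scream or DDL units:

```python
def serial(signal, effects):
    """Run effects one into the next (the 'old way'):
    each effect processes the previous one's artifacts too."""
    for fx in effects:
        signal = [fx(x) for x in signal]
    return signal

def parallel(signal, effects, levels):
    """Split the dry signal into one chain per effect, then mix the
    chains back together with per-chain levels (the Spider + 6:2 idea)."""
    chains = [[fx(x) for x in signal] for fx in effects]
    return [sum(lvl * ch[i] for lvl, ch in zip(levels, chains))
            for i in range(len(signal))]

distort = lambda x: max(-1.0, min(1.0, x * 3.0))  # stand-in for Scream
atten = lambda x: x * 0.5                          # stand-in for a delay tap

dry = [0.25]
print(serial(dry, [distort, atten]))                 # [0.375]
print(parallel(dry, [distort, atten], [0.5, 0.5]))   # [0.4375]
```

In the serial version the second effect reprocesses the distorted signal; in the parallel version each effect only ever sees the clean source, and the mixer levels decide how much of each ends up in the result.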
My rack almost always ends up being a series of Combinators, but it works out well in the sequencer. I keep things even more modular by not automating the instruments within the Combinator directly; instead, I map the Combinator knobs/switches to the individual instrument parameters I want to control, and then automate the Combinator. It helps keep the sequencer clean.
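The macro idea, sketched as a toy in Python (the devices and parameter names here are made up for illustration):

```python
def make_macro(targets):
    """targets: (param_dict, key, lo, hi) tuples all driven by one knob,
    like a Combinator rotary mapped to several device parameters."""
    def set_knob(value):                 # value in 0..127
        for params, key, lo, hi in targets:
            params[key] = lo + (hi - lo) * value / 127.0
    return set_knob

# Hypothetical devices and parameters
subtractor = {"cutoff": 0.0}
scream = {"damage": 0.25}

rotary1 = make_macro([(subtractor, "cutoff", 0.0, 1.0),
                      (scream, "damage", 0.25, 0.75)])
rotary1(127)   # one automation lane moves both parameters at once
```

One lane in the sequencer drives the macro, and the macro fans out to however many parameters it’s mapped to — which is exactly what keeps the sequencer view tidy.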