Tuesday, 17 November 2015

Unit 49: Sequencing a pop song

Information on MIDI
MIDI is short for Musical Instrument Digital Interface, and it allows electronic instruments and computers to communicate with one another. Information is carried between devices by a MIDI cable; the cable uses a 5-pin DIN connector, and the protocol supports 16 channels. A MIDI signal carries what are called "event messages" - these specify several things, including which note should be played (pitch), velocity (how hard the note should be played) and note length. So that all instruments are connected and respond in the same way, General MIDI was created; this is a standard specification for anything that responds to MIDI signals.

MIDI messages, as with all data, are sent using binary bits (either 0 or 1). There are two types of message bytes: status bytes always begin with 1, and data bytes with 0; the other 7 bits carry the message content. Every MIDI message begins with a status byte; the first three of its remaining 7 bits represent the type of message, with the other 4 bits representing which channel number the data applies to.
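The bit layout described above can be sketched in a few lines of Python. This is an illustrative decoder, not taken from any particular MIDI library; the example value 0x90 is the common "note on" status byte for the first channel.

```python
# Sketch of decoding a MIDI status byte: leading 1 bit, then 3 bits of
# message type, then 4 bits of channel number (channels 0-15 internally).

def decode_status(status_byte):
    """Split a MIDI status byte into its message type and channel."""
    assert status_byte & 0x80, "status bytes always begin with a 1 bit"
    message_type = (status_byte >> 4) & 0x07  # 3 bits after the leading 1
    channel = status_byte & 0x0F              # bottom 4 bits
    return message_type, channel

# 0x90 = "note on" for channel 1 (stored as channel index 0)
print(decode_status(0x90))  # (1, 0)
```

Data bytes that follow (note number, velocity) would each have their top bit set to 0, which is why MIDI values run from 0 to 127.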

There are several types of MIDI connections. Whilst some MIDI interfaces (such as the one I chose to use) use USB, others use the standard MIDI pins. MIDI in and out ports are the most common; these respectively take in data from a device and send data to a device. MIDI Thru is less common; this connection copies what comes in from the MIDI In port and sends it to another device, allowing for several devices to be controlled at once.

We are able to manage MIDI data in our DAW (digital audio workstation) in several ways. In Cubase, there are two methods. The first one is the key editor, allowing us to select which keys to play using a virtual 'keyboard'.



The second method is to use the list editor, which provides a list view for viewing the MIDI data.



Within a DAW, there are several examples of MIDI controller messages. These are event messages that affect a selection of notes in some way, and are known as 'continuous controllers'. In Cubase, there are several to choose from; for example, one of them affects the overall volume of the track.
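A continuous controller message is just three bytes on the wire. The sketch below builds one by hand, assuming the common convention that controller number 7 is channel volume (the function name and values are illustrative, not from Cubase):

```python
# Sketch of a 3-byte Control Change (continuous controller) message:
# status byte (0xB + channel), controller number, controller value.

def control_change(channel, controller, value):
    """Build a Control Change message for the given channel (0-15)."""
    status = 0xB0 | (channel & 0x0F)  # 0xB = Control Change message type
    return bytes([status, controller & 0x7F, value & 0x7F])

# Set channel 1 (index 0) volume (CC 7) to 100
msg = control_change(0, 7, 100)
print(msg.hex())  # b00764
```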

MIDI files are generally saved in the .mid format. However, there are two types of this format: MIDI Type 0 merges information for all of the tracks into one track, whilst MIDI Type 1 contains separate information for each track.
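The type (0 or 1) is stored in the file's 14-byte "MThd" header chunk, alongside the track count. A minimal sketch of reading it, using only the standard library (the example header bytes are made up for illustration):

```python
# Sketch of reading a .mid header chunk: 4-byte "MThd" ID, 4-byte length,
# then three 2-byte fields: format type, track count, timing division.
import struct

def midi_format(header_bytes):
    chunk_id, length, fmt, ntracks, division = struct.unpack(
        ">4sIHHH", header_bytes[:14])
    assert chunk_id == b"MThd", "not a standard MIDI file"
    return fmt, ntracks

# A hypothetical Type 1 header: 3 tracks, 480 ticks per quarter note
header = b"MThd" + struct.pack(">IHHH", 6, 1, 3, 480)
print(midi_format(header))  # (1, 3)
```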

Information on Computer Systems
To sequence in MIDI, a DAW (digital audio workstation) is required. A DAW is essentially a replacement for a conventional mixing desk and analogue tape for today's recording engineers and producers. Just as there are several different mixing boards, each with their own 'kinks' and workflow, there are many DAWs available for use; the main difference between them is how their workflow suits different situations.

Here is a list of several DAWs:
  • Steinberg Cubase - this is the DAW we are using (version 6). It is equally suitable for both recording audio and sequencing MIDI files.
  • Ableton Live - this DAW originally started as a performance instrument for electronic musicians and DJs to be able to sequence samples and backing tracks live, but has since evolved to become just as good for home recording, within its 'Arrangement' view. In comparison, Ableton Live's 'Session' view is perfect for live users, as it is able to fit onto a laptop screen perfectly.
  • Avid Pro Tools - this DAW is considered to be the gold standard for professional recording software, and is used in many professional studios worldwide. Traditionally, Pro Tools' strength is within mixing/mastering audio files, although it still comes with a large array of virtual instruments to use.
  • Image Line FL Studio - this DAW initially started out as 'Fruity Loops' but was forced to change name due to legal action from a certain famous breakfast company. FL Studio has become extremely popular among electronic composers for its easy workflow and high quality piano roll.
  • Apple Logic Pro X - After Apple purchased this DAW, it has become highly regarded among Mac users for being an extremely useful composition and recording tool, both for MIDI and audio. The latest version of Logic Pro X has been praised for its drum kit builder and wide variety of audio unit plugins.
Whilst DAWs can run on both Windows and Mac OS X, I sequenced my song on an iMac running Mac OS X Lion 10.7.5. The iMac is one of Apple's higher-powered computers, so it contains more than enough processing power and memory to handle a DAW.

Samplers and Keyboards
Throughout the 1980s, samplers became common. These were devices that allowed the user to play back sounds from instruments (both live recordings and digital recreations) within a sampling library. Devices of this type include the Synclavier (1978) and the Fairlight CMI (1979); the Fairlight is extremely well known for its drum sounds, which have been used on many hit pop songs:



As well as pop, the Fairlight was also used on many hit rock records, notably Def Leppard's Hysteria. Producer Robert John "Mutt" Lange employed the Fairlight to aid with the recording of Rick Allen's drum tracks, Allen having lost his arm in a car accident.



These days, hardware samplers are no longer required. Software samplers, such as Groove Agent One (included within Cubase), allow samples to be assigned to separate MIDI notes; these can then be played back via a MIDI interface or a key/list editor within a DAW.

In recent years, the need for a dedicated hardware synthesiser has been replaced by MIDI interfaces (such as the M-Audio Keystudio 49i) paired with virtual software instruments; MIDI messages are used to trigger notes within the software.

Synthesisers have also changed in modern times. One of the first synths was the Minimoog (developed by Robert Moog, whose modular synths date back to the 1960s), and it quickly became one of the most popular musical devices of its time, notably being used by Gary Numan.



Keyboards had also been invented that could play back saved sounds. An example of this is the Mellotron, a device able to play back pre-recorded sounds from tapes inserted into it. The Beatles and Genesis adopted the device throughout the 60's and 70's, and, despite being outdated, it was still used in the 80's by Orchestral Manoeuvres in the Dark.


These days, a wide variety of sounds can be obtained via VST plugins. These are virtual instruments that can be triggered with a MIDI interface, allowing home recording hobbyists to easily get synth, piano and even orchestral sounds as long as they can read music. This has also led to the development of multi-timbral plugins: virtual instruments that can produce many sounds at once, achieved by giving each 'sound' its own separate MIDI channel.



Even so, some will still prefer to use analogue synthesisers. An example of this is the MiniBrute, made by Arturia. 



This generates sound using three processes:
  1. Generating a wave (whether a sine, triangle, sawtooth or square)
  2. Running this wave through an ADSR envelope (Attack - Decay - Sustain - Release), which shapes the sound's volume using those four stages
  3. Using an LFO (Low-Frequency Oscillator) to create a sweep
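The three stages above can be sketched in plain Python. This is a toy model with made-up parameter values, not an emulation of the MiniBrute: a sawtooth oscillator, a linear ADSR amplitude envelope, and a slow sine LFO sweeping the level.

```python
# Sketch of the three synthesis stages: wave -> ADSR envelope -> LFO sweep.
import math

SR = 8000  # sample rate (Hz), kept low for a quick sketch

def sawtooth(freq, n):
    """Stage 1: raw sawtooth wave, n samples long, in the range -1..1."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def adsr(n, a=0.1, d=0.1, s=0.6, r=0.2):
    """Stage 2: per-sample amplitude from a simple linear ADSR envelope."""
    na, nd, nr = int(n * a), int(n * d), int(n * r)
    ns = n - na - nd - nr
    env = [i / na for i in range(na)]                 # attack: 0 -> 1
    env += [1 - (1 - s) * i / nd for i in range(nd)]  # decay: 1 -> sustain
    env += [s] * ns                                   # sustain level
    env += [s * (1 - i / nr) for i in range(nr)]      # release: sustain -> 0
    return env

def lfo(n, rate=5.0, depth=0.3):
    """Stage 3: slow sine wave used to sweep the amplitude up and down."""
    return [1 - depth + depth * math.sin(2 * math.pi * rate * i / SR)
            for i in range(n)]

n = SR  # one second of audio
samples = [w * e * m for w, e, m in zip(sawtooth(110.0, n), adsr(n), lfo(n))]
print(len(samples))  # 8000
```

On a real analogue synth the LFO could instead sweep pitch or filter cutoff; sweeping amplitude (tremolo) is just the simplest case to show.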

Diary
Lesson 1 - 17/11/2015
In today's lesson, I was introduced to the unit via the assignment brief, and given a score of the song (Clean Bandit's Rather Be) that I was to sequence. After setting up a blank project in the DAW (Cubase), I quickly created two new instrument tracks; both used the HALion VST synth plugin on a violin setting.





To create a new track, I used the project menu drop down, as shown:




In order to test whether the sound was suitable, I used a MIDI interface (an M-Audio Keystation 49i) to play random notes, tweaking the settings to taste.




(A VST plugin will take live audio and/or MIDI data and simulate real equipment. In this case, the plugin simulated a violin.)




Using the pencil tool in Cubase, I set to work reading the bars where strings are included (bars 1-7) and 'drew' the notes into Cubase. After this was complete, I repeated the process for the notes in the bass clef. When notes were marked staccato (shown by a dot above the note), I reduced the note length; for example, if the note was a quarter note, I reduced its length by 75%.
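The staccato adjustment is a quick calculation over the note's length in MIDI ticks. The sketch below assumes a common resolution of 480 ticks per quarter note (an assumption, not something stated in the score):

```python
# Sketch of the staccato shortening described above: keep only a fraction
# of the written note length, measured in MIDI ticks.
TICKS_PER_QUARTER = 480  # assumed project resolution

def staccato(length_ticks, keep=0.25):
    """Return the sounded length after a 75% reduction (keep 25%)."""
    return int(length_ticks * keep)

print(staccato(TICKS_PER_QUARTER))  # 120
```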

Next, I created a new synth track, designed to play notes from bar 9 onwards. As the tempo changes at this point from 115bpm to 121bpm, I edited the tempo track in Cubase, keeping the slower tempo for bars 1-7 then increasing it from bar 8 onwards. For this instrument, I used a sci-fi synth sound, completely removing any reverb to accentuate the staccato in the score.


Lesson 2 - 24/11/2015
In today's lesson, I continued with my work on the synth track, adding in the lower register. In addition to this, I also added the vocal track for the bars I had done so far; I chose to use an alto saxophone, as I felt it was one of the only instruments that actually sounded like a real voice.

In addition to this, I was introduced to the Groove Agent One sampler. This allowed me to trigger drum samples from Cubase's media bay by playing the corresponding keyboard note. This week, I programmed in a kick and hi-hat sound, then wrote those out on the drum track.

In order to program using the Groove Agent, I loaded up the relevant control panel, then dragged in the samples that I wished to use from the Media Bay:





Lesson 3 - 01/12/2015
In today's lesson, I continued on the vocal track, making my way up towards the first chorus. In addition, I also used the HALion synth to add a bass guitar track which starts at the prechorus.

In addition, I also researched MIDI and computer systems. I found a brief history of both MIDI and DAWs; my information can be found at the start of this blog.

Lesson 4 - 05/01/2016
In today's lesson, I expanded all other tracks towards the first chorus. In addition, I also set up the remaining blank tracks ready for next lesson. This included the background vocals, piano solo and percussion.

To add to this, I also added extra screenshots. These mostly revolved around informing on how to set up a blank project then creating the first instrument tracks and setting them up with either the HALion or Groove Agent.

Lesson 5 & 6 - 12/01/2016 and 19/01/2016
In both of these lessons, I focused on completely finishing all parts until the end of the first chorus. 

As well as this, I gave the track a basic mix - firstly I added compression to the drums in order to reduce their overall volume. I then gave the strings, piano and synths a small amount of reverb, and finally added an EQ to both the lead and backing vocals. To ensure both tracks could be heard over the top of the instrumentation, I gave them both a large boost in the mid frequencies.


Lesson 7 - 02/02/16
In this lesson, I chose to replace both sequenced vocal tracks with live vocals. I recorded three different vocal tracks:
  • The lead vocal, which lasts the length of the recording
  • A second lead vocal, used in the chorus as a doubletrack
  • Backing vocals, used in the chorus
Ultimately, I chose not to use the chorus double track part as it suffered from some very slight clipping in the higher frequencies. I then gave the track a mix, and exported it using Cubase's Audio Mixdown feature.

Here is a link to my final mixdown:

Below is a track list of my final mixdown, along with what effects were used:


Track 1 – Lead Vocals (Panned Center): EQ, Reverb, Compressor
Track 2 – Backing Vocals (Panned Left 17): EQ
Track 3 – Strings, treble clef (Panned Left 11): Reverb
Track 4 – Strings, bass clef (Panned Right 15): Reverb
Track 5 – Synth, treble clef (Panned Center): EQ, Reverb
Track 6 – Synth, bass clef (Panned Center): EQ
Track 7 – Piano solo (Panned Center): EQ
Track 8 – Piano, treble clef (Panned Center): EQ, Reverb
Track 9 – Piano, bass clef (Panned Center): EQ, Reverb
Track 10 – Bass (Panned Right 28): EQ, Compressor
Track 11 – Drums (Panned Left 17): EQ, Compressor, Reverb
Track 12 – Percussion (Panned Center): Reverb
Stereo Out – Stereo Mixdown: EQ, Compression, Limiter



Tuesday, 3 November 2015

Unit 48: Recording a pop song cover

Cover Song Recording - Yellow, Coldplay
Information on the song
Yellow is an alternative rock song, written and recorded by the British band Coldplay. It was released in 2000 as a single from their debut album, Parachutes, and was produced by the band and Ken Nelson, an English record producer best known for his work with other indie artists such as Snow Patrol and Paolo Nutini.

Whilst recording the song, the band had a problem picking a suitable tempo to record at. The problem was eventually rectified, but Nelson felt that the song sounded 'wrong' on tape, so Pro Tools was used to fix the timing before the song was committed to 2-inch tape.


Lyrics
Look at the stars,
Look how they shine for you,
And everything you do,
Yeah, they were all yellow.

I came along,
I wrote a song for you,
And all the things you do,
And it was called "Yellow".

So then I took my turn,
Oh what a thing to have done,
And it was all yellow.

Your skin,
Oh yeah your skin and bones,
Turn into
Something beautiful,
Do you know,
You know I love you so,
You know I love you so.

I swam across,
I jumped across for you,
Oh what a thing to do.
'Cause you were all yellow,

I drew a line,
I drew a line for you,
Oh what a thing to do,
And it was all yellow.

Your skin,
Oh yeah your skin and bones,
Turn into
Something beautiful,
Do you know,
For you I'd bleed myself dry,
For you I'd bleed myself dry.

It's true,
Look how they shine for you,
Look how they shine for you,
Look how they shine for,
Look how they shine for you,
Look how they shine for you,
Look how they shine.

Look at the stars,
Look how they shine for you,
And all the things that you do.




Initial Recording Schedule
     ·     Session 1 – Acoustic Guitar and Vocals
     ·     Session 2 – Drums
     ·     Session 3 – Bass and Clean Guitar
     ·     Session 4 – Distorted Guitars
     ·     Session 5 - Piano


Draft Plan
Producer: Myself
Primary Sound Engineer: Jon Wells
Name and artist of song chosen: Yellow - Coldplay
Instruments you will use, and how you will mic these (microphone placement).

Vocals will be miced with a Rode NT1A, protected with a pop shield to prevent unwanted noise, and a sound reflector will be used to capture more of the acoustic around the vocalist. I will also add vocal harmonies, which will thicken the texture of the vocals and make them sound more professional. Both the lead vocal line and the harmonies will be double tracked.



Guitars will be split into acoustic and electric. For acoustic, I will only use one take, but will use two microphones: one placed close to the neck, the other by the sound hole, creating a stereo recording that will sound much wider. In addition, I will use two different microphones; an AKG C1000S will be placed over the sound hole, as it is more suited to capturing the higher frequencies closer to the bridge, whilst a Shure SM57 will be placed close to the neck, as its lower frequency response is more suited to capturing the lower overtones closer to the nut.





For electric guitar, I will record several different takes using a direct input through the desk into Guitar Rig. Direct input (or DI) is when an electric instrument is plugged straight into the mixing desk. Previously, a piece of equipment called a DI box would have to be used, as the instrument-level signal from the instrument would be too quiet; the DI box converted it to microphone level so it could be picked up by the mixing board. However, the new mixing board at the college (a Mackie Onyx Firewire 1640I) has this built in, allowing electric guitars to be plugged in without a DI box.



Advantages of using DI include that there is no background noise, as the signal comes directly from the instrument. In addition, there is no risk of feedback, which is especially important when recording high-gain electric guitar. However, a significant disadvantage of DI is that the recording will lack any reverb or acoustic reflection. This issue could be solved using reverb plugins in the DAW when mixing in the next assignment, using artificial echo to give the impression the recordings were made in an open space.

As this is a mellow song, I will avoid using the heavier amp simulations such as the Marshall, Peavey 5150 and Soldano. For the heavier parts of the song, I will play the same part twice. The first take will feature a Vox AC30 simulation, "boosted" using an overdrive preset; as the AC30 would typically feature brighter sounding speakers, namely the Celestion Greenback, I will virtually mic this take with a Shure SM57 simulation to capture a more rounded tone. For the second take, I will use a simulation of the Fender Twin Reverb; as this amp is best known for producing clean tones due to its large headroom, I will also "boost" this track using an overdrive preset, and mic it with a Shure SM57 as well. For the third and final take, which is the clean backup guitar behind the acoustic line, I will again use the AC30, set at a lower volume and miced up in exactly the same way as before.

For bass, I will again use a direct input, similar to how most producers track bass today. Thankfully, Guitar Rig includes a bass cabinet and head simulation (similar to a professional Ampeg SVT rig), helping to emulate a professional bass tone. Due to its low frequency response, the cabinet will be virtually miced using a Shure SM7.

For the piano, I will choose to use a DI, reducing the time taken to mic up and set up an electronic piano.

As I am using electric guitars with distortion, I will back this up by recording a live drum kit, using a Samson Drum Mic kit for particular parts of the kit. To help capture the low end from the bass drum, I will mic it with an AKG D112, whose very low frequency response is perfectly suited to recording the bass drum.


  
Draft Track List
Track list (instrument/part of instrument/mic or DI/name of possible performer):
Track 1 – Acoustic guitar – Shure SM57 close to the neck to capture more of the low frequencies
Track 2 – Acoustic guitar – AKG C1000S close to the sound hole to capture higher frequencies closer to the bridge
Track 3 – Vocals 1 – miced up with Rode NT1A
Track 4 – Vocals 2 – miced up with Rode NT1A
Track 5 – Vocal harmony 1 – miced up with Rode NT1A
Track 6 – Vocal harmony 2 – miced up with Rode NT1A
Track 7 – Drums – miced up with Samson Drum Kit Mic Set
Track 8 – Drums – miced up with Samson Drum Kit Mic Set
Track 9 – Drums – miced up with Samson Drum Kit Mic Set
Track 10 – Drums – miced up with Samson Drum Kit Mic Set
Track 11 – Kick Drum – miced up with AKG D112
Track 12 – Bass – Direct In, using a Guitar Rig simulation of a Tech 21 Sansamp into a bass cabinet, miced using a Shure SM7
Track 13 – Electric Guitar 1 – Distorted Vox AC30 miced up with Shure SM57
Track 14 – Electric Guitar 2 – Distorted Fender Twin Reverb miced up with Shure SM57
Track 15 – Electric Guitar 3 – Clean Vox AC30 miced up with Shure SM57
Track 16 – Electric Guitar 4 – Clean Vox AC30 miced up with Shure SM57
Track 17 – DI Piano 1
Track 18 – DI Piano 2


Multi-track Recording log
Session 1
Name: Daniel Meer       
Number and length of session: Session 1 – 2 hours
Date of session: 9/11/15
Names of engineers: Lauren Thomas, Daniel Meer
Names of musicians and instruments being recorded: Electric guitar, vocals
Goals of the session (at least 2):
·         Record vocals
·         Record at least two guitar takes
  
I began by setting up the backing track in Cubase: I downloaded a version of the song from the Internet then imported it in, to allow my performers to play along to the actual song. In addition, I also set up a click track (set to around 86.5 bpm) so performers could always be in time.
As stated, I had originally planned to record vocals, but Lauren had a throat infection and was unable to perform. Despite this, I was still able to have several guitar takes recorded; this included the two distorted guitar takes, and a third, clean guitar which plays in the verses and choruses. In addition to this, I also had a bass and piano recorded, played by my friend Sam Keys. All of the instruments today were recorded via Direct Input, and with the exception of the piano, they were all played through Guitar Rig.





Next session, I aim to record the drums, as well as the acoustic guitar takes. Once Lauren recovers from her throat infection, I will also get her to record vocal tracks when she is available.

Session 2
Name: Daniel Meer       
Number and length of session: Session 2 – 2 hours
Date of session: 10/11/15
Names of engineers: Jon Wells, Daniel Meer
Names of musicians and instruments being recorded: Acoustic guitar, drums
Goals of the session (at least 2):
·         Record drums
·         Record the acoustic guitar

During this session, I was able to record the drums; they were played by my friend Lochlan Hope. His kit was miced up using a Samson Drum Microphone Kit (on the snare and hihat), two AKG C1000S for the crash and ride, and an AKG D112 for the kick drum. Unfortunately, I did run out of time to mic and record the acoustic guitar, but instead I was able to lay down a 'guide' track for the acoustic part using an electric guitar, DI'd into Guitar Rig on a clean setting to simulate an acoustic. In addition to this, I also recorded another clean guitar track to back up the acoustic part which I will record later.





Session 3
Name: Daniel Meer       
Number and length of session: Session 3 – 2 hours
Date of session: 20/11/15
Names of engineers: Jon Wells, Daniel Meer
Names of musicians and instruments being recorded: Vocals
Goals of the session (at least 2):
·         Record vocals
During this session, I was able to record the vocals, performed by Lauren Thomas. After completing a basic take (using a Rode NT1A), I then recorded a second take to double track the vocals. In addition to this, I also recorded vocal harmonies during the choruses, then double tracked those too. As well as creating an effect similar to that of the original song, this also helped the vocals cut through above the myriad of guitars I had already recorded.

Next session, I aim to record the acoustic guitar part and thus then be able to remove the guide track.



Session 4
Name: Daniel Meer       
Number and length of session: Session 4 – 2 hours
Date of session: 01/12/15
Names of engineers: Jon Wells, Daniel Meer
Names of musicians and instruments being recorded: Acoustic guitar
Goals of the session (at least 2):
·         Record the acoustic guitar and give the song a basic mix

During this session, I was able to record the acoustic guitar part, using the SM57 placed close to the neck in order to capture the lower frequencies and the C1000S by the sound hole to capture the higher frequencies.

After this, I sought to give the song a draft final mix. I began by removing the piano track, as I felt that it didn't fit well with the other instrumentation. Next, I made sure all of the volumes were balanced against each other, so that no parts were drowned out. I did have a problem with the vocals after the second chorus, as they couldn't be heard over the top of the distorted guitar. I remedied this by turning down the volume of the guitar tracks and changing their EQ so they didn't interfere with the mid-range (where the vocals should sit in a mix).


Evaluation of Individual Tracks




https://soundcloud.com/user-239608709/sets/yellow-dan-meer-seperate-instrument-stems

When I listen back to my individual recorded instrument tracks, I am impressed with the results. All of the recorded live tracks (vocals, acoustic guitar and drums) sound very clear and are of a professional quality. In particular, I am very pleased with the results of the acoustic guitar tracks; my combination of the SM57 and C1000S (two contrasting microphones) worked very well, as each helped to fill in gaps left by the other (the C1000S covered high and low frequencies, whereas the SM57 dealt with the mid-range). The vocal tracks were done to a very high standard, and each take matched up very well. The drums are very clear, with bleed from other parts of the kit minimised as much as possible.

Evaluation of Mix




https://soundcloud.com/user-239608709/yellow-dan-meer-final-mixdown

I am extremely pleased with the results of my final mix, especially as this is the first time I have mixed a track of this scale. All of the vocal takes matched up relatively well, ensuring they cut through the mix without much of an audible "double track" effect. The acoustic guitar sat well with the other tracks thanks to my combination micing technique, and the DI stringed instruments (electric guitar and bass) could be EQ'd in Guitar Rig, so I mostly kept them within the high mids so as not to interfere with the kick and toms at the bass end, nor the cymbals at the high end.

However, without the use of EQ or plugins, I do feel there are some balancing issues, mainly after the second chorus, as the distorted guitars come in over the top of the vocals. If I reduced the volume of those tracks, they would no longer be heard over the top of the drums or other guitars. In future, this could be fixed whilst mixing by placing an EQ on all of the vocal tracks and boosting the mids so the tracks would sit right in the mix. In addition, the cymbals did tend to come in over the top of other instruments; this could be sorted by removing some of the high end (around the 2k area) using a parametric EQ. Finally, the stereo mix clipped ever so slightly; during mixing, I could sort this by using a multiband compressor/limiter within Cubase to balance the low/high frequencies.