Why use dynamic EQ?
If you tame a boomy or harsh frequency with a static EQ, the cut is applied all the time, which can alter the quality of the audio unnecessarily. With a dynamic EQ, you remove the problem only when the offending frequency passes a set threshold, saving you the effort of automating an EQ.
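The core idea can be sketched as a simple threshold/range rule per band. This is a hypothetical simplification for illustration, not the F6's actual algorithm: the cut grows with how far the band's level exceeds the threshold, and is capped at the range setting.

```python
def dynamic_band_gain(band_level_db, threshold_db, range_db):
    """Gain applied to one EQ band, in dB (negative = cut).

    A static EQ would apply range_db all the time; a dynamic EQ
    applies a cut only when the band exceeds the threshold, and only
    as deep as the overshoot, capped at the range setting.
    """
    over = band_level_db - threshold_db
    if over <= 0:
        return 0.0                  # below threshold: band untouched
    return max(range_db, -over)     # cut grows with overshoot, capped

# Band resonating 5 dB over the threshold, range set to -8 dB:
print(dynamic_band_gain(-15.0, -20.0, -8.0))   # -5.0: only what's needed
print(dynamic_band_gain(-40.0, -20.0, -8.0))   # 0.0: quiet sections untouched
```

Note that the cut tracks the overshoot rather than slamming to the full range, which is why a dynamic band sounds less drastic than a static cut in sections where the problem frequency is quieter.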
The Waves F6 has some intriguing features: mid/side operation, per-band sidechaining, and per-band parallel processing, with both compression and expansion available on every band. The plugin is very CPU-efficient, so it can easily replace your standard DAW EQ, with the bonus of dynamic processing when you need it.
I decided to implement the F6 on a song I am busy mixing to put it through its paces.
The obvious place to start testing the F6 is vocals, since vocals these days are often not recorded in the best available space. With so many bedroom recordings, there are a lot of problems with room modes and resonances. In the past I would use either the standard EQ3 from Avid or a Waves C6 to resolve these issues, but the F6 provides the best middle ground between a static EQ and multiband compression.
Listening to the vocal, two problems stood out. The first, more noticeable frequency was around 2kHz, where the mic just wasn't flattering to the female vocalist. The cool thing about the F6 is that it operates exactly like a normal EQ, with the added features of compression. Another smart touch: if you right-click on any EQ band and drag the number, it solos that band automatically.
So I right-clicked, swept for the offensive frequency, and found it just as I would with a standard EQ, setting the Q correctly and everything. Then, instead of cutting that frequency outright, I set the range to -8 and dialed in the threshold until the frequency was tamed. The nice thing about this is that it won't be as drastic in sections where that frequency is less obtrusive. I also removed some low-mid build-up in the same way, cleaning up the vocal in a more natural-sounding way.
Next was the guitar bus. All the electric guitars are bussed through a single aux with some harmonic processing on it. Listening to the guitars, I noticed the bus was very dark (possibly because I hadn't added any EQ), so I decided to try the F6. First I added a small boost around 2kHz with a relatively wide Q, then set the range to -3 and dialed in the threshold to compress by a small amount. Immediately the guitars got some bite, but whenever it got to be too much, the compressor controlled it. I also added a band to remove just a bit of 200Hz, which was more prevalent in the chorus of the song. This kept the guitars present in the verse while removing the build-up in the chorus.
Bass Sidechain Technique
I wanted to try some of the plugin's other features next, so I decided to use the sidechain input on the bass guitar. I set up a send on the kick to use as a key input to the sidechain of the F6. I searched for the frequency in the bass where the kick needed to be more prominent, set the Q as needed and a decent range (-10). I then set the band's sidechain source to external so the EQ band keys from the kick, and adjusted the threshold, attack and release settings to taste. Immediately the low end of the kick had more space and was more audible without overpowering the bass.
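The keyed ducking above can be sketched as a small control loop. Again, this is a hypothetical simplification rather than the F6's implementation, and the function name, block-based detector, and all the numbers are my own: the bass band's gain moves toward the range value whenever the kick's level (the external key) crosses the threshold, with separate attack and release smoothing.

```python
def duck_band(key_levels_db, threshold_db=-20.0, range_db=-10.0,
              attack=1.0, release=0.25):
    """Per-block gain (dB) for a bass EQ band, keyed externally by the
    kick's level. attack/release are smoothing factors in (0, 1];
    1.0 means the gain moves to its target instantly."""
    gains, g = [], 0.0
    for key_db in key_levels_db:
        target = range_db if key_db > threshold_db else 0.0
        coeff = attack if target < g else release  # duck fast, recover slow
        g += coeff * (target - g)
        gains.append(round(g, 2))
    return gains

# Kick hits on blocks 2 and 3; the bass band ducks, then recovers:
print(duck_band([-60.0, -6.0, -6.0, -60.0, -60.0]))
# gain dips to the -10 dB range on the hits, then releases back toward 0
```

Because the release coefficient is smaller than the attack, the band gets out of the kick's way quickly but comes back smoothly, which is why this sounds more natural than automating a static cut.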
After that success with the bass, I explored further. On the overheads, I decided to experiment with the plugin's mid/side features. The overheads had a very natural, well-recorded picture of the drums. I decided to increase the brightness with a shelf, which made the cymbals pop out too much. That was easily fixed with the F6 by setting the range and threshold so the band reacts mostly to the crash cymbals.
Next I went to the high-pass filter, set it to affect only the sides, and filtered out frequencies below 150Hz on the sides of the overheads.
I wanted to bring out the snare more, so I right-clicked a band and searched for a frequency that would help it stand out. Boosting around 1300Hz made the snare stand out more, but it also made the overheads more nasal. The solution was to use mid/side: set the band to mid and cut 3dB, then set a positive value in the range and set the threshold to react to the snare (and toms). Now every time the snare hits, it brings up the level of that frequency band while leaving it cut the rest of the time.
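This snare trick is upward expansion from a static cut, and the logic can be sketched in the same style as before. This is an illustrative simplification with made-up numbers, not the F6's actual code: the band rests at the static cut and rises with the detector's overshoot, never exceeding the cut plus the range.

```python
def expand_band_gain(env_db, threshold_db, base_cut_db=-3.0, range_db=6.0):
    """Band rests at base_cut_db; when the detector exceeds the
    threshold, the gain rises with the overshoot, capped so the band
    never goes above base_cut_db + range_db."""
    over = env_db - threshold_db
    if over <= 0:
        return base_cut_db                       # between hits: stays cut
    return base_cut_db + min(range_db, over)     # snare hit: band comes up

print(expand_band_gain(-30.0, -18.0))   # quiet passage: -3.0 (the static cut)
print(expand_band_gain(-6.0, -18.0))    # hard snare hit: 3.0 (cut + full range)
```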
Finally, I decided to bring out more kick in the overheads as well. I boosted 100Hz in the mid channel by 2dB, again set a positive range of +10, then adjusted the threshold to react to the kick. The kick now had more power in the overheads, again without the 100Hz range becoming overpowering.
Vocals can be very challenging to record. Taking all the aspects into account, having a good vocalist in front of a good microphone into a good preamp will get you about halfway there.
But you could still end up with vocals that have too much bass or sibilance, sound too nasal, lack body, or lack detail.
As it turns out, the fix for all those problems is relatively easy: adjust the mic's position and distance.
Seeing as each singer sounds different and each song requires a unique performance, it stands to reason that we should adjust our mic position for each vocalist.
A good starting point is to place the microphone 10-15 cm away from the vocalist. If the vocals are too thin, you can move it a little closer, though you might then get too much sibilance; if the vocal is too muddy, move the mic further away.
For a sibilant vocal, moving the mic's capsule out of the vocalist's direct path could solve the problem.
I often find that placing the microphone just a little higher than the vocalist's mouth and aiming it down towards his/her sternum gives a very balanced sound without any major sonic issues.
A vocalist's natural instinct is to lift their chin to sing into the microphone, but you can fool them by placing the pop filter lower and asking them to sing into the pop filter.
Microphones Tailored for Instruments
When you browse the spec sheets of some microphones, you will come across a bullet point stating something like: "Frequency response tailored for drums, guitars & vocals". If you check the frequency response of these microphones, you will notice that it is far from flat; in fact, these microphones have peaks and dips at certain frequencies, some of which might seem extreme.
Recording brass and reed instruments can be a daunting task for an inexperienced engineer, largely because you probably haven't had the opportunity to record these less common instruments during your studies. A few weeks ago a client brought in a trombone and a saxophone that he wanted to record for a project. Recording these two instruments is not difficult per se, but it requires sufficient knowledge of their timbres and how the instruments produce sound. Once you know this, the biggest challenge is to place the musician in the correct spot in the room; use your ears to determine the best microphone position for the instrument.
Saxophones are reed instruments, which means the mouthpiece includes a reed that vibrates when air is blown over it. The vibration excites the air within the tube, which also vibrates, creating the instrument's particular sound. Assuming that the sound emanates only from the bell of the sax, and placing a microphone there, will cause problems when the project goes to mixing, making the instrument sound harsh.
Knowing that the sound is generated by the whole body of the instrument should inform how you place your microphone. You can use a dynamic, condenser or ribbon microphone, depending on what you have available and which tone you want; for our session we used a Shure SM57. Place the microphone to the player's right, approximately halfway up the keys of the instrument, angled slightly towards the bell. A good starting point is to have the microphone about 50 cm away from the instrument, then adjust its position while listening to the player perform through headphones.
Recording Brass Instruments
Brass instruments generate sound when the player buzzes pursed lips into the mouthpiece of the instrument. The vibrating air column inside the tubing shapes the instrument's sonic character, which is projected from the bell.
To record brass instruments you can use any dynamic, condenser or ribbon microphone you have available; experiment to hear which one gives you the best tone for your project. We again used the Shure SM57. If you place a microphone directly in front of the instrument, facing the bell, you will capture a very bright sound, which is typically not what you want. If you place it too close, it will pick up all the noises the instrument and the player make during the performance.
Try placing the microphone approximately 50 cm away from the instrument, slightly above the bell, aimed towards the mouthpiece. Listen to the instrument being played through headphones and move the microphone to a better position if required. This might feel a little awkward to musicians who are used to playing directly into a microphone, but it is your responsibility as the engineer to explain where to stand and where to aim their instruments.
In both scenarios if you want to capture more of the room, move the microphone further back. If you want to capture more of the instrument you can move it closer to the instrument.