
The Entire Music Production Process Explained

From Songwriting to the Final Master

The music production process can take months or even years, from the songwriting stage all the way through to mastering and pressing the final product on CD or vinyl. While artists in other fields may rely solely on themselves to produce their work, musicians call on many specialists to help them turn their creation into a product they can sell.

Pre-Production

The band have finished writing their songs and are ready to begin their album production. They know their parts and have probably performed them live on stage and gauged the audience’s response. But before they lay their tracks down onto tape, they want to be sure that each song is as strong as it can be.

Pre-production is a phase in which the finished songs are recorded in a quick and dirty way so that the members of a band can listen to a rough version of their album before committing to expensive recording studio time. They will analyse their songs and make sure that aspects such as tempo, rhythm, song structure and other musical elements are as close to perfect as possible. There might be vocal harmonies that need tweaking, or a song might need a double chorus at the end or a slightly slower tempo. Each band member should sit down, listen to each song in turn and take notes, then discuss possible changes with the rest of the group.

Bands with larger budgets may also use the pre-production phase to head into the studio and record demos they can present to labels and record companies. They will then use the recording to gain funding or a record deal, or to approach a music producer they would like to work with.

The Tools Of Music Production

Audio production can get very expensive. Recording studios will be equipped with gear costing thousands if not hundreds of thousands of dollars. From microphones to mixing desks, computers, digital audio workstations and high-end analogue compressors, each piece of equipment has an important part to play in the music production process.

Microphones


Mics are the first thing we think of when discussing audio production. There are several types of microphones that are used regularly to record music and a few specific mics that are almost always used in recording studios.

Dynamic Microphones

Dynamic mics are the sturdiest and most common type of microphone, and they tend to be the least expensive. Common models include the Shure SM57 and Sennheiser MD421. Dynamic microphones are used in music production for recording instruments such as drums, electric guitar and bass. Horns such as the saxophone, trumpet and trombone are also commonly recorded using dynamic microphones.

Condenser Microphones

Condenser microphones are often used to record acoustic sources such as vocals, acoustic guitar, strings or instruments with a lot of high-frequency content. Models such as the AKG C414 B-ULS, Neumann U87 and Neumann KM184 are used for recording drum overheads. The Neumann TLM 103 is a go-to mic for recording vocals. Schoeps microphones are often used for recording strings in classical music production.

Ribbon Microphones

Ribbon microphones use thin strips of metal, such as aluminium, suspended in a magnetic field. They are used for recording drums, electric guitars, brass and woodwind instruments. They work especially well on harsh sound sources and have become a favourite among audio engineers for drum overheads and overdriven or distorted guitar amplifiers.

Examples of popular ribbon microphones include the Beyerdynamic M160, Royer R-122 or the Coles 4038.

Digital Audio Workstations – The Heart Of Music Production

The DAW or digital audio workstation plays the central role in the modern music production process. Where in the past a large analogue mixing desk was connected to an analogue tape machine, today audio engineers use computers and audio software to record music digitally.

Mixing Desks, Audio Interfaces and All Those Different Cables


Mixing desks are still used today to sculpt sounds with equalisation, but before music can be captured on a computer, it must be converted into the digital realm. An audio interface has inputs for microphone and line signals as well as outputs for speakers and headphones. It sends sound signals to the computer via USB or Firewire.

The DAW communicates with the audio interface and captures the incoming signals, saving them to disk on the computer. The most common digital audio workstations used by modern recording studios include Avid Pro Tools, Steinberg Cubase and Apple Logic Pro. All work in roughly the same manner, adhering to the traditional music production workflow.

When a mixing desk is used in conjunction with a DAW, a recording engineer is able to enjoy the best of both digital and analogue worlds. Outputs are routed from the digital audio workstation via the audio interface and into the inputs of a mixer. The producer or mixing engineer can then use the mixing desk to shape the sounds using EQ.

Equalisation – Frequency-Shaping A Music Production


EQ – short for equalisation – is the method of manipulating the frequencies of an audio signal. An audio engineer is able to boost or cut specific frequencies for technical or creative purposes. This can be achieved using a mixing desk, a special piece of analogue equipment or with a software plugin on a computer DAW.

Equalisation is probably the most important step in the music production process, as it corrects any faults in the frequency spectrum of a signal. A poorly recorded instrument can be improved dramatically by using EQ.

The creative possibilities provided by equalisation are also very valuable when mixing a song. Instruments that take up similar places on the sonic spectrum can be shuffled around sonically in order to create more coherence and space within a mix.
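To make this concrete, here is a minimal sketch of a peaking EQ in Python, using the widely published RBJ audio-EQ cookbook biquad formulas (numpy and scipy are assumed to be available). The 400 Hz cut and the noise stand-in for a recorded track are purely illustrative, not a recipe.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, freq_hz, gain_db, q=1.0):
    """Boost (positive gain_db) or cut (negative) a band around freq_hz.
    Coefficients follow the widely used RBJ audio-EQ cookbook."""
    a_lin = 10 ** (gain_db / 40)                 # amplitude factor
    w0 = 2 * np.pi * freq_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

# Illustration only: cut 3 dB of low-mid "boxiness" around 400 Hz
fs = 44100
track = np.random.randn(fs)                      # stand-in for a recorded track
cleaned = peaking_eq(track, fs, freq_hz=400, gain_db=-3.0, q=1.4)
```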

Compression – Reducing The Dynamic Range Of An Audio Production


A producer will also use compression to achieve one of the main goals of music production: making music sound larger than life. Although the term suggests making things smaller, compression, when used correctly, can actually make instruments sound bigger and louder. A compressor reduces the dynamic range of an audio signal by lowering the highest amplitudes and lifting the more nuanced elements.

Compression is regularly used on dynamic instruments such as drums, vocals, bass and brass instruments, as it can help tame and sculpt the sounds. In styles of music such as classical and jazz, compression is used sparingly during music production due to the desire for more natural recordings.
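As an illustration of that gain-reduction behaviour, here is a bare-bones feed-forward compressor sketched in Python with numpy. The threshold, ratio and timing values are arbitrary starting points, not recommendations, and a real studio compressor does considerably more.

```python
import numpy as np

def compress(x, fs, threshold_db=-18.0, ratio=4.0,
             attack_ms=5.0, release_ms=80.0, makeup_db=6.0):
    """Feed-forward compressor: level above the threshold is scaled back by
    the ratio, then makeup gain lifts the whole signal."""
    eps = 1e-10
    level_db = 20 * np.log10(np.abs(x) + eps)        # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)
    target_db = -over_db * (1 - 1 / ratio)           # desired gain reduction

    # One-pole smoothing so gain changes follow the attack/release times
    att = np.exp(-1.0 / (attack_ms * 0.001 * fs))
    rel = np.exp(-1.0 / (release_ms * 0.001 * fs))
    gain_db = np.zeros_like(target_db)
    g = 0.0
    for n, t in enumerate(target_db):
        coeff = att if t < g else rel                # clamp down fast, recover slowly
        g = coeff * g + (1 - coeff) * t
        gain_db[n] = g
    return x * 10 ** ((gain_db + makeup_db) / 20)
```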

Reverb – Creating Atmosphere And Space

Creating an atmosphere for the instruments and the song to reside in is one of the key goals of music production. The main tool an audio engineer uses to achieve this is reverb. Reverb is an echo-like effect that, when added to a signal, makes it sound as if it were created in a space different from the actual recording room. Traditionally, large, reflective rooms were used in recording studios to add reverb, as were special plate reverb devices. Spring reverb is still used today in some guitar amplifiers. The most common reverb device in modern music production is either a digital outboard unit or a VST plugin in a DAW.

Reverb plugins can range from simple algorithms with few controls to complex modelling software into which impulse responses from real-world spaces can be imported. The plugin is added to an auxiliary input track and fed the dry signal from an instrument. The wet signal with the reverb is then mixed in with the original instrument, which gives the impression that the instrument was recorded in the reverberating space.

Reverb is an effect that really lifts instruments and nests them into the mix. Drums, vocals and piano all benefit greatly from a bit of added reverb.
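The impulse-response approach boils down to a convolution, which can be sketched in a few lines of Python with numpy and scipy. Here the impulse response is synthetic decaying noise standing in for a measured real-world space, and the 25% wet mix is an arbitrary choice.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100

# Stand-in impulse response: 1.5 s of exponentially decaying noise.
# A real convolution reverb would load a measured response of an actual space.
t = np.arange(int(1.5 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-4 * t)

dry = np.random.randn(fs)                           # stand-in for a recorded instrument
wet = fftconvolve(dry, ir)                          # the instrument "played" in the space
wet /= np.max(np.abs(wet)) + 1e-10                  # keep the wet level sane

mix = 0.25                                          # aux-send style wet/dry blend
out = np.concatenate([dry, np.zeros(ir.size - 1)])  # pad dry to the wet length
out = (1 - mix) * out + mix * wet
```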

Closely related to, but not quite the same as, reverb is the delay effect. Delay is used a little more creatively than reverb during music production, but often to achieve similar goals. Delay is simply a copy of the original signal played back at a later time. The copy can be repeated once, several times or an infinite number of times. The possibilities are endless.
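A single feedback delay line reduces to one recursive filter. This Python sketch (numpy/scipy assumed) implements exactly that; the 375 ms delay time and feedback amount are arbitrary example values.

```python
import numpy as np
from scipy.signal import lfilter

def feedback_delay(x, fs, delay_ms=375.0, feedback=0.4, mix=0.3):
    """Echo effect: each repeat returns delay_ms later, scaled by feedback.
    Keep feedback below 1.0, or the echoes grow instead of dying away."""
    d = int(delay_ms * 0.001 * fs)
    dry = np.concatenate([x, np.zeros(5 * d)])   # leave room for the echo tail
    a = np.zeros(d + 1)
    a[0], a[d] = 1.0, -feedback                  # y[n] = x[n] + feedback * y[n - d]
    wet = lfilter([1.0], a, dry)
    return (1 - mix) * dry + mix * (wet - dry)   # wet minus dry keeps only the echoes
```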

Monitoring During Music Production – Speakers Or Headphones?


There are two options for monitoring during music production, and both have their advantages and pitfalls. The most obvious option is to monitor with a pair of speakers. For optimal results, an audio engineer should mix on good-quality speakers that have been installed properly in an acoustically treated room. With the proper setup, the engineer hears the music in a neutral environment and can balance the sounds coming from the DAW with confidence.

However, if the speakers are of poor quality or the room hasn’t been acoustically treated, thus colouring the sound, all attempts to create a coherent mix will be for nothing. It is imperative to have a good listening environment when mixing music using speakers.

Headphones Can Be Tricky

Monitoring using headphones removes the need for a good listening environment. Yet headphones have their own disadvantages. They tend not to have as linear a frequency response as studio speakers, presenting a slightly different overall sound from what is actually being played back by the DAW.

Monitoring reverb and effects reliably is also often tricky when using headphones. A music producer must be careful not to add too much, because cans tend to mask the actual volume of the effect. One also tends to monitor a mix at too high a volume when using headphones, resulting in a mix that lacks punch when played at low volumes.

It is always a good idea to listen to a mix on several different sound systems in order to gauge how it will translate. Every time a mix version is finished, an audio engineer should listen on earbuds, a hi-fi stereo, a car stereo and any other systems available. It is at these times that problem elements tend to stick out which were not particularly disruptive in the studio.

Mastering – Fine-Tuning The Mix For Broadcast And Release


The final stage of music production is mastering. This involves processing the stereo mix and fine-tuning it for maximum compatibility across different sound systems and media platforms. Mastering involves processing the frequency spectrum, adjusting the overall loudness, and adding metadata such as song and album titles and release codes to the digital audio file.

Adjustments to the frequency spectrum involve cutting any areas that are too overpowering, or boosting any frequencies that aren’t present enough. A mastering engineer will generally only add or cut small amounts of any given frequency to enhance the mix rather than change it dramatically.

Achieving a certain level of loudness is required so that there are little to no differences in volume between tracks. A song that is quieter than others will attract less attention on the radio than a louder one. On the other hand, a song that has been mastered for maximum loudness may be unpleasant to listen to, with all instruments fighting to be heard. A good mastering engineer will use compression and limiting to lift a song to a good level of loudness without destroying the original mix.
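In spirit, that balancing act can be reduced to a crude sketch: lift the mix toward a target average level, but never let the peaks pass a ceiling. Real mastering relies on LUFS metering and true-peak limiting rather than the plain RMS and sample-peak measures below, and the -14 dB target is simply a commonly cited streaming reference assumed here for illustration.

```python
import numpy as np

def master_loudness(x, target_rms_db=-14.0, ceiling_db=-1.0):
    """Lift the mix toward a target average level, but never let sample
    peaks pass the ceiling. A crude stand-in for compression + limiting."""
    eps = 1e-10
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + eps)
    peak_db = 20 * np.log10(np.max(np.abs(x)) + eps)
    gain_db = target_rms_db - rms_db                 # gain needed to hit the target
    gain_db = min(gain_db, ceiling_db - peak_db)     # back off before peaks clip
    return x * 10 ** (gain_db / 20)
```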

Other tasks performed by mastering engineers include creating separate master versions for vinyl or for particular streaming platforms. A CD master is delivered to production plants as a DDP, a special digital format with all album information embedded into the file as text. Digital audio files also contain text information to display artist, song and album titles when played on the radio.
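As a small illustration of that embedded text, ID3 tags in an MP3 master can be written with a few lines of Python using the mutagen library. The library choice, the file name and all tag values here are assumptions for the sketch, and the file must already carry an ID3 tag header.

```python
from mutagen.easyid3 import EasyID3  # pip install mutagen

tags = EasyID3("album_master_track01.mp3")  # hypothetical mastered file
tags["artist"] = "Example Artist"
tags["album"] = "Example Album"
tags["title"] = "Example Song"
tags["isrc"] = "USXXX2500001"               # made-up release code (ISRC)
tags.save()
```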
