Features of Mastering in Academic Music. Typical Mistakes of Sound Engineers When Recording and Mixing Phonograms
Typical mistakes of sound engineers when recording and mixing phonograms
Determining the quality of a recorded phonogram is a rather difficult task. For professionals, the "like it or don't like it" criterion is not enough: to decide on the next steps, one needs a detailed picture of the strengths and weaknesses of the created sound image. The resulting sound must be divided, disassembled into its constituent components.
This is how artists proceed, for example, when evaluating the drawing, color, perspective, brushstroke texture and elaboration of detail in a painting. The tool for such analysis is well known: the OIRT test protocol. It helps the sound engineer quickly and methodically assess the shortcomings of his work while forming the sound image and make the appropriate corrections.
It is therefore convenient to systematize the typical mistakes of novice sound engineers on the basis of this document.
As a preliminary note: all the parameters included in the protocol depend closely on one another, and changing one of them inevitably changes the rest. Thus, transparency is influenced by the combination of spatiality and timbre: it improves in the presence of bright, clear timbres and deteriorates as the spatial characteristics increase. The depth planes depend on the combination of the spatiality of the sound components and their musical balance, and their presence, in turn, affects transparency. Long practice shows that a poorly performed piece will never sound effective and beautiful in a recording: ragged ensemble playing inevitably leads to insufficient transparency, and if the performer has not fully mastered the instrument, a beautiful timbre will not emerge.
The first assessment parameter is spatial impression, or spaciousness: the impression of the room where the recording took place. Spatiality characterizes the sound picture in width (the stereo impression) and in depth (the presence of one or several planes).
Spatiality creates a sense of distance from an instrument or group of instruments. It is good if the distance to the performers in the recording seems natural and easy to judge. Spatiality in a recording is shaped by early reflections and reverberation, by their timing and level.
The optimal, most comfortable sense of spaciousness for the listener depends on the genre of the music. Here the following features of the musical material can be pointed out: the scale (i.e., the chamber intimacy or, conversely, the grandeur and massiveness) of the musical drama conceived by the composer, and the music's belonging to a particular historical layer, such as medieval Gregorian chant, Baroque music or modern musical constructions.
The main spatial "instrument", reverberation, gives the sound volume and airiness, increases the apparent loudness of the phonogram and, as it were, multiplies the performers. However, this spectacular sonic color, if used ineptly, leads to a loss of transparency: the attack of subsequent notes is smeared. Timbre suffers as well, because mostly the mid frequencies are reverberated. If the timbres of the far and near planes differ too much, the sound may split into separate planes. Mixing artificial reverberation with the signal of a close microphone sounds especially unnatural: all the fast transient processes (the vocalist's consonants, the clatter of bayan or clarinet keys) remain very close, while the sustained sound itself (vowels, long notes) recedes and "floats away".
The next parameter by which a sound recording is judged is transparency: the clarity of the musical texture, the distinguishability of the lines of the score. The concept of "transparency" also includes the intelligibility of the text if the piece is vocal music with words.
Transparency is one of the sound parameters most perceptible to the listener. However, clarity and transparency are not always required; it depends on the genre. For example, when recording a choir, the voices of individual choristers must not be distinguishable within the parts. This is usually achieved by sacrificing some transparency, moving the choir further away and making the recording more airy and spacious. Of one recording of Shostakovich's Fifteenth Symphony, in which all the components of the orchestra were exposed too vividly, some sound engineers said that it was not a recording of a symphony but a textbook of the composer's orchestration.
Transparency is a litmus test for a sound engineer: working actively with space while maintaining complete sonic clarity is the most difficult thing in sound engineering. As already mentioned, transparency degrades through loss of timbre and through spatial errors, for example, too high a diffuse-field level ("a lot of reverberation"). This means that too much signal from "neighboring" instruments is leaking into the microphones, or that the engineer has dialed in so much artificial reverb that it masks the weaker components of the direct signals. Loss of timbre, in turn, occurs mainly because of inaccurately placed microphones (more on that later).
Dynamic signal processing is also fraught with loss of transparency. To begin with, one of the most important components of a timbre is the onset of the sound, its attack. If the compressor's attack time is chosen shorter than the attack time of the instrument, its timbre comes out sluggish, squashed and expressionless. And if all the signals of the phonogram are compressed at once, the result is a general "mush"; after all, the secret of good ensemble playing lies precisely in the players yielding to one another.
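A rough illustration of why this happens can be sketched in a few lines: a toy feed-forward compressor, not any specific hardware or plug-in, with arbitrary parameter values. A synthetic "pluck" with a 5 ms onset is compressed twice, once with an attack faster than the instrument's own and once with a slower one.

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0, attack_ms=1.0, release_ms=100.0):
    """Minimal feed-forward compressor: envelope follower plus static gain curve."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    gain = np.ones_like(x)
    for i, s in enumerate(np.abs(x)):
        coeff = atk if s > env else rel        # attack on rising level, release on falling
        env = coeff * env + (1.0 - coeff) * s
        level_db = 20.0 * np.log10(max(env, 1e-9))
        if level_db > threshold_db:
            over_db = level_db - threshold_db
            gain[i] = 10.0 ** (-(over_db - over_db / ratio) / 20.0)
    return x * gain

sr = 44100
t = np.arange(int(0.2 * sr)) / sr
# Synthetic "pluck": 5 ms rise, then exponential decay, on a 440 Hz tone.
pluck = np.minimum(t / 0.005, 1.0) * np.exp(-t / 0.05) * np.sin(2 * np.pi * 440 * t)

fast = compress(pluck, sr, attack_ms=0.5)   # faster than the instrument's own attack
slow = compress(pluck, sr, attack_ms=20.0)  # slower: the transient passes through intact
```

With the 0.5 ms attack the gain reduction clamps down during the pluck's own 5 ms onset, flattening the transient; with the 20 ms attack the onset passes before the compressor reacts, so the peak survives.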
A very important parameter is musical balance, that is, the loudness relationship between the parts of an ensemble or orchestra. In some cases, when recording a large instrument such as a grand piano or an organ, one may speak of the balance between its registers. The musical balance should follow from the score, correspond to the intention of the composer or conductor, and be preserved through all the nuances from pp to ff.
Good balance in a recording is not difficult to achieve, especially when dealing with acoustically unrelated signals (i.e., when mixing multitrack recordings). But failures happen here too. Besides simple carelessness, there are several objective reasons for poor balance.
First, a lack of musical culture and undeveloped taste on the part of the sound engineer. He often does not understand the relative importance of a particular part; it seems to him that everything the musicians play should sound equally loud. The recording becomes "flat" and rumbling. We call this type of balance "engineering mixing".
Secondly, an excessive monitoring level during mixing (above 92 dB) can play a cruel joke on the sound engineer. When such a recording is played at home, especially through cheap "soap-box" speakers, all the mid-frequency components of the signal come forward, above all the solo parts: singers, brass, electric guitars. Cymbals, the various shakers and bells and, most importantly, the bass and the kick drum retreat into the shadows. On the whole, the accompaniment hides behind the vocals, and the dramatic "support" of the solo part, the echoes and counterpoints laid down by the arranger, disappears.
Thirdly, there are obvious flaws that can result from imperfections in the sound engineer's "mirror": the monitors and the listening room. It is especially difficult to balance narrow-band signals such as a hi-hat, a harpsichord (especially one recorded with only its metallic jangle and no body resonance), or a recorder. Such signals also include a bass recorded without overtones and a kick drum recorded without its characteristic percussive attack. On different loudspeakers and in different rooms, with their unavoidable frequency-response deviations, the balance of such instruments will differ. Often, during a session, you do not know which pair of monitors to believe, especially since the headphones show yet a third result. Moreover, working only on headphones gives a dramatic improvement in apparent transparency, and it is very difficult to predict how the phonogram will sound over loudspeakers.
Some sounds subject the engineer to a "ticking-clock effect" over time: he ceases to notice the repetitive sounds of the hi-hat, "automatic" percussion and the like. This, too, regularly leads to imbalances, since the sound engineer simply stops monitoring such signals during the mix.
The next most important parameter of a sound recording is the timbre of instruments and voices. The rendering of timbre should be natural, and the instrument should be easily recognizable to the listener.
In many cases, however, the natural sound of the instrument is deliberately transformed by the sound engineer, for example, to compensate for distortions introduced when the signal is picked up by the microphone, or to mask imperfections in the sound of the instrument itself. It is often necessary, for instance, to "correct" the timbre of flutes, domras and balalaikas by raising the region of their fundamental tones around the first octave. In a twelve-string guitar, the "silvery" quality is usually emphasized with an equalizer, and the sound of a harpsichord can be given a spectacularly "nasal" character with a parametric filter. Deliberate transformation of timbre is also very often used to create new colors, for example, when recording fairy tales, film music and the like.
The timbre is influenced by every device in the signal path, and if you analyze the path you can find "pitfalls" that threaten to produce defects. Take the microphone. It is known that bringing it closer to the sound source yields a brighter timbre, since the full frequency spectrum of the source is captured. However, almost all acoustic sources have a redundancy of timbral components: on the way to the listener in an ordinary, unamplified hall, some of them are inevitably lost. Therefore, by placing the microphone at a close point from which no listener ever hears the instrument, you can get a sound quite unlike the familiar one. A violin recorded by a microphone placed close to the instrument and aimed perpendicular to the soundboard will sound hard, harsh and rough, with a "rosin" tinge, as musicians say. In a voice recorded by a close microphone aimed directly at the mouth of a singer or reader, the sibilant consonants are exaggerated, and in academic vocal recording so is the high singing formant around 3 kHz, very effective when heard from two meters or more and unbearably sharp at 40 centimeters.
The equalizer should enrich and embellish the sound, but its excessive use often produces the opposite result: the sound becomes "narrow", with a "gramophone" quality, especially on an inexpensive console with a single fixed-frequency parametric filter on every channel strip. At one time we called such recordings "presenced" (from the "presence" filter). In addition, inept correction can raise the noise of the medium or the studio.
The next parameter is the performance, and it is perhaps of primary importance for the quality of the recording: it is the performance that is decisive for the listener. Equally important here are the technical aspects (the quality of tone production, the ensemble's tuning, purity of intonation, etc.) and the artistic and musical ones (the interpretation of the work, its correspondence to the style of the era and the composer).
The sound engineer's role here is also very great, since he influences both the technical and the artistic side of the performance. He must find a common language with musicians of every rank and create a fruitful creative atmosphere during the recording. Only this allows an artist to reveal his talent to the fullest. A relationship of trust is needed between the sound engineer and the artists: musicians should be able to reveal their weaknesses to him without fear, knowing that they will certainly be helped and that everything will be done to give the session a good artistic result. At the same time, one should never indulge a performer who is clearly unprepared for recording. Not a single recording should leave the sound engineer's hands that the performer would eventually have to blush over, and no immediate benefit should lower this standard.
I especially want to warn novice sound engineers against a common attitude: "What can they do without me, these artists! They always play out of tune and out of time; I save them; if it weren't for me..." This is nothing but a manifestation of one's own inferiority complex. In reality, the relationship with the performer should rest on three principles: respect for the artist, respect for oneself, and mutual goodwill. To such a specialist, as they say nowadays, "clients will come"; people of the most varied characters will be glad to work with him.
How should the "performance" parameter be assessed in today's fashionable computer music? I have heard quite a few discs recorded on synthesizers that are rhythmically tight, effective in timbre and... boring after the first ten minutes of listening: the impression of a beautifully playing music box with its dead metronomic precision. These sounds lack one of the main features of live performance, the "human factor", the physical labor of playing a musical instrument. The composer Eduard Artemyev, while overdubbing a trumpet part onto a phonogram, drew our attention to the fact that no synthesizer, however accurately it reproduces the timbre of the instrument, can "depict" the embouchure tension of a musician playing high notes. That tension is always present in the sound; it affects the listener and makes him empathize.
The parameter of technical quality is perhaps the most volatile of all. Recordings made ten years ago are technically imperfect by today's standards and need restoration. To the traditional interference (noise, hum, electrical clicks), distortion, frequency-response irregularities and resonances at individual frequencies have been added quantization noise, jitter, the artifacts of various computer noise suppressors, and much more.
In phonograms prepared for CD release, radio broadcasting and the like, electrical interference is unacceptable. Acoustic noises, in turn, divide into studio noises (the hum of ventilation, external intrusions) and performance noises (the breathing of musicians, creaking furniture, the knock of a piano pedal or of woodwind keys, etc.). The technical specifications for magnetic phonograms define the permissibility of performance noises as follows: "Performance noises are allowed if they do not interfere with the perception of the music." And this is absolutely correct, since performance noise should be judged by aesthetic standards, which are within the competence of the sound engineer conducting the recording.
The converse is also true: noise that interferes with artistic perception must be fought mercilessly. There is a recording by a very respected guitarist on which, alongside the heartfelt performance, one can clearly hear... puffing. The sound engineer, along with the editor, the producer and everyone else who released such a recording for mass sale, committed, frankly, professional malfeasance!
Perhaps the most common defect of technical quality is the result of overload. The mixing console is a rather insidious pitfall on the way to good sound... Naturally, a sound engineer wants his recording to sound as loud, bright and effective as possible. But that effectiveness is achieved by a combination of several parameters, chiefly timbre and transparency. When monitoring loudly, the shortcomings of these parameters are partly compensated by a natural mechanism of auditory perception known as the Fletcher-Munson effect (the "equal-loudness curves"). It is certainly better to monitor loudly than to record at the maximum level; that is where analog distortion, or the completely unacceptable digital "over", begins. That simply turning up the volume does not improve a recording is obvious even to beginners, yet slowly "nudging up" the faders one by one always seems reasonable. The result is the familiar fight against overload: individual faders creep up while the master goes down.
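The digital "over" mentioned above can be spotted mechanically. A common heuristic, sketched here in illustrative form rather than as any particular meter's algorithm (the run length of 3 is an arbitrary choice), is to flag runs of consecutive full-scale samples and compare peak against RMS:

```python
import numpy as np

def count_overs(samples, full_scale=1.0, run_length=3):
    """Count runs of at least run_length consecutive full-scale samples:
    a simple heuristic for spotting digital clipping."""
    overs, run = 0, 0
    for at_full_scale in np.abs(samples) >= full_scale:
        run = run + 1 if at_full_scale else 0
        if run == run_length:
            overs += 1
    return overs

def peak_and_rms_db(samples):
    """Peak and RMS levels in dB relative to full scale."""
    peak = 20 * np.log10(np.max(np.abs(samples)))
    rms = 20 * np.log10(np.sqrt(np.mean(samples ** 2)))
    return peak, rms

sr = 44100
t = np.arange(sr) / sr
sine = 0.5 * np.sin(2 * np.pi * 1000 * t)   # peaks at -6 dBFS: safe
hot = np.clip(4.0 * sine, -1.0, 1.0)        # driven 12 dB too hot and clipped

peak, rms = peak_and_rms_db(hot)
```

The clean sine produces no overs; the overdriven one produces thousands, its peak pinned at exactly 0 dBFS while the RMS stays several decibels below.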
Indeed, in sound engineering the law of "reverse action" often applies: if you want something louder, take down what masks it; if you want to raise the bass, bring out the midrange...
If the recording is stereophonic, one more parameter is assessed: the quality of the stereo picture. Here one considers the width and fullness of the stereo base, the absence of a "hole in the middle", the equal informativeness of the left and right sides, and the absence of distortion.
The basic rule for forming a stereo picture, "the air is wider than the sound", means that a mono signal will sound "stereophonic" only if it is accompanied by a wide-sounding diffuse field. All artificial reverbs are built on this principle, which is why they typically have one input and two outputs.
A common mistake in creating a stereo image is needless narrowing of the base with the pan controls. It should be remembered that the "ping-pong" effect sounds unnatural only in classical music, and even there not always. In a pop recording, a "roll call" between the parts works only to the good, and there is no need to fear it.
A few words about the concept of "informativeness". The musical fabric divides into essential, defining, well-localized components and auxiliary ones that fill out the texture. The first include, for example, the melody and the accents and "riffs" filling the pauses: everything the listener notices first. The auxiliary components are the various pedals (long notes or chords), doublings of the main voice (whether by another instrument or by a delay, it is all the same), and continuous textural-harmonic figuration. It is equal informativeness of the left and right sides of the stereo base that gives the listener a comfortable sense of correct stereo balance, not equal signal levels in the right and left channels, which is what the meters usually show. This means that if one instrument carries the melody, it should be placed in the center; if the melody alternates between two voices, they should be placed on either side of the base. An example of an unfortunate arrangement is the so-called "American" concert seating of a symphony orchestra, where all the melodic, well-localized parts (violins, flutes, trumpets, percussion, harps) sit to the conductor's left, while on the right, among the frequently playing instruments, there are only the oboes and cellos.
Perhaps we can stop here. I am often asked what the most difficult thing in sound recording is. The answer is simple: you need to develop the ability to cope with stress, to remain in command of the situation and to monitor the resulting sound continuously. This usually takes about ten years of independent work, with the self-taught specialist uncovering the secrets of the craft by trial and error. I very much hope this article will shorten that period a little for its readers...
Is mastering needed in academic music? Why do we hear the word "mastering" far less often in classical music than in pop? Look at discs of academic music: it is often indicated who recorded the disc (recording) and who edited it (editing), but mastering is not always mentioned... And if one does master such material after all, what exactly should be done? Classical music does not need the RMS levels of pop, and naturalness is paramount, so why a separate procedure at all?
Such questions concern many sound engineers and advanced listeners because, given the very different approaches to recording, mixing and sound shaping in the academic and pop genres, the resulting phonograms differ radically in dynamics, timbre and loudness. What should one do: leave the classics natural but "quiet" and super-dynamic, or bring them closer to pop at the risk of losing important dynamic nuances? Let us try to figure it out.
About restrictions
Mastering can be roughly divided into two components. During the technical part of the process, the engineer usually sets the order of the tracks, adjusts the pauses between them, reduces the bit depth of the audio files (applying dithering where necessary) and, if the output format is CD-Audio, "cuts" the master disc in accordance with the Red Book standard (it is more accurate to call this process pre-mastering - ed.).
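The dithering step mentioned here can be sketched in a few lines. This is a minimal illustration of TPDF (triangular) dither before 16-bit truncation, not the algorithm of any particular mastering tool:

```python
import numpy as np

def to_16bit(x, dither=True, rng=None):
    """Quantize float samples in [-1, 1) to 16-bit PCM, optionally adding
    one-LSB triangular (TPDF) dither before rounding."""
    if rng is None:
        rng = np.random.default_rng(0)
    scale = 2 ** 15                      # one LSB = 1 / scale
    y = x * scale
    if dither:
        # Sum of two uniform variables in [-0.5, 0.5) -> triangular PDF.
        y = y + rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(y), -scale, scale - 1).astype(np.int16)

sr = 44100
t = np.arange(sr) / sr
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)   # a very quiet tone, ~3 LSB in amplitude

plain = to_16bit(quiet, dither=False)
dithered = to_16bit(quiet, dither=True)
```

Without dither the quiet tone collapses into a coarse seven-step staircase whose error is correlated with the signal (audible as distortion); with dither the quantization error is decorrelated into benign noise.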
These processes have their subtleties, of course, but in this article we will talk about the creative part of mastering. The general goal of such work is to act on a finished, mixed phonogram so that it is published more successfully: played back on a variety of loudspeaker systems, rotated on radio, sold commercially, and so on. It is a fresh look at the phonogram by a new person in new (usually better) conditions, the final touch on the musical picture, adding whatever the mastering engineer feels is missing for the best sound of the existing track, or removing whatever is superfluous and distracting.
In this article we will also touch only briefly on mastering as the job of assembling different recordings into one album. In that case, in order to match differing phonograms, the range of actions and the degree of the engineer's intervention may differ significantly from what is discussed below.
Ears and equipment
Just in case, let us recall that the mastering engineer and the sound engineer who created the phonogram in the studio should not be the same person. The hearing of a mastering engineer is considered to be specially attuned to the perception and analysis of a finished phonogram: it focuses less on the balance of instruments, the space and the planes, and more on the overall impression the phonogram makes on the listener, on the overall timbral balance and dynamic characteristics. Here, without doubt, a vast experience of listening to finished phonograms tells.
Mastering studio
In addition, the demands on the equipment and acoustics of mastering rooms are much higher than on recording and mixing studios. Since the mastering engineer is the final link in the chain of work on the phonogram, his working conditions must be as honest as possible: the acoustics of the room as neutral as possible, with top-class mid- or far-field monitors highly desirable so that the widest possible frequency range can be controlled. The means of influencing the phonogram in mastering studios are usually outboard equipment of the highest class: digital, analog and tube devices for dynamics and frequency processing.
About genre differences
So, creative mastering is ubiquitous (or at least should be) in pop music, and in that genre the mastering is usually done by a separate specialist. Comparing this with academic music, let us try to understand why the situation there is different.
In pop music, mastering is preceded in most cases by a separate mixing process, during which the sound engineer strongly, in places drastically, alters the timbre, dynamics and other characteristics of the signals, moving them away from their natural state for the sake of the overall sound of the mix. Making such deep interventions, the engineer inevitably becomes dependent on the conditions of the mixing studio: the monitors, the control-room acoustics, the processing devices...
Moreover, as the phonogram is listened to again and again during mixing (and before that, during recording and editing), the ear inevitably becomes dulled: you get used to the sound, and the chances of being misled by the local acoustic and hardware conditions grow. This is exactly where the fresh perspective of the mastering engineer comes in handy. Hearing this music for the first time under his own conditions, he can catch and adjust details that the other participants in the project have long stopped noticing, making the final touches on the strength of his rich experience of listening to finished phonograms.
The mastering function of increasing the density and loudness (RMS) of the phonogram is also particularly in demand in pop music, so that a song compares favorably with other phonograms and does not get lost in level, for example, in radio broadcast.
And in academic music? Here, on the contrary, one of the main goals is to convey the naturalness of the timbres. Moreover, a mixing process is often absent altogether: to this day, academic music is frequently recorded straight to stereo and is not processed further, for both creative and technical reasons (in the First and Fifth studios of the GDRZ, for example, multitrack recording is currently impossible with the in-house equipment).
The first studio of the GDRZ
Where mixing does take place, it amounts to balancing the various microphones and microphone systems, panning, applying artificial reverberation and, to a much lesser extent, equalization and dynamics processing.
As for loudness, the situation is ambiguous. On the one hand, it is logical to strive to convey the dynamics of a live performance in all its richness. But the dynamic range of a symphony orchestra reaches 75-80 dB, while the background noise level in a quiet rural room may be 20-25 dBA, and in city apartments up to 40 dB and higher. Now let us turn to the table.
![](https://i0.wp.com/prosound.ixbt.com/recording/mastering/table.jpg)
Loudness table
It shows the correspondence between actually measured sound-pressure levels in a hall during the performance of a symphony orchestra and the dynamic levels indicated in the score.
It turns out that in an averagely noisy city apartment a listener who sets the playback volume roughly "as in the hall" simply will not hear the ppp nuance, and pp will blend into the apartment's background noise. If the volume is raised by 15-20 dB, the quiet passages are rescued, but the loud ones burst into the zone of unpleasant sensation for the hearing, and problems with the neighbors may arise as well...
Thus, listening to the full dynamic range of academic music requires a room with a low background-noise level, and not every listener has one.
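The arithmetic behind this can be sketched in a few lines of code. The SPL figures assigned to the dynamic marks below are illustrative assumptions standing in for the table above, not measured data, and the 6 dB audibility margin is likewise arbitrary:

```python
# Hypothetical SPL values (dB) for orchestral dynamic marks as heard in the hall.
marks = {"ppp": 40, "pp": 50, "p": 60, "mf": 70, "f": 80, "ff": 90, "fff": 100}

def audible_marks(noise_floor_db, gain_db=0, margin_db=6):
    """Dynamic marks that stay at least margin_db above the room's noise floor
    when playback is offset by gain_db from the 'as in the hall' calibration."""
    return [m for m, spl in marks.items()
            if spl + gain_db >= noise_floor_db + margin_db]

quiet_room = audible_marks(noise_floor_db=25)                # quiet rural room
city_flat = audible_marks(noise_floor_db=45)                 # noisy city apartment
city_louder = audible_marks(noise_floor_db=45, gain_db=15)   # playback turned up
```

Under these assumptions every nuance survives in the quiet room; in the noisy apartment ppp and pp drop below the noise, and only turning playback up by 15 dB recovers them, at the cost of pushing fff toward uncomfortable levels.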
As for the overall RMS level of the phonogram: if the natural dynamic range is preserved, classical music will most likely sound quieter than pop in a direct comparison, which lowers its chances of being perceived by the listener "in all its glory"...
Hence the alternative approach: compress the dynamic range to more acceptable values and raise the average level, and with it the overall loudness, by means of dynamics processing. But this has its drawbacks too: the natural breathing of the music begins to be lost, and the dynamic nuances are smoothed away...
Reflecting in this way on the role of mastering in academic music, we arrive at two conclusions. On the one hand, it is not needed if the goal is to preserve complete naturalness. On the other hand, at the present stage of sound recording such a phonogram may lose out dynamically and timbrally in comparison with others.
The idea that mastering may not exist as a separate process was confirmed in our correspondence by the well-known Norwegian sound engineer Morten Lindberg (www.lindberg.no):
"Our work is structured in such a way that there is no clearly separated mastering process as such. Recording, editing, mixing and mastering are inextricably linked and have very blurred boundaries, where each next stage is largely prepared during the previous one."
From words to deeds!
Reasoning is all very well, but what about real, modern sound-engineering practice? To learn the details, I decided to gather the opinions of experienced colleagues directly involved in academic music, both recording and mastering engineers, and to find out how things stand in the sound community abroad.
Do you need a specialist?
In talking with professionals, it turned out that none of the Moscow academic sound engineers I interviewed hands recordings of academic music to a separate mastering specialist; everyone prefers to carry the work through to the end on his own. The reasons vary:
- no need to hand the phonogram to yet another person, and everything necessary for independent mastering is at hand;
- no budget for a separate, additionally paid operation;
- the absence, in the professional community, of engineers with sufficient experience in mastering academic music, meaning there is no one to entrust the process to; there is a "danger of unfamiliar hands".
In such a situation, accordingly, mastering engineers end up with very little experience of academic music.
There are many amusing stories about the inept mastering of academic music by people far removed from the genre. Gennady Papin, a sound engineer at the Mosfilm tone studio, shared several of them with me:
"There was a case when a finished disc by the Alexandrov Ensemble was sent to Poland for release. After listening to the phonogram, the local engineers boosted the high frequencies by 15 (!) dB, considering the sound dull and muffled.
Another time, American engineers applied a noise suppressor to a recording of our choir in order to remove the analog noise; as a result, the recording sounded as if wrapped in cotton wool.
Another common case: pop mastering engineers cut out the live pauses between numbers in classical recordings and apply fast fade-outs, so that the listener is constantly switched between the atmosphere of the hall and dead digital silence."
![](https://i0.wp.com/prosound.ixbt.com/recording/mastering/papin.jpg)
Gennady Papin
Discussing whether a separate mastering engineer is needed, the well-known Moscow sound engineer Alexander Volkov observed:
"In our country, when recording classical music, there is in most cases no division even into producer and sound engineer, never mind a separate mastering engineer!..."
The situation in the academic music industry in the United States is different. From a reliable foreign source who wished to remain anonymous, we learned that mastering there is a necessary stage in the production of the final product: different material requires a different approach, since it is aimed at a different consumer. Much also depends on the particular producer in charge of the project. Mastering there is most often done by a separate specialist. In particular, it is believed that the technical part of mastering should take place under the best possible conditions: much attention is paid, for example, to the painstaking selection of dithering and sample-rate-conversion algorithms, and different digital-to-analog converters are chosen for playing back different music... and that is before we even reach the creative part.
Untouched frequency
Frequency correction (equalization) - is it necessary for mastering in academic music? According to Alexander Volkov, no:
"I try to achieve the desired timbre coloring of the phonogram on the recording by means of correct placement and selection of suitable microphone models. In extreme cases, I can correct something on mixing. At mastering, I no longer have a reason to use an equalizer."
The same opinion is shared by the outstanding Russian sound engineer and teacher Igor Petrovich Veprintsev, who adds:
"Among other things, the tone and nuance of the musicians' playing greatly influences the timbre of the recording, so the sound engineer, working with the musicians on the nuances of their performance, certainly influences the timbre of the recording."
Not all pros are so adamant about equalization. Famous Moscow sound engineer and teacher Maria Soboleva notes:
"I do not always use the equalizer at mastering, and if I do use it, then in minimal, barely audible, 'homeopathic' doses."
![](https://i2.wp.com/prosound.ixbt.com/recording/mastering/soboleva.jpg)
Maria Soboleva
Mikhail Spassky, sound engineer of the Great Hall of the Moscow State Conservatory and teacher at the Gnessin Academy, adds:
"Sometimes it happens that you want to slightly correct the sound. It very much depends on the material and the specific situation and the ensemble: I allow equalization on a symphony orchestra or choir; much less often there is a desire to equalize, for example, a solo grand piano."
What can correction give if you do use it? There are situations when the acoustics of a room sound too bright or, on the contrary, boomy, and the entire recording is colored accordingly. A slight equalization in this case can correct the acoustics and at the same time affect the timbre of the sounding instruments. Another case is an instrument that was not rich in timbre to begin with. By adding a little low end or low mids at mastering, you can add depth to the instrument and, along with it, to its echo in the room.
In my own practice, there was the case of participating in the student recording competition at the International Congress of the Audio Engineering Society (AES) in London in 2010. I presented a recording of the Academic Orchestra of Folk Instruments of the All-Russian State Television and Radio Broadcasting Company under the direction of N.N. Nekrasov, made at a concert in the Large Hall of the Gnessin Academy. I had mixed the recording in surround, and it took second place in the corresponding nomination. One of the complaints made by the judges was that the overall sound of the recording was excessively "humming" in the low mids. I think this was due to the timbre of the hall, which was then emphasized in the sound of the orchestra's instruments. That was when I first noticed that in such a situation delicate mastering could significantly improve matters.
Another possible role of the equalizer in mastering is to try to slightly bring the sound of academic music closer to the bright, high-top-saturated modern sound of the stage. But, again, within very narrow limits.
Maria Soboleva:
"At mastering I mainly affect the upper and lower regions of the frequency range; if I touch the middle, it is at mixing. But often, listening to the material some time after the work is done, I end up turning the equalizer off: if the sum of the advantages introduced does not exceed the disadvantages that have appeared, I decide to forgo the correction. The main principle here is 'do no harm!'"
What is the danger of the equalizer in the classics? A feeling of unnaturalness, excessive brightness, or, conversely, a boomy sound. Good live instruments, sounding in good acoustic conditions and successfully recorded, are usually harmonious and balanced in timbre, so once again we emphasize the need for extremely delicate handling of the equalizer when mastering academic music.
The renowned mastering engineer Wim Bult (Inlinemastering) shares his experience:
"I don't get a lot of work mastering the classics. We worked on a few live recordings and used a bit of analog EQ for crispness and a Weiss DS1 compressor for dynamics management."
![](https://i0.wp.com/prosound.ixbt.com/recording/mastering/wim-bult-inlinemastering.jpg)
Wim Bult, Inlinemastering studio
From ppp to fff
Dynamics processing is used in one way or another in academic music by all the sound and mastering engineers I managed to talk to. Several tasks can be set here, generally following from one another:
- reduce the dynamic range if it is too wide;
- increase the average sound level of the phonogram;
- to give the phonogram a denser sound.
Much depends on the phonogram's intended use. For example, in the USA a lot of classical music is recorded for parishioners of various churches, and this audience listens to music mainly in the car, so it needs fairly strong dynamics processing, unlike music lovers, for whom preserving as much of the dynamic range as possible is important.
The tools used to achieve these goals also vary: some use exclusively the "manual" method of drawing the volume envelope, some use dynamics processors, and some use both approaches. Here is what the well-known Russian mastering engineer Andrey Subbotin (Saturday Mastering studio) says:
"I have had experience mastering academic music; it mainly consisted of drawing the volume envelope in certain places. I do not like compressors on the classics at all; it is better to do it 'by hand'."
Morten Lindberg:
"Mastering in our studio is mainly dynamics control and export of files in the required formats. We do not use any dynamics processing devices; instead we use volume envelopes in Pyramix for manually containing 'firing' attacks, as well as for emphasizing and revealing expressiveness in musical fragments."
Wim Bult:
"In the classics this is more a matter of volume control - raising very soft passages and possibly lowering the volume in dynamic ones."
Indeed, manually shaping the volume envelope is the most controllable way to affect the dynamics of a piece, but it is also very painstaking work. With a certain taste and skill, you can greatly improve the sound: slightly soften the dynamic contrasts while preserving the drama of the work and making the phonogram more accessible to the listener. In this work it is important to avoid clearly audible jumps in volume caused by the envelope - this kind of "sound engineering" is audible and undesirable.
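As an illustration only (not any engineer's actual tool), the envelope technique can be sketched in a few lines of Python: gains are defined at breakpoints and interpolated between them, which is exactly what avoids the audible level jumps mentioned above. The function name and breakpoint format are hypothetical.

```python
def apply_envelope(samples, breakpoints):
    """Scale each sample by a gain interpolated linearly between
    (sample_index, gain) breakpoints, so the level never jumps.
    The breakpoints are assumed to span the whole clip."""
    out = []
    for i, s in enumerate(samples):
        gain = breakpoints[-1][1]          # fallback: hold the last gain
        for (i0, g0), (i1, g1) in zip(breakpoints, breakpoints[1:]):
            if i0 <= i <= i1:
                t = (i - i0) / (i1 - i0)   # position within the segment
                gain = g0 + t * (g1 - g0)  # linear interpolation
                break
        out.append(s * gain)
    return out

# Soften a dynamic contrast: dip smoothly to half amplitude (-6 dB)
# around a loud passage, then ramp back up.
shaped = apply_envelope([1.0] * 8, [(0, 1.0), (4, 0.5), (7, 1.0)])
```

Because the gain changes by a small fraction per sample, the level glides rather than steps, mimicking a careful hand on the fader.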
And before ...
The predecessor of today's manual drawing of the volume envelope was "manual" compression in the analog era (and in the digital one too): when recording the stereo sum, the sound engineer, watching the master level, often kept a hand near the master fader. If a sharp, dangerous surge in level occurred, the master was pulled down slightly, usually with a subsequent return to the starting position; in quiet places, on the contrary, the level was raised slightly.
It is also worth mentioning the "cordouble" (correction take) - the final copy of the recording that ultimately went into the library. It was made by re-recording, usually through an analog console, the copy spliced together by the editor onto a new tape. This was done because it was considered more correct to keep a solid, unspliced version in the library: it deteriorated more slowly and lasted longer. During this re-recording, the sound engineer could do whatever he saw fit on the console as a final touch: a slight frequency adjustment, the same manual riding of the master level, and sometimes, if required, added reverb. This process is very similar to modern mastering, isn't it?
Compression is, on the one hand, a means of influencing the dynamic range by automating the manual drawing of the envelope; on the other hand, it is a way to introduce some density into the phonogram - more typical of pop recordings, but useful in small doses in the classics as well. Not all academic sound engineers use compression at mastering, but some do.
Maria Soboleva:
"I see the role of dynamics processing in mastering as revealing nuances in the music, not changing its character; then the music has more opportunity to open up to the listener. I use a Ratio of no more than 1.5-1.6. This processing should not have a decisive influence on the dynamic contrasts: piano remains quiet but does not fall beyond the boundaries of perception, being slightly brought out, while forte sounds bright but not deafening."
Alexander Volkov:
"I love the effect that soft multiband compression has on the sound. As an example I will take the BK (Bob Katz) presets from the mastering section of the TC Electronic System 6000. With a small Ratio and rather long Attack and Release times, the compression turns out soft. It is not audible as such, but it certainly gives a benefit, thickening the phonogram and raising the average level."
Mikhail Spassky:
"If I use compression, I prefer parallel. Unlike conventional compression, this mode allows you to beautifully tighten quiet places without affecting the dynamics of loud fragments."
An effect similar to parallel compression is provided by the so-called upward compressor: in contrast to a conventional (downward) compressor, which lowers the level of loud places, it pulls quiet places up.
![](https://i1.wp.com/prosound.ixbt.com/recording/mastering/Upward%20Compressor.jpg)
Upward Compressor
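The upward-compression idea can be sketched in a few lines of Python. This is an illustrative per-sample model with made-up threshold and ratio values, not the algorithm of any particular device; real processors add attack/release smoothing.

```python
import math

def upward_compress(samples, threshold_db=-30.0, ratio=2.0):
    """Pull levels below the threshold up toward it, leaving louder
    material alone. Per-sample, with no attack/release smoothing --
    a simplification for the sake of the sketch."""
    out = []
    for s in samples:
        if s == 0.0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db < threshold_db:
            # The distance below the threshold is divided by the ratio:
            # a sample 10 dB under the threshold ends up only 5 dB under.
            target_db = threshold_db + (level_db - threshold_db) / ratio
            s *= 10 ** ((target_db - level_db) / 20)
        out.append(s)
    return out
```

For example, a sample at -40 dB (10 dB below the -30 dB threshold) is raised to -35 dB, while anything above the threshold passes through untouched.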
So, if you use compression, it should be a delicate, soft effect that does not harm the sound. Whether it is necessary at all is a moot point, and many prefer manual work, but it is certainly possible - always with taste and a sense of proportion.
Bob Katz and Parallel Compression
American engineer Bob Katz is a prominent figure in the field of mastering. In addition to a number of books, including the famous Mastering Audio, he has many articles, answers to questions, useful comments in forums on the Internet.
Bob Katz's Mastering Audio Book
One of the techniques he actively recommends is parallel compression. Its essence is in duplicating the phonogram, with only one of the copies subjected to compression (and rather hard compression at that). This compressed copy is then mixed with the intact one. As a result, loud places (and, importantly, sound attacks) are practically unaffected, while quiet ones become brighter and denser - a result, it seems to me, quite desirable for academic music. For more information on this technique, visit Bob Katz's website http://www.digido.com.
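The technique can be sketched in Python. This is a toy per-sample model - the threshold, ratio, makeup and wet-level figures are purely illustrative, and a real compressor smooths its gain changes - but it shows why attacks survive while quiet detail is lifted.

```python
import math

def db_to_lin(db):
    """Convert decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

def hard_compress(samples, threshold_db, ratio):
    """A deliberately hard downward compressor. Per-sample, with no
    attack/release smoothing -- a simplification for the sketch."""
    out = []
    for s in samples:
        if s == 0.0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db > threshold_db:
            # Everything above the threshold is squashed by the ratio.
            target_db = threshold_db + (level_db - threshold_db) / ratio
            s *= db_to_lin(target_db - level_db)
        out.append(s)
    return out

def parallel_compress(samples, threshold_db=-50.0, ratio=10.0,
                      makeup_db=20.0, wet_level=0.25):
    """Sum the untouched signal with a hard-compressed, made-up copy.
    Loud attacks are barely touched because the squashed wet copy adds
    little to them; quiet detail is lifted noticeably."""
    wet = hard_compress(samples, threshold_db, ratio)
    makeup = db_to_lin(makeup_db)
    return [dry + wet_level * w * makeup for dry, w in zip(samples, wet)]
```

With these illustrative settings, a loud attack near 0.9 changes by only a fraction of a decibel, while a quiet sample at -60 dB gains roughly 10 dB.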
Limiting, the extreme degree of compression, would seem to have no direct place in the classics. But again, the degree to which the device is used is decisive. Most of the professionals who spoke to me confirmed that they use a limiter, mainly as the very last link in the chain, for a variety of purposes.
One of them is, in fact, normalization plus dithering; devices like the Waves L2/L3 do this job successfully. The limiter can be used simply to raise the volume of the track to the level where limiting begins. Or you can go further and tune the device for a stronger effect, while making sure that only fragments of the waveform that stand out from the average level are cut off. As soon as the limiter starts working constantly on all the material, it seriously damages the phonogram. In this way, on some phonograms a gain of 5-6 dB can be achieved, which is significant. The figure shows an exaggerated limiting situation, but the general idea is clear: above is the original version, below the limited one.
Limiting
The limiter is also useful when a few peak fragments in the soundtrack stand out from the average level and prevent the overall level from being raised. You can then configure the device to respond only to such peaks. Another way to deal with such places is, again, manual volume drawing, and this can be done on the specific track of the multitrack that caused the peak (for example, a particularly strong timpani stroke). In that case there is less chance of making the whole phonogram audibly "duck" in level.
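The "raise the level, catch only the outliers" idea can be sketched crudely in Python. This is an instantaneous hard cap at the ceiling - an exaggeration of what a real limiter does with lookahead and release - and the 6 dB gain and function name are illustrative, echoing the 5-6 dB figure above.

```python
def limit_and_raise(samples, gain_db=6.0, ceiling=0.98):
    """Raise the overall level, then cap any peak that would exceed the
    ceiling. Only outlying peaks are touched; the body of the material
    passes through with the full gain. (A real limiter smooths its gain
    reduction instead of hard-capping like this sketch.)"""
    gain = 10 ** (gain_db / 20)
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# A quiet passage with one stray peak: the peak is capped at the
# ceiling, everything else simply gains 6 dB.
raised = limit_and_raise([0.1, -0.2, 0.8, 0.15])
```

The key property from the text is visible here: if limiting engages on everything rather than on a few outliers, the entire waveform would sit at the ceiling and the phonogram would be damaged.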
![](https://i2.wp.com/prosound.ixbt.com/recording/mastering/ovchinnikov.jpg)
Vladimir Ovchinnikov
Sound and mastering engineer of the Mosfilm tone studio Vladimir Ovchinnikov:
"In classical music, dynamics processing at mastering for me means limiting rather than compression. Compression introduces serious changes into the sound and the musical balance; with limiting, the gain in the overall level is significant."
So, we have examined the basic techniques used in mastering academic music. There are also some operations that are relatively frequent in pop mastering but less common in academic work.
For example, MS conversion is more an attribute of restoration and of assembling various phonograms into one disc. In other cases, the balance between the center and the edges of the stereo base is usually settled during recording and mixing.
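MS conversion itself is simple arithmetic. As a hypothetical illustration (function names are mine, not from any tool mentioned here), a side-gain control shows how the balance between the center and the edges of the stereo base could be adjusted:

```python
def ms_encode(left, right):
    """Convert an L/R pair of channels into mid (sum) and side (difference)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side, side_gain=1.0):
    """Rebuild L/R. A side_gain below 1 narrows the stereo base
    (toward the center), above 1 widens it."""
    left = [m + side_gain * s for m, s in zip(mid, side)]
    right = [m - side_gain * s for m, s in zip(mid, side)]
    return left, right

# Round trip: encoding then decoding with side_gain=1 restores the original.
L, R = [0.5, -0.2, 0.0], [0.1, 0.3, 0.4]
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)
```

With side_gain set to 0 both channels collapse to the mid signal (mono), which is why the technique is mostly a restoration and assembly tool rather than an everyday mastering move in the classics.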
The general reverberation in the master section is also rare - the desired spatial impression is built up already in the process of recording and mixing and extra space on the master is usually not required.
In the end...
Although the need for mastering is far less obvious in academic music than in pop, this stage of creating the final product is, as we have seen, almost always present - somewhere with an almost imperceptible difference between before and after, somewhere with a more serious transformation of the phonogram. Large foreign publishers of academic music sometimes loudly declare the complete "naturalness" of their releases and the absence of mastering. However, sound engineers who have compared the versions before and after report serious differences in the sound, which means that some kind of mastering is still being done.
The situation when there is only academic music on a disc is, hopefully, becoming clearer to us. But today we increasingly see genres, styles, and sounds combined and intertwined... A striking example is film soundtracks, which can contain purely orchestral numbers alongside rock or pop compositions. What to do in this case - bring the orchestral numbers up to pop loudness? Make the pop numbers quieter? Or do both? These are questions for further exploration of the immense topic of mastering.
The author thanks Maria Soboleva, Igor Petrovich Veprintsev, Mikhail Spassky, Alexander Volkov, Gennady Papin, Andrey Subbotin, Vladimir Ovchinnikov, Wim Bult, Morten Lindberg for their help in creating this article.
As audio illustrations, we attach recordings of unmastered and mastered phonograms made in one of the largest Moscow studios.