Features of mastering in academic music. Typical mistakes of sound engineers when recording and mixing phonograms

Typical mistakes of sound engineers when recording and mixing phonograms

Determining the quality of a recorded phonogram is a rather difficult task. For professionals, the "like it or don't like it" criterion is not enough: to decide on the next steps, one needs to know the specific strengths and weaknesses of the created sound image. The resulting sound must be broken down into its constituent components.

This is how painters work, for example, when evaluating the drawing, color, perspective, brushstroke texture and elaboration of detail in a painting. The tool for such analysis is well known: the OIRT Test Protocol. It helps the sound engineer quickly and systematically assess the shortcomings of his work while forming the sound image and make the appropriate corrections.

It is therefore convenient to systematize the typical mistakes of novice sound engineers on the basis of this document.

As a preliminary note, all the parameters included in the protocol are closely interdependent: changing one inevitably changes the rest. Thus, transparency is influenced by the combination of spatiality and timbre: it improves in the presence of bright, clear timbres and deteriorates as the spatial characteristics increase. The sound planes depend on the combination of the spatiality of the sound components and their musical balance, and their presence, in turn, affects transparency. Long practice shows that a poorly performed piece will never sound effective and beautiful in a recording: ragged ensemble playing inevitably leads to insufficient transparency, and if the performer has not fully mastered the instrument, a beautiful timbre will not emerge.

The first assessment parameter is spatial impression, or spaciousness: the impression of the room where the recording took place. Spatiality characterizes the sound picture in width (the stereo impression) and in depth (the presence of one or several planes).

Spatiality creates a sense of distance from an instrument or group of instruments. It is good if the distance to the performers in the recording seems natural and easy to determine. Spatiality in a recording is influenced by the early reflections and reverberation of the signal, their timing and level.

The optimal, most comfortable feeling of spaciousness for the listener depends on the genre of the music. In this regard, the following features of the musical material can be pointed out: the scale (i.e., the intimacy or, conversely, the grandeur and massiveness) of the musical drama conceived by the composer; and the music's belonging to a particular historical layer, for example medieval Gregorian chant, baroque music or modern musical constructions.

The use of the main spatial "instrument", reverberation, gives the sound volume and a soaring quality, increases the perceived loudness of the phonogram and seemingly multiplies the number of performers. However, if used ineptly, this spectacular sonic color leads to a loss of transparency: the attack of subsequent notes is "smeared". In addition, timbre suffers, because mainly the mid frequencies are reverberated. If the timbres of the distant and close signals differ too much, the sound may split into separate planes. Mixing artificial reverberation with the signal from a close microphone sounds especially unnatural: all the fast transient processes (the vocalist's consonants, the clatter of accordion or clarinet keys) remain very close, while the sustained sound itself (vowels, long notes) recedes and "floats away".

The next parameter by which a sound recording is judged is transparency. Transparency means the clarity of the musical texture, the distinguishability of the lines of the score. The concept also includes the intelligibility of the text if the piece is vocal music with words.

Transparency is one of the sound parameters most perceptible to the listener. However, clarity and transparency of the recording may not always be required, owing to certain genre characteristics. For example, when recording a choir, it is necessary to avoid making the voices of individual choristers distinguishable within the parts. This is usually achieved by sacrificing some degree of transparency, moving the choir away and making the recording more airy and spacious. Of one recording of D. Shostakovich's Fifteenth Symphony, in which all the components of the orchestra were presented too vividly, some sound engineers said that it was not a recording of a symphony but a textbook of the composer's orchestration.

Transparency is a litmus test for a sound engineer: working actively with space while maintaining complete sonic clarity is the most difficult thing in sound engineering. As already mentioned, transparency degrades as a result of lost timbre and of spatial errors, for example a high diffuse-field level ("too much reverberation"). This means that too many signals from "neighboring" instruments are leaking into the microphones, or that the engineer has added so much artificial reverb that it masks the weaker components of the direct signals. Loss of timbre, in turn, occurs mainly because of inaccurately placed microphones (more on that later).

Dynamic signal processing is also fraught with transparency loss. One of the most important components of a timbre is the process of sound generation, its attack. If the compressor's response time is shorter than the attack time of the instrument, its timbre becomes sluggish, squashed and expressionless. And if all the signals of the phonogram are compressed at once, the result is generally a "mess", for the secret of good ensemble playing lies precisely in the players yielding to one another.
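The effect of attack time can be illustrated with a toy numerical model. The sketch below is my own illustration, not part of the protocol: the `compress` function and its coefficients are invented for the example. It runs a percussive note through a simple envelope-follower compressor; with a fast attack the transient peak is squashed, while with a slow attack it passes through intact.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0,
             attack_coeff=0.5, release_coeff=0.01):
    """Toy peak compressor: an envelope follower driving a static gain curve."""
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # fast smoothing while the level rises (attack), slow while it falls
        coeff = attack_coeff if level > env else release_coeff
        env += coeff * (level - env)
        env_db = 20 * math.log10(max(env, 1e-9))
        gain_db = (threshold_db - env_db) * (1 - 1 / ratio) if env_db > threshold_db else 0.0
        out.append(x * 10 ** (gain_db / 20))
    return out

# A percussive note: a single loud transient followed by a quieter sustain
note = [1.0] + [0.3] * 50
fast = compress(note, attack_coeff=0.9)    # compressor reacts faster than the note's attack
slow = compress(note, attack_coeff=0.05)   # compressor lets the transient through
print(max(abs(s) for s in fast), max(abs(s) for s in slow))
```

In the fast case the transient is reduced to a fraction of its level, which is exactly the "sluggish, squashed" timbre described above; in the slow case it survives at full scale.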
A very important parameter is musical balance, that is, the ratio between the parts of an ensemble or orchestra. In some cases, when recording a large instrument such as a grand piano or organ, one may speak of a balance between its registers. The musical balance should follow from the score, correspond to the intention of the composer or conductor, and be preserved through all the nuances from pp to ff.

Good balance in a recording is not difficult to achieve, especially when dealing with acoustically unrelated signals (i.e., when mixing multitrack recordings). But failures happen here too. Besides simple carelessness, there are a number of objective reasons for poor balance.

First, there is the sound engineer's lack of musical culture and undeveloped taste. He often does not understand the relative importance of a particular part: it seems to him that everything the musicians play should sound equally loud. The recording becomes "flat" and rumbling. We call this type of balance "engineering mixing".

Secondly, an excessive (more than 92 dB) monitoring volume during mixing can play a cruel joke on the sound engineer. When such a recording is listened to at home, especially through a cheap "soap dish" of a speaker, all the mid-frequency components of the signal come forward; this applies primarily to the solo parts: singers, brass, electric guitars. Cymbals, various shakers, bells and, most importantly, the bass and the kick drum recede into the shadows. On the whole, the entire accompaniment hides behind the vocals, and the dramatic "support" of the solo part (the echoes and counterpoints laid down by the arranger) disappears.

Thirdly, there are obvious flaws that can result from the imperfection of the sound engineer's "mirror": the monitor loudspeakers and the listening room. It is especially difficult to balance narrow-band signals such as the hi-hat, the harpsichord (especially one recorded with only its "metallic" component and no body resonance) or the recorder. Such signals also include a bass "shot" without overtones and a kick drum recorded without its characteristic percussive attack. On different loudspeakers and in different rooms, with their unavoidable frequency-response irregularities, the balance of such instruments will differ. Often, during a session, you do not know which pair of monitors to believe, especially since the headphones show yet a third result. In addition, using headphones alone gives a dramatic improvement in transparency, and it is then very difficult to predict how the phonogram will sound in normal listening.

Over time, some sounds give the sound engineer a "ticking clock effect": he ceases to notice the repetitive sounds of the hi-hat, "automatic" percussion and so on. This too constantly leads to imbalances, since the sound engineer simply stops monitoring such signals during the mixing process.

The next most important parameter of a sound recording is the timbre of instruments and voices. The transmission of timbre should be natural; the instrument should be easily recognized by the listener.

However, in many cases the natural sound of the instrument is purposefully transformed by the sound engineer, for example to compensate for distortions introduced when the signal is picked up by a microphone, or in the case of imperfections in the sound of the instrument itself. For example, it is often necessary to "correct" the timbre of flutes, domras and balalaikas; to do this, one can raise the region of the fundamental tones around the first octave. In a twelve-string guitar, an equalizer usually brings out its "silveriness", and the sound of a harpsichord can be given a spectacular "nasal" quality with a parametric filter. Deliberate transformation of timbre is also very often used to create new colors, for example when recording fairy tales, music for films and so on.

The timbre is influenced by every device in the signal path, and analyzing the path reveals "pitfalls" that threaten to produce defects. Take the microphone, for example. It is known that moving it closer to the sound source yields a brighter timbre, since the full frequency spectrum of the source is captured. However, almost all acoustic sound sources have a redundancy of timbral components: on the way to the listener in an ordinary, unamplified hall, some of them are inevitably lost. Therefore, by placing the microphone at a close point from which no listener ever hears the instrument, you can get a sound quite unlike the familiar one. A violin recorded by a microphone placed near the instrument and aimed perpendicular to the soundboard will sound harsh, coarse, with a "rosin" tinge, as musicians say. In the sound of a voice recorded by a close microphone aimed directly at the mouth of a singer or reader, the sibilant consonants are exaggerated, and in academic vocal recording, so is the high singing formant at about 3 kHz, very effective when perceived from two meters or more and unbearably sharp at 40 centimeters.

The equalizer should enrich and embellish the sound, but its excessive use often leads to the opposite result: the sound becomes "narrow", with a "gramophone" quality, especially if an inexpensive console with a single parametric-filter frequency on all the channel strips is used. At one time we called such recordings "presenced" (from the "presence filter"). In addition, inept correction can raise the noise of the medium or the studio.
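For readers who want to see what a parametric ("presence") filter is numerically, here is a sketch of a standard peaking-EQ biquad using the well-known RBJ Audio EQ Cookbook formulas. The function name and parameter values are my own, chosen purely for illustration, for example a gentle cut around the 3 kHz singing formant mentioned above:

```python
import math

def peaking_eq_coeffs(f0, fs, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients, normalized so a0 = 1."""
    a = 10 ** (gain_db / 40)              # amplitude factor
    w0 = 2 * math.pi * f0 / fs            # center frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

# A gentle -4 dB cut at 3 kHz, Q = 2, at a 48 kHz sample rate
b, a = peaking_eq_coeffs(f0=3000, fs=48000, gain_db=-4, q=2.0)
dc_gain = sum(b) / sum(a)   # filter gain at 0 Hz
print(round(dc_gain, 6))
```

At DC the gain works out to exactly 1, confirming that the boost or cut stays confined to the neighborhood of f0, which is precisely what distinguishes a parametric filter from a broad tone control.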

The next parameter is the performance. It is perhaps of primary importance for the quality of the recording, for it is the performance that is the decisive factor for the listener. Equally important here are the technical aspects (the quality of sound production, the ensemble's tuning, the purity of intonation, etc.) and the artistic and musical ones (the interpretation of the work, its correspondence to the style of the era and the composer).

The role of the sound engineer here is also very great, since he influences both the technical and the artistic side of the performance. He must find a common language with musicians of all ranks and create a fruitful creative atmosphere during the recording process; only this allows the artist to reveal his talent to the fullest. A trusting atmosphere is needed between the sound engineer and the artists: musicians should be able to reveal their weaknesses to him without fear, knowing that they will certainly be helped and that everything will be done to bring the recording session to completion with a good artistic result. At the same time, one should never indulge a performer who is clearly unprepared for recording. Not a single recording for which the performer would eventually have to blush should leave the hands of a sound engineer, and no immediate benefits should lower this standard.
I especially want to warn novice sound engineers against a frequently held position: "What can they do without me, these artists! They always play out of tune and out of time; I save them; if it weren't for me..." This is nothing but the manifestation of an inferiority complex. In fact, the relationship with the performer should be built on these principles: respect for the artist, respect for oneself and mutual goodwill. To such a specialist, as they now say, "the clients will come"; people of the most varied personalities will be happy to work with him.

How should the "performance" parameter be evaluated in the now-fashionable computer music? I have heard quite a few discs recorded on synthesizers that are rhythmically and timbrally effective and... boring after the first ten minutes of listening: the impression of a very beautifully playing musical snuffbox with its dead metronomic precision. These sounds lack one of the main features of live performance, the "human factor", the physical labor of playing a musical instrument. The composer Eduard Artemyev, while overdubbing a trumpet part onto a phonogram, drew our attention to the fact that not a single synthesizer, even one reproducing the timbre of the instrument very accurately, is able to "depict" the muscular tension of the embouchure of a musician playing high notes. This tension is always present in the sound; it affects the listener, forcing him to empathize.

And as for metronomic precision: when synthesizer rhythm boxes had just appeared, I said to the late V.B. Babushkin, "Now death will come for the musicians!" To which he wisely replied: "This is death for the hacks..."

The parameter "technical quality" is perhaps the most volatile of all. Recordings made ten years ago are technically imperfect today and must be restored. To the traditional interference (noise, hum, electrical clicks), sound distortion, frequency-response irregularities and resonances at individual frequencies have been added quantization noise, jitter, the artifacts of various computer noise suppressors and much more.

In phonograms prepared for CD release, radio broadcasting and so on, electrical interference is unacceptable. Acoustic noises, in turn, are divided into studio noises (the hum of the ventilation, external intrusions) and performance noises (the breathing of the musicians, the creaking of furniture, the knocking of a piano pedal or of woodwind keys, etc.). The permissible degree of performance noise in the technical specifications for magnetic phonograms is defined as follows: "Performance noises are allowed if they do not interfere with the perception of the music." And this is absolutely correct, since it is aesthetic standards that should be applied to performance noise, and these are within the competence of the sound engineer conducting the recording.

The converse is also true: one should fight mercilessly against noise that interferes with artistic perception. There is a recording by a very respected guitarist on which, in addition to the heartfelt performance, one can clearly hear... puffing. The sound engineer, along with the editor, producer and the others who released such a recording for mass sale, committed, frankly, professional misconduct!

Perhaps the most common defect in technical quality is the result of overload. The mixing console is a rather insidious pitfall on the way to good sound. Naturally, a sound engineer wants to hear his recording as loud, bright and effective as possible. But this effectiveness is achieved by a combination of several parameters, the main ones being timbre and transparency. When listening loudly, the shortcomings of these parameters are compensated to some extent by a natural mechanism of auditory perception known as the equal-loudness contours (the Fletcher-Munson curves). Of course, it is better to monitor loudly than to record at the absolute maximum level, beyond which analog distortion, or the completely unacceptable digital "overs", begin. The pointlessness of simply turning up the volume to improve a recording is obvious even to beginners, yet slowly "pulling up" the faders one by one always seems more reasonable; as a result, the fight against overloads begins, in the style of "individual faders move up, and the master moves down".

In fact, in sound engineering the law of "reverse action" often operates: if you want something louder, remove what is masking it; if you want to raise the bass, bring out the midrange...

If the recording is stereophonic, one more parameter is assessed: the quality of the stereo picture. Here we consider the width and fullness of the stereo base, the absence of a "hole in the middle", equal informational content on the left and right sides, and the absence of distortions.

The basic rule for forming a stereo picture, "the air is wider than the sound", means that a mono signal will sound "stereophonic" only if a wide-sounding diffuse field accompanies it. All artificial reverbs, which as a rule have one input and two outputs, are built on this principle.

A common mistake when creating a stereo image is needlessly narrowing the base with the panoramic controls. It should be remembered that the "ping-pong" effect sounds unnatural only in classical music, and even there not always. In a pop recording, a "roll call of the parts" only works to the benefit of the mix, and there is no need to be afraid of it.

A few words about the concept of "informational content". The musical fabric is divided into essential, defining, well-localized components and auxiliary ones that fill out the texture. The first include, for example, the melody, the accents and "riffs" filling in the pauses, and everything else the listener notices first. Auxiliary components of the musical fabric are the various kinds of pedals (long notes or chords), doublings of the main voice (whether by another instrument or by a delay, it is all the same) and continuous textural-harmonic figuration. It is equal informational content on the left and right sides of the stereo base that creates a comfortable feeling of correct stereo balance for the listener, not equal signal levels in the right and left channels, which is what the meters usually show. This means that if the melody is carried by one instrument, it should be located in the center; if the melody appears alternately in two voices, they should be located at the sides of the base. An example of an unsuccessful arrangement of instruments is the so-called "American" concert seating of a symphony orchestra, where all the melodic and well-localized parts (violins, flutes, trumpets, percussion, harps) are located to the left of the conductor, while on the right, among the frequently playing instruments, there are only the oboes and cellos.

Perhaps we can stop here. The question is often asked: what is the most difficult thing in sound recording? The answer is simple: you need to develop the ability to overcome stress, always remain in control of the situation and constantly monitor the resulting sound. This usually takes about ten years of independent work, during which a self-taught specialist masters the secrets of the craft by trial and error. I very much hope that our article will shorten this period a little for its readers...

Is mastering needed in academic music? Why do we hear the word "mastering" much less often in classical music than in pop? If you look at discs of academic music, it is often indicated who recorded the disc (recording) and who edited it (editing), but mastering is not always mentioned... And if you do "polish" the recording, what should be done? Classical music does not need the RMS levels of pop music, and naturalness is above all; why, then, this separate procedure?

Such questions concern many sound engineers and advanced listeners because, given the very different approaches to recording, mixing and sound formation in general in the academic and pop genres, the resulting phonograms differ radically in dynamics, timbre and loudness. What should be done: leave the classics natural but "quiet" and super-dynamic, or bring them closer to pop, risking the loss of important dynamic nuances? Let's try to figure it out.

About restrictions

Mastering can be roughly divided into two components. During the technical part of the process, the engineer usually sets the order of the tracks, adjusts the pauses between them, reduces the bit depth of the audio files if necessary using dithering and, if the output requires the CD-Audio format, "cuts" the master disc in accordance with the Red Book standard (strictly speaking, this process is called pre-mastering. - Ed.).
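The bit-depth reduction mentioned above deserves a small illustration. A common approach (the sketch and its function name are mine, not from the article; TPDF stands for triangular probability density function) is to add triangular dither of about one least significant bit before rounding to 16 bits, so that the quantization error becomes benign noise instead of distortion correlated with the signal:

```python
import random

def tpdf_dither_to_16bit(x):
    """Quantize one float sample in [-1.0, 1.0] to a 16-bit integer with TPDF dither."""
    lsb = 1.0 / 32768.0                                  # one LSB at 16-bit depth
    dither = (random.random() - random.random()) * lsb   # triangular PDF, +/- 1 LSB
    q = round((x + dither) * 32767)
    return max(-32768, min(32767, q))                    # clamp to the legal 16-bit range

samples = [tpdf_dither_to_16bit(0.25) for _ in range(5)]
print(samples)  # values scatter within about one step of 8192 because of the dither
```

The randomness deliberately varies the rounding decision from sample to sample; undithered truncation would instead lock low-level signals into audible stair-step distortion.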

These processes, of course, have their own subtleties, but in this article we will talk about the creative part of mastering. The common goal of such work is to act on a finished, mixed phonogram for the sake of its more successful release: playback on various loudspeaker systems, radio rotation, commercial sale and so on. It is a fresh look at the phonogram by a new person in new (usually better) conditions, the final touch on the musical picture: adding what, in the mastering engineer's opinion, the existing track lacks for its best sound, or removing what is superfluous and distracting.

In this article we will also touch less on mastering as the job of assembling different recordings into one album. In that case, in order to even out differing phonograms, the range of actions and the degree of the engineer's intervention may differ significantly from those discussed below.

Ears and apparatus

Just in case, let us recall that the mastering engineer and the sound engineer who created the phonogram in the studio should not be the same person. It is believed that the hearing of a mastering engineer must be specially tuned to the perception and analysis of the phonogram: it is focused less on the balance of instruments, space and planes, and more on the overall impression the phonogram makes on the listener, on the overall timbral balance and dynamic characteristics. Here, undoubtedly, a very rich experience of listening to finished phonograms tells.

Mastering studio

In addition, the demands on the equipment and acoustics of mastering rooms are much higher than on recording and mixing studios. Since the mastering engineer is the final link in the chain of work on the phonogram, his working conditions must be as honest as possible: the acoustics of the room as neutral as possible, with top-class mid- or far-field monitors highly desirable in order to control the widest possible frequency range. The means of influencing the phonogram in mastering studios are usually outboard equipment of the highest class: digital, analog and tube devices for dynamics and frequency processing.

About genre differences

So, creative mastering is ubiquitous (or at least should be) in pop music, and in that genre the mastering is usually done by a separate person. Comparing this with academic music, let us try to understand why the situation there is different.

In pop music, mastering is in most cases preceded by a separate mixing process, during which the sound engineer strongly, and in places drastically, affects the timbre, dynamics and other characteristics of the signals, moving them away from their natural state for the sake of the overall sound of the mix. Making such deep interventions, the sound engineer inevitably becomes dependent on the conditions of the mixing studio: the monitors, the control-room acoustics, the processing devices...

In addition, as you listen to the phonogram over and over during mixing (and before that, during recording and editing), the hearing inevitably becomes dulled: you get used to the sound, and the chances of being misled by the local acoustic and hardware conditions increase. Here the fresh perspective of the mastering engineer comes in handy: listening to music that is new to him under his own conditions, he is able to catch and influence details that the other participants in the project have long since grown accustomed to, and to apply the final touches based on his rich experience of listening to finished phonograms.

The mastering function of increasing the density and loudness (RMS) of the phonogram is also especially in demand in pop music, so that the song compares favorably with other phonograms and does not get lost in level, for example, in radio broadcasting.

And in academic music? Here, on the contrary, one of the main goals is to convey the naturalness of the timbres. Moreover, the mixing process is often absent altogether: to this day academic music is often recorded directly to stereo and is not subjected to further processing, for both creative and technical reasons (for example, in the First and Fifth studios of the GDRZ, multitrack recording with the local facilities is currently not possible).


The first studio of the GDRZ

And where there is mixing, it consists of balancing the various microphones and microphone systems, panning, applying artificial reverberation and, to a much lesser extent, equalization and dynamics processing.

As for loudness, the situation is ambiguous. On the one hand, it is logical to strive to convey the dynamics of a live performance in all its richness. But the dynamic range of a symphony orchestra reaches 75-80 dB, while the background noise level, for example, in a quiet rural room can be 20-25 dBA, and in city apartments up to 40 dBA and higher. Now let us turn to the table.


Loudness table

It shows the correspondence between the actually measured sound pressure levels in a hall during the sound of a symphony orchestra and the dynamic markings indicated in the score.

It turns out that in an averagely noisy city apartment, a listener who sets the playback at approximately "concert hall" volume simply will not hear the ppp nuance, while pp will blend into the background noise of the apartment. If the volume is raised by 15-20 dB, the quiet passages will be rescued, but the loud ones will burst into the zone of unpleasant sensation for the auditory system; in addition, problems with the neighbors may arise...
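The arithmetic behind this is simple enough to sketch. In the toy check below, the figures are hypothetical illustrations in the spirit of the text, not measurements: if a ppp passage reproduces at roughly 40 dB SPL at "concert hall" playback volume while the apartment's noise floor sits around 40 dBA, the nuance drowns; raising playback by 20 dB rescues it, at the cost of the loud passages.

```python
def comfortably_audible(nuance_spl_db, noise_floor_db, margin_db=10.0):
    """A nuance is comfortably audible if it sits margin_db above the room's noise floor."""
    return nuance_spl_db - noise_floor_db >= margin_db

APARTMENT_NOISE_DB = 40.0   # hypothetical noisy city apartment
PPP_AT_HALL_VOLUME = 40.0   # hypothetical ppp level at "as in the hall" playback

print(comfortably_audible(PPP_AT_HALL_VOLUME, APARTMENT_NOISE_DB))         # False
print(comfortably_audible(PPP_AT_HALL_VOLUME + 20.0, APARTMENT_NOISE_DB))  # True
```

The 10 dB margin is an arbitrary comfort threshold for the example; the point is only that the listening room's noise floor, not the recording, sets the lower limit of the usable dynamic range.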

Thus, listening to the full dynamic range of academic music requires a room with a low background noise level, and not all listeners have such rooms.

As for the overall RMS level of the phonogram: if the natural dynamic range is preserved, classical music will most likely sound quieter overall in direct comparison with pop, which lowers its chances of being perceived by the listener "in all its glory"...
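The RMS level in question can be computed directly. As a sketch (my own illustration; the function follows no specific metering standard, and real loudness measurement such as EBU R 128 also applies frequency weighting and gating), a signal whose amplitude is ten times lower measures 20 dB lower in RMS:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0), expressed in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(max(mean_square, 1e-12))

# Two sine bursts at 440 Hz, 48 kHz: a "classical" level and a "pop" level
quiet = [0.05 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
loud  = [0.50 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
print(round(rms_dbfs(quiet)), round(rms_dbfs(loud)))  # -29 -9: a 20 dB gap
```

That 20 dB gap is what the unprocessed classical phonogram loses in a side-by-side comparison with a densely limited pop track, even though its peaks may reach exactly the same full-scale level.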

Hence the other approach: tighten the dynamic range to more acceptable values and raise the average level, and with it the overall loudness, using dynamic processing. But this has its drawbacks too: the natural breathing of the music begins to be lost, and the dynamic nuances are smoothed out...

Reflecting in this way on the role of mastering in academic music, we come to two conclusions. On the one hand, it is not needed if the goal is to maintain complete naturalness. On the other hand, at the present stage of the development of sound recording, such a phonogram may lose out dynamically and timbrally in comparison with others.

The thought that mastering may be absent as a separate process was confirmed by the famous Norwegian sound engineer Morten Lindberg (www.lindberg.no) in our correspondence:

"Our work is structured in such a way that there is no clearly separated mastering process as such. Recording, editing, mixing and mastering are inextricably linked and have very blurred boundaries, where each next stage is largely prepared during the previous one."

From words to deeds!

Reasoning is all very well, but what about real modern sound engineering practice? To find out the details of the process, I decided to gather the opinions of experienced fellow sound engineers directly involved in academic music and of mastering engineers, and also to find out how things stand in the foreign sound community.

Do you need a specialist?

In the course of talking with professionals, it turned out that none of the Moscow academic sound engineers I interviewed hand their recordings of academic music over to separate specialists for mastering; everyone prefers to do the work themselves to the end. The reasons vary:

  • no need to hand the phonogram over to an additional person, and the availability of everything necessary for independent mastering;
  • lack of budget for a separate, additionally paid operation;
  • the absence in the professional community of engineers with sufficient experience in mastering academic music, which means there is no one to entrust the process to: the "danger of unfamiliar hands".

Accordingly, in such a situation it turns out that mastering engineers get very little experience of working with academic music.

There are many amusing stories about the inept mastering of academic music by people far removed from the genre. Gennady Papin, a sound engineer at the Mosfilm tone studio, shared several of them with me:

“There was the case of a finished disc by the Alexandrov Ensemble being sent to Poland for release. Having listened to the phonogram, the sound engineers there "pulled up" the high frequencies by 15 (!) dB, considering the sound dull and muffled.

Another time, American sound engineers ran a noise suppressor over our choir recording in order to get rid of the analog noise; as a result, the recording sounded as if wrapped in cotton wool.

Another common case is when pop mastering engineers cut out the live pauses between numbers in classical recordings and make fast fades, so that the listener constantly switches between the atmosphere of the hall and dead digital silence."

Gennady Papin

Discussing the question of having a separate mastering engineer, the well-known Moscow sound engineer Alexander Volkov remarked:

"In our country, when recording classics, in most cases there is no division into producer and sound engineer, what can we say about a separate mastering sound engineer! ...".

The situation is different in the academic music industry in the United States. Through a reliable foreign source who wished to remain anonymous, we managed to learn that mastering there is a necessary stage in the production of the final product: different material requires a different approach, since it is intended for a different consumer. Much also depends on the specific producer in charge of the project. Mastering there is most often done by a separate person. In particular, it is believed that the technical part of mastering should take place under the best possible conditions: much attention is paid to the painstaking selection of dithering and sample-rate conversion algorithms, and different digital-to-analog converters are chosen for the playback of different music... and that is not even to mention the creative part proper.

Untouched frequency

Frequency correction (equalization): is it necessary in mastering academic music? According to Alexander Volkov, no:

"I try to achieve the desired timbral coloring of the phonogram at the recording stage, through correct placement and selection of suitable microphone models. In extreme cases I can correct something during mixing. By the time of mastering, I no longer have any reason to use an equalizer."

The same opinion is shared by the outstanding Russian sound engineer and teacher Igor Petrovich Veprintsev, who adds:

"Among other things, the tone and nuances of the musicians' playing greatly influence the timbre of the recording, so the sound engineer, working with the musicians on the nuances of their performance, certainly influences the timbre of the recording."

Not all professionals are so adamant about equalization. The well-known Moscow sound engineer and teacher Maria Soboleva notes:

"I do not always use an equalizer in mastering, and if I do, then in minimal, barely audible, 'homeopathic' doses."

Maria Soboleva

Mikhail Spassky, sound engineer of the Great Hall of the Moscow State Conservatory and teacher at the Gnesin Academy, adds:

"Sometimes you want to correct the sound slightly. It very much depends on the material, the specific situation and the ensemble: I allow equalization on a symphony orchestra or a choir; far less often does the desire arise to equalize, say, a solo grand piano."

What can correction give if you do use it? There are situations when the acoustics of a hall sound too bright or, on the contrary, boomy, and the entire recording is colored accordingly. Slight equalization in this case can correct the acoustics and, at the same time, affect the timbre of the instruments. Another case is an instrument that is not rich in timbre to begin with. By adding a little low end or low mids in mastering, you can give the instrument depth, and with it its resonance in the hall.
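The kind of gentle corrective equalization described above is easy to illustrate. The sketch below is not any of the quoted engineers' actual tools, just a minimal Python illustration of a standard peaking filter (coefficients per the well-known RBJ Audio EQ Cookbook); the -2 dB dip at 250 Hz is an assumed example value for taming a boomy hall:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook)."""
    a = 10 ** (gain_db / 40)               # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha / a
    b = [(1 + alpha * a) / a0, -2 * cw / a0, (1 - alpha * a) / a0]
    a_coef = [1.0, -2 * cw / a0, (1 - alpha / a) / a0]
    return b, a_coef

def biquad(x, b, a):
    """Run a direct-form-I biquad over a list of samples."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

# Example: tame a "boomy" hall with a broad, gentle -2 dB dip around 250 Hz.
b, a = peaking_eq_coeffs(fs=44100, f0=250.0, gain_db=-2.0, q=0.7)
```

At such small gain values the filter is close to transparent, which is exactly the "homeopathic dose" the engineers quoted here insist on.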

In my own practice there was the student recording competition at the International Convention of the Audio Engineering Society (AES) in London in 2010. I presented a recording of the Academic Orchestra of Folk Instruments of the All-Russian State Television and Radio Broadcasting Company conducted by N.N. Nekrasov, made at a concert in the large hall of the Gnesin Academy. I mixed the recording in surround, and it took second place in the corresponding category. One of the judges' complaints was that the overall sound was excessively "humming" in the low mids. I think this came from the timbre of the hall, whose coloring was then emphasized in the sound of the orchestra's instruments. That was when I first noticed that, in such a situation, delicate mastering could significantly improve matters.

Another possible role of the equalizer in mastering is to bring the sound of academic music slightly closer to the bright, treble-rich modern sound of pop music. But, again, only within very narrow limits.

Maria Soboleva:

"In mastering I mainly touch the upper and lower regions of the frequency range; if I touch the middle, it is during mixing. But often, listening to the material some time after the work is done, I end up turning the equalizer off. If the sum of the advantages introduced does not exceed the disadvantages that have appeared, I decide to refuse the correction. The main principle here is 'do no harm!'"

What is the danger of the equalizer in the classics? A feeling of unnaturalness, excessive brightness or, conversely, boominess. Good live instruments, sounding in good acoustic conditions and successfully recorded, are usually harmonious and balanced in timbre, so once again we emphasize the need for extremely delicate handling of the equalizer when mastering academic music.

The renowned mastering engineer Wim Bult (Inlinemastering) shares his experience:

"I don't get a lot of work mastering the classics. We worked on a few live recordings and used a bit of analog EQ for crispness and a Weiss DS1 compressor for dynamics management."


Wim Bult, Inlinemastering studio

From ppp to fff

Dynamics processing in academic music is used, in one way or another, by all the sound and mastering engineers I managed to talk to. Several tasks can be set here, each broadly following from the previous one:

  • reduce the dynamic range if it is too wide;
  • increase the average sound level of the phonogram;
  • give the phonogram a denser sound.

Much depends on the future life of the phonogram. In the USA, for example, a lot of classical music is recorded for the parishioners of various churches; this audience listens mainly in the car, so fairly strong dynamic processing is needed - unlike for music lovers, for whom it is important to preserve as much of the dynamic range as possible.

The tools used to achieve these goals also vary. Some use exclusively the "manual" method of drawing the volume envelope, some use dynamics processors, and some use both approaches. Here is what the well-known Russian mastering engineer Andrey Subbotin (Saturday Mastering studio) says:

“I have had experience mastering academic music; it mainly consisted of drawing the volume envelope at certain moments. I do not like compressors on the classics at all - it is better to do it 'by hand'."

Morten Lindberg:

"Mastering in our studio is mainly dynamics control and export of files in the required formats. We do not use any dynamics processors; instead we use volume envelopes in Pyramix to manually contain 'firing' attacks, as well as to emphasize and reveal expressiveness in musical passages."

Wim Bult:

"In the classics this is more about volume control: raising very soft passages and possibly lowering the volume in the loud dynamic ones."

Indeed, manually drawing the volume envelope is the most controllable way of shaping the dynamics of a piece, but it is also very painstaking work. With a certain taste and skill, you can greatly improve the sound: slightly soften the dynamic contrasts while preserving the drama of the work, making the phonogram more accessible to the listener. In this work it is important to avoid clearly audible jumps in volume caused by the envelope - that kind of "sound engineering" is audible and undesirable.
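As a rough sketch of what "drawing the volume envelope" means technically, here is a minimal Python illustration (the breakpoint format and function name are invented for the example): gain breakpoints in dB are interpolated linearly and applied sample by sample.

```python
import bisect

def apply_envelope(samples, points, fs):
    """Apply a hand-drawn gain envelope to audio.

    `points` is a list of (time_seconds, gain_db) breakpoints;
    gain is interpolated linearly (in dB) between them."""
    times = [t for t, _ in points]
    gains = [g for _, g in points]
    out = []
    for n, s in enumerate(samples):
        t = n / fs
        i = bisect.bisect_right(times, t)
        if i == 0:
            g_db = gains[0]                  # before the first breakpoint
        elif i == len(times):
            g_db = gains[-1]                 # after the last breakpoint
        else:
            t0, t1 = times[i - 1], times[i]
            frac = (t - t0) / (t1 - t0)
            g_db = gains[i - 1] + frac * (gains[i] - gains[i - 1])
        out.append(s * 10 ** (g_db / 20))
    return out

# Example: ease a passage down by 6 dB between 1 s and 2 s, with 1 s ramps.
quieter = apply_envelope([1.0] * 40, [(0, 0), (1, -6), (2, -6), (3, 0)], fs=10)
```

In a real DAW the envelope is drawn graphically, but the underlying arithmetic is exactly this kind of interpolated gain curve; smooth ramps between breakpoints are what prevent the audible jumps mentioned above.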

And earlier ...

The predecessor of today's manual drawing of the volume envelope was "manual" compression in the analog era (and in the digital one, too): when recording the stereo sum, the sound engineer, watching the master level, often kept a hand near the master fader. If a sharp, dangerous surge in level appeared, the master was pulled down slightly, usually returning to the starting position afterwards; in quiet places, on the contrary, the level was raised slightly.

It is also worth mentioning the "corduble" (correction take) - the final copy of the recording that ultimately went to the library. It was made by re-recording, usually through an analog console, the copy spliced together by the editor onto a new tape. This was done because it was considered more correct to keep a solid, unspliced version in the library: it deteriorated more slowly and lasted longer. During this re-recording the sound engineer could do whatever he saw fit on the console as a final touch: a slight frequency adjustment, the same manual riding of the master level, and sometimes, if required, added reverb. This process is very similar to modern mastering, isn't it?

Using compression is, on the one hand, a means of influencing the dynamic range that automates the manual drawing of the envelope; on the other, it is a way of introducing a certain density into the phonogram - more typical of pop records, but useful in the classics too, in small doses. Not all academic sound engineers use compression in mastering, but some do.

Maria Soboleva:

"I see the role of dynamics processing in mastering as revealing nuances in the music, not changing its character - then the music has more opportunity to open up to the listener. I use a ratio of no more than 1.5-1.6. This processing should not have a decisive influence on the dynamic contrasts: piano remains quiet but does not fall below the boundary of perception, becoming slightly more present, while forte sounds bright but not deafening."

Alexander Volkov:

“I love the effect that soft multiband compression has on the sound. As an example I will take the BK (Bob Katz) presets from the mastering section of the TC Electronic System 6000. With a low Ratio and rather long Attack and Release settings, the compression turns out soft. It is not audible as such, but it certainly gives a benefit, thickening the phonogram and raising the average level."
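The kind of gentle compression described in these quotes - a low ratio with long attack and release times - can be sketched as a simple single-band compressor in Python. All parameter values here are assumptions chosen for the example, not anyone's actual preset:

```python
import math

def soft_compress(samples, fs, threshold_db=-24.0, ratio=1.5,
                  attack_ms=100.0, release_ms=400.0):
    """Single-band downward compressor with a smoothed level detector.

    A low ratio plus long attack/release keeps the gain changes slow
    and, ideally, inaudible."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        coeff = atk if level > env else rel       # rising vs. falling signal
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out.append(s * 10 ** (gain_db / 20))
    return out
```

With a 1.5:1 ratio, material 24 dB above the threshold is pulled down by only 8 dB, and the slow time constants mean the gain drifts rather than pumps - which is what lets the processing stay "not audible as such".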

Mikhail Spassky:

"If I use compression, I prefer parallel compression. Unlike conventional compression, this mode lets you beautifully pull up the quiet places without affecting the dynamics of the loud fragments."

An effect similar to parallel compression is provided by the so-called upward compressor: in contrast to a conventional (downward) compressor, which lowers the level of loud places, it pulls the quiet places up.


Upward Compressor
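The static curve of such an upward compressor can be sketched in a few lines of Python (the threshold, ratio and boost ceiling are assumed example values, not taken from any particular device):

```python
def upward_gain_db(level_db, threshold_db=-30.0, ratio=2.0, max_boost_db=6.0):
    """Static curve of an upward compressor.

    Signals below the threshold are boosted toward it at the given
    ratio; loud signals pass through unchanged. The boost is capped
    so that noise and room rumble are not pulled up without limit."""
    if level_db >= threshold_db:
        return 0.0
    under = threshold_db - level_db
    return min(under * (1.0 - 1.0 / ratio), max_boost_db)
```

A signal 6 dB under the threshold gets a 3 dB boost at a 2:1 ratio, while anything above the threshold is untouched - the mirror image of the downward curve.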

So, if compression is used, it should be a delicate, soft effect that does no harm to the sound. Whether it is necessary at all is a moot point, and many prefer to work manually, but it is certainly possible - provided taste and a sense of proportion are always shown.

Bob Katz and Parallel Compression

American engineer Bob Katz is a prominent figure in the field of mastering. In addition to several books, including the famous Mastering Audio, he has written many articles, answers to questions, and useful comments on Internet forums.


Bob Katz's Mastering Audio Book

One of the techniques he actively recommends is parallel compression. Its essence is in duplicating the phonogram, with only one of the copies subjected to compression (and rather hard compression at that). This compressed copy is then mixed with the untouched one. As a result, loud places (and, importantly, the attacks of sounds) are practically unaffected, while quiet ones become brighter and denser - a result, it seems to me, quite desirable for academic music. For more information on this technique, visit Bob Katz's website http://www.digido.com.
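The technique is easy to sketch in code. The following Python fragment (function names and settings are illustrative, not Katz's own) hard-compresses a copy of the signal and mixes it back in at reduced gain:

```python
def hard_compress(samples, threshold=0.1, ratio=10.0):
    """Crude instantaneous hard compression for the duplicate track."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_compress(samples, wet_gain=0.5):
    """Mix the untouched track with a hard-compressed copy of itself."""
    wet = hard_compress(samples)
    return [dry + wet_gain * w for dry, w in zip(samples, wet)]
```

With these example settings a quiet sample at 0.05 comes out 50% louder, while a full-scale sample rises by less than 10%: the attacks survive, the quiet detail comes forward. In practice the summed level is then trimmed back down.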

Limiting, the extreme degree of compression, would seem to have no direct relation to the classics. But again, the decisive factor is the degree to which the device is used. Most of the professionals who spoke to me confirmed that they do use a limiter, mainly as the very last link in the chain, for a variety of purposes.

One of them is, in essence, normalization plus dithering. Devices like the Waves L2/L3 do this job successfully. The limiter itself can be used simply to raise the volume of the track to the level where limiting just begins. Or you can go further and set the device for a stronger effect, while making sure that only the fragments of the waveform that stand out from the average level are cut. As soon as the limiter starts working constantly on all the material, it seriously damages the phonogram. In this way, on some phonograms, a gain of 5-6 dB can be achieved, which is significant. The figure shows an exaggerated limiting situation, but I think the general idea is clear: above is the original version, below the limited one.


Limiting
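The "raise the level until limiting just begins" approach can be sketched as follows. This is a naive peak limiter with instant attack and exponential release; real devices such as the L2 add lookahead and smarter release behavior, so treat it purely as an illustration:

```python
import math

def limit(samples, ceiling=0.9, release_ms=50.0, fs=44100):
    """Naive peak limiter: instant gain drop at the ceiling,
    exponential recovery back toward unity gain."""
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    gain = 1.0
    out = []
    for s in samples:
        gain = 1.0 - (1.0 - gain) * rel          # recover toward 1.0
        if abs(s) * gain > ceiling:
            gain = ceiling / abs(s)              # instant attack
        out.append(s * gain)
    return out

# Example: +6 dB of make-up gain first, then catch the isolated peaks.
track = [0.4] * 100 + [0.8] + [0.4] * 100
louder = limit([s * 2.0 for s in track])
```

Multiplying the track by 2 (+6 dB) before this stage reproduces the scenario described in the text: only isolated peaks touch the ceiling, while the rest of the material simply becomes louder.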

The limiter is also useful when a few peak fragments in the soundtrack stand out from the average level and prevent raising the overall level. You can then set the device to respond only to such peaks. Another way to deal with such places is, again, to draw the volume manually - and this can be done on the specific track of the multitrack that caused the peak (for example, a particularly strong timpani stroke). In this case there is less chance of the whole phonogram audibly "sinking" in level.


Vladimir Ovchinnikov

Vladimir Ovchinnikov, sound and mastering engineer at the Mosfilm tone studio:

"In classical music, dynamics processing in mastering is for me more about limiting than compression. Compression makes serious changes to the sound and the musical balance; with limiting, the gain in overall level is significant."

So, we have examined the basic techniques used in mastering academic music. There are also operations that are relatively frequent in pop mastering but less common in academic work.

For example, MS conversion is more an attribute of restoration, or of assembling various phonograms into a single disc. In other cases, the balance between the center and the edges of the stereo base is usually settled during recording and mixing.
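For completeness, MS conversion itself is trivial arithmetic. The sketch below shows the encode/decode pair and a width parameter of the kind restoration tools typically expose (names are illustrative):

```python
def lr_to_ms(left, right):
    """Encode a stereo pair into Mid (sum) and Side (difference)."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_to_lr(mid, side, width=1.0):
    """Decode back to L/R; `width` rescales the side signal
    (1.0 = unchanged, below 1 narrower, above 1 wider)."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

With width = 1.0 the round trip is lossless; lowering it narrows the stereo base, raising it widens the base - which is why the technique belongs more to restoration and compilation work than to routine academic mastering.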

Overall reverberation in the master section is also rare: the desired spatial impression is built up during recording and mixing, and extra space on the master is usually not needed.

In the end…

Although the need for it is far less obvious than in pop music, mastering of academic music - this final stage of creating the finished product - is, as we have seen, almost always present: sometimes with an almost imperceptible difference between before and after, sometimes with a more serious transformation of the phonogram. Large foreign publishers of academic music sometimes loudly declare the complete "naturalness" of their releases and the absence of mastering. However, sound engineers who have compared the versions before and after report serious differences in the sound - which means some kind of mastering is still being done.

The situation when there is only academic music on a disc is now, I hope, becoming clearer to us. But today we increasingly see genres, styles and sounds combined and intertwined. A striking example is film soundtracks, which can contain purely orchestral numbers alongside rock or pop compositions. What to do in this case - bring the orchestral material closer to the pop sound? Make the pop numbers quieter? Or both? These are questions for further exploration of the immense topic of mastering.

The author thanks Maria Soboleva, Igor Petrovich Veprintsev, Mikhail Spassky, Alexander Volkov, Gennady Papin, Andrey Subbotin, Vladimir Ovchinnikov, Wim Bult, Morten Lindberg for their help in creating this article.

As audio illustrations, we attach recordings of unmastered and mastered phonograms made in one of the largest Moscow studios.

Sound processing and mixing of phonograms "from A to Z"

Easy-to-understand, step-by-step explanations of the material, starting from the very basics.

A full 16 video courses, containing a total of 187 video tutorials lasting more than 42 hours.

3 DVDs with materials totaling almost 9 gigabytes.

Many illustrative examples, practical advice, useful recommendations, the author's own techniques, know-how and other valuable information.


What is included in this course?

The comprehensive course consists of 16 video courses

Contains 187 video tutorials

Lasts a total of 42 hours

At the moment, this is the most voluminous video course the author has ever created.

After reading the full course contents below, you will see this for yourself and understand why it took so long to create.

Course 1. Theory of digital sound

11 lessons. Duration: 1 hour 21 minutes

This theoretical course is a mini video encyclopedia created to help beginners understand the most important concepts, terms and fundamentals of digital audio.

Lesson 1 - What is sound

Lesson 2 - Analog-to-digital conversion

Lesson 3 - Sample Rate and Bit Depth

Lesson 4 - Jitter and Quantization Noise

Lesson 5 - Noise

Lesson 6 - Dithering

Lesson 7 - Digital to Analog Conversion

Lesson 8 - Loudness in digital audio

Lesson 9 - Stereo and Panorama

Lesson 10 - Basic Audio File Formats

Lesson 11 - Digital Sound Processing Techniques

In this course, you will learn:

What a sound signal is; power and pitch, sound pressure, phase and volume.

What is digital audio and analog to digital conversion.

What is bit depth, sampling, quantization and digitization of a signal.

What is jitter.

What noise is and what types of it exist.

What is dithering.

What is digital-to-analog conversion and how does it work.

What is RMS and what is the optimal level for human perception.

What is the precedence effect.

What panorama and pseudo-stereo are, and also what panning is and what types of it exist.

What is lossless and lossy audio compression.

What are the types of audio processing, and how is destructive processing different from non-destructive.
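Several of the Course 1 topics (bit depth, quantization, dithering) fit in one small sketch. This Python fragment was written for this overview rather than taken from the course; it quantizes float samples to integer PCM with optional TPDF dither:

```python
import random

def quantize(samples, bits=16, dither=False, rng=None):
    """Round float samples in [-1.0, 1.0) to `bits`-bit integer PCM,
    optionally adding TPDF dither (two uniform +/-0.5 LSB values) first."""
    rng = rng or random.Random(0)         # seeded for reproducibility
    q = 2 ** (bits - 1)
    out = []
    for s in samples:
        v = s * q
        if dither:
            v += rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)
        out.append(max(-q, min(q - 1, round(v))))
    return out
```

Without dither the rounding error is correlated with the signal (audible as distortion on quiet material); adding two uniform random values of ±0.5 LSB turns it into benign noise at the cost of at most about 1.5 LSB of extra error.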

Course 2. Home studio equipment for mixing phonograms


7 lessons. Duration: 1 hour 13 minutes

In this video course, you will learn what equipment is needed to produce high-quality processing and mixing of phonograms, and why it is necessary. You will also learn how, among such variety, to choose equipment that is optimal in all respects.

Lesson 1 - General Thoughts About Equipment for Mixing

Lesson 2 - Software

Lesson 3 - Audio Interface

Lesson 4 - Studio Monitors

Lesson 5 - Studio Headphones

Lesson 6 - The sound engineer's workplace

Lesson 7 - Summing up

In this course, you will learn:

How much money to spend on studio equipment.

What is the principle of choosing studio equipment and how not to make a lot of mistakes at this stage.

What program to start with, how many programs are needed and whether the sound quality depends on the program.

What an audio interface is and what groups audio interfaces are divided into.

Which to prefer: a USB, FireWire or PCI interface.

How many channels should a sound card have for mixing, and what should be the sampling rate and bit depth of its ADC / DAC.

Is it necessary to have built-in microphone preamplifiers and headphone amplifiers in the sound card?

What signal-to-noise ratio, frequency range and dynamic range a professional sound card should have.

What are studio monitors, why are they needed, what categories they are divided into and what type of monitors should be preferred.

What power and frequency range should monitors for a home studio have, and what type of bass reflex is preferable for monitors.

Rules for listening to monitors when buying.

Monitor models in different price categories, many of which the author has worked with and recommends paying attention to.

What types studio headphones are divided into, and what frequency range, sensitivity, impedance and distortion level they should have.

Headphone models of different price categories, which the author recommends paying attention to.

You will also learn how to install monitors in your listening room.

Course 3. Steinberg Nuendo 4. Tuning, Optimization for Mixing and Audio Basics


15 lessons. Duration: 2 hours 34 minutes

In this video course, you will learn about all the necessary functions and capabilities of the Nuendo 4 program for effective work with audio. The course will allow those who come across it for the first time to learn how to work in this program.

Lesson 1 - Configuring ASIO and Latency

Lesson 2 - Configuring Virtual Switching

Lesson 3 - Creating and Configuring a Project

Lesson 4 - The main tracks of the program. Part 1

Lesson 5 - The main tracks of the program. Part 2

Lesson 6 - The main tracks of the program. Part 3

Lesson 7 - The main tracks of the program. Part 4

Lesson 8 - Nuendo Interface and Optimization. Part 1

Lesson 9 - The Nuendo interface and its optimization. Part 2

Lesson 10 - Nuendo Interface and Optimization. Part 3

Lesson 11 - The Nuendo interface and its optimization. Part 4

Lesson 12 - Navigation, Playback and Editing

Lesson 13 - Effects in Nuendo

Lesson 14 - Automation

Lesson 15 - Convert to Stereo File

In this course, you will learn:

How to configure ASIO sound card driver, latency, and virtual inputs and outputs.

How to create a project, set its bit depth and sample rate.

How to create and manage virtual program tracks: audio tracks, effect channels, subgroups, automation tracks and folder tracks.

How the Track List section interface is optimized, as well as color-coding of tracks and objects for ease of use.

How to optimize menus, toolbar, transport bar, and work windows.

How to customize the program for yourself.

How navigation, playback and editing of audio files are done.

How to use the effects and how to connect them using the “insert” and “send” methods.

How to export a multi-channel project to a stereo file.

What is automation, how to write and edit it.

Course 4. Sound processing devices. Device and practical application


38 lessons. Duration: 6 hours 21 minutes

In this course, we will talk about the main sound processing devices, their structure and practical application. You will find out where, why and, most importantly, how each device is used. The course covers all the basic effects and tools used in processing and mixing phonograms.

Lesson 1 - Analyzers

Lesson 2 - Equalizer. Theory

Lesson 3 - Equalizer in action. Resolution of frequency conflicts

Lesson 4 - Equalizer in action. Tone Correction

Lesson 5 - Equalizer in action. Artistic equalization

Lesson 6 - Equalizer. Equalization rules

Lesson 7 - Compressor. Theory

Lesson 8 - Compressor. Compressor parameters

Lesson 9 - Compressor in Action

Lesson 10 - Multiband Compressor. Theory

Lesson 11 - Multiband Compressor in Action

Lesson 12 - Limiter. Theory

Lesson 13 - Limiter in Action

Lesson 14 - Parallel Compression. Theory

Lesson 15 - Parallel Compression in Action

Lesson 16 - Side Chain Compression. Theory

Lesson 17 - Side Chain Compression in Action

Lesson 18 - Serial Compression. Theory and practice

Lesson 19 - Reverse Compression. Theory and practice

Lesson 20 - Noise Gate. Theory

Lesson 21 - Noise Gate in Action

Lesson 22 - DeEsser. Theory

Lesson 23 - DeEsser in Action

Lesson 24 - Adaptive Noise Suppression. Theory and practice

Lesson 25 - Exciter. Theory

Lesson 26 - Exciter in Action

Lesson 27 - Chorus. Theory

Lesson 28 - Chorus in Action

Lesson 29 - Phaser. Theory

Lesson 30 - Phaser in Action

Lesson 31 - Flanger. Theory

Lesson 32 - Flanger in Action

Lesson 33 - Delay. Theory

Lesson 34 - Delay in Action

Lesson 35 - Reverb. Theory

Lesson 36 - Reverb. Reverb types

Lesson 37 - Reverb. Reverb parameters

Lesson 38 - Reverb in Action

In this course, you will learn:

What are analyzers, what are they like, why are they needed and how to use them.

What is an equalizer and what is it for.

What is parametric and what is a graphic equalizer, what is the difference between them.

The most commonly used filter types in equalizers.

What is frequency conflict and how the equalizer helps to solve it.

Seven rules of equalization.

You will see how, in practice, EQ resolves frequency conflicts between kick and bass, between drums and percussion group, and between bass and guitar.

You will see how, in practice, the timbre of various instruments is corrected using the equalizer.

You will learn what artistic equalization is, how it is applied in practice, and how to create unusual effects by controlling the equalizer using automation.

What is a compressor, how is it arranged and what is it intended for.

What parameters does the compressor have and what each of them is responsible for.

You will see with a practical example how the compressor is applied to various musical instruments and how it should be used.

What is a multi-band compressor, how it works and in what cases it is used.

You will learn from a practical example of a drum kit how a 3-band compressor is applied.

What is a limiter, how it differs from a compressor and what is a "brick wall".

With a practical example of a drum kit, you will learn how the limiter is applied and how the use of different limiters affects the processing result.

What is parallel compression, and what three types it can be. Various types of parallel compression are discussed in a separate lesson with a practical example.

What is sidechain compression, how to use it to create a kind of "pumping" effect and how the so-called ducking, which is widespread on radio and TV, is implemented.

What is Serial and Reverse Compression. How both compression methods are put into practice.

What is a Noise Gate, what is it used for, what parameters this device has and what each of them is responsible for. You will learn how to put this device into practice with an example of electric guitar and vocals.

What is DeEsser, what is it used for, what types are there and how to use it.

What adaptive noise suppression is, what it is used for and how to use it.

What is an exciter, what is it used for and how it differs from an equalizer. You will see how to use the exciter in practice with two examples.

What are chorus, flanger and phaser, how they work, how they differ, what they are used for, what parameters they have and what each of their parameters is responsible for.

Over the course of several lessons, you will learn how to use chorus, flanger, and phaser in practice with acoustic guitar processing.

What is a delay, what is it used for, how it works, what parameters it has, and how it is used in practice.

What reverberation is, its properties, and the parameters that shape the spatial impression it creates.

You will learn what types of reverb are, what each of the many reverb parameters is responsible for, and learn how reverb is used when processing various musical instruments.

Course 5. Drum set. Treatment


17 lessons. Duration: 4 hours 50 minutes

The course is divided into five parts, each devoted to the processing of a specific instrument. Using various examples, it explains in detail the principles and techniques for processing the bass drum, snare drum, tom-toms, hi-hat and overhead microphones. In this course we will talk only about processing, not mixing, and only about an acoustic drum kit, not an electronic one.

Lesson 1 - Bass drum. Part 1

Lesson 2 - Bass drum. Part 2

Lesson 3 - Bass drum. Part 3

Lesson 4 - Bass drum. Part 4

Lesson 5 - Bass drum. Part 5

Lesson 6 - Snare drum. Part 1

Lesson 7 - Snare drum. Part 2

Lesson 8 - Snare drum. Part 3

Lesson 9 - Snare drum. Part 4

Lesson 10 - Toms. Part 1

Lesson 11 - Toms. Part 2

Lesson 12 - Toms. Part 3

Lesson 13 - Toms. Part 4

Lesson 14 - Toms. Part 5

Lesson 15 - Hi-hat. Part 1

Lesson 16 - Hi-hat. Part 2

Lesson 17 - Overhead Microphones

In this course, you will learn:

You will study the structure of the bass drum's frequency range literally "from A to Z", using different examples of its sound, in full detail.

Learn more about the principles and features of EQ for bass drums of different timbres.

In this course, you will learn in detail about the various methods of bass drum compression.

You will learn exactly how and for what purposes the gate is used to process the bass drum and how to clean the bass drum from extraneous noise with the help of the stereo base expander.

You will see how to specifically process the bass drum with reverb.

Learn in detail the structure of the frequency spectrum of the snare drum, as well as the key principles and features of its equalization on different sound examples.

You will learn in detail all the features of snare drum compression using examples with different playing techniques.

Learn to operate a gate when processing a snare drum.

Learn how to use snare reverb, including paired with a gate to create a gated reverb.

In separate lessons, you will get acquainted with the structure of the frequency spectrum of tom-toms and consider all the main principles and features of their equalization using different sound examples.

You will also learn in detail about the methods of compressing tom-toms, including using a multiband compressor to process them.

You will learn how to operate the gate and how to apply reverb to tom toms, including gated reverb.

In two separate video tutorials, you will go through all the features of working with a hi-hat, including analysis of its frequency range, equalization and exciter processing.

The final lesson of the course will cover overhead microphone processing, including spectrum analysis, equalization, psychoacoustic processing, phase normalization and compression.

Course 6. Processing and mixing of a drum kit "from A to Z" I


7 lessons. Duration: 2 hours 55 minutes

Lesson 1 - Project Analysis

Lesson 2 - Bass drum

Lesson 3 - Snare drum

Lesson 4 - Toms

Lesson 7 - Reverb

In this course, you will learn:

How gating, compression and equalization of the bass drum are carried out in practice.

How the snare processing, including EQ and compression, proceeds.

You will learn how to properly pan the toms, and again see the particulars of processing them with a gate, compressor and EQ.

Take another look at how to equalize the hi-hat, channels with overhead mics, and general mics.

Learn how to apply normal limiting and Brick-Wall limiting to channels with overhead and general microphones, and learn how to normalize phase with a stereo expander.

The course will also cover the use of general compression to "thicken" a drum kit, and the final lesson will cover the use of reverb to process its various components.

Course 7. Processing and mixing of a drum kit "from A to Z" II


7 lessons. Duration: 1 hour 55 minutes

Lesson 1 - Project Analysis

Lesson 2 - Bass drum

Lesson 3 - Snare drum

Lesson 4 - Toms

Lesson 5 - Hi-hat, overhead and general microphones

Lesson 6 - General Dynamics

Lesson 7 - Reverb

In this course, you will learn:

How the bass drum is processed in practice with a gate, compressor and equalizer.

How the snare drum is processed with an EQ and a compressor.

Once again, you'll see how to pan toms and manipulate them with a gate, compressor, and EQ.

Once again, you will learn how to equalize hi-hat, channels with overhead mics and general mics.

The course also covers the use of regular and deep limiting when processing channels with general microphones and overhead microphones.

This course also covers the application of general drum dynamics processing, and the concluding lesson covers the use of reverb.

Course 8. Processing and mixing of a drum kit "from A to Z" III


7 lessons. Duration: 1 hour 38 minutes

Lesson 1 - Project Analysis

Lesson 2 - Bass drum

Lesson 3 - Snare drum

Lesson 4 - Toms

Lesson 5 - Hi-hat, overhead and general microphones

Lesson 6 - Parallel Compression

Lesson 7 - General Dynamics and Spatial Processing

In this course, you will learn:

How, in practice, the bass drum is gated and equalized to create a timbre characteristic of a given style of music.

How the snare drum is processed with an EQ, gate and compressor.

Once again, you'll see how to pan toms and handle them individually with gate, compressor, and EQ.

See how the frequency processing of the hi-hat, channels with overhead mics and general mics is performed.

This course also covers the use of dynamics processing tools for polishing the channels with general and overhead microphones.

In a separate lesson, you will see in practice how to perform parallel compression of certain components of a drum kit.

And in the final lesson of the course, take another look at the use of general drum dynamics processing and the use of reverb.

Course 9. Percussion. Processing and mixing


9 lessons. Duration: 1 hour 39 minutes

In this course, you will learn about the processing of the most common percussion instruments and how to mix a percussion group.

Lesson 1 - Agogo Bells

Lesson 2 - Congas

Lesson 3 - Shaker

Lesson 4 - Woodblock

Lesson 5 - Cowbell

Lesson 6 - Big Drum

Lesson 7 - Tambourine

Lesson 8 - Triangle

Lesson 9 - Mixing a Percussion Group

In this course, you will learn:

Features of processing the agogo bells using a parametric equalizer.

How to Process Congas with EQ and Compressor.

Features of the shaker equalization.

A technique for processing a woodblock using an equalizer and a gate.

You will learn how and with what tools the cowbell should be processed.

Features of processing large, low-sounding ethnic drums with a compressor and parametric equalizer.

How, and with what tools, to process a tambourine and a triangle.

In the final lesson of the course, you will see the entire processing and mixing of 8 different percussion instruments with a drum kit.
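Nearly every lesson above leans on a parametric equalizer. For the curious, here is a sketch of what a single parametric (peaking) EQ band does under the hood, using the widely published RBJ "Audio EQ Cookbook" biquad formulas; this illustrates the general technique, not the specific plug-ins used in the course:

```python
import math
import numpy as np

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook),
    normalized so that a[0] == 1."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def biquad(signal, b, a):
    """Direct-form I:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    y = np.zeros_like(signal, dtype=float)
    x1 = x2 = y1 = y2 = 0.0
    for n, x0 in enumerate(signal):
        y0 = b[0] * x0 + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y[n] = y0
        x2, x1 = x1, x0
        y2, y1 = y1, y0
    return y

# A hypothetical move: boost 4 kHz by 6 dB at a 44.1 kHz sample rate,
# the kind of presence adjustment a shaker or cowbell might get.
b, a = peaking_eq_coeffs(44100, 4000, 6.0, q=1.5)
```

At 0 dB gain the coefficients collapse to an identity filter, which is a handy sanity check when experimenting with the formulas.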

Course 10. Electronic drums. Processing and mixing


9 lessons. Duration: 1 hour 11 minutes

Throughout this course, the author shows the processing and mixing of the drum section using one example. In this course, we will talk about electronic drums in dance music, about the peculiarities of their processing and, of course, about mixing them.

Lesson 1 - Project Analysis

Lesson 2 - Bass kick

Lesson 3 - Snare

Lesson 4 - Claps

Lesson 5 - Percussion

Lesson 6 - Hi-hat and ride

Lesson 7 - Loops

Lesson 8 - Crash Cymbals and Breaks

Lesson 9 - Overall Compression and Final Polishing of the Project

In this course, you will learn:

How to gate, compress, and EQ an electronic sampled kick drum.

How, and with what tools, to process the snare, claps and various percussion instruments in electronic dance music.

Learn how to EQ the electronic hi-hat and ride cymbal.

Become familiar with the processing and mixing of loops, learn how to apply equalization to them.

The course also covers mixing crash cymbals, breaks and applying general compression to the entire drum section in the final lesson.
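Gating the kick comes up in both the acoustic and electronic drum courses. The core idea can be shown in a deliberately minimal sketch (hypothetical threshold values; real gates add attack, hold and release envelopes so the cut is not audible as clicks):

```python
import numpy as np

def noise_gate(signal, threshold=0.1):
    """Hard gate: mute every sample whose absolute level is below
    the threshold, so bleed between kick hits is silenced."""
    out = np.copy(signal)
    out[np.abs(out) < threshold] = 0.0
    return out

# Quiet bleed around the two kick transients is removed.
kick = np.array([0.02, 0.8, 0.5, 0.03, -0.01])
gated = noise_gate(kick, threshold=0.1)
```

On a real kick channel the threshold is set just above the level of the bleed from the rest of the kit, which is exactly the judgment call the lessons demonstrate by ear.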

Course 11. Bass. Processing and mixing


6 lessons. Duration: 1 hour 50 minutes

The course is divided into two parts. The first is devoted to the processing and mixing of the bass guitar, the second to the processing and mixing of electronic synthesized bass in dance music. The course covers both processing the bass and mixing it with the drum section.

Lesson 1 - Bass Guitar - I

Lesson 2 - Bass Guitar - II

Lesson 3 - Bass Guitar - III

Lesson 4 - Electronic Bass - I

Lesson 5 - Electronic Bass - II

Lesson 6 - Electronic Bass - III

In this course, you will learn:

How to process a bass guitar recorded "in line" (direct) and mix it with the drum kit so that its sound becomes dense, rich and deep.

You will learn how to process a bass recorded from a combo amp with a multi-band compressor and EQ, and how to mix it with the drum kit for a more intelligible, clearer sound.

You will learn how to process a bass guitar that was recorded "in line" with not very high quality, using two equalizers at once as well as an exciter and a compressor, to make it sound much more powerful and massive.

In two different examples, you will see how two basses of different timbre in electronic dance music are processed and mixed with the drum section, and how to apply an equalizer, compressor, stereo-width expander and reverb in this case.

In the final lesson, you will see how to process, and mix with the drum section, a bass that is simple in timbre but with which beginners nevertheless often have a lot of problems.

Course 12. Processing and mixing of guitars


12 lessons. Duration: 3 hours 33 minutes

The first part of the course is devoted to the processing and mixing of acoustic guitars, the second to electric guitars. A variety of examples explore the principles and methods of processing various types of acoustic and electric guitars.

Lesson 1 - Acoustic Guitar Processing - I

Lesson 2 - Acoustic Guitar Processing - II

Lesson 3 - Acoustic Guitar Processing - III

Lesson 4 - Acoustic Guitar Processing - IV

Lesson 5 - Acoustic Guitar Processing and Mixing - I

Lesson 6 - Acoustic Guitar Processing and Mixing - II

Lesson 7 - Processing Electric Guitar - I

Lesson 8 - Processing Electric Guitar - II

Lesson 9 - Processing Electric Guitar - III

Lesson 10 - Processing Electric Guitar - IV

Lesson 11 - Processing Electric Guitar - V

Lesson 12 - Processing and Mixing Electric Guitar

In this course, you will learn:

How to process a strummed acoustic guitar with an EQ, compressor, and reverb.

You will learn how to apply EQ, compression, and reverb to an acoustic guitar playing a country-style accompaniment.

You will see how an acoustic guitar playing an "open strumming" part is processed with a compressor, EQ and modulation effects to add body to the sound.

One of the lessons examines in detail the complex processing of a "problem" acoustic guitar with nylon strings, as a result of which a dry, boxy-sounding guitar gains transparency and body.

You will see the processing and mixing of an accompanying double track acoustic guitar with drums and bass.

A separate lesson of the course is devoted to processing an ensemble of three acoustic guitars, which covers everything from equalization and compression to balancing and spatial processing.

In one of the lessons of the course, you will see the peculiarities of processing an accompanying electric guitar in a funky style.

You will learn how to process a clean electric guitar playing a palm-muted part with an EQ, compressor, tube emulator and tape-machine emulator.

In a separate lesson of the video course, the complex processing of a "problematic" clean fingerpicked electric guitar is analyzed in detail, as a result of which a dull-sounding instrument acquires excellent transparency, depth and body.

You will also see which devices to use, and exactly how, to process an electric guitar with distortion, whether it plays solo, slide, or accompaniment.

The last lesson of the course is devoted to the processing and mixing of an electric rhythm guitar with overdrive, recorded as a double track, with drums and bass in a rock style.

Course 13. Processing and Mixing Keyboard Instruments


7 lessons. Duration: 3 hours 17 minutes

The course consists of several parts, each of which focuses on a specific keyboard instrument. Despite the fact that instruments such as marimba and vibraphone are keyboard percussion instruments, their processing and mixing will also be discussed in this course.

Lesson 1 - Acoustic Piano Processing - I

Lesson 2 - Acoustic Piano Processing - II

Lesson 3 - Acoustic Piano Processing and Mixing

Lesson 4 - Processing and mixing the Hammond organ

Lesson 5 - Rhodes Piano Processing

Lesson 6 - Piano, Rhodes, Mallets

Lesson 7 - Synthesizers in dance music. Processing and mixing

In this course, you will learn:

The first lesson is devoted to processing a "soft" acoustic piano in a ballad style. You will learn how to get the desired sound with fine equalization, multi-band compression, and well-chosen reverb.

You will learn how the classical piano is processed.

In a separate lesson you will see, inside and out, how to process an acoustic piano in a blues-rock style and mix it with drums and bass.

One of the lessons of the course is devoted to processing a Hammond organ and mixing it with drums and bass. You will learn how to process this instrument with an equalizer, compressor, guitar-combo emulator and reverb so that it fits into the mix without difficulty.

You will see the process of transforming a "dead" sounding Rhodes piano into a beautiful, transparent and soft in timbre instrument due to thoughtful complex processing.

A separate 45-minute lesson teaches the processing and mixing of acoustic piano accompaniment and Hammond organ, Rhodes, marimba and vibraphone solos with bass and drums.

The final lesson is devoted to the complete mixing of 4 synthesizers of different timbre with a drum section and bass in electronic dance music.

Course 14. Processing of winds and strings


7 lessons. Duration: 1 hour 49 minutes

This video course is devoted to the processing of wind and stringed bowed instruments. Each lesson covers the processing of a specific musical instrument.

Lesson 1 - Alto Saxophone

Lesson 2 - Tenor Saxophone

Lesson 3 - Trumpet

Lesson 4 - Muted Trumpet

Lesson 5 - Flute

Lesson 6 - Violin

Lesson 7 - Cello

In this course, you will learn:

The first lesson of the course is devoted to alto saxophone processing. You will learn how to achieve a soft and velvety sound from a shrill and bright saxophone with the help of correct equalization, serial compression and the competent application of psychoacoustic and spatial processing.

You will learn how the tenor saxophone is processed by an equalizer, multi-band compressor and reverb.

You will get acquainted with the main principles of technical and artistic processing of the trumpet, and in a separate lesson you will learn the features of processing a trumpet with a mute.

A separate lesson is devoted to flute processing. Here you will see how to process the flute with an EQ, compressor and reverb to get rid of a dry sound and make it soft and airy.

You will also see how the violin and cello are handled.

Course 15. Mixing a phonogram "from start to finish" I


12 lessons. Duration: 2 hours 57 minutes

During this practical course, the author mixes a pop track (in the chanson style) from scratch to the finished result. This course was created so that you can see the process of mixing a phonogram "from start to finish". Throughout the course, the author's actions are accompanied by comments, but without dwelling on theory, definitions, or the basics of working with sound.

Lesson 1 - Preparing the project for work

Lesson 2 - Kick drum

Lesson 3 - Snare drum

Lesson 4 - Overhead Microphones, Room Microphones, and Shaker

Lesson 5 - Bass guitar

Lesson 6 - Acoustic Guitar

Lesson 7 - Electric Guitar

Lesson 8 - Pad

Lesson 9 - Sequence

Lesson 10 - Strings

Lesson 11 - Lead Guitars

Lesson 12 - Spatial processing and final polishing of the mix

In this course, you will learn:

In the first lesson of the course, you will learn how to properly prepare a project so that further work with it is comfortable and effective.

You will study the structure of the frequency range of the project's bass drum and see the process of its equalization, compression and gating.

You will study the structure of the frequency spectrum of the project's snare drum, after which it is equalized and compressed.

A separate lesson covers overhead microphone processing, including spectrum analysis, equalization, phase normalization, and dynamics processing.

You will see how to process the bass with an EQ and a compressor to make it blend better with the drums.

You will definitely see the processing and mixing of the accompanying acoustic guitar recorded as a double track.

In a separate lesson, you will see the process of complex processing and mixing of a clean accompanying electric guitar, recorded as a double track, as a result of which "plastic" sounding guitars will sound noticeably better and easily fit into the mix.

You will see the process of correctly processing a muddy-sounding pad, which only after thoughtful processing can sit in the mix without creating mud or masking half of the other instruments.

A separate lesson of the course is devoted to the processing and mixing of an arpeggiated synthesizer.

There is a dedicated lesson on Strings Processing, in which you will learn not only the features of equalization and dynamics processing of synthesized strings, but also learn how to apply interesting modulation processing to better match them with the mix.

In one of the lessons, you will also see the process of processing and mixing distorted solo electric guitars using equalization and compression, as well as tube-amp and tape-machine emulation.

And in the final lesson, you will see the process of spatially processing all the instruments that need it, and the final polishing of the project.

Course 16. Mixing a phonogram "from start to finish" II


16 lessons. Duration: 3 hours

During this practical course, the author mixes a dance track "from start to finish". This course was created so that you can see the process of mixing a phonogram from start to finish. Throughout the course, the author's actions are accompanied by comments, but without dwelling on theory, definitions, or the basics of working with sound.

Lesson 1 - Preparing the project for work

Lesson 2 - Bass kick

Lesson 3 - Snare & Claps

Lesson 4 - Other drums and percussion

Lesson 5 - Loops

Lesson 6 - Bass

Lesson 7 - Guitars

Lesson 8 - Synthesizer I

Lesson 9 - Synthesizer II

Lesson 10 - Synthesizer III

Lesson 11 - Synthesizer IV

Lesson 12 - Synthesizer V

Lesson 13 - Synthesizer VI

Lesson 14 - Synthesizer VII

Lesson 15 - Effects

Lesson 16 - Spatial processing and final polishing of the mix

In this course, you will learn:

In the first lesson of the course, you will learn how to properly prepare a project so that the subsequent work with it is more comfortable and effective.

You will study the structure of the frequency range of the project's bass drum and, of course, see the process of its equalization and compression to obtain the desired timbre.

You will study the structure of the snare and clap frequency spectra, after which dynamics and EQ processing are applied so that they sit better with the bass drum.

A separate lesson covers the processing and mixing of as many as six percussion components of the drum section, including the analysis of their spectra, equalization, panning, balancing, as well as modulation and dynamics processing.

You will familiarize yourself with the processing and mixing of loops, see how EQ is applied to them.

You will see how three parallel bass parts of different timbre are processed and mixed with the drum section.

In a separate lesson, you will see the process of processing and mixing clean accompanying electric guitars, as a result of which the characterless sounding guitars will sound much more interesting and fit better into the mix.

As many as 7 lessons of this course, with a total duration of 50 minutes, are devoted to the processing and mixing of all the synthesizers in the project.

A separate lesson of the course is devoted to the correct processing and mixing of various special effects, accent cymbals and drum breaks.

And in the final, almost half-hour video tutorial, you will see the process of spatially processing all the components of the mix that need it, and the final polishing of the project.

What are the advantages and benefits?

Of course, if you have read the full contents of the course above, you will not argue that its main advantage is the huge amount of information. But this is far from its only advantage.

Effective! The lessons in the course are recorded in screencast format. Watching them feels as if you were sitting next to the author while he showed his actions on a real computer and commented on them.

Structured! This course is a smart learning system, not just a collection of lessons. The course is divided into chapters, chapters into individual lessons, and each lesson is devoted to a specific topic.

Informative! The video courses really cover the topic in detail. After studying them, you are unlikely to feel an urgent need to search for additional materials.

Unique! The course contains many of the author's own techniques, know-how and other useful information that you are unlikely to find anywhere else.

Clear! When theory is explained, complex things are illustrated with simple examples, and key points are shown on screen as visual illustrations.

Reliable! The video course contains only trustworthy, up-to-date information that will not lose its relevance for several years to come.

Convenient! You can watch any lesson of the course whenever you want and as many times as necessary. You will not need to travel anywhere to study it.

Think you have alternatives?

Yes, you might think that learning from books, taking face-to-face courses and looking for free information online is much better than learning from this video course.

Of course, we have nothing against all this, but think about it, is it really better for you personally?

Books

Books are good, but ...

Most of them are written in rather dry language and oversaturated with theory. Practical examples are rare in them, and, as you know, theory without practice is meaningless.

Many people understand text worse than video. Reading is one thing, watching and listening is another.

Most of the worthwhile books on this topic are in English. And anyway, have you seen many books on mixing and sound processing? They can be counted on one hand.

Full-time courses

Their undoubted advantage is good assimilation of information, but at the same time:

Face-to-face courses are expensive and, at the same time, do not always provide comprehensive information.

Most face-to-face courses are held in large cities, so they are simply unavailable to residents of smaller towns.

By signing up for courses, you will depend on them. You will have to visit them regularly, spending time and money on the road, which, you see, is very inconvenient.

Free materials from the Internet

Well, as for free information found on the Internet, it has only one plus - a relatively low cost, since you pay only for Internet access. Otherwise, free materials from the Internet have significant drawbacks:

They are not structured, often uninformative, often outdated, and sometimes completely unreliable.

You will have to spend a lot of time looking for them and structuring them.

And most importantly, even with all your desire, you will not be able to find on your own all the information contained in this video course, since a significant part of it is unique and is not found anywhere else.

Don't believe it? Fine. Try, for example, to find a free video tutorial in Russian on processing a muted trumpet, or a video course on mixing a funk drum kit from start to finish.

Who is this course for?

The video course is designed mainly for beginners and more or less advanced sound engineers who want to learn how to process sound and mix phonograms.

The course can also be useful for musicians, arrangers, DJs and composers who want to improve their knowledge and skills in sound processing and mixing phonograms.

If you are a beginner sound engineer and want to learn sound processing and mixing, this course is most likely exactly what you need.

The main thing is that none of this matters for completing the course: you do not need perfect pitch, you do not need to be a genius, and you certainly do not need to already understand all the intricacies of working with sound.

All you need is to carefully watch the lessons, delve into and constantly apply the knowledge gained from them in practice. It is important to act. If you take action, the result will not be long in coming.

If you are not going to study or make an effort, and are just looking for a magic wand that will solve all your problems overnight, this course is definitely not for you.

Of course, you are probably already worried about the question:

How much does the course cost?

How much do you think a training program of 16 video courses, containing 187 video lessons totalling over 42 hours, should cost?

Of course, the price of such a course cannot be low in any way, but at the same time it should not be too high, no matter how valuable the material is. Still, among those who want to study there are people of different ages and with different income levels.

Want to learn how to process sound and mix phonograms from start to finish? You will learn: what a sound signal is; power, pitch, sound pressure, phase and loudness; how much money to spend on studio equipment; how to set up an ASIO sound-card driver, deal with latency, and use virtual inputs and outputs; what analyzers are, what kinds exist, why they are needed and how to use them. You will get acquainted in detail with the principles and features of equalizing bass drums of different timbres, see how gating, compression and equalization of a bass drum are done in practice, how a snare drum is processed with an equalizer and a compressor, the features of processing the agogô percussion instrument with a parametric equalizer, and much more. All this knowledge is presented in a form that lets you absorb it and apply it in practice as easily, quickly and efficiently as possible.

Course 1. Theory of digital sound

This theoretical course is a mini video encyclopedia, and was created to help beginners understand the most important concepts, terms and fundamentals of digital audio.

Course 2. Home studio equipment for mixing phonograms

In this video course, you will learn what equipment is needed in order to produce high-quality sound processing and mixing of phonograms and why it is necessary. You will also learn how to choose the right equipment that is optimal in all respects with such a variety.

Course 3. Steinberg Nuendo 4. Tuning, Optimization for Mixing and Audio Basics

In this video tutorial, you will learn about all the necessary features and capabilities of Nuendo 4 to work effectively with audio. The course will allow those who come across it for the first time to learn how to work in this program.

Course 4. Sound processing devices. Design and practical application

In this course, we will focus on the basic sound processing devices, their design and practical application. You will find out where, why and, most importantly, how which device is used. The course covers all the basic effects and tools used in the processing and mixing of phonograms.

Course 5. Drum kit. Processing

The course is divided into five parts, each of which is devoted to the processing of a specific instrument. Various examples detail the principles and techniques for processing the bass drum, snare drum, toms, hi-hat and overhead mics. This course covers only processing, not mixing, and only an acoustic, not an electronic, drum kit.

Course 6. Processing and mixing of a drum kit "from start to finish" I

Course 7. Processing and mixing of a drum kit "from start to finish" II

Course 8. Processing and mixing of a drum kit "from start to finish" III

Course 9. Percussion. Processing and mixing

Course 10. Electronic drums. Processing and mixing

Course 11. Bass. Processing and mixing

Course 12. Processing and mixing of guitars

Course 13. Processing and Mixing Keyboard Instruments

Course 14. Processing of winds and strings

Course 15. Mixing a phonogram "from start to finish" I

Course 16. Mixing a phonogram "from start to finish" II

Bonuses:

  1. Support from the author
  2. Free mastering of the phonogram