
What information do MIDI messages contain? MIDI is a digital standard for interaction and data exchange between electronic musical instruments

Chapter 4 MIDI Interface

MIDI stands for Musical Instrument Digital Interface. It is a standard digital interface for data exchange between electronic musical instruments. It is not the sound signal itself that is transmitted via MIDI, but various control signals: pressing and releasing a key, key velocity, volume, vibrato, smooth pitch change, and also, to ensure synchronization, time information (time codes) and even digital audio information (samples), etc. The simplest use of MIDI is the generation of control commands by a master device (a MIDI sequencer) and their transmission to a controlled device (most often a synthesizer). Signals are transmitted as a digital sequence broken up into bytes. Unlike a digital audio recording, a recording of a MIDI sequence takes up very little memory. A single MIDI message usually consists of one, two, or three bytes (not counting system exclusive messages). When you play the keyboard or listen to a chord recorded in the sequencer, all notes of the chord are transmitted and played in turn. Nevertheless, we hear a solid chord, since the command transmission rate is quite high. By ear, the delay between sounds is imperceptible, and the MIDI interface is able to convey the vast majority of the nuances of a musician's playing.

The system of MIDI channels is used to control several multi-timbral musical instruments and other MIDI-capable devices at the same time. Each MIDI message is assumed to be transmitted on one of sixteen MIDI channels, and each channel can be assigned its own instrument or tone. The MIDI channel number is contained in the least significant four bits of the first byte of the MIDI message.
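As a rough illustration of this byte layout, here is a small Python sketch (not part of the original text). The split of the status byte into a four-bit message type and a four-bit channel number is standard MIDI; the function and variable names are illustrative.

```python
# Minimal sketch: splitting the first byte of a channel message into
# message type (high nibble) and MIDI channel (low nibble).

def decode_status(status: int) -> tuple[str, int]:
    """Return (message type, channel 1..16) for a channel-message status byte."""
    kinds = {
        0x8: "Note Off",
        0x9: "Note On",
        0xB: "Control Change",
        0xC: "Program Change",
        0xE: "Pitch Bend",
    }
    kind = kinds.get(status >> 4, "other")
    channel = (status & 0x0F) + 1      # low nibble 0..15 -> channels 1..16
    return kind, channel

print(decode_status(0x93))   # ('Note On', 4): a Note On transmitted on MIDI channel 4
```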

Among the variety of MIDI messages, one can single out those that are transmitted only on their own MIDI channel. These are Channel Messages: the Note On and Note Off commands, the various MIDI controllers, and the sound-switching and mode-changing Program Change commands. In addition, there are messages that are transmitted without being tied to specific channels - System Messages. These include System Real Time Messages, such as Timing Clock (the MIDI system clock) and a number of other commands that keep the system stable, and System Exclusive Messages, a group of MIDI messages separate from all others.

The original purpose of MIDI was to make it possible to control several instruments at once from the keyboard of a single instrument. Today MIDI sequencers, or simply sequencers, are widespread - devices or programs that allow you to record a piece of music as a sequence of MIDI messages. Playing it back later through the same devices from which the recording was made, we get an identical sound result.

MIDI channels and sequencer channels are not the same thing. Sequencer channels are usually called tracks. There are only 16 MIDI channels, while there are usually far more virtual sequencer tracks, so several sequencer tracks can be sent to the same MIDI channel. This can be useful, for example, for switching from one recorded take of a part to another, or for laying out drums, when each drum instrument needs to be played on its own track without occupying scarce MIDI channels.

Recently, the General MIDI standard has also become widespread. It assumes that in musical devices from different manufacturers, tones that are similar in sound have the same numbers. For example, a regular grand piano is tone No. 1, a string ensemble is tone No. 49, and so on. Thus, if you have a MIDI sequence recorded on General MIDI devices, you can play it on any device that supports this standard, and the sound result will differ only slightly from the original material.

So, technically, MIDI is a serial interface. But when working with MIDI, it is more convenient to think of the system in a "parallel" form, that is, as sixteen simultaneously existing channels.


Part 1 of a series of articles detailing the MIDI protocol.

Almost from its inception, the Musical Instrument Digital Interface (MIDI) protocol has become a standard for the entire electronic music industry with an unprecedented degree of compatibility. Even light bulbs, network and telephone sockets still do not have such compatibility. The situation now is such that if an electronic musical device is produced that is incompatible with MIDI, it is doomed to be cut off from the rest of the world.

The reason MIDI has enjoyed overwhelming success for twenty years is simple - the protocol was very carefully designed before being presented to the public. There are no "holes" in it, and the requirements for hardware implementation and device interaction are clearly defined and cannot be interpreted in two ways. In addition, MIDI is not owned by one company, but is the product of an entire association of manufacturers.

The basic premise for the emergence of MIDI was the urgent need of musicians of the time to control several synthesizers from one keyboard at the same time. At the same time, the developers were required to make the connection of instruments simple, and the interface itself reliable and inexpensive. Now, twenty years later, we can confidently say that for their time the developers fulfilled these conditions ideally.

MIDI was designed to be a simple, inexpensive, and reliable means of controlling one synthesizer from another.

This should be remembered whenever questions and perplexities arise of the kind "why is this done in MIDI this way?". Moreover, you need to keep the main purpose of MIDI in mind before criticizing the protocol. And MIDI has been criticized since its inception and is still criticized today, especially for its slow data transfer and rhythmic inaccuracy, particularly in light of modern technologies. The advantages and disadvantages of the protocol, ways to overcome them, and alternatives to MIDI are such an extensive topic that a separate article will be devoted to them.

Despite all the shortcomings, MIDI still fulfills its purpose quite successfully today. And not only that - the scope of the protocol has long gone beyond the control of synthesizers. Many effect processors, mixing consoles, even lighting, pyrotechnic devices and smoke machines are controlled via MIDI. What can we say about personal computers and the related multimedia industry! It is now in the order of things to download a MIDI file from the Internet as a ringtone for a mobile phone. I won't be surprised if soon it will be possible to download a MIDI file to control the food processor ...

World before MIDI
The mid-60s and early 70s of the last century were the time of the emergence and rapid flowering of electric musical instruments. On stage and in the studio, a fundamentally new type of musical instrument - the synthesizer - was added to the already widely used electric guitars and electric organs. The first synthesizers were very difficult to set up, transport and maintain, but they gave musicians something that could not be obtained in any other way: new, fresh sounds.

All synthesizers of those years were monophonic, that is, they could only produce one note at a time. Playing several sounds or musical parts at once required tricks. Basically, there were only two ways to do this: either use several synthesizers (and in the case of modular synthesizers, buy a separate generator for each voice), or record the part of each voice on a multitrack tape recorder.

Synthesizers at that time were completely analog; all their internal blocks (sound generators, envelope generators, filters) were voltage controlled. For example, an instrument's tone generator might produce a pitch of 100 Hz when a voltage of 1 V was applied, 200 Hz at 2 V, 400 Hz at 3 V, and so on. Obviously, only an analog interface could be used for external control of such a device. It was called CV/Gate. A control voltage proportional to the pitch of the note was applied to the CV input, and a trigger signal that started and stopped the note was applied to the Gate input.

There were several variants of the CV/Gate interface. The most widely used option was proposed by Roland: the CV voltage increased by 1 V for each octave of pitch. The gate signal, called the Voltage Trigger (V-Trigger), was a positive pulse whose width equaled the time the note was held down. This variant was used, along with Roland, by Sequential Circuits and ARP in their instruments. Moog synthesizers used a different type of Gate called the S-Trigger. There were also instruments with other CV/Gate signal parameters; for example, the control voltage could change at a rate of 1.2 V per octave.
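To make the volt-per-octave idea concrete, here is a hedged Python sketch. The choice of reference note (the note that maps to 0 V) is an arbitrary assumption for the example; real instruments used different calibrations.

```python
# Illustrative sketch of the Roland-style 1 V/octave convention described above.
# REFERENCE_NOTE is an assumption made for the example, not a fixed standard.

REFERENCE_NOTE = 36          # assumed note number that maps to 0 V
VOLTS_PER_OCTAVE = 1.0       # Roland variant; other instruments used e.g. 1.2 V

def note_to_cv(note: int) -> float:
    """Control voltage for a given note number, 12 notes per octave."""
    return (note - REFERENCE_NOTE) / 12.0 * VOLTS_PER_OCTAVE

for n in (36, 48, 60):       # three notes an octave apart: each adds exactly 1 V
    print(n, note_to_cv(n))
```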

A signal called Trigger was also used, which was a short pulse. Many synthesizers with arpeggiators have a dedicated clock input for these signals. As soon as a pulse arrived at the input, the next note of the arpeggio was triggered. Many drum machines and analog sequencers generated the Trigger signal (most often every 8th or 16th note, but sometimes the distance between the pulses could be set arbitrarily). The Trigger signal could also be connected to the Gate input of the synthesizer.

The main drawback of the CV/Gate interface was that it could control the playing of only one note at a time. Polyphonic instruments needed as many CV/Gate interfaces as the instrument had polyphonic voices. In addition, the information about the performer's actions in CV/Gate systems is very scarce: in fact, it is only the pitch of the note and the very fact of its start and release.

In the mid-70s, Oberheim released the first affordable polyphonic synthesizer, the Two Voice. The instrument was easy to use, had a built-in keyboard, two-voice polyphony, and an uncomplicated set of controls with which beautiful, rich sounds could be created quickly. In contrast to its predecessors, the instrument was small and easy to program. Soon after that, polyphonic instruments from other companies began to appear: Sequential Circuits, Yamaha, Moog, Roland, ARP. They became very popular with the growing mass of electronic musicians.

After polyphony, the next most important innovation was programmable memory. The synthesizer now contained a small computer that allowed the positions of all the knobs and buttons on the front panel to be stored in the instrument's memory, which opened up new possibilities for live performance. In addition, the computer monitored keystrokes and transmitted the pitch of the notes played to the sound generators. This made it possible to use digital control interfaces in the future.

Before memory appeared, each instrument had to be programmed in advance, and during a concert it could only produce one sound. That is why, at concerts of musicians such as Keith Emerson and Rick Wakeman, you could see huge "shelves" of keyboards. It took hours of work to prepare all this equipment for a concert and unite it into a working ensemble. When memory became available, one instrument could be programmed with several sounds, and the desired sound was selected by pressing a single button right during the concert.

But every synthesizer had its own character. Some produced wonderful trumpet sounds, others strings, and still others special effects. Musicians wanted to take the best from each instrument and get a single, great-sounding system.

At that time, the technique of playing two keyboards at the same time was common, which made it possible to create layered sounds. For example, one and the same part could be played with both hands: the right hand on an instrument strong in strings, the left hand on an instrument with an excellent brass section. It was quite difficult, and performers even developed their own playing techniques for systems built from specific synthesizer models.

All of these techniques served the same purpose: to get the most out of the new instruments. Layering the sounds of various synthesizers became one of the performing techniques, a calling card of many musicians of that time.

In the late 70s, digital electronics began to be widely used in synthesizers, driven by cheaper microprocessors and the mass production of integrated circuits. Over time, many synthesizer blocks became more profitable to build from compact, cheaper and more stable digital components. Naturally, the question of instrument control arose with renewed vigor: analog CV/Gate interfaces were no longer suitable for the new digital technologies shaping the sound. As a result, in the early 80s, synthesizers began to be equipped with digital interfaces.

Instruments such as the Oberheim OB-X (1981) and Rhodes Chroma (1982) appeared that could be connected to another instrument of the same model and brand. For example, an Oberheim OB-X could be connected to another Oberheim OB-X (up to three instruments in total). When a musician played the keyboard of one of them, both instruments sounded at the same time. It was a huge advance - you could play on a single keyboard and get layered sounds. However, the main problem was still not solved: how to connect instruments of different manufacturers and different models to each other.

Herbie Hancock, for example, tried to resolve this issue on his own: he refined his synthesizers with custom digital interfaces. And they worked!

At the same time, more and more musicians turned to synthesizer manufacturers to make their own digital interface for them. The introduction of the first digital sequencers such as the Roland MC 4 Micro Composer and the Oberheim DSX added fuel to the fire. If the instruments from different manufacturers were compatible, the musician could "load" parts into these sequencers, and then play them using a whole group of synthesizers. But alas ...

Shortly before MIDI, Roland developed the DCB digital interface, which was used in only two synthesizers (the Juno 60 and Jupiter 8) and the MSQ 700 sequencer. The DCB interface provided basic capabilities for triggering sounds through note commands.

It should be noted that along with attempts to connect synthesizers with each other, attempts were made back in the 60s to connect a synthesizer to a computer. But they did not lead to noticeable practical results because of the colossal cost of computers. In the late 70s and early 80s, there were several incompatible interfaces produced by hobbyists or small firms. Only the developer of such a computer system could write software for it. Usually such systems were created by adding special boards to the computer, which either directly generated sound (compare with modern virtual synthesizers!) or generated several channels of control voltage for modular synthesizers.

The birth of MIDI
So, by the beginning of the 80s of the last century, the need to create a universal interface had been recognized by many leading manufacturers. The task was this: to develop a standard for digital data transmission between all types of electronic musical instruments. The first exchange of views on this topic, attended by Ikutaro Kakehashi (president of Roland), Tom Oberheim (Oberheim) and Dave Smith (president of Sequential Circuits), took place in June 1981 at the NAMM show.

Dave Smith began his work by studying the literature on computer networks. When developing network protocols, two specifications are drawn up: the hardware connection of devices and the format of messages transmitted over the network. At the same time, the internal workings of a computer remain isolated; to other network participants it appears as a kind of "black box" that reacts to messages in accordance with the standard. This approach was also chosen for combining musical instruments. As a result, it was possible to avoid making the communication language of the instruments dependent on their internal design. This is the basic principle of MIDI, and it has remained unchanged ever since. It is thanks to this principle that the protocol continues its, by computer standards, prohibitively long life.

By the fall of 1981, Smith had prepared the first version of his protocol, called USI (Universal Synthesizer Interface). In October of the same year, at an exhibition in Japan, there was a meeting of representatives of Sequential, Roland, Korg, Yamaha and Kawai, at which USI was presented to the Japanese, and in November, at the AES convention in New York, Dave Smith officially presented the specification. Japanese manufacturers were working on their own standard at that time, which was more complex than USI.

In January 1982, at the NAMM show, Sequential Circuits organized a meeting attended by most of the synthesizer manufacturers. At the meeting, it turned out that other American companies, for various reasons, did not want to participate in the creation of a unified interface. After the meeting, Sequential Circuits and the Japanese firms (Roland, Korg, Yamaha, Kawai) decided to continue working together independently of the rest. Five months later, the fruits of this international development were presented at the June NAMM show. It was time for an official name for the interface. USI was rejected because the word "universal" could cause legal problems. The Japanese proposed UMII (Universal Music Instrument Interface), but since that name also contained the word "universal", Dave Smith suggested changing it to MIDI, which everyone agreed with.

In October 1982, the preliminary MIDI specification was completed. In December, the Sequential Circuits Prophet 600 was released - the first synthesizer equipped with a MIDI interface. In January 1983, at the NAMM show, a Prophet 600 and a Roland Jupiter 6 were connected via MIDI. The Roland JX 3P appeared in March, and the Yamaha DX 7 in June.

Before the advent of MIDI, synthesizers consisted of two components in "one bottle". The first component was the sound-production system, which actually produced the sound. The second was a controller, usually a keyboard, which converted the actions of the performer into voltage and current, that is, into a language that the first component understood. This process even had a name: "capturing the performer's touch".

The MIDI protocol made the distinction between the two components explicit, in effect severing their relationship. Any controller could now control any sound generator. This was of great psychological importance - the musician could freely select the necessary equipment, without fear that it would become outdated in six months, as is the case with other electronic devices.

Although the firms worked together on MIDI, they remained competitors in the market. Therefore, some companies added their own specifications to MIDI, in some cases misinterpreting the existing parameters (both through misunderstanding and deliberately), while all non-MIDI companies criticized the interface. At the same time, the firms associated with MIDI could not reveal all their secrets to competitors. For example, Sequential Circuits planned to release a multitimbral instrument (the Six-Trak) and proposed to introduce the necessary features into the specification, but the last thing they wanted was for the Japanese manufacturers to know about their plans.

However, work on MIDI instruments had to be coordinated, and in mid-1983 the Japan MIDI Standards Committee (JMSC) was formed. In August of the same year, the MIDI 1.0 specification was released. Also in 1983, the International MIDI Users Group (IMUG) was formed, which later became the IMA, the International MIDI Association. However, it represented users, not manufacturers, and could not seriously influence them. Therefore, in June 1984, the MIDI Manufacturers Association (MMA) was formed.

The MMA and JMSC are jointly involved in all MIDI standardization and extension activities. Any registered member of these organizations can propose their addition to the protocol, after which it will be put to a vote.

1983 - 2003
The MIDI protocol opened up tremendous possibilities for computer synthesis and sound control. Computers began to be used as a means of controlling synthesizers (as a sequencer or composer program that produces control actions based on special algorithms).

In 1984, Jim Miller released Personal Composer for the IBM PC, which was a MIDI sequencer and printed sheet music. Passport Designs and Sequential Circuits unveiled four- and eight-track sequencer programs for the Apple II and Commodore 64 computers. Roland released the GR 700 MIDI guitar controller and the SBX 80 clock and SMPTE interface, which revolutionized the synchronization of drum machines and sequencers with analog tape recorders. Yamaha introduced the D 1500 Digital Delay, the first effects processor whose presets could be changed via a MIDI Program Change message. Emu's Emulator II combined MIDI, SMPTE and computer control for the first time.

1985 saw the capture of the European market by Atari computers with built-in MIDI ports. MOTU and Opcode provided software MIDI sequencers for the Macintosh. At the same time, Yamaha developed the QX 1 hardware sequencer with an 80,000-note memory and the ability to edit the list of MIDI events. The next year, PC computers began to conquer the market, and many programs using MIDI appeared for the PC. Lexicon introduced the PCM 70 reverb, the first effects processor with MIDI-controllable preset parameters.

The protocol itself also moved forward. Designed for further expansion, it was enriched with new features. In March 1987, MIDI Time Code was added (a clock signal for the interaction of MIDI devices with tape recorders and other equipment working with SMPTE timecode), and in May 1987 the Sample Dump Standard (a protocol for transferring samples over MIDI). In December 1988 the Reset All Controllers message appeared, and in April 1990 the Bank Select message.

In 1990, Opcode released the Studio Vision MIDI audio sequencer for the Macintosh, as well as the Galaxy program, a versatile editor/librarian for MIDI devices. In May 1991, the protocol was supplemented with the All Sounds Off message, and in July 1991 with the MIDI Show Control commands for controlling lighting and pyrotechnic devices, as well as the Standard MIDI Files format (SMF) for platform-independent storage and exchange of sequencer data. In October 1991, the General MIDI standard appeared, which defines certain minimum requirements for GM-compatible devices and assigns sound names to patch numbers. The first GM-compatible instrument, the Roland SC 55 Sound Canvas, also appeared. Opcode released the OMS (Opcode Music System) MIDI extension for the Macintosh operating system.

In December 1991, the MIDI Tuning Specification was released - a way to fine-tune the tuning of instruments. In January 1992, the MIDI protocol was finally integrated into the recording studio - the MIDI Machine Control standard appeared, which allows you to control the transport functions of recording devices via MIDI.

With the advent of Microsoft Windows 3.1, PC users gained MIDI support at the operating system level. Cakewalk for Windows was released; Cubase, previously available for the Atari and Macintosh, became available on the PC. 1993 was the beginning of the multimedia boom. Sound cards with a MIDI interface appeared for the PC. MIDI technology was now actively used in two market sectors: professional and amateur.

Virtual studios based on a personal computer began their development. Virtual synthesizers, effect processors and other programs communicate via MIDI with the outside world (and even with each other inside the same computer, connected by a virtual MIDI cable).

In May 1996, the Downloadable Sounds (DLS) Level 1 specification was released, which allows you to supplement the sets of General MIDI patches available on the device with your own sounds.

Over the past five years, the MMA organization has released more than a dozen new specifications. January 1998 - SMF Lyrics Specification (lyrics to songs in standard MIDI files), January 1999 - MIDI Tuning Bank and Dump Extensions (new messages for fine tuning instruments) and DLS Level 1 specification version 1.1, June 1999 - SMF Language and Display Extensions (storing and displaying characters in MIDI files), SMF Device Name and Program Name messages (playing a MIDI file on multiple devices simultaneously), November 1999 - General MIDI 2.

In February 2000, the new RMID format was proposed, which allows the data of a standard MIDI file and a DLS file to be combined in one file. In October 2000 came the MIDI Media Adaptation Layer for IEEE-1394 (a method of transmitting MIDI messages over FireWire), in August 2001 the DLS Level 2.1 specification, and in November 2001 General MIDI Lite (for mobile applications and portable devices), as well as the XMF (eXtensible Music Format) specification, which is proposed as a replacement for the RMID format.

The last addition (May 2002) is the Scalable Polyphony MIDI Specification - a method that allows you to play the same MIDI file as correctly as possible, regardless of the available polyphony.

Despite all these additions, the MIDI specification is still 1.0.

The basics
MIDI is a communication protocol between a control device that generates commands and a slave device that executes those commands. If we narrow down this definition very much, then we can give a typical example: MIDI allows a performer to press a key on one instrument, and at the same time get the sound of another or even several. Any actions of the performer on the controls (pressing keys, pedals, changing the positions of the knobs, etc.) can be converted into commands that can be transmitted via a MIDI cable to other instruments. These tools, when they receive commands, process them in the same way as when acting on their own controls.

In fact, the MIDI protocol does not specify the composition of the interacting devices and does not require a live performer. The essence of the protocol is that in a system consisting of several devices, one device (the master) generates control commands, and all other devices (the slaves) execute these commands. If the slave devices are sound sources (synthesizers, sound modules, samplers, drum machines - in a word, tone generators), then they are controlled by commands related to sound production: for example, "play the note A of the first octave" or "switch to tone number 5". If the slave devices perform other functions, for example audio signal processing, then the commands for them will be slightly different. Be that as it may, the device receives control commands through its MIDI input (MIDI In).

Any device that has a MIDI output (MIDI Out) and is capable of sending control commands to this output can act as a master device. Master devices can be divided into two types: devices that are directly influenced by the performer (for example, a synthesizer) and devices that generate control commands automatically (without the participation of the performer), based on previously entered data. A typical example of the latter type of device is a sequencer.

The sequencer resembles a tape recorder, only it does not record sound, but control commands, and not to tape, but to the computer's memory (in the broad sense of the word, it can also be the built-in computer of a synthesizer). The sequencer allows you to record the actions of the performer (including performance dynamics, style, strokes, etc.), and then play them back in their original form, just as if the performer sat down at the instrument again and played the same thing. In addition, in the sequencer, you can edit the recorded information in ways that are impossible on a tape recorder: transpose parts or individual notes, change the rhythmic position of events or the timbre with which the synthesizer will play the part.
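As a hedged illustration of what such a recording looks like, here is a toy Python model (not taken from the original text): a sequencer stores timestamped events rather than audio, which makes edits like transposition trivial.

```python
# A toy model of a sequencer track: timestamped control events, not sound.

from dataclasses import dataclass

@dataclass
class NoteEvent:
    time_beats: float   # rhythmic position of the event
    note: int           # pitch as a note number
    velocity: int       # how hard the key was struck

track = [
    NoteEvent(0.0, 60, 100),
    NoteEvent(1.0, 64, 90),
    NoteEvent(2.0, 67, 95),
]

def transpose(events, semitones):
    """An edit impossible on a tape recorder: shift every note by an interval."""
    return [NoteEvent(e.time_beats, e.note + semitones, e.velocity) for e in events]

print(transpose(track, +2))   # the same performance, a whole tone higher
```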

The MIDI protocol was developed to control synthesizers, and in them, as you know, the most important control is the keyboard. It is not surprising, therefore, that the designers of MIDI chose the principle of the keyboard instrument to describe the actions of the performer.

MIDI is a pronounced keyboard-oriented protocol.

This does not mean that you can control the tone generator only from the keyboard - there are many other input methods, for example, electronic pads and whole drum kits, guitar or wind controllers (we will talk about them separately and in more detail). However, whichever input device is used, messages from it are converted to keyboard-oriented.

Sound production techniques that are not typical for a keyboard instrument can only be simulated by means of MIDI with varying degrees of fidelity.

Commutation
How do devices connect in MIDI? Let's imagine ourselves in the place of the developers. We have two synthesizers, and we want the second synthesizer to play the same note, but with its own sound, when a key is pressed on the first. Obviously, for this you need a MIDI output connector on the first synthesizer and a MIDI input connector on the second, and the instruments must be connected with a MIDI cable. When a key is pressed, the first synthesizer must generate a note message and send it to its output, and the second synthesizer must receive this message through its input and play the sound (Fig. 3).


MIDI (Musical Instrument Digital Interface) is, simply put, a digital interface for musical instruments. If that is still not clear, then listen to my story.
When computers began to penetrate into music, the developers of electronic instruments thought: "Shouldn't we hand over some of the complex control of electronic musical instruments to a computer?" What did this promise? As you know, the line-ups of musical groups have been shrinking over time. This, of course, gives freedom to creativity, but a composer wants to use not one but a couple of dozen different instruments when arranging a song. Moreover, he does not want to wait for the rehearsal of a large orchestra to hear his new idea; often he doesn't have an orchestra at all. So it would be nice to be able to program scores and then automatically play them back.
By that time, there was already a flood of all kinds of electronic instruments. To some of them it was even possible to connect an "electric musician" (a kind of box with lights and buttons, called a sequencer) with a special wire that sent commands like "press a certain note". But the main problem was that the "musician" from one model of instrument did not fit another.
Then it was decided to create a single interface (a predetermined set of control commands and a method of connection between devices) for connecting electronic musical instruments to sequencers and to each other. This interface became MIDI. Now we can connect a Yamaha synthesizer to a Roland sequencer and it will work. By the way, nowadays it is mainly a computer that is used as the sequencer.

Now let's look at what else MIDI allows us to do, besides transmitting commands for pressing notes.

    The synthesizer has a bunch of different knobs and buttons (filters, modulation, vibrato, reverb level); to increase the expressiveness of a performance, they have to be constantly twisted while playing. The MIDI command set includes control commands for these same knobs and buttons, as well as for piano-type foot pedals. This means that when playing back music, the computer can send the synthesizer a command saying to what position to turn a knob or to press/release a button - for example, to switch on the sound of a grand piano or a violin.

    For example, suppose we have created sounds on our synthesizer and filled all its memory. What do we do now? Via MIDI, we can transfer the contents of the instrument's memory (or of any other MIDI device) to the computer as a data block (a MIDI bulk dump) and save it to the hard drive. Later, we can load the data from the computer back into the synthesizer over MIDI.

    There is still a problem. MIDI commands are common to all instruments. But it was impossible to foresee every possible command when developing the standard, and the allotted number of controllers may not be enough, so a loophole was left: SysEx (System Exclusive Messages - special messages of indefinite length, specific to each model of MIDI device). They have only a standard beginning (header) and ending, and in the middle each developer writes whatever he wants; a minimal byte-level sketch follows this list.
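The sketch below shows the framing just described: a fixed start byte, a manufacturer ID, arbitrary data, and a fixed end byte. The manufacturer ID 0x7D (reserved for non-commercial/educational use) and the payload are chosen only for the example.

```python
# Byte-level sketch of a System Exclusive (SysEx) frame: standard header and
# terminator, manufacturer-specific data in between.  Payload is made up.

SYSEX_START = 0xF0
SYSEX_END   = 0xF7

def build_sysex(manufacturer_id: int, payload: bytes) -> bytes:
    if any(b > 0x7F for b in payload):
        raise ValueError("SysEx data bytes must have the most significant bit clear")
    return bytes([SYSEX_START, manufacturer_id]) + payload + bytes([SYSEX_END])

message = build_sysex(0x7D, bytes([0x01, 0x02, 0x03]))
print(message.hex(" "))   # f0 7d 01 02 03 f7
```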

You have probably already come across the term GENERAL MIDI more than once. This is the standard that specifies the numbers of the controllers (the volume knob for all instruments that meet this standard is always number 7, the reverb knob is always 91, and so on) and the set and order of patches (patches are sounds - for example, the piano is always number 1, and the church organ is always number 20). This does not mean that all synthesizers made according to the General MIDI standard will play the same sounds. No. On different instruments, patch number 1 will contain a piano, but with different sound quality - sometimes so bad that even experts have a hard time guessing what the sound is supposed to be. This standard is mainly used to create music for games.
In addition, there are further standards for the set of sounds: GENERAL SOUND and XG.
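A hedged sketch of the numbering conventions just mentioned: selecting the General MIDI grand piano and setting the standard volume and reverb controllers. Keep in mind that patch numbers are listed 1-based (piano = 1) but transmitted 0-based; the channel and values are arbitrary examples.

```python
# Illustrative byte sequences for the General MIDI conventions described above.

channel = 0                                        # MIDI channel 1

program_change = bytes([0xC0 | channel, 0])        # select GM patch 1 (grand piano)
set_volume     = bytes([0xB0 | channel, 7, 100])   # controller 7: channel volume
set_reverb     = bytes([0xB0 | channel, 91, 40])   # controller 91: reverb depth

for msg in (program_change, set_volume, set_reverb):
    print(msg.hex(" "))
```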

If we have a computer and several synthesizers on which we want to play different parts in one song (drums, solo, bass, background), then they must all be connected to the same MIDI cable. The sequencer (software running on a computer) transmits commands to all instruments on this cable. How, you ask, will each of the synthesizers distinguish the commands intended for him personally? That's what MIDI channels are for.
The principle of operation is approximately the same as in a radio receiver: your receiver picks up only the radio station to which you tune it. Imagine that the MIDI channel is the frequency of a radio station (such as 104.4 FM) to which the receiver is tuned. The computer contains 16 radio stations with different frequencies, each of which transmits the part of only its own instrument, and in each synthesizer there is a receiver tuned to the station that transmits its part. The radio waves travel not through the air, but through a wire.
In general, you can transmit any part over any channel. True, in General MIDI it is customary to use the 10th MIDI channel for the drum part.
In reality, MIDI channels are created without any radio waves involved. We give the synthesizer an address (a MIDI channel number), and at the beginning of each MIDI command the channel number of the synthesizer for which it is intended is transmitted. The synthesizer receives all commands, but executes only those that contain its channel number.
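Here is a small sketch of that addressing rule (the device names and note values are invented for the example): every device on the cable sees every command, but reacts only to those carrying its own channel number.

```python
# Sketch: each receiver filters incoming messages by its own MIDI channel.

class ToneGenerator:
    def __init__(self, name: str, channel: int):
        self.name = name
        self.channel = channel            # 1..16

    def receive(self, status: int, data1: int, data2: int):
        if (status & 0x0F) + 1 != self.channel:
            return                        # addressed to someone else: ignore
        if status & 0xF0 == 0x90 and data2 > 0:
            print(f"{self.name}: play note {data1} at velocity {data2}")

devices = [ToneGenerator("Bass synth", 2), ToneGenerator("Drum machine", 10)]
for d in devices:
    d.receive(0x91, 40, 110)   # Note On, channel 2: only the bass synth responds
```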

It is most convenient to place one or more meta-events of this type at the very beginning of the MTrk recording, since these events carry auxiliary information that tells the user which instrument is playing a given track, as well as other useful data. Typically, the actual parameters defining the type of instrument playing the track are stored in the file as MIDI Program Change events, and the meta-events described here allow the user to be given easy-to-read descriptions corresponding to the settings made in the MTrk recordings.

Lyrics

FF 05 len text

A text meta-event containing the lyrics of a vocal piece corresponding to a particular beat of the music. Each "Lyrics" meta-event should contain a single syllable of text.

Note that len is represented as a variable-length value.
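As a hedged sketch of how such an event could be assembled, including the variable-length encoding of len (7 bits per byte, continuation flag in the most significant bit of every byte except the last):

```python
# Assembling an FF 05 (Lyrics) meta-event with a variable-length len field.

def encode_varlen(value: int) -> bytes:
    out = [value & 0x7F]
    value >>= 7
    while value:
        out.append((value & 0x7F) | 0x80)   # continuation bit on all but the last byte
        value >>= 7
    return bytes(reversed(out))

def lyrics_event(syllable: str) -> bytes:
    text = syllable.encode("ascii")
    return bytes([0xFF, 0x05]) + encode_varlen(len(text)) + text

print(lyrics_event("la").hex(" "))   # ff 05 02 6c 61
```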

Marker

FF 06 len text

A text meta-event. The marker is placed on a specific beat. This event can be used to organize loops and can indicate the start and end points of a loop.

Note that len is represented as a variable-length value.

Cue Point (entry point)

FF 07 len text

The entry point text meta-event can be used to indicate the entry point of an external data stream, such as the start point of a digital audio file. The text value of this meta-event can contain the name of a WAV file containing digital audio.

Note that len is represented as a variable-length value.

MIDI channel

FF 20 01 cc

This optional meta-event is usually located at the beginning of the MTrk message, before the first non-zero timestamp and before the first meta-event, except for the sequence number meta-event. The “MIDI channel” meta-event sets the value of the MIDI channel with which all subsequent meta-events and SYSEX events will be associated. The cc data byte is the MIDI channel number, 0 corresponds to the first channel.

The MIDI specification does not provide a way to specify a channel number for SYSEX events and meta-events. If a file of type 0 is created, all SYSEX events and meta-events are on the same track, and it is difficult to associate these events with the corresponding channel (voice) messages. For example, if you want to label the part on channel 1 as "Flute Solo" and the part on channel 2 as "Trumpet Solo", you will have to use two "Track Name" meta-events to enter these names; but since both parts are on the same track, you must place a "MIDI channel" meta-event indicating the first channel before the first track-name meta-event, and a "MIDI channel" meta-event indicating the second channel before the second track-name meta-event.

More than one “MIDI channel” meta-message can be used on a single MIDI track if the events of this track need to be distributed among several MIDI channels.

MIDI port

FF 21 01 pp

This is an optional event, usually located at the beginning of an MTrk recording, before the first non-zero delta time and before the first MIDI event. It determines which MIDI port (or device) the events of this MTrk recording are associated with. The pp data byte is the port number; pp equal to zero denotes the first MIDI device in the system.

The MIDI specification provides only 16 channels per MIDI input or output port (device, connector, instrument - the terminology may vary). The MIDI channel number of each MIDI event is contained in the status byte of the event, where it occupies the least significant four bits. Thus, the channel number is always a number in the range from 0 to 15. Sometimes a system allows you to work with more than 16 MIDI channels; it then becomes necessary to overcome the limitations imposed by the small number of MIDI channels and expand the possibilities of MIDI data exchange, making information exchange with external MIDI devices more efficient, that is, allowing the musician to work with more than 16 channels. Some sequencers also allow more than 16 MIDI channels to be input and output simultaneously. Unfortunately, the MIDI protocol does not provide the ability to encode more than 16 MIDI channels within the status byte of a MIDI event. Therefore, an additional method is needed to distinguish events that correspond to, say, the first channel on the first MIDI port from events corresponding to the first channel on the second MIDI port. The meta-event described here allows the sequencer to determine which MIDI port to send the events of a given MTrk recording to.
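A hedged sketch of the idea: the "MIDI port" meta-event (FF 21 01 pp) labels a track with a port number, and the effective address of an event then becomes the pair (port, channel). Port and channel values here are arbitrary examples.

```python
# Sketch of the "MIDI port" meta-event and of combined (port, channel) addressing.

def midi_port_event(port: int) -> bytes:
    return bytes([0xFF, 0x21, 0x01, port])   # pp = 0 means the first port

def full_address(port: int, status: int) -> tuple[int, int]:
    """Events on different ports may reuse the same 0..15 channel numbers."""
    return port, status & 0x0F

print(midi_port_event(1).hex(" "))       # ff 21 01 01
print(full_address(0, 0x90))             # (0, 0): channel 1 on the first port
print(full_address(1, 0x90))             # (1, 0): channel 1 on the second port
```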


A MIDI keyboard is connected to a sound card installed in a computer via a MIDI interface. In order to make the necessary connections, it is not at all necessary to call a specialist. You are able to do it yourself. Here's everything you need to know about MIDI.

Musical Instrument Digital Interface (MIDI)

Let's start with the word "interface". Interface (Interface) - a system of unified communications and signals through which devices or programs interact with each other.

Musical Instrument Digital Interface (MIDI) is a musical instrument digital interface. The interface standard was created by leading manufacturers of musical instruments: Yamaha, Roland, Korg, E-mu, etc.

Distinguish between hardware MIDI interface and MIDI data format. The hardware interface is used to physically connect the source and destination of messages, the data format is used to create, store and transmit MIDI messages. We will consider issues related to the data format in Sec. 1.2, and now let's get acquainted with the hardware component of the MIDI interface.

The MIDI interface is a start / stop serial asynchronous current loop interface.

The phrase "start-stop" means that each transmitted message must contain signs that the transmission process has begun ("Start" signal) and completed ("Stop" signal).

In a serial interface, binary data is not transmitted simultaneously, but alternately (sequentially).

The asynchronous nature of the interface means that the beginning of data transmission is not tied to any particular moment in time; the transfer is carried out when the need arises. We pressed a key - a message about it appeared in the interface. The transmitting side of the interface is active: it contains the current source and a switching element (ultimately, a switch); the receiving side is passive, containing only a current-receiving device. The principle of the current loop is that as soon as the switch is closed, current flows from the positive pole of the source (on the transmitting side) through the "forward" conductor of the cable, then through the current sink (on the receiving side), and along the "return" conductor of the cable comes back to the transmitting side ("flowing" into the negative pole of the source). So much for the current loop. Passing through the receiver, the current fulfills its prescribed role: it activates the sensitive element, as a result of which the incoming signal is registered by the receiver.

Elementary MIDI signal structure

The active transmitter generates current pulses of 5 mA. The presence of current corresponds to a logical zero, its absence to a logical one. The structure of an elementary MIDI signal (Fig. 1.1) is characterized by the following: 7 data bits, one (most significant) status bit, one start bit and one stop bit. There is no parity check.

You can see that the stop bit is a one, not a zero. That is, in the "stop" state no current flows in the circuit. This is very sensible: the energy and resources of the interface elements are saved. Indeed, most of the time no events occur in a MIDI system; on average, the pauses are much longer than the intervals during which you actually play a MIDI keyboard. True, current may be absent from the circuit not only because there are no messages, but also because the circuit is broken. For timely detection of a faulty state of the MIDI network, periodic transmission of a special test signal is provided. If after a certain time the receiver does not detect it, this is considered a failure, after which the MIDI system carries out a predetermined sequence of actions.
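The periodic test signal described above corresponds, in the MIDI standard, to the Active Sensing message (status byte 0xFE, nominally sent about every 300 ms). A minimal receiver-side watchdog might look like this sketch; the timeout value is an assumption chosen slightly above the nominal interval.

```python
# Receiver-side watchdog for the Active Sensing message (0xFE).

import time

ACTIVE_SENSING = 0xFE
TIMEOUT_S = 0.33           # a bit longer than the nominal ~300 ms sending interval

last_seen = None

def on_byte(status: int):
    global last_seen
    if status == ACTIVE_SENSING:
        last_seen = time.monotonic()

def connection_alive() -> bool:
    # Until the first 0xFE arrives, the watchdog is not armed at all.
    return last_seen is None or time.monotonic() - last_seen < TIMEOUT_S
```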

Fig. 1.1. Structure of the elementary MIDI signal

The MIDI data rate is 31.25 kbit/s (roughly 3,125 bytes per second). Commands can be one, two or three bytes long. The first byte is the status byte: it determines the action of the command. It can be followed by one or two data bytes. The most significant bit of a status byte is 1, that of a data byte is 0.
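A quick sketch of the timing this implies: each byte travels framed by a start and a stop bit, i.e. 10 bits on the wire, so a three-byte command takes about a millisecond. The example message bytes are arbitrary.

```python
# Transmission time of a MIDI message: 10 bits per byte at 31,250 bit/s.

BAUD = 31_250
BITS_PER_BYTE = 10          # start bit + 8 data bits + stop bit

def transmission_time_ms(message: bytes) -> float:
    return len(message) * BITS_PER_BYTE / BAUD * 1000

note_on = bytes([0x90, 60, 100])                  # status byte + two data bytes
print(round(transmission_time_ms(note_on), 3))    # ~0.96 ms
```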

MIDI connectors and MIDI cable

A full-fledged MIDI device has three connectors: MIDI In (input), MIDI Out (output) and MIDI Thru (a copy of the signal coming from an external MIDI device to the MIDI In is retransmitted to the MIDI Thru connector via a buffer). All connectors are 5-pin. Contacts 4 and 5 - signal, contact 2 - shield. The polarity of the signals is determined relative to the current source: pin 4 - plus (current flows out of the pin), pin 5 - minus (current flows into the pin). Thus, the pin assignments are the same for the MIDI Out and MIDI Thru connectors, and the reverse for the MIDI In connector.

Fig. 1.2. Pinout diagram of the MIDI cable connectors

A two-core shielded cable is used for the connection. The connection of the connectors on the two ends of the cable is straight (2-2, 4-4, 5-5). The pinout diagram of the MIDI cable connectors is shown in Fig. 1.2.

Principle of connecting MIDI devices

The principle of connecting two MIDI devices is shown in Fig. 1.3. The contact of the transmitter from which the signal is removed to the external circuit is called MIDI TXD (Transmitter Data). The receiver pin to which the signal should be received from the external circuit is MIDI RXD (Receiver Data).

Fig. 1.3. The principle of connecting two MIDI devices

The remarkable thing about the hardware of the MIDI interface is that the developers provided several measures in it to reduce the level of noise and interference. The simplest, yet quite effective, measure is the obligatory shielding of the cables connecting MIDI devices. The shield is a wire braid that protects the conductors from the penetration of electromagnetic waves carrying interference. Not least, the shield also prevents the MIDI cable itself from radiating electromagnetic waves into the environment. Noise does not travel through the shield from one instrument to another, because, in accordance with the MIDI standard, electrical connection of the shield to the housings of both MIDI devices at the same time is excluded. Most importantly, interference cannot get from one instrument to another because even the signal wires have no direct (so-called galvanic) connection with both the transmitter and the receiver of MIDI messages. Of course, there is no paradox here: if information is transmitted through wires, there is a connection, but this connection is in fact not galvanic but optical. A pair of optoelectronic devices is included in the input circuit of the MIDI interface. The LED lights up when a logical zero is transmitted through the cable and goes out when a logical one is transmitted. The light falls on a photodiode, through which the current is stronger the more this device is illuminated. The signal conversion chain is as follows: electric current - light - electric current. In this way, an insurmountable obstacle is created in the path of currents carrying interference (their magnitude is not enough to make the LED emit light), while digital signals pass completely freely.

The standard stipulates that in a network of MIDI devices only one of them can be a transmitter of MIDI messages at any given time, and all the others can only be receivers. One MIDI transmitter allows up to four receivers to be connected. Fig. 1.4 shows a variant of connecting MIDI devices to the MIDI interface of a sound card installed in the computer.

Fig. 1.4. Connecting MIDI devices to your sound card

MIDI signals in the sound card's game port connector

It should be noted that sound cards usually lack standard MIDI connectors. This is because their dimensions do not fit on the bracket at the back of the computer where expansion cards are fixed. "Semi-finished" MIDI signals (MIDI RXD and MIDI TXD) are instead brought out to pins of the game port connector (Fig. 1.5).

For correct orientation among the pin numbers, bear in mind that the connector is shown as it would appear to an observer sitting inside the computer. Not a very convenient vantage point, but the drawing usually given in a sound card's documentation corresponds exactly to it. In order not to confuse you, in Fig. 1.5 we did not change the direction of view.

Fig. 1.5. Purpose of some pins of the game port connector

Most of the contacts are intended for connecting a joystick, however, they are not of interest to us now. Pay attention to the following contacts:

  • 4, 5 - connected to the common wire of the computer's power supply or, as they sometimes say, to the case, to the ground (in the diagrams this connection is denoted by GND);
  • 1, 8, 9 - connected to the +5 V terminal of the computer power supply;
  • 15 - the pin through which the MIDI RXD (Receiver Data) signal is received from the external circuit;
  • 12 - the pin from which the MIDI TXD (Transmitter Data) signal is taken to the external circuit.
The presence of pins 12 and 15, as well as their corresponding signals, allows manufacturers and sellers to claim that the sound card is equipped with a MIDI interface. In reality, however, the MIDI TXD and MIDI RXD signals should be regarded as semi-finished products of true MIDI signals. With their help you can receive and transmit information represented by voltage levels standard for computers (so-called transistor-transistor logic, TTL, levels). And even if you replace one of the five-pin connectors of a MIDI cable with a connector matching the one shown in Fig. 1.8, you still will not be able to connect a synthesizer to the sound card via this cable. The point is that the MIDI TXD signal will not be correctly perceived by the LED, with the help of which useful signals are transmitted in the MIDI interface and the galvanic connection between MIDI devices is broken.

To connect a sound card to MIDI devices, you need an adapter cable containing optocoupler isolation. When connecting MIDI devices, you need to follow a simple rule: the cable must not connect identically named connectors of two devices; that is, you cannot connect the MIDI Out of one device to the MIDI Out of another, or MIDI In to MIDI In. However, if you accidentally make a mistake, nothing bad will happen: the MIDI interface circuit contains the necessary protection.

Whether one cable or two should be run between MIDI devices depends on what kind of devices they are and for what purposes they are used.

Let's look at the most likely situation first. Say you have purchased a MIDI keyboard and want to connect it to your sound card using the MIDI interface. Nothing could be easier, but first you need to understand how a MIDI keyboard differs from a keyboard electronic musical instrument (a synthesizer). The latter contains both the keyboard and the synthesizer, and is therefore able to shape sounds on its own; all modern synthesizers are equipped with a MIDI interface. A MIDI keyboard, on the other hand, cannot synthesize sound. It is intended only to control, via the MIDI interface, a synthesizer external to it. This is, first of all, the cheapest option for sharing several synthesizers, which in this case may not have keyboards of their own - something that keeps their cost relatively low. A synthesizer that does not have its own keyboard is called a tone generator.

Connecting a MIDI keyboard and a MIDI synthesizer to a sound card

Let's return to the question of connecting a MIDI keyboard to a sound card (Fig. 1.6). It really is very simple: insert the adapter's MIDI In connector into the keyboard's MIDI Out socket, and connect the 15-pin connector of the MIDI adapter to the game port connector on the sound card. Here the MIDI keyboard acts as the MIDI master and the sound card as the slave.

Fig. 1.6. Connecting a MIDI keyboard to a sound card

If you already have a modern sound card with wide functionality and you want to make music not with a mouse but in the proven old-fashioned way, fingering the white and black keys, then a MIDI keyboard is the way out. Note that there are also musical synthesizers on sale with both a keyboard and a MIDI interface; some of them (the relatively simple ones) cost only slightly more than MIDI keyboards. A synthesizer can be used as a MIDI keyboard in Performance and Song Record modes. To do this, make the same connection as when connecting a MIDI keyboard: connect the synthesizer's MIDI Out to the adapter's MIDI In.

When playing back a composition, an external synthesizer with a keyboard can be used as an addition to the sound card, to extract from it the sounds of instruments that are not available in the sound card's palette. To implement this feature, the adapter's MIDI Out should be connected to the synthesizer's MIDI In (Fig. 1.7).

Fig. 1.7. Connection diagram of an external synthesizer to a sound card

Solving the problem of self-oscillation of a MIDI system

If the music editor's operating mode is selected incorrectly, the connection according to the diagram shown in Fig. 1.7 can cause an unpleasant effect: a message sent from the keyboard, for example a key press, will go to the sound card, from there back to the synthesizer, from the synthesizer back to the sound card, and so on ad infinitum. The system will loop, self-oscillate and overload, and the resulting sound will be anything but interesting. What should be done to avoid this?

From Fig. 1.7 it follows that both devices - the sound card and the synthesizer - are MIDI receivers and MIDI transmitters at the same time. This is unacceptable. The trivial way out - disconnecting the second cable while the synthesizer is used as a MIDI keyboard and reconnecting it when playing back a previously recorded melody - is extremely inconvenient. All these disconnections and reconnections, believe me, will end badly. It is easier and safer, both for the hardware and for your wallet, to perform the necessary switching at the logical level. This is done either directly in the synthesizer (with the Local Off switch) or in the music editor.

However, it would be more correct to solve the looping problem by manipulating the options for relaying MIDI messages. The crux of the matter is that MIDI information arriving at the input of a device (or a program, in our case Cubase SX) is re-transmitted to its output. Let's consider the classic example, when a sound card's synthesizer is used together with an external synthesizer that also performs the functions of a MIDI keyboard. Looping will inevitably occur if, as a track's input and output ports, you select ports that are physically connected to the external synthesizer. The sequence of events in this undesirable looping effect is as follows:

1. You press a key on the synthesizer; the synthesizer plays the corresponding note.
2. A MIDI message of the Note On type (see section 1.2.1) is sent to the sound editor.
3. In the sound editor, thanks to the relaying of MIDI messages, the same message is transmitted to the synthesizer's input port.
4. The synthesizer, having received the Note On message, executes it by playing the corresponding note (note that this is already the second time).
5. Relaying of MIDI messages also works in the synthesizer (whether it can be turned off, and how, is a question for the user manual), so go back to step 2. A toy simulation of this chain is sketched below.
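This Python sketch is not Cubase-specific; it only illustrates the chain above: while the editor's thru and the synthesizer's forwarding are both enabled, a single keypress keeps circulating, and disabling either side breaks the loop.

```python
# Toy simulation of the MIDI feedback loop described in the list above.

def simulate(editor_thru: bool, synth_forwards: bool, max_hops: int = 10) -> int:
    """Return how many times one keypress gets played before the chain stops."""
    plays, at_synth = 0, True          # the performer presses a key on the synth
    for _ in range(max_hops):
        if at_synth:
            plays += 1                 # the synthesizer sounds the note
            if not synth_forwards:
                break
            at_synth = False           # the message travels to the editor
        else:
            if not editor_thru:
                break
            at_synth = True            # the editor echoes it back to the synthesizer
    return plays

print(simulate(editor_thru=True,  synth_forwards=True))    # runs until the hop limit
print(simulate(editor_thru=False, synth_forwards=True))    # 1: the loop is broken
```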

To break this chain, you should disable the relaying of MIDI messages either in the synthesizer or in the program (as a rule, this option is enabled by default in music editors). In Cubase SX, proceed as follows: open the File menu and select the Preferences command. The Preferences dialog box will open. In the tree on the left side of the window, select the MIDI branch. On the MIDI tab that opens, uncheck the MIDI Thru Active box. Now there will be no looping. You can verify this by clicking OK, after which the Preferences dialog box will close. Alternatively, click Apply: the Preferences window will remain open and the changes you made will be applied.

When MIDI Thru Active is unchecked, the ability to use an external synthesizer as a MIDI keyboard to control the sound card's built-in synthesizer is lost.