
Discrete channel. Interference in communication channels

An example of a discrete channel without memory is the binary channel considered below. A transmission channel is fully described if the following are specified: the source alphabet, the probabilities of appearance of the alphabet symbols, the symbol transmission rate, the recipient alphabet, and the values of the transition probabilities of receiving a given output symbol when a given input symbol is transmitted.

The first two characteristics are determined by the properties of the message source, and the transmission rate by the bandwidth of the continuous channel included in the discrete one. The size of the output symbol alphabet depends on the algorithm of the decision circuit, while the transition probabilities are found from an analysis of the characteristics of the continuous channel.

A discrete channel is called stationary if its transition probabilities do not depend on time.

A discrete channel is called a channel without memory if its transition probabilities do not depend on which symbols were transmitted and received earlier.

As an example, consider the binary channel (Fig. 4.6). In this case the source alphabet at the channel input and the recipient alphabet each consist of two symbols, "0" and "1".



The input signal alphabet has two symbols, x0 and x1. The message source randomly selects one of these symbols and feeds it to the input of the discrete channel. At the receiving end, y0 or y1 is registered; the output alphabet also has two symbols. Symbol y0 can be registered when x0 is transmitted; the probability of this event is p(y0|x0). Symbol y0 can also be registered when x1 is transmitted; the probability of this event is p(y0|x1). Symbol y1 can be registered when x0 or x1 is transmitted, with probabilities p(y1|x0) and p(y1|x1), respectively. Correct reception corresponds to the events with probabilities p(y1|x1) and p(y0|x0); an error in receiving a symbol occurs when the events with probabilities p(y1|x0) and p(y0|x1) take place. The arrows in Fig. 4.6 show the possible transitions: x1 into y1 and x0 into y0 (error-free reception), as well as x1 into y0 and x0 into y1 (erroneous reception). These transitions are characterized by the corresponding probabilities p(y1|x1), p(y0|x0), p(y1|x0), p(y0|x1), which are called transition probabilities. The transition probabilities characterize the probability of reproducing the transmitted symbols at the channel output.
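As a rough illustration of how the transition probabilities p(y|x) describe a binary channel, the following Python sketch simulates symbol transmission; the numerical probabilities and the equiprobable choice of input symbols are assumptions made purely for the example.

```python
import random

# Assumed transition probabilities p(y|x) of a binary channel:
# key = transmitted symbol x, inner key = received symbol y.
P = {
    0: {0: 0.95, 1: 0.05},   # p(y0|x0), p(y1|x0)
    1: {0: 0.02, 1: 0.98},   # p(y0|x1), p(y1|x1)
}

def transmit(x):
    """Pass one symbol through the channel according to p(y|x)."""
    return 0 if random.random() < P[x][0] else 1

# Estimate the average error probability with equiprobable input symbols.
trials = 100_000
errors = sum(transmit(x) != x for x in (random.randint(0, 1) for _ in range(trials)))
print("estimated error probability:", errors / trials)
```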

A channel without memory is called symmetric if the corresponding transition probabilities are equal, i.e. the probability of correct reception is the same for every symbol and the probability of any particular error is the same:

p(y0|x0) = p(y1|x1) = 1 − p (correct reception);

p(y1|x0) = p(y0|x1) = p (erroneous reception).

For the general case of a channel with m symbols

p(yj|xi) = 1 − p for j = i, p(yj|xi) = p/(m − 1) for j ≠ i. (4.9)

It should be noted that in the general case the sizes of the input and output alphabets of a discrete channel may not coincide. An example is the channel with erasure (Fig. 4.7). In Fig. 4.7 the notation introduced denotes the probability of erroneous reception, the probability of erasure and the probability of correct reception. The alphabet at the channel output contains one additional symbol compared with the alphabet at the input. This additional symbol (the erasure symbol "?") appears at the channel output when the analyzed signal cannot be identified with any of the transmitted symbols. Erasing symbols, when a suitable error-correcting code is used, makes it possible to increase noise immunity.
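A minimal sketch of a channel with erasure, assuming illustrative values for the probabilities of correct reception, erasure and error (they are not taken from the figure):

```python
import random

# Assumed probabilities of correct reception, erasure and error (sum to 1).
p_correct, p_erase, p_error = 0.90, 0.08, 0.02

def channel_with_erasure(x):
    """Return the received symbol: x itself, '?' (erasure) or the wrong symbol."""
    r = random.random()
    if r < p_correct:
        return x                         # correct reception
    if r < p_correct + p_erase:
        return "?"                       # the signal could not be identified
    return "1" if x == "0" else "0"      # erroneous reception

print([channel_with_erasure(random.choice("01")) for _ in range(10)])
```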

Most real channels have "memory", which manifests itself in the fact that the probability of an error in the next symbol depends on which symbols were transmitted before it and how they were received. The first dependence is due to intersymbol distortion, which results from dispersion of the signal in the channel, and the second to changes in the signal-to-noise ratio in the channel or in the nature of the interference.

In a stationary symmetric channel without memory, the conditional probability of erroneous reception of the (i + 1)-th symbol, given that the i-th symbol was received in error, is equal to the unconditional error probability. In a channel with memory it may be greater or less than this value.

The simplest model of a binary channel with memory is the Markov model, which is specified by the matrix of transition probabilities:

P = | 1 − p1   p1 |
    | 1 − p2   p2 |,

where p1 is the conditional probability that the (i + 1)-th symbol is received in error if the i-th symbol was received correctly; 1 − p1 is the conditional probability that the (i + 1)-th symbol is received correctly if the i-th symbol was received correctly; p2 is the conditional probability that the (i + 1)-th symbol is received in error if the i-th symbol was received in error; 1 − p2 is the conditional probability that the (i + 1)-th symbol is received correctly if the i-th symbol was received in error.

The unconditional (average) error probability p in the channel under consideration must satisfy the equation

p = (1 − p) p1 + p p2,

from which

p = p1 / (1 + p1 − p2).
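The relation above can be checked numerically. The sketch below simulates the two-state error mechanism with assumed values of p1 and p2 and compares the simulated error rate with the formula; it illustrates only the model, not any particular real channel.

```python
import random

# Assumed conditional error probabilities of the Markov model.
p1 = 0.01   # P(error in (i+1)-th symbol | i-th symbol received correctly)
p2 = 0.30   # P(error in (i+1)-th symbol | i-th symbol received in error)

n, errors, prev_error = 1_000_000, 0, False
for _ in range(n):
    p_err = p2 if prev_error else p1     # memory: depends on the previous outcome
    prev_error = random.random() < p_err
    errors += prev_error

print("simulated unconditional error probability:", errors / n)
print("formula p1 / (1 + p1 - p2)               :", p1 / (1 + p1 - p2))
```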

This model has the advantage of simplicity, but it does not always adequately reproduce the properties of real channels. Greater accuracy is provided by the Gilbert model of a discrete channel with memory. In this model the channel can be in one of two states. In the first state errors do not occur; in the second state errors occur independently with a certain probability. The probabilities of transition from the first state to the second and from the second to the first are also assumed to be known. In this case it is not the error sequence itself but the sequence of state transitions that forms a simple Markov chain.
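A sketch of the Gilbert model under assumed parameters (the state names, the transition probabilities and the error probability in the "bad" state are all chosen only for illustration):

```python
import random

p_gb  = 0.002   # assumed P(transition good -> bad) per symbol
p_bg  = 0.100   # assumed P(transition bad -> good) per symbol
p_bad = 0.5     # assumed error probability while the channel is in the bad state

def gilbert_errors(n):
    """Generate an error sequence (1 = error) of length n."""
    seq, state = [], "good"
    for _ in range(n):
        seq.append(1 if state == "bad" and random.random() < p_bad else 0)
        # it is the state transitions, not the errors, that form a simple Markov chain
        if state == "good" and random.random() < p_gb:
            state = "bad"
        elif state == "bad" and random.random() < p_bg:
            state = "good"
    return seq

e = gilbert_errors(1_000_000)
print("average error probability:", sum(e) / len(e))   # errors appear in bursts
```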

The most common channel type is the telephone channel with a bandwidth of 3.1 kHz, occupying the frequency range from 0.3 kHz to 3.4 kHz.

Data from the information source, after conversion from parallel to serial code, are usually represented as a non-return-to-zero (NRZ) signal, which corresponds to a signal with bipolar AM (Fig. 2.1). Transmitting rectangular pulses without distortion would require a frequency band from zero to infinity. Real channels have a finite frequency band, with which the transmitted signals must be matched by means of modulation.

The block diagram of a discrete channel with frequency modulation (FM) is shown in Fig. 2.2.

The transmitted message from the information source AI in parallel code enters the channel encoder KK, which converts the parallel code into a serial binary NRZ code. In doing so, the channel encoder introduces redundant symbols into the message (for example, a parity check bit) and forms start and stop bits for each frame of transmitted data. Thus, the encoder output signal serves as the modulating signal for the modulator.
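The framing step can be illustrated with a short sketch; the specific frame layout (one start bit, one even-parity bit, one stop bit) is an assumption for the example, not the format of any particular equipment.

```python
def make_frame(data_bits):
    """Form a serial frame: start bit '0', data bits, even-parity bit, stop bit '1'.

    The layout is an illustrative assumption, not a specific modem standard.
    """
    parity = str(data_bits.count("1") % 2)    # redundant even-parity check bit
    return "0" + data_bits + parity + "1"

print(make_frame("1011001"))    # -> '0101100101'
```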

Depending on the state of the modulating signal ("0" or "1"), the frequency modulator forms signal elements at one of two characteristic frequencies. When a sending of positive polarity arrives, the modulator generates the higher of the two, called the upper characteristic frequency.

Fig. 2.2 - Block diagram of the information transmission system with frequency modulation:

AI - information source; IP - interference source; KK - channel encoder; PF2 - receiver band-pass filter; M - modulator; UO - limiter amplifier; PF1 - transmitter band-pass filter; DM - demodulator; DK - channel decoder; LS - communication line; P - recipient of the information

The upper characteristic frequency is equal to the average frequency plus the frequency deviation. When a sending of negative polarity arrives at the modulator input, the frequency equal to the average frequency minus the deviation, called the lower characteristic frequency, appears at its output. The signal at the modulator output can be regarded as the superposition of two AM signals, one having the upper characteristic frequency as its carrier and the other the lower one. Accordingly, the spectrum of the FM signal can be represented as the superposition of the spectra of two AM signals (Fig. 2.3).

The spectrum of the FM signal is wider than that of the AM signal by an amount determined by the spacing between the two carriers. The frequency deviation characterizes the change in frequency, relative to its average value, when a one or a zero is transmitted. The ratio of the frequency deviation to the modulation rate B is called the frequency modulation index:

m = Δf / B.
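A small numeric sketch of this relation; the characteristic frequencies and the modulation rate are assumed illustrative values, not figures prescribed by the text:

```python
# Assumed illustrative values for a telephone-band FM (FSK) signal.
f_upper = 1180.0    # upper characteristic frequency, Hz
f_lower = 980.0     # lower characteristic frequency, Hz
B = 300.0           # modulation rate, Baud

f0      = (f_upper + f_lower) / 2    # average frequency
delta_f = (f_upper - f_lower) / 2    # frequency deviation
m       = delta_f / B                # frequency modulation index

print(f"f0 = {f0} Hz, deviation = {delta_f} Hz, m = {m:.2f}")
```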

Fig. 2.3 - Spectrum of the FM signal

The transmitter band-pass filter PF1 limits the spectrum of the signal fed into the communication channel according to the lower and upper boundaries of the channel band. The width of the spectrum of the signal at the modulator output depends on the binary modulation rate and on the frequency deviation. The greater the modulation index, the wider, other things being equal, the spectrum of the FM signal.

The receiver band-pass filter PF2 selects the telephone channel frequency band, which makes it possible to suppress interference lying outside the PF2 passband. The signal is then amplified by the limiter amplifier UO. The amplifier compensates for the loss of signal energy caused by attenuation in the line. In addition, the amplifier performs another function: limiting the signal level. This makes it possible to keep the signal level at the input of the frequency demodulator DM constant while the level at the receiver input varies within fairly wide limits. In the demodulator the alternating-current sendings are transformed into direct-current sendings. In the channel decoder the symbols are converted into messages; at the same time, depending on the coding method used, errors are detected or corrected.

Models of the discrete channel are also needed in the study of radio engineering systems. This is because in many types of such systems a large share of the information-protection load under intensive interference is carried by coding and decoding methods. To consider problems of this type it is sufficient to deal only with the features of the discrete channel, excluding the properties of the continuous channel from consideration. In a discrete channel the input and output signals are pulse sequences representing streams of code symbols. This determines the defining property of the discrete channel: in addition to restrictions on the parameters of the set of possible input signals, the distribution of the conditional probabilities of the output signal for a given input signal is specified. When defining the set of input signals, one indicates the number m of different symbols, the number n of pulses in a sequence and, if necessary, the durations of each pulse at the channel input and output. In most practically important cases these durations are the same, and consequently the durations of any n-sequences at the input and output are also the same. As a result of interference, the output sequence may differ from the input one. Consequently, for any n it is necessary to indicate the probability that, when some random sequence B is transmitted, the sequence B' appears at the output.

The n-sequences under consideration can be represented as vectors in an n-dimensional vector space in which "addition" and "subtraction" are understood as digit-by-digit summation modulo m, and multiplication by an integer is understood similarly. In this space we introduce the concept of the error vector E, by which we mean the digit-by-digit difference between the input (transmitted) and output (received) vectors. The received vector is then the sum of the transmitted random sequence and the error vector, B' = B + E. From this expression it can be seen that the random error vector E is an analogue of the interference n(t) in the model of the continuous channel. Discrete channel models differ from one another in the distribution of the probabilities of the error vector. In the general case the distribution of the probabilities of E may depend on the realization of the vector B. The meaning of the error vector is easiest to explain for the case m = 2, a binary code. The appearance of the symbol 1 in any position of the error vector indicates an error in the corresponding digit of the transmitted n-sequence. Consequently, the number of non-zero symbols in the error vector is called the weight of the error vector.
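For the binary case the mod-2 arithmetic reduces to a bitwise XOR, as the short sketch below shows; the sequences are arbitrary examples:

```python
# Digit-by-digit mod-2 arithmetic for binary n-sequences: B' = B + E.
transmitted = [1, 0, 1, 1, 0, 0, 1, 0]           # vector B (arbitrary example)
error_vec   = [0, 0, 1, 0, 0, 0, 0, 1]           # vector E: 1 marks an erroneous digit

received  = [(b + e) % 2 for b, e in zip(transmitted, error_vec)]     # B'
recovered = [b ^ r for b, r in zip(transmitted, received)]            # mod-2 difference = E

print("received :", received)
print("error    :", recovered, "weight =", sum(recovered))
```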

The symmetric channel without memory is the simplest model of the discrete channel. In such a channel each transmitted code symbol can be received erroneously with some probability p and received correctly with probability q = 1 − p. If an error has occurred, then instead of the transmitted symbol any other symbol may be received with equal probability.

The term "without memory" means that the probability of an error in any digit of the sequence does not depend on which symbols were transmitted before this digit and how they were received.

The probability that a particular n-dimensional error vector of weight ℓ appears in this channel is

P(E) = [p / (m − 1)]^ℓ · (1 − p)^(n − ℓ).

The probability that exactly ℓ errors have occurred, located arbitrarily over the n-sequence, is determined by Bernoulli's law:

P(ℓ, n) = C_n^ℓ · p^ℓ · (1 − p)^(n − ℓ),

where C_n^ℓ = n! / [ℓ!(n − ℓ)!] is the binomial coefficient, i.e. the number of different arrangements of ℓ errors in the n-sequence.
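A small sketch of Bernoulli's law; the block length and the error probability are assumed values chosen only to show the computation:

```python
from math import comb

def p_exactly(l, n, p):
    """Probability of exactly l errors in an n-symbol block of a
    memoryless symmetric channel with symbol error probability p."""
    return comb(n, l) * p**l * (1 - p)**(n - l)

n, p = 7, 0.01          # assumed block length and error probability
for l in range(3):
    print(l, "errors:", p_exactly(l, n, p))
print("at least one error:", 1 - p_exactly(0, n, p))
```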

The symmetric memoryless channel model (the binomial channel) is an analogue, or more precisely an approximation, of the channel with additive white noise at a constant signal amplitude.

The asymmetric channel without memory differs from the symmetric one in that the probabilities of the transitions 1 → 0 and 0 → 1 are different, while the independence of errors from the prehistory is preserved.

A discrete channel is a communication channel used to transmit discrete messages.

The composition and parameters of the electrical circuits at the input and output of the discrete channel (DC) are defined by the relevant standards. Its characteristics can be economic, technological and technical; the main ones are the technical characteristics. They can be external and internal.

External characteristics are informational, techno-economic and technical-operational.

There are several definitions of transmission rate.

Technical rate characterizes the performance of the equipment included in the transmitting part:

V_t = Σ (1/τ_i) · log2 M_i,

where the sum is taken over the channels, τ_i is the duration of a unit element and M_i is the base of the code in the i-th channel.

The information transmission rate is related to the channel capacity; this notion appeared with the emergence and rapid development of new technologies. The information rate depends on the technical rate, on the statistical properties of the source, on the type of communication channel, on the received signals and on the interference acting in the channel. Its limiting value is the capacity of the communication channel:

C = ΔF · log2(1 + Ps / Pn),

where ΔF is the bandwidth of the communication channel and Ps / Pn is the signal-to-noise power ratio.
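A sketch of this limiting value for assumed telephone-channel figures; the 30 dB signal-to-noise ratio is an assumption for the example:

```python
from math import log2

def capacity(bandwidth_hz, snr_power_ratio):
    """Channel capacity C = dF * log2(1 + Ps/Pn)."""
    return bandwidth_hz * log2(1 + snr_power_ratio)

snr = 10 ** (30 / 10)              # assumed 30 dB signal-to-noise ratio -> power ratio 1000
print(capacity(3100.0, snr))       # 3.1 kHz band -> roughly 31 kbit/s
```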

According to their transmission rate, discrete channels and the corresponding data transmission equipment (UPS) are customarily divided into:

  • low-speed (up to 300 bit/s);
  • medium-speed (600 - 19600 bit/s);
  • high-speed (more than 24000 bit/s).

The effective transmission rate is the number of symbols per unit time delivered to the recipient, taking into account unproductive expenditures of time (the time for phasing the communication system, the time allotted to redundant symbols).
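A sketch of how overhead lowers the delivered rate; the frame layout (1 start bit, 8 data bits, 1 parity bit, 1 stop bit) and the technical rate are assumptions for the example:

```python
technical_rate = 9600                 # bit/s fed into the channel (assumed)
data_bits, overhead_bits = 8, 3       # assumed frame: 8 data bits + start, parity, stop

effective_rate = technical_rate * data_bits / (data_bits + overhead_bits)
relative_rate  = effective_rate / technical_rate
print(f"effective rate = {effective_rate:.0f} bit/s, relative rate = {relative_rate:.2f}")
```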

The relative transmission rate is the ratio of the effective rate to the technical rate.

The fidelity of information transmission has to be considered because in every channel there are extraneous sources of interference that distort the signal and make it difficult to determine the form of the transmitted unit element. According to the way interference combines with the signal it is divided into additive and multiplicative; by form it may be harmonic, impulse or fluctuation interference.

Interference leads to errors in the reception of unit elements, and these errors are random. Under such conditions transmission is characterized by the probability of error-free reception. A measure of transmission fidelity can be the error ratio: the ratio of the number of erroneously received symbols to the total number of transmitted symbols.

Often the achieved fidelity turns out to be lower than required, so measures are taken to improve it: reducing the probability of errors, detecting and eliminating errors, and including additional devices in the channel that improve its properties and thereby reduce errors. Improving fidelity is associated with additional material costs.

Reliability: a discrete channel, like any data transmission system, cannot operate entirely without failures.

A failure is an event consisting in the complete or partial loss of system operability. As applied to a data transmission system, a failure is an event that causes a delay of the received message by a time greater than the permissible one; the permissible delay differs from system to system. The property of a communication system that ensures the normal execution of all its specified functions is called reliability. Reliability is characterized by the mean time between failures T_o, the mean recovery time T_v and the availability factor

K_a = T_o / (T_o + T_v).

The probability of failure-free operation shows with what probability the system can operate without a single failure.

Information is a body of knowledge about some event, phenomenon or object. In order to be stored and transmitted, information is presented in the form of messages.

A message is a set of signs (symbols) containing particular information. To transmit messages, a communication system can use material carriers (for example, paper, or storage devices on magnetic disks or tapes) or physical processes (a varying electric current, electromagnetic waves, a light beam).

The physical process that represents the transmitted message is called a signal. A signal is always a function of time.

If the signal is a function S(t) that, for any fixed value of t, takes only certain, predetermined values S_k, then such a signal, and the message it represents, are called discrete. If the signal can take any value within a certain interval, it is called continuous or analog.

The set of possible values of a discrete message (or signal) constitutes the message alphabet. The alphabet of a message is denoted by a capital letter, for example A, and all its possible values (symbols) are listed in curly brackets.


IDS - source of discrete messages; PDS - recipient of discrete messages

SPDS - Discrete Message Transmission System

Let us denote the alphabet of the transmitted messages (the alphabet of the input message, the input alphabet) by A, and the alphabet of the received messages (the alphabet of the output message, the output alphabet) by B.

In general these alphabets may contain an infinite set of values, but in practice they are finite and coincide. This means that when the symbol b_k is received it is assumed that the symbol a_k was transmitted.

Two types of discrete signals are distinguished:

· Discrete random processes of continuous time, in which a change of signal values (symbols) can occur at an arbitrary moment within the interval considered.

· Discrete random processes of discrete time (DSDV), in which a change of symbols can occur only at fixed moments t_0, t_1, t_2, ..., t_i, ..., where t_i = t_0 + i·τ_0. The quantity τ_0 is called the unit interval.

Discrete signals of the second type are also called discrete random sequences.

In the case of continuous time a discrete random process can have an infinite set of realizations on the time interval considered, whereas in the case of a discrete random sequence the number of possible realizations is limited to the set


where k is an index indicating the number of the alphabet symbol and i is an index indicating the moment of time. For an alphabet of size K and a sequence of n symbols, the number of possible realizations equals K^n.

In general, a source of discrete messages or signals (IDS) is any object that generates a discrete random process at its output.

A discrete channel (DC) is any section of the transmission system at whose input and output there are interrelated discrete random processes.

Let us consider the block diagram of the transformations in a discrete message transmission system.