
Designing and calculating the reliability and efficiency of a local area network


Introduction


Today there are more than 130 million computers in the world, and more than 80% of them are connected into various information and computer networks, from small local area networks in offices to global networks such as the Internet.

Operating experience shows that about 80% of all information sent over a network circulates within a single office. For this reason, local area networks have attracted particular attention from developers.

A local area network is a collection of computers, peripheral devices (printers, etc.) and switching devices connected by cables.

Local area networks differ from other networks in that they are usually limited to a moderate geographic area (one room, one building, one district).

Much depends on the quality and thoroughness of the initial stage of LAN deployment: the pre-project survey of the document management system of the enterprise or organization where the network is to be installed. It is at this stage that such important network characteristics are determined as reliability, range of functionality, service life, continuous uptime, maintenance technology, operating and peak load, and security.

The worldwide trend towards connecting computers into networks is driven by a number of important factors: faster transmission of information, the ability for users to exchange information quickly, to send and receive messages without leaving the workplace, to obtain information instantly from anywhere in the world, and to exchange data between computers from different manufacturers running different software.

The enormous potential that computer networks carry, the new momentum the information industry is experiencing, and the substantial acceleration of production processes leave us no right to ignore these opportunities or to fail to apply them in practice.

1. The purpose of the work.

The purpose of the work is to acquire skills in developing the structure of local computer networks and in calculating the main indicators that determine network operation.

2. Theoretical part

2.1. The main goals of creating a local area network (LAN).

The constant need to optimize the distribution of resources (primarily information) periodically confronts us with the need for a fundamental solution to the problem of organizing an information and computing network (ICN) on the basis of the existing computer park and software, one that meets modern scientific and technical requirements and takes into account both growing needs and the possibility of further gradual development of the network as new hardware and software solutions appear.

Briefly, the main advantages of using a LAN are:

Resource sharing

Resource sharing allows economical use of resources, for example, controlling peripheral devices such as laser printers from all attached workstations.

Data sharing

Data sharing provides the ability to access and manage databases from peripheral workstations that need the information.

Software sharing

Software sharing provides the ability to use centrally installed software simultaneously.

Processor sharing

Processor sharing makes it possible to use the computing power of one machine for data processing by other systems on the network.

Main definitions and terminology

A local area network (LAN) is a high-speed data communication system linking data processing equipment within a limited area. A LAN can unite personal computers, terminals, minicomputers and general-purpose computers, printers, speech processing systems and other devices.

Network devices (ND) are specialized devices designed to collect, process, transform and store information received from other network devices, workstations, servers, etc.

The main component of a local area network is the LAN workstation, i.e., a computer whose hardware capabilities make it possible to exchange information with other computers.

A local area network is a complex technical system combining hardware and software: simply connecting devices does not by itself mean that they can work together. Effective communication between different systems requires appropriate software, and maintaining such communication is one of the main functions of LAN system software.

The rules of interaction, defining how a system polls and is polled, are called protocols.

Systems are called similar if they use the same protocols. Systems using different protocols can also communicate with each other via software that performs mutual protocol conversion. LANs can be used to link more than just PCs: they can connect video systems, telephony systems, production equipment, and almost anything that requires high-speed communication. Several local area networks can be connected through local and remote links in an internetworking mode.

Personal computers are networked primarily for sharing programs and data files, transmitting messages (e-mail mode) and for sharing resources (printing devices, modems, and hardware and software interconnection). In this case, personal computers are called workstations of the local computer network.

Modern LAN technology allows different types of cables to be used in the same network, as well as seamless connection of different LAN equipment, such as Ethernet, Arcnet, and Token Ring, into one network.

Problems solved when creating a LAN

When creating a LAN, the developer faces the following problem: given the purpose, the list of LAN functions and the basic requirements for the set of LAN hardware and software, build the network, that is, solve the following tasks:

define the LAN architecture: select the types of LAN components;

evaluate the performance indicators of the LAN;

determine the cost of the LAN.

In this case, the rules for connecting LAN components based on the standardization of networks and their limitations specified by the manufacturers of LAN components should be taken into account.

The configuration of a LAN for an ICN depends significantly on the characteristics of the specific application area. These include the types of transmitted information (data, speech, graphics), the spatial location of subscriber systems, the intensity of information flows, permissible transmission delays between sources and recipients, the amount of data processing at sources and consumers, the characteristics of subscriber stations, external climatic and electromagnetic factors, ergonomic requirements, reliability requirements, LAN cost, etc.

Determining the network topology

Consider the topology options and the composition of the components of the local area network.

The topology of a network is determined by the way its nodes are connected by communication channels. In practice, 4 basic topologies are used:

star (Fig. 1, a and 1, b);

ring (Fig. 2);

bus (Fig. 3);

tree, or hierarchical (Fig. 4).

Fig. 4. Hierarchical network with hubs (AK: active concentrator; PC: passive concentrator).

The selected network topology must correspond to the geographical layout of the LAN and to the requirements for the network characteristics listed in Table 1.

Table 1. Comparative data on the characteristics of the LAN.

Choosing the type of communication medium. Twisted pair

The cheapest cable connection is a twisted two-wire line, usually called a "twisted pair". It allows information transfer at rates up to 10 Mbit/s and is easily expanded, but it is poorly protected against interference. The cable length cannot exceed 1000 m at a transmission rate of 1 Mbit/s. The advantages are low price and simple installation. To increase noise immunity, shielded twisted pair is often used, i.e., a twisted pair placed in a shielding sheath similar to the shield of a coaxial cable. This increases the cost of the twisted pair, bringing its price close to that of coaxial cable.

Coaxial cable

Coaxial cable has a moderate price and good noise immunity and is used for communication over relatively long distances (several kilometers). Information transfer rates range from 1 to 10 Mbit/s and in some cases can reach 50 Mbit/s. Coaxial cable is used for baseband and broadband transmission.

Broadband coaxial cable

Broadband coaxial cable is immune to interference and easy to extend, but expensive. Its information transfer rate is 500 Mbit/s. When transmitting information in the baseband over a distance of more than 1.5 km, an amplifier, the so-called repeater, is required; with it, the total transmission distance increases to 10 km. For computer networks with a bus or tree topology, the coaxial cable must have a terminating resistor (terminator) at each end.

Ethernet cable

Ethernet cable is also a 50-ohm coaxial cable. It is also called thick Ethernet or yellow cable.

Due to its noise immunity, it is an expensive alternative to conventional coaxial cable. The maximum distance without a repeater does not exceed 500 m, and the total span of an Ethernet network is about 3000 m. Owing to its trunk topology, the Ethernet cable uses a terminating resistor at each end of the trunk.

Cheapernet cable

Cheapernet cable, or thin Ethernet as it is often called, is cheaper than Ethernet cable. It is also a 50-ohm coaxial cable, with a data transfer rate of 10 Mbit/s. Repeaters are likewise required when connecting Cheapernet cable segments. Computer networks built on Cheapernet cable are inexpensive and have minimal expansion costs. Network cards are connected using widely available small bayonet connectors (CP-50); no additional shielding is required. The cable is attached to the PC using T-connectors. The distance between two workstations without repeaters can be at most 300 m, and the total length of a Cheapernet network is about 1000 m. The Cheapernet transceiver is located on the network board and serves both for galvanic isolation between adapters and for amplifying the external signal.
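The Cheapernet length limits above lend themselves to a quick sanity check. Below is a minimal sketch assuming only the figures quoted in this section (300 m per unrepeatered segment, roughly 1000 m total); the function name and message format are illustrative:

```python
# Sketch: validating Cheapernet (thin Ethernet) segment lengths against the
# limits quoted above: 300 m between stations without a repeater, ~1000 m total.

MAX_SEGMENT_M = 300    # max distance between two stations without a repeater
MAX_TOTAL_M = 1000     # approximate total network length on Cheapernet cable

def check_cheapernet(segments_m):
    """Return a list of human-readable violations for the given segment lengths."""
    problems = []
    for i, length in enumerate(segments_m):
        if length > MAX_SEGMENT_M:
            problems.append(f"segment {i}: {length} m exceeds {MAX_SEGMENT_M} m (repeater needed)")
    total = sum(segments_m)
    if total > MAX_TOTAL_M:
        problems.append(f"total {total} m exceeds {MAX_TOTAL_M} m")
    return problems
```

A planned cable layout can then be checked before any cable is pulled, e.g. `check_cheapernet([100, 200])` returns an empty list.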

Fiber optic lines

The most expensive transmission medium is optical fiber, also called fiber-optic cable. The speed of information transfer over it reaches several gigabits per second, and the permissible distance exceeds 50 km. External interference has practically no effect. This is currently the most expensive LAN connection medium. Fiber is used where electromagnetic interference occurs or where information must be transmitted over very long distances without repeaters. It also resists eavesdropping, since tapping into a fiber-optic cable is technically very complex. Fiber-optic nodes are connected to the LAN in a star configuration.

Choosing the network type by the method of information transfer

Token Ring local area network

This standard was developed by IBM. Unshielded or shielded twisted pair (UTP or STP) or optical fiber is used as the transmission medium, with a data transfer rate of 4 or 16 Mbit/s. Token passing (Token Ring) is used as the method of controlling station access to the transmission medium. The main points of this method are:

Devices are connected to the network using a ring topology;

All devices connected to the network can transmit data only after receiving permission to transfer (token);

at any given time, only one station in the network has this right.

The network can connect computers in a star or ring topology.
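The token-passing discipline described above can be illustrated with a toy schedule generator. This is a deliberately simplified sketch of the access rule (one token, only the token holder transmits), not IBM's implementation:

```python
# Toy model of token passing on a ring: the token visits stations in ring
# order, and only the current token holder may transmit its frames.

def token_ring_schedule(stations, frames_per_station, rounds):
    """Pass the token around the ring; return the order in which stations transmit."""
    order = []
    for _ in range(rounds):
        for s in stations:                  # the token travels around the ring
            for _ in range(frames_per_station):
                order.append(s)             # only the token holder transmits
    return order
```

The resulting schedule is strictly round-robin, which is why token-passing networks have predictable worst-case access delays.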

Local Area Network Arcnet

Arcnet (Attached Resource Computer NETwork) is a simple, inexpensive, reliable and sufficiently flexible LAN architecture. It was developed by Datapoint Corporation in 1977. Subsequently, the Arcnet license was acquired by Standard Microsystems Corporation (SMC), which became the main developer and manufacturer of equipment for Arcnet networks. Twisted pair, coaxial cable (RG-62) with a characteristic impedance of 93 ohms, and fiber-optic cable are used as the transmission medium; the data transfer rate is 2.5 Mbit/s. Bus and star topologies are used when connecting devices in Arcnet. The method of controlling station access to the transmission medium is the token bus (Token Bus). This method provides the following rules:

at any given time, only one station in the network has the right to transmit (the station holding the token).

Basic principles of work

Each byte in Arcnet is transferred as a special ISU (Information Symbol Unit) message consisting of three service start/stop bits and eight data bits. At the beginning of each packet, an initial AB (Alert Burst) delimiter consisting of six service bits is transmitted. The leading delimiter acts as the packet preamble.
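From these framing numbers one can estimate Arcnet's per-byte overhead and time on the wire. A rough sketch that counts only the Alert Burst and the ISUs; all other protocol fields are ignored, so real packets are somewhat longer:

```python
# Arcnet framing estimate: each data byte travels as an 11-bit ISU
# (3 service bits + 8 data bits), and every packet is preceded by a
# 6-bit Alert Burst. Data rate is 2.5 Mbit/s.

DATA_RATE_BPS = 2_500_000   # 2.5 Mbit/s

def arcnet_bits(payload_bytes):
    """Bits on the wire for a payload, counting only ISUs and the Alert Burst."""
    return 6 + 11 * payload_bytes

def arcnet_time_us(payload_bytes):
    """Approximate transmission time in microseconds."""
    return arcnet_bits(payload_bytes) / DATA_RATE_BPS * 1e6
```

For example, a 100-byte payload occupies 1106 bits, i.e. roughly 442 µs on the wire, and the start/stop bits alone add 37.5% overhead per byte.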

Two topologies can be used in an Arcnet network: star and bus.

Local Ethernet network

The Ethernet specification was introduced in the late seventies by Xerox Corporation. Later, Digital Equipment Corporation (DEC) and Intel Corporation joined the project. In 1982, version 2.0 of the Ethernet specification was published. Based on Ethernet, the IEEE developed the IEEE 802.3 standard; the differences between them are minor.

Basic principles of work:

At the logical level, Ethernet uses a bus topology;

All devices connected to the network are equal, that is, any station can start transmission at any time (if the transmission medium is free);

Data transmitted by one station is available to all stations on the network.
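Because any station may transmit whenever the medium is free, collisions are possible, and classic Ethernet resolves them with CSMA/CD: a station that detects a collision retries after a random backoff. The truncated binary exponential backoff below follows the usual IEEE 802.3 formulation; this is standard background rather than something stated in the text above:

```python
# Sketch of truncated binary exponential backoff (CSMA/CD): after the n-th
# collision a station waits a random number of slot times in [0, 2**min(n,10) - 1].
import random

SLOT_TIME_US = 51.2   # classic 10 Mbit/s Ethernet slot time, microseconds

def backoff_slots(collision_count, rng=random):
    """Choose a backoff delay (in slot times) after `collision_count` collisions."""
    k = min(collision_count, 10)          # exponent is capped at 10
    return rng.randrange(2 ** k)

def backoff_delay_us(collision_count, rng=random):
    """The same delay expressed in microseconds."""
    return backoff_slots(collision_count, rng) * SLOT_TIME_US
```

The doubling window means that under light load retries are almost immediate, while repeated collisions spread stations out in time.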

Selecting a network operating system

The wide variety of computer types used in computer networks entails a variety of operating systems: for workstations, for department-level network servers, and for enterprise-level servers. They may differ in performance and functionality requirements, but it is desirable that they be compatible, allowing different operating systems to interoperate.

Network operating systems can be divided into two groups: department-scale and enterprise-scale. An OS for departments or workgroups provides a set of network services, including sharing of files, applications, and printers. It must also provide fault-tolerance features, for example, working with RAID arrays and supporting cluster architectures. Departmental NOSs are generally easier to install and manage than enterprise NOSs, but they offer less functionality, weaker data protection, weaker interoperability with other types of networks, and lower performance.

An enterprise-scale network operating system must first of all have the basic properties of any enterprise product, including:

scalability, that is, the ability to work equally well in a wide range of different quantitative characteristics of the network,

interoperability with other products, that is, the ability to operate in a complex heterogeneous inter-network environment in a plug-and-play mode.

An enterprise network operating system must support more complex services. Like a workgroup NOS, it must allow users to share files, applications, and printers, but for more users and larger data volumes, and with higher performance. In addition, an enterprise-scale network operating system makes it possible to connect heterogeneous systems, both workstations and servers. For example, even if the OS runs on an Intel platform, it must support UNIX workstations running on RISC platforms. Likewise, a server OS running on a RISC computer must support DOS, Windows, and OS/2. An enterprise network operating system must support multiple protocol stacks (such as TCP/IP, IPX/SPX, NetBIOS, DECnet, and OSI), providing easy access to remote resources and convenient service management procedures, including agents for network management systems.

An important element of an enterprise-scale network operating system is a centralized directory service that stores data about users and network resources. Such a service provides a single logical logon to the network and a convenient way to view all the resources available to a user. With a centralized directory service, the administrator is relieved of the need to maintain a duplicate list of users on each server, and thus of a great deal of routine work and of potential errors in defining the set of users and their rights on each server. An important property of the directory service is its scalability, provided by a distributed database of users and resources.

Network operating systems such as Banyan Vines, Novell NetWare 4.x, IBM LAN Server, Sun NFS, Microsoft LAN Manager, and Windows NT Server can serve as an enterprise operating system, while NetWare 3.x, Personal Ware, and Artisoft LANtastic are more suitable for small workgroups.

The criteria for choosing an enterprise-wide OS are the following characteristics:

Organic multi-server network support;

High efficiency of file operations;

Possibility of effective integration with other operating systems;

Availability of a centralized, scalable directory service;

Good development prospects;

Effective work of remote users;

Various services: file service, print service, data security and fault tolerance, data archiving, messaging service, various databases and others;

Various transport protocols: TCP/IP, IPX/SPX, NetBIOS, AppleTalk;

Support for a variety of end-user operating systems: DOS, UNIX, OS/2, Mac;

Support for network equipment standards Ethernet, Token Ring, FDDI, ARCnet;

Availability of popular APIs and RPC remote procedure call mechanisms;

The ability to interact with the network control and management system, support for SNMP network management standards.

Of course, none of the existing network operating systems fully meets these requirements, so the choice of a network operating system, as a rule, is carried out taking into account the production situation and experience. The table summarizes the main characteristics of the currently popular and available network operating systems.

Determining the reliability of the LAN. 2.4.1. LAN reliability indicators

In general, reliability is the property of a technical device or product to perform its functions within the permissible deviations within a certain period of time.

A product's reliability is established at the design stage and essentially depends on criteria such as the choice of technical and technological specifications and the degree to which the adopted design solutions correspond to the state of the art. LAN reliability is also influenced by the competence of personnel at all levels of network use, by the conditions of transportation, storage, installation, adjustment and testing of each network node, and by compliance with the equipment operating rules.

When calculating and assessing the reliability of a computer network, the following terms and definitions will be used:

Serviceability is the state of a product in which it is capable of performing its functions within the established requirements.

Failure is an event in which the performance of the product is disrupted.

Malfunction - a condition of a product in which it does not meet at least one requirement of the technical documentation.

Operating time - the duration of the product's operation in hours or other units of time.

Mean time between failures (MTBF) is the mean operating time of a repairable product between failures.

Probability of failure-free operation - the probability that a product failure will not occur in a given period of time.

Failure rate is the probability of failure of a non-repairable product per unit of time after a given point in time.

Reliability is the property of a product to remain operational for a certain operating time.

Durability is the property of a product to maintain its performance up to the limit state with interruptions for maintenance and repair.

Resource - the operating time of the product to the limiting state, as specified in the technical documentation.

Service life - the calendar duration of the product's operation to the limit state specified in the technical documentation.

Maintainability - the adaptability of a product to maintenance and repair.

Reliability is a complex property that includes properties such as:

operability;

storability;

maintainability;

durability.

The main property described by quantitative characteristics is operability.

Loss of operability is called a failure. Failures of an electrical product can involve not only electrical or mechanical damage, but also its parameters drifting outside permissible limits. Accordingly, failures can be sudden or gradual.

Sudden device failures are random events. These failures can be independent, when the failure of one element in the device occurs independently of other elements, and dependent, when the failure of one element is caused by the failure of others. The division of failures into sudden and gradual ones is conditional, since sudden failures can be caused by the development of gradual failures.

The main quantitative characteristics of reliability (performance):

the probability of failure-free operation over time t: P(t);

the probability of failure within time t: Q(t) = 1 - P(t);

the failure rate λ(t), the average number of failures occurring per unit of product operating time;

the mean operating time of the product to failure, T (the reciprocal of the failure rate).

The real values of these characteristics are obtained from the results of reliability tests. In the calculations, the time to failure t is treated as a random variable, so the apparatus of probability theory is used.
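During the stage where the failure rate λ is constant (the normal-operation period discussed below), P(t) follows the exponential law P(t) = exp(-λt). A small numeric illustration, assuming that constant-λ exponential model; the λ values used are arbitrary examples:

```python
# Numeric illustration of the reliability quantities above for a constant
# failure rate λ, where P(t) = exp(-λt) (the exponential reliability law).
import math

def p_no_failure(lam, t):
    """Probability of failure-free operation over time t (constant failure rate)."""
    return math.exp(-lam * t)

def p_failure(lam, t):
    """Probability of at least one failure within time t: Q(t) = 1 - P(t)."""
    return 1 - p_no_failure(lam, t)

def mean_time_to_failure(lam):
    """Mean operating time to failure: T = 1/λ."""
    return 1 / lam
```

Note that P(0) = 1 and P(t) decreases toward 0 as t grows, consistent with the axioms listed below.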

Properties (axioms):

P(0) = 1 (only initially operable products are considered);

lim t→∞ P(t) = 0 (operability cannot be maintained indefinitely);

dP(t)/dt < 0 (if the product is not restored after a failure).

During the service life of a technical device, three periods can be distinguished, the failure rate in which varies in different ways. The dependence of the failure rate on time is shown in Fig. 5.

Fig. 5. Typical λ(t) curve over the life of the product.

I - running-in stage: dλ(t)/dt < 0

II - normal operation stage: λ(t) ≈ const

III - aging stage: dλ(t)/dt > 0

In the first period, called the running-in period, structural, technological, installation and other defects reveal themselves; the failure rate is therefore elevated at the beginning of the period and decreases as the period of normal operation approaches.

The period of normal operation is characterized by sudden failures of approximately constant intensity; toward the wear-out period the intensity begins to rise.

During the wear-out period, the failure rate increases over time as the product wears out.

Obviously, the main period should be normal operation; the other periods are the transitions into and out of it.

Axiom 3 is valid for non-recoverable elements (microcircuits, radio components, etc.). The operation of recoverable systems and products differs from that of non-recoverable ones in that, alongside the flow of element failures, there are stages of repairing the failed elements, i.e., a flow of element restorations. For recoverable systems, the third property of the reliability characteristics, dP(t)/dt < 0, does not hold: during a time interval Δt, two elements of the system may fail while three similar elements are restored, so the derivative dP(t)/dt > 0 is possible.

When configuring computer networks, one operates with the concept of the mean time between failures of a particular network element (Tn).

For example, if 100 products were tested for a year and 10 of them failed, then Tn equals 10 years. That is, on average, a product can be expected to fail after 10 years of operation.
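The estimate in this example is simply the total accumulated operating time divided by the number of failures. As a sketch (the function name is illustrative):

```python
# MTBF point estimate from a fixed-time test: total operating time accumulated
# by all units, divided by the number of failures observed.

def mtbf_estimate(units, test_time, failures):
    """Estimate Tn; the result is in the same time units as `test_time`."""
    return units * test_time / failures
```

With the text's numbers, 100 units tested for 1 year with 10 failures give an estimated Tn of 10 years.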

A quantitative characteristic used in the mathematical definition of reliability is the failure rate of a device per unit time, which is usually measured as the number of failures per hour and denoted by λ.

Mean time between failures and mean recovery time are related through the availability factor Kg, which expresses the probability that the computer network is in a working state: Kg = Tn / (Tn + Tv), where Tn is the mean time between failures and Tv is the mean recovery time.

Thus, the availability factor Kg of the entire network is determined as the product of the partial availability factors Kgi. Note that a network is considered reliable when Kg > 0.97.
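These relationships are easy to compute directly. A minimal sketch, assuming the single-element formula Kg = Tn / (Tn + Tv) and the product rule for the whole network; the names are illustrative:

```python
# Availability factor: Kg = Tn / (Tn + Tv) for one element, and the product
# of the element factors Kgi for the whole network (series reliability scheme).

RELIABILITY_THRESHOLD = 0.97   # the text's criterion for a "reliable" network

def availability(t_between_failures, t_recovery):
    """Kg for a single element (both arguments in the same time units)."""
    return t_between_failures / (t_between_failures + t_recovery)

def network_availability(element_factors):
    """Kg of the whole network: product of the partial factors Kgi."""
    kg = 1.0
    for k in element_factors:
        kg *= k
    return kg
```

Because the network factor is a product of values below 1, every additional series element lowers the overall availability, which is why the core equipment dominates the calculation.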

An example of calculating the reliability of a local area network

A local area network usually includes a set of user workstations, a network administrator's workstation (one of the user stations can be used), a server core (a set of hardware server platforms with server programs: file server, WWW server, database server, mail server etc.), communication equipment (routers, switches, hubs) and structured cabling (cable equipment).

Calculation of LAN reliability begins by formulating the concept of failure for the given network. To do this, the management functions implemented at the enterprise using this LAN are analyzed. The functions whose disruption is unacceptable are selected, and the LAN equipment involved in their implementation is determined. For example: during the working day it must certainly be possible to read and write information in the database, as well as to access the Internet.

For a set of such functions, according to the structural electrical diagram, the LAN equipment is determined, the failure of which directly violates at least one of the specified functions, and a logic diagram for calculating the reliability is drawn up.

This takes into account the number and working conditions of repair and restoration teams. The following conditions are usually accepted:

limited recovery, i.e., no more than one failed element can be under restoration at any given time (there is one repair team);

the average recovery time of a failed element is set either on the basis of permissible interruptions in the operation of the LAN, or from the technical capabilities of delivery and inclusion in the operation of this element.

Within the framework of the above approach to the calculation, the reliability calculation scheme, as a rule, can be reduced to a series-parallel scheme.

Let us establish as the criterion for LAN failure the failure of equipment included in the network core: servers, switches or cable equipment. We assume that the failure of user workstations does not lead to failure of the LAN; since the simultaneous failure of all workstations is an unlikely event, the network continues to function in the event of individual workstation failures.

Fig. 6. Layout of LAN elements for calculating the total reliability.

Let us assume that the considered local network includes two servers (one provides access to the Internet), two switches and five cable fragments related to the network core. The failure and recovery rates for them are given below.

Thus,

1) the failure rate of the entire network λ is 6.5·10^-5 1/h;

2) the mean time between failures of the entire network Tn is approximately 15.4 thousand hours;

3) the average recovery time Tv is 30 h.

The calculated availability values for the individual elements are presented in Table 4:

The availability factor of the entire network is then Kg = Tn / (Tn + Tv) ≈ 0.998, well above the 0.97 threshold.
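Plugging in the example's figures (λ = 6.5·10^-5 1/h, Tv = 30 h) confirms this; the calculation below uses only the numbers given above:

```python
# Worked example from the text: total network failure rate 6.5e-5 1/h gives
# Tn = 1/λ ≈ 15.4 thousand hours; with Tv = 30 h the availability factor Kg
# comfortably exceeds the 0.97 reliability threshold.

LAM = 6.5e-5        # total network failure rate, 1/h
T_RECOVERY = 30.0   # average recovery time Tv, h

t_between = 1 / LAM                          # mean time between failures, ~15385 h
kg = t_between / (t_between + T_RECOVERY)    # availability factor, ~0.998
```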

Calculation of the efficiency of the LAN

To determine the parameters of network operation, control points are selected and justified. For the selected points, information is collected and the following parameters are calculated:

request processing time - calculation of the time interval between the formation of a request and the receipt of a response to it, performed for the selected basic services.

response time in a loaded and an unloaded network - calculation of the performance indicator for the loaded and unloaded network.

frame transmission delay time - calculation of the frame delay time of the link layer of the selected main network segments.

determination of the real bandwidth - determination of the real bandwidth for the routes of the selected main nodes of the network.

analytical calculation of reliability indicators - an analytical assessment of the possible failure rate and mean time between failures.

availability factor - analytical calculation of the degree of availability (average recovery time) of a LAN.
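The frame transmission delay and real bandwidth items above can be approximated with a first-order model: serialization time (frame size over link rate) plus propagation time, ignoring queuing and processing delays. A sketch, assuming a typical signal speed of about 2·10^8 m/s in copper or fiber:

```python
# Back-of-the-envelope frame delay: serialization + propagation.
# Queuing and processing delays are deliberately ignored in this model.

SIGNAL_SPEED_M_PER_S = 2e8   # roughly 2/3 the speed of light in cable

def frame_delay_s(frame_bytes, rate_bps, distance_m):
    """Approximate one-way frame delay in seconds."""
    serialization = frame_bytes * 8 / rate_bps   # time to put bits on the wire
    propagation = distance_m / SIGNAL_SPEED_M_PER_S
    return serialization + propagation
```

For a 1500-byte frame on a 10 Mbit/s link, serialization alone is 1.2 ms, which dwarfs propagation at LAN distances; on faster links the balance shifts toward propagation.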

Let's assume that the network between two users is organized according to the scheme shown in Fig. 7.

3. Work order

To complete the work, you must:

a) repeat the safety rules when working with computers;

b) study the lecture materials for the courses "", as well as the theoretical part of these guidelines;

c) choose a semi-hypothetical enterprise or organization and study in it the existing document management system from the point of view of automation. Propose a new document management system based on the use of computer networks, evaluate the advantages and disadvantages of the existing and proposed systems (performance, cost, topology, changes in the wage bill, etc.);

d) calculate the numerical indicators of the new document management system: network reliability, MTBF, availability ratio, message delivery time to the addressee, time of receipt of a message delivery receipt;

e) draw up a laboratory report in accordance with the requirements given in section 4;

f) defend the laboratory work by demonstrating to the teacher:

1) a report on laboratory work;

2) understanding the basic principles of organizing a local area network;

3) theoretical knowledge of the quantitative parameters of the computer network.

When preparing for the defense, it is recommended to answer the review questions given in section 5 for self-testing.

4. Requirements for the report

The laboratory report should contain:

a) title page;

b) the condition of the assignment;

c) justification for the development of a LAN and calculations for the proposed network topology;

d) comments and conclusions on the work done.



The most important characteristic of computer networks is reliability. Reliability is improved primarily by preventing malfunctions: reducing failure rates through electronic circuits and components with a high and ultra-high degree of integration, lowering interference levels, running circuits in light-load modes, ensuring proper thermal conditions for their operation, and improving equipment assembly methods.

Fault tolerance is a property of a computing system that provides it as a logical machine with the ability to continue actions specified by the program after a malfunction occurs. The introduction of fault tolerance requires redundant hardware and software. Areas related to fault prevention and fault tolerance are central to the problem of reliability. On parallel computing systems, both the highest performance and, in many cases, very high reliability are achieved. The available redundancy resources in parallel systems can be flexibly used to both improve performance and improve reliability.

It should be remembered that the concept of reliability covers not only hardware but also software. The main goal of improving system reliability is to ensure the integrity of the data stored in it.

Security is one of the main tasks solved by any computer network. The security problem can be viewed from different angles: malicious data corruption, confidentiality of information, unauthorized access, theft, and so on.

It is always easier to protect information in a local network than on a dozen standalone computers in a company. Practically the only tool at your disposal is backup: creating a complete copy of the data in a safe place, updated regularly and as often as possible. For a standalone personal computer, floppy disks serve as a more or less reliable backup medium; a tape streamer can be used instead, but that means additional equipment costs.

Fig. 5.1. Data security challenges

Data is easiest to protect from all kinds of trouble in a network with a dedicated file server. All the most important files are concentrated on the server, and backing up one machine is much easier than ten. Concentrating the data also simplifies backup, since it does not need to be collected from across the entire network.

Shielded lines improve network security and reliability. Shielded systems are much more resistant to external RF fields.

59. The topology of a computer network is determined by ...

1) the characteristics of the devices used in the network;

2) the network operating system used;

3) the way of physical connection of network nodes by communication channels;

4) the way signals propagate over the network.

60. Standard Ethernet technology uses ...

1) coaxial cable;

2) linear topology;

3) ring topology;

4) carrier sense multiple access;

5) token passing;

6) fiber optic cable;

61. In what ways can a workstation be physically connected to the network?

1) using a network adapter and a cable outlet

2) using a hub

3) using a modem and a dedicated telephone line

4) using the server

62. Local networks cannot be physically interconnected using ...

1) servers

2) gateways

3) routers

4) hubs

63. What is the main disadvantage of a ring topology?

1. high cost of the network;

2. low network reliability;

3. high cable consumption;

4. low noise immunity of the network.

64. For which topology is the statement true: "A computer failure does not disrupt the operation of the entire network"?

1) basic star topology

2) basic topology "bus"

3) basic ring topology

4) the statement is not true for any of the basic topologies

65. What is the main advantage of a star topology?

1. low cost of the network;

2. high reliability and manageability of the network;

3. low cable consumption;

4. good noise immunity of the network.

66. What topology and access method are used in Ethernet networks?

1) bus and CSMA / CD

2) bus and token transfer

3) ring and marker transfer

4) bus and CSMA / CA

67. What characteristics of the network are determined by the choice of the network topology?

1. the cost of equipment

2. network reliability

3. subordination of computers in the network

4. network extensibility

68. What is the main benefit of the token passing access method?

1. no collisions
2. simplicity of technical implementation
3. low cost of equipment

69. Stages of data exchange in networked computer systems:

1) data transformation while moving from the upper levels to the lower ones

2) data transformation while moving from the lower levels to the upper ones

3) transportation to the receiving computer

70. What is the main protocol for hypertext transmission on the Internet?

2) TCP/IP

3) NetBIOS

71. What is the name of the device that, on request, returns the domain name for an IP address and vice versa?

1) DFS server

2) host computer

3) DNS server

4) DHCP server

72. The DNS protocol establishes the correspondence of ...

1) IP addresses with switch port

2) IP addresses with a domain address

3) IP addresses with MAC address

4) MAC addresses with a domain address

73. What IP addresses cannot be assigned to hosts on the Internet?

1) 172.16.0.2;

2) 213.180.204.11;

3) 192.168.10.255;

4) 169.254.141.25

74. The unique 32-bit sequence of binary digits that uniquely identifies a computer on a network is called ...

1) MAC address

2) URL;

3) IP - address;

4) frame;

75. Which identifier(s) are allocated in an IP address using the subnet mask?

1) networks

2) network and node

3) node

4) adapter

76. For each server connected to the Internet, the following addresses are set:

1) digital only;

2) domain only;

3) digital and domain;

4) addresses are determined automatically;

77. At the network layer of the OSI model ...

1) retransmission of erroneous data is performed;

2) the route of message delivery is determined;

3) the programs that will carry out the interaction are determined;

78. What protocol is used to determine the physical MAC address of a computer corresponding to its IP address?

79. The OSI model includes _____ layers of interaction

1) seven

2) five

3) four

4) six

80. What class of network must be registered for an organization with 300 computers to access the Internet?

81. What is the difference between TCP and UDP?

1) uses ports when running

2) establishes a connection before transferring data

3) guarantees the delivery of information

82. Which of the following protocols are located at the network layer of the TCP/IP stack?

They work, but not quite as we would like. For example, it is not clear how to restrict access to a network drive; every morning the accountant's printer stops working; and there is a suspicion that a virus lives somewhere, because the computer has become unusually slow.

Sound familiar? You are not alone: these are classic symptoms of misconfigured network services. It is quite fixable; we have helped solve similar problems hundreds of times. Let's call it modernization of the IT infrastructure, or improving the reliability and security of a computer network.

Improving the Reliability of a Computer Network - Who Benefits?

First of all, it is needed by a manager who cares about his company. The result of a well-executed project is a significant improvement in network performance and the almost complete elimination of failures. For this reason, the money spent on modernizing the network, improving the IT infrastructure and raising the level of security should be considered not a cost but an investment that will certainly pay off.

A network modernization project is also useful to ordinary users, since it allows them to focus on their direct work rather than on solving IT problems.

How we carry out a network modernization project

We are ready to help you understand the problem; it is not difficult. Start by calling us and asking for an IT audit. It will show what is causing your daily problems and how to get rid of them, and we will do it for you inexpensively or even free of charge.

Essentially, an IT audit is part of a network modernization project. As part of it, we not only examine the server and workstations and map out how the network equipment and telephony are connected, but also develop a plan for the modernization project and determine its budget, both for our work and for the necessary equipment and software.

The next stage is the actual implementation of the network modernization project. The main work is performed on the server, since it is the defining component of the infrastructure. Our task within the project is to eliminate not so much the symptoms as the roots of the problems. As a rule, they boil down to roughly the same conceptual infrastructure flaws:

a) servers and workstations operate as part of a workgroup rather than a domain, which Microsoft recommends for networks with more than five computers. This leads to user authentication problems and makes it impossible to enforce password policies, restrict user rights, or apply security policies.

b) network services, in particular DNS, are configured incorrectly, so computers stop seeing each other or network resources. For the same reason, the network most often "slows down" for no apparent reason.

c) computers run a motley assortment of anti-virus software, which turns the protection into a sieve. You can work on a slow machine for years without knowing that 80% of its resources are spent attacking other computers or sending spam, or even stealing your passwords and forwarding everything you type to an external server. Unfortunately, this is entirely possible; reliable anti-virus protection is an important and necessary part of any network modernization project.

These are the three most common causes of infrastructure problems, and each of them needs to be addressed urgently. It is necessary not only to fix the symptoms but also to build the system correctly, so as to eliminate the very possibility of their reappearance.

By the way, we prefer the phrase "modernization of the information system" to "network modernization", since we try to look beyond network problems alone. In our opinion, an information system should be considered from different points of view, and a professional developing a network modernization project should take into account the following aspects of its operation.

Information security of your company

Speaking about the information security of the company, we consider external protection against intrusions from the Internet less important than putting the internal work of employees in order. Unfortunately, the greatest damage to the company is caused not by unknown hackers, but by people you know by sight who were offended by your decisions or consider the information their own. A manager leaving with the customer base, or a resentful employee copying accounting or management data "just in case", are the two most common security breaches.

Data security

Unfortunately, data integrity is rarely on the priority list of executives, or even of many IT professionals. It is assumed that, just as spacecraft sometimes fall out of orbit, server breakdowns are almost impossible to prevent, and network modernization projects therefore often fail to cover this part of the infrastructure.

We partly agree that it is not always possible to prevent an accident. But any self-respecting IT specialist can and must ensure that the data always remains intact and that the company's work can be restored within an hour or two of a server failure. In the course of a network modernization project we consider it our duty to implement both hardware redundancy for storage media and data backup according to a scheme that allows data to be restored at the right moment and kept safe for a long time. If an administrator does not understand the meaning of the above, then, to put it mildly, he cannot be trusted as a professional.

Long-term equipment operation

The long-term performance of servers and workstations is directly related to what they are made of and how. We try to help you choose equipment that is bought for the long term and does not require attention for years. Within a network modernization project it is very often necessary to upgrade the server's disk subsystem, something that is unfortunately often forgotten: the real service life of hard drives does not exceed four years, after which they must be replaced in servers. This should be monitored as part of server and computer maintenance, as it is critical to the reliability of data storage.

Maintenance of server and computer systems

It should not be forgotten that even a very well-built and reliable infrastructure requires competent and attentive maintenance. We believe that IT outsourcing of infrastructure maintenance is a logical continuation of design work. A number of companies have their own IT specialists but have entrusted us with maintaining their server systems. This practice has proven highly effective: the company pays only for server support, while its own staff take on the low-level tasks. We are responsible for ensuring that security and backup policies are respected and that routine maintenance is carried out, and we monitor the server systems.

Relevance of IT solutions

The world is constantly changing, and the IT world changes twice as fast: technologies are born and die faster than we would like to spend money on updating them. Therefore, when carrying out a network modernization project, we consider it necessary to implement not only the newest solutions, but above all the most reliable and justified ones. What everyone is talking about is not always a panacea or a solution to your problem; often things are not at all as described. Virtualization and cloud computing are used by thousands of companies, yet introducing a particular technology is not always economically justified. Conversely, a correctly chosen and competently conducted network modernization project, together with a reasonable choice of software, provides new opportunities for work and saves time and money.

Paid Windows or Free Linux? MS SharePoint or Bitrix: Corporate Portal? IP telephony or classic? Each product has its own merits and its own scope.

What does your company need? How do you carry out a project to modernize the network or introduce a new service without interrupting the company's work? How do you ensure that the implementation is successful and your employees get the best tools for the job? Call us and let's figure it out.

Lecture 13. Requirements for computer networks

The lecture discusses the most important requirements for computer networks: performance, reliability and security, extensibility and scalability, transparency, support for different types of traffic, quality-of-service characteristics, manageability and compatibility.

Keywords: performance, response time (average, instantaneous, maximum), total throughput, transmission delay, transmission delay variation, reliability metrics, mean time between failures, probability of failure, failure rate, availability, readiness, data safety, data consistency, probability of data delivery, security, fault tolerance, extensibility, scalability, transparency, multimedia traffic, synchronism, delays, data loss, computer traffic, centralized control, monitoring, analysis, network planning, quality of service (QoS), packet transmission delays, level of packet loss and distortion, "best effort" service.

The most common wish regarding the operation of a network is that it perform the set of services for which it is intended: for example, providing access to file archives or to pages of public Internet Web sites, exchanging e-mail within the enterprise or on a global scale, interactive voice messaging, IP telephony, and so on.

Meeting this basic requirement is only one of many demands placed on today's networks. In this section, we will focus on some others, no less important.

All the other requirements (performance, reliability, compatibility, manageability, security, extensibility and scalability) relate to the quality with which this main task is performed. And although all of these requirements are very important, the concept of "quality of service" (Quality of Service, QoS) of a computer network is often interpreted more narrowly: it includes only the two most important characteristics of the network, performance and reliability.

Performance

Potentially high performance is one of the main advantages of distributed systems, which include computer networks. This property is provided by the fundamental, but, unfortunately, not always practically realizable possibility of distributing work among several computers in the network.

Main characteristics of network performance:

response time;

    traffic rate;

    bandwidth;

    transmission delay and transmission delay variation.

Network response time is an integral measure of network performance from the user's point of view. It is this characteristic the user has in mind when saying: "Today the network is slow".

In general, response time is defined as the interval between the occurrence of a user request for a network service and the receipt of a response to it.

Obviously, the value of this indicator depends on the type of service being accessed, on which user is accessing which server, and on the current state of the network elements: the load on the segments, switches and routers along the request's path, the load on the server, and so on.

Therefore, it makes sense to also use a weighted average estimate of the network response time, averaging this indicator across users, servers and time of day (on which network load largely depends).

Network response time is usually made up of several components. In general, it includes:

    time of preparation of requests on the client computer;

    the time of transmission of requests between the client and the server through network segments and intermediate communication equipment;

    time of processing requests on the server;

    the time of transmission of responses from the server to the client and the time it takes to process the responses received from the server on the client computer.
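The decomposition above can be written as a simple sum; a sketch with hypothetical component values:

```python
# Network response time as the sum of its components (all values hypothetical).
components_ms = {
    "request preparation on the client": 5.0,
    "request transmission through the network": 20.0,
    "request processing on the server": 50.0,
    "response transmission and processing on the client": 25.0,
}

response_time_ms = sum(components_ms.values())
print(f"Total response time: {response_time_ms} ms")
```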

It is obvious that the decomposition of the response time into components is of no interest to the user; what matters is the final result. However, it is very important for a network specialist to isolate from the total response time the components corresponding to the stages of actual network data processing: the transfer of data from the client to the server through network segments and communication equipment.

Knowing the network components of the response time allows you to evaluate the performance of individual network elements, identify bottlenecks and, if necessary, upgrade the network to improve its overall performance.

Network performance can also be characterized by the speed of traffic transmission.

The traffic transfer rate can be instantaneous, maximum or average.

    the average speed is calculated by dividing the total amount of transmitted data by the time of their transmission, and a sufficiently long period of time is selected - an hour, a day or a week;

    the instantaneous speed differs from the average one in that a very small time interval is selected for averaging - for example, 10 ms or 1 s;

    maximum speed is the highest speed recorded during the observation period.

Most often, when designing, configuring and optimizing a network, the average and maximum speeds are used. The average speed at which an individual element or the network as a whole processes traffic makes it possible to evaluate the operation of the network over a long time, during which, by the law of large numbers, peaks and dips in traffic intensity compensate each other. The maximum speed allows you to estimate how the network will handle peak loads, for example during the morning hours, when employees log into the network almost simultaneously and access shared files and databases. Usually, when determining the speed characteristics of a segment or device, the traffic of an individual user, application or computer is not singled out; the total volume of transmitted information is calculated. However, for a more accurate assessment of the quality of service, such granularity is desirable, and network management systems increasingly allow it.
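The three kinds of traffic rate can be illustrated on a series of per-interval byte counters (the sample data below is hypothetical):

```python
# Average, instantaneous and maximum traffic rate from per-second byte counters.
bytes_per_second = [120_000, 980_000, 450_000, 0, 760_000, 310_000]  # hypothetical samples

instant_bps = [8 * b for b in bytes_per_second]                  # rate in each short interval
average_bps = 8 * sum(bytes_per_second) / len(bytes_per_second)  # rate over the whole period
maximum_bps = max(instant_bps)                                   # peak observed rate
```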

Throughput (bandwidth) is the maximum possible speed of traffic processing, determined by the standard of the technology on which the network is built. It reflects the maximum possible amount of data transmitted by the network or a part of it per unit of time.

Bandwidth is not a user characteristic like response time or data transmission speed, since it describes the speed of internal network operations: the transfer of data packets between network nodes through various communication devices. But it directly characterizes the quality of the network's main function, transporting messages, and is therefore used in analyzing network performance more often than response time or traffic speed.

Throughput is measured in either bits per second or packets per second.

The network bandwidth depends both on the characteristics of the physical transmission medium (copper cable, optical fiber, twisted pair) and on the adopted data transmission method (Ethernet, Fast Ethernet, ATM). Bandwidth is often used as a characteristic not so much of the network as of the technology on which the network is built. The importance of this characteristic is shown, in particular, by the fact that its value sometimes becomes part of the name, as in 10 Mbps Ethernet or 100 Mbps Ethernet.

Unlike response time or traffic speed, throughput does not depend on network congestion and has a constant value determined by the technologies used in the network.

In different parts of a heterogeneous network, where several different technologies are used, the bandwidth can be different. To analyze and configure a network, it is very useful to know the data on the throughput of its individual elements. It is important to note that due to the sequential nature of data transmission by various network elements, the total throughput of any composite path in the network will be equal to the minimum of the throughput of the constituent elements of the route. To increase the throughput of a compound path, it is necessary first of all to pay attention to the slowest elements. Sometimes it is useful to operate with the total network bandwidth, which is defined as the average amount of information transferred between all nodes in the network per unit of time. This indicator characterizes the quality of the network as a whole, without differentiating it by individual segments or devices.
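The rule that a composite path is limited by its slowest element can be expressed directly (the per-hop figures are hypothetical):

```python
# End-to-end throughput of a composite path equals the minimum over its elements.
hop_throughputs_mbps = [100, 1000, 10, 100]  # hypothetical client-switch-router-server path

path_throughput_mbps = min(hop_throughputs_mbps)
print(f"Effective path throughput: {path_throughput_mbps} Mbit/s")
```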

Transmission delay is defined as the delay between the moment the data arrives at the input of any network device or part of the network and the moment it appears at the output of this device.

This performance parameter is close in meaning to the network reaction time, but differs in that it always characterizes only the network stages of data processing, without processing delays by the end nodes of the network.

Typically, the quality of the network is characterized by the maximum transmission delay and the delay variation. Not all types of traffic are sensitive to transmission delays, at least to the delays typical of computer networks, which usually do not exceed hundreds of milliseconds, more rarely a few seconds. Delays of this order in packets generated by the file service, e-mail service or print service have little impact on the quality of those services from the network user's point of view. On the other hand, the same delays in packets carrying voice or video data can significantly degrade the quality of the information provided to the user: an "echo" effect appears, some words become impossible to make out, the image jitters, and so on.
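Maximum delay and delay variation can be extracted from per-packet measurements; the spread between the largest and smallest delay is used here as the simplest jitter measure (the delay values are hypothetical):

```python
# Maximum transmission delay and delay variation (jitter) from per-packet delays in ms.
delays_ms = [12.0, 15.5, 11.8, 30.2, 13.1]  # hypothetical measurements

max_delay_ms = max(delays_ms)
delay_variation_ms = max(delays_ms) - min(delays_ms)
```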

All of these network performance characteristics are fairly independent. While the network bandwidth is constant, the traffic rate can vary with the network load, without, of course, exceeding the bandwidth limit. Thus, in a single-segment 10 Mbps Ethernet network, computers can exchange data at 2 Mbps or 4 Mbps, but never at 12 Mbps.

Throughput and transmission delay are also independent parameters: a network can have, for example, high throughput but still introduce significant delays in the transmission of each packet. An example of such a situation is a communication channel formed by a geostationary satellite. Its throughput can be quite high, for example 2 Mbit/s, while the transmission delay is always at least 0.24 s, determined by the propagation speed of the signal (about 300,000 km/s) and the length of the channel (72,000 km).
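The 0.24 s figure follows directly from the propagation formula delay = path length / signal speed:

```python
# Propagation delay of a geostationary satellite channel, as in the text.
path_length_km = 72_000          # combined up- and downlink distance
signal_speed_km_per_s = 300_000  # approximate signal propagation speed

delay_s = path_length_km / signal_speed_km_per_s
print(f"Propagation delay: {delay_s} s")  # 0.24 s regardless of throughput
```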

Reliability and security

One of the original goals of creating distributed systems, which include computer networks, was to achieve greater reliability compared to individual computers.

It is important to distinguish between several aspects of reliability.

For relatively simple technical devices, such reliability indicators are used as:

Mean time between failures;

Probability of failure;

Failure rate.

However, these indicators are suitable for assessing the reliability only of simple elements and devices, which can be in just two states: operable or inoperable. Complex systems consisting of many elements may, in addition to these two states, be in intermediate states that these characteristics do not capture.
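For simple devices, the indicators above are linked by the common constant-failure-rate (exponential) model, in which the failure rate is the reciprocal of the MTBF; a sketch with hypothetical numbers:

```python
import math

# Probability that a simple device operates for time t without failure,
# assuming a constant failure rate (exponential reliability model).
mtbf_hours = 8760.0            # hypothetical: one failure per year on average
failure_rate = 1 / mtbf_hours  # failures per hour

t_hours = 720.0                # one month of continuous operation
p_no_failure = math.exp(-failure_rate * t_hours)
p_failure = 1 - p_no_failure
```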

To assess the reliability of complex systems, a different set of characteristics is used:

Availability (readiness);

Data safety;

Consistency of data;

Probability of data delivery;

Security;

Fault tolerance.

Availability, or readiness, refers to the fraction of time during which the system can be used. Availability can be improved by introducing redundancy into the structure of the system: key elements must exist in several copies, so that if one of them fails, the others keep the system functioning.
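The effect of such redundancy can be estimated under the usual simplifying assumption that copies fail independently: if one copy is available a fraction a of the time, at least one of n copies is available 1 - (1 - a)^n of the time.

```python
# Availability gain from redundant copies, assuming independent failures.
single_availability = 0.99  # hypothetical: one copy is up 99% of the time

for copies in (1, 2, 3):
    combined = 1 - (1 - single_availability) ** copies
    print(f"{copies} copies: available {combined:.6f} of the time")
```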

For a computer system to be considered highly reliable, it must at least have high availability, but that alone is not enough. It is also necessary to ensure the safety of the data and protect it from distortion, and data consistency must be maintained: for example, if several copies of the data are stored on several file servers for reliability, their identity must be ensured at all times.

Since the network operates on the basis of packet transmission between end nodes, one of the reliability characteristics is the probability that a packet is delivered to the destination node without distortion. Alongside it, other indicators can be used: the probability of packet loss (for any reason: a router buffer overflow, a checksum mismatch, the absence of a working path to the destination node, and so on), the probability that a single bit of transmitted data is corrupted, and the ratio of lost to delivered packets.
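These delivery-reliability indicators are simple ratios of packet counters; a sketch with hypothetical values:

```python
# Delivery probability and loss probability from packet counters (hypothetical values).
packets_sent = 100_000
packets_delivered = 99_970

packets_lost = packets_sent - packets_delivered
loss_probability = packets_lost / packets_sent
delivery_probability = packets_delivered / packets_sent
```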

Another aspect of overall reliability is security, that is, the ability of the system to protect data from unauthorized access. This is much more difficult in a distributed system than in a centralized one. In networks, messages are transmitted over communication lines, often passing through public spaces in which wiretapping devices can be installed. Unattended personal computers can become another vulnerability. In addition, there is always a potential threat of compromising the protection of the network from unauthorized users if the network has access to global public networks.

Another characteristic of reliability is fault tolerance. In networks, fault tolerance refers to the ability of the system to hide the failure of its individual elements from the user. For example, if copies of a database table are stored concurrently on multiple file servers, users may simply not notice that one of them fails. In a fault-tolerant system, the failure of one of its elements leads to a certain decrease in the quality of its work (degradation), and not to a complete shutdown. So, if one of the file servers fails in the previous example, only the time of access to the database increases due to a decrease in the degree of parallelization of queries, but in general the system will continue to perform its functions.

Extensibility and scalability

The terms "extensibility" and "scalability" are sometimes used as synonyms, but this is incorrect: each has a clearly defined independent meaning.

Extensibility: the possibility of relatively easily adding individual network elements. The ease of expansion may be provided only within rather limited bounds.

Scalability: the ability to add network elements, not necessarily easily, over a very wide range while preserving the consumer properties of the network.

Extensibility means the ability to relatively easily add individual network elements (users, computers, applications, services), increase the length of network segments and replace existing equipment with more powerful equipment. It is fundamentally important that this ease of expansion may be provided only within rather limited bounds. For example, an Ethernet LAN built on a single segment of thick coaxial cable is highly extensible in the sense that new stations can easily be connected. However, such a network has a limit on the number of stations: it should not exceed 30–40. Although the network physically allows more stations to be connected to a segment (up to 100), this often drastically degrades performance. The presence of such a limitation is a sign of poor scalability despite good extensibility.

Scalability (scalability) means that the network allows the number of nodes and the length of links to grow over a very wide range without degrading network performance. To ensure scalability, additional communication equipment must be used and the network must be structured in a special way. For example, a multi-segment network built with switches and routers and having a hierarchical link structure scales well. Such a network can include several thousand computers while providing each network user with the required quality of service.

Transparency

A network is transparent when it appears to its users not as a set of separate computers interconnected by a complex system of cables, but as a single traditional computer with a time-sharing system. Sun Microsystems' famous slogan, "The network is the computer," describes just such a transparent network.

Transparency can be achieved at two different levels: the user level and the programmer level. At the user level, transparency means that the user works with remote resources using the same commands and familiar procedures as with local resources. At the program level, transparency means that an application requires the same calls to access remote resources as it does to access local ones. Transparency at the user level is easier to achieve, because all the peculiarities of the procedures associated with the distributed nature of the system are hidden from the user by the programmer who creates the application. Transparency at the program level requires that all the details of distribution be hidden by the network operating system.

Transparency - the property of the network to hide the details of its internal structure from the user, which makes it easier to work in the network.

The network must hide all the peculiarities of operating systems and differences in types of computers. A Macintosh user should be able to access resources supported by UNIX, and a UNIX user should be able to share information with Windows 95 users. The vast majority of users do not want to know anything about internal file formats or UNIX command syntax. A user of an IBM 3270 terminal should be able to exchange messages with users on a network of personal computers without having to delve into the secrets of hard-to-remember addresses.

The concept of transparency applies to various aspects of the network. For example, location transparency means that the user does not need to know the location of software and hardware resources such as processors, printers, files, and databases. A resource name must not include information about its location, so names like mashinel:prog.c or \\ftp_serv\pub are not transparent. Likewise, relocation transparency means that resources can move freely from one computer to another without changing their names. Another possible aspect is transparency of parallelism: computations are parallelized automatically, without the programmer's participation, and the system itself distributes the parallel branches of an application among processors and computers on the network. At present, it cannot be said that transparency is fully inherent in many computer networks; rather, it is a goal toward which the developers of modern networks are striving.

Support for different types of traffic

Computer networks were originally intended for sharing computer resources: files, printers, etc. The traffic generated by these traditional computer network services has its own characteristics and differs significantly from message traffic in telephone networks or, for example, in cable TV networks. However, in the 1990s, the traffic of multimedia data, representing digital speech and video images, entered computer networks. Computer networks began to be used for organizing video conferencing, training based on video films, etc. Naturally, dynamic transmission of multimedia traffic requires different algorithms and protocols, and, accordingly, other equipment. Although the share of multimedia traffic is still small, it has already begun to penetrate both global and local networks, and this process, obviously, will actively continue.

The main feature of the traffic generated during the dynamic transmission of voice or image is the presence of strict requirements for the synchronization of transmitted messages. For high-quality reproduction of continuous processes, which are sound vibrations or changes in light intensity in a video image, it is necessary to obtain measured and encoded signal amplitudes with the same frequency with which they were measured on the transmitting side. If the messages are delayed, there will be distortions.

At the same time, computer data traffic is characterized by an extremely uneven intensity of messages entering the network, with no strict requirements on the synchronization of their delivery. For example, a user working with text on a remote disk generates a random flow of messages between the remote and local computers that depends on the user's actions, and delivery delays within certain (rather wide, from the computer's point of view) limits have little effect on the quality of service. All computer communication algorithms, the corresponding protocols, and communication equipment were designed for exactly this "pulsating" nature of traffic, so the need to transmit multimedia traffic requires fundamental changes both in protocols and in equipment. Today, almost all new protocols support multimedia traffic to one degree or another.

Combining traditional computer and multimedia traffic in one network is especially difficult. Transmitting exclusively multimedia traffic over a computer network, although associated with certain difficulties, is less troublesome. But the coexistence of two types of traffic with opposite quality-of-service requirements is much harder. Usually, the protocols and equipment of computer networks treat multimedia traffic as second-class, so its quality of service is poor. Today, great efforts are spent on creating networks that do not infringe on the interests of either type of traffic. Closest to this goal are networks based on ATM technology, whose developers took the coexistence of different traffic types in one network into account from the start.

Controllability

Ideally, network management is a system that monitors, controls, and manages every element of the network, from the simplest to the most sophisticated devices, while treating the network as a whole, rather than as a disparate collection of separate devices.

Controllability of a network implies the ability to centrally monitor the state of its main elements, identify and resolve problems that arise during network operation, analyze performance, and plan network development.

A good management system monitors the network and, upon detecting a problem, triggers an action, corrects the situation, and notifies the administrator of what happened and what steps were taken. At the same time, the control system must accumulate data on the basis of which the development of the network can be planned. Finally, the control system should be manufacturer-independent and have a user-friendly interface that allows you to perform all actions from one console.

In tactical tasks, administrators and technicians face the daily challenges of keeping the network up and running. These tasks require a quick solution, the staff of the network must respond promptly to messages about faults coming from users or automatic network controls. Gradually, general performance, network configuration, fault handling, and data security issues become visible, requiring a strategic approach, that is, network planning. Planning, in addition, includes forecasting changes in user requirements for the network, questions of the use of new applications, new network technologies, etc.

The need for a management system is especially evident in large networks: corporate or global. Without a control system, such networks require the presence of qualified maintenance specialists in every building in every city where the network equipment is installed, which ultimately leads to the need to maintain a huge staff of maintenance personnel.

Currently, there are many unsolved problems in the field of network management systems. Truly convenient, compact, multi-protocol network management tools are clearly in short supply. Most existing tools do not manage the network at all but merely monitor its operation: they observe the network but take no active action if something has happened or is about to happen to it. There are few scalable systems capable of serving both department-wide and enterprise-wide networks; very many systems manage only individual network elements and do not analyze the network's ability to perform high-quality data transfer between end users.

Compatibility

Compatibility, or integrability, means that the network can include a variety of software and hardware: different operating systems supporting different communication protocol stacks can coexist in it, and it can run hardware and applications from different manufacturers. A network consisting of elements of different types is called heterogeneous, and if a heterogeneous network works without problems, it is integrated. The main way to build integrated networks is to use modules made in accordance with open standards and specifications.

Quality of service

Quality of service (Quality of Service, QoS) quantifies the likelihood that the network will transmit a given data stream between two nodes in accordance with the needs of an application or user.

For example, when transmitting voice traffic through a network, quality of service is most often understood as a guarantee that voice packets will be delivered with a delay of no more than N ms, that the delay variation will not exceed M ms, and that the network will maintain these characteristics with a probability of, say, 0.95 over a certain time interval. That is, for an application transmitting voice traffic, what matters is that the network guarantee precisely this set of QoS characteristics. A file service, in contrast, needs a guaranteed average bandwidth plus the ability to burst above it for short intervals, up to some maximum level, for the fast transmission of traffic bursts. Ideally, the network should guarantee the specific QoS parameters formulated for each individual application. However, for obvious reasons, existing and developing QoS mechanisms are limited to solving a simpler problem: guaranteeing some averaged requirements set for the main types of applications.

Most often, the parameters that appear in various definitions of the quality of service govern the following performance indicators of the network:

Bandwidth;

Packet transmission delays;

Packet loss and distortion.
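The voice-traffic guarantee described above can be expressed as a simple check over a sample of measured packet delays. The sketch below is purely illustrative (the function name and thresholds are assumptions, not any real QoS API): it tests whether at least 95% of packets arrive within a delay bound and whether the delay variation stays within a jitter bound.

```python
def qos_met(delays_ms, max_delay_ms, max_jitter_ms, min_fraction=0.95):
    """Check a sample of packet delays against an illustrative QoS target.

    The target is met when at least min_fraction of packets arrive within
    max_delay_ms, and the spread between the slowest and fastest packet
    (a crude measure of delay variation) is within max_jitter_ms.
    """
    within = sum(1 for d in delays_ms if d <= max_delay_ms) / len(delays_ms)
    jitter = max(delays_ms) - min(delays_ms)
    return within >= min_fraction and jitter <= max_jitter_ms

# Four packets with delays 10-13 ms easily meet a 100 ms / 5 ms target:
ok = qos_met([10, 12, 11, 13], max_delay_ms=100, max_jitter_ms=5)
```

Real QoS metrics define delay variation per consecutive packet pair rather than as a min-max spread; the min-max form is used here only to keep the sketch short.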

Quality of service is guaranteed for some data flow. Recall that a data stream is a sequence of packets that have some common characteristics, such as the source node address, information identifying the type of application (TCP / UDP port number), and so on. Concepts such as aggregation and differentiation are applicable to streams. Thus, the data flow from one computer can be represented as a set of flows from different applications, and the flows from the computers of one enterprise are aggregated into one data flow of a subscriber of a certain service provider.
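Flow differentiation and aggregation can be sketched as grouping packets by a common key. The example below is a minimal illustration assuming packets are represented as dictionaries with hypothetical src/dst/port fields (the field names are not from any real protocol library):

```python
from collections import defaultdict

def aggregate_flows(packets, key=lambda p: (p["src"], p["dst"], p["port"])):
    """Group packets into flows by a shared key and sum the bytes per flow.

    The default key differentiates flows by a (source, destination, port)
    triple; passing a coarser key, e.g. source address only, aggregates
    several fine-grained flows into one.
    """
    flows = defaultdict(int)
    for p in packets:
        flows[key(p)] += p["bytes"]
    return dict(flows)
```

With the coarser key `lambda p: p["src"]`, all traffic from one computer collapses into a single aggregated flow, mirroring how a provider aggregates a subscriber's traffic.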

QoS support mechanisms do not create bandwidth by themselves. The network cannot give more than what it has. So the actual bandwidth of communication channels and transit communication equipment is the network resources, which are the starting point for the operation of QoS mechanisms. QoS mechanisms only control the allocation of available bandwidth according to application requirements and network settings. The most obvious way to reallocate network bandwidth is to manage packet queues.

Since data exchanged between two end nodes passes through a number of intermediate network devices such as hubs, switches, and routers, QoS support requires the cooperation of all network elements along the traffic path, that is, end to end ("end-to-end", "e2e"). Any QoS guarantee is only as good as the weakest element in the chain between sender and receiver. Therefore, it should be clearly understood that QoS support in only one network device, even a backbone device, may improve the quality of service only slightly, or not affect the QoS parameters at all.

The implementation of QoS support mechanisms in computer networks is a relatively new trend. For a long time, computer networks existed without such mechanisms, and this is mainly due to two reasons. First, most of the applications running on the network were “undemanding,” meaning that for such applications, packet delays or variations in average throughput over a fairly wide range did not result in significant loss of functionality. Examples of "undemanding" applications are the most common e-mail or remote file copy applications on networks in the 1980s.

Second, the bandwidth of 10-megabit Ethernet was in many cases not a scarce resource. A shared Ethernet segment connecting 10-20 computers that occasionally copied small text files of at most a few hundred kilobytes allowed the traffic of each pair of interacting computers to cross the network as quickly as the applications that generated that traffic required.

As a result, most networks delivered a quality of transport that met the needs of the applications. True, these networks provided no guarantees that packet delays, or the bandwidth at which packets are transmitted between nodes, would stay within certain limits. Moreover, during temporary congestion, when a significant share of computers began to transmit data at maximum speed simultaneously, latency and bandwidth degraded to the point where applications failed: they ran too slowly, sessions broke, and so on.

There are two main approaches to ensuring network quality. The first is that the network guarantees the user compliance with a certain numerical value of the quality of service indicator. For example, frame relay and ATM networks can guarantee the user a given level of bandwidth. In the second approach (best effort), the network tries to serve the user as efficiently as possible, but does not guarantee anything.

The transport service provided by such networks is called "best effort" service (or "as possible" service). The network tries to process incoming traffic as quickly as it can, but gives no guarantees about the result. Most technologies developed in the 1980s are examples: Ethernet, Token Ring, IP, X.25. Best-effort service is based on some fair algorithm for processing the queues that arise during network congestion, when for some time the rate at which packets enter the network exceeds the rate at which they are forwarded. In the simplest case, the queue-processing algorithm treats the packets of all flows as equals and forwards them in order of arrival (First In, First Out, or FIFO). If the queue becomes too large and no longer fits in the buffer, the problem is solved by simply discarding newly arriving packets.
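The FIFO-with-tail-drop behaviour described above can be sketched as follows. This is an illustrative model of the mechanism, not any real device's implementation:

```python
from collections import deque

class FifoQueue:
    """Best-effort FIFO queue with tail drop.

    Packets from all flows are treated as equals and forwarded in arrival
    order; when the buffer is full, newly arriving packets are discarded
    ("tail drop"), which is the simplest congestion response.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0  # count of packets lost to buffer overflow

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            self.dropped += 1  # buffer full: drop the new arrival
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        """Forward the oldest buffered packet, or None if the queue is empty."""
        return self.buffer.popleft() if self.buffer else None
```

Note that tail drop penalizes whichever packet happens to arrive during overflow, regardless of its flow, which is exactly why best-effort service cannot favor delay-sensitive multimedia traffic.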

Obviously, best-effort service provides acceptable quality only when network performance greatly exceeds average demand, that is, when capacity is excessive. In such a network the bandwidth suffices even for peak traffic periods. It is equally obvious that this solution is uneconomical, at least relative to the bandwidth of today's technologies and infrastructures, especially for wide area networks.

However, building networks with excess bandwidth, being the simplest way to ensure the required level of quality of service, is sometimes applied in practice. For example, some TCP / IP network service providers provide a quality assurance service by consistently maintaining a certain level of excess bandwidth on their backbones relative to customer needs.

In the conditions when many mechanisms for maintaining quality of service are just being developed, the use of excess bandwidth for these purposes is often the only possible, albeit temporary, solution.

Option 1

1. Which of these techniques will reduce the network response time when a user works with a database server?

    transfer of the server to the network segment where the majority of clients work

    replacing the server hardware platform with a more productive one

    reducing the intensity of client requests

    reducing the size of the database

2. Which of the following statements are wrong?

    transmission latency is synonymous with network response time

    bandwidth is synonymous with traffic speed

    transmission delay - the reciprocal of the bandwidth

    QoS mechanisms cannot increase network bandwidth

3. Which of the listed characteristics can be attributed to the reliability of a computer network?

    availability or readiness

    reaction time

    data integrity

    consistency (consistency) of data

    transmission delay

    probability of data delivery

Option 2

1. In a network, the data transfer rate was measured from 3 to 5 o'clock and the average speed was determined. The instantaneous speed was measured at 10-second intervals. Finally, the maximum speed was determined. Which statements are correct?

    average speed is always less than maximum

    average speed is always less than instantaneous

    instantaneous speed is always less than maximum

2. Which of the following translations of network characteristic names from English into Russian do you agree with?

    availability - reliability

    fault tolerance - fault tolerance

    reliability - readiness

    security - secrecy

    extensibility - extensibility

    scalability - scalability

3. Which of the statements are correct?

    the network can have high bandwidth, but introduce significant delays in the transmission of each packet

    service "; best effort"; provides acceptable quality of service only if there is excess bandwidth in the network

Option 3

1. Which statements are correct?

    throughput is a constant value for each technology

    network bandwidth is equal to the maximum possible data transfer rate

    bandwidth depends on the amount of traffic transferred

    the network can have different bandwidth values at different sites

2. What property, first of all, must a network have so that the famous Sun Microsystems slogan "The network is the computer" can be applied to it?

    high performance

    high reliability

    high degree of transparency

    excellent scalability

3. Which statements are wrong?

    extensibility and scalability are two names for the same system property

    using QoS, you can increase the network bandwidth

    for computer traffic, the uniformity of data transmission is more important than high network reliability

    all statements are correct

Required literature

1. V.G. Olifer, N.A. Olifer

Computer Networks: Principles, Technologies, Protocols

A study guide for students of higher educational institutions studying in the field of "Informatics and Computer Engineering"

Additional literature

1. V.G. Olifer, N.A. Olifer

Network operating systems

Peter, 2001

2. A.Z. Dodd

The world of telecommunications. Technology and industry overview

Olymp-Business, 2002

About the project

Foreword

Lecture 1. Evolution of computer networks. Part 1. From Charles Babbage's machine to the first global networks

Two roots of data networks

The appearance of the first computers

Program monitors - the first operating systems

Multiprogramming

Multi-terminal systems - a precursor of networks

First networks - global

The legacy of telephone networks

Lecture 2. Evolution of computer networks. Part 2. From the first local area networks to modern network technologies

Minicomputers - harbingers of local networks

The emergence of standard LAN technologies

The role of personal computers in the evolution of computer networks

New opportunities for users of local networks

Evolution of network operating systems

Lecture 3. The main tasks of building networks

Computer communication with peripheral devices

Communication of two computers

Client, redirector and server

The problem of physical data transmission over communication lines

Lecture 4. Problems of connecting several computers

Physical link topology

Host addressing

Lecture 5. Switching and multiplexing

Generalized switching problem

Defining information flows

Defining routes

Notifying the network about the selected route

Forwarding - flow recognition and switching at each transit node

Multiplexing and demultiplexing

Shared media

Lecture 6. Circuit switching and packet switching. Part 1

Different approaches to making connections

Circuit switching

Packet switching

Message switching

Lecture 7. Circuit switching and packet switching. Part 2

Permanent and dynamic switching

Packet-switched network throughput

Ethernet - an example of a standard packet switching technology

Datagram transmission

Virtual circuits in packet-switched networks

Lecture 8. Structuring networks

Reasons for structuring the transport infrastructure of networks

Physical network structuring

Logical network structuring

Lecture 9. Functional roles of computers in a network

Layered network model

Functional roles of computers in a network

Peer-to-peer networks

Dedicated server networks

Network services and the operating system

Lecture 10. Convergence of computer and telecommunication networks

General structure of a telecommunication network

Networks of telecom operators

Corporate networks

Department networks

Campus networks

Enterprise networks

Lecture 11. The OSI model

Layered approach

Decomposition of the networking problem

Protocol. Interface. Protocol stack

The OSI model

General characteristics of the OSI model

Physical layer

Link layer

Network layer

Transport layer

Session layer

Presentation layer

Application layer

Network-dependent and network-independent layers

Lecture 12. Standardization of networks

The concept of an "open system"

Modularity and standardization

Sources of standards

Internet standards

Standard communication protocol stacks
