
Technical means of information processing

Main characteristics of PC modules

Personal computers usually consist of the following basic modules:

  1. system unit
    1. Power Supply
    2. Motherboard
    3. CPU
    4. Memory
  2. information output devices (monitor)
  3. input devices (keyboard, mouse)
  4. storage media

Let's consider these modules in more detail.

System unit (case).

The PC case protects the internal elements of the PC from external influences.

The case typically includes a power supply, cables for connecting the motherboard, and additional fans.

The number of bays is important for the expandability of the system.

Types of enclosures.

For each case type: dimensions (height × width × length, cm), power supply (W), number of 5.25″ and 3.5″ bays, and notes.

  • Slimline: 7×35×45 cm; power supply not specified; bays: 1-2 × 5.25″, 1-2 × 3.5″; limited expansion and upgrade options.
  • Desktop: 20×45×45 cm; 200-250 W; bays: 2-3 × 5.25″, 1-2 × 3.5″; takes up a lot of desk space.
  • Mini tower: 45×20×45 cm; 200-250 W.
  • Midi tower: 50×20×45 cm; 200-250 W; the most common type.
  • Big tower: 63×20×45 cm; 250-350 W.
  • File server: 73×35×55 cm; 350-400 W; the most expensive.

Power Supply.

The power supply generates the different voltages required by the internal devices and the motherboard. The service life of a power supply is 4-7 years, and it can be extended by switching the PC on and off less frequently.

There are three form factors (types) of power supplies and, accordingly, motherboards.

  • AT - connects to the motherboard with two connectors. Used in older PCs. Power is switched on and off with an ordinary mains switch, which is under mains voltage.
  • ATX - a single connector. Switched on by a command from the motherboard. ATX power supplies operate according to the following scheme: at temperatures from 0 to 35 °C the fan rotates at minimum speed and is practically inaudible; when the temperature reaches 50 °C, the fan speed rises to its maximum value and does not drop until the temperature falls.

ATX motherboards are generally not compatible with AT power supplies. It is necessary that the chassis and motherboard are of the same type.

  • BTX - has two mandatory components:
    • A thermal balance module that directs fresh air straight to the processor heatsink.
    • A support module on which the motherboard is installed. The support module is designed to absorb shocks and vibrations and to reduce flexing of the motherboard. Thanks to it, the maximum permissible weight of the processor heatsink was increased from 450 to 900 grams. In addition, the layout of the motherboard and the system unit was changed significantly: the hottest PC components are now positioned in the airflow path, increasing the efficiency of the case coolers.

"-" incompatibility with ATX, despite the mechanical and electrical compatibility of power supplies (400 W, 120 mm fan).

What are the risks of an insufficiently powerful power supply?

In the event of an excessive overload of the power supply, the protection circuit trips and the unit simply will not start. In the worst case, the consequences can vary and be quite serious, for example for hard drives: a drop in the HDD supply voltage is treated as a shutdown signal, and the HDD starts parking its read heads. When the voltage is restored, the disk spins up and starts again.

There may also be obscure software malfunctions. In an emergency, a poor-quality power supply can destroy the motherboard and the video card.

Motherboard

The motherboard (system board) is the central part of any computer. It generally hosts the CPU, the coprocessor, the controllers that provide communication between the central processor and peripheral devices, RAM, cache memory, the BIOS chip (basic input/output system), a backup battery, the quartz clock generator and slots (connectors) for connecting other devices. All these modules are tied together by the system bus, which, as we have already found out, is located on the motherboard.

The overall performance of a motherboard is determined not only by the clock frequency, but also by the amount (bit width) of data processed per unit of time by the central processing unit and by the bit width of the data exchange bus between the various motherboard devices.

The architecture of motherboards is constantly being improved: their functionality grows and their performance increases. It has become standard for a motherboard to have such built-in devices as a dual-channel E-IDE controller for HDDs (hard drives), an FDD (floppy disk) controller, enhanced parallel (LPT) and serial (COM) ports, and a serial infrared port.

A port is a multi-bit input or output of a device.

COM1, COM2 - serial ports that transmit electrical impulses (information) sequentially, one after another (scanner, mouse). They are implemented in hardware as 25-pin and 9-pin connectors on the back panel of the system unit.

LPT - the parallel port has a higher speed, since it transmits 8 electrical impulses simultaneously (used to connect a printer). It is implemented in hardware as a 25-pin connector on the rear panel of the system unit.

USB (universal serial bus) - provides high-speed connection of several peripheral devices to a PC at once (flash drives, webcams, external modems, HDDs, etc.). This port is universal and can replace all other ports.

PS/2 - dedicated ports for the keyboard and mouse.

AGP - accelerated graphics port for connecting a video card.

The performance of various computer components (processor, RAM, and peripheral controllers) can vary significantly.

To reconcile these performance differences, special chips (the chipset) are installed on the motherboard, including a RAM controller (the so-called north bridge) and a peripheral controller (the south bridge).

The North Bridge provides information exchange between the processor and the main memory via the system backbone.

The processor uses internal frequency multiplication, so the processor frequency is several times higher than the system bus frequency. In modern computers, the processor frequency can exceed the system bus frequency by up to 10 times (for example, the processor frequency is 1 GHz, and the bus frequency is 100 MHz).

Logic diagram of the motherboard

A PCI bus (Peripheral Component Interconnect bus) is connected to the north bridge and provides information exchange with peripheral controllers. (The frequency of the controllers is lower than the frequency of the system bus; for example, if the system bus frequency is 100 MHz, the PCI bus frequency is usually a third of that - 33 MHz.) Peripheral controllers (sound card, network card, SCSI controller, internal modem) are installed in the expansion slots of the system board.

A special AGP (Accelerated Graphics Port) bus is used to connect the video card; it is connected to the north bridge and has a frequency several times higher than that of the PCI bus.

CPU

In general, a processor is understood as a device that performs a set of operations on data presented in digital form (binary code).

As applied to computing, the processor means the central processing unit (CPU), which can fetch, decode and execute instructions, as well as transmit and receive information from other devices.

The number of companies that design and manufacture PC processors is small. The best known are Intel, Cyrix, AMD, NexGen and Texas Instruments.

Processor structure and functions:

The processor structure can be represented by the following diagram:

1) Control unit (CU) - controls the entire course of the computational and logical process in the computer. This is the "brain" of the computer, which directs all of its actions. The functions of the control unit are to read the next command, recognize it, and then connect the electronic circuits and devices needed to carry it out.

2) ALU (arithmetic logic unit) - performs the actual processing of data in binary code. The ALU can perform only a limited set of elementary operations:

  • arithmetic operations (+, -, *, /);
  • logical operations (comparison, condition checking);
  • transfer operations (from one area of RAM to another).

3) Clock generator - sets the rhythm of all operations in the processor by emitting pulses at regular intervals (clock cycles). It synchronizes the operation of the PC devices.

A clock cycle is the time interval between the starts of two consecutive pulses of the clock generator.

4) Coprocessor - significantly speeds up the computer's work with floating-point numbers (i.e. real numbers, such as 1.233 × 10^-5). When working with texts, the coprocessor is not used.

5) A modern processor is so fast that information from RAM does not reach it in time and the processor sits idle. To prevent this, a special memory - the cache - is built into the processor.

Cache memory is ultra-fast memory designed for storing intermediate results of calculations. Its capacity is 128-1024 KB.

In addition to the specified element base, the processor contains special registers that are directly involved in command processing.

6) Registers - the processor's own memory: a set of special storage cells.

Registers serve two purposes:

  • short-term storage of a number or command;
  • performing some operations on them.

The most important processor registers are:

  1. the command counter (program counter) - serves for the automatic fetching of program commands from consecutive memory cells; it stores the address of the command being executed;
  2. the command and state register - stores the code of the current command.

The execution of a command by the processor is broken down into the following stages:

  1. the command is fetched from the memory cell whose address is stored in the command counter (the contents of the command counter are then incremented);
  2. the command is transferred from RAM to the control unit (into the command register);
  3. the control unit decodes the address field of the command;
  4. on signals from the control unit, the operands are fetched from memory into the ALU (into the operand registers);
  5. the control unit decodes the operation code and signals the ALU to perform the operation, which is carried out in the adder;
  6. the result of the operation either remains in the processor or is returned to RAM.
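As a rough illustration of the sequence above, here is a minimal sketch of the fetch-decode-execute cycle in Python; the instruction set, opcodes and register names are invented for the example and do not correspond to any real processor.

```python
# Minimal sketch of the fetch-decode-execute cycle for a hypothetical toy CPU.
memory = [
    ("LOAD", 0, 10),   # R0 <- memory[10]
    ("LOAD", 1, 11),   # R1 <- memory[11]
    ("ADD",  0, 1),    # R0 <- R0 + R1 (done by the "ALU")
    ("STORE", 0, 12),  # memory[12] <- R0
    ("HALT", 0, 0),
    None, None, None, None, None,
    7, 35, 0,          # data cells 10, 11, 12
]

registers = [0, 0]
pc = 0                 # command (program) counter

while True:
    instruction = memory[pc]      # 1. fetch the command addressed by the counter
    pc += 1                       #    the counter is incremented
    opcode, a, b = instruction    # 2-3. decode the opcode and address fields
    if opcode == "LOAD":          # 4. fetch the operand from memory
        registers[a] = memory[b]
    elif opcode == "ADD":         # 5. the ALU performs the operation
        registers[a] = registers[a] + registers[b]
    elif opcode == "STORE":       # 6. the result is written back to RAM
        memory[b] = registers[a]
    elif opcode == "HALT":
        break

print(memory[12])  # 42
```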

Memory

Classification of memory elements.

File system

The order in which files are stored on a disk is determined by the file system used, which in essence means the file allocation table, stored in two copies in the system area of the disk.

At the physical level, a file on a disk is a sequence of bytes. However, since the smallest addressable unit on a disk is the sector, a file could be understood as a certain sequence of sectors. In reality, a file is a linked sequence of clusters.

A cluster is a group of several contiguous disk sectors (from 1 to several dozen).

Traditionally it is believed that a cluster and a sector are one and the same, but they are different things. The cluster size depends on the disk capacity: the larger the disk, the larger the cluster. The cluster size can vary from 512 bytes to 64 KB.

Clusters are needed to reduce the size of the file allocation table.

If the file allocation table is corrupted in any way, then, even though the data is still on the disk, it becomes inaccessible. For this reason, two copies of the table are stored on the disk.

Clusters reduce the size of the table, but this creates another problem: wasted disk space.

When writing a file to disk, an integer number of clusters will always be occupied.

For example, a file is 1792 bytes in size and the cluster size is 512 bytes. To save the file we need 3 full clusters plus 256 bytes of a fourth one. Thus, 256 bytes in the fourth cluster remain unused (1792 = 3 × 512 + 256; 4 × 512 = 2048).

The remaining bytes of the fourth cluster cannot be used by other files. It is believed that, on average, half a cluster of space is wasted per file, which can lead to the loss of up to 15% of disk space; that is, of 2 GB of occupied space, about 300 MB is lost. As files are deleted, this space becomes available again.
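The arithmetic of the example above can be checked with a short sketch; the file and cluster sizes are just the illustrative values from the text.

```python
import math

def allocated_and_wasted(file_size, cluster_size):
    """Return (bytes actually allocated, bytes wasted) for one file."""
    clusters = math.ceil(file_size / cluster_size)  # an integer number of clusters is always occupied
    allocated = clusters * cluster_size
    return allocated, allocated - file_size

# The example from the text: a 1792-byte file with 512-byte clusters
print(allocated_and_wasted(1792, 512))   # (2048, 256) -> 4 clusters, 256 bytes wasted
```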

The file allocation table was first used in the MS-DOS operating system and it was called the FAT (File Allocation Table) table.

There are several types of file allocation tables (FAT).

General structure of FAT


For example, if a file begins at cluster 34, then cluster 34 stores the address of cluster 35, cluster 35 the address of cluster 36, cluster 36 the address of cluster 53, and so on. The last cluster of the file (cluster 55) stores the end-of-file marker.
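This chain can be modelled with an ordinary dictionary standing in for the FAT; the intermediate links between clusters 36 and 55 are assumed for the sketch, since the text abbreviates them.

```python
EOF = "EOF"  # end-of-file marker

# A toy file allocation table: the entry for cluster N holds the number of the next cluster
fat = {34: 35, 35: 36, 36: 53, 53: 54, 54: 55, 55: EOF}

def file_clusters(fat, start):
    """Follow the FAT chain from the starting cluster to the end-of-file marker."""
    chain, current = [], start
    while current != EOF:
        chain.append(current)
        current = fat[current]
    return chain

print(file_clusters(fat, 34))   # [34, 35, 36, 53, 54, 55]
```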

File system NTFS.

The NTFS file system was modelled on the file systems of the UNIX family of operating systems.

Here, a file element has two parts: the file name and the inode.

The file is written to disk as follows:

The inode contains 13 slots in which the addresses of data blocks located on the disk can be written:

The first 10 slots point directly to data blocks.

The 11th slot points to a single-indirect block holding the addresses of up to 256 data blocks; it is used when the first 10 slots are not enough to record the data-block addresses, i.e. the file is large.

The 12th slot points to a double-indirect block (256 × 256); it is used when the single-indirect block is also not enough.

The 13th slot points to a triple-indirect block (256 × 256 × 256).

Thus, the maximum file size can reach about 16 GB.
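Assuming, as is common for this scheme, 1 KB data blocks and 256 addresses per indirect block, the quoted limit can be checked with a quick calculation.

```python
block_size = 1024          # assumed data block size, bytes
addrs_per_block = 256      # addresses that fit in one indirect block

direct = 10
single = addrs_per_block
double = addrs_per_block ** 2
triple = addrs_per_block ** 3

max_blocks = direct + single + double + triple
print(max_blocks * block_size / 2**30)   # ~16.06 -> about 16 GB
```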

This mechanism provides much greater data safety: whereas in FAT it is enough to corrupt the tables, in NTFS one would have to wander among the blocks for a long time.

NTFS can relocate, and even fragment across the disk, all of its service areas, bypassing any surface defects - except for the first 16 MFT entries. A second copy of the first three records is stored exactly in the middle of the disk.

NTFS is a fault-tolerant system that can bring itself back to a correct state after almost any real failure. Any modern file system is based on the concept of a transaction - an action that is either performed entirely and correctly or not performed at all.

Example 1: data is being written to disk. Suddenly it turns out that writing is impossible at the place where we have just decided to write the next portion of data - there is physical damage to the surface. The behavior of NTFS in this case is quite logical: the entire write transaction is rolled back - the system knows that the write did not happen. The location is marked as bad, the data is written to another location, and a new transaction starts.

Example 2, a more complicated case: data is being written to disk when the power suddenly goes off and the system reboots. At what phase did the write stop, and where is the data? Another system mechanism comes to the rescue - the transaction log, which marks the beginning and end of every transaction. Before writing to disk, the system records its intention in a metafile. On reboot, this file is examined for incomplete transactions that were interrupted by the failure and whose result is unpredictable; all such transactions are cancelled: the place being written to is marked as free again, the indexes and MFT entries are brought back to the state they were in before the failure, and the system as a whole remains stable.
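The "all or nothing" idea of a transaction can be sketched in a few lines of Python; this is only a schematic model with an invented Volume class, not how NTFS itself is implemented.

```python
class Volume:
    """A toy 'disk': a dict of block -> data, changed only through transactions."""
    def __init__(self):
        self.blocks = {}

    def transaction(self, changes):
        journal = []                                  # log the old values first
        try:
            for block, data in changes.items():
                journal.append((block, self.blocks.get(block)))
                if data == "BAD":                     # simulated write failure
                    raise IOError(f"cannot write block {block}")
                self.blocks[block] = data
        except IOError:
            for block, old in reversed(journal):      # roll the transaction back
                if old is None:
                    self.blocks.pop(block, None)
                else:
                    self.blocks[block] = old
            return False
        return True

v = Volume()
print(v.transaction({1: "data", 2: "BAD"}))  # False - the write is rolled back entirely
print(v.blocks)                              # {} - nothing is left on the "disk"
```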

It is important to understand, however, that the NTFS recovery system guarantees the correctness of the file system, not of your data.

In NTFS, each disk is divided into volumes. Each volume contains its own MFT (file table), which can be located anywhere on the disk within the volume.

HDD internal structure

1. The magnetic disc (platter) is a round plate made of aluminum (in rare cases of special glass) whose surface is machined to the highest class of accuracy. A drive can contain from 1 to 4 such platters. To make the platters magnetic, their surface is coated with an alloy based on chromium, cobalt or another ferromagnetic material. This coating has a high hardness. Each side of the platter is numbered.

2. A special electric motor is used to rotate the platters. Its design includes special bearings, which can be either conventional ball bearings or fluid bearings (instead of balls, a special oil is used that absorbs shock loads, which increases the durability of the motor). Fluid bearings have a lower noise level and generate almost no heat during operation.

In addition, some modern hard drives have a motor completely immersed in a sealed container of oil, which helps to efficiently remove heat from the windings.

3. Each platter has a pair of read/write heads. The gap between the heads and the platter surface is 0.1 µm, which is 500 times less than the thickness of a human hair. A magnetic head is a complex structure consisting of dozens of parts. (These parts are so small that they are made by photolithography, patterned with high precision in much the same way as modern microcircuits.) The working surface of the ceramic head body is polished with the same high precision as the platter.

4. The actuator is a flat solenoid coil of copper wire placed between the poles of a permanent magnet and fixed at one end of a lever that rotates on a bearing. At the other end of the lever is a light arm carrying the magnetic heads.

The coil can move in the magnetic field under the action of the current passing through it, simultaneously moving all the heads in the radial direction. To prevent the coil with the heads from swinging from side to side when the drive is off, there is a magnetic latch that holds the heads of a powered-down hard drive in place. When the drive is off, the heads rest near the center of the platters, in the "parking zone", and are pressed against the platter surfaces by light springs. This is the only moment when the heads touch the surface. As soon as the platters begin to rotate, the air flow lifts the heads above the surface, overcoming the force of the springs. The heads "float" and from that moment remain above the platter without touching it at all. Since there is no mechanical contact between the heads and the platters, there is no wear of either.

5. Also inside the head-disk assembly (HDA) is a signal amplifier, placed close to the heads to reduce pickup from external interference. It is connected to the heads with a flexible ribbon cable; the same cable supplies power to the moving coil of the actuator and sometimes to the motor. All these components are connected to the controller board through a small connector.

In the process of formatting disks, it may turn out that there are one or several small areas on the surface of the platters, reading or writing to which is accompanied by errors (the so-called bad sectors, or bad blocks).

Sectors whose reading or writing is accompanied by errors are called bad sectors.

However, the disk is not thrown away or considered spoiled because of this; these sectors are simply marked in a special way and are subsequently ignored. To hide this from the user, the hard drive contains a number of spare tracks with which the drive electronics replace defective areas of the surface "on the fly", making them completely transparent to the operating system.

In addition, not all of the disk area is reserved for recording data. Part of the recording surface is used by the drive for its own needs - the area of service (or, as it is sometimes called, engineering) information.

Structure of an optical disc

According to accepted standards, the surface of the disc is divided into three areas:

1. Input directory (lead-in area) - the ring-shaped area closest to the center of the disc (4 mm wide). Reading of a disc begins exactly from the input directory, which contains the table of contents, recording addresses, the number of titles, the disc volume and the disc name;

2. Data area ;

3. Output directory (lead-out area) - contains the disc end mark.

Optical disc types:

  1. CD-ROM. Information is recorded on a CD-ROM industrially and cannot be rewritten. The most widely used are 5-inch CD-ROMs with a capacity of 670 MB. In their characteristics they are completely identical to ordinary music CDs. The data on the disc is written along a spiral track.
  2. CD-R. The abbreviation CD-R (CD-Recordable) denotes a write-once optical technology that can be used for archiving data, prototyping discs for mass production, small print runs on CD, and recording audio and video. The purpose of a CD-R device is to write data onto CD-R discs, which can then be read in CD-ROM and CD-RW drives.
  3. CD-RW. Old data can be erased and new data written over it. The capacity of CD-RW media is 650 MB, equal to the capacity of CD-ROM and CD-R discs.
  4. DVD-ROM, DVD-R, DVD-RW. Similar to the previously discussed types of optical discs, but with a larger capacity.
  5. The HVD (Holographic Versatile Disc), with a capacity of 1 TB, is under development.

DVD technology allows 4 types of discs:

  • single-sided, single-layer - 4.7 GB
  • single-sided, double-layer - 8.5 GB
  • double-sided, single-layer - 9.4 GB
  • double-sided, double-layer - 17 GB

In double-layer discs an additional semi-transparent layer is used, on which information is also recorded. When reading information from the first layer, located deeper in the disc, the laser passes through the transparent film of the second layer. When reading information from the second layer, the drive controller sends a signal to focus the laser beam on the second layer, and reading is performed from it. In all cases, the diameter of the disc is 120 mm and its thickness is 1.2 mm.

As already mentioned, a double-sided, dual-layer DVD can hold up to 17 GB of information - about 8 hours of high-quality video, 26 hours of music, or, most vividly, a 1.4-kilometre-high stack of paper printed on both sides!
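The "8 hours of video" figure can be sanity-checked against the 17 GB capacity; the resulting bitrate is only an estimate, not a quoted specification.

```python
capacity_bytes = 17 * 10**9        # double-sided, double-layer DVD
hours_of_video = 8

bits_per_second = capacity_bytes * 8 / (hours_of_video * 3600)
print(bits_per_second / 10**6)     # ~4.7 Mbit/s, a plausible bitrate for DVD video
```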

DVD Formats

  1. DVD-R. Can only be single-layer, but double-sided discs are possible. The principle by which a DVD-R is burned is exactly the same as for CD-R: the reflective layer changes its characteristics under the influence of a high-power laser beam. DVD-R is nothing new; technically it is the same as CD-R, only designed for thinner tracks. When DVD-R was created, the closest attention was paid to compatibility with existing DVD-ROM drives. The wavelength of the recording laser is 635 nm, plus copy protection for recordable discs.
  2. DVD+R. The principles on which DVD+R is built are identical to those of DVD-R. The difference between the two lies in the recording format used: for example, DVD+R discs support recording in several sessions. The wavelength of the recording laser is 650 nm, plus a more reflective surface.

There are two main classes of optical discs: CD and DVD.

ZIP drives.

Magneto-optical discs.

They are made of aluminum alloy and enclosed in a plastic sheath. Capacity 25-50 GB.

Reading is carried out by optical method, and writing by magnetic means, like on floppy disks.

The data recording technology is as follows: a laser beam heats up a point on the disk, and an electromagnet changes the magnetic orientation of this point, depending on what needs to be recorded: 0 or 1.

Reading is performed by a lower-power laser beam whose polarization changes when it is reflected from this point.

Externally, magneto-optical media look like a 3.5″ diskette, only slightly thicker.

Flash drives

This technology is quite new and therefore not yet cheap; however, all the prerequisites exist for devices of this class to become less expensive.

The basis of any flash drive is non-volatile memory. The device has no moving parts and is not susceptible to vibration and mechanical shock. Flash memory is not a magnetic medium and is not affected by magnetic fields. Power is consumed only during read/write operations, and the power supplied over USB is quite sufficient.

Flash drive capacities range from approximately 256 MB to several GB (4-5 GB).

In addition to being used for recording, reliable storage and transfer of information, a flash drive can be split into logical drives and made bootable.

Advantages

Technical means of information processing

The means of intensifying information work are the scientific and technological revolution and the use of the latest achievements of science and technology in information work; the scientific organization and management of the information process; and the training and improvement of the specialists who serve the information services of the management system.

The development of a system of measures that expands the opportunities for the most effective use of information is an important condition for success in management. Among these measures, of paramount importance is the careful preparation of the subject of management for the perception and assessment of information, and the development of the ability to assess its social significance and to select from the flow of information the most generally significant items, since this kind of information is invaluable in management.

The collection and processing of social information is unthinkable without the use of modern technical means.

The most important means of obtaining reliable social information is not only the widespread use of technical (computer) means of obtaining social information, but also the formation of a new type of culture - humanitarian and technological.

The most important mechanism of its formation is a change in the style of thinking, which gradually becomes conceptual (humanitarian), strategic and constructive, technological, finding ways and means of solving increasingly complex social problems. The presence in our society of two cultures, "humanitarian" and technocratic, which so far interact weakly, gives rise to many information problems in management.

The world community as a whole, including our country, has entered a new stage in the development of its civilization - the formation of an information society. This process is often called the third socio-technical revolution, the informatization of society.

The informatization of society inevitably affects not only material production and communications, but also social relations, culture, intellectual activity in all its diverse manifestations.

It is quite obvious that the informatization of society leaves its imprint directly on the activities of people working in the field of organization and management. They have incomparably wider opportunities in obtaining, storing, processing, transmitting, and arranging the most diverse in their content and form of presentation of information about various aspects of society.

For example, in the early 1960s the parliament, government and people of Japan faced the question of which way to direct the country's development: along the path of material well-being, or along that of information and intellectual development - the informatization of society and the build-up of information resources and technologies; that is, along the material or the information path?

Since 1964, Japan has followed the second path, preferring the wealth of information and its resources to material wealth. It is from that time that the world history of the informatization of society, information resources and technologies is counted.

The United States of America, with its powerful intelligence-gathering capabilities, adopted the Japanese information-development system in the late 1960s and early 1970s.

In the late 60s of the last century, the USSR also began to address similar problems of informatization. However, for a number of reasons, the public information consciousness of the developed countries did not become the common information property of Soviet society.

At present, all countries of the world are following the path of informational progress. Information has become a non-alternative source of development and well-being of many peoples; information resources and technologies have raised science and technological progress to an unprecedented level compared to what physics, mechanics, chemistry and electrodynamics combined in the past provided.

That is why the International Academy of Informatization attaches great importance to promoting the ideas of informatization and to educational and outreach work in the field of information, information security, and information resources and technologies.

It is difficult to find a sphere or area of human activity where information does not play an important role, because it provides self-organization not only of man, but also of the entire animal and plant world.

Therefore, a new branch of scientific knowledge has appeared - informationology: the science of fundamental research into all processes and phenomena of the micro- and macrocosm of the universe, generalizing the practical and theoretical material of physicochemical, astrophysical, nuclear, biological, space and other studies from a single informational point of view.

The successful use of computer technology is possible only under the following conditions:

· economy, that is, achieving a greater effect compared to the use of conventional computing means;

· accurate determination of the suitability of the primary information for processing and analysis by computer;

· compliance of the control system with the possibilities of successful computer use;

· compliance of the documentation with the principles of computing;

· availability of the relevant specialists.

Since computer technology operates automatically, according to programs prepared in advance by people, it performs all the actual work of processing and analyzing information without direct human participation; as a result, the speed of these machines is not limited by human physiological capabilities but is determined by the speed of the physical elements of which they are composed. The storage devices of modern machines make it possible to memorize and store practically unlimited amounts of information.

Thus, computer technology as a tool for processing and analyzing information opens up fundamentally new opportunities for the prompt processing of large volumes of information, which allow one to sufficiently deeply and fully reveal the tendencies and patterns of the development of society and thereby successfully solve managerial problems.

For example, in the 1980s and 1990s, the rapid development of microelectronics brought down the cost and size of computers to such an extent that they could be used in every workplace.

This led to a further change in the technical equipment of the management apparatus. The driving force in the process of converting it into an electronic one is the microcomputer. Transforming information according to a complex program, it embodies a primitive form of "intelligence": it changes the content of the information entering it, and not merely its form or location, as the "information technology" of the previous period did.

The invention of the microprocessor reduced the cost of electronic computing to such an extent that electronic "intelligence" came to be applied in the widest range of areas and installed, at moderate cost, exactly where it was needed, rather than at significant cost in a remote center.

Now the developing technical equipment of the management apparatus can include:

· office technical units equipped with microcomputers located at the workplaces of almost every manager;

· programs that ensure interaction between man and machine, include the necessary means of information processing and reflect the accumulated experience of the management apparatus;

· communication networks connecting the office equipment units with each other, with central processors and with external sources of information;

· shared devices, such as electronic files, printing and scanning devices, available to all office units via communication lines.

Changes in the content, organization and management technique under the influence of information technology and automated offices are taking place in the following directions.

First, the organization and technique of information support for managers are changing radically. Of particular importance is the massive introduction of mini- and microcomputers and personal computers as components of information systems connected to networks of data banks. At the same time, the work of collecting, processing and disseminating information is carried out through human-machine interfaces that do not require special training.

The technique of storing and processing information is also changing significantly; incomplete information, duplication, and information intended for other levels of management are no longer tolerated.

Secondly, certain managerial functions are being automated. The number of efficiently functioning automated systems covering production, economic activity, and organizational and technological processes has grown.

An increasing part of the work in drawing up plans is transferred to the computer. At the same time, the quality of plans developed using microcomputers at a lower control level is significantly improved. In addition, the plans for the individual control subsystems are clearly coordinated.

Control systems have improved, including those that make it possible to detect deviations from the planned level and ensure that the probable causes of such deviations are found.

Third, the means of communication have changed significantly, going far beyond the exchange of messages through a network of microprocessors.

Of particular importance is the telecommunications system, which makes it possible to hold absentee meetings, conferences between distant points, and quick receipt of information by performers. Accordingly, the methods and techniques of communication between managers and subordinates and with higher authorities are changing.

A complex of technical means of information processing is a set of autonomous devices for collecting, accumulating, transmitting, processing and presenting information, together with office equipment and means of management, maintenance, and so on.

A number of requirements are imposed on the complex of technical means:

· providing solutions to problems at minimal cost, with the required accuracy and reliability;

· technical compatibility of the devices and the possibility of aggregating them;

· high reliability;

· minimal acquisition costs.

Domestic and foreign industry produces a wide range of technical means of information processing, differing in the element base, design, use of various storage media, operational characteristics, etc.

Technical means of information processing are divided into two large groups: the main and the auxiliary processing means.

The main means are tools for automated information processing.

It is known that controlling certain processes requires certain management information characterizing the states and parameters of technological processes, and the quantitative, cost and labor indicators of production, supply, sales, financial activity, etc.

The main means of technical processing include: means for registering and collecting information, means for receiving and transmitting data, means for preparing data, input devices, means for processing information, and means for displaying information. All of these are discussed in detail below.

Obtaining and registering the initial information is one of the most labor-intensive processes. Therefore, devices for mechanized and automated measurement, collection and recording of data are widely used. The range of these means is very extensive: electronic scales, various counters, display boards, flow meters, cash registers, banknote-counting machines, ATMs and much more. It also includes various production recorders designed for recording information about business operations on computer media.

· Means of receiving and transmitting information.

The transfer of information is understood as the process of sending data (messages) from one device to another. An interacting set of objects formed by data transmission and processing devices is called a network. Networks combine devices designed to transmit and receive information; they provide the exchange of information between the place of its origin and the place of its processing. The structure of the data transmission means and methods is determined by the location of the information sources and data processing facilities, the volumes of data and the time allowed for transmission, the types of communication lines and other factors. Data transmission facilities are represented by subscriber stations, transmission equipment, modems and multiplexers.


Data preparation tools are represented by devices for preparing information on machine media and for transferring information from documents to media, including computer devices. These devices can also sort and correct the data.

Input devices serve to read data from machine media and to enter information into computer systems.

Information processing facilities play a crucial role in the complex of technical means of information processing. The processing tools are computers, which in turn can be divided into four classes: micro, small (mini), large, and supercomputers.

Microcomputers are of two types: universal and specialized. Both universal and specialized microcomputers can be either multi-user - powerful computers equipped with several terminals and operating in time-sharing mode (servers) - or single-user (workstations), which specialize in performing one type of work.

Small computers work in time-sharing and multitasking modes. Their strong points are reliability and ease of use.

Large computers (mainframes) are characterized by large memory capacity, high fault tolerance and performance, high reliability and data protection, and the ability to connect a large number of users.

Supercomputers are powerful multiprocessor computers with speeds of up to 40 billion operations per second.

A server is a computer dedicated to processing requests from all the stations on a network, providing these stations with access to system resources and distributing those resources.

A universal server is called an application server.

Powerful servers can be classed with small and large (mainframe) computers. At present the leaders are Marshall servers; there are also Cray servers (64 processors).

Information display facilities are used to output the results of calculations, reference data and programs to computer media, print, screen, and so on. Output devices include monitors, printers, and plotters.

A monitor is a device designed to display information entered by the user from the keyboard or output by the computer.

A printer is a device for outputting text and graphic information onto paper.

A plotter is a device for outputting large-format drawings and diagrams onto paper.

Auxiliary means are equipment that ensures the operation of the main means, as well as equipment that makes managerial work easier and more comfortable.

The auxiliary means of information processing include office equipment and maintenance and repair tools. Office equipment is represented by a very wide range of means, from office supplies, to means of delivery, reproduction, storage, search and destruction of basic data, means of administrative production communication, and so on, which makes the work of a manager convenient and comfortable.

The technological process of data processing in information systems is carried out using:

    technical means for collecting and recording data;

    telecommunication facilities;

    data storage, search and retrieval systems;

    means of computing data processing;

    technical means of office equipment.

In modern information systems, technical means of data processing are used in an integrated manner, on the basis of a technical and economic calculation of the feasibility of their use, taking into account the price / quality ratio and the reliability of the operation of technical means.

Information Technology

Information technology can be defined as a set of methods - techniques and algorithms for data processing - and tools - software and hardware for data processing.

Information technology can be roughly divided into categories:

    Basic information technologies are universal technological operations of data processing that are, as a rule, independent of the content of the information being processed - for example, launching programs, copying, deleting, moving and searching for files, etc. They are based on widely used software and hardware for data processing.

    Special information technologies are complexes of interrelated basic information technologies designed to perform special operations, taking into account the content and/or form of the data presentation.

Information technology is a necessary basis for the creation of information systems.

Information Systems

An information system (IS) is a communication system for collecting, transferring, processing information about an object, supplying workers of various ranks with information to implement the management function.

The users of the IS are organizational units of management - structural divisions, management personnel, performers. The content basis of the IS is made up of functional components - models, methods and algorithms for the formation of control information. The functional structure of an IS is a set of functional components: subsystems, task complexes, information processing procedures that determine the sequence and conditions for their implementation.

The introduction of information systems is carried out in order to increase the efficiency of production and economic activities of the facility by not only processing and storing routine information, automating office work, but also by fundamentally new management methods. These methods are based on modeling the actions of the organization's specialists when making decisions (artificial intelligence methods, expert systems, etc.), using modern telecommunications (e-mail, teleconferences), global and local computer networks, etc.

IS classification is carried out according to the following criteria:

    the nature of information processing;

    scale and integration of IS components;

    information technology architecture of IS.

According to the nature of information processing and the complexity of the processing algorithms, ISs are customarily divided into two large classes:

    IS for operational data processing. These are traditional ISs for accounting and processing of primary data of large volume using strictly regulated algorithms, a fixed structure of the database (DB), etc.

    Decision-support ISs. They are focused on the analytical processing of large amounts of information, the integration of heterogeneous data sources, and the use of methods and tools for analytical processing.

At present, the following main information technology architectures have emerged:

Centralized processing assumes the unification of the user interface, applications and databases on one computer.

In the "file server" architecture, many network users are supplied with files by the main computer of the network, called the file server. These can be individual user files, database files and application programs. All data processing is performed on the users' computers; such a computer is called a workstation (WS). The user-interface software and applications are installed on the workstation; data can be entered either from the workstation's own input devices or transmitted over the network from the file server. The file server can also be used for centralized storage of individual users' files, sent to it over the network from the workstations. The "file server" architecture is mainly used in local area networks.

In the "client-server" architecture, the software is oriented not only towards the collective use of resources but also towards processing them at the place where the resource is located, at the request of users. Client-server software systems consist of two parts: server software and client (user) software. These systems work as follows: client programs run on the user's computer and send requests to a server program running on a shared computer. The main data processing is performed by the powerful server, and only the results of the query are sent back to the user's computer. For example, database servers such as Microsoft SQL Server and Oracle, working with distributed databases, are used in powerful DBMSs. Database servers are designed to work with large amounts of data (tens of gigabytes or more) and large numbers of users, while providing high performance, reliability and security. The client-server architecture is, in a sense, the principal one in applications for global computer networks.
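A minimal sketch of the client-server idea using Python's standard socket module; the port number and the "protocol" (a comma-separated list of numbers that the server sums) are invented for the example, and a real database server is of course far more complex.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary values chosen for this example

# The server socket is created first, so the client below can always connect.
srv = socket.create_server((HOST, PORT))

def server():
    """Server side: receives a request, processes it, returns only the result."""
    conn, _ = srv.accept()
    with conn:
        numbers = conn.recv(1024).decode().split(",")    # the incoming "query"
        result = sum(int(n) for n in numbers)            # processing is done on the server
        conn.sendall(str(result).encode())               # only the result is sent back

threading.Thread(target=server, daemon=True).start()

# Client side: sends a request and receives the processed result.
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"1,2,3,4")
    print(client.recv(1024).decode())   # 10
srv.close()
```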

Lecture number 3

The main questions of the lecture:

1. Technical means of informatics.

2. The concept of the principles of computer operation.

3. The main components of a personal computer.

Technical means of informatics

The computer is the main technical means of information processing. Computers are classified according to a number of characteristics, in particular: purpose, principle of operation, methods of organizing the computing process, size and computing power, functionality, ability to execute programs in parallel, etc.

By purpose, computers can be divided into three groups:

· universal (general purpose) - designed to solve a variety of engineering and technical problems: economic, mathematical, informational and other problems that differ in the complexity of algorithms and a large amount of processed data. The characteristic features of these computers are high performance, a variety of forms of processed data (binary, decimal, symbolic), a variety of operations performed (arithmetic, logical, special), a large capacity of random access memory, a well-developed organization of information input-output;

· problem-oriented - designed to solve a narrower range of tasks, usually associated with technological objects, registration, accumulation and processing of small amounts of data (control computer systems);

· specialized - to solve a narrow range of tasks in order to reduce the complexity and cost of these computers, while maintaining high performance and reliability (programmable microprocessors for special purposes, controllers that perform the functions of controlling technical devices).

By principle of operation (the criterion for dividing computers here is the form in which the information they work with is presented):

· Analog computers (AVMs) - continuous-action computers; they work with information presented in continuous form, i.e. as a continuous series of values of some physical quantity (most often electrical voltage); the voltage value then serves as an analogue of the value of the measured variable. For example, entering 19.42 at a scale of 0.1 is equivalent to applying a voltage of 1.942 V to the input;

· Digital computers (DCMs) - discrete-action computers; they work with information presented in discrete, or rather digital, form - as several distinct voltage levels equivalent to the number of units in the represented value of the variable;

· Hybrid computers (GVM) - computers of combined action, work with information presented in both digital and analog form.

AVMs are simple and easy to use; programming problems for them is not laborious, and the speed of solution can be varied at the operator's request (it is higher than that of a digital computer), but the accuracy of the solutions is very low (relative error of 2-5%). AVMs are used to solve mathematical problems containing differential equations and not involving complex logic. Digital computers are the most widespread; it is they that are meant when people speak of computers. GVMs are advisable for controlling complex high-speed technical complexes.

By generations the following groups can be distinguished:

1st generation. In 1946, the idea of using binary arithmetic (John von Neumann, A. Burks) and the principle of the stored program were published; both are actively used in 1st-generation computers. These computers were distinguished by large dimensions, high energy consumption, low speed, low reliability, and programming in machine codes. The tasks solved were mainly computational, involving the complex calculations required for weather forecasting, nuclear-power problems, aircraft control and other strategic tasks.

2nd generation. In 1948, Bell Telephone Laboratories announced the creation of the first transistor. Compared with computers of the previous generation, all technical characteristics improved. Algorithmic languages came into use for programming, and the first attempts at automatic programming were made.

3rd generation. A feature of 3rd-generation computers is the use of integrated circuits in their design and of operating systems to control their operation. Multiprogramming, memory management and input-output device management appeared. Failure recovery was handled by the operating system. From the mid-60s to the mid-70s, databases containing different types of information on all kinds of branches of knowledge became an important kind of information service. For the first time, decision-support information technology appeared - a completely new way of human-computer interaction.

4th generation. The main features of this generation are the presence of storage devices, booting of the computer from ROM using a bootstrap system, a variety of architectures, powerful operating systems, and the integration of computers into networks. Since the mid-70s, with the creation of national and global data transmission networks, the leading type of information service has become interactive searching of databases remote from the user.

5th generation. Computers with dozens of processors operating in parallel, making it possible to build efficient knowledge-processing systems; computers based on super-complex microprocessors with a parallel-vector structure that simultaneously execute dozens of sequential program commands.

6th generation. Optoelectronic computers with massive parallelism and neural structure - with a network of a large number (tens of thousands) of simple microprocessors that simulate the structure of neural biological systems.

Computer classification by size and functionality.

Large computers. Historically, large computers were the first to appear, the element base of which went from vacuum tubes to integrated circuits with an ultra-high degree of integration. However, their productivity turned out to be insufficient for modeling ecological systems, genetic engineering problems, managing complex defense complexes, etc.

Mainframes are often referred to abroad as MAINFRAME, and rumors of their death are greatly exaggerated.

Typically, they have:

· performance of at least 10 MIPS (millions of instructions per second);

· main memory from 64 to 10,000 MB;

· external memory of at least 50 GB;

· multi-user mode of operation.

The main areas of use are the solution of scientific and technical problems, work with large databases, and the management of computer networks and their resources as servers.

Small computers. Small (mini) computers are reliable, inexpensive and easy-to-use; they have somewhat lower capabilities than large computers.

Super-mini computers have:

· main memory capacity of 4-512 MB;

· disk storage capacity of 2-100 GB;

· 16-512 supported users.

Mini-computers are focused on using as control computer systems, in simple modeling systems, in automated control systems, for controlling technological processes.

Supercomputer. These are powerful multiprocessor computers with a speed of hundreds of millions - tens of billions of operations per second.

It is impossible to achieve such performance on a single microprocessor with modern technology because of the finite propagation speed of electromagnetic waves (300,000 km/s): the time it takes a signal to travel a distance of several millimeters becomes comparable with the execution time of one operation. Therefore, supercomputers are built as highly parallel multiprocessor computing systems.
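The propagation-speed argument can be made concrete with a rough estimate; the figure of 3 × 10^10 operations per second is taken as an example value from the "tens of billions" mentioned above.

```python
c = 3e8                  # propagation speed of electromagnetic waves, m/s (300,000 km/s)
ops_per_second = 3e10    # assumed rate: tens of billions of operations per second

time_per_operation = 1 / ops_per_second          # ~3.3e-11 s
distance_per_operation = c * time_per_operation  # how far a signal travels in that time

print(distance_per_operation * 1000, "mm")       # ~10 mm, comparable to the size of a chip
```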

Currently, there are several thousand supercomputers in the world, ranging from simple office Cray EL to powerful Cray 3, SX-X from NEC, VP2000 from Fujitsu (Japan), VPP 500 from Siemens (Germany).

Microcomputer or personal computer. The PC must have characteristics that meet the requirements of general availability and versatility:

Low cost

Autonomy of operation

· Flexibility of architecture, which makes it possible to adapt in the field of education, science, management, in everyday life;

· Friendliness of the operating system;

· High reliability (more than 5000 hours of MTBF).

Many of them can run autonomously on battery power, but they can also be connected to the mains.

Special computers. Special computers are focused on solving special computational or control problems. An electronic microcalculator can also be considered a special computer. The program executed by the processor is stored in ROM or RAM; since the machine usually solves a single problem, only the data change. Storing the program in ROM is convenient and increases the reliability and speed of the computer. This approach is often used in on-board computers, in controlling the operating mode of a camera or movie camera, and in sports simulators.

The concept of the principles of computer operation

The architecture of modern personal computers is based on the trunk-modular principle. The modular principle allows the consumer to complete the required computer configuration and, if necessary, upgrade it. The modular organization of a computer is based on the trunk (bus) principle of information exchange between devices.

The backbone includes three multi-bit buses:

Data bus,

Address bus

· And control bus.

The buses are multi-wire lines.

Data bus. On this bus, data is transferred between various devices. For example, data read from main memory can be transferred to a processor for processing, and then the received data can be sent back to main memory for storage. Thus, data on the data bus can be transferred from device to device in any direction.

The bit width of the data bus is determined by the bit width of the processor, i.e. the number of bits that the processor processes in one clock cycle. The bit capacity of processors has constantly increased with the development of computer technology.

Address bus. The processor selects the device or memory cell to which data is sent, or from which it is read, via the data bus. Each device or RAM cell has its own address. The address is transmitted over the address bus, and signals travel along it in one direction only - from the processor to the main memory and devices (a unidirectional bus). The width of the address bus determines the address space of the processor, i.e. the number of memory cells that can have unique addresses. The width of the address bus has constantly increased and in modern personal computers is 32 bits.
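The relationship between address-bus width and address space can be illustrated with a one-line calculation for the 32-bit bus mentioned in the text.

```python
# Width of the address bus in modern PCs, as stated in the text
width = 32
cells = 2 ** width
print(cells)            # 4294967296 uniquely addressable cells
print(cells / 2**30)    # 4.0 -> a 4 GB address space if each cell is one byte
```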

Control bus. Signals are transmitted over the control bus that determine the nature of the information exchange along the backbone. The control signals specify which operation - reading or writing information from memory - is to be performed, synchronize the exchange of information between devices, and so on.

The overwhelming majority of computers are based on the following general principles formulated in 1945 by an American scientist John von Neumann.

1. The principle of program control. The program consists of a set of commands that are executed by the processor automatically in a specific sequence. The program is fetched from memory with the help of the command counter. This processor register successively increases the address of the next command stored in it by the length of a command, and since the program commands are located in memory one after another, a chain of commands is thereby fetched from consecutively located memory cells. If, after executing a command, it is necessary to go not to the next command but to some other one, conditional or unconditional jump commands are used, which enter into the command counter the number of the memory cell containing the next command. Fetching of commands from memory stops after the "stop" command is reached and executed. Thus, the processor executes the program automatically, without human intervention.

2. The principle of memory homogeneity. Programs and data are stored in the same memory, so the computer cannot tell what is stored in a given memory location - a number, a piece of text or a command. The same actions can be performed on commands as on data, and this opens up a number of possibilities. For example, a program can be processed in the course of its own execution, which makes it possible to specify within the program the rules for obtaining some of its parts (this is how the execution of loops and subroutines is organized). Moreover, the commands of one program can be obtained as the results of executing another program. Translation methods - converting the text of a program written in a high-level programming language into the language of a specific machine - are based on this principle.

3. The addressing principle. Structurally, main memory consists of numbered cells, and any cell is available to the processor at any time. It is therefore possible to give names to areas of memory so that the values stored in them can later be accessed or changed during program execution using those names. Computers built on these principles are of the von Neumann type. There are, however, computers that differ fundamentally from von Neumann machines: for them, for example, the principle of programmed control may not hold, i.e. they can operate without a program counter pointing to the currently executing instruction, and they do not need to assign a name to a variable in order to refer to it in memory. Such computers are called non-von Neumann computers.
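
To make the first two principles concrete, here is a minimal Python sketch of a fetch-execute cycle. The instruction set (LOAD, ADD, STORE, JUMP, HALT) is invented for illustration and does not correspond to any real processor; what matters is that the program counter steps through instructions, a jump overwrites it, a "halt" stops fetching, and the program and its data share one memory.

    # Minimal von Neumann-style machine: program and data share one memory,
    # the program counter (pc) selects the next instruction, HALT stops execution.
    memory = [
        ("LOAD", 8),     # 0: acc <- memory[8]
        ("ADD", 9),      # 1: acc <- acc + memory[9]
        ("STORE", 10),   # 2: memory[10] <- acc
        ("JUMP", 5),     # 3: pc <- 5 (unconditional jump)
        ("HALT", None),  # 4: skipped because of the jump
        ("HALT", None),  # 5: stop fetching instructions
        None, None,      # 6-7: unused cells
        2, 3, 0,         # 8-10: data stored in the same memory as the program
    ]

    pc, acc = 0, 0
    while True:
        op, arg = memory[pc]   # fetch the instruction the program counter points to
        pc += 1                # advance by one instruction (fixed length here)
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMP":
            pc = arg           # jump: overwrite the program counter
        elif op == "HALT":
            break

    print("result stored in cell 10:", memory[10])   # 2 + 3 = 5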

The main components of a personal computer

The computer has a modular structure, which includes:

1. System unit

A metal case with a power supply. System units are currently produced in the ATX standard: roughly 21x42x40 cm in size, a 230 W power supply, 210-240 V operating voltage, 3x5.25" and 2x3.5" bays, and automatic shutdown on completion of work. The case also houses a speaker.

1.1. System board (motherboard), which carries the various devices included in the system unit. The motherboard is designed as a modular construction kit, which allows the user to easily replace failed or outdated components of the system unit. The following are mounted on the motherboard:

a) CPU (Central Processing Unit) - a large integrated circuit on a chip. It performs logical and arithmetic operations and controls the operation of the computer. A processor is characterized by its manufacturer and its clock frequency. The best-known manufacturers are Intel and AMD, and processors carry their own names: Athlon, Pentium 4, Celeron, etc. The clock frequency determines the speed of the processor and is measured in hertz (1/s); a 2.2 GHz Pentium 4, for example, has a clock frequency of 2,200,000,000 Hz (more than 2 billion clock cycles per second). Another characteristic of a processor is its cache memory - memory even faster than RAM, which stores the data the CPU uses most frequently. The cache acts as a buffer between the processor and RAM; it is completely transparent and cannot be detected programmatically. The cache reduces the total number of CPU clock cycles spent accessing RAM.

b) Coprocessor (FPU - Floating Point Unit). Built into the CPU. Performs floating point arithmetic.

c) Controllers - microcircuits responsible for the operation of various computer devices (keyboard, HDD, FDD, mouse, etc.). This also includes the ROM (Read Only Memory) chip in which the ROM BIOS is stored.

d) Slots (buses) - connectors (ISA, PCI, SCSI, AGP, etc.) for various devices (RAM, video card, etc.).

A bus is in fact a set of wires (lines) that connect the various components of a computer, supplying power to them and carrying data between them. Buses in use include ISA (frequency 8 MHz, width 16 bits, data transfer rate 16 MB/s), among others; a short bandwidth calculation is sketched after this list.

e) Random access memory (RAM - Random Access Memory; module and chip types include SIMM, DIMM (Dual Inline Memory Module), DRAM (Dynamic RAM), SDRAM (Synchronous DRAM) and RDRAM) - chips used for the short-term storage of instructions, intermediate results of calculations performed by the CPU, and other data. Executable programs are also kept there to increase performance. RAM is high-speed memory with an access (regeneration) time of about 7·10⁻⁹ s, capacities of up to 1 GB, and a 3.3 V supply voltage.

f) Video card (video accelerator) - a device that extends and accelerates work with graphics. The video card has its own video memory (16, 32, 64 or 128 MB) for storing graphic information and a graphics processor (GPU - Graphics Processing Unit), which takes over the calculations involved in 3D graphics and video. A typical GPU of this class runs at 350 MHz and contains about 60 million transistors; it supports a resolution of 2048x1536 at 60 Hz with 32-bit colour (a frame-buffer calculation is sketched after this list) and a fill rate of 286 million pixels per second. The card may have TV and video inputs. Supported effects include transparency and translucency, shading (for realistic lighting), specular highlights, coloured lighting (light sources of different colours), blurring, fog, reflections, reflections in curved mirrors, surface rippling, image distortion caused by water and warm air, noise-based distortion effects, simulated clouds in the sky, and so on.

g) Sound card - a device that extends the sound capabilities of a computer. Sounds are generated from samples of different timbres stored in memory (32 MB); up to 1,024 voices can be played simultaneously, and various effects are supported. The card may have a line input/output, headphone output, microphone input, joystick jack, answering-machine input, and analog and digital CD-audio inputs.

h) Network card - a device responsible for connecting the computer to a network for the exchange of information.
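
As referenced in items d) and f) above, the following back-of-the-envelope Python sketch reproduces two figures quoted in this list: the ISA transfer rate from the bus clock and width, and the memory required for one frame at the resolution and colour depth given for the video card. The formulas (width x frequency for bus bandwidth, width x height x bytes per pixel for a frame) are standard; only the numbers from the text are used.

    # Peak ISA bus bandwidth: bus width (bits) * clock frequency (Hz) / 8 bits per byte.
    isa_width_bits = 16
    isa_clock_hz = 8_000_000
    isa_bandwidth = isa_width_bits * isa_clock_hz / 8
    print(f"ISA: {isa_bandwidth / 1_000_000:.0f} MB/s")        # -> 16 MB/s

    # Memory for one frame at 2048x1536 with 32-bit colour.
    width_px, height_px, bits_per_pixel = 2048, 1536, 32
    frame_bytes = width_px * height_px * bits_per_pixel // 8
    print(f"One frame: {frame_bytes / 2**20:.1f} MiB")         # -> 12.0 MiB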

In addition to the motherboard, the system unit contains:

1.2. Hard disk drive (hard drive, HDD - Hard Disk Drive) - a hermetically sealed case containing rotating magnetic disks and magnetic heads. It serves for the long-term storage of information as files (programs, texts, graphics, photographs, music, video). Typical figures: capacity 75 GB, buffer size 1-2 MB, data transfer rate 66.6 MB/s, maximum spindle speed 10,000-15,000 rpm. One IBM HDD, for example, has a capacity of 120 GB and a spindle speed of 7,200 rpm.

1.3. Floppy disk drive (floppy drive, FDD - Floppy Disk Drive) - a device for writing information to and reading it from floppy disks, which can be carried from computer to computer. Floppy disk capacities: 1.2 MB (5.25" disks; 1" = 2.54 cm) and 1.44 MB (3.5" disks). 1.44 MB is equivalent to roughly 620 pages of text.

1.4. CD-ROM (Compact Disc Read Only Memory) - a device that only reads information from a CD; the binary information on the CD surface is read by a laser beam. CD capacity is 640 MB, which corresponds to 74 minutes of music or about 150,000 pages of text. Spindle speed up to 8,560 rpm, buffer size 128 KB, maximum data transfer rate 33.3 MB/s. Skips and breaks during video playback are caused by under-filling or overflowing of the buffer used for intermediate storage of the transferred data. The drive has a volume control and a headphone output (for listening to audio CDs).

1.5. CD-R (Compact Disc Recordable) - a device used to read CDs and to record information onto them once. Recording is based on changing the reflective properties of the CD substrate material under the action of a laser beam.

1.6. DVD-ROM discs (digital video discs) have a much larger capacity (up to 17 GB), because information can be written on both sides, in two layers per side, and the tracks themselves are thinner.

The first generation of DVD-ROM drives provided a read speed of approximately 1.3 MB/s. Current 5-speed DVD-ROM drives reach read speeds of up to 6.8 MB/s (a rough read-time estimate is sketched after this list of drives).

There are also DVD-R discs (R - recordable), which are golden in colour. Dedicated DVD-R drives have a laser powerful enough to change, during recording, the reflectivity of areas on the surface of the disc being written. Information can be recorded on such discs only once.

1.7. There are also CD-RW and DVD-RW discs (RW - ReWritable), which have a "platinum" tint. Dedicated CD-RW and DVD-RW drives likewise change the reflectivity of areas of the disc surface during recording, but information can be written to such discs many times: before rewriting, the previously recorded information is "erased" by heating areas of the disc surface with the laser.
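
As a rough illustration of the figures in item 1.6, the Python sketch below estimates how long reading a full 17 GB DVD would take at the 6.8 MB/s quoted for a 5-speed drive, and how many 640 MB CDs that capacity corresponds to. Decimal megabytes and gigabytes are assumed, which is the usual convention for optical-disc capacities.

    # Rough estimates from the figures quoted in the text.
    dvd_capacity = 17 * 10**9          # 17 GB: two layers on both sides
    cd_capacity = 640 * 10**6          # 640 MB
    read_rate = 6.8 * 10**6            # 6.8 MB/s, 5-speed DVD-ROM drive

    minutes = dvd_capacity / read_rate / 60
    print(f"Reading 17 GB at 6.8 MB/s takes about {minutes:.0f} minutes")          # ~42
    print(f"17 GB holds roughly as much as {dvd_capacity / cd_capacity:.0f} CDs")  # ~27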

In addition to the system unit, the computer includes the following input and output devices.

2. Monitor (display) - a device for outputting graphic information. There are cathode-ray tube (CRT) and liquid-crystal (LCD) monitors. Diagonal sizes: 14", 15", 17", 19", 21", 24". Dot (pixel) size: 0.2-0.3 mm. Refresh rates: 77 Hz at 1920x1200 pixels, 85 Hz at 1280x1024, 160 Hz at 800x600. The number of colours is determined by the number of bits per pixel and can be 256 (2^8, where 8 is the number of bits), 65,536 (2^16, High Color mode) or 16,777,216 (2^24, True Color mode; 2^32 is also possible) - see the calculation sketched at the end of this list of devices. Monitors use the RGB colour system: each colour is obtained by mixing the three primary colours red (Red), green (Green) and blue (Blue).

3. Keyboard - a device for entering commands and character information (108 keys). It connects to the computer via a serial interface (the PS/2 port on most systems) or USB.

4. Mouse - a pointing device for entering commands. A three-button mouse with a scroll wheel is the standard.

5. Printer - a device for outputting information onto paper, film or another surface. It connects to the parallel interface (LPT port) or to USB (Universal Serial Bus), a universal serial bus that is replacing the outdated COM and LPT ports. The main types of printer are:

a) Dot matrix. The image is formed by needles striking an ink ribbon against the paper.

b) Inkjet. The image is formed by droplets of ink ejected from nozzles (up to 256 of them); the droplets travel at up to 40 m/s.

c) Laser. The image is transferred to the paper from a special drum that is electrostatically charged by a laser beam and to which particles of ink (toner) are attracted.

6. Scanner - a device for inputting images into a computer. There are handheld, flatbed and drum scanners.

7. Modem (MOdulator-DEModulator) - a device that allows computers to exchange information over analog or digital channels. Modems differ in their maximum data transfer rate (2,400, 9,600, 14,400, 19,200, 28,800, 33,600 or 56,000 bits per second) and in the communication protocols they support. There are internal and external modems.
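
To make the colour-depth figures for the monitor (item 2) and the modem speeds (item 7) concrete, here is a final short Python sketch. It derives the number of representable colours from the bits per pixel and estimates how long 1.44 MB of data (one floppy disk's worth) would take to transfer over a 56,000 bit/s modem; protocol overhead and compression are ignored, and 1.44 MB is taken as 1.44·10^6 bytes for simplicity.

    # Colours representable with a given number of bits per pixel.
    for bits in (8, 16, 24):
        print(f"{bits} bits per pixel -> {2**bits:,} colours")
    # 8 -> 256, 16 -> 65,536 (High Color), 24 -> 16,777,216 (True Color)

    # Time to transfer 1.44 MB over a 56,000 bit/s modem (no overhead, no compression).
    floppy_bits = 1.44 * 10**6 * 8
    seconds = floppy_bits / 56_000
    print(f"1.44 MB over 56,000 bit/s: about {seconds / 60:.1f} minutes")   # ~3.4 min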