
Lecture: Ways of implementing application software environments. Architecture, purpose, and functions of operating systems. Tasks and exercises

While many of the architectural features of the OS concern only system programmers, the concept of multiple application (operating) environments relates directly to the needs of end users: the ability of the operating system to run applications written for other operating systems. This property of the operating system is called compatibility.

Application compatibility can exist at the binary level or at the source level. Applications are usually stored in the OS as executable files containing binary images of code and data. Binary compatibility is achieved if an executable program can be taken and run in the environment of another OS.

Source compatibility requires an appropriate compiler in the software of the computer on which you intend to run the application, as well as library and system call compatibility. In this case, it is necessary to recompile the source code of the application into a new executable module.

Source compatibility is important primarily for application developers, who have the source code at their disposal. For end users, only binary compatibility is of practical importance, since only then can they use the same product on different operating systems and on different machines.

The kind of possible compatibility depends on many factors. The most important of these is the processor architecture. If the processor uses the same set of instructions (possibly with additions, as in the case of the IBM PC: standard set + multimedia + graphics + streaming) and the same address range, then binary compatibility can be achieved quite simply. To do this, the following conditions must be met:

  • The API used by the application must be supported by the given OS;
  • the internal structure of the executable file of the application must correspond to the structure of the executable files of the given OS.

If the processors have different architectures, then, in addition to the conditions listed above, it is necessary to organize emulation of the binary code. For example, emulation of Intel processor instructions on the Motorola 680x0 processor of a Macintosh computer is widely used. A software emulator sequentially fetches each binary instruction of the Intel processor and executes an equivalent subroutine written in Motorola processor instructions. Since the Motorola processor does not have exactly the same registers, flags, internal ALU, and so on as Intel processors, it must also simulate (emulate) all of these elements using its own registers or memory.
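To make the emulation scheme concrete, here is a minimal sketch in C of the fetch-decode-dispatch loop such an emulator runs. The guest register structure, the two opcodes, and their encodings are invented purely for illustration and do not correspond to real Intel or Motorola instruction formats.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical guest (Intel-like) register file kept in host memory. */
typedef struct {
    uint32_t eax;          /* one emulated general-purpose register */
    uint32_t eip;          /* emulated instruction pointer          */
    uint8_t  zero_flag;    /* one emulated flag bit                 */
} GuestCpu;

/* Each guest opcode is handled by a host subroutine written for the host CPU. */
static void emulate_add_imm8(GuestCpu *cpu, const uint8_t *code) {
    cpu->eax += code[cpu->eip + 1];          /* imaginary "ADD EAX, imm8" */
    cpu->zero_flag = (cpu->eax == 0);        /* emulate the flag update   */
    cpu->eip += 2;
}

static void emulate_jz_rel8(GuestCpu *cpu, const uint8_t *code) {
    int8_t rel = (int8_t)code[cpu->eip + 1]; /* imaginary "JZ rel8" */
    cpu->eip += 2;
    if (cpu->zero_flag)
        cpu->eip += (uint32_t)rel;           /* relative jump (modular add) */
}

/* The core loop: fetch one guest instruction, run its host equivalent.
 * Every illustrative instruction here is 2 bytes long. */
void run_guest(GuestCpu *cpu, const uint8_t *code, size_t len) {
    while (cpu->eip + 1 < len) {
        switch (code[cpu->eip]) {
        case 0x04: emulate_add_imm8(cpu, code); break;
        case 0x74: emulate_jz_rel8(cpu, code);  break;
        default:   return;                      /* unknown opcode: stop */
        }
    }
}
```

Every guest instruction costs a dispatch plus an entire host subroutine, which is exactly why this per-instruction approach is slow.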

This is simple but very slow work, since a single Intel instruction executes much faster than the sequence of Motorola instructions that emulates it. The way out in such cases is to use so-called application software environments, or operating environments. One of the components of such an environment is the set of API functions that the OS provides to its applications. To reduce the time spent executing foreign programs, application environments imitate calls to library functions.

The effectiveness of this approach comes from the fact that most of today's programs run under graphical user interfaces (GUIs) such as Windows, Mac OS, or Motif on UNIX, and applications spend 60-80% of their time executing GUI functions and other OS library calls. It is this property of applications that allows application environments to compensate for the large amount of time spent emulating a program instruction by instruction. A carefully designed application software environment contains libraries that mimic the GUI libraries but are written in "native" code. In this way, a significant acceleration is achieved for programs that use the API of another operating system. This approach is also called translation, to distinguish it from the slower process of emulating one instruction at a time.

For example, for a Windows program running on a Macintosh, performance can be very low while Intel processor instructions are being interpreted. But when a GUI function is called to open a window and the like, the OS module that implements the Windows application environment can intercept the call and redirect it to a window-opening routine recompiled for the Motorola 680x0 processor. As a result, in such sections of the code the program's speed can reach (and possibly surpass) its speed on its own processor.
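A minimal sketch of this interception idea, under stated assumptions: ForeignCreateWindow stands for a "foreign" (Windows-style) API entry point published by the application environment, and native_open_window for a routine compiled for the host processor. None of these names belong to a real API.

```c
#include <stdlib.h>

/* Illustration only: the application environment publishes a function under
 * the foreign API name and forwards it to a native, host-compiled routine. */

typedef struct { int x, y, width, height; } NativeWindow;

/* Native implementation, compiled for the host processor (the fast path). */
static NativeWindow *native_open_window(int w, int h)
{
    NativeWindow *win = malloc(sizeof *win);
    if (win) { win->x = 0; win->y = 0; win->width = w; win->height = h; }
    return win;
}

/* Entry point exposed under the foreign API name. Instead of emulating the
 * foreign GUI library instruction by instruction, the environment intercepts
 * the call and redirects it to the native routine. */
void *ForeignCreateWindow(int width, int height)
{
    /* ...translate foreign calling conventions and parameters here... */
    return native_open_window(width, height);
}
```

Only the thin redirection layer pays an emulation-style cost; the bulk of the work runs as native code.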

For a program written for one OS to be executed on another OS, it is not enough just to ensure API compatibility. The concepts behind different operating systems can conflict with each other. For example, in one OS an application may be allowed to control I/O devices, in another - these actions are the prerogative of the OS.

Each OS has its own resource protection mechanisms, its own error and exception handling algorithms, a specific processor structure and memory management scheme, its own file access semantics and graphical user interface. To ensure compatibility, it is necessary to organize conflict-free coexistence within one OS of several methods of managing computer resources.

There are various options for building multiple application environments, differing in both architectural features and functionality that provide varying degrees of application portability. One of the most obvious options for implementing multiple application environments is based on a standard OS layered structure.

Another way to build multiple application environments is based on the microkernel approach. Here it is very important to separate the basic OS mechanisms, common to all application environments, from the high-level functions specific to each application environment that solve its strategic tasks. In accordance with the microkernel architecture, all OS functions are implemented by the microkernel and user-mode servers. It is important that each application environment is designed as a separate user-mode server and does not include the basic mechanisms.

Applications using the API make system calls to the appropriate application environment through the microkernel. The application environment processes the request, executes it (perhaps using basic microkernel functions for help), and sends the result back to the application. In the course of executing a request, the application environment must, in turn, access the basic OS mechanisms implemented by the microkernel and other OS servers.
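A minimal sketch of this request/reply pattern, under stated assumptions: the message layout and the ipc_send/ipc_receive primitives are placeholders invented for illustration, standing in for whatever message-passing calls a real microkernel provides.

```c
/* Client side of a "foreign" system call routed through the microkernel. */

typedef struct {
    int  api_id;       /* which foreign API function is requested          */
    long args[4];      /* marshalled arguments                             */
    long result;       /* filled in by the application-environment server  */
} EnvRequest;

/* Placeholder IPC primitives: a real microkernel would copy the message to
 * the server and block the caller until the reply arrives. */
static int ipc_send(int server_port, EnvRequest *msg)    { (void)server_port; (void)msg; return 0; }
static int ipc_receive(int server_port, EnvRequest *msg) { (void)server_port; (void)msg; return 0; }

long call_foreign_api(int server_port, int api_id, long a0, long a1)
{
    EnvRequest req = { .api_id = api_id, .args = { a0, a1, 0, 0 }, .result = 0 };
    ipc_send(server_port, &req);      /* request travels through the microkernel */
    ipc_receive(server_port, &req);   /* wait for the server's reply             */
    return req.result;                /* result produced by the application environment */
}
```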

All the advantages and disadvantages of the microkernel architecture are inherent in this approach to building multiple application environments, in particular:

  • it is very easy to add and remove application environments, which is a consequence of the good extensibility of microkernel operating systems;
  • if one of the application environments fails, the rest remain operational, which contributes to the reliability and stability of the system as a whole;
  • the low performance of microkernel operating systems affects the speed of the application environments, and hence the speed of applications.

In conclusion, it should be noted that creating several application environments within one OS for executing applications of different OSs is an approach that allows a single version of a program to be kept and transferred between different operating systems. Multiple application environments provide binary compatibility of a given OS with applications written for other OSs.

1.9. Virtual machines as a modern approach to the implementation of multiple application environments

The concept of a "virtual machine monitor" (VMM) originated in the late 1960s as a software abstraction layer that divided the hardware platform into multiple virtual machines. Each of these virtual machines (VMs) was so similar to the underlying physical machine that existing software could be run on it unchanged. At that time, general computing tasks were performed on expensive mainframes (such as the IBM/360), and users appreciated the VMM's ability to distribute scarce resources across multiple applications.

In the 1980s and 1990s, the cost of computer equipment fell significantly and effective multitasking operating systems appeared, which reduced the value of the VMM in the eyes of users. Mainframes gave way to minicomputers and then PCs, and the need for a VMM disappeared. As a result, the hardware support for their efficient implementation simply vanished from computer architectures. By the end of the 1980s, both in academia and in industry, VMMs were perceived as nothing more than a historical curiosity.

Today the VMM is again in the spotlight. Intel, AMD, Sun Microsystems, and IBM are creating virtualization strategies, and virtual-machine-based approaches are being developed in academia to address mobility, security, and manageability issues. What happened between the decline of the VMM and its revival?

In the 1990s, researchers at Stanford University began exploring the use of VMs to overcome the limitations of hardware and operating systems. Problems arose with Massively Parallel Processing (MPP) computers, which were difficult to program and could not run existing operating systems. The researchers found that virtual machines could make this awkward architecture look similar enough to existing platforms to take advantage of off-the-shelf operating systems. From this project came the people and ideas that formed the foundation of VMware (www.vmware.com), the first supplier of VMMs for mainstream computers.

Ironically, the advancement of modern operating systems and the falling cost of hardware gave rise to the very problems that researchers had hoped to solve with the VMM. Cheap equipment contributed to the rapid spread of computers, but they were often underutilized and required additional space and effort to maintain. And the growth of OS functionality made systems unstable and vulnerable.

To reduce the impact of system crashes and protect against hacks, system administrators again turned to a single-task computing model (one application on one machine). This resulted in additional costs due to increased hardware requirements. Moving applications from different physical machines to VMs and consolidating those VMs on a few physical platforms made it possible to increase equipment utilization, reduce management costs, and reduce floor space. Thus the VMM's ability to multiplex hardware - this time in the name of server consolidation and utility computing - has brought it back to life.

Today the VMM is not so much a tool for organizing multitasking, as it was once conceived, as a solution to the problems of security, mobility, and reliability. In many ways, the VMM gives the creators of operating systems the ability to develop functionality that is impossible in today's complex operating systems. Features such as migration and security are much more convenient to implement at the VMM level, which makes it possible to deploy innovative operating system solutions while maintaining backward compatibility with previous advances.

Virtualization is an evolving technology. In general terms, virtualization decouples software from the underlying hardware infrastructure; in effect, it breaks the link between a specific set of programs and a specific computer. The virtual machine monitor separates software from hardware and forms an intermediate layer between the software running in the virtual machines and the hardware. This layer allows the VMM to fully control how hardware resources are used by the guest operating systems (GuestOS) that run in the VMs.

The VMM creates a unified view of the underlying hardware so that physical machines from different vendors with different I/O subsystems look the same and the VMs run on whatever hardware is available. Without worrying about individual machines with their tight interconnections between hardware and software, administrators can view hardware as simply a pool of resources to deliver any service on demand.

Thanks to full encapsulation, the VMM can map a VM to any available hardware resources and even move it from one physical machine to another. The task of balancing the load on a group of machines becomes trivial, and reliable ways to deal with equipment failures and system growth emerge. If a failed computer needs to be shut down or a new one brought into operation, the VMM is able to redistribute the virtual machines accordingly. A virtual machine is easy to replicate, allowing administrators to quickly deliver new services as needed.

Encapsulation also means that the administrator can suspend or resume a VM at any time, as well as save the current state of the virtual machine or return it to a previous state. With universal undo capabilities, it is easy to deal with accidents and configuration errors. Encapsulation is at the heart of a generic mobility model because a suspended VM can be copied over the network, stored, and transported on removable media.

The VMM acts as an intermediary for all interactions between the VMs and the underlying hardware, keeping multiple virtual machines running on a single hardware platform and providing reliable isolation. The VMM makes it possible to consolidate a group of VMs with low resource requirements onto a single computer, reducing hardware costs and the need for floor space.

Complete isolation is also important for reliability and security. Applications that used to run on the same machine can now be distributed across different VMs. If one of them causes the OS to crash as a result of an error, the other applications are isolated from it and continue to work. If one of the applications is threatened by an external attack, the attack is localized within the "compromised" VM. Thus, the VMM is a tool for restructuring a system to increase its resilience and security, without the additional space and administrative effort required when running applications on separate physical machines.

The VMM must associate the hardware interface with the VM while retaining full control over the underlying machine and the procedures for interacting with its hardware. There are different methods to achieve this goal, based on certain technical trade-offs. When looking for such compromises, the main requirements for VMM are taken into account: compatibility, performance and simplicity. Compatibility is important because the main advantage of a VMM is the ability to run legacy applications. Performance determines the amount of virtualization overhead - programs on a VM must run at the same speed as on a real machine. Simplicity is needed because a VMM failure will cause all VMs running on the computer to fail. In particular, secure isolation requires that the VMM be free from bugs that attackers can use to destroy the system.

Instead of the complex rewriting of guest operating system code on the fly, some changes can be made to the guest operating system itself, altering a few of the more "awkward" parts of its kernel. This approach is called paravirtualization. Clearly, in this case only the OS author can adapt the kernel; Microsoft, for example, has shown no desire to adapt the popular Windows 2000 kernel to the realities of specific virtual machines.

In paravirtualization, the VMM designer redefines the virtual machine interface, replacing the unvirtualizable subset of the original instruction set with more convenient and efficient equivalents. Note that although the OS must be ported to run on these VMs, most common applications can run unchanged.

The biggest drawback of paravirtualization is incompatibility. Any operating system designed to run under a paravirtualizing VMM must be ported to that architecture, which requires negotiating cooperation with OS vendors. In addition, legacy operating systems cannot be used, and existing machines cannot be easily replaced with virtual ones.

To achieve high performance and compatibility when virtualizing the x86 architecture, VMware developed a new virtualization technique that combines traditional direct execution with fast, on-the-fly binary translation. In most modern operating systems, the processor modes used to execute ordinary application programs are easily virtualized and can therefore be handled through direct execution. Privileged modes that are unsuitable for virtualization can be executed by the binary translator, which fixes up the "awkward" x86 instructions. The result is a highly efficient virtual machine that fully matches the hardware and maintains full software compatibility.

The converted code is very similar to the results of paravirtualization. Regular commands are executed unchanged, and commands that need special processing (such as POPF and commands to read code segment registers) are replaced by the translator with sequences of commands that are similar to those required for execution on a paravirtualized virtual machine. However, there is an important difference: instead of modifying the source code of the operating system or applications, the binary translator modifies the code the first time it is executed.

Although translating binary code involves some additional overhead, it is negligible under normal workloads. The translator processes only part of the code, and once the translation cache fills, the speed of program execution becomes comparable to that of direct execution.
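The caching idea can be sketched as follows; the data structure and the translate_block placeholder are illustrative and are not VMware's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*HostBlock)(void);     /* a block of guest code rewritten as host code */

#define TC_SIZE 1024
static struct { uint64_t guest_pc; HostBlock host; } tcache[TC_SIZE];

/* Placeholder for the slow path: the binary translator would rewrite the
 * guest basic block starting at guest_pc and return runnable host code. */
static HostBlock translate_block(uint64_t guest_pc) { (void)guest_pc; return NULL; }

/* Translate a block only the first time it is executed; reuse it afterwards. */
HostBlock fetch_translated(uint64_t guest_pc)
{
    unsigned slot = (unsigned)(guest_pc % TC_SIZE);
    if (tcache[slot].guest_pc != guest_pc || tcache[slot].host == NULL) {
        tcache[slot].guest_pc = guest_pc;
        tcache[slot].host = translate_block(guest_pc);   /* one-time cost */
    }
    return tcache[slot].host;   /* later executions approach direct-execution speed */
}
```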

Creating a full-fledged application environment that is fully compatible with the environment of another OS is a rather difficult task, closely related to the structure of the OS. There are various options for building multiple application environments, differing both in architectural features and in the functionality that provides varying degrees of application portability.

In many versions of UNIX, the application environment translator is implemented as a regular application. In operating systems built using the microkernel concept, such as Windows NT or Workplace OS, application environments run as user-mode servers. And in OS/2, with its simpler architecture, the tools for organizing application environments are built deep into the OS. One of the most obvious options for implementing multiple application environments is based on the standard layered OS structure.

Fig. 3.13. Application software environments that translate system calls

Unfortunately, the behavior of almost every function that makes up the API of one OS tends to differ significantly from the behavior of the corresponding function of another.

In another implementation of multiple application environments, the OS has multiple peer APIs. In the example shown in Fig. 3.14, the OS supports applications written for OS1, OS2, and OS3. To do this, the application programming interfaces of all these operating systems are located directly in the kernel space of the system: API OS1, API OS2, and API OS3. In this variant, the API-level functions call lower-level OS functions that must support all three generally incompatible application environments.

Different operating systems manage system time differently, use different time-of-day formats, share processor time based on their own algorithms, and so on. The functions of each API are implemented by the kernel with the specifics of the corresponding OS in mind, even when they have a similar purpose. For example, as mentioned, the process creation function works differently for a UNIX application and an OS/2 application. Similarly, when a process terminates, the kernel also needs to determine which OS the process belongs to. If the process was created at the request of a UNIX application, then upon its termination the kernel should send a signal to the parent process, as is done in UNIX. And upon termination of an OS/2 process, the kernel should note that the process ID cannot be reused by another OS/2 process. In order for the kernel to choose the right implementation of a system call, each process must pass a set of identifying characteristics to the kernel.
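A minimal sketch of this bookkeeping, with invented types and helper names: the kernel records which API a process was created under and dispatches termination handling on that tag.

```c
/* Illustrative only; real kernels keep richer per-process descriptors. */

enum api_origin { API_UNIX, API_OS2 };

struct process {
    int pid;
    enum api_origin origin;     /* identifying characteristic passed to the kernel */
    struct process *parent;
};

#define SIGCHLD_NUM 17          /* illustrative signal number for "child terminated" */

static void signal_parent(struct process *parent, int sig) { (void)parent; (void)sig; /* UNIX-style notification */ }
static void retire_pid(int pid)                            { (void)pid;  /* OS/2-style: never reuse this pid */ }

void kernel_exit_process(struct process *p)
{
    switch (p->origin) {
    case API_UNIX: signal_parent(p->parent, SIGCHLD_NUM); break;
    case API_OS2:  retire_pid(p->pid);                    break;
    }
}
```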

Fig. 3.14. Implementing interoperability based on multiple peer APIs

Another way to build multiple application environments is based on the microkernel approach. Here it is very important to separate the basic OS mechanisms, common to all application environments, from the high-level functions specific to each application environment that solve its strategic tasks.

According to the microkernel architecture, all OS functions are implemented by the microkernel and user-mode servers. It is important that each application environment is designed as a separate user-mode server and does not include basic mechanisms (Figure 3.15). Applications using the API make system calls to the appropriate application environment through the microkernel. The application environment processes the request, executes it (perhaps using basic microkernel functions for help), and sends the result back to the application. In the course of executing a request, the application environment must, in turn, access the basic OS mechanisms implemented by the microkernel and other OS servers.

This approach to the construction of multiple application environments has all the advantages and disadvantages of a microkernel architecture, in particular:

§ it is very easy to add and exclude application environments, which is a consequence of the good extensibility of microkernel operating systems;

§ reliability and stability are expressed in the fact that if one of the application environments fails, all the others remain operational;

§ Low performance of microkernel OS affects the speed of application environments, and hence the speed of application execution.

Fig. 3.15. Microkernel approach to implementing multiple application environments

The creation of several application environments within one OS for executing applications of different OS is a way that allows you to have a single version of the program and transfer it between operating systems. Multiple application environments provide binary compatibility of a given OS with applications written for other OSs. As a result, users have greater freedom of choice of OS and easier access to quality software.

Conclusions:

§ The simplest way to structure an OS is to divide all OS components into modules that perform the main OS functions (the kernel) and modules that perform auxiliary OS functions. Auxiliary OS modules are designed either as applications (utilities and system processing programs) or as libraries of procedures. Auxiliary modules are loaded into RAM only while they perform their functions, that is, they are transient. Kernel modules stay in RAM permanently, that is, they are resident.

§ If there is hardware support for modes with different levels of authority, the stability of the OS can be increased by executing kernel functions in the privileged mode, and auxiliary OS modules and applications in the user mode. This makes it possible to protect codes and data of OS and applications from unauthorized access. The OS can act as an arbiter in application disputes over resources.

§ The kernel, being a structural element of the OS, in turn, can be logically decomposed into the following layers (starting from the very bottom):

§ machine-dependent components of the OS;

§ basic mechanisms of the kernel;

§ resource managers;

§ system call interface.

§ In a multilayer system, each layer serves the layer above it, performing for it a certain set of functions that form an interlayer interface. Based on the functions of the layer below, the next layer up in the hierarchy builds its own functions - more complex and more powerful, which in turn become primitives for creating the even more powerful functions of the layer above. The multilayer organization of the OS greatly simplifies the development and modernization of the system.

§ To accomplish its tasks, any OS interacts with the computer hardware, namely: support for privileged mode and address translation, facilities for switching processes and protecting memory areas, the interrupt system, and the system timer. This makes the OS machine-dependent, tied to a specific hardware platform.

§ OS portability can be achieved by observing the following rules. First, most of the code must be written in a language that has translators for all the computers to which the system is to be ported. Second, the amount of machine-dependent code that interacts directly with the hardware should be minimized as much as possible. Third, the hardware-dependent code must be reliably localized in a few modules.
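As an illustration of the third rule, here is a sketch of how hardware-dependent code can be hidden behind a narrow interface; the hal_* names and the single x86 implementation file are assumptions made for the example, not a real kernel's API.

```c
/* hal.h - the only hardware-dependent interface the rest of the kernel sees. */
void hal_enable_interrupts(void);
void hal_set_timer(unsigned ticks);

/* hal_x86.c - one small, clearly localized module per platform; porting the
 * OS to new hardware means rewriting only files like this one. */
void hal_enable_interrupts(void)
{
    __asm__ volatile ("sti");      /* x86-specific instruction */
}

void hal_set_timer(unsigned ticks)
{
    (void)ticks;                   /* would program the platform timer here */
}
```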

§ Microkernel architecture is an alternative to the classical way of building an OS, in accordance with which all the main OS functions that make up a multilayer kernel are performed in a privileged mode. In microkernel operating systems, only a very small portion of the operating system, called a microkernel, remains running in privileged mode. All other high-level kernel functions are designed as user-mode applications.

§ Microkernel operating systems meet most of the requirements for modern operating systems, offering portability, extensibility, and reliability, and creating a good foundation for supporting distributed applications. These advantages come at the cost of reduced performance, which is the main disadvantage of the microkernel architecture.

§ Application software environment - a set of OS tools designed to organize the execution of applications using a specific system of machine instructions, a specific type of API and a specific format of the executable program. Each OS creates at least one application programming environment. The problem lies in ensuring the compatibility of several software environments within the same OS. When building multiple application environments, various architectural solutions, concepts of binary code emulation, and API translations are used.

Tasks and exercises

1. Which of the following terms are synonymous?

§ privileged mode;

§ protected mode;

§ supervisor mode;

§ user mode;

§ real mode;

§ kernel mode.

2. Is it possible, analyzing the binary code of a program, to conclude that it cannot be executed in user mode?

3. What are the differences between a processor in privileged and user modes?

4. Ideally, a microkernel OS architecture requires only those OS components that cannot be run in user mode to be placed in the microkernel. What makes the developers of operating systems move away from this principle and expand the kernel by transferring functions to it that could be implemented in the form of server processes?

5. What are the steps involved in developing a mobile OS variant for a new hardware platform?

6. Describe how applications interact with an OS that has a microkernel architecture.

7. What are the steps involved in executing a system call in a microkernel OS and an OS with a monolithic kernel?

8. Can a program emulated on a "foreign" processor run faster than on a "native" one?

An alternative to emulation is the multiple application environment, which includes a set of API functions. These functions imitate calls to the library functions of the foreign application environment, but in fact they call their own internal libraries. This technique is called library translation. It is implemented purely in software.

In order for a program written under one OS to work under another, it is necessary to ensure conflict-free interaction between the methods of managing processes in different OS.

Methods of Implementation of Application Software Environments

Depending on the architecture:

1. An application software environment in the form of an application (a layer above the native OS kernel).

User mode of operation, translation of system calls (API calls) into calls to the "native" OS. Corresponds to classic multilayer OS (Unix, Windows).

2. The presence of several peer application environments, each in the form of a separate kernel layer.

Privileged mode of operation. The API calls the functions of the underlying (privileged) OS layer. The system is responsible for recognizing and adapting the call. This requires a lot of resources. A set of identifying characteristics used for recognition is passed to the kernel.

3. Microkernel principle.

Any application environment is designed as a separate user-mode server. Applications using the API make system calls to the appropriate application environment through the microkernel. The application environment processes the request and returns the result through the microkernel. Microkernel functions can be used. Multiple access to other resources is possible (while the microkernel is running).

OS interfaces

The OS interface is a system for application programming. It is regulated by standards (POSIX, ISO).

1. User interface: implemented using special software modules that translate user requests, expressed in a special command language, into requests to the OS.

The collection of such modules is called a command interpreter. It performs lexical and syntactic analysis and either executes the command itself or passes it on to the API.

2. API: intended to provide application programs with OS resources and to implement other functions. The API describes a set of functions and procedures belonging to the kernel and to OS add-ons. The API is used by system programs both within the OS and outside it, and by application programs through a programming environment.

Ultimately, the provision of OS resources rests on software interrupts; how they are implemented depends on the system (vectored or table-based). There are several options for implementing the API: at the OS level (fastest, lowest level), at the system programming level (more abstract, somewhat slower), and at the level of an external library of procedures and functions (a small set).

Linux OS interfaces:

· Software (without intermediaries - the actual execution of system calls);

· Command line (intermediary - the Shell command interpreter, which redirects the call);

· Graphical (intermediaries - Shell + graphical shell).
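A minimal Linux example of the programmatic interface: the same write request issued first through the C library wrapper and then as a direct system call with no intermediary (both write() and syscall() are standard glibc facilities).

```c
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello\n";

    /* Through the C library wrapper (the usual programmatic interface). */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* The same request issued as a direct system call, without intermediaries. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    return 0;
}
```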

File system

The file system is the part of the OS designed to provide users with a convenient interface for working with files and to enable files stored on external media (hard disk + RAM) to be shared by several users and processes.

The file system comprises:

· the collection of all files on all storage media;

· sets of data structures used to manage files, such as file directories, file descriptors, and tables of free and allocated disk space;

· a set of system software tools that implement file operations, in particular: creation, deletion, reading, writing, naming, searching, and others.

One of the file attributes, the file name, is how the user identifies a file. On systems where a file is allowed to have several names, the OS kernel identifies the file by an inode assigned to it. Names are defined differently in different operating systems.
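The structures named above can be sketched in simplified form; the field layout below is illustrative and does not match any particular on-disk format.

```c
#include <stdint.h>

#define NAME_MAX_LEN 28

struct dir_entry {               /* one directory record: name -> inode       */
    char     name[NAME_MAX_LEN];
    uint32_t inode_no;           /* several names may refer to the same inode */
};

struct inode {                   /* per-file descriptor used by the kernel    */
    uint32_t size;               /* file size in bytes                        */
    uint16_t mode;               /* file type and access rights               */
    uint16_t link_count;         /* how many directory entries refer to it    */
    uint32_t blocks[12];         /* numbers of the disk blocks holding data   */
};

/* Allocation table: one bit per disk block, 0 = free, 1 = used. */
static uint8_t block_bitmap[4096];
```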

Consider the structure of an abstract multilingual, open, compiling programming system and the process of developing applications in this environment (Fig. 1.4).

The program in the source language (source module) is prepared with the help of text editors and enters the translator in the form of a text file or a section of the library.

Translation of the source program is a procedure for converting the source module into an intermediate, so-called object form. Translation generally includes preprocessing (preprocessing) and compilation.

Preprocessing is an optional phase, which consists of analyzing the source text, extracting preprocessor directives from it and executing them.

Preprocessor directives are lines marked with special characters (usually %, #, &) that contain abbreviations, symbolic constructs, and the like, which are included in the source program before it is processed by the compiler.

The data for expanding the source text can be standard, user-defined, or contained in the OS system libraries.
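As an illustration, here is a small C fragment whose preprocessor directives are executed before compilation proper; the expansions noted in the comments are what the compiler actually sees.

```c
#include <stdio.h>            /* replaced by the full text of stdio.h          */
#define BUFFER_SIZE 256       /* every later BUFFER_SIZE becomes 256           */
#define SQUARE(x) ((x) * (x)) /* macro expanded textually at each use          */

int main(void)
{
    char buf[BUFFER_SIZE];                        /* becomes: char buf[256];   */
    snprintf(buf, sizeof buf, "%d", SQUARE(7));   /* SQUARE(7) -> ((7) * (7))  */
    puts(buf);                                    /* prints 49                 */
    return 0;
}
```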

Compilation is generally a multi-step process that includes the following phases:

· lexical analysis - checking the lexical composition of the input text and translating compound symbols (operators, brackets, identifiers, etc.) into an intermediate internal form (tables, graphs, stacks, references) convenient for further processing;

· parsing (syntax analysis) - checking the correctness of the constructions used by the programmer in the text;

· semantic analysis - detecting mismatches in the types and structures of variables, functions, and procedures;

· object code generation - the final phase of translation.

Translation (compilation) can be performed in various modes, which are selected using keys, parameters, or options. It may, for example, be required to perform only the parsing phase, and the like.

An object module is a program module that is the result of compiling a source module. It includes machine instructions, dictionaries, service information.

The object module is not yet executable because it contains unresolved references to the subroutines of the translator's library (more generally, of the programming system) that implement input/output functions and the processing of numeric and string variables, as well as to other user programs or to tools of application packages.

Fig. 1.4. An abstract multilingual, open, compiling programming system

A load module is a program module in a form suitable for loading and execution. The load module is built by special software tools - the link editor, task builder, linker, or collector - whose main function is to combine object and load modules into a single load module and then write it to a library or file. The resulting module can later be used to build other programs, and so on, which makes it possible to build up software.

After assembly, the load module is either placed in the user program library or sent directly for execution. Executing the module consists of loading it into RAM, adjusting it to its place in memory, and transferring control to it. The image of the load module in memory is called an absolute module, since all machine instructions there take their final form and receive absolute memory addresses. An absolute module can be formed either in software, by having the loader program process the module's instruction codes, or in hardware, by using indexing and base addressing of the load module's instructions to convert the relative addresses in them to absolute form.
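A toy sketch of the software variant of this step: every relative address recorded in a relocation list gets the load (base) address added to it, turning the load module into an absolute module. The structures are invented for illustration.

```c
#include <stdint.h>
#include <stddef.h>

struct relocation { size_t offset; };   /* where in the image a relative address is stored */

void relocate(uint8_t *image, const struct relocation *rel, size_t count,
              uint32_t base_address)
{
    for (size_t i = 0; i < count; i++) {
        uint32_t *slot = (uint32_t *)(image + rel[i].offset);
        *slot += base_address;           /* relative address -> absolute address */
    }
}
```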

Modern programming systems make it convenient to move from one stage to another. This is achieved through a so-called integrated programming environment, which contains a text editor, compiler, linker, and built-in debugger and, depending on the system or its version, provides the programmer with additional conveniences for writing and debugging programs.

Creation of a full-fledged application environment, fully compatible with the environment of another operating system, is a rather complex task, closely related to the structure of the operating system. There are various options for building multiple application environments, differing both in the features of architectural solutions and in functionality that provide varying degrees of application portability.

In many versions of UNIX OS, the application environment translator is implemented as a regular application. In operating systems built using the microkernel concept, such as Windows NT, application environments run as user-mode servers. And in the OS / 2 operating system, with its simpler architecture, the tools for organizing application environments are built deep into the system.

One of the most obvious options for implementing multiple application environments is based on the standard layered OS structure. In Fig. 2.7, the OS1 operating system supports, in addition to its "native" applications, applications of the OS2 operating system. For this, it contains a special application - an application software environment - that translates the interface of the "foreign" operating system (the OS2 API) into the interface of its "native" operating system (the OS1 API).



Fig. 2.7. Application software environment translating system calls (the figure shows a user-mode layer and a privileged-mode layer)

In another implementation of multiple application environments, the operating system has multiple peer APIs. In the example shown in Fig. 2.8, the operating system supports applications written for OS1, OS2, and OS3. To do this, the application programming interfaces of all these operating systems are located directly in the kernel space of the system: API OS1, API OS2, and API OS3.


Fig. 2.8. Implementing compatibility based on multiple peer APIs (the figure shows a user-mode layer and a privileged-mode layer)

In this variant, the API-level functions call the functions of the underlying OS layer, which must support all three generally incompatible application environments. Different operating systems manage system time differently, use different time-of-day formats, share processor time based on their own algorithms, and so on. The functions of each API are implemented by the kernel with the specifics of the corresponding OS in mind, even when they have a similar purpose.

Another way to build multiple application environments is based on a microkernel approach. At the same time, it is very important to separate the basic operating system mechanisms common to all application environments from high-level functions specific to each of the application environments that solve strategic problems.

According to the microkernel architecture, all OS functions are implemented by the microkernel and user-mode servers. It is important that each application environment is designed as a separate user-mode server and does not include basic mechanisms (Figure 2.9). Applications using the API make system calls to the appropriate application environment through the microkernel. The application environment processes the request, executes it (perhaps using basic microkernel functions for help), and sends the result back to the application. In the course of executing a request, the application environment must, in turn, access the basic OS mechanisms implemented by the microkernel and other OS servers.

Fig. 2.9. Microkernel approach to the implementation of multiple application environments (applications and OS servers run in user mode; the microkernel runs in privileged mode)

This approach to the construction of multiple application environments has all the advantages and disadvantages of a microkernel architecture, in particular:

it is very easy to add and exclude application environments, which is a consequence of the good extensibility of microkernel operating systems;

reliability and stability are expressed in the fact that if one of the application environments fails, all the others remain operational;

low performance of microkernel operating systems affects the speed of application environments, and hence the speed of application execution.

The creation of several application environments within one operating system for executing applications of different OS is a way that allows you to have a single version of the program and move it between operating systems. Multiple application environments provide binary compatibility of a given OS with applications written for other OSs. As a result, users have greater freedom of choice of operating systems and easier access to quality software.

Self-test questions

50. What is the difference between microkernel architecture and traditional OS architecture?

51. Why is a microkernel well suited to support distributed computing?

52. What is meant by the concept of multiple application environments?

53. What is the essence of the library translation method?

Control questions

54. What is the term used in microkernel architecture to refer to resource managers placed in user mode?

56. Why is the microkernel architecture of the OS more extensible than the classic OS?

57. Is microkernel architecture more reliable than traditional architecture?

58. State the reason why the performance of the microkernel architecture is lower than the performance of the traditional OS.

60. What types of compatibility do you know?

61. How can you achieve binary compatibility for processors of different architectures?

62. Specify a method that allows you to improve the performance of your PC when executing a "foreign" executable file.

63. Is one library translation method enough for full application compatibility?