
How to link an image to a page? Linking geographic information to photographs. Literature for self-study.

Two-dimensional aerospace images always depict three-dimensional objects on the earth's surface. Even images of areas that appear almost flat are always distorted, both by the curvature of the earth's surface and by the non-uniform spatial characteristics of the sensors used. The purpose of geometric correction is to represent surface objects adequately, to make different images comparable (multi-temporal, or obtained with different types of equipment), and to transform them into a map projection so that aerospace and cartographic materials can be analyzed together.

In some thematic processing tasks, it is advisable to perform geometric correction after performing image classification. This primarily applies to those cases where the spectral reflective properties of the objects of study are the main characteristic necessary to obtain correct results. If, however, in the process of thematic classification, reliable data from ground surveys or the results of multitemporal observations, including those presented in the form of cartographic materials, are used, then geometric correction should be performed before starting thematic interpretation, and in the most thorough manner. In cases where processing is carried out on a territory with a complex topography, it may be necessary to orthorectify the image using a three-dimensional digital elevation model to accurately match the objects under study with the map.

Geometric correction is also necessary in landscape-indicative interpretation, where geomorphological structural features of landscapes and their relationships play an important role, as well as in all problems associated with the identification of spatially localized objects. Compilation of accurate photoplans and image mosaics also requires preliminary geometric correction.

Georeferencing and geometric correction of aerospace images are in most cases tied to one or another map projection. A map projection is any system designed to represent a sphere or an ellipsoid of revolution (like the Earth) on a plane. There are many projection methods. Since projecting a sphere onto a plane inevitably distorts surface objects, each projection preserves certain properties, such as distances, angles, or areas. Accordingly, projections are classified as equidistant, conformal (equiangular), or equal-area.

Which of these projection types is appropriate is determined by the nature of the measurements to be performed while solving the problem. For example, in an equal-area projection (one that preserves areas), a circle of a given diameter drawn anywhere on the map encloses the same area. This is useful when comparing land uses, determining the density of features on a map, and in many other applications. However, shapes and mutual distances may then be distorted in some parts of the map.

There are various systems of map coordinates for determining the position of a point on a map (or on an image). Each coordinate system generates a grid whose nodes are denoted by a pair of numbers X, Y (on a digital image, the column number and the row number). Each system for projecting data onto a map is associated with a specific system of map coordinates.
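To make the grid-to-map relationship concrete, here is a small sketch (not from the original text) of converting between a raster's row/column grid and map coordinates X, Y using a simple affine geotransform, as GIS packages commonly do. All names and numbers are assumptions for illustration.

```python
# Hypothetical north-up raster: origin at the top-left corner, square pixels.

def pixel_to_map(col, row, origin_x, origin_y, pixel_size):
    """Map coordinates of the center of pixel (col, row)."""
    x = origin_x + (col + 0.5) * pixel_size
    y = origin_y - (row + 0.5) * pixel_size  # Y decreases as row number grows
    return x, y

def map_to_pixel(x, y, origin_x, origin_y, pixel_size):
    """Column and row of the pixel containing map point (x, y)."""
    col = int((x - origin_x) / pixel_size)
    row = int((origin_y - y) / pixel_size)
    return col, row

col, row = map_to_pixel(500120.0, 4649980.0, 500000.0, 4650000.0, 10.0)
print(col, row)                                            # -> 12 2
print(pixel_to_map(col, row, 500000.0, 4650000.0, 10.0))   # -> (500125.0, 4649975.0)
```

Real packages store exactly such an affine mapping (origin plus pixel size) in the file header of a rectified image.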

Aerospace image processing packages support three types of operations involving a coordinate grid; they are described below (see Figs. 24, 25).

Image transformation during geometric correction. Obtaining a transformation matrix from reference points; error estimation. Methods for recalculating pixel values when transforming an image.

Rectification (transformation) is the process of converting data from one grid system to another using polynomials of the nth degree. Since the pixels of the new grid generally do not coincide with the pixels of the original grid, the image must be resampled. Resampling is the interpolation (or extrapolation) of pixel values onto the new coordinate grid.

Image binding (registration). Many applied problems involve analyzing images of the same territory obtained with different equipment or at different shooting times. To compare images pixel by pixel, they must be brought into a single coordinate system and "fitted" to each other. A map coordinate system is not required for this: if none of the images has been transformed into a map projection, they can be analyzed by fitting one to the other in the coordinate system of one of the images.

One common technique in interactive visual interpretation is to increase the resolution, and hence the information content, of multi-zone images by combining them with a panchromatic image of higher spatial resolution. At the first stage, the multi-zone and panchromatic images are registered to each other. The multi-zone image is then stretched to the panchromatic scale, and the brightness is recalculated according to a certain rule. With the simplest, multiplicative rule, the output value of a pixel in the j-th channel is determined by the product

I_j_out = I_j * I_pan,

where I_j is the initial pixel value in channel j and I_pan is the value of the corresponding pixel in the panchromatic channel.

Georeferencing is the process of assigning geographic coordinates to the pixels of an image. Georeferencing only adds geographic coordinate information to the image file; the image grid itself does not change, so an image may be georeferenced but not rectified. When spherical (geodetic) coordinates (latitude, longitude) are assigned to the image pixels, the result is called a digital model, in contrast to a digital map, which always has a particular map projection and a planimetric (geographic) coordinate system. A digital model can be reduced to any digital map by rectification. Rectification always requires prior georeferencing of the image, since any map projection is tied to a particular coordinate system. When registering one image to another, georeferencing is required if one of the images is already georeferenced.

The rectification process includes the following steps:

1) selection of control points (GCP, Ground Control Points);

2) calculation and testing of the transformation matrix;

3) formation of a new image, with coordinate-grid information in the file header and resampling of the pixels.

Control points (GCPs) are reliably identifiable image elements with known coordinates. The most accurate coordinates are those obtained from geodetic reference points or from GPS receivers. In many cases, however, one has to use scanned paper maps or vector layers of electronic maps in formats compatible with the processing package, such as shapefiles from ArcView or coverages from ARC/INFO. When using cartographic materials for rectification, keep in mind that during generalization, i.e. the transition from a larger to a smaller map scale, the size and position of some objects are deliberately distorted in order to preserve the characteristic features of the territory and the most important topographic objects. This applies above all to heavily indented coastlines, river deltas and branches, lakes in arid lands, and so on. The most reliable control points are hydrographic network nodes, road intersections, and other objects of fairly simple shape. The map scale should be comparable to the pixel size of the image (the error in depicting linear objects on a paper map is about 0.4 mm).
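A back-of-the-envelope check (my arithmetic, not from the source) of why map scale matters: the ~0.4 mm drafting error of a linear object on a paper map translates into a ground error of 0.4 mm times the scale denominator.

```python
# Ground error implied by a fixed drafting error on a paper map.

def ground_error_m(map_error_mm, scale_denominator):
    return map_error_mm / 1000.0 * scale_denominator  # mm -> m, then scale up

for denom in (25_000, 100_000, 200_000):
    print(f"1:{denom}: about {ground_error_m(0.4, denom):.0f} m on the ground")
```

So a 1:200 000 map already carries an inherent ~80 m positional uncertainty, which should be weighed against the image's pixel size when choosing source material for GCPs.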

The transformation matrix is the table of polynomial coefficients for the transition from the original coordinate grid to the computed one. For a polynomial transformation of order n, the equations have the form

x' = sum over i=0..n, j=0..n-i of a_ij * x^i * y^j,
y' = sum over i=0..n, j=0..n-i of b_ij * x^i * y^j,   (1)

where (x, y) are coordinates in the original grid, (x', y') are coordinates in the new grid, and a_ij, b_ij are the coefficients.

When n = 1 (a linear transformation), equations (1) reduce to the usual system of linear equations

x' = a0 + a1 * x + a2 * y,   y' = b0 + b1 * x + b2 * y.   (2)

The coefficients a and b are calculated from the coordinates of the control points by the least squares method. The coordinates of each control point contribute to the total approximation error (Fig. 1). When the transformation matrix is tested, the mean square error and each control point's contribution to it are displayed in the windows of the transformation procedure, which allows the analyst to adjust the positions of the control points to minimize the errors, or to replace the least successful points.
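As a sketch of the least-squares step, here is a first-order (affine) transformation matrix computed from control points, with per-point residuals and the total RMS error in the spirit of equations (1)-(2). The GCP coordinates are made up, and only NumPy is used.

```python
import numpy as np

# (x, y) in source image coordinates, (X, Y) in target map coordinates
src = np.array([[10, 12], [200, 15], [190, 210], [12, 205], [100, 110]], float)
dst = np.array([[1010, 2012], [1200, 2018], [1192, 2210], [1011, 2206], [1101, 2111]], float)

# Design matrix for x' = a0 + a1*x + a2*y (and likewise b0..b2 for y')
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)

pred = np.column_stack([A @ coef_x, A @ coef_y])
residuals = np.hypot(*(pred - dst).T)        # per-GCP error, cf. the "RMS Error" column
rms = float(np.sqrt(np.mean(residuals ** 2)))
print("per-point residuals:", residuals.round(3))
print("total RMS:", round(rms, 3))
```

Points with large residuals are the ones an analyst would move or delete in the GCP editor before accepting the matrix.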

In rectification procedures, polynomials up to the third order inclusive are most often used, although the ERDAS package allows polynomials up to the 5th order. Linear transformation is most often used to combine scanned maps or already rectified images. For rectification of satellite images, polynomials of the second and third order are usually used.

Recalculation of pixel brightness values when transforming an image.

When an image is transformed, the nodes of the rectangular grid in which the new image will be presented no longer coincide with the pixels at the nodes of the original grid. The brightness values of the pixels must therefore be recalculated for their new coordinates. There are three main recalculation methods: nearest neighbor, bilinear interpolation, and bicubic convolution.

In the nearest neighbor method, a pixel with coordinates (x, y) whose brightness in the new grid is unknown is assigned the value of the nearest pixel with a known brightness. This method is most often used when transforming already classified (index) images, where the brightness of a pixel is the index of its thematic class.
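A minimal sketch of nearest-neighbor resampling: each (possibly fractional) target coordinate simply takes the value of the closest source pixel. This is why it suits classified (index) images: it never invents new class values. The tiny class raster is made up.

```python
def nearest_neighbor(image, x, y):
    """Value of the source pixel closest to fractional position (x, y)."""
    rows, cols = len(image), len(image[0])
    r = min(max(int(round(y)), 0), rows - 1)   # clamp to image bounds
    c = min(max(int(round(x)), 0), cols - 1)
    return image[r][c]

classes = [[1, 1, 2],
           [1, 3, 2],
           [3, 3, 2]]
print(nearest_neighbor(classes, 1.4, 0.6))  # -> 3, the class at row 1, col 1
```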

Fig. 2. Linear interpolation along the Y coordinate.

In bilinear interpolation, the unknown brightness of a pixel is calculated on the assumption that within a local area of the image the brightness varies linearly with the coordinates (Fig. 2). The desired brightness V_m is the ordinate of the point (Y_m, V_m) on the straight line defined by the brightnesses of the two nearest pixels on either side. The calculation takes both the X and the Y coordinate into account, which is why the interpolation is called bilinear.

Since this method has a smoothing effect, bilinear interpolation is advisable for images without pronounced structural features. Most often these are images of undeveloped territories: forests and tundra, deserts, oceans and seas.
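The two-step linear scheme described above can be sketched as follows: interpolate along X between the two neighbors in each of the two surrounding rows, then interpolate those results along Y. The 2x2 sample image is made up.

```python
def bilinear(image, x, y):
    """Brightness at fractional position (x, y), linear in each coordinate."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, len(image[0]) - 1)
    y1 = min(y0 + 1, len(image) - 1)
    top = image[y0][x0] * (1 - dx) + image[y0][x1] * dx     # along X, upper row
    bottom = image[y1][x0] * (1 - dx) + image[y1][x1] * dx  # along X, lower row
    return top * (1 - dy) + bottom * dy                     # then along Y

img = [[10, 20],
       [30, 40]]
print(bilinear(img, 0.5, 0.5))  # -> 25.0, the mean of the four neighbors
```

The averaging of four neighbors is exactly the smoothing effect the text mentions.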

In bicubic convolution, the value of the pixel with coordinates (X_r, Y_r) is calculated from the pixel values inside a 4x4 window, as shown in Fig. 3.

The convolution used in ERDAS Imagine has a rather complex form and combines the effects of low-pass and high-pass filters: it slightly increases contrast while smoothing individual small details. The overall effect depends on the type of image, but the method can be used on images with pronounced structural elements.
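To illustrate the 4x4-window idea, here is cubic-convolution resampling with the standard Keys kernel (a = -0.5). The specific kernel is my assumption for the sketch; as noted above, ERDAS Imagine's actual convolution is more elaborate.

```python
def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel; nonzero only for |t| < 2."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic(image, x, y):
    """Weighted sum over the 4x4 window of pixels around (x, y)."""
    x0, y0 = int(x), int(y)
    value = 0.0
    for j in range(-1, 3):            # 4 rows of the window
        for i in range(-1, 3):        # 4 columns
            r = min(max(y0 + j, 0), len(image) - 1)     # clamp at edges
            c = min(max(x0 + i, 0), len(image[0]) - 1)
            value += image[r][c] * cubic_kernel(x - (x0 + i)) * cubic_kernel(y - (y0 + j))
    return value

img = [[c + 10 * r for c in range(4)] for r in range(4)]  # smooth brightness ramp
print(bicubic(img, 1.5, 1.5))  # on a linear ramp the result is exact: 16.5
```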

Fig.3. Window selection for bicubic convolution.

Upscaling of multi-zone images using high-resolution panchromatic images. The main stages of the process. Methods for implementing this procedure in the ERDAS Imagine package.

In the ERDAS Imagine package, you can increase the spatial resolution of a multi-zone image if you have a black-and-white (i.e. panchromatic) image of the same area. The process has two stages: 1) bringing the pair of images into a single coordinate system; 2) the resolution increase itself. Although the second stage is performed in ERDAS Imagine by a single procedure, it comprises two tasks: 1) bringing the images to a single scale, that is, stretching the multi-zone image to the panchromatic scale; 2) aligning the images and recalculating the pixel brightness in each channel using the value of the corresponding pixel in the panchromatic channel. The simplest recalculation is multiplicative: the new brightness is I_out = I_j * I_pan, where I_j is the original pixel value and I_pan is the value of the corresponding pixel in the panchromatic channel.

The resulting values are then reduced back to the brightness scale; as you can see, at the higher level of detail the brightness ratios across channels are preserved for each type of object. Execution in ERDAS Imagine:
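A toy sketch of the multiplicative resolution-merge rule just described: each channel's brightness is multiplied by the co-registered panchromatic value and then rescaled to an 8-bit range. The arrays are hypothetical and assumed already registered and stretched to the panchromatic scale.

```python
import numpy as np

ms = np.array([[[60, 120], [62, 118]],         # channel 0 at pan scale
               [[30, 200], [28, 196]]], float)  # channel 1 at pan scale
pan = np.array([[180, 200], [170, 210]], float)

merged = ms * pan                   # I_out = I_j * I_pan in every channel
merged *= 255.0 / merged.max()      # reduce back to an 8-bit brightness scale
print(merged.round(1))
```

Because every channel of a pixel is multiplied by the same panchromatic value, the between-channel brightness ratios for each object are preserved, which is the property the text highlights.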

1 Open in Viewer No. 1 the image panAtlanta.img from the EXAMPLES folder. This image is already georeferenced. Characteristics of a map projection can be viewed using the function Utilities->Layer Info.

2 In the new Viewer #2, open the multi-zone image tmAtlanta.img. This image will be used as a working image.

3 The first step is to register the working multi-zone image to the panchromatic one. In Viewer No. 2, select Raster -> Geometric Correction. In the Set Geometric Model window, choose the polynomial model.

4 In the window Polynomial Model Properties set the degree of the polynomial to be used when transforming the image. In this case, a second-order polynomial is sufficient.

5 In the window Geo Correction Tools select the crosshair circle to create a set of anchor points. In the opened window GCP Tool Reference Setup mode must be set Existing viewer. After confirmation (OK) you will see a window asking you to specify the window (Viewer) of the image to which the binding will be performed. Click inside the panchromatic window and confirm your choice in the message box that appears. After that, you will open all the tools for transforming the image by reference points.

6 GCPs are created with the "circle with crosshairs" button of the GCP editor (the GCP Tools table) pressed. It is more convenient to place the points inside the small auxiliary windows whose positions are shown by rectangles on the main images. The size and position of these rectangles are adjusted with the cursor in down-arrow mode: resize by grabbing a corner of the rectangle with the crosshairs, move by dragging the crosshair lines. The points must be placed in pairs on both images; if you first put several points on one image and then several on the other, the program will not be able to match them. Control points should be distributed evenly over the image; otherwise only the area with more points will be transformed correctly, and the rest of the image will be heavily distorted.

If a point is placed badly, it can be removed as follows. Select the corresponding row in the table by clicking the left gray field where the row numbers are shown, then right-click the same field. From the pop-up menu, choose Delete Selection. In the same menu you can clear the selection with Select None, or conversely select all rows (Select All).

7 After you have set a certain number of control points, the transformation matrix with polynomial coefficients calculated from these points is created automatically. The approximation error of each point is shown in the "RMS Error" field and its contribution to the total error in the "Contrib" field; the deviations in X and Y are shown in the "X Residual" and "Y Residual" fields respectively. You can move a point in the Viewer, and the errors will change accordingly. For an acceptable transformation, all errors should be on the order of 0.1 or less. Try to reduce the errors by shifting points in X and Y. If this fails, delete the bad point: select its row in the table by clicking the leftmost (gray) field, then right-click that gray field to call the pop-up menu and choose Delete Selection.

8 After a certain number of control points have been set, the program will automatically calculate the transformation polynomial for you. To check whether this polynomial is calculated correctly, mark one or two control reference points on one of the images in those areas where you have not put them down yet. If at the same time they appear on another image at the same points, then the polynomial is chosen correctly. Otherwise, continue the process of generating control points until the required accuracy is obtained.

9 Once you have a transformation matrix of acceptable accuracy, you can proceed to the image transformation itself (Resample). In the Geo Correction Tools window, select the oblique-square tool. In the Resample window that opens, specify a new output file in your own folder for the result of the transformation. On the right, set the desired pixel recalculation method and click OK.

10 Display the result in the new Viewer and make sure that the transformation is done correctly.

11 In the Interpreter block, select the Spatial Enhancement menu item, and in the submenu that opens, the Resolution Merge function. In the window that opens, specify, in order from left to right: 1) the panchromatic image; 2) the multi-zone image you transformed; 3) the output file you are going to create. The default modes can be kept. Click OK.

12 Open the result and check it. If it was not produced correctly, try a different pixel recalculation mode.

In addition to inserting images into page content with the FilePicker from the TinyMCE visual editor, developers and designers in CMS Made Simple have long been looking for a way to associate a single image with a page. What is this for? Here are some examples:

    To create a graphical menu that displays not text, but an image. Look at an interesting example of a Mac style icon menu or a hierarchy icon menu at the bottom of the site after the word Portfolio.

    To create a list of pages (like a teaser) with an image attached to each page.

    To restrict page editors who are unable to scale down and neatly insert images into content. In this case, they are prompted to select one of the already uploaded images from a list, which is then inserted into the template at the right place and at the right size. Alternatively, they can be allowed to upload images that are automatically shrunk on upload.

There are currently three options for linking an image to a page (at least I don't know of any others).

Option 1: Image on the Options tab

This was the very first attempt to link an image to a page, and it is still available on the Options tab when editing a page. Here you can select one of the images from the list of files previously uploaded to the uploads/images folder. The path to this folder can only be changed globally in the general site settings (Site Administration » General Settings, Page Editing Settings tab). The selected image is made available in the menu template via the variable $node->image, and its thumbnail via $node->thumbnail. With this option you can associate only one image per page, i.e. 1:1.

Option 2: Image via the {content_image} tag

The second attempt. The tag is added to the main site template; if you add it several times, you can attach several images to the same page. In the administrative panel, a drop-down menu for selecting uploaded files is displayed (the same as in option 1), and on the page itself the tag outputs an HTML img element. {content_image} is more intelligent than the first option; in particular, it allows you to customize the folder where images are stored.

But its big drawback, as with the first option, is that the selectable images must be uploaded to the system in advance using the file manager or Image Management. If you removed the "Insert/Edit Image" button from the visual editor (for disciplinary purposes, to prohibit inserting images directly into site content), your editor must first upload the images and only then edit the page. The second drawback: if there are many images, the list becomes huge and it is easy to get lost in it.

Option 3: Using the GBFilePicker Module

Unusually flexible. It allows you not only to select already uploaded images, but also to upload them "on the fly" while editing the page, as well as delete and even edit already uploaded ones, without leaving the content editing page. The list of images in the drop-down menu can be shown or disabled (for example, if there are already 100 images in the folder, then the list is most likely useless).

A few examples of how this tag might look in the admin interface on a page with content editing, depending on the parameters used.

Module features: shrinking files on upload, excluding certain files from the list by a suffix or prefix in the file name, restricting the extensions of uploaded files, restricting access to files per user, and creating thumbnails. And I especially love this module because it shows the editor not only the file name in the list but also its thumbnail, which is extremely convenient for the forgetful.

This option is by far the best I have seen in CMS Made Simple, and it is the one my website editors grasp intuitively.


The idea of recording, along with each photograph, the coordinates of the point where it was taken arose at the dawn of digital photography and was implemented almost immediately. Today the idea has reached the masses and acquired many services. From the very beginning it was, and continues to be, implemented at the hardware level: a GPS receiver communicates directly with the camera, either built into it, connected via a serial port, or mounted on the camera and triggered by the flash-sync signal when a picture is taken. Sony also released the GPS-CS1 device, which simply records coordinates every 15 seconds; these are then synchronized in time with the pictures taken, and the coordinate information is written into the files. Given that GPS receivers and cameras have become common in everyday life, you may not need to buy an additional device: you can use the receiver and camera you already have, and all that remains is to link the coordinate data to specific images.

Previously there was a significant limitation: the GPS navigator's memory would overflow, and the information had to be downloaded to a computer every day. If you shot infrequently and used the GPS for navigation, then most likely, on returning from a trip, you could extract information only about the last day. Now that GPS navigators can record traveled tracks on memory cards, the shortage of memory has been almost completely eliminated.

On the Internet you can find several dozen programs designed to bind photos to coordinates; a more or less complete list can be found online. There are commercial ones among them, but most are free and even open source. I tried many of them, but if for some reason a program did not start working correctly right away, I did not try to figure it out and moved straight on to the next one. It is therefore quite likely that among the programs I rejected there are worthy ones that will work immediately and without problems on a different hardware configuration. I also did not consider commercial programs, since their demo versions introduce a deliberate error of about a kilometer, and it seemed unreasonable to waste time on them given the large number of open programs.

In addition, the number of programs considered was reduced by my rather specific additional requirements. Namely: to record the coordinates, an Etrex Venture Cx navigator was used, which saves coordinate data in GPX format (GPS Exchange Format). The format is standard, but it turns out that Garmin and some software developers understand this standard differently. Fortunately, there are universal programs that convert one format to another, and among them I would single out one in particular: it can be asked to convert GPX taken from a Garmin navigator into the same GPX format, but with a result that all programs understand.

The second requirement was that I wanted to link the photos in RAW format right away, so that all photos derived from the originals would already carry coordinates, and there would be no need to determine the coordinates again from the time the picture was taken. Time, as it turned out, causes quite a few problems. They are multiplied by the fact that converted files are created and processed at different times, the original time information of the shot may be lost, and after a while you may no longer remember what time zone you shot in. Many of the programs I reviewed have quite sophisticated settings for correcting possible time problems; however, it is better to set up the navigator and camera so that these problems do not arise in the first place.

My navigator can record the track either automatically or at a specified time interval. In automatic mode, many points are written when you move quickly, and none at all when you stand still. This produces a track of uniform quality whether you are walking or driving. However, if you shoot from one spot for a long time, the navigator may have recorded no coordinates at the moment of shooting, since they did not differ from those recorded half an hour earlier. In many programs you can set the time interval within which coordinates are considered to coincide with a snapshot. But a lack of track points may mean not only that you did not move, but also that the satellite signal was lost; in that case, with a large enough interval, the image may be assigned coordinates that differ significantly from the true ones. I therefore recommend recording by time with a 10 s interval. Unless you are shooting from a bus window, the accuracy will be more than sufficient.
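The matching logic described above can be sketched as follows (hypothetical data): a photo's timestamp is matched to the nearest GPX track point, and matches farther apart than a maximum interval are rejected, since the gap may mean a lost satellite signal rather than a stationary camera.

```python
from datetime import datetime, timedelta

track = [  # (UTC time, lat, lon), as a geotagging program might parse a GPX log
    (datetime(2007, 6, 1, 10, 0, 0), 55.7510, 37.6170),
    (datetime(2007, 6, 1, 10, 0, 10), 55.7512, 37.6175),
    (datetime(2007, 6, 1, 10, 0, 20), 55.7514, 37.6181),
]

def match(photo_time, track, max_gap=timedelta(seconds=15)):
    """Coordinates of the track point nearest in time, or None if too far."""
    t, lat, lon = min(track, key=lambda p: abs(p[0] - photo_time))
    if abs(t - photo_time) > max_gap:
        return None          # signal gap: better no coordinates than wrong ones
    return lat, lon

print(match(datetime(2007, 6, 1, 10, 0, 12), track))  # 2 s from a point -> match
print(match(datetime(2007, 6, 1, 10, 5, 0), track))   # far from any point -> None
```

With a 10 s recording interval, a `max_gap` of about 15 s accepts every genuine point while rejecting stretches where the track was interrupted.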

The next global problem is what time to set on the camera. If you travel, or shoot in autumn or spring when clocks change, setting the camera to local time seems a bad idea to me, especially since the very notion of local time is thoroughly discredited today: the sun is at its zenith over my house in Moscow in summer at 13:15. Modern transport can carry you many thousands of kilometers, and it is more reasonable to use universal time than to explain which time, and with which offset, you agreed to meet at. The navigator logs UTC (Coordinated Universal Time), so it makes sense to set the same time on the camera and never change it, regardless of travel or season. Given that I record coordinates at 10 s intervals, I prefer to call this time the old-fashioned GMT (Greenwich Mean Time): that name is more informative, because it says the time is reckoned from local time on the Greenwich meridian and, to the accuracy I use, does not differ from UTC. Knowing your own coordinates and this time, you can always easily calculate when your sun will be at its highest point, that is, local noon. None of this information is useless to a photographer, because it lets you picture where the light will fall from at the intended shooting point. All troubles come from learning; the people who decreed that noon falls in the morning were probably trying to drive everyone who studied geography at school to the madhouse.
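The "easy calculation" claimed above is rough arithmetic: local solar noon is about 12:00 UTC minus 4 minutes per degree of eastern longitude. This sketch deliberately ignores the equation of time (a correction of up to about a quarter of an hour through the year).

```python
def solar_noon_utc(longitude_deg_east):
    """Approximate local solar noon in UTC, ignoring the equation of time."""
    minutes = 12 * 60 - longitude_deg_east * 4   # 360 deg / 24 h = 15 deg per hour
    return f"{int(minutes // 60):02d}:{int(minutes % 60):02d} UTC"

print(solar_noon_utc(37.6))   # Moscow's longitude: solar noon near 09:29 UTC
print(solar_noon_utc(0.0))    # Greenwich meridian: 12:00 UTC
```

Comparing 09:29 UTC with the 13:15 Moscow summer clock time mentioned above shows just how far civil time can drift from solar time.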

So, if the camera and the navigator are set to the same time, the Time Zone settings can be ignored from then on.

Programs for binding photos to coordinates

GPicSync

For the primary batch processing of photographs, I chose the GPicSync program.

It has a spartan GUI, works only with folders, and previews only JPEGs, but it performs its task quite quickly. I note that there are quite a few command-line programs that could compete with it for asceticism, but I don't like working from the keyboard :-) The program builds on external utilities and is distributed under the GPL license. There are versions for Windows and Linux, and Russian is supported.

It works directly with folders, can batch-convert many photos at once, works with RAW, understands Garmin GPX files, writes coordinates to EXIF, and can automatically add to the IPTC keywords the nearest geographic names, which it takes from databases on the Internet. Besides writing coordinate information to the photo files, it also creates a file in KML or KMZ format.

KML (Keyhole Markup Language) is an XML-based markup language used to represent three-dimensional geospatial data in Google Earth, which was called "Keyhole" before its acquisition by Google. KMZ files are ZIP-compressed KML files.
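As a minimal illustration of the KML structure such programs generate, here is a single placemark for one geotagged photo. The coordinates and file name are made up; note that KML lists coordinates in longitude,latitude order.

```python
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description><![CDATA[<img src="{photo}" width="400"/>]]></description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""

kml = KML_TEMPLATE.format(name="IMG_0001", photo="IMG_0001.jpg",
                          lon=37.6175, lat=55.7512)
print(kml)
```

Zipping such a file (together with the referenced image) yields a KMZ that Google Earth opens directly.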

Google Earth is distributed free of charge.

If you want to find out where you took the pictures while in the field (without fast Internet), you need to put them on a map stored on your laptop. For this you can use the above-mentioned GPSBabel and convert the data to WPT format, or again to GPX format but with waypoints marking the pictures taken, for viewing in a map program, i.e. put the photos on the very map you navigated by with your GPS receiver.

For working with individual photographs, another program may be a good choice.

This program is written in Java and therefore runs equally easily, without reinstallation, under both Windows and Linux. In addition, it is licensed under the GNU General Public License. The program can do everything: work with RAW files, view them, write coordinates to EXIF, show the positions of photos on satellite images through Google Earth, and add geographic names to keywords using information from a website. To achieve this versatility, it uses external modules from third-party developers that must be installed separately.

The program allows you to export photos not only to Google Earth but also, without installing additional programs, to check the position of the shooting point online.

Among the minuses of this program: it is very slow (preparing to view a RAW photo can take about a minute), and it does not understand Garmin files without conversion. A separate program is used to communicate with the GPS receiver, and it must be run on its own to convert files. Some geographic names can be inserted in Cyrillic, which would be welcome, but some viewers refuse to work with such files :-(

The program is updated very often, so there is hope that it will be improved :-)

COPIKS PhotoMapper

If you work only with JPEG files and only under Windows, then COPIKS PhotoMapper would be a good choice.

It also copes very effectively with packing photos already bound to coordinates into KMZ format. You can see what this looks like by downloading the 500 KB file.

Locr GPS Photo

For further processing and posting photos on the Internet, the Locr GPS Photo program can be useful.

It is also convenient because it allows you to overlay photos on satellite images and maps provided by different companies: you can choose between Google, Microsoft and Yahoo.

I never managed to link photos with it, because I did not find a way to convert GPX to the NMEA format it accepts. So for me its main purpose is posting photos on the Internet. It is not the only service of this kind; you can also post photos on other sites.

A convenient addition turned out to be a program that allows you to edit coordinates manually, find the survey point in Google Earth using data recorded in EXIF, and also perform the reverse operation: write the coordinates of a survey point found on a satellite image into EXIF.

Over the past year the idea has gained strong support among the masses, and soon any point on the earth's surface will be visible not only from space but also from ground level. By turning on the "Geography on the Internet / Panoramio" layer in Google Earth, you will see that the earth is literally strewn with survey-point marks; clicking on one shows the photo.

The raster map in the GIS "Map 2000" has the RSW format. The format was developed in 1992; its structure is close to the TIFF version 6 format. The main indicators characterizing a raster map are:

  • image scale;
  • image resolution;
  • image size;
  • image palette;
  • planned image binding.

Image scale - a value that characterizes the source material (the material from whose scanning this raster image was obtained). Image scale is the ratio of a distance on the source material to the corresponding distance on the ground.

Image resolution - a value that characterizes the scanning device on which the bitmap image was obtained. The resolution shows how many elementary dots (pixels) the scanning device divides a meter (or inch) of the original image into. In other words, it determines the size of the "grain" of the bitmap: the higher the resolution, the smaller the "grain", and hence the smaller the terrain objects that can be uniquely identified (interpreted).

Image size (height and width) - values that characterize the image itself; they determine the dimensions of the bitmap in pixels (dots). The image size depends on the size of the scanned source material and the selected resolution.
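The three quantities above are linked by simple arithmetic: one pixel covers scale_denominator / resolution metres of terrain, and the pixel dimensions equal the physical sheet size times the resolution. A minimal sketch (the 508 dots/inch figure matches the KSI drum scanners mentioned later; the 1:25 000 half-metre sheet is an arbitrary illustrative assumption):

```python
INCH_M = 0.0254  # metres per inch

def dpi_to_dots_per_meter(dpi):
    """Convert scanner resolution from dots/inch to dots/metre."""
    return dpi / INCH_M

def ground_pixel_size_m(scale_denominator, dots_per_meter):
    """Terrain size covered by one raster pixel, in metres."""
    return scale_denominator / dots_per_meter

def image_size_px(sheet_w_m, sheet_h_m, dots_per_meter):
    """Pixel dimensions of a scanned sheet of the given physical size."""
    return round(sheet_w_m * dots_per_meter), round(sheet_h_m * dots_per_meter)

dpm = dpi_to_dots_per_meter(508)              # ~20 000 dots/metre
print(ground_pixel_size_m(25000, dpm))        # 1:25 000 sheet -> 1.25 m/pixel
print(image_size_px(0.5, 0.5, dpm))           # 50x50 cm sheet -> (10000, 10000)
```

The same relations explain why the scale value can be auto-corrected during binding: once control points fix the ground distances, the scale denominator is just the ground distance divided by the raster distance.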

Image palette - a value that characterizes how fully the color shades of the source material are reproduced in the raster image. The main palette types are:

  • two-color (black and white, one bit);
  • 16 colors (or shades of gray, four bits);
  • 256 colors (or shades of gray, eight bits);
  • High Color (16 bits);
  • True Color (24 or 32 bits).

If it is possible to select the resolution and palette when scanning source materials (some scanning devices work only with fixed values), bear in mind that increasing the resolution and the number of reproduced shades sharply increases the size of the resulting file, which subsequently affects both the volume of stored information and the speed of displaying and processing the bitmap image. For example, when scanning original map materials there is no need to use a palette of more than 256 colors, since a regular map, as a rule, contains no more than 8 colors.
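The trade-off described above is easy to quantify: the uncompressed raster size grows linearly with bit depth and quadratically with resolution. A minimal sketch (sizes before any compression; the 10000x10000-pixel image is an arbitrary example):

```python
def raw_raster_bytes(width_px, height_px, bits_per_pixel):
    """Uncompressed raster size in bytes for a given palette depth."""
    return width_px * height_px * bits_per_pixel // 8

# the same 10000x10000-pixel scan stored with different palettes
for bits in (1, 4, 8, 16, 24):
    mib = raw_raster_bytes(10000, 10000, bits) / 2**20
    print(f"{bits:2d}-bit palette: {mib:7.1f} MiB")
```

Going from a 256-color (8-bit) to a True Color (24-bit) palette triples the file, while doubling the resolution quadruples it, which is why a 256-color palette is usually sufficient for map originals.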

The image palette is stored in the source file, while the resolution and the scale of the future image must be entered when converting the raster to the internal format. Files in TIFF format are an exception: in addition to the palette, they also store the resolution. In other cases the resolution is specified in accordance with the parameters selected during scanning. For example, domestic drum scanners from the KSI company scan at a resolution of 508 dots/inch (20,000 dots/meter). If you do not know the exact scale of the processed materials, enter an approximate value (the scale is adjusted automatically during binding of the bitmap image).

The raster image loaded into the system is not yet a raster map, since it has no planned reference. An unbound image is always added to the southwest corner of the map extent. Therefore, when working with a large work area, you can quickly locate the added raster using the "Go to raster" item in the raster image properties menu of the "List of rasters" dialog.

Once bound, the raster map becomes a measurement document. Using it, you can determine the coordinates of the objects depicted on it (as the cursor moves over the raster map, the current coordinates are displayed in the information field at the bottom of the screen). A bound raster map can be used as a standalone document or in combination with other data.

1.2. Converting raster data

The Panorama system processes raster maps in the RSW format (the system's internal format). Data in other formats (PCX, BMP, TIFF) can be converted to RSW using the Panorama system software. In addition, the system supports the earlier RST raster data structure ("Panorama for MS-DOS"). When an RST file is opened, it is automatically converted to the RSW format.

There are two ways to load a bitmap into the system:

  • Opening a raster image as an independent document (the "Open" item of the "File" menu).
  • Adding a bitmap to an already open document (a vector, raster, matrix, or combined map). This is done through the "Add - Raster" item of the "File" menu or the "List of rasters" item of the "View" menu.

1.3. Raster Map Binding

A raster map is bound relative to an already open document: first open a document oriented in the required coordinate system (a vector, raster, or matrix map), add the raster to be bound to it, and perform the binding. You can bind a raster using one of the methods provided in the raster properties ("List of rasters - Properties"). Remember that all raster actions available in the properties menu are performed on the CURRENT raster. Therefore, if an open document contains several rasters, activate (select in the list of open rasters) the one you currently want to work with.

1.3.1. Snap by one point

Snapping is performed by sequentially specifying a point on the raster and the point to which it should move after the transformation (from where - to where). The transformation moves the entire raster in parallel, without changing its scale or orientation.

1.3.2. Move to the southwest corner

The transformation moves the entire raster in parallel, without changing its scale or orientation, to the southwest corner of the work area extent. It is advisable to use this georeferencing mode when a raster with incorrect binding is added to an open map and is displayed far outside the work area. After moving the raster to the southwest corner, rebinding it becomes easier.

1.3.3. Two point snapping with scaling

Binding is performed by sequentially specifying a pair of points on the raster and the points to which they should move after the transformation (from where - to where, from where - to where). The transformation moves the entire raster in parallel and changes its scale. The image is anchored by the first pair of points; the second pair is used to calculate the new bitmap scale. Therefore, if the vertical and horizontal scales of the raster are unequal (the raster is stretched or compressed due to deformation of the source material or scanner error), the second point will reach its theoretical position only with some error. To eliminate the error, use one of the raster transformation methods (the applied task "Transforming raster data").
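The geometry of this mode can be sketched in a few lines. `two_point_scale_binding` below is a hypothetical helper, not part of the Panorama system: it anchors the raster by the first point pair and derives a uniform scale from the second pair, exactly as described above (no rotation is applied):

```python
import math

def two_point_scale_binding(src1, dst1, src2, dst2):
    """Return (scale, transform): the raster is translated so src1 lands
    on dst1, then uniformly scaled about dst1 so that src2 lands on dst2
    (exactly only if the source material is undeformed)."""
    scale = math.dist(dst1, dst2) / math.dist(src1, src2)
    def transform(p):
        return (dst1[0] + scale * (p[0] - src1[0]),
                dst1[1] + scale * (p[1] - src1[1]))
    return scale, transform

# raster point (0,0) -> map point (100,100); (10,0) -> (120,100)
scale, t = two_point_scale_binding((0, 0), (100, 100), (10, 0), (120, 100))
print(scale)        # 2.0
print(t((5, 5)))    # (110.0, 110.0)
```

If the second destination point does not lie where this transform predicts, the residual is exactly the deformation error the text says must be removed by a full raster transformation.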

1.3.4. Rotate without scaling

Binding is performed by sequentially specifying a pair of points on the raster and the points to which they should move after the transformation (from where - to where, from where - to where). The transformation moves the entire raster in parallel and changes its orientation in space. Rotation is performed around the first specified point. The image is anchored by the first pair of points; the second pair is used to calculate the rotation angle of the image. Therefore, if the vertical and horizontal scales of the raster are unequal (the raster is stretched or compressed due to deformation of the source material or scanner error), the second point will reach its theoretical position only with some error. To eliminate the error, use one of the raster transformation methods (the applied task "Transforming raster data").
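This mode can be sketched the same way. `two_point_rotation_binding` below is again an illustrative helper, not Panorama's implementation: it anchors by the first point pair and derives only a rotation angle from the second pair, leaving the scale untouched:

```python
import math

def two_point_rotation_binding(src1, dst1, src2, dst2):
    """Return (angle_deg, transform): translate so src1 lands on dst1,
    then rotate around dst1 so the direction src1->src2 matches
    dst1->dst2; scale is left unchanged."""
    angle = (math.atan2(dst2[1] - dst1[1], dst2[0] - dst1[0])
             - math.atan2(src2[1] - src1[1], src2[0] - src1[0]))
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    def transform(p):
        x, y = p[0] - src1[0], p[1] - src1[1]   # vector from the anchor point
        return (dst1[0] + x * cos_a - y * sin_a,
                dst1[1] + x * sin_a + y * cos_a)
    return math.degrees(angle), transform

# anchor stays at the origin; the second pair demands a 90-degree turn
deg, t = two_point_rotation_binding((0, 0), (0, 0), (1, 0), (0, 1))
print(round(deg))       # 90
print(t((1, 0)))        # ~(0.0, 1.0)
```

Because scale is not changed, the second point lands exactly on its target only when the raster and map distances between the two points already agree; any mismatch shows up as the residual error the text describes.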

When loading raster maps, a raster work area can be created in the database. To create a raster region, each raster image forming the region must be loaded into the system in turn and oriented relative to a unified coordinate system.
Combining raster and vector maps of the same or adjacent territories allows you to quickly create and update work areas while retaining the ability to solve applied problems for which certain types of map objects must have a vector representation.


  1. Matching images based on "features"

Literature for self-study

The book (Krasovsky, Beloglazov, Chigin) contains a presentation of the classical theory of correlation-extremal analysis of two-dimensional fields, which we recommend for familiarization in an in-depth course.

An original approach to mutual image registration based on so-called searchless correlation is described in the book (Astapov, Vasiliev, Zalozhnev). This approach is more applicable to correlation tracking than to the comparison of arbitrary images, but it is attractive because of the possibility of efficient software and hardware-software implementation.

In the book (Shapiro, Stockman), Chapter 11 is devoted to methods of matching images and objects in two-dimensional space. Of interest here are the geometric aspects of the problem, which received less attention in our presentation. Chapters 12 and 13 are devoted to the perception of three-dimensional scenes. They can also be recommended for independent study, although the presentation of the same range of issues in the book seems to us more complete and successful.

In the book (Forsyth, Ponce) a small section, "Binocular image matching", is devoted directly to the problem of stereo identification; at the same time it contains a number of interesting ideas absent from our presentation. In particular, stereo identification by dynamic programming and several other methods are described. In a broad sense, the entire Part III of this book is devoted to the problem of reconstructing three-dimensional spatial information from a set of two-dimensional images, including chapters 10 "Geometry of multiple views", 11 "Stereo vision", 12 "Affine structure from motion", and 13 "Projective structure from motion". The issues considered there concern the construction of various metric and projective relationships between image points and scene points, the calculation of ray paths, and so on. These topics are not included in this tutorial because they are closer to photogrammetry than to image processing and analysis; however, in the framework of an advanced machine vision course such elements should be recognized as necessary. In this regard, we recommend the entire Part III of the book for in-depth independent study.

List of sources by section

  1. Bertram S. The UNAMACE and the automatic photomapper // Photogrammetric Engineering. 35. No.6. 1969. P.569-576.
  2. Levine M.D., O'Handley D.A., Yagi G.M. Computer Determination of Depth Maps // Computer Graphics and Image Processing. 2. No.2. 1973. P.131-150.
  3. Mori K., Kidode M., Asada H. An iterative prediction and correction method for automatic stereocomparison // Computer Graphics and Image Processing. 2. No.3-4. 1973. P.393-401.
  4. Ackerman F. High precision digital image correlation // IPSUS. 1984. No.9. P.231-243.
  5. Gruen A., Baltsavias E. Adaptive least squares correlation with geometrical constraints // SPIE. 1985. V.595. P.72-82.
  6. Ohta Y., Kanade T. Stereo by intra- and inter-scanline search using dynamic programming // IEEE PAMI. V.7. No.2. 1985. P.139-154.
  7. Price K.E. Relaxation techniques for matching // Minutes of the Workshop on Image Matching, September 9-11, 1987, Stuttgart University, F.R. Germany.
  8. Foerstner W. A feature based correspondence algorithm for image matching. ISPRS Commission III Symposium, Rovaniemi, Finland, August 19-22, 1986 // IAPRS. V.26-3/3. P.150-166.
  9. Ayache N., Faverjon B. Efficient registration of stereo images by matching graph descriptions of edge segments // IJCV. V.1. No.2. 1987. P.107-131.
  10. Van Trees H. Theory of Detection, Estimation and Modulation. V.1. - M.: Soviet Radio, 1972.
  11. Vasilenko G.I., Tsibulkin L.M. Holographic Recognition Devices. - M.: Radio and Communication, 1985.
  12. Bochkarev A.M. Correlation-extremal navigation systems // Foreign Radio Electronics. 1981. No.9. P.28-53.
  13. Yaroslavsky L.P. Digital Signal Processing in Optics and Holography: An Introduction to Digital Optics. - M.: Radio and Communication, 1987.
  14. Horn B.K.P. Robot Vision. - M.: Mir, 1989.
  15. Denisov D.A., Nizovkin V.A. Image segmentation on a computer // Foreign Radio Electronics. No.10. 1985.
  16. Davies E.R. Machine Vision: Theory, Algorithms, Practicalities. 2nd Edition. - San Diego: Academic Press, 1997.
  17. Tuytelaars T., Van Gool L. Matching widely separated views based on affine invariant regions // International Journal of Computer Vision. 59(1). 2004. P.61-85.
  18. Yaroslavsky L.P. Accuracy and Reliability of Measuring the Position of a Two-Dimensional Object on a Plane // Radio Engineering and Electronics. 1972. No.4.
  19. Abbasi-Dezfould M., Freeman T.G. Stereo-Image Registration Based on Uniform Patches // International Archives of Photogrammetry and Remote Sensing. V.XXXI. Part B2. Vienna, 1996.
  20. Schenk. Automatic Generation of DEMs // Digital Photogrammetry: An Addendum to the Manual of Photogrammetry. - American Society for Photogrammetry & Remote Sensing. 1996. P.145-150.
  21. Gruen A. Adaptive Least Squares Correlation: A powerful image matching technique // South African Journal of Photogrammetry, Remote Sensing and Cartography. V.14. Part 3. June 1985.
  22. Golub G.H., Van Loan C.F. Matrix Computations. - Johns Hopkins University Press, 1983.
  23. Pyt'ev Yu.P. Morphological analysis of images // Doklady AN SSSR. 1983. V.269. No.5. P.1061-1064.
  24. Haralick R.M., Shapiro L.G. Machine Vision. - Addison-Wesley, 1991.
  25. Zuniga O.A., Haralick R.M. Corner detection using the facet model // Proc. IEEE Comput. Vision Pattern Recognition Conf., 1983. P.30-37.