Paging (memory management)


This article is about computer memory paging.
See also: Page (computer memory)

In computer operating systems, paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory.[1] In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

For simplicity, main memory is called "RAM" (abbreviated from "random-access memory") and secondary storage is called "disk" (a shorthand for "hard disk drive"), but the concepts do not depend on whether these terms apply literally to a specific computer system.


History

The first memory pages were concepts in computer architecture, regardless of whether a page moved between RAM and disk.[2] [3] For example, on the PDP-8, 7 of the instruction bits comprised a memory address that selected one of 128 (2^7) words. This zone of memory was called a page. This use of the term is now rare. In the 1960s, swapping was an early virtual memory technique. An entire program would be swapped out (or rolled out) from RAM to disk, and another one would be swapped in (or rolled in).[4][5] A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program.

A program might include multiple overlays that occupy the same memory at different times. Overlays are not a method of paging RAM to disk but merely of minimizing the program's use of RAM. Subsequent architectures used memory segmentation, and individual program segments became the units exchanged between disk and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to be contiguous when resident in RAM, requiring additional computation and movement to remedy fragmentation.[6]

The invention of the page table let the processor operate on arbitrary pages anywhere in RAM as a seemingly contiguous logical address space. These pages became the units exchanged between disk and RAM.
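
The idea can be illustrated with a minimal C sketch (the 4-Kbyte page size, the tiny single-level table and its contents are assumptions chosen for the example, not the format of any particular machine): a virtual address is split into a page number and an offset, and the page number is looked up in the page table to find the physical frame.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096u     /* assumed page size: 4 Kbytes    */
    #define PAGE_SHIFT 12        /* log2(PAGE_SIZE)                */
    #define NUM_PAGES  16        /* toy address space of 16 pages  */

    /* One page-table entry: is the page resident, and in which physical frame. */
    struct pte { int present; unsigned frame; };

    /* A single-level page table; entries not listed default to "not present"
     * (referencing such a page would cause a page fault and a read from the
     * swap device). */
    static struct pte page_table[NUM_PAGES] = {
        [0] = {1, 5}, [1] = {1, 2}, [2] = {1, 7},
    };

    /* Translate a virtual address into a physical one, or return -1 on a fault. */
    static long translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset inside page  */

        if (vpn >= NUM_PAGES || !page_table[vpn].present)
            return -1;                              /* page fault          */
        return ((long)page_table[vpn].frame << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        uint32_t vaddr = 0x1A3C;                    /* page 1, offset 0xA3C */
        printf("virtual 0x%X -> physical 0x%lX\n", vaddr, translate(vaddr));
        return 0;
    }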

Page replacement

When space must be allocated for a new page, it may be necessary to evict a page that currently resides in memory. Page replacement rules decide which page should be removed from memory. The ideal candidate is a "dead" page that is no longer needed by anyone (for example, a page belonging to a terminated process). If there are no such pages in memory, or not enough of them, local or global page replacement is used. The local replacement rule allocates a certain number of pages to each process or group of related processes; if such a process needs a new page, it must replace one of its own. The global replacement rule may take a page from any process, using global selection criteria. To implement this approach, a criterion has to be chosen for deciding which pages to keep in memory. The most frequently used criteria are listed below; a small sketch of the first one follows the list.

  • Least Recently Used. The pages that have not been accessed for the longest time are removed, on the assumption that they will be referenced least in the near future.
  • Last Recently Used. Recently freed pages are removed; this includes, for example, the pages of a process that has just terminated.
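
A minimal C sketch of the Least Recently Used rule (the number of frames and the reference string are assumptions chosen for illustration; real systems usually approximate LRU with hardware reference bits rather than exact timestamps):

    #include <stdio.h>

    #define NUM_FRAMES 3           /* assumed number of physical frames */

    static int  frame_page[NUM_FRAMES];  /* virtual page held by each frame (-1 = free) */
    static long frame_used[NUM_FRAMES];  /* time of the last access to that frame       */
    static long now;                     /* logical clock, advanced on every reference  */

    /* Reference one virtual page, evicting the least recently used page on a miss. */
    static void reference(int page)
    {
        int i, victim = 0;

        now++;
        for (i = 0; i < NUM_FRAMES; i++) {
            if (frame_page[i] == page) {          /* hit: refresh the timestamp */
                frame_used[i] = now;
                return;
            }
            if (frame_used[i] < frame_used[victim])
                victim = i;                       /* remember the oldest frame  */
        }
        if (frame_page[victim] >= 0)
            printf("page fault: load page %d, evict page %d\n", page, frame_page[victim]);
        else
            printf("page fault: load page %d into a free frame\n", page);
        frame_page[victim] = page;                /* miss: replace the LRU page */
        frame_used[victim] = now;
    }

    int main(void)
    {
        int trace[] = {1, 2, 3, 1, 4, 2};         /* an assumed reference string */
        int i;

        for (i = 0; i < NUM_FRAMES; i++)
            frame_page[i] = -1;
        for (i = 0; i < (int)(sizeof trace / sizeof trace[0]); i++)
            reference(trace[i]);
        return 0;
    }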

The swap algorithm

The description of the swapping algorithm can be divided into three parts: managing space on the swap device, swapping processes out of main memory, and swapping processes back into main memory.

Managing space on the swap device

The swap device is a block device occupying a configurable section of a disk. Whereas the kernel normally allocates space for files one block at a time, it allocates space on the swap device in groups of contiguous blocks. Space allocated to files is used statically; since the allocation will exist for a long time, the file allocation scheme is made flexible in order to reduce fragmentation and, with it, the amount of unusable space in the file system.

Allocation of space on the swap device, by contrast, is transient and depends to a large extent on the process scheduling mechanism. A process placed on the swap device will eventually return to main memory, freeing the space it occupied on the external device. Since speed is critical, and since transferring data in a single multi-block operation is faster than in several single-block operations, the kernel allocates contiguous space on the swap device, without regard to possible fragmentation.

Because the allocation scheme for the swap device differs from the one used for the file system, the data structures that record free space must also differ. Free space in the file system is described by a linked list of free blocks reachable through the file system superblock, whereas information about free space on the swap device is collected in a table called a map. Maps are used not only for the swap device but also for other system resources (for example, by drivers for some devices); they make it possible to allocate a device's units (here, contiguous blocks) on a first-fit basis.

Allocation from a map (the malloc algorithm). The kernel searches the map for the first entry that contains enough resource units to satisfy the request. If the request uses up all the units of that entry, the kernel deletes the entry and compacts the map (the map becomes one entry smaller). Otherwise the kernel adjusts the address and the remaining unit count of the entry by the number of units allocated. The allocation algorithm, using a map, is as follows:

    algorithm malloc              # allocation of space from a map
    input:  (1) map address       # indicates which map is used
            (2) required number of resource units
    output: address of the allocated space, if successful
            0, otherwise
    {
        for (every map entry)
        {
            if (the required number of units fits into the map entry)
            {
                if (required number == number of units in the entry)
                    delete the entry from the map;
                else
                    adjust the start address and the unit count of the entry;
                return (original address of the entry);
            }
        }
        return (0);
    }

When freeing resources, the kernel finds their proper place in the map. Three cases are possible (a C sketch of both the allocation and the freeing cases is given after the list):

  • The freed resources completely close a gap in the map. In other words, their addresses are contiguous with the addresses of the entries immediately preceding and immediately following them. In this case the kernel merges the newly freed resources and the two existing entries into a single map entry.
  • The freed resources partially close a gap in the map. If their addresses adjoin the entry immediately preceding or immediately following them (but not both), the kernel adjusts the address and unit count of that entry to include the freed resources. The number of entries in the map stays the same.
  • The freed resources partially close a gap in the map, but their addresses do not adjoin any other entry. The kernel creates a new entry and inserts it at the appropriate place in the map.
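
The following C sketch is a simplified model (not the actual kernel code) of first-fit allocation from such a map and of the three freeing cases above; the entry limit, the unit counts and the example in main are assumptions chosen for illustration.

    #include <stdio.h>

    #define MAP_SIZE 32                      /* assumed maximum number of map entries */

    struct mapent { unsigned addr, units; }; /* start address and length of a free run */

    static struct mapent map[MAP_SIZE];
    static int nent;                         /* entries currently in the map */

    /* First-fit allocation from the map (the "malloc" algorithm above). */
    static unsigned map_alloc(unsigned units)
    {
        for (int i = 0; i < nent; i++) {
            if (map[i].units >= units) {
                unsigned addr = map[i].addr;
                map[i].addr  += units;
                map[i].units -= units;
                if (map[i].units == 0) {             /* entry used up: compact the map */
                    for (int j = i; j < nent - 1; j++)
                        map[j] = map[j + 1];
                    nent--;
                }
                return addr;
            }
        }
        return 0;                                    /* no run is large enough */
    }

    /* Return a run of units to the map, merging with neighbours when adjacent
     * (the three cases listed above; no overflow check in this sketch). */
    static void map_free(unsigned addr, unsigned units)
    {
        int i = 0;
        while (i < nent && map[i].addr < addr)       /* find the insertion point */
            i++;

        int joins_prev = i > 0    && map[i-1].addr + map[i-1].units == addr;
        int joins_next = i < nent && addr + units == map[i].addr;

        if (joins_prev && joins_next) {              /* case 1: fills the whole gap */
            map[i-1].units += units + map[i].units;
            for (int j = i; j < nent - 1; j++)
                map[j] = map[j + 1];
            nent--;
        } else if (joins_prev) {                     /* case 2: extends the previous entry */
            map[i-1].units += units;
        } else if (joins_next) {                     /* case 2: extends the following entry */
            map[i].addr   = addr;
            map[i].units += units;
        } else {                                     /* case 3: a new entry is inserted */
            for (int j = nent; j > i; j--)
                map[j] = map[j - 1];
            map[i].addr  = addr;
            map[i].units = units;
            nent++;
        }
    }

    int main(void)
    {
        map[0].addr = 1; map[0].units = 10000; nent = 1;   /* an assumed initial map */
        unsigned a = map_alloc(100), b = map_alloc(50);
        map_free(b, 50);        /* adjacent to the following free entry: it is extended */
        map_free(a, 100);       /* adjacent again: the map returns to a single entry    */
        printf("allocated %u and %u; map: %d entry at %u with %u units\n",
               a, b, nent, map[0].addr, map[0].units);
        return 0;
    }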

The traditional implementation of the UNIX system uses a single swap device, but recent editions of System V allow several swap devices. The kernel chooses among them in round-robin order, provided that the device has enough contiguous address space. Administrators can create and remove swap devices dynamically. If a swap device is being removed, the kernel does not swap data out to it; as data already on the device is swapped back in, the device empties, and once the space belonging to it is free the device can be removed from the system.

Swapping processes out

The kernel swaps a process out when it needs free memory, which can happen in the following cases:

1. The fork system call has been invoked, and memory must be allocated for the child process.

2. The brk system call has been invoked, increasing the size of the process.

3. The process has grown as a result of the natural growth of its stack.

4. The kernel needs to free memory in order to swap back in processes that were swapped out earlier.

When the kernel decides that a process is to be swapped out of main memory, it decrements the reference count of each region of the process and swaps out the regions whose reference count drops to 0. The kernel allocates space on the swap device and locks the process in memory (in cases 1-3), preventing it from being swapped out until the current operation completes. It saves the swap addresses of the regions in the corresponding region table entries.

In a single I/O operation, involving the swap device and the address space of the process and carried out through the buffer cache, the kernel swaps out as much data as possible.

If the hardware cannot transfer the contents of several memory pages in one operation, the kernel has to transfer memory one page at a time. The exact transfer rate and mechanism therefore depend on, among other things, the capabilities of the disk controller and the memory allocation strategy.

The kernel does not need to write the entire virtual address space of the process to the swap device. Instead, it copies out only the physical memory allocated to the process, ignoring unused virtual addresses. When the kernel swaps the process back into memory, it knows the virtual address map of the process and can assign the process new physical addresses. The kernel reads the copy of the process back into physical memory cells that it sets up to correspond to the virtual addresses of the process.

The figure shows an example of mapping a process image in memory onto the swap device. The process has three regions: text, data, and stack. The text region ends at virtual address 2K and the data region starts at address 64K, so a gap of 62 Kbytes is formed in the virtual address space. When the kernel swaps the process out, it writes out the contents of the memory pages at addresses 0, 1K, 64K, 65K, 66K and 128K; no space on the swap device is allocated for the 62-Kbyte gap between the text and data regions, nor for the 61-Kbyte gap between the data and stack regions: the space on the swap device is filled contiguously. When the kernel loads the process back into memory, it knows from the process memory map that the process has a dead zone of 62K in its image, and allocates physical memory accordingly.

Fig. 1. Mapping a process image in memory onto the swap device
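
The packing can be shown with a small C sketch using the region boundaries from the example above (the 1-Kbyte page size and the data structures are assumptions for illustration): only the resident pages receive consecutive positions on the swap device, while the holes in the virtual address space take no swap space at all.

    #include <stdio.h>

    #define PAGE 1024u                        /* assumed page size: 1 Kbyte */

    struct region { unsigned start, len; };   /* virtual start address and length in bytes */

    int main(void)
    {
        /* The three regions from the example: text at 0 (2K), data at 64K (3K),
         * stack at 128K (1K).  The gaps between them are not swapped. */
        struct region regions[] = {
            {   0 * PAGE, 2 * PAGE },
            {  64 * PAGE, 3 * PAGE },
            { 128 * PAGE, 1 * PAGE },
        };
        unsigned swap_off = 0;                /* next free position on the swap device */

        for (int r = 0; r < (int)(sizeof regions / sizeof regions[0]); r++)
            for (unsigned va = regions[r].start;
                 va < regions[r].start + regions[r].len; va += PAGE) {
                printf("virtual page %3uK -> swap offset %uK\n",
                       va / PAGE, swap_off / PAGE);
                swap_off += PAGE;             /* swap space is filled contiguously */
            }

        printf("total swap space used: %uK\n", swap_off / PAGE);
        return 0;
    }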

In theory, all of the memory space occupied by a process, including its private address space and kernel stack, can be swapped out, although the kernel may temporarily lock a region in memory for the duration of a critical operation. In practice, however, the kernel does not swap out the part of the process's address space that contains the address translation tables of the process. Practical considerations also dictate the conditions under which a process may swap itself out or demand that another process be swapped out.

Swapping out during the fork system call

The description of the fork system call assumed that the parent process has enough memory at its disposal to create the child's context. If this condition is not met, the kernel swaps the process out without freeing the memory occupied by its (the parent's) copy. When the swap-out completes, the child process resides on the swap device; the parent places the child in the "ready to run" state and returns to user mode. Since the child is "ready to run", the swapper will eventually load it into memory, where the kernel will schedule it; the child will then complete its part of the fork system call and return to user mode.

Expansion swapping

If a process needs more physical memory, either because its stack has grown or because it has called brk, and the demand exceeds the available memory, the kernel performs an expansion swap of the process. It reserves space on the swap device large enough to hold the process, including the newly requested space. It then adjusts the address translation tables of the process to account for the additional virtual space, but without allocating physical memory for it (since none is available). Finally, the kernel swaps the process out, completing the swap-out in the usual way and zeroing the newly allocated space on the device. When the kernel later loads the process back into memory, physical space is allocated according to the new state of the address translation tables. When the process resumes, it already has enough memory at its disposal.

Swapping processes in

Fig. 2. Rearranging the memory map

Process 0 (the swapper) is the only process that swaps other processes into memory from the swap devices. After system initialization, the swapper works to fulfil its single function: it loads processes into memory and, if there is not enough room, swaps out some of the processes that are there. If the swapper has nothing to do (for example, no processes are waiting to be loaded into memory) or is unable to do its job (none of the processes can be swapped out), it suspends itself; the kernel wakes it up periodically. The kernel schedules the swapper just as it does other processes, although with a higher priority, and the swapper runs only in kernel mode.

The swapper does not invoke system calls, using only internal kernel functions in its work; it is the archetype of all kernel processes.

When the swapper wakes up to swap processes in, it examines all processes in the state "ready to run but swapped out" and selects the one that has been swapped out the longest (see Figure 2). If there is enough free memory, the swapper loads the selected process, performing the swap-out operation in reverse: it allocates physical memory, reads the process in from the swap device, and frees the space on the device.

If the swap-in succeeds, the swapper again scans the set of swapped-out but ready-to-run processes, looking for the next process to bring into memory, and repeats the sequence of actions above. Eventually one of the following situations arises:

  • No process on the swap device is ready to run. The swapper suspends itself until a process on the swap device becomes ready to run, or until the kernel swaps out a runnable process.
  • The swapper finds a process ready to be swapped in, but the system does not have enough memory to hold it. The swapper tries to swap another process out and, if successful, restarts the swapping algorithm, continuing to look for processes to swap in.

If the swapper has to swap a process out, it examines every process in memory. Zombie (defunct) processes are not suitable for swapping out, since they occupy no physical memory; processes that are locked in memory, for example while performing operations on regions, cannot be swapped out either. The kernel prefers to swap out sleeping processes, because ready-to-run processes are more likely to be scheduled soon. The decision about which process to swap out is based on its priority and on how long it has been in memory. If there are no sleeping processes in memory, the choice among the ready-to-run processes depends on the nice value assigned to a process and on how long the process has been in memory.

A ready-to-run process must be resident in memory for at least 2 seconds before it can be swapped out, and a process to be swapped in must have resided on the swap device for at least 2 seconds. If the swapper cannot find any process suitable for swapping out, or any process suitable for swapping in, or any process that has been in memory for at least 2 seconds, it suspends itself, because it needs to load a process into memory and there is no room to hold it. In this situation the clock wakes the swapper up once every second. The kernel also wakes the swapper when another process goes to sleep, since that process may be more suitable for swapping out than those considered previously. If the swapper has freed memory, or if it was suspended because it could not do so, it resumes its work by restarting the swapping algorithm from the beginning, again attempting to load the processes that are waiting to run.
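
A simplified C sketch of the swap-in selection just described (the process table, its field names and the time values are assumptions chosen for illustration, not the actual kernel structures):

    #include <stdio.h>

    #define MIN_RESIDENCE 2            /* seconds a process must spend on the swap device */

    struct proc {
        int  pid;
        int  ready_to_run;             /* 1 if the process is in the "ready to run" state */
        int  swapped_out;              /* 1 if its image resides on the swap device       */
        long swapped_at;               /* time at which it was swapped out                */
    };

    /* Pick the ready-to-run, swapped-out process that has been on the swap
     * device the longest (and at least MIN_RESIDENCE seconds); return its
     * index, or -1 if there is no candidate. */
    static int choose_swap_in(struct proc *tab, int n, long now)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!tab[i].ready_to_run || !tab[i].swapped_out)
                continue;
            if (now - tab[i].swapped_at < MIN_RESIDENCE)
                continue;
            if (best < 0 || tab[i].swapped_at < tab[best].swapped_at)
                best = i;
        }
        return best;
    }

    int main(void)
    {
        struct proc tab[] = {
            { 10, 1, 1, 100 },         /* swapped out at t=100             */
            { 11, 1, 1, 105 },         /* swapped out later                */
            { 12, 0, 1,  90 },         /* swapped out but sleeping: skip   */
            { 13, 1, 0,   0 },         /* already in memory: skip          */
        };
        int i = choose_swap_in(tab, 4, 110);
        if (i >= 0)
            printf("swap in pid %d (on the swap device for %ld s)\n",
                   tab[i].pid, 110 - tab[i].swapped_at);
        return 0;
    }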

Analysis of the swapping algorithm

The swapper chooses processes to swap in according to how long they have spent on the swap device. Another criterion could be to load a higher-priority process in preference to the other ready-to-run processes, since such a process is more desirable to run. Practice has shown that this approach "somewhat" improves system throughput under heavy load.

The algorithm for selecting a process to swap out in order to free the required amount of space has, however, more serious flaws. First, the swapper swaps a process out on the basis of its priority, its memory residence time and its nice value. Although it swaps processes out for the sole purpose of freeing memory for an incoming process, it may swap out a process that does not free the required amount of space. For example, if the swapper tries to load a process of 1 MB into memory and the system has no free memory, it is futile to swap out a process that occupies only 2 KB of memory. An alternative strategy would be to swap out groups of processes, provided that they free enough space to hold the incoming process. Experiments on a PDP 11/23 showed that under heavy load such a strategy can increase system throughput by almost 10 percent.

Second, if the swapper suspended itself because there was not enough memory for the process to be swapped in, then when it resumes it again chooses a process to load into memory, even though it made that choice earlier. The reason for this behaviour is that in the meantime other swapped-out processes may have become ready to run and be more suitable for loading into memory than the previously selected process. That, however, is small consolation to the previously selected process, which is still trying to get into memory.

Third, if the swapper selects a "ready to run" process to swap out, it is possible that the process never actually ran after it was last loaded into memory.


One more danger should be mentioned. If, when trying to swap a process out, no free space is found on the swap device, a deadlock may arise in which: all the processes in main memory are asleep, all ready-to-run processes are swapped out, there is no room on the swap device for new processes, and there is no room in main memory for incoming processes.

Thrashing

Most programs do not use all of the memory allocated to them at the same time, but only a part of it, determined by the instructions currently executing and the data they require. If a program obeys the principle of locality, the part of memory in use may be much smaller than the memory allocated to the program. The size of this used part reflects a steady state in memory consumption and is often called the working set.

Virtual memory systems work efficiently when the sum of the working sets of all processes does not exceed the amount of physical RAM. In that case the time required to handle page faults has little effect on performance. However, a program working with large data structures may have a working set so large that the paging system cannot serve it effectively. This leads to a continuous stream of page faults and a sharp drop in performance. The situation is called thrashing: a page is swapped out and is then referenced again, causing frequent page faults.
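
On a Unix-like system the effect can be observed with a rough C sketch such as the one below; the buffer size is an assumption and must exceed the available RAM (and enough swap space must be configured), and running it may make the machine temporarily unresponsive.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        size_t size = (size_t)8 * 1024 * 1024 * 1024;   /* assumed: larger than physical RAM */
        char  *buf  = malloc(size);
        if (!buf) { perror("malloc"); return 1; }

        /* Touch pages in a pseudo-random order; if the buffer does not fit in RAM,
         * each touch is likely to force one page in and another page out. */
        for (long i = 0; i < 1000000; i++)
            buf[((size_t)rand() * 4096) % size] ^= 1;

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("major page faults (required I/O): %ld\n", ru.ru_majflt);
        printf("minor page faults (no I/O):       %ld\n", ru.ru_minflt);
        free(buf);
        return 0;
    }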

An interesting property of thrashing is that the number of page faults grows only slowly as the working set grows, up to a certain critical point. After the critical point is reached, the number of page faults increases dramatically and most of the processing power is spent handling them. To eliminate thrashing, the user can take the following actions:

  • Increase the amount of RAM.
  • Reduce the number of simultaneously running programs.
  • Change process priorities so that some of them are completed faster and free some resources.

Disadvantages and ways to overcome them

When virtual memory keeps data on external storage devices (for example, hard drives), as usually happens, memory access is slowed down compared with access to RAM.

The use of swap space on SSD drives, which have a limited number of write cycles, is likely to reduce their lifespan. In 32-bit Windows XP, Vista and 7, memory beyond the third gigabyte can be used for the swap file with the help of third-party programs that create RAM drives held in that memory.

Linux supports a similar mechanism, zswap, which keeps swapped-out pages in memory in compressed form.

Guidelines for placement of the paging file

One long-recommended way to choose the size of the swap file (or partition) is to make it a multiple of the amount of RAM, with a multiplier between 0.5 and 2 or 3. If the computer has more than one hard disk, then for faster access the paging file should be placed on the physical disk least loaded with read/write requests. A good choice is the physical disk with the highest read/write speed. In Windows, reading from small sections is faster on FAT32 than on NTFS (New Technology File System), but because NTFS is more resilient to failures and modern hard disks have large capacities, FAT32 partitions are now rarely used. With a significant amount of RAM (2-4 GB) on a computer running the most popular operating systems of the GNU/Linux family, swapping can often be disabled completely.
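
The rule of thumb can be expressed as a small C sketch; the 0.5x-2x multipliers come from the guideline above, and _SC_PHYS_PAGES is a common but non-standard sysconf name, so this is a Linux/glibc-oriented example.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES);   /* number of physical pages     */
        long psize = sysconf(_SC_PAGE_SIZE);    /* size of one page in bytes    */
        if (pages < 0 || psize < 0) { perror("sysconf"); return 1; }

        double ram_gib = (double)pages * psize / (1024.0 * 1024.0 * 1024.0);
        printf("physical RAM:   %.1f GiB\n", ram_gib);
        printf("suggested swap: %.1f - %.1f GiB (0.5x - 2x RAM, per the rule of thumb)\n",
               0.5 * ram_gib, 2.0 * ram_gib);
        return 0;
    }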

Security of the swap file

Confidential information used in the computer system can often be extracted from the swap file (or partition). Therefore, when dealing with sensitive data, the swap space is usually wiped, for example with the sswap utility from the secure deletion toolset. In addition, many programs that work with valuable information or with encryption can selectively prevent parts of their memory from being swapped out. In Linux, the swap file or partition can be encrypted (for example, the Ubuntu distribution does this automatically if the user selects home directory encryption during OS installation). Such a solution increases CPU load, but protects confidential information even in the case of a sudden power failure. Using a swap file can also lead to infection of the computer by certain viruses, since there is a vulnerability that allows executable code to be pushed out to virtual memory and modified through direct access to the hard drive.
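
For example, on POSIX systems a program handling a key or a password can ask the kernel to keep that buffer in physical memory with mlock(), so that it never reaches the swap file; the following is a minimal sketch, and the buffer and its contents are purely illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        static char secret[4096];                 /* buffer that must never reach the swap device */

        if (mlock(secret, sizeof secret) != 0) {  /* pin the pages in physical memory */
            perror("mlock");
            return 1;
        }

        /* ... use the sensitive data ... */
        strcpy(secret, "example key material");

        memset(secret, 0, sizeof secret);         /* wipe before unlocking              */
        munlock(secret, sizeof secret);           /* allow the pages to be paged again  */
        return 0;
    }

The amount of memory a process may lock is limited by the RLIMIT_MEMLOCK resource limit, so programs normally lock only the small buffers that actually hold secrets.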

Unix

On Unix and Unix-like systems, swap is usually placed on a separate disk partition, which used to give faster access than a swap file located on an ordinary partition. On Linux kernels 2.6 and later, a swap file performs no worse than a swap partition.

Example of creating a swap file on GNU/Linux:

1. dd if=/dev/zero of=/swap bs=1024 count=128K

2. mkswap /swap

3. sync

4. swapon /swap
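
If the swap file is to be activated automatically at boot, a corresponding line is normally also added to /etc/fstab, for example: /swap none swap sw 0 0 (the path /swap matches the commands above).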

Oracle Solaris can use a ZFS (Zettabyte File System) volume as a swap partition:

1. zfs create -V 2G pool/swap

2. swap -a /dev/zvol/dsk/pool/swap

Besides its normal use, some GNU/Linux distributions use the swap file or partition for hibernation ("sleep mode", "suspend to disk", ACPI state S4). To support this mode, the recommended size of the swap file or partition is the amount of RAM increased by 10-15%.

Microsoft Windows

Windows NT uses a paging file called pagefile.sys; by default the system creates it in the root directory. The user can subsequently control the size and placement of the paging file, for example through Control Panel, System. The paging file need not be a single file: it can be a group of files spread across different hard disks and partitions.

In the Windows 9x family the swap file is called win386.swp and is located in the Windows folder. By editing the System.INI file it can be moved to the root directory, for shared use with Windows NT. Also, starting with Windows Vista, it is possible to create a dedicated swap partition, similar to those used in UNIX-like systems.

See also

Wikisource has original text related to this article: The Paging Game

References

  • Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C. (2014), Operating Systems: Three Easy Pieces (Chapter: Paging) (PDF), Arpaci-Dusseau Books
  • Deitel, Harvey M. (1983). An Introduction to Operating Systems. Addison-Wesley. pp. 181, 187.
  • Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology. 11. CRC Press. p. 433
  • Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology. 11. CRC Press. p. 442.
  • Cragon, Harvey G. (1996). Memory Systems and Pipelined Processors. Jones and Bartlett Publishers. p. 109.
  • Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Virtual memory systems". Encyclopedia of computer science and technology. 14. CRC Press. p. 32.