Virtual memory compression

Virtual memory compression (also referred to as RAM compression and memory compression) is a memory management technique that utilizes data compression to reduce the size or number of paging requests to and from the auxiliary storage.[1] In a virtual memory compression system, pages to be paged out of virtual memory are compressed and stored in physical memory, which is usually random-access memory (RAM), or sent, still compressed, to auxiliary storage such as a hard disk drive (HDD) or solid-state drive (SSD). In both cases the virtual memory range whose contents have been compressed is marked inaccessible, so that attempts to access compressed pages trigger page faults and a reversal of the process (retrieval from auxiliary storage and decompression). The footprint of the data being paged is reduced by the compression process; in the first instance, the freed RAM is returned to the available physical memory pool, while the compressed portion is kept in RAM. In the second instance, the compressed data is sent to auxiliary storage, but the resulting I/O operation is smaller and therefore takes less time.[2][3]
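
The basic mechanics can be sketched as follows; this is a minimal Python model, not any particular implementation, and zlib merely stands in for the faster LZ-class codecs real systems use. Paging a page out compresses it and keeps only the compressed bytes; a later access (the page fault) reverses the process.

```python
import zlib

PAGE_SIZE = 4096

class CompressedPageCache:
    """Toy model of an in-RAM compressed page store."""

    def __init__(self):
        self.store = {}  # virtual page number -> compressed bytes

    def page_out(self, page_no, data):
        # Compress the page and keep only the compressed bytes; the
        # difference (PAGE_SIZE - len(blob)) is what a real system
        # would return to the physical memory pool.
        blob = zlib.compress(data)
        self.store[page_no] = blob

    def page_in(self, page_no):
        # Called from the page-fault handler: reverse the process.
        return zlib.decompress(self.store.pop(page_no))

cache = CompressedPageCache()
page = (b"struct { int id; void *next; };" * 137)[:PAGE_SIZE]
cache.page_out(7, page)
assert cache.page_in(7) == page
```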

In some implementations, including zswap, zram and Helix Software Company’s Hurricane, the entire process is implemented in software. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local cache and RAM.

Virtual memory compression is distinct from garbage collection (GC) systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation and improving efficiency. Virtual memory compression is also distinct from context switching systems, such as Connectix's RAM Doubler (though it also did online compression) and Apple OS 7.1, in which inactive processes are suspended and then compressed as a whole.[4]

Types

There are two general types of virtual memory compression: (1) sending compressed pages to a swap file in main memory, possibly with a backing store in auxiliary storage,[1][5][6] and (2) storing compressed pages side-by-side with uncompressed pages.[1]

The first type (1) usually uses some sort of LZ-class dictionary compression algorithm, such as LZO or LZ4,[6][5] to compress the pages being swapped out. Once compressed, they are either stored in a swap file in main memory, or written to auxiliary storage, such as a hard disk.[6][5] A two-stage process can be used instead, in which there is both a backing store in auxiliary storage and a swap file in main memory, and pages evicted from the in-memory swap file are written to the backing store with a greatly reduced bandwidth requirement due to the compression. This last scheme leverages the benefits of both previous methods: fast in-memory data access, a large increase in the total amount of data that can be swapped out, and a reduced bandwidth requirement for data written to auxiliary storage.[6][5][1]
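
The two-stage scheme might look like the following sketch, where the names (TwoStageSwap, mem_budget, backing_store) are hypothetical and zlib again stands in for a fast LZ-class codec. Each page is compressed once; eviction writes the already-compressed bytes, so the disk transfer is proportionally smaller than a raw page.

```python
import zlib
from collections import OrderedDict

class TwoStageSwap:
    def __init__(self, mem_budget):
        self.mem_budget = mem_budget    # bytes of RAM reserved for the swap area
        self.mem_used = 0
        self.in_memory = OrderedDict()  # page number -> compressed bytes, oldest first
        self.backing_store = {}         # stand-in for auxiliary storage

    def page_out(self, page_no, data):
        blob = zlib.compress(data)      # compressed once, reused by both stages
        self.in_memory[page_no] = blob
        self.mem_used += len(blob)
        while self.mem_used > self.mem_budget:
            # Evict the oldest page, still compressed: the disk write is
            # len(old_blob) bytes instead of a full page.
            old_no, old_blob = self.in_memory.popitem(last=False)
            self.backing_store[old_no] = old_blob
            self.mem_used -= len(old_blob)

    def page_in(self, page_no):
        blob = self.in_memory.pop(page_no, None)
        if blob is not None:
            self.mem_used -= len(blob)              # fast path: no disk I/O at all
        else:
            blob = self.backing_store.pop(page_no)  # slow path: small, compressed read
        return zlib.decompress(blob)
```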

One of the most commonly used classes of algorithms for the second type (2), the WK class of compression algorithms, takes advantage of regularities present in in-memory data such as pointers and integers.[1] Specifically, in target code generated by most high-level programming languages, both integers and pointers are often present in records whose elements are word-aligned. Furthermore, the values stored in integers are usually small, and pointers tend to point to nearby locations. Additionally, common data patterns, such as a word of all zeroes, can be encoded in the compressed output by a very small code. Exploiting these regularities, the WK class of algorithms uses a very small dictionary (16 entries in the case of WKdm) to achieve up to a 2:1 compression ratio while achieving much greater speed and lower overhead than LZ-class dictionary compression schemes.[1]
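
A highly simplified model of a WK-style coder is shown below. It keeps the 16-entry recency dictionary and the four word classes (all-zero word, exact dictionary hit, partial hit on the upper 22 bits, miss), but it emits a readable tag stream rather than the packed bit fields that real WKdm produces; the hash function and widths here are illustrative only.

```python
DICT_SIZE = 16  # WKdm uses a 16-entry recency dictionary

def wk_model_compress(words):
    """Classify each 32-bit word the way a WK-style coder would."""
    dictionary = [0] * DICT_SIZE
    out = []
    for w in words:
        slot = (w >> 10) % DICT_SIZE        # illustrative hash on the upper bits
        if w == 0:
            out.append(("ZERO",))           # all-zero word: tiny code
        elif dictionary[slot] == w:
            out.append(("HIT", slot))       # exact match: tag plus 4-bit index
        elif dictionary[slot] >> 10 == w >> 10:
            out.append(("PARTIAL", slot, w & 0x3FF))  # upper 22 bits match: emit low 10
            dictionary[slot] = w
        else:
            out.append(("MISS", w))         # no regularity: emit the full word
            dictionary[slot] = w
    return out

# Word-aligned records with small integers and nearby pointers compress well:
print(wk_model_compress([0, 0, 5, 6, 0x8000_0400, 0x8000_0404, 5]))
```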

Benefits

By reducing the I/O activity caused by paging requests, virtual memory compression can produce overall performance improvements. The degree of performance improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, speed of the I/O channel, speed of the physical memory, and the compressibility of the physical memory contents.

On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50%.[7][8]

In some situations, such as in embedded devices, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate, where otherwise virtual memory would have to be disabled. This allows the system to run certain software which would otherwise be unable to operate in an environment with no virtual memory.[9]

Shortcomings

Low compression ratios

One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. Program code and much of the data held in physical memory is often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. Various studies show typical data compression ratios ranging from 2:1 to 2.5:1 for program data,[10][11] similar to typically achievable compression ratios with disk compression.[9]

Background I/O

In order for virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must be improved when compared to the uncompressed equivalent. Thus, the additional amount of processing introduced by the compression must not increase the overall latency. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.[9]
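
A simple break-even model makes this trade-off concrete: a raw page-out costs one I/O, while a compressed page-out costs the compression time plus an I/O transfer shrunk by the compression ratio. The numbers below are illustrative, not measurements.

```python
def paging_speedup(t_io_ms, t_comp_ms, ratio):
    # Raw page-out: t_io_ms.  Compressed page-out: compression time
    # plus an I/O transfer shrunk by the compression ratio.
    return t_io_ms / (t_comp_ms + t_io_ms / ratio)

print(paging_speedup(t_io_ms=8.0, t_comp_ms=0.05, ratio=2.0))  # ~1.98x: slow HDD, compression wins
print(paging_speedup(t_io_ms=0.1, t_comp_ms=0.05, ratio=2.0))  # 1.0x: fast SSD, gain fully consumed
```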

Increased thrashing

The physical memory used by a compression system reduces the amount of physical memory available to processes that a system runs, which may result in increased paging activity and reduced overall effectiveness of virtual memory compression. This relationship between the paging activity and available physical memory is roughly exponential, meaning that reducing the amount of physical memory available to system processes results in an exponential increase of paging activity.[12][13]

In circumstances where the amount of free physical memory is low and paging is fairly prevalent, any performance gains provided by the compression system (compared to paging directly to and from auxiliary storage) may be offset by an increased page fault rate that leads to thrashing and degraded system performance. In the opposite state, where enough physical memory is available and paging activity is low, compression may not impact performance enough to be noticeable. The middle ground between these two circumstances (low RAM with high paging activity on one side, plenty of RAM with low paging activity on the other) is where virtual memory compression may be most useful. However, the more compressible the program data is, the more pronounced the performance improvements are, since less physical memory is needed to hold the compressed data.

For example, in order to maximize the use of a compressed-pages cache, Helix Software Company's Hurricane 2.0 provides a user-configurable compression rejection threshold. By compressing the first 256 to 512 bytes of a 4 KiB page, this virtual memory compression system determines whether the configured compression level threshold can be achieved for a particular page; if it can, the rest of the page is compressed and retained in a compressed cache, and otherwise the page is sent to auxiliary storage through the normal paging system. The default setting for this threshold is an 8:1 compression ratio.[14][4]
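
Such a rejection threshold can be sketched as follows. The function and parameter names are hypothetical, zlib stands in for the Lempel–Ziv–Stac codec Hurricane actually used, and the probe size and 8:1 default are taken from the description above.

```python
import zlib

PAGE_SIZE = 4096
PROBE_BYTES = 512     # test-compress only the first 256-512 bytes
REJECT_RATIO = 8.0    # Hurricane 2.0's default 8:1 threshold

def place_page(page, compressed_cache, send_to_disk):
    probe = zlib.compress(page[:PROBE_BYTES])
    if PROBE_BYTES / len(probe) >= REJECT_RATIO:
        # The probe suggests the page compresses well enough: compress
        # the whole page and keep it in the in-RAM compressed cache.
        compressed_cache.append(zlib.compress(page))
    else:
        # Not worth compressing: hand it to the normal paging path.
        send_to_disk(page)

ram_cache = []
place_page(bytes(PAGE_SIZE), ram_cache, send_to_disk=lambda p: None)
assert len(ram_cache) == 1   # an all-zero page easily clears 8:1
```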

CPU utilization overhead

In hardware implementations, the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price-to-performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.

Prioritization

In a typical virtual memory implementation, paging happens on a least-recently-used basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only, and is therefore never paged out; instead, code is simply discarded and re-loaded from the program's auxiliary storage file if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.

Compression using quantization

Accelerator designers exploit quantization to reduce the bitwidth of values and reduce the cost of data movement. However, any value that does not fit in the reduced bitwidth results in an overflow (such values are referred to as outliers), so accelerators use quantization only for applications that are tolerant of overflows. In most applications the rate of outliers is low and values often lie within a narrow range,[15] providing the opportunity to exploit quantization in general-purpose processors. However, a software implementation of quantization in general-purpose processors has three problems. First, the programmer has to manually implement conversions and the additional instructions that quantize and dequantize values, imposing programming effort and performance overhead. Second, to cover outliers, the bitwidth of the quantized values often becomes greater than or equal to that of the original values. Third, the programmer has to use standard bitwidths; otherwise, extracting non-standard bitwidths (i.e., 1–7, 9–15, and 17–31 bits) for representing narrow integers exacerbates the overhead of software-based quantization. Hardware support in the memory hierarchy of general-purpose processors can solve these problems. It allows values to be represented by few, flexible numbers of bits while storing outliers in their original format in a separate space, preventing any overflow, and it minimizes metadata and the overhead of locating quantized values through a software-hardware interaction that transfers quantization parameters and data layout to the hardware. As a result, transparent hardware-based quantization has three advantages over cache compression techniques: (i) less metadata, (ii) a higher compression ratio for floating-point values and for cache blocks containing multiple data types, and (iii) lower overhead for locating the compressed blocks.[15]
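
The core idea, storing narrow values at a reduced bitwidth while keeping outliers at full width in a separate table so nothing overflows, can be modelled in a few lines of software (the hardware proposal does this transparently in the memory hierarchy; the names below are illustrative):

```python
def quantize(values, bits):
    """Pack values into `bits`-bit lanes; keep misfits as outliers."""
    limit = 1 << bits
    lanes, outliers = [], {}
    for i, v in enumerate(values):
        if 0 <= v < limit:
            lanes.append(v)      # fits: stored in `bits` bits
        else:
            lanes.append(0)      # placeholder in the narrow lane
            outliers[i] = v      # original full-width value, no overflow
    return lanes, outliers

def dequantize(lanes, outliers):
    return [outliers.get(i, v) for i, v in enumerate(lanes)]

vals = [3, 1, 7, 500_000, 2, 6]
lanes, outliers = quantize(vals, bits=4)   # one outlier out of six values
assert dequantize(lanes, outliers) == vals
```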

History

Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and external storage have plummeted due to Moore's Law and improved RAM interfaces such as DDR3, reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, have made virtual memory compression more attractive.

Origins

Acorn Computers' Unix variant, RISC iX, was supplied as the primary operating system for its R140 workstation released in 1989.[16] RISC iX provided support for demand paging of compressed executable files. However, the principal motivation for providing compressed executable files was to accommodate a complete Unix system in a hard disk of relatively modest size. Compressed data was not paged out to disk under this scheme.[17][18]

Paul R. Wilson proposed compressed caching of virtual memory pages in 1990, in a paper circulated at the ACM OOPSLA/ECOOP '90 Workshop on Garbage Collection ("Some Issues and Strategies in Heap Management and Memory Hierarchies"), and appearing in ACM SIGPLAN Notices in January 1991.[19]

Helix Software Company pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.[2] In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices.[3] However, Helix did not release a product incorporating virtual memory compression until July 1996 and the release of Hurricane 2.0, which used the Stac Electronics Lempel–Ziv–Stac compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.[14]

In 1995, RAM cost nearly $50 per megabyte, and Microsoft's Windows 95 listed a minimum requirement of 4 MB of RAM.[20] Due to the high RAM requirement, several programs were released which claimed to use compression technology to gain “memory”. Most notorious was the SoftRAM program from Syncronys Softcorp. SoftRAM was exposed as fake because it did not perform any compression at all.[21][9] Other products, including Hurricane and MagnaRAM, included virtual memory compression, but implemented only run-length encoding, with poor results, giving the technology a negative reputation.[22]

In its 8 April 1997 issue, PC Magazine published a comprehensive test of the performance enhancement claims of several software virtual memory compression tools. In its testing, PC Magazine found a minimal (5% overall) performance improvement from the use of Hurricane, and none at all from any of the other packages.[22] However, the tests were run on single-core, single-threaded Intel Pentium systems, on which compression directly impacted all system activity.

In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).[23][24] MXT was a stand-alone chip which acted as a CPU cache between the CPU and memory controller. MXT had an integrated compression engine which compressed all data heading to/from physical memory. Subsequent testing of the technology by Intel showed 5–20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.[25]

Recent developments

  • In early 2008, a Linux project named zram (originally called compcache) was released; in a 2013 update, it was incorporated into ChromeOS[26] and Android 4.4.
  • In 2010, IBM released Active Memory Expansion (AME) for AIX 6.1 which implements virtual memory compression.[27]
  • In 2012, some versions of the POWER7+ chip included AME hardware accelerators using the 842 compression algorithm for data compression support, used on AIX, for virtual memory compression.[28] More recent POWER processors continue to support the feature.
  • In December 2012, the zswap project was announced; it was merged into the Linux kernel mainline in September 2013.
  • In June 2013, Apple announced that it would include virtual memory compression in OS X Mavericks, using the Wilson-Kaplan WKdm algorithm.[29][30]
  • A 10 August 2015 "Windows Insider Preview" update for Windows 10 (build 10525) added support for RAM compression.[31]

References

  1. 1.0 1.1 1.2 1.3 1.4 1.5 "The Case for Compressed Caching in Virtual Memory Systems". USENIX Annual Technical Conference. Monterey, California, USA. 1999-06-06. pp. 101–116. https://www.usenix.org/legacy/event/usenix99/full_papers/wilson/wilson.pdf. 
  2. 2.0 2.1 Spilo, Michael L., "Method for increasing the efficiency of a virtual memory system by selective compression of RAM memory contents", US patent 5559978, published 1996-09-24, assigned to Helix Software Co., Inc.
  3. 3.0 3.1 Fabrizio, Daniel & Michael L. Spilo, "Method for caching virtual memory paging and disk input/output requests using off screen video memory", US patent 5875474, published 1999-02-23, assigned to Helix Software Co., Inc.
  4. 4.0 4.1 "Mac Memory Booster Gets an Upgrade". Computerworld (IDG Enterprise) 30 (37): 56. 9 September 1996. ISSN 0010-4841. https://books.google.com/books?id=BUaIcc6lsdwC&pg=PA56. Retrieved 2015-01-12. 
  5. 5.0 5.1 5.2 5.3 Gupta, Nitin. "zram: Compressed RAM-based block devices". The kernel development community. https://www.kernel.org/doc/html/next/admin-guide/blockdev/zram.html. 
  6. 6.0 6.1 6.2 6.3 "zswap". The kernel development community. https://www.kernel.org/doc/html/v4.18/vm/zswap.html. 
  7. "Transparent Memory Compression in Linux". https://events.linuxfoundation.org/sites/events/files/slides/tmc_sjennings_linuxcon2013.pdf. 
  8. "Performance numbers for compcache". https://code.google.com/p/compcache/wiki/Performance. 
  9. 9.0 9.1 9.2 9.3 "Kapitel II.18. Mit STACKER Hauptspeicher 'virtuell' verdoppeln…" (in German). NWDOS-TIPs — Tips & Tricks rund um Novell DOS 7, mit Blick auf undokumentierte Details, Bugs und Workarounds (3 ed.). 1997-07-30. http://www.antonis.de/dos/dos-tuts/mpdostip/html/nwdostip.htm. Retrieved 2012-01-11. 
  10. "Analysis of Compression Algorithms for Program Data". 2014. pp. 6. http://www.ece.umd.edu/~barua/matt-compress-tr.pdf. 
  11. "A very fast algorithm for RAM compression". ACM SIGOPS Operating Systems Review 31 (2): 8. 1996. doi:10.1145/250007.250012. http://dl.acm.org/citation.cfm?id=250012. Retrieved 2015-01-09. 
  12. "Thrashing: Its causes and prevention". Proceedings AFIPS, Fall Joint Computer Conference 33: 918. 1968. http://www.cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/vm-and-gc/thrashing-denning-afips-1968.pdf. Retrieved 2015-01-05. 
  13. "The Compression Cache: Virtual Memory Compression for Handheld Computers". 2000-03-16. http://www.cs.princeton.edu/~mfreed//docs/6.033/compression.pdf. 
  14. 14.0 14.1 "Hurricane 2.0 Squeezes the Most Memory from Your System". PC Magazine. 1996-10-08. https://books.google.com/books?id=7WGv1D0tOVYC&pg=PA48. Retrieved 2015-01-01. 
  15. 15.0 15.1 "An Overflow-free Quantized Memory Hierarchy in General-purpose Processors". In Proceedings of the IEEE International Symposium on Workload Characterization. 2019-11-03. http://www.cs.virginia.edu/~es9bt/papers/iiswc2019_Quantized.pdf. Retrieved 2020-03-16. 
  16. Cox, James (December 1989). "Power to the People". Acorn User: 66-67,69,71. https://archive.org/details/AcornUser089-Dec89/page/n67/mode/2up. Retrieved 6 September 2020. 
  17. Taunton, Mark (1991). "Compressed Executables: An Exercise in Thinking Small". Proceedings of the Summer 1991 USENIX Conference, Nashville, TN, USA, June 1991 (USENIX Association): 385–404. https://archive.org/details/1991-proceedings-tech-conference-nashville/page/385/mode/1up. 
  18. Taunton, Mark (22 January 1991). "Compressed executables". Newsgroup: comp.unix.internals. Usenet: 4743@acorn.co.uk. Retrieved 10 October 2020.
  19. "Some Issues and Strategies in Heap Management and Memory Hierarchies". ACM SIGPLAN Notices 26 (3): 45–52. 1991. doi:10.1145/122167.122173. 
  20. "Windows 95 Installation Requirements". Microsoft. http://support.microsoft.com/kb/138349/en-us. 
  21. "SoftRAM Under Scruitny". PC Magazine. 1996-01-23. https://books.google.com/books?id=XcEKP0ml18EC&pg=PA34. Retrieved 2015-01-01. 
  22. 22.0 22.1 "Performance Enhancers". PC Magazine. 1997-04-08. https://books.google.com/books?id=8RSHdk84u50C&pg=RA1-PA165. Retrieved 2015-01-01. 
  23. "IBM Research Breakthrough Doubles Computer Memory Capacity". IBM. 2000-06-26. http://www-03.ibm.com/press/us/en/pressrelease/1653.wss. 
  24. "Memory eXpansion Technologies". IBM. http://researcher.watson.ibm.com/researcher/view_group_pubs.php?grp=2917. 
  25. "An Evaluation of Memory Compression Alternatives". Intel Corporation. 2003-02-01. http://www.kkant.net/papers/caecw.doc. 
  26. "CompCache". Google code. https://code.google.com/p/compcache/. 
  27. "AIX 6.1 Active Memory Expansion". IBM. https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101633. 
  28. "IBM Power Systems Hardware Deep Dive". IBM. http://www-05.ibm.com/cz/events/febannouncement2012/pdf/power_architecture.pdf. 
  29. "OS X 10.9 Mavericks: The Ars Technica Review". 22 October 2013. https://arstechnica.com/apple/2013/10/os-x-10-9/17/#compressed-memory. 
  30. "The Case for Compressed Caching in Virtual Memory Systems". https://www.usenix.org/legacy/publications/library/proceedings/usenix01/cfp/wilson/wilson_html/acc.html. 
  31. "Announcing Windows 10 Insider Preview Build 10525". Blogging Windows. Microsoft. 2015-08-18. http://blogs.windows.com/bloggingwindows/2015/08/18/announcing-windows-10-insider-preview-build-10525/.