.. _frontswap:

=========
Frontswap
=========

Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.

(Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
and the only necessary changes to the core kernel for transcendent memory;
all other supporting code -- the "backends" -- is implemented as drivers.
See the LWN.net article `Transcendent memory in a nutshell`_
for a detailed overview of frontswap and related kernel parts.)

.. _Transcendent memory in a nutshell: https://lwn.net/Articles/454795/

Frontswap is so named because it can be thought of as the opposite of
a "backing" store for a swap device.  The storage is assumed to be
a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
to the requirements of transcendent memory (such as Xen's "tmem", or
in-kernel compressed memory, aka "zcache", or future RAM-like devices);
this pseudo-RAM device is not directly accessible or addressable by the
kernel and is of unknown and possibly time-varying size.  The driver
links itself to frontswap by calling frontswap_register_ops to set the
frontswap_ops funcs appropriately and the functions it provides must
conform to certain policies as follows:

An "init" prepares the device to receive frontswap pages associated
with the specified swap device number (aka "type").  A "store" will
copy the page to transcendent memory and associate it with the type and
offset associated with the page.  A "load" will copy the page, if found,
from transcendent memory into kernel memory, but will NOT remove the page
from transcendent memory.  An "invalidate_page" will remove the page
from transcendent memory and an "invalidate_area" will remove ALL pages
associated with the swap type (e.g., like swapoff) and notify the "device"
to refuse further stores with that swap type.
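To make that contract concrete, here is a minimal, hypothetical backend
skeleton.  It is only a sketch: the exact frontswap_ops layout and the
frontswap_register_ops() prototype varied between kernel versions (the form
below assumes the 3.x-era five-callback structure), and this backend simply
refuses every store, which the policies above explicitly allow::

    #include <linux/module.h>
    #include <linux/frontswap.h>

    static void example_init(unsigned type)
    {
        /* Prepare to receive pages for this swap device number ("type"). */
    }

    static int example_store(unsigned type, pgoff_t offset, struct page *page)
    {
        /*
         * A real backend would copy the page into its own storage here and
         * return 0.  Returning -1 rejects the page, which is always
         * permitted; the kernel then writes it to the swap device.
         */
        return -1;
    }

    static int example_load(unsigned type, pgoff_t offset, struct page *page)
    {
        /* Nothing was ever accepted, so nothing can be loaded. */
        return -1;
    }

    static void example_invalidate_page(unsigned type, pgoff_t offset)
    {
        /* Drop the copy of the page at (type, offset), if any. */
    }

    static void example_invalidate_area(unsigned type)
    {
        /* Drop all pages of this swap type, e.g. at swapoff time. */
    }

    static struct frontswap_ops example_ops = {
        .init            = example_init,
        .store           = example_store,
        .load            = example_load,
        .invalidate_page = example_invalidate_page,
        .invalidate_area = example_invalidate_area,
    };

    static int __init example_frontswap_init(void)
    {
        frontswap_register_ops(&example_ops);
        return 0;
    }
    module_init(example_frontswap_init);
    MODULE_LICENSE("GPL");

A real backend (zcache, the Xen tmem driver, etc.) fills in the store and
load paths with its actual storage mechanism; the registration shape stays
the same.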
Once a page is successfully stored, a matching load on the page will normally
succeed.  So when the kernel finds itself in a situation where it needs
to swap out a page, it first attempts to use frontswap.  If the store returns
success, the data has been successfully saved to transcendent memory and
a disk write is avoided; if the data is later read back, a disk read is
avoided as well.  If a store returns failure, transcendent memory has
rejected the data, and the page can be written to swap as usual.

If a backend chooses, frontswap can be configured as a "writethrough
cache" by calling frontswap_writethrough().  In this mode, the reduction
in swap device writes is lost (and also a non-trivial performance advantage)
in order to allow the backend to arbitrarily "reclaim" space used to
store frontswap pages to more completely manage its memory usage.

Note that if a page is stored and the page already exists in transcendent
memory (a "duplicate" store), either the store succeeds and the data is
overwritten, or the store fails AND the page is invalidated.  This ensures
stale data may never be obtained from frontswap.

If properly configured, monitoring of frontswap is done via debugfs in
the ``/sys/kernel/debug/frontswap`` directory.  The effectiveness of
frontswap can be measured (across all swap devices) with:

``failed_stores``
        how many store attempts have failed

``loads``
        how many loads were attempted (all should succeed)

``succ_stores``
        how many store attempts have succeeded

``invalidates``
        how many invalidates were attempted

A backend implementation may provide additional metrics.
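For example, the store acceptance rate can be derived from two of these
counters.  The sketch below is a userspace illustration only; it assumes
debugfs is mounted at the usual ``/sys/kernel/debug`` location and that the
counter files carry exactly the names listed above::

    #include <stdio.h>

    /* Read one frontswap debugfs counter; returns -1 on any error. */
    static long read_counter(const char *name)
    {
        char path[256];
        long val;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/kernel/debug/frontswap/%s", name);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
        return val;
    }

    int main(void)
    {
        long succ = read_counter("succ_stores");
        long failed = read_counter("failed_stores");
        long total;

        if (succ < 0 || failed < 0)
            return 1;
        total = succ + failed;
        printf("stores accepted: %ld of %ld (%.1f%%)\n", succ, total,
               total ? 100.0 * succ / total : 0.0);
        return 0;
    }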
FAQ
===

* Where's the value?

When a workload starts swapping, performance falls through the floor.
Frontswap significantly increases performance in many such workloads by
providing a clean, dynamic interface to read and write swap pages to
"transcendent memory" that is otherwise not directly addressable by the
kernel.  This interface is ideal when data is transformed to a different form
and size (such as with compression) or secretly moved (as might be
useful for write-balancing for some RAM-like devices).  Swap pages (and
evicted page-cache pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk "pseudo-RAM device", and the
frontswap (and cleancache) interface to transcendent memory provides a nice
way to read and write -- and indirectly "name" -- the pages.

Frontswap -- and cleancache -- provide, with a fairly small impact on the
kernel, a huge amount of flexibility for more dynamic, flexible RAM
utilization in various system configurations:

In the single kernel case, aka "zcache", pages are compressed and
stored in local memory, thus increasing the total anonymous pages
that can be safely kept in RAM.  Zcache essentially trades off CPU
cycles used in compression/decompression for better memory utilization.
Benchmarks have shown little or no impact when memory pressure is
low while providing a significant performance improvement (25%+)
on some workloads under high memory pressure.

"RAMster" builds on zcache by adding "peer-to-peer" transcendent memory
support for clustered systems.  Frontswap pages are locally compressed
as in zcache, but then "remotified" to another system's RAM.  This
allows RAM to be dynamically load-balanced back-and-forth as needed,
i.e. when system A is overcommitted, it can swap to system B, and
vice versa.  RAMster can also be configured as a memory server so
many servers in a cluster can swap, dynamically as needed, to a single
server configured with a large amount of RAM... without pre-configuring
how much of the RAM is available for each of the clients!

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to do
it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OSes are induced to surrender
underutilized RAM (e.g. with "selfballooning"), sudden unexpected
memory pressure may result in swapping; frontswap allows those pages
to be swapped to and from hypervisor RAM (if overall host system memory
conditions allow), thus mitigating the potentially awful performance impact
of unplanned swapping.

A KVM implementation is underway and has been RFC'ed to lkml.  And,
using frontswap, investigation is also underway on the use of NVM as
a memory extension technology.

* Sure there may be performance advantages in some situations, but
  what's the space/time overhead of frontswap?

If CONFIG_FRONTSWAP is disabled, every frontswap hook compiles into
nothingness and the only overhead is a few extra bytes per swapon'ed
swap device.  If CONFIG_FRONTSWAP is enabled but no frontswap "backend"
registers, there is one extra comparison of a global variable against zero
for every swap page read or written.  If CONFIG_FRONTSWAP is enabled
AND a frontswap backend registers AND the backend fails every "store"
request (i.e. provides no memory despite claiming it might),
CPU overhead is still negligible -- and since every frontswap fail
precedes a swap page write-to-disk, the system is highly likely
to be I/O bound and using a small fraction of a percent of a CPU
will be irrelevant anyway.

As for space, if CONFIG_FRONTSWAP is enabled AND a frontswap backend
registers, one bit is allocated for every swap page for every swap
device that is swapon'd.  This is added to the EIGHT bits (which
was sixteen until about 2.6.34) that the kernel already allocates
for every swap page for every swap device that is swapon'd.  (Hugh
Dickins has observed that frontswap could probably steal one of
the existing eight bits, but let's worry about that minor optimization
later.)  For very large swap disks (which are rare) on a standard
4K pagesize, this is 1MB per 32GB swap: 32GB of swap holds roughly
8 million 4K pages, and one bit per page comes to 1MB.

When swap pages are stored in transcendent memory instead of written
out to disk, there is a side effect that this may create more memory
pressure that can potentially outweigh the other advantages.  A
backend, such as zcache, must implement policies to carefully (but
dynamically) manage memory limits to ensure this doesn't happen.

* OK, how about a quick overview of what this frontswap patch does
  in terms that a kernel hacker can grok?

Let's assume that a frontswap "backend" has registered during
kernel initialization; this registration indicates that this
frontswap backend has access to some "memory" that is not directly
accessible by the kernel.  Exactly how much memory it provides is
entirely dynamic and random.

Whenever a swap-device is swapon'd frontswap_init() is called,
passing the swap device number (aka "type") as a parameter.
This notifies frontswap to expect attempts to "store" swap pages
associated with that number.

Whenever the swap subsystem is readying a page to write to a swap
device (cf. swap_writepage()), frontswap_store() is called.  Frontswap
consults with the frontswap backend and if the backend says it does NOT
have room, frontswap_store() returns -1 and the kernel swaps the page
to the swap device as normal.  Note that the response from the frontswap
backend is unpredictable to the kernel; it may choose to never accept a
page, it could accept every ninth page, or it might accept every
page.  But if the backend does accept a page, the data from the page
has already been copied and associated with the type and offset,
and the backend guarantees the persistence of the data.  In this case,
frontswap sets a bit in the "frontswap_map" for the swap device
corresponding to the page offset on the swap device to which it would
otherwise have written the data.
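A simplified sketch of that hook pattern may help; it is not the literal
kernel source, and ``write_to_swap_device()`` here is a hypothetical
stand-in for the normal block I/O path::

    /* Sketch only -- not the actual mm/page_io.c code; assumes the
     * usual mm/swap headers. */
    int swap_writepage(struct page *page, struct writeback_control *wbc)
    {
        if (frontswap_store(page) == 0) {
            /*
             * The backend accepted a copy of the page and frontswap has
             * set the bit in frontswap_map for this (type, offset), so
             * no block I/O is needed for this page.
             */
            set_page_writeback(page);
            unlock_page(page);
            end_page_writeback(page);
            return 0;
        }
        /* The backend refused: write to the real swap device as usual. */
        return write_to_swap_device(page, wbc);  /* hypothetical helper */
    }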
When the swap subsystem needs to swap-in a page (swap_readpage()),
it first calls frontswap_load(), which checks the frontswap_map to
see if the page was earlier accepted by the frontswap backend.  If
it was, the page of data is filled from the frontswap backend and
the swap-in is complete.  If not, the normal swap-in code is
executed to obtain the page of data from the real swap device.

So every time the frontswap backend accepts a page, a swap device write
and (potentially) a later swap device read are replaced by a "frontswap
backend store" and (possibly) a "frontswap backend load", which are
presumably much faster.
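The swap-in side mirrors this.  Again a simplified sketch rather than the
literal kernel source, with ``read_from_swap_device()`` as a hypothetical
stand-in for the block I/O path::

    /* Sketch only -- not the actual mm/page_io.c code; assumes the
     * usual mm/swap headers. */
    int swap_readpage(struct page *page)
    {
        if (frontswap_load(page) == 0) {
            /*
             * frontswap_map showed the backend holds this page and the
             * backend has filled it; the swap-in completes without I/O.
             */
            SetPageUptodate(page);
            unlock_page(page);
            return 0;
        }
        /* Not in frontswap: read from the real swap device as usual. */
        return read_from_swap_device(page);  /* hypothetical helper */
    }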
* Can't frontswap be configured as a "special" swap device that is
  just higher priority than any real swap device (e.g. like zswap,
  or maybe swap-over-nbd/NFS)?

No.  First, the existing swap subsystem doesn't allow for any kind of
swap hierarchy.  Perhaps it could be rewritten to accommodate a hierarchy,
but this would require fairly drastic changes.  Even if it were
rewritten, the existing swap subsystem uses the block I/O layer, which
assumes a swap device is fixed size and any page in it is linearly
addressable.  Frontswap barely touches the existing swap subsystem,
and works around the constraints of the block I/O subsystem to provide
a great deal of flexibility and dynamicity.

For example, the acceptance of any swap page by the frontswap backend is
entirely unpredictable.  This is critical to the definition of frontswap
backends because it grants completely dynamic discretion to the
backend.  In zcache, one cannot know a priori how compressible a page is.
"Poorly" compressible pages can be rejected, and "poorly" can itself be
defined dynamically depending on current memory constraints.

Further, frontswap is entirely synchronous whereas a real swap
device is, by definition, asynchronous and uses block I/O.  The
block I/O layer is not only unnecessary, but may perform "optimizations"
that are inappropriate for a RAM-oriented device, including delaying
the write of some pages for a significant amount of time.  Synchrony is
required to ensure the dynamicity of the backend and to avoid thorny race
conditions that would unnecessarily and greatly complicate frontswap
and/or the block I/O subsystem.  That said, only the initial "store"
and "load" operations need be synchronous.  A separate asynchronous thread
is free to manipulate the pages stored by frontswap.  For example,
the "remotification" thread in RAMster uses standard asynchronous
kernel sockets to move compressed frontswap pages to a remote machine.
Similarly, a KVM guest-side implementation could do in-guest compression
and use "batched" hypercalls.

In a virtualized environment, the dynamicity allows the hypervisor
(or host OS) to do "intelligent overcommit".  For example, it can
choose to accept pages only until host-swapping might be imminent,
then force guests to do their own swapping.

There is a downside to the transcendent memory specifications for
frontswap: since any "store" might fail, there must always be a real
slot on a real swap device to swap the page.  Thus frontswap must be
implemented as a "shadow" to every swapon'd device with the potential
capability of holding every page that the swap device might have held
and the possibility that it might hold no pages at all.  This means
that frontswap cannot contain more pages than the total of swapon'd
swap devices.  For example, if NO swap device is configured on some
installation, frontswap is useless.  Swapless portable devices
can still use frontswap but a backend for such devices must configure
some kind of "ghost" swap device and ensure that it is never used.

* Why this weird definition about "duplicate stores"?  If a page
  has been previously successfully stored, can't it always be
  successfully overwritten?

Nearly always it can, but no, sometimes it cannot.  Consider an example
where data is compressed and the original 4K page has been compressed
to 1K.  Now an attempt is made to overwrite the page with data that
is non-compressible and so would take the entire 4K.  But the backend
has no more space.  In this case, the store must be rejected.  Whenever
frontswap rejects a store that would overwrite, it also must invalidate
the old data and ensure that it is no longer accessible.  Since the
swap subsystem then writes the new data to the real swap device,
this is the correct course of action to ensure coherency.

* What is frontswap_shrink for?

When the (non-frontswap) swap subsystem swaps out a page to a real
swap device, that page is only taking up low-value pre-allocated disk
space.  But if frontswap has placed a page in transcendent memory, that
page may be taking up valuable real estate.  The frontswap_shrink
routine allows code outside of the swap subsystem to force pages out
of the memory managed by frontswap and back into kernel-addressable memory.
For example, in RAMster, a "suction driver" thread will attempt
to "repatriate" pages sent to a remote machine back to the local machine;
this is driven using the frontswap_shrink mechanism when memory pressure
subsides.

* Why does the frontswap patch create the new include file swapfile.h?

The frontswap code depends on some swap-subsystem-internal data
structures that have, over the years, moved back and forth between
static and global.  This seemed a reasonable compromise: define
them as global but declare them in a new include file that isn't
included by the large number of source files that include swap.h.

Dan Magenheimer, last updated April 9, 2012