=========
Frontswap
=========

Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.

(Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
and the only necessary changes to the core kernel for transcendent memory;
all other supporting code -- the "backends" -- is implemented as drivers.
See the LWN.net article `Transcendent memory in a nutshell`_
for a detailed overview of frontswap and related kernel parts.)

.. _Transcendent memory in a nutshell: https://lwn.net/Articles/454795/

Frontswap is so named because it can be thought of as the opposite of
a "backing" store for a swap device.  The storage is assumed to be
a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
to the requirements of transcendent memory (such as Xen's "tmem", or
in-kernel compressed memory, aka "zcache", or future RAM-like devices);
this pseudo-RAM device is not directly accessible or addressable by the
kernel and is of unknown and possibly time-varying size.  The driver
links itself to frontswap by calling frontswap_register_ops to set the
frontswap ops appropriately, and the functions it provides must
conform to the following policies:

An "init" prepares the device to receive frontswap pages associated
with the specified swap device number (aka "type").  A "store" will
copy the page to transcendent memory and associate it with the type and
offset associated with the page.  A "load" will copy the page, if found,
from transcendent memory into kernel memory, but will NOT remove the page
from transcendent memory.  An "invalidate_page" will remove the page
from transcendent memory and an "invalidate_area" will remove ALL pages
associated with the swap type (e.g., like swapoff) and notify the "device"
to refuse further stores with that swap type.

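As a rough illustration, a backend skeleton might provide and register
these operations as follows.  This is a minimal sketch, not a real
driver: the ``example_*`` names are hypothetical, and the exact
prototypes have varied between kernel versions::

  #include <linux/frontswap.h>
  #include <linux/mm.h>

  static void example_init(unsigned type)
  {
          /* prepare to receive pages for swap device number "type" */
  }

  static int example_store(unsigned type, pgoff_t offset,
                           struct page *page)
  {
          /*
           * Copy the page into the pseudo-RAM device and associate it
           * with (type, offset).  Return 0 on success; a nonzero
           * return rejects the page and the kernel writes it to the
           * real swap device instead.
           */
          return -1;
  }

  static int example_load(unsigned type, pgoff_t offset,
                          struct page *page)
  {
          /*
           * If (type, offset) is present, fill "page" from the
           * pseudo-RAM device; the stored copy is NOT removed.
           */
          return -1;
  }

  static void example_invalidate_page(unsigned type, pgoff_t offset)
  {
          /* remove the single page at (type, offset) */
  }

  static void example_invalidate_area(unsigned type)
  {
          /* remove ALL pages for this swap type, e.g. at swapoff */
  }

  static struct frontswap_ops example_ops = {
          .init            = example_init,
          .store           = example_store,
          .load            = example_load,
          .invalidate_page = example_invalidate_page,
          .invalidate_area = example_invalidate_area,
  };

  /* in the driver's initialization path: */
  frontswap_register_ops(&example_ops);
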
Once a page is successfully stored, a matching load on the page will normally
succeed.  So when the kernel finds itself in a situation where it needs
to swap out a page, it first attempts to use frontswap.  If the store returns
success, the data has been successfully saved to transcendent memory and
a disk write and, if the data is later read back, a disk read are avoided.
If a store returns failure, transcendent memory has rejected the data, and the
page can be written to swap as usual.

If a backend chooses, frontswap can be configured as a "writethrough
cache" by calling frontswap_writethrough().  In this mode, the reduction
in swap device writes is lost (and also a non-trivial performance advantage)
in order to allow the backend to arbitrarily "reclaim" space used to
store frontswap pages to more completely manage its memory usage.

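A backend that wants this behavior might simply enable it during
initialization (a one-line sketch; the boolean-argument form of the
call is an assumption, as the interface has changed between kernel
versions)::

  /* opt in to writethrough: every page is still written to the swap
   * device, so the backend may reclaim its copies at will */
  frontswap_writethrough(true);
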
Note that if a page is stored and the page already exists in transcendent memory
(a "duplicate" store), either the store succeeds and the data is overwritten,
or the store fails AND the page is invalidated.  This ensures stale data may
never be obtained from frontswap.

If properly configured, monitoring of frontswap is done via debugfs in
the ``/sys/kernel/debug/frontswap`` directory.  The effectiveness of
frontswap can be measured (across all swap devices) with counters for
successful and failed stores, loads, and invalidates.

* Where's the value?

When a workload starts swapping, performance falls through the floor.
Frontswap significantly increases performance in many such workloads by
providing a clean, dynamic interface to read and write swap pages to
"transcendent memory" that is otherwise not directly addressable to the kernel.
This interface is ideal when data is transformed to a different form
and size (such as with compression) or secretly moved (as might be
useful for write-balancing for some RAM-like devices).  Swap pages (and
evicted page-cache pages) are a great use for this kind of slower-than-RAM-
but-much-faster-than-disk "pseudo-RAM device" and the frontswap (and
cleancache) interface to transcendent memory provides a nice way to read
and write -- and indirectly "name" -- the pages.

Frontswap -- and cleancache -- with a fairly small impact on the kernel,
provide a huge amount of flexibility for more dynamic, flexible RAM
utilization in various system configurations:

In the single kernel case, aka "zcache", pages are compressed and
stored in local memory, thus increasing the total anonymous pages
that can be safely kept in RAM.  Zcache essentially trades off CPU
cycles used in compression/decompression for better memory utilization.
Benchmarks have shown little or no impact when memory pressure is
low while providing a significant performance improvement (25%+)
on some workloads under high memory pressure.

"RAMster" builds on zcache by adding "peer-to-peer" transcendent memory
support for clustered systems.  Frontswap pages are locally compressed
as in zcache, but then "remotified" to another system's RAM.  This
allows RAM to be dynamically load-balanced back-and-forth as needed,
i.e. when system A is overcommitted, it can swap to system B, and
vice versa.  RAMster can also be configured as a memory server so
many servers in a cluster can swap, dynamically as needed, to a single
server configured with a large amount of RAM... without pre-configuring
how much of the RAM is available for each of the clients!

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to do
it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OS's are induced to surrender
underutilized RAM (e.g. with "selfballooning"), sudden unexpected
memory pressure may result in swapping; frontswap allows those pages
to be swapped to and from hypervisor RAM (if overall host system memory
conditions allow), thus mitigating the potentially awful performance impact
of unplanned swapping.

A KVM implementation is underway and has been RFC'ed to lkml.  And,
using frontswap, investigation is also underway on the use of NVM as
a memory extension technology.

* Sure there may be performance advantages in some situations, but
  what's the space/time overhead of frontswap?  If CONFIG_FRONTSWAP is
  disabled, does it turn into a no-op?

If CONFIG_FRONTSWAP is disabled, every frontswap hook compiles into
nothingness and the only overhead is a few extra bytes per swapon'd
swap device.  Even if CONFIG_FRONTSWAP is enabled AND a backend
registers AND that backend fails every "store"
request (i.e. provides no memory despite claiming it might),
CPU overhead is still negligible -- and since every frontswap fail
precedes a swap page write-to-disk, the system is highly likely
to be I/O bound and using a small fraction of a percent of a CPU
will be irrelevant anyway.

When swap pages are stored in transcendent memory instead of written
out to disk, there is a side effect that this may create more memory
pressure that can potentially outweigh the other advantages.  A
backend, such as zcache, must implement policies to carefully (but
dynamically) manage memory limits to ensure this doesn't happen.

* OK, how about a quick overview of what this frontswap patch does
  in terms that a kernel hacker can grok?

Let's assume that a frontswap "backend" has registered during
kernel initialization; this registration indicates that this
frontswap backend has access to some "memory" that is not directly
accessible by the kernel.  Exactly how much memory it provides is
entirely dynamic and random.

Whenever a swap device is swapon'd, frontswap_init() is called,
passing the swap device number (aka "type") as a parameter.
This notifies frontswap to expect attempts to "store" swap pages
associated with that number.

Whenever the swap subsystem is readying a page to write to a swap
device (c.f. swap_writepage()), frontswap_store() is called.  Frontswap
consults with the frontswap backend and if the backend says it does NOT
have room, frontswap_store() returns -1 and the kernel swaps the page
to the swap device as normal.  Note that the response from the frontswap
backend is unpredictable to the kernel; it may choose to never accept a
page, it may accept every ninth page, or it might accept every
page.  But if the backend does accept a page, the data from the page
has already been copied and associated with the type and offset,
and the backend guarantees the persistence of the data.  In this case,
frontswap sets a bit in the "frontswap_map" for the swap device
corresponding to the page offset on the swap device to which it would
otherwise have written the data.

When the swap subsystem needs to swap-in a page (swap_readpage()),
it first calls frontswap_load(), which checks the frontswap_map to
see if the page was earlier stored in frontswap (and is still resident
there).  If it was, the page of data is filled from the frontswap
backend and the swap-in is complete.  If not, the normal swap-in code is
executed to obtain the page of data from the real swap device.

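In pseudocode, the hooks in these two paths reduce to roughly the
following.  This is illustrative only: apart from frontswap_store()
and frontswap_load(), the ``example_*`` helpers are simplified
stand-ins for the real code in mm/page_io.c, which also handles
locking and error paths::

  /* swap-out path, c.f. swap_writepage() */
  int example_swap_writepage(struct page *page)
  {
          if (frontswap_store(page) == 0)
                  return 0;  /* stored; frontswap_map bit set, no disk write */
          /* backend rejected the page: fall back to normal block I/O */
          return example_write_to_swap_device(page);
  }

  /* swap-in path, c.f. swap_readpage() */
  int example_swap_readpage(struct page *page)
  {
          if (frontswap_load(page) == 0)
                  return 0;  /* frontswap_map bit set; backend filled page */
          /* not resident in frontswap: read from the real swap device */
          return example_read_from_swap_device(page);
  }
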
* Can't frontswap be configured as a "special" swap device that
  is just higher priority than any real swap device (e.g. like zswap,
  or maybe swap-over-nbd/NFS)?

No.  First, the existing swap subsystem doesn't allow for any kind of
swap hierarchy, and rewriting it to accommodate one would require
fairly drastic changes.  Even then, the existing swap subsystem uses
the block I/O layer, which
assumes a swap device is fixed size and any page in it is linearly
addressable.  Frontswap barely touches the existing swap subsystem,
and works around the constraints of the block I/O subsystem to provide
a great deal of flexibility and dynamicity.

For example, the acceptance of any swap page by the frontswap backend is
entirely unpredictable.  This is critical to the definition of frontswap
because it grants completely dynamic discretion to the
backend.  In zcache, one cannot know a priori how compressible a page is.
"Poorly" compressible pages can be rejected, and "poorly" can itself be
defined dynamically depending on current memory constraints.

Further, frontswap is entirely synchronous whereas a real swap
device is, by definition, asynchronous and uses block I/O.  The
block I/O layer is not only unnecessary, but may perform "optimizations"
that are inappropriate for a RAM-oriented device, including delaying
the write of some pages for a significant amount of time.

That said, only the initial "store" and "load" operations need be
synchronous; a separate asynchronous thread is free to manipulate
the pages stored by frontswap.  For example,
the "remotification" thread in RAMster uses standard asynchronous
kernel sockets to move compressed frontswap pages to a remote machine.
Similarly, a KVM guest-side implementation could do in-guest compression
and use "batched" hypercalls.

In a virtualized environment, the dynamicity allows the hypervisor
(or host OS) to do "intelligent overcommit".  For example, it can
choose to accept pages only until host-swapping might be imminent,
then force guests to do their own swapping.

There is a downside to the transcendent memory specifications for
frontswap:  Since any "store" might fail, there must always be a real
slot on a real swap device to swap the page.  Thus frontswap must be
implemented as a "shadow" to every swapon'd device with the potential
capability of holding every page that the swap device might have held
and the possibility that it might hold no pages at all.

* Why this weird definition about "duplicate stores"?  If a page
  has been previously successfully stored, can't it always be
  successfully overwritten?

Nearly always it can, but no, sometimes it cannot.  Consider an example
where data is compressed and the original 4K page has been compressed
to 1K.  Now an attempt is made to overwrite the page with data that
is non-compressible and so would take the entire 4K.  But the backend
has no more space.  In this case, the store must be rejected.  Whenever
frontswap rejects a store that would overwrite, it also must invalidate
the old data and ensure that it is no longer accessible.  Since the
swap subsystem then writes the new data to the real swap device,
this is the correct course of action to ensure coherency.

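A backend's store path might encode this rule roughly as follows (a
sketch; all ``example_*`` helper names are hypothetical)::

  static int example_dup_aware_store(unsigned type, pgoff_t offset,
                                     struct page *page)
  {
          if (!example_has_room_for(page)) {
                  /*
                   * A rejected overwrite must not leave stale data
                   * behind: invalidate any old copy so a later load
                   * cannot return it.  The swap subsystem will then
                   * write the new data to the real swap device.
                   */
                  example_invalidate(type, offset);
                  return -1;
          }
          example_copy_in(type, offset, page);
          return 0;
  }
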
* What is frontswap_shrink for?

When the (non-frontswap) swap subsystem swaps out a page to a real
swap device, that page is only taking up low-value pre-allocated disk
space.  But if frontswap has placed a page in transcendent memory, that
page may be taking up valuable real estate.  The frontswap_shrink
routine allows code outside of the swap subsystem to force pages out
of the memory managed by frontswap and back into kernel-addressable memory.
For example, in RAMster, a "suction driver" thread will attempt
to "repatriate" pages sent to a remote machine back to the local machine;
this is driven using the frontswap_shrink mechanism when memory pressure
subsides.

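A policy thread outside the swap subsystem could then, for example,
ask frontswap to shed pages down to a target footprint (sketch; the
page-count form of the call is an assumption)::

  /* repatriate pages from transcendent memory until at most
   * "target_pages" of them remain there */
  frontswap_shrink(target_pages);
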
* Why does the frontswap patch create the new include file swapfile.h?

The frontswap code depends on some swap-subsystem-internal data
structures that have, over the years, moved back and forth between
static and global.  This seemed a reasonable compromise:  Define
them as global but declare them in a new include file that isn't
included by the large number of source files that include swap.h.