Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.

(Note, frontswap -- and :ref:`cleancache` (merged at 3.0) -- are the "frontends"
and the only necessary changes to the core kernel for transcendent memory;
all other supporting code -- the "backends" -- is implemented as drivers.)

Frontswap is so named because it can be thought of as the opposite of
a "backing" store for a swap device.  The storage is assumed to be
a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
to the requirements of transcendent memory (such as Xen's "tmem", or
in-kernel compressed memory, aka "zcache", or future RAM-like devices);
this pseudo-RAM device is not directly accessible or addressable by the
kernel and is of unknown and possibly time-varying size.  The driver
links itself to frontswap by calling frontswap_register_ops to set the
frontswap_ops functions appropriately; the functions it provides must
conform to certain policies as follows:

An "init" prepares the device to receive frontswap pages associated
with the specified swap device number (aka "type").  A "store" will
copy the page to transcendent memory and associate it with the type and
offset associated with the page.  A "load" will copy the page, if found,
from transcendent memory into kernel memory, but will NOT remove the page
from transcendent memory.  An "invalidate_page" will remove the page
from transcendent memory and an "invalidate_area" will remove ALL pages
associated with the swap type (e.g. like swapoff) and notify the "device"
to refuse further stores with that swap type.

Once a page is successfully stored, a matching load on the page will normally
succeed.  So when the kernel finds itself in a situation where it needs
to swap out a page, it first attempts to use frontswap.  If the store returns
success, the data has been successfully saved to transcendent memory and
a disk write and, if the data is later read back, a disk read are avoided.
If a store returns failure, transcendent memory has rejected the data, and the
page can be written to swap as usual.

If a backend chooses, frontswap can be configured as a "writethrough
cache" by calling frontswap_writethrough().  In this mode, the reduction
in swap device writes is lost (and also a non-trivial performance advantage)
in order to allow the backend to arbitrarily "reclaim" space used to
store frontswap pages to more completely manage its memory usage.

Monitoring of frontswap is done via debugfs in the
`/sys/kernel/debug/frontswap` directory.  The effectiveness of frontswap
can be measured (across all swap devices) with:

``failed_stores``
	how many store attempts have failed

``loads``
	how many loads were attempted (all should succeed)

``succ_stores``
	how many store attempts have succeeded

``invalidates``
	how many invalidates were attempted

When a workload starts swapping, performance falls through the floor.
Frontswap significantly increases performance in many such workloads by
providing a clean, dynamic interface to read and write swap pages to
"transcendent memory" that is otherwise not directly addressable by the kernel.
This interface is ideal when data is transformed to a different form
and size (such as with compression) or secretly moved (as might be
useful for write-balancing for some RAM-like devices).  Swap pages (and
evicted page-cache pages) are a great use for this kind of slower-than-RAM-
but-much-faster-than-disk "pseudo-RAM device" and the frontswap (and
cleancache) interface to transcendent memory provides a nice way to read
and write -- and indirectly "name" -- the pages.

Frontswap -- and cleancache -- with a fairly small impact on the kernel,
provide a huge amount of flexibility for more dynamic, flexible RAM
utilization in various system configurations:

"RAMster" builds on zcache by adding "peer-to-peer" transcendent memory
capabilities: pages are compressed and stored locally as in zcache, but
may then be "remotified" to another system's RAM.  This
allows RAM to be dynamically load-balanced back-and-forth as needed,
i.e. when system A is overcommitted, it can swap to system B, and
vice versa.  RAMster can also be configured as a memory server so that
many servers in a cluster can swap, dynamically as needed, to a single
server configured with a large amount of RAM... without pre-configuring
how much of that RAM is available for each of the clients.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to do
it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).
Xen's Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OS's are induced to surrender
underutilized RAM (e.g. with "selfballooning"), sudden unexpected
memory pressure may result in swapping; frontswap allows those pages
to be swapped to and from hypervisor RAM (if overall host system memory
conditions allow), thus mitigating the potentially awful performance
impact of otherwise inevitable swap-to-disk.

A KVM implementation is underway and has been RFC'ed to lkml.  And,
using frontswap, investigation is also underway on the use of NVM as
a memory extension technology.

If CONFIG_FRONTSWAP is enabled but no frontswap backend
registers, there is one extra global variable compared to zero for
every swap page read or written.  If CONFIG_FRONTSWAP is enabled
AND a frontswap backend registers AND the backend fails every "store"
request (i.e. provides no memory despite claiming it might),
CPU overhead is still negligible -- and since every frontswap fail
precedes a swap page write-to-disk, the system is highly likely
to be I/O bound and using a small fraction of a percent of a CPU
will be irrelevant anyway.

As for space, if CONFIG_FRONTSWAP is enabled AND a frontswap backend
registers, one bit is allocated for every swap page for every swap
device that is swapon'd.  This is added to the EIGHT bits (which
was sixteen until about 2.6.34) of the "swap_map" that the kernel
itself allocates for every swap page.  Hugh
Dickins has observed that frontswap could probably steal one of
the existing eight bits, but let's worry about that minor optimization
later.

When a page is swapped out to frontswap instead of being written
out to disk, there is a side effect that this may create more memory
pressure that can potentially outweigh the other advantages.  A
backend, such as zcache, must implement policies to carefully (but
dynamically) manage memory limits to ensure this doesn't happen.

Let's assume that a frontswap "backend" has registered during kernel
initialization; this registration indicates that this
frontswap backend has access to some "memory" that is not directly
accessible or addressable by the kernel.

Whenever a swap-device is swapon'd frontswap_init() is called,
passing the swap device number (aka "type") as a parameter.
This notifies frontswap to expect attempts to "store" swap pages
associated with that number.

Whenever the swap subsystem is readying a page to write to a swap
device (c.f. swap_writepage()), frontswap_store() is called.  Frontswap
consults with the frontswap backend and if the backend says it does NOT
have room, frontswap_store returns -1 and the kernel swaps the page
to the swap device as normal.  Note that the response from the frontswap
backend is unpredictable to the kernel; it may choose to never accept a
page, it may accept every ninth page, or it may accept every page.  But
if the backend does accept a page, the data from the page has already
been copied and associated with the type and offset, and the backend
guarantees the persistence of the data.  In this case, frontswap sets
a bit in the "frontswap_map" for the swap device
corresponding to the page offset on the swap device to which it would
otherwise have written the data.

When the swap subsystem needs to swap-in a page (swap_readpage()),
it first calls frontswap_load() which checks the frontswap_map to
see if the page was previously accepted by the frontswap backend.  If
it was, the page of data is loaded from the frontswap backend and
the swap-in is complete.  If not, the normal swap-in code is
executed to obtain the page of data from the real swap device.

Can frontswap be configured as a "special" swap device that is just
higher priority than any real swap device (e.g. like zswap,
or maybe swap-over-nbd/NFS)?

No.  First, the swap subsystem doesn't allow for any kind of
swap hierarchy.  Perhaps it could be rewritten to accommodate a hierarchy,
but this would require fairly drastic changes.

Frontswap barely touches the existing swap subsystem,
and works around the constraints of the block I/O subsystem to provide
a great deal of flexibility and dynamicity.

For example, the acceptance of any swap page by the frontswap backend is
entirely unpredictable.  This is critical to the definition of frontswap
backends because it grants completely dynamic discretion to the
backend.  In zcache, one cannot know a priori how compressible a page is.

The block I/O subsystem also has many optimizations
that are inappropriate for a RAM-oriented device, including delaying
the write of some pages for a significant amount of time.  Synchrony is
required to ensure the dynamicity of the backend and to avoid thorny race
conditions.

Because frontswap operations complete synchronously, a backend
is free to manipulate the pages stored by frontswap.  For example,
in RAMster, the backend uses threads and
kernel sockets to move compressed frontswap pages to a remote machine.
Similarly, a KVM guest-side implementation could do in-guest compression.

This dynamicity also allows the hypervisor
(or host OS) to do "intelligent overcommit".  For example, it can
choose to accept pages only until host-swapping might be imminent,
then force guests to do their own swapping.

There is a downside to the transcendent memory specifications for
frontswap:  Since any "store" might fail, there must always be a page
slot on a real swap device to swap the page.  Thus frontswap must be
implemented as a "shadow" to every swapon'd device with the potential
capability of holding every page that the swap device might have held.

For example, say a duplicate store occurs where the old data compressed
to 1K.  Now an attempt is made to overwrite the page with data that
is non-compressible and so would take the entire 4K.  But the backend
has no more space, so the store must be rejected.  Whenever frontswap
rejects a store that would overwrite, it must also invalidate the old
data and ensure that it is no longer accessible.  Since the
swap subsystem then writes the new data to the real swap device,
this is the correct course of action to ensure coherency.

When the (non-frontswap) swap subsystem swaps out a page to a real
swap device, that page is only taking up low-value pre-allocated disk
space.  But if frontswap has placed a page in transcendent memory, that
page may be taking up valuable real estate.  The frontswap_shrink
routine allows code outside of the swap subsystem to force pages out
of the memory managed by frontswap and back into kernel-addressable memory.
For example, in RAMster, a "suction driver" thread will attempt
to "repatriate" pages sent to a remote machine back to the local machine;
this is driven using the frontswap_shrink mechanism when memory pressure
subsides.

The frontswap code depends on some swap-subsystem-internal data
structures that have, over the years, moved back and forth between
static and global.