Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory. When the PFRA
evicts a page, cleancache first attempts to put the page's data into
"transcendent memory": memory that is not directly accessible or
addressable by the kernel and is of unknown and possibly time-varying size.
Later, when a cleancache-enabled filesystem wishes to access a page in a
file on disk, it first checks cleancache; if the page is present, its data
is copied into kernel memory and a disk access is avoided.
Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory), and other implementations are in development.
Most important, cleancache is "ephemeral": pages copied into cleancache
have an indefinite lifetime which is completely unknowable by the kernel,
and so may or may not still be in cleancache at any later time. Cleancache
has complete discretion over what pages to preserve, what pages to
discard, and when.
A "put_page" operation copies a (presumably about-to-be-evicted) page into
cleancache and associates it with a pool id, a file key, and a page index
into the file. (The combination of a pool id, a file key, and an index is
sometimes called a "handle".) A "get_page" copies the page, if found, back
into kernel memory; an "invalidate_page" ensures the page is no longer
present in cleancache; an "invalidate_inode" invalidates all pages
associated with the specified file; and, when a filesystem is unmounted,
an "invalidate_fs" invalidates all pages in all files specified by the
given pool id and also surrenders the pool id.
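
Concretely, these operations suggest a backend ops table along the
following lines. This is a sketch modeled on include/linux/cleancache.h;
exact field types and signatures vary across kernel versions, so treat it
as illustrative rather than authoritative:

    /* The "handle" is (pool_id, key, index); the key names the file. */
    struct cleancache_filekey {
            union {
                    ino_t ino;      /* common case: the inode number */
                    __u32 fh[6];    /* or an exportfs-style file handle */
            } u;
    };

    struct cleancache_ops {
            int (*init_fs)(size_t pagesize);
            int (*init_shared_fs)(char *uuid, size_t pagesize);
            int (*get_page)(int pool_id, struct cleancache_filekey key,
                            pgoff_t index, struct page *page);
            void (*put_page)(int pool_id, struct cleancache_filekey key,
                             pgoff_t index, struct page *page);
            void (*invalidate_page)(int pool_id,
                                    struct cleancache_filekey key,
                                    pgoff_t index);
            void (*invalidate_inode)(int pool_id,
                                     struct cleancache_filekey key);
            void (*invalidate_fs)(int pool_id);
    };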
An "init_shared_fs", like "init_fs", obtains a pool id, but tells the
backend to treat the pool as shared, keyed by a 128-bit UUID. This is
intended for systems that may run multiple kernels (such as
hard-partitioned or virtualized systems) that may share a clustered
filesystem, and where cleancache may be shared among those kernels.
The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.
Note that cleancache must enforce put-put-get coherency and get-get
coherency. For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA). For get-get
coherency, if a get for a given handle fails, subsequent gets for that
handle will never succeed unless preceded by a successful put for that
handle. Last, cleancache provides no SMP serialization guarantees: if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate, so callers must lock
the page to ensure serial behavior.
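
For example, a caller-side sketch of this serialization rule
(cleancache_put_page() is the frontend entry point named by the kernel
headers; the surrounding function is hypothetical):

    /* Hypothetical eviction-path snippet: the page lock, held by the
     * caller, is what keeps a put and an invalidate for the same
     * handle from racing; cleancache itself does not serialize. */
    static void maybe_put_to_cleancache(struct page *page)
    {
            BUG_ON(!PageLocked(page));
            if (!PageDirty(page))   /* only clean pages are eligible */
                    cleancache_put_page(page);
    }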
FAQ

* Where's the value?

Cleancache improves the effectiveness of the pagecache: clean pagecache
pages saved in transcendent memory can be fetched back later, avoiding
"refaults" and thus disk reads.
Cleancache (and its sister code "frontswap") provide interfaces for this
transcendent memory ("tmem"), which conceptually lies between fast
kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel access to tmem is ideal when data is transformed
to a different form and size (such as with compression) or secretly moved
(as might be useful for write-balancing for some RAM-like devices).
Evicted page-cache pages (and swap pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk transcendent memory, and the
cleancache (and frontswap) "page-object-oriented" specification provides a
nice way to read and write -- and indirectly "name" -- the pages.
In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines. This is really hard to do with RAM, and efforts to do it
well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads). Cleancache -- and frontswap --
provide a great deal of flexibility for more dynamic RAM multiplexing with
a fairly small impact on the kernel. Specifically, the Xen Transcendent
Memory backend allows otherwise "fallow" hypervisor-owned RAM not only to
be "time-shared" between multiple virtual machines, but the pages can be
compressed and deduplicated to optimize RAM utilization. And when guest
OSes are induced to surrender underutilized RAM (e.g. with
"self-ballooning"), page cache pages are the first to go, and cleancache
allows those pages to be saved and reclaimed if overall host system memory
conditions allow.
And the identical interface used for cleancache can be used in physical
systems as well. The zcache driver acts as a memory-hungry device that
stores pages of data in a compressed state. And the proposed "RAMster"
driver would use the same interface to share RAM across multiple physical
systems.
* Why does cleancache have its sticky fingers so deep inside the
  filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line,
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk. All hooks compile into nothingness if
cleancache is config'ed off, and turn into a function-pointer-compare-
to-NULL if config'ed on but no backend claims the ops.
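
The wrappers that make this cheap look roughly like the following (a
sketch in the style of include/linux/cleancache.h; helper names such as
cleancache_fs_enabled() may differ by kernel version):

    /* With CONFIG_CLEANCACHE off, cleancache_enabled is the constant 0
     * and the body compiles away entirely; with it on but no backend
     * registered, the cost is a compare against a NULL function
     * pointer inside __cleancache_get_page(). */
    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

            if (cleancache_enabled && cleancache_fs_enabled(page))
                    ret = __cleancache_get_page(page);
            return ret;
    }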
Some filesystems are built entirely on top of VFS, and for them the hooks
in VFS are sufficient. But for some filesystems (such as btrfs), the VFS
hooks are incomplete and one or more hooks in fs-specific code are
required. And for some other filesystems, such as tmpfs, cleancache may be
counterproductive. Cleancache is therefore enabled on a per-filesystem
opt-in basis, which also ensures that untested filesystems are not
affected, and the hooks in the fs-specific code stay out of the way of
filesystems that do not opt in. The total impact of the hooks to existing
fs and mm files is only about 40 lines added (not counting comments and
blank lines).
* Why not make cleancache asynchronous and batched so it can more easily
  interface with real devices with DMA, instead of copying each
  individual page?

The one-page-at-a-time copy semantics simplify the implementation
on both the frontend and backend, and also allow the backend to
do fancy things on-the-fly like page compression and
page deduplication. And since the data is "gone" (copied into/out
of the pageframe) before the get/put call returns,
a great deal of race conditions and potential coherency issues
are avoided.
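
A toy backend put illustrates these synchronous copy semantics
(toy_alloc() is a hypothetical helper; the single-argument kmap_atomic()
form follows later kernels):

    /* Toy backend sketch: the data leaves the pageframe before the
     * hook returns, so the kernel may immediately reuse the page and
     * no asynchronous-completion races can arise. */
    static void toy_put_page(int pool_id, struct cleancache_filekey key,
                             pgoff_t index, struct page *page)
    {
            void *dst = toy_alloc(pool_id, key, index); /* hypothetical */

            if (dst) {
                    void *src = kmap_atomic(page);
                    memcpy(dst, src, PAGE_SIZE);    /* synchronous copy */
                    kunmap_atomic(src);
            }
    }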
* Why is non-shared cleancache "exclusive"? And where is the
  page "invalidated" after a "get"?

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls. If inclusive behavior is
wanted, the page can simply be "put" back immediately following the "get";
the invalidation itself is done by the cleancache backend.
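
So a hypothetical caller wanting inclusive semantics on a non-shared pool
could do:

    /* Hypothetical sketch: on a non-shared pool a successful get also
     * removes the page, so putting it straight back emulates an
     * inclusive cache. */
    static int inclusive_get(struct page *page)
    {
            int ret = cleancache_get_page(page); /* exclusive: removed */

            if (ret == 0)
                    cleancache_put_page(page);   /* put it back */
            return ret;
    }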
* What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10. Briefly,
performance gains can be significant on most workloads, especially when
memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
negligible when no backend is enabled, there is essentially no cost
otherwise.
* How do I add cleancache support for filesystem X?

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by opting in at mount time.
Unusual, misbehaving, or poorly layered filesystems must either add hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache. Among the restrictions:

- The FS must call the VFS superblock alloc and deactivate routines,
  or add hooks to do the equivalent cleancache "init_fs" and
  "invalidate_fs" operations, as in the sketch below.
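
A minimal opt-in sketch for a hypothetical block-device-based filesystem
"foofs" (cleancache_init_fs() is the entry point named by this document;
the fill_super shape is illustrative only):

    /* Hypothetical mount-time hook: a well-behaved FS that goes
     * through the VFS superblock routines only needs to request a
     * pool id when its superblock is set up. */
    static int foofs_fill_super(struct super_block *sb, void *data,
                                int silent)
    {
            /* ... usual superblock initialization ... */

            cleancache_init_fs(sb); /* records a pool id in sb */
            return 0;
    }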
* Why not use the inode's kernel virtual address as the key?

Cleancache retains pagecache data pages persistently, even after the
inode has been pruned from the inode unused list, and only invalidates
the data page if the file gets removed or truncated; a kernel virtual
address, by contrast, may meanwhile be reused for a different file. This
persistence matters because the cache of pages in cleancache
is potentially much larger than the kernel pagecache, and is most
useful if pages survive inode cache removal.
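
A sketch of key derivation consistent with this (use_encode_fh() is a
hypothetical helper; the real logic lives in mm/cleancache.c):

    /* The key must name the on-disk file, not the in-memory inode:
     * an inode number (or an exportfs-encoded file handle) stays
     * valid after the inode is pruned, whereas the inode's kva may
     * be reused for a different file. */
    static int derive_key(struct inode *inode,
                          struct cleancache_filekey *key)
    {
            if (inode->i_sb->s_export_op &&
                inode->i_sb->s_export_op->encode_fh)
                    return use_encode_fh(inode, key); /* hypothetical */
            key->u.ino = inode->i_ino;  /* simple, stable identity */
            return 0;
    }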