Userfaults allow the implementation of on-demand paging from userland
and, more generally, they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example, userfaults allow a proper and more optimal implementation
of the ``PROT_NONE+SIGSEGV`` trick.
Userspace creates a new userfaultfd and registers one or more
regions of virtual memory with it. Then, any page faults which occur within the
region(s) result in a message being delivered to the userfaultfd, notifying
userspace of the fault.
The ``userfaultfd`` (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) a ``read/POLLIN`` protocol to notify a userland thread of the faults
   happening

2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
   registered in the ``userfaultfd``, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background
The real advantage of userfaults compared to regular virtual memory
management with mremap/mprotect is that the userfaults in all their
operations never involve heavyweight structures like vmas (in fact the
``userfaultfd`` runtime load never takes the mmap_lock for writing).
The ``userfaultfd``, once created, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware about what is going on
(well of course unless they later try to use the ``userfaultfd``
themselves on the same region the manager is already tracking, which
is a corner case that would currently return ``-EBUSY``).
There are two ways to create a new userfaultfd, each of which provides
ways to restrict access to this functionality (since historically
userfaultfds which handle kernel page faults have been a useful tool for
exploiting the kernel).
The first way, supported since userfaultfd was introduced, is the
userfaultfd(2) syscall. Access to this is controlled in several ways:

- Any user can always create a userfaultfd which traps userspace page faults
  only. Such a userfaultfd can be created using the userfaultfd(2) syscall
  with the flag UFFD_USER_MODE_ONLY.
- In order to also trap kernel page faults for the address space, either the
  process needs the CAP_SYS_PTRACE capability, or the system must have
  vm.unprivileged_userfaultfd set to 1. By default,
  vm.unprivileged_userfaultfd is set to 0.
The second way, added to the kernel more recently, is by opening
/dev/userfaultfd and issuing a USERFAULTFD_IOC_NEW ioctl to it. This method
yields equivalent userfaultfds to the userfaultfd(2) syscall.

Unlike userfaultfd(2), access to /dev/userfaultfd is controlled via normal
filesystem permissions (user/group/mode), which gives fine grained access to
userfaultfd specifically, without also granting other unrelated privileges at
the same time (as e.g. granting CAP_SYS_PTRACE would do). Users who have access
to /dev/userfaultfd can always create userfaultfds that trap kernel page faults.
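
For illustration only, a minimal sketch of creating a ``userfaultfd`` that
traps userspace page faults, trying the userfaultfd(2) syscall first and
falling back to /dev/userfaultfd, might look like this (it assumes a kernel
providing UFFD_USER_MODE_ONLY and the /dev/userfaultfd device, and keeps
error handling to a minimum)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    static int create_userfaultfd(void)
    {
            /* Preferred path: the userfaultfd(2) syscall, restricted to
             * userspace page faults only. */
            int uffd = syscall(__NR_userfaultfd,
                               O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
            if (uffd >= 0)
                    return uffd;

            /* Fallback: /dev/userfaultfd, gated by filesystem permissions. */
            int dev = open("/dev/userfaultfd", O_RDWR | O_CLOEXEC);
            if (dev < 0) {
                    perror("open /dev/userfaultfd");
                    return -1;
            }
            uffd = ioctl(dev, USERFAULTFD_IOC_NEW, O_CLOEXEC | O_NONBLOCK);
            close(dev);
            return uffd;
    }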
When first opened, the ``userfaultfd`` must be enabled by invoking the
``UFFDIO_API`` ioctl specifying a ``uffdio_api.api`` value set to ``UFFD_API`` (or
a later API version) which will specify the ``read/POLLIN`` protocol
userland intends to speak on the ``UFFD`` and the ``uffdio_api.features``
userland requires. The ``UFFDIO_API`` ioctl if successful (i.e. if the
requested ``uffdio_api.api`` is spoken also by the running kernel and the
requested features are going to be enabled) will return into
``uffdio_api.features`` and ``uffdio_api.ioctls`` two 64bit bitmasks of
respectively all the available features of the read(2) protocol and
the generic ioctls available.
The ``uffdio_api.features`` bitmask returned by the ``UFFDIO_API`` ioctl
defines what memory types are supported by the ``userfaultfd`` and what
events, other than page fault notifications, may be generated:

- The ``UFFD_FEATURE_EVENT_*`` flags indicate that various other events
  other than page faults are supported. These events are described in more
  detail below in the `Non-cooperative userfaultfd`_ section.

- ``UFFD_FEATURE_MISSING_HUGETLBFS`` and ``UFFD_FEATURE_MISSING_SHMEM``
  indicate that the kernel supports ``UFFDIO_REGISTER_MODE_MISSING``
  registrations for hugetlbfs and shared memory (covering all shmem APIs,
  i.e. tmpfs, ``IPCSHM``, ``/dev/zero``, ``MAP_SHARED``, ``memfd_create``,
  etc) virtual memory areas.

- ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
  ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
  support for shmem virtual memory areas.
The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be
enabled if supported.
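
For example, a sketch of the ``UFFDIO_API`` handshake, requesting one
optional feature and then inspecting what the kernel returned, could look
like this (error handling abbreviated; the chosen feature is just an
example)::

    struct uffdio_api api = {
            .api = UFFD_API,
            /* request only the features this application intends to use */
            .features = UFFD_FEATURE_EVENT_FORK,
    };

    if (ioctl(uffd, UFFDIO_API, &api) == -1)
            perror("UFFDIO_API");   /* API or features not supported */

    /* api.features and api.ioctls now hold the 64bit bitmasks of the
     * supported features and generic ioctls returned by the kernel. */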
Once the ``userfaultfd`` API has been enabled, the ``UFFDIO_REGISTER``
ioctl should be invoked (if present in the returned ``uffdio_api.ioctls``
bitmask) to register a memory range in the ``userfaultfd`` by setting the
uffdio_register structure accordingly. The ``uffdio_register.mode``
bitmask will specify to the kernel which kind of faults to track for
the range. The ``UFFDIO_REGISTER`` ioctl will return the
``uffdio_register.ioctls`` bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types (e.g. anonymous memory vs. shmem vs.
hugetlbfs), or all types of intercepted faults.
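
As an example, registering a freshly mmap()ed anonymous region for
missing-page tracking might be sketched as follows (the size is arbitrary
and error handling is abbreviated)::

    #include <sys/mman.h>

    size_t len = 16 * 4096;
    void *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (area == MAP_FAILED)
            perror("mmap");

    struct uffdio_register reg = {
            .range = { .start = (unsigned long)area, .len = len },
            .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1)
            perror("UFFDIO_REGISTER");

    /* reg.ioctls now tells which resolving ioctls (e.g. UFFDIO_COPY,
     * UFFDIO_ZEROPAGE) are usable on this range. */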
Userland can use the ``uffdio_register.ioctls`` to manage the virtual
address space in the background (to add or potentially also remove
memory from the ``userfaultfd`` registered range). This means a userfault
could be triggering just before userland maps in the background the
new page.
There are three basic ways to resolve userfaults:

- ``UFFDIO_COPY`` atomically copies some existing page contents from
  userspace.

- ``UFFDIO_ZEROPAGE`` atomically zeros the new page.

- ``UFFDIO_CONTINUE`` maps an existing, previously-populated page.

These operations are atomic in the sense that they guarantee nothing can
see a half-populated page, since readers will keep userfaulting until the
operation has finished.

By default, these wake up userfaults blocked on the range in question.
They support a ``UFFDIO_*_MODE_DONTWAKE`` ``mode`` flag, which indicates
that waking will be done separately at some later time.
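
As an illustration, resolving a missing fault at ``fault_addr`` with either
``UFFDIO_COPY`` or ``UFFDIO_ZEROPAGE`` could be sketched as below;
``fault_addr``, ``page_size``, ``src_page`` and ``have_content`` are
illustrative variables, not part of the kernel API::

    unsigned long dst = fault_addr & ~(page_size - 1);

    if (have_content) {
            struct uffdio_copy copy = {
                    .dst  = dst,
                    .src  = (unsigned long)src_page, /* prepared contents */
                    .len  = page_size,
                    .mode = 0,      /* or UFFDIO_COPY_MODE_DONTWAKE */
            };
            if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
                    perror("UFFDIO_COPY");
    } else {
            struct uffdio_zeropage zp = {
                    .range = { .start = dst, .len = page_size },
                    .mode  = 0,
            };
            if (ioctl(uffd, UFFDIO_ZEROPAGE, &zp) == -1)
                    perror("UFFDIO_ZEROPAGE");
    }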
Which ioctl to choose depends on the kind of page fault, and what we'd
like to do to resolve it:

- For ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be
  resolved by either providing a new page (``UFFDIO_COPY``), or mapping
  the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map
  the zero page for a missing fault. With userfaultfd, userspace can
  decide what content to provide before the faulting thread continues.

- For ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in
  the page cache). Userspace has the option of modifying the page's
  contents before resolving the fault. Once the contents are correct
  (modified or not), userspace asks the kernel to map the page and let the
  faulting thread continue with ``UFFDIO_CONTINUE``.

Notes:

- You can tell which kind of fault occurred by examining
  ``pagefault.flags`` within the ``uffd_msg``, checking for the
  ``UFFD_PAGEFAULT_FLAG_*`` flags.
- None of the page-delivering ioctls default to the range that you
  registered with. You must fill in all fields for the appropriate
  ioctl struct including the range.
- You get the address of the access that triggered the missing page
  event out of a struct uffd_msg that you read in the thread from the
  uffd. You can supply as many pages as you want with ``UFFDIO_COPY`` or
  ``UFFDIO_ZEROPAGE``. Keep in mind that unless you used DONTWAKE then
  the first of any of those IOCTLs wakes up the faulting thread.
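
Putting the notes above together, a fault-handling thread could be sketched
roughly as follows; it simply zero-fills every missing page, which is only
one possible resolution::

    #include <poll.h>

    static void handle_userfaults(int uffd, long page_size)
    {
            struct pollfd pfd = { .fd = uffd, .events = POLLIN };

            for (;;) {
                    struct uffd_msg msg;

                    if (poll(&pfd, 1, -1) < 0 || (pfd.revents & POLLERR))
                            break;          /* e.g. a bad range was supplied */
                    if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                            continue;       /* EAGAIN: already resolved */
                    if (msg.event != UFFD_EVENT_PAGEFAULT)
                            continue;

                    /* the faulting address comes out of the uffd_msg */
                    unsigned long addr =
                            msg.arg.pagefault.address & ~(page_size - 1);
                    struct uffdio_zeropage zp = {
                            .range = { .start = addr, .len = page_size },
                    };
                    /* wakes the faulting thread unless DONTWAKE is used */
                    ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
            }
    }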
Firstly you need to register a range with ``UFFDIO_REGISTER_MODE_WP``.
Instead of using mprotect(2) you use
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)`` while
``mode = UFFDIO_WRITEPROTECT_MODE_WP``
in the struct passed in. The range does not default to and does not
have to be identical to the range you registered with. You can write
protect as many ranges as you like (inside the registered range).
Then, in the thread reading from uffd the struct will have
``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP`` set. Now you send
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``
again while ``pagefault.mode`` does not have ``UFFDIO_WRITEPROTECT_MODE_WP``
set. This wakes up the thread which will continue to run with writes. This
allows you to do the bookkeeping about the write in the uffd reading
thread before the ioctl.
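
A sketch of this sequence, assuming ``addr`` and ``page_size`` describe a
page inside a range registered with ``UFFDIO_REGISTER_MODE_WP``, could look
like this::

    struct uffdio_writeprotect wp = {
            .range = { .start = addr, .len = page_size },
            .mode  = UFFDIO_WRITEPROTECT_MODE_WP,   /* arm write protection */
    };
    if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) == -1)
            perror("UFFDIO_WRITEPROTECT (protect)");

    /* ... later, after reading a message with UFFD_PAGEFAULT_FLAG_WP set
     * and doing the bookkeeping, clear the protection; because
     * UFFDIO_WRITEPROTECT_MODE_WP is not set this wakes the blocked writer. */
    wp.mode = 0;
    if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) == -1)
            perror("UFFDIO_WRITEPROTECT (resolve)");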
If you registered with both ``UFFDIO_REGISTER_MODE_MISSING`` and
``UFFDIO_REGISTER_MODE_WP`` then you need to think about the sequence in
which you supply a page and undo write protect. Note that there is a
difference between writes into a WP area and into a !WP area. The
former will have ``UFFD_PAGEFAULT_FLAG_WP`` set, the latter
``UFFD_PAGEFAULT_FLAG_WRITE``. The latter did not fail on protection but
you still need to supply a page when ``UFFDIO_REGISTER_MODE_MISSING`` was
used.
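
A minimal sketch of that decision, with ``resolve_wp_fault()`` and
``resolve_missing_fault()`` standing in for whatever resolution the
application implements, might be::

    __u64 flags = msg.arg.pagefault.flags;
    __u64 addr  = msg.arg.pagefault.address;

    if (flags & UFFD_PAGEFAULT_FLAG_WP)
            resolve_wp_fault(uffd, addr);       /* page present, write-protected */
    else if (flags & UFFD_PAGEFAULT_FLAG_WRITE)
            resolve_missing_fault(uffd, addr);  /* write to a missing page */
    else
            resolve_missing_fault(uffd, addr);  /* read from a missing page */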
QEMU/KVM
========

QEMU/KVM is using the ``userfaultfd`` syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
``userfaultfd`` abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults can keep running in
the guest vcpus.
The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease ``/proc/sys/net/ipv4/tcp_wmem``).
The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs ``UFFDIO_COPY|ZEROPAGE``
ioctls on the ``userfaultfd`` in order to map the received pages into the
guest (``UFFDIO_ZEROPAGE`` is used if the source page was a zero page).
A different postcopy thread in the destination node listens with
poll() to the ``userfaultfd`` in parallel. When a ``POLLIN`` event is
generated after a userfault triggers, the postcopy thread calls read() on
the ``userfaultfd`` and receives the fault address (or ``-EAGAIN`` in case the
userfault was already resolved and woken by a ``UFFDIO_COPY`` already run
by the parallel QEMU migration thread).
After the QEMU postcopy thread (running in the destination node) gets
the userfault address it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address, then continues sending all the
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with the ``UFFDIO_COPY|ZEROPAGE`` (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).
By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around; a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin, and we seek
over it when receiving incoming userfaults. After sending each page of
course the bitmap is updated accordingly. It's also useful to avoid
sending the same page twice (in case the userfault is read by the
postcopy thread just before ``UFFDIO_COPY|ZEROPAGE`` runs in the migration
thread).
Non-cooperative userfaultfd
===========================

When the ``userfaultfd`` is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting the appropriate
bits in ``uffdio_api.features`` passed to the ``UFFDIO_API`` ioctl:

``UFFD_FEATURE_EVENT_FORK``
        enable ``userfaultfd`` hooks for fork(). When this feature is
        enabled, the ``userfaultfd`` context of the parent process is
        duplicated into the newly created process. The manager
        receives ``UFFD_EVENT_FORK`` with file descriptor of the new
        ``userfaultfd`` context in the ``uffd_msg.fork``.

``UFFD_FEATURE_EVENT_REMAP``
        enable notifications about mremap() calls. When the
        non-cooperative process moves a virtual memory area to a
        different location, the manager will receive
        ``UFFD_EVENT_REMAP``. The ``uffd_msg.remap`` will contain the old and
        new addresses of the area and its original length.

``UFFD_FEATURE_EVENT_REMOVE``
        enable notifications about madvise(MADV_REMOVE) and
        madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE`` will
        be generated upon these calls to madvise(). The ``uffd_msg.remove``
        will contain start and end addresses of the removed area.

``UFFD_FEATURE_EVENT_UNMAP``
        enable notifications about memory unmapping. The manager will
        get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing start and
        end addresses of the unmapped area.
Although ``UFFD_FEATURE_EVENT_REMOVE`` and ``UFFD_FEATURE_EVENT_UNMAP``
are pretty similar, they differ quite a bit in the action expected from the
``userfaultfd`` manager. In the former case, the virtual memory is
removed, but the area is not: the area remains monitored by the
``userfaultfd``, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such a page fault is
to zeromap the faulting address. However, in the latter case, when an
area is unmapped, either explicitly (with the munmap() system call) or
implicitly (e.g. during mremap()), the area is removed and in turn the
``userfaultfd`` context for such area disappears too and the manager will
not get further userland page faults from the removed area. Still, the
notification is required in order to prevent the manager from using
``UFFDIO_COPY`` on the unmapped area.
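
A manager's event loop that follows these rules could be sketched as below;
``resolve_fault()``, ``track_child()``, ``move_range()``,
``mark_range_removed()`` and ``forget_range()`` are hypothetical bookkeeping
helpers, not part of the kernel API::

    struct uffd_msg msg;

    while (read(uffd, &msg, sizeof(msg)) == sizeof(msg)) {
            switch (msg.event) {
            case UFFD_EVENT_PAGEFAULT:
                    resolve_fault(uffd, msg.arg.pagefault.address);
                    break;
            case UFFD_EVENT_FORK:
                    /* msg.arg.fork.ufd is the child's userfaultfd */
                    track_child(msg.arg.fork.ufd);
                    break;
            case UFFD_EVENT_REMAP:
                    move_range(msg.arg.remap.from, msg.arg.remap.to,
                               msg.arg.remap.len);
                    break;
            case UFFD_EVENT_REMOVE:
                    /* area stays monitored: zeromap future faults there */
                    mark_range_removed(msg.arg.remove.start,
                                       msg.arg.remove.end);
                    break;
            case UFFD_EVENT_UNMAP:
                    /* area and its userfaultfd context are gone */
                    forget_range(msg.arg.remove.start, msg.arg.remove.end);
                    break;
            }
    }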
Unlike userfaults, which are synchronous and require an
explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The ``userfaultfd`` manager should
carefully synchronize calls to ``UFFDIO_COPY`` with the events
processing. To aid the synchronization, the ``UFFDIO_COPY`` ioctl will
return ``-ENOSPC`` when the monitored process exits at the time of
``UFFDIO_COPY``, and ``-ENOENT``, when the non-cooperative process has changed
its virtual memory layout simultaneously with the outstanding
``UFFDIO_COPY`` operation.
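
Following the error codes described above, a non-cooperative manager might
treat the outcome of a ``UFFDIO_COPY`` roughly like this (the exact errno
values should be checked against the kernel version in use)::

    #include <errno.h>

    if (ioctl(uffd, UFFDIO_COPY, &copy) == -1) {
            if (errno == ENOENT) {
                    /* the monitored process changed its memory layout; the
                     * pending REMAP/REMOVE/UNMAP event explains what happened */
            } else if (errno == ENOSPC) {
                    /* the monitored process exited during the copy */
            } else {
                    perror("UFFDIO_COPY");
            }
    }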
The current asynchronous model of the event delivery is optimal for
single threaded non-cooperative ``userfaultfd`` manager implementations. A
synchronous event delivery model can be added later as a new
``userfaultfd`` feature to facilitate multithreading enhancements of the
non-cooperative manager, for example to allow ``UFFDIO_COPY`` ioctls to
run in parallel to the event reception. Single threaded
implementations should continue to use the current async event
delivery model instead.