.. hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into the regular kernel path, with the
cornerstone of this being a specialized struct page for such memory (see
sections 5 to 7 of this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program addresses coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPUs, DSPs, or FPGAs are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents:: :local:

Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because the device can only access memory allocated
through a device specific API. This implies that not all memory objects in a
program are equal from the device point of view, which complicates large
programs that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private,
mmap shared) and memory allocated through the device driver API (this still
ends up with an mmap, but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
complex data sets (list, tree, ...) are hard to get right. Duplicating a
complex data set needs to re-map all the pointer relations between each of its
elements. This is error prone and the program gets harder to debug because of
the duplicate data set and addresses.
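
To illustrate what re-mapping pointer relations means in practice, here is a
purely illustrative sketch in plain application code (this is not HMM code;
dev_alloc() and dev_copy_to() are made-up stand-ins for a device specific
allocation and copy API)::

 /* Illustrative only: deep-copying a linked list into device memory when
  * the address spaces are split.  dev_alloc()/dev_copy_to() are hypothetical
  * stand-ins for a driver specific API.
  */
 struct node {
     struct node *next;
     int payload;
 };

 struct node *copy_list_to_device(struct node *head, size_t count)
 {
     /* Stage the copy in main memory so the pointers can be rewritten. */
     struct node *stage = calloc(count, sizeof(*stage));
     struct node *dev = dev_alloc(count * sizeof(*dev));
     size_t i;

     for (i = 0; head; head = head->next, i++) {
         stage[i] = *head;
         /* Re-map the pointer relation to the device copy's address. */
         stage[i].next = head->next ? &dev[i + 1] : NULL;
     }
     dev_copy_to(dev, stage, count * sizeof(*stage));
     free(stage);
     return dev;      /* Meaningful to the device, not to the CPU. */
 }

Every pointer in every duplicated object has to be translated this way, and the
program now has two addresses for the same logical data, which is exactly what
makes debugging harder.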

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from the CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is
worse in the other direction: the CPU can only access a limited range of the
device memory and cannot perform atomic operations on it. Thus device memory
cannot be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
and 16 lanes). This is about 30 times less than the fastest GPU memory
(1 TBytes/s). The final limitation is latency. Access to main memory from the
device has an order of magnitude higher latency than when the device accesses
its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you
must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations,
flush, ...). This cannot be done through common code for all devices. Hence,
HMM provides helpers to factor out everything that can be factored out, while
leaving the hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of the device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms and everything
looks like the page was swapped out to disk from the CPU point of view. Using a
struct page gives the easiest and cleanest integration with existing mm
mechanisms. Here again, HMM only provides helpers, first to hotplug new
ZONE_DEVICE memory for the device memory and second to perform migration.
Policy decisions of what and when to migrate things are left to the device
driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror process address
space and keep both CPU and device page tables synchronized, but also leverages
device memory by migrating the part of the data set that is actively being
used by the device.


Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
the CPU page table into a device page table; HMM helps keep both synchronized.
A device driver that wants to mirror a process address space must start with
the registration of an hmm_mirror struct::

 int hmm_mirror_register(struct hmm_mirror *mirror,
                         struct mm_struct *mm);
 int hmm_mirror_register_locked(struct hmm_mirror *mirror,
                                struct mm_struct *mm);


The locked variant is to be used when the driver is already holding the
mmap_sem of the mm in write mode. The mirror struct has a set of callbacks that
are used to propagate CPU page table updates::

 struct hmm_mirror_ops {
     /* sync_cpu_device_pagetables() - synchronize page tables
      *
      * @mirror: pointer to struct hmm_mirror
      * @update_type: type of update that occurred to the CPU page table
      * @start: virtual start address of the range to update
      * @end: virtual end address of the range to update
      *
      * This callback ultimately originates from mmu_notifiers when the CPU
      * page table is updated. The device driver must update its page table
      * in response to this callback. The update argument tells what action
      * to perform.
      *
      * The device driver must not return from this callback until the device
      * page tables are completely updated (TLBs flushed, etc); this is a
      * synchronous call.
      */
      void (*update)(struct hmm_mirror *mirror,
                     enum hmm_update action,
                     unsigned long start,
                     unsigned long end);
 };

The device driver must perform the update action to the range (mark range
read only, or fully unmap, ...). The device must be done with the update before
the driver callback returns.
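
To make this concrete, here is a minimal sketch of a driver registering a
mirror and implementing the update() callback. Every my_ prefixed name
(struct my_device, my_device_invalidate_range(), my_device_flush_tlb(), ...)
is a hypothetical driver helper, not part of HMM; only hmm_mirror_register()
and the hmm_mirror_ops callback come from the API described above::

 struct my_device {
     struct hmm_mirror mirror;
     struct mutex update_lock;   /* the "driver->update" lock used below */
     /* ... device page table state ... */
 };

 static void my_device_update(struct hmm_mirror *mirror,
                              enum hmm_update action,
                              unsigned long start,
                              unsigned long end)
 {
     struct my_device *mydev = container_of(mirror, struct my_device, mirror);

     mutex_lock(&mydev->update_lock);
     /* Hypothetical helpers: unmap or downgrade the range in the device
      * page table and wait for the device TLB flush to complete before
      * returning, since this callback must be synchronous.
      */
     my_device_invalidate_range(mydev, action, start, end);
     my_device_flush_tlb(mydev, start, end);
     mutex_unlock(&mydev->update_lock);
 }

 static const struct hmm_mirror_ops my_device_mirror_ops = {
     .update = my_device_update,
 };

 static int my_device_mirror_mm(struct my_device *mydev, struct mm_struct *mm)
 {
     /* Assumption: the driver fills mirror.ops before registering. */
     mydev->mirror.ops = &my_device_mirror_ops;
     return hmm_mirror_register(&mydev->mirror, mm);
 }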

When the device driver wants to populate a range of virtual addresses, it can
use either::

 int hmm_vma_get_pfns(struct vm_area_struct *vma,
                      struct hmm_range *range,
                      unsigned long start,
                      unsigned long end,
                      hmm_pfn_t *pfns);
 int hmm_vma_fault(struct vm_area_struct *vma,
                   struct hmm_range *range,
                   unsigned long start,
                   unsigned long end,
                   hmm_pfn_t *pfns,
                   bool write,
                   bool block);

The first one (hmm_vma_get_pfns()) will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
The second one does trigger a page fault on missing or read-only entries if
the write parameter is true. Page faults use the generic mm page fault code
path just like a CPU page fault.

Both functions copy CPU page table entries into their pfns array argument. Each
entry in that array corresponds to an address in the virtual range. HMM
provides a set of flags to help the driver identify special CPU page table
entries.
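
As a sketch of what a driver might do with that array: walk it and mirror each
valid entry into the device page table. The flag and helper names used here
(HMM_PFN_VALID, HMM_PFN_WRITE, hmm_pfn_t_to_page()) are assumptions to be
checked against include/linux/hmm.h for your kernel version, and
my_device_map_page() is again a hypothetical driver helper::

 /* Sketch only: consume the pfns array filled by hmm_vma_get_pfns() or
  * hmm_vma_fault().  Flag names and hmm_pfn_t_to_page() must be checked
  * against the hmm.h of the kernel version in use.
  */
 static void my_device_map_range(struct my_device *mydev,
                                 unsigned long start, unsigned long end,
                                 const hmm_pfn_t *pfns)
 {
     unsigned long addr;
     unsigned long i;

     for (i = 0, addr = start; addr < end; addr += PAGE_SIZE, i++) {
         if (!(pfns[i] & HMM_PFN_VALID))
             continue;   /* hole or special entry: leave it unmapped */
         /* Map the page into the device page table, read only unless the
          * CPU page table entry is writable.
          */
         my_device_map_page(mydev, addr, hmm_pfn_t_to_page(pfns[i]),
                            pfns[i] & HMM_PFN_WRITE);
     }
 }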

Locking with the update() callback is the most important aspect the driver must
respect in order to keep things properly synchronized. The usage pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      ...
 again:
      ret = hmm_vma_get_pfns(vma, &range, start, end, pfns);
      if (ret)
          return ret;
      take_lock(driver->update);
      if (!hmm_vma_range_done(vma, &range)) {
          release_lock(driver->update);
          goto again;
      }

      // Use pfns array content to update device page table

      release_lock(driver->update);
      return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
update() callback. That lock must be held before calling hmm_vma_range_done()
to avoid any race with a concurrent CPU page table update.

HMM implements all this on top of the mmu_notifier API because we wanted a
simpler API and also to be able to perform optimizations later on, like doing
concurrent device updates in a multi-device scenario.

HMM also bridges the impedance mismatch between how CPU page table updates
are done (by the CPU writing to the page table and flushing TLBs) and how
devices update their own page table. Device updates are a multi-step process.
First, appropriate commands are written to a buffer, then this buffer is
scheduled for execution on the device. It is only once the device has executed
the commands in the buffer that the update is done. Creating and scheduling
the update command buffer can happen concurrently for multiple devices.
Waiting for each device to report commands as executed is serialized (there is
no point in doing this concurrently).
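
Purely as a schematic sketch of that multi-step flow (every function below is
hypothetical driver code, none of it is an HMM helper), a multi-device update
could be structured as::

 /* Hypothetical sketch of the multi-step, multi-device update flow. */
 static void update_all_devices(struct my_device **devs, int ndev,
                                unsigned long start, unsigned long end)
 {
     struct my_cmd_buffer *cmds[MY_MAX_DEVICES];
     int i;

     /* Building and scheduling the command buffers can proceed
      * concurrently for every device.
      */
     for (i = 0; i < ndev; i++) {
         cmds[i] = my_device_build_update_cmds(devs[i], start, end);
         my_device_schedule_cmds(devs[i], cmds[i]);
     }

     /* Waiting for completion is serialized: one device at a time. */
     for (i = 0; i < ndev; i++)
         my_device_wait_cmds(devs[i], cmds[i]);
 }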


Represent and manage device memory from core kernel point of view
==================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory, which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

HMM provides a set of helpers to register and hotplug device memory as a new
region needing a struct page. This is offered through a very simple API::

 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
                                   struct device *device,
                                   unsigned long size);
 void hmm_devmem_remove(struct hmm_devmem *devmem);

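A minimal sketch of hotplugging device memory at driver initialization time;
struct my_device and my_devmem_ops are hypothetical (the latter would be
filled with the callbacks described next), and the ERR_PTR style error
handling is an assumption to verify against the hmm_devmem_add()
implementation::

 /* Sketch: hotplug 4GB of device memory when the driver initializes. */
 static int my_device_init_memory(struct my_device *mydev, struct device *dev)
 {
     struct hmm_devmem *devmem;

     devmem = hmm_devmem_add(&my_devmem_ops, dev, 4UL << 30);
     if (IS_ERR(devmem))
         return PTR_ERR(devmem);

     mydev->devmem = devmem;   /* assumes a devmem pointer in my_device */
     return 0;
 }

 static void my_device_fini_memory(struct my_device *mydev)
 {
     hmm_devmem_remove(mydev->devmem);
 }
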
The hmm_devmem_ops is where most of the important things are::

 struct hmm_devmem_ops {
     void (*free)(struct hmm_devmem *devmem, struct page *page);
     int (*fault)(struct hmm_devmem *devmem,
                  struct vm_area_struct *vma,
                  unsigned long addr,
                  struct page *page,
                  unsigned flags,
                  pmd_t *pmdp);
 };

The first callback (free()) happens when the last reference on a device page is
dropped. This means the device page is now free and no longer used by anyone.
The second callback happens whenever the CPU tries to access a device page,
which it cannot do directly. This second callback must trigger a migration back
to system memory.
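
A sketch of what the fault() callback might look like, assuming it migrates
the single faulting page back using the migrate_vma() helper described in the
next section. my_migrate_back_ops, my_device_from_devmem(), and
my_devmem_free() are hypothetical driver pieces::

 /* Sketch of a fault() callback: migrate the faulting device page back to
  * main memory.  my_migrate_back_ops would be a migrate_vma_ops whose
  * alloc_and_copy() allocates a system page and DMA-copies the device page
  * into it.
  */
 static int my_devmem_fault(struct hmm_devmem *devmem,
                            struct vm_area_struct *vma,
                            unsigned long addr,
                            struct page *page,
                            unsigned flags,
                            pmd_t *pmdp)
 {
     struct my_device *mydev = my_device_from_devmem(devmem);
     unsigned long src = 0, dst = 0;

     /* Migrate just the one faulting page back to system memory. */
     if (migrate_vma(&my_migrate_back_ops, vma, 1, addr, addr + PAGE_SIZE,
                     &src, &dst, mydev))
         return VM_FAULT_SIGBUS;
     return 0;
 }

 static const struct hmm_devmem_ops my_devmem_ops = {
     .free  = my_devmem_free,   /* hypothetical: release the device page */
     .fault = my_devmem_fault,
 };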


Migration to and from device memory
===================================

Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copies from and to device memory. For this we need a new
migration helper::

 int migrate_vma(const struct migrate_vma_ops *ops,
                 struct vm_area_struct *vma,
                 unsigned long mentries,
                 unsigned long start,
                 unsigned long end,
                 unsigned long *src,
                 unsigned long *dst,
                 void *private);

Unlike other migration functions, it works on a range of virtual addresses, and
there are two reasons for that. First, device DMA copy has a high setup
overhead cost and thus batching multiple pages is needed, as otherwise the
migration overhead makes the whole exercise pointless. The second reason is
that the migration might be for a range of addresses the device is actively
accessing.

The migrate_vma_ops struct defines two callbacks. The first one
(alloc_and_copy()) controls destination memory allocation and the copy
operation. The second one is there to allow the device driver to perform
cleanup operations after migration::

 struct migrate_vma_ops {
     void (*alloc_and_copy)(struct vm_area_struct *vma,
                            const unsigned long *src,
                            unsigned long *dst,
                            unsigned long start,
                            unsigned long end,
                            void *private);
     void (*finalize_and_map)(struct vm_area_struct *vma,
                              const unsigned long *src,
                              const unsigned long *dst,
                              unsigned long start,
                              unsigned long end,
                              void *private);
 };

It is important to stress that these migration helpers allow for holes in the
virtual address range. Some pages in the range might not be migrated for all
the usual reasons (page is pinned, page is locked, ...). This helper does not
fail but just skips over those pages.

The alloc_and_copy() callback might decide not to migrate all pages in the
range (for reasons under the callback's control). For those, the callback just
has to leave the corresponding dst entry empty.

Finally, the migration of the struct page might fail (for file backed pages)
for various reasons (failure to freeze reference, or update page cache, ...).
If that happens, then the finalize_and_map() callback can catch any pages that
were not migrated. Note those pages were still copied to a new page and thus we
wasted bandwidth, but this is considered a rare event and a price that we are
willing to pay to keep all the code simpler.
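
To tie the pieces together, here is a hedged sketch of a driver migrating a
small range to device memory. All my_ prefixed helpers are hypothetical;
migrate_pfn(), migrate_pfn_to_page(), and the MIGRATE_PFN_* flags are assumed
to come from include/linux/migrate.h of the same kernel generation and should
be checked there, as should the additional destination flags that real
drivers set::

 static void my_alloc_and_copy(struct vm_area_struct *vma,
                               const unsigned long *src,
                               unsigned long *dst,
                               unsigned long start,
                               unsigned long end,
                               void *private)
 {
     struct my_device *mydev = private;
     unsigned long addr, i;

     for (i = 0, addr = start; addr < end; addr += PAGE_SIZE, i++) {
         struct page *spage, *dpage;

         /* Skip holes and pages the core decided cannot be migrated. */
         if (!(src[i] & MIGRATE_PFN_MIGRATE))
             continue;
         spage = migrate_pfn_to_page(src[i]);

         dpage = my_device_alloc_page(mydev);   /* a ZONE_DEVICE page */
         if (!dpage)
             continue;   /* leaving dst[i] empty just skips this page */

         /* Queue a DMA copy into the device page; an empty CPU page table
          * entry has no source page and only needs the destination cleared.
          */
         if (spage)
             my_device_dma_copy(mydev, dpage, spage);
         else
             my_device_clear_page(mydev, dpage);

         dst[i] = migrate_pfn(page_to_pfn(dpage));
         /* Real drivers also set additional MIGRATE_PFN_* flags here (for
          * instance to mark the destination as a locked device page); check
          * existing users for the exact requirements.
          */
     }
     /* Wait for all queued DMA copies before returning. */
     my_device_dma_wait(mydev);
 }

 static void my_finalize_and_map(struct vm_area_struct *vma,
                                 const unsigned long *src,
                                 const unsigned long *dst,
                                 unsigned long start,
                                 unsigned long end,
                                 void *private)
 {
     /* Pages that were not migrated stay in main memory; update any driver
      * private tracking for the ones that were.
      */
 }

 static const struct migrate_vma_ops my_migrate_ops = {
     .alloc_and_copy   = my_alloc_and_copy,
     .finalize_and_map = my_finalize_and_map,
 };

 static int my_device_migrate_to_device(struct my_device *mydev,
                                        struct vm_area_struct *vma,
                                        unsigned long start,
                                        unsigned long end)
 {
     /* For brevity, assume the range fits a small on-stack array. */
     unsigned long src[16] = { 0 }, dst[16] = { 0 };
     unsigned long npages = (end - start) >> PAGE_SHIFT;

     if (npages > 16)
         return -EINVAL;
     return migrate_vma(&my_migrate_ops, vma, npages, start, end,
                        src, dst, mydev);
 }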


Memory cgroup (memcg) and rss accounting
========================================

For now device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if the device
page is used for file backed memory, or shmem if the device page is used for
shared memory). This is a deliberate choice to keep existing applications,
which might start using device memory without knowing about it, running
unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not freeing much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for memory cgroup. Device memory pages are accounted
against the same memory cgroup that a regular page would be accounted to. This
does simplify migration to and from device memory. This also means that
migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit. We might revisit this choice later on
once we get more experience in how device memory is used and its impact on
memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit. Or when the last
reference is dropped in the case of shared memory or file backed memory.