.. _hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into the regular kernel path, with the
cornerstone of this being specialized struct page for such memory (see sections
5 to 7 of this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program addresses coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPU, DSP, or FPGA are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents:: :local:

Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because devices can only access memory allocated
through a device specific API. This implies that all memory objects in a
program are not equal from the device point of view, which complicates large
programs that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private,
mmap share) and memory allocated through the device driver API (this still ends
up with an mmap, but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
complex data set requires re-mapping all the pointer relations between its
elements. This is error prone and programs get harder to debug because of the
duplicate data sets and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library, and thus each
library might have to duplicate its input data set using the device specific
memory allocator. Large projects suffer from this and waste resources because
of the various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from a CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is
worse in the other direction: the CPU can only access a limited range of the
device memory and cannot perform atomic operations on it. Thus device memory
cannot be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
and 16 lanes). This is 33 times less than the fastest GPU memory (1 TBytes/s).
The final limitation is latency. Access to main memory from the device has an
order of magnitude higher latency than when the device accesses its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you
must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations,
flush, ...). This cannot be done through common code for all devices. Hence
HMM provides helpers to factor out everything that can be shared, while leaving
the hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms; from the CPU
point of view, everything looks like a page that has been swapped out to disk.
Using a struct page gives the easiest and cleanest integration with existing mm
mechanisms. Here again, HMM only provides helpers, first to hotplug new
ZONE_DEVICE memory for the device memory (a sketch of this step closes this
section) and second to perform migration. Policy decisions of what and when to
migrate are left to the device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror a process
address space, keeping both CPU and device page tables synchronized, but also
leverages device memory by migrating the part of the data set that is actively
being used by the device.
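
To make the hotplug step concrete, here is a minimal sketch of how a driver
could create struct pages for its memory through the generic dev_pagemap
facility that backs ZONE_DEVICE. The struct driver_device layout and all
driver_* names are hypothetical, and the exact dev_pagemap fields vary between
kernel versions, so treat this as an outline rather than a reference::

 #include <linux/memremap.h>

 /* Hypothetical driver callbacks; see the migration section below. */
 static const struct dev_pagemap_ops driver_pagemap_ops = {
     /*
      * The CPU faulted on a device private page: migrate the page back
      * to main memory before the CPU access can proceed.
      */
     .migrate_to_ram = driver_migrate_to_ram,
     /*
      * The last reference to a device page was dropped: the driver can
      * reuse the backing device memory.
      */
     .page_free = driver_page_free,
 };

 static int driver_hotplug_memory(struct driver_device *drv)
 {
     /* Device private memory: the CPU cannot map it directly. */
     drv->pagemap.type = MEMORY_DEVICE_PRIVATE;
     drv->pagemap.res = *drv->devmem_res;    /* device memory range */
     drv->pagemap.ops = &driver_pagemap_ops;

     /* Allocates a struct page for every page in the range. */
     drv->pages = devm_memremap_pages(drv->dev, &drv->pagemap);
     return PTR_ERR_OR_ZERO(drv->pages);
 }
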

Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
the CPU page table into a device page table; HMM helps keep both synchronized.
A device driver that wants to mirror a process address space must start with
the registration of an hmm_mirror struct::

 int hmm_mirror_register(struct hmm_mirror *mirror,
                         struct mm_struct *mm);

The mirror struct has a set of callbacks that are used
to propagate CPU page tables::

 struct hmm_mirror_ops {
     /* release() - release hmm_mirror
      *
      * @mirror: pointer to struct hmm_mirror
      *
      * This is called when the mm_struct is being released. The callback
      * must ensure that all access to any pages obtained from this mirror
      * is halted before the callback returns. All future access should
      * fault.
      */
     void (*release)(struct hmm_mirror *mirror);

     /* sync_cpu_device_pagetables() - synchronize page tables
      *
      * @mirror: pointer to struct hmm_mirror
      * @update: update information (see struct mmu_notifier_range)
      * Return: -EAGAIN if update.blockable false and callback need to
      *         block, 0 otherwise.
      *
      * This callback ultimately originates from mmu_notifiers when the CPU
      * page table is updated. The device driver must update its page table
      * in response to this callback. The update argument tells what action
      * to perform.
      *
      * The device driver must not return from this callback until the device
      * page tables are completely updated (TLBs flushed, etc); this is a
      * synchronous call.
      */
     int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
                                  const struct mmu_notifier_range *update);
 };

The device driver must perform the update action to the range (mark range
read only, or fully unmap, etc.). The device must complete the update before
the driver callback returns.
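
For illustration, here is a minimal sketch of such a callback, assuming a
hypothetical struct driver_device that embeds the mirror and uses a mutex as
the update lock (the same lock that the hmm_range_valid() discussion below
relies on). The release callback and the hardware specific invalidation are
elided::

 struct driver_device {
     struct hmm_mirror mirror;
     struct mutex update;    /* serializes device page table updates */
     /* ... device page table state ... */
 };

 static int driver_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
                                  const struct mmu_notifier_range *update)
 {
     struct driver_device *drv =
         container_of(mirror, struct driver_device, mirror);

     /* Updating the device page table sleeps; bail out if we may not. */
     if (!mmu_notifier_range_blockable(update))
         return -EAGAIN;

     mutex_lock(&drv->update);
     /*
      * Write device commands that unmap and invalidate the range
      * [update->start, update->end) in the device page table, schedule
      * them, and wait for the device to confirm the TLB flush before
      * returning: this callback is synchronous.
      */
     mutex_unlock(&drv->update);
     return 0;
 }

 static const struct hmm_mirror_ops driver_mirror_ops = {
     .release = driver_release,      /* elided */
     .sync_cpu_device_pagetables = driver_sync_cpu_device_pagetables,
 };

Registration then boils down to setting mirror.ops = &driver_mirror_ops and
calling hmm_mirror_register(&drv->mirror, current->mm), for example when a
process opens the device file.
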
When the device driver wants to populate a range of virtual addresses, it can
use::

 long hmm_range_fault(struct hmm_range *range, unsigned int flags);

With the HMM_RANGE_SNAPSHOT flag, it will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
Without that flag, it does trigger a page fault on missing or read-only entries
if write access is requested (see below). Page faults use the generic mm page
fault code path just like a CPU page fault.

In both cases, hmm_range_fault() copies CPU page table entries into its pfns
array argument. Each entry in that array corresponds to an address in the
virtual range. HMM provides a set of flags to help the driver identify special
CPU page table entries.

Locking within the sync_cpu_device_pagetables() callback is the most important
aspect the driver must respect in order to keep things properly synchronized.
The usage pattern is::

 int driver_populate_range(...)
 {
     struct hmm_range range;
     ...

     range.start = ...;
     range.end = ...;
     range.pfns = ...;
     range.flags = ...;
     range.values = ...;
     range.pfn_shift = ...;
     hmm_range_register(&range, mirror);

     /*
      * Just wait for range to be valid, safe to ignore return value as we
      * will use the return value of hmm_range_fault() below under the
      * mmap_sem to ascertain the validity of the range.
      */
     hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);

 again:
     down_read(&mm->mmap_sem);
     ret = hmm_range_fault(&range, HMM_RANGE_SNAPSHOT);
     if (ret) {
         up_read(&mm->mmap_sem);
         if (ret == -EBUSY) {
             /*
              * No need to check hmm_range_wait_until_valid() return value
              * on retry; we will get the proper error with hmm_range_fault().
              */
             hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
             goto again;
         }
         hmm_range_unregister(&range);
         return ret;
     }
     take_lock(driver->update);
     if (!hmm_range_valid(&range)) {
         release_lock(driver->update);
         up_read(&mm->mmap_sem);
         goto again;
     }

     // Use pfns array content to update device page table

     hmm_range_unregister(&range);
     release_lock(driver->update);
     up_read(&mm->mmap_sem);
     return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
sync_cpu_device_pagetables() callback. That lock must be held before calling
hmm_range_valid() to avoid any race with a concurrent CPU page table update.

HMM implements all this on top of the mmu_notifier API because we wanted a
simpler API and also to be able to perform optimizations later on, like doing
concurrent device updates in multi-device scenarios.

HMM also bridges the impedance mismatch between how CPU page table updates
are done (by the CPU writing to the page table and flushing TLBs) and how
devices update their own page tables. Device updates are a multi-step process.
First, appropriate commands are written to a buffer, then this buffer is
scheduled for execution on the device. It is only once the device has executed
the commands in the buffer that the update is done. Creating and scheduling the
update command buffer can happen concurrently for multiple devices. Waiting for
each device to report commands as executed is serialized (there is no point in
doing this concurrently).


Leverage default_flags and pfn_flags_mask
=========================================

The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that
specify fault or snapshot policy for the whole range instead of having to set
them for each entry in the pfns array.

For instance, if the device flags for range.flags are::

 range.flags[HMM_PFN_VALID] = (1 << 63);
 range.flags[HMM_PFN_WRITE] = (1 << 62);

and the device driver wants pages for a range with at least read permission,
it sets::

 range->default_flags = (1 << 63);
 range->pfn_flags_mask = 0;

and calls hmm_range_fault() as described above. This will fault in all pages
in the range with at least read permission.

Now let's say the driver wants to do the same except for one page in the range,
for which it wants to have write permission. The driver then sets::

 range->default_flags = (1 << 63);
 range->pfn_flags_mask = (1 << 62);
 range->pfns[index_of_write] = (1 << 62);

With this, HMM will fault in all pages with at least read (i.e., valid), and
for the address == range->start + (index_of_write << PAGE_SHIFT) it will fault
with write permission, i.e., if the CPU pte does not have write permission set
then HMM will call handle_mm_fault().
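
In other words, HMM honors the per-entry bits of the pfns array only where
pfn_flags_mask has bits set, and always ORs in default_flags. The following
helper is only an illustration of that combination rule, not part of the HMM
API::

 /* Flags hmm_range_fault() will fault with for entry i of the pfns array. */
 static uint64_t effective_request_flags(const struct hmm_range *range,
                                         unsigned long i)
 {
     return (range->pfns[i] & range->pfn_flags_mask) | range->default_flags;
 }

With the write example above, every entry thus requests only read (valid)
permission, except pfns[index_of_write], which also requests write.
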
Note that HMM will populate the pfns array with write permission for any page
that is mapped with CPU write permission no matter what values are set
in default_flags or pfn_flags_mask.


Represent and manage device memory from core kernel point of view
==================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory, which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

Migration to and from device memory
===================================

Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copy from and to device memory. For this we need to use the
migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize() helpers.
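
Below is a minimal sketch of that sequence for migrating a small, page-aligned
anonymous range to device memory. driver_migrate_to_device(), struct
driver_device, and NR_PAGES are hypothetical, and the device specific
allocation and DMA steps are only outlined in comments::

 #include <linux/migrate.h>

 #define NR_PAGES 16    /* assume a small, fixed-size range for brevity */

 static int driver_migrate_to_device(struct driver_device *drv,
                                     struct vm_area_struct *vma,
                                     unsigned long start)
 {
     unsigned long src[NR_PAGES] = {};
     unsigned long dst[NR_PAGES] = {};
     struct migrate_vma args = {
         .vma   = vma,
         .start = start,
         .end   = start + NR_PAGES * PAGE_SIZE,
         .src   = src,
         .dst   = dst,
     };
     int ret;

     /*
      * Collect and isolate the source pages; their CPU mappings are
      * replaced with migration entries so that CPU access blocks until
      * migrate_vma_finalize().
      */
     ret = migrate_vma_setup(&args);
     if (ret)
         return ret;

     /*
      * For each entry collected in args.src, allocate a device page,
      * start the DMA copy on the device, and set
      * args.dst[i] = migrate_pfn(page_to_pfn(device_page)) plus whatever
      * flag bits the kernel version requires. Entries left at 0 in
      * args.dst are simply not migrated. Wait for the DMA copies to
      * complete before the next step.
      */

     /* Point the CPU page table entries at the new device pages. */
     migrate_vma_pages(&args);

     /* Drop references on the source pages and restore CPU access. */
     migrate_vma_finalize(&args);
     return 0;
 }
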

Memory cgroup (memcg) and rss accounting
========================================

For now, device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if the device
page is used for file backed pages, or shmem if the device page is used for
shared memory). This is a deliberate choice to keep existing applications, that
might start using device memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not freeing much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for memory cgroups. Device memory pages are
accounted against the same memory cgroup that a regular page would be accounted
to. This does simplify migration to and from device memory. It also means that
migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit: the pages were already charged to the
same cgroup before they migrated to device memory. We might revisit this choice
later on once we get more experience in how device memory is used and its
impact on memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit, or, in the case of
shared memory or file backed memory, when the last reference is dropped.