Lines Matching +full:data +full:- +full:mirror

7 Provide infrastructure and helpers to integrate non-conventional memory (device
23 CPU page-table mirroring works and the purpose of HMM in this context. The
52 For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
53 for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
54 complex data set needs to re-map all the pointer relations between each of its
56 duplicate data set and addresses.
58 Split address space also means that libraries cannot transparently use data
60 might have to duplicate its input data set using the device specific memory
71 are only doable with a shared address space. It is also more reasonable to use
97 two-way cache coherency between CPU and device and allow all atomic operations the
117 allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
138 With these two features, HMM not only allows a device to mirror process address
140 leverages device memory by migrating the part of the data set that is actively being
149 device driver that wants to mirror a process address space must start with the
152 int hmm_mirror_register(struct hmm_mirror *mirror,
155 The mirror struct has a set of callbacks that are used
159 /* release() - release hmm_mirror
161 * @mirror: pointer to struct hmm_mirror
164 * must ensure that all access to any pages obtained from this mirror
168 void (*release)(struct hmm_mirror *mirror);
170 /* sync_cpu_device_pagetables() - synchronize page tables
172 * @mirror: pointer to struct hmm_mirror
174 * Return: -EAGAIN if update.blockable is false and the callback needs to
186 int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
200 entries and will not trigger a page fault on missing or non-present entries.
201 Without that flag, it does trigger a page fault on missing or read-only entries
225 hmm_range_register(&range, mirror);
235 down_read(&mm->mmap_sem);
238 up_read(&mm->mmap_sem);
239 if (ret == -EBUSY) {
250 take_lock(driver->update);
252 release_lock(driver->update);
253 up_read(&mm->mmap_sem);
260 release_lock(driver->update);
261 up_read(&mm->mmap_sem);
265 The driver->update lock is the same lock that the driver takes inside its
271 concurrent device updates in a multi-device scenario.
275 update their own page table. Device updates are a multi-step process. First,
299 range->default_flags = (1 << 63);
300 range->pfn_flags_mask = 0;
308 range->default_flags = (1 << 63);
309 range->pfn_flags_mask = (1 << 62);
310 range->pfns[index_of_write] = (1 << 62);
313 address == range->start + (index_of_write << PAGE_SHIFT) it will fault with
326 used a device-specific data structure to keep information about migrated memory