Lines Matching +full:non +full:- +full:secure

1 // SPDX-License-Identifier: GPL-2.0
3 * Secure pages management: Migration of pages between normal and secure
10 * A pseries guest can be run as secure guest on Ultravisor-enabled
13 * hypervisor (HV) and secure memory managed by Ultravisor (UV).
15 * The page-in or page-out requests from UV will come to HV as hcalls and
18 * Private ZONE_DEVICE memory equal to the amount of secure memory
19 * available in the platform for running secure guests is hotplugged.
20 * Whenever a page belonging to the guest becomes secure, a page from this
21 * private device memory is used to represent and track that secure page
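How that device memory gets hotplugged is sketched below, along the lines of the kvmppc_uvmem_init() fragment near the end of this listing; the kvmppc_uvmem_ops name, the NUMA_NO_NODE argument and the error label are assumptions, not quoted from the file.

	/*
	 * Illustrative only: remap the carved-out, secure-memory-sized
	 * region as private ZONE_DEVICE memory so that each secure guest
	 * page can be shadowed by one device page.
	 */
	kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
	kvmppc_uvmem_pgmap.range.start = res->start;
	kvmppc_uvmem_pgmap.range.end = res->end;
	kvmppc_uvmem_pgmap.nr_range = 1;
	kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;	/* assumed name: .page_free / .migrate_to_ram */
	addr = memremap_pages(&kvmppc_uvmem_pgmap, NUMA_NO_NODE);
	if (IS_ERR(addr))
		goto out_free_region;			/* assumed label */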
31 * kvm->arch.uvmem_lock is a per-guest lock that prevents concurrent
32 * page-in and page-out requests for the same GPA. Concurrent accesses
36 * UV(secure) and vice versa. So the serialization points are around
37 * migrate_vma routines and page-in/out routines.
39 * Per-guest mutex comes with a cost though. Mainly it serializes the
40 * fault path as page-out can occur when HV faults on accessing secure
41 * guest pages. Currently UV issues page-in requests for all the guest
43 * not a cause for concern. Also currently the number of page-outs caused
44 * by HV touching secure pages is very low. If and when UV supports
45 * overcommitting, then we might see concurrent guest driven page-outs.
49 * 1. kvm->srcu - Protects KVM memslots
50 * 2. kvm->mm->mmap_lock - find_vma, migrate_vma_pages and helpers, ksm_madvise
51 * 3. kvm->arch.uvmem_lock - protects read/writes to uvmem slots thus acting
52 * as sync-points for page-in/out
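A minimal sketch of the nesting these three rules imply, modeled on the H_SVM_PAGE_IN / H_SVM_PAGE_OUT fragments later in this listing (error handling elided):

	int srcu_idx;

	srcu_idx = srcu_read_lock(&kvm->srcu);	/* 1. pin the memslots          */
	mmap_read_lock(kvm->mm);		/* 2. find_vma / migrate_vma    */
	mutex_lock(&kvm->arch.uvmem_lock);	/* 3. serialize page-in/out     */

	/* ... look up the VMA and migrate the page ... */

	mutex_unlock(&kvm->arch.uvmem_lock);
	mmap_read_unlock(kvm->mm);
	srcu_read_unlock(&kvm->srcu, srcu_idx);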
60 * secure GPAs at 64K page size and maintains one device PFN for each
61 * 64K secure GPA. UV_PAGE_IN and UV_PAGE_OUT calls by HV are also issued
64 * HV faulting on secure pages: When HV touches any secure page, it
69 * Shared pages: Whenever guest shares a secure page, UV will split and
72 * HV invalidating a page: When a regular page belonging to secure
74 * page size. Using 64K page size is correct here because any non-secure
76 * and page-out ensures this.
79 * to secure guest, it sends that to UV with a 64K UV_PAGE_IN request.
81 * into 64K mappings and would have done page-outs earlier.
83 * In summary, the current secure pages handling code in HV assumes
84 * 64K page size and in fact fails any page-in/page-out requests of
85 * non-64K size upfront. If and when UV starts supporting multiple
86 * page-sizes, we need to break this assumption.
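Concretely, the hcall handlers can reject any non-64K request before doing any work; a sketch of such an up-front check (the H_P3 return value is an assumption here):

	/* Refuse anything other than the 64K base page size up front. */
	if (page_shift != PAGE_SHIFT)	/* PAGE_SHIFT is 16 on a 64K kernel */
		return H_P3;		/* assumed error token */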
104 * ---------------
107 * (a) Secure - The GFN is secure. The GFN is associated with
108 * a Secure VM, the contents of the GFN are not accessible
109 * to the Hypervisor. This GFN can be backed by a secure-PFN,
110 * or can be backed by a normal-PFN with contents encrypted.
111 * The former is true when the GFN is paged into the
112 * ultravisor. The latter is true when the GFN is paged-out
115 * (b) Shared - The GFN is shared. The GFN is associated with
116 * a secure VM. The contents of the GFN are accessible to
117 * the Hypervisor. This GFN is backed by a normal-PFN and its
118 * contents are unencrypted.
120 * (c) Normal - The GFN is normal. The GFN is associated with
125 * ---------------
128 * the hypervisor. All its GFNs are normal-GFNs.
130 * Secure VM: A VM whose contents are not accessible to the
132 * either Shared-GFN or Secure-GFNs.
134 * Transient VM: A Normal VM that is transitioning to secure VM.
138 * in any of the three states; i.e. Secure-GFN, Shared-GFN,
139 * and Normal-GFN. The VM never executes in this state
140 * in supervisor-mode.
143 * -----------------------------
148 * --------------------
157 * secure-state. At this point any left-over normal-GFNs are
158 * transitioned to Secure-GFN.
161 * All its GFNs are moved to Normal-GFNs.
163 * UV_TERMINATE transitions the secure-VM back to normal-VM. All
164 * the secure-GFNs and shared-GFNs are transitioned to normal-GFNs.
165 * Note: The contents of the normal-GFN are undefined at this point.
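On the HV side these transitions are mirrored in kvm->arch.secure_guest; a sketch pieced together from the handler fragments later in this listing:

	/* H_SVM_INIT_START: the VM becomes transient. */
	kvm->arch.secure_guest = KVMPPC_SECURE_INIT_START;

	/* H_SVM_INIT_DONE: remaining pages migrated, the VM is now secure. */
	kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE;

	/* Abort or UV_TERMINATE: drop device pages, back to a normal VM. */
	kvm->arch.secure_guest = 0;
	uv_svm_terminate(kvm->arch.lpid);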
168 * -------------------------
170 * Secure GFN is associated with a secure-PFN; also called uvmem_pfn,
171 * when the GFN is paged-in. Its pfn[] has KVMPPC_GFN_UVMEM_PFN flag
172 * set, and contains the value of the secure-PFN.
173 * It is associated with a normal-PFN; also called mem_pfn, when
175 * The value of the normal-PFN is not tracked.
177 * Shared GFN is associated with a normal-PFN. Its pfn[] has
178 * KVMPPC_UVMEM_SHARED_PFN flag set. The value of the normal-PFN
181 * Normal GFN is associated with normal-PFN. Its pfn[] has
182 * no flag set. The value of the normal-PFN is not tracked.
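The bookkeeping above boils down to a flag word, plus an optional PFN, stored per GFN in pfns[]. One plausible encoding is sketched below; the bit positions and the KVMPPC_GFN_MEM_PFN name are assumptions for illustration, while the other identifiers appear elsewhere in this listing.

#define KVMPPC_GFN_UVMEM_PFN	(1UL << 63)	/* secure, backed by a device (secure) PFN */
#define KVMPPC_GFN_MEM_PFN	(1UL << 62)	/* secure, paged out to a normal PFN       */
#define KVMPPC_UVMEM_SHARED_PFN	(1UL << 61)	/* shared with the HV                      */
#define KVMPPC_GFN_FLAG_MASK	(KVMPPC_GFN_UVMEM_PFN | KVMPPC_GFN_MEM_PFN | \
				 KVMPPC_UVMEM_SHARED_PFN)
#define KVMPPC_GFN_PFN_MASK	(~KVMPPC_GFN_FLAG_MASK)

	/* Only a paged-in secure GFN carries a PFN value alongside its flag: */
	p->pfns[index] = uvmem_pfn | KVMPPC_GFN_UVMEM_PFN;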
185 * --------------------
 * --------------------------------------------------------------
 * |        |   Share    | Unshare  | SVM       |H_SVM_INIT_DONE|
 * |        | operation  |operation | abort/    |               |
 * |        |            |          | terminate |               |
 * --------------------------------------------------------------
 * | Secure |   Shared   | Secure   | Normal    | Secure        |
 * | Shared |   Shared   | Secure   | Normal    | Shared        |
 * | Normal |   Shared   | Secure   | Normal    | Secure        |
 * --------------------------------------------------------------
201 * --------------------
 * --------------------------------------------------------------------
 * |         |  start    |  H_SVM_  |H_SVM_   |H_SVM_     |UV_SVM_    |
 * |         |  VM       |  INIT_   |INIT_DONE|PAGE_IN    |TERMINATE  |
 * |         |           |  START   |         |           |           |
 * --------------------------------------------------------------------
 * | Normal  |  Normal   | Transient|Error    |Error      |Normal     |
 * | Secure  |  Error    | Error    |Error    |Error      |Normal     |
 * |Transient|  N/A      | Error    |Secure   |Normal     |Normal     |
 * --------------------------------------------------------------------
252 return -ENOMEM; in kvmppc_uvmem_slot_init()
253 p->pfns = vzalloc(array_size(slot->npages, sizeof(*p->pfns))); in kvmppc_uvmem_slot_init()
254 if (!p->pfns) { in kvmppc_uvmem_slot_init()
256 return -ENOMEM; in kvmppc_uvmem_slot_init()
258 p->nr_pfns = slot->npages; in kvmppc_uvmem_slot_init()
259 p->base_pfn = slot->base_gfn; in kvmppc_uvmem_slot_init()
261 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_init()
262 list_add(&p->list, &kvm->arch.uvmem_pfns); in kvmppc_uvmem_slot_init()
263 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_init()
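The p-> fields touched above belong to the per-memslot tracking structure; a sketch consistent with those accesses (the structure name, field order and exact types are assumptions, inferred from kvmppc_uvmem_slot_init()/free()):

struct kvmppc_uvmem_slot {
	struct list_head list;		/* linked on kvm->arch.uvmem_pfns        */
	unsigned long nr_pfns;		/* number of GFNs covered by the memslot */
	unsigned long base_pfn;		/* first GFN of the memslot              */
	unsigned long *pfns;		/* per-GFN state word, see comment above */
};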
275 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_free()
276 list_for_each_entry_safe(p, next, &kvm->arch.uvmem_pfns, list) { in kvmppc_uvmem_slot_free()
277 if (p->base_pfn == slot->base_gfn) { in kvmppc_uvmem_slot_free()
278 vfree(p->pfns); in kvmppc_uvmem_slot_free()
279 list_del(&p->list); in kvmppc_uvmem_slot_free()
284 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_free()
292 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) { in kvmppc_mark_gfn()
293 if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { in kvmppc_mark_gfn()
294 unsigned long index = gfn - p->base_pfn; in kvmppc_mark_gfn()
297 p->pfns[index] = uvmem_pfn | flag; in kvmppc_mark_gfn()
299 p->pfns[index] = flag; in kvmppc_mark_gfn()
305 /* mark the GFN as a secure-GFN associated with the @uvmem_pfn device-PFN. */
312 /* mark the GFN as a secure-GFN associated with a memory-PFN. */
324 /* mark the GFN as a non-existent GFN. */
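The three comments above describe thin wrappers around kvmppc_mark_gfn(); a sketch assuming the flag layout shown earlier (the argument order of kvmppc_mark_gfn() and the wrapper bodies are paraphrased, not quoted):

static void kvmppc_gfn_secure_uvmem_pfn(unsigned long gfn,
					unsigned long uvmem_pfn, struct kvm *kvm)
{
	kvmppc_mark_gfn(gfn, kvm, KVMPPC_GFN_UVMEM_PFN, uvmem_pfn);
}

static void kvmppc_gfn_secure_mem_pfn(unsigned long gfn, struct kvm *kvm)
{
	kvmppc_mark_gfn(gfn, kvm, KVMPPC_GFN_MEM_PFN, 0);
}

static void kvmppc_gfn_remove(unsigned long gfn, struct kvm *kvm)
{
	kvmppc_mark_gfn(gfn, kvm, 0, 0);
}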
330 /* return true, if the GFN is a secure-GFN backed by a secure-PFN */
336 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) { in kvmppc_gfn_is_uvmem_pfn()
337 if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { in kvmppc_gfn_is_uvmem_pfn()
338 unsigned long index = gfn - p->base_pfn; in kvmppc_gfn_is_uvmem_pfn()
340 if (p->pfns[index] & KVMPPC_GFN_UVMEM_PFN) { in kvmppc_gfn_is_uvmem_pfn()
342 *uvmem_pfn = p->pfns[index] & in kvmppc_gfn_is_uvmem_pfn()
354 * transitioned to a secure GFN. Return the value of that GFN in *gfn. If a
357 * Must be called with kvm->arch.uvmem_lock held.
366 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) in kvmppc_next_nontransitioned_gfn()
367 if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns) in kvmppc_next_nontransitioned_gfn()
375 for (i = *gfn; i < p->base_pfn + p->nr_pfns; i++) { in kvmppc_next_nontransitioned_gfn()
376 unsigned long index = i - p->base_pfn; in kvmppc_next_nontransitioned_gfn()
378 if (!(p->pfns[index] & KVMPPC_GFN_FLAG_MASK)) { in kvmppc_next_nontransitioned_gfn()
390 unsigned long gfn = memslot->base_gfn; in kvmppc_memslot_page_merge()
399 end = start + (memslot->npages << PAGE_SHIFT); in kvmppc_memslot_page_merge()
401 mmap_write_lock(kvm->mm); in kvmppc_memslot_page_merge()
403 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_memslot_page_merge()
408 ret = ksm_madvise(vma, vma->vm_start, vma->vm_end, in kvmppc_memslot_page_merge()
409 merge_flag, &vma->vm_flags); in kvmppc_memslot_page_merge()
414 start = vma->vm_end; in kvmppc_memslot_page_merge()
415 } while (end > vma->vm_end); in kvmppc_memslot_page_merge()
417 mmap_write_unlock(kvm->mm); in kvmppc_memslot_page_merge()
424 uv_unregister_mem_slot(kvm->arch.lpid, memslot->id); in __kvmppc_uvmem_memslot_delete()
440 ret = uv_register_mem_slot(kvm->arch.lpid, in __kvmppc_uvmem_memslot_create()
441 memslot->base_gfn << PAGE_SHIFT, in __kvmppc_uvmem_memslot_create()
442 memslot->npages * PAGE_SIZE, in __kvmppc_uvmem_memslot_create()
443 0, memslot->id); in __kvmppc_uvmem_memslot_create()
463 kvm->arch.secure_guest = KVMPPC_SECURE_INIT_START; in kvmppc_h_svm_init_start()
468 /* Only radix guests can be secure guests */ in kvmppc_h_svm_init_start()
472 /* NAK the transition to secure if not enabled */ in kvmppc_h_svm_init_start()
473 if (!kvm->arch.svm_enabled) in kvmppc_h_svm_init_start()
476 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_start()
495 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_start()
501 * from secure memory using UV_PAGE_OUT uvcall.
502 * Caller must hold kvm->arch.uvmem_lock.
525 /* The requested page is already paged-out, nothing to do */ in __kvmppc_svm_page_out()
531 return -1; in __kvmppc_svm_page_out()
542 ret = -1; in __kvmppc_svm_page_out()
547 pvt = spage->zone_device_data; in __kvmppc_svm_page_out()
552 * - When HV touches a secure page, for which we do UV_PAGE_OUT in __kvmppc_svm_page_out()
553 * - When a secure page is converted to shared page, we *get* in __kvmppc_svm_page_out()
555 * case we skip page-out. in __kvmppc_svm_page_out()
557 if (!pvt->skip_page_out) in __kvmppc_svm_page_out()
558 ret = uv_page_out(kvm->arch.lpid, pfn << page_shift, in __kvmppc_svm_page_out()
583 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_svm_page_out()
585 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_svm_page_out()
591 * Drop device pages that we maintain for the secure guest
608 mmap_read_lock(kvm->mm); in kvmppc_uvmem_drop_pages()
610 addr = slot->userspace_addr; in kvmppc_uvmem_drop_pages()
612 gfn = slot->base_gfn; in kvmppc_uvmem_drop_pages()
613 for (i = slot->npages; i; --i, ++gfn, addr += PAGE_SIZE) { in kvmppc_uvmem_drop_pages()
616 if (!vma || addr >= vma->vm_end) { in kvmppc_uvmem_drop_pages()
617 vma = find_vma_intersection(kvm->mm, addr, addr+1); in kvmppc_uvmem_drop_pages()
624 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_drop_pages()
628 pvt = uvmem_page->zone_device_data; in kvmppc_uvmem_drop_pages()
629 pvt->skip_page_out = skip_page_out; in kvmppc_uvmem_drop_pages()
630 pvt->remove_gfn = true; in kvmppc_uvmem_drop_pages()
633 PAGE_SHIFT, kvm, pvt->gpa)) in kvmppc_uvmem_drop_pages()
635 pvt->gpa, addr); in kvmppc_uvmem_drop_pages()
641 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_drop_pages()
644 mmap_read_unlock(kvm->mm); in kvmppc_uvmem_drop_pages()
656 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_init_abort()
659 if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE) in kvmppc_h_svm_init_abort()
662 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_abort()
667 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_abort()
669 kvm->arch.secure_guest = 0; in kvmppc_h_svm_init_abort()
670 uv_svm_terminate(kvm->arch.lpid); in kvmppc_h_svm_init_abort()
678 * Called when a normal page is moved to secure memory (UV_PAGE_IN). Device
679 * PFN will be used to keep track of the secure page on HV side.
681 * Called with kvm->arch.uvmem_lock held
696 pfn_last - pfn_first); in kvmppc_uvmem_get_page()
697 if (bit >= (pfn_last - pfn_first)) in kvmppc_uvmem_get_page()
709 pvt->gpa = gpa; in kvmppc_uvmem_get_page()
710 pvt->kvm = kvm; in kvmppc_uvmem_get_page()
713 dpage->zone_device_data = pvt; in kvmppc_uvmem_get_page()
727 * copy page from normal memory to secure memory using UV_PAGE_IN uvcall.
755 ret = -1; in kvmppc_svm_page_in()
761 ret = -1; in kvmppc_svm_page_in()
769 ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, in kvmppc_svm_page_in()
786 unsigned long gfn = memslot->base_gfn; in kvmppc_uv_migrate_mem_slot()
791 mmap_read_lock(kvm->mm); in kvmppc_uv_migrate_mem_slot()
792 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uv_migrate_mem_slot()
800 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_uv_migrate_mem_slot()
801 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_uv_migrate_mem_slot()
814 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uv_migrate_mem_slot()
815 mmap_read_unlock(kvm->mm); in kvmppc_uv_migrate_mem_slot()
826 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_init_done()
830 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_done()
849 kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE; in kvmppc_h_svm_init_done()
850 pr_info("LPID %d went secure\n", kvm->arch.lpid); in kvmppc_h_svm_init_done()
853 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_done()
860 * - If the page is already secure, then provision a new page and share
861 * - If the page is a normal page, share the existing page
878 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_share_page()
879 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
882 pvt = uvmem_page->zone_device_data; in kvmppc_share_page()
883 pvt->skip_page_out = true; in kvmppc_share_page()
888 pvt->remove_gfn = false; in kvmppc_share_page()
892 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
897 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
900 pvt = uvmem_page->zone_device_data; in kvmppc_share_page()
901 pvt->skip_page_out = true; in kvmppc_share_page()
902 pvt->remove_gfn = false; /* it continues to be a valid GFN */ in kvmppc_share_page()
907 if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, in kvmppc_share_page()
913 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
915 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_share_page()
920 * H_SVM_PAGE_IN: Move page from normal memory to secure memory.
935 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_page_in()
948 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_page_in()
949 mmap_read_lock(kvm->mm); in kvmppc_h_svm_page_in()
955 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_h_svm_page_in()
956 /* Fail the page-in request of an already paged-in page */ in kvmppc_h_svm_page_in()
961 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_h_svm_page_in()
962 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_h_svm_page_in()
972 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_h_svm_page_in()
974 mmap_read_unlock(kvm->mm); in kvmppc_h_svm_page_in()
975 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_page_in()
982 * has been moved to secure memory, we ask UV to give back the page by
990 struct kvmppc_uvmem_page_pvt *pvt = vmf->page->zone_device_data; in kvmppc_uvmem_migrate_to_ram()
992 if (kvmppc_svm_page_out(vmf->vma, vmf->address, in kvmppc_uvmem_migrate_to_ram()
993 vmf->address + PAGE_SIZE, PAGE_SHIFT, in kvmppc_uvmem_migrate_to_ram()
994 pvt->kvm, pvt->gpa)) in kvmppc_uvmem_migrate_to_ram()
1003 * Gets called when a secure GFN transitions from a secure-PFN
1005 * Gets called with kvm->arch.uvmem_lock held.
1009 unsigned long pfn = page_to_pfn(page) - in kvmppc_uvmem_page_free()
1017 pvt = page->zone_device_data; in kvmppc_uvmem_page_free()
1018 page->zone_device_data = NULL; in kvmppc_uvmem_page_free()
1019 if (pvt->remove_gfn) in kvmppc_uvmem_page_free()
1020 kvmppc_gfn_remove(pvt->gpa >> PAGE_SHIFT, pvt->kvm); in kvmppc_uvmem_page_free()
1022 kvmppc_gfn_secure_mem_pfn(pvt->gpa >> PAGE_SHIFT, pvt->kvm); in kvmppc_uvmem_page_free()
1032 * H_SVM_PAGE_OUT: Move page from secure memory to normal memory.
1044 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_page_out()
1054 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_page_out()
1055 mmap_read_lock(kvm->mm); in kvmppc_h_svm_page_out()
1061 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_h_svm_page_out()
1062 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_h_svm_page_out()
1068 mmap_read_unlock(kvm->mm); in kvmppc_h_svm_page_out()
1069 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_page_out()
1080 return -EFAULT; in kvmppc_send_page_to_uv()
1082 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_send_page_to_uv()
1086 ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT, in kvmppc_send_page_to_uv()
1090 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_send_page_to_uv()
1091 return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT; in kvmppc_send_page_to_uv()
1117 * First try the new ibm,secure-memory nodes which supersede the in kvmppc_get_secmem_size()
1118 * secure-memory-ranges property. in kvmppc_get_secmem_size()
1121 for_each_compatible_node(np, NULL, "ibm,secure-memory") { in kvmppc_get_secmem_size()
1130 np = of_find_compatible_node(NULL, NULL, "ibm,uv-firmware"); in kvmppc_get_secmem_size()
1134 prop = of_get_property(np, "secure-memory-ranges", &len); in kvmppc_get_secmem_size()
1158 * Don't fail the initialization of the kvm-hv module if in kvmppc_uvmem_init()
1159 * the platform doesn't export the ibm,uv-firmware node. in kvmppc_uvmem_init()
1160 * Let normal guests run on such PEF-disabled platforms. in kvmppc_uvmem_init()
1162 pr_info("KVMPPC-UVMEM: No support for secure guests\n"); in kvmppc_uvmem_init()
1173 kvmppc_uvmem_pgmap.range.start = res->start; in kvmppc_uvmem_init()
1174 kvmppc_uvmem_pgmap.range.end = res->end; in kvmppc_uvmem_init()
1185 pfn_first = res->start >> PAGE_SHIFT; in kvmppc_uvmem_init()
1187 kvmppc_uvmem_bitmap = kcalloc(BITS_TO_LONGS(pfn_last - pfn_first), in kvmppc_uvmem_init()
1190 ret = -ENOMEM; in kvmppc_uvmem_init()
1194 pr_info("KVMPPC-UVMEM: Secure Memory size 0x%lx\n", size); in kvmppc_uvmem_init()
1199 release_mem_region(res->start, size); in kvmppc_uvmem_init()