
1 // SPDX-License-Identifier: GPL-2.0
3 * Secure pages management: Migration of pages between normal and secure
10 * A pseries guest can be run as a secure guest on Ultravisor-enabled
13 * hypervisor (HV) and secure memory managed by Ultravisor (UV).
15 * The page-in or page-out requests from UV will come to HV as hcalls and
18 * Private ZONE_DEVICE memory equal to the amount of secure memory
19 * available in the platform for running secure guests is hotplugged.
20 * Whenever a page belonging to the guest becomes secure, a page from this
21 * private device memory is used to represent and track that secure page
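The device-private region referred to above is created with the standard ZONE_DEVICE machinery. Below is a simplified sketch of what kvmppc_uvmem_init() further down does; the wrapper name uvmem_region_setup() is made up for this sketch, and error handling plus the ibm,uv-firmware checks are omitted.

static struct dev_pagemap kvmppc_uvmem_pgmap;

static const struct dev_pagemap_ops kvmppc_uvmem_ops = {
	.page_free	= kvmppc_uvmem_page_free,	/* defined later in this file */
	.migrate_to_ram	= kvmppc_uvmem_migrate_to_ram,	/* defined later in this file */
};

static int __init uvmem_region_setup(unsigned long size)
{
	struct resource *res;
	void *addr;

	/* Carve out a free physical range to stand in for the secure memory */
	res = request_free_mem_region(&iomem_resource, size, "kvmppc_uvmem");
	if (IS_ERR(res))
		return PTR_ERR(res);

	kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
	kvmppc_uvmem_pgmap.range.start = res->start;
	kvmppc_uvmem_pgmap.range.end = res->end;
	kvmppc_uvmem_pgmap.nr_range = 1;
	kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;

	/* Hotplug device-private struct pages that will track secure GFNs */
	addr = memremap_pages(&kvmppc_uvmem_pgmap, NUMA_NO_NODE);
	if (IS_ERR(addr)) {
		release_mem_region(res->start, size);
		return PTR_ERR(addr);
	}
	return 0;
}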
31 * kvm->arch.uvmem_lock is a per-guest lock that prevents concurrent
32 * page-in and page-out requests for the same GPA. Concurrent accesses
36 * UV(secure) and vice versa. So the serialization points are around
37 * migrate_vma routines and page-in/out routines.
39 * Per-guest mutex comes with a cost though. Mainly it serializes the
40 * fault path as page-out can occur when HV faults on accessing secure
41 * guest pages. Currently UV issues page-in requests for all the guest
43 * not a cause for concern. Also currently the number of page-outs caused
44 * by HV touching secure pages is very low. If and when UV supports
45 * overcommitting, then we might see concurrent guest driven page-outs.
49 * 1. kvm->srcu - Protects KVM memslots
50 * 2. kvm->mm->mmap_lock - find_vma, migrate_vma_pages and helpers, ksm_madvise
51 * 3. kvm->arch.uvmem_lock - protects read/writes to uvmem slots thus acting
52 * as sync-points for page-in/out
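For orientation, here is that nesting order as it appears in the H_SVM_PAGE_IN/OUT paths later in the file; a minimal sketch, with the wrapper function name made up and the actual migration work elided.

static long uvmem_locking_sketch(struct kvm *kvm)
{
	int srcu_idx;
	long ret = H_SUCCESS;

	srcu_idx = srcu_read_lock(&kvm->srcu);	/* 1. protects memslot lookup */
	mmap_read_lock(kvm->mm);		/* 2. find_vma()/migrate_vma_*() */
	mutex_lock(&kvm->arch.uvmem_lock);	/* 3. serializes page-in/out of a GPA */

	/* ... migrate_vma_setup()/pages()/finalize() based work goes here ... */

	mutex_unlock(&kvm->arch.uvmem_lock);
	mmap_read_unlock(kvm->mm);
	srcu_read_unlock(&kvm->srcu, srcu_idx);
	return ret;
}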
60 * secure GPAs at 64K page size and maintains one device PFN for each
61 * 64K secure GPA. UV_PAGE_IN and UV_PAGE_OUT calls by HV are also issued
64 * HV faulting on secure pages: When HV touches any secure page, it
69 * Shared pages: Whenever guest shares a secure page, UV will split and
72 * HV invalidating a page: When a regular page belonging to secure
74 * page size. Using 64K page size is correct here because any non-secure
76 * and page-out ensures this.
79 * to secure guest, it sends that to UV with a 64K UV_PAGE_IN request.
81 * into 64k mappings and would have done page-outs earlier.
83 * In summary, the current secure pages handling code in HV assumes
84 * 64K page size and in fact fails any page-in/page-out requests of
85 * non-64K size upfront. If and when UV starts supporting multiple
86 * page-sizes, we need to break this assumption.
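Concretely, the hcall handlers below reject anything other than a 64K request before doing any work, roughly as in this fragment near the top of the H_SVM_PAGE_IN/OUT handlers (treat the exact return value as an assumption of this sketch):

	/* Only 64K (PAGE_SHIFT) page-in/page-out requests are supported */
	if (page_shift != PAGE_SHIFT)
		return H_P3;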
105 * ---------------
108 * (a) Secure - The GFN is secure. The GFN is associated with
109 * a Secure VM, the contents of the GFN are not accessible
110 * to the Hypervisor. This GFN can be backed by a secure-PFN,
111 * or can be backed by a normal-PFN with contents encrypted.
112 * The former is true when the GFN is paged in to the
113 * ultravisor. The latter is true when the GFN is paged-out
116 * (b) Shared - The GFN is shared. The GFN is associated with
117 * a secure VM. The contents of the GFN are accessible to the
118 * Hypervisor. This GFN is backed by a normal-PFN and its
119 * content is unencrypted.
121 * (c) Normal - The GFN is normal. The GFN is associated with
126 * ---------------
129 * the hypervisor. All its GFNs are normal-GFNs.
131 * Secure VM: A VM whose contents are not accessible to the
133 * either Shared-GFN or Secure-GFNs.
135 * Transient VM: A Normal VM that is transitioning to secure VM.
139 * in any of the three states; i.e. Secure-GFN, Shared-GFN,
140 * and Normal-GFN. The VM never executes in this state
141 * in supervisor-mode.
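In code terms, the handlers below distinguish these VM states through the kvm->arch.secure_guest flags; a minimal sketch (the helper name is hypothetical, not part of the file):

static bool kvm_is_transient_svm(struct kvm *kvm)
{
	/* H_SVM_INIT_START seen, but H_SVM_INIT_DONE not yet reached */
	return (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START) &&
	       !(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE);
}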
144 * -----------------------------
149 * --------------------
158 * secure-state. At this point any left-over normal-GFNs are
159 * transitioned to Secure-GFN.
162 * All its GFNs are moved to Normal-GFNs.
164 * UV_TERMINATE transitions the secure-VM back to normal-VM. All
165 * the secure-GFNs and shared-GFNs are transitioned to normal-GFNs.
166 * Note: The contents of the normal-GFN are undefined at this point.
169 * -------------------------
171 * Secure GFN is associated with a secure-PFN; also called uvmem_pfn,
172 * when the GFN is paged-in. Its pfn[] has KVMPPC_GFN_UVMEM_PFN flag
173 * set, and contains the value of the secure-PFN.
174 * It is associated with a normal-PFN; also called mem_pfn, when
176 * The value of the normal-PFN is not tracked.
178 * Shared GFN is associated with a normal-PFN. Its pfn[] has
179 * KVMPPC_UVMEM_SHARED_PFN flag set. The value of the normal-PFN
182 * Normal GFN is associated with normal-PFN. Its pfn[] has
183 * no flag set. The value of the normal-PFN is not tracked.
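A sketch of how these per-GFN flags could be encoded in the upper bits of pfn[], with the low bits holding the secure-PFN value. The exact bit positions here are assumptions for illustration, not the file's definitions.

#define KVMPPC_GFN_UVMEM_PFN	(1UL << 63)	/* secure GFN backed by a device (uvmem) PFN */
#define KVMPPC_GFN_MEM_PFN	(1UL << 62)	/* secure GFN paged out to a normal PFN */
#define KVMPPC_UVMEM_SHARED_PFN	(1UL << 61)	/* shared GFN */
#define KVMPPC_GFN_FLAG_MASK	(KVMPPC_GFN_UVMEM_PFN | KVMPPC_GFN_MEM_PFN | \
				 KVMPPC_UVMEM_SHARED_PFN)
#define KVMPPC_GFN_PFN_MASK	(~KVMPPC_GFN_FLAG_MASK)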
185 * Life cycle of a GFN
186 * --------------------
188 * --------------------------------------------------------------
189 * |        |     Share  |  Unshare | SVM       |H_SVM_INIT_DONE|
190 * |        |operation   |operation | abort/    |               |
191 * |        |            |          | terminate |               |
192 * -------------------------------------------------------------
193 * |        |            |          |           |               |
194 * | Secure |     Shared | Secure   |Normal     |Secure         |
195 * |        |            |          |           |               |
196 * | Shared |     Shared | Secure   |Normal     |Shared         |
197 * |        |            |          |           |               |
198 * | Normal |     Shared | Secure   |Normal     |Secure         |
199 * --------------------------------------------------------------
201 * Life cycle of a VM
202 * --------------------
204 * --------------------------------------------------------------------
205 * |         |  start    |  H_SVM_  |H_SVM_   |H_SVM_     |UV_SVM_    |
206 * |         |  VM       |  INIT_   |INIT_DONE|PAGE_IN or |TERMINATE  |
207 * |         |           |  START   |         |PAGE_OUT   |           |
208 * --------- ----------------------------------------------------------
209 * |         |           |          |         |           |           |
210 * | Normal  |  Normal   | Transient|Error    |Error      |Normal     |
211 * |         |           |          |         |           |           |
212 * | Secure  |   Error   | Error    |Error    |Error      |Normal     |
213 * |         |           |          |         |           |           |
214 * |Transient|   N/A     | Error    |Secure   |Normal     |Normal     |
215 * --------------------------------------------------------------------
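The slot-tracking code that follows operates on a small bookkeeping structure holding one pfn[] entry per GFN of a memslot. Reconstructed here for readability from the fields actually used below (list, base_pfn, nr_pfns, pfns); field order and comments are illustrative.

struct kvmppc_uvmem_slot {
	struct list_head list;		/* linked into kvm->arch.uvmem_pfns */
	unsigned long nr_pfns;		/* number of GFNs in the memslot */
	unsigned long base_pfn;		/* first GFN of the memslot */
	unsigned long *pfns;		/* per-GFN state/flag words */
};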
253 return -ENOMEM; in kvmppc_uvmem_slot_init()
254 p->pfns = vzalloc(array_size(slot->npages, sizeof(*p->pfns))); in kvmppc_uvmem_slot_init()
255 if (!p->pfns) { in kvmppc_uvmem_slot_init()
257 return -ENOMEM; in kvmppc_uvmem_slot_init()
259 p->nr_pfns = slot->npages; in kvmppc_uvmem_slot_init()
260 p->base_pfn = slot->base_gfn; in kvmppc_uvmem_slot_init()
262 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_init()
263 list_add(&p->list, &kvm->arch.uvmem_pfns); in kvmppc_uvmem_slot_init()
264 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_init()
276 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_free()
277 list_for_each_entry_safe(p, next, &kvm->arch.uvmem_pfns, list) { in kvmppc_uvmem_slot_free()
278 if (p->base_pfn == slot->base_gfn) { in kvmppc_uvmem_slot_free()
279 vfree(p->pfns); in kvmppc_uvmem_slot_free()
280 list_del(&p->list); in kvmppc_uvmem_slot_free()
285 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_slot_free()
293 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) { in kvmppc_mark_gfn()
294 if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { in kvmppc_mark_gfn()
295 unsigned long index = gfn - p->base_pfn; in kvmppc_mark_gfn()
298 p->pfns[index] = uvmem_pfn | flag; in kvmppc_mark_gfn()
300 p->pfns[index] = flag; in kvmppc_mark_gfn()
306 /* mark the GFN as a secure-GFN associated with the device PFN @uvmem_pfn. */
313 /* mark the GFN as secure-GFN associated with a memory-PFN. */
325 /* mark the GFN as a non-existent GFN. */
331 /* return true if the GFN is a secure-GFN backed by a secure-PFN */
337 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) { in kvmppc_gfn_is_uvmem_pfn()
338 if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { in kvmppc_gfn_is_uvmem_pfn()
339 unsigned long index = gfn - p->base_pfn; in kvmppc_gfn_is_uvmem_pfn()
341 if (p->pfns[index] & KVMPPC_GFN_UVMEM_PFN) { in kvmppc_gfn_is_uvmem_pfn()
343 *uvmem_pfn = p->pfns[index] & in kvmppc_gfn_is_uvmem_pfn()
355 * transitioned to a secure GFN. return the value of that GFN in *gfn. If a
358 * Must be called with kvm->arch.uvmem_lock held.
367 list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) in kvmppc_next_nontransitioned_gfn()
368 if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns) in kvmppc_next_nontransitioned_gfn()
376 for (i = *gfn; i < p->base_pfn + p->nr_pfns; i++) { in kvmppc_next_nontransitioned_gfn()
377 unsigned long index = i - p->base_pfn; in kvmppc_next_nontransitioned_gfn()
379 if (!(p->pfns[index] & KVMPPC_GFN_FLAG_MASK)) { in kvmppc_next_nontransitioned_gfn()
391 unsigned long gfn = memslot->base_gfn; in kvmppc_memslot_page_merge()
400 end = start + (memslot->npages << PAGE_SHIFT); in kvmppc_memslot_page_merge()
402 mmap_write_lock(kvm->mm); in kvmppc_memslot_page_merge()
404 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_memslot_page_merge()
409 ret = ksm_madvise(vma, vma->vm_start, vma->vm_end, in kvmppc_memslot_page_merge()
410 merge_flag, &vma->vm_flags); in kvmppc_memslot_page_merge()
415 start = vma->vm_end; in kvmppc_memslot_page_merge()
416 } while (end > vma->vm_end); in kvmppc_memslot_page_merge()
418 mmap_write_unlock(kvm->mm); in kvmppc_memslot_page_merge()
425 uv_unregister_mem_slot(kvm->arch.lpid, memslot->id); in __kvmppc_uvmem_memslot_delete()
441 ret = uv_register_mem_slot(kvm->arch.lpid, in __kvmppc_uvmem_memslot_create()
442 memslot->base_gfn << PAGE_SHIFT, in __kvmppc_uvmem_memslot_create()
443 memslot->npages * PAGE_SIZE, in __kvmppc_uvmem_memslot_create()
444 0, memslot->id); in __kvmppc_uvmem_memslot_create()
464 kvm->arch.secure_guest = KVMPPC_SECURE_INIT_START; in kvmppc_h_svm_init_start()
469 /* Only radix guests can be secure guests */ in kvmppc_h_svm_init_start()
473 /* NAK the transition to secure if not enabled */ in kvmppc_h_svm_init_start()
474 if (!kvm->arch.svm_enabled) in kvmppc_h_svm_init_start()
477 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_start()
496 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_start()
502 * from secure memory using UV_PAGE_OUT uvcall.
503 * Caller must hold kvm->arch.uvmem_lock.
526 /* The requested page is already paged-out, nothing to do */ in __kvmppc_svm_page_out()
532 return -1; in __kvmppc_svm_page_out()
543 ret = -1; in __kvmppc_svm_page_out()
548 pvt = spage->zone_device_data; in __kvmppc_svm_page_out()
553 * - When HV touches a secure page, for which we do UV_PAGE_OUT in __kvmppc_svm_page_out()
554 * - When a secure page is converted to shared page, we *get* in __kvmppc_svm_page_out()
556 * case we skip page-out. in __kvmppc_svm_page_out()
558 if (!pvt->skip_page_out) in __kvmppc_svm_page_out()
559 ret = uv_page_out(kvm->arch.lpid, pfn << page_shift, in __kvmppc_svm_page_out()
584 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_svm_page_out()
586 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_svm_page_out()
592 * Drop device pages that we maintain for the secure guest
609 mmap_read_lock(kvm->mm); in kvmppc_uvmem_drop_pages()
611 addr = slot->userspace_addr; in kvmppc_uvmem_drop_pages()
613 gfn = slot->base_gfn; in kvmppc_uvmem_drop_pages()
614 for (i = slot->npages; i; --i, ++gfn, addr += PAGE_SIZE) { in kvmppc_uvmem_drop_pages()
617 if (!vma || addr >= vma->vm_end) { in kvmppc_uvmem_drop_pages()
618 vma = vma_lookup(kvm->mm, addr); in kvmppc_uvmem_drop_pages()
625 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_drop_pages()
629 pvt = uvmem_page->zone_device_data; in kvmppc_uvmem_drop_pages()
630 pvt->skip_page_out = skip_page_out; in kvmppc_uvmem_drop_pages()
631 pvt->remove_gfn = true; in kvmppc_uvmem_drop_pages()
634 PAGE_SHIFT, kvm, pvt->gpa)) in kvmppc_uvmem_drop_pages()
636 pvt->gpa, addr); in kvmppc_uvmem_drop_pages()
642 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uvmem_drop_pages()
645 mmap_read_unlock(kvm->mm); in kvmppc_uvmem_drop_pages()
657 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_init_abort()
660 if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE) in kvmppc_h_svm_init_abort()
663 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_abort()
668 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_abort()
670 kvm->arch.secure_guest = 0; in kvmppc_h_svm_init_abort()
671 uv_svm_terminate(kvm->arch.lpid); in kvmppc_h_svm_init_abort()
679 * Called when a normal page is moved to secure memory (UV_PAGE_IN). Device
680 * PFN will be used to keep track of the secure page on HV side.
682 * Called with kvm->arch.uvmem_lock held
697 pfn_last - pfn_first); in kvmppc_uvmem_get_page()
698 if (bit >= (pfn_last - pfn_first)) in kvmppc_uvmem_get_page()
710 pvt->gpa = gpa; in kvmppc_uvmem_get_page()
711 pvt->kvm = kvm; in kvmppc_uvmem_get_page()
714 dpage->zone_device_data = pvt; in kvmppc_uvmem_get_page()
728 * copy page from normal memory to secure memory using UV_PAGE_IN uvcall.
756 ret = -1; in kvmppc_svm_page_in()
762 ret = -1; in kvmppc_svm_page_in()
770 ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, in kvmppc_svm_page_in()
787 unsigned long gfn = memslot->base_gfn; in kvmppc_uv_migrate_mem_slot()
792 mmap_read_lock(kvm->mm); in kvmppc_uv_migrate_mem_slot()
793 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_uv_migrate_mem_slot()
801 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_uv_migrate_mem_slot()
802 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_uv_migrate_mem_slot()
815 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_uv_migrate_mem_slot()
816 mmap_read_unlock(kvm->mm); in kvmppc_uv_migrate_mem_slot()
827 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_init_done()
831 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_init_done()
850 kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE; in kvmppc_h_svm_init_done()
851 pr_info("LPID %d went secure\n", kvm->arch.lpid); in kvmppc_h_svm_init_done()
854 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_init_done()
861 * - If the page is already secure, then provision a new page and share
862 * - If the page is a normal page, share the existing page
879 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_share_page()
880 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
883 pvt = uvmem_page->zone_device_data; in kvmppc_share_page()
884 pvt->skip_page_out = true; in kvmppc_share_page()
889 pvt->remove_gfn = false; in kvmppc_share_page()
893 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
898 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
901 pvt = uvmem_page->zone_device_data; in kvmppc_share_page()
902 pvt->skip_page_out = true; in kvmppc_share_page()
903 pvt->remove_gfn = false; /* it continues to be a valid GFN */ in kvmppc_share_page()
908 if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, in kvmppc_share_page()
914 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_share_page()
916 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_share_page()
921 * H_SVM_PAGE_IN: Move page from normal memory to secure memory.
936 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_page_in()
949 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_page_in()
950 mmap_read_lock(kvm->mm); in kvmppc_h_svm_page_in()
956 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_h_svm_page_in()
957 /* Fail the page-in request of an already paged-in page */ in kvmppc_h_svm_page_in()
962 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_h_svm_page_in()
963 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_h_svm_page_in()
973 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_h_svm_page_in()
975 mmap_read_unlock(kvm->mm); in kvmppc_h_svm_page_in()
976 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_page_in()
983 * has been moved to secure memory, we ask UV to give back the page by
991 struct kvmppc_uvmem_page_pvt *pvt = vmf->page->zone_device_data; in kvmppc_uvmem_migrate_to_ram()
993 if (kvmppc_svm_page_out(vmf->vma, vmf->address, in kvmppc_uvmem_migrate_to_ram()
994 vmf->address + PAGE_SIZE, PAGE_SHIFT, in kvmppc_uvmem_migrate_to_ram()
995 pvt->kvm, pvt->gpa)) in kvmppc_uvmem_migrate_to_ram()
1004 * Gets called when a secure GFN transitions from a secure-PFN
1006 * Gets called with kvm->arch.uvmem_lock held.
1010 unsigned long pfn = page_to_pfn(page) - in kvmppc_uvmem_page_free()
1018 pvt = page->zone_device_data; in kvmppc_uvmem_page_free()
1019 page->zone_device_data = NULL; in kvmppc_uvmem_page_free()
1020 if (pvt->remove_gfn) in kvmppc_uvmem_page_free()
1021 kvmppc_gfn_remove(pvt->gpa >> PAGE_SHIFT, pvt->kvm); in kvmppc_uvmem_page_free()
1023 kvmppc_gfn_secure_mem_pfn(pvt->gpa >> PAGE_SHIFT, pvt->kvm); in kvmppc_uvmem_page_free()
1033 * H_SVM_PAGE_OUT: Move page from secure memory to normal memory.
1045 if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) in kvmppc_h_svm_page_out()
1055 srcu_idx = srcu_read_lock(&kvm->srcu); in kvmppc_h_svm_page_out()
1056 mmap_read_lock(kvm->mm); in kvmppc_h_svm_page_out()
1062 vma = find_vma_intersection(kvm->mm, start, end); in kvmppc_h_svm_page_out()
1063 if (!vma || vma->vm_start > start || vma->vm_end < end) in kvmppc_h_svm_page_out()
1069 mmap_read_unlock(kvm->mm); in kvmppc_h_svm_page_out()
1070 srcu_read_unlock(&kvm->srcu, srcu_idx); in kvmppc_h_svm_page_out()
1081 return -EFAULT; in kvmppc_send_page_to_uv()
1083 mutex_lock(&kvm->arch.uvmem_lock); in kvmppc_send_page_to_uv()
1087 ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT, in kvmppc_send_page_to_uv()
1091 mutex_unlock(&kvm->arch.uvmem_lock); in kvmppc_send_page_to_uv()
1092 return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT; in kvmppc_send_page_to_uv()
1118 * First try the new ibm,secure-memory nodes which supersede the in kvmppc_get_secmem_size()
1119 * secure-memory-ranges property. in kvmppc_get_secmem_size()
1122 for_each_compatible_node(np, NULL, "ibm,secure-memory") { in kvmppc_get_secmem_size()
1131 np = of_find_compatible_node(NULL, NULL, "ibm,uv-firmware"); in kvmppc_get_secmem_size()
1135 prop = of_get_property(np, "secure-memory-ranges", &len); in kvmppc_get_secmem_size()
1159 * Don't fail the initialization of kvm-hv module if in kvmppc_uvmem_init()
1160 * the platform doesn't export ibm,uv-firmware node. in kvmppc_uvmem_init()
1161 * Let normal guests run on such PEF-disabled platforms. in kvmppc_uvmem_init()
1163 pr_info("KVMPPC-UVMEM: No support for secure guests\n"); in kvmppc_uvmem_init()
1174 kvmppc_uvmem_pgmap.range.start = res->start; in kvmppc_uvmem_init()
1175 kvmppc_uvmem_pgmap.range.end = res->end; in kvmppc_uvmem_init()
1186 pfn_first = res->start >> PAGE_SHIFT; in kvmppc_uvmem_init()
1188 kvmppc_uvmem_bitmap = kcalloc(BITS_TO_LONGS(pfn_last - pfn_first), in kvmppc_uvmem_init()
1191 ret = -ENOMEM; in kvmppc_uvmem_init()
1195 pr_info("KVMPPC-UVMEM: Secure Memory size 0x%lx\n", size); in kvmppc_uvmem_init()
1200 release_mem_region(res->start, size); in kvmppc_uvmem_init()