Searched refs:subpages (Results 1 – 5 of 5) sorted by relevance
439  unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);  in kvmppc_clear_tce() local
442  for (i = 0; i < subpages; ++i) {  in kvmppc_clear_tce()
497  unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);  in kvmppc_tce_iommu_unmap() local
498  unsigned long io_entry = entry * subpages;  in kvmppc_tce_iommu_unmap()
500  for (i = 0; i < subpages; ++i) {  in kvmppc_tce_iommu_unmap()
506  iommu_tce_kill(tbl, io_entry, subpages);  in kvmppc_tce_iommu_unmap()
555  unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);  in kvmppc_tce_iommu_map() local
556  unsigned long io_entry = entry * subpages;  in kvmppc_tce_iommu_map()
558  for (i = 0, pgoff = 0; i < subpages;  in kvmppc_tce_iommu_map()
567  iommu_tce_kill(tbl, io_entry, subpages);  in kvmppc_tce_iommu_map()
126  last unmap of subpages.
131  subpages is offset up by one. This additional reference is required to
132  get race-free detection of unmap of subpages when we have them mapped with
136  tracking. The alternative is to alter ->_mapcount in all subpages on each
178  comes. Splitting will free up unused subpages.
173  this counter is increased by the number of THP or hugetlb subpages.
175  (subpages) will cause this counter to increase by 512.
178  PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
435  individual subpages.
447  page will be split, subpages which belong to VM_LOCKED VMAs will be moved
880  bool "Support setting protections for 4k subpages (subpage_prot syscall)"
886  on the 4k subpages of each 64k page.