Searched refs:MPOL_BIND (Results 1 – 6 of 6) sorted by relevance
22 MPOL_BIND, enumerator
400 [MPOL_BIND] = {
785 case MPOL_BIND: in get_policy_nodemask()
1711 if (unlikely(policy->mode == MPOL_BIND) && in policy_nodemask()
1731 WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE)); in policy_node()
1775 case MPOL_BIND: { in mempolicy_slab_node()
1867 if ((*mpol)->mode == MPOL_BIND) in huge_node()
1908 case MPOL_BIND: in init_nodemask_of_mempolicy()
1955 case MPOL_BIND: in mempolicy_nodemask_intersects()
2160 case MPOL_BIND: in __mpol_equal()
2315 case MPOL_BIND: in mpol_misplaced()
[all …]
605 mempolicy MPOL_BIND, and the nodes to which it was bound overlap with
607 of MPOL_BIND nodes are still allowed in the new cpuset. If the task
608 was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
610 was MPOL_BIND bound to the new cpuset (even though its NUMA placement,
116 allocation policy of MPOL_BIND | MPOL_F_STATIC_NODES.
195 MPOL_BIND
355 path". Note that for MPOL_BIND, the "usage" extends across the entire
348 ret = set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask)*8); in bind_to_memnode()