Lines matching full-word `areas`
11 * The percpu allocator handles both static and dynamic areas. Percpu
12 * areas are allocated in chunks which are divided into units. There is
177 /* chunks which need their map areas extended, protected by pcpu_lock */
411 * pcpu_next_fit_region - finds fit areas for a given allocation request
476 * Metadata free area iterators. These perform aggregation of free areas
797 /* iterate over free areas and update the contig hints */ in pcpu_block_refresh_hint()
1100 * skip over blocks and chunks that have valid free areas.
1155 * free areas, smaller allocations will eventually fill those holes.
1881 /* clear the areas and return address relative to base address */ in pcpu_alloc()
1996 * areas can be scarce. Destroy all free chunks except for one. in pcpu_balance_free()
2345 * static percpu areas are not considered. For those, use
2553 * static areas on architectures where the addressing model has
2566 * for vm areas.
2573 * percpu areas. Units which should be colocated are put into the
2574 * same group. Dynamic VM areas will be allocated according to these
3037 void **areas = NULL; in pcpu_embed_first_chunk() local
3051 areas = memblock_alloc(areas_size, SMP_CACHE_BYTES); in pcpu_embed_first_chunk()
3052 if (!areas) { in pcpu_embed_first_chunk()
3076 areas[group] = ptr; in pcpu_embed_first_chunk()
3079 if (ptr > areas[highest_group]) in pcpu_embed_first_chunk()
3082 max_distance = areas[highest_group] - base; in pcpu_embed_first_chunk()
3103 void *ptr = areas[group]; in pcpu_embed_first_chunk()
3119 ai->groups[group].base_offset = areas[group] - base; in pcpu_embed_first_chunk()
3131 if (areas[group]) in pcpu_embed_first_chunk()
3132 free_fn(areas[group], in pcpu_embed_first_chunk()
3136 if (areas) in pcpu_embed_first_chunk()
3137 memblock_free_early(__pa(areas), areas_size); in pcpu_embed_first_chunk()
3306 panic("Failed to initialize percpu areas."); in setup_per_cpu_areas()
3334 panic("Failed to allocate memory for percpu areas."); in setup_per_cpu_areas()