Lines Matching full:areas

11 * The percpu allocator handles both static and dynamic areas.  Percpu
12 * areas are allocated in chunks which are divided into units. There is
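The comment above describes the core layout: a chunk holds one equal-size unit per CPU, so a given per-CPU variable lives at the same offset inside every unit. A minimal userspace sketch of that addressing (plain `malloc`, illustrative `NR_CPUS`/`UNIT_SIZE` constants, not the kernel's real structures):

```c
#include <stdlib.h>
#include <stddef.h>

#define NR_CPUS   4
#define UNIT_SIZE 4096

/* A "chunk" is one contiguous block carved into NR_CPUS units. */
struct chunk {
	char *base;		/* start of the whole chunk */
};

/* Address of @cpu's copy of the variable at @offset within a unit. */
static void *unit_addr(struct chunk *c, int cpu, size_t offset)
{
	return c->base + (size_t)cpu * UNIT_SIZE + offset;
}

static struct chunk *chunk_alloc(void)
{
	struct chunk *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	c->base = malloc((size_t)NR_CPUS * UNIT_SIZE);
	if (!c->base) {
		free(c);
		return NULL;
	}
	return c;
}
```

Each CPU's copy is exactly one `UNIT_SIZE` stride away from its neighbor's, which is what lets a single offset identify a per-CPU variable across all units.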
171 /* chunks which need their map areas extended, protected by pcpu_lock */
386 * pcpu_next_fit_region - finds fit areas for a given allocation request
451 * Metadata free area iterators. These perform aggregation of free areas
749 /* iterate over free areas and update the contig hints */ in pcpu_block_refresh_hint()
1052 * skip over blocks and chunks that have valid free areas.
1110 * free areas, smaller allocations will eventually fill those holes.
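The matches around `pcpu_block_refresh_hint()` and the allocation path above show how the allocator keeps a "contig hint" (the largest contiguous free run) per block so it can skip blocks and chunks that cannot satisfy a request. A simplified userspace sketch of that aggregation (one byte per slot instead of a real bitmap, names illustrative):

```c
#include <stddef.h>

/*
 * Scan an allocation map and return the largest contiguous free
 * run.  In the real allocator this hint lets the fit loop skip
 * blocks whose hint is smaller than the request.
 * 0 = free, 1 = allocated.
 */
static size_t contig_hint(const char *map, size_t nslots)
{
	size_t best = 0, run = 0;
	size_t i;

	for (i = 0; i < nslots; i++) {
		if (!map[i]) {
			run++;
			if (run > best)
				best = run;
		} else {
			run = 0;	/* free run broken by an allocation */
		}
	}
	return best;
}
```

A block whose hint is smaller than the requested size can be skipped without walking its free areas at all; smaller allocations later fill the holes the hint describes.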
1837 /* clear the areas and return address relative to base address */ in pcpu_alloc()
1954 * areas can be scarce. Destroy all free chunks except for one. in __pcpu_balance_workfn()
2157 * static percpu areas are not considered. For those, use
2365 * static areas on architectures where the addressing model has
2378 * for vm areas.
2385 * percpu areas. Units which should be colocated are put into the
2386 * same group. Dynamic VM areas will be allocated according to these
2841 void **areas = NULL; in pcpu_embed_first_chunk() local
2855 areas = memblock_alloc(areas_size, SMP_CACHE_BYTES); in pcpu_embed_first_chunk()
2856 if (!areas) { in pcpu_embed_first_chunk()
2880 areas[group] = ptr; in pcpu_embed_first_chunk()
2883 if (ptr > areas[highest_group]) in pcpu_embed_first_chunk()
2886 max_distance = areas[highest_group] - base; in pcpu_embed_first_chunk()
2907 void *ptr = areas[group]; in pcpu_embed_first_chunk()
2923 ai->groups[group].base_offset = areas[group] - base; in pcpu_embed_first_chunk()
2935 if (areas[group]) in pcpu_embed_first_chunk()
2936 free_fn(areas[group], in pcpu_embed_first_chunk()
2940 if (areas) in pcpu_embed_first_chunk()
2941 memblock_free_early(__pa(areas), areas_size); in pcpu_embed_first_chunk()
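The `pcpu_embed_first_chunk()` matches above trace one flow: allocate an `areas[]` pointer array, allocate one block per group, take the lowest address as `base`, record each group's `base_offset` relative to it, and unwind on failure. A userspace sketch of that shape (fixed `NR_GROUPS`, `malloc` standing in for the memblock allocator, pointer comparison done via `uintptr_t`):

```c
#include <stdlib.h>
#include <stddef.h>
#include <stdint.h>

#define NR_GROUPS 3

struct group_layout {
	void *area;
	ptrdiff_t base_offset;	/* area - base, as in ai->groups[].base_offset */
};

static int layout_groups(struct group_layout g[NR_GROUPS], size_t size)
{
	char *base = NULL;
	int i;

	/* Allocate each group's area; the lowest address becomes @base. */
	for (i = 0; i < NR_GROUPS; i++) {
		g[i].area = malloc(size);
		if (!g[i].area)
			goto out_free;
		if (!base || (uintptr_t)g[i].area < (uintptr_t)base)
			base = g[i].area;
	}

	/* Record each group's offset relative to the common base. */
	for (i = 0; i < NR_GROUPS; i++)
		g[i].base_offset = (char *)g[i].area - base;
	return 0;

out_free:
	/* Unwind: free every area allocated so far, like the enomem path. */
	while (i--)
		free(g[i].area);
	return -1;
}
```

The real code additionally checks that the span from `base` to the highest group (`max_distance`) fits the addressing model before committing to the embedded layout; this sketch keeps only the offset bookkeeping and the error unwind.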
3110 panic("Failed to initialize percpu areas."); in setup_per_cpu_areas()
3138 panic("Failed to allocate memory for percpu areas."); in setup_per_cpu_areas()