Lines Matching full:areas
11 * The percpu allocator handles both static and dynamic areas. Percpu
12 * areas are allocated in chunks which are divided into units. There is
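The two comment fragments above describe the core layout: percpu areas live in chunks, and each chunk is divided into per-CPU units. A minimal sketch of that addressing model (illustrative only — `toy_chunk` and `toy_per_cpu_ptr` are not the kernel's actual structures, which the real `per_cpu_ptr()` machinery implements with unit offsets per possible CPU):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Simplified model: a chunk spans nr_units units of unit_size bytes,
 * one unit per possible CPU.  A percpu "pointer" is really an offset
 * within the unit, so the concrete address for a given CPU is
 * base + cpu_unit * unit_size + offset.
 */
struct toy_chunk {
	char  *base;       /* start of the chunk's mapped region */
	size_t unit_size;  /* bytes reserved per CPU */
	int    nr_units;   /* one unit per possible CPU */
};

/* Translate (chunk-relative offset, cpu) to an address. */
static void *toy_per_cpu_ptr(struct toy_chunk *c, size_t off, int cpu)
{
	assert(cpu < c->nr_units && off < c->unit_size);
	return c->base + (size_t)cpu * c->unit_size + off;
}
```

Both static (compile-time) and dynamic percpu variables resolve through the same offset-plus-unit translation; only how the offset is obtained differs.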
177 /* chunks which need their map areas extended, protected by pcpu_lock */
411 * pcpu_next_fit_region - finds fit areas for a given allocation request
476 * Metadata free area iterators. These perform aggregation of free areas
797 /* iterate over free areas and update the contig hints */ in pcpu_block_refresh_hint()
1100 * skip over blocks and chunks that have valid free areas.
1155 * free areas, smaller allocations will eventually fill those holes.
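The fragments above (contig hints, skipping blocks/chunks without a valid free area, small allocations backfilling holes) describe a hinted first-fit scan. A hedged sketch of the idea, assuming a toy per-block allocation bitmap — the names `toy_block`, `toy_refresh_hint`, and `toy_find_fit` are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Each block caches the size of its largest contiguous free run
 * (the "contig hint"), so an allocation scan can skip whole blocks
 * without walking their bitmaps.
 */
#define TOY_BLOCK_BITS 16

struct toy_block {
	bool used[TOY_BLOCK_BITS]; /* per-slot allocation bitmap */
	int  contig_hint;          /* largest contiguous free run */
};

/* Iterate over free areas and update the contig hint. */
static void toy_refresh_hint(struct toy_block *b)
{
	int run = 0, best = 0, i;

	for (i = 0; i < TOY_BLOCK_BITS; i++) {
		run = b->used[i] ? 0 : run + 1;
		if (run > best)
			best = run;
	}
	b->contig_hint = best;
}

/* First offset of a free run of 'want' slots across blocks, or -1.
 * Blocks whose hint is too small are skipped outright. */
static int toy_find_fit(struct toy_block *blocks, int nr, int want)
{
	int i, j, run;

	for (i = 0; i < nr; i++) {
		if (blocks[i].contig_hint < want)
			continue; /* cannot possibly fit here */
		run = 0;
		for (j = 0; j < TOY_BLOCK_BITS; j++) {
			run = blocks[i].used[j] ? 0 : run + 1;
			if (run == want)
				return i * TOY_BLOCK_BITS + j - want + 1;
		}
	}
	return -1;
}
```

Note how a large request skips past a mostly-full block, while a one-slot request can still land in that block's small hole — the "smaller allocations will eventually fill those holes" behavior quoted above.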
1880 /* clear the areas and return address relative to base address */ in pcpu_alloc()
1996 * areas can be scarce. Destroy all free chunks except for one. in pcpu_balance_free()
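The pcpu_balance_free() fragment above states the reclaim rule: fully mapped free areas are scarce, so when several chunks become completely free, one is kept as a spare and the rest are destroyed. A minimal sketch of that policy with `malloc`/`free` standing in for chunk creation/destruction (`toy_chunk2` and `toy_balance_free` are hypothetical names, not kernel API):

```c
#include <stdlib.h>

struct toy_chunk2 {
	int   is_free; /* chunk has no live allocations */
	void *mem;     /* backing storage */
};

/*
 * Destroy all fully-free chunks except one spare (kept so future
 * allocations need not rebuild a mapped area from scratch).
 * Returns how many chunks survive.
 */
static int toy_balance_free(struct toy_chunk2 *chunks, int nr)
{
	int kept_spare = 0, survivors = 0, i;

	for (i = 0; i < nr; i++) {
		if (chunks[i].is_free && kept_spare) {
			free(chunks[i].mem); /* destroy surplus empty chunk */
			chunks[i].mem = NULL;
			continue;
		}
		if (chunks[i].is_free)
			kept_spare = 1; /* first empty chunk is the spare */
		survivors++;
	}
	return survivors;
}
```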
2343 * static percpu areas are not considered. For those, use
2551 * static areas on architectures where the addressing model has
2564 * for vm areas.
2571 * percpu areas. Units which should be colocated are put into the
2572 * same group. Dynamic VM areas will be allocated according to these
3069 void **areas = NULL; in pcpu_embed_first_chunk() local
3083 areas = memblock_alloc(areas_size, SMP_CACHE_BYTES); in pcpu_embed_first_chunk()
3084 if (!areas) { in pcpu_embed_first_chunk()
3108 areas[group] = ptr; in pcpu_embed_first_chunk()
3111 if (ptr > areas[highest_group]) in pcpu_embed_first_chunk()
3114 max_distance = areas[highest_group] - base; in pcpu_embed_first_chunk()
3135 void *ptr = areas[group]; in pcpu_embed_first_chunk()
3151 ai->groups[group].base_offset = areas[group] - base; in pcpu_embed_first_chunk()
3163 if (areas[group]) in pcpu_embed_first_chunk()
3164 pcpu_fc_free(areas[group], in pcpu_embed_first_chunk()
3168 if (areas) in pcpu_embed_first_chunk()
3169 memblock_free(areas, areas_size); in pcpu_embed_first_chunk()
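The pcpu_embed_first_chunk() fragments above show the per-group bookkeeping: allocate an `areas[]` pointer array, allocate each group's area, track the lowest (`base`) and highest addresses, record each group's `base_offset` relative to `base`, and on failure free whatever was allocated. A hedged reconstruction of that pattern, with `malloc` standing in for `memblock_alloc()`/`pcpu_fc_alloc()` and all names illustrative:

```c
#include <stdlib.h>

/*
 * Allocate one area per group, record each group's offset relative to
 * the lowest-addressed area, and report the overall span.  Mirrors the
 * allocate/track/cleanup shape of pcpu_embed_first_chunk(), not its
 * actual signature.
 */
static int toy_embed_first_chunk(void *areas[], long base_offsets[],
				 int nr_groups, size_t group_size,
				 long *max_distance)
{
	char *base = NULL;
	int highest_group = 0;
	int group;

	for (group = 0; group < nr_groups; group++) {
		char *ptr = malloc(group_size);

		if (!ptr)
			goto out_free_areas;
		areas[group] = ptr;
		if (!base || ptr < base)
			base = ptr;
		if (ptr > (char *)areas[highest_group])
			highest_group = group;
	}
	/* span from the lowest area to the end of the highest one */
	*max_distance = (char *)areas[highest_group] - base
			+ (long)group_size;

	for (group = 0; group < nr_groups; group++)
		base_offsets[group] = (char *)areas[group] - base;
	return 0;

out_free_areas:
	/* error path: free any areas already allocated */
	while (group-- > 0)
		free(areas[group]);
	return -1;
}
```

In the real function, `max_distance` matters because the embed allocator requires all groups to fit within the vmalloc space's addressing limits; if they are spread too far apart, it falls back to the page-at-a-time first-chunk allocator.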
3390 panic("Failed to initialize percpu areas."); in setup_per_cpu_areas()
3418 panic("Failed to allocate memory for percpu areas."); in setup_per_cpu_areas()