Lines Matching full:we

44 ** Detect whether or not we are building for a 32- or 64-bit (LP/LLP)
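The check at line 44 has no portable way to ask the compiler for the pointer width, so it keys off well-known architecture and data-model macros. A minimal standalone sketch of that approach (the macro list follows the common TLSF sources; DEMO_TLSF_64BIT is a placeholder name for this sketch, not the macro the file defines):

#include <stdio.h>

/* There is no reliable portable compile-time query for pointer width, so the
** check relies on well-known architecture / data-model macros. */
#if defined (__alpha__) || defined (__ia64__) || defined (__x86_64__) \
    || defined (_WIN64) || defined (__LP64__) || defined (__LLP64__)
#define DEMO_TLSF_64BIT
#endif

int main(void)
{
#if defined (DEMO_TLSF_64BIT)
    printf("building for a 64-bit (LP64/LLP64) target\n");
#else
    printf("building for a 32-bit target\n");
#endif
    return 0;
}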
242 ** We support allocations of sizes up to (1 << FL_INDEX_MAX) bytes.
243 ** However, because we linearly subdivide the second-level lists, and
246 ** or (1 << (SL_INDEX_COUNT_LOG2 + 2)) bytes, as there we will be
247 ** trying to split size ranges into more slots than we have available.
248 ** Instead, we calculate the minimum threshold size, and place all
256 ** TODO: We can increase this to support larger sizes, at the expense
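Lines 242-256 set the lower bound on the first-level index: below (1 << (SL_INDEX_COUNT_LOG2 + 2)) bytes a first-level list would be split into more second-level slots than there are distinct sizes, so everything smaller shares first-level list 0. A small sketch of the derived constants, assuming typical tuning values (ALIGN_SIZE_LOG2 = 2, SL_INDEX_COUNT_LOG2 = 5, FL_INDEX_MAX = 30); the real values live in lv_tlsf.c and may differ:

#include <stdio.h>

/* Assumed tuning parameters; the real values are defined in lv_tlsf.c. */
enum {
    ALIGN_SIZE_LOG2     = 2,    /* 4-byte size granularity */
    SL_INDEX_COUNT_LOG2 = 5,    /* 32 second-level lists per first-level list */
    FL_INDEX_MAX        = 30,   /* block sizes up to 1 << 30 bytes */

    SL_INDEX_COUNT   = (1 << SL_INDEX_COUNT_LOG2),
    /* Below this size, a first-level list would be split into more slots than
    ** there are distinct sizes, so all such blocks share first-level list 0. */
    FL_INDEX_SHIFT   = (SL_INDEX_COUNT_LOG2 + ALIGN_SIZE_LOG2),
    FL_INDEX_COUNT   = (FL_INDEX_MAX - FL_INDEX_SHIFT + 1),
    SMALL_BLOCK_SIZE = (1 << FL_INDEX_SHIFT)
};

int main(void)
{
    printf("small-block threshold: %d bytes\n", SMALL_BLOCK_SIZE);  /* 128 */
    printf("first-level lists:     %d\n", FL_INDEX_COUNT);          /* 24  */
    printf("second-level lists:    %d per first-level list\n", SL_INDEX_COUNT);
    return 0;
}

With these assumed parameters the threshold works out to 128 bytes and 24 first-level lists.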
304 /* Ensure we've properly tuned our sizes. */
514 /* aligned size must not exceed block_size_max or we'll go out of bounds on sl_bitmap */ in adjust_request_size()
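Line 514 is the bound that keeps the bitmap indexing safe: an aligned request at or above block_size_max is rejected outright rather than clamped. A hedged sketch of that adjustment, with demo_ names and placeholder limits standing in for the library's internal align_up(), block_size_min and block_size_max:

#include <stddef.h>
#include <stdio.h>

/* Placeholder limits; the real ones are derived from the index constants. */
#define DEMO_ALIGN_SIZE     ((size_t)4)
#define DEMO_BLOCK_SIZE_MIN ((size_t)16)
#define DEMO_BLOCK_SIZE_MAX ((size_t)1 << 30)

static size_t demo_align_up(size_t x, size_t align)
{
    return (x + (align - 1)) & ~(align - 1);
}

/* Round the request up to the alignment, raise it to the minimum block size,
** and return 0 (failure) if the aligned size would index past the bitmaps. */
static size_t demo_adjust_request_size(size_t size, size_t align)
{
    size_t adjust = 0;
    if (size)
    {
        const size_t aligned = demo_align_up(size, align);
        if (aligned < DEMO_BLOCK_SIZE_MAX)   /* keep sl_bitmap indexing in bounds */
        {
            adjust = (aligned > DEMO_BLOCK_SIZE_MIN) ? aligned : DEMO_BLOCK_SIZE_MIN;
        }
    }
    return adjust;
}

int main(void)
{
    printf("%zu\n", demo_adjust_request_size(5, DEMO_ALIGN_SIZE));                   /* 16 */
    printf("%zu\n", demo_adjust_request_size(DEMO_BLOCK_SIZE_MAX, DEMO_ALIGN_SIZE)); /* 0, too large */
    return 0;
}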
730 /* If the next block is free, we must coalesce. */ in block_trim_used()
743 /* We want the 2nd block. */ in block_trim_free_leading()
765 ** So, we protect against that here, since this is the only callsite of mapping_search. in block_locate_free()
766 ** Note that we don't need to check sl, since it comes from a modulo operation that guarantees it's always in range. in block_locate_free()
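Lines 765-766 refer to the round-up that mapping_search performs before deriving the (fl, sl) pair: for a very large request the rounded size can map to a first-level index past the end of the bitmap, while the second-level index is masked and therefore always valid. A simplified, self-contained sketch of that mapping and the caller-side guard (demo_ names and constants are assumptions, and the mask here replaces the XOR trick used in the real code):

#include <stddef.h>
#include <stdio.h>

/* Assumed tuning constants for illustration only. */
enum {
    DEMO_SL_INDEX_COUNT_LOG2 = 5,
    DEMO_SL_INDEX_COUNT      = 1 << DEMO_SL_INDEX_COUNT_LOG2,
    DEMO_FL_INDEX_SHIFT      = DEMO_SL_INDEX_COUNT_LOG2 + 2,   /* 4-byte granularity */
    DEMO_FL_INDEX_MAX        = 30,
    DEMO_FL_INDEX_COUNT      = DEMO_FL_INDEX_MAX - DEMO_FL_INDEX_SHIFT + 1,
    DEMO_SMALL_BLOCK_SIZE    = 1 << DEMO_FL_INDEX_SHIFT
};

/* Portable find-last-set; the real code uses compiler intrinsics. */
static int demo_fls(size_t x)
{
    int bit = -1;
    while (x) { x >>= 1; ++bit; }
    return bit;
}

/* Simplified mapping_search: round the size up so every block in the chosen
** list is at least as large as the request, then derive (fl, sl).  sl is
** masked to the second-level count, so only fl can go out of range. */
static void demo_mapping_search(size_t size, int* fl, int* sl)
{
    if (size >= DEMO_SMALL_BLOCK_SIZE)
    {
        size += ((size_t)1 << (demo_fls(size) - DEMO_SL_INDEX_COUNT_LOG2)) - 1;
        *fl = demo_fls(size);
        *sl = (int)(size >> (*fl - DEMO_SL_INDEX_COUNT_LOG2)) & (DEMO_SL_INDEX_COUNT - 1);
        *fl -= (DEMO_FL_INDEX_SHIFT - 1);
    }
    else
    {
        *fl = 0;
        *sl = (int)size / (DEMO_SMALL_BLOCK_SIZE / DEMO_SL_INDEX_COUNT);
    }
}

int main(void)
{
    int fl, sl;
    demo_mapping_search((size_t)1 << 31, &fl, &sl);   /* far above the supported maximum */

    /* This is the guard the comment at lines 765-766 describes: fl may be
    ** off the end of the bitmap for huge sizes, while sl is always in range. */
    if (fl < DEMO_FL_INDEX_COUNT)
        printf("searchable: fl=%d sl=%d\n", fl, sl);
    else
        printf("request too large: fl=%d exceeds FL_INDEX_COUNT=%d\n", fl, DEMO_FL_INDEX_COUNT);

    return 0;
}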
1111 ** We must allocate an additional minimum block size bytes so that if in lv_tlsf_memalign()
1112 ** our free block will leave an alignment gap which is smaller, we can in lv_tlsf_memalign()
1113 ** trim a leading free block and release it back to the pool. We must in lv_tlsf_memalign()
1115 ** the prev_phys_block field is not valid, and we can't simply adjust in lv_tlsf_memalign()
1122 ** If alignment is less than or equal to the base alignment, we're done. in lv_tlsf_memalign()
1123 ** If we requested 0 bytes, return null, as lv_tlsf_malloc(0) does. in lv_tlsf_memalign()
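Lines 1111-1123 justify the over-allocation in lv_tlsf_memalign: the block is requested with room for the alignment plus one minimum header, so that any leading gap is either zero or large enough to be trimmed off as an independent free block. The pointer arithmetic can be shown in isolation; DEMO_GAP_MINIMUM stands in for sizeof(block_header_t), and its 16-byte value is an assumption:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumed overhead of a free-block header (stand-in for sizeof(block_header_t)). */
#define DEMO_GAP_MINIMUM ((size_t)16)

static uintptr_t demo_align_up(uintptr_t p, size_t align)
{
    return (p + (align - 1)) & ~(uintptr_t)(align - 1);
}

int main(void)
{
    const size_t request = 100;
    const size_t align   = 64;

    /* Over-allocate by align + gap_minimum so that, wherever the aligned
    ** address lands inside the block, either the gap is zero or it is large
    ** enough to be split off as a self-standing free block. */
    const size_t with_gap = request + align + DEMO_GAP_MINIMUM;
    unsigned char* raw = malloc(with_gap);
    if (!raw) return 1;

    uintptr_t start   = (uintptr_t)raw;
    uintptr_t aligned = demo_align_up(start, align);
    size_t gap = (size_t)(aligned - start);

    /* A nonzero gap smaller than a block header cannot be trimmed off, because
    ** the freed sliver would be too small to hold its own header; push the
    ** aligned pointer to the next boundary instead. */
    if (gap && gap < DEMO_GAP_MINIMUM)
    {
        aligned = demo_align_up(aligned + DEMO_GAP_MINIMUM, align);
        gap = (size_t)(aligned - start);
    }

    printf("raw=%p aligned=%p gap=%zu (fits: %d)\n",
           (void*)start, (void*)aligned, gap,
           aligned + request <= start + with_gap);

    free(raw);
    return 0;
}

Because the request already includes align + gap_minimum extra bytes, bumping to the next alignment boundary still leaves room for the payload, which is exactly the guarantee the comment is describing.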
1219 ** block, does not offer enough space, we must reallocate and copy. in lv_tlsf_realloc()
1230 /* Do we need to expand to the next block? */ in lv_tlsf_realloc()
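Lines 1219 and 1230 describe the three outcomes of lv_tlsf_realloc: move-and-copy, expand into a free neighbour, or shrink in place. A deliberately simplified decision sketch using plain sizes (header overhead and the actual block operations are omitted, so this illustrates only the branch logic):

#include <stddef.h>
#include <stdio.h>

/* cur_size:  usable size of the block being resized
** next_free: size of the physically next block if it is free, else 0
** want:      adjusted (aligned, clamped) requested size */
static const char* demo_realloc_strategy(size_t cur_size, size_t next_free, size_t want)
{
    const size_t combined = cur_size + next_free;   /* header reuse ignored for brevity */

    if (want > combined)
        return "allocate a new block, copy, free the old one";  /* not enough even when merged */
    if (want > cur_size)
        return "absorb the free next block, then trim the remainder";
    return "shrink in place (trim the tail into a free block)";
}

int main(void)
{
    printf("%s\n", demo_realloc_strategy(64, 0, 256));    /* must move */
    printf("%s\n", demo_realloc_strategy(64, 512, 256));  /* expand into next */
    printf("%s\n", demo_realloc_strategy(256, 0, 64));    /* shrink in place */
    return 0;
}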