/* ---------- To make a malloc.h, start cutting here ------------ */
A version of malloc/free/realloc written by Doug Lea and released to the
Note: This version differs from 2.6.4 only by correcting a
This is not the fastest, most space-conserving, most portable, or
while also being among the most space-conserving, portable and tunable.
Consistent balance across these factors results in a good general-purpose
allocator. For a high-level description, see
Return a pointer to a newly allocated chunk of at least n bytes, or null
Return a pointer to a chunk of size n that contains the same data
#define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
size argument of zero (re)allocates a minimum-sized chunk.
Return a pointer to a newly allocated chunk of n bytes, aligned
in accord with the alignment argument, which must be a power of
Equivalent to valloc(minimum-page-that-holds(n)), that is,
Returns a pointer to quantity * unit bytes, with all locations
Release all but pad bytes of freed top-most memory back
Returns (by copy) a struct containing various summary statistics.
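/*
  Illustrative usage sketch (not from the original sources; assumes
  only the routines summarized above):

      void* p = malloc(100);        // >= 100 usable bytes, or 0 on failure
      p = realloc(p, 200);          // first 100 bytes preserved
      void* a = memalign(64, 256);  // 256 bytes at a 64-byte boundary
      free(a);
      free(p);
      malloc_trim(0);               // return unused top memory to the system
*/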
Alignment: 8-byte
Code for 8-byte pointers is untested by me but has worked
Each malloced chunk has a hidden overhead of 4 bytes holding size
Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
8-byte ptrs: 24/32 bytes (including 4/8 overhead)
When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
needed; 4 (8) for a trailing size field
Even a request for zero bytes (i.e., malloc(0)) returns a
Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
8-byte size_t: 2^63 - 16 bytes
that `size_t' may be defined on a system as either a signed or
Requests for sizes with a negative sign bit will return a
minimum-sized chunk.
make the normal worst-case wastage 15 bytes (i.e., up to 15
1. Because requests for zero bytes allocate non-zero space,
the worst case wastage for a request of zero bytes is 24 bytes.
from a system page (the minimal mmap unit); typically 4096 bytes.
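/*
  Worked example of the overhead and rounding (illustrative; 4-byte
  size_t, 8-byte alignment): malloc(13) needs 13 + 4 bytes of overhead
  = 17, which rounds up to a 24-byte chunk with 24 - 4 = 20 usable
  bytes, i.e. 7 bytes of padding. malloc(0) gets the minimum chunk
  (16 bytes with 4-byte ptrs, 24 with 8-byte ptrs), which is where
  the 24-byte zero-request worst case above comes from.
*/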
* No user-definable hooks for callbacks and the like.
* Synopsis of compile-time options:
stand-alone embedded systems.
The implementation is in straight, hand-tuned ANSI C. Among other
consequences, it uses a lot of macros. Because of this, to be at
(for example gcc -O2) that can simplify expressions and control
Nonzero if using an ANSI-standard C compiler, a C++ compiler, or
a C compiler sufficiently close to ANSI to get away with it.
Define to enable debugging. Adds fairly extensive assertion-based
compile malloc.c several times, defining a DEFINE_* macro each
MALLOC_UNLOCK calls. MALLOC_LOCK must avoid waiting for a lock
to free(p). Otherwise, since malloc returns a unique pointer for
Define to non-zero to optionally make malloc() use mmap() to
Define to non-zero to optionally make realloc() use mremap() to
Either a constant or routine call returning the system page size.
Optionally define if you are on a system with a /usr/include/malloc.h
Define to a 32-bit type (probably `unsigned int') if you are on a
64-bit machine, yet do not want or need to allow malloc requests of
Also note that there is some odd internal name-mangling via defines
Define this if your system does not have a <unistd.h>.
MORECORE_FAILURE (default: -1)
#pragma GCC diagnostic ignored "-Wpragmas"
#pragma GCC diagnostic ignored "-Wunknown-warning-option"
#pragma GCC diagnostic ignored "-Wanalyzer-malloc-leak"
#pragma GCC diagnostic ignored "-Wanalyzer-use-of-uninitialized-value"
#pragma GCC diagnostic ignored "-Wanalyzer-out-of-bounds"
#pragma GCC diagnostic ignored "-Warray-bounds"
#pragma GCC diagnostic ignored "-Wanalyzer-null-dereference"
Compile-time options
If you compile with -DDEBUG, a number of assertion checks are
attempt to check every non-mmapped allocated and free chunk in the
into a separate .o file. You must then compile malloc.c once per
INTERNAL_SIZE_T is the word-size used for internal bookkeeping
of chunk sizes. On a 64-bit machine, you can reduce malloc
overhead by defining INTERNAL_SIZE_T to be a 32-bit `unsigned int'
2^31. This limitation is hardly ever a concern; you are encouraged
a custom INTERNAL_SIZE_T on machines/compilers without
(long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF
which is 2^32 - 1 instead of 0xFFFFFFFFFFFFFFFF. This is due to the
? -(long) (y - x) \
: (long) (x - y))
REALLOC_ZERO_BYTES_FREES should be set if a call to
realloc with zero bytes should be the same as a call to free.
returns a unique pointer for malloc(0), so does realloc(p, 0).
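/*
  Illustrative difference in behavior:

      void* p = malloc(10);
      p = realloc(p, 0);  // with REALLOC_ZERO_BYTES_FREES: frees p, yields 0
                          // without it: returns a minimum-sized chunk
*/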
/* The following macros are only invoked with (2n+1)-multiples of
INTERNAL_SIZE_T units, with a positive integer n. This is exploited
if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
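/*
  Worked example of the bookkeeping above (a Duff's-device entry into
  an 8-way unrolled loop; illustrative): for mctmp = 19 words, mcn
  becomes (19-1)/8 = 2 and mctmp becomes 19 % 8 = 3, so the switch
  enters at case 3, handles 3 words, then falls through two more full
  8-word rounds: 3 + 8 + 8 = 19. Because counts are odd (the (2n+1)
  property noted above), mctmp is never 0 after the reduction, so
  case 0 is only ever reached by the loop itself.
*/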
operating system immediately after a free().
Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
manages memory from the system in page-size units.
routine that returns a struct containing the same kind of
any SVID/XPG compliant system that has a /usr/include/malloc.h
defining struct mallinfo. (If you'd like to install such a thing
and below and save them in a malloc.h file. But there's no
(by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
#define M_TRIM_THRESHOLD -1
#define M_TOP_PAD -2
#define M_MMAP_THRESHOLD -3
#define M_MMAP_MAX -4
M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
Automatic trimming is mainly useful in long-lived programs.
system-level demands of a long-lived program down to a bare
the XF86 X server on Linux, using a trim threshold of 128K and a
mmap threshold of 192K led to near-minimal long term resource
If you are using this malloc in a long-lived program, it should
pay to experiment with these values. As a rough guide, you
might set to a value close to the average size of a process
would allow such a process to run in memory. Generally, it's
worth it to tune for trimming rather than memory mapping when a
chunks at all. And in well-behaved long-lived programs,
protection against the system-level effects of carrying around
disable trimming completely, you can set to (unsigned long)(-1);
a new malloc request, this much padding is added to the sbrk
so that the end of the arena is always a system page boundary.
often. Having even a small pad greatly reduces the likelihood
that nearly every malloc request during program start-up (or
Automatic rounding-up to page-size units is normally sufficient
to service a request. Requests of at least this size that cannot
be allocated using already-existing space will be serviced via mmap.
system. A request serviced through mmap is never reused by any
helps keep the system-level memory demands of a long-lived
limitations. Generally, servicing a request via normal
malloc steps is faster than going through a system's mmap.
1. Some systems have a limited number of internal tables for
3. If a program allocates many large regions, it is probably
better off using normal sbrk-based allocation routines that
can reclaim and reallocate normal heap memory. Using a
the default value is 0, and attempts to set it to non-zero values
don't run at the same time, which could be disastrous. A single
#define MORECORE_FAILURE -1
/* ---------- To make a malloc.h, end cutting here ------------ */
struct malloc_chunk* fd; /* double links -- used only if free. */
Chunks of memory are maintained using a `boundary tag' method as
Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
thus double-word aligned.

Free chunks are stored in circular doubly-linked lists, and look like this:
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The P (PREV_INUSE) bit, stored in the unused low-order bit of the
chunk size (which is always a multiple of two words), is an in-use
preventing access to non-existent (or non-owned) memory.)
2. Chunks allocated via mmap, which have the second-lowest-order
proportionally (log) spaced. There are a lot of these bins
the same-sized chunks, but facilitates best-fit allocation for
* `top': The top-most available chunk (i.e., the one bordering the
* `last_remainder': A bin holding only the remainder of the
most recently split (non-top) chunk. This bin is checked
before other non-fitting chunks, so as to provide better
If supported, requests greater than a threshold are usually
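/*
  Illustrative navigation sketch using the macros defined below: the
  size field alone is enough to walk forward, and prev_size (valid
  only when the preceding chunk is free) to walk backward:

      mchunkptr q = next_chunk(p);   // chunk after p
      mchunkptr r;
      if (!prev_inuse(p))
        r = prev_chunk(p);           // chunk before p; p->prev_size is valid
*/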
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
/* pad request bytes into a usable size */
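/*
  Equivalent (illustrative) function form of the conversion the macro
  performs -- add SIZE_SZ bytes for the size field, round up to a
  multiple of MALLOC_ALIGNMENT, and never return less than MINSIZE
  (the real macro also guards against overflow of huge requests):

      size_t nb = req + SIZE_SZ + MALLOC_ALIGN_MASK;
      nb = (nb < MINSIZE + MALLOC_ALIGN_MASK) ? MINSIZE
                                              : nb & ~MALLOC_ALIGN_MASK;

  e.g. with SIZE_SZ == 4: request 13 -> 17 -> 24; request 0 -> MINSIZE.
*/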
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
/* Treat space at ptr + offset as a chunk */
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
#define prev_inuse(p) ((p)->size & PREV_INUSE)
#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
(((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
(((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
(((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
#define chunksize(p) ((p)->size & ~(SIZE_BITS))
#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
#define set_head(p, s) ((p)->size = (s))
#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
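/*
  Illustrative use of the head/foot macros when splitting a chunk
  `victim' into an allocated piece of nb bytes plus a free remainder
  (this mirrors the pattern used in mALLOc below):

      remainder = chunk_at_offset(victim, nb);
      set_head(victim, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_foot(remainder, remainder_size);  // so the following chunk
                                            // can find the free remainder
*/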
heads of (initially empty) doubly-linked lists of chunks, laid out
in a way so that each pair can be treated as if it were in a
There is actually a little bit of slop in the numbers in bin_index
#define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
#define top (bin_at(0)->fd) /* The topmost chunk */
/* field-extraction macros */
#define first(b) ((b)->fd)
#define last(b) ((b)->bk)
#define MAX_SMALLBIN (MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1
#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
To help compensate for the large number of bins, a one-level index
structure is used for bin-by-bin searching. `binblocks' is a
one-word bitvector recording whether groups of BINBLOCKWIDTH bins
have any (possibly) non-empty bins, so they can be skipped over
cleared as soon as all bins in a block are empty, but instead only
#define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
/* bin<->block macros */
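/*
  Illustrative (assuming BINBLOCKWIDTH == 4): bin index 37 belongs to
  block 37/4 == 9, so marking it sets bit 9 of the bitvector:

      binblocks |= (1U << (37 / BINBLOCKWIDTH));

  The big-bin scan in mALLOc can then skip whole blocks whose bit is
  clear without touching any of their bins.
*/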
STATIC char* sbrk_base = (char*)(-1);
These routines make a number of assertions about the states
are not true, it's very likely that a user program has somehow
trashed memory. (It's also possible that there is a coding error
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
/* Unless a special marker, must have OK fields */
assert(next->prev_size == sz);
assert(p->fd->bk == p);
assert(p->bk->fd == p);
INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
Macro-based internal utilities
FD = BK->fd; \
P->bk = BK; \
P->fd = FD; \
FD->bk = BK->fd = P; \
FD = BK->fd; \
while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
BK = FD->bk; \
P->bk = BK; \
P->fd = FD; \
FD->bk = BK->fd = P; \
/* take a chunk off a list */
BK = P->bk; \
FD = P->fd; \
FD->bk = BK; \
BK->fd = FD; \
last_remainder->fd = last_remainder->bk = P; \
P->fd = P->bk = last_remainder; \
(last_remainder->fd = last_remainder->bk = last_remainder)
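/*
  Note (illustrative): frontlink keeps the sorted bins in decreasing
  size order -- its while loop walks forward past chunks larger than S
  and splices P in before the first one that is not. unlink is its
  O(1) inverse: given neighbors A <-> P <-> B in the circular list,

      BK = P->bk;    // A
      FD = P->fd;    // B
      FD->bk = BK;   // B points back to A
      BK->fd = FD;   // A points forward to B

  leaves A <-> B with P spliced out.
*/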
size_t page_mask = malloc_getpagesize - 1;
static int fd = -1;
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
if(p == (mchunkptr)-1) return 0;
/* We demand that eight bytes into a page must be 8-byte aligned. */
p->prev_size = 0;
assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
n_mmaps--;
mmapped_mem -= (size + p->prev_size);
ret = munmap((char *)p - p->prev_size, size + p->prev_size);
/* munmap returns non-zero on failure */
size_t page_mask = malloc_getpagesize - 1;
INTERNAL_SIZE_T offset = p->prev_size;
assert(((size + offset) & (malloc_getpagesize-1)) == 0);
cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
if (cp == (char *)-1) return 0;
assert((p->prev_size == offset));
set_head(p, (new_size - offset)|IS_MMAPPED);
mmapped_mem -= size + offset;
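/*
  Illustrative page arithmetic for the mmap path: lengths handed to
  mmap/munmap must be page multiples, so requests are rounded up via

      size = (size + SIZE_SZ + page_mask) & ~page_mask;

  e.g. with 4096-byte pages (page_mask == 4095), a 10000-byte need
  plus 4 bytes of overhead becomes 12288 bytes: three whole pages.
*/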
Extend the top-most chunk by obtaining memory from system.
/* Otherwise, we need to correct to a page size below anyway. */
if (sbrk_base != (char*)(-1))
sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
/* Fail if sbrk failed or if a foreign sbrk call killed our space */
&& ((uintptr_t)old_end & (pagesz - 1)) == 0)
if (sbrk_base == (char*)(-1)) /* First time through. Record base */
sbrked_mem += brk - (char*)old_end;
correction = (MALLOC_ALIGNMENT) - front_misalign;
/* Guarantee the next brk will be at a page boundary */
correction += pagesz - ((uintptr_t)(brk + sbrk_size) & (pagesz - 1));
correction &= (pagesz - 1);
new_brk -= (MALLOC_ALIGNMENT) - front_misalign;
top_size = new_brk - brk + correction;
/* A double fencepost is necessary to prevent consolidation */
/* Also keep size a multiple of MALLOC_ALIGNMENT */
old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
chunk_at_offset(old_top, old_top_size )->size =
chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
/* We always land on a page boundary */
assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0
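/*
  Worked example of the correction above (illustrative numbers,
  MALLOC_ALIGNMENT == 8, pagesz == 4096): if a foreign sbrk left the
  new space misaligned by 4 bytes, front_misalign == 4 and correction
  starts at 8 - 4 = 4; the second term then pads the request so the
  break after this extension falls exactly on a page boundary,
  keeping top aligned and page-trimmable (the assert above checks
  this invariant).
*/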
The requested size is first converted into a usable form, `nb'.
obtain 8-byte alignment and/or to obtain a size of at least
a chunk of exactly the right size is found, it is taken.
enough. This is a form of (roving) first fit, used only in
whenever possible. This limited use of a first-fit style
3. Other bins are scanned in increasing size order, using a
any remainder. This search is strictly by best-fit; i.e.,
the best-fit search rule. In effect, `top' is treated as
allocated mmapped regions, and a call to mmap succeeds,
Memory is gathered from the system (in system page-sized
units) in a way that allows chunks obtained across different
chunk borders either a previously allocated and still in-use chunk,
mchunkptr remainder; /* remainder from a split */
int startidx; /* first bin of a traversed block */
/* Check for exact match in a bin */
/* Also scan the next one, since it would have a remainder < MINSIZE */
for (victim = last(bin); victim != bin; victim = victim->bk)
--idx; /* adjust to rescan below after checking last remainder */
/* Try to use the last split-off remainder */
if ( (victim = last_remainder->fd) != last_remainder)
if (remainder_size >= (long)MINSIZE) /* re-split */
If there are any possibly nonempty big-enough blocks,
idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
for (victim = last(bin); victim != bin; victim = victim->bk)
} while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
do /* Possibly backtrack to try to clear a partial block */
if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
--startidx;
/* Require that there be a remainder, ensuring top always exists */
3. If a returned chunk borders the current high end of memory,
hd = p->size;
prevsz = p->prev_size;
p = chunk_at_offset(p, -prevsz);
prevsz = p->prev_size;
p = chunk_at_offset(p, -prevsz);
if (p->fd == last_remainder) /* keep as last_remainder */
if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
#pragma GCC diagnostic ignored "-Wmissing-attributes"
chunk can be extended, it is, else a malloc-copy-free sequence is
taken. There are several different ways that a chunk could be
Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
size argument of zero (re)allocates a minimum-sized chunk.
a `small' (<512 bytes) size, then the newly unused space is lopped
The old unix realloc convention of allowing the last-free'd chunk
if(oldsize - SIZE_SZ >= nb)
MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
/* Forward into top only if a remainder */
set_head(top, (newsize - nb) | PREV_INUSE);
/* try forward + backward first to save a later consolidation */
MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
set_head(top, (newsize - nb) | PREV_INUSE);
MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
memalign requests more than enough space from malloc, finds a spot
The alignment argument must be a power of two. This property is not
8-byte alignment is guaranteed by normal malloc calls, so don't
Overreliance on memalign is a sure way to fragment space.
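/*
  Sketch of the strategy (illustrative): for nb bytes at alignment
  `alignment', mEMALIGn asks malloc for nb + alignment + MINSIZE
  bytes, rounds the returned address up to the next aligned spot
  (the `& -alignment' computation below), gives the leading space
  back as an ordinary free chunk, and likewise trims any excess
  trailer. The extra MINSIZE guarantees that the leader it gives
  back is always a legal chunk.
*/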
/* Otherwise, ensure that it is at least a minimum chunk size */
if (nb > __SIZE_MAX__ - (alignment + MINSIZE) || nb < bytes)
Since we need to give back leading space in a chunk of at
a spot with less than MINSIZE leader, we can move to the
next aligned spot -- we've allocated enough total room so that
brk = (char*)mem2chunk(((uintptr_t)(m + alignment - 1)) & -alignment);
if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
leadsize = brk - (char*)(p);
newsize = chunksize(p) - leadsize;
newp->prev_size = p->prev_size + leadsize;
if (bytes > __SIZE_MAX__ - pagesize)
return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
/* clear only the bytes from non-freshly-sbrked memory */
MALLOC_ZERO(mem, csz - SIZE_SZ);
memory to potentially reduce the system-level memory requirements
of a program. However, it cannot guarantee to reduce memory. Under
structures will be left (one page or less). Non-zero arguments
future expected allocations without having to re-obtain memory
long top_size; /* Amount of top-most memory */
char* current_brk; /* address returned by pre-check sbrk call */
extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
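/*
  Worked example (illustrative): pagesz == 4096, pad == 0,
  MINSIZE == 16, top_size == 10000:

      extra = ((10000 - 0 - 16 + 4095) / 4096 - 1) * 4096
            = (3 - 1) * 4096 = 8192

  so two whole pages are released and 10000 - 8192 = 1808 bytes of
  top remain; the trailing `- 1' is what keeps the pad and a minimal
  top chunk alive.
*/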
new_brk = (char*)(MORECORE (-extra));
top_size = current_brk - (char*)top;
sbrked_mem = current_brk - sbrk_base;
set_head(top, (top_size - extra) | PREV_INUSE);
sbrked_mem -= extra;
overwriting other allocated objects. Not a particularly great
return chunksize(p) - SIZE_SZ;
return chunksize(p) - 2*SIZE_SZ;
for (p = last(b); p != b; p = p->bk)
current_mallinfo.uordblks = sbrked_mem - avail;
mallinfo returns a copy of updated current mallinfo.
The format is to provide a (parameter-number, parameter-value) pair.
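/*
  Illustrative mallopt usage with the symbolic parameter numbers
  defined earlier (mallopt returns 1 on success, 0 for an
  unrecognized parameter number):

      mallopt(M_TRIM_THRESHOLD, 128*1024);
      mallopt(M_MMAP_MAX, 0);    // disallow mmap-backed requests
*/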
/* Return EINVAL if align isn't a power of 2 or isn't a multiple of the pointer size */
if ((align & (align-1)) != 0 || align % sizeof(void *) != 0 || align == 0)
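/*
  Illustrative: (align & (align-1)) == 0 exactly when align is a power
  of two, e.g. 16 & 15 == 0 but 24 & 23 == 16 != 0; the explicit
  `align == 0' test catches the one false positive of that bit trick,
  since 0 & (0-1) is also 0.
*/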
* Fixed ordering problem with boundary-stamping
Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
* Use ordered bins instead of best-fit threshold
* Eliminate block-local decls to simplify tracing and debugging.
* Fix error occurring when initial sbrk_base not word-aligned.
courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
* Re-tuned and fixed to behave more nicely with V2.6.0 changes.
* Use best fit for very large chunks to prevent some worst-cases.
(wmglo@Dent.MED.Uni-Muenchen.DE).
* malloc: swap order of clean-bin strategy;
* Use bin counts as a guide to preallocation
* Add a few optimizations from colin@nyx10.cs.du.edu
* Add stuff to allow compilation on non-ANSI compilers
* misc cosmetics and a bit more internal documentation
* tested on sparc, hp-700, dec-mips, rs6000
* Based loosely on libg++-1.2X malloc. (It retains some of the overall