/*
Copyright (c) 1990 Regents of the University of California.
All rights reserved.
*/
#ifdef MALLOC_PROVIDED
int _dummy_mallocr = 1;
#else
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.5  Wed Jun 17 15:55:16 1998  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
          ftp://g.oswego.edu/pub/misc/malloc.c
        Check before installing!

  Note: This version differs from 2.6.4 only by correcting a
        statement ordering error that could cause failures only
        when calls to this malloc are interposed with calls to
        other memory allocators.

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, with no effect if p
     is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.
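
  As a brief illustration of the core routines (a hypothetical caller,
  not part of this file); note that realloc may move the block and
  that free of a null pointer is a no-op:

     char* p = malloc(64);          // chunk with at least 64 usable bytes
     if (p) {
       char* q = realloc(p, 128);   // may return a different pointer
       if (q) p = q;                // on failure (q == 0), p is untouched
     }
     free(p);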

* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has worked
       reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t representation:        4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  SEPARATE_OBJECTS         (default: NOT defined)
     Define this to compile into separate .o files.  You must then
     compile malloc.c several times, defining a DEFINE_* macro each
     time.  The list of DEFINE_* macros appears below.
  MALLOC_LOCK              (default: NOT defined)
  MALLOC_UNLOCK            (default: NOT defined)
     Define these to C expressions which are run to lock and unlock
     the malloc data structures.  Calls may be nested; that is,
     MALLOC_LOCK may be called more than once before the corresponding
     MALLOC_UNLOCK calls.  MALLOC_LOCK must avoid waiting for a lock
     that it already holds.
  MALLOC_ALIGNMENT         (default: NOT defined)
     Define this to 16 if you need 16 byte alignment instead of 8 byte
     alignment which is the normal default.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY              (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize       (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T          (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB     (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  _LIBC                    (default: NOT defined)
     Defined only when compiled as part of the Cygnus newlib
     distribution.
  WIN32                    (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H           (default: undefined)
     Define this if your system does not have a <unistd.h>.
  MORECORE                 (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE         (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS          (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.


*/
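
/*
  For example, a build that serializes the allocator with a mutex of
  its own might define the lock/unlock hooks and a couple of tunables
  as below.  This is only a sketch of how the compile-time options fit
  together; my_lock/my_unlock are placeholder names, not part of this
  file:

    extern void my_lock(void);       // must tolerate nested acquisition
    extern void my_unlock(void);
    #define MALLOC_LOCK    my_lock()
    #define MALLOC_UNLOCK  my_unlock()
    #define DEFAULT_TRIM_THRESHOLD (256L * 1024L)
    #define DEFAULT_TOP_PAD        (64 * 1024)
*/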




/* Preliminaries */

#ifdef __GNUC__
#pragma GCC diagnostic ignored "-Wpragmas"
#pragma GCC diagnostic ignored "-Wunknown-warning-option"
#pragma GCC diagnostic ignored "-Wanalyzer-malloc-leak"
#pragma GCC diagnostic ignored "-Wanalyzer-use-of-uninitialized-value"
#endif

#define _DEFAULT_SOURCE
#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if __STD_C
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */
#include <limits.h>   /* needed for overflow checks */
#include <errno.h>    /* needed to set errno to ENOMEM */

#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif

/*
  Compile-time options
*/


/*

  Special defines for Cygnus newlib distribution.

*/

#ifdef _LIBC

#include <sys/config.h>
#include <sys/lock.h>

#define POINTER_UINT unsigned _POINTER_INT
#define SEPARATE_OBJECTS
#define HAVE_MMAP 0
#define MORECORE(size) sbrk((size))
#define MORECORE_CLEARS 0
#define MALLOC_LOCK __LIBC_LOCK()
#define MALLOC_UNLOCK __LIBC_UNLOCK()

#ifdef __CYGWIN__
# undef _WIN32
# undef WIN32
#endif

#ifndef _WIN32
#ifdef SMALL_MEMORY
#define malloc_getpagesize (128)
#else
#define malloc_getpagesize (4096)
#endif
#endif

#if __STD_C
extern void __malloc_lock(void);
extern void __malloc_unlock(void);
#else
extern void __malloc_lock();
extern void __malloc_unlock();
#endif

#else /* ! _LIBC */

#define POINTER_UINT unsigned long

#endif /* ! _LIBC */

/*
  Debugging:

  Because freed chunks may be overwritten with link fields, this
  malloc will often die when freed memory is overwritten by user
  programs.  This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DDEBUG, a number of assertion checks are
  enabled that will catch more memory errors. You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory.  The
  checking is fairly extensive, and will slow down execution
  noticeably. Calling malloc_stats or mallinfo with DEBUG set will
  attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting DEBUG may also be helpful if you are trying to modify
  this code. The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.

*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  SEPARATE_OBJECTS should be defined if you want each function to go
  into a separate .o file.  You must then compile malloc.c once per
  function, defining the appropriate DEFINE_ macro.  See below for the
  list of macros.
*/

#ifndef SEPARATE_OBJECTS
#define DEFINE_MALLOC
#define DEFINE_FREE
#define DEFINE_REALLOC
#define DEFINE_CALLOC
#define DEFINE_CFREE
#define DEFINE_MEMALIGN
#define DEFINE_VALLOC
#define DEFINE_PVALLOC
#define DEFINE_MALLINFO
#define DEFINE_MALLOC_STATS
#define DEFINE_MALLOC_USABLE_SIZE
#define DEFINE_MALLOPT

#define STATIC static
#else
#define STATIC
#endif

/*
   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
   of chunk sizes. On a 64-bit machine, you can reduce malloc
   overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
   at the expense of not being able to handle requests greater than
   2^31. This limitation is hardly ever a concern; you are encouraged
   to set this. However, the default version is the same as size_t.
   Since the implementation relies on __builtin_mul_overflow, defining
   a custom INTERNAL_SIZE_T on machines/compilers without
   __builtin_mul_overflow is not permitted.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#elif !defined(_HAVE_BUILTIN_MUL_OVERFLOW)
#error Compiler does not support __builtin_mul_overflow, hence INTERNAL_SIZE_T cannot be set
#endif

/*
  The following is needed on implementations where long > size_t.
  The problem arises because the code performs subtractions of
  size_t values and stores the result in long values.  In the case
  where long > size_t and the first value is actually less than
  the second value, the result is incorrectly positive.  For example,
  (long)(x - y) where x = 0 and y = 1 ends up being 0x00000000FFFFFFFF,
  which is 2^32 - 1, instead of 0xFFFFFFFFFFFFFFFF.  This is due to the
  fact that assignment from unsigned to signed won't sign extend.
*/

#define long_sub_size_t(x, y)                           \
  (sizeof (long) > sizeof (INTERNAL_SIZE_T) && x < y    \
   ? -(long) (y - x)                                    \
   : (long) (x - y))

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/* #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc.  Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

/* Although the original macro is called USE_MEMCPY, newlib actually
   uses memmove to handle cases where a platform's memcpy implementation
   copies backwards and thus destructive overlap may occur in realloc,
   where we are reclaiming free memory prior to the old allocation. */
#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
void* memmove(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
Void_t* memmove();
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                          \
do {                                                                        \
  INTERNAL_SIZE_T mzsz = (nbytes);                                          \
  if(mzsz <= 9*sizeof(mzsz)) {                                              \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                       \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                             \
                                     *mz++ = 0;                             \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                             \
                                     *mz++ = 0;                             \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                             \
                                     *mz++ = 0; }}}                         \
                                     *mz++ = 0;                             \
                                     *mz++ = 0;                             \
                                     *mz   = 0;                             \
  } else memset((charp), 0, mzsz);                                          \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                        \
do {                                                                        \
  INTERNAL_SIZE_T mcsz = (nbytes);                                          \
  if(mcsz <= 9*sizeof(mcsz)) {                                              \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                      \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                     \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                   \
                                     *mcdst++ = *mcsrc++;                   \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                   \
                                     *mcdst++ = *mcsrc++;                   \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                   \
                                     *mcdst++ = *mcsrc++; }}}               \
                                     *mcdst++ = *mcsrc++;                   \
                                     *mcdst++ = *mcsrc++;                   \
                                     *mcdst   = *mcsrc  ;                   \
  } else memmove(dest, src, mcsz);                                          \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                          \
do {                                                                        \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                         \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                       \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mzp++ = 0;                                           \
    case 7:           *mzp++ = 0;                                           \
    case 6:           *mzp++ = 0;                                           \
    case 5:           *mzp++ = 0;                                           \
    case 4:           *mzp++ = 0;                                           \
    case 3:           *mzp++ = 0;                                           \
    case 2:           *mzp++ = 0;                                           \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }              \
  }                                                                         \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                        \
do {                                                                        \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                          \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                         \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                       \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                  \
    case 7:           *mcdst++ = *mcsrc++;                                  \
    case 6:           *mcdst++ = *mcsrc++;                                  \
    case 5:           *mcdst++ = *mcsrc++;                                  \
    case 4:           *mcdst++ = *mcsrc++;                                  \
    case 3:           *mcdst++ = *mcsrc++;                                  \
    case 2:           *mcdst++ = *mcsrc++;                                  \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }     \
  }                                                                         \
} while(0)

#endif


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      include <sys/param.h>
#      ifdef EXEC_PAGESIZE
#        define malloc_getpagesize EXEC_PAGESIZE
#      else
#        ifdef NBPG
#          ifndef CLSIZE
#            define malloc_getpagesize NBPG
#          else
#            define malloc_getpagesize (NBPG * CLSIZE)
#          endif
#        else
#          ifdef NBPC
#            define malloc_getpagesize NBPC
#          else
#            ifdef PAGESIZE
#              define malloc_getpagesize PAGESIZE
#            else
#              define malloc_getpagesize (4096) /* just guess */
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif



/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif
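
/*
  A minimal illustration of reading these statistics (a hypothetical
  caller, not part of this file):

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "arena = %d, in use = %d, free = %d\n",
            mi.arena, mi.uordblks, mi.fordblks);
*/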

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4
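
/*
  For example, a program could tune these at runtime (a sketch of a
  hypothetical caller, not part of this file):

    if (!mallopt(M_TRIM_THRESHOLD, 64 * 1024))   // 1 on success, 0 on failure
      fprintf(stderr, "mallopt failed\n");
    mallopt(M_TOP_PAD, 32 * 1024);
*/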



#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set it to (unsigned long)(-1).


*/
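
/*
  As an illustration, a long-lived program that has just freed a large
  burst of memory could release it immediately instead of waiting for
  the threshold to trigger (a hypothetical caller, not part of this
  file):

    if (malloc_trim(0))   // keep 0 pad bytes; returns 1 if memory was released
      fprintf(stderr, "returned unused top memory to the system\n");
*/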


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements.
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      All together, these considerations should lead you to use mmap
      only for relatively large requests.


*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/




/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#ifndef _LIBC
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */
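
/*
  For stand-alone embedded use (mentioned above), MORECORE can be
  pointed at a trivial sbrk-like routine over a static arena.  A
  minimal sketch, assuming a single-threaded target; the names below
  are placeholders, not part of this file:

    static char  my_heap[64 * 1024];
    static char* my_brk = my_heap;

    static void* my_sbrk(ptrdiff_t incr)
    {
      char* prev = my_brk;
      if (my_brk + incr < my_heap ||
          my_brk + incr > my_heap + sizeof(my_heap))
        return (void*)(-1);        // matches MORECORE_FAILURE == -1
      my_brk += incr;
      return prev;                 // old break, as sbrk does
    }

    #define MORECORE        my_sbrk
    #define MORECORE_CLEARS 0      // a static arena is zeroed only once

  This malloc tolerates other callers moving the break, but a simple
  monotonic arena like the above is the easiest configuration.
*/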

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc          __libc_calloc
#define fREe            __libc_free
#define mALLOc          __libc_malloc
#define mEMALIGn        __libc_memalign
#define rEALLOc         __libc_realloc
#define vALLOc          __libc_valloc
#define pvALLOc         __libc_pvalloc
#define mALLINFo        __libc_mallinfo
#define mALLOPt         __libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#define cALLOc          calloc
#define fREe            free
#define mALLOc          malloc
#define mEMALIGn        memalign
#define rEALLOc         realloc
#define vALLOc          valloc
#define pvALLOc         pvalloc
#define mALLINFo        mallinfo
#define mALLOPt         mallopt

#ifdef _LIBC

#define malloc_stats                    malloc_stats
#define malloc_trim                     malloc_trim
#define malloc_usable_size              malloc_usable_size

#define malloc_update_mallinfo          __malloc_update_mallinfo

#define malloc_av_                      __malloc_av_
#define malloc_current_mallinfo         __malloc_current_mallinfo
#define malloc_max_sbrked_mem           __malloc_max_sbrked_mem
#define malloc_max_total_mem            __malloc_max_total_mem
#define malloc_sbrk_base                __malloc_sbrk_base
#define malloc_top_pad                  __malloc_top_pad
#define malloc_trim_threshold           __malloc_trim_threshold

#endif /* _LIBC */
#endif

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats(void);
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif

/* Work around compiler optimizing away stores to 'size' field before
 * call to free.
 */
#ifdef _HAVE_ALIAS_ATTRIBUTE
extern __typeof(free) __malloc_free;
#else
#define __malloc_free(x) fREe(x)
#endif

#ifdef __cplusplus
};  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */


/*
  Emulation of sbrk for WIN32.
  All code within this ifdef WIN32 block is untested by me.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
  GmListElement* next;
  void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
  GmListElement* this;
  this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
  ASSERT (this);
  if (this)
    {
      this->base = bas;
      this->next = head;
      head = this;
    }
  return this;
}

void gcleanup (void)
{
  BOOL rval;
  ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
  if (gAddressBase && (gNextAddress - gAddressBase))
    {
      rval = VirtualFree ((void*)gAddressBase,
                          gNextAddress - gAddressBase,
                          MEM_DECOMMIT);
      ASSERT (rval);
    }
  while (head)
    {
      GmListElement* next = head->next;
      rval = VirtualFree (head->base, 0, MEM_RELEASE);
      ASSERT (rval);
      LocalFree (head);
      head = next;
    }
}

static
void* findRegion (void* start_address, unsigned long size)
{
  MEMORY_BASIC_INFORMATION info;
  while ((unsigned long)start_address < TOP_MEMORY)
    {
      VirtualQuery (start_address, &info, sizeof (info));
      if (info.State != MEM_FREE)
        start_address = (char*)info.BaseAddress + info.RegionSize;
      else if (info.RegionSize >= size)
        return start_address;
      else
        start_address = (char*)info.BaseAddress + info.RegionSize;
    }
  return NULL;
}


void* wsbrk (long size)
{
  void* tmp;
  if (size > 0)
    {
      if (gAddressBase == 0)
        {
          gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
          gNextAddress = gAddressBase =
            (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
                                        MEM_RESERVE, PAGE_NOACCESS);
        } else if (AlignPage (gNextAddress + size) > (gAddressBase +
                                                      gAllocatedSize))
        {
          long new_size = max (NEXT_SIZE, AlignPage (size));
          void* new_address = (void*)(gAddressBase+gAllocatedSize);
          do
            {
              new_address = findRegion (new_address, new_size);

              if (new_address == 0)
                return (void*)-1;

              gAddressBase = gNextAddress =
                (unsigned int)VirtualAlloc (new_address, new_size,
                                            MEM_RESERVE, PAGE_NOACCESS);
              // repeat in case of race condition
              // The region that we found has been snagged
              // by another thread
            }
          while (gAddressBase == 0);

          ASSERT (new_address == (void*)gAddressBase);

          gAllocatedSize = new_size;

          if (!makeGmListElement ((void*)gAddressBase))
            return (void*)-1;
        }
      if ((size + gNextAddress) > AlignPage (gNextAddress))
        {
          void* res;
          res = VirtualAlloc ((void*)AlignPage (gNextAddress),
                              (size + gNextAddress -
                               AlignPage (gNextAddress)),
                              MEM_COMMIT, PAGE_READWRITE);
          if (res == 0)
            return (void*)-1;
        }
      tmp = (void*)gNextAddress;
      gNextAddress = (unsigned int)tmp + size;
      return tmp;
    }
  else if (size < 0)
    {
      unsigned int alignedGoal = AlignPage (gNextAddress + size);
      /* Trim by releasing the virtual memory */
      if (alignedGoal >= gAddressBase)
        {
          VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
                       MEM_DECOMMIT);
          gNextAddress = gNextAddress + size;
          return (void*)gNextAddress;
        }
      else
        {
          VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
                       MEM_DECOMMIT);
          gNextAddress = gAddressBase;
          return (void*)-1;
        }
    }
  else
    {
      return (void*)gNextAddress;
    }
}

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
        trailing size field since there is no
        next contiguous chunk that would have to index off it. (After
        initialization, `top' is forced to always exist.  If it would
        become less than MINSIZE bytes long, it is replenished via
        malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        never merged or traversed from any other chunk, they have no
        foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    * Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/
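
/*
  To make the boundary-tag layout concrete, here is how a user pointer
  maps back onto its chunk using the macros defined below (an
  illustrative fragment, not part of this file):

    Void_t*   mem = mALLOc(20);
    mchunkptr p   = mem2chunk(mem);   // step back 2*SIZE_SZ bytes
    size_t    sz  = chunksize(p);     // size with use bits masked off
    mchunkptr nxt = next_chunk(p);    // adjacent chunk (non-mmapped case)

  nxt->prev_size is only meaningful while p is free, since the foot of
  an in-use chunk may be overlaid by the user data of p itself.
*/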





/* sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGN           8
#define MALLOC_ALIGNMENT       (SIZE_SZ < 4 ? 8 : (SIZE_SZ + SIZE_SZ))
#else
#define MALLOC_ALIGN           MALLOC_ALIGNMENT
#endif
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((unsigned long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (unsigned long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
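
/*
  For example, with 4-byte pointers and 4-byte INTERNAL_SIZE_T
  (SIZE_SZ == 4, alignment mask 7, MINSIZE == 16):
  request2size(20) = (20 + 4 + 7) & ~7 = 24, i.e. 20 bytes of user data
  plus 4 bytes of overhead, rounded up to a multiple of 8; and
  request2size(0) = 16, the minimum chunk size, since 0 + 4 + 7 falls
  below MINSIZE + 7.
*/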

/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))




/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))





/*
   Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#ifdef SEPARATE_OBJECTS
#define av_ malloc_av_
#endif

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (bin_at(0)->fd)   /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to in malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

#ifdef DEFINE_MALLOC
STATIC mbinptr av_[NAV * 2 + 2] = {
 0, 0,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};
#else
extern mbinptr av_[NAV * 2 + 2];
#endif



/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                          \
(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
                                          126)
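
/*
  For example, bin_index(24) = 24 >> 3 = 3 (a small bin holding only
  24-byte chunks), while bin_index(1024) falls in the second range:
  56 + (1024 >> 6) = 72.
*/
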
/*
  bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/

#define MAX_SMALLBIN_SIZE   512
#define SMALLBIN_WIDTH        8
#define SMALLBIN_WIDTH_BITS   3
#define MAX_SMALLBIN        ((MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1)

#define smallbin_index(sz)  (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)

/*
   Requests are `small' if both the corresponding and the next bin are small
*/

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)



/*
    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching.  `binblocks' is a
    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
    have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals. The bits are NOT always
    cleared as soon as all bins in a block are empty, but instead only
    when all are noticed to be empty during traversal in malloc.
*/

#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */

/* bin<->block macros */

#define idx2binblock(ix)    ((unsigned long)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
#define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
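
/*
  For example, bin index 72 lives in block 72 / 4 = 18, so
  idx2binblock(72) = 1UL << 18; mark_binblock(72) sets that bit to
  record that some bin among bins 72..75 may be non-empty.
*/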
1651
1652
1653
1654
1655
1656 /* Other static bookkeeping data */
1657
1658 #ifdef SEPARATE_OBJECTS
1659 #define trim_threshold malloc_trim_threshold
1660 #define top_pad malloc_top_pad
1661 #define n_mmaps_max malloc_n_mmaps_max
1662 #define mmap_threshold malloc_mmap_threshold
1663 #define sbrk_base malloc_sbrk_base
1664 #define max_sbrked_mem malloc_max_sbrked_mem
1665 #define max_total_mem malloc_max_total_mem
1666 #define current_mallinfo malloc_current_mallinfo
1667 #define n_mmaps malloc_n_mmaps
1668 #define max_n_mmaps malloc_max_n_mmaps
1669 #define mmapped_mem malloc_mmapped_mem
1670 #define max_mmapped_mem malloc_max_mmapped_mem
1671 #endif
1672
1673 /* variables holding tunable values */
1674
1675 #ifdef DEFINE_MALLOC
1676
1677 STATIC unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1678 STATIC unsigned long top_pad = DEFAULT_TOP_PAD;
1679 #if HAVE_MMAP
1680 STATIC unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1681 STATIC unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1682 #endif
1683
1684 /* The first value returned from sbrk */
1685 STATIC char* sbrk_base = (char*)(-1);
1686
1687 /* The maximum memory obtained from system via sbrk */
1688 STATIC unsigned long max_sbrked_mem = 0;
1689
1690 /* The maximum via either sbrk or mmap */
1691 STATIC unsigned long max_total_mem = 0;
1692
1693 /* internal working copy of mallinfo */
1694 STATIC struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1695
1696 #if HAVE_MMAP
1697
1698 /* Tracking mmaps */
1699
1700 STATIC unsigned int n_mmaps = 0;
1701 STATIC unsigned int max_n_mmaps = 0;
1702 STATIC unsigned long mmapped_mem = 0;
1703 STATIC unsigned long max_mmapped_mem = 0;
1704
1705 #endif
1706
1707 #else /* ! DEFINE_MALLOC */
1708
1709 extern unsigned long trim_threshold;
1710 extern unsigned long top_pad;
1711 #if HAVE_MMAP
1712 extern unsigned int n_mmaps_max;
1713 extern unsigned long mmap_threshold;
1714 #endif
1715 extern char* sbrk_base;
1716 extern unsigned long max_sbrked_mem;
1717 extern unsigned long max_total_mem;
1718 extern struct mallinfo current_mallinfo;
1719 #if HAVE_MMAP
1720 extern unsigned int n_mmaps;
1721 extern unsigned int max_n_mmaps;
1722 extern unsigned long mmapped_mem;
1723 extern unsigned long max_mmapped_mem;
1724 #endif
1725
1726 #endif /* ! DEFINE_MALLOC */
1727
1728 /* The total memory obtained from system via sbrk */
1729 #define sbrked_mem (current_mallinfo.arena)
1730
1731
1732
1733 /*
1734 Debugging support
1735 */
1736
1737 #if DEBUG
1738
1739
1740 /*
1741 These routines make a number of assertions about the states
1742 of data structures that should be true at all times. If any
1743 are not true, it's very likely that a user program has somehow
1744 trashed memory. (It's also possible that there is a coding error
1745 in malloc, in which case please report it!)
1746 */
1747
1748 #if __STD_C
1749 static void do_check_chunk(mchunkptr p)
1750 #else
1751 static void do_check_chunk(p) mchunkptr p;
1752 #endif
1753 {
1754 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1755
1756 /* No checkable chunk is mmapped */
1757 assert(!chunk_is_mmapped(p));
1758
1759 /* Check for legal address ... */
1760 assert((char*)p >= sbrk_base);
1761 if (p != top)
1762 assert((char*)p + sz <= (char*)top);
1763 else
1764 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1765
1766 }
1767
1768
1769 #if __STD_C
1770 static void do_check_free_chunk(mchunkptr p)
1771 #else
1772 static void do_check_free_chunk(p) mchunkptr p;
1773 #endif
1774 {
1775 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1776 mchunkptr next = chunk_at_offset(p, sz);
1777
1778 do_check_chunk(p);
1779
1780 /* Check whether it claims to be free ... */
1781 assert(!inuse(p));
1782
1783 /* Unless a special marker, must have OK fields */
1784 if ((long)sz >= (long)MINSIZE)
1785 {
1786 assert((sz & MALLOC_ALIGN_MASK) == 0);
1787 assert(aligned_OK(chunk2mem(p)));
1788 /* ... matching footer field */
1789 assert(next->prev_size == sz);
1790 /* ... and is fully consolidated */
1791 assert(prev_inuse(p));
1792 assert (next == top || inuse(next));
1793
1794 /* ... and has minimally sane links */
1795 assert(p->fd->bk == p);
1796 assert(p->bk->fd == p);
1797 }
1798 else /* markers are always of size SIZE_SZ */
1799 assert(sz == SIZE_SZ);
1800 }
1801
1802 #if __STD_C
1803 static void do_check_inuse_chunk(mchunkptr p)
1804 #else
1805 static void do_check_inuse_chunk(p) mchunkptr p;
1806 #endif
1807 {
1808 mchunkptr next = next_chunk(p);
1809 do_check_chunk(p);
1810
1811 /* Check whether it claims to be in use ... */
1812 assert(inuse(p));
1813
1814 /* ... and is surrounded by OK chunks.
1815 Since more things can be checked with free chunks than inuse ones,
1816 if an inuse chunk borders them and debug is on, it's worth checking them.
1817 */
1818 if (!prev_inuse(p))
1819 {
1820 mchunkptr prv = prev_chunk(p);
1821 assert(next_chunk(prv) == p);
1822 do_check_free_chunk(prv);
1823 }
1824 if (next == top)
1825 {
1826 assert(prev_inuse(next));
1827 assert(chunksize(next) >= MINSIZE);
1828 }
1829 else if (!inuse(next))
1830 do_check_free_chunk(next);
1831
1832 }
1833
1834 #if __STD_C
1835 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1836 #else
1837 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1838 #endif
1839 {
1840 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1841 long room = long_sub_size_t(sz, s);
1842
1843 do_check_inuse_chunk(p);
1844
1845 /* Legal size ... */
1846 assert((long)sz >= (long)MINSIZE);
1847 assert((sz & MALLOC_ALIGN_MASK) == 0);
1848 assert(room >= 0);
1849 assert(room < (long)MINSIZE);
1850
1851 /* ... and alignment */
1852 assert(aligned_OK(chunk2mem(p)));
1853
1854
1855 /* ... and was allocated at front of an available chunk */
1856 assert(prev_inuse(p));
1857
1858 }
1859
1860
1861 #define check_free_chunk(P) do_check_free_chunk(P)
1862 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1863 #define check_chunk(P) do_check_chunk(P)
1864 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1865 #else
1866 #define check_free_chunk(P)
1867 #define check_inuse_chunk(P)
1868 #define check_chunk(P)
1869 #define check_malloced_chunk(P,N)
1870 #endif
1871
1872
1873
1874 /*
1875 Macro-based internal utilities
1876 */
1877
1878
1879 /*
1880 Linking chunks in bin lists.
1881 Call these only with variables, not arbitrary expressions, as arguments.
1882 */
1883
1884 /*
1885 Place chunk p of size s in its bin, in size order,
1886 putting it ahead of others of same size.
1887 */
1888
1889
1890 #define frontlink(P, S, IDX, BK, FD) \
1891 { \
1892 if (S < MAX_SMALLBIN_SIZE) \
1893 { \
1894 IDX = smallbin_index(S); \
1895 mark_binblock(IDX); \
1896 BK = bin_at(IDX); \
1897 FD = BK->fd; \
1898 P->bk = BK; \
1899 P->fd = FD; \
1900 FD->bk = BK->fd = P; \
1901 } \
1902 else \
1903 { \
1904 IDX = bin_index(S); \
1905 BK = bin_at(IDX); \
1906 FD = BK->fd; \
1907 if (FD == BK) mark_binblock(IDX); \
1908 else \
1909 { \
1910 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1911 BK = FD->bk; \
1912 } \
1913 P->bk = BK; \
1914 P->fd = FD; \
1915 FD->bk = BK->fd = P; \
1916 } \
1917 }
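/*
  Worked example (illustrative): a freed 40-byte chunk has
  smallbin_index(40) == 5, so it is pushed at the head of bin 5 in O(1)
  (every chunk there is the same size).  A freed 600-byte chunk takes
  the second branch: its bin is kept sorted by decreasing size, so the
  loop above walks to the first chunk no larger than 600 and links the
  new chunk in front of it.
*/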
1918
1919
1920 /* take a chunk off a list */
1921
1922 #define unlink(P, BK, FD) \
1923 { \
1924 BK = P->bk; \
1925 FD = P->fd; \
1926 FD->bk = BK; \
1927 BK->fd = FD; \
1928 }
1929
1930 /* Place p as the last remainder */
1931
1932 #define link_last_remainder(P) \
1933 { \
1934 last_remainder->fd = last_remainder->bk = P; \
1935 P->fd = P->bk = last_remainder; \
1936 }
1937
1938 /* Clear the last_remainder bin */
1939
1940 #define clear_last_remainder \
1941 (last_remainder->fd = last_remainder->bk = last_remainder)
1942
1943
1944
1945
1946
1947
1948 /* Routines dealing with mmap(). */
1949
1950 #if HAVE_MMAP
1951
1952 #ifdef DEFINE_MALLOC
1953
1954 #if __STD_C
1955 static mchunkptr mmap_chunk(size_t size)
1956 #else
1957 static mchunkptr mmap_chunk(size) size_t size;
1958 #endif
1959 {
1960 size_t page_mask = malloc_getpagesize - 1;
1961 mchunkptr p;
1962
1963 #ifndef MAP_ANONYMOUS
1964 static int fd = -1;
1965 #endif
1966
1967 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1968
1969 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1970 * there is no following chunk whose prev_size field could be used.
1971 */
1972 size = (size + SIZE_SZ + page_mask) & ~page_mask;
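  /* For example (illustrative), with 4096-byte pages and SIZE_SZ == 4,
     a request of 5000 bytes rounds to (5000 + 4 + 4095) & ~4095 == 8192. */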
1973
1974 #ifdef MAP_ANONYMOUS
1975 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1976 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1977 #else /* !MAP_ANONYMOUS */
1978 if (fd < 0)
1979 {
1980 fd = open("/dev/zero", O_RDWR);
1981 if(fd < 0) return 0;
1982 }
1983 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1984 #endif
1985
1986 if(p == (mchunkptr)-1) return 0;
1987
1988 n_mmaps++;
1989 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1990
1991 /* We demand that the address eight bytes into a page be 8-byte aligned. */
1992 assert(aligned_OK(chunk2mem(p)));
1993
1994 /* The offset to the start of the mmapped region is stored
1995 * in the prev_size field of the chunk; normally it is zero,
1996 * but that can be changed in memalign().
1997 */
1998 p->prev_size = 0;
1999 set_head(p, size|IS_MMAPPED);
2000
2001 mmapped_mem += size;
2002 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2003 max_mmapped_mem = mmapped_mem;
2004 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2005 max_total_mem = mmapped_mem + sbrked_mem;
2006 return p;
2007 }
2008
2009 #endif /* DEFINE_MALLOC */
2010
2011 #ifdef SEPARATE_OBJECTS
2012 #define munmap_chunk malloc_munmap_chunk
2013 #endif
2014
2015 #ifdef DEFINE_FREE
2016
2017 #if __STD_C
2018 STATIC void munmap_chunk(mchunkptr p)
2019 #else
2020 STATIC void munmap_chunk(p) mchunkptr p;
2021 #endif
2022 {
2023 INTERNAL_SIZE_T size = chunksize(p);
2024 int ret;
2025
2026 assert (chunk_is_mmapped(p));
2027 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2028 assert((n_mmaps > 0));
2029 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2030
2031 n_mmaps--;
2032 mmapped_mem -= (size + p->prev_size);
2033
2034 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2035
2036 /* munmap returns non-zero on failure */
2037 assert(ret == 0);
2038 }
2039
2040 #else /* ! DEFINE_FREE */
2041
2042 #if __STD_C
2043 extern void munmap_chunk(mchunkptr);
2044 #else
2045 extern void munmap_chunk();
2046 #endif
2047
2048 #endif /* ! DEFINE_FREE */
2049
2050 #if HAVE_MREMAP
2051
2052 #ifdef DEFINE_REALLOC
2053
2054 #if __STD_C
2055 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2056 #else
2057 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2058 #endif
2059 {
2060 size_t page_mask = malloc_getpagesize - 1;
2061 INTERNAL_SIZE_T offset = p->prev_size;
2062 INTERNAL_SIZE_T size = chunksize(p);
2063 char *cp;
2064
2065 assert (chunk_is_mmapped(p));
2066 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2067 assert((n_mmaps > 0));
2068 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2069
2070 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
2071 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
2072
2073 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2074
2075 if (cp == (char *)-1) return 0;
2076
2077 p = (mchunkptr)(cp + offset);
2078
2079 assert(aligned_OK(chunk2mem(p)));
2080
2081 assert((p->prev_size == offset));
2082 set_head(p, (new_size - offset)|IS_MMAPPED);
2083
2084 mmapped_mem -= size + offset;
2085 mmapped_mem += new_size;
2086 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2087 max_mmapped_mem = mmapped_mem;
2088 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2089 max_total_mem = mmapped_mem + sbrked_mem;
2090 return p;
2091 }
2092
2093 #endif /* DEFINE_REALLOC */
2094
2095 #endif /* HAVE_MREMAP */
2096
2097 #endif /* HAVE_MMAP */
2098
2099
2100
2101
2102 #ifdef DEFINE_MALLOC
2103
2104 /*
2105 Extend the top-most chunk by obtaining memory from system.
2106 Main interface to sbrk (but see also malloc_trim).
2107 */
2108
2109 #if __STD_C
2110 static void malloc_extend_top(INTERNAL_SIZE_T nb)
2111 #else
2112 static void malloc_extend_top(nb) RDECL INTERNAL_SIZE_T nb;
2113 #endif
2114 {
2115 char* brk; /* return value from sbrk */
2116 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2117 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2118 int correction_failed = 0; /* whether we should relax the assertion */
2119 char* new_brk; /* return of 2nd sbrk call */
2120 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2121
2122 mchunkptr old_top = top; /* Record state of old top */
2123 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2124 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2125
2126 /* Pad request with top_pad plus minimal overhead */
2127
2128 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2129 unsigned long pagesz = malloc_getpagesize;
2130
2131 /* If not the first time through, round to preserve page boundary */
2132 /* Otherwise, we need to correct to a page size below anyway. */
2133 /* (We also correct below if there was an intervening foreign sbrk call.) */
2134
2135 if (sbrk_base != (char*)(-1))
2136 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2137
2138 brk = (char*)(MORECORE (sbrk_size));
2139
2140 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2141 if (brk == (char*)(MORECORE_FAILURE) ||
2142 (brk < old_end && old_top != initial_top))
2143 return;
2144
2145 sbrked_mem += sbrk_size;
2146
2147 if (brk == old_end /* can just add bytes to current top, unless
2148 previous correction failed */
2149 && ((POINTER_UINT)old_end & (pagesz - 1)) == 0)
2150 {
2151 top_size = sbrk_size + old_top_size;
2152 set_head(top, top_size | PREV_INUSE);
2153 }
2154 else
2155 {
2156 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2157 sbrk_base = brk;
2158 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2159 sbrked_mem += brk - (char*)old_end;
2160
2161 /* Guarantee alignment of first new chunk made from this space */
2162 front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2163 if (front_misalign > 0)
2164 {
2165 correction = (MALLOC_ALIGNMENT) - front_misalign;
2166 brk += correction;
2167 }
2168 else
2169 correction = 0;
2170
2171 /* Guarantee the next brk will be at a page boundary */
2172 correction += pagesz - ((POINTER_UINT)(brk + sbrk_size) & (pagesz - 1));
2173
2174 /* To guarantee page boundary, correction should be less than pagesz */
2175 correction &= (pagesz - 1);
2176
2177 /* Allocate correction */
2178 new_brk = (char*)(MORECORE (correction));
2179 if (new_brk == (char*)(MORECORE_FAILURE))
2180 {
2181 correction = 0;
2182 correction_failed = 1;
2183 new_brk = brk + sbrk_size;
2184 if (front_misalign > 0)
2185 new_brk -= (MALLOC_ALIGNMENT) - front_misalign;
2186 }
2187
2188 sbrked_mem += correction;
2189
2190 top = (mchunkptr)brk;
2191 top_size = new_brk - brk + correction;
2192 set_head(top, top_size | PREV_INUSE);
2193
2194 if (old_top != initial_top)
2195 {
2196
2197 /* There must have been an intervening foreign sbrk call. */
2198 /* A double fencepost is necessary to prevent consolidation */
2199
2200 /* If not enough space to do this, then user did something very wrong */
2201 if (old_top_size < MINSIZE)
2202 {
2203 set_head(top, PREV_INUSE); /* will force null return from malloc */
2204 return;
2205 }
2206
2207 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2208 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2209 set_head_size(old_top, old_top_size);
2210 chunk_at_offset(old_top, old_top_size )->size =
2211 SIZE_SZ|PREV_INUSE;
2212 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2213 SIZE_SZ|PREV_INUSE;
2214 /* If possible, release the rest. */
2215 if (old_top_size >= MINSIZE)
2216 __malloc_free(chunk2mem(old_top));
2217 }
2218 }
2219
2220 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2221 max_sbrked_mem = sbrked_mem;
2222 #if HAVE_MMAP
2223 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2224 max_total_mem = mmapped_mem + sbrked_mem;
2225 #else
2226 if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem)
2227 max_total_mem = sbrked_mem;
2228 #endif
2229
2230 /* We always land on a page boundary */
2231 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0
2232 || correction_failed);
2233 (void) correction_failed;
2234 }
2235
2236 #endif /* DEFINE_MALLOC */
2237
2238
2239 /* Main public routines */
2240
2241 #ifdef DEFINE_MALLOC
2242
2243 /*
2244 Malloc Algorithm:
2245
2246 The requested size is first converted into a usable form, `nb'.
2247 This currently means to add 4 bytes overhead plus possibly more to
2248 obtain 8-byte alignment and/or to obtain a size of at least
2249 MINSIZE (currently 16 bytes), the smallest allocatable size.
2250 (All fits are considered `exact' if they are within MINSIZE bytes.)
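
    As a worked example (assuming 4-byte size fields, an illustration):
    malloc(10) is padded to nb == 16, and malloc(20) to nb == 24.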
2251
2252 From there, the first of the following steps that succeeds is taken:
2253
2254 1. The bin corresponding to the request size is scanned, and if
2255 a chunk of exactly the right size is found, it is taken.
2256
2257 2. The most recently remaindered chunk is used if it is big
2258 enough. This is a form of (roving) first fit, used only in
2259 the absence of exact fits. Runs of consecutive requests use
2260 the remainder of the chunk used for the previous such request
2261 whenever possible. This limited use of a first-fit style
2262 allocation strategy tends to give contiguous chunks
2263 coextensive lifetimes, which improves locality and can reduce
2264 fragmentation in the long run.
2265
2266 3. Other bins are scanned in increasing size order, using a
2267 chunk big enough to fulfill the request, and splitting off
2268 any remainder. This search is strictly by best-fit; i.e.,
2269 the smallest (with ties going to approximately the least
2270 recently used) chunk that fits is selected.
2271
2272 4. If large enough, the chunk bordering the end of memory
2273 (`top') is split off. (This use of `top' is in accord with
2274 the best-fit search rule. In effect, `top' is treated as
2275 larger (and thus less well fitting) than any other available
2276 chunk since it can be extended to be as large as necessary
2277 (up to system limitations).
2278
2279 5. If the request size meets the mmap threshold and the
2280 system supports mmap, and there are few enough currently
2281 allocated mmapped regions, and a call to mmap succeeds,
2282 the request is allocated via direct memory mapping.
2283
2284 6. Otherwise, the top of memory is extended by
2285 obtaining more space from the system (normally using sbrk,
2286 but definable to anything else via the MORECORE macro).
2287 Memory is gathered from the system (in system page-sized
2288 units) in a way that allows chunks obtained across different
2289 sbrk calls to be consolidated, but does not require
2290 contiguous memory. Thus, it should be safe to intersperse
2291 mallocs with other sbrk calls.
2292
2293
2294 All allocations are made from the `lowest' part of any found
2295 chunk. (The implementation invariant is that prev_inuse is
2296 always true of any allocated chunk; i.e., that each allocated
2297 chunk borders either a previously allocated and still in-use chunk,
2298 or the base of its memory arena.)
2299
2300 */
2301
2302 #if __STD_C
2303 Void_t* mALLOc(size_t bytes)
2304 #else
2305 Void_t* mALLOc(bytes) RDECL size_t bytes;
2306 #endif
2307 {
2308 #ifdef MALLOC_PROVIDED
2309
2310 return malloc (bytes); /* forward to the system-provided malloc */
2311
2312 #else
2313
2314 mchunkptr victim; /* inspected/selected chunk */
2315 INTERNAL_SIZE_T victim_size; /* its size */
2316 int idx; /* index for bin traversal */
2317 mbinptr bin; /* associated bin */
2318 mchunkptr remainder; /* remainder from a split */
2319 long remainder_size; /* its size */
2320 int remainder_index; /* its bin index */
2321 unsigned long block; /* block traverser bit */
2322 int startidx; /* first bin of a traversed block */
2323 mchunkptr fwd; /* misc temp for linking */
2324 mchunkptr bck; /* misc temp for linking */
2325 mbinptr q; /* misc temp */
2326
2327 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2328
2329 /* Check for overflow and just fail, if so. */
2330 if (nb > INT_MAX || nb < bytes)
2331 {
2332 errno = ENOMEM;
2333 return 0;
2334 }
2335
2336 MALLOC_LOCK;
2337
2338 /* Check for exact match in a bin */
2339
2340 if (is_small_request(nb)) /* Faster version for small requests */
2341 {
2342 idx = smallbin_index(nb);
2343
2344 /* No traversal or size check necessary for small bins. */
2345
2346 q = bin_at(idx);
2347 victim = last(q);
2348
2349 #if MALLOC_ALIGN != 16
2350 /* Also scan the next one, since it would have a remainder < MINSIZE */
2351 if (victim == q)
2352 {
2353 q = next_bin(q);
2354 victim = last(q);
2355 }
2356 #endif
2357 if (victim != q)
2358 {
2359 victim_size = chunksize(victim);
2360 unlink(victim, bck, fwd);
2361 set_inuse_bit_at_offset(victim, victim_size);
2362 check_malloced_chunk(victim, nb);
2363 MALLOC_UNLOCK;
2364 return chunk2mem(victim);
2365 }
2366
2367 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2368
2369 }
2370 else
2371 {
2372 idx = bin_index(nb);
2373 bin = bin_at(idx);
2374
2375 for (victim = last(bin); victim != bin; victim = victim->bk)
2376 {
2377 victim_size = chunksize(victim);
2378 remainder_size = long_sub_size_t(victim_size, nb);
2379
2380 if (remainder_size >= (long)MINSIZE) /* too big */
2381 {
2382 --idx; /* adjust to rescan below after checking last remainder */
2383 break;
2384 }
2385
2386 else if (remainder_size >= 0) /* exact fit */
2387 {
2388 unlink(victim, bck, fwd);
2389 set_inuse_bit_at_offset(victim, victim_size);
2390 check_malloced_chunk(victim, nb);
2391 MALLOC_UNLOCK;
2392 return chunk2mem(victim);
2393 }
2394 }
2395
2396 ++idx;
2397
2398 }
2399
2400 /* Try to use the last split-off remainder */
2401
2402 if ( (victim = last_remainder->fd) != last_remainder)
2403 {
2404 victim_size = chunksize(victim);
2405 remainder_size = long_sub_size_t(victim_size, nb);
2406
2407 if (remainder_size >= (long)MINSIZE) /* re-split */
2408 {
2409 remainder = chunk_at_offset(victim, nb);
2410 set_head(victim, nb | PREV_INUSE);
2411 link_last_remainder(remainder);
2412 set_head(remainder, remainder_size | PREV_INUSE);
2413 set_foot(remainder, remainder_size);
2414 check_malloced_chunk(victim, nb);
2415 MALLOC_UNLOCK;
2416 return chunk2mem(victim);
2417 }
2418
2419 clear_last_remainder;
2420
2421 if (remainder_size >= 0) /* exhaust */
2422 {
2423 set_inuse_bit_at_offset(victim, victim_size);
2424 check_malloced_chunk(victim, nb);
2425 MALLOC_UNLOCK;
2426 return chunk2mem(victim);
2427 }
2428
2429 /* Else place in bin */
2430
2431 frontlink(victim, victim_size, remainder_index, bck, fwd);
2432 }
2433
2434 /*
2435 If there are any possibly nonempty big-enough blocks,
2436 search for best fitting chunk by scanning bins in blockwidth units.
2437 */
2438
2439 if ( (block = idx2binblock(idx)) <= binblocks)
2440 {
2441
2442 /* Get to the first marked block */
2443
2444 if ( (block & binblocks) == 0)
2445 {
2446 /* force to an even block boundary */
2447 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2448 block <<= 1;
2449 while ((block & binblocks) == 0)
2450 {
2451 idx += BINBLOCKWIDTH;
2452 block <<= 1;
2453 }
2454 }
2455
2456 /* For each possibly nonempty block ... */
2457 for (;;)
2458 {
2459 startidx = idx; /* (track incomplete blocks) */
2460 q = bin = bin_at(idx);
2461
2462 /* For each bin in this block ... */
2463 do
2464 {
2465 /* Find and use first big enough chunk ... */
2466
2467 for (victim = last(bin); victim != bin; victim = victim->bk)
2468 {
2469 victim_size = chunksize(victim);
2470 remainder_size = long_sub_size_t(victim_size, nb);
2471
2472 if (remainder_size >= (long)MINSIZE) /* split */
2473 {
2474 remainder = chunk_at_offset(victim, nb);
2475 set_head(victim, nb | PREV_INUSE);
2476 unlink(victim, bck, fwd);
2477 link_last_remainder(remainder);
2478 set_head(remainder, remainder_size | PREV_INUSE);
2479 set_foot(remainder, remainder_size);
2480 check_malloced_chunk(victim, nb);
2481 MALLOC_UNLOCK;
2482 return chunk2mem(victim);
2483 }
2484
2485 else if (remainder_size >= 0) /* take */
2486 {
2487 set_inuse_bit_at_offset(victim, victim_size);
2488 unlink(victim, bck, fwd);
2489 check_malloced_chunk(victim, nb);
2490 MALLOC_UNLOCK;
2491 return chunk2mem(victim);
2492 }
2493
2494 }
2495
2496 bin = next_bin(bin);
2497
2498 #if MALLOC_ALIGN == 16
2499 if (idx < MAX_SMALLBIN)
2500 {
2501 bin = next_bin(bin);
2502 ++idx;
2503 }
2504 #endif
2505 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2506
2507 /* Clear out the block bit. */
2508
2509 do /* Possibly backtrack to try to clear a partial block */
2510 {
2511 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2512 {
2513 binblocks &= ~block;
2514 break;
2515 }
2516 --startidx;
2517 q = prev_bin(q);
2518 } while (first(q) == q);
2519
2520 /* Get to the next possibly nonempty block */
2521
2522 if ( (block <<= 1) <= binblocks && (block != 0) )
2523 {
2524 while ((block & binblocks) == 0)
2525 {
2526 idx += BINBLOCKWIDTH;
2527 block <<= 1;
2528 }
2529 }
2530 else
2531 break;
2532 }
2533 }
2534
2535
2536 /* Try to use top chunk */
2537
2538 /* Require that there be a remainder, ensuring top always exists */
2539 remainder_size = long_sub_size_t(chunksize(top), nb);
2540 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2541 {
2542
2543 #if HAVE_MMAP
2544 /* If big and would otherwise need to extend, try to use mmap instead */
2545 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2546 (victim = mmap_chunk(nb)) != 0)
2547 {
2548 MALLOC_UNLOCK;
2549 return chunk2mem(victim);
2550 }
2551 #endif
2552
2553 /* Try to extend */
2554 malloc_extend_top(nb);
2555 remainder_size = long_sub_size_t(chunksize(top), nb);
2556 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2557 {
2558 MALLOC_UNLOCK;
2559 return 0; /* propagate failure */
2560 }
2561 }
2562
2563 victim = top;
2564 set_head(victim, nb | PREV_INUSE);
2565 top = chunk_at_offset(victim, nb);
2566 set_head(top, remainder_size | PREV_INUSE);
2567 check_malloced_chunk(victim, nb);
2568 MALLOC_UNLOCK;
2569 return chunk2mem(victim);
2570
2571 #endif /* MALLOC_PROVIDED */
2572 }
2573
2574 #endif /* DEFINE_MALLOC */
2575
2576 #ifdef DEFINE_FREE
2577
2578 /*
2579
2580 free() algorithm :
2581
2582 cases:
2583
2584 1. free(0) has no effect.
2585
2586 2. If the chunk was allocated via mmap, it is released via munmap().
2587
2588 3. If a returned chunk borders the current high end of memory,
2589 it is consolidated into the top, and if the total unused
2590 topmost memory exceeds the trim threshold, malloc_trim is
2591 called.
2592
2593 4. Other chunks are consolidated as they arrive, and
2594 placed in corresponding bins. (This includes the case of
2595 consolidating with the current `last_remainder').
2596
2597 */
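/*
  Worked example of case 4 (illustrative): with adjacent chunks
  A | P | B where A and B are free and P is in use, free(P) first
  unlinks A (backward consolidation), then unlinks B (forward
  consolidation), and frontlinks a single chunk covering all three --
  hence two free chunks are never left bordering each other.
*/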
2598
2599
2600 #if __STD_C
2601 void fREe(Void_t* mem)
2602 #else
2603 void fREe(mem) RDECL Void_t* mem;
2604 #endif
2605 {
2606 #ifdef MALLOC_PROVIDED
2607
2608 free (mem);
2609
2610 #else
2611
2612 mchunkptr p; /* chunk corresponding to mem */
2613 INTERNAL_SIZE_T hd; /* its head field */
2614 INTERNAL_SIZE_T sz; /* its size */
2615 int idx; /* its bin index */
2616 mchunkptr next; /* next contiguous chunk */
2617 INTERNAL_SIZE_T nextsz; /* its size */
2618 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2619 mchunkptr bck; /* misc temp for linking */
2620 mchunkptr fwd; /* misc temp for linking */
2621 int islr; /* track whether merging with last_remainder */
2622
2623 if (mem == 0) /* free(0) has no effect */
2624 return;
2625
2626 MALLOC_LOCK;
2627
2628 p = mem2chunk(mem);
2629 hd = p->size;
2630
2631 #if HAVE_MMAP
2632 if (hd & IS_MMAPPED) /* release mmapped memory. */
2633 {
2634 munmap_chunk(p);
2635 MALLOC_UNLOCK;
2636 return;
2637 }
2638 #endif
2639
2640 check_inuse_chunk(p);
2641
2642 sz = hd & ~PREV_INUSE;
2643 next = chunk_at_offset(p, sz);
2644 nextsz = chunksize(next);
2645
2646 if (next == top) /* merge with top */
2647 {
2648 sz += nextsz;
2649
2650 if (!(hd & PREV_INUSE)) /* consolidate backward */
2651 {
2652 prevsz = p->prev_size;
2653 p = chunk_at_offset(p, -prevsz);
2654 sz += prevsz;
2655 unlink(p, bck, fwd);
2656 }
2657
2658 set_head(p, sz | PREV_INUSE);
2659 top = p;
2660 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2661 malloc_trim(top_pad);
2662 MALLOC_UNLOCK;
2663 return;
2664 }
2665
2666 set_head(next, nextsz); /* clear inuse bit */
2667
2668 islr = 0;
2669
2670 if (!(hd & PREV_INUSE)) /* consolidate backward */
2671 {
2672 prevsz = p->prev_size;
2673 p = chunk_at_offset(p, -prevsz);
2674 sz += prevsz;
2675
2676 if (p->fd == last_remainder) /* keep as last_remainder */
2677 islr = 1;
2678 else
2679 unlink(p, bck, fwd);
2680 }
2681
2682 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2683 {
2684 sz += nextsz;
2685
2686 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2687 {
2688 islr = 1;
2689 link_last_remainder(p);
2690 }
2691 else
2692 unlink(next, bck, fwd);
2693 }
2694
2695
2696 set_head(p, sz | PREV_INUSE);
2697 set_foot(p, sz);
2698 if (!islr)
2699 frontlink(p, sz, idx, bck, fwd);
2700
2701 MALLOC_UNLOCK;
2702
2703 #endif /* MALLOC_PROVIDED */
2704 }
2705 #ifdef _HAVE_ALIAS_ATTRIBUTE
2706 #pragma GCC diagnostic push
2707 #ifndef __clang__
2708 #pragma GCC diagnostic ignored "-Wmissing-attributes"
2709 #endif
2710 __strong_reference(free, __malloc_free);
2711 #pragma GCC diagnostic pop
2712 #endif
2713 #endif /* DEFINE_FREE */
2714
2715 #ifdef DEFINE_REALLOC
2716
2717 /*
2718
2719 Realloc algorithm:
2720
2721 Chunks that were obtained via mmap cannot be extended or shrunk
2722 unless HAVE_MREMAP is defined, in which case mremap is used.
2723 Otherwise, if their reallocation is for additional space, they are
2724 copied. If for less, they are just left alone.
2725
2726 Otherwise, if the reallocation is for additional space, and the
2727 chunk can be extended, it is, else a malloc-copy-free sequence is
2728 taken. There are several different ways that a chunk could be
2729 extended. All are tried:
2730
2731 * Extending forward into following adjacent free chunk.
2732 * Shifting backwards, joining preceding adjacent space
2733 * Both shifting backwards and extending forward.
2734 * Extending into newly sbrked space
2735
2736 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2737 size argument of zero (re)allocates a minimum-sized chunk.
2738
2739 If the reallocation is for less space, and the new request is for
2740 a `small' (<512 bytes) size, then the newly unused space is lopped
2741 off and freed.
2742
2743 The old unix realloc convention of allowing the last-free'd chunk
2744 to be used as an argument to realloc is no longer supported.
2745 I don't know of any programs still relying on this feature,
2746 and allowing it would also allow too many other incorrect
2747 usages of realloc to be sensible.
2748
2749
2750 */
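/*
  Usage sketch (illustrative): growth can often happen in place.

    char* p = (char*) malloc(100);
    char* q = (char*) realloc(p, 200);

  q may equal p when the following chunk was free (or was top) and
  large enough; otherwise the data is copied into a new chunk and the
  old one freed.
*/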
2751
2752
2753 #if __STD_C
2754 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2755 #else
2756 Void_t* rEALLOc(oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
2757 #endif
2758 {
2759 #ifdef MALLOC_PROVIDED
2760
2761 return realloc (oldmem, bytes);
2762
2763 #else
2764
2765 INTERNAL_SIZE_T nb; /* padded request size */
2766
2767 mchunkptr oldp; /* chunk corresponding to oldmem */
2768 INTERNAL_SIZE_T oldsize; /* its size */
2769
2770 mchunkptr newp; /* chunk to return */
2771 INTERNAL_SIZE_T newsize; /* its size */
2772 Void_t* newmem; /* corresponding user mem */
2773
2774 mchunkptr next; /* next contiguous chunk after oldp */
2775 INTERNAL_SIZE_T nextsize; /* its size */
2776
2777 mchunkptr prev; /* previous contiguous chunk before oldp */
2778 INTERNAL_SIZE_T prevsize; /* its size */
2779
2780 mchunkptr remainder; /* holds split off extra space from newp */
2781 INTERNAL_SIZE_T remainder_size; /* its size */
2782
2783 mchunkptr bck; /* misc temp for linking */
2784 mchunkptr fwd; /* misc temp for linking */
2785
2786 #ifdef REALLOC_ZERO_BYTES_FREES
2787 if (bytes == 0) { fREe(oldmem); return 0; }
2788 #endif
2789
2790
2791 /* realloc of null is supposed to be same as malloc */
2792 if (oldmem == 0) return mALLOc(bytes);
2793
2794 MALLOC_LOCK;
2795
2796 newp = oldp = mem2chunk(oldmem);
2797 newsize = oldsize = chunksize(oldp);
2798
2799
2800 nb = request2size(bytes);
2801
2802 /* Check for overflow and just fail, if so. */
2803 if (nb > INT_MAX || nb < bytes)
2804 {
2805 errno = ENOMEM; MALLOC_UNLOCK; /* drop the lock taken above */
2806 return 0;
2807 }
2808
2809 #if HAVE_MMAP
2810 if (chunk_is_mmapped(oldp))
2811 {
2812 #if HAVE_MREMAP
2813 newp = mremap_chunk(oldp, nb);
2814 if(newp)
2815 {
2816 MALLOC_UNLOCK;
2817 return chunk2mem(newp);
2818 }
2819 #endif
2820 /* Note the extra SIZE_SZ overhead. */
2821 if(oldsize - SIZE_SZ >= nb)
2822 {
2823 MALLOC_UNLOCK;
2824 return oldmem; /* do nothing */
2825 }
2826 /* Must alloc, copy, free. */
2827 newmem = mALLOc(bytes);
2828 if (newmem == 0)
2829 {
2830 MALLOC_UNLOCK;
2831 return 0; /* propagate failure */
2832 }
2833 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2834 munmap_chunk(oldp);
2835 MALLOC_UNLOCK;
2836 return newmem;
2837 }
2838 #endif
2839
2840 check_inuse_chunk(oldp);
2841
2842 if ((long)(oldsize) < (long)(nb))
2843 {
2844
2845 /* Try expanding forward */
2846
2847 next = chunk_at_offset(oldp, oldsize);
2848 if (next == top || !inuse(next))
2849 {
2850 nextsize = chunksize(next);
2851
2852 /* Forward into top only if a remainder */
2853 if (next == top)
2854 {
2855 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2856 {
2857 newsize += nextsize;
2858 top = chunk_at_offset(oldp, nb);
2859 set_head(top, (newsize - nb) | PREV_INUSE);
2860 set_head_size(oldp, nb);
2861 MALLOC_UNLOCK;
2862 return chunk2mem(oldp);
2863 }
2864 }
2865
2866 /* Forward into next chunk */
2867 else if (((long)(nextsize + newsize) >= (long)(nb)))
2868 {
2869 unlink(next, bck, fwd);
2870 newsize += nextsize;
2871 goto split;
2872 }
2873 }
2874 else
2875 {
2876 next = 0;
2877 nextsize = 0;
2878 }
2879
2880 /* Try shifting backwards. */
2881
2882 if (!prev_inuse(oldp))
2883 {
2884 prev = prev_chunk(oldp);
2885 prevsize = chunksize(prev);
2886
2887 /* try forward + backward first to save a later consolidation */
2888
2889 if (next != 0)
2890 {
2891 /* into top */
2892 if (next == top)
2893 {
2894 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2895 {
2896 unlink(prev, bck, fwd);
2897 newp = prev;
2898 newsize += prevsize + nextsize;
2899 newmem = chunk2mem(newp);
2900 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2901 top = chunk_at_offset(newp, nb);
2902 set_head(top, (newsize - nb) | PREV_INUSE);
2903 set_head_size(newp, nb);
2904 MALLOC_UNLOCK;
2905 return newmem;
2906 }
2907 }
2908
2909 /* into next chunk */
2910 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2911 {
2912 unlink(next, bck, fwd);
2913 unlink(prev, bck, fwd);
2914 newp = prev;
2915 newsize += nextsize + prevsize;
2916 newmem = chunk2mem(newp);
2917 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2918 goto split;
2919 }
2920 }
2921
2922 /* backward only */
2923 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2924 {
2925 unlink(prev, bck, fwd);
2926 newp = prev;
2927 newsize += prevsize;
2928 newmem = chunk2mem(newp);
2929 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2930 goto split;
2931 }
2932 }
2933
2934 /* Must allocate */
2935
2936 newmem = mALLOc (bytes);
2937
2938 if (newmem == 0) /* propagate failure */
2939 {
2940 MALLOC_UNLOCK;
2941 return 0;
2942 }
2943
2944 /* Avoid copy if newp is next chunk after oldp. */
2945 /* (This can only happen when new chunk is sbrk'ed.) */
2946
2947 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2948 {
2949 newsize += chunksize(newp);
2950 newp = oldp;
2951 goto split;
2952 }
2953
2954 /* Otherwise copy, free, and exit */
2955 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2956 __malloc_free(oldmem);
2957 MALLOC_UNLOCK;
2958 return newmem;
2959 }
2960
2961
2962 split: /* split off extra room in old or expanded chunk */
2963
2964 remainder_size = long_sub_size_t(newsize, nb);
2965
2966 if (remainder_size >= (long)MINSIZE) /* split off remainder */
2967 {
2968 remainder = chunk_at_offset(newp, nb);
2969 set_head_size(newp, nb);
2970 set_head(remainder, remainder_size | PREV_INUSE);
2971 set_inuse_bit_at_offset(remainder, remainder_size);
2972 __malloc_free(chunk2mem(remainder)); /* let free() deal with it */
2973 }
2974 else
2975 {
2976 set_head_size(newp, newsize);
2977 set_inuse_bit_at_offset(newp, newsize);
2978 }
2979
2980 check_inuse_chunk(newp);
2981 MALLOC_UNLOCK;
2982 return chunk2mem(newp);
2983
2984 #endif /* MALLOC_PROVIDED */
2985 }
2986
2987 #endif /* DEFINE_REALLOC */
2988
2989 #ifdef DEFINE_MEMALIGN
2990
2991 /*
2992
2993 memalign algorithm:
2994
2995 memalign requests more than enough space from malloc, finds a spot
2996 within that chunk that meets the alignment request, and then
2997 possibly frees the leading and trailing space.
2998
2999 The alignment argument must be a power of two. This property is not
3000 checked by memalign, so misuse may result in random runtime errors.
3001
3002 8-byte alignment is guaranteed by normal malloc calls, so don't
3003 bother calling memalign with an argument of 8 or less.
3004
3005 Overreliance on memalign is a sure way to fragment space.
3006
3007 */
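/*
  Usage sketch (illustrative; internal names shown, user code would
  call memalign/free):

    Void_t* p = mEMALIGn(64, 1000);
    assert(((uintptr_t) p % 64) == 0);    p is 64-byte aligned
    fREe(p);                              the result is an ordinary
                                          chunk, so free works on it
*/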
3008
3009
3010 #if __STD_C
3011 Void_t* mEMALIGn(size_t alignment, size_t bytes)
3012 #else
3013 Void_t* mEMALIGn(alignment, bytes) RDECL size_t alignment; size_t bytes;
3014 #endif
3015 {
3016 INTERNAL_SIZE_T nb; /* padded request size */
3017 char* m; /* memory returned by malloc call */
3018 mchunkptr p; /* corresponding chunk */
3019 char* brk; /* alignment point within p */
3020 mchunkptr newp; /* chunk to return */
3021 INTERNAL_SIZE_T newsize; /* its size */
3022 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3023 mchunkptr remainder; /* spare room at end to split off */
3024 long remainder_size; /* its size */
3025
3026 /* If need less alignment than we give anyway, just relay to malloc */
3027
3028 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
3029
3030 /* Otherwise, ensure that it is at least a minimum chunk size */
3031
3032 if (alignment < MINSIZE) alignment = MINSIZE;
3033
3034 /* Call malloc with worst case padding to hit alignment. */
3035
3036 nb = request2size(bytes);
3037
3038 /* Check for overflow. */
3039 if (nb > __SIZE_MAX__ - (alignment + MINSIZE) || nb < bytes)
3040 {
3041 errno = ENOMEM;
3042 return 0;
3043 }
3044
3045 m = (char*)(mALLOc(nb + alignment + MINSIZE));
3046
3047 if (m == 0) return 0; /* propagate failure */
3048
3049 MALLOC_LOCK;
3050
3051 p = mem2chunk(m);
3052
3053 if ((((uintptr_t)(m)) % alignment) == 0) /* aligned */
3054 {
3055 #if HAVE_MMAP
3056 if(chunk_is_mmapped(p))
3057 {
3058 MALLOC_UNLOCK;
3059 return chunk2mem(p); /* nothing more to do */
3060 }
3061 #endif
3062 }
3063 else /* misaligned */
3064 {
3065 /*
3066 Find an aligned spot inside chunk.
3067 Since we need to give back leading space in a chunk of at
3068 least MINSIZE, if the first calculation places us at
3069 a spot with less than MINSIZE leader, we can move to the
3070 next aligned spot -- we've allocated enough total room so that
3071 this is always possible.
3072 */
3073
3074 brk = (char*)mem2chunk(((uintptr_t)(m + alignment - 1)) & -alignment);
3075 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
3076
3077 newp = (mchunkptr)brk;
3078 leadsize = brk - (char*)(p);
3079 newsize = chunksize(p) - leadsize;
3080
3081 #if HAVE_MMAP
3082 if(chunk_is_mmapped(p))
3083 {
3084 newp->prev_size = p->prev_size + leadsize;
3085 set_head(newp, newsize|IS_MMAPPED);
3086 MALLOC_UNLOCK;
3087 return chunk2mem(newp);
3088 }
3089 #endif
3090
3091 /* give back leader, use the rest */
3092
3093 set_head(newp, newsize | PREV_INUSE);
3094 set_inuse_bit_at_offset(newp, newsize);
3095 set_head_size(p, leadsize);
3096 __malloc_free(chunk2mem(p));
3097 p = newp;
3098
3099 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3100 }
3101
3102 /* Also give back spare room at the end */
3103
3104 remainder_size = long_sub_size_t(chunksize(p), nb);
3105
3106 if (remainder_size >= (long)MINSIZE)
3107 {
3108 remainder = chunk_at_offset(p, nb);
3109 set_head(remainder, remainder_size | PREV_INUSE);
3110 set_head_size(p, nb);
3111 __malloc_free(chunk2mem(remainder));
3112 }
3113
3114 check_inuse_chunk(p);
3115 MALLOC_UNLOCK;
3116 return chunk2mem(p);
3117
3118 }
3119
3120 #ifdef _HAVE_ALIAS_ATTRIBUTE
3121 __strong_reference(memalign, aligned_alloc);
3122 #endif
3123 #endif /* DEFINE_MEMALIGN */
3124
3125 #ifdef DEFINE_VALLOC
3126
3127 /*
3128 valloc just invokes memalign with alignment argument equal
3129 to the page size of the system (or as near to this as can
3130 be figured out from all the includes/defines above.)
3131 */
3132
3133 #if __STD_C
3134 Void_t* vALLOc(size_t bytes)
3135 #else
3136 Void_t* vALLOc(bytes) RDECL size_t bytes;
3137 #endif
3138 {
3139 return mEMALIGn (malloc_getpagesize, bytes);
3140 }
3141
3142 #endif /* DEFINE_VALLOC */
3143
3144 #ifdef DEFINE_PVALLOC
3145
3146 /*
3147 pvalloc just invokes valloc for the nearest pagesize
3148 that will accommodate the request
3149 */
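/*
  For example (illustrative), with 4096-byte pages pvalloc(100) rounds
  the request up to 4096, i.e. one whole page-aligned page.
*/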
3150
3151
3152 #if __STD_C
3153 Void_t* pvALLOc(size_t bytes)
3154 #else
3155 Void_t* pvALLOc(bytes) RDECL size_t bytes;
3156 #endif
3157 {
3158 size_t pagesize = malloc_getpagesize;
3159 if (bytes > __SIZE_MAX__ - pagesize)
3160 {
3161 errno = ENOMEM;
3162 return 0;
3163 }
3164 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3165 }
3166
3167 #endif /* DEFINE_PVALLOC */
3168
3169 #ifdef DEFINE_CALLOC
3170 #include "mul_overflow.h"
3171 /*
3172
3173 calloc calls malloc, then zeroes out the allocated chunk.
3174
3175 */
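/*
  Illustrative note: because the n * elem_size multiplication is
  checked with mul_overflow, a call such as calloc(0x10000, 0x10000)
  on a 32-bit target fails with ENOMEM instead of wrapping to a tiny
  allocation.
*/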
3176
3177 #if __STD_C
3178 Void_t* cALLOc(size_t n, size_t elem_size)
3179 #else
3180 Void_t* cALLOc(n, elem_size) RDECL size_t n; size_t elem_size;
3181 #endif
3182 {
3183 mchunkptr p;
3184 INTERNAL_SIZE_T csz;
3185
3186 INTERNAL_SIZE_T sz;
3187
3188 #if MORECORE_CLEARS
3189 mchunkptr oldtop;
3190 INTERNAL_SIZE_T oldtopsize;
3191 #endif
3192 Void_t* mem;
3193
3194 if (mul_overflow((INTERNAL_SIZE_T) n, (INTERNAL_SIZE_T) elem_size, &sz))
3195 {
3196 errno = ENOMEM;
3197 return 0;
3198 }
3199
3200 /* check if malloc_extend_top was called, in which case we don't need to clear */
3201 #if MORECORE_CLEARS
3202 MALLOC_LOCK;
3203 oldtop = top;
3204 oldtopsize = chunksize(top);
3205 #endif
3206
3207 mem = mALLOc (sz);
3208
3209 if (mem == 0)
3210 {
3211 #if MORECORE_CLEARS
3212 MALLOC_UNLOCK;
3213 #endif
3214 return 0;
3215 }
3216 else
3217 {
3218 p = mem2chunk(mem);
3219
3220 /* Two optional cases in which clearing is not necessary */
3221
3222
3223 #if HAVE_MMAP
3224 if (chunk_is_mmapped(p))
3225 {
3226 #if MORECORE_CLEARS
3227 MALLOC_UNLOCK;
3228 #endif
3229 return mem;
3230 }
3231 #endif
3232
3233 csz = chunksize(p);
3234
3235 #if MORECORE_CLEARS
3236 if (p == oldtop && csz > oldtopsize)
3237 {
3238 /* clear only the bytes from non-freshly-sbrked memory */
3239 csz = oldtopsize;
3240 }
3241 MALLOC_UNLOCK;
3242 #endif
3243
3244 MALLOC_ZERO(mem, csz - SIZE_SZ);
3245 return mem;
3246 }
3247 }
3248
3249 #endif /* DEFINE_CALLOC */
3250
3251 #if defined(DEFINE_CFREE) && !defined(__CYGWIN__)
3252
3253 /*
3254
3255 cfree just calls free. It is needed/defined on some systems
3256 that pair it with calloc, presumably for odd historical reasons.
3257
3258 */
3259
3260 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3261 #if !defined(_LIBC) || !defined(_REENT_ONLY)
3262 #if __STD_C
3263 void cfree(Void_t *mem)
3264 #else
3265 void cfree(mem) Void_t *mem;
3266 #endif
3267 {
3268 fREe(mem);
3269 }
3270 #endif
3271 #endif
3272
3273 #endif /* DEFINE_CFREE */
3274
3275 #ifdef DEFINE_FREE
3276
3277 /*
3278
3279 Malloc_trim gives memory back to the system (via negative
3280 arguments to sbrk) if there is unused memory at the `high' end of
3281 the malloc pool. You can call this after freeing large blocks of
3282 memory to potentially reduce the system-level memory requirements
3283 of a program. However, it cannot guarantee to reduce memory. Under
3284 some allocation patterns, some large free blocks of memory will be
3285 locked between two used chunks, so they cannot be given back to
3286 the system.
3287
3288 The `pad' argument to malloc_trim represents the amount of free
3289 trailing space to leave untrimmed. If this argument is zero,
3290 only the minimum amount of memory to maintain internal data
3291 structures will be left (one page or less). Non-zero arguments
3292 can be supplied to maintain enough trailing space to service
3293 future expected allocations without having to re-obtain memory
3294 from the system.
3295
3296 Malloc_trim returns 1 if it actually released any memory, else 0.
3297
3298 */
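/*
  Usage sketch (illustrative):

    free(big_buffer);              return a large block to the pool
    malloc_trim(64 * 1024);        give excess back to the system,
                                   keeping ~64K of headroom for
                                   future allocations
*/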
3299
3300 #if __STD_C
3301 int malloc_trim(size_t pad)
3302 #else
3303 int malloc_trim(pad) RDECL size_t pad;
3304 #endif
3305 {
3306 long top_size; /* Amount of top-most memory */
3307 long extra; /* Amount to release */
3308 char* current_brk; /* address returned by pre-check sbrk call */
3309 char* new_brk; /* address returned by negative sbrk call */
3310
3311 unsigned long pagesz = malloc_getpagesize;
3312
3313 MALLOC_LOCK;
3314
3315 top_size = chunksize(top);
3316 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3317
3318 if (extra < (long)pagesz) /* Not enough memory to release */
3319 {
3320 MALLOC_UNLOCK;
3321 return 0;
3322 }
3323
3324 else
3325 {
3326 /* Test to make sure no one else called sbrk */
3327 current_brk = (char*)(MORECORE (0));
3328 if (current_brk != (char*)(top) + top_size)
3329 {
3330 MALLOC_UNLOCK;
3331 return 0; /* Apparently we don't own memory; must fail */
3332 }
3333
3334 else
3335 {
3336 new_brk = (char*)(MORECORE (-extra));
3337
3338 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3339 {
3340 /* Try to figure out what we have */
3341 current_brk = (char*)(MORECORE (0));
3342 top_size = current_brk - (char*)top;
3343 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3344 {
3345 sbrked_mem = current_brk - sbrk_base;
3346 set_head(top, top_size | PREV_INUSE);
3347 }
3348 check_chunk(top);
3349 MALLOC_UNLOCK;
3350 return 0;
3351 }
3352
3353 else
3354 {
3355 /* Success. Adjust top accordingly. */
3356 set_head(top, (top_size - extra) | PREV_INUSE);
3357 sbrked_mem -= extra;
3358 check_chunk(top);
3359 MALLOC_UNLOCK;
3360 return 1;
3361 }
3362 }
3363 }
3364 }
3365
3366 #endif /* DEFINE_FREE */
3367
3368 #ifdef DEFINE_MALLOC_USABLE_SIZE
3369
3370 /*
3371 malloc_usable_size:
3372
3373 This routine tells you how many bytes you can actually use in an
3374 allocated chunk, which may be more than you requested (although
3375 often not). You can use this many bytes without worrying about
3376 overwriting other allocated objects. Not a particularly great
3377 programming practice, but still sometimes useful.
3378
3379 */
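/*
  Usage sketch (illustrative, assuming 4-byte size fields):

    char*  p = (char*) malloc(10);
    size_t n = malloc_usable_size(p);   here n == 12, not 10
    memset(p, 0, n);                    using all n bytes is safe
*/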
3380
3381 #if __STD_C
3382 size_t malloc_usable_size(Void_t* mem)
3383 #else
3384 size_t malloc_usable_size(mem) RDECL Void_t* mem;
3385 #endif
3386 {
3387 mchunkptr p;
3388 if (mem == 0)
3389 return 0;
3390 else
3391 {
3392 p = mem2chunk(mem);
3393 if(!chunk_is_mmapped(p))
3394 {
3395 if (!inuse(p)) return 0;
3396 #if DEBUG
3397 MALLOC_LOCK;
3398 check_inuse_chunk(p);
3399 MALLOC_UNLOCK;
3400 #endif
3401 return chunksize(p) - SIZE_SZ;
3402 }
3403 return chunksize(p) - 2*SIZE_SZ;
3404 }
3405 }
3406
3407 #endif /* DEFINE_MALLOC_USABLE_SIZE */
3408
3409 #ifdef DEFINE_MALLINFO
3410
3411 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3412
3413 STATIC void malloc_update_mallinfo(void)
3414 {
3415 int i;
3416 mbinptr b;
3417 mchunkptr p;
3418 #if DEBUG
3419 mchunkptr q;
3420 #endif
3421
3422 INTERNAL_SIZE_T avail = chunksize(top);
3423 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3424
3425 for (i = 1; i < NAV; ++i)
3426 {
3427 b = bin_at(i);
3428 for (p = last(b); p != b; p = p->bk)
3429 {
3430 #if DEBUG
3431 check_free_chunk(p);
3432 for (q = next_chunk(p);
3433 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3434 q = next_chunk(q))
3435 check_inuse_chunk(q);
3436 #endif
3437 avail += chunksize(p);
3438 navail++;
3439 }
3440 }
3441
3442 current_mallinfo.ordblks = navail;
3443 current_mallinfo.uordblks = sbrked_mem - avail;
3444 current_mallinfo.fordblks = avail;
3445 #if HAVE_MMAP
3446 current_mallinfo.hblks = n_mmaps;
3447 current_mallinfo.hblkhd = mmapped_mem;
3448 #endif
3449 current_mallinfo.keepcost = chunksize(top);
3450
3451 }
3452
3453 #else /* ! DEFINE_MALLINFO */
3454
3455 #if __STD_C
3456 extern void malloc_update_mallinfo(void);
3457 #else
3458 extern void malloc_update_mallinfo();
3459 #endif
3460
3461 #endif /* ! DEFINE_MALLINFO */
3462
3463 #ifdef DEFINE_MALLOC_STATS
3464
3465 /*
3466
3467 malloc_stats:
3468
3469 Prints on stderr the amount of space obtained from the system (both
3470 via sbrk and mmap), the maximum amount (which may be more than
3471 current if malloc_trim and/or munmap got called), the maximum
3472 number of simultaneous mmap regions used, and the current number
3473 of bytes allocated via malloc (or realloc, etc) but not yet
3474 freed. (Note that this is the number of bytes allocated, not the
3475 number requested. It will be larger than the number requested
3476 because of alignment and bookkeeping overhead.)
3477
3478 */
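/*
  The output has this shape (values illustrative only):

    max system bytes =     266240
    system bytes =     266240
    in use bytes =     131160
    max mmap regions =          2
*/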
3479
3480 #if __STD_C
3481 void malloc_stats(void)
3482 #else
3483 void malloc_stats() RDECL
3484 #endif
3485 {
3486 unsigned long local_max_total_mem;
3487 int local_sbrked_mem;
3488 struct mallinfo local_mallinfo;
3489 #if HAVE_MMAP
3490 unsigned long local_mmapped_mem, local_max_n_mmaps;
3491 #endif
3492 FILE *fp;
3493
3494 MALLOC_LOCK;
3495 malloc_update_mallinfo();
3496 local_max_total_mem = max_total_mem;
3497 local_sbrked_mem = sbrked_mem;
3498 local_mallinfo = current_mallinfo;
3499 #if HAVE_MMAP
3500 local_mmapped_mem = mmapped_mem;
3501 local_max_n_mmaps = max_n_mmaps;
3502 #endif
3503 MALLOC_UNLOCK;
3504
3505 fp = stderr;
3506
3507 fprintf(fp, "max system bytes = %10u\n",
3508 (unsigned int)(local_max_total_mem));
3509 #if HAVE_MMAP
3510 fprintf(fp, "system bytes = %10u\n",
3511 (unsigned int)(local_sbrked_mem + local_mmapped_mem));
3512 fprintf(fp, "in use bytes = %10u\n",
3513 (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
3514 #else
3515 fprintf(fp, "system bytes = %10u\n",
3516 (unsigned int)local_sbrked_mem);
3517 fprintf(fp, "in use bytes = %10u\n",
3518 (unsigned int)local_mallinfo.uordblks);
3519 #endif
3520 #if HAVE_MMAP
3521 fprintf(fp, "max mmap regions = %10u\n",
3522 (unsigned int)local_max_n_mmaps);
3523 #endif
3524 }
3525
3526 #endif /* DEFINE_MALLOC_STATS */
3527
3528 #ifdef DEFINE_MALLINFO
3529
3530 /*
3531 mallinfo returns a copy of updated current mallinfo.
3532 */
3533
3534 #if __STD_C
3535 struct mallinfo mALLINFo(void)
3536 #else
3537 struct mallinfo mALLINFo() RDECL
3538 #endif
3539 {
3540 struct mallinfo ret;
3541
3542 MALLOC_LOCK;
3543 malloc_update_mallinfo();
3544 ret = current_mallinfo;
3545 MALLOC_UNLOCK;
3546 return ret;
3547 }
3548
3549 #endif /* DEFINE_MALLINFO */
3550
3551 #ifdef DEFINE_MALLOPT
3552
3553 /*
3554 mallopt:
3555
3556 mallopt is the general SVID/XPG interface to tunable parameters.
3557 The format is to provide a (parameter-number, parameter-value) pair.
3558 mallopt then sets the corresponding parameter to the argument
3559 value if it can (i.e., so long as the value is meaningful),
3560 and returns 1 if successful else 0.
3561
3562 See descriptions of tunable parameters above.
3563
3564 */
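/*
  Usage sketch (illustrative):

    mallopt(M_TRIM_THRESHOLD, 256 * 1024);  trim only when more than
                                            256K sits free atop the heap
    mallopt(M_MMAP_MAX, 0);                 disable mmap-backed chunks
*/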
3565
3566 #if __STD_C
3567 int mALLOPt(int param_number, int value)
3568 #else
3569 int mALLOPt(param_number, value) RDECL int param_number; int value;
3570 #endif
3571 {
3572 MALLOC_LOCK;
3573 switch(param_number)
3574 {
3575 case M_TRIM_THRESHOLD:
3576 trim_threshold = value; MALLOC_UNLOCK; return 1;
3577 case M_TOP_PAD:
3578 top_pad = value; MALLOC_UNLOCK; return 1;
3579 case M_MMAP_THRESHOLD:
3580 #if HAVE_MMAP
3581 mmap_threshold = value;
3582 #endif
3583 MALLOC_UNLOCK;
3584 return 1;
3585 case M_MMAP_MAX:
3586 #if HAVE_MMAP
3587 n_mmaps_max = value; MALLOC_UNLOCK; return 1;
3588 #else
3589 MALLOC_UNLOCK; return value == 0;
3590 #endif
3591
3592 default:
3593 MALLOC_UNLOCK;
3594 return 0;
3595 }
3596 }
3597
3598 #endif /* DEFINE_MALLOPT */
3599
3600 /*
3601
3602 History:
3603
3604 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3605 * Fixed ordering problem with boundary-stamping
3606
3607 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3608 * Added pvalloc, as recommended by H.J. Liu
3609 * Added 64bit pointer support mainly from Wolfram Gloger
3610 * Added anonymously donated WIN32 sbrk emulation
3611 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3612 * malloc_extend_top: fix mask error that caused wastage after
3613 foreign sbrks
3614 * Add linux mremap support code from HJ Liu
3615
3616 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3617 * Integrated most documentation with the code.
3618 * Add support for mmap, with help from
3619 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3620 * Use last_remainder in more cases.
3621 * Pack bins using idea from colin@nyx10.cs.du.edu
3622 * Use ordered bins instead of best-fit threshold
3623 * Eliminate block-local decls to simplify tracing and debugging.
3624 * Support another case of realloc via move into top
3625 * Fix error occurring when initial sbrk_base not word-aligned.
3626 * Rely on page size for units instead of SBRK_UNIT to
3627 avoid surprises about sbrk alignment conventions.
3628 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3629 (raymond@es.ele.tue.nl) for the suggestion.
3630 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3631 * More precautions for cases where other routines call sbrk,
3632 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3633 * Added macros etc., allowing use in linux libc from
3634 H.J. Lu (hjl@gnu.ai.mit.edu)
3635 * Inverted this history list
3636
3637 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3638 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3639 * Removed all preallocation code since under current scheme
3640 the work required to undo bad preallocations exceeds
3641 the work saved in good cases for most test programs.
3642 * No longer use return list or unconsolidated bins since
3643 no scheme using them consistently outperforms those that don't
3644 given above changes.
3645 * Use best fit for very large chunks to prevent some worst-cases.
3646 * Added some support for debugging
3647
3648 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3649 * Removed footers when chunks are in use. Thanks to
3650 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3651
3652 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3653 * Added malloc_trim, with help from Wolfram Gloger
3654 (wmglo@Dent.MED.Uni-Muenchen.DE).
3655
3656 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3657
3658 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3659 * realloc: try to expand in both directions
3660 * malloc: swap order of clean-bin strategy;
3661 * realloc: only conditionally expand backwards
3662 * Try not to scavenge used bins
3663 * Use bin counts as a guide to preallocation
3664 * Occasionally bin return list chunks in first scan
3665 * Add a few optimizations from colin@nyx10.cs.du.edu
3666
3667 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3668 * faster bin computation & slightly different binning
3669 * merged all consolidations to one part of malloc proper
3670 (eliminating old malloc_find_space & malloc_clean_bin)
3671 * Scan 2 returns chunks (not just 1)
3672 * Propagate failure in realloc if malloc returns 0
3673 * Add stuff to allow compilation on non-ANSI compilers
3674 from kpv@research.att.com
3675
3676 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3677 * removed potential for odd address access in prev_chunk
3678 * removed dependency on getpagesize.h
3679 * misc cosmetics and a bit more internal documentation
3680 * anticosmetics: mangled names in macros to evade debugger strangeness
3681 * tested on sparc, hp-700, dec-mips, rs6000
3682 with gcc & native cc (hp, dec only) allowing
3683 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3684
3685 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3686 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3687 structure of old version, but most details differ.)
3688
3689 */
3690 #endif
3691