Apparently all the old-style function prototypes using the __P() macro are now gone. Robert Garrett is the one to thank for removing the thousands of entries.
Matt Dillon has brought in his slab allocator. It handles kernel memory allocation and is very nearly multiprocessor-safe, meaning that (eventually) no fancy locking will be required for memory allocation.
I may be wrong, but this sounds like a partial fix for the “Giant Lock” problem inherited from FreeBSD 4. Matt Dillon's explanation of the slab allocator follows, taken straight from the cvs commit entry.
SLAB ALLOCATOR FEATURES
The slab allocator breaks allocations up into approximately 80 zones based
on their size. Each zone has a chunk size (alignment). For example, all
allocations in the 1-8 byte range will allocate in chunks of 8 bytes. Each
size zone is backed by one or more blocks of memory. The size of these
blocks is fixed at ZoneSize, which is calculated at boot time to be between
32K and 128K. The use of a fixed block size allows us to locate the zone
header given a memory pointer with a simple masking operation.
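To make that masking trick concrete, here's a minimal sketch of the idea (not the actual kernel code; ZONE_SIZE, struct zone_header, and all of the field names are illustrative guesses): because each zone block is aligned to its power-of-2 size, masking the low bits of any pointer into the block recovers the header.

    #include <stdint.h>

    #define ZONE_SIZE (128 * 1024)      /* fixed block size, e.g. 128K */

    struct zone_header {
        int   z_cpuid;                  /* cpu that owns this zone block */
        int   z_chunksize;              /* chunk size for this zone */
        int   z_nextindex;              /* next never-allocated chunk */
        void *z_base;                   /* start of the chunk storage */
        void *z_freelist;               /* linked list of freed chunks */
    };

    /* Zone blocks are ZONE_SIZE-aligned, so masking off the low bits
     * of a pointer into the block yields the zone header's address. */
    static inline struct zone_header *
    ptr_to_zone(void *ptr)
    {
        return (struct zone_header *)
            ((uintptr_t)ptr & ~(uintptr_t)(ZONE_SIZE - 1));
    }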
The slab allocator operates on a per-cpu basis. The cpu that allocates a
zone block owns it. free() checks the cpu that owns the zone holding the
memory pointer being freed and forwards the request to the appropriate cpu
through an asynchronous IPI. This request is not currently optimized but it
can theoretically be heavily optimized (‘queued’) to the point where the
overhead becomes inconsequential. As of this commit the malloc_type
information is not MP safe, but the core slab allocation and deallocation
algorithms, excluding the allocation of the backing block,
ARE MP safe. The core code requires no mutexes or locks, only a critical
section.
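Here's a rough sketch of what that free path might look like, building on the zone_header sketched above; mycpuid(), zone_free_local(), and send_ipi_async() are hypothetical stand-ins, not the real kernel interfaces.

    /* Hypothetical helpers, declared here only for the sketch. */
    int  mycpuid(void);
    void zone_free_local(struct zone_header *z, void *ptr);
    void send_ipi_async(int cpu,
                        void (*fn)(struct zone_header *, void *),
                        struct zone_header *z, void *ptr);

    void
    slab_free(void *ptr)
    {
        struct zone_header *z = ptr_to_zone(ptr);

        if (z->z_cpuid == mycpuid()) {
            zone_free_local(z, ptr);    /* owner cpu: no mutex needed */
        } else {
            /* Ship the request to the owning cpu asynchronously; this
             * is the part that could be queued/batched until the
             * overhead becomes inconsequential. */
            send_ipi_async(z->z_cpuid, zone_free_local, z, ptr);
        }
    }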
Each zone contains N allocations of a fixed chunk size. For example, a
128K zone can hold roughly 16000 8 byte allocations. The zone
is initially zeroed and new allocations are simply allocated linearly out
of the zone. When a chunk is freed it is entered into a linked list and
the next allocation request will reuse it. The slab allocator heavily
optimizes M_ZERO operations at both the page level and the chunk level.
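The allocation side might look something like this (again a sketch, with z_base and z_nextindex being assumed names): freed chunks are threaded through their own first word, and untouched chunks come linearly off the end of the zeroed block.

    void *
    zone_alloc_chunk(struct zone_header *z)
    {
        void *chunk;

        if (z->z_freelist != NULL) {
            /* Reuse a previously freed chunk from the linked list. */
            chunk = z->z_freelist;
            z->z_freelist = *(void **)chunk;
        } else {
            /* Carve the next chunk linearly out of the still-zeroed
             * backing block.  (A real allocator would also check for
             * zone exhaustion here.) */
            chunk = (char *)z->z_base + z->z_nextindex * z->z_chunksize;
            z->z_nextindex++;
        }
        return chunk;
    }

Note how chunks taken from the linear region of a freshly zeroed block are already zero-filled, which is presumably part of why M_ZERO can be optimized at both the page and chunk level; only chunks recycled off the free list would need re-zeroing.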
The slab allocator maintains various undocumented malloc quirks such as
ensuring that small power-of-2 allocations are aligned to their size,
and malloc(0) requests are also allowed and return a non-NULL result.
kern_tty.c depends heavily on the power-of-2 alignment feature and ahc
depends on the malloc(0) feature. Eventually we may remove the malloc(0)
feature.
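The power-of-2 alignment quirk falls out of the zone layout: a power-of-2 request maps to a zone whose chunk size equals the request, and chunks packed back-to-back from an aligned base are then automatically size-aligned. Here's a toy version of the size-to-chunk rounding (the real allocator's ~80 zones are finer-grained than a pure power-of-2 table):

    #include <stddef.h>

    static size_t
    chunk_size_for(size_t size)
    {
        size_t chunk = 8;               /* smallest zone: 8 bytes */

        while (chunk < size)
            chunk <<= 1;                /* round up to a power of 2 */
        return chunk;                   /* e.g. 64 -> 64-byte chunks */
    }

Note that in this toy version even a 0-byte request gets an 8-byte chunk, which lines up with the malloc(0)-returns-non-NULL quirk.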
PROBLEMS AS OF THIS COMMIT
NOTE! This commit may destabilize the kernel a bit. There are issues
with the ISA DMA area (‘bounce’ buffer allocation) due to the large backing
block size used by the slab allocator, and there are probably some deadlock
issues due to the removal of kmem_map that have not yet been resolved.