Commit Graph

14 Commits

Andreas Kling
c78a50c4c3 LibGC: Use mmap for HeapBlock chunks
Use a direct anonymous mapping for POSIX BlockAllocator chunks and trim
any temporary padding needed to make the live 2 MiB chunk HeapBlock-
aligned.

The GC only needs each 16 KiB HeapBlock slot aligned so from_cell() can
recover the block base by masking low bits. Request that alignment from
mach_vm_map() as well, rather than aligning whole chunks to 2 MiB.
2026-05-10 10:58:11 +02:00
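The pointer-masking trick the commit message relies on can be sketched as follows. This is a hedged illustration, not the LibGC source: it assumes only that every HeapBlock slot is aligned to the 16 KiB block size, so masking off the low bits of any interior cell pointer recovers the block base, as from_cell() does.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the from_cell() recovery trick: with every
// HeapBlock slot aligned to BLOCK_SIZE, clearing the low bits of a cell
// pointer yields the base address of the block that contains it.
constexpr uintptr_t BLOCK_SIZE = 16 * 1024;

constexpr uintptr_t block_base_from_cell(uintptr_t cell_address)
{
    return cell_address & ~(BLOCK_SIZE - 1);
}
```

Note that this is exactly why only 16 KiB alignment is needed per slot; aligning whole 2 MiB chunks would be stronger than the mask requires.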
R-Goc
02bb892d7a LibThreading/LibSync: Split out sync primitives
This commit splits the synchronization primitives out of LibThreading into
a new library, LibSync. LibThreading depends on LibCore, while LibCore
needs the synchronization primitives from LibThreading. That worked while
the primitives were header-only, but adding an implementation file ran
into the circular dependency. Hiding the pthread implementation behind
.cpp files requires breaking that cycle, so the synchronization
primitives were moved to a separate library.
2026-05-08 18:58:35 -05:00
Andreas Kling
09459ec74c LibGC: Prefer MADV_DONTNEED over MADV_FREE for decommitted blocks
MADV_FREE is lazy: the kernel only reclaims pages once it actually
needs them. As a result, blocks freed during a transient burst of
allocation continue to count toward our RSS for an arbitrarily long
time after the busy phase goes idle.

Switch to MADV_DONTNEED on Linux so freed blocks drop out of RSS
immediately. macOS keeps the existing FREE_REUSABLE/FREE_REUSE paired
protocol (which integrates with its RSS accounting and gives the same
"eager release" behavior).

On cloudflare.com, some big GCs now drop ~350 MB instantly instead of
accumulating into a long-lived MADV_FREE backlog.
2026-05-09 00:45:45 +02:00
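The behavioral difference the commit describes is observable directly. Below is a minimal Linux sketch (not the LibGC code itself): after MADV_DONTNEED, dirtied anonymous pages are dropped immediately and the next touch sees fresh zero pages, whereas MADV_FREE would only mark them reclaimable and leave them charged to RSS until memory pressure arrives.

```cpp
#include <cstring>
#include <sys/mman.h>

// Demonstrates the eager-release behavior of MADV_DONTNEED on a private
// anonymous mapping: the pages leave RSS at once and read back as zeroes.
bool dontneed_releases_pages()
{
    size_t const size = 16 * 4096;
    void* block = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (block == MAP_FAILED)
        return false;

    memset(block, 0xAB, size); // dirty the pages (counts toward RSS)

    // Drop the physical pages immediately; MADV_FREE would merely mark
    // them reclaimable and keep them in RSS for now.
    if (madvise(block, size, MADV_DONTNEED) != 0)
        return false;

    // After MADV_DONTNEED, the anonymous pages read back zero-filled.
    bool zeroed = static_cast<unsigned char*>(block)[0] == 0;
    munmap(block, size);
    return zeroed;
}
```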
Andreas Kling
d948320e32 LibGC: Drop the random pick when handing out a HeapBlock slot
The slot lists used to be drained in a random order to make heap
layout less predictable, but on top of the per-CellAllocator type-
isolation we already enforce, the security delta is negligible.
LIFO via take_last() gives us better cache locality (we hand out
the most recently freed slot, which is more likely to be warm) and
saves a get_random_uniform call on every allocation.
2026-05-07 20:09:05 +02:00
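The LIFO handout described above can be sketched in a few lines. Names here are hypothetical, not the LibGC API: the point is simply that take_last() returns the most recently freed slot (the one most likely to still be cache-warm) with no random-number draw on the allocation path.

```cpp
#include <vector>

// Hypothetical sketch of a LIFO free-slot list: slots are handed back in
// reverse order of being freed, instead of being drained at random.
struct FreeSlotList {
    std::vector<int> slots; // stand-in for freed HeapBlock slots

    void free_slot(int slot) { slots.push_back(slot); }

    int take_last()
    {
        int slot = slots.back();
        slots.pop_back();
        return slot;
    }
};
```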
Andreas Kling
adfc9d263f LibGC: Defer per-block madvise to a global background worker
deallocate_block() used to call MADV_FREE_REUSABLE / MADV_FREE /
MADV_DONTNEED inline on every freed block. With sweep typically
freeing many blocks per GC, the cumulative syscall cost shows up
as real GC pause time.

Move the work onto a single global "decommit worker" thread:

- deallocate_block now just poisons the slot and pushes it onto a
  per-allocator m_freshly_freed queue. No syscalls.
- allocate_block prefers m_freshly_freed over m_blocks, so a slot
  that's recycled before the worker sees it skips the
  REUSABLE/REUSE pair entirely. This is the main payoff.
- Heap::sweep_dead_cells kicks the worker at the end of sweep.
  The worker sleeps 50 ms after each kick to give the JS thread
  breathing room, then drains each registered allocator's
  m_freshly_freed, madvises slots in batches of 64 with
  sched_yield between batches, and splices them onto m_blocks.
- Per-allocator refcount + condvar lets ~BlockAllocator wait
  until the worker has dropped its reference before our storage
  goes away. (Chunks themselves remain leaked: type-isolated VM
  is permanent, so we never tear them down.)
2026-05-07 20:09:05 +02:00
Andreas Kling
788bfe0f24 LibGC: Allocate HeapBlocks from 2 MiB chunks instead of one mmap each
Previously every 16 KiB HeapBlock was its own posix_memalign /
mach_vm_map / VirtualAlloc, which churned VMAs and made the kernel's
vm_area_struct list balloon for any non-trivial heap.

Carve slots out of 2 MiB chunks instead. The kernel now sees one
mmap per 128 blocks. Chunks are owned exclusively by a single
BlockAllocator and are never released back to the OS or shared
across allocators -- that's how we keep the heap's VM permanently
type-isolated, where a virtual address used for a cell of type T
is never reused for any other type. We don't bother tracking chunk
bases for teardown: the destructor leaks them by design.

Per-block memory return is preserved: deallocate_block still calls
MADV_FREE_REUSABLE / MADV_FREE / MADV_DONTNEED / DiscardVirtualMemory
so the kernel can reclaim physical pages under pressure.
2026-05-07 20:09:05 +02:00
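The arithmetic behind "one mmap per 128 blocks" is simple enough to show. A hedged sketch, with names hypothetical: one 2 MiB chunk is carved into 128 naturally aligned 16 KiB slots, so slot i lives at a fixed offset from the chunk base.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of chunk carving: a single 2 MiB mapping yields
// 128 block-aligned 16 KiB HeapBlock slots, replacing 128 separate
// mmap calls (and 128 kernel VMAs) with one.
constexpr size_t CHUNK_SIZE = 2 * 1024 * 1024;
constexpr size_t BLOCK_SIZE = 16 * 1024;
constexpr size_t BLOCKS_PER_CHUNK = CHUNK_SIZE / BLOCK_SIZE; // 128

uintptr_t slot_address(uintptr_t chunk_base, size_t index)
{
    assert(index < BLOCKS_PER_CHUNK);
    return chunk_base + index * BLOCK_SIZE;
}
```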
Ben Wiederhake
f006852203 LibGC: Remove unused header in BlockAllocator 2026-02-23 12:15:23 +01:00
Andreas Kling
5af4bc81e1 LibGC: Use mach_vm_map() for BlockAllocator on macOS
This allows us to get a correctly aligned allocation with a single
syscall, unlike posix_memalign(), which makes no such single-syscall
guarantee.
2025-12-19 20:21:07 -06:00
Andreas Kling
82f63334d0 LibGC: Use MADV_FREE_REUSABLE and MADV_FREE_REUSE if available
These are macOS madvise() hints that keep the kernel's accounting aware
of how we're using the GC memory, which makes our reported memory
footprint more accurate.
2025-12-19 20:21:07 -06:00
Andreas Kling
716e5f72f2 LibGC: Always use 16 KiB as HeapBlock size
Before this change, we'd use the system page size as the HeapBlock
size. This caused it to vary on different platforms, going as low
as 4 KiB on most Linux systems.

To make this work, we now use posix_memalign() to ensure we get
size-aligned allocations on every platform.

Also nice: HeapBlock::BLOCK_SIZE is now a constant.
2025-12-19 20:21:07 -06:00
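The alignment guarantee the commit leans on can be demonstrated in isolation. A minimal sketch, not the LibGC code: posix_memalign() returns a pointer that is a multiple of the requested alignment (a power of two that is also a multiple of sizeof(void*)), which is what makes a fixed, size-aligned 16 KiB HeapBlock workable on every POSIX platform.

```cpp
#include <cstdint>
#include <cstdlib>

// Allocates one block_size-byte block aligned to block_size itself,
// mirroring the size-aligned allocation the GC needs. Returns true on
// success and stores the pointer in out; the caller frees it.
bool allocate_aligned_block(size_t block_size, void*& out)
{
    void* block = nullptr;
    if (posix_memalign(&block, block_size, block_size) != 0)
        return false;
    out = block;
    // posix_memalign guarantees this alignment by contract.
    return reinterpret_cast<uintptr_t>(block) % block_size == 0;
}
```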
Andreas Kling
cce3ce2df7 LibGC: Remove awkward USE_FALLBACK_BLOCK_DEALLOCATION path
We can make a new fallback path eventually if needed for some platform.
2025-12-19 20:21:07 -06:00
R-Goc
0de3a95433 LibGC: Pass correct args to VirtualFree
VirtualFree requires the size argument to be zero when MEM_RELEASE is used.
2025-06-05 22:00:55 -06:00
R-Goc
d5cb940fe0 LibGC: Use native windows allocation methods for GC blocks
This allows us to use DiscardVirtualMemory in the same way that we
use madvise with MADV_DONTNEED and MADV_FREE on Unix systems.
2025-05-29 03:26:23 -06:00
Shannon Booth
f87041bf3a LibGC+Everywhere: Factor out a LibGC from LibJS
Resulting in a massive rename across almost everywhere! Alongside the
namespace change, we now have the following names:

 * JS::NonnullGCPtr -> GC::Ref
 * JS::GCPtr -> GC::Ptr
 * JS::HeapFunction -> GC::Function
 * JS::CellImpl -> GC::Cell
 * JS::Handle -> GC::Root
2024-11-15 14:49:20 +01:00