Previously, every set() and remove() call did a full O(capacity) scan
of the underlying hash table to remove dead weak references. This
caused noticeable stalls on pages where the set grew large, since
every insertion paid the cost of walking the entire table.
Fix this by tracking mutations since the last prune and only scanning
once enough operations have accumulated to amortize the cost. The
threshold scales with the table size (minimum 64), so the
per-operation cost is O(1) amortized.
HashTable<GC::Weak<T>> is unsafe because GC::Weak's hash depends on
ptr(), which becomes nullptr when the object is collected. This
corrupts the hash table: entries shift to the wrong buckets, probe
chains break during delete_bucket(), and rehashing scatters dead
entries incorrectly.
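To make the failure mode concrete, here is a minimal demonstration
using std::weak_ptr as a stand-in for GC::Weak; hash_of() is a
hypothetical hash in the style described above, not the real one.

```cpp
#include <cstddef>
#include <functional>
#include <memory>

// Hypothetical hash in the style described above: it hashes the
// address of the pointed-to object, which reads as nullptr once the
// object dies. std::weak_ptr stands in for GC::Weak here.
static std::size_t hash_of(std::weak_ptr<int> const& weak)
{
    return std::hash<int*> {}(weak.lock().get());
}
```

Two lookups of the same entry can now land in different buckets: the
bucket index computed at insertion time used the live address, while a
later lookup or rehash computes it from nullptr.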
WeakHashSet wraps HashTable with private internal traits that own the
hash function, prunes dead entries before every mutation, and provides
an iterator that skips dead entries and yields T& directly.
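The iterator behavior can be sketched like so, again with
std::weak_ptr standing in for GC::Weak and a plain vector instead of
HashTable; WeakSetSketch and all member names are illustrative
assumptions, not the real API.

```cpp
#include <memory>
#include <vector>

template<typename T>
class WeakSetSketch {
public:
    void add(std::shared_ptr<T> const& value) { m_entries.push_back(value); }

    class Iterator {
    public:
        Iterator(std::weak_ptr<T> const* current, std::weak_ptr<T> const* end)
            : m_current(current)
            , m_end(end)
        {
            skip_dead();
        }

        // Yields T& directly. Safe here because skip_dead() guarantees
        // the entry is not expired, i.e. an external owner keeps the
        // object alive past the temporary lock().
        T& operator*() const { return *m_current->lock(); }

        Iterator& operator++()
        {
            ++m_current;
            skip_dead();
            return *this;
        }

        bool operator!=(Iterator const& other) const { return m_current != other.m_current; }

    private:
        // Advance past dead entries so callers never observe them.
        void skip_dead()
        {
            while (m_current != m_end && m_current->expired())
                ++m_current;
        }

        std::weak_ptr<T> const* m_current;
        std::weak_ptr<T> const* m_end;
    };

    Iterator begin() const { return Iterator(m_entries.data(), m_entries.data() + m_entries.size()); }
    Iterator end() const
    {
        auto const* end_ptr = m_entries.data() + m_entries.size();
        return Iterator(end_ptr, end_ptr);
    }

private:
    std::vector<std::weak_ptr<T>> m_entries;
};
```

Skipping at construction and after each increment means a range-for
over the set only ever sees live objects, even when dead entries have
not been pruned yet.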