The #pragma once directive was placed after the #include directives
instead of immediately after the copyright comment, which is
inconsistent with every other header file.
The Weak<T>::operator=(U const&) template incorrectly compares the
internal pointer directly against the `other` reference. If this
template is instantiated, it causes a compilation error because a Ptr<T>
cannot be compared directly to a U const&.
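A minimal sketch of the fix, using simplified stand-ins for the real Ptr/Weak types (names and behavior are assumptions, not the LibGC implementation): compare the raw pointer against the address of `other` instead of comparing the smart pointer to the reference.

```cpp
#include <cassert>

// Simplified stand-in for GC::Ptr.
template<typename T>
struct Ptr {
    T* m_ptr { nullptr };
    T* ptr() const { return m_ptr; }
};

// Simplified stand-in for GC::Weak.
template<typename T>
struct Weak {
    Ptr<T> m_ptr;

    // Broken form (fails to compile once instantiated), per the bug above:
    //     if (m_ptr == other) ...   // no operator== between Ptr<T> and U const&
    //
    // Fixed form: compare the raw pointer against the address of `other`.
    template<typename U>
    Weak& operator=(U const& other)
    {
        if (m_ptr.ptr() != &other)
            m_ptr = Ptr<T> { const_cast<U*>(&other) };
        return *this;
    }

    T* ptr() const { return m_ptr.ptr(); }
};
```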
Previously, every set() and remove() call would do a full O(capacity)
scan of the underlying hash table to remove dead weak references.
This caused significant stalls on pages where the set grew large, as
every insertion paid the full cost of scanning the entire table.
Fix this by tracking mutations since last prune and only performing
the scan after enough operations to amortize the cost. The threshold
scales with the table size (minimum 64), so the per-operation cost
is O(1) amortized.
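A sketch of the amortization scheme described above, with a toy set standing in for the real weak-reference table (the mark_dead hook and the container types are illustrative assumptions):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <unordered_set>

// Toy weak set: prune dead entries only after enough mutations to
// amortize the O(capacity) scan. Threshold scales with table size,
// with a minimum of 64, so per-operation cost is O(1) amortized.
class WeakSet {
public:
    void set(void* cell)
    {
        prune_if_needed();
        m_table.insert(cell);
    }

    void remove(void* cell)
    {
        prune_if_needed();
        m_table.erase(cell);
    }

    std::size_t size() const { return m_table.size(); }
    std::size_t prune_count() const { return m_prune_count; }

    // Test hook: pretend `cell` was garbage-collected.
    void mark_dead(void* cell) { m_dead.insert(cell); }

private:
    void prune_if_needed()
    {
        std::size_t threshold = std::max<std::size_t>(64, m_table.size());
        if (++m_mutations_since_prune < threshold)
            return;
        m_mutations_since_prune = 0;
        ++m_prune_count;
        for (auto it = m_table.begin(); it != m_table.end();) {
            if (m_dead.count(*it))
                it = m_table.erase(it);
            else
                ++it;
        }
    }

    std::unordered_set<void*> m_table;
    std::unordered_set<void*> m_dead;
    std::size_t m_mutations_since_prune { 0 };
    std::size_t m_prune_count { 0 };
};
```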
Property lookup cache entries previously used GC::Weak<T> for shape,
prototype, and prototype_chain_validity pointers. Each GC::Weak
requires a ref-counted WeakImpl allocation and an extra indirection
on every access.
Replace these with GC::RawPtr<T> and make Executable a WeakContainer
so the GC can clear stale pointers during sweep via remove_dead_cells.
For static PropertyLookupCache instances (used throughout the runtime
for well-known property lookups), introduce StaticPropertyLookupCache
which registers itself in a global list that also gets swept.
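The WeakContainer shape can be sketched roughly as follows; all type names here are simplified stand-ins, not the real LibGC API:

```cpp
#include <cassert>
#include <vector>

// Dying cells are flagged by the sweeper in this toy model.
struct Cell {
    bool is_dead { false };
};

// Containers of raw (untraced) pointers register with the heap so it can
// ask them to drop pointers to dead cells after sweeping.
struct WeakContainer {
    virtual ~WeakContainer() = default;
    virtual void remove_dead_cells() = 0;
};

struct Heap {
    std::vector<WeakContainer*> m_weak_containers;

    void register_container(WeakContainer& container) { m_weak_containers.push_back(&container); }

    void sweep()
    {
        for (auto* container : m_weak_containers)
            container->remove_dead_cells();
    }
};

// Stand-in for a cache entry holding raw pointers into the GC heap.
struct PropertyLookupCache : WeakContainer {
    Cell* shape { nullptr };
    Cell* prototype { nullptr };

    void remove_dead_cells() override
    {
        if (shape && shape->is_dead)
            shape = nullptr;
        if (prototype && prototype->is_dead)
            prototype = nullptr;
    }
};
```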
Now that inline cache entries use GC::RawPtr instead of GC::Weak,
we can compare shape/prototype pointers directly without going
through the WeakImpl indirection. This removes one dependent load
from each IC check in GetById, PutById, GetLength, GetGlobal, and
SetGlobal handlers.
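The fast path after the change looks roughly like this (made-up type names; the real IC handlers differ): with a raw pointer in the entry, the check is a single load and compare per field, where GC::Weak first had to load the WeakImpl and then the pointer inside it.

```cpp
#include <cassert>

struct Shape { };

// Stand-in for an inline cache entry after the RawPtr conversion.
struct CacheEntry {
    Shape* shape { nullptr };
};

inline bool cache_hit(CacheEntry const& entry, Shape const* current_shape)
{
    return entry.shape == current_shape; // one load + compare, no dependent load
}
```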
This adds a stack trace to the JSON output of GC graph dumps; the trace
is shown in a default-collapsed tray on the right side of the graph
explorer. When a stack pointer root is selected, the stack frame it
originated from is highlighted in the tray.
Make HashTable<GC::Weak<T>> a compile error to prevent future misuse.
All existing uses have been migrated to GC::WeakHashSet<T> which
provides its own internal hash traits.
HashTable<GC::Weak<T>> is unsafe because GC::Weak's hash depends on
ptr() which becomes nullptr when the object is collected. This corrupts
the hash table: entries shift to wrong buckets, probe chains break
during delete_bucket(), and rehash scatters dead entries incorrectly.
WeakHashSet wraps HashTable with private internal traits that own the
hash function, prunes dead entries before every mutation, and provides
an iterator that skips dead entries and yields T& directly.
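One way the compile error can be arranged is to poison the traits the hash table would use; this is a sketch of the shape of the trick, not AK's actual Traits machinery:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// Primary traits template: ordinary types hash normally.
template<typename T>
struct Traits {
    static std::size_t hash(T const& value) { return std::hash<T> {}(value); }
};

// Simplified stand-in for GC::Weak.
template<typename T>
struct Weak {
    T* m_ptr { nullptr };
};

// Poisoned specialization: any HashTable<Weak<T>> instantiation pulls in
// these traits and hits the static_assert. sizeof(T) == 0 keeps the
// condition dependent, so the error only fires on actual use.
template<typename T>
struct Traits<Weak<T>> {
    static_assert(sizeof(T) == 0,
        "HashTable<GC::Weak<T>> is unsafe; use GC::WeakHashSet<T> instead");
};
```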
This introduces a new MUST_UPCALL macro that expands to
[[clang::annotate("must_upcall")]] on Clang. When placed on a virtual
function, the Clang plugin will verify that all overrides call their
base class implementation.
This generalizes the existing Base::visit_edges() check to work with
any annotated virtual function. The first use is Cell::visit_edges(),
but this can be applied to other functions that require upcalls.
The plugin checks for calls via either Base:: (the common typedef
pattern) or the explicit parent class name.
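A sketch of the macro and the usage pattern the plugin enforces; the attribute spelling matches the commit, while the Cell/Visitor types here are simplified stand-ins:

```cpp
#include <cassert>

#if defined(__clang__)
#    define MUST_UPCALL [[clang::annotate("must_upcall")]]
#else
#    define MUST_UPCALL
#endif

struct Visitor {
    int base_visits { 0 };
    int object_visits { 0 };
};

struct Cell {
    virtual ~Cell() = default;

    // Overrides of this function must call their base class implementation.
    MUST_UPCALL virtual void visit_edges(Visitor& visitor) { ++visitor.base_visits; }
};

struct Object : Cell {
    using Base = Cell; // the common typedef pattern the plugin recognizes

    void visit_edges(Visitor& visitor) override
    {
        Base::visit_edges(visitor); // required upcall; omitting this is flagged
        ++visitor.object_visits;
    }
};
```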
Add a clang plugin check that flags GC::Cell subclasses (and their
base classes within the Cell hierarchy) that have destructors with
non-trivial bodies. Such logic should use Cell::finalize() instead.
Add GC_ALLOW_CELL_DESTRUCTOR annotation macro for opting out in
exceptional cases (currently only JS::Object).
This prevents us from accidentally adding code in destructors that
runs after something we're pointing to may have been destroyed.
(This could become a problem when the garbage collector sweeps
objects in an unfortunate order.)
This new check uncovered a handful of bugs, which are also fixed
in this commit. :^)
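The finalize() pattern can be sketched like this (illustrative types and sweep logic, not the real LibGC sweeper): the GC calls finalize() on every dying cell before destroying any of them, so cleanup that touches other cells never observes a destroyed neighbor.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Cell {
    virtual ~Cell() = default;
    virtual void finalize() { }
};

std::vector<std::string>& event_log()
{
    static std::vector<std::string> events;
    return events;
}

struct Resource : Cell {
    void finalize() override { event_log().push_back("resource finalized"); }
};

struct Owner : Cell {
    Resource* resource { nullptr };

    void finalize() override
    {
        // Safe: `resource` has not been destroyed yet, even if it is
        // also dying in this same sweep.
        if (resource)
            event_log().push_back("owner finalized");
    }
};

// Simplified sweep: finalize all dying cells first, then destroy them.
void sweep(std::vector<Cell*> dying)
{
    for (auto* cell : dying)
        cell->finalize();
    for (auto* cell : dying)
        delete cell;
}
```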
Conceptually similar to GC::Function and GC::HeapVector, allowing hash
tables to safely hold GC-managed values. GC::HeapHashTable derives
from GC::Cell and traces its table elements during marking.
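A minimal sketch of the tracing behavior, with simplified stand-ins for the real GC::HeapHashTable, Cell, and Visitor types:

```cpp
#include <cassert>
#include <unordered_set>

struct Visitor;

struct Cell {
    bool marked { false };
    virtual ~Cell() = default;
    virtual void visit_edges(Visitor&) { }
};

struct Visitor {
    void visit(Cell* cell)
    {
        if (cell && !cell->marked) {
            cell->marked = true;
            cell->visit_edges(*this);
        }
    }
};

// Cell-derived hash table: marking the table marks every element it holds.
struct HeapHashTable : Cell {
    std::unordered_set<Cell*> m_table;

    void visit_edges(Visitor& visitor) override
    {
        for (auto* element : m_table)
            visitor.visit(element);
    }
};
```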
This ensures that we are explicitly declaring the allocator to use when
allocating a cell(-inheriting) type, instead of silently falling back
to size-based allocation.
Since the check lives in allocate_cell, a missing allocator declaration
is only caught for types that are actually being allocated. But types
that are never allocated don't need an allocator in the first place, so
it's safe for them to omit the declaration. For example, the base
TypedArray<T> is never allocated directly; only its defined
specializations are ever allocated.
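The general idea can be sketched as follows; the detection trick (a member type flag set by a macro) is an assumption for illustration, and the real codebase's macro machinery differs:

```cpp
#include <cassert>
#include <type_traits>
#include <utility>

struct Cell {
    virtual ~Cell() = default;
};

// Cell types opt in by declaring which allocator to use.
#define GC_DECLARE_ALLOCATOR(klass) \
    using DeclaredCellAllocator = klass

template<typename T, typename = void>
struct HasDeclaredAllocator : std::false_type { };

template<typename T>
struct HasDeclaredAllocator<T, std::void_t<typename T::DeclaredCellAllocator>> : std::true_type { };

// allocate_cell refuses to compile for types without a declared
// allocator, instead of silently falling back to size-based allocation.
template<typename T, typename... Args>
T* allocate_cell(Args&&... args)
{
    static_assert(HasDeclaredAllocator<T>::value,
        "Cell type must explicitly declare its allocator");
    return new T(std::forward<Args>(args)...);
}

struct Object : Cell {
    GC_DECLARE_ALLOCATOR(Object);
    int value { 0 };
};
```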
Using ensure_capacity() was a mistake, as that API is for specifying an
exact needed capacity, while grow_capacity() is for growing at a
reasonable rate.
Amusingly, we ended up with very different behavior on macOS and Linux
here, since ensure_capacity() calls kmalloc_good_size() which quantizes
to malloc bucket sizes on macOS, but is effectively a no-op on Linux.
An extreme slowdown on Linux was caught by GarBench/marking-stress.js.
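The difference between the two APIs can be sketched with a toy vector, under the assumption that ensure_capacity allocates exactly what is asked for (leaving any quantization to malloc) while grow_capacity grows geometrically; AK's real implementations differ in detail:

```cpp
#include <cassert>
#include <cstddef>

struct ToyVector {
    std::size_t size { 0 };
    std::size_t capacity { 0 };
    std::size_t reallocations { 0 };

    // Exact-fit growth: appending one element at a time reallocates
    // on every append.
    void ensure_capacity(std::size_t needed)
    {
        if (needed <= capacity)
            return;
        capacity = needed;
        ++reallocations;
    }

    // Geometric growth: capacity doubles, so appends reallocate
    // only O(log n) times.
    void grow_capacity(std::size_t needed)
    {
        if (needed <= capacity)
            return;
        std::size_t new_capacity = capacity ? capacity * 2 : 4;
        if (new_capacity < needed)
            new_capacity = needed;
        capacity = new_capacity;
        ++reallocations;
    }

    void append_with_ensure() { ensure_capacity(size + 1); ++size; }
    void append_with_grow() { grow_capacity(size + 1); ++size; }
};
```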
When passing a Vector<JS::Value> to the MarkingVisitor, we were
iterating over the vector and visiting one value at a time. This led
to a very inefficient way of building up the GC's work queue.
By adding a new visit_impl() virtual to Cell::Visitor, we can now
grow the work queue capacity once, and then add without incrementally
growing the storage.
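A sketch of the batching idea, with a simplified stand-in for the real MarkingVisitor and its work queue:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Value {
    void* cell { nullptr };
};

struct MarkingVisitor {
    std::vector<void*> work_queue;
    std::size_t reserve_calls { 0 };

    // One-at-a-time path: each push may grow the storage incrementally.
    void visit(Value const& value)
    {
        if (value.cell)
            work_queue.push_back(value.cell);
    }

    // Batched path: grow the queue capacity once for the whole vector,
    // then append without further reallocation.
    void visit_batch(std::vector<Value> const& values)
    {
        work_queue.reserve(work_queue.size() + values.size());
        ++reserve_calls;
        for (auto const& value : values) {
            if (value.cell)
                work_queue.push_back(value.cell);
        }
    }
};
```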
Instead of checking if every single cell overrides the "must survive GC"
virtual, we can make this a HeapBlock level thing.
This avoids almost an entire GC heap traversal during the mark phase.
- Delete the declared but never defined
  `ConservativeVectorBase& operator=(ConservativeVectorBase const&);`
- Mark `ConservativeVectorBase` as non-copyable because underlying
`m_list_node` cannot be copied safely.
- Change `ConservativeVector` copy assignment to simply copy elements
from one vector to another, since we don't have to worry about
`ConservativeVectorBase` while copying.
Post-GC tasks may trigger another GC, and things got very confusing
when that happened. Just dump all stats before running tasks.
Also add a separate Heap function to run these tasks. This makes
backtraces much easier to understand.
This had two fatal bugs:
1. We didn't actually mark the cell that must survive GC, we only
visited its edges.
2. Worse, we didn't actually mark anything at all! We just added
cells to MarkingVisitor's work queue, but this happened after
the work queue had already been processed.
This commit fixes these issues by moving the "must survive" pass
earlier in the mark phase.
HeapVector inherits from GC::Cell, and thus participates in tracing
garbage collection. It's not a standalone vector of roots like
RootVector or ConservativeVector. It must be marked for its elements to
be marked.
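The distinction can be sketched like this, with simplified stand-in types: a HeapVector's elements are only reachable through the vector itself, so nothing is marked until something visits the vector.

```cpp
#include <cassert>
#include <vector>

struct Visitor;

struct Cell {
    bool marked { false };
    virtual ~Cell() = default;
    virtual void visit_edges(Visitor&) { }
};

struct Visitor {
    void visit(Cell* cell)
    {
        if (cell && !cell->marked) {
            cell->marked = true;
            cell->visit_edges(*this);
        }
    }
};

// Cell-derived vector: elements survive only if the vector is marked,
// unlike a roots container such as RootVector or ConservativeVector.
struct HeapVector : Cell {
    std::vector<Cell*> m_elements;

    void visit_edges(Visitor& visitor) override
    {
        for (auto* element : m_elements)
            visitor.visit(element);
    }
};
```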