Delayed preload and image animation callbacks can outlive the objects
they notify. Incremental sweeping makes this easier to hit because stale
callback state may be reclaimed before the delayed work runs.
Use a weak link element when firing preload load and error events, and
use a weak image style value from animated image timers instead of a raw
pointer.
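The weak-callback pattern above can be sketched with `std::weak_ptr` standing in for the engine's GC::Weak / WeakPtr types; all names here are illustrative, not the real Ladybird API:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Hypothetical stand-in for a GC'd link element.
struct LinkElement {
    int load_events_fired = 0;
    void fire_load_event() { ++load_events_fired; }
};

// Queue of "delayed" callbacks, standing in for the event loop's timer queue.
using DelayedWork = std::vector<std::function<void()>>;

// Capture the element weakly: if the element is collected before the delayed
// work runs, lock() fails and the callback no-ops instead of touching a
// dead object.
inline void queue_preload_load_event(DelayedWork& queue,
                                     std::shared_ptr<LinkElement> const& element)
{
    queue.push_back([weak_element = std::weak_ptr<LinkElement>(element)] {
        if (auto strong_element = weak_element.lock())
            strong_element->fire_load_event();
        // Otherwise the element is gone; the late callback does nothing.
    });
}
```

The same shape applies to animated image timers holding a weak image style value instead of a raw pointer.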
SharedResourceRequest can be finalized while fetch work is pending.
Use weak fetch callbacks so late work no-ops once the request is gone.
Finalization now only drops local references and callbacks. It does not
stop fetch, since that allocates new algorithms during collection.
Tear down the MediaControls (and its Core::Timer-driven hover handler)
during HTMLMediaElement finalization. Otherwise, between weak-clearing
and incremental sweep destroying the element, a queued hover timer
event can still fire and trip a VERIFY against an already-cleared
GC::Weak reference to a shadow tree node.
Previously we were inconsistent: we generated code for enum definitions
but not for dictionaries. With upcoming changes to the IDL generator that
expose helpers for converting to and from IDL values, this produced
circular dependencies. To solve this, also generate the dictionary
definitions in bindings headers.
Change the bindings generator to emit enum definitions into the
Bindings/<Thing>.h file owned by the IDL that defines them, including
support IDLs that do not have a top-level interface or namespace.
Allow enum-only modules to generate bindings headers and make generated
bindings code include those owner headers when another IDL depends on
their enums.
This allows us to replace the hand-written NavigationType.h header with
one generated by bindings.
This commit splits the synchronization primitives out of LibThreading
into a new LibSync. LibThreading depends on LibCore, while LibCore needs
the synchronization primitives from LibThreading. That worked while the
primitives were header-only, but adding an implementation file exposed
the circular dependency. Abstracting the pthread implementation away
behind .cpp files requires such a file, so the primitives now live in a
separate library.
DecodedImageFrame only wraps a ref-counted Bitmap and color-space
metadata. The frame object itself does not provide shared mutable
state or lifetime ownership beyond those members, so ref-counting it
adds an unnecessary layer of indirection.
The Paintable tree and its supplemental painting data structures were
GC allocated because that was the easiest way to manage it and avoid
leaks introduced by ref cycles. This included the Paintable subclasses
themselves plus StackingContext, ChromeWidget, Scrollbar, ResizeHandle,
and scroll-frame state.
We are now trying to reduce GC allocation churn on layout and painting
updates, so keeping this short-lived rendering tree outside the JS heap
is a better fit. Move Paintable to RefCountedTreeNode, make painting
helpers ref-counted or weakly reference Paintables, and update the
layout and event-handler call sites to use RefPtr/WeakPtr ownership.
Report DOM character data, decoded image frames, ImageBitmap pixel
buffers, and 2D canvas surfaces through the GC external memory hook.
This lets image and text-heavy pages participate in GC threshold
calculations through their retained backing storage.
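A minimal sketch of such an external-memory hook, with an illustrative threshold policy (the class name, interface, and policy here are assumptions, not the engine's real GC API):

```cpp
#include <cstddef>

// Heap-external backing stores (character data, pixel buffers, canvas
// surfaces) report their retained size so allocation pressure from them
// can feed GC threshold calculations.
class ExternalMemoryAccounting {
public:
    explicit ExternalMemoryAccounting(std::size_t collection_threshold)
        : m_threshold(collection_threshold)
    {
    }

    // Called when external backing storage is retained. Returns true if
    // the GC should consider collecting now.
    bool report_allocation(std::size_t bytes)
    {
        m_external_bytes += bytes;
        return m_external_bytes >= m_threshold;
    }

    // Called when the owning object releases its backing storage.
    void report_deallocation(std::size_t bytes) { m_external_bytes -= bytes; }

    std::size_t external_bytes() const { return m_external_bytes; }

private:
    std::size_t m_external_bytes = 0;
    std::size_t m_threshold;
};
```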
Attach cached JavaScript bytecode sidecars to HTTP response headers so
WebContent can materialize classic and module scripts directly from a
decoded cache blob on cache hits.
Carry the disk cache vary key with the sidecar and reuse it when storing
fresh bytecode, avoiding mismatches against the augmented network
request headers used to create the cache entry.
Keep CORS-filtered module responses intact for status, MIME, and script
creation checks. Read bytecode sidecar data only from the internal
response, and treat decode or materialization failure as a cache miss
that falls back to normal source compilation.
Schedule JavaScript bytecode cache generation after downloaded classic
scripts and modules have been handed back to the main thread.
The cache job reparses and fully compiles on the thread pool,
serializes the bytecode blob, and stores it as HTTP cache sidecar data.
RequestServer finalizes the disk cache entry before notifying
WebContent, so the script fetcher can attach the sidecar immediately.
Store a SHA-256 fingerprint of the decoded source text in each bytecode
cache blob, and require callers to provide the expected fingerprint when
validating or decoding a blob.
This rejects sidecars for stale HTTP cache entries whose URL and request
headers still match but whose source body has been replaced. Bytecode
cache tests cover the mismatched-source rejection path.
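The validation rule can be sketched as follows. FNV-1a stands in for SHA-256 purely to keep the example self-contained; the blob layout and helper names are illustrative:

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <string_view>

// Stand-in fingerprint (the real code uses SHA-256 of the decoded source).
inline std::uint64_t fingerprint(std::string_view source)
{
    std::uint64_t hash = 1469598103934665603ull; // FNV-1a 64-bit offset basis
    for (unsigned char c : source) {
        hash ^= c;
        hash *= 1099511628211ull;
    }
    return hash;
}

struct BytecodeCacheBlob {
    std::uint64_t source_fingerprint = 0;
    std::string bytecode; // opaque serialized bytecode
};

inline BytecodeCacheBlob make_blob(std::string_view source, std::string bytecode)
{
    return { fingerprint(source), std::move(bytecode) };
}

// Callers supply the fingerprint of the source they actually fetched. A
// mismatch means the cache entry's body was replaced, so the sidecar is
// rejected and the caller falls back to compiling from source.
inline std::optional<std::string> decode_blob(BytecodeCacheBlob const& blob,
                                              std::uint64_t expected_fingerprint)
{
    if (blob.source_fingerprint != expected_fingerprint)
        return std::nullopt; // stale sidecar: treat as a cache miss
    return blob.bytecode;
}
```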
After off-thread script or module compilation hands top-level bytecode
back to the main thread, clone the remaining lazy function payloads and
compile them on the thread pool.
Install completed bytecode only when the function is still lazy. If the
main thread compiled it first, discard the stale result and schedule
another pass over that executable so nested lazy payloads still move to
bytecode.
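The install-only-if-still-lazy rule reduces to a small check at install time; the struct and function names below are illustrative, not the engine's real internals:

```cpp
#include <string>
#include <utility>

// Per-function compilation slot: either still lazy, or holding bytecode.
struct FunctionSlot {
    bool is_lazy = true;
    std::string bytecode;
};

// Returns true if the pool-compiled result was installed; false if the
// main thread won the race, in which case the stale result is discarded
// and the caller should schedule another pass over the executable so
// nested lazy payloads still move to bytecode.
inline bool install_pool_compiled_bytecode(FunctionSlot& slot, std::string pool_result)
{
    if (!slot.is_lazy)
        return false; // main thread compiled it first; drop the stale result
    slot.bytecode = std::move(pool_result);
    slot.is_lazy = false;
    return true;
}
```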
DecodedImageFrame now owns decoded bitmap pixels directly, so the
separate ImmutableBitmap wrapper no longer carries useful semantics.
Remove the class and pass decoded image frames or bitmaps at the
boundaries where pixels are actually required.
The Skia image cache now keys off DecodedImageFrame, matching the
display-list commands that paint decoded images. Video frames stay
owned by LibMedia, with the explicit YUV-to-bitmap conversion living
at HTMLVideoElement's decoded-frame entry point for canvas and WebGL
callers.
Decoded image data should not continue to traffic in ImmutableBitmap now
that the bitmap wrapper is being retired. Introduce DecodedImageFrame as
the paintable decoded-image unit and store a Bitmap plus ColorSpace in
it directly.
Thread the new frame type through decoded image data, display-list
image commands, filters, canvas drawImage, patterns, WebGL texture
upload, and CSS/SVG image consumers. ImmutableBitmap remains only at
the legacy boundaries that still need it, such as HTML video snapshots
and callers that explicitly ask for a bitmap snapshot.
This keeps color-space ownership with the decoded frame while making
the expensive or legacy ImmutableBitmap path explicit at the few call
sites that still need it.
ImmutableBitmap still owned the helper that read pixels from a
PaintingSurface and wrapped the result as an ImmutableBitmap. That kept
a surface readback operation attached to the type we are trying to
remove, even though the snapshot is really a property of the painting
surface.
Add PaintingSurface::snapshot_bitmap() as the explicit readback path.
The remaining callers now wrap that bitmap in ImmutableBitmap only at
the places that still need the old abstraction. Canvas serialization
also uses the same helper, so the BGRA8888 premultiplied snapshot
policy has a single owner.
A video element should record video as video, not as generic external
bitmap content. Add VideoFrameSource and a dedicated display-list
command so the display-list player receives the current
Media::VideoFrame directly.
The Skia player can now upload YUV pixmaps from the frame when a GPU
context is available, without teaching the ordinary ImmutableBitmap
image cache about media formats. If GPU upload is unavailable, the
fallback explicitly converts the frame through YUVData::to_bitmap().
This gives video painting a clear extension point for future frame
backends, such as hardware frames or other planar formats, while
keeping bitmap drawing focused on immutable pixel snapshots.
Decoded video frames should own their planar YUV data and color space
directly. Keeping that storage behind ImmutableBitmap gave a
still-image abstraction media-specific behavior and made calls like
bitmap() potentially allocate and convert a whole video frame.
Move YUV ownership into Media::VideoFrame, where the lifetime naturally
follows media playback, and remove the YUV-backed mode from
ImmutableBitmap. This commit intentionally keeps the visible Web paint
path on ExternalContentSource by converting the current frame back to
an ImmutableBitmap where Web still expects one.
Callers that need pixels now ask the frame to convert explicitly. That
preserves behavior for canvas and bitmap consumers while making the
expensive YUV-to-pixel path visible at the call site instead of
hiding it behind ImmutableBitmap::bitmap().
The display queue used TimedImage even though the media pipeline is
selecting decoded video frames. That naming hid the real object being
handed from LibMedia to Web and kept the queue interface coupled to
bitmap-style painting.
Rename the wrapper to TimedVideoFrame and pass ref-counted VideoFrame
objects through the provider and display sink. Web still reads the
ImmutableBitmap from the frame for painting in this commit, so rendered
output and conversion behavior stay the same while the playback-facing
interfaces become frame-shaped.
The rendering thread now uses its own SkiaBackendContext, and the main
thread no longer reaches into GPU-backed Skia objects directly. No Skia
context is shared between threads anymore, so the mutex on
SkiaBackendContext and the lock_context()/unlock_context() pair that
wrapped every PaintingSurface and ImmutableBitmap operation are all dead
weight and can go.
Sharing a single SkiaBackendContext between the main thread and the
rendering thread forces locking around every GPU operation. Now that
ImmutableBitmaps are context-neutral, the SkImage cache is per-painter,
and PaintingSurface accepts an explicit context, have the rendering
thread create its own GPU context on startup and use it for the
display-list player and backing store allocation.
This sets up the next commit to remove the cross-thread locking
machinery entirely.
ImmutableBitmap currently caches an SkImage tied to a specific Skia
backend context, which prevents using the same bitmap with more than one
context. Move that cache out into a per-painter
ImmutableBitmapSkiaImageCache so the cached image lives next to the
context that produced it.
Painters and the display-list player prune their caches each frame, and
canvas 2D contexts prune when presenting, so unused images do not
accumulate.
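The per-painter cache with per-frame pruning can be sketched like this; the class name, key type, and mark-and-sweep-per-frame policy are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Per-painter cache of GPU images keyed by bitmap identity. Each painter
// owns one of these, so a cached image always lives next to the context
// that produced it.
class PerPainterImageCache {
public:
    struct Entry {
        std::uint64_t gpu_image_id; // stands in for the cached SkImage
        bool used_this_frame = false;
    };

    std::uint64_t get_or_upload(std::uint64_t bitmap_id)
    {
        auto [it, inserted] = m_entries.try_emplace(bitmap_id, Entry { m_next_gpu_id, true });
        if (inserted)
            ++m_next_gpu_id;
        it->second.used_this_frame = true;
        return it->second.gpu_image_id;
    }

    // Called once per frame: drop entries not used since the last prune,
    // then reset the marks for the next frame.
    void prune()
    {
        for (auto it = m_entries.begin(); it != m_entries.end();) {
            if (!it->second.used_this_frame) {
                it = m_entries.erase(it);
            } else {
                it->second.used_this_frame = false;
                ++it;
            }
        }
    }

    std::size_t size() const { return m_entries.size(); }

private:
    std::unordered_map<std::uint64_t, Entry> m_entries;
    std::uint64_t m_next_gpu_id = 1;
};
```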
Backing stores are GPU surfaces and need to be created on whichever
thread owns the Skia context behind them. As preparation for giving the
rendering thread its own Skia context, run the allocation inside the
compositor loop, and use a callback to hand the resulting shared image
handles back to the page client on the main thread.
Previously, presentational hints bypassed the regular cascade pipeline
and wrote directly into `CascadedProperties` under
`CascadeOrigin::Author`. That meant `var()` substitution and the
invalid-at-computed-value-time fallback had to be duplicated in a
separate per-element pass, which in practice missed the IACVT step and
could leave a `GuaranteedInvalidStyleValue` in the cascaded
properties. This caused a crash in downstream code that assumed the
value had been resolved.
This introduces an `AuthorPresentationalHint` cascade origin and feeds
presentational hints through the cascade as normal declarations, so
`var()` resolution now happens in only one place.
thead, tbody, tfoot, tr, td, and th all have an `align` presentational
attribute with identical definitions. We previously supported it only
for td and th, and also allowed arbitrary text-align values instead of
the four dictated by the spec.
Previously a fixed-rate paint refresh timer kept queueing rendering
update tasks at the maximum configured frame rate, regardless of whether
anything had actually changed. This wasted CPU on idle pages, which
spend most of their time in a steady state where no style, layout, or
paint work is needed.
Replace the repeating timer with a single-shot frame timer driven by
PageClient::request_frame(). A rendering update is now scheduled only
when something requires one. The configured maximum frame rate is
preserved as a ceiling on how closely consecutive frames can follow each
other.
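The scheduling logic reduces to a single-shot timer with a minimum inter-frame interval. A testable sketch with explicit millisecond timestamps (the class and method names are illustrative):

```cpp
#include <optional>

// A frame is scheduled only on request, and the configured maximum frame
// rate is kept as a ceiling on how closely frames may follow each other.
class FrameScheduler {
public:
    explicit FrameScheduler(int max_fps)
        : m_min_frame_interval_ms(1000 / max_fps)
    {
    }

    // PageClient::request_frame() analogue: arm the single-shot timer,
    // unless one is already pending.
    void request_frame(long now_ms)
    {
        if (m_next_frame_at_ms.has_value())
            return;
        long earliest = m_last_frame_at_ms + m_min_frame_interval_ms;
        m_next_frame_at_ms = now_ms < earliest ? earliest : now_ms;
    }

    // Returns true if a rendering update fires at this tick.
    bool tick(long now_ms)
    {
        if (!m_next_frame_at_ms.has_value() || now_ms < *m_next_frame_at_ms)
            return false; // idle pages do no work here
        m_last_frame_at_ms = now_ms;
        m_next_frame_at_ms.reset();
        return true;
    }

private:
    long m_min_frame_interval_ms;
    long m_last_frame_at_ms = -1000000;
    std::optional<long> m_next_frame_at_ms;
};
```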
When the regular HTML parser is blocked on an external script, the
speculative parser scans ahead and pre-fetches discoverable
sub-resources. Previously those fetches were tracked only in the
parser's own URL list and never registered in the document's preload
map, so when the regular parser later reached each element, fetch()'s
consume_a_preloaded_resource() lookup found nothing and issued a
duplicate request: every parser-blocked sub-resource was fetched twice.
issue_speculative_fetch now creates a PreloadEntry, registers it
under create_a_preload_key(request) in the document's preload map,
and supplies a processResponseConsumeBody callback that populates
the entry. The map insertion happens after fetch() starts because
fetch() runs consume_a_preloaded_resource() synchronously, so
registering the entry beforehand would short-circuit the
speculative fetch itself.
The body-handling steps (steps 1, 2, and 5 of the preload algorithm's
processResponseConsumeBody) are factored into a shared
deliver_preload_response helper used by both the speculative parser
and HTMLLinkElement::preload.
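The ordering constraint above (register the entry only after fetch() starts) can be illustrated with a toy model; the types and the string-keyed map are stand-ins for the real preload key and entry machinery:

```cpp
#include <map>
#include <optional>
#include <string>

// Toy preload map: key -> response body.
struct PreloadMap {
    std::map<std::string, std::string> entries;

    std::optional<std::string> consume(std::string const& key)
    {
        auto it = entries.find(key);
        if (it == entries.end())
            return std::nullopt;
        auto body = it->second;
        entries.erase(it);
        return body;
    }
};

// Stand-in for fetch(): it synchronously runs the
// consume_a_preloaded_resource() lookup first, and only issues a network
// request (counted here) on a miss.
inline std::string fetch(PreloadMap& map, std::string const& key, int& network_requests)
{
    if (auto preloaded = map.consume(key))
        return *preloaded;
    ++network_requests;
    return "body-of-" + key;
}

// Registering the entry before fetch() would make the speculative fetch
// consume its own (empty) entry, so insertion happens after fetch starts.
inline void issue_speculative_fetch(PreloadMap& map, std::string const& key, int& network_requests)
{
    auto body = fetch(map, key, network_requests); // fetch starts first...
    map.entries[key] = body;                       // ...then the entry is registered
}
```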
IFrame geometry changes and object representation changes directly
selected style invalidation reasons from their HTML element classes.
Move those mappings into a new
CSS::Invalidation::EmbeddedContentInvalidator.
The HTML elements continue to own their loading, representation, and
layout-tree side effects. CSS invalidation now owns the style dirtiness
associated with those embedded-content changes.
CustomStateSet directly selected the style invalidation reason used when
its JS-visible set is modified. Move that mapping into
CSS::Invalidation::CustomElementInvalidator.
This keeps the custom-state container focused on its set contents while
CSS invalidation owns the style work required by :state() selectors.
HTMLInputElement still mapped its picker open-state change directly to a
style invalidation reason. Move that mapping into
CSS::Invalidation::ElementStateInvalidator alongside the matching select
open-state helper.
This keeps another element-state invalidation decision out of the HTML
implementation without changing the invalidation behavior.
Several HTML element state changes directly selected style invalidation
reasons from their element implementations. Move those mappings into a
new CSS::Invalidation::ElementStateInvalidator helper.
This keeps details, dialog, option, and select code focused on their own
state changes while CSS invalidation owns the style work those changes
require. The existing invalidation breadth is preserved.
HTMLInputElement had two call sites spelling out the same checked and
unchecked pseudo-class invalidation set. Move that selector policy into
FormControlInvalidator.
This keeps the input element responsible for detecting state changes,
while CSS::Invalidation owns the affected selector features.
HTML and SVG link elements both encoded the same pseudo-class list for
hyperlink state changes. Move that CSS policy into LinkInvalidator and
have both call sites delegate to it.
This keeps element-specific code focused on detecting hyperlink state
changes, while the helper owns the affected selector features.
Introduce IncrementalDocumentParser, which streams the response body
through a TextCodec::StreamingDecoder into the HTMLTokenizer one chunk
at a time. The tokenizer pauses when it runs out of input and resumes
once the next chunk is appended; when the body closes we close the
tokenizer's input stream so it can finish the parse.
DocumentLoading routes HTML responses through the new parser instead of
buffering the full body before handing it to HTMLParser.
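The append/pause/resume/close cycle can be modeled with a toy tokenizer that consumes complete `<...>` tags; real HTML tokenization is far richer, and all names here are illustrative:

```cpp
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

// Chunks are appended as they arrive, the "tokenizer" consumes complete
// tags and pauses when it runs out of input, and closing the input
// stream lets it flush the tail.
class IncrementalTagTokenizer {
public:
    void append_chunk(std::string_view chunk)
    {
        m_buffer += chunk;
        run(); // resume tokenizing with the new input
    }

    void close_input_stream()
    {
        if (!m_buffer.empty()) {
            m_tokens.push_back(m_buffer); // trailing text with no '>'
            m_buffer.clear();
        }
    }

    std::vector<std::string> const& tokens() const { return m_tokens; }

private:
    // Consume complete '<...>' tags; pause (return) when none remain.
    void run()
    {
        std::size_t end;
        while ((end = m_buffer.find('>')) != std::string::npos) {
            m_tokens.push_back(m_buffer.substr(0, end + 1));
            m_buffer.erase(0, end + 1);
        }
    }

    std::string m_buffer;
    std::vector<std::string> m_tokens;
};
```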
Pull the post-parse-action setup, run loop, and post-parse invocation
out of HTMLParser::run(URL, ...) into a new run_until_completion()
method. The URL overload still calls it; behavior is unchanged. The
incremental parser will use this entry point directly without going
through the URL-setting overload.
Add a ScriptCreatedParser flag plumbed through HTMLParser's constructor
and create_for_scripting(). Only document.open()'s parser sets it to
Yes. Document::close() step 3 now checks is_script_created() so it
correctly skips parsers that weren't created via document.open(),
matching the spec.
Previously the check was just `if (!m_parser)`, which incorrectly let
document.close() insert an EOF into a network-driven parser. The bug
was mostly latent because the network parser used to finish quickly,
but it matters once the network parser stays alive for the duration of
a streamed parse.
Add can_run_out_of_characters() and use it in the
NamedCharacterReference state and consume_next_if_match() so that an
open input stream gets the same code-point-at-a-time treatment as an
active document.write insertion point. Without this, a network chunk
that ends partway through a named character reference or a
multi-character match would make the tokenizer commit to a "no match"
decision before the remaining bytes arrive.
No behavior change for existing callers: the new helper still returns
false once the input stream is closed (which the StringView constructor
sets immediately).
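The "don't commit with an open stream" rule amounts to a three-way match result when testing buffered input against a multi-character sequence; the enum and function below are an illustrative sketch, not the tokenizer's real API:

```cpp
#include <string_view>

// Matching a multi-character sequence (like a named character reference)
// against buffered input must distinguish "no match" from "not enough
// input yet" whenever more characters may still arrive.
enum class MatchResult { Match, NoMatch, NeedMoreInput };

// can_run_out_of_characters is true while the input stream is still open,
// i.e. a network chunk may end partway through the sequence.
inline MatchResult try_match(std::string_view buffered, std::string_view target,
                             bool can_run_out_of_characters)
{
    if (buffered.size() >= target.size())
        return buffered.substr(0, target.size()) == target ? MatchResult::Match
                                                           : MatchResult::NoMatch;
    // Buffered input is shorter than the target...
    if (target.substr(0, buffered.size()) != buffered)
        return MatchResult::NoMatch; // already diverged: safe to commit
    // ...and is a proper prefix of it: only commit to "no match" once no
    // more characters can possibly arrive (the input stream is closed).
    return can_run_out_of_characters ? MatchResult::NeedMoreInput : MatchResult::NoMatch;
}
```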