When resolving grid track sizes, limited min/max-content contributions
should be capped by fixed max track sizing functions, including the
argument to fit-content(). We were instead falling back to the grid
container maximum size, which allowed a grid item with overflowing
contents in a fit-content(0) row to inflate the intrinsic block size of
a nested grid.
That bogus intrinsic height could then be used for the grid's second row
sizing pass, causing unrelated flexible rows to absorb the extra space.
Document already includes XPathEvaluatorBase from XPathEvaluator.idl.
Import that IDL file and drop the duplicate XPath method declarations
from Document.idl.
Import WebIDL/Function.idl where TimerHandler uses Function, and let the
bindings generator handle it through the normal callback-function path.
This removes the special C++ mapping for Function and makes TimerHandler
use GC::Root<CallbackType>, matching the generated binding type when IDL
files are parsed together.
When resolving a typedef, clone its stored type before applying the
nullability from the use site.
This keeps shared typedef definitions stable when multiple IDL files are
parsed in one generator run.
Change the IDL parser entry point to a static factory returning an
IDL::Module. Modules now own the parsed file identity and import graph,
with an optional reference to the file's real interface when one exists.
This is a step towards running the IDL generator on files without
an interface defined.
Shape::visit_edges used to walk every entry of m_property_table and
call PropertyKey::visit_edges on each key. For non-dictionary shapes
that work is redundant: m_property_table is a lazily built cache of
the transition chain, and every key it contains was originally
inserted as some ancestor shape's m_property_key, which is already
kept alive via m_previous.
Intrinsic shapes populated through add_property_without_transition()
in Intrinsics.cpp are not dictionaries and have no m_previous to
reach their keys through, but each of those keys is either a
vm.names.* string or a well-known symbol and is strongly rooted by
the VM for its whole lifetime, so skipping them here is safe too.
Measured on the main WebWorker used by https://www.maptiler.com/maps/,
this cuts out ~98% of the PropertyKey::visit_edges calls made by
Shape::visit_edges each GC, reducing time spent in GC by ~1.3 seconds
on my Linux PC while initially loading the map.
eglMakeCurrent with EGL_NO_SURFACE leaves the GL viewport at its default
of (0, 0, 0, 0), which clips all rasterization to zero pixels until the
page explicitly calls gl.viewport(). The Vulkan path already sets the
viewport after binding the color buffer; do the same on the macOS
IOSurface path so WebGL content that does not manage viewport state
itself (e.g. feature-detection draws into a 1x1 canvas) still produces
visible pixels.
This extension lets pages query the underlying GPU vendor and renderer
strings via UNMASKED_VENDOR_WEBGL / UNMASKED_RENDERER_WEBGL. Some sites
(e.g. yandex.com/maps) use it to decide whether to render vector tiles
with WebGL or fall back to raster tiles.
Carry full source positions through the Rust bytecode source map so
stack traces and other bytecode-backed source lookups can use them
directly.
This keeps exception-heavy paths from reconstructing line and column
information through SourceCode::range_from_offsets(), which can spend a
lot of time building SourceCode's position cache on first use.
We're trading some space for time here, but I believe it's worth it at
this stage, as this saves ~250ms of main thread time while loading
https://x.com/ on my Linux machine. :^)
Reading the stored Position out of the source map directly also exposed
two things masked by the old range_from_offsets() path: a latent
off-by-one in Lexer::new_at_offset() (its consume() bumped line_column
past the character at offset; only synthesize_binding_pattern() hit it),
and a (1,1) fallback in range_from_offsets() that fired whenever the
queried range reached EOF. Fix the lexer, then rebaseline both the
bytecode dump tests (no more spurious "1:1") and the destructuring AST
tests (binding-pattern identifiers now report their real columns).
When inheriting custom-property data from a parent element, we were
copying the parent's full CustomPropertyData regardless of whether
each property was registered with `inherits: false`. That caused
non-inheriting registered properties to leak from the parent,
contrary to the @property spec.
Wrap the parent-side lookup so we strip any custom property whose
registration says it should not inherit, and only build a fresh
CustomPropertyData when at least one property was actually filtered.
Key the filtered view's cache on both the destination document's
identity and its custom-property registration generation. The
generation counter is local to each document, so a subtree adopted
into another document (or queried via getComputedStyle from another
window) could otherwise pick up a cached view computed under an
unrelated registration set and silently skip non-inheriting filtering
in the new document.
A @keyframes rule scoped to a shadow root was not reliably reached
from an animated slotted light-DOM element: the keyframes lookup
walked the element's own root first, then fell back to the document,
but slotted elements can pick up animation-name from a ::slotted(...)
rule that lives in an ancestor shadow root rather than in the
element's own tree.
Track the shadow-root scope that supplied each winning cascaded
declaration, and use that scope to resolve the matching @keyframes
when processing animation definitions. A shared constructable
stylesheet can be adopted into several scopes at once, so the
declaration object alone is too weak as a key; the per-entry
shadow-root pointer disambiguates which adoption actually contributed.
Also refresh running CSS animations' keyframe sets when style is
recomputed. Previously only the first animation creation path set a
keyframe set, so an existing animation never picked up newly inserted
@keyframes rules.
When CascadedProperties::set_property overwrote an existing entry for
the same origin and layer, it bumped cascade_index but kept the old
source pointer. That left source stale after a later declaration
overrode an earlier one at equal priority, so property_source() could
return a CSSStyleDeclaration that no longer supplied the winning
value.
Refresh source alongside the rewritten property and cascade_index so
the entry consistently describes its current contributor.
The @keyframes parser was storing the keyframes name via
Token::to_string(), which keeps a string token in its quoted,
serialized form. That meant @keyframes "foo" was stored as
"\"foo\"" while animation-name: "foo" resolved to "foo",
and the two never matched.
Store the unquoted string or identifier value so the @keyframes name
and the animation-name reference compare on the same string.
CSSOM's "add a CSS style sheet" steps bail out once the disabled flag
is set, so ownership alone should not make a disabled sheet observable
in the destination document. Delay CSS-connected font activation in
add_owning_document_or_shadow_root() until the sheet actually becomes
enabled, and refuse pending image-resource loads on a disabled sheet
for the same reason.
Also extend set_disabled() to drive the font/image lifecycle around the
transition: loading fonts and pending images when the sheet becomes
enabled, and unloading fonts when it goes back to disabled.
When inline layout emits a whitespace chunk, it previously selected the
surrounding text's font without checking whether that font actually
contains a glyph for the whitespace codepoint. On pages that use
`@font-face` rules sharded by `unicode-range` (e.g. a Roboto webfont
split across one file for Cyrillic letters and another for basic Latin),
the shard covering the letters is picked for an adjacent space even
though the space codepoint lives in a different shard. HarfBuzz then
shapes the space with a font that has no glyph for it and emits
`.notdef`, rendering spaces as tofu boxes.
Check `contains_glyph(space_code_point)` on each candidate in
`font_for_space()` and fall through to
`FontCascadeList::font_for_code_point()` for the whitespace codepoint
when no surrounding font has the glyph.
Fixes whitespace rendering on web.telegram.org/a.
CompareArrayElements was calling ToString(x) +
PrimitiveString::create(vm, ...) on every comparison, producing a
fresh PrimitiveString that wrapped the original's AK::String but
carried no cached UTF-16. The subsequent IsLessThan then hit
PrimitiveString::utf16_string_view() on that fresh object, which
re-ran simdutf UTF-8 validation + UTF-8 -> UTF-16 conversion for
both sides on every one of the N log N comparisons.
When x and y are already String Values, ToString(x) and
ToPrimitive(x, Number) are the identity per spec, so we can drop
the IsLessThan detour entirely and compare their Utf16Views
directly. The original PrimitiveString caches its UTF-16 on first
access, so subsequent comparisons against the same element hit
the cache; Utf16View::operator<=> additionally gives us a memcmp
fast path when both sides ended up with short-ASCII UTF-16 storage.
Microbenchmark:
```js
function makeStrings(n) {
    let seed = 1234567;
    const rand = () => {
        seed = (seed * 1103515245 + 12345) & 0x7fffffff;
        return seed;
    };
    const out = new Array(n);
    for (let i = 0; i < n; i++)
        out[i] = "item_" + rand().toString(36)
            + "_" + rand().toString(36);
    return out;
}
const base = makeStrings(100000);
const arr = base.slice();
arr.sort();
```
```
n         before      after    speedup
1k        0.70ms     0.30ms    2.3x
10k       8.33ms     3.33ms    2.5x
50k      49.33ms    17.33ms    2.8x
100k    118.00ms    45.00ms    2.6x
```
The !has_ascii_storage() && !other.has_ascii_storage() branch did a
byte-wise __builtin_memcmp over a char16_t array, which on little-endian
does not give code-unit order: the low byte is compared first, so
0xD83D (bytes [0x3D, 0xD8]) spuriously compared less than 0x2764
(bytes [0x64, 0x27]) even though the code unit 0xD83D is greater.
No in-tree caller currently uses operator<=> for Utf16View ordering,
so this bug is dormant; the follow-up LibJS change exposes it.
Replace the memcmp branch with a per-code-unit loop, which the compiler
can auto-vectorize and which mirrors what is_code_unit_less_than already
does.
Previously, `run_caption_layout()` passed the table's border-box width
as the available space to the caption's formatting context. The BFC then
used this width directly for inline line breaking, causing text to
overflow the caption's content box by the size of the caption's own
border and padding.
This script waits until fonts have loaded and then for 2 animation
frames before signalling test completion. This is the same mechanism
already used for ref and crash tests.
Previously we would generate the calculation context based on the
current value parsing context. The main problem with this was that
contexts were defined per property by default and had to be overridden
per component value using "special" contexts, which was easy to forget.
We now generate the calculation context per component value in the
relevant `parse_foo_value` methods.
The new failures in `typed_arithmetic.html` are because we no longer
treat percentages as resolving to their property-level type when
computing the resolved type of a calculation, i.e. when we are
parsing the `<number>` portion of `line-height` we treat percentages as
raw percentages, not lengths. This brings us in line with WebKit but no
longer with Chrome and WPT; I am not sure what the correct behavior is.
This brings a couple of advantages:
- Previously we relied on the caller validating that the parsed value
was in bounds after the fact. This was usually fine, but there are a
couple of places where it was forgotten (see the tests added in this
commit); requiring the bounds to be passed as arguments makes us
consider the desired range more explicitly.
- In a future commit we will use the passed bounds as the clamping
bounds for computed values, removing the need for the existing
`ValueParsingContext` based method we have at the moment.
- Generating code is easier with this approach.
We should only try to parse a dimension-percentage mix in
`parse_css_value_for_properties` if percentages resolve relative to that
dimension, not simply because percentages are allowed in general.
This doesn't currently cause any issues since we check that percentages
are resolved relative to the relevant dimension within the
`parse_foo_percentage_value` functions.
In a future commit we will use the allowed literal values as the bounds
for calculated values rather than the existing `ValueParsingContext`
based system. Since the computed bounds are different from the allowed
literal values here we need to handle clamping them manually.
This allows us to avoid the ugly hack in
`property_accepted_type_ranges()`.
This also updates the `ValueType` to be `opacity-value` rather than
`opacity` to match the spec.
This matches the behavior of other browsers. We did the equivalent
change for <integer> in b86377b.
We continue to store these as doubles for the extra precision.
This brings us in line with the other numeric types (percentage and
dimension) and allows us to test the clamping behavior that will be
added in a future commit.
In a future commit we will be applying clamping to numeric (i.e.
number, percentage, dimension) values which will be done in the
appropriate `Token` getters so accessing the underlying number value
would be a potential footgun.
Previously, the select button's text was only refreshed inside the
two non-trivial branches of the selectedness setting algorithm.
Paths that left the select with exactly one selected option hit a
no-op branch and skipped the refresh.
Fix this by implementing the "clone selected option into select
button" algorithm and invoking it whenever the set of selected options
may have changed.
Previously, `AnonymousBuffer::create_with_size(0)` returned an error
because POSIX `mmap` rejects a zero length with `EINVAL`, and Windows
`CreateFileMapping` rejects a zero maximum size for an anonymous
mapping. This caused a crash when using `--headless=text` with
zero-size pages like `about:blank`.
Bypass the async body-reading pipeline for about:srcdoc iframes whose
body bytes are already in memory. Set up a deferred parser at document
load time and run the post-activation update synchronously, so the body
element exists before parent script can observe the new document via
contentDocument. This matches Chrome and Firefox behavior for srcdoc
iframes and fixes the flaky test
`set-innerHTML-inside-iframe-srcdoc-document.html` that relied on body
being non-null.
Co-authored-by: Tim Ledbetter <tim.ledbetter@ladybird.org>
When Skia's openStream() returns a memory-backed stream, retain it
on TypefaceSkia::Impl so the underlying bytes stay alive, and
reference them directly instead of copying into a ByteBuffer.
This pattern is used regularly inside Skia, so it seems fair game.
Implement bookmark import/export in about:bookmarks using Netscape
bookmark HTML in JavaScript.
Import parsed items into BookmarkStore under an "Imported Bookmarks"
folder, and treat internal WebUI about: pages as potentially
trustworthy so SecureContext APIs are available there.
Also, explicitly prevent drag events from firing when the context menu
opens. This will only be the case on macOS, since its context menu is
opened by Ctrl+mousedown. This replaces the prior exception preventing
drag events when Ctrl is held during mousedown.
Fixes #9018 and #9019
The old implementation stored chunks in a Vector, which meant every
discard() had to call Vector::remove(0, N) to drop the consumed chunks
from the front, shifting every remaining chunk down. For a stream used
as a back-pressure queue, draining it by discarding one chunk at a time
was quadratic in the queued chunk count: in RequestServer that cost
about a second of CPU per large response.
Replace it with a singly-linked list of chunks (head, tail, head read
offset, tail write offset) so push-back and pop-front are both O(1)
and no shifting ever happens. Each chunk now holds its CHUNK_SIZE byte
array inline rather than a separately-allocated ByteBuffer, which also
halves the per-chunk allocations. Teardown unlinks iteratively to avoid
recursive OwnPtr destructors on very long chains.