The existing part_list() method used by the bindings lazily creates a
DOMTokenList, which we don't want to do just to check if an Element has
any parts defined.
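A minimal sketch of the distinction, with hypothetical names (`has_parts`, `m_part_attribute`) standing in for whatever the engine actually uses; the point is that the boolean check reads existing state instead of allocating a token list:

```cpp
#include <memory>
#include <optional>
#include <string>

// Illustrative stand-ins only; the real Element and DOMTokenList are engine types.
struct DOMTokenList { };

class Element {
public:
    // Existing accessor: lazily creates the DOMTokenList on first use.
    DOMTokenList& part_list()
    {
        if (!m_part_list)
            m_part_list = std::make_unique<DOMTokenList>();
        return *m_part_list;
    }

    // Hypothetical cheap check: consult the raw attribute without
    // materializing the token list.
    bool has_parts() const { return m_part_attribute.has_value(); }

private:
    std::optional<std::string> m_part_attribute;
    std::unique_ptr<DOMTokenList> m_part_list;
};
```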
This works by generating random values using XorShift128PlusRNG at
compute time and then caching them on the document using the relevant
random-caching-key.
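A sketch of that caching scheme, assuming a per-document map keyed by the caching key; `random_for_key` and the cache layout are made up for illustration, and only the generator name comes from the change itself:

```cpp
#include <cstdint>
#include <unordered_map>

// Toy xorshift128+ standing in for the engine's XorShift128PlusRNG.
struct XorShift128Plus {
    uint64_t s0 = 0x9E3779B97F4A7C15ull;
    uint64_t s1 = 0xBF58476D1CE4E5B9ull;
    uint64_t next()
    {
        uint64_t x = s0;
        uint64_t const y = s1;
        s0 = y;
        x ^= x << 23;
        s1 = x ^ y ^ (x >> 17) ^ (y >> 26);
        return s1 + y;
    }
};

// Hypothetical per-document cache: the first computation for a given
// random-caching-key draws a fresh value; later computations with the same
// key reuse it, so recomputing style doesn't reshuffle results.
struct Document {
    XorShift128Plus rng;
    std::unordered_map<uint64_t, double> cached_randoms;

    double random_for_key(uint64_t caching_key)
    {
        if (auto it = cached_randoms.find(caching_key); it != cached_randoms.end())
            return it->second;
        double value = (rng.next() >> 11) * (1.0 / 9007199254740992.0); // [0, 1)
        cached_randoms.emplace(caching_key, value);
        return value;
    }
};
```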
This prevents observably calling into Trusted Types, which can run
arbitrary JS, cause crashes due to our use of MUST, and allow arbitrary
JS to modify internal elements.
Since we now have access to the `AbstractElement` through the
`ComputationContext`, we can set the flag that this element relies on
tree counting functions directly; there is no need to pass this struct
around.
Previously, when one child of an element changed, we would iterate over
every child to check whether it needed to be invalidated because it
relied on tree counting functions.
We now skip this in most cases by only doing it when at least one child
relies on tree counting functions.
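Roughly, with hypothetical flag and hook names, the fast path looks like this:

```cpp
#include <vector>

struct Element {
    std::vector<Element*> children;
    bool relies_on_tree_counting = false;            // Set during style computation.
    bool any_child_relies_on_tree_counting = false;  // Maintained on the parent.

    void invalidate_style() { /* mark for style recompute */ }
};

// Hypothetical invalidation hook: previously we walked every sibling
// unconditionally; with the parent-level flag the walk is skipped entirely
// in the common case.
void on_child_changed(Element& parent)
{
    if (!parent.any_child_relies_on_tree_counting)
        return; // Fast path: no child uses tree counting functions.
    for (auto* child : parent.children) {
        if (child->relies_on_tree_counting)
            child->invalidate_style();
    }
}
```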
When a subtree is projected through a slot, its root now inherits style
from the slot's parent, rather than the parent of the unprojected root.
This fixes a ton of subtle issues, and is very noticeable on Reddit.
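In sketch form, with assumed names and types, the inheritance parent for a projected subtree's root is picked like this:

```cpp
// Stand-in types for illustration; names are assumptions.
struct Element;

struct Slot {
    Element* parent = nullptr; // The slot's parent in its own tree.
};

struct Element {
    Element* parent = nullptr;
    Slot* assigned_slot = nullptr; // Non-null when projected through a slot.

    // The element whose computed style this element inherits from.
    Element* style_inheritance_parent() const
    {
        if (assigned_slot)
            return assigned_slot->parent; // Projected: inherit from the slot's parent.
        return parent; // Unprojected: ordinary parent.
    }
};
```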
Before this change, whenever an element's attributes changed, we would
add a flag to "pending invalidation", indicating that all descendants
whose style uses CSS custom properties needed to be recomputed. This
resulted in severe overinvalidation, because we would run invalidation
regardless of whether any custom property on the affected element
actually changed.
This change takes another approach: we now decide whether a descendant's
style needs to be recomputed based on whether the ancestor's style
recomputation results in a change of custom properties. This approach
adds a little overhead to style computation, as we now have to compare
the old and new hash maps of custom properties.
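A sketch of the comparison step, using a plain map as a stand-in for the engine's custom-property storage; the hook name and types are assumptions:

```cpp
#include <string>
#include <unordered_map>

// A plain map standing in for the engine's custom-property storage.
using CustomProperties = std::unordered_map<std::string, std::string>;

struct Element {
    CustomProperties custom_properties;
    void invalidate_descendants_using_custom_properties() { /* ... */ }
};

// Hypothetical hook run after recomputing an element's style: descendants
// are only invalidated when a custom property actually changed, at the cost
// of one map comparison per recompute.
void did_recompute_style(Element& element, CustomProperties const& old_properties)
{
    if (element.custom_properties == old_properties)
        return; // Nothing changed; descendants keep their current style.
    element.invalidate_descendants_using_custom_properties();
}
```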
This brings a substantial improvement on discord and x.com, where,
before this change, the advantage of using invalidation sets was lost
and we had to recompute all descendants, because almost all of them use
custom properties.
...and setter. We had lots of places where we checked whether a
pseudo-element type was specified and then used
`pseudo_element_computed_properties()` or `computed_properties()`. This
change moves these checks from the caller side to the getter and setter.
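Schematically, with an assumed signature, the getter now absorbs the branch that callers used to write themselves:

```cpp
#include <optional>

// Stand-ins; the real types and the set of pseudo-elements are richer.
enum class PseudoElementType { Before, After };
struct ComputedProperties { };

struct Element {
    ComputedProperties element_properties;
    ComputedProperties before_properties;
    ComputedProperties after_properties;

    // The pseudo-element check lives inside the accessor, so callers no
    // longer pick between two functions themselves.
    ComputedProperties& computed_properties(std::optional<PseudoElementType> pseudo = {})
    {
        if (!pseudo.has_value())
            return element_properties;
        return *pseudo == PseudoElementType::Before ? before_properties : after_properties;
    }
};
```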
Before this change, we would never apply CSS rules where the selector
had a mixed-case tag name. This happened because our rule caches would
key them on the lowercased tag name, but we didn't lowercase the tag
name when fetching things from the cache.
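The fix amounts to normalizing case on the lookup side as well as the insertion side; a sketch with a plain hash map standing in for the rule cache:

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <unordered_map>
#include <vector>

struct Rule { };
using RuleCache = std::unordered_map<std::string, std::vector<Rule>>;

static std::string to_lowercase(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
        [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

// Rules are keyed by lowercased tag name on insertion; the bug was looking
// them up with the original, possibly mixed-case, name, so `div` matched
// but an SVG tag like `clipPath` never did.
std::vector<Rule> const* rules_for_tag(RuleCache const& cache, std::string const& tag_name)
{
    auto it = cache.find(to_lowercase(tag_name));
    return it != cache.end() ? &it->second : nullptr;
}
```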
This uncovered the fact that the SVG2 spec has a bunch of style applied
to non-rendered elements in a way that doesn't match other browsers.
Instead of blindly following the spec, we now match other browsers.
There are multiple interconnected things happening here:
- We now use AbstractElement to refer to the source of a counter, which
  means we also need to pass that around to compute `content`.
- Give AbstractElement new helper methods that are needed by
  CountersSet, so it doesn't have to care whether it's dealing with a
  true Element or PseudoElement.
- The CountersSet algorithms now walk the layout tree instead of the
  DOM tree, so TreeBuilder needs to wait until the layout node exists
  before it resolves counters for it.
- Resolve counters when creating a pseudo-element's layout node. We
  awkwardly compute the `content` value up to twice: once to figure out
  what kind of node we need to make, and then, if it's a string, we do
  so again after counters are resolved so we can get the true value of
  any `counter()` functions (see the sketch after this list). This will
  need adjusting in the future, but it works for now.
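A sketch of that last point, with stand-in types; only `content` and `counter()` come from the change itself, the rest is made up to show the two-pass shape:

```cpp
#include <string>

// Stand-in types; the real ones live in the engine.
struct AbstractElement { };
struct CountersSet { };

struct ContentValue {
    bool is_string = false;
    std::string text;
};

// Placeholder for evaluating `content`: with a CountersSet it can substitute
// real counter() values, without one those values are not yet meaningful.
ContentValue compute_content(AbstractElement&, CountersSet const* counters = nullptr)
{
    return { true, counters ? std::string("1. ") : std::string() };
}

void create_pseudo_element_layout_node(AbstractElement& element, CountersSet& counters)
{
    // First pass: decide what kind of layout node `content` calls for.
    auto first = compute_content(element);
    // ... create the layout node, then resolve counters for it ...
    if (first.is_string) {
        // Second pass: counters are now resolved, so counter() is accurate;
        // this text is what the node actually displays.
        auto final_value = compute_content(element, &counters);
        (void)final_value;
    }
}
```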
This is one of those cases where the spec says "element" and
means "element or pseudo-element". The easiest way to handle both is to
make these free functions that take an AbstractElement, and then give
AbstractElement some helper methods so that the caller doesn't have to
care which one it's dealing with.
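One way such a handle could be shaped, sketched with a `std::variant`; the layout and the helper shown are assumptions, not the engine's actual definition:

```cpp
#include <variant>

struct Element {
    int child_count() const { return 0; } // placeholder
};
struct PseudoElement {
    Element* originating_element = nullptr;
};

// One handle meaning "element or pseudo-element", so algorithms the spec
// phrases as taking an "element" can be written once as free functions.
class AbstractElement {
public:
    AbstractElement(Element& element) : m_impl(&element) {}
    AbstractElement(PseudoElement& pseudo) : m_impl(&pseudo) {}

    // Hypothetical helper: callers never ask which alternative they hold.
    Element& originating_element() const
    {
        if (auto* pseudo = std::get_if<PseudoElement*>(&m_impl))
            return *(*pseudo)->originating_element;
        return *std::get<Element*>(m_impl);
    }

private:
    std::variant<Element*, PseudoElement*> m_impl;
};

// A free function in the spec's sense of "element".
int child_count_of(AbstractElement const& element)
{
    return element.originating_element().child_count();
}
```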
There are some FIXMEs here because PseudoElement doesn't have a
CountersSet yet, and because the CountersSet currently uses a
UniqueNodeID to identify counter sources, which doesn't support
pseudo-elements.
`Element::ordinal_value` is called for every `li` element in
a list (`ul`, `ol`, `menu`).
Before:
`ordinal_value` iterates through all of the children of the list
owner. It is called once for each element: complexity $O(n^2)$.
After:
- Save the result of the first calculation in `m_ordinal_value`,
  then return it in subsequent calls.
- Tree modifications are intercepted and trigger invalidation
  of the first node's `m_ordinal_value`:
  - `insert_before`
  - `append`
  - `remove`
Results in a noticeable performance improvement rendering large
lists: from 20s to 4s for 20K elements.
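A simplified sketch of the memoization, assuming a flat item list on the owner; for brevity this invalidation resets every cached value, whereas the change described above only needs to invalidate starting from the first node:

```cpp
#include <optional>
#include <vector>

struct ListItemElement;

struct ListOwner {
    std::vector<ListItemElement*> items; // li children, in tree order
    void invalidate_ordinal_values();    // called from insert_before/append/remove
};

struct ListItemElement {
    ListOwner* owner = nullptr;
    std::optional<int> m_ordinal_value; // cache; empty means "recompute"

    int ordinal_value()
    {
        if (m_ordinal_value.has_value())
            return *m_ordinal_value; // O(1) on every call after the first.
        // First call: walk the owner's items once, filling every cache on
        // the way, so n queries cost O(n) total instead of O(n^2).
        int ordinal = 0;
        for (auto* item : owner->items) {
            ordinal += 1; // real code also honors `value` attributes, `reversed`, etc.
            item->m_ordinal_value = ordinal;
            if (item == this)
                break;
        }
        return *m_ordinal_value;
    }
};

void ListOwner::invalidate_ordinal_values()
{
    for (auto* item : items)
        item->m_ordinal_value.reset();
}
```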