Generating boilerplate is nice! This also has the bonus that we're more
correct: previously I included all the units listed in the spec
(see https://drafts.css-houdini.org/css-typed-om-1/#numeric-factory ),
but we're supposed to include exactly the ones for the units we support:
> If an implementation supports additional CSS units that do not have a
corresponding method in the above list, but that do correspond to one
of the existing CSSNumericType values, it must additionally support
such a method, named after the unit in its defined canonical casing,
using the generic behavior defined above.
> If an implementation does not support a given unit, it must not
implement its corresponding method from the list above.
Now, our factory functions will exactly match the units we support.
The changed test result is partly due to the order being different, and
partly due to the container-query units no longer being included, as we
don't actually support them.
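As a sketch of the shape each generated factory takes (standalone,
illustrative code rather than the actual generated output), every
supported unit gets one method implementing the spec's generic behavior:

```cpp
#include <memory>
#include <string>

// Minimal stand-in for the real CSSUnitValue type.
struct CSSUnitValue {
    double value;
    std::string unit;
};

// The generator emits one factory like this per supported unit; per the
// spec's generic behavior, it returns a new CSSUnitValue whose unit is
// the method's own name.
std::unique_ptr<CSSUnitValue> px(double value)
{
    return std::make_unique<CSSUnitValue>(CSSUnitValue { value, "px" });
}
```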
This copies the latest generated code in-tree and then removes code
generation for the WebGL rendering contexts. This is because it didn't
add much value, and we can maintain the generated output instead of
both it and the generator.
Add a new JSON file describing at-rule descriptors, and then use it to
generate a DescriptorID enum, along with code to check whether a given
descriptor is accepted in a given at-rule.
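To illustrate the shape of the generated code (hypothetical names and a
deliberately tiny descriptor set, not the actual output):

```cpp
// Illustrative sketch of what such a generator could emit.
enum class AtRuleID {
    FontFace,
    Property,
};

enum class DescriptorID {
    FontFamily,
    Src,
    Syntax,
    InitialValue,
};

// Check whether a descriptor is accepted inside a given at-rule.
bool at_rule_supports_descriptor(AtRuleID at_rule, DescriptorID descriptor)
{
    switch (at_rule) {
    case AtRuleID::FontFace:
        return descriptor == DescriptorID::FontFamily || descriptor == DescriptorID::Src;
    case AtRuleID::Property:
        return descriptor == DescriptorID::Syntax || descriptor == DescriptorID::InitialValue;
    }
    return false;
}
```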
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`
since that's all that would exist at the time of the check and therefore
erroneously conclude that it was an invalid named character reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton, or DAFSA), which allows for efficiently
matching one character at a time, and therefore lets the tokenizer pick
up matching where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo`, which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack back to `&not`).
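To make the control flow concrete, here is a minimal sketch of the
matching loop this enables; the `Matcher` interface and the input
helpers are paraphrased assumptions, not the actual C++ API:

```cpp
#include <cstddef>
#include <optional>

// Assumed interface over the DAFSA; the real NamedCharacterReferenceMatcher
// differs, but any incremental longest-match automaton has this shape.
struct Matcher {
    virtual ~Matcher() = default;
    virtual bool try_consume_code_point(char32_t) = 0;    // false: no name continues this prefix
    virtual bool matched_character_reference() const = 0; // true: current prefix is a full name
};

// Consume exactly the code points of the longest named character reference,
// backtracking past any partial match (e.g. `&notindo` backtracks to `&not`).
template<typename Input>
void match_named_character_reference(Input& input, Matcher& matcher)
{
    std::size_t consumed = 0;
    std::size_t longest_match = 0;
    while (std::optional<char32_t> code_point = input.peek()) {
        if (!matcher.try_consume_code_point(*code_point))
            break; // The DAFSA has no edge for this code point; stop early.
        input.consume();
        ++consumed;
        if (matcher.matched_character_reference())
            longest_match = consumed; // Remember the longest full match so far.
    }
    input.unconsume(consumed - longest_match);
}
```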
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO can be used as the spec says, instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32 bits (see
the sketch after this list). Together with the inherent DAFSA property
of deduplicating redundant nodes, the amount of data stored for named
character reference matching is minimized.
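A sketch of such a 32-bit packed node, with illustrative field widths
(the real struct may divide the bits differently):

```cpp
#include <cstdint>

// One DAFSA node packed into 32 bits. `number` is the count of words
// reachable through this node, which is what enables minimal perfect
// hashing (see the technical details below).
struct Node {
    uint32_t character : 8;          // Character on the edge into this node.
    uint32_t number : 8;             // Words reachable through this node.
    uint32_t end_of_word : 1;        // A named character reference ends here.
    uint32_t end_of_list : 1;        // Last sibling in this child list.
    uint32_t first_child_index : 14; // Index of the first child in the flat array.
};
static_assert(sizeof(Node) == 4, "each node fits in 32 bits");
```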
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
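Here's a sketch of that unique-index computation, reusing the
hypothetical Node layout from earlier and assuming a couple of common
flattening conventions (index 0 reserved as a "no children" sentinel,
root's child list starting at index 1):

```cpp
#include <cstdint>
#include <optional>
#include <string_view>
#include <vector>

// Uses the 32-bit packed Node sketched above.

// Minimal perfect hash: the index of `word` is the number of words ordered
// before it, found by summing `number` for every sibling skipped and adding
// 1 for every shorter word ending on the path.
std::optional<std::size_t> unique_index(std::vector<Node> const& nodes, std::string_view word)
{
    if (word.empty())
        return std::nullopt;
    std::size_t index = 0;
    std::size_t child = 1; // Root's first child.
    for (std::size_t i = 0; i < word.size(); ++i) {
        if (child == 0)
            return std::nullopt; // Word is longer than any path in the DAFSA.
        bool found = false;
        for (std::size_t n = child;; ++n) {
            if (nodes[n].character == static_cast<uint8_t>(word[i])) {
                if (nodes[n].end_of_word && i + 1 < word.size())
                    ++index; // A shorter word ends on this prefix; it sorts before ours.
                if (i + 1 == word.size() && !nodes[n].end_of_word)
                    return std::nullopt; // Only a prefix of a word, not a word itself.
                child = nodes[n].first_child_index;
                found = true;
                break;
            }
            index += nodes[n].number; // Skip every word passing through this sibling.
            if (nodes[n].end_of_list)
                break;
        }
        if (!found)
            return std::nullopt;
    }
    return index; // 0-based; index into a parallel array of associated code points.
}
```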
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
The CSSOM spec tells us to potentially add up to three different IDL
attributes to CSSStyleDeclaration for every CSS property we support:
- A camelCased attribute, where a dash indicates the next character
should be uppercase
- A camelCased attribute for every -webkit- prefixed property, with the
first letter always being lowercase
- A dashed attribute for every property with a dash in it.
Additionally, every attribute must have the CEReactions and
LegacyNullToEmptyString extended attributes specified on it.
Since we specify every property we support in Properties.json, we can
use that file to generate the IDL file and its implementation.
We import it from the Build directory with the help of multiple import
base paths. Then, we add it to CSSStyleDeclaration via the mixin
functionality, inheriting from the generated class in
CSSStyleDeclaration.
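A sketch of the name transform involved (illustrative helper, not the
generator code itself):

```cpp
#include <cctype>
#include <string>

// Convert a CSS property name to its camel-cased IDL attribute name:
// a dash means "uppercase the next character", e.g.
//   "background-color" -> "backgroundColor"
//   "-webkit-mask"     -> "webkitMask" (first letter stays lowercase)
std::string css_property_to_idl_attribute(std::string const& property_name)
{
    std::string output;
    bool uppercase_next = false;
    for (char ch : property_name) {
        if (ch == '-') {
            // A leading dash (as in -webkit-*) produces no character and
            // does not capitalize the first letter.
            uppercase_next = !output.empty();
            continue;
        }
        output += uppercase_next
            ? static_cast<char>(std::toupper(static_cast<unsigned char>(ch)))
            : ch;
        uppercase_next = false;
    }
    return output;
}
```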
For a long time, we've used two terms, inconsistently:
- "Identifier" is a spec term, but refers to a sequence of alphanumeric
characters, which may or may not be a keyword. (Keywords are a
subset of all identifiers.)
- "ValueID" is entirely non-spec, and is directly called a "keyword" in
the CSS specs.
So to avoid confusion as much as possible, let's align with the spec
terminology. I've attempted to change variable names as well, but
obviously we use Keywords in a lot of places in LibWeb and so I may
have missed some.
One exception is that I've not renamed "valid-identifiers" in
Properties.json... I'd like to combine that and the "valid-types" array
together eventually, so there's no benefit to doing an extra rename
now.
This new code generator takes all the .idl files in LibWeb, looks for
each top level interface in there with an [Exposed=Foo] attribute, and
adds code to add the constructor and prototype for each of those exposed
interfaces to the realm of the relevant global object we're initializing.
It will soon replace WindowObjectHelper as the way that web interfaces
are added to the Window object, and will be used in the future for
creating proper WorkerGlobalScope objects for dedicated and shared
workers.
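For illustration, the generated output would have roughly this shape
(all names here are hypothetical, not the generator's actual output):

```cpp
// Hypothetical sketch: one generated function per global object, with one
// line per interface found with [Exposed=Window] across the .idl files.
void add_window_exposed_interfaces(JS::Realm& realm)
{
    // Each (assumed) helper installs the interface's constructor and
    // prototype on the realm.
    ensure_web_prototype_and_constructor<HistoryPrototype, HistoryConstructor>(realm, "History");
    ensure_web_prototype_and_constructor<NavigatorPrototype, NavigatorConstructor>(realm, "Navigator");
    // ... one entry per exposed interface ...
}
```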
This code generator no longer creates JS wrappers for platform objects
in the old sense; instead, they're JS objects themselves internally.
Most of what we generate now are prototypes - which can be seen as
bindings for the internal C++ methods implementing getters, setters, and
methods - as well as object constructors, i.e. bindings for the internal
create_with_global_object() method.
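As a rough illustration of "prototypes as bindings" (simplified sketch
with hypothetical helper names, not the generated code):

```cpp
// A generated prototype getter conceptually does two things: unwrap the
// underlying platform object, then forward to the internal C++ getter.
JS::ThrowCompletionOr<JS::Value> node_name_getter(JS::VM& vm)
{
    auto& impl = TRY(unwrap_platform_object<DOM::Node>(vm)); // hypothetical unwrap helper
    return JS::PrimitiveString::create(vm, impl.node_name());
}
```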
Also tweak the naming of various CMake glue code existing around this.
This matches the target names for the main serenity build, and will make
simplifying the Lagom build much easier going forward.
The LagomFoo name came from a time when we had both library builds in
the same CMake-generated project and needed to deconflict the names.
Alias values are represented by "alias-name=real-name".
We have a lot of repetitive code for converting between ValueID and
property-specific enums. Let's see if we can generate it. :^)
This first step just produces the enums, from a JSON file. The values in
there are a duplication of what's in Properties.json, but eventually
those will go away.
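For example (a hypothetical entry, not necessarily what's in the JSON),
a value list like `"text-align": ["center", "justify", "left", "right"]`
would produce:

```cpp
// Generated scoped enum for the hypothetical "text-align" values above.
enum class TextAlign {
    Center,
    Justify,
    Left,
    Right,
};
```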
This works largely the same as the PropertyID and ValueID generators,
but using LibMain, Core::Stream, and TRY().
Rather than have a MediaFeatureID::Invalid, I decided to return an
Optional. We'll see if that turns out better or not. :^)
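A sketch of the Optional-returning lookup (illustrative names, using
std::optional as a stand-in for AK's Optional):

```cpp
#include <optional>
#include <string_view>

enum class MediaFeatureID {
    AspectRatio,
    Width,
    // ... one per media feature in the JSON ...
};

// Returning an empty Optional makes "unrecognized feature" explicit at
// every call site, instead of threading an Invalid sentinel around.
std::optional<MediaFeatureID> media_feature_id_from_string(std::string_view string)
{
    if (string == "aspect-ratio")
        return MediaFeatureID::AspectRatio;
    if (string == "width")
        return MediaFeatureID::Width;
    return {};
}
```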
The single 4000-line WrapperGenerator.cpp file was proving to be a pain
to hack on, and was filled with spaghetti. Split it into a bunch of
files to lessen the impact of the spaghetti.
Also refactor the whole parser to use a class instead of a giant
function with a million lambdas.
We'll use this to avoid repeating common tool dependencies: they all
depend only on LibCore and AK. We also want to encapsulate common
install rules for them.
This allows us to remove all the add_subdirectory calls from the top
level CMakeLists.txt that referred to targets linking LagomCore.
Segregating the host tools and Serenity targets helps us get to a place
where the main Serenity build can simply use a CMake toolchain file
rather than swapping all the compiler/sysroot variables after building
host libraries and tools.