Put the const cast in a common location to make the helper more
convenient to use.
(cherry picked from commit 28ed8e5d0f79fc9d961f746367127e137faaf46b)
Previously we were assuming that the attribute return value was never
nullable and thus never returned in an Optional<IntegralType>, causing
compile errors for something such as `attribute unsigned long?`.
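As an illustration, the generated getter for such an attribute now has
to handle an empty Optional; the names below (foo_getter, impl_from)
are hypothetical stand-ins for what the bindings generator emits:

```cpp
// Hypothetical sketch of generated code for `attribute unsigned long?`.
JS::ThrowCompletionOr<JS::Value> foo_getter(JS::VM& vm)
{
    auto* impl = TRY(impl_from(vm));
    Optional<u32> retval = impl->foo(); // nullable, so may be empty
    if (!retval.has_value())
        return JS::js_null();           // empty Optional maps to JS null
    return JS::Value(retval.value());
}
```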
(cherry picked from commit ad32227c833e60c55b0f5460a4f9f9c1631ecd57)
Mirroring the pre-existing `generate_from_integral` function. This will
allow us to fix a bug that all of these if statements have in common:
no handling of nullable types.
This also adjusts the type each integral is cast to, so that it fully
matches the type stated by the spec.
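A minimal sketch of the spec-mandated mapping such a helper can encode;
the function below is illustrative, not the actual generator code:

```cpp
// Map each Web IDL integral type to the exact C++ type the spec
// prescribes (AK's fixed-width aliases).
static StringView cpp_type_for_integral(StringView idl_type)
{
    if (idl_type == "byte")               return "i8"sv;
    if (idl_type == "octet")              return "u8"sv;
    if (idl_type == "short")              return "i16"sv;
    if (idl_type == "unsigned short")     return "u16"sv;
    if (idl_type == "long")               return "i32"sv;
    if (idl_type == "unsigned long")      return "u32"sv;
    if (idl_type == "long long")          return "i64"sv;
    if (idl_type == "unsigned long long") return "u64"sv;
    VERIFY_NOT_REACHED();
}
```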
(cherry picked from commit d6243abec3e0f10ddf75a32c8a291f43cbdae169)
This test is currently causing intermittent CI failures.
(cherry picked from commit 24c75922d8476b5096657acb5a1b4c71454ca616)
(cherry picked from commit 9585da37ada3be4d50df44e499a9c2884570f138)
This uses a non-stream xref table, and the spec says
"The cross-reference table (comprising the original cross-reference
section and all update sections) must contain one entry for each object
number from 0 to the maximum object number used in the file". So this
file isn't spec-compliant. However, with xref streams this does
happen in practice (the spec isn't quite clear on whether it's valid
there),
and it's much easier to write a test file with a non-stream xref table
by hand. Also, this file shows up fine in other PDF viewers.
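For illustration, a non-stream xref table along these lines (not the
actual bytes of the test file) triggers the case: the `3 2` subsection
header starts at object number 3, so objects 0 through 2 have no
entries, which is what the quoted spec sentence forbids.

```
xref
3 2
0000000015 00000 n
0000000068 00000 n
```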
This is a regression test for #25079.
History:
* 72f693e9ed from #6974 added the initial XRefTable. Here, -1
was used for byte_offset of invalid entries. has_object() compared
byte_offset to -1.
* e23bfd7252 from #7675 added invalid_byte_offset (equal to
LONG_MAX) and initialized byte_offset with it, but forgot to update
has_object(). has_object() still compared to -1, so has_object would
now never return false.
* d1bc89e30b from #16150 added validate_xref_table_and_fix_if_necessary,
which used `byte_offset_for_object(index) == invalid_byte_offset` to
detect if an object was in the xref table. `byte_offset_for_object`
internally did `VERIFY(has_object(index))`, which due to the previous
bullet was always true. It ran this for all object numbers from 0
up to the first object with byte_offset != invalid_byte_offset.
* d458471e09 from #24099 updated has_object() to check against
invalid_byte_offset instead of -1, making it work again -- but now
causing validate_xref_table_and_fix_if_necessary() to VERIFY if
object 0 was not in the xref table.
* The fix is to make validate_xref_table_and_fix_if_necessary() call
has_object() to find out if an object exists in the xref table; see
the sketch below. (When validate_xref_table_and_fix_if_necessary()
was added, that didn't work, because has_object() was broken back
then.)
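A minimal sketch of the fixed check, assuming entry storage and helper
names along the lines described above (the surrounding class layout is
hypothetical):

```cpp
// Hypothetical sketch; m_entries and object_count() are illustrative.
bool XRefTable::has_object(size_t index) const
{
    return index < m_entries.size()
        && m_entries[index].byte_offset != invalid_byte_offset;
}

void DocumentParser::validate_xref_table_and_fix_if_necessary()
{
    for (size_t index = 0; index < m_xref_table->object_count(); ++index) {
        // Previously this used byte_offset_for_object(index), whose
        // internal VERIFY(has_object(index)) fired for missing objects.
        if (!m_xref_table->has_object(index)) {
            // ... fix up or skip the missing entry ...
        }
    }
}
```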
This VERIFY is hit 4 times in my 1000-file test set. For these three files,
the xref is valid:
* 0000200.pdf
* 0000567.pdf
* 0000651.pdf
For 0000900.pdf, validate_xref_table_and_fix_if_necessary() actually
fixes up the xref table.
Fixes #25079.
Rather than reading an int and then checking if we shouldn't have
read it and falling back to a default value, start with the default
value and only read the int if we should.
No behavior change.
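A minimal before/after sketch of the pattern (stream, read_int, and
the flag are illustrative names, not the actual code):

```cpp
// Before: read first, then maybe throw the value away.
int value = read_int(stream);
if (!should_read_value)
    value = default_value;

// After: start with the default, read only when needed.
int value = default_value;
if (should_read_value)
    value = read_int(stream);
```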
(cherry picked from commit 4a166a45ec9db910542893133f08cc7de4ec974f,
minorly amended to resolve #include conflict due to our LibLocale
and LibUnicode being separate -- we don't want
https://github.com/LadybirdBrowser/ladybird/pull/257)
(cherry picked from commit 7a17c654d293c4afaf3086dc94e8cd4bceac48b1;
amended to resolve minor conflict in TestUtf16.cpp due to us not
(yet?) having `TEST_CASE(null_view)`. Also amended to make the new
method not call simdutf -- it's now also inefficient, but at least
the inefficient code is now only in one place instead of in several)
USVString attributes now replace any surrogates with the replacement
character U+FFFD and resolve any relative URLs to an absolute URL. This
brings our implementation in line with the specification.
(cherry picked from commit 335d51d6782c66e7743d773f6f6b6a32a2cb2067)
USVString is defined in the IDL spec as:
> The USVString type corresponds to scalar value strings. Depending on
> the context, these can be treated as sequences of either 16-bit
> unsigned integer code units or scalar values.
This means we need to account for surrogate code points by using the
replacement character.
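As a self-contained illustration (standard C++, not the project's
actual helper), the conversion to a scalar value string could look
like this:

```cpp
#include <string>
#include <vector>

// Convert UTF-16 code units to UTF-8, replacing unpaired surrogates
// with U+FFFD so the result contains only Unicode scalar values.
std::string to_scalar_value_string(std::vector<char16_t> const& units)
{
    std::string result;
    auto append_code_point = [&](char32_t cp) {
        if (cp < 0x80) {
            result += static_cast<char>(cp);
        } else if (cp < 0x800) {
            result += static_cast<char>(0xC0 | (cp >> 6));
            result += static_cast<char>(0x80 | (cp & 0x3F));
        } else if (cp < 0x10000) {
            result += static_cast<char>(0xE0 | (cp >> 12));
            result += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
            result += static_cast<char>(0x80 | (cp & 0x3F));
        } else {
            result += static_cast<char>(0xF0 | (cp >> 18));
            result += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
            result += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
            result += static_cast<char>(0x80 | (cp & 0x3F));
        }
    };
    for (size_t i = 0; i < units.size(); ++i) {
        char16_t unit = units[i];
        if (unit >= 0xD800 && unit <= 0xDBFF && i + 1 < units.size()
            && units[i + 1] >= 0xDC00 && units[i + 1] <= 0xDFFF) {
            // Valid surrogate pair: decode the supplementary code point.
            append_code_point(0x10000
                + ((char32_t(unit) - 0xD800) << 10)
                + (char32_t(units[i + 1]) - 0xDC00));
            ++i;
        } else if (unit >= 0xD800 && unit <= 0xDFFF) {
            append_code_point(0xFFFD); // Unpaired surrogate: replace.
        } else {
            append_code_point(unit);
        }
    }
    return result;
}
```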
This fixes the last test in https://wpt.live/url/url-constructor.any.html
(cherry picked from commit aa32bfa4481f6298c99846025394b7bc415ca621)
String::from_utf8_with_replacement_character is equivalent to
https://encoding.spec.whatwg.org/#utf-8-decode from the encoding spec,
so we can simply call through to it.
(cherry picked from commit 0b864bef6040fa66f6719bf06898e310d4c5c02f)
This ports the same optimization which was made in
1a46d8df5fc81eb2c320d5c8a5597285d3d8fb3a to this function as well.
(cherry picked from commit 1e8cc97b731871409316c121e637c32806135122)
This fixes a crash using URLSearchParams when provided a
percent-encoded string which does not percent-decode to valid UTF-8.
Fixes a crash running https://wpt.live/url/urlencoded-parser.any.html
(cherry picked from commit 9c72fc9642266ac92dedbccac7d8c0bd238450cd)
This takes a byte sequence and converts it to a UTF-8 string,
substituting the replacement character for any invalid sequences.
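A hedged usage sketch, assuming a signature roughly like the one this
commit adds (the exact return type may differ):

```cpp
// 0xFF can never appear in valid UTF-8, so it becomes U+FFFD.
auto string = String::from_utf8_with_replacement_character("a\xFF" "b"sv);
// string is now "a<U+FFFD>b".
```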
(cherry picked from commit 033ea0e7fb0f72338ae95aa0413da838206440bb)
This simplifies a bunch of places that needed to error-check and
convert from a ByteString to String.
(cherry picked from commit 84a7fead0eefd967d4319f4d71c0a0ca3095d2d1)
TextCodec does not return Error for invalid UTF-8 or UTF-16, so
this only propagates allocation errors. No expected behavior change
in practice even for invalid PDFs. Removes three calls to
release_value_but_fixme_should_propagate_errors(), mostly for
aesthetic reasons.
PDF 1.7 spec, p231, 4.4.2 Path-Painting Operators, Stroking:
"If a subpath is degenerate (consists of a single-point closed path or
of two or more points at the same coordinates), the S operator paints it
only if round line caps have been specified, producing a filled circle
centered at the single point. If butt or projecting square line caps
have been specified, S produces no output, because the orientation of
the caps would be indeterminate. (This rule applies only to zero-length
subpaths of the path being stroked, and not to zero-length dashes in a
dash pattern. In the latter case, the line caps are always painted,
since their orientation is determined by the direction of the underlying
path.) A single-point open subpath (specified by a trailing m operator)
produces no output."
In practice, Chrome, Firefox, and Preview all also draw a square
for square line endings (Preview draws a square rotated 45 degrees).
Chrome
even draws something weird-looking for butt caps. (Acrobat only draws
the round cap, per spec.)
https://html.spec.whatwg.org/multipage/canvas.html#trace-a-path sounds
like zero-length paths should be ignored for canvas, but in practice
Chrome and Firefox do draw them. (Safari doesn't.)
We don't do linecaps in SVGs yet, but
https://www.w3.org/TR/SVG/paths.html#ZeroLengthSegments says:
"As mentioned in Stroke Properties, linecaps must be painted for
zero-length subpaths when stroke-linecap has a value of round or
square."
With this commit, we now draw round and square linecaps for
zero-length paths, which is what's apparently desired most of the
time. Maybe we can add a setting to pick different behavior for
PDF (only draw round caps), canvas (don't draw caps on zero-length
paths), and SVG (draw round and square caps) in the future.
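A sketch of the behavior this commit implements; LineCap and the
drawing helpers below are illustrative names, not the actual
rasterizer API:

```cpp
// Hypothetical handling of a zero-length subpath at `point`.
switch (line_cap) {
case LineCap::Round:
    fill_circle(point, line_width / 2);      // filled circle, per PDF spec
    break;
case LineCap::Square:
    fill_centered_square(point, line_width); // axis-aligned square
    break;
case LineCap::Butt:
    break; // orientation is indeterminate; draw nothing
}
```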
We now put butt line cap vertices in the correct position,
perpendicular to the line direction. We still pretend we're walking
on the pen polygon though, so it's probably possible to make this
produce weird-looking output by making the line cap line segments
very short. In practice, it's a big improvement for butt line caps
of real-world paths, though.
Instead of one dense loop, there is now one call for the outer
stroke, one for the first cap, one for the inner stroke, and one
for the second cap; see the sketch after the list below.
This will make it easier to do butt caps correctly, and to add
support for square caps.
It also makes it easier to not add any caps at all for closed
paths.
Maybe it also helps for adding non-round joins eventually.
In particular:
* `shape_idx` now starts at 1, since we now start with the
stroke part, not the cap part, and explicitly call `close()`
to connect the second cap with the first stroke
* Having explicit cap building code means that the convolution
loop is kind-of duplicated for round caps
* We now need to remove duplicate points, else the explicit
cap drawing gets confused. This is the only non-behavior-preserving
part of this commit, and it's a progression for lines that have
two identical points at the end of an open path (this would previously
not correctly draw a round join)
* Similarly, there's no explicit rejection of empty paths
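A rough sketch of the new call structure; all names here are
illustrative, assuming a Path/line-thickness API similar to the
description above:

```cpp
// Hypothetical outline of the stroked-shape construction.
Path build_stroke_shape(Path const& path, float thickness, LineCap cap)
{
    Path shape;
    add_convolved_side(shape, path, Side::Outer, thickness); // outer stroke
    if (!path.is_closed())
        add_cap(shape, path.last_point(), cap, thickness);   // first cap
    add_convolved_side(shape, path, Side::Inner, thickness); // inner stroke
    if (!path.is_closed())
        add_cap(shape, path.first_point(), cap, thickness);  // second cap
    shape.close(); // connect the second cap back to the first stroke part
    return shape;
}
```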
The test covers:
* zero-length paths with different linecaps
* actual paths with duplicate points
* lines where cap size is smaller than line width
* vertical paths
* thin wide lines
* an open path with an orientation (CW vs CCW)
Also add comments to the file.
Fix incorrect casts of (e1000_rx_desc*) to (e1000_tx_desc*) in
functions related to the frame receive path.
The previous code appeared to work because the affected code only
uses fields that are common to both the TX and RX descriptor types
and that happen to be at the same offset inside the packed structs.
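A sketch of the kind of fix described; the variable names below are
illustrative, only the descriptor struct names come from the driver:

```cpp
// Before: RX ring memory was viewed through the TX descriptor type.
auto* desc = reinterpret_cast<e1000_tx_desc*>(rx_ring_base) + index;

// After: use the matching RX descriptor type. The fields used so far
// lined up only by accident of the packed layouts.
auto* desc = reinterpret_cast<e1000_rx_desc*>(rx_ring_base) + index;
```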