MultiServer was inherited from SerenityOS where it was used in many
places. Now that BrowserProcess is its only consumer, inline the
connection acceptance logic directly into BrowserProcess and remove
the abstraction.
Delete Lexer.cpp/h and Token.cpp, replacing all tokenization with a
new rust_tokenize() FFI function that calls back for each token.
Rewrite SyntaxHighlighter.cpp and js.cpp REPL to use the Rust
tokenizer. The token type and category enums in Token.h now mirror
the Rust definitions in token.rs.
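A rough sketch of what the callback-driven FFI boundary can look like (only `rust_tokenize()` itself is named above; the struct layout and parameter names here are assumptions for illustration):

```cpp
// Hypothetical shape of the FFI boundary; everything except the function
// name rust_tokenize() is illustrative.
extern "C" {

struct RustToken {
    int type;       // mirrors the token type enum in Token.h / token.rs
    int category;   // mirrors the token category enum
    size_t offset;  // byte offset of the token in the source
    size_t length;  // byte length of the token
};

// Walks the source once and invokes `callback` for each token produced.
void rust_tokenize(char const* source, size_t source_length,
                   void (*callback)(RustToken const*, void* user_data),
                   void* user_data);

}
```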
Move is_syntax_character/is_whitespace/is_line_terminator helpers
into RegExpConstructor.cpp as static functions, since they were only
used there.
Previously, `create_paired()` returned two full Transport objects, and
callers would immediately call `from_transport()` on the remote side to
extract its underlying fd. This wasted resources: the remote
Transport's IO thread, wakeup pipes, and send queue were initialized
only to be torn down without ever sending or receiving a message.
Now `create_paired()` returns `{Transport, TransportHandle}` — the
remote side is born as a lightweight handle containing just the raw fd,
skipping all unnecessary initialization.
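In code, the new shape is roughly the following (a sketch; the struct, member, and helper names are assumed):

```cpp
// Sketch of the new pairing API; names are assumptions.
struct TransportPair {
    Transport local;         // full transport: IO thread, wakeup pipes, send queue
    TransportHandle remote;  // lightweight handle wrapping just the raw fd
};

ErrorOr<void> spawn_helper_process()
{
    // Keep the local side; hand only the raw-fd handle to the child, so no
    // IO thread or queues are ever created for the remote end.
    auto [transport, remote_handle] = TRY(Transport::create_paired());
    TRY(send_handle_to_child(move(remote_handle))); // hypothetical helper
    // ... use `transport` for the parent side of the connection ...
    return {};
}
```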
Also replace `release_underlying_transport_for_transfer()` (which
returned a raw int fd) with `release_for_transfer()` (which returns a
TransportHandle directly), hiding the socket implementation detail
from callers including MessagePort.
Replace clone_from_transport() (which dup()s the FD) with
from_transport() (which releases the FD) in the WebWorkerClient
call site. The UI process never uses the WebWorkerClient connection
after spawning — it only passes the transport to WebContent — so
releasing instead of cloning is safe and simpler.
This removes clone_from_transport() from TransportHandle, and
clone_for_transfer() from TransportSocket/TransportSocketWindows,
as they no longer have any callers.
Now that auxiliary service sockets are sent over IPC rather than passed
as command-line arguments, TransportHandle no longer needs to expose raw
file descriptors or manage close-on-exec flags. Remove fd() and
clear_close_on_exec(), and simplify the connect helpers accordingly.
Instead of passing RequestServer and ImageDecoder socket FDs as
command-line arguments to WebWorker, send them over the main IPC channel
after launch. The worker-agent handoff now carries all three transport
handles (worker, RequestServer, ImageDecoder) so the connection path
matches WebContent.
Instead of passing RequestServer and ImageDecoder socket FDs as
command-line arguments to WebContent, send them over the main IPC
channel after launch. This unifies initial connection and reconnection
into a single code path.
Add IPC::TransportHandle as an abstraction for passing IPC
transports through .ipc messages. This replaces IPC::File at
all sites where a transport (not a generic file) is being
transferred between processes.
TransportHandle provides from_transport(),
clone_from_transport(), and create_transport() methods that
encapsulate the fd-to-socket-to-transport conversion in one
place. This is preparatory work for Mach port support on
macOS -- when that lands, only TransportHandle's internals
need to change while all .ipc definitions and call sites
remain untouched.
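A sketch of the resulting interface (the three method names come from the description above; signatures and the private state are assumptions):

```cpp
class Transport; // the full IPC transport type

// Signatures are assumptions; only the method names are from the text above.
class TransportHandle {
public:
    // Tear down a Transport and take ownership of its underlying fd.
    static TransportHandle from_transport(Transport&&);

    // dup() the underlying fd, leaving the original Transport usable.
    static TransportHandle clone_from_transport(Transport const&);

    // Rebuild a full Transport (fd -> socket -> transport) on the receiving side.
    Transport create_transport() &&;

private:
    int m_fd { -1 }; // on macOS this could later hold a Mach port instead
};
```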
Consolidate the repeated socketpair + adopt + configure pattern from
4 call sites into a single Transport::create_paired() factory method.
This fixes inconsistent error handling and socket configuration across
call sites, and prepares for future mach port support on macOS.
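The setup the four call sites duplicated looks roughly like this once hoisted into the factory (a sketch; exact calls and configuration are assumptions):

```cpp
#include <sys/socket.h>

ErrorOr<void> create_socket_pair_sketch()
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
        return Error::from_errno(errno);

    // Adopt and configure both ends identically; previously each call site
    // did this itself, with slightly different error handling and flags.
    auto socket_a = TRY(Core::LocalSocket::adopt_fd(fds[0]));
    auto socket_b = TRY(Core::LocalSocket::adopt_fd(fds[1]));
    TRY(socket_a->set_blocking(false));
    TRY(socket_b->set_blocking(false));
    return {};
}
```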
The tabless page with an ever-growing list of vertical cards was getting
a bit disorganized. This moves each section of the settings page into
its own tab, with buttons to switch between tabs. This also presented the
opportunity to migrate some settings from popup dialogs directly into
their tab, such as the disk cache settings.
The global "restore defaults" button has also been removed. The more
settings we have, the less sense such a button makes. Individual
settings can have a reset option where appropriate.
WebView::FontPlugin was the only implementation of the abstract
FontPlugin base class. Its dependencies (LibGfx, LibCore) are
already visible to LibWeb.
Remove the virtual dispatch by making FontPlugin concrete and
absorbing the WebView::FontPlugin implementation directly.
These IPC methods should be expanded in the future to allow WebContent
to specify what UI elements should be kept/removed, for example, the
navigation UI.
The set_viewport_size and set_device_pixel_ratio IPC messages were sent
separately, potentially causing a race condition when the DPR changes
(e.g. moving a window between screens): the DPR message would arrive
and use a stale viewport size, computing a temporarily wrong CSS
viewport. Combine both into a single set_viewport IPC that updates the
device viewport size and DPR together.
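A sketch of the combined handler on the WebContent side (the message and member names are assumed): because both values arrive in one message, the CSS viewport is recomputed once from a consistent pair.

```cpp
// Sketch only; message and member names are assumptions.
void PageHost::set_viewport(Gfx::IntSize device_size, double device_pixel_ratio)
{
    m_device_pixel_ratio = device_pixel_ratio;
    m_device_viewport_size = device_size;
    // CSS viewport = device size / DPR, computed from values captured
    // together, so a DPR change can no longer pair with a stale size.
    update_css_viewport();
}
```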
This aligns our behaviour more closely with other browsers, which
_mostly_ consider file scheme URLs as opaque. For test
purposes, allow overriding this behaviour with a command-line
flag.
Commit 84db5d8c1c introduced the ability
to load tests over an http:// URL instead of a file:// URL. Each time
this happens, we switch to a new WebContent process due to site
isolation. Our WebContent output capture was not handling this.
For some reason, this was causing a wide array of test failures and
timeouts. Often, the failures were accompanied by the content of the
files loaded over HTTP being dumped to stdout. It's not quite clear
what was going on here.
This adds the --expose-experimental-interfaces command line flag to
enable experimental IDL interfaces. Any IDL interface with Experimental
in its exposed attributes will be disabled by default.
The problem is that by stubbing out or partially implementing interfaces
in LibWeb, we actually make some sites behave worse. For example, the
OffscreenCanvas interface being exposed makes sites believe we fully
support it, even though we don't. If the interface were not exposed,
these sites might fall back to ordinary canvas objects. Similarly, to
use YouTube, we currently have to patch out MSE interfaces.
This flag will allow developers to iteratively work on features,
without breaking such sites. We enable experimental interfaces during
tests.
We will need to propagate test mode behavior to both the WebContent and
WebWorker processes. By moving this handling to the UI process, we will
only need to update one location.
Instead of copying the Bitmap that wraps the IOSurface, we can just
present the IOSurface directly. This significantly reduces CPU usage in
the UI process, particularly at high refresh rates such as 120Hz where
it would saturate a full CPU core.
This is done by using CAMetalLayer and blitting the IOSurface to the
next drawable buffer, which handles triple buffering, IOSurface locking,
and vsync automatically. This also allows the Metal HUD to work, though
the only accurate stat is the frame intervals/FPS, since the HUD sits in
the UI layer, not WebContent. That is still useful for detecting frame
drops.
This will allow the UI to request that WebContent properly close the top
level traversable when closing a tab. For example, this lets the site
ask whether the user is sure they want to leave, closes WebSocket
connections, and more.
Implement the client side of streaming animation decode sessions.
The ImageDecoderClient handles new IPC messages for frame delivery
and failure reporting, and the ImageCodecPlugin bridges this
through the platform abstraction layer used by LibWeb.
This adds a settings box to about:settings to allow users to limit the
disk cache size. This will override the default 5 GiB limit. We do not
automatically delete cache data if the new limit is suddenly less than
the used disk space; this will happen on the next request. This allows
multiple changes to the settings in a row without thrashing the cache.
In the future, we can add more toggles, such as disabling the disk
cache altogether.
We currently attach HTTP cookie headers from LibWeb within Fetch. This
has the downside that the cookie IPC, and the infrastructure around it,
are all synchronous. This blocks the WebContent process entirely while
the cookie is being retrieved, for every request on a page.
We now attach cookie headers from RequestServer. The state machine in
RequestServer::Request allows us to easily do this work asynchronously.
We can also skip this work entirely when the response is served from
disk cache.
Note that we will continue to parse cookies in the WebContent process.
If something goes awry during parsing, we limit the damage to that
process, instead of the UI or RequestServer.
Also note that WebSocket requests still have cookie headers attached
from LibWeb. This will be handled in a future patch.
In the future, we may want to introduce a memory cache for cookies in
RequestServer to avoid IPC altogether where possible.
LibJS+DevTools: Implement console.trace() with source locations
- Add Console::TraceFrame struct with source location data
- Implement Console::trace() to gather stack information
- Add WebView::StackFrame and ConsoleTrace for IPC
- Implement DevToolsConsoleClient::printer() for traces
- Update FrameActor to format traces for DevTools
- Update WorkerDebugConsoleClient trace handling
- Update ReplConsoleClient to format trace output
This patch introduces a cookie cache in the WebContent process to reduce
blocking IPC calls when JS accesses document.cookie. The UI process now
maintains a cookie version counter per-domain in shared memory. When JS
reads document.cookie, we check whether we have a valid cached cookie by
comparing the current shared version to the last used version. If they
match, the cached cookie is returned without IPC.
This optimization is based on Chromium's shared versioning, in which it
was observed that 87% of document.cookie accesses were redundant. See:
https://blog.chromium.org/2024/06/introducing-shared-memory-versioning-to.html
Note that this cache only supports document.cookie, not HTTP Cookie
headers. HTTP cookies are attached to requests with varying URLs and
paths. The cookies that match the document URL might not match the
request URL, which we wouldn't know from WebContent. So attaching the
cached document cookie would be incorrect.
On https://twinings.co.uk, we see approximately 600 document.cookie
requests while the page loads. This patch reduces the time spent in
the document.cookie getter from ~45ms to 2-3ms.
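A minimal sketch of the fast path, assuming a shared-memory table of per-domain version counters (all names here are illustrative):

```cpp
// Sketch only; member and helper names are assumptions.
String Document::cookie()
{
    u64 current_version = m_shared_cookie_versions->version_for_domain(url());
    if (m_cached_cookie.has_value() && m_cached_cookie_version == current_version)
        return *m_cached_cookie; // redundant read: answered without any IPC

    // Slow path: blocking IPC to the UI process, then refresh the cache.
    m_cached_cookie = request_cookie_via_ipc(url()); // hypothetical IPC helper
    m_cached_cookie_version = current_version;
    return *m_cached_cookie;
}
```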
This introduces a simple FileDownloader to download files in the UI
process from RequestServer. We use this to download the context menu
image - this download is likely to hit the disk cache.
We currently always save screenshots to the Downloads folder. We will
now always ask for a save location.
This will let an upcoming feature for saving images from web pages
behave the same way. We will want the user to be able to choose a file
name, since the file name from the URL might be nonsense or already
exist.
Previously, we resolved font features
(https://drafts.csswg.org/css-fonts-4/#feature-variation-precedence)
per element. While this works for the current subset of the font feature
resolution algorithm that we support, some as-yet-unimplemented parts
require us to know whether we are resolving against a CSS @font-face
rule, and if so which one (e.g. applying descriptors from the @font-face
rule, deciding which @font-feature-values rules to apply, etc.).
To achieve this we store the data required to resolve font features in a
struct and pass that to `FontComputer` which resolves the font features
and stores them with the computed `Font`.
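One plausible shape for that struct (all names here are assumptions; the text above only pins down what information it must carry):

```cpp
// Illustrative only: the resolution inputs bundled up and handed to
// FontComputer instead of being resolved per element.
struct FontFeatureResolutionData {
    // Feature-affecting property values gathered from the element.
    FontFeatureSettings feature_settings;
    FontVariantSettings variant_settings;

    // Set when the matched font came from a CSS @font-face rule, so the
    // rule's descriptors (and the applicable @font-feature-values rules)
    // can be applied during resolution.
    RefPtr<FontFaceRule const> font_face_rule;
};
```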
We no longer need to invalidate the font shaping cache when features
change since the features are defined per font (and therefore won't ever
change).
While it is unlikely that our default font supports variations, this is
nevertheless required for our fallback font to be considered equal to
its non-default/fallback equivalent (i.e. `font-family: serif`), which
in turn is required for LineBuilder to merge chunks into a single
fragment.
When dumping a GC graph, we now write the output as a .js file
containing `var GC_GRAPH_DUMP = <json>;` instead of raw JSON.
This allows gc-heap-explorer.html to load the dump via a
dynamically created <script> element, avoiding CORS restrictions
that prevent file:// pages from fetching other file:// URLs.
After dumping, both the browser and test-web print a clickable
file:// URL that opens the heap explorer with the dump pre-loaded.
The heap explorer's drag-and-drop file picker also accepts both
the new .js format and plain .json files.
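The writer side amounts to wrapping the JSON in a variable declaration, roughly (sketch in plain C++; the real code uses the project's stream types, and the function name is made up):

```cpp
#include <fstream>
#include <string>

// Wrap the serialized graph so a <script> tag can load it from file://.
void write_gc_graph_dump(std::string const& json, std::string const& path)
{
    std::ofstream out(path);
    out << "var GC_GRAPH_DUMP = " << json << ";\n";
    // gc-heap-explorer.html appends a <script src="..."> pointing here,
    // sidestepping the CORS restriction that blocks fetch() on file://.
}
```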
These can get very large, exceeding the new IPC message size limits.
Instead of serializing them into messages (which was silly anyway)
we now send them as Core::AnonymousBuffer which uses shared memory.
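The handoff looks roughly like this (Core::AnonymousBuffer is the real type; the surrounding message name is an assumption):

```cpp
// Sketch: copy the oversized payload into shared memory and send the
// buffer (a file descriptor under the hood) instead of inlining the bytes.
auto buffer = TRY(Core::AnonymousBuffer::create_with_size(payload.size()));
memcpy(buffer.data<u8>(), payload.data(), payload.size());
async_deliver_payload(move(buffer)); // hypothetical IPC message
```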
When cookies change or expire, we currently send a list of all changed
cookies to all WebContent processes. We then filter that list in the
WebContent process for cookies that match the page's URL before sending
out cookie change events to JS.
We now perform this filtering in the UI process, so each WebContent
process only receives the cookies it would be interested in, if any.
This serves two purposes:
1. Less IPC chatter.
2. This will let each ViewImplementation know that its cookie value has
actually changed.
(2) is for an upcoming change that will introduce a cookie cache, and
will allow each view to know it should bust that cache.
Note that for this filtering to work, we must iterate ViewImplementation
instances rather than WebContentClient in order to have the view's URL.
We must then associate the IPC with the view's page ID.
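A sketch of the UI-process side (helper names like for_each_view and cookie_matches_url are assumed): filter per view, then address the message with that view's page ID.

```cpp
// Illustrative only; helper names are assumptions.
void broadcast_cookie_changes(Vector<Web::Cookie::Cookie> const& changed_cookies)
{
    WebView::ViewImplementation::for_each_view([&](auto& view) {
        Vector<Web::Cookie::Cookie> matching;
        for (auto const& cookie : changed_cookies) {
            if (cookie_matches_url(cookie, view.url()))
                matching.append(cookie);
        }
        if (!matching.is_empty())
            view.client().async_cookies_changed(view.page_id(), move(matching));
        return IterationDecision::Continue;
    });
}
```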
No changes to the /cookiestore WPT subtests.
If a did_paint message is in-flight and we create a new process when
navigating to another website, the view would still be registered with
the client when the stale did_paint message is eventually handled.
In server_did_paint, we retrieve the client, which points to the new
process, not the original process the view was registered with.
At that point, there might not be any queued rasterization tasks in the
RenderingThread for the new process, causing a crash because of:
VERIFY(m_queued_rasterization_tasks >= 1 && ..);
Fix this by unregistering the view before proceeding with a new
WebContent process.
This is expected by WPT. For this to work, we must be able to determine
the network partition key for shared worker environments. So we now set
a top-level origin for these environments, with a FIXME to implement it
in accordance with the Client-Side Storage Partitioning spec.
We had skipped some steps in the spec and were:
* Always broadcasting an old value of null, instead of what it
actually was previously.
* Still broadcasting a storage event even if the value had
not changed in storage compared to the last value.
Fix both issues by returning what the old value is in the setter and
implementing the missing logic.
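A sketch of the corrected setter (names assumed; the actual Storage implementation differs in detail): report the real old value, and skip the broadcast when nothing changed.

```cpp
// Illustrative only.
WebIDL::ExceptionOr<void> Storage::set_item(String const& key, String const& value)
{
    auto old_value = m_map.get(key); // Optional<String>: the actual previous value

    if (old_value.has_value() && old_value.value() == value)
        return {}; // unchanged: per spec, no storage event is broadcast

    TRY(set_value_in_backing_store(key, value));
    broadcast_storage_event(key, old_value /* real old value, may be empty */, value);
    return {};
}
```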
When rendering text, if none of the fonts in the cascade list contain a
glyph for a given code point, we now query Skia's font manager to find
a system font that can render it.
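A sketch of the lookup, assuming Skia's SkFontMgr::matchFamilyStyleCharacter() API (the wrapper function name is made up):

```cpp
#include <core/SkFontMgr.h>
#include <core/SkTypeface.h>

// Ask the system font manager for any installed typeface that can
// render `code_point`; returns nullptr if none exists.
sk_sp<SkTypeface> find_system_fallback_font(SkFontMgr& font_mgr, SkUnichar code_point)
{
    // nullptr family + empty BCP-47 list: search all installed families.
    return font_mgr.matchFamilyStyleCharacter(
        /* familyName */ nullptr, SkFontStyle(),
        /* bcp47 */ nullptr, /* bcp47Count */ 0, code_point);
}
```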