Start a single-shot timer when a WebSocket enters the CLOSING state and
fail the connection if the peer never answers with its own Close frame.
Without a bound here, a dropped or unresponsive peer can leave the
WebSocket stuck in the closing handshake forever, which is another path
to the rare WebSocket timeouts seen during repeated test runs.
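The idea can be sketched as a small state machine. This is illustrative
only: the `CloseWatchdog` type, its method names, and the manual
`on_timeout()` hook (standing in for a real single-shot timer callback)
are all hypothetical, not the actual API from the patch.

```cpp
#include <cassert>

// Hypothetical sketch: a watchdog for the closing handshake.
enum class State { Open, Closing, Closed, Errored };

struct CloseWatchdog {
    State state = State::Open;
    bool timer_armed = false;

    // We sent our Close frame and entered the closing handshake;
    // arm a single-shot timeout so the handshake is bounded.
    void start_closing()
    {
        state = State::Closing;
        timer_armed = true;
    }

    // The peer's Close frame arrived in time: disarm and finish cleanly.
    void on_peer_close()
    {
        timer_armed = false;
        state = State::Closed;
    }

    // The single-shot timer fired first: fail the connection instead of
    // waiting on an unresponsive peer forever.
    void on_timeout()
    {
        if (state == State::Closing)
            state = State::Errored;
    }
};
```

A timeout only has an effect while the handshake is still pending; a
handshake that completed in time is unaffected by a late timer firing.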
Remove the client-side early return that skipped transport writes for
zero-length masked payloads.
This keeps empty text or binary messages and empty pong replies on the
wire instead of silently discarding them.
This essentially reverts 29078d4d53. The
crash that commit fixed (loading
https://www.linux.org.ru/news/opensource/16780786 and waiting) does not
occur with this change.
Route fatal protocol-level WebSocket failures through fail_connection(),
emit both an error and a close event once a socket exists, and drop the
old fatal_error() helper.
This gives callers a terminal close event instead of leaving transport
failures as error-only state changes.
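A minimal sketch of that shape, assuming callback-style error/close
hooks; the `Connection` type and callback names here are hypothetical,
and the "only once a socket exists" gating is one reading of the text
above.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Hypothetical sketch of routing a fatal failure into a terminal close.
struct Connection {
    bool has_socket = false;
    std::function<void()> on_error;
    std::function<void(int, std::string)> on_close;

    void fail_connection(int code, std::string reason)
    {
        // Per the commit text, only emit events once a socket exists.
        if (!has_socket)
            return;
        if (on_error)
            on_error(); // report the failure...
        if (on_close)
            on_close(code, std::move(reason)); // ...then close terminally
    }
};
```

Callers observe an error followed by a close, rather than being left in
an error-only state with the connection nominally still alive.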
The close status code was written using (u8*)&code, which produces
platform byte order (little-endian on x86/ARM) instead of the
big-endian network byte order that RFC 6455 requires. This caused echo
servers to receive invalid close codes and respond with 1011
(UnexpectedCondition).
Fixes, for example, this WPT test:
https://wpt.live/websockets/Close-1000-verify-code.any.html?wss
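The fix amounts to emitting the two bytes explicitly, most significant
byte first, instead of copying the in-memory representation. A minimal
sketch (the function name is illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Serialize a close status code in network byte order (big-endian),
// as RFC 6455 requires for the first two bytes of a Close payload.
// The buggy version effectively did memcpy(buf, (u8*)&code, 2), which
// yields { 0xE8, 0x03 } for code 1000 on little-endian machines.
inline std::array<uint8_t, 2> serialize_close_code(uint16_t code)
{
    return { static_cast<uint8_t>(code >> 8),
             static_cast<uint8_t>(code & 0xff) };
}
```

For the normal-closure code 1000 (0x03E8), this emits 0x03 then 0xE8,
which is what a conforming peer expects to read back.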
This fixes a race condition where a WebSocket would report a fatal
connection error instead of a clean close when the server dropped the
underlying connection immediately after sending a Close frame.
This fixes various WPT timeouts, such as in:
https://wpt.live/websockets/Send-null.any.worker.html?wss
AK/Random is already the same as SecureRandom; see the PR for more
details. ProcessPrng is used on Windows for compatibility with
sandboxing measures; see e.g. https://crbug.com/40277768
The end goal here is for LibHTTP to be the home of our RFC 9111 (HTTP
caching) implementation. We currently have one implementation in LibWeb
for our in-memory cache and another in RequestServer for our disk cache.
The implementations both largely revolve around interacting with HTTP
headers. But in LibWeb, we are using Fetch's header infra, and in RS we
are using our home-grown header infra from LibHTTP.
So to give these a common denominator, this patch replaces the LibHTTP
implementation with Fetch's infra. Our existing LibHTTP implementation
was not particularly compliant with any spec, so this at least gives us
a standards-based common implementation.
This migration also required moving a handful of other Fetch AOs over
to LibHTTP. (It turns out these AOs were all from the Fetch/Infra/HTTP
folder, so perhaps it makes sense for LibHTTP to be the implementation
of that entire set of facilities.)
`curl_easy_recv` must be called in a loop until it returns CURLE_AGAIN,
because curl may buffer received data internally while only firing the
read notifier once.
Additionally, a single read can contain multiple WebSocket frames while
the notifier only fires once, so we also have to keep parsing frames
until there is no longer enough buffered data for a complete frame.
We also have to do this immediately after connecting a WebSocket, since
the server may send data as soon as the connection opens, before we
have created the read notifier.
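The drain loop can be sketched as follows. To keep it self-contained,
`recv_fn` stands in for `curl_easy_recv` here: it fills the buffer and
returns the byte count, or a negative value where the real call would
return CURLE_AGAIN. The function name and the std::function indirection
are illustrative, not the actual implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Stand-in for curl_easy_recv: fills `buffer`, returns bytes read,
// or -1 when no more cached data is available (CURLE_AGAIN).
using RecvFn = std::function<long(uint8_t* buffer, size_t buflen)>;

// Keep reading until the transport reports "again", since the read
// notifier only fires once even when curl has cached multiple reads'
// worth of data. The caller then parses as many complete WebSocket
// frames as the buffered bytes allow.
inline std::vector<uint8_t> drain_all(RecvFn recv_fn)
{
    std::vector<uint8_t> buffered;
    uint8_t chunk[4096];
    for (;;) {
        long n = recv_fn(chunk, sizeof(chunk));
        if (n < 0) // CURLE_AGAIN: nothing left in curl's cache
            break;
        buffered.insert(buffered.end(), chunk, chunk + n);
    }
    return buffered;
}
```

A single notifier wakeup can thus yield several chunks, all of which
are accumulated before frame parsing begins.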
This makes Discord login faster and more reliable, and makes Discord
activities start loading.
This implementation can be improved in the future by ripping out a lot
of the manual logic in LibWebSocket and relying on libcurl to parse our
message payloads. But for now, this uses the 'raw mode' of curl
websockets in connect-only mode to allow for somewhat seamless
integration into our event loop.
The previous implementation would call send() a half-dozen times when
sending each frame of WebSocket data. This is excessive, especially
since we need to allocate a new buffer for the payload in order to mask
it anyway. Let's just allocate one buffer up front, and send all the
completed data at the end of the method.
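A minimal sketch of the single-buffer approach, handling only short
payloads (up to 125 bytes) so the header stays two bytes; the function
name is illustrative and extended-length encoding is omitted.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Build a complete masked client frame in one buffer, so the transport
// sees a single write instead of separate writes for the header, mask,
// and payload. Payloads > 125 bytes (extended lengths) are not handled.
inline std::vector<uint8_t> build_masked_frame(uint8_t opcode,
    std::vector<uint8_t> const& payload, std::array<uint8_t, 4> mask)
{
    std::vector<uint8_t> frame;
    frame.reserve(2 + 4 + payload.size());
    frame.push_back(0x80 | opcode);                               // FIN + opcode
    frame.push_back(0x80 | static_cast<uint8_t>(payload.size())); // MASK bit + length
    frame.insert(frame.end(), mask.begin(), mask.end());          // masking key
    for (size_t i = 0; i < payload.size(); ++i)                   // masked payload
        frame.push_back(payload[i] ^ mask[i % 4]);
    return frame; // one allocation, one send
}
```

Note this also naturally covers the zero-length case from the earlier
commit: an empty payload still produces a valid 6-byte frame (header
plus masking key) rather than nothing on the wire.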