Wrap the SerializationRecord (Vector<u8, 1024>) in an OwnPtr so that
each ObjectStoreRecord is only ~16 bytes instead of ~1040+ bytes.
This makes Vector operations on the records list dramatically cheaper
since memmove now shifts pointers instead of kilobyte-sized buffers.
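A minimal sketch of the layout change, using illustrative names rather than the real LibWeb types, with std::array standing in for the inline Vector buffer and std::unique_ptr standing in for OwnPtr:

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Hypothetical stand-in for the inline ~1 KiB serialized value buffer.
using SerializationRecord = std::array<uint8_t, 1024>;

// Before: the serialized value lives inline in every record, so shifting
// records in the vector memmoves kilobyte-sized objects.
struct ObjectStoreRecordBefore {
    uint64_t key;
    SerializationRecord value;
};

// After: the record holds only a heap pointer, so shifting records moves
// ~16 bytes each instead of ~1040.
struct ObjectStoreRecordAfter {
    uint64_t key;
    std::unique_ptr<SerializationRecord> value;
};
```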
Since records are sorted by key, records matching a key range form a
contiguous block. Use binary search to find the range boundaries and
remove the block in one operation, instead of scanning every record
with is_in_range().
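A sketch of the technique with plain integer keys standing in for IDB keys: two binary searches bound the contiguous run, and one erase removes it.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Because the records vector is kept sorted by key, every record whose key
// falls in [lo, hi] forms one contiguous block. Find the block boundaries
// with binary search and erase it in a single operation, instead of testing
// every record individually.
void remove_records_in_range(std::vector<uint64_t>& records, uint64_t lo, uint64_t hi)
{
    auto first = std::lower_bound(records.begin(), records.end(), lo);
    auto last = std::upper_bound(first, records.end(), hi);
    records.erase(first, last);
}
```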
Instead of appending and re-sorting the entire records vector on every
insert (O(n log n)), use binary search to find the correct insertion
position and insert directly (O(log n) comparisons + O(n) shift).
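The same idea as a sketch, again with integer keys standing in for IDB keys:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Instead of push_back + sort (O(n log n) per insert), binary-search for the
// insertion point (O(log n) comparisons) and let insert() perform the single
// O(n) element shift. The vector stays sorted as an invariant.
void insert_sorted(std::vector<uint64_t>& records, uint64_t key)
{
    auto it = std::lower_bound(records.begin(), records.end(), key);
    records.insert(it, key);
}
```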
If one request on a transaction succeeds and the next one fails, the task
queue ordering would cause the abort algorithm to run before the success
handling for the first request. Instead, queue the processing of the next
request after the completion of the current request.
This code was treating the parameter to the JS::Array constructor as if it
were a capacity, but it is actually the array's length. That caused JS to
see more databases than are actually accessible if a database was being
created in an aborted transaction.
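The capacity-vs-length distinction is the same one std::vector draws between reserve() and resize(); a hedged illustration of the confusion (function names are ours, not from the codebase):

```cpp
#include <cstddef>
#include <vector>

// reserve() only preallocates storage: callers still observe zero elements.
std::size_t visible_count_with_reserve(std::size_t n)
{
    std::vector<int> v;
    v.reserve(n); // capacity only; size() stays 0
    return v.size();
}

// resize() changes the visible length: n observable elements appear. The bug
// was the JS::Array equivalent of calling resize() where only reserve() was
// intended, so scripts saw entries for inaccessible databases.
std::size_t visible_count_with_resize(std::size_t n)
{
    std::vector<int> v;
    v.resize(n);
    return v.size();
}
```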
To allow these to be reverted, we store mutation logs per object store
in the scope of a readwrite transaction to track the modifications that
were made by it. If a revert is needed, the log is played in reverse to
bring us back to the original state.
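A hypothetical sketch of the mechanism (simplified to an int-keyed store; the real code operates on IDB keys and serialized values): each mutation appends its inverse to a log, and a revert replays the log newest-first.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <vector>

struct ObjectStore {
    std::map<uint64_t, int> records;
    // Undo log: one inverse operation per mutation made in the transaction.
    std::vector<std::function<void()>> undo_log;

    void put(uint64_t key, int value)
    {
        if (auto it = records.find(key); it != records.end()) {
            int old_value = it->second;
            undo_log.push_back([this, key, old_value] { records[key] = old_value; });
        } else {
            undo_log.push_back([this, key] { records.erase(key); });
        }
        records[key] = value;
    }

    void revert()
    {
        // Play the log in reverse to restore the pre-transaction state.
        for (auto it = undo_log.rbegin(); it != undo_log.rend(); ++it)
            (*it)();
        undo_log.clear();
    }
};
```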
These should persist for the duration of the program, at least until we
have persistent storage to restore from if a database needs to be used
again.
This fixes a flake in indexeddb-queued-delete-after-open that turns
into a consistent failure (or an assertion failure in GC::Weak in
debug builds) when running the test with the -g flag.
cleanup_indexed_database_transactions() is called on every event
loop spin. It was calling associated_connections() which allocates
a GC::HeapVector every time, creating massive GC pressure on
JS-heavy sites (217K allocations observed on x.com).
Switch the hot path to use a GC::RootVector instead, which lives
on the stack and avoids GC heap allocation while still being
visible to the garbage collector.
Rename the existing methods to make the return type explicit:
- associated_connections_as_heap_vector() for callers that
capture the result in GC::Function lambdas
- associated_connections_as_root_vector() for callers that
just iterate safely within a single scope
The database map stores GC::Weak<Database> entries. When the GC
collects a Database, the weak pointer goes null but the map entry
remains. The old code dereferenced the weak pointer without checking
liveness, causing a null reference binding (UBSan).
Fix this by checking ptr() before dereferencing, and cleaning up the
stale map entry if the database was collected.
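The shape of the fix, sketched with std::weak_ptr standing in for GC::Weak<Database> (the lock() call plays the role of checking ptr()):

```cpp
#include <cstdint>
#include <map>
#include <memory>

struct Database { };

using DatabaseMap = std::map<uint64_t, std::weak_ptr<Database>>;

// Look the entry up, check liveness before dereferencing, and clean up the
// stale map entry if the database was already collected.
std::shared_ptr<Database> find_live_database(DatabaseMap& map, uint64_t key)
{
    auto it = map.find(key);
    if (it == map.end())
        return nullptr;
    if (auto live = it->second.lock())
        return live;
    map.erase(it); // collected: the weak pointer is null, drop the entry
    return nullptr;
}
```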
...rather than at each of the open connections. These disagreed with the
spec steps written right above them.
The transaction-lifetime.any.html and idbdatabase_close.any.html tests
pass instead of timing out with these changes, so they've been imported.
If the current JS task has not made any requests, then nothing else
will trigger a commit as the spec desires, so we need to do it in the
microtask checkpoint.
Two WPT tests no longer time out with this change and have been
imported.
If an error causes this to be left inactive, further requests will be
rejected on the transaction. This gives us a few subtest passes in
IndexedDB/key-conversion-exceptions and IndexedDB/keypath-exceptions
WPTs.
With this fixed, the cleanup loop can assert that the transactions'
states are all active before they are set inactive.
This fixes the regression in idbindex_reverse_cursor.any.html, which
was actually exposing the underlying issue of ignoring conflicting
read/write transactions. Now, if a read/write transaction is in the
queue, no transactions can coincide with its requests' execution.
Previously, after one request was marked as processed, we would
synchronously queue another task to process the next request. This
would mean that two open requests on the same database could
interleave. This was especially problematic when one of the requests
would cause the database to upgrade, since the second open request
would begin processing before the upgradeneeded event fired, causing an
exception to be thrown in the second open().
The solution is to explicitly check for continuation conditions after
events have been fired in order to ensure that every step for the
request is completed before starting any further request processing.
For connection requests, the spec states:
> Open requests are processed in a connection queue. The queue contains
> all open requests associated with a storage key and a name. Requests
> added to the connection queue are processed in order and each request
> must run to completion before the next request is processed. An open
> request may be blocked on other connections, requiring those
> connections to close before the request can complete and allow
> further requests to be processed.
For requests against a transaction, the spec states:
> Once the transaction has been started the implementation can begin
> executing the requests placed against the transaction. Requests must
> be executed in the order in which they were made against the
> transaction. Likewise, their results must be returned in the order
> the requests were placed against a specific transaction. There is no
> guarantee about the order that results from requests in different
> transactions are returned.
In the process of reworking it to use this approach, I've added a bunch
of new tests that cover things that our imported WPTs weren't checking.
With the fix for serializing connection requests, we can now fully
download the assets for the emscripten-compiled asm.js games in the
Humble Mozilla Bundle, particularly FTL: Faster Than Light.
There were no regressions in our test suite. One web platform test,
'idbindex_reverse_cursor.any.html', has one newly-failing subtest, but
the subtest was apparently only passing by chance due to synchronous
execution of requests. A few web platform tests that were added in a
prior commit improved. The delete-request-queue.any.html test has
stopped crashing, and the close-in-upgrade-needed.any.html test has
stopped flaking, so they are both imported here as well.
Incidentally fixes #7512, for which a crash test has been added.
This reduces it to one GC::Root instead of one per element, and will also
make it easier to replace that Root with a Ref when needed in the next
commit.
By making use of the WEB_PLATFORM_OBJECT macro, we can remove the
boilerplate override that every serializable platform object previously
needed in order to check whether it is exposed or not.
This way databases are allowed to be GC'ed when there are no open
connections to them.
As a side effect, databases are no longer kept alive for the duration of
a browsing session. This will be addressed once IndexedDB gets proper
on-disk persistence. For now, avoiding memory leaks is the better
trade-off.
With this change, the number of live `Window` objects in the GC graph
captured by `test-web -j 1 --dump-gc-graph` goes down from 50 to 25.
RequestList cannot be copied or moved, because m_pending_request_queue
contains lambdas that store pointers to the original RequestList and
completion steps that we don't have a reference to.
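A minimal sketch of that constraint (member and class names follow the commit; the queued-step shape is our assumption): the queue's lambdas may capture `this`, so copying or moving the list would leave them pointing at the old object.

```cpp
#include <functional>
#include <type_traits>
#include <vector>

class RequestList {
public:
    RequestList() = default;
    // Deleted copy/move: queued steps may hold pointers back into this
    // object, and those pointers would dangle if it were relocated.
    RequestList(RequestList const&) = delete;
    RequestList& operator=(RequestList const&) = delete;
    RequestList(RequestList&&) = delete;
    RequestList& operator=(RequestList&&) = delete;

    void enqueue(std::function<void()> step)
    {
        m_pending_request_queue.push_back(std::move(step));
    }

    void process_all()
    {
        for (auto& step : m_pending_request_queue)
            step();
        m_pending_request_queue.clear();
    }

private:
    std::vector<std::function<void()>> m_pending_request_queue;
};

static_assert(!std::is_copy_constructible_v<RequestList>);
static_assert(!std::is_move_constructible_v<RequestList>);
```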
Fixes a bunch of WPT regressions and imports the ones that work.
IDBGetAllOptions is supposed to have a default value for direction.
When the value passed is not a potentially valid key range, we
need to default the direction argument rather than assume it's set.
Spec issue: https://github.com/w3c/IndexedDB/pull/478
Directly mapping a negative double to a u64 causes it to wrap around
to the max value. We work around this here by comparing as doubles,
and only incrementing the generator if the new value is greater.
Fixes #6455
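A sketch of the workaround (the function name is illustrative, and the real key-generator update rules in the spec are more involved): compare as doubles first, so a negative candidate can never advance the generator.

```cpp
#include <cstdint>

// Casting a negative double straight to u64 is not well-defined and was
// observed to wrap to the max value. Comparing as doubles first means a
// negative candidate simply fails the check and the cast never happens.
void maybe_update_key_generator(uint64_t& current, double candidate)
{
    if (candidate > static_cast<double>(current))
        current = static_cast<uint64_t>(candidate);
}
```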
This fixes a crash on initial load of the page http://demo.actualbudget.org.
Minimal repro of the issue (error in the console without this PR):
<script>
const r = indexedDB.open("t", 1);
r.onupgradeneeded = e => e.target.result.createObjectStore("s", { keyPath: "id" });
r.onsuccess = () => r.result.transaction("s", "readonly").objectStore("s").getAllKeys();
</script>