
Fix pure virtual call crash in BackgroundTransportCurl destructor #51

Draft

mliberman wants to merge 1 commit into compnerd/swift from fix/curl-transport-destructor-race-condition

Conversation

mliberman commented Mar 5, 2026

Fixes a race condition in the curl HTTP transport destructor that causes a crash in Arc on Windows (BROWSER-WIN-2SRG).

Context

When an async HTTP request finishes, ~BackgroundTransportCurl does two things: signals the calling thread that it can proceed (SignalTransferComplete()), and marks the request/response as completed (CompleteOperation()).

The problem is the async path does these in the wrong order. SignalTransferComplete() fires first, which wakes up the calling thread (blocked in WaitForAllTransfersToComplete()), and it immediately starts destroying the request and response objects. Then CompleteOperation() tries to call MarkCompleted() on an object that's mid-destruction; the vtable has already been rewound to the base class (Transfer) where MarkCompleted() is pure virtual, so we get a _purecall crash.

```cpp
// Before (async path):
transport_curl_->SignalTransferComplete();   // caller wakes up, starts destroying request/response
CompleteOperation();                         // calls MarkCompleted() mid-destruction: _purecall

// After (both paths):
CompleteOperation();                         // mark request/response while they're still alive
transport_curl_->SignalTransferComplete();   // now safe to let the caller clean up
```

The synchronous path already had the correct order; the async path had a different order because of a concern (noted in a comment) that CompleteOperation() might somehow trigger destruction of the TransportCurl that owns the signal. But this isn't actually possible: MarkCompleted() and MarkFailed() just set a boolean flag (completed_ = true). And even if the complete_ callback did somehow trigger TransportCurl destruction, ~TransportCurl calls WaitForAllTransfersToComplete() which blocks on the semaphore, so it would deadlock waiting for the SignalTransferComplete() that hasn't happened yet.

Changes

  • Reorders the async path in ~BackgroundTransportCurl to call CompleteOperation() before SignalTransferComplete(), matching the synchronous path.

Testing

I can't repro this crash, so this fix is slightly speculative.

In the async path, SignalTransferComplete() was called before CompleteOperation(). This unblocked the waiting thread (via WaitForAllTransfersToComplete()), allowing it to destroy the Request and Response objects. When CompleteOperation() then called request_->MarkCompleted() / response_->MarkFailed(), these were dangling pointers whose vtables had been overwritten during destruction, causing _purecall -> abort().

The fix is to always call CompleteOperation() before SignalTransferComplete(), which is what the synchronous path already did. This ensures request_/response_ are still alive when their virtual methods are called.

The old comment worried that MarkCompleted()/MarkFailed() "could end up attempting to tear down TransportCurl", but these methods only set a boolean flag and cannot trigger destruction. Even if the complete_ callback somehow triggered TransportCurl teardown, ~TransportCurl calls WaitForAllTransfersToComplete(), which would block until SignalTransferComplete() is called, so the ordering is self-protecting.

This crash manifests as BROWSER-WIN-2SRG in Sentry with 19k+ events: EXCEPTION_ACCESS_VIOLATION_WRITE via _purecall in BackgroundTransportCurl::~BackgroundTransportCurl.