in some situations, on unexpected loop errors, the read buffer may still contain response bytes which could not be buffered into the error response after the error had propagated; this change makes that possible by delegating the remaining bytes to the wrapped response
fixed an issue where a 421 response would not call the misdirected callback: the callback was not being re-set on the cloned connection, so it would never fire, and the request would hang
in order to close only the connections initiated during the Session#wrap block, the pool now also wraps the block, so that existing connections do not roll over into it
Closes #292
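A minimal sketch of the intended behaviour, using a plain session (assuming the public Session#wrap block form):

```ruby
require "httpx"

session = HTTPX::Session.new

session.wrap do |http|
  # connections opened inside the block are the only ones
  # closed once the block exits
  http.get("https://example.com")
end
# connections that existed before the block are left untouched
```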
the h2c plugin was relying heavily on rewriting connection options so that only the first request could upgrade; this changes the approach by instead swapping the parser on the first incoming request: if it is an upgrade request and carries the upgrade header, the remaining requests are blocked until the upgrade succeeds or fails, and if it fails, the max number of concurrent requests is reverted anyway
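For context, the user-facing side of the plugin stays the same; a minimal sketch (the host is illustrative, the upgrade negotiation itself happens internally):

```ruby
require "httpx"

# only the first request on the connection performs the h2c upgrade;
# the remaining requests are held back until its outcome is known
http = HTTPX.plugin(:h2c)
response = http.get("http://nghttp2.org")
puts response.status
```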
while not all of them are recoverable, the ones that aren't are raised very early in the request establishment phase; the ones that are, such as SOCKS4 or SOCKS5 connection-phase errors, are now retried
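A sketch of the setup this affects (the proxy URI is illustrative), where SOCKS connection-phase errors now go through the retry logic:

```ruby
require "httpx"

http = HTTPX.plugin(:retries)
            .plugin(:proxy)
            .with_proxy(uri: "socks5://127.0.0.1:1080")

# a recoverable SOCKS5 handshake error is now retried instead of
# immediately surfacing as an error response
response = http.get("https://example.com")
```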
when users of the library introduce bugs in their callbacks, these should not be ignored (as they were before this change), but they should also not be treated like timeouts and the like, i.e. they should not be wrapped in an error response; they should fail loudly, i.e. raise
Closes #276
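A sketch of the new behaviour, assuming the callbacks plugin's on_response_completed hook; the typo in the block is the deliberate user bug:

```ruby
require "httpx"

http = HTTPX.plugin(:callbacks).on_response_completed do |request, response|
  # intentional typo: this NoMethodError is now raised to the caller,
  # instead of being swallowed or wrapped in an error response
  response.hedaers["content-type"]
end

http.get("https://example.com") # => raises NoMethodError
```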
while stream requests are lazy, they were nonetheless being enqueued before any method was called on them. this was not great behaviour, as they might never be called at all, and it also interfered with how other plugins, such as the webmock adapter and follow_redirects, inferred finished responses. Another flaw in the grpc plugin was fixed as a result, given that bidirectional streams were actually being buffered
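For illustration, with the stream plugin the request is now only enqueued when the response is consumed, not when the verb method returns:

```ruby
require "httpx"

http = HTTPX.plugin(:stream)

# lazy: nothing has been enqueued or sent yet
response = http.get("https://example.com/big-file", stream: true)

# the request is only sent (and the body streamed) here
response.each do |chunk|
  $stdout.write(chunk)
end
```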
this mixin applies only to connections built via Session#build_altsvc_connection. It moves out logic which was always being run on the hot path, even for connections which hadn't been alt-svc enabled, which alleviates the Connection#match? bottleneck.
the new merge strategy tries to avoid allocating new objects, whereas the old one relied on transforming objects to hashes for merging and then back to Options objects, which just generated too much garbage. The new one keeps the object being merged as a hash when it can, and tries to bail out early when there is nothing new to merge (empty or identical objects). If there is something new, a shallow dup is performed (dup will not dup all attributes by default, more on that later), and the new attributes are then passed through the transform-then-set pipeline (which now duplicates this logic in two places)
Because .dup creates a full shallow copy, classes extended by plugins need to be taken into account, and these must also be duped when extendable. This has the benefit of sharing more classes across plugins
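A rough sketch of the bail-out path described above (HTTPX::Options is internal API, the values are illustrative):

```ruby
require "httpx"

options = HTTPX::Options.new(timeout: { request_timeout: 10 })

# nothing new to merge: the merge can bail out early,
# without rebuilding the whole options object
options.merge({})

# something new: a shallow dup is taken, and only the new attributes
# go through the transform-then-set pipeline
options.merge(headers: { "accept" => "application/json" })
```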
httpx uses throw/catch in order to recover from so-called early resolve errors, i.e. errors which may happen before the name resolution process is either early-completed or set up, such as when there are no nameservers (internet turned off) and the requests have already been piped into the connection, which puts them out of reach of the 'on_error' callback. these errors were only covered in the initial send flow, i.e. in other situations where new connections may have to be established and may fail early, the throw would not be caught, and would reach user code
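The user-visible effect, sketched: resolution failures that happen this early now consistently surface as error responses instead of an uncaught throw reaching user code:

```ruby
require "httpx"

# e.g. with no reachable nameservers, this no longer leaks an
# uncaught throw from the resolver setup
response = HTTPX.get("https://example.com")

puts response.error if response.is_a?(HTTPX::ErrorResponse)
```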
this is an old vulnerability, long since fixed in curl (https://github.com/advisories/GHSA-7xmh-mw7w-rr97), where credentials sent via the authorization header would be re-sent on all follow-location requests; this limits it to same-origin redirects; an option, "auth_to_other_origins", can be used to keep the original behaviour
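A sketch of the new default and the opt-out; the option name follows the entry above, so double-check it against the follow_redirects plugin docs:

```ruby
require "httpx"

http = HTTPX.plugin(:follow_redirects)

# default: the authorization header is only forwarded on same-origin redirects
http.get("https://example.com/redirect",
         headers: { "authorization" => "Bearer TOKEN" })

# opting back into the previous (cross-origin) behaviour
http.with(auth_to_other_origins: true)
    .get("https://example.com/redirect",
         headers: { "authorization" => "Bearer TOKEN" })
```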
stream responses weren't following redirects when both plugins were
loaded. This was due to the stream callback object not being passed
across the redirect chain.
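A sketch of the combination that was broken (both plugins on the same session):

```ruby
require "httpx"

http = HTTPX.plugin(:follow_redirects).plugin(:stream)

# the redirect is followed and the final response is streamed;
# previously the stream callback object was lost on the redirect hop
response = http.get("https://example.com/redirects-to-big-file", stream: true)
response.each { |chunk| $stdout.write(chunk) }
```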
connection bookkeeping on the pool changes: all connections are kept around, even the ones that close during the scope of the requests; new requests may then find them, reset them, and reselect them. this is a major improvement, as objects get reused more, which means less GC and object movement. this also changes the way the pool terminates, as connections now follow a termination protocol, instead of just closing (which they still can while the scope is open)
the change to read/write cancellation-driven timeouts as the default timeout strategy revealed a performance regression; because these were built on timers which never got unsubscribed, they were kept beyond the duration of the request they were created for, and were needlessly picked up on the next timeout tick.
This was fixed by adding a callback on timer intervals which unsubscribes them from the timer group when called; these callbacks are now fired once the timeout is no longer needed (request sent / response received), thereby removing the overhead on subsequent requests.
An additional intervals array is also kept in the connection itself; timeouts from timers are signalled via socket wait calls, but these were always being treated as timeouts, even when they shouldn't be (ex: an expect timeout, which results in the full response payload being sent), and in some cases with the wrong exception class. By keeping the intervals from its requests around, and monitoring whether there are relevant request triggers, the connection can either handle the timeout or bail out (so that the timers can fire the correct callback).
since v1, performance had regressed due to the new default timeouts, which are based on timers. That's because they were not being cleaned up after requests were done with them, and were causing spurious wakeups in the select loop after the fact.
Fixed by cleaning up each timer after each relevant request event.
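For reference, these are the timeouts whose timer bookkeeping is affected (a sketch using the standard timeout options):

```ruby
require "httpx"

# read/write timeouts are the cancellation-driven defaults; their timers
# are now unsubscribed as soon as the request no longer needs them
http = HTTPX.with(timeout: { read_timeout: 5, write_timeout: 5, request_timeout: 30 })
response = http.get("https://example.com")
```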
the previous logic relied on a random ordering which didn't work in practice; instead, max-attempts is now reused to define how many requests happen in the half-open state, and the drip rate defines how many of them will be real
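A sketch with the circuit_breaker plugin options this refers to (the max-attempts and drip-rate option names are assumed, double-check against the plugin docs):

```ruby
require "httpx"

# in the half-open state, up to max_attempts requests are allowed through,
# and the drip rate defines how many of them are real requests
http = HTTPX.plugin(:circuit_breaker)
            .with(
              circuit_breaker_max_attempts: 5,
              circuit_breaker_half_open_drip_rate: 0.5
            )
response = http.get("https://example.com")
```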