the routine was using #fetch_response, which may return nil, and wasn't handling that case, so it could return nil instead of a response/error response object. since, depending on the plugins, #fetch_response may reroute requests, the loop is now kept alive in case there are selectables to process again as a result.
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration; to protect against it, the proxy plugin now guards against proxy connection errors (socket/timeout errors happening until the tunnel is established) and allows them to be retried, while ignoring other proxy errors; meanwhile, the error naming was simplified, and HTTPX::ProxyError now replaces HTTPX::HTTPProxyError (which is a breaking change).
when a proxied ssl connection was lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new ProxySSL connection on reconnect.
while this type of error is avoided when doing HEv2, the IPs remain
in the cache; this means that, once the same host is reached, the
IPs are loaded onto the same socket, and if the issue is IPv6
connectivity, it'll break outside of the HEv2 flow.
this error is now handled inside the connect block, so that other
IPs in the list can be tried afterwards; the IP is then evicted from the
cache.
the HEv2-related regression test is disabled in CI, as it's currently
unreliable in Gitlab CI, which allows resolving the IPv6 address,
but does not allow connecting to it.
Options#merge works by duping-then-filling ivars, but because not all of them were initialized on object creation, each merge could add more object shapes for the same class, which defeats one of the most recent ruby optimizations.
this was fixed by caching all possible option names at the class level, and using that as the reference in the initialize method to nilify all unreferenced options.
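a minimal sketch of the pattern (names below are illustrative, not httpx's actual internals):

```ruby
class Options
  # all possible option names, cached once at the class level
  OPTION_NAMES = %i[origin headers timeout max_requests].freeze # illustrative subset

  def initialize(opts = {})
    # assign every known ivar (nil when absent), so that all instances share
    # a single object shape, and merge's dup-then-fill stays shape-stable
    OPTION_NAMES.each { |name| instance_variable_set(:"@#{name}", opts[name]) }
  end
end
```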
the native resolver needs to be unselected. it already was, but it was still taken into account for bookkeeping. this removes it by eliminating closed selectables from the list (which were probably already removed from it via callback).
Closes https://github.com/HoneyryderChuck/httpx/issues/91
in case of multiple connections to the same server, where the server may have closed all of them at the same time, a request could fail multiple checkouts in a row before starting a new connection where it may succeed. this patch makes those prior attempts not exhaust the request's retry budget.
it does so by marking the request as a ping when the connection it's being sent to is marked as inactive; this leverages the existing logic of gating retries bookkeeping in that case.
Closes https://github.com/HoneyryderChuck/httpx/issues/92
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:237: warning: method redefined; discarding old option_pool_options (StandardError)
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:221: warning: previous definition of option_pool_options was here
the whole "found connection not open" branch was removed, as currently,
a mergeable connection must not be closed; this means that only
open/inactive connections will be picked up from selector/pool, as
they're the only coalescable connections (have addresses/ssl cert
state). this may be extended to support closed connections though, as
remaining ssl/addresses are enough to make it coalescable at that point,
and then it's just a matter of idling it, so it'll be simpler than it is
today.
the coalesced connection now gets closed via Connection#terminate at the
end, in order to propagate whether it was a cloned connection.
added log messages in order to monitor the coalescing handshake from the logs.
In Ruby 3.5, most of the `cgi` gem will be removed and moved to a bundled gem.
Luckily, the escape/unescape methods have been left in place, so only the require path needs to be adjusted to avoid a warning.
`cgi/escape` has been available since Ruby 2.3.
I also moved the require to the file that actually uses it.
https://bugs.ruby-lang.org/issues/21258
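the change boils down to something like:

```ruby
# before: pulls in the whole cgi gem (mostly moving to a bundled gem in Ruby 3.5)
# require "cgi"

# after: only the escape/unescape routines (available since Ruby 2.3)
require "cgi/escape"

CGI.escape("a b")   #=> "a+b"
CGI.unescape("a+b") #=> "a b"
```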
this was not happening for errors raised during name resolution, particularly when HEv2 was used, as the second resolver was kept open and didn't stop the selector loop.
Closes #348
this new option declares the maximum number of inflight-or-idle open connections a session may hold. connections get recycled when a new one is needed and the pool has closed connections to discard. the same pool timeout error applies as for max_connections_per_origin.
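usage could look something like this (assuming the new option is `:max_connections` and sits alongside the existing pool options):

```ruby
http = HTTPX.with(pool_options: { max_connections: 100, pool_timeout: 5 })
# when 100 connections are already open, a request for a new one first tries
# to recycle a closed connection, then waits up to 5s before the pool timeout
# error (the same one used for max_connections_per_origin) is raised
```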
the cache key is now also determined by the supported vary header values, when present; this means easier lookups with a one-level hash fetch, where the same url-verb request may have multiple entries depending on those headers.
checking the response vary header is therefore done at cache response lookup; writes may override when they shouldn't though, as a full match on supported vary headers is performed, and one can't know in advance the combination of vary headers, which is why interested parties will have to be judicious with the new option.
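a sketch of the intended usage, assuming the plugin option implied above is `:supported_vary_headers` (url illustrative):

```ruby
http = HTTPX.plugin(:response_cache, supported_vary_headers: %w[accept accept-language])

# same url/verb, but different "accept" values: each response gets its own
# cache entry under the same one-level cache key scheme
res_json = http.get("https://example.com/data", headers: { "accept" => "application/json" })
res_xml  = http.get("https://example.com/data", headers: { "accept" => "application/xml" })
```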
response age wasn't being taken into account, and a cache validation request was always being sent; a fresh response now stays in the store until it expires; when it expires, cache validation is attempted (if possible); if invalidated, the new response is put in the store; if validated, the body of the cached response is copied, and the cached response stays in the store.
http-2 1.1.0 uses the string input as the ultimate buffer (when the input is not frozen), which mutates the argument. in order to keep it around for further comparison, the string is duped.
in situations where the connection was already open/active, requests would be buffered before the timeouts were set, which skipped transition callbacks associated with writes, such as write timeouts and request timeouts.
the previous option was there to allow reconnecting on non-idempotent (i.e. POST) requests, but had the unfortunate side effect of allowing retries for failures (i.e. timeouts) which had nothing to do with a failed connection; this mitigates it by enabling retries for ping-aware requests, i.e. if there is an error during PING, always retry.
this was being used as an internal cache for finished responses; it can however be superseded by Request#response, which fulfills the same role alongside the #finished? call; this allows us to drop one variable-size hash which would grow at least as large as the number of requests per call, and was inadvertently shared across threads when using the same session (at no danger of colliding, but it could perhaps cause problems in jruby?).
it also allows removing one callback.
subplugins are modules of plugins which register as post-plugins of other plugins
a specific plugin may want to have a side effect on the functionality of another plugin, so it can use this to register that behaviour for when the other plugin is loaded.
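a minimal sketch of the idea (the registration hook below illustrates the mechanism, and is not necessarily the exact API):

```ruby
module CustomPlugin
  # only applied if/when the :retries plugin is also loaded
  module RetriesSubplugin
    module InstanceMethods
      def send_request(request, *)
        # side effect on the retries plugin behaviour goes here
        super
      end
    end
  end

  def self.subplugins
    { retries: RetriesSubplugin }
  end
end
```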
using Hash#new(capacity: ) to better predict size; reduce the number of allocated arrays by passing the result to the store when possible, and only calling #downcased(str) once; #array_value will also not try to clean up errors in the passed data (it'll either fail loudly, or be fixed downstream).
before, canceling a timer belonging to an interval which would become empty would delete it from the main intervals store; this deletion now moves out of the request critical path, and pinging for intervals drops elapsed-or-empty ones before returning the shortest one.
beyond that, the intervals store won't be constantly recreated if there's no need for it (i.e. nothing has elapsed), which reduces gc pressure.
searching for an existing interval on #after now uses bsearch; since the list is ordered, this should make finding one more performant.
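conceptually, with illustrative names:

```ruby
Interval = Struct.new(:interval, :callbacks)

# the intervals list stays sorted by interval length, so bsearch_index gives
# an O(log n) lookup of an existing slot (or of the insertion point)
def find_or_create_interval(intervals, secs)
  index = intervals.bsearch_index { |t| t.interval >= secs }
  return intervals[index] if index && intervals[index].interval == secs

  interval = Interval.new(secs, [])
  intervals.insert(index || intervals.size, interval)
  interval
end
```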
this allows calling #status or #headers on a stream response without buffering the whole response, as happens now; this will only work for methods which do not rely on the whole payload being available, but that should be ok for the stream plugin usage.
Fixes https://github.com/janko/down/issues/98
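usage-wise (url illustrative):

```ruby
http = HTTPX.plugin(:stream)
response = http.get("https://example.com/big.json", stream: true)

response.status # now available without draining the whole payload
response.each { |chunk| STDOUT << chunk } # payload is only consumed here
```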
since pools can keep multiple persistent connections which may have already been terminated by the peer, exhausting the single retry attempt from the persistent plugin could make a request fail before trying it on an actual live connection. with this patch, requests which are preceded by a PING frame used for probing are marked as such, and do not decrement the attempts counter when failing.
some connection callbacks are prone to be left behind; when they are, they may access objects that have been locked by another thread, thereby corrupting state.
with the retries plugin, the request payload will be rewound, which may not be possible if it was already closed. this was never detected so far because no request body transcoder closes internally, but the faraday multipart adapter does.
the request is therefore closed alongside the response (when the latter is closed)
Fixes https://github.com/HoneyryderChuck/httpx/issues/75#issuecomment-2731219586
this avoids, on the HTTP/2 termination handshake, writing the bytes regardless when the socket was detected as closed due to EOF (they were being misidentified as the GOAWAY frame).
this may happen in a few contexts, such as connection exhaustion, but more importantly, when a request is retried on a different connection; if the request successfully sets the callbacks before the connection raises an issue and the request is retried on a new one, the callbacks from the faulty connection are carried with it, and triggered at a time when the connection is back in the connection pool, or worse, used in a different thread.
this fix relies on the :idle transition callback, which is called before the request is routed around.
this should fall back to terminating the session immediately and closing its connections, instead of trying to fit the same exception into the request objects; there's no point in that.
Closes https://github.com/HoneyryderChuck/httpx/issues/77
the previous iteration relied on internal behaviour to delete the correct callback; in the process, logic to delete all callbacks from an interval was accidentally committed, which motivated this refactoring. the premise is: timeouts can cancel the timer; they set themselves as active until done; operation timeouts rely on the previous ones being ignored or not.
a new error, OperationTimeoutError, was added to that effect.
there were a lot of issues with bookkeeping this at the connection level; in the end, the timers infra was a much better proxy for all of it: set the timer after a write; cancel it when reading data to parse.
each connection will now check its sibling and whether it's the original connection (containing the initial batch of requests); internal functions are then called to control how connections react to successful or failed resolutions, which reduces code repetition.
the handling of coalesced connections is also simplified: when coalescing happens, the sibling must also be closed. this allowed fixing some mismatches when handling this use case with callbacks.
there's a bug (reported in https://bugs.ruby-lang.org/issues/21131) with IO.copy_stream, where yielded duped strings still change value on subsequent yields, which breaks http2 framing, which requires two yields at the same time in the first iteration. this replaces it with #read calls; file handles will now be closed once done streaming, which is a change in behaviour.
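the replacement amounts to something like this (sketch; chunk size illustrative):

```ruby
# each yielded chunk is an independent string, unlike the IO.copy_stream
# behaviour reported in the bug above, where a yielded duped string could
# still change value on a subsequent yield
def each_chunk(file, chunk_size = 16_384)
  buffer = "".b
  yield buffer.dup while file.read(chunk_size, buffer)
ensure
  file.close # change in behaviour: file handles are closed once done streaming
end
```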
```
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/selector.rb:95: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/resolver/system.rb:54: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
```
In selector.rb, the definitions are identical, so I kept the delegator
For system.rb, it always returns true so I kept that one
this was previously done in connection initialization, which meant that the request would map to an error response with this error; however, the change to thread-safe pools in 1.4.0 caused a regression, where the uri is expected to have an origin before the connection is established; this is fixed by raising an error on request creation, which will need to be caught by the caller.
Fixes #335
nodejs servers, for example, seem to send them when shutting down servers on timeout; when both arrive in the same buffer, the first correctly closes the parser and emits the message, while the second, because the parser is already closed, emits an exception; the regression happened because the second exception used to be swallowed by the pool handler, but now that's gone, and errors on connection consumption get handled; this was worked around by clearing the queue on the parser when emitting the errors for pending requests, so that when the second error comes, there's no request to emit the error for.
Closes #333
this is a regression from a ractor compatibility commit, which ensured that errors raised while preparing the request / resolving the name are caught and raised, but introduced a regression when name resolution retrieves a cached IP; this error only manifested in dual-stack situations, which can't be tested in CI yet.
Closes #329
before, compressed bodies were yielding chunks and buffering locally (the variant in this snippet); they were also failing to rewind, due to the lack of a method (fixed in the last commit); in this change, support is added for bodies which can read and rewind (but do not map to a local path via ), such as compressed bodies, which at this point haven't been buffered yet; the procedure is then to buffer the compressed body into a tempfile, calculate the hexdigest, then rewind the body and move on.
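as a sketch of that procedure (digest algorithm and chunk size illustrative):

```ruby
require "tempfile"
require "digest"

def buffered_hexdigest(body)
  tempfile = Tempfile.new("httpx-body")
  tempfile.binmode
  digest = Digest::SHA256.new
  # buffer the compressed body into a tempfile while digesting it
  while (chunk = body.read(16_384))
    digest.update(chunk)
    tempfile.write(chunk)
  end
  body.rewind # possible again, now that the payload is safely buffered
  [digest.hexdigest, tempfile]
end
```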
instead of fiber-local storage; this ensures that, under fiber-scheduler based engines like async, requests on the same session with an open selector will reuse the latter, thereby ensuring connection reuse within the same thread.
in normal conditions, that'll happen only if the user uses a session object and uses HTTPX::Session#wrap to keep the context open; it'll also work OOTB when using sessions with the plugin. Otherwise, a new connection will be opened per fiber.
because you're reconnecting to the same host, the previous connection is now closed, in order to avoid a deadlock on the pool where the per-host connections are exhausted, and the new connection can't be initiated because the older one hasn't been checked back in.
this eliminates the overuse of Connection#origin, which in the case of proxied connections was broken in the previous commit
the proxy implementation got simpler, despite this large changeset
defaulting to unbounded, in order to preserve current behaviour; this caps the number of connections initiated for a given origin on a pool, which, if not shared, will be per-origin; this includes connections from separate option profiles.
a pool timeout is defined to checkout a connection when the limit is reached.
also fixed the coalescing case where the connection may come from the pool, and should therefore be removed from there and selected/checked back in accordingly as a result.
pools are then used only to fetch new connections; selectors are discarded when not needed anymore; HTTPX.wrap is patched for now, but would ideally be done away with in the future.
when building the request signature, the body is preemptively converted to a string, which fulfills the expectation for webmock, despite being a bit of a perf penalty if the request contains a multipart request body, as the body will be fully read into memory.
Closes #319
Closes https://github.com/HoneyryderChuck/httpx/issues/65
When:
1. the proxy is autodetected from `http_proxy` etc. variables;
2. a request is made which bypasses the proxy (e.g. to an authority in `no_proxy`);
3. this request fails with one of `Proxy::PROXY_ERRORS` (timeout or a system error)
the `fetch_response` method tried to access the proxy URIs array which
isn't initialized by `proxy_options`. This change fixes the
`proxy_error?` check to avoid the issue.
Options becomes a bag of session- and connection-level parameters, and requests no longer need to maintain a separate Options object when they contain a body; instead, the object is shared with the session, while request-only parameters get passed down to the request and its body. This reduces allocations of Options, currently the heaviest object to manage.
in order to reuse the idle transition for these situations, and also because the only case where resetting timeouts is required is when retrying on a separate nameserver.
the bug this was deleted for no longer seems to hinge on this behaviour, and this at least allows the down integration to not change significantly.
the span initiation gate wasn't being reset in the case of retries, which
reuse the same object; a redesign was done to ensure the span initiates
before really sending the request, is reused when the request object is
reset and reused, and is still created when the error happens outside of
the request transaction, such as during name resolution.
the recovery model of long-running connections is to mark requests as pending, ping the connection to fill the write buffer, and move on. since the last changes which improved connection object reuse, the way the procedures were stacked created a conundrum, where the inactive connection would move to idle before being activated, so it'd never go back to the connection pool; this switches the operations, so an inactive connection activates first and is picked up by the pool, before ping-and-reconnect happens.
in some situations on unexpected loop errors, the read buffer may still contain response bytes which couldn't be buffered into the error responses after the error had propagated; this makes it possible by delegating the bytes to the wrapped response.
if a CNAME came in a tcp dns response, the follow-up dns query would be erased and never performed; this fixes it by keeping buffer state on the fallback to udp.
fixed an issue where a 421 response would not call the misredirected callback, as it wasn't being re-set in the cloned connection; it would therefore never be called, and the request would hang.
triggering timer callbacks may call Connection#consume, which may trigger the interval cleanup process of the timer callback. this does not happen usually, but it can happen in the context of multiple requests to the same host using the expect plugin.
in order to only close connections initiated during the Session#wrap block, the pool also wraps the block so that existing connections do not roll over.
Closes #292
buffering more chunks after decoding response payload leads to dubious results in ruby 3.3, and is, from a usability perspective, not even something httpx should allow
consume may call on_error, which ends up in #handle_transition again. to prevent this infinite loop, the state is set before the handshake packet is buffered.
if a socket or ssl session gets corrupted, it's a certainty that the HTTP/2 goaway frame will fail to be sent. in such cases, the error should be ignored. given that this is already handled in the transition routine, the handshake push is moved there.
in the face of multiple dns name candidates, the https resolver was not behaving like the native resolver, which recursively tries them when receiving "domain not found" types of errors.
the h2c plugin was relying heavily on rewriting connection options to only allow the first request to upgrade; this changes it by instead swapping the parser on the first incoming request, so that if it's an upgrade and contains the header, the remainder is blocked until the upgrade succeeds or not, and if it doesn't, the max concurrent requests are reverted anyway.
when it's a real request on a webmocked connection, the mocked state needs to go away, which includes the registered connect_error callback; this should be better refactored to not rely on private API, but for now, this moves the needle forward.
mocked responses are set up in plain text; in some cases, such as vcr integrations, they're auto-registered after the first successful requests, where the content-encoding header is retained but the body has been decoded; when so, they're marked as mocked, and the decoding step is therefore skipped.
while not all of them are recoverable, the ones that aren't are raised very early in the request establishment phase; the ones that are, such as socks4 or socks5 connection phase errors, are retried.
the ssrf filter tests surfaced an issue with these errors, which were leaving connections in the loop, a problem even more exacerbated now that inactive connections are kept. these are the kind of connections that can be immediately discarded.
when users of the library code bugs in callbacks, these should not be ignored (as they were before this change), but they should also not be treated as timeouts and such, i.e. they should not be wrapped in an error response. they should fail loudly, i.e. raise.
Closes #276
these are interesting for browsers, but I can't seem to find a use case for an http client. it also breaks under HTTP/2, where the final response would have the 103 headers and the 200 response body.
this had the effect of storing redirect responses and using them, instead of the final response, for inferences in the each-chunk block.
Closes #282
when #response was called inside the #each block, the webmock trigger would inject the body before attaching the response object to the request, thereby retriggering #each in a loop.
Closes #281
the last line of the payload wasn't being yielded unless the last character of the payload was a newline. this was overlooked for a while because the stream plugin was built for the text/event-stream mime type, which follows that rule, as per what the tests cover.
while stream requests are lazy, they were nonetheless being enqueued before any function would be called. this was not great behaviour, as they could perhaps never be called; it also interfered with how other plugins inferred finished responses, such as the webmock adapter and follow_redirects. another flaw in the grpc plugin was fixed as a result, given that bidirectional streams were actually being buffered.
the header merge logic changed, and because Headers#initialize and Headers#merge logic differs a bit, it's safer to account for everything as having string keys.
this mixin applies only for connections built via Session#build_altsvc_connection. This moves out logic which was always being called on the hot path for connections which hadn't been alt-svc enabled, which improves the Connection#match? bottleneck.
the new merge strategy tries to avoid allocating new objects, whereas the old one relied on transforming objects to hashes for merging, then back to Options objects, which just generated too much garbage. So the new one keeps the merging object as a hash if it can, and tries to bail out if there's nothing new to merge (empty or same objects). If there is new stuff, a shallow dup is performed (dup will not dup all attributes by default, more on that later), and new attributes are then passed through the transformation-then-set pipeline (which now duplicates this logic in two places).
Because .dup creates a full shallow copy, extending classes for plugins needs to be taken into account, and these must also be duped when extendable. This has the benefit of sharing more classes across plugins.
a plugin which allows requests to fail when they are crafted to
use IPs considered internal or reserved for specific usages. these SSRF
vulnerabilities happen when one allows requests with urls input by an
external user.
This plugin is inspired by, and heavily makes use of, routines existing in
the ssrf_filter gem: https://github.com/arkadiyt/ssrf_filter/ .
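usage is just loading the plugin; a sketch (the metadata IP below is a typical internal address):

```ruby
http = HTTPX.plugin(:ssrf_filter)

http.get("https://example.com/")     # regular public host: allowed
http.get("http://169.254.169.254/")  # link-local/metadata IP: request fails
```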
httpx uses throw/catch in order to recover from so-called early resolve errors, i.e. errors which may happen before the name resolution process is either early-completed or set up, such as when there are no nameservers (internet turned off), and the requests were piped into the connection, which means they're outside of the 'on_error' callback reach. these errors were only covered on the initial send flow; in other situations where new connections may have to be established and may early-fail, the throw would not be caught, and would reach user code.
the name resolution code was making the usage of IPs dependent on the existence of a DNS resolver, but there are situations where users use the IP directly, and in such a case, when IPv4-only DNS is possible **but** IPv6 loopback/link-local is available, one should still provide support for it
this is an old vuln (https://github.com/advisories/GHSA-7xmh-mw7w-rr97), fixed in curl a long time ago, where credentials in the authorization header would be resent on all follow-location requests; this limits it to same-origin redirects; an option, "auth_to_other_origins", can be used to keep the original behaviour.
stream responses weren't following redirects when both plugins were
loaded. This was due to the stream callback object not being passed
across the redirect chain.
connection bookkeeping on the pool changes: all connections are kept around, even the ones that close during the scope of the requests; new requests may then find them, reset them, and reselect them. this is a major improvement, as objects get reused more, so less gc and object movement. this also changes the way the pool terminates, as connections now follow a termination protocol, instead of just closing (which they can while the scope is open).
now, when a connection gets exhausted, the same object is reused, and reconnection & reselection are handled without having to redrive all requests again, so less work to do.
this is only used by http/1 connections; still, it is now adapted to the new reality of picking up closed-but-in-the-loop connections, in that the reset process picks up requests left off, transitions to closed, then moves back to idle if there's a request backlog to deal with.
due to how read timeouts are added on request transitions, timers may
enter the pool **before** a new tick happens, and are therefore
accounted for when the timers are fired after the current tick.
This patch resets the timer, which will force a new tick before they may
fire again.
there are situations where a connection may already be closed before the dns response is received for it. one such example is connection coalescing: happy eyeballs takes over, the first address arrives, a coalescing situation is detected, and then the connection and its happy eyeballs cousin are both closed, **before** that connection's name resolution has completed.
this has been working for a while, but was silently failing in HTTP/1, due to our inability to test it in CI (the HTTP/1 setup does not yet use keep-alive).
the change to read/write cancellation-driven timeouts as the default
timeout strategy revealed a performance regression; because these were
built on Timers, which never got unsubscribed, this meant that they were
kept beyond the duration of the request they were created for, and
needlessly got picked up for the next timeout tick.
This was fixed by adding a callback on timer intervals, which
unsubscribes them from the timer group when called; these would then be
activated after the timeout is not needed anymore (request send /
response received), thereby removing the overhead on subsequent
requests.
An additional intervals array is also kept in the connection itself;
timeouts from timers are signalled via socket wait calls, however they
were always resulting in timeouts, even when they shouldn't (ex: expect
timeout and send full response payload as a result), and with the wrong
exception class in some cases. By keeping intervals from its requests
around, and monitoring whether there are relevant request triggers, the
connection can therefore handle a timeout or bail out (so that timers
can fire the correct callback).
since v1, performance has regressed due to the new default timeouts,
which are based on timers. That's because they were not being cleaned up
after requests were done with them, and were causing spurious wakeups in the
select loop after the fact.
Fixed by cleaning up each timer after each relevant request event.
a misinterpretation of the spec on http-2-next led to the introduction
of the max_requests option, a cap of requests on a given connection,
which in http/2 case, would be initialized with MAX_CONCURRENT_STREAMS,
which means something else.
This has been fixed already in http-2-next, and this is the summary of
changes required to support it.
The `max_requests` option is kept, as it can still be useful from a user
perspective, but the default in http/2 is now INFINITY, which disables
it effectively. The HTTP/1 cap is bumped to 200, but it may fall as
well soon.
Snake-case named procedures could not be called. Now two methods are defined: one is underscore-named, and the second has the original procedure name as called on the service.
the previous logic relied on a random order which didn't work in practice; instead, max-attempts is now reused to define how many requests happen in the half-open state, and the drip rate defines how many of them will be real.
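for illustration, using the plugin options also described later in this document:

```ruby
http = HTTPX.plugin(:circuit_breaker).with(
  circuit_breaker_max_attempts: 4,
  circuit_breaker_half_open_drip_rate: 0.5
)
# in the half-open state, up to 4 attempts are admitted, and a 0.5 drip rate
# means roughly half of them are real requests reaching the peer
```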
the proxy plugin contained an enhancement, when used with the follow_redirects plugin, which retries a request over the received proxy. This contained a bug, which has now been caught with the added test.
most of the code was moved to the transcoder layer.
The `compression_threshold_size` option has been removed.
The `:compression/brotli` plugin becomes just `:brotli`, and depends on
the new transcoding APIs.
options to skip compression and decompression were added.
Some plugins override the Session class; however, there may be instances
of the original Session around, therefore the assertions need to somehow
point to the original Session class to still be able to work.
Closes #247
when closed, connections are now placed in a list called eden_connections; whenever a connection is matched for, after checking the live connections and finding none, a match is looked up in the eden connections; the match is accepted **if** the IP is considered fresh (the input is still valid in the DNS cache, or the input was an ip or in /etc/hosts, or it's an external socket) and, for a TLS connection, the stored TLS session did not expire; if these conditions don't match, the connection is dropped from the eden and a new connection is started instead; this therefore allows reusing ruby objects, reusing TLS sessions, and still respecting the DNS cache.
when connections get reset due to max number of requests being reached,
the same TLS session is going to be reused, as long as it's valid.
This change is ported from the same feature in net-http, including [the
tls 1.3
improvements](ddf5c52b5f)
besides not setting the session sni hostname, which it was already doing,
verify_hostname is set to false to avoid warnings, and the
post_connection_check is still allowed to proceed, to check that the
returned certificate includes the IP address.
port of the similar net-http change found
[here](fa68e64bee)
also omitting certain steps in the initializer if the ssl socket is
initiated outside of the httpx context and passed as an option.
* implement `Faraday::Adapter#build_connection` (adapter seems to
  expect it)
* implement `Faraday::Adapter#close` (adapter seems to expect it)
* use `Faraday::Adapter#request_timeout` to translate faraday timeouts
to httpx timeouts;
* ensure that the same HTTPX session object gets reused
In the process, I also had to tweak the parallel manager, by
reimplementing the faraday APIs I was required to implement in the first
place, in order to be able to reuse something (which just shows that
this faraday parallel API was poorly thought out).
this made several plugins unusable with the proxy plugin, because a lot
of them depend on Connection#send being called and overwritten.
This was done to avoid piping requests when intermediate
connect-level parsers are in place. So now, when the conn is
initial, the original send is used; when not, which should almost never
happen, a second list is created, which is then piped back to the
original send when the connection is established.
for instance, in multi-homed networks, `/etc/hosts` will have both
"127.0.0.1" and "::1" pointing to localhost; still, only one of
them may be reachable, if a server binds only to "127.0.0.1", for
example. In such cases, the early exit placed to prevent the loop
from b0777c61e was preventing the dual-stack IP resolver from passing
the second set of responses, thereby potentially making only the
unreachable IP accessible to the connection.
As per the ruby IO reader protocol, which Response::Body was aimed at
supporting since the beginning, the call to #rewind was impeding it
from consuming the body buffer, and instead delivering the same
substring every time.
first attempt at more granular error handling: during response chunk processing, errors will be handled in a way where the current response stops being fetched; for http/1, the connection is fully reset; for http/2, the individual stream is cancelled.
These internal registries were a bit magical to use, difficult to
debug, not thread-safe, and overall a nuisance when it came to type
checking. So long.
yet another compliance fix for the DNS protocol; while udp is the
preferred transport, in case a truncated response is received, the
resolver will switch to tcp and perform the DNS query again.
This introduces a new resolver option, `:socket_type`, which is `:udp`
by default.
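for example:

```ruby
# keep the default (udp first, with the tcp retry on truncated responses):
HTTPX.get("https://example.com")

# or force DNS queries over tcp from the start:
HTTPX.with(resolver_options: { socket_type: :tcp }).get("https://example.com")
```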
a new feature is introduced in the `retries` plugin, whereby if an error
occurs midway through the response payload transfer, and the peer server
signals (via `"accept-ranges"`) that it accepts range requests, the
retried request will attempt to start over from where the previous
request left off.
The reference for a request verb is now the string which is used
everywhere else, instead of the symbol corresponding to it. This was an
artifact from the import from httprb, and there is no advantage in it,
since these strings are frozen in most use cases, and the
transformations from symbol to string being performed everywhere are
proof that keeping the atom wasn't really bringing any benefit.
connections weren't being correctly initiated, as proxies were filtered
for the whole session based on URI.find_proxy for the first call. This
fixes it by:
* applying it to all used uris;
* falling back to proxy options instead;
* applying the no_proxy option in case it's used, using
  `URI::Generic.use_proxy?`
without this, requests may not get merged between connections, and
callbacks aren't called.
multi resolver path gets simplified by this change, given that the
callbacks handle the bulk of happy eyeballs complexity.
the sentry and datadog plugins have wrongly been relying on the
assumption that #send is called just once, when in fact it can be
called multiple times, both in conn exhaustion and in conn merging
(coalescing + happy eyeballs) scenarios.
Because of this, their "on response" callback could be set multiple times, which was confusing. So this fixes the behaviour.
Fixes #228
Given a sequence of events where IPv4 and IPv6 addresses are emitted,
and IPv6 wins the race, the IPv4 connection may already be in an
advanced state of registering that it'll find the IPv6 connection, and
it will coalesce with it. In such a case, the `:tcp_open` callback will
be emitted for the IPv6 connection, which will merge and shut itself
down.
This caused hanging requests.
If the IPv4 handshake wins in dual-stack, and there is an open
connection to be used, the tcp_open callback wasn't being called, and
the process halted. The fix was to emit :tcp_open before coalescing, as
this allows the original conn state to be merged first with the new
conn, then with the connection to coalesce.
domain not found
since httpx supports candidate calculations for dns queries, candidates
were always traversed when no answers came back. However, the DNS
response message contains a code set by the server, indicating whether
we should consider the domain as existing **but** having no address, or
as nonexistent; candidates should only be queried in the latter case.
An issue was misidentified for years, where it was thought that IPv6
addresses with the net interface suffix, attributed by the LAN, were
invalid; this was wrong, as that's an ip address with a zone identifier.
`ipaddr` has since fixed its support, which invalidated a
native resolver patch, and the error surfaced when building a URI
component for the UDP factory, because `uri` does not support zone
identifiers.
A patch was therefore added to the URI component, to allow setting up
the host while not validating it, as that was the shortest path to
fixing it right now.
Implementing the following fixes:
* connections are now marked by IP family of the IO;
* connection "mergeable" status dependent on matching origin when conn
is already open;
* on a connection error, in case it happened while connection, an event
is emitted, rather than handling it; if there is another connection
for the same origin still doing the handshake, the error is ignored;
if not, the error is handled;
* a new event, `:tcp_open`, is emitted when the tcp socket conn is
established; this allows for concurrent handshakes to be promptly
terminated, instead of being dependent on TLS handshake;
* connection cloning now happens early, as connection is set for
resolving; this way, 2-way callbacks are set as early as possible;
This returns the filename advertised in the content-disposition header.
It reuses the same logic which existed for parsing multipart responses,
which itself was based on `rack`'s.
While this API may be accidentally exposed publicly, users aren't
encouraged to use it; internally, it is used, for instance, in the
`:stream` plugin, to know whether a response is available; in such a
case, we don't want it to get stuck in the middle.
Until now, httpx was issuing concurrent DNS requests, but it'd only
start connecting to the first address, and then to the following ones
in order, sequentially.
With this change, httpx will now continue the process by connecting
concurrently to both IPv6 and IPv4, and close the other connection once
one is established. This means both TCP and TLS (when applicable) need
to succeed before the second connection is cancelled.
a behaviour has been observed behind a vpn, where, when one of the
servers is unresponsive, the switch to the next nameserver wasn't
happening. Part of it was a bug in the timeout handling, but the rest
was the switch actually not happening (i.e. it'd fail on the first
server). This fixes it by switching to the next nameserver on query
error.
* circuit break the uri (instead of the whole origin) if the timeout is
only on requests;
* improved cached responses loop;
* organized components into separate files
These are deadline oriented for the request and response, i.e. a write
timeout tracks the full time it takes to write the request, whereas the
read timeout does the same for receiving the response.
For back-compat, they're infinite by default. v1 may change that, and
will have to provide a safe fallback for endless "stream" requests and
responses.
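setting them looks like this (values illustrative):

```ruby
http = HTTPX.with(timeout: { write_timeout: 10, read_timeout: 60 })
# write_timeout caps the total time to write the request;
# read_timeout caps the total time to receive the response
http.post("https://example.com/upload", body: File.open("path/to/file"))
```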
http.patch("http://example.com/file", body: File.open("path/to/file")) # request body is streamed
```
If you want to do some more things with the response, you can get an `HTTPX::Response`:
and then just require it in your program:

```ruby
require "httpx"
```
## What makes it the best ruby HTTP client
In Ruby, HTTP client implementations are a known cheap commodity. Why this one?
### Concurrency, HTTP/2 support
`httpx` supports HTTP/2 (for "https" requests, it'll automatically do ALPN negotiation). However, if the server supports HTTP/1.1, it will use HTTP pipelining, falling back to 1 request at a time if the server doesn't support it either (and it'll use Keep-Alive connections, unless the server does not support them).
If you pass multiple URIs, it'll perform all of the requests concurrently, by multiplexing on the necessary sockets (and it'll batch requests to the same socket when the origin is the same):
```ruby
HTTPX.get(
"https://news.ycombinator.com/news",
"https://news.ycombinator.com/news?p=2",
"https://google.com/q=me"
) # first two requests will be multiplexed on the same socket.
```
### Clean API
`httpx` builds all functions around the `HTTPX` module, so that all calls can compose with each other. Here are a few examples:
http.get("https://example.com") # the above options will apply
http.post("https://example2.com", form: { name: "John", age: "22" }) # same, plus the form POST body
```
### Lightweight
It ships with most features published as a plugin, making vanilla `httpx` lightweight and dependency-free, while allowing you to "pay for what you use"
The plugin system is similar to the ones used by [sequel](https://github.com/jeremyevans/sequel), [roda](https://github.com/jeremyevans/roda) or [shrine](https://github.com/shrinerb/shrine).
### DNS-over-HTTPS
`HTTPX` ships with custom DNS resolver implementations, including a DNS-over-HTTPS resolver.
## Easy to test
The test suite runs against [httpbin proxied over nghttp2](https://nghttp2.org/httpbin/), so actual requests are performed during tests.
## Supported Rubies
All Rubies greater or equal to 2.7, and always latest JRuby and Truffleruby.
**Note**: This gem is tested against all latest patch versions, i.e. if you're using 3.3.0 and you experience some issue, please test it against 3.3.$latest before creating an issue.
| Wiki | https://honeyryderchuck.gitlab.io/httpx/wiki/home.html |
| CI | https://gitlab.com/os85/httpx/pipelines |
| Rubygems | https://rubygems.org/gems/httpx |
## Caveats
### ALPN support
`httpx`'s TLS backend is ruby's own `openssl` gem.
If your requirement is to run requests over HTTP/2 and TLS, make sure you run a version of ruby compiled against OpenSSL 1.0.2 or higher (Ruby 2.3 and higher are guaranteed to be).
In order to use HTTP/2 under JRuby, [check this link](https://gitlab.com/honeyryderchuck/httpx/-/wikis/JRuby-Truffleruby-Other-Rubies) to know what to do.
### Known bugs
* Doesn't work with ruby 2.4.0 for Windows (see [#36](https://gitlab.com/honeyryderchuck/httpx/issues/36)).
* Using `total_timeout` along with the `:persistent` plugin [does not work as you might expect](https://gitlab.com/honeyryderchuck/httpx/-/wikis/Timeouts#total_timeout).
## Versioning Policy
`httpx` follows Semantic Versioning.
## Contributing
* Discuss your contribution in an issue
* Fork it
* Make your changes, add some tests (follow the instructions from [here](test/README.md))
* Open a Merge Request (that's Pull Request in Github-ish)
Read more about it in the [webmock integration documentation](https://os85.gitlab.io/httpx/wiki/Webmock-Adapter).
### Datadog Adapter
A trace will be emitted for every request, so this should be an interesting visualization.
Customization options and traces are similar to what [the net-http adapter provides](https://docs.datadoghq.com/tracing/setup_overview/setup/ruby/#nethttp).
Read more about it in the [datadog integration documentation](https://os85.gitlab.io/httpx/wiki/Datadog-Adapter).
## Improvements
Read more about it in the [multipart plugin documentation](https://os85.gitlab.io/httpx/wiki/Multipart-Uploads), including also about why this was made.
A default 5-second timeout is in place when using the DNS `:system` resolver, as it was found out that, when using the `resolv` library, the DNS query will not be retried otherwise. You can change this setting by passing `resolver_options: { timeouts: ANOTHER_TIMEOUT }`. In the future, this may become another timeout option, however.
A new plugin, `:aws_sigv4`, is now shipped with `httpx`. It implements the [AWS Signature Version 4 request signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html), a well documented way of authenticating requests to AWS services, which has since been adopted by other cloud providers, such as Google Cloud Storage.
See how to use it here: https://gitlab.com/os85/httpx/-/wikis/AWS-Sigv4#sessionaws_sigv4_authentication
For convenience, there's a derivative plugin, `:aws_sdk_authentication`, which builds on top of `:aws_sigv4`, and integrates with the `aws-sdk-core` gem, maintained by AWS, to resolve the authentication credentials (p.ex. if you support ephemeral access keys).
See how to use it here: https://gitlab.com/os85/httpx/-/wikis/AWS-Sigv4#sessionaws_sdk_authentication
Other FAQ: https://gitlab.com/os85/httpx/-/wikis/AWS-Sigv4#faqs
### HTTP/2 support for JRuby
`jruby-openssl` doesn't support ALPN protocol negotiation, nor are there plans to implement it, which limited seamless HTTP/2 usage in `httpx`. A new connection adapter was therefore added specifically for JRuby, where ssl/tls connections will be handled using ffi-based openssl bindings, provided you bundle `ffi-compiler` and `concurrent-ruby`, and install a TLS/1.2-compatible `openssl` package.
See how to use it here: https://gitlab.com/os85/httpx/-/wikis/JRuby-Truffleruby-Other-Rubies#http2
## Improvements
* Fixed TCP handshake Errno::EINPROGRESS handling inside TLS connections, which was causing the process to hang in a high handshake contention scenario;
* Do not call the event loop if there's nothing to listen on (the DoH resolver was being listened on even if there was nothing to request);
A new plugin, `:upgrade`, is now available. This plugin allows one to "hook" on HTTP/1.1's protocol upgrade mechanism (see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Protocol_upgrade_mechanism), which is the mechanism that browsers use to initiate websockets (there is an example of how to use `httpx` to start a websocket client connection [in the tests](https://gitlab.com/os85/httpx/-/blob/master/test/support/requests/plugins/upgrade.rb))
You can read more about the `:upgrade` plugin in the [wiki](https://os85.gitlab.io/httpx/wiki/Connection-Upgrade).
It's the basis of two plugins:
This plugin has been rewritten on top of the `:upgrade` plugin, and handles upgrading a plaintext (non-"https") HTTP/1.1 connection into an HTTP/2 connection.
This plugin handles the case when a server responds to a request with an `Upgrade: h2` header, and does the following requests to the same origin via HTTP/2 prior knowledge (bypassing the necessity for ALPN negotiation, which is the whole point of the feature).
The justification for this behaviour probably had to do with avoiding keeping huge payloads around, but it got a bit lost in git history. It became a feature, not a bug.
However, I got an [issue report](https://gitlab.com/os85/httpx/-/issues/143) that made me change my mind about this behaviour (tl;dr: it broke pattern matching when matching against response bodies more than once).
So now, you can call `.to_s` how many times you want!
You can now define an `OptionsMethods` module under your custom plugin to define your own methods. The tl;dr is that, given the module below, a new `:bar` option will be available (and the method will be used to set it):
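the referenced module would look something along these lines (a sketch following the `option_`-prefixed method convention visible in the warnings quoted earlier in this document):

```ruby
module CustomPlugin
  module OptionsMethods
    # defines the :bar option; the method coerces/validates the value it's set to
    def option_bar(value)
      String(value)
    end
  end
end
```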
The behaviour of the cookies jar from the `:cookies` plugin was a bit unpredictable in certain conditions; for instance, if a "Cookie" header was passed directly via `.with(headers: {"Cookie" => "a=1"})` and there was a value for it already, in some cases it'd be fully ignored. This got even worse if the session had a jar and a specific set of cookies was passed to a request (i.e. `session_with_cookies.get("http://url.get", headers: {"Cookie" => "..."})`).
The behaviour was fixed, and is now specced under https://gitlab.com/os85/httpx/-/blob/master/test/support/requests/plugins/cookies.rb .
Two new methods, `#json` and `#form`, were added to `HTTPX::Response`. As the name implies, they'll decode the raw payload into ruby objects you can work with.
The `:response_cache` plugin handles transparent usage of HTTP caching and conditional requests to improve performance and bandwidth usage.
On the `:retries` plugin, jitter calculation is now applied to the value in seconds defined by user after which a request should be retried (i.e. if `:retry_after` option is set to `2`, the retry interval may be `1.5422312` seconds, for example). This is important to avoid cases of synchronized "thundering herd", where server rejects requests, but they all get retried at the same time because the retry interval is exactly the same.
You can override the jitter calculation function by using the [:retry_jitter](https://gitlab.com/os85/httpx/-/wikis/Retries#retry_jitter) option:
* `webmock` adapter: added support for "stub_http_request#to_timeout" (https://gitlab.com/honeyryderchuck/httpx/-/merge_requests/165).
* `webmock` adapter: added support for "stub_http_request#to_timeout" (https://gitlab.com/os85/httpx/-/merge_requests/165).
## timers not a dependency
The functionality provided by the `timers` gem was replaced by a simpler custom implementation.
## Bugfixes
* Fixed Error class declaration on response decoders when mime type is invalid (https://gitlab.com/os85/httpx/-/merge_requests/166).
* `ErrorResponse#to_s` now removes ANSI escape sequences from error backtraces.
* Persistent connections were kept around both in the pool and in the selector; the first is necessary, but the second caused busy loop scenarios all over; they are now removed when no requests are being handled.
* Connections which failed connection handshake were removed from the pool, but not from the selector list, causing busy loop scenarios in a few cases; this has been fixed.
The quirk of using the `:persistent` plugin with `:total_timeout` has been documented: https://gitlab.com/os85/httpx/-/wikis/Timeouts#total_timeout.
The `:response_cache` plugin is now more compliant with how the RFC 2616 defines which behaviour caches shall have:
* it caches only responses with one of the following status codes: 200, 203, 300, 301, 410.
* it discards cached responses which become stale.
* it supports "cache-control" header directives to decided when to cache, to store, what the response "age" is.
* it can cache more than one response for the same request, provided that the request presents different header values for the headers declared in the "vary" response header (previously, it was only caching the first response, and discarding the remainder).
## Bugfixes
* fixed DNS resolution bug which caused a loop when a failed connection attempt would cause a new DNS request to be triggered for the same domain, filling up and giving preference to the very IP which failed the attempt.
* response_cache: request verb is now taken into account, not causing HEAD/GET confusion for the same URL.
Just like `:connect_timeout`, the new timeouts are deadline-oriented, rather than op-oriented, meaning that they do not reset on each socket operation (as most ruby HTTP clients do).
None of them has a default value, in order not to break integrations, but that'll change in a future v1, where they'll become the default timeouts.
The `:circuit_breaker` plugin wraps around errors happening when performing HTTP requests, and supports options for setting the maximum number of attempts before the circuit opens (`:circuit_breaker_max_attempts`), the period after which attempts should be reset (`:circuit_breaker_reset_attempts_in`), the timespan until the circuit half-opens (`:circuit_breaker_break_in`), the respective half-open drip rate (`:circuit_breaker_half_open_drip_rate`), and a callback to do your own check on whether a response has failed, in case you want HTTP-level errors to be marked as failed attempts (`:circuit_breaker_break_on`).
Read the wiki for more info about the defaults.
```ruby
http = HTTPX.plugin(:circuit_breaker)
# that's it!
http.get(...
```
### WebDAV plugin
https://gitlab.com/os85/httpx/-/wikis/WebDav
The `:webdav` plugin introduces some "convenience" methods to perform common WebDAV operations.
```ruby
res = webdav.put("/file.html", body: "this is the file body")
res = webdav.copy("/file.html", "/newdir/copy.html")
# ...
```
### XML transcoder, `:xml` option and `response.xml`
A new transcoder was added for the XML mime type, which requires `"nokogiri"` to be installed. It can both serialize Nokogiri nodes in a request, and parse response content into Nokogiri nodes:
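A minimal sketch of both directions (URL and payload are illustrative):
```ruby
require "nokogiri"
require "httpx"

# serialize a Nokogiri document as the request body
doc = Nokogiri::XML("<search><query>httpx</query></search>")
response = HTTPX.post("https://example.com/search", xml: doc)

# parse the response payload into Nokogiri nodes
response.xml.xpath("//result")
```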
http.get("https://google.com/?q=httpx") #=> not proxied
http.get("https://gitlab.com") #=> proxied
http.get("https://gitlab.local") #=> not proxied
```
### OOTB support for other JSON libraries
If one of `multi_json`, `oj` or `yajl` is available, all `httpx` operations doing JSON parsing or dumping will use it (the `json` standard library will be used otherwise).
```ruby
require "oj"
require "httpx"
response = HTTPX.post("https://somedomain.json", json: { "foo" => "bar" }) # will use "oj"
puts response.json # will use "oj"
```
## Bugfixes
* `:expect` plugin: `:expect_timeout` can accept floats (not just integers).
## Chore
* DoH `:https` resolver: support was removed for the "application/dns-json" mime-type (it was only supported in practice by the Google DoH resolver, which has since added support for the standardized "application/dns-message").
Until now, `httpx` was issuing concurrent DNS requests, but it would only start connecting to the first resolved address, and then to the following ones, in order, sequentially.
`httpx` will now establish connections concurrently to both IPv6 and IPv4 addresses of a given domain; the first one to succeed terminates the other. Successful connection means completion of both TCP and TLS (when applicable) handshakes.
### HTTPX::Response::Body#encoding
A new method, `#encoding`, can be called on response bodies. It'll return the encoding of the response payload.
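For instance (the exact encoding depends on the response payload):
```ruby
response = HTTPX.get("https://example.com")
response.body.encoding #=> #<Encoding:UTF-8>, for example
```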
The response class is now checked before calling `.status`, as this was being called in some places on error responses, thereby triggering the deprecation warning.
### HTTPX::Response::Body#filename
A new method, `.filename`, can be called on response bodies, to get the filename referenced by the server for the received payload (usually in the "Content-Disposition" header).
```ruby
response = HTTPX.get(url)
response.raise_for_status
filename = response.body.filename
# you can do, for example:
response.body.copy_to("/home/files/#{filename}")
```
## Improvements
### Loading integrations by default
Integrations will be loaded by default, as long as the dependency being integrated is already available:
```ruby
require "ddtrace"
require "httpx"
HTTPX.get(... # request will be traced via the datadog integration
```
### Faraday: better error handling
The `faraday` adapter will no longer raise errors when used in parallel mode. This fixes the difference in behaviour with the equivalent `typhoeus` parallel adapter, which does not raise errors in such cases either. This behaviour excludes 4xx and 5xx HTTP responses, which are not considered errors in the `faraday` adapter.
If errors occur in parallel mode, these'll be available in `env[:error]`. Users can check it in two ways:
```ruby
response.status == 0
# or
!response.env[:error].nil?
```
## Bugfixes
* unix socket: handle the error when the path for the unix socket is invalid, which was causing an endless loop.
### IPv6 / Happy eyeballs v2
* the `native` resolver will now use IPv6 nameservers with zone identifier to perform DNS queries. This bug was being ignored prior to ruby 3.1 due to some pre-filtering on the nameservers, which was covering up misuse of the `uri` dependency for this use case.
* a Happy Eyeballs v2 handshake error on connection establishment for the first IP will now be ignored, as long as a connection attempt for the second IP is still ongoing. This fixes a case where both IPv4 and IPv6 addresses are served for a given domain, but only one of them can be connected to (i.e. if connection via IPv6 fails, the IPv4 one should still proceed to completion).
* the `native` resolver won't try querying DNS name candidates, if the resolver sends an empty answer with an error code different from "domain not found".
* fix error of Happy Eyeballs v2 handshake, where the resulting connection would coalesce with an already open one for the same IP **before** requests were merged to the coalesced connection, resulting in no requests being sent and the client hanging.
## Chore
* fixed error message on wrong type of parameter for the `compression_threshold_size` option from the `:compression` plugin.
* fix happy eyeballs v2 bug where, once the first connection would be established, the remaining one would still end up in the coalescing loop, thereby closing itself via the `:tcp_open` callback.
* fix for faraday plugin parallel mode, where it'd hang if no requests were made in the parallel block (@catlee)
* `datadog` and `sentry` integrations did not account for `Connection#send` being possibly called multiple times (something possible for connection coalescing, max requests exhaustion, or Happy Eyeballs 2), and were registering multiple `on(:response)` callbacks. Requests are now marked when decorated the first time.
* Happy Eyeballs handshake "connect errors" routine now takes both name resolution errors and TLS handshake errors into account when the handshake fails.
The `:retries` plugin will now support scenarios where, if the request being retried supports the `range` header, and a partial response has been already buffered, the retry will resume from there and only download the missing data.
#### HTTPX::ErrorResponse#response
As a result, `HTTPX::ErrorResponse#response` has been introduced; error responses may now carry an actual response. This happens in cases where the request failed **after** a partial response was initiated.
#### `:buffer_size` option
A new option, `:buffer_size`, can be used to tweak the buffers used by the read/write socket routines (16k by default; you can lower it in memory-constrained environments).
## Improvements
### `:native` resolver falls back to TCP for truncated messages
The `:native` resolver will repeat DNS queries to a nameserver via TCP when the first attempt is marked as truncated. This behaviour is both aligned with `getaddrinfo` and the `resolv` standard library.
This introduces a new `resolver_options` sub-option, `:socket_type`, which can be set to `:tcp` to use TCP from the start (`:udp` remains the default).
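For example, to perform DNS queries over TCP from the start (a sketch; the URL is illustrative):
```ruby
HTTPX.with(resolver_options: { socket_type: :tcp }).get("https://example.com")
```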
## Chore
### HTTPX.build_request should receive upcased string (i.e. "GET")
Functions which receive an HTTP verb should now be given the verb as an upcased string (i.e. "GET"). The usage of symbols is still possible, but a deprecation warning will be emitted, and support will be removed in v1.0.0 .
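For example:
```ruby
HTTPX.build_request("GET", "https://example.com") # correct
HTTPX.build_request(:get, "https://example.com")  # still works, but emits a deprecation warning
```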
### Remove HTTPX::Registry
These internal registries were a bit magical to use, difficult to debug, not thread-safe, and overall a nuisance when it came to type checking. While there is the possibility that someone was relying on it existing, nothing had ever been publicly documented.
## Bugfixes
* fixed proxy discovery using proxy env vars (`HTTPS_PROXY`, `NO_PROXY`...) being enabled/disabled based on the first host used in the session.
* fixed `:no_proxy` option usage in the `:proxy` plugin.
* fixed `webmock` adapter to correctly disable it when `Webmock.disable!` is called.
* fixed bug in `:digest_authentication` plugin when enabled and no credentials were passed.
* fixed several bugs in the `sentry` adapter around breadcrumb handling.
* fixed `:native` resolver candidate calculation by putting absolute domain at the bottom of the list.
The `:oauth` plugin manages the handling of a given OAuth session, in that it ships with convenience methods to generate a new access token, which it then injects in all requests.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/OAuth
### session callbacks
HTTP request/response lifecycle events can now be intercepted via public API callback methods:
```ruby
HTTPX.on_request_completed do |request|
puts "request to #{request.uri} sent"
end.get(...)
```
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Events to know which events and callback methods are supported.
A callback has been introduced for the `:circuit_breaker` plugin, which is triggered when a circuit is opened.
```ruby
http = HTTPX.plugin(:circuit_breaker).on_circuit_open do |req|
puts "circuit opened for #{req.uri}"
end
http.get(...)
```
## Improvements
Several `:response_cache` features have been improved:
* `:response_cache` plugin: response cache store has been made thread-safe.
* cached response sharing across threads is made safer, as stringio/tempfile instances are copied instead of shared (without copying the underlying string/file).
* stale cached responses are eliminated on cache store lookup/store operations.
* already closed responses are evicted from the cache store.
* fallback for lack of compatible response "date" header has been fixed to return a `Time` object.
## Bugfixes
* Ability to recover from errors happening during response chunk processing (required for overriding behaviour and response chunk callbacks); errors bubbling up will result in the connection being closed.
* Happy eyeballs support for multi-homed early-resolved domain names (such as `localhost` under `/etc/hosts`) was broken, as only the first given IP would be tried; so, if `::1` was given first and the connection failed, `127.0.0.1` wouldn't be tried, even though it would have succeeded.
* `:digest_authentication` plugin was removing the "algorithm" header on `-sess` declared algorithms, which is required for HTTP digest auth negotiation.
* datadog adapter: support `:service_name` configuration option.
* datadog adapter: set `:distributed_tracing` to `true` by default.
* `:proxy` plugin: when the proxy uri uses an unsupported scheme (i.e.: "scp://125.24.2.1"), a more user friendly error is raised (instead of the previous broken stacktrace).
## Bugfixes
* datadog adapter: fix tracing enable call, which was wrongly calling `super`.
* `:proxy` plugin: fix for bug which was turning off plugins overriding `HTTPX::Connection#send` (such as the datadog adapter).
* besides an array, `:resolver_options` can now receive a hash for `:nameserver`, which **must** be indexed by IP family (`Socket::AF_INET6` or `Socket::AF_INET`); each group of nameservers will be used for emitting DNS queries of that IP family (see the sketch after this list).
* `:authentication` plugin: Added `#bearer_auth` helper, which receives a token, and sets it as `"Bearer $TOKEN"` in the `"authorization"` header.
* `faraday` adapter: now implements `#build_connection` and `#close`, will now interact with `faraday` native timeouts (`:read`, `:write` and `:connect`).
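A sketch of the `:nameserver` hash form mentioned above (the addresses are illustrative):
```ruby
require "socket"
require "httpx"

HTTPX.with(resolver_options: {
  nameserver: {
    Socket::AF_INET6 => %w[2001:4860:4860::8888], # used for IPv6 (AAAA) queries
    Socket::AF_INET => %w[8.8.8.8],               # used for IPv4 (A) queries
  },
}).get("https://example.com")
```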
## Bugfixes
* fixed native resolver bug when queries involving intermediate alias would be kept after the original query and mess with re-queries.
* `digest_authentication` plugin now supports passing password-hashed HA1s (commonly stored in htdigest files, for example) by setting the `:hashed` kwarg to `true` in the `.digest_auth` call.
* whenever possible, `httpx` sessions will recycle used connections so that, in the case of TLS connections, the first TLS session keeps being reused, thereby diminishing the overhead of subsequent TLS handshakes on the same host.
* TLS sessions are only reused in the scope of the same `httpx` session, unless the `:persistent` plugin is used, in which case, the persisted `httpx` session will always try to resume TLS sessions.
## Bugfixes
* When explicitly using IP addresses in the URL host, the TLS handshake will now verify that the IP address is included in the certificate.
* The IP address will still not be used for SNI, as per RFC 6066, section 3.
* ex: `http.get("https://10.12.0.12/get")`
* if you want the prior behavior, set `HTTPX.with(ssl: {verify_hostname: false})`
* Turn TLS hostname verification on for `jruby` (it's turned off by default).
* if you want the prior behavior, set `HTTPX.with(ssl: {verify_hostname: false})`
* fix Session class assertions not prepared for class overrides, which could break some plugins which override the Session class on load (such as `datadog` or `webmock` adapters).
`http-2-next` last supported version for the 0.x series is the last version before v1. This should ensure that older versions of `httpx` won't be affected by any of the recent breaking changes.
## Bugfixes
* `grpc`: setup of rpc calls from camel-cased symbols has been fixed. As an improvement, the GRPC-enabled session will now support both snake-cased, as well as camel-cased calls.
* `datadog` adapter has now been patched to support the most recent breaking changes of `ddtrace` configuration DSL (`env_to_bool` is no longer supported).
* The fallback support for IDNA 2003 has been removed. If you require this feature, install the [idnx gem](https://github.com/HoneyryderChuck/idnx), which `httpx` automatically integrates with when available (and supports IDNA 2008).
* `:total_timeout` option has been removed (no session-wide timeout supported, use `:request_timeout`).
* `:read_timeout` and `:write_timeout` are now set to 60 seconds by default, and preferred over `:operation_timeout`;
* the exception being in the `:stream` plugin, as the response is theoretically endless (so `:read_timeout` is unset).
* The `:multipart` plugin is removed, as its functionality and API are now loaded by default (no API changes).
* The `:compression` plugin is removed, as its functionality and API are now loaded by default (no API changes).
* `:compression_threshold_size` was removed (formats in `"content-encoding"` request header will always encode the request body).
* the new `:compress_request_body` and `:decompress_response_body` can be set to `false` to (respectively) disable compression of passed input body, or decompression of the response body.
* `:retries` plugin: the `:retry_on` condition will **not** replace default retriable error checks, it will now instead be triggered **only if** no retryable error has been found.
* OAuth plugin: the `:oauth_authentication` helper is renamed to `:oauth_auth`.
* `:compression/brotli` plugin becomes `:brotli`.
### Support removed for deprecated APIs
* The deprecated `HTTPX::Client` constant lookup has been removed (use `HTTPX::Session` instead).
* The deprecated `HTTPX.timeout({...})` function has been removed (use `HTTPX.with(timeout: {...})` instead).
* The deprecated `HTTPX.headers({...})` function has been removed (use `HTTPX.with(headers: {...})` instead).
* The deprecated `HTTPX.plugins(...)` function has been removed (use `HTTPX.plugin(...).plugin(...)...` instead).
* The deprecated `:transport_options` option, which was only valid for UNIX connections, has been removed (use `:addresses` instead).
* The deprecated `def_option(...)` function, previously used to define additional options in plugins, has been removed (use `def option_$new_option` instead).
* The deprecated `:loop_timeout` timeout option has been removed.
* `:stream` plugin: the deprecated `HTTPX::InstanceMethods::StreamResponse` has been removed (use `HTTPX::StreamResponse` instead).
* The deprecated usage of symbols to indicate HTTP verbs (i.e. `HTTPX.request(:get, ...)` or `HTTPX.build_request(:get, ...)`) is not supported anymore (use the upcase string always, i.e. `HTTPX.request("GET", ...)` or `HTTPX.build_request("GET", ...)`, instead).
* The deprecated `HTTPX::ErrorResponse#status` method has been removed (use `HTTPX::ErrorResponse#error` instead).
### dependencies
* `http-2-next` minimum supported version is 1.0.0.
* `:datadog` adapter only supports `ddtrace` gem 1.x or higher.
* `:faraday` adapter only supports `faraday` gem 1.x or higher.
## Improvements
* `circuit_breaker`: the drip rate of real requests during the "half-open" stage of a circuit will now reliably distribute real requests (as per the drip rate) over the `max_attempts`, before the circuit is closed.
## Bugfixes
* Tempfiles are now correctly identified as file inputs for multipart requests.
* fixed `proxy` plugin behaviour when loaded with the `follow_redirects` plugin and processing a 305 response (request needs to be retried on a different proxy).
## Chore
* `:grpc` plugin: connection won't buffer requests before the HTTP/2 handshake is completed, i.e. it works the same as plain `httpx` HTTP/2 connection establishment.
* if you are relying on this, you can keep the old behavior this way: `HTTPX.plugin(:grpc, http2_settings: { wait_for_handshake: false })`.
* bump `http-2-next` to 1.0.1, which fixes a bug where http/2 connection interprets MAX_CONCURRENT_STREAMS as request cap.
A function, `#peer_address`, was added to the response object, returning the IP (either a string or an `IPAddr` object) of the socket used to fetch the response.
Error responses will also expose an IP address via `#peer_address`, as long as a connection happened before the error.
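For instance:
```ruby
response = HTTPX.get("https://example.com")
response.peer_address #=> e.g. #<IPAddr: 93.184.216.34> (the peer which served the response)
# on error responses, #peer_address returns the peer IP if a connection
# was established before the error happened, and nil otherwise
```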
## Improvements
* A performance regression involving the new default timeouts has been fixed, which could cause significant overhead in "multiple requests in sequence" scenarios, and was clearly visible in benchmarks.
* this regression will still be seen in jruby due to a bug, the fix for which will be released in jruby 9.4.5.0.
* HTTP/1.1 connections are now set to handle as many requests as they can by default (instead of the past default of max 200, at which point they'd be recycled).
* tolerate the absence of `openssl` in the installed ruby, like `net-http` does.
* `on_connection_opened` and `on_connection_closed` will yield the `OpenSSL::SSL::SSLSocket` instance for `https` backed origins (instead of always the `Socket` instance).
## Bugfixes
* when using the `:native` resolver (default option), a default of 1 for ndots is set, for systems which do not set one.
* replaced usage of `Float::INFINITY` with `nil` for timeout defaults, as the former can't be used in IO wait functions.
* `faraday` adapter timeout setup now maps to `:read_timeout` and `:write_timeout` options from `httpx`.
* fixed HTTP/1.1 connection recycling on number of max requests exhausted.
* `response.json` will now work when "content-type" header is set to "application/hal+json".
## Chore
* when using the `:cookies` plugin, a warning message to install the `idnx` gem will only be emitted if the cookie domain is an IDN (this message was being shown all the time since the v1 release).
* (Re-)enabled default retries in DNS name queries; these had been disabled as a result of revamping timeouts, which resulted in queries only being sent once; that is very little for UDP-based traffic, and breaks when DNS rate-limiting software is in use. The query is retried just once, for now.
## Bugfixes
* reset timers when adding new intervals, as these may be added as a result of after-select connection handling, and must wait for the next tick cycle (before the patch, they were triggering too soon).
* fixed "on close" callback leak on connection reuse, which caused linear performance regression in benchmarks performing one request per connection.
* fixed hanging connection when an HTTP/1.1 request emitted a "connection: close" header but the server did not emit one (`httpx` now closes the connection).
* fixed recursive DNS cached lookups of entries which may have already expired, which created nil entries in the returned address list.
* the system DNS resolver is now able to retry on failure.
* when using `:follow_redirects` plugin, the "authorization" header will be removed when following redirect responses to a different origin.
## Bugfixes
* fixed `:stream` plugin not following redirect responses when used with the `:follow_redirects` plugin.
* fixed `:stream` plugin not doing content decoding when responses were e.g. gzip-compressed.
* fixed bug preventing usage of IPv6 loopback or link-local addresses in the request URL in systems with no IPv6 internet connectivity (the request was left hanging).
* protect all code which may initiate a new connection from abrupt errors (such as internet turned off), as it was done on the initial request call.
## Chore
* internal usage of `mutex_m` has been removed (`mutex_m` is going to be deprecated in ruby 3.3).
* pattern matching support for responses has been backported to ruby 2.7 as well.
## Bugfixes
* `stream` plugin: fix for `HTTPX::StreamResponse#each_line` not yielding the last line of the payload when not delimiter-terminated.
* `stream` plugin: fix `webmock` adapter integration when methods calls would happen in the `HTTPX::StreamResponse#each` block.
* `stream` plugin: fix `:follow_redirects` plugin integration which was caching the redirect response and using it for method calls inside the `HTTPX::StreamResponse#each` block.
* "103 early hints" responses will be ignored when processing the response (it was causing the response returned by sesssions to hold its headers, instead of the following 200 response, while keeping the 200 response body).
The `:ssrf_filter` plugin prevents server-side request forgery attacks, by blocking requests to the internal network. This is useful when the URLs used to perform requests aren’t under the developer control (such as when they are inserted via a web application form).
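A minimal sketch, assuming a blocked request to an internal address surfaces as an error response (the URL is illustrative):
```ruby
http = HTTPX.plugin(:ssrf_filter)
response = http.get("http://169.254.169.254/latest/meta-data/")
response.error #=> the underlying error; the request to the internal network address was blocked
```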
A new `:timeout` option, `:close_handshake_timeout`, is added, which monitors connection readiness when performing HTTP/2 connection termination handshake.
## Improvements
* Internal "eden connections" concept was removed, and connection objects are now kept-and-reused during the lifetime of a session, even when closed. This simplified connectio pool implementation and improved performance.
* requests on sessions with both the `:proxy` and `:retries` plugins enabled will now retry on proxy connection establishment errors.
## Bugfixes
* webmock adapter: mocked responses storing decoded payloads won't try to decode them again (fixes vcr/webmock integrations).
* webmock adapter: fix issue related with making real requests over webmock-enabled connection.
* only raise "unknown option" error when option is not supported, not anymore when error happens in the setup of a support option.
* usage of `HTTPX::Session#wrap` within a thread with other sessions using the `:persistent` plugin won't inadvertently terminate its open connections.
* terminate connections on `IOError` (`SocketError` does not cover them).
* terminate connections on HTTP/2 protocol and handshake errors, which happen during establishment or termination of an HTTP/2 connection (they were previously being kept around, although they'd be irrecoverable).
* `:oauth` plugin: fixed a check which prevented the OAuth metadata server integration path from being exercised.
* fix instantiation of the options headers object with the wrong headers class.
* `:retries` plugin: allow `:max_retries` set to 0 (allows for a soft disable of retries when using the faraday adapter).
## Bugfixes
* `:oauth` plugin: fix for default auth method being ignored when setting grant type and scope as options only.
* ensure happy eyeballs-initiated cloned connections also set session callbacks (caused issues when server would respond with a 421 response, an event requiring a valid internal callback).
* native resolver cleanly transitions from tcp to udp after truncated DNS query (causing issues on follow-up CNAME resolution).
* elapsing timeouts now guard against mutation of callbacks while looping (prevents skipping callbacks in situations where a previous one would remove itself from the collection).
## Chore
* datadog adapter: do not call `.lazy` on options (avoids deprecation warning, to be removed in ddtrace 2.0)
`http-2` v1.0.0 is replacing `http-2-next` as the HTTP/2 parser.
`http-2-next` was forked from `http-2` 5 years ago; its improvements have recently been merged back into `http-2`, so `http-2-next` will no longer be maintained.
## Improvements
Request-specific options (`:params`, `:form`, `:json` and `:xml`) are now kept separately by the request, which allows requests to share `HTTPX::Options` objects and reduces copying / allocations.
This means that `HTTPX::Options` will raise an error if you initialize an object with such keys; this should not happen, as the class is considered internal and you should not be using it directly.
## Fixes
* support for the `datadog` gem v2.0.0 in its adapter has been unblocked, now that the gem has been released.
* loading the `:cookies` plugin was making the `Session#build_request` private.
* Prevent `NoMethodError` in an edge case when the `:proxy` plugin is autoloaded via env vars and webmock adapter are used in tandem, and a real request fails.
* raise invalid uri error if passed request uri does not contain the host part (ex: `"https:/get"`)
The `:content_digest` plugin can be used to calculate the digest of request payloads and set it in the `"content-digest"` header; it can also validate the integrity of responses which declare the same `"content-digest"` header.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Content-Digest
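A minimal sketch (URL and payload are illustrative):
```ruby
require "httpx"

http = HTTPX.plugin(:content_digest)
# the digest of the request payload is calculated and set in the "content-digest" header;
# responses declaring "content-digest" are validated for integrity
response = http.post("https://example.com/upload", body: "payload")
```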
## Per-session connection pools
This architectural change moves away from per-thread shared connection pools to per-session (also thread-safe) connection pools. Unlike before, this enables connections from a session to be reused across threads, as well as limiting the number of connections that can be open to a given origin peer. This fixes long-standing issues, such as reusing connections under a fiber scheduler loop (such as the one from the `async` gem).
A new `:pool_options` option is introduced, which can be passed a hash with the following sub-options:
* `:max_connections_per_origin`: maximum number of connections a pool allows (unbounded by default, for backwards compatibility).
* `:pool_timeout`: the number of seconds a session will wait for a connection to be checked out (default: 5)
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools
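For example:
```ruby
HTTPX.with(pool_options: {
  max_connections_per_origin: 10, # at most 10 open connections per origin
  pool_timeout: 2,                # wait up to 2 seconds for a connection checkout
}).get("https://example.com")
```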
## Improvements
* `:aws_sigv4` plugin: improved digest calculation on compressed request bodies by buffering content to a tempfile.
* `HTTPX::Response#json` will parse payload from extended json MIME types (like `application/ld+json`, `application/hal+json`, ...).
## Bugfixes
* `:aws_sigv4` plugin: do not try to rewind a request body which yields chunks.
* fixed request encoding when `:json` param is passed, and the `oj` gem is used (by using the `:compat` flag).
* native resolver: on message truncation, bubble up tcp handshake errors as resolve errors.
* allow `HTTPX::Response#json` to accept extended JSON mime types (such as responses with `content-type: application/ld+json`)
## Chore
* default options are now fully frozen (in case anyone relies on overriding them).
### `:xml` plugin
XML encoding/decoding (via `:xml` request param, and `HTTPX::Response#xml`) is now available via the `:xml` plugin.
Using `HTTPX::Response#xml` without the plugin will issue a deprecation warning.
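For example:
```ruby
http = HTTPX.plugin(:xml)
http.get("https://example.com/feed.xml").xml # parsed into nokogiri nodes, no deprecation warning
```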
* only load the `datadog` integration when the `datadog` sdk is loaded (and not other gems that may define the `Datadog` module, like `dogstatsd`)
* do not trace if datadog integration is loaded but disabled
* distributed headers are now sent along (when the configuration is enabled, which it is by default)
* fix for handling multiple `GOAWAY` frames coming from the server (node.js servers seem to send multiple frames on connection timeout)
* fix regression for when a url is used with `httpx` which is not `http://` or `https://` (should raise `HTTPX::UnsupportedSchemaError`)
* worked around `IO.copy_stream` which was emitting incorrect bytes for HTTP/2 requests with bodies larger than the maximum supported frame size.
* multipart requests: make sure that a body declared as `Pathname` is opened for reading in binary mode.
* `webmock` integration: ensure that request events are emitted (for plugins and integrations relying on them, such as `datadog` and the OTel integration).
* native resolver: do not propagate successful name resolutions for connections which were already closed.
* native resolver: fixed name resolution stalling, in a multi-request to multi-origin scenario, when a resolution timeout would happen.
## Chore
* refactor of the happy eyeballs and connection coalescing logic to not rely on callbacks, and instead on instance variable management (makes code more straightforward to read).
* faraday: use default reason when none is matched by Net::HTTP::STATUS_CODES
* native resolver: keep sending DNS queries if the socket is available, to avoid busy loops on select
* native resolver fixes for Happy Eyeballs v2
* do not apply resolution delay if the IPv4 IP was not resolved via DNS
* ignore ALIAS if DNS response carries IP answers
* do not try to query for names already awaiting answer from the resolver
* make sure all types of errors are propagated to connections
* make sure next candidate is picked up if receiving NX_DOMAIN_NOT_FOUND error from resolver
* raise error happening before any request is flushed to respective connections (avoids loop on non-actionable selector termination).
* fix "NoMethodError: undefined method `after' for nil:NilClass", happening for requests flushed into persistent connections which errored, and were retried in a different connection before triggering the timeout callbacks from the previously-closed connection.
## Chore
* Refactor of timers to allow for explicit and more performant single timer interval cancellation.
* default log message restructured to include info about process, thread and caller.
* `webmock` adapter: reassign headers to signature after callbacks are called (these may change the headers before virtual send).
* do not close request (and its body) right after sending, instead only on response close
* prevents retries from failing under the `:retries` plugin
* fixes issue when using `faraday-multipart` request bodies
* retry request with HTTP/1 when receiving an HTTP/2 GOAWAY frame with `HTTP_1_1_REQUIRED` error code.
* fix wrong method call on HTTP/2 PING frame with unrecognized code.
* fix EOFError issues on connection termination for long running connections which may have already been terminated by peer and were wrongly trying to complete the HTTP/2 termination handshake.
* `:stream` plugin: response will now be partially buffered in order to i.e. inspect response status or headers on the response body without buffering the full response
* this fixes an issue in the `down` gem integration when used with the `:max_size` option.
* do not unnecessarily probe for connection liveness if no more requests are inflight, including failed ones.
* when using persistent connections, do not probe for liveness right after reconnecting after a keep alive timeout.
## Bugfixes
* `:persistent` plugin: do not exhaust retry attempts when probing for (and failing) connection liveness.
* since the introduction of per-session connection pools, multiple inactive connections to the same origin may be in the pool; when the peer server had terminated all of them at once, requests would fail before a new connection could be established.
* prevent retrying to connect the TCP socket object when an SSLSocket object is already in place and connecting.
The `:stream_bidi` plugin enables bidirectional streaming support (an HTTP/2 only feature!). It builds on top of the `:stream` plugin, and uses its block-based syntax to process incoming frames, while allowing the user to pipe more data to the request (from the same, or another thread/fiber).
### `:query` plugin
A new plugin adds support for the new HTTP `QUERY` verb. This functionality was shipped as a plugin, for explicit opt-in, as it's experimental (the RFC for the new verb is still in draft).
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Query
### `:response_cache` plugin filesystem based store
The `:response_cache` plugin supports setting the filesystem as the response cache store (instead of just storing them in memory, which is the default `:store`).
```ruby
# cache store in the filesystem, writes to the temporary directory from the OS
# (a sketch; the option name follows the wiki link below)
http = HTTPX.plugin(:response_cache, response_cache_store: :file_store)
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Response-Cache#:file_store
### `:close_on_fork` option
A new option `:close_on_fork` can be used to ensure that a session object which may have open connections will not leak them in case the process is forked (this can be the case of `:persistent` plugin enabled sessions which have had usage before the fork).
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools#Fork-Safety .
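A minimal sketch (assuming a ruby supporting the fork callbacks this option relies on):
```ruby
http = HTTPX.plugin(:persistent, close_on_fork: true)
http.get("https://example.com") # may leave an open connection behind

fork do
  # connections inherited from the parent were closed on fork, so none leak here
  http.get("https://example.com")
end
```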
### `:debug_redact` option
The `:debug_redact` option will, when enabled, replace parts of the debug logs (enabled via `:debug` and `:debug_level` options) which may contain sensitive information, with the `"[REDACTED]"` placeholder.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Debugging .
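For example:
```ruby
# debug logs go to stderr, with sensitive parts replaced by the "[REDACTED]" placeholder
HTTPX.with(debug: $stderr, debug_level: 1, debug_redact: true)
     .get("https://example.com")
```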
### `:max_connections` pool option
A new `:max_connections` pool option (settable under `:pool_options`) can be used to define the maximum number **overall** of connections for a pool ("in-transit" or "at-rest"); it complements, and when used supersedes, the already existing `:max_connections_per_origin`, which does the same per connection origin.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools .
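For example:
```ruby
# cap the pool at 100 connections overall, regardless of origin
HTTPX.with(pool_options: { max_connections: 100 }).get("https://example.com")
```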
### Subplugins
An enhancement to the plugins architecture, it allows plugins to define submodules ("subplugins") which are loaded if another plugin is in use, or is loaded afterwards.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Custom-Plugins#Subplugins .
## Improvements
* `:persistent` plugin: several improvements around reconnections on failure:
* reconnections will only happen for "connection broken" errors (and will discard reconnection on timeouts)
* reconnections won't exhaust retries
* `:response_cache` plugin: several improvements:
* return cached response if not stale, send conditional request otherwise (it was always doing the latter).
* consider immutable (i.e. `"Cache-Control: immutable"`) responses as never stale.
* `:datadog` adapter: decorate spans with more tags (header, kind, component, etc...)
* timers operations have been improved to use more efficient algorithms and reduce object creation.
## Bugfixes
* ensure that setting request timeouts happens before the request is buffered (the latter could trigger a state transition required by the former).
* `:response_cache` plugin: fix `"Vary"` header handling by supporting a new plugin option, `:supported_vary_headers`, which defines which headers are taken into account for cache key calculation.
* fixed query string encoding when an empty hash is passed to the `:query` param and the URL already contains a query string.
* `:callbacks` plugin: ensure the callbacks from a session are copied when a new session is derived from it (via a `.plugin` call, for example).
* `:callbacks` plugin: errors raised from hostname resolution should bubble up to user code.
* fixed connection coalescing selector monitoring in cases where the coalescable connection is cloned, while other branches were simplified.
* clear the connection write buffer in corner cases where the remaining bytes may be interpreted as GOAWAY handshake frame (and may cause unintended writes to connections already identified as broken).
* remove idle connections from the selector when an error happens before the state changes (this may happen if the thread is interrupted during name resolution).
## Chore
`httpx` makes extensive use of features introduced in ruby 3.4, such as `Module#set_temporary_name` for otherwise plugin-generated anonymous classes (improves debugging and issue reporting), or `String#append_as_bytes` for a small but non-negligible perf boost in buffer operations. It falls back to the previous behaviour when used with ruby 3.3 or lower.
Also, in preparation for the upcoming ruby 3.5 release, the dependency on the `cgi` gem (which will be removed from the stdlib) was dropped.
* connection errors on persistent connections which have just been checked out from the pool no longer count towards retries bookkeeping; the assumption is that, if a connection was checked into the pool in an open state, chances are it's corrupt by the time it's eventually checked out. This was more noticeable on `:persistent` plugin connections, which by design retry only once, and would thus often fail right after checkout without a legitimate request attempt.
* native resolver: fix issue with process interrupts during DNS request, which caused a busy loop when closing the selector.