the previous patch only considered the current in-flight request and the pending requests; in a scenario where multiple requests had been buffered (and were therefore, in theory, in flight), this information would be missed, leading to a busy loop
this caused a very subtle bug: in a multi-fiber scenario, when the last active stream is closed and the connection is checked in, only to be checked out later and pinged for liveness, it may get caught in the termination flow of a session, which tries to deactivate connections from the selector; given that only in-flight requests were taken into account, the connection would get deactivated and checked in, which would mess with the fiber which was actively using it
this is fixed by considering the ping state in the decision on whether the connection should be deactivated: when there is an inflight ping, keep the connection open and in the selector
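a minimal sketch of the idea, with ivar and method names that are illustrative rather than httpx's actual internals:

```ruby
# hypothetical sketch: whether a connection may be deactivated when a session terminates
class Connection
  def initialize
    @pinging = false # set while a liveness ping is in flight
    @inflight = 0    # requests currently being written/read
    @pending = []    # requests buffered but not yet written
  end

  def deactivatable?
    # an inflight ping keeps the connection open and registered in the selector,
    # so a concurrently terminating session won't deactivate and check it in
    return false if @pinging

    @inflight.zero? && @pending.empty?
  end
end
```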
this is used downstream to inform the selector (via the IO interest calculation) whether the connection should wait on events for the current fiber (i.e. the connection is being used for requests of the current fiber, or DNS requests for such requests are still pending)
do not check the buffer before delegating to the parsers, which already take it into account
for http2, use the http2 connection send buffer (instead of the connection buffer) to figure out whether it's waiting on a WINDOW_UPDATE, which is more correct and efficient
on http1, the logic is the same, but the code is simplified, as the checks become the same regardless of whether request or @requests.first is picked up
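a minimal sketch of what such an interest calculation could look like; the names (@write_buffer, @parser, waiting_on_window_update?) are hypothetical stand-ins for the real internals:

```ruby
# illustrative only: derive selector interest from what the connection is
# actually waiting on, using the http2 parser's send buffer (not the
# connection buffer) to detect a pending WINDOW_UPDATE
def interests
  return :w unless @write_buffer.empty? # bytes still to be flushed to the socket

  if @parser.respond_to?(:waiting_on_window_update?) && @parser.waiting_on_window_update?
    # frames are queued in the http2 send buffer because the flow-control
    # window is exhausted; wait for the peer's WINDOW_UPDATE as well as reads
    return :rw
  end

  :r
end
```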
the decision to use IO.select was based on the number of selectables; however, only in select_many were the actual selectables prefiltered before being passed to it, which in some cases meant that only one descriptor would be passed to IO.select (instead of using IO#wait_readable or the like)
in fiber-scheduler frameworks, this meant needlessly going down a potentially less efficient route, such as in async, where IO#wait_readable and the like are serviced via epoll or kevent, whereas IO.select calls are executed in a background thread
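a simplified sketch of the fixed strategy, assuming a hypothetical #wants_io? predicate standing in for the real prefiltering:

```ruby
# prefilter first, then decide how to wait
require "io/wait"

def wait_for_events(selectables, timeout)
  ready = selectables.select(&:wants_io?)

  if ready.size == 1
    # a single descriptor can use IO#wait_readable, which fiber schedulers
    # such as async service through epoll/kqueue instead of a background thread
    ready.first.to_io.wait_readable(timeout)
  else
    IO.select(ready.map(&:to_io), nil, nil, timeout)
  end
end
```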
in a normal situation this would not happen, as events on a resolver would be dealt with one at a time; but in a fiber-scheduler environment like async, the first query could be buffered on the 1st fiber switch, the second could then be enqueued on the 2nd, the buffer flushed and the fiber switched on read, and a third query would then enter a corrupt state: because the buffer had been flushed on the 2nd fiber, the 3rd query would be written before the second response was received, messing up the pending-query bookkeeping, blocking the fiber (as the DNS query never returns) and exposing that as an empty interest registration from some other connection on the selector
this is fixed by using @timer as a proxy for knowing that a given DNS query has been flushed but is still waiting on a response (or an eventual retry)
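a rough sketch of the resulting interest calculation on the resolver side, with illustrative ivar names:

```ruby
# a resolver registers write interest while it still has queries to flush; once
# flushed, the retry/timeout timer doubles as a marker that a query is in flight,
# so the resolver keeps read interest instead of producing an empty registration
def resolver_interests
  return :w unless @write_buffer.empty? # query not flushed yet

  @timer ? :r : nil # timer set => flushed and awaiting a response (or a retry)
end
```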
the routine was using #fetch_response, which may return nil, and wasn't handling that case, potentially returning nil instead of a response/error response object. since, depending on the plugins, #fetch_response may reroute requests, the loop is kept going in case there are selectables to process again as a result of it
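a rough sketch of the loop shape (not httpx's literal code), assuming a hypothetical selector.run_once call:

```ruby
# fetch_response may return nil, e.g. when a plugin reroutes the request, so the
# loop keeps processing IO events until an actual response (or error response)
# shows up, instead of surfacing nil to the caller
response = nil

until response
  selector.run_once # process ready selectables (hypothetical call)

  response = fetch_response(request)
  # nil here means the request was rerouted or is still in progress; new
  # selectables may have been registered, so keep looping
end

response
```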
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration; in order to protect against this, the proxy plugin now rescues proxy connection errors (socket/timeout errors happening until the tunnel is established) and allows them to be retried, while ignoring other proxy errors; meanwhile, the error naming was simplified, and there's now an HTTPX::ProxyError replacing HTTPX::HTTPProxyError (which is a breaking change).
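a usage sketch of the combination; the plugin and option names (with_proxy, max_retries) are written from memory of httpx's documentation and should be taken as indicative rather than authoritative:

```ruby
require "httpx"

http = HTTPX
       .plugin(:retries, max_retries: 3)
       .plugin(:proxy)
       .with_proxy(uri: "http://proxy.example.com:3128")

begin
  response = http.get("https://example.com")
  response.raise_for_status
rescue HTTPX::ProxyError => e
  # non-retriable proxy failures surface here; socket/timeout errors raised
  # while the tunnel is being established are retried by the retries plugin
  warn "proxy failure: #{e.message}"
end
```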
when a proxied ssl connection was lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. in order to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new proxyssl connection on reconnect
while this type of error is avoided when doing HEv2, the IPs remain in the cache; this means that, once the same host is reached again, the IPs are loaded onto the same socket, and if the issue is IPv6 connectivity, it'll break outside of the HEv2 flow. this error is now protected inside the connect block, so that other IPs in the list can be tried after; the failing IP is then evicted from the cache.
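an illustrative sketch of the connect protection, with a hypothetical cache API:

```ruby
# try each resolved IP in turn; an IP that fails to connect (e.g. broken IPv6
# connectivity) is evicted from the resolver cache so later connections to the
# same host don't keep loading it outside of the HEv2 flow
require "socket"

def connect_to_any(host, port, ips, cache)
  last_error = nil

  ips.each do |ip|
    return Socket.tcp(ip.to_s, port, connect_timeout: 5)
  rescue SystemCallError => e
    cache.evict(host, ip) # hypothetical cache API
    last_error = e
  end

  raise last_error
end
```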
the HEv2-related regression test is disabled in CI, as it's currently not reliable in Gitlab CI, which allows resolving the IPv6 address but does not allow connecting to it
Options#merge works by duping-then-filling ivars, but because not all of them were initialized on object creation, each merge could add more object shapes for the same class, which defeats one of the most recent ruby optimizations
this was fixed by caching all possible option names at the class level, and using that as the reference in the initialize method to nilify all unreferenced options
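a minimal illustration of the shape concern (not httpx's actual Options code):

```ruby
# assigning different ivar sets across instances of the same class creates new
# object shapes; defining every known option ivar up front keeps a single shape,
# so the dup-then-fill merge stays on ruby's fast path
class Options
  OPTION_NAMES = %i[timeout headers ssl origin].freeze # cached at the class level

  def initialize(opts = {})
    # nilify every unreferenced option so all instances share one ivar layout
    OPTION_NAMES.each { |name| instance_variable_set(:"@#{name}", opts[name]) }
  end

  def merge(opts)
    dup.tap do |merged|
      opts.each do |name, value|
        merged.instance_variable_set(:"@#{name}", value) if OPTION_NAMES.include?(name)
      end
    end
  end
end

defaults = Options.new(timeout: 5)
merged   = Options.new.merge(headers: { "accept" => "*/*" })
# both objects end up with the same ivar layout, regardless of which options were given
```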