In order to expose other auth schemes in the proxy plugin, the basic, digest and
ntlm modules were extracted from the plugins, which are now left with request
management only. An extra parameter, `:scheme`, can now be passed (it defaults
to "basic" for HTTP proxies and "socks5" for SOCKS5 proxies; it can also be
"digest" or "ntlm", although those haven't been tested yet).
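A hypothetical usage sketch: only the new `:scheme` key is established above; the remaining proxy option names and the `with(proxy: ...)` call shape are assumptions for illustration:

```ruby
require "httpx"

# sketch only: :scheme is the new parameter described above; the other
# option names are assumed here, not confirmed API
session = HTTPX.plugin(:proxy).with(proxy: {
  uri: "http://proxy.example.com:3128",
  scheme: "digest",      # overrides the default "basic" for HTTP proxies
  username: "proxyuser",
  password: "proxypass"
})

response = session.get("https://example.com")
```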
For multi-backed resolvers, resolution of cached, local, or IP-literal names is
attempted before the name is sent to the resolver itself; this way, those
resolves get propagated to the proper resolver by IP family, instead of the
previous mess.
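A simplified illustration of the IP-literal case (not httpx internals; the resolver names are placeholders): literals skip DNS entirely and are routed by IP family up front.

```ruby
require "ipaddr"

# detect IP literals before any DNS work, so they can be handed straight
# to the resolver matching their IP family
def early_resolve(name)
  ip = IPAddr.new(name) rescue nil
  return nil unless ip

  ip.ipv6? ? :ipv6_resolver : :ipv4_resolver
end

early_resolve("127.0.0.1")   # => :ipv4_resolver
early_resolve("::1")         # => :ipv6_resolver
early_resolve("example.com") # => nil, goes through DNS resolution
```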
The system resolver doesn't do these shenanigans (it trusts getaddrinfo).
The Ruby `resolv` library does everything in Ruby, and sequentially (first IPv4,
then IPv6 resolution). We already have the native resolver for that, and
getaddrinfo should be considered the ideal way to use DNS (it may potentially
become the default resolver in the future).
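For reference, Ruby already exposes getaddrinfo via the `Addrinfo` class in the `socket` stdlib, which resolves both address families in a single call and delegates ordering policy to the OS:

```ruby
require "socket"

# one getaddrinfo call returns both IPv4 and IPv6 answers, already
# ordered by the operating system's address selection policy
Addrinfo.getaddrinfo("example.com", 443, nil, :STREAM).each do |ai|
  puts "#{ai.ipv6? ? "IPv6" : "IPv4"} address: #{ai.ip_address}"
end
```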
Two resolvers (IPv6/IPv4) are kept side by side in the pool; all names are sent
to both, and answers are read from both, in the same pool. IPv4 resolves are
subject to a 50ms delay (as per RFC 8305) before they're used for connecting.
IPv6 addresses take precedence: if they arrive before the delay elapses, they
are used immediately. If they arrive after the delay, they do not interrupt the
in-progress connection, but they'll be next in line in case the connection
handshake fails.
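A minimal sketch of that delay rule, assuming hypothetical per-family answer queues (an illustration of the logic, not httpx's implementation):

```ruby
RESOLUTION_DELAY = 0.05 # 50ms, per RFC 8305

# each resolver pushes its answers onto a family-specific Thread::Queue;
# IPv6 wins if it answers within the delay window, otherwise IPv4 connects
# first and late IPv6 answers remain queued as fallback
def pick_first_address(ipv4_answers, ipv6_answers)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + RESOLUTION_DELAY

  until Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
    return ipv6_answers.pop unless ipv6_answers.empty? # IPv6 takes precedence
    sleep(0.005)
  end

  # delay elapsed: connect over IPv4 (blocks until an answer arrives);
  # IPv6 answers arriving later stay queued as next-in-line if the
  # IPv4 handshake fails
  ipv4_answers.pop
end
```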
Two resolvers are kept here as well, but the underlying Connection is shared,
so name resolution requests are sent over the same HTTP/2 connection in bulk.
The resolution delay logic described above also applies.
Resolving is currently handled via the `resolv` lib. This happens
synchronously, though, so we're not there yet.
Keeping them around resulted in busy loops on timer events (e.g. retry-after),
making them unreliable, inaccurate and CPU-draining. They're now kept out
whenever they're inactive.
An issue was observed when a stream was closed from our side: the request
in-flight count on the connection was miscalculated. This was fixed by not
reacting to :stream_closed events if the request had been previously deleted.
A quirk was found during interest calculation, whereby a connection which
failed while connecting (such as in the badssl test) was properly unregistered
from the pool but kept in the selectables selector pool. This happened because
the operation ran during the interest calculation loop, with the variable
substitution being performed right afterwards, leaving the pool and selector
out of sync and causing all sorts of miscalculations around timers later on.
The HTTPX::Timers class mimics the same top-level API as its predecessors, but
simplifies the implementation. Adding a timer re-sorts all timers, while
lookups remain roughly the same complexity. The key difference is that
callbacks are now aggregated by interval, i.e. different requests setting the
same timeout will reuse the same timer. This is a simpler design than
Timers::Group, which stores timers in a binary search tree; the latter performs
well in any environment, whereas the former is tailored to httpx's use case,
where most of the time no timers are set, and when they are, the same timer is
reused by all requests because they usually share the same set of options (and
therefore timeouts).
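A condensed sketch of the interval-aggregation idea (method and variable names are assumptions for illustration, not the verbatim `HTTPX::Timers` source):

```ruby
class Timers
  Interval = Struct.new(:interval, :callbacks)

  def initialize
    @intervals = []
  end

  def after(interval_in_secs, &callback)
    # aggregate by interval: a request setting an already-known timeout
    # reuses the existing entry instead of allocating a new timer
    if (entry = @intervals.find { |i| i.interval == interval_in_secs })
      entry.callbacks << callback
    else
      @intervals << Interval.new(interval_in_secs, [callback])
      @intervals.sort_by!(&:interval) # adding a timer re-sorts all timers
    end
  end

  def wait_interval
    @intervals.first&.interval
  end

  def fire(elapsed)
    @intervals.each do |entry|
      entry.interval -= elapsed
      entry.callbacks.each(&:call) if entry.interval <= 0
    end
    @intervals.reject! { |entry| entry.interval <= 0 }
  end
end
```

Under this shape, N requests sharing the same timeout register N callbacks on a single interval entry rather than N separate timers, which is what keeps the common case cheap.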
* `Response#error`, which, coupled with `ErrorResponse#error`, allows for
  `if response.error`-style conditionals;
* `Response#raise_for_status` now returns the response when no error is
  raised (for method chaining).
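An illustration of how the two helpers combine (the URL is a placeholder):

```ruby
response = HTTPX.get("https://example.com")

if response.error
  # #error is nil unless something went wrong, so the same conditional
  # works whether this is a Response or an ErrorResponse
  warn "request failed: #{response.error.message}"
end

# raise_for_status returns the response itself on success, enabling chaining:
body = HTTPX.get("https://example.com").raise_for_status.body.to_s
```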
Closes #153