Keeping them around was resulting in busy loops on timer events (i.e. retry after), making them unreliable, inaccurate and CPU-draining. They're now kept out whenever they're inactive.
An issue was observed when a stream was closed from our side, which threw off the request in-flight count on the connection. This was fixed by not reacting to :stream_closed events if the request had been previously deleted during interest calculation.
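Below is a toy illustration of that guard (class and method names are assumptions for the example, not the actual httpx internals): the stream-closed handler only adjusts the in-flight count for requests the connection still tracks, so a duplicate event for an already-deleted request becomes a no-op.

```ruby
# toy connection tracking an in-flight request count
class ToyConnection
  attr_reader :inflight

  def initialize
    @requests = []
    @inflight = 0
  end

  def send_request(request)
    @requests << request
    @inflight += 1
  end

  # only react to a closed stream for requests this connection still tracks;
  # a request deleted earlier is ignored instead of being counted down again
  def on_stream_closed(request)
    return unless @requests.delete(request)

    @inflight -= 1
  end
end

connection = ToyConnection.new
request = Object.new
connection.send_request(request)
connection.on_stream_closed(request)
connection.on_stream_closed(request) # duplicate event is now a no-op
puts connection.inflight             # => 0, the count no longer drifts
```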
A quirk was found whereby a connection which failed while connecting (such as in the badssl test) was properly unregistered from the pool, but was nevertheless kept among the selector's selectables: because this unregistration happened during interest calculation, with the variable substitution being performed right afterwards, the pool and the selector were left out of sync, causing all sorts of miscalculations around timers later on.
The HTTPX::Timers class mimics the same top-level API as its predecessors, but simplifies the implementation. Adding a timer re-sorts all timers, while lookups remain of roughly the same complexity. The key difference is that callbacks are now aggregated by interval, i.e. different requests setting the same timeout will reuse the same timer. This is a simpler design than Timers::Group, which stores timers in a binary search tree; the latter performs well in any environment, whereas the former is tailored to httpx's use case, where most of the time no timers are set, and when they are, the same timer is reused across requests, since they usually share the same set of options (and therefore timeouts).
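As a rough sketch of the interval-aggregation idea (the class below is illustrative, not the actual HTTPX::Timers implementation): callbacks registered for the same interval share a single entry, and entries are kept sorted so the nearest timeout is always first.

```ruby
# toy timer registry aggregating callbacks by interval
class IntervalTimers
  Interval = Struct.new(:interval, :callbacks) do
    def elapse(seconds)
      self.interval -= seconds
    end

    def elapsed?
      interval <= 0
    end

    def fire
      callbacks.each(&:call)
    end
  end

  def initialize
    @intervals = []
  end

  # registers +callback+ to fire after +interval+ seconds; callbacks sharing
  # the same interval reuse the same entry, and entries stay sorted
  def after(interval, &callback)
    entry = @intervals.find { |int| int.interval == interval }
    unless entry
      entry = Interval.new(interval, [])
      @intervals << entry
      @intervals.sort_by!(&:interval)
    end
    entry.callbacks << callback
  end

  # time until the nearest timer, or nil when no timers are set
  def wait_interval
    @intervals.first&.interval
  end

  # advances all timers by +seconds+, firing and discarding the elapsed ones
  def fire(seconds)
    @intervals.each { |int| int.elapse(seconds) }
    elapsed, @intervals = @intervals.partition(&:elapsed?)
    elapsed.each(&:fire)
  end
end

timers = IntervalTimers.new
timers.after(5) { puts "request 1 timed out" }
timers.after(5) { puts "request 2 timed out" } # reuses the same 5s entry
timers.fire(5)                                 # fires both callbacks
```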
* `Response#error`, which, coupled with `ErrorResponse#error`, allows for `if response.error`-style conditionals;
* `Response#raise_for_status` now returns the response when no error is raised (for method chaining); see the usage sketch below.
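A possible usage sketch of both additions (the URL is a placeholder; this assumes a plain `HTTPX.get` call):

```ruby
require "httpx"

response = HTTPX.get("https://example.com")

# whether this is a regular response or an error response (e.g. a connection
# failure), `error` gives a single way to check for failure
if (error = response.error)
  warn "request failed: #{error}"
end

# since raise_for_status now returns the response, calls can be chained
body = HTTPX.get("https://example.com").raise_for_status.body.to_s
puts body.bytesize
```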
Closes #153
A subtle bug surfaced when issuing multiple individual requests on the same persistent session: the connection was being removed from the watchable connections after each request, but kept in the pool; on the next request, the same session callbacks would be set again, and this would go on until connections got exhausted, at which point all of these accumulated callbacks would have to be called.
Fixed by adding a new callbacks interface, #only, which discards existing callbacks of the given type, thereby ensuring there's only one of the kind.
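A minimal sketch of that idea (illustrative only, not the actual httpx callbacks module): `#only` clears previously registered callbacks of the given type before adding the new one, so re-registering on every request cannot accumulate duplicates.

```ruby
# toy callbacks registry; names are assumptions for the example
module Callbacks
  def on(type, &action)
    callbacks(type) << action
    self
  end

  # discards existing callbacks of +type+ before registering the new one,
  # guaranteeing there is at most one callback of that kind
  def only(type, &action)
    callbacks(type).clear
    on(type, &action)
  end

  def emit(type, *args)
    callbacks(type).each { |callback| callback.call(*args) }
  end

  private

  def callbacks(type = nil)
    @callbacks ||= Hash.new { |hash, key| hash[key] = [] }
    type ? @callbacks[type] : @callbacks
  end
end

class FakeConnection
  include Callbacks
end

connection = FakeConnection.new
3.times { connection.only(:open) { puts "connection open" } }
connection.emit(:open) # prints "connection open" once, not three times
```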
There was a long-standing buggy workaround whereby, in stream mode, when there was no response yet to query, a synchronous request would be fired. This would break under event streams, so we had to document it as "make sure that...".
This fixes it by implementing a general session API convention which separates the step of sending the requests from the step of waiting for their responses. And, given that the request knows when its response is available, we can actually "tick until response".
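As a toy sketch of that convention (every name here is an assumption for illustration, not the actual httpx session internals): sending only enqueues the request, while waiting ticks a loop until that particular request has its response set.

```ruby
# toy model of "send now, wait later"
FakeRequest = Struct.new(:path) do
  attr_accessor :response
end

class ToySession
  def initialize
    @pending = []
  end

  # step 1: hand the request over; do not block waiting for the response
  def send_request(request)
    @pending << request
    request
  end

  # step 2: tick the loop until this particular request has its response
  def wait_for(request)
    tick until request.response
    request.response
  end

  private

  # one iteration of the event loop; here it simply "completes" one request
  def tick
    request = @pending.shift or return
    request.response = "200 OK for #{request.path}"
  end
end

session = ToySession.new
req1 = session.send_request(FakeRequest.new("/a"))
req2 = session.send_request(FakeRequest.new("/b"))

# both requests are already in flight before we block on either response
puts session.wait_for(req1)
puts session.wait_for(req2)
```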
This convention might be used in the future to refactor the way responses are handled, which currently buffer the full payload by default instead of reading from the connection at will.