When the selector emits a timeout error, the full list of connections in the
pool is traversed. This is in itself not very performant, but the bigger
problem is that the accounting is also done for connections which weren't
selected, such as the inactive ones. So we skip it for them for now.
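A minimal sketch of the skip (names are illustrative, not httpx's actual internals):

```ruby
# on a selector timeout, only account the error against connections which
# could actually have been selected; inactive ones are skipped
def handle_timeout_error(error)
  @connections.each do |connection|
    next if connection.state == :inactive # wasn't selectable, skip accounting

    connection.handle_timeout_error(error)
  end
end
```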
The #next_proxy function was relying on the existence of a proxy object.
However, in the case of the default proxy plugin, that's not the case, as
the proxy object is only created later.
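A sketch of the needed guard, assuming the proxy object lives in a `@proxy` ivar (illustrative naming):

```ruby
def next_proxy
  # with the default proxy plugin, the proxy object is only built later,
  # so it may legitimately not exist yet at this point
  return unless @proxy

  @proxy.uri
end
```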
"host" header is considered an invalid HTTP/2 header, and I think that,
although we log it, the parser does not send it. However, this is
equivalent to a silent fail. We'll be then copying cURL's behaviour,
i.e. use the user-set "host" as ":authority", but we'll be logging this
behaviour, in case it changes in the future.
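A sketch of the fallback, assuming a mutable headers object (illustrative; the real header plumbing differs):

```ruby
# copy cURL: promote the user-set "host" header to the ":authority"
# pseudo-header instead of letting the parser silently drop it, and log it
if (host = headers.delete("host"))
  log { %(using user-set "host" (#{host}) as ":authority") }
  headers[":authority"] = host
end
```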
Fixes #177
Pipeline disabling may happen in the `handle_error` phase outside of the
main consumption loop, in which case the `throw` call is out of context and
needs to be guarded against. We're already doing this in the handler, so
I'm just quick-fixing it for now.
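In plain Ruby terms, the guard can lean on the fact that a `throw` without a matching `catch` raises `UncaughtThrowError` (a sketch; the `:called` tag is illustrative):

```ruby
def disable_pipelining
  @pipelining_disabled = true
  throw(:called) # unwinds back to the catch(:called) in the consumption loop
rescue UncaughtThrowError
  # handle_error runs outside the loop, so there's nothing to unwind
end
```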
Fixes #176
The previous implementation of the webmock plugin bypassed the connection
layer, which made it ignore key plugins such as the retries plugin. The
plugin was redone so that it hooks in at the connection level when piping
requests.
A small difficulty was how to handle the connection initialization state
when needing to unmock, as name resolution triggers before requests are
piped. A hack with a #once callback was implemented.
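A sketch of the #once idea, assuming a simple per-event callbacks registry (illustrative, not the plugin's actual code):

```ruby
# run a callback for an event exactly once, then drop it; used here to
# unmock the connection right after name resolution fires
def once(event, &callback)
  wrapped = proc do |*args|
    callbacks(event).delete(wrapped)
    callback.call(*args)
  end
  callbacks(event) << wrapped
end
```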
Fixes #170
mime-types uses filenames, which is a terrible and inaccurate strategy for
inferring MIME types (example: "a.mp4" can be "application/mp4" or
"audio/mp4" before it's "video/mp4").
Added support for `marcel` and `filemagic`, which both support a
magic-bytes detection strategy.
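For instance, with `marcel`, detection is driven by the file's content (a minimal usage sketch):

```ruby
require "marcel"

# magic-bytes detection: the result comes from what the file contains,
# not from what it's called
File.open("a.mp4", "rb") do |file|
  Marcel::MimeType.for(file) # => "video/mp4" for an actual video file
end
```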
Fixes #171
An error was observed while looking at the webmock integration, where
requests formed via the multipart plugin were returning an empty string
as body. The issue was caused by an optimization in the multipart encoder,
which reuses the same buffer when reading chunks. Unfortunately, such
chunks cannot be yielded the same way via IO.copy_stream, as the same (by
then cleared) buffer will be used to generate the eager-loaded body chunks.
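The pitfall can be reproduced with plain Ruby (a minimal repro, not the plugin's code):

```ruby
require "stringio"

io = StringIO.new("foobarbaz")
buffer = "".b
chunks = []
# every element aliases the same buffer object, which the final (EOF) read clears
chunks << buffer while io.read(3, buffer)
chunks # => ["", "", ""]

# yielding a copy per chunk avoids the aliasing
io.rewind
chunks = []
chunks << buffer.dup while io.read(3, buffer)
chunks # => ["foo", "bar", "baz"]
```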
Timeout calculation may trigger errors which lead to connection
unregistering, such as total timeout errors. In such a case, we can end
up in a state where #select gets called with no timeout and no
selectable connections.
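A sketch of the guard for that state (illustrative names):

```ruby
# calculating the next timeout may raise total timeout errors which
# unregister connections; don't call select with nothing left to wait for
def next_tick
  timeout = next_timeout
  return if timeout.nil? && @selectables.empty?

  @selector.select(timeout)
end
```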
https://github.com/HoneyryderChuck/httpx/issues/3
Keeping inactive connections around in the selector was resulting in some
busy loops on timer events (i.e. retry after), making them unreliable,
inaccurate and CPU-draining. They're now kept out whenever they're inactive.
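A sketch of the deregistration, assuming a selector with a register/deregister API (illustrative):

```ruby
# once a connection goes inactive, take it out of the selector, so timer
# events (i.e. retry after) don't busy-loop over it
def deactivate(connection)
  connection.state = :inactive
  @selector.deregister(connection)
end
```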
An issue was observed when a stream was closed from our side, which threw
off the request in-flight count on the connection. This was fixed by not
reacting to :stream_closed events if the request has been previously deleted.
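A sketch of the guard (illustrative names):

```ruby
# ignore :stream_closed for a request we already deleted when closing the
# stream from our side, keeping the in-flight count accurate
def on_stream_closed(request)
  return unless @requests.include?(request) # already deleted, already accounted

  @requests.delete(request)
  @inflight -= 1
end
```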
A quirk was found whereby a connection which failed while connecting
(such as in the badssl test) was properly unregistered from the pool, but
was kept in the selectables selector pool: the unregistering happened
during the interest calculation pass, with the var substitution being
performed right afterwards, leaving the pool and selector out of sync and
causing all sorts of miscalculations around timers later on.
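One way to avoid the drift is to defer deregistration until after the pass (a sketch, not the actual fix):

```ruby
# compute interests over a stable snapshot, deregistering failed connections
# only after the loop, so pool and selector can't get out of sync mid-pass
failed = []
interests = @selectables.filter_map do |connection|
  connection.interests
rescue StandardError
  failed << connection
  nil
end
failed.each { |connection| @selector.deregister(connection) }
```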