although a connection might correctly emit an error response, the
returned responses are still defined by the fetch_response loop in the
session. When the pool is actually empty, this had the side effect of
leaving error responses behind and exiting with just the first one.
This is fixed by popping all available responses in such cases.
in some cases where the client sends a request with a large body
(e.g. file uploads) and the server can't consume it (because of failed
authorization, or a wrong endpoint), the server stops processing the
request altogether and sends an error response immediately, in which
case the client should pivot and read that error response. Not doing
so was causing the Errno::EPIPE error. The mitigation is therefore to
rescue the error and mark the consumption loop to read the response
immediately.
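A minimal sketch of the mitigation, assuming a simplified write loop
(the method and return values below are illustrative, not the actual
httpx internals):

    def send_body(socket, body_chunks)
      body_chunks.each do |chunk|
        socket.write(chunk)
      end
      :read # body fully written, proceed to reading the response
    rescue Errno::EPIPE
      # the server closed its read end after emitting an early error
      # response (e.g. 401 or 404); abort the upload and read it immediately
      :read
    end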
if a response does not advertise its body length, the server closes
the connection when there's no more data to read. The HTTP/1.1 parser
should therefore interpret these conditions accordingly and emit the
response when the connection is closed.
Closes #114
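A rough sketch of the delimiting rule described above, ignoring chunked
encoding (the method below is illustrative, not the parser's actual
code):

    def read_body(socket, headers)
      if (length = headers["content-length"])
        socket.read(length.to_i)
      else
        # no advertised length: the body only ends when the server closes
        # the connection, so EOF is what completes and emits the response
        socket.read
      end
    end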
two bugs were found. first, only file bodies would be rewound, whereas
other rewindable bodies (e.g. StringIO and the like) were ignored.
second, part_index needed to be reset to 0 so that the parts would be
flushed sequentially (otherwise the second request's body was always
empty).
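A sketch of the corrected behaviour; apart from part_index, which is
taken from the entry, all names below are illustrative:

    class MultipartBody
      def initialize(parts)
        @parts = parts
        @part_index = 0
      end

      def rewind
        @parts.each do |part|
          # rewind anything rewindable (File, StringIO, Tempfile, ...), not just files
          part.rewind if part.respond_to?(:rewind)
        end
        # restart part accounting so a retried request flushes all parts again
        @part_index = 0
      end
    end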
a bug was found where, in certain cases, a server responds with an
error before the request has fully buffered the body. Under retries the
request is reset; however, the HTTP/2 connection handler kept the last
chunk around, which it would flush before writing the retried request's
body, resulting in byte-accounting issues. Therefore, request state is
now cleaned up before the response is yielded.
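A sketch of the cleanup, assuming the handler keeps the buffered chunk
in a per-request map (all names below are assumptions, not the
handler's real internals):

    class StreamHandler
      def initialize
        @drains = {} # request => last body chunk buffered for flushing
      end

      def buffer_chunk(request, chunk)
        @drains[request] = chunk
      end

      def on_response(request, response)
        # discard the stale chunk before yielding, so a retried request
        # re-buffers its body from scratch and the byte accounting stays intact
        @drains.delete(request)
        yield request, response
      end
    end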
HTTP/2 stream framing errors were being yielded directly into the
connection. This had the issue of not closing the request, thereby
causing an infinite loop when closing the connection. This seemed to be
the issue observed in CI.
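A sketch of the corrected error routing (all names below are
assumptions, not httpx's actual internals):

    ErrorResponse = Struct.new(:request, :error)

    class StreamRegistry
      def initialize
        @streams = {} # stream id => request owning that stream
      end

      def register(stream_id, request)
        @streams[stream_id] = request
      end

      def on_frame_error(stream_id, error)
        # close the affected request with an error response instead of
        # yielding the error into the connection and leaving the request open
        request = @streams.delete(stream_id)
        request&.close(ErrorResponse.new(request, error))
      end
    end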
to 1
A check was introduced limiting the number of requests sent at once.
However, the connection header was set to "close" as well, because the
accounting involved the number of concurrent connections allowed. This
is now fixed by doing the accounting separately.
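An illustrative separation of the two concerns (all names below are
assumptions): the per-batch cap no longer implies closing the
connection; "Connection: close" is only set once the connection's own
request budget runs out.

    class RequestBatcher
      def initialize(max_concurrent_requests:, max_requests:)
        @max_concurrent_requests = max_concurrent_requests # cap per write pass
        @max_requests = max_requests                       # cap per connection lifetime
        @sent = 0
      end

      # pending: array of request-like objects responding to #headers (a Hash)
      def next_batch(pending)
        batch = pending.shift(@max_concurrent_requests)
        @sent += batch.size
        if !batch.empty? && @sent >= @max_requests
          # only now is the connection actually exhausted
          batch.last.headers["connection"] = "close"
        end
        batch
      end
    end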
authentication plugin
TIL that S3 does not speak HTTP/2 (CloudFront does). Also, AWS sigv4
verification breaks with pipelined requests, so we have to send them
one at a time.
GCP does provide HTTP/2 support, so let's test there as well.
It's expected that this endpoint will be used in most cases for file
uploads, so both plugins will be important to improve throughput and
auth-fail-fast scenarios.
this allows the aws-sdk plugin to pass a wrapped Aws::Credentials
object downstream, which provides the username/password variables. This
is important, as some of the strategies, such as the web identity
token, also revalidate these parameters.
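A minimal sketch of the idea (class and method names below are
assumptions, not the plugin's actual code): the SDK credential provider
is wrapped so the signer keeps reading username/password-style values,
while the provider itself, e.g. the web identity token strategy, stays
responsible for refreshing and revalidating them.

    require "aws-sdk-core"

    class SdkCredentials
      # provider: any Aws credential provider, e.g. Aws::AssumeRoleWebIdentityCredentials
      def initialize(provider)
        @provider = provider
      end

      def username
        @provider.credentials.access_key_id
      end

      def password
        @provider.credentials.secret_access_key
      end

      def security_token
        @provider.credentials.session_token
      end
    end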