Compare commits

...

141 Commits

Author SHA1 Message Date
HoneyryderChuck
0261449b39 fixed sig for callbacks_for 2025-08-08 17:06:03 +01:00
HoneyryderChuck
84c8126cd9 callback_for: check for ivar existence first
Closes #353
2025-08-08 16:30:17 +01:00
HoneyryderChuck
ff3f1f726f fix warning about argument potentially being ignored 2025-08-07 12:34:59 +01:00
HoneyryderChuck
b8b710470c fix sentry deprecation 2025-08-07 12:30:31 +01:00
HoneyryderChuck
0f3e3ab068 remove trailing :: from IO module usage, as there's no more internal module 2025-08-07 12:30:21 +01:00
HoneyryderChuck
095fbb3463 using local aws for the max requests tests
reduce exposure to httpbin.org even more
2025-08-07 12:12:50 +01:00
HoneyryderChuck
7790589c1f linting issue 2025-08-07 11:28:18 +01:00
HoneyryderChuck
dd8608ec3b small improv in max requests tests to make it tolerant to multi-homed networks 2025-08-07 11:22:29 +01:00
HoneyryderChuck
8205b351aa removing usage of httpbin.org peer in tests wherever possible
it has been quite unstable, 503'ing often
2025-08-07 11:21:59 +01:00
HoneyryderChuck
5992628926 update nghttp2 used in CI tests 2025-08-07 11:21:02 +01:00
HoneyryderChuck
39370b5883 Merge branch 'issue-337' into 'master'
fix for issues blocking reconnection in proxy mode

Closes #337

See merge request os85/httpx!397
2025-07-30 09:49:51 +00:00
HoneyryderChuck
1801a7815c http2 parser: fix calculation when connection closes and there's no termination handshake 2025-07-18 17:48:23 +01:00
HoneyryderChuck
0953e4f91a fix for #receive_requests bailout routine when out of selectables
the routine was using #fetch_response, which may return nil, and wasn't handling that case, so it could itself return nil instead of a response/error-response object. since, depending on the plugins, #fetch_response may reroute requests, handling nil keeps the loop going in case there are selectables to process again as a result
2025-07-18 17:48:23 +01:00
HoneyryderChuck
a78a3f0b7c proxy fixes: allow proxy connection errors to be retriable
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration; to guard against this, the proxy plugin now rescues proxy connection errors (socket/timeout errors happening until the tunnel is established) and allows them to be retried, while ignoring other proxy errors; meanwhile, the error naming was simplified, and HTTPX::ProxyError now replaces HTTPX::HTTPProxyError (which is a breaking change).
2025-07-18 17:48:23 +01:00
HoneyryderChuck
aeb8fe5382 fix proxy ssl reconnection
when a proxied ssl connection was lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new ProxySSL connection on reconnect
2025-07-18 17:48:23 +01:00
HoneyryderChuck
03170b6c89 promote certain transition logs to regular code (under level 3)
not really useful as metered telemetry, but they would have been useful when debugging other bugs
2025-07-18 17:48:23 +01:00
HoneyryderChuck
814d607a45 Revert "options: initialize all possible options to improve object shape"
This reverts commit f64c3ab5990b68f850d0d190535a45162929f0af.
2025-07-18 17:47:08 +01:00
HoneyryderChuck
5502332e7e logging when connections are deregistered from the selector/pool
also, logging when a response is fetched in the session
2025-07-18 17:46:43 +01:00
HoneyryderChuck
f3b68950d6 adding current fiber id to log message tags 2025-07-18 17:45:21 +01:00
HoneyryderChuck
2c4638784f Merge branch 'fix-shape' into 'master'
object shape improvements

See merge request os85/httpx!396
2025-07-14 15:38:19 +00:00
HoneyryderChuck
b0016525e3 recover from network unreachable errors when using cached IPs
while this type of error is avoided when doing HEv2, the IPs remain
in the cache; this means that, once the same host is reached, the
IPs are loaded onto the same socket, and if the issue is IPv6
connectivity, it'll break outside of the HEv2 flow.

this error is now handled inside the connect block, so that other
IPs in the list can be tried afterwards; the IP is then evicted from
the cache.

the HEv2-related regression test is disabled in CI, as it's currently
unreliable in Gitlab CI, which allows resolving the IPv6 address,
but does not allow connecting to it
2025-07-14 15:44:47 +01:00
HoneyryderChuck
49555694fe remove check for non unique local ipv6 which is disabling HEv2
not sure anymore under which condition this was done...
2025-07-14 11:57:02 +01:00
HoneyryderChuck
93e5efa32e http2 stream header logs: initial newline to align values and make debug logs clearer 2025-07-14 11:50:22 +01:00
HoneyryderChuck
8b3c1da507 removed ivar left behind and used nowhere 2025-07-14 11:50:22 +01:00
HoneyryderChuck
d64f247e11 fix for Connection too many object shapes
some more ivars which were not initialized in the first place were leading to the warning in CI mode
2025-07-14 11:50:22 +01:00
HoneyryderChuck
f64c3ab599 options: initialize all possible options to improve object shape
Options#merge works by duping-then-filling ivars, but due to not all of them being initialized on object creation, each merge had the potential of adding more object shapes for the same class, which breaks one of the most recent ruby optimizations

this was fixed by caching all possible option names at the class level, and using that as reference in the initialize method to nilify all unreferenced options
2025-07-14 11:50:22 +01:00
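The technique this commit describes (later reverted further up the log) can be sketched in plain Ruby. This is a hypothetical `Options` class, not the actual `HTTPX::Options`: assigning every registered option name in the constructor, in a fixed order, keeps all instances on a single object shape, so `#merge` can dup-then-fill ivars without creating new shapes.

```ruby
# Hypothetical Options class illustrating the commit's technique.
class Options
  # class-level registry of every possible option name
  OPTION_NAMES = %i[timeout headers ssl].freeze

  attr_reader :timeout, :headers, :ssl

  def initialize(opts = {})
    # nilify every unreferenced option, so all instances share one shape
    OPTION_NAMES.each do |name|
      instance_variable_set(:"@#{name}", opts[name])
    end
  end

  def merge(opts)
    dup.tap do |merged|
      opts.each do |name, value|
        merged.instance_variable_set(:"@#{name}", value)
      end
    end
  end
end

a = Options.new(timeout: 5)
b = a.merge({ headers: { "accept" => "json" } })
```

Since `a` and `b` define the same ivars in the same order, they share one object shape, which is what the optimization relies on.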
HoneyryderChuck
af03ddba3b options: inlining logic from do_initialize in constructor 2025-07-14 09:10:52 +01:00
HoneyryderChuck
7012ca1f27 fixed previous commit, as the tag is not available before 1.15 2025-07-03 16:39:54 +01:00
HoneyryderChuck
d405f8905f fixed ddtrace compatibility for versions under 1.13.0 2025-07-03 16:23:27 +01:00
HoneyryderChuck
3ff10f142a replace h2 upgrade peer with a custom implementation
the remote one has been failing for some time
2025-06-09 22:56:30 +01:00
HoneyryderChuck
51ce9d10a4 bump version to 1.5.1 2025-06-09 09:04:05 +01:00
HoneyryderChuck
6bde11b09c Merge branch 'gh-92' into 'master'
don't bookkeep retry attempts when errors happen on just-checked-out open connections

See merge request os85/httpx!394
2025-05-28 17:54:03 +00:00
HoneyryderChuck
0c2808fa25 prevent needless closing loop when process is interrupted during DNS request
the native resolver needs to be unselected. it already was, but it was still being taken into account for bookkeeping. this fixes that by eliminating closed selectables from the list (which were probably already removed from it via callback)

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-28 15:26:11 +01:00
HoneyryderChuck
cb78091e03 don't bookkeep retry attempts when errors happen on just-checked-out open connections
in case of multiple connections to the same server, where the server may have closed all of them at the same time, a request will fail after checkout multiple times, before starting a new one where the request may succeed. this patch allows the prior attempts not to exhaust the number of possible retries on the request

it does so by marking the request as ping when the connection it's being sent to is marked as inactive; this leverages the logic of gating retries bookkeeping in such a case

Closes https://github.com/HoneyryderChuck/httpx/issues/92
2025-05-28 15:25:50 +01:00
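The gating described in this commit can be sketched as follows (hypothetical class and method names, not the actual httpx internals): a failure on a ping-marked request doesn't consume a retry attempt.

```ruby
# Hypothetical Request illustrating the retry bookkeeping above.
Request = Struct.new(:retries, :ping) do
  def ping? = ping

  def mark_as_ping!
    self.ping = true
  end

  # returns whether the request may still be retried
  def handle_failure
    if ping?
      self.ping = false # a failed probe doesn't consume an attempt
    else
      self.retries -= 1
    end
    retries.positive?
  end
end

req = Request.new(3, false)
req.mark_as_ping!
probe_retriable = req.handle_failure # probe failure: attempts untouched
real_retriable = req.handle_failure  # real failure: attempts decremented
```

After both failures the request still has attempts left, whereas without the ping gate the probe failure would have consumed one as well.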
HoneyryderChuck
6fa69ba475 Merge branch 'duplicate-method-def' into 'master'
Fix duplicate `option_pool_options` method

See merge request os85/httpx!393
2025-05-21 15:30:34 +00:00
Earlopain
4a78e78d32
Fix duplicate option_pool_options method
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:237: warning: method redefined; discarding old option_pool_options (StandardError)
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:221: warning: previous definition of option_pool_options was here
2025-05-21 12:49:54 +02:00
HoneyryderChuck
0e393987d0 bump version to 1.5.0 2025-05-16 14:04:08 +01:00
HoneyryderChuck
12483fa7c8 missing ivar sigs in tcp class 2025-05-16 11:15:28 +01:00
HoneyryderChuck
d955ba616a deselect idle connections on session termination
session may be interrupted earlier than the connection has finished
the handshake; in such a case, simulate early termination.

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-15 00:31:15 +01:00
HoneyryderChuck
804d5b878b Merge branch 'debug-redact' into 'master'
added :debug_redact option

See merge request os85/httpx!387
2025-05-14 23:01:28 +00:00
HoneyryderChuck
75702165fd remove ping check when querying for repeatable request status
this should be dependent on the exception only, as connections may have closed before ping was released

this addresses https://github.com/HoneyryderChuck/httpx/issues/87\#issuecomment-2866564479
2025-05-14 23:52:18 +01:00
HoneyryderChuck
120bbad126 clear write buffer on connect errors
leaving bytes around messes up the termination handshake and may raise other unwanted errors
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35446e9fe1 fixes for connection coalescing flow
the whole "found connection not open" branch was removed, as currently,
a mergeable connection must not be closed; this means that only
open/inactive connections will be picked up from selector/pool, as
they're the only coalescable connections (have addresses/ssl cert
state). this may be extended to support closed connections though, as
remaining ssl/addresses are enough to make it coalescable at that point,
and then it's just a matter of idling it, so it'll be simpler than it is
today.

coalesced connection gets closed via Connection#terminate at the end
now, in order to propagate whether it was a cloned connection.

added log messages in order to monitor coalescing handshake from logs.
2025-05-13 16:21:06 +01:00
HoneyryderChuck
3ed41ef2bf pool: do not decrement conn counter when returning existing connection, nor increment it when acquiring
this variable is supposed to monitor new connections being created or dropped, existing connection management shouldn't affect it
2025-05-13 16:21:06 +01:00
HoneyryderChuck
9ffbceff87 rename Connection#coalesced_connection=(conn) to Connection.coalesce!(conn) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
757c9ae32c making tcp state transition logs less ambiguous
also show transition states in connected
2025-05-13 16:21:06 +01:00
HoneyryderChuck
5d88ccedf9 redact ping payload as well 2025-05-13 16:21:06 +01:00
HoneyryderChuck
85808b6569 adding logs to select-on-socket phase 2025-05-13 16:21:06 +01:00
HoneyryderChuck
d5483a4264 reconnectable errors: include HTTP/2 parser errors and openssl errors 2025-05-13 16:21:06 +01:00
HoneyryderChuck
540430c00e assert for request in a faraday test (sometimes this is nil, for some reason) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
3a417a4623 added :debug_redact option
when true, text passed to log messages considered sensitive (wrapped in a +#log_redact+ call) will be logged as "[REDACTED]"
2025-05-13 16:21:06 +01:00
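A minimal sketch of the described behaviour (a hypothetical helper, not the actual HTTPX logging internals): sensitive fragments are wrapped in a redaction call, which substitutes the placeholder when the option is enabled.

```ruby
# Hypothetical helper mirroring the described redaction behaviour.
def log_redact(text, debug_redact: true)
  debug_redact ? "[REDACTED]" : text.to_s
end

line = "authorization: #{log_redact("Bearer secret-token")}"
```

With `debug_redact: false`, the original text passes through untouched.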
HoneyryderChuck
35c18a1b9b options: meta prog integer options into the same definition 2025-05-13 16:20:28 +01:00
HoneyryderChuck
cf19fe5221 Merge branch 'improv' into 'master'
sig improvements

See merge request os85/httpx!390
2025-05-13 15:18:50 +00:00
HoneyryderChuck
f9c2fc469a options: freeze more ivars by default 2025-05-13 15:52:57 +01:00
HoneyryderChuck
9b513faab4 aligning implementation of the #resolve function in all implementations 2025-05-13 15:52:57 +01:00
HoneyryderChuck
0be39faefc added some missing sigs + type safe code 2025-05-13 15:44:21 +01:00
HoneyryderChuck
08c5f394ba fixed usage of nonexistent var 2025-05-13 15:13:02 +01:00
HoneyryderChuck
55411178ce resolver: moved @connections ivar + init into parent class
also, establishing the selectable interface for resolvers
2025-05-13 15:13:02 +01:00
HoneyryderChuck
a5c83e84d3 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!389
2025-05-13 14:10:56 +00:00
HoneyryderChuck
d7e15c4441 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-05-13 11:02:13 +01:00
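The wakeup mechanism this commit describes can be sketched in plain Ruby (an illustration of the pattern, not the plugin's code): a thread blocked on IO.select is woken by writing a byte to an internal pipe, so payload buffered from another thread is picked up promptly.

```ruby
read_io, write_io = IO.pipe
buffer = Queue.new

consumer = Thread.new do
  IO.select([read_io])     # blocks until another thread signals
  read_io.read_nonblock(1) # drain the wakeup byte
  buffer.pop
end

# another thread buffers payload, then wakes the select loop
buffer << "payload"
write_io.write(".")
result = consumer.value

read_io.close
write_io.close
```

The same pipe trick is a common way to make a select loop responsive to events originating outside the IO set it watches.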
HoneyryderChuck
012255e49c Merge branch 'ruby-3.5-cgi' into 'master'
Only require from `cgi` what is required

See merge request os85/httpx!391
2025-05-10 00:20:33 +00:00
HoneyryderChuck
d20506acb8 Merge branch 'httpx-issue-350' into 'master'
In file (any serialized) store need to response.finish! on get

Closes #350

See merge request os85/httpx!392
2025-05-10 00:13:41 +00:00
Paul Duey
28399f1b88 In file (any serialized) store need to response.finish! on get 2025-05-09 17:22:39 -04:00
Earlopain
953101afde
Only require from cgi what is required
In Ruby 3.5, most of the `cgi` gem will be removed and moved to a bundled gem.

Luckily, the escape/unescape methods have been left around. So, only the require path needs to be adjusted to avoid a warning.
`cgi/escape` was available since Ruby 2.3

I also moved the require to the file that actually uses it.

https://bugs.ruby-lang.org/issues/21258
2025-05-09 18:54:41 +02:00
HoneyryderChuck
055ee47b83 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!383
2025-04-29 22:44:44 +00:00
HoneyryderChuck
dbad275c65 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-04-29 23:25:41 +01:00
HoneyryderChuck
fe69231e6c Merge branch 'gh-86' into 'master'
persistent plugin: by default, do not retry requests which failed due to a request timeout

See merge request os85/httpx!385
2025-04-29 09:41:45 +00:00
HoneyryderChuck
4c61df768a persistent plugin: by default, do not retry requests which failed due to a request timeout
that isn't a connection-related type of failure, and confuses users when it gets retried, as connection was fine, request was just slow

Fixes https://github.com/HoneyryderChuck/httpx/issues/86
2025-04-27 16:47:50 +01:00
HoneyryderChuck
aec150b030 Merge branch 'issue-347' into 'master'
:callbacks plugin fix: copy callbacks to new session when using the session builder methods

Closes #347 and #348

See merge request os85/httpx!386
2025-04-26 15:12:42 +00:00
HoneyryderChuck
29a43c4bc3 callbacks plugin fix: errors raised in .on_request_error callback should bubble up to user code
this was not happening for errors happening during name resolution, particularly when HEv2 was used, as the second resolver was kept open and didn't stop the selector loop

Closes #348
2025-04-26 03:11:55 +01:00
HoneyryderChuck
34c2fee60c :callbacks plugin fix: copy callbacks to new session when using the session builder methods
such as '.with' or '.wrap', which create a new session object on the fly
2025-04-26 02:34:56 +01:00
HoneyryderChuck
c62966361e moving can_buffer_more_requests? to a private section
it's only used internally
2025-04-26 01:42:55 +01:00
HoneyryderChuck
2b87a3d5e5 selector: make APIs expecting connections more strict, improve sigs by using interface 2025-04-26 01:42:55 +01:00
HoneyryderChuck
3dd767cdc2 response_cache: also cache request headers, for vary algo computation 2025-04-26 01:42:55 +01:00
HoneyryderChuck
a9255c52aa response_cache plugin: adding more rdoc documentation to methods 2025-04-26 01:42:55 +01:00
HoneyryderChuck
32031e8a03 response_cache plugin: rename cached_response? to not_modified?, more accurate 2025-04-26 01:42:55 +01:00
HoneyryderChuck
f328646c08 Merge branch 'gh-84' into 'master'
adding missing datadog span decoration

See merge request os85/httpx!384
2025-04-26 00:40:49 +00:00
HoneyryderChuck
0484dd76c8 fix for wrong query string encoding when passed an empty :params input
Fixes https://github.com/HoneyryderChuck/httpx/issues/85
2025-04-26 00:20:28 +01:00
HoneyryderChuck
17c1090b7a more aggressive timeouts in tests 2025-04-26 00:10:48 +01:00
HoneyryderChuck
87f4ce4b03 adding missing datadog span decoration
including header tags, and other missing span tags
2025-04-25 23:46:11 +01:00
HoneyryderChuck
1ec7442322 Merge branch 'improv-tests' 2025-04-14 17:35:15 +01:00
HoneyryderChuck
723959cf92 wrong option docs 2025-04-13 01:27:27 +01:00
HoneyryderChuck
10b4b9c7c0 remove unused method 2025-04-13 01:27:05 +01:00
HoneyryderChuck
1b39bcd3a3 set appropriate coverage key, use it as command 2025-04-13 01:08:18 +01:00
HoneyryderChuck
44a2041ea8 added missing response cache store sigs 2025-04-13 01:07:18 +01:00
HoneyryderChuck
b63f9f1ae2 native: realign log calls, so coverage does not misreport them 2025-04-13 01:06:54 +01:00
HoneyryderChuck
467dd5e7e5 file store: testing path when the same request is stored twice
also, testing usage of symbol response cache store options.
2025-04-13 01:05:42 +01:00
HoneyryderChuck
c626fae3da adding test to force usage of max_requests conditionals under http1 2025-04-13 01:05:08 +01:00
HoneyryderChuck
7f6b78540b Merge branch 'issue-328' into 'master'
pool option: max_connections

Closes #328

See merge request os85/httpx!371
2025-04-12 22:43:18 +00:00
HoneyryderChuck
b120ce4657 new pool option: max_connections
this new option declares the maximum number of in-flight-or-idle open connections a session may hold. connections get recycled when a new one is needed and the pool has closed connections to discard. the same pool timeout error applies as for max_connections_per_origin
2025-04-12 23:29:08 +01:00
HoneyryderChuck
32c36bb4ee Merge branch 'issue-341' into 'master'
response_cache plugin: return cached response from store unless stale

Closes #341

See merge request os85/httpx!382
2025-04-12 21:45:35 +00:00
HoneyryderChuck
cc0626429b prevent overlap of test dirs/files across test instances 2025-04-12 22:09:12 +01:00
HoneyryderChuck
a0e2c1258a allow setting :response_cache_store with a symbol (:store, :file_store)
cleaner to select from one of the two available options
2025-04-12 22:09:12 +01:00
HoneyryderChuck
6bd3c15384 fixing cacheable_response? to exclude headers and freshness
it's called with a fresh response already
2025-04-12 22:09:12 +01:00
HoneyryderChuck
0d23c464f5 simplifying response cache store API
#get, #set, #clear, that's all you need. this can now be some bespoke custom class implementing these primitives
2025-04-12 22:09:12 +01:00
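The narrowed store API above can be illustrated with a minimal bespoke class (a hypothetical sketch, not the bundled store): any object responding to #get, #set and #clear will do.

```ruby
# Hypothetical thread-safe in-memory store implementing the three primitives.
class MemoryStore
  def initialize
    @store = {}
    @mutex = Mutex.new
  end

  def get(request)
    @mutex.synchronize { @store[request] }
  end

  def set(request, response)
    @mutex.synchronize { @store[request] = response }
  end

  def clear
    @mutex.synchronize { @store.clear }
  end
end

store = MemoryStore.new
store.set("GET https://example.com", "cached response")
```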
HoneyryderChuck
a75b89db74 response_cache plugin: adding filesystem-based store
it stores the cached responses in the filesystem
2025-04-12 22:09:12 +01:00
HoneyryderChuck
7173616154 response cache: fix vary header handling by supporting a defined set of headers
the cache key will also be determined by the values of the supported vary headers, when present; this means easier lookups, with a one-level hash fetch, where the same url-verb request may have multiple entries depending on those headers

checking the response vary header will therefore be done at cache response lookup; writes may override when they shouldn't, though, as a full match on supported vary headers is performed, and one can't know in advance the combination of vary headers, which is why interested parties will have to be judicious with the new  option
2025-04-12 22:09:12 +01:00
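A sketch of the described lookup scheme (hypothetical helper names, not the plugin's internals): the cache key folds the values of a configured set of supported vary headers into a single one-level hash key, so each header combination gets its own entry.

```ruby
# Hypothetical: only a configured set of vary headers participates in the key.
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, url, headers)
  vary_values = SUPPORTED_VARY_HEADERS.map { |h| headers[h].to_s }
  [verb, url, *vary_values].join("|")
end

key_html = cache_key("GET", "https://example.com", "accept" => "text/html")
key_json = cache_key("GET", "https://example.com", "accept" => "application/json")
```

Requests to the same url and verb but with different values for a supported vary header produce distinct keys.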
HoneyryderChuck
69f9557780 corrected equality comparison of response bodies 2025-04-12 22:09:12 +01:00
HoneyryderChuck
339af65cc1 response cache: store cached response in request, so that copying and cache invalidating work a bit OOTB 2025-04-12 22:09:12 +01:00
HoneyryderChuck
3df6edbcfc response_cache: an immutable response is always fresh 2025-04-12 22:09:11 +01:00
HoneyryderChuck
5c2f8ab0b1 response_cache plugin: return cached response from store unless stale
response age wasn't being taken into account, and a cache invalidation request was always being sent; a fresh response will stay in the store until expired; when it expires, cache invalidation will be tried (if possible); if invalidated, the new response is put in the store; if validated, the body of the cached response is copied, and the cached response stays in the store
2025-04-12 22:09:11 +01:00
HoneyryderChuck
0c335fd03d Merge branch 'gh-82' into 'master'
persistent plugin: drop , allow retries for ping requests, regardless of idempotency property

See merge request os85/httpx!381
2025-04-12 09:14:32 +00:00
HoneyryderChuck
bf19cde364 fix: ping record to match must be kept in a different string
http-2 1.1.0 uses the string input as the ultimate buffer (when the input is not frozen), which will mutate the argument. in order to keep it around for further comparison, the string is duped
2025-04-11 16:25:58 +01:00
HoneyryderChuck
7e0ddb7ab2 persistent plugin: when errors happen during connection ping phase, make sure that only connection lost errors are going to be retriable 2025-04-11 14:41:36 +01:00
HoneyryderChuck
4cd3136922 connection: set request timeouts before sending the request to the parser
in situations where the connection is already open/active, the requests would be buffered before setting the timeouts, which would skip transition callbacks associated with writes, such as write timeouts and request timeouts
2025-04-11 14:41:36 +01:00
HoneyryderChuck
642122a0f5 persistent plugin: drop , allow retries for ping requests, regardless of idempotency property
the previous option was there to allow reconnecting on non-idempotent (i.e. POST) requests, but had the unfortunate side-effect of allowing retries for failures (i.e. timeouts) which had nothing to do with a failed connection error; this mitigates it by enabling retries for ping-aware requests, i.e. if there is an error during PING, always retry
2025-04-11 14:41:36 +01:00
HoneyryderChuck
42d42a92b4 added missing test for close_on_fork option 2025-04-09 09:39:53 +01:00
HoneyryderChuck
fb6a509d98 removing duplicate sig 2025-04-06 21:54:03 +01:00
HoneyryderChuck
3c22f36a6c session refactor: remove @responses hash
this was being used as an internal cache for finished responses; it can however be superseded by Request#response, which fulfills the same role alongside the #finished? call; this allows us to drop one variable-size hash which would grow at least as large as the number of requests per call, and was inadvertently shared across threads when using the same session (at no danger of colliding, but could perhaps cause problems in jruby?)

this also allows removing one callback
2025-04-04 11:05:27 +01:00
HoneyryderChuck
51b2693842 Merge branch 'gh-disc-71' into 'master'
:stream_bidi plugin

See merge request os85/httpx!365
2025-04-04 09:51:29 +00:00
HoneyryderChuck
1ab5855961 Merge branch 'gh-74' into 'master'
adding  option, which automatically closes sessions on fork

See merge request os85/httpx!377
2025-04-04 09:49:06 +00:00
HoneyryderChuck
f82816feb3 Merge branch 'issue-339' into 'master'
QUERY plugin

Closes #339

See merge request os85/httpx!374
2025-04-04 09:48:13 +00:00
HoneyryderChuck
ee229aa74c readapt some plugins so that supported verbs can be overridden by custom plugins 2025-04-04 09:32:38 +01:00
HoneyryderChuck
793e900ce8 added the :query plugin, which supports the QUERY http method
added as a plugin for explicit opt-in, as it's still an experimental feature (RFC in draft)
2025-04-04 09:32:38 +01:00
HoneyryderChuck
1241586eb4 introducing subplugins to plugins
subplugins are modules of plugins which register as post-plugins of other plugins

a specific plugin may want to have a side-effect on the functionality of another plugin, so they can use this to register it when the other plugin is loaded
2025-04-04 09:25:53 +01:00
HoneyryderChuck
cbf454ae13 Merge branch 'issue-336' into 'master'
ruby 3.4 features

Closes #336

See merge request os85/httpx!372
2025-04-04 08:24:28 +00:00
HoneyryderChuck
180d3b0e59 adding option, which automatically closes sessions on fork
only for ruby 3.1 or higher. adapted from a similar feature from the connection_pool gem
2025-04-04 00:22:05 +01:00
HoneyryderChuck
84db0072fb new :stream_bidi plugin
this plugin is an HTTP/2 only plugin which enables bidirectional streaming

the client can continue writing request streams as response streams arrive midway

Closes https://github.com/HoneyryderChuck/httpx/discussions/71
2025-04-04 00:21:12 +01:00
HoneyryderChuck
c48f6c8e8f adding Request#can_buffer?
abstracts some logic around whether a request has request body bytes to buffer
2025-04-04 00:20:29 +01:00
HoneyryderChuck
870b8aed69 make .parser_type an instance method instead
allows plugins to override
2025-04-04 00:20:29 +01:00
HoneyryderChuck
56b8e9647a making multipart decoding code more robust 2025-04-04 00:18:53 +01:00
HoneyryderChuck
1f59688791 rename test servlet 2025-04-04 00:18:53 +01:00
HoneyryderChuck
e63c75a86c improvements in headers
using Hash#new(capacity: ) to better predict size; reduce the number of allocated arrays by passing the result of  to the store when possible, and only calling #downcased(str) once; #array_value will also not try to clean up errors in the passed data (it'll either fail loudly, or be fixed downstream)
2025-04-04 00:18:53 +01:00
HoneyryderChuck
3eaf58e258 refactoring timers to more efficiently deal with empty intervals
before, canceling a timer connected to an interval which would become empty would delete it from the main intervals store; this deletion now moves away from the request critical path, and pinging for intervals will drop elapsed-or-empty intervals before returning the shortest one

beyond that, the intervals store won't be constantly recreated if there's no need for it (i.e. nothing has elapsed), which reduces the gc pressure

searching for existing interval on #after now uses bsearch; since the list is ordered, this should make finding one more performant
2025-04-04 00:18:53 +01:00
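The ordered-list lookup mentioned above can be sketched with Array#bsearch (a plain-Ruby illustration, not the actual timers code): because the list is sorted, an existing interval is found in O(log n) instead of a linear scan.

```ruby
# the intervals list is kept sorted by length
intervals = [1.0, 2.5, 5.0, 10.0]

def find_interval(intervals, interval)
  # find-any mode: the block returns 0 on a match, +/- to steer the search
  intervals.bsearch { |i| interval <=> i }
end

found = find_interval(intervals, 5.0)
missing = find_interval(intervals, 3.0)
```

When no equal element exists, bsearch returns nil, which the caller can take as the signal to insert a new interval at the right position.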
HoneyryderChuck
9ff62404a6 enabling warning messages 2025-04-04 00:18:53 +01:00
HoneyryderChuck
4d694f9517 ruby 3.4 feature: use String#append_as_bytes in buffers 2025-04-04 00:18:53 +01:00
HoneyryderChuck
22952f6a4a ruby 3.4: set string capacity for buffer-like string 2025-04-04 00:18:53 +01:00
HoneyryderChuck
7660e4c555 implement #inspect in a few places where output gets verbose
tweak some existing others
2025-04-04 00:18:53 +01:00
HoneyryderChuck
a9cc787210 ruby 3.4: use set_temporary_name to decorate plugin classes with more descriptive names 2025-04-04 00:18:53 +01:00
HoneyryderChuck
970830a025 bumping version to 1.4.4 2025-04-03 22:17:42 +01:00
HoneyryderChuck
7a3d38aeee Merge branch 'issue-343' into 'master'
session: discard connection callbacks if they're assigned to a different session already

Closes #343

See merge request os85/httpx!379
2025-04-03 18:53:39 +00:00
HoneyryderChuck
54bb617902 fixed regression test of 1.4.1 (it detected a different error, but the outcome is not a goaway error anymore, as persistent conns recover and retry) 2025-04-03 18:34:41 +01:00
HoneyryderChuck
cf08ae99f5 removing unneded require in regression test which loads webmock by mistake 2025-04-03 18:23:56 +01:00
HoneyryderChuck
c8ce4cd8c8 Merge branch 'down-issue-98' into 'master'
stream plugin: allow partial buffering of the response when calling things other than #each

See merge request os85/httpx!380
2025-04-03 17:23:21 +00:00
HoneyryderChuck
6658a2ce24 ssl socket: do not call tcp socket connect if already connected 2025-04-03 18:17:35 +01:00
HoneyryderChuck
7169f6aaaf stream plugin: allow partial buffering of the response when calling things other than #each
this allows calling #status or #headers on a stream response without buffering the whole response, as was happening until now; this will only work for methods which do not rely on the whole payload being available, but that should be ok for the stream plugin usage

Fixes https://github.com/janko/down/issues/98
2025-04-03 17:51:02 +01:00
HoneyryderChuck
ffc4824762 do not needlessly probe for readiness on a reconnected connection 2025-04-03 11:04:15 +01:00
HoneyryderChuck
8e050e846f decrementing the in-flight counter in a connection
sockets are sometimes needlessly probed on retries because the counter wasn't taking failed attempts into account
2025-04-03 11:04:15 +01:00
HoneyryderChuck
e40d3c9552 do not exhaust retry attempts when probing connections after keep alive timeout expires
since pools can keep multiple persistent connections which may have already been terminated by the peer, exhausting the one retry attempt from the persistent plugin may make the request fail before trying it on an actual connection. in this patch, requests which are preceded by a PING frame used for probing are marked as such, and do not decrement the attempts counter when failing
2025-04-03 11:04:15 +01:00
HoneyryderChuck
ba60ef79a7 if checking out a connection in a closing state, assume that the channel is irrecoverable and hard-close it beforehand
one fewer callback to manage, which could potentially leak across session usages
2025-03-31 11:46:04 +01:00
HoneyryderChuck
ca49c9ef41 session: discard connection callbacks if they're assigned to a different session already
some connection callbacks are prone to being left behind; when they are, they may access objects that may have been locked by another thread, thereby corrupting state.
2025-03-28 18:26:17 +00:00
157 changed files with 3288 additions and 926 deletions


@@ -111,7 +111,7 @@ regression tests:
   variables:
     BUNDLE_WITHOUT: lint:assorted
     CI: 1
-    COVERAGE_KEY: "$RUBY_ENGINE-$RUBY_VERSION-regression-tests"
+    COVERAGE_KEY: "ruby-3.4-regression-tests"
   artifacts:
     paths:
       - coverage/


@@ -0,0 +1,14 @@
# 1.4.4
## Improvements
* `:stream` plugin: the response will now be partially buffered, in order to allow inspecting e.g. the response status or headers without buffering the full response body
* this fixes an issue in the `down` gem integration when used with the `:max_size` option.
* do not unnecessarily probe for connection liveness if no more requests are inflight, including failed ones.
* when using persistent connections, do not probe for liveness right after reconnecting after a keep alive timeout.
## Bugfixes
* `:persistent` plugin: do not exhaust retry attempts when probing for (and failing) connection liveness.
* since the introduction of per-session connection pools, and consequently the possibility of multiple inactive connections for the same origin being in the pool (which may have been terminated by the peer server), requests would fail before being able to establish a new connection.
* prevent retrying to connect the TCP socket object when an SSLSocket object is already in place and connecting.

doc/release_notes/1_5_0.md Normal file

@@ -0,0 +1,126 @@
# 1.5.0
## Features
### `:stream_bidi` plugin
The `:stream_bidi` plugin enables bidirectional streaming support (an HTTP/2 only feature!). It builds on top of the `:stream` plugin, and uses its block-based syntax to process incoming frames, while allowing the user to pipe more data to the request (from the same, or another thread/fiber).
```ruby
http = HTTPX.plugin(:stream_bidi)
request = http.build_request(
"POST",
"https://your-origin.com/stream",
headers: { "content-type" => "application/x-ndjson" },
body: ["{\"message\":\"started\"}\n"]
)
chunks = []
response = http.request(request, stream: true)
Thread.start do
response.each do |chunk|
handle_data(chunk)
end
end
# now send data...
request << "{\"message\":\"foo\"}\n"
request << "{\"message\":\"bar\"}\n"
# ...
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Stream-Bidi
### `:query` plugin
The `:query` plugin adds public methods supporting the `QUERY` HTTP verb:
```ruby
http = HTTPX.plugin(:query)
http.query("https://example.com/gquery", body: "foo=bar") # QUERY /gquery ....
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Query
this functionality was added as a plugin for explicit opt-in, as it's experimental (RFC for the new HTTP verb is still in draft).
### `:response_cache` plugin filesystem based store
The `:response_cache` plugin supports setting the filesystem as the response cache store (instead of storing responses only in memory, which is the default).
```ruby
# cache store in the filesystem, writes to the temporary directory from the OS
http = HTTPX.plugin(:response_cache, response_cache_store: :file_store)
# if you want a separate location
http = HTTPX.plugin(:response_cache).with(response_cache_store: HTTPX::Plugins::ResponseCache::FileStore.new("/path/to/dir"))
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Response-Cache#:file_store
### `:close_on_fork` option
A new option, `:close_on_fork`, can be used to ensure that a session object with open connections does not leak them into a forked process (a common case being `:persistent` plugin sessions which were used before the fork):
```ruby
http = HTTPX.plugin(:persistent, close_on_fork: true)
# http may have open connections here
fork do
# http has no connections here
end
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools#Fork-Safety .
### `:debug_redact` option
The `:debug_redact` option will, when enabled, replace parts of the debug logs (enabled via `:debug` and `:debug_level` options) which may contain sensitive information, with the `"[REDACTED]"` placeholder.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Debugging .
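The redaction behaviour can be sketched standalone (this mirrors the internal helper; the method here is a self-contained illustration, not the library's public API):

```ruby
# Sketch of the redaction described above: when redaction is enabled,
# the sensitive portion of a debug log line is replaced with a
# placeholder instead of being printed verbatim.
def log_redact(text, should_redact)
  return text.to_s unless should_redact

  "[REDACTED]"
end

puts log_redact("authorization: Bearer s3cr3t", false) # printed verbatim
puts log_redact("authorization: Bearer s3cr3t", true)  # => [REDACTED]
```

In `httpx` itself, this is driven by the `:debug_redact` option (or the `HTTPX_DEBUG_REDACT` environment variable) rather than called directly.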
### `:max_connections` pool option
A new `:max_connections` pool option (settable under `:pool_options`) can be used to define the **overall** maximum number of connections for a pool ("in-transit" or "at-rest"); it complements, and when set supersedes, the existing `:max_connections_per_origin`, which enforces the same limit per connection origin.
```ruby
HTTPX.with(pool_options: { max_connections: 100 })
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools .
### Subplugins
An enhancement to the plugin architecture: plugins can now define submodules ("subplugins") which get loaded when another given plugin is in use, or as soon as it is loaded afterwards.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Custom-Plugins#Subplugins .
## Improvements
* `:persistent` plugin: several improvements around reconnections after failure:
  * reconnections only happen for "connection broken" errors (timeouts no longer trigger a reconnection)
  * reconnections won't exhaust retries
* `:response_cache` plugin: several improvements:
  * return the cached response if not stale, and send a conditional request otherwise (it was previously always doing the latter).
  * consider immutable (i.e. `"Cache-Control: immutable"`) responses as never stale.
* `:datadog` adapter: decorate spans with more tags (headers, kind, component, etc.).
* timer operations have been improved to use more efficient algorithms and reduce object creation.
## Bugfixes
* ensure that setting request timeouts happens before the request is buffered (the latter could trigger a state transition required by the former).
* `:response_cache` plugin: fix `"Vary"` header handling by supporting a new plugin option, `:supported_vary_headers`, which defines which headers are taken into account for cache key calculation.
* fixed query string encoding when an empty hash is passed to the `:query` param and the URL already contains a query string.
* `:callbacks` plugin: ensure the callbacks from a session are copied when a new session is derived from it (via a `.plugin` call, for example).
* `:callbacks` plugin: errors raised from hostname resolution should bubble up to user code.
* fixed connection coalescing selector monitoring in cases where the coalescable connection is cloned, while other branches were simplified.
* clear the connection write buffer in corner cases where the remaining bytes may be interpreted as GOAWAY handshake frame (and may cause unintended writes to connections already identified as broken).
* remove idle connections from the selector when an error happens before the state changes (this may happen if the thread is interrupted during name resolution).
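The cache-key idea behind the `:supported_vary_headers` fix above can be sketched as follows (a standalone illustration under assumed names; the real key derivation lives inside the `:response_cache` plugin):

```ruby
require "digest"

# Only headers in this allowlist participate in the cache key, so headers
# outside it (request ids, tracing headers, ...) don't fragment the cache.
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, uri, headers)
  vary = SUPPORTED_VARY_HEADERS.filter_map { |f| "#{f}=#{headers[f]}" if headers.key?(f) }
  Digest::SHA1.hexdigest([verb, uri, *vary].join("|"))
end

k1 = cache_key("GET", "https://example.com", { "accept" => "application/json", "x-request-id" => "1" })
k2 = cache_key("GET", "https://example.com", { "accept" => "application/json", "x-request-id" => "2" })
puts k1 == k2 # => true: the unsupported header does not change the key
```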
## Chore
`httpx` makes extensive use of features introduced in ruby 3.4, such as `Module#set_temporary_name` for otherwise anonymous plugin-generated classes (improves debugging and issue reporting), or `String#append_as_bytes` for a small but non-negligible perf boost in buffer operations. It falls back to the previous behaviour when run on ruby 3.3 or lower.
Also, in preparation for the upcoming ruby 3.5 release, the dependency on the `cgi` gem (which will be removed from the stdlib) was dropped.
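The buffer fallback mentioned above can be sketched via feature detection (`httpx` gates on the Ruby version; `respond_to?` is used here to keep the example self-contained):

```ruby
# Appends a chunk to a binary buffer, using String#append_as_bytes on
# Ruby 3.4+ and falling back to concatenating a binary copy on older rubies.
def append_chunk(buffer, chunk)
  if buffer.respond_to?(:append_as_bytes) # Ruby 3.4+
    buffer.append_as_bytes(chunk)
  else
    buffer << chunk.b
  end
  buffer
end

buf = String.new("", encoding: Encoding::BINARY)
append_chunk(buf, "héllo") # multi-byte input is stored as raw bytes
puts buf.bytesize # => 6
```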

View File

@ -0,0 +1,6 @@
# 1.5.1
## Bugfixes
* connection errors on persistent connections which have just been checked out of the pool no longer count towards retry bookkeeping; the assumption is that, if a connection was checked into the pool in an open state, it may well have become corrupt by the time it is checked out. This was exacerbated for `:persistent` plugin connections, which by design have a retry limit of 1, and would therefore often fail right after checkout without a legitimate request attempt.
* native resolver: fix issue with process interrupts during DNS request, which caused a busy loop when closing the selector.

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint: /usr/local/bin/nghttpx

View File

@ -69,7 +69,7 @@ services:
command: -d 3
http2proxy:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 3300:80
depends_on:
@ -78,7 +78,7 @@ services:
command: --no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 80:80
- 443:443
@ -94,7 +94,7 @@ services:
- another
altsvc-nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 81:80
- 444:443

View File

@ -133,7 +133,7 @@ class SentryTest < Minitest::Test
Sentry.init do |config|
config.traces_sample_rate = 1.0
config.logger = mock_logger
config.sdk_logger = mock_logger
config.dsn = DUMMY_DSN
config.transport.transport_class = Sentry::DummyTransport
config.background_worker_threads = 0

View File

@ -13,8 +13,17 @@ module Datadog::Tracing
TYPE_OUTBOUND = Datadog::Tracing::Metadata::Ext::HTTP::TYPE_OUTBOUND
TAG_PEER_SERVICE = Datadog::Tracing::Metadata::Ext::TAG_PEER_SERVICE
TAG_BASE_SERVICE = if Gem::Version.new(DATADOG_VERSION::STRING) < Gem::Version.new("1.15.0")
"_dd.base_service"
else
Datadog::Tracing::Contrib::Ext::Metadata::TAG_BASE_SERVICE
end
TAG_PEER_HOSTNAME = Datadog::Tracing::Metadata::Ext::TAG_PEER_HOSTNAME
TAG_KIND = Datadog::Tracing::Metadata::Ext::TAG_KIND
TAG_CLIENT = Datadog::Tracing::Metadata::Ext::SpanKind::TAG_CLIENT
TAG_COMPONENT = Datadog::Tracing::Metadata::Ext::TAG_COMPONENT
TAG_OPERATION = Datadog::Tracing::Metadata::Ext::TAG_OPERATION
TAG_URL = Datadog::Tracing::Metadata::Ext::HTTP::TAG_URL
TAG_METHOD = Datadog::Tracing::Metadata::Ext::HTTP::TAG_METHOD
TAG_TARGET_HOST = Datadog::Tracing::Metadata::Ext::NET::TAG_TARGET_HOST
@ -81,6 +90,10 @@ module Datadog::Tracing
span.set_tag(TAG_STATUS_CODE, response.status.to_s)
span.set_error(::HTTPX::HTTPError.new(response)) if response.status >= 400 && response.status <= 599
span.set_tags(
Datadog.configuration.tracing.header_tags.response_tags(response.headers.to_h)
) if Datadog.configuration.tracing.respond_to?(:header_tags)
end
span.finish
@ -97,7 +110,13 @@ module Datadog::Tracing
span.resource = verb
# Add additional request specific tags to the span.
# Tag original global service name if not used
span.set_tag(TAG_BASE_SERVICE, Datadog.configuration.service) if span.service != Datadog.configuration.service
span.set_tag(TAG_KIND, TAG_CLIENT)
span.set_tag(TAG_COMPONENT, "httpx")
span.set_tag(TAG_OPERATION, "request")
span.set_tag(TAG_URL, request.path)
span.set_tag(TAG_METHOD, verb)
@ -105,8 +124,10 @@ module Datadog::Tracing
span.set_tag(TAG_TARGET_HOST, uri.host)
span.set_tag(TAG_TARGET_PORT, uri.port)
span.set_tag(TAG_PEER_HOSTNAME, uri.host)
# Tag as an external peer service
span.set_tag(TAG_PEER_SERVICE, span.service)
# span.set_tag(TAG_PEER_SERVICE, span.service)
if config[:distributed_tracing]
propagate_trace_http(
@ -120,6 +141,10 @@ module Datadog::Tracing
Contrib::Analytics.set_sample_rate(span, config[:analytics_sample_rate])
end
span.set_tags(
Datadog.configuration.tracing.header_tags.request_tags(request.headers.to_h)
) if Datadog.configuration.tracing.respond_to?(:header_tags)
span
rescue StandardError => e
Datadog.logger.error("error preparing span for http request: #{e}")

View File

@ -58,6 +58,8 @@ module WebMock
super
connection.once(:unmock_connection) do
next unless connection.current_session == self
unless connection.addresses
# reset Happy Eyeballs, fail early
connection.sibling = nil
@ -120,6 +122,7 @@ module WebMock
request.transition(:body)
request.transition(:trailers)
request.transition(:done)
response.finish!
request.response = response
request.emit(:response, response)
request_signature.headers = request.headers.to_h

View File

@ -14,8 +14,6 @@ module HTTPX
class Buffer
extend Forwardable
def_delegator :@buffer, :<<
def_delegator :@buffer, :to_s
def_delegator :@buffer, :to_str
@ -30,11 +28,24 @@ module HTTPX
attr_reader :limit
if RUBY_VERSION >= "3.4.0"
def initialize(limit)
@buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)
@limit = limit
end
def <<(chunk)
@buffer.append_as_bytes(chunk)
end
else
def initialize(limit)
@buffer = "".b
@limit = limit
end
def_delegator :@buffer, :<<
end
def full?
@buffer.bytesize >= @limit
end

View File

@ -20,7 +20,7 @@ module HTTPX
end
def callbacks_for?(type)
@callbacks.key?(type) && @callbacks[type].any?
@callbacks && @callbacks.key?(type) && @callbacks[type].any?
end
protected

View File

@ -50,7 +50,11 @@ module HTTPX
protected :sibling
def initialize(uri, options)
@current_session = @current_selector = @sibling = @coalesced_connection = nil
@current_session = @current_selector =
@parser = @sibling = @coalesced_connection =
@io = @ssl_session = @timeout =
@connected_at = @response_received_at = nil
@exhausted = @cloned = @main_sibling = false
@options = Options.new(options)
@ -61,6 +65,8 @@ module HTTPX
@read_buffer = Buffer.new(@options.buffer_size)
@write_buffer = Buffer.new(@options.buffer_size)
@pending = []
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
on(:error, &method(:on_error))
if @options.io
@ -98,9 +104,6 @@ module HTTPX
build_altsvc_connection(alt_origin, origin, alt_params)
end
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
self.addresses = @options.addresses if @options.addresses
end
@ -152,6 +155,14 @@ module HTTPX
) && @options == connection.options
end
# coalesces +self+ into +connection+.
def coalesce!(connection)
@coalesced_connection = connection
close_sibling
connection.merge(self)
end
# coalescable connections need to be mergeable!
# but internally, #mergeable? is called before #coalescable?
def coalescable?(connection)
@ -251,6 +262,7 @@ module HTTPX
end
nil
rescue StandardError => e
@write_buffer.clear
emit(:error, e)
raise e
end
@ -262,7 +274,13 @@ module HTTPX
end
def terminate
@connected_at = nil if @state == :closed
case @state
when :idle
purge_after_closed
emit(:terminate)
when :closed
@connected_at = nil
end
close
end
@ -296,6 +314,7 @@ module HTTPX
@pending << request
transition(:active) if @state == :inactive
parser.ping
request.ping!
return
end
@ -340,13 +359,6 @@ module HTTPX
on_error(error)
end
def coalesced_connection=(connection)
@coalesced_connection = connection
close_sibling
connection.merge(self)
end
def sibling=(connection)
@sibling = connection
@ -360,8 +372,6 @@ module HTTPX
end
def handle_connect_error(error)
@connect_error = error
return handle_error(error) unless @sibling && @sibling.connecting?
@sibling.merge(self)
@ -377,6 +387,16 @@ module HTTPX
@current_selector = nil
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@origin=#{@origin} " \
"@state=#{@state} " \
"@pending=#{@pending.size} " \
"@io=#{@io}>"
end
# :nocov:
private
def connect
@ -527,17 +547,21 @@ module HTTPX
def send_request_to_parser(request)
@inflight += 1
request.peer_address = @io.ip
parser.send(request)
set_request_timeouts(request)
parser.send(request)
return unless @state == :inactive
transition(:active)
# mark request as ping, as this inactive connection may have been
# closed by the server, and we don't want that to influence retry
# bookkeeping.
request.ping!
end
def build_parser(protocol = @io.protocol)
parser = self.class.parser_type(protocol).new(@write_buffer, @options)
parser = parser_type(protocol).new(@write_buffer, @options)
set_parser_callbacks(parser)
parser
end
@ -549,6 +573,7 @@ module HTTPX
end
@response_received_at = Utils.now
@inflight -= 1
response.finish!
request.emit(:response, response)
end
parser.on(:altsvc) do |alt_origin, origin, alt_params|
@ -630,6 +655,7 @@ module HTTPX
next unless request.active_timeouts.empty?
end
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
@ -670,7 +696,7 @@ module HTTPX
when :idle
@timeout = @current_timeout = @options.timeout[:connect_timeout]
@connected_at = nil
@connected_at = @response_received_at = nil
when :open
return if @state == :closed
@ -723,6 +749,7 @@ module HTTPX
# activate
@current_session.select_connection(self, @current_selector)
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
end
@ -843,6 +870,7 @@ module HTTPX
return unless request
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
@ -919,7 +947,6 @@ module HTTPX
end
end
class << self
def parser_type(protocol)
case protocol
when "h2" then HTTP2
@ -929,5 +956,4 @@ module HTTPX
end
end
end
end
end

View File

@ -93,7 +93,7 @@ module HTTPX
concurrent_requests_limit = [@max_concurrent_requests, requests_limit].min
@requests.each_with_index do |request, idx|
break if idx >= concurrent_requests_limit
next if request.state == :done
next unless request.can_buffer?
handle(request)
end
@ -119,7 +119,7 @@ module HTTPX
@parser.http_version.join("."),
headers)
log(color: :yellow) { "-> HEADLINE: #{response.status} HTTP/#{@parser.http_version.join(".")}" }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{v}" }.join("\n") }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v)}" }.join("\n") }
@request.response = response
on_complete if response.finished?
@ -131,7 +131,7 @@ module HTTPX
response = @request.response
log(level: 2) { "trailer headers received" }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{v.join(", ")}" }.join("\n") }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v.join(", "))}" }.join("\n") }
response.merge_headers(h)
end
@ -141,7 +141,7 @@ module HTTPX
return unless request
log(color: :green) { "-> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "-> #{chunk.inspect}" }
log(level: 2, color: :green) { "-> #{log_redact(chunk.inspect)}" }
response = request.response
response << chunk
@ -171,7 +171,6 @@ module HTTPX
@request = nil
@requests.shift
response = request.response
response.finish! unless response.is_a?(ErrorResponse)
emit(:response, request, response)
if @parser.upgrade?
@ -362,7 +361,7 @@ module HTTPX
while (chunk = request.drain_body)
log(color: :green) { "<- DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "<- #{chunk.inspect}" }
log(level: 2, color: :green) { "<- #{log_redact(chunk.inspect)}" }
@buffer << chunk
throw(:buffer_full, request) if @buffer.full?
end
@ -382,9 +381,9 @@ module HTTPX
def join_headers2(headers)
headers.each do |field, value|
buffer = "#{capitalized(field)}: #{value}#{CRLF}"
log(color: :yellow) { "<- HEADER: #{buffer.chomp}" }
@buffer << buffer
field = capitalized(field)
log(color: :yellow) { "<- HEADER: #{[field, log_redact(value)].join(": ")}" }
@buffer << "#{field}: #{value}#{CRLF}"
end
end

View File

@ -11,8 +11,8 @@ module HTTPX
MAX_CONCURRENT_REQUESTS = ::HTTP2::DEFAULT_MAX_CONCURRENT_STREAMS
class Error < Error
def initialize(id, code)
super("stream #{id} closed with error: #{code}")
def initialize(id, error)
super("stream #{id} closed with error: #{error}")
end
end
@ -58,6 +58,8 @@ module HTTPX
if @connection.state == :closed
return unless @handshake_completed
return if @buffer.empty?
return :w
end
@ -98,12 +100,6 @@ module HTTPX
@connection << data
end
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send(request, head = false)
unless can_buffer_more_requests?
head ? @pending.unshift(request) : @pending << request
@ -124,7 +120,7 @@ module HTTPX
def consume
@streams.each do |request, stream|
next if request.state == :done
next unless request.can_buffer?
handle(request, stream)
end
@ -152,13 +148,19 @@ module HTTPX
def ping
ping = SecureRandom.gen_random(8)
@connection.ping(ping)
@connection.ping(ping.dup)
ensure
@pings << ping
end
private
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send_pending
while (request = @pending.shift)
break unless send(request, true)
@ -224,12 +226,12 @@ module HTTPX
extra_headers = set_protocol_headers(request)
if request.headers.key?("host")
log { "forbidden \"host\" header found (#{request.headers["host"]}), will use it as authority..." }
log { "forbidden \"host\" header found (#{log_redact(request.headers["host"])}), will use it as authority..." }
extra_headers[":authority"] = request.headers["host"]
end
log(level: 1, color: :yellow) do
request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
"\n#{request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")}"
end
stream.headers(request.headers.each(extra_headers), end_stream: request.body.empty?)
end
@ -241,7 +243,7 @@ module HTTPX
end
log(level: 1, color: :yellow) do
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
stream.headers(request.trailers.each, end_stream: true)
end
@ -252,13 +254,13 @@ module HTTPX
chunk = @drains.delete(request) || request.drain_body
while chunk
next_chunk = request.drain_body
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{chunk.inspect}" }
stream.data(chunk, end_stream: !(next_chunk || request.trailers? || request.callbacks_for?(:trailers)))
send_chunk(request, stream, chunk, next_chunk)
if next_chunk && (@buffer.full? || request.body.unbounded_body?)
@drains[request] = next_chunk
throw(:buffer_full)
end
chunk = next_chunk
end
@ -267,6 +269,16 @@ module HTTPX
on_stream_refuse(stream, request, error)
end
def send_chunk(request, stream, chunk, next_chunk)
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{log_redact(chunk.inspect)}" }
stream.data(chunk, end_stream: end_stream?(request, next_chunk))
end
def end_stream?(request, next_chunk)
!(next_chunk || request.trailers? || request.callbacks_for?(:trailers))
end
######
# HTTP/2 Callbacks
######
@ -280,7 +292,7 @@ module HTTPX
end
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
_, status = h.shift
headers = request.options.headers_class.new(h)
@ -293,14 +305,14 @@ module HTTPX
def on_stream_trailers(stream, response, h)
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
response.merge_headers(h)
end
def on_stream_data(stream, request, data)
log(level: 1, color: :green) { "#{stream.id}: <- DATA: #{data.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: <- #{data.inspect}" }
log(level: 2, color: :green) { "#{stream.id}: <- #{log_redact(data.inspect)}" }
request.response << data
end
@ -388,8 +400,15 @@ module HTTPX
def on_frame_sent(frame)
log(level: 2) { "#{frame[:stream]}: frame was sent!" }
log(level: 2, color: :blue) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
@ -397,15 +416,22 @@ module HTTPX
def on_frame_received(frame)
log(level: 2) { "#{frame[:stream]}: frame was received!" }
log(level: 2, color: :magenta) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
def on_altsvc(origin, frame)
log(level: 2) { "#{frame[:stream]}: altsvc frame was received" }
log(level: 2) { "#{frame[:stream]}: #{frame.inspect}" }
log(level: 2) { "#{frame[:stream]}: #{log_redact(frame.inspect)}" }
alt_origin = URI.parse("#{frame[:proto]}://#{frame[:host]}:#{frame[:port]}")
params = { "ma" => frame[:max_age] }
emit(:altsvc, origin, alt_origin, origin, params)

View File

@ -29,17 +29,8 @@ module HTTPX
end
end
# Raise when it can't acquire a connection for a given origin.
class PoolTimeoutError < TimeoutError
attr_reader :origin
# initializes the +origin+ it refers to, and the
# +timeout+ causing the error.
def initialize(origin, timeout)
@origin = origin
super(timeout, "Timed out after #{timeout} seconds while waiting for a connection to #{origin}")
end
end
# Raise when it can't acquire a connection from the pool.
class PoolTimeoutError < TimeoutError; end
# Error raised when there was a timeout establishing the connection to a server.
# This may be raised due to timeouts during TCP and TLS (when applicable) connection

View File

@ -11,20 +11,32 @@ module HTTPX
end
def initialize(headers = nil)
if headers.nil? || headers.empty?
@headers = headers.to_h
return
end
@headers = {}
return unless headers
headers.each do |field, value|
array_value(value).each do |v|
add(downcased(field), v)
field = downcased(field)
value = array_value(value)
current = @headers[field]
if current.nil?
@headers[field] = value
else
current.concat(value)
end
end
end
# cloned initialization
def initialize_clone(orig)
def initialize_clone(orig, **kwargs)
super
@headers = orig.instance_variable_get(:@headers).clone
@headers = orig.instance_variable_get(:@headers).clone(**kwargs)
end
# dupped initialization
@ -39,17 +51,6 @@ module HTTPX
super
end
def same_headers?(headers)
@headers.empty? || begin
headers.each do |k, v|
next unless key?(k)
return false unless v == self[k]
end
true
end
end
# merges headers with another header-quack.
# the merge rule is, if the header already exists,
# ignore what the +other+ headers has. Otherwise, set
@ -119,6 +120,10 @@ module HTTPX
other == to_hash
end
def empty?
@headers.empty?
end
# the headers store in Hash format
def to_hash
Hash[to_a]
@ -137,7 +142,8 @@ module HTTPX
# :nocov:
def inspect
to_hash.inspect
"#<#{self.class}:#{object_id} " \
"#{to_hash.inspect}>"
end
# :nocov:
@ -160,12 +166,7 @@ module HTTPX
private
def array_value(value)
case value
when Array
value.map { |val| String(val).strip }
else
[String(value).strip]
end
Array(value)
end
def downcased(field)

View File

@ -9,7 +9,8 @@ module HTTPX
# rubocop:disable Style/MutableConstant
TLS_OPTIONS = { alpn_protocols: %w[h2 http/1.1].freeze }
# https://github.com/jruby/jruby-openssl/issues/284
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby"
# TODO: remove when dropping support for jruby-openssl < 0.15.4
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby" && JOpenSSL::VERSION < "0.15.4"
# rubocop:enable Style/MutableConstant
TLS_OPTIONS.freeze
@ -92,9 +93,12 @@ module HTTPX
end
def connect
return if @state == :negotiated
unless @state == :connected
super
return if @state == :negotiated ||
@state != :connected
return unless @state == :connected
end
unless @io.is_a?(OpenSSL::SSL::SSLSocket)
if (hostname_is_ip = (@ip == @sni_hostname))

View File

@ -75,9 +75,18 @@ module HTTPX
@io = build_socket
end
try_connect
rescue Errno::EHOSTUNREACH,
Errno::ENETUNREACH => e
raise e if @ip_index <= 0
log { "failed connecting to #{@ip} (#{e.message}), evict from cache and trying next..." }
Resolver.cached_lookup_evict(@hostname, @ip)
@ip_index -= 1
@io = build_socket
retry
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError,
IOError => e
raise e if @ip_index <= 0
@ -167,7 +176,12 @@ module HTTPX
# :nocov:
def inspect
"#<#{self.class}: #{@ip}:#{@port} (state: #{@state})>"
"#<#{self.class}:#{object_id} " \
"#{@ip}:#{@port} " \
"@state=#{@state} " \
"@hostname=#{@hostname} " \
"@addresses=#{@addresses} " \
"@state=#{@state}>"
end
# :nocov:
@ -195,12 +209,9 @@ module HTTPX
end
def log_transition_state(nextstate)
case nextstate
when :connected
"Connected to #{host} (##{@io.fileno})"
else
"#{host} #{@state} -> #{nextstate}"
end
label = host
label = "#{label}(##{@io.fileno})" if nextstate == :connected
"#{label} #{@state} -> #{nextstate}"
end
end
end

View File

@ -48,7 +48,7 @@ module HTTPX
transition(:connected)
rescue Errno::EINPROGRESS,
Errno::EALREADY,
::IO::WaitReadable
IO::WaitReadable
end
def expired?
@ -57,7 +57,7 @@ module HTTPX
# :nocov:
def inspect
"#<#{self.class}(path: #{@path}): (state: #{@state})>"
"#<#{self.class}:#{object_id} @path=#{@path}) @state=#{@state})>"
end
# :nocov:

View File

@ -15,7 +15,13 @@ module HTTPX
USE_DEBUG_LOG = ENV.key?("HTTPX_DEBUG")
def log(level: @options.debug_level, color: nil, debug_level: @options.debug_level, debug: @options.debug, &msg)
def log(
level: @options.debug_level,
color: nil,
debug_level: @options.debug_level,
debug: @options.debug,
&msg
)
return unless debug_level >= level
debug_stream = debug || ($stderr if USE_DEBUG_LOG)
@ -28,7 +34,10 @@ module HTTPX
klass = klass.superclass
end
message = +"(pid:#{Process.pid} tid:#{Thread.current.object_id}, self:#{class_name}##{object_id}) "
message = +"(pid:#{Process.pid}, " \
"tid:#{Thread.current.object_id}, " \
"fid:#{Fiber.current.object_id}, " \
"self:#{class_name}##{object_id}) "
message << msg.call << "\n"
message = "\e[#{COLORS[color]}m#{message}\e[0m" if color && debug_stream.respond_to?(:isatty) && debug_stream.isatty
debug_stream << message
@ -37,5 +46,11 @@ module HTTPX
def log_exception(ex, level: @options.debug_level, color: nil, debug_level: @options.debug_level, debug: @options.debug)
log(level: level, color: color, debug_level: debug_level, debug: debug) { ex.full_message }
end
def log_redact(text, should_redact = @options.debug_redact)
return text.to_s unless should_redact
"[REDACTED]"
end
end
end

View File

@ -18,7 +18,7 @@ module HTTPX
# https://github.com/ruby/resolv/blob/095f1c003f6073730500f02acbdbc55f83d70987/lib/resolv.rb#L408
ip_address_families = begin
list = Socket.ip_address_list
if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? && !a.ipv6_unique_local? }
if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? }
[Socket::AF_INET6, Socket::AF_INET]
else
[Socket::AF_INET]
@ -27,10 +27,19 @@ module HTTPX
[Socket::AF_INET]
end.freeze
SET_TEMPORARY_NAME = ->(mod, pl = nil) do
if mod.respond_to?(:set_temporary_name) # ruby 3.4 only
name = mod.name || "#{mod.superclass.name}(plugin)"
name = "#{name}/#{pl}" if pl
mod.set_temporary_name(name)
end
end
DEFAULT_OPTIONS = {
:max_requests => Float::INFINITY,
:debug => nil,
:debug_level => (ENV["HTTPX_DEBUG"] || 1).to_i,
:debug_redact => ENV.key?("HTTPX_DEBUG_REDACT"),
:ssl => EMPTY_HASH,
:http2_settings => { settings_enable_push: 0 }.freeze,
:fallback_protocol => "http/1.1",
@ -47,18 +56,18 @@ module HTTPX
write_timeout: WRITE_TIMEOUT,
request_timeout: REQUEST_TIMEOUT,
},
:headers_class => Class.new(Headers),
:headers_class => Class.new(Headers, &SET_TEMPORARY_NAME),
:headers => {},
:window_size => WINDOW_SIZE,
:buffer_size => BUFFER_SIZE,
:body_threshold_size => MAX_BODY_THRESHOLD_SIZE,
:request_class => Class.new(Request),
:response_class => Class.new(Response),
:request_body_class => Class.new(Request::Body),
:response_body_class => Class.new(Response::Body),
:pool_class => Class.new(Pool),
:connection_class => Class.new(Connection),
:options_class => Class.new(self),
:request_class => Class.new(Request, &SET_TEMPORARY_NAME),
:response_class => Class.new(Response, &SET_TEMPORARY_NAME),
:request_body_class => Class.new(Request::Body, &SET_TEMPORARY_NAME),
:response_body_class => Class.new(Response::Body, &SET_TEMPORARY_NAME),
:pool_class => Class.new(Pool, &SET_TEMPORARY_NAME),
:connection_class => Class.new(Connection, &SET_TEMPORARY_NAME),
:options_class => Class.new(self, &SET_TEMPORARY_NAME),
:transport => nil,
:addresses => nil,
:persistent => false,
@ -66,6 +75,7 @@ module HTTPX
:resolver_options => { cache: true }.freeze,
:pool_options => EMPTY_HASH,
:ip_families => ip_address_families,
:close_on_fork => false,
}.freeze
class << self
@ -92,7 +102,8 @@ module HTTPX
#
# :debug :: an object which log messages are written to (must respond to <tt><<</tt>)
# :debug_level :: the log level of messages (can be 1, 2, or 3).
# :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::IO::SSL)
# :debug_redact :: whether header/body payload should be redacted (defaults to <tt>false</tt>).
# :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::SSL)
# :http2_settings :: a hash of options to be passed to a HTTP2::Connection (ex: <tt>{ max_concurrent_streams: 2 }</tt>)
# :fallback_protocol :: version of HTTP protocol to use by default in the absence of protocol negotiation
# like ALPN (defaults to <tt>"http/1.1"</tt>)
@ -128,21 +139,37 @@ module HTTPX
# :base_path :: path to prefix given relative paths with (ex: "/v2")
# :max_concurrent_requests :: max number of requests which can be set concurrently
# :max_requests :: max number of requests which can be made on socket before it reconnects.
# :close_on_fork :: whether the session automatically closes when the process is fork (defaults to <tt>false</tt>).
# it only works if the session is persistent (and ruby 3.1 or higher is used).
#
# This list of options are enhanced with each loaded plugin, see the plugin docs for details.
def initialize(options = {})
do_initialize(options)
defaults = DEFAULT_OPTIONS.merge(options)
defaults.each do |k, v|
next if v.nil?
option_method_name = :"option_#{k}"
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
end
freeze
end
def freeze
super
@origin.freeze
@base_path.freeze
@timeout.freeze
@headers.freeze
@addresses.freeze
@supported_compression_formats.freeze
@ssl.freeze
@http2_settings.freeze
@pool_options.freeze
@resolver_options.freeze
@ip_families.freeze
super
end
def option_origin(value)
@ -165,41 +192,6 @@ module HTTPX
Array(value).map(&:to_s)
end
def option_max_concurrent_requests(value)
raise TypeError, ":max_concurrent_requests must be positive" unless value.positive?
value
end
def option_max_requests(value)
raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
def option_window_size(value)
value = Integer(value)
raise TypeError, ":window_size must be positive" unless value.positive?
value
end
def option_buffer_size(value)
value = Integer(value)
raise TypeError, ":buffer_size must be positive" unless value.positive?
value
end
def option_body_threshold_size(value)
bytes = Integer(value)
raise TypeError, ":body_threshold_size must be positive" unless bytes.positive?
bytes
end
def option_transport(value)
transport = value.to_s
raise TypeError, "#{transport} is an unsupported transport type" unless %w[unix].include?(transport)
@ -215,17 +207,42 @@ module HTTPX
Array(value)
end
# number options
%i[
max_concurrent_requests max_requests window_size buffer_size
body_threshold_size debug_level
].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into an Integer before setting the +#{option}+ option.
def option_#{option}(value) # def option_max_requests(v)
value = Integer(value) unless value.infinite?
raise TypeError, ":#{option} must be positive" unless value.positive? # raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
OUT
end
# hashable options
%i[ssl http2_settings resolver_options pool_options].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into a Hash before setting the +#{option}+ option.
def option_#{option}(value) # def option_ssl(v)
Hash[value]
end
OUT
end
%i[
ssl http2_settings
request_class response_class headers_class request_body_class
response_body_class connection_class options_class
pool_class pool_options
io fallback_protocol debug debug_level resolver_class resolver_options
pool_class
io fallback_protocol debug debug_redact resolver_class
compress_request_body decompress_response_body
persistent
persistent close_on_fork
].each do |method_name|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# sets +v+ as the value of #{method_name}
# sets +v+ as the value of the +#{method_name}+ option
def option_#{method_name}(v); v; end # def option_smth(v); v; end
OUT
end
@ -296,35 +313,42 @@ module HTTPX
def extend_with_plugin_classes(pl)
if defined?(pl::RequestMethods) || defined?(pl::RequestClassMethods)
@request_class = @request_class.dup
SET_TEMPORARY_NAME[@request_class, pl]
@request_class.__send__(:include, pl::RequestMethods) if defined?(pl::RequestMethods)
@request_class.extend(pl::RequestClassMethods) if defined?(pl::RequestClassMethods)
end
if defined?(pl::ResponseMethods) || defined?(pl::ResponseClassMethods)
@response_class = @response_class.dup
SET_TEMPORARY_NAME[@response_class, pl]
@response_class.__send__(:include, pl::ResponseMethods) if defined?(pl::ResponseMethods)
@response_class.extend(pl::ResponseClassMethods) if defined?(pl::ResponseClassMethods)
end
if defined?(pl::HeadersMethods) || defined?(pl::HeadersClassMethods)
@headers_class = @headers_class.dup
SET_TEMPORARY_NAME[@headers_class, pl]
@headers_class.__send__(:include, pl::HeadersMethods) if defined?(pl::HeadersMethods)
@headers_class.extend(pl::HeadersClassMethods) if defined?(pl::HeadersClassMethods)
end
if defined?(pl::RequestBodyMethods) || defined?(pl::RequestBodyClassMethods)
@request_body_class = @request_body_class.dup
SET_TEMPORARY_NAME[@request_body_class, pl]
@request_body_class.__send__(:include, pl::RequestBodyMethods) if defined?(pl::RequestBodyMethods)
@request_body_class.extend(pl::RequestBodyClassMethods) if defined?(pl::RequestBodyClassMethods)
end
if defined?(pl::ResponseBodyMethods) || defined?(pl::ResponseBodyClassMethods)
@response_body_class = @response_body_class.dup
SET_TEMPORARY_NAME[@response_body_class, pl]
@response_body_class.__send__(:include, pl::ResponseBodyMethods) if defined?(pl::ResponseBodyMethods)
@response_body_class.extend(pl::ResponseBodyClassMethods) if defined?(pl::ResponseBodyClassMethods)
end
if defined?(pl::PoolMethods)
@pool_class = @pool_class.dup
SET_TEMPORARY_NAME[@pool_class, pl]
@pool_class.__send__(:include, pl::PoolMethods)
end
if defined?(pl::ConnectionMethods)
@connection_class = @connection_class.dup
SET_TEMPORARY_NAME[@connection_class, pl]
@connection_class.__send__(:include, pl::ConnectionMethods)
end
return unless defined?(pl::OptionsMethods)
@ -335,19 +359,6 @@ module HTTPX
private
def do_initialize(options = {})
defaults = DEFAULT_OPTIONS.merge(options)
defaults.each do |k, v|
next if v.nil?
option_method_name = :"option_#{k}"
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
end
end
def access_option(obj, k, ivar_map)
case obj
when Hash

View File
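The generated numeric-option validators above can be sketched in isolation. The class name and option subset below are illustrative stand-ins, not httpx's actual Options class:

```ruby
# Simplified sketch of the numeric-option pattern: each listed option gets a
# generated option_<name> method that coerces its value to Integer (leaving
# infinite floats untouched) and rejects non-positive values.
class NumericOptionsSketch
  %i[max_requests window_size].each do |option|
    class_eval(<<-OUT, __FILE__, __LINE__ + 1)
      def option_#{option}(value)
        value = Integer(value) unless value.is_a?(Float) && value.infinite?
        raise TypeError, ":#{option} must be positive" unless value.positive?
        value
      end
    OUT
  end
end

sketch = NumericOptionsSketch.new
sketch.option_max_requests("100")          # => 100 (string coerced)
sketch.option_window_size(Float::INFINITY) # => Infinity (left as-is)
begin
  sketch.option_max_requests(0)
rescue TypeError => e
  e.message # => ":max_requests must be positive"
end
```

Passing `__FILE__, __LINE__ + 1` to `class_eval` keeps backtraces pointing at the heredoc's real source location, which is why the diff above uses the same idiom.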

@ -23,7 +23,7 @@ module HTTPX
def reset!
@state = :idle
@headers.clear
@headers = {}
@content_length = nil
@_has_trailers = nil
end

View File

@ -158,6 +158,7 @@ module HTTPX
def load_dependencies(*)
require "set"
require "digest/sha2"
require "cgi/escape"
end
def configure(klass)

View File

@ -8,6 +8,13 @@ module HTTPX
# https://gitlab.com/os85/httpx/-/wikis/Events
#
module Callbacks
CALLBACKS = %i[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].freeze
# connection closed user-space errors happen after errors can be surfaced to requests,
# so they need to pierce through the scheduler, which is only possible by simulating an
# interrupt.
@ -16,12 +23,7 @@ module HTTPX
module InstanceMethods
include HTTPX::Callbacks
%i[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].each do |meth|
CALLBACKS.each do |meth|
class_eval(<<-MOD, __FILE__, __LINE__ + 1)
def on_#{meth}(&blk) # def on_connection_opened(&blk)
on(:#{meth}, &blk) # on(:connection_opened, &blk)
@ -32,6 +34,17 @@ module HTTPX
private
def branch(options, &blk)
super(options).tap do |sess|
CALLBACKS.each do |cb|
next unless callbacks_for?(cb)
sess.callbacks(cb).concat(callbacks(cb))
end
sess.wrap(&blk) if blk
end
end
def do_init_connection(connection, selector)
super
connection.on(:open) do

View File
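The CALLBACKS-driven metaprogramming above (a frozen event list generating `on_<event>` helpers, reused when branching sessions) can be illustrated with a self-contained sketch; the module and event names here are illustrative, not httpx's internals:

```ruby
# Sketch: a frozen list of event names drives generation of on_<event>
# helpers, each delegating to a generic #on registration method.
module CallbacksSketch
  CALLBACKS = %i[connection_opened connection_closed].freeze

  def callbacks(type)
    (@callbacks ||= Hash.new { |h, k| h[k] = [] })[type]
  end

  def on(type, &blk)
    callbacks(type) << blk
    self
  end

  def emit(type, *args)
    callbacks(type).each { |cb| cb.call(*args) }
  end

  CALLBACKS.each do |meth|
    class_eval(<<-MOD, __FILE__, __LINE__ + 1)
      def on_#{meth}(&blk)  # def on_connection_opened(&blk)
        on(:#{meth}, &blk)  #   on(:connection_opened, &blk)
      end                   # end
    MOD
  end
end

client = Object.new.extend(CallbacksSketch)
events = []
client.on_connection_opened { events << :opened }
client.emit(:connection_opened)
events # => [:opened]
```

The `branch` override in the diff iterates the same constant to copy registered callbacks onto the branched session, which is why the list was hoisted out of `InstanceMethods`.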

@ -70,10 +70,11 @@ module HTTPX
short_circuit_responses
end
def on_response(request, response)
emit(:circuit_open, request) if try_circuit_open(request, response)
def set_request_callbacks(request)
super
request.on(:response) do |response|
emit(:circuit_open, request) if try_circuit_open(request, response)
end
end
def try_circuit_open(request, response)

View File

@ -48,15 +48,15 @@ module HTTPX
private
def on_response(_request, response)
if response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])
def set_request_callbacks(request)
super
request.on(:response) do |response|
next unless response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])
log { "cookies: set-cookie is over #{Cookie::MAX_LENGTH}" } if set_cookie.bytesize > Cookie::MAX_LENGTH
@options.cookies.parse(set_cookie)
end
super
end
end

View File

@ -59,8 +59,6 @@ module HTTPX
return @cookies.each(&blk) unless uri
uri = URI(uri)
now = Time.now
tpath = uri.path

View File

@ -83,7 +83,7 @@ module HTTPX
scanner.skip(RE_WSP)
name, value = scan_name_value(scanner, true)
value = nil if name.empty?
value = nil if name && name.empty?
attrs = {}
@ -98,15 +98,18 @@ module HTTPX
aname, avalue = scan_name_value(scanner, true)
next if aname.empty? || value.nil?
next if (aname.nil? || aname.empty?) || value.nil?
aname.downcase!
case aname
when "expires"
next unless avalue
# RFC 6265 5.2.1
(avalue &&= Time.parse(avalue)) || next
(avalue = Time.parse(avalue)) || next
when "max-age"
next unless avalue
# RFC 6265 5.2.2
next unless /\A-?\d+\z/.match?(avalue)
@ -119,7 +122,7 @@ module HTTPX
# RFC 6265 5.2.4
# A relative path must be ignored rather than normalizing it
# to "/".
next unless avalue.start_with?("/")
next unless avalue && avalue.start_with?("/")
when "secure", "httponly"
# RFC 6265 5.2.5, 5.2.6
avalue = true

View File

@ -149,9 +149,11 @@ module HTTPX
retry_start = Utils.now
log { "redirecting after #{redirect_after} secs..." }
selector.after(redirect_after) do
if request.response
if (response = request.response)
response.finish!
retry_request.response = response
# request has terminated abruptly meanwhile
retry_request.emit(:response, request.response)
retry_request.emit(:response, response)
else
log { "redirecting (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
send_request(retry_request, selector, options)

View File

@ -15,7 +15,7 @@ module HTTPX
end
def inspect
"#GRPC::Call(#{grpc_response})"
"#{self.class}(#{grpc_response})"
end
def to_s

View File

@ -42,6 +42,12 @@ module HTTPX
end
end
module RequestMethods
def valid_h2c_verb?
VALID_H2C_VERBS.include?(@verb)
end
end
module ConnectionMethods
using URIExtensions
@ -53,7 +59,7 @@ module HTTPX
def send(request)
return super if @h2c_handshake
return super unless VALID_H2C_VERBS.include?(request.verb) && request.scheme == "http"
return super unless request.valid_h2c_verb? && request.scheme == "http"
return super if @upgrade_protocol == "h2c"

View File

@ -50,15 +50,6 @@ module HTTPX
end
end
module NativeResolverMethods
def transition(nextstate)
state = @state
val = super
meter_elapsed_time("Resolver::Native: #{state} -> #{nextstate}")
val
end
end
module InstanceMethods
def self.included(klass)
klass.prepend TrackTimeMethods
@ -69,13 +60,6 @@ module HTTPX
meter_elapsed_time("Session: initializing...")
super
meter_elapsed_time("Session: initialized!!!")
resolver_type = @options.resolver_class
resolver_type = Resolver.resolver_for(resolver_type)
return unless resolver_type <= Resolver::Native
resolver_type.prepend TrackTimeMethods
resolver_type.prepend NativeResolverMethods
@options = @options.merge(resolver_class: resolver_type)
end
def close(*)
@ -104,33 +88,6 @@ module HTTPX
end
end
module RequestMethods
def self.included(klass)
klass.prepend Loggable
klass.prepend TrackTimeMethods
super
end
def transition(nextstate)
prev_state = @state
super
meter_elapsed_time("Request##{object_id}[#{@verb} #{@uri}: #{prev_state}] -> #{@state}") if prev_state != @state
end
end
module ConnectionMethods
def self.included(klass)
klass.prepend TrackTimeMethods
super
end
def handle_transition(nextstate)
state = @state
super
meter_elapsed_time("Connection##{object_id}[#{@origin}]: #{state} -> #{nextstate}") if nextstate == @state
end
end
module PoolMethods
def self.included(klass)
klass.prepend Loggable
@ -138,12 +95,6 @@ module HTTPX
super
end
def checkout_connection(request_uri, options)
super.tap do |connection|
meter_elapsed_time("Pool##{object_id}: checked out connection for Connection##{connection.object_id}[#{connection.origin}]}")
end
end
def checkin_connection(connection)
super.tap do
meter_elapsed_time("Pool##{object_id}: checked in connection for Connection##{connection.object_id}[#{connection.origin}]}")

View File

@ -24,7 +24,7 @@ module HTTPX
else
1
end
klass.plugin(:retries, max_retries: max_retries, retry_change_requests: true)
klass.plugin(:retries, max_retries: max_retries)
end
def self.extra_options(options)
@ -34,6 +34,27 @@ module HTTPX
module InstanceMethods
private
def repeatable_request?(request, _)
super || begin
response = request.response
return false unless response && response.is_a?(ErrorResponse)
error = response.error
Retries::RECONNECTABLE_ERRORS.any? { |klass| error.is_a?(klass) }
end
end
def retryable_error?(ex)
super &&
# under the persistent plugin rules, requests are only retried for connection related errors,
# which do not include request timeout related errors. This only gets overridden if the end user
# manually changed +:max_retries+ to something else, which means it is aware of the
# consequences.
(!ex.is_a?(RequestTimeoutError) || @options.max_retries != 1)
end
def get_current_selector
super(&nil) || begin
return unless block_given?

View File

@ -1,7 +1,7 @@
# frozen_string_literal: true
module HTTPX
class HTTPProxyError < ConnectionError; end
class ProxyError < ConnectionError; end
module Plugins
#
@ -15,7 +15,8 @@ module HTTPX
# https://gitlab.com/os85/httpx/wikis/Proxy
#
module Proxy
Error = HTTPProxyError
class ProxyConnectionError < ProxyError; end
PROXY_ERRORS = [TimeoutError, IOError, SystemCallError, Error].freeze
class << self
@ -28,6 +29,12 @@ module HTTPX
def extra_options(options)
options.merge(supported_proxy_protocols: [])
end
def subplugins
{
retries: ProxyRetries,
}
end
end
class Parameters
@ -160,9 +167,9 @@ module HTTPX
next_proxy = proxy.uri
raise Error, "Failed to connect to proxy" unless next_proxy
raise ProxyError, "Failed to connect to proxy" unless next_proxy
raise Error,
raise ProxyError,
"#{next_proxy.scheme}: unsupported proxy protocol" unless options.supported_proxy_protocols.include?(next_proxy.scheme)
if (no_proxy = proxy.no_proxy)
@ -179,6 +186,9 @@ module HTTPX
private
def fetch_response(request, selector, options)
response = request.response # in case it goes wrong later
begin
response = super
if response.is_a?(ErrorResponse) && proxy_error?(request, response, options)
@ -193,6 +203,11 @@ module HTTPX
return
end
response
rescue ProxyError
# may happen if coupled with retries, and there are no more proxies to try, in which case
# it'll end up here
response
end
end
def proxy_error?(_request, response, options)
@ -211,7 +226,7 @@ module HTTPX
proxy_uri = URI(options.proxy.uri)
error.message.end_with?(proxy_uri.to_s)
when *PROXY_ERRORS
when ProxyConnectionError
# timeout errors connecting to proxy
true
else
@ -251,6 +266,14 @@ module HTTPX
when :connecting
consume
end
rescue *PROXY_ERRORS => e
if connecting?
error = ProxyConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
raise error
end
raise e
end
def reset
@ -292,13 +315,29 @@ module HTTPX
end
super
end
def purge_after_closed
super
@io = @io.proxy_io if @io.respond_to?(:proxy_io)
end
end
module ProxyRetries
module InstanceMethods
def retryable_error?(ex)
super || ex.is_a?(ProxyConnectionError)
end
end
end
end
register_plugin :proxy, Proxy
end
class ProxySSL < SSL
attr_reader :proxy_io
def initialize(tcp, request_uri, options)
@proxy_io = tcp
@io = tcp.to_io
super(request_uri, tcp.addresses, options)
@hostname = request_uri.host

View File
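The `rescue *PROXY_ERRORS` added to the connecting path above wraps low-level failures into a `ProxyConnectionError` while preserving the backtrace, so the retries subplugin can recognize them. A minimal standalone sketch of that wrapping pattern (names are illustrative):

```ruby
# Sketch: while still connecting, re-raise low-level errors as a
# domain-specific error, keeping the original backtrace so the failure
# still points at the real call site.
class ProxyConnectionErrorSketch < StandardError; end

def guarded_consume(connecting)
  yield
rescue IOError, SystemCallError => e
  raise e unless connecting

  error = ProxyConnectionErrorSketch.new(e.message)
  error.set_backtrace(e.backtrace)
  raise error
end

begin
  guarded_consume(true) { raise Errno::ECONNREFUSED }
rescue ProxyConnectionErrorSketch => e
  e.message # carries the original errno message
end
```

Outside the connecting state the original exception is re-raised untouched, mirroring the `raise e` fallthrough in the diff.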

@ -60,7 +60,7 @@ module HTTPX
return unless @io.connected?
@parser || begin
@parser = self.class.parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
@parser = parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
parser = @parser
parser.extend(ProxyParser)
parser.on(:response, &method(:__http_on_connect))
@ -138,6 +138,8 @@ module HTTPX
else
pending = @pending + @parser.pending
while (req = pending.shift)
response.finish!
req.response = response
req.emit(:response, response)
end
reset

View File

@ -4,7 +4,7 @@ require "resolv"
require "ipaddr"
module HTTPX
class Socks4Error < HTTPProxyError; end
class Socks4Error < ProxyError; end
module Plugins
module Proxy

View File

@ -1,7 +1,7 @@
# frozen_string_literal: true
module HTTPX
class Socks5Error < HTTPProxyError; end
class Socks5Error < ProxyError; end
module Plugins
module Proxy

View File

@ -0,0 +1,35 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for using the experimental QUERY HTTP method
#
# https://gitlab.com/os85/httpx/wikis/Query
module Query
def self.subplugins
{
retries: QueryRetries,
}
end
module InstanceMethods
def query(*uri, **options)
request("QUERY", uri, **options)
end
end
module QueryRetries
module InstanceMethods
private
def repeatable_request?(request, options)
super || request.verb == "QUERY"
end
end
end
end
register_plugin :query, Query
end
end

View File
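The `QueryRetries` subplugin above widens the idempotent-verb check so QUERY requests become retryable once both `:query` and `:retries` are loaded. As a standalone sketch of the resulting predicate:

```ruby
# Sketch of the combined repeatable_request? behavior: idempotent verbs are
# retried as before, and QUERY is added on top by the subplugin.
IDEMPOTENT_METHODS = %w[GET OPTIONS HEAD PUT DELETE].freeze

def repeatable_request?(verb)
  IDEMPOTENT_METHODS.include?(verb) || verb == "QUERY"
end

repeatable_request?("QUERY") # => true
repeatable_request?("POST")  # => false
```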

@ -10,21 +10,18 @@ module HTTPX
module ResponseCache
CACHEABLE_VERBS = %w[GET HEAD].freeze
CACHEABLE_STATUS_CODES = [200, 203, 206, 300, 301, 410].freeze
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding accept-language cookie origin].sort.freeze
private_constant :CACHEABLE_VERBS
private_constant :CACHEABLE_STATUS_CODES
class << self
def load_dependencies(*)
require_relative "response_cache/store"
require_relative "response_cache/file_store"
end
def cacheable_request?(request)
CACHEABLE_VERBS.include?(request.verb) &&
(
!request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
)
end
# whether the +response+ can be stored in the response cache.
# (i.e. has a cacheable body, does not contain directives prohibiting storage, etc...)
def cacheable_response?(response)
response.is_a?(Response) &&
(
@ -39,82 +36,230 @@ module HTTPX
# directive prohibits caching. However, a cache that does not support
# the Range and Content-Range headers MUST NOT cache 206 (Partial
# Content) responses.
response.status != 206 && (
response.headers.key?("etag") || response.headers.key?("last-modified") || response.fresh?
)
response.status != 206
end
def cached_response?(response)
# whether the +response+ is a 304 Not Modified response.
def not_modified?(response)
response.is_a?(Response) && response.status == 304
end
def extra_options(options)
options.merge(response_cache_store: Store.new)
options.merge(
supported_vary_headers: SUPPORTED_VARY_HEADERS,
response_cache_store: :store,
)
end
end
# adds support for the following options:
#
# :supported_vary_headers :: array of header names that will be considered for "vary"-header-based cache validation
# (defaults to {SUPPORTED_VARY_HEADERS}).
# :response_cache_store :: object where cached responses are fetched from or stored in; defaults to <tt>:store</tt> (in-memory
# cache), can be set to <tt>:file_store</tt> (file system cache store) as well, or any object which
# abides by the Cache Store Interface
#
# The Cache Store Interface requires implementation of the following methods:
#
# * +#get(request) -> response or nil+
# * +#set(request, response) -> void+
# * +#clear() -> void+
#
module OptionsMethods
def option_response_cache_store(value)
raise TypeError, "must be an instance of #{Store}" unless value.is_a?(Store)
case value
when :store
Store.new
when :file_store
FileStore.new
else
value
end
end
def option_supported_vary_headers(value)
Array(value).sort
end
end
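Any object implementing the three documented methods can be plugged in via `:response_cache_store`. A minimal hash-backed store satisfying the contract (no vary or freshness handling, names illustrative):

```ruby
# Minimal object satisfying the Cache Store Interface documented above:
# keyed by the request's response_cache_key, with no freshness logic.
class MinimalStore
  def initialize
    @store = {}
  end

  def get(request)
    @store[request.response_cache_key]
  end

  def set(request, response)
    @store[request.response_cache_key] = response
  end

  def clear
    @store.clear
  end
end

# exercising the contract with a stand-in request object:
FakeRequest = Struct.new(:response_cache_key)
store = MinimalStore.new
store.set(FakeRequest.new("abc123"), :cached_response)
store.get(FakeRequest.new("abc123")) # => :cached_response
```

An instance of such a class could presumably be passed in place of the built-in `:store` / `:file_store` symbols.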
module InstanceMethods
# wipes out all cached responses from the cache store.
def clear_response_cache
@options.response_cache_store.clear
end
def build_request(*)
request = super
return request unless ResponseCache.cacheable_request?(request) && @options.response_cache_store.cached?(request)
return request unless cacheable_request?(request)
@options.response_cache_store.prepare(request)
prepare_cache(request)
request
end
private
def send_request(request, *)
return request if request.response
super
end
def fetch_response(request, *)
response = super
return unless response
if ResponseCache.cached_response?(response)
if ResponseCache.not_modified?(response)
log { "returning cached response for #{request.uri}" }
cached_response = @options.response_cache_store.lookup(request)
response.copy_from_cached(cached_response)
else
@options.response_cache_store.cache(request, response)
response.copy_from_cached!
elsif request.cacheable_verb? && ResponseCache.cacheable_response?(response)
request.options.response_cache_store.set(request, response) unless response.cached?
end
response
end
# will either assign a still-fresh cached response to +request+, or set up its HTTP
# cache revalidation headers in case it's not fresh anymore.
def prepare_cache(request)
cached_response = request.options.response_cache_store.get(request)
return unless cached_response && match_by_vary?(request, cached_response)
cached_response.body.rewind
if cached_response.fresh?
cached_response = cached_response.dup
cached_response.mark_as_cached!
request.response = cached_response
request.emit(:response, cached_response)
return
end
request.cached_response = cached_response
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
def cacheable_request?(request)
request.cacheable_verb? &&
(
!request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
)
end
# whether the +response+ complies with the directives set by the +request+ "vary" header
# (true when none is available).
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.original_request
if vary == %w[*]
request.options.supported_vary_headers.each do |field|
return false unless request.headers[field] == original_request.headers[field]
end
return true
end
vary.all? do |field|
!original_request.headers.key?(field) || request.headers[field] == original_request.headers[field]
end
end
end
module RequestMethods
# points to a previously cached Response corresponding to this request.
attr_accessor :cached_response
def initialize(*)
super
@cached_response = nil
end
def merge_headers(*)
super
@response_cache_key = nil
end
# returns whether this request is cacheable as per HTTP caching rules.
def cacheable_verb?
CACHEABLE_VERBS.include?(@verb)
end
# returns a unique cache key as a String identifying this request
def response_cache_key
@response_cache_key ||= Digest::SHA1.hexdigest("httpx-response-cache-#{@verb}-#{@uri}")
@response_cache_key ||= begin
keys = [@verb, @uri]
@options.supported_vary_headers.each do |field|
value = @headers[field]
keys << value if value
end
Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end
end
end
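The vary-aware key computation above can be reproduced standalone; a function-form sketch (not the actual request method) showing why requests differing in a supported "vary" header land in distinct cache entries:

```ruby
require "digest/sha1"

# Sketch of the cache key above: the verb, the uri, and any request header
# listed in supported_vary_headers feed the SHA1 digest.
def response_cache_key(verb, uri, headers, supported_vary_headers)
  keys = [verb, uri]
  supported_vary_headers.each do |field|
    value = headers[field]
    keys << value if value
  end
  Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end

html = response_cache_key("GET", "https://example.com", { "accept" => "text/html" }, %w[accept])
json = response_cache_key("GET", "https://example.com", { "accept" => "application/json" }, %w[accept])
html == json # => false
```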
module ResponseMethods
def copy_from_cached(other)
# 304 responses do not have content-type, which are needed for decoding.
@headers = @headers.class.new(other.headers.merge(@headers))
attr_writer :original_request
@body = other.body.dup
def initialize(*)
super
@cached = false
end
# a copy of the request this response was originally cached from
def original_request
@original_request || @request
end
# whether this Response was duplicated from a previous {RequestMethods#cached_response}.
def cached?
@cached
end
# sets this Response as being duplicated from a previously cached response.
def mark_as_cached!
@cached = true
end
# eager-copies the response headers and body from {RequestMethods#cached_response}.
def copy_from_cached!
cached_response = @request.cached_response
return unless cached_response
# 304 responses do not have content-type, which are needed for decoding.
@headers = @headers.class.new(cached_response.headers.merge(@headers))
@body = cached_response.body.dup
@body.rewind
end
# A response is fresh if its age has not yet exceeded its freshness lifetime.
# Other {#cache_control} directives may influence the outcome, as per the rules
# from the {rfc}[https://www.rfc-editor.org/rfc/rfc7234]
def fresh?
if cache_control
return false if cache_control.include?("no-cache")
return true if cache_control.include?("immutable")
# check age: max-age
max_age = cache_control.find { |directive| directive.start_with?("s-maxage") }
@ -132,15 +277,16 @@ module HTTPX
begin
expires = Time.httpdate(@headers["expires"])
rescue ArgumentError
return true
return false
end
return (expires - Time.now).to_i.positive?
end
true
false
end
# returns the "cache-control" directives as an Array of String(s).
def cache_control
return @cache_control if defined?(@cache_control)
@ -151,24 +297,28 @@ module HTTPX
end
end
# returns the "vary" header value as an Array of (String) headers.
def vary
return @vary if defined?(@vary)
@vary = begin
return unless @headers.key?("vary")
@headers["vary"].split(/ *, */)
@headers["vary"].split(/ *, */).map(&:downcase)
end
end
private
# returns the value of the "age" header as an Integer (in seconds).
# if no "age" header exists, it returns the number of seconds since {#date}.
def age
return @headers["age"].to_i if @headers.key?("age")
(Time.now - date).to_i
end
# returns the value of the "date" header as a Time object
def date
@date ||= Time.httpdate(@headers["date"])
rescue NoMethodError, ArgumentError

View File

@ -0,0 +1,140 @@
# frozen_string_literal: true
require "pathname"
module HTTPX::Plugins
module ResponseCache
# Implementation of a file system based cache store.
#
# It stores cached responses in a file under a directory pointed by the +dir+
# variable (defaults to the default temp directory from the OS), in a custom
# format (similar to, but different from, HTTP/1.1 request/response framing).
class FileStore
CRLF = HTTPX::Connection::HTTP1::CRLF
attr_reader :dir
def initialize(dir = Dir.tmpdir)
@dir = Pathname.new(dir).join("httpx-response-cache")
FileUtils.mkdir_p(@dir)
end
def clear
FileUtils.rm_rf(@dir)
end
def get(request)
path = file_path(request)
return unless File.exist?(path)
File.open(path, mode: File::RDONLY | File::BINARY) do |f|
f.flock(File::Constants::LOCK_SH)
read_from_file(request, f)
end
end
def set(request, response)
path = file_path(request)
file_exists = File.exist?(path)
mode = file_exists ? File::RDWR : File::CREAT | File::Constants::WRONLY
File.open(path, mode: mode | File::BINARY) do |f|
f.flock(File::Constants::LOCK_EX)
if file_exists
cached_response = read_from_file(request, f)
if cached_response
next if cached_response == request.cached_response
cached_response.close
f.truncate(0)
f.rewind
end
end
# cache the request headers
f << request.verb << CRLF
f << request.uri << CRLF
request.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
# cache the response
f << response.status << CRLF
f << response.version << CRLF
response.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
response.body.rewind
IO.copy_stream(response.body, f)
end
end
private
def file_path(request)
@dir.join(request.response_cache_key)
end
def read_from_file(request, f)
# if it's an empty file
return if f.eof?
# read request data
verb = f.readline.delete_suffix!(CRLF)
uri = f.readline.delete_suffix!(CRLF)
request_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
request_headers[field] = value
end
status = f.readline.delete_suffix!(CRLF)
version = f.readline.delete_suffix!(CRLF)
response_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
response_headers[field] = value
end
original_request = request.options.request_class.new(verb, uri, request.options)
original_request.merge_headers(request_headers)
response = request.options.response_class.new(request, status, version, response_headers)
response.original_request = original_request
response.finish!
IO.copy_stream(f, response.body)
response
end
end
end
end

View File
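The header framing used by `set`/`read_from_file` above (one `name:value` line per header, each CRLF-terminated, with a bare CRLF line closing the block) can be sketched with StringIO:

```ruby
require "stringio"

CRLF = "\r\n"

# Writes headers in the file store's framing: one "name:value" line per
# header, CRLF-terminated, with a bare CRLF closing the block.
def write_header_block(io, headers)
  headers.each { |field, value| io << field << ":" << value << CRLF }
  io << CRLF
end

# Reads the block back, splitting each line on the first ":".
def read_header_block(io)
  headers = {}
  while (line = io.readline) != CRLF
    line.delete_suffix!(CRLF)
    sep_index = line.index(":")
    headers[line.byteslice(0..(sep_index - 1))] = line.byteslice((sep_index + 1)..-1)
  end
  headers
end

buf = StringIO.new
write_header_block(buf, "content-type" => "text/plain")
buf.rewind
read_header_block(buf) # => {"content-type"=>"text/plain"}
```

Splitting on the first ":" only (as the store does) keeps header values containing colons, such as URLs, intact.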

@ -2,6 +2,7 @@
module HTTPX::Plugins
module ResponseCache
# Implementation of a thread-safe in-memory cache store.
class Store
def initialize
@store = {}
@ -12,80 +13,19 @@ module HTTPX::Plugins
@store_mutex.synchronize { @store.clear }
end
def lookup(request)
responses = _get(request)
return unless responses
responses.find(&method(:match_by_vary?).curry(2)[request])
end
def cached?(request)
lookup(request)
end
def cache(request, response)
return unless ResponseCache.cacheable_request?(request) && ResponseCache.cacheable_response?(response)
_set(request, response)
end
def prepare(request)
cached_response = lookup(request)
return unless cached_response
return unless match_by_vary?(request, cached_response)
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
private
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.instance_variable_get(:@request)
return request.headers.same_headers?(original_request.headers) if vary == %w[*]
vary.all? do |cache_field|
cache_field.downcase!
!original_request.headers.key?(cache_field) || request.headers[cache_field] == original_request.headers[cache_field]
end
end
def _get(request)
def get(request)
@store_mutex.synchronize do
responses = @store[request.response_cache_key]
return unless responses
responses.select! do |res|
!res.body.closed? && res.fresh?
end
responses
@store[request.response_cache_key]
end
end
def _set(request, response)
def set(request, response)
@store_mutex.synchronize do
responses = (@store[request.response_cache_key] ||= [])
cached_response = @store[request.response_cache_key]
responses.reject! do |res|
res.body.closed? || !res.fresh? || match_by_vary?(request, res)
end
cached_response.close if cached_response
responses << response
@store[request.response_cache_key] = response
end
end
end

View File

@ -17,7 +17,9 @@ module HTTPX
# TODO: pass max_retries in a configure/load block
IDEMPOTENT_METHODS = %w[GET OPTIONS HEAD PUT DELETE].freeze
RETRYABLE_ERRORS = [
# subset of retryable errors which are safe to retry when reconnecting
RECONNECTABLE_ERRORS = [
IOError,
EOFError,
Errno::ECONNRESET,
@ -25,12 +27,15 @@ module HTTPX
Errno::EPIPE,
Errno::EINVAL,
Errno::ETIMEDOUT,
Parser::Error,
TLSError,
TimeoutError,
ConnectionError,
Connection::HTTP2::GoawayError,
TLSError,
Connection::HTTP2::Error,
].freeze
RETRYABLE_ERRORS = (RECONNECTABLE_ERRORS + [
Parser::Error,
TimeoutError,
]).freeze
DEFAULT_JITTER = ->(interval) { interval * ((rand + 1) * 0.5) }
if ENV.key?("HTTPX_NO_JITTER")
@ -88,6 +93,7 @@ module HTTPX
end
module InstanceMethods
# returns a session with the `:retries` plugin enabled, where +n+ sets the maximum number of retries per request.
def max_retries(n)
with(max_retries: n)
end
@ -99,18 +105,18 @@ module HTTPX
if response &&
request.retries.positive? &&
__repeatable_request?(request, options) &&
repeatable_request?(request, options) &&
(
(
response.is_a?(ErrorResponse) && __retryable_error?(response.error)
response.is_a?(ErrorResponse) && retryable_error?(response.error)
) ||
(
options.retry_on && options.retry_on.call(response)
)
)
__try_partial_retry(request, response)
try_partial_retry(request, response)
log { "failed to get response, #{request.retries} tries to go..." }
request.retries -= 1
request.retries -= 1 unless request.ping? # do not exhaust retries on connection liveness probes
request.transition(:idle)
retry_after = options.retry_after
@ -125,9 +131,10 @@ module HTTPX
retry_start = Utils.now
log { "retrying after #{retry_after} secs..." }
selector.after(retry_after) do
if request.response
if (response = request.response)
response.finish!
# request has terminated abruptly meanwhile
request.emit(:response, request.response)
request.emit(:response, response)
else
log { "retrying (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
send_request(request, selector, options)
@ -142,11 +149,13 @@ module HTTPX
response
end
def __repeatable_request?(request, options)
# returns whether +request+ can be retried.
def repeatable_request?(request, options)
IDEMPOTENT_METHODS.include?(request.verb) || options.retry_change_requests
end
def __retryable_error?(ex)
# returns whether the +ex+ exception happened for a retriable request.
def retryable_error?(ex)
RETRYABLE_ERRORS.any? { |klass| ex.is_a?(klass) }
end
@ -155,11 +164,11 @@ module HTTPX
end
#
# Atttempt to set the request to perform a partial range request.
# Attempt to set the request to perform a partial range request.
# This happens if the peer server accepts byte-range requests, and
# the last response contains some body payload.
#
def __try_partial_retry(request, response)
def try_partial_retry(request, response)
response = response.response if response.is_a?(ErrorResponse)
return unless response
@ -180,10 +189,13 @@ module HTTPX
end
module RequestMethods
# number of retries left.
attr_accessor :retries
# a response partially received before.
attr_writer :partial_response
# initializes the request instance, sets the number of retries for the request.
def initialize(*args)
super
@retries = @options.max_retries


@ -2,29 +2,43 @@
module HTTPX
class StreamResponse
attr_reader :request
def initialize(request, session)
@request = request
@options = @request.options
@session = session
@response = nil
@response_enum = nil
@buffered_chunks = []
end
def each(&block)
return enum_for(__method__) unless block
if (response_enum = @response_enum)
@response_enum = nil
# streaming already started, let's finish it
while (chunk = @buffered_chunks.shift)
block.call(chunk)
end
# consume the enum until the end
begin
while (chunk = response_enum.next)
block.call(chunk)
end
rescue StopIteration
return
end
end
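The resume logic above (drain already-buffered chunks, then consume the remaining enumerator until StopIteration) can be sketched with plain-Ruby stand-ins:

```ruby
# plain-Ruby sketch of the resume path in #each: chunks buffered before streaming
# resumed are yielded first, then the enumerator is consumed to exhaustion
buffered_chunks = ["chunk1"]
response_enum = ["chunk2", "chunk3"].each # Enumerator, as returned by each without a block
received = []

while (chunk = buffered_chunks.shift)
  received << chunk
end

begin
  while (chunk = response_enum.next)
    received << chunk
  end
rescue StopIteration
  # enumerator exhausted
end

received # => ["chunk1", "chunk2", "chunk3"]
```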
@request.stream = self
begin
@on_chunk = block
if @request.response
# if we've already started collecting the payload, yield it first
# before proceeding.
body = @request.response.body
body.each do |chunk|
on_chunk(chunk)
end
end
response = @session.request(@request)
response.raise_for_status
ensure
@ -59,38 +73,50 @@ module HTTPX
# :nocov:
def inspect
"#<StreamResponse:#{object_id}>"
"#<#{self.class}:#{object_id}>"
end
# :nocov:
def to_s
response.to_s
if @request.response
@request.response.to_s
else
@buffered_chunks.join
end
end
private
def response
return @response if @response
@request.response || begin
@response = @session.request(@request)
response_enum = each
while (chunk = response_enum.next)
@buffered_chunks << chunk
break if @request.response
end
@response_enum = response_enum
@request.response
end
end
def respond_to_missing?(meth, *args)
response.respond_to?(meth, *args) || super
def respond_to_missing?(meth, include_private)
if (response = @request.response)
response.respond_to_missing?(meth, include_private)
else
@options.response_class.method_defined?(meth) || (include_private && @options.response_class.private_method_defined?(meth))
end || super
end
def method_missing(meth, *args, &block)
def method_missing(meth, *args, **kwargs, &block)
return super unless response.respond_to?(meth)
response.__send__(meth, *args, &block)
response.__send__(meth, *args, **kwargs, &block)
end
end
module Plugins
#
# This plugin adds support for stream response (text/event-stream).
# This plugin adds support for streaming a response (useful for i.e. "text/event-stream" payloads).
#
# https://gitlab.com/os85/httpx/wikis/Stream
#


@ -0,0 +1,315 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for bidirectional HTTP/2 streams.
#
# https://gitlab.com/os85/httpx/wikis/StreamBidi
#
It is required that the request body allows chunks to be buffered (i.e. responds to +#<<(chunk)+).
module StreamBidi
# Extension of the Connection::HTTP2 class, which adds functionality to
# deal with a request that can't be drained and must be interleaved with
# the response streams.
#
The stream keeps sending DATA frames while there is data; when there is none left,
# the stream is kept open; it must be explicitly closed by the end user.
#
class HTTP2Bidi < Connection::HTTP2
def initialize(*)
super
@lock = Thread::Mutex.new
end
%i[close empty? exhausted? send <<].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock-aware version of +#{lock_meth}+
def #{lock_meth}(*) # def close(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
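The class_eval loop above generates mutex-guarded wrappers which bypass the lock when the calling thread already owns it; a minimal hand-written sketch of the same pattern (the class here is illustrative, not an HTTPX internal):

```ruby
# sketch of re-entrancy-aware locking, as generated by the class_eval loop above
class SafeBox
  def initialize
    @lock = Thread::Mutex.new
    @items = []
  end

  def push(item)
    # bypass the mutex if the current thread already holds it (re-entrant call)
    return @items << item if @lock.owned?

    @lock.synchronize { @items << item }
  end

  def size
    return @items.size if @lock.owned?

    @lock.synchronize { @items.size }
  end
end

box = SafeBox.new
4.times.map { |i| Thread.new { box.push(i) } }.each(&:join)
box.size # => 4
```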
private
%i[join_headers join_trailers join_body].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock-aware version of +#{lock_meth}+
def #{lock_meth}(*) # def join_headers(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
def handle_stream(stream, request)
request.on(:body) do
next unless request.headers_sent
handle(request, stream)
emit(:flush_buffer)
end
super
end
# when there are no more chunks, it marks the buffer as full.
def send_chunk(request, stream, chunk, next_chunk)
super
return if next_chunk
request.transition(:waiting_for_chunk)
throw(:buffer_full)
end
# sets end-stream flag when the request is closed.
def end_stream?(request, next_chunk)
request.closed? && next_chunk.nil?
end
end
# BidiBuffer is a Buffer which can receive data from threads other
# than the thread of the corresponding Connection/Session.
#
# It synchronizes access to a secondary internal +@oob_buffer+, which periodically
# is reconciled to the main internal +@buffer+.
class BidiBuffer < Buffer
def initialize(*)
super
@parent_thread = Thread.current
@oob_mutex = Thread::Mutex.new
@oob_buffer = "".b
end
# buffers the +chunk+ to be sent
def <<(chunk)
return super if Thread.current == @parent_thread
@oob_mutex.synchronize { @oob_buffer << chunk }
end
# reconciles the main and secondary buffer (which receives data from other threads).
def rebuffer
raise Error, "can only rebuffer while waiting on a response" unless Thread.current == @parent_thread
@oob_mutex.synchronize do
@buffer << @oob_buffer
@oob_buffer.clear
end
end
end
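The reconciliation BidiBuffer performs can be sketched with plain strings and a mutex (names here are illustrative, not the actual HTTPX internals): off-thread writers append to a guarded side buffer, and the owner thread periodically folds it into the main one.

```ruby
# minimal sketch of the double-buffer reconciliation pattern above
main = "".b
side = "".b
side_mutex = Thread::Mutex.new

# writers on other threads only touch the mutex-guarded side buffer
writers = 3.times.map do |i|
  Thread.new { side_mutex.synchronize { side << i.to_s } }
end
writers.each(&:join)

# "rebuffer": the owner thread reconciles the side buffer into the main one
side_mutex.synchronize do
  main << side
  side.clear
end

main.chars.sort # => ["0", "1", "2"]
side.empty?     # => true
```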
# Proxy to wake up the session main loop when one
# of the connections has buffered data to write. It abides by the HTTPX::_Selectable API,
# which allows it to be registered in the selector alongside actual HTTP-based
# HTTPX::Connection objects.
class Signal
def initialize
@closed = false
@pipe_read, @pipe_write = IO.pipe
end
def state
@closed ? :closed : :open
end
# noop
def log(**, &_); end
def to_io
@pipe_read.to_io
end
def wakeup
return if @closed
@pipe_write.write("\0")
end
def call
return if @closed
@pipe_read.readpartial(1)
end
def interests
return if @closed
:r
end
def timeout; end
def terminate
@pipe_write.close
@pipe_read.close
@closed = true
end
# noop (the owner connection will take care of it)
def handle_socket_timeout(interval); end
end
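The Signal class above is an instance of the classic self-pipe trick: a byte written to a pipe from another thread wakes a selector blocked on that pipe's read end. A minimal standalone sketch (illustrative, not the HTTPX internals):

```ruby
# self-pipe wakeup sketch: a write on one end unblocks a select on the other
r, w = IO.pipe

Thread.new { w.write("\0") } # wakeup issued from another thread

ready, = IO.select([r], nil, nil, 1)
woken = !ready.nil? && ready.include?(r) # => true
byte = r.readpartial(1)                  # drain the wakeup byte
w.close
r.close
```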
class << self
def load_dependencies(klass)
klass.plugin(:stream)
end
def extra_options(options)
options.merge(fallback_protocol: "h2")
end
end
module InstanceMethods
def initialize(*)
@signal = Signal.new
super
end
def close(selector = Selector.new)
@signal.terminate
selector.deregister(@signal)
super(selector)
end
def select_connection(connection, selector)
super
selector.register(@signal)
connection.signal = @signal
end
def deselect_connection(connection, *)
super
connection.signal = nil
end
end
# Adds synchronization to request operations which may buffer payloads from different
# threads.
module RequestMethods
attr_accessor :headers_sent
def initialize(*)
super
@headers_sent = false
@closed = false
@mutex = Thread::Mutex.new
end
def closed?
@closed
end
def can_buffer?
super && @state != :waiting_for_chunk
end
# overrides state management transitions to introduce an intermediate
# +:waiting_for_chunk+ state, which the request transitions to once payload
# is buffered.
def transition(nextstate)
headers_sent = @headers_sent
case nextstate
when :waiting_for_chunk
return unless @state == :body
when :body
case @state
when :headers
headers_sent = true
when :waiting_for_chunk
# HACK: to allow super to pass through
@state = :headers
end
end
super.tap do
# delay setting this up until after the first transition to :body
@headers_sent = headers_sent
end
end
def <<(chunk)
@mutex.synchronize do
if @drainer
@body.clear if @body.respond_to?(:clear)
@drainer = nil
end
@body << chunk
transition(:body)
end
end
def close
@mutex.synchronize do
return if @closed
@closed = true
end
# last chunk to send which ends the stream
self << ""
end
end
module RequestBodyMethods
def initialize(*, **)
super
@headers.delete("content-length")
end
def empty?
false
end
end
# overrides the declaration of +@write_buffer+, which is now a thread-safe buffer
# responding to the same API.
module ConnectionMethods
attr_writer :signal
def initialize(*)
super
@write_buffer = BidiBuffer.new(@options.buffer_size)
end
# rebuffers the +@write_buffer+ before calculating interests.
def interests
@write_buffer.rebuffer
super
end
private
def parser_type(protocol)
return HTTP2Bidi if protocol == "h2"
super
end
def set_parser_callbacks(parser)
super
parser.on(:flush_buffer) do
@signal.wakeup if @signal
end
end
end
end
register_plugin :stream_bidi, StreamBidi
end
end


@ -65,6 +65,12 @@ module HTTPX
module ConnectionMethods
attr_reader :upgrade_protocol, :hijacked
def initialize(*)
super
@upgrade_protocol = nil
end
def hijack_io
@hijacked = true


@ -13,25 +13,28 @@ module HTTPX
# Sets up the connection pool with the given +options+, which can be the following:
#
# :max_connections:: the maximum number of connections held in the pool.
# :max_connections_per_origin :: the maximum number of connections held in the pool pointing to a given origin.
# :pool_timeout :: the number of seconds to wait for a connection to a given origin (before raising HTTPX::PoolTimeoutError)
#
def initialize(options)
@max_connections = options.fetch(:max_connections, Float::INFINITY)
@max_connections_per_origin = options.fetch(:max_connections_per_origin, Float::INFINITY)
@pool_timeout = options.fetch(:pool_timeout, POOL_TIMEOUT)
@resolvers = Hash.new { |hs, resolver_type| hs[resolver_type] = [] }
@resolver_mtx = Thread::Mutex.new
@connections = []
@connection_mtx = Thread::Mutex.new
@connections_counter = 0
@max_connections_cond = ConditionVariable.new
@origin_counters = Hash.new(0)
@origin_conds = Hash.new { |hs, orig| hs[orig] = ConditionVariable.new }
end
# connections returned by this function are not expected to return to the connection pool.
def pop_connection
@connection_mtx.synchronize do
conn = @connections.shift
@origin_conds.delete(conn.origin) if conn && (@origin_counters[conn.origin.to_s] -= 1).zero?
conn
drop_connection
end
end
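The checkout path below waits on condition variables bounded by +@pool_timeout+; the bounded wait itself can be sketched with stdlib primitives (nobody signals here, so the wait returns only once the timeout elapses):

```ruby
# sketch of a bounded ConditionVariable wait, as the pool performs under @pool_timeout
mutex = Thread::Mutex.new
cond = Thread::ConditionVariable.new
timed_out = nil

mutex.synchronize do
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  cond.wait(mutex, 0.05) # returns after ~0.05s when nobody signals
  timed_out = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) >= 0.04
end

timed_out # => true
```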
@ -44,13 +47,34 @@ module HTTPX
@connection_mtx.synchronize do
acquire_connection(uri, options) || begin
if @connections_counter == @max_connections
# this takes precedence over per-origin
@max_connections_cond.wait(@connection_mtx, @pool_timeout)
acquire_connection(uri, options) || begin
if @connections_counter == @max_connections
# if no matching usable connection was found, the pool will make room and drop a closed connection. if none is found,
# this means that all of them are persistent or being used, so raise a timeout error.
conn = @connections.find { |c| c.state == :closed }
raise PoolTimeoutError.new(@pool_timeout,
"Timed out after #{@pool_timeout} seconds while waiting for a connection") unless conn
drop_connection(conn)
end
end
end
if @origin_counters[uri.origin] == @max_connections_per_origin
@origin_conds[uri.origin].wait(@connection_mtx, @pool_timeout)
return acquire_connection(uri, options) || raise(PoolTimeoutError.new(uri.origin, @pool_timeout))
return acquire_connection(uri, options) ||
raise(PoolTimeoutError.new(@pool_timeout,
"Timed out after #{@pool_timeout} seconds while waiting for a connection to #{uri.origin}"))
end
@connections_counter += 1
@origin_counters[uri.origin] += 1
checkout_new_connection(uri, options)
@ -64,6 +88,7 @@ module HTTPX
@connection_mtx.synchronize do
@connections << connection
@max_connections_cond.signal
@origin_conds[connection.origin.to_s].signal
end
end
@ -107,6 +132,15 @@ module HTTPX
end
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@max_connections_per_origin=#{@max_connections_per_origin} " \
"@pool_timeout=#{@pool_timeout} " \
"@connections=#{@connections.size}>"
end
# :nocov:
private
def acquire_connection(uri, options)
@ -114,7 +148,9 @@ module HTTPX
connection.match?(uri, options)
end
@connections.delete_at(idx) if idx
return unless idx
@connections.delete_at(idx)
end
def checkout_new_connection(uri, options)
@ -128,5 +164,22 @@ module HTTPX
resolver_type.new(options)
end
end
# drops and returns the +connection+ from the connection pool; if +connection+ is <tt>nil</tt> (default),
# the first available connection from the pool will be dropped.
def drop_connection(connection = nil)
if connection
@connections.delete(connection)
else
connection = @connections.shift
return unless connection
end
@connections_counter -= 1
@origin_conds.delete(connection.origin) if (@origin_counters[connection.origin.to_s] -= 1).zero?
connection
end
end
end


@ -8,6 +8,7 @@ module HTTPX
# as well as maintaining the state machine which manages streaming the request onto the wire.
class Request
extend Forwardable
include Loggable
include Callbacks
using URIExtensions
@ -104,21 +105,32 @@ module HTTPX
@state = :idle
@response = nil
@peer_address = nil
@ping = false
@persistent = @options.persistent
@active_timeouts = []
end
# the read timeout defined for this requet.
# whether the request has been buffered with a ping
def ping?
@ping
end
# marks the request as having been buffered with a ping
def ping!
@ping = true
end
# the read timeout defined for this request.
def read_timeout
@options.timeout[:read_timeout]
end
# the write timeout defined for this requet.
# the write timeout defined for this request.
def write_timeout
@options.timeout[:write_timeout]
end
# the request timeout defined for this requet.
# the request timeout defined for this request.
def request_timeout
@options.timeout[:request_timeout]
end
@ -144,6 +156,10 @@ module HTTPX
:w
end
def can_buffer?
@state != :done
end
# merges +h+ into the instance of HTTPX::Headers of the request.
def merge_headers(h)
@headers = @headers.merge(h)
@ -211,7 +227,7 @@ module HTTPX
return @query if defined?(@query)
query = []
if (q = @query_params)
if (q = @query_params) && !q.empty?
query << Transcoder::Form.encode(q)
end
query << @uri.query if @uri.query
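The query assembly above (skip empty params, then append any query string already on the URI) has a close stdlib equivalent; `URI.encode_www_form` stands in for `Transcoder::Form.encode` here:

```ruby
# stdlib sketch of the query assembly: form-encode the extra params,
# then join with the query string already present in the URI
require "uri"

uri = URI("https://example.com/search?q=ruby")
query_params = { "page" => 2 }

query = []
query << URI.encode_www_form(query_params) unless query_params.empty?
query << uri.query if uri.query
merged = query.join("&") # => "page=2&q=ruby"
```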
@ -236,7 +252,7 @@ module HTTPX
# :nocov:
def inspect
"#<HTTPX::Request:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"#{@verb} " \
"#{uri} " \
"@headers=#{@headers} " \
@ -249,6 +265,7 @@ module HTTPX
case nextstate
when :idle
@body.rewind
@ping = false
@response = nil
@drainer = nil
@active_timeouts.clear
@ -277,6 +294,7 @@ module HTTPX
return if @state == :expect
end
log(level: 3) { "#{@state}] -> #{nextstate}" }
@state = nextstate
emit(@state, self)
nil


@ -56,7 +56,7 @@ module HTTPX
block.call(chunk)
end
# TODO: use copy_stream once bug is resolved: https://bugs.ruby-lang.org/issues/21131
# ::IO.copy_stream(body, ProcIO.new(block))
# IO.copy_stream(body, ProcIO.new(block))
elsif body.respond_to?(:each)
body.each(&block)
else
@ -116,7 +116,7 @@ module HTTPX
# :nocov:
def inspect
"#<HTTPX::Request::Body:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"#{unbounded_body? ? "stream" : "@bytesize=#{bytesize}"}>"
end
# :nocov:


@ -83,6 +83,18 @@ module HTTPX
end
end
def cached_lookup_evict(hostname, ip)
ip = ip.to_s
lookup_synchronize do |lookups|
entries = lookups[hostname]
return unless entries
entries.delete_if { |entry| entry["data"] == ip }
end
end
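The eviction can be exercised against a plain hash shaped like the DNS cache above (hostnames mapping to arrays of entry hashes; the addresses are illustrative):

```ruby
# sketch of evicting a stale IP from a lookup table shaped like the DNS cache
lookups = {
  "example.com" => [{ "data" => "93.184.216.34" }, { "data" => "93.184.216.35" }],
}

entries = lookups["example.com"]
entries.delete_if { |entry| entry["data"] == "93.184.216.34" }

entries.map { |e| e["data"] } # => ["93.184.216.35"]
```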
# do not use directly!
def lookup(hostname, lookups, ttl)
return unless lookups.key?(hostname)
@ -92,8 +104,8 @@ module HTTPX
end
ips = entries.flat_map do |address|
if address.key?("alias")
lookup(address["alias"], lookups, ttl)
if (als = address["alias"])
lookup(als, lookups, ttl)
else
IPAddr.new(address["data"])
end


@ -2,11 +2,14 @@
require "resolv"
require "uri"
require "cgi"
require "forwardable"
require "httpx/base64"
module HTTPX
# Implementation of a DoH name resolver (https://www.youtube.com/watch?v=unMXvnY2FNM).
# It wraps an HTTPX::Connection object which integrates with the main session in the
# same manner as other performed HTTP requests.
#
class Resolver::HTTPS < Resolver::Resolver
extend Forwardable
using URIExtensions
@ -27,14 +30,13 @@ module HTTPX
use_get: false,
}.freeze
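The DoH resolver delegates DNS wire-format encoding/decoding to the stdlib `resolv` gem; the encoded payload is what then travels over HTTPS. A round-trip sketch:

```ruby
# sketch: encoding/decoding a DNS query payload with the stdlib resolv gem
require "resolv"

message = Resolv::DNS::Message.new(0)
message.add_question("example.com", Resolv::DNS::Resource::IN::A)

payload = message.encode # binary DNS wire format
decoded = Resolv::DNS::Message.decode(payload)

name = decoded.question.first.first.to_s # => "example.com"
```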
def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate, :inflight?
def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate, :inflight?, :handle_socket_timeout
def initialize(_, options)
super
@resolver_options = DEFAULTS.merge(@options.resolver_options)
@queries = {}
@requests = {}
@connections = []
@uri = URI(@resolver_options[:uri])
@uri_addresses = nil
@resolver = Resolv::DNS.new
@ -75,7 +77,11 @@ module HTTPX
private
def resolve(connection = @connections.first, hostname = nil)
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.first
return unless connection
hostname ||= @queries.key(connection)


@ -35,6 +35,10 @@ module HTTPX
@resolvers.each { |r| r.__send__(__method__, s) }
end
def log(*args, **kwargs, &blk)
@resolvers.each { |r| r.__send__(__method__, *args, **kwargs, &blk) }
end
def closed?
@resolvers.all?(&:closed?)
end


@ -4,6 +4,9 @@ require "forwardable"
require "resolv"
module HTTPX
# Implements a pure ruby name resolver, which abides by the Selectable API.
# It delegates DNS payload encoding/decoding to the +resolv+ stdlib gem.
#
class Resolver::Native < Resolver::Resolver
extend Forwardable
using URIExtensions
@ -34,7 +37,6 @@ module HTTPX
@search = Array(@resolver_options[:search]).map { |srch| srch.scan(/[^.]+/) }
@_timeouts = Array(@resolver_options[:timeouts])
@timeouts = Hash.new { |timeouts, host| timeouts[host] = @_timeouts.dup }
@connections = []
@name = nil
@queries = {}
@read_buffer = "".b
@ -46,6 +48,10 @@ module HTTPX
transition(:closed)
end
def terminate
emit(:close, self)
end
def closed?
@state == :closed
end
@ -120,10 +126,7 @@ module HTTPX
@ns_index += 1
nameserver = @nameserver
if nameserver && @ns_index < nameserver.size
log do
"resolver #{FAMILY_TYPES[@record_type]}: " \
"failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})"
end
log { "resolver #{FAMILY_TYPES[@record_type]}: failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})" }
transition(:idle)
@timeouts.clear
retry
@ -158,9 +161,7 @@ module HTTPX
timeouts = @timeouts[h]
if !timeouts.empty?
log do
"resolver #{FAMILY_TYPES[@record_type]}: timeout after #{interval}s, retry (with #{timeouts.first}s) #{h}..."
end
log { "resolver #{FAMILY_TYPES[@record_type]}: timeout after #{interval}s, retry (with #{timeouts.first}s) #{h}..." }
# must downgrade to tcp AND retry on same host as last
downgrade_socket
resolve(connection, h)
@ -388,10 +389,9 @@ module HTTPX
if hostname.nil?
hostname = connection.peer.host
log do
"resolver #{FAMILY_TYPES[@record_type]}: " \
"resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}"
end if connection.peer.non_ascii_hostname
if connection.peer.non_ascii_hostname
log { "resolver #{FAMILY_TYPES[@record_type]}: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}" }
end
hostname = generate_candidates(hostname).each do |name|
@queries[name] = connection
@ -480,6 +480,7 @@ module HTTPX
@write_buffer.clear
@read_buffer.clear
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
@ -507,7 +508,7 @@ module HTTPX
end
while (connection = @connections.shift)
emit_resolve_error(connection, host, error)
emit_resolve_error(connection, connection.peer.host, error)
end
end
end

View File

@ -4,6 +4,9 @@ require "resolv"
require "ipaddr"
module HTTPX
# Base class for all internal internet name resolvers. It implements the basic
# building blocks of the Selectable API.
#
class Resolver::Resolver
include Callbacks
include Loggable
@ -36,6 +39,7 @@ module HTTPX
@family = family
@record_type = RECORD_TYPES[family]
@options = options
@connections = []
set_resolver_callbacks
end


@ -3,6 +3,15 @@
require "resolv"
module HTTPX
# Implementation of a synchronous name resolver which relies on the system resolver,
# which is libc's getaddrinfo function (abstracted in ruby via Addrinfo.getaddrinfo).
#
# Its main advantage is relying on the reference implementation for name resolution
# across most/all OSs which deploy ruby (it's what TCPSocket also uses), its main
# disadvantage is the inability to set timeouts / check socket for readiness events,
# hence why it relies on the Timeout module, which poses a lot of problems for
# the selector loop, especially when the network is unstable.
#
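The blocking call this resolver wraps is the stdlib `Addrinfo.getaddrinfo`, which only returns once the OS resolver answers and exposes no socket to select on, hence the trade-offs described above:

```ruby
# sketch of the blocking system lookup wrapped by the system resolver
require "socket"

addresses = Addrinfo.getaddrinfo("localhost", nil, nil, :STREAM)
addresses.empty?            # => false
addresses.map(&:ip_address) # e.g. ["127.0.0.1", "::1"]
```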
class Resolver::System < Resolver::Resolver
using URIExtensions
@ -23,14 +32,13 @@ module HTTPX
attr_reader :state
def initialize(options)
super(nil, options)
super(0, options)
@resolver_options = @options.resolver_options
resolv_options = @resolver_options.dup
timeouts = resolv_options.delete(:timeouts) || Resolver::RESOLVE_TIMEOUT
@_timeouts = Array(timeouts)
@timeouts = Hash.new { |tims, host| tims[host] = @_timeouts.dup }
resolv_options.delete(:cache)
@connections = []
@queries = []
@ips = []
@pipe_mutex = Thread::Mutex.new
@ -100,7 +108,14 @@ module HTTPX
def handle_socket_timeout(interval)
error = HTTPX::ResolveTimeoutError.new(interval, "timed out while waiting on select")
error.set_backtrace(caller)
on_error(error)
@queries.each do |host, connection|
@connections.delete(connection)
emit_resolve_error(connection, host, error)
end
while (connection = @connections.shift)
emit_resolve_error(connection, connection.peer.host, error)
end
end
private
@ -112,7 +127,7 @@ module HTTPX
when :open
return unless @state == :idle
@pipe_read, @pipe_write = ::IO.pipe
@pipe_read, @pipe_write = IO.pipe
when :closed
return unless @state == :open
@ -131,14 +146,16 @@ module HTTPX
case event
when DONE
*pair, addrs = @pipe_mutex.synchronize { @ips.pop }
if pair
@queries.delete(pair)
_, connection = pair
family, connection = pair
@connections.delete(connection)
family, connection = pair
catch(:coalesced) { emit_addresses(connection, family, addrs) }
end
when ERROR
*pair, error = @pipe_mutex.synchronize { @ips.pop }
if pair && error
@queries.delete(pair)
@connections.delete(connection)
@ -146,17 +163,23 @@ module HTTPX
emit_resolve_error(connection, connection.peer.host, error)
end
end
end
return emit(:close, self) if @connections.empty?
resolve
end
def resolve(connection = @connections.first)
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.first
raise Error, "no URI to resolve" unless connection
return unless @queries.empty?
hostname = connection.peer.host
hostname ||= connection.peer.host
scheme = connection.origin.scheme
log do
"resolver: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}"


@ -71,6 +71,14 @@ module HTTPX
@content_type = nil
end
# dupped initialization
def initialize_dup(orig)
super
# if a response gets dupped, the body handle must also get dupped to prevent
# two responses from using the same file handle to read.
@body = orig.body.dup
end
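The shared-handle hazard the dup above guards against is easy to reproduce with a plain StringIO: two references to the same IO share one read position, so reads through either advance both.

```ruby
# sketch of the shared-handle hazard prevented by dupping the body above
require "stringio"

io = StringIO.new("payload")
original = io
copy = io # not a copy: the same handle, with a shared read position

first = original.read(3) # => "pay"
rest = copy.read         # => "load" (the position was advanced through the other reference)

# a fresh handle over the same content starts reading from the beginning again
fresh = StringIO.new("payload")
whole = fresh.read # => "payload"
```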
# closes the respective +@request+ and +@body+.
def close
@request.close
@ -126,7 +134,7 @@ module HTTPX
# :nocov:
def inspect
"#<Response:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"HTTP/#{version} " \
"@status=#{@status} " \
"@headers=#{@headers} " \
@ -275,6 +283,8 @@ module HTTPX
true
end
def finish!; end
# raises the wrapped exception.
def raise_for_status
raise @error


@ -11,6 +11,9 @@ module HTTPX
# Array of encodings contained in the response "content-encoding" header.
attr_reader :encodings
attr_reader :buffer
protected :buffer
# initialized with the corresponding HTTPX::Response +response+ and HTTPX::Options +options+.
def initialize(response, options)
@response = response
@ -133,7 +136,7 @@ module HTTPX
if dest.respond_to?(:path) && @buffer.respond_to?(:path)
FileUtils.mv(@buffer.path, dest.path)
else
::IO.copy_stream(@buffer, dest)
IO.copy_stream(@buffer, dest)
end
end
@ -148,18 +151,17 @@ module HTTPX
end
def ==(other)
object_id == other.object_id || begin
if other.respond_to?(:read)
_with_same_buffer_pos { FileUtils.compare_stream(@buffer, other) }
super || case other
when Response::Body
@buffer == other.buffer
else
to_s == other.to_s
end
@buffer = other
end
end
# :nocov:
def inspect
"#<HTTPX::Response::Body:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"@state=#{@state} " \
"@length=#{@length}>"
end
@ -226,19 +228,6 @@ module HTTPX
@state = nextstate
end
def _with_same_buffer_pos # :nodoc:
return yield unless @buffer && @buffer.respond_to?(:pos)
# @type ivar @buffer: StringIO | Tempfile
current_pos = @buffer.pos
@buffer.rewind
begin
yield
ensure
@buffer.pos = current_pos
end
end
class << self
def initialize_inflater_by_encoding(encoding, response, **kwargs) # :nodoc:
case encoding


@ -7,6 +7,9 @@ require "tempfile"
module HTTPX
# wraps and delegates to an internal buffer, which can be a StringIO or a Tempfile.
class Response::Buffer < SimpleDelegator
attr_reader :buffer
protected :buffer
# initializes buffer with the +threshold_size+ over which the payload gets buffered to a tempfile,
# the initial +bytesize+, and the +encoding+.
def initialize(threshold_size:, bytesize: 0, encoding: Encoding::BINARY)
@ -20,7 +23,14 @@ module HTTPX
def initialize_dup(other)
super
@buffer = other.instance_variable_get(:@buffer).dup
# create new descriptor in READ-ONLY mode
@buffer =
case other.buffer
when StringIO
StringIO.new(other.buffer.string, mode: File::RDONLY)
else
other.buffer.class.new(other.buffer.path, encoding: Encoding::BINARY, mode: File::RDONLY)
end
end
# size in bytes of the buffered content.
@ -46,7 +56,7 @@ module HTTPX
end
when Tempfile
rewind
content = _with_same_buffer_pos { @buffer.read }
content = @buffer.read
begin
content.force_encoding(@encoding)
rescue ArgumentError # ex: unknown encoding name - utf
@ -61,6 +71,30 @@ module HTTPX
@buffer.unlink if @buffer.respond_to?(:unlink)
end
def ==(other)
super || begin
return false unless other.is_a?(Response::Buffer)
if @buffer.nil?
other.buffer.nil?
elsif @buffer.respond_to?(:read) &&
other.respond_to?(:read)
buffer_pos = @buffer.pos
other_pos = other.buffer.pos
@buffer.rewind
other.buffer.rewind
begin
FileUtils.compare_stream(@buffer, other.buffer)
ensure
@buffer.pos = buffer_pos
other.buffer.pos = other_pos
end
else
to_s == other.to_s
end
end
end
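The byte-for-byte comparison above relies on the stdlib `FileUtils.compare_stream`, which reads both IO-likes from their current positions; Tempfile stands in here for the buffer's tempfile backend:

```ruby
# sketch of FileUtils.compare_stream on file-backed buffers
require "fileutils"
require "tempfile"

a = Tempfile.new("cmp"); a.write("same content"); a.rewind
b = Tempfile.new("cmp"); b.write("same content"); b.rewind
c = Tempfile.new("cmp"); c.write("different");    c.rewind

equal = FileUtils.compare_stream(a, b) # => true
a.rewind # compare_stream leaves the stream at EOF, so rewind before reusing
differs = FileUtils.compare_stream(a, c) # => false
[a, b, c].each(&:close!)
```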
private
# initializes the buffer into a StringIO, or turns it into a Tempfile when the threshold
@ -76,21 +110,11 @@ module HTTPX
if aux
aux.rewind
::IO.copy_stream(aux, @buffer)
IO.copy_stream(aux, @buffer)
aux.close
end
__setobj__(@buffer)
end
def _with_same_buffer_pos # :nodoc:
current_pos = @buffer.pos
@buffer.rewind
begin
yield
ensure
@buffer.pos = current_pos
end
end
end
end


@ -35,14 +35,21 @@ module HTTPX
end
begin
select(timeout, &:call)
select(timeout) do |c|
c.log(level: 2) { "[#{c.state}] selected#{" after #{timeout} secs" unless timeout.nil?}..." }
c.call
end
@timers.fire
rescue TimeoutError => e
@timers.fire(e)
end
end
rescue StandardError => e
emit_error(e)
each_connection do |c|
c.emit(:error, e)
end
rescue Exception # rubocop:disable Lint/RescueException
each_connection do |conn|
conn.force_reset
@ -56,7 +63,10 @@ module HTTPX
# array may change during iteration
selectables = @selectables.reject(&:inflight?)
selectables.each(&:terminate)
selectables.delete_if do |sel|
sel.terminate
sel.state == :closed
end
until selectables.empty?
next_tick
@ -77,9 +87,10 @@ module HTTPX
return enum_for(__method__) unless block
@selectables.each do |c|
if c.is_a?(Resolver::Resolver)
case c
when Resolver::Resolver
c.each_connection(&block)
else
when Connection
yield c
end
end
@ -133,6 +144,8 @@ module HTTPX
@selectables.delete_if do |io|
interests = io.interests
io.log(level: 2) { "[#{io.state}] registering for select (#{interests})#{" for #{interval} seconds" unless interval.nil?}" }
(r ||= []) << io if READABLE.include?(interests)
(w ||= []) << io if WRITABLE.include?(interests)
@ -169,6 +182,8 @@ module HTTPX
interests = io.interests
io.log(level: 2) { "[#{io.state}] registering for select (#{interests})#{" for #{interval} seconds" unless interval.nil?}" }
result = case interests
when :r then io.to_io.wait_readable(interval)
when :w then io.to_io.wait_writable(interval)
@ -205,13 +220,5 @@ module HTTPX
connection_interval
end
def emit_error(e)
@selectables.each do |c|
next if c.is_a?(Resolver::Resolver)
c.emit(:error, e)
end
end
end
end


@ -15,11 +15,11 @@ module HTTPX
# When passed a block, it yields itself to it, then closes after the block is evaluated.
def initialize(options = EMPTY_HASH, &blk)
@options = self.class.default_options.merge(options)
@responses = {}
@persistent = @options.persistent
@pool = @options.pool_class.new(@options.pool_options)
@wrapped = false
@closing = false
INSTANCES[self] = self if @persistent && @options.close_on_fork && INSTANCES
wrap(&blk) if blk
end
@ -136,6 +136,9 @@ module HTTPX
alias_method :select_resolver, :select_connection
def deselect_connection(connection, selector, cloned = false)
connection.log(level: 2) do
"deregistering connection##{connection.object_id}(#{connection.state}) from selector##{selector.object_id}"
end
selector.deregister(connection)
# when connections coalesce
@ -145,14 +148,19 @@ module HTTPX
return if @closing && connection.state == :closed
connection.log(level: 2) { "check-in connection##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
@pool.checkin_connection(connection)
end
def deselect_resolver(resolver, selector)
resolver.log(level: 2) do
"deregistering resolver##{resolver.object_id}(#{resolver.state}) from selector##{selector.object_id}"
end
selector.deregister(resolver)
return if @closing && resolver.closed?
resolver.log(level: 2) { "check-in resolver##{resolver.object_id}(#{resolver.state}) in pool##{@pool.object_id}" }
@pool.checkin_resolver(resolver)
end
@ -174,11 +182,15 @@ module HTTPX
# returns the HTTPX::Connection through which the +request+ should be sent through.
def find_connection(request_uri, selector, options)
if (connection = selector.find_connection(request_uri, options))
connection.idling if connection.state == :closed
connection.log(level: 2) { "found connection##{connection.object_id}(#{connection.state}) in selector##{selector.object_id}" }
return connection
end
connection = @pool.checkout_connection(request_uri, options)
connection.log(level: 2) { "found connection##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
case connection.state
when :idle
do_init_connection(connection, selector)
@ -188,14 +200,9 @@ module HTTPX
else
pin_connection(connection, selector)
end
when :closed
when :closing, :closed
connection.idling
select_connection(connection, selector)
when :closing
connection.once(:close) do
connection.idling
select_connection(connection, selector)
end
else
pin_connection(connection, selector)
end
@ -212,11 +219,6 @@ module HTTPX
end
end
# callback executed when a response for a given request has been received.
def on_response(request, response)
@responses[request] = response
end
# callback executed when an HTTP/2 promise frame has been received.
def on_promise(_, stream)
log(level: 2) { "#{stream.id}: refusing stream!" }
@ -225,7 +227,13 @@ module HTTPX
# returns the corresponding HTTP::Response to the given +request+ if it has been received.
def fetch_response(request, _selector, _options)
@responses.delete(request)
response = request.response
return unless response && response.finished?
log(level: 2) { "response fetched" }
response
end
# sends the +request+ to the corresponding HTTPX::Connection
@ -242,7 +250,9 @@ module HTTPX
raise error unless error.is_a?(Error)
request.emit(:response, ErrorResponse.new(request, error))
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
# returns a set of HTTPX::Request objects built from the given +args+ and +options+.
@ -272,7 +282,6 @@ module HTTPX
end
def set_request_callbacks(request)
request.on(:response, &method(:on_response).curry(2)[request])
request.on(:promise, &method(:on_promise))
end
@ -306,8 +315,7 @@ module HTTPX
# returns the array of HTTPX::Response objects corresponding to the array of HTTPX::Request +requests+.
def receive_requests(requests, selector)
# @type var responses: Array[response]
responses = []
responses = [] # : Array[response]
# guarantee ordered responses
loop do
@ -329,12 +337,30 @@ module HTTPX
# handshake error, and the error responses have already been emitted, but there was no
# opportunity to traverse the requests, hence we're returning only a fraction of the errors
# we were supposed to. This effectively fetches the existing responses and returns them.
while (request = requests.shift)
response = fetch_response(request, selector, request.options)
request.emit(:complete, response) if response
exit_from_loop = true
requests_to_remove = [] # : Array[Request]
requests.each do |req|
response = fetch_response(req, selector, request.options)
if exit_from_loop && response
req.emit(:complete, response)
responses << response
requests_to_remove << req
else
# fetch_response may resend requests. when that happens, we need to go back to the initial
# loop and process the selector. we still do a pass-through on the remainder of requests, so
# that every request that needs to be resent is resent.
exit_from_loop = false
raise Error, "something went wrong, responses not found and requests not resent" if selector.empty?
end
break
end
break if exit_from_loop
requests -= requests_to_remove
end
responses
end
@ -367,13 +393,12 @@ module HTTPX
return select_connection(connection, selector) unless found_connection
if found_connection.open?
coalesce_connections(found_connection, connection, selector, from_pool)
else
found_connection.once(:open) do
coalesce_connections(found_connection, connection, selector, from_pool)
end
connection.log(level: 2) do
"try coalescing from #{from_pool ? "pool##{@pool.object_id}" : "selector##{selector.object_id}"} " \
"(conn##{found_connection.object_id}[#{found_connection.origin}])"
end
coalesce_connections(found_connection, connection, selector, from_pool)
end
def on_resolver_close(resolver, selector)
@ -384,13 +409,15 @@ module HTTPX
end
def find_resolver_for(connection, selector)
resolver = selector.find_resolver(connection.options)
if (resolver = selector.find_resolver(connection.options))
resolver.log(level: 2) { "found resolver##{connection.object_id}(#{connection.state}) in selector##{selector.object_id}" }
return resolver
end
unless resolver
resolver = @pool.checkout_resolver(connection.options)
resolver.log(level: 2) { "found resolver##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
resolver.current_session = self
resolver.current_selector = selector
end
resolver
end
@ -399,14 +426,19 @@ module HTTPX
# (it is known via +from_pool+), then it adds it to the +selector+.
def coalesce_connections(conn1, conn2, selector, from_pool)
unless conn1.coalescable?(conn2)
conn2.log(level: 2) { "not coalescing with conn##{conn1.object_id}[#{conn1.origin}])" }
select_connection(conn2, selector)
@pool.checkin_connection(conn1) if from_pool
if from_pool
conn1.log(level: 2) { "check-in connection##{conn1.object_id}(#{conn1.state}) in pool##{@pool.object_id}" }
@pool.checkin_connection(conn1)
end
return false
end
conn2.coalesced_connection = conn1
conn2.log(level: 2) { "coalescing with conn##{conn1.object_id}[#{conn1.origin}])" }
conn2.coalesce!(conn1)
select_connection(conn1, selector) if from_pool
deselect_connection(conn2, selector)
conn2.disconnect
true
end
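The coalescing logic above reuses an HTTP/2 connection for a second origin. As a toy illustration (hypothetical `Conn` struct and fields, not the httpx internals): a connection can serve another origin when both resolve to the same peer and its TLS certificate also covers the new host.

```ruby
require "uri"

# Toy model of the coalescing predicate: same peer IP, and the existing
# connection's certificate covers the new origin's hostname.
Conn = Struct.new(:origin, :peer_ip, :cert_hosts) do
  def coalescable?(other)
    peer_ip == other.peer_ip && cert_hosts.include?(URI(other.origin).host)
  end
end

conn1 = Conn.new("https://example.com", "93.184.216.34", %w[example.com www.example.com])
conn2 = Conn.new("https://www.example.com", "93.184.216.34", %w[www.example.com])

conn1.coalescable?(conn2) # => true
```

In the real implementation the certificate check goes through the TLS session's peer cert verification rather than a plain host list.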
@ -451,6 +483,7 @@ module HTTPX
# session_with_custom = session.plugin(CustomPlugin)
#
def plugin(pl, options = nil, &block)
label = pl
# raise Error, "Cannot add a plugin to a frozen config" if frozen?
pl = Plugins.load_plugin(pl) if pl.is_a?(Symbol)
if !@plugins.include?(pl)
@ -475,9 +508,36 @@ module HTTPX
@default_options = pl.extra_options(@default_options) if pl.respond_to?(:extra_options)
@default_options = @default_options.merge(options) if options
if pl.respond_to?(:subplugins)
pl.subplugins.transform_keys(&Plugins.method(:load_plugin)).each do |main_pl, sub_pl|
# in case the main plugin has already been loaded, apply the subplugin
# functionality immediately
next unless @plugins.include?(main_pl)
plugin(sub_pl, options, &block)
end
end
pl.configure(self, &block) if pl.respond_to?(:configure)
if label.is_a?(Symbol)
# in case an already-loaded plugin complements functionality of
# the plugin currently being loaded, load it now
@plugins.each do |registered_pl|
next if registered_pl == pl
next unless registered_pl.respond_to?(:subplugins)
sub_pl = registered_pl.subplugins[label]
next unless sub_pl
plugin(sub_pl, options, &block)
end
end
@default_options.freeze
set_temporary_name("#{superclass}/#{pl}") if respond_to?(:set_temporary_name) # ruby 3.4 only
elsif options
# this can happen when two plugins are loaded, and one of them calls the other under the hood,
# albeit changing some default.
@ -486,9 +546,40 @@ module HTTPX
@default_options.freeze
end
self
end
end
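The subplugin resolution in `plugin` above works in both directions: the plugin being loaded may declare subplugins for already-loaded plugins, and an already-loaded plugin may declare a subplugin for the one being loaded now. A toy model of that double dispatch (illustrative names, not the httpx API):

```ruby
# A plugin declares, via .subplugins, modules that should only be loaded
# when another plugin (identified by its registry label) is also present.
module Retries; end

module Tracing
  RetriesTracing = Module.new

  def self.subplugins
    { retries: RetriesTracing }
  end
end

REGISTRY = { retries: Retries, tracing: Tracing }.freeze
loaded = []

load_plugin = lambda do |label|
  pl = REGISTRY.fetch(label, label) # subplugins arrive as bare modules
  next if loaded.include?(pl)
  loaded << pl
  # forward: the plugin being loaded complements an already-loaded plugin
  if pl.respond_to?(:subplugins)
    pl.subplugins.each { |main, sub| load_plugin.call(sub) if loaded.include?(REGISTRY[main]) }
  end
  # reverse: an already-loaded plugin complements the one being loaded now
  loaded.dup.each do |reg|
    next unless reg.respond_to?(:subplugins)
    sub = reg.subplugins[label]
    load_plugin.call(sub) if sub
  end
end

load_plugin.call(:tracing)
load_plugin.call(:retries) # also pulls in Tracing::RetriesTracing
```

Whichever order the two plugins are loaded in, the complementary module ends up loaded exactly once.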
# setup of the support for close_on_fork sessions.
# adapted from https://github.com/mperham/connection_pool/blob/main/lib/connection_pool.rb#L48
if Process.respond_to?(:fork)
INSTANCES = ObjectSpace::WeakMap.new
private_constant :INSTANCES
def self.after_fork
INSTANCES.each_value(&:close)
nil
end
if ::Process.respond_to?(:_fork)
module ForkTracker
def _fork
pid = super
Session.after_fork if pid.zero?
pid
end
end
Process.singleton_class.prepend(ForkTracker)
end
else
INSTANCES = nil
private_constant :INSTANCES
def self.after_fork
# noop
end
end
end
# session may be overridden by certain adapters.
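The close-on-fork setup above can be sketched in isolation (toy class names; the real code lives on `HTTPX::Session`): live instances are tracked in an `ObjectSpace::WeakMap` so GC can still reclaim them, and a module prepended onto `Process._fork` closes them in the child process.

```ruby
# Sketch of the fork-tracking pattern adapted from connection_pool.
class TrackedSession
  INSTANCES = ObjectSpace::WeakMap.new

  attr_reader :closed

  def initialize
    @closed = false
    INSTANCES[self] = self # weak reference: does not pin the session in memory
  end

  def close
    @closed = true
  end

  def self.after_fork
    INSTANCES.each_value(&:close)
    nil
  end
end

if Process.respond_to?(:_fork) # ruby >= 3.1
  module ForkCloser
    def _fork
      pid = super
      TrackedSession.after_fork if pid.zero? # runs only in the child
      pid
    end
  end
  Process.singleton_class.prepend(ForkCloser)
end
```

Prepending on `Process.singleton_class` means the hook fires for `Process.fork`, `Kernel#fork`, and anything else that funnels through `Process._fork`.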

View File

@ -7,17 +7,16 @@ module HTTPX
end
def after(interval_in_secs, cb = nil, &blk)
return unless interval_in_secs
callback = cb || blk
raise Error, "timer must have a callback" unless callback
# I'm assuming here that most requests will have the same
# request timeout, as in most cases they share common set of
# options. A user setting different request timeouts for 100s of
# requests will already have a hard time dealing with that.
unless (interval = @intervals.find { |t| t.interval == interval_in_secs })
unless (interval = @intervals.bsearch { |t| t.interval == interval_in_secs })
interval = Interval.new(interval_in_secs)
interval.on_empty { @intervals.delete(interval) }
@intervals << interval
@intervals.sort!
end
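The change above swaps `Array#find` for `Array#bsearch` on the `@intervals` list, which is kept sorted via `sort!`. For reference, `Array#bsearch` in its documented find-any mode takes a block returning `target <=> element` and halves the search space at each step; a small sketch with a toy `Interval`:

```ruby
# Binary search over a sorted interval list (toy Interval struct).
Interval = Struct.new(:interval)

intervals = [0.5, 1, 2, 5, 30].map { |secs| Interval.new(secs) }

found = intervals.bsearch { |t| 5 <=> t.interval }
found.interval # => 5

missing = intervals.bsearch { |t| 3 <=> t.interval }
missing # => nil
```

This only works because the list is sorted; an unsorted list makes `bsearch` results undefined.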
@ -30,6 +29,8 @@ module HTTPX
end
def wait_interval
drop_elapsed!
return if @intervals.empty?
@next_interval_at = Utils.now
@ -43,11 +44,25 @@ module HTTPX
elapsed_time = Utils.elapsed_time(@next_interval_at)
drop_elapsed!(elapsed_time)
@intervals = @intervals.drop_while { |interval| interval.elapse(elapsed_time) <= 0 }
@next_interval_at = nil if @intervals.empty?
end
private
def drop_elapsed!(elapsed_time = 0)
# check the first interval; if it has not elapsed yet, return early
first_interval = @intervals.first
return unless first_interval && first_interval.elapsed?(elapsed_time)
# TODO: would be nice to have a drop_while!
@intervals = @intervals.drop_while { |interval| interval.elapse(elapsed_time) <= 0 }
end
class Timer
def initialize(interval, callback)
@interval = interval
@ -67,15 +82,6 @@ module HTTPX
def initialize(interval)
@interval = interval
@callbacks = []
@on_empty = nil
end
def on_empty(&blk)
@on_empty = blk
end
def cancel
@on_empty.call
end
def <=>(other)
@ -98,18 +104,20 @@ module HTTPX
def delete(callback)
@callbacks.delete(callback)
@on_empty.call if @callbacks.empty?
end
def no_callbacks?
@callbacks.empty?
end
def elapsed?
@interval <= 0
def elapsed?(elapsed = 0)
(@interval - elapsed) <= 0 || @callbacks.empty?
end
def elapse(elapsed)
# no callbacks left: treat the interval as already elapsed
return 0 if @callbacks.empty?
@interval -= elapsed
if @interval <= 0

View File

@ -12,6 +12,7 @@ module HTTPX
def initialize(filename, content_type)
@original_filename = filename
@content_type = content_type
@current = nil
@file = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
super(@file)
end
@ -68,11 +69,12 @@ module HTTPX
# raise Error, "couldn't parse part headers" unless idx
return unless idx
# @type var head: String
head = @buffer.byteslice(0..idx + 4 - 1)
@buffer = @buffer.byteslice(head.bytesize..-1)
content_type = head[MULTIPART_CONTENT_TYPE, 1]
content_type = head[MULTIPART_CONTENT_TYPE, 1] || "text/plain"
if (name = head[MULTIPART_CONTENT_DISPOSITION, 1])
name = /\A"(.*)"\Z/ =~ name ? Regexp.last_match(1) : name.dup
name.gsub!(/\\(.)/, "\\1")
@ -83,7 +85,7 @@ module HTTPX
filename = HTTPX::Utils.get_filename(head)
name = filename || +"#{content_type || "text/plain"}[]" if name.nil? || name.empty?
name = filename || +"#{content_type}[]" if name.nil? || name.empty?
@current = name

View File

@ -20,7 +20,7 @@ module HTTPX
end
def to_s
read
read || ""
ensure
rewind
end
@ -37,6 +37,7 @@ module HTTPX
def rewind
form = @form.each_with_object([]) do |(key, val), aux|
if val.respond_to?(:path) && val.respond_to?(:reopen) && val.respond_to?(:closed?) && val.closed?
# @type var val: File
val = val.reopen(val.path, File::RDONLY)
end
val.rewind if val.respond_to?(:rewind)

View File

@ -44,7 +44,7 @@ module HTTPX
Open3.popen3(*%w[file --mime-type --brief -]) do |stdin, stdout, stderr, thread|
begin
::IO.copy_stream(file, stdin.binmode)
IO.copy_stream(file, stdin.binmode)
rescue Errno::EPIPE
end
file.rewind

View File

@ -63,7 +63,7 @@ module HTTPX
buffer = Response::Buffer.new(
threshold_size: Options::MAX_BODY_THRESHOLD_SIZE
)
::IO.copy_stream(self, buffer)
IO.copy_stream(self, buffer)
buffer.rewind if buffer.respond_to?(:rewind)

View File

@ -1,5 +1,5 @@
# frozen_string_literal: true
module HTTPX
VERSION = "1.4.3"
VERSION = "1.5.1"
end

View File

@ -36,4 +36,5 @@ class Bug_0_22_2_Test < Minitest::Test
assert connection_ipv4.family == Socket::AF_INET
assert connection_ipv6.family == Socket::AF_INET6
end
end if HTTPX::Session.default_options.ip_families.size > 1
# TODO: remove this once gitlab docker allows TCP connectivity alongside DNS
end unless ENV.key?("CI")

View File

@ -11,7 +11,7 @@ class Bug_1_1_0_Test < Minitest::Test
include HTTPHelpers
def test_read_timeout_firing_too_soon_before_select
timeout = { read_timeout: 1 }
timeout = { read_timeout: 2 }
uri = build_uri("/get")

View File

@ -6,7 +6,7 @@ require "support/http_helpers"
class Bug_1_1_1_Test < Minitest::Test
include HTTPHelpers
def test_conection_callbacks_fire_setup_once
def test_connection_callbacks_fire_setup_once
uri = build_uri("/get")
connected = 0

View File

@ -3,7 +3,6 @@
require "test_helper"
require "support/http_helpers"
require "webmock/minitest"
require "httpx/adapters/webmock"
class Bug_1_4_1_Test < Minitest::Test
include HTTPHelpers
@ -34,9 +33,9 @@ class Bug_1_4_1_Test < Minitest::Test
# first connection is set to inactive
sleep(2)
response = persistent_session.get(uri)
verify_error_response(response, HTTPX::Connection::HTTP2::GoawayError)
verify_status(response, 200)
assert persistent_session.connections.size == 2, "should have been just 1"
assert persistent_session.connections.first.state == :closed
assert(persistent_session.connections.one? { |c| c.state == :closed })
ensure
persistent_session.close
end

View File

@ -0,0 +1,81 @@
# frozen_string_literal: true
require "test_helper"
require "support/http_helpers"
require "webmock/minitest"
require "httpx/adapters/webmock"
class Bug_1_4_1_Test < Minitest::Test
include HTTPHelpers
def test_persistent_do_not_exhaust_retry_on_eof_error
start_test_servlet(KeepAlivePongThenCloseSocketServer) do |server|
persistent_session = HTTPX.plugin(SessionWithPool)
.plugin(:persistent)
.with(ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE })
.with(timeout: { keep_alive_timeout: 1 })
uri = "#{server.origin}/"
# artificially create two connections
responses = 2.times.map do
Thread.new do
Thread.current.abort_on_exception = true
Thread.current.report_on_exception = true
persistent_session.get(uri)
end
end.map(&:value)
responses.each do |response|
verify_status(response, 200)
end
conns1 = persistent_session.connections
assert conns1.size == 2, "should have started two different connections to the same origin"
assert conns1.none? { |c| c.state == :closed }, "all connections should have been open"
# both connections are shutdown by the server
sleep(2)
response = persistent_session.get(uri)
verify_status(response, 200) # should not raise EOFError
conns2 = persistent_session.connections
assert conns2.size == 2
assert conns2.count { |c| c.state == :closed } == 1, "one of them should have been closed"
ensure
persistent_session.close
end
end
end
class OnPingDisconnectServer < TestHTTP2Server
module GoAwayOnFirstPing
attr_accessor :num_requests
def activate_stream(*, **)
super.tap do
@num_requests += 1
end
end
def ping_management(*)
if @num_requests == 1
@num_requests = 0
goaway
else
super
end
end
end
def initialize(*)
super
@num_requests = Hash.new(0)
end
private
def handle_connection(conn, _)
super
conn.extend(GoAwayOnFirstPing)
conn.num_requests = 0
end
end

View File

@ -0,0 +1,80 @@
# frozen_string_literal: true
require "test_helper"
require "support/http_helpers"
require "webmock/minitest"
require "httpx/adapters/webmock"
class Bug_1_5_0_Test < Minitest::Test
include HTTPHelpers
def test_persistent_do_not_exhaust_retry_on_eof_error
start_test_servlet(KeepAlivePongThenGoawayServer) do |server|
persistent_session = HTTPX.plugin(SessionWithPool)
.plugin(:persistent)
.with(ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE })
uri = "#{server.origin}/"
# artificially create two connections
responses = 2.times.map do
Thread.new do
Thread.current.abort_on_exception = true
Thread.current.report_on_exception = true
persistent_session.get(uri)
end
end.map(&:value)
responses.each do |response|
verify_status(response, 200)
end
conns1 = persistent_session.connections
assert conns1.size == 2, "should have started two different connections to the same origin"
assert conns1.none? { |c| c.state == :closed }, "all connections should have been open"
sleep(2)
response = persistent_session.get(uri)
verify_status(response, 200) # should not raise GoAwayError
conns2 = persistent_session.connections
assert conns2.size == 2
assert conns2.count { |c| c.state == :closed } == 1, "one of them should have been closed"
ensure
persistent_session.close
end
end
end
class OnPingDisconnectServer < TestHTTP2Server
module GoAwayOnFirstPing
attr_accessor :num_requests
def activate_stream(*, **)
super.tap do
@num_requests += 1
end
end
def ping_management(*)
if @num_requests == 1
@num_requests = 0
goaway
else
super
end
end
end
def initialize(*)
super
@num_requests = Hash.new(0)
end
private
def handle_connection(conn, _)
super
conn.extend(GoAwayOnFirstPing)
conn.num_requests = 0
end
end

View File

@ -14,7 +14,7 @@ module HTTPX
def capacity: () -> Integer
# delegated
def <<: (string data) -> String
def <<: (String data) -> String
def empty?: () -> bool
def bytesize: () -> (Integer | Float)
def clear: () -> void

View File

@ -8,7 +8,7 @@ module HTTPX
def once: (Symbol) { (*untyped) -> void } -> ^(*untyped) -> void
def emit: (Symbol, *untyped) -> void
def callbacks_for?: (Symbol) -> bool
def callbacks_for?: (Symbol) -> boolish
def callbacks: () -> Hash[Symbol, Array[_Callable]]
| (Symbol) -> Array[_Callable]
end

View File

@ -21,14 +21,16 @@ module HTTPX
| (:cookies, ?options) -> Plugins::sessionCookies
| (:expect, ?options) -> Session
| (:follow_redirects, ?options) -> Plugins::sessionFollowRedirects
| (:upgrade, ?options) -> Session
| (:h2c, ?options) -> Session
| (:upgrade, ?options) -> Plugins::sessionUpgrade
| (:h2c, ?options) -> Plugins::sessionUpgrade
| (:h2, ?options) -> Plugins::sessionUpgrade
| (:persistent, ?options) -> Plugins::sessionPersistent
| (:proxy, ?options) -> (Plugins::sessionProxy & Plugins::httpProxy)
| (:push_promise, ?options) -> Plugins::sessionPushPromise
| (:retries, ?options) -> Plugins::sessionRetries
| (:rate_limiter, ?options) -> Session
| (:stream, ?options) -> Plugins::sessionStream
| (:stream_bidi, ?options) -> Plugins::sessionStreamBidi
| (:aws_sigv4, ?options) -> Plugins::awsSigV4Session
| (:grpc, ?options) -> Plugins::grpcSession
| (:response_cache, ?options) -> Plugins::sessionResponseCache
@ -39,6 +41,7 @@ module HTTPX
| (:ssrf_filter, ?options) -> Plugins::sessionSsrf
| (:webdav, ?options) -> Plugins::sessionWebDav
| (:xml, ?options) -> Plugins::sessionXML
| (:query, ?options) -> Plugins::sessionQuery
| (Symbol | Module, ?options) { (Class) -> void } -> Session
| (Symbol | Module, ?options) -> Session

View File

@ -20,6 +20,7 @@ module HTTPX
attr_reader type: io_type
attr_reader io: TCP | SSL | UNIX | nil
attr_reader origin: http_uri
attr_reader origins: Array[String]
attr_reader state: Symbol
@ -39,7 +40,6 @@ module HTTPX
@keep_alive_timeout: Numeric?
@timeout: Numeric?
@current_timeout: Numeric?
@io: TCP | SSL | UNIX
@parser: Object & _Parser
@connected_at: Float
@response_received_at: Float
@ -64,6 +64,8 @@ module HTTPX
def mergeable?: (Connection connection) -> bool
def coalesce!: (instance connection) -> void
def coalescable?: (Connection connection) -> bool
def create_idle: (?Hash[Symbol, untyped] options) -> instance
@ -80,7 +82,7 @@ module HTTPX
def interests: () -> io_interests?
def to_io: () -> ::IO
def to_io: () -> IO
def call: () -> void
@ -104,8 +106,6 @@ module HTTPX
def handle_socket_timeout: (Numeric interval) -> void
def coalesced_connection=: (instance connection) -> void
def sibling=: (instance? connection) -> void
def handle_connect_error: (StandardError error) -> void
@ -164,6 +164,6 @@ module HTTPX
def set_request_timeout: (Symbol label, Request request, Numeric timeout, Symbol start_event, Symbol | Array[Symbol] finish_events) { () -> void } -> void
def self.parser_type: (String protocol) -> (singleton(HTTP1) | singleton(HTTP2))
def parser_type: (String protocol) -> (singleton(HTTP1) | singleton(HTTP2))
end
end

View File

@ -16,6 +16,8 @@ module HTTPX
@drains: Hash[Request, String]
@pings: Array[String]
@buffer: Buffer
@handshake_completed: bool
@wait_for_handshake: bool
def interests: () -> io_interests?
@ -27,8 +29,6 @@ module HTTPX
def <<: (string) -> void
def can_buffer_more_requests?: () -> bool
def send: (Request request, ?bool head) -> bool
def consume: () -> void
@ -45,6 +45,8 @@ module HTTPX
def initialize: (Buffer buffer, Options options) -> untyped
def can_buffer_more_requests?: () -> bool
def send_pending: () -> void
def set_protocol_headers: (Request) -> _Each[[String, String]]
@ -63,6 +65,10 @@ module HTTPX
def join_body: (::HTTP2::Stream stream, Request request) -> void
def send_chunk: (Request request, ::HTTP2::Stream stream, String chunk, String? next_chunk) -> void
def end_stream?: (Request request, String? next_chunk) -> void
def on_stream_headers: (::HTTP2::Stream stream, Request request, Array[[String, String]] headers) -> void
def on_stream_trailers: (::HTTP2::Stream stream, Response response, Array[[String, String]] headers) -> void
@ -92,12 +98,15 @@ module HTTPX
def on_pong: (string ping) -> void
class Error < ::HTTPX::Error
def initialize: (Integer id, Symbol | StandardError error) -> void
end
class GoawayError < Error
def initialize: () -> void
end
class PingError < Error
def initialize: () -> void
end
end
end

View File

@ -18,9 +18,6 @@ module HTTPX
end
class PoolTimeoutError < TimeoutError
attr_reader origin: String
def initialize: (String origin, Numeric timeout) -> void
end
class ConnectTimeoutError < TimeoutError

View File

@ -2,7 +2,7 @@ module HTTPX
class Headers
include _ToS
@headers: Hash[String, Array[String]]
@headers: Hash[String, Array[_ToS]]
def self.new: (?untyped headers) -> instance
@ -11,34 +11,39 @@ module HTTPX
def []: (String field) -> String?
def []=: (String field, headers_value value) -> void
def add: (String field, string value) -> void
def delete: (String field) -> Array[String]?
def add: (String field, String value) -> void
def delete: (String field) -> Array[_ToS]?
def each: (?_Each[[String, String]]? extra_headers) { (String k, String v) -> void } -> void
| (?_Each[[String, String]]? extra_headers) -> Enumerable[[String, String]]
def get: (String field) -> Array[String]
def get: (String field) -> Array[_ToS]
def key?: (String downcased_key) -> bool
def merge: (_Each[[String, headers_value]] other) -> Headers
def same_headers?: (untyped headers) -> bool
def empty?: () -> bool
def to_a: () -> Array[[String, String]]
def to_hash: () -> Hash[String, String]
alias to_h to_hash
def inspect: () -> String
private
def initialize: (?headers?) -> untyped
def array_value: (headers_value) -> Array[String]
def downcased: (_ToS field) -> String
def initialize: (?(headers_input | instance)?) -> void
def array_value: (headers_value value) -> Array[_ToS]
def downcased: (header_field field) -> String
end
type header_field = string | _ToS
type headers_value = _ToS | Array[_ToS]
type headers_hash = Hash[_ToS, headers_value]
type headers_input = headers_hash | Array[[_ToS, string]]
type headers_hash = Hash[header_field, headers_value]
type headers_input = headers_hash | Array[[header_field, headers_value]]
type headers = Headers | headers_input
end

View File

@ -2,6 +2,7 @@ module HTTPX
extend Chainable
EMPTY: Array[untyped]
EMPTY_HASH: Hash[untyped, untyped]
VERSION: String
@ -16,7 +17,10 @@ module HTTPX
type ip_family = Integer #Socket::AF_INET6 | Socket::AF_INET
module Plugins
def self?.load_plugin: (Symbol) -> Module
self.@plugins: Hash[Symbol, Module]
self.@plugins_mutex: Thread::Mutex
def self?.load_plugin: (Symbol name) -> Module
def self?.register_plugin: (Symbol, Module) -> void
end

View File

@ -1,6 +1,3 @@
module HTTPX
type io_type = "udp" | "tcp" | "ssl" | "unix"
module IO
end
end

View File

@ -14,12 +14,24 @@ module HTTPX
alias host ip
@io: Socket
@hostname: String
@options: Options
@fallback_protocol: String
@keep_open: bool
@ip_index: Integer
# TODO: lift when https://github.com/ruby/rbs/issues/1497 fixed
def initialize: (URI::Generic origin, Array[ipaddr]? addresses, Options options) ?{ (instance) -> void } -> void
def add_addresses: (Array[ipaddr] addrs) -> void
def to_io: () -> ::IO
def to_io: () -> IO
def protocol: () -> String

View File

@ -4,7 +4,7 @@ module HTTPX
def initialize: (String ip, Integer port, Options options) -> void
def to_io: () -> ::IO
def to_io: () -> IO
def connect: () -> void

View File

@ -11,5 +11,7 @@ module HTTPX
def log: (?level: Integer?, ?color: Symbol?, ?debug_level: Integer, ?debug: _IOLogger?) { () -> String } -> void
def log_exception: (Exception error, ?level: Integer, ?color: Symbol, ?debug_level: Integer, ?debug: _IOLogger?) -> void
def log_redact: (_ToS text, ?bool should_redact) -> String
end
end

View File

@ -13,6 +13,7 @@ module HTTPX
KEEP_ALIVE_TIMEOUT: Integer
SETTINGS_TIMEOUT: Integer
CLOSE_HANDSHAKE_TIMEOUT: Integer
SET_TEMPORARY_NAME: ^(Module mod, ?Symbol pl) -> void
DEFAULT_OPTIONS: Hash[Symbol, untyped]
REQUEST_BODY_IVARS: Array[Symbol]
@ -25,7 +26,7 @@ module HTTPX
attr_reader uri: URI?
# headers
attr_reader headers: Headers?
attr_reader headers: headers?
# timeout
attr_reader timeout: timeout
@ -89,6 +90,8 @@ module HTTPX
attr_reader response_body_class: singleton(Response::Body)
attr_reader options_class: singleton(Options)
attr_reader resolver_class: Symbol | Class
attr_reader ssl: Hash[Symbol, untyped]
@ -109,6 +112,9 @@ module HTTPX
# persistent
attr_reader persistent: bool
# close_on_fork
attr_reader close_on_fork: bool
# resolver_options
attr_reader resolver_options: Hash[Symbol, untyped]
@ -134,8 +140,6 @@ module HTTPX
def initialize: (?options options) -> void
def do_initialize: (?options options) -> void
def access_option: (Hash[Symbol, untyped] | Object | nil obj, Symbol k, Hash[Symbol, Symbol]? ivar_map) -> untyped
end

View File

@ -1,6 +1,8 @@
module HTTPX
module Plugins
module Cookies
type cookie_attributes = Hash[Symbol | String, top]
type jar = Jar | _Each[Jar::cookie]
interface _CookieOptions

View File

@ -1,7 +1,5 @@
module HTTPX
module Plugins::Cookies
type cookie_attributes = Hash[Symbol | String, top]
class Cookie
include Comparable
@ -33,7 +31,7 @@ module HTTPX
def cookie_value: () -> String
alias to_s cookie_value
def valid_for_uri?: (uri) -> bool
def valid_for_uri?: (http_uri uri) -> bool
def self.new: (Cookie) -> instance
| (cookie_attributes) -> instance

View File

@ -11,12 +11,12 @@ module HTTPX
def add: (Cookie name, ?String path) -> void
def []: (uri) -> Array[Cookie]
def []: (http_uri) -> Array[Cookie]
def each: (?uri?) { (Cookie) -> void } -> void
| (?uri?) -> Enumerable[Cookie]
def each: (?http_uri?) { (Cookie) -> void } -> void
| (?http_uri?) -> Enumerable[Cookie]
def merge: (_Each[cookie] cookies) -> instance
def merge: (_Each[cookie] cookies) -> self
private

View File

@ -0,0 +1,22 @@
module HTTPX
module Plugins::Cookies
module SetCookieParser
RE_WSP: Regexp
RE_NAME: Regexp
RE_BAD_CHAR: Regexp
RE_COOKIE_COMMA: Regexp
def self?.call: (String set_cookie) { (String name, String value, cookie_attributes attrs) -> void } -> void
def self?.scan_dquoted: (StringScanner scanner) -> String
def self?.scan_value: (StringScanner scanner, ?bool comma_as_separator) -> String
def self?.scan_name_value: (StringScanner scanner, ?bool comma_as_separator) -> [String?, String?]
end
end
end

View File

@ -11,6 +11,10 @@ module HTTPX
def upgrade: (Request, Response) -> void
end
module RequestMethods
def valid_h2c_verb?: () -> bool
end
module ConnectionMethods
def upgrade_to_h2c: (Request, Response) -> void
end

View File

@ -1,8 +1,9 @@
module HTTPX
class HTTPProxyError < ConnectionError
class ProxyError < ConnectionError
end
class ProxySSL < SSL
attr_reader proxy_io: TCP | SSL
end
module Plugins
@ -11,7 +12,9 @@ module HTTPX
end
module Proxy
Error: singleton(HTTPProxyError)
class ProxyConnectionError < ProxyError
end
PROXY_ERRORS: Array[singleton(StandardError)]
class Parameters
@ -49,6 +52,10 @@ module HTTPX
def self.extra_options: (Options) -> (Options & _ProxyOptions)
module ConnectionMethods
@proxy_uri: generic_uri
end
module InstanceMethods
@__proxy_uris: Array[generic_uri]

View File

@ -16,6 +16,9 @@ module HTTPX
def __http_on_connect: (top, Response) -> void
end
module ProxyParser
end
class ConnectRequest < Request
def initialize: (generic_uri uri, Options options) -> void
end

View File

@ -1,5 +1,5 @@
module HTTPX
class Socks4Error < HTTPProxyError
class Socks4Error < ProxyError
end
module Plugins

View File

@ -1,5 +1,5 @@
module HTTPX
class Socks5Error < HTTPProxyError
class Socks5Error < ProxyError
end
module Plugins

sig/plugins/query.rbs Normal file
View File

@ -0,0 +1,18 @@
module HTTPX
module Plugins
module Query
def self.subplugins: () -> Hash[Symbol, Module]
module InstanceMethods
def query: (uri | [uri], **untyped) -> response
| (_Each[uri | [uri, request_params]], **untyped) -> Array[response]
end
module QueryRetries
module InstanceMethods
end
end
end
type sessionQuery = Session & Query::InstanceMethods
end
end

View File

@ -3,47 +3,62 @@ module HTTPX
module ResponseCache
CACHEABLE_VERBS: Array[verb]
CACHEABLE_STATUS_CODES: Array[Integer]
SUPPORTED_VARY_HEADERS: Array[String]
def self?.cacheable_request?: (Request & RequestMethods request) -> bool
def self?.cacheable_response?: (::HTTPX::ErrorResponse | (Response & ResponseMethods) response) -> bool
def self?.cached_response?: (response response) -> bool
def self?.cacheable_response?: (::HTTPX::ErrorResponse | cacheResponse response) -> bool
class Store
@store: Hash[String, Array[Response]]
def self?.not_modified?: (response response) -> bool
@store_mutex: Thread::Mutex
interface _ResponseCacheOptions
def response_cache_store: () -> Store
def lookup: (Request request) -> Response?
def supported_vary_headers: () -> Array[String]
end
def cached?: (Request request) -> boolish
interface _ResponseCacheStore
def get: (cacheRequest request) -> cacheResponse?
def cache: (Request request, Response response) -> void
def set: (cacheRequest request, cacheResponse response) -> void
def prepare: (Request request) -> void
private
def match_by_vary?: (Request request, Response response) -> bool
def _get: (Request request) -> Array[Response]?
def _set: (Request request, Response response) -> void
def clear: () -> void
end
module InstanceMethods
@response_cache: Store
def clear_response_cache: () -> void
private
def prepare_cache: (cacheRequest request) -> void
def cacheable_request?: (cacheRequest request) -> bool
def match_by_vary?: (cacheRequest request, cacheResponse response) -> bool
end
module RequestMethods
attr_accessor cached_response: cacheResponse?
@response_cache_key: String
def response_cache_key: () -> String
def cacheable_verb?: () -> bool
end
module ResponseMethods
def copy_from_cached: (Response other) -> void
attr_writer original_request: cacheRequest
@cache: bool
def original_request: () -> cacheRequest?
def cached?: () -> bool
def mark_as_cached!: () -> void
def copy_from_cached!: () -> void
def fresh?: () -> bool
@ -57,6 +72,13 @@ module HTTPX
def date: () -> Time
end
type cacheOptions = Options & _ResponseCacheOptions
type cacheRequest = Request & RequestMethods
type cacheResponse = Response & ResponseMethods
end
type sessionResponseCache = Session & ResponseCache::InstanceMethods
