Compare commits


336 Commits

Author SHA1 Message Date
HoneyryderChuck
0261449b39 fixed sig for callbacks_for 2025-08-08 17:06:03 +01:00
HoneyryderChuck
84c8126cd9 callback_for: check for ivar existence first
Closes #353
2025-08-08 16:30:17 +01:00
HoneyryderChuck
ff3f1f726f fix warning about argument potentially being ignored 2025-08-07 12:34:59 +01:00
HoneyryderChuck
b8b710470c fix sentry deprecation 2025-08-07 12:30:31 +01:00
HoneyryderChuck
0f3e3ab068 remove trailing :: from IO module usage, as there's no more internal module 2025-08-07 12:30:21 +01:00
HoneyryderChuck
095fbb3463 using local aws for the max requests tests
reduce exposure to httpbin.org even more
2025-08-07 12:12:50 +01:00
HoneyryderChuck
7790589c1f linting issue 2025-08-07 11:28:18 +01:00
HoneyryderChuck
dd8608ec3b small improv in max requests tests to make it tolerant to multi-homed networks 2025-08-07 11:22:29 +01:00
HoneyryderChuck
8205b351aa removing usage of httpbin.org peer in tests wherever possible
it has been quite unstable, 503'ing often
2025-08-07 11:21:59 +01:00
HoneyryderChuck
5992628926 update nghttp2 used in CI tests 2025-08-07 11:21:02 +01:00
HoneyryderChuck
39370b5883 Merge branch 'issue-337' into 'master'
fix for issues blocking reconnection in proxy mode

Closes #337

See merge request os85/httpx!397
2025-07-30 09:49:51 +00:00
HoneyryderChuck
1801a7815c http2 parser: fix calculation when connection closes and there's no termination handshake 2025-07-18 17:48:23 +01:00
HoneyryderChuck
0953e4f91a fix for #receive_requests bailout routine when out of selectables
the routine was using #fetch_response, which may return nil, and wasn't handling that case, so it could return nil instead of a response/ErrorResponse object. since, depending on the plugins, #fetch_response may reroute requests, handling it allows staying in the loop in case there are selectables to process again as a result
2025-07-18 17:48:23 +01:00
HoneyryderChuck
a78a3f0b7c proxy fixes: allow proxy connection errors to be retriable
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration. in order to protect from it, the proxy plugin will catch proxy connection errors (socket/timeout errors happening until the tunnel is established) and allow them to be retried, while ignoring other proxy errors. meanwhile, the naming of errors was simplified: HTTPX::ProxyError now replaces HTTPX::HTTPProxyError (which is a breaking change).
2025-07-18 17:48:23 +01:00
HoneyryderChuck
aeb8fe5382 fix proxy ssl reconnection
when a proxied ssl connection was lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. in order to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new ProxySSL connection on reconnect
2025-07-18 17:48:23 +01:00
HoneyryderChuck
03170b6c89 promote certain transition logs to regular code (under level 3)
not really useful when metered as telemetry, but would have been useful for other bugs
2025-07-18 17:48:23 +01:00
HoneyryderChuck
814d607a45 Revert "options: initialize all possible options to improve object shape"
This reverts commit f64c3ab5990b68f850d0d190535a45162929f0af.
2025-07-18 17:47:08 +01:00
HoneyryderChuck
5502332e7e logging when connections are deregistered from the selector/pool
also, logging when a response is fetched in the session
2025-07-18 17:46:43 +01:00
HoneyryderChuck
f3b68950d6 adding current fiber id to log message tags 2025-07-18 17:45:21 +01:00
HoneyryderChuck
2c4638784f Merge branch 'fix-shape' into 'master'
object shape improvements

See merge request os85/httpx!396
2025-07-14 15:38:19 +00:00
HoneyryderChuck
b0016525e3 recover from network unreachable errors when using cached IPs
while this type of error is avoided when doing HEv2, the IPs remain
in the cache; this means that, once the same host is reached, the
IPs are loaded onto the same socket, and if the issue is IPv6
connectivity, it'll break outside of the HEv2 flow.

this error is now rescued inside the connect block, so that other
IPs in the list can be tried after; the IP is then evicted from the
cache.

the HEv2-related regression test is disabled in CI, as Gitlab CI
currently resolves the IPv6 address but does not allow connecting
to it
2025-07-14 15:44:47 +01:00
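The eviction flow this commit describes can be sketched in isolation (all names hypothetical, not httpx's actual internals; `try_connect` stands in for the real socket connect):

```ruby
# Hypothetical sketch of the recovery flow: try each cached IP in turn,
# evicting the ones that raise "network unreachable", so another address
# family (e.g. IPv4 after a broken IPv6) can be attempted next.
def connect_to_any(cached_ips)
  cached_ips.dup.each do |ip|
    begin
      return try_connect(ip)
    rescue Errno::ENETUNREACH
      cached_ips.delete(ip) # evict the unreachable IP from the cache
    end
  end
  raise Errno::ENETUNREACH
end
```

The `dup` lets the loop iterate safely while the underlying cache list is mutated by the eviction.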
HoneyryderChuck
49555694fe remove check for non unique local ipv6 which is disabling HEv2
not sure anymore under which condition this was done...
2025-07-14 11:57:02 +01:00
HoneyryderChuck
93e5efa32e http2 stream header logs: initial newline to align values and make debug logs clearer 2025-07-14 11:50:22 +01:00
HoneyryderChuck
8b3c1da507 removed ivar left behind and used nowhere 2025-07-14 11:50:22 +01:00
HoneyryderChuck
d64f247e11 fix for Connection too many object shapes
some more ivars which were not initialized in the first place were leading to the warning in CI mode
2025-07-14 11:50:22 +01:00
HoneyryderChuck
f64c3ab599 options: initialize all possible options to improve object shape
Options#merge works by duping-then-filling ivars, but because not all of them were initialized on object creation, each merge could add more object shapes for the same class, which defeats one of the most recent ruby optimizations

this was fixed by caching all possible option names at the class level, and using that as a reference in the initialize method to nilify all unreferenced options
2025-07-14 11:50:22 +01:00
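The shape-stabilizing pattern from this commit can be shown with a minimal sketch (class and option names hypothetical, not httpx's actual API):

```ruby
# Option names are cached at the class level, and every one is initialized
# in the constructor, so all instances share a single ivar layout (one
# object shape) no matter which options were actually passed.
class ShapedOptions
  def self.option_names
    @option_names ||= []
  end

  def self.def_option(name)
    option_names << name
    attr_reader name
  end

  def_option :timeout
  def_option :headers
  def_option :ssl

  def initialize(opts = {})
    # nilify unreferenced options: same ivars, same order, every time
    self.class.option_names.each do |name|
      instance_variable_set(:"@#{name}", opts[name])
    end
  end
end
```

Without the loop, an instance built with only `:timeout` and one built with only `:ssl` would get different ivar sets, and each `merge` could mint yet another shape.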
HoneyryderChuck
af03ddba3b options: inlining logic from do_initialize in constructor 2025-07-14 09:10:52 +01:00
HoneyryderChuck
7012ca1f27 fixed previous commit, as the tag is not available before 1.15 2025-07-03 16:39:54 +01:00
HoneyryderChuck
d405f8905f fixed ddtrace compatibility for versions under 1.13.0 2025-07-03 16:23:27 +01:00
HoneyryderChuck
3ff10f142a replace h2 upgrade peer with a custom implementation
the remote one has been failing for some time
2025-06-09 22:56:30 +01:00
HoneyryderChuck
51ce9d10a4 bump version to 1.5.1 2025-06-09 09:04:05 +01:00
HoneyryderChuck
6bde11b09c Merge branch 'gh-92' into 'master'
don't bookkeep retry attempts when errors happen on just-checked-out open connections

See merge request os85/httpx!394
2025-05-28 17:54:03 +00:00
HoneyryderChuck
0c2808fa25 prevent needless closing loop when process is interrupted during DNS request
the native resolver needs to be unselected. it already was, but it was still taken into account for bookkeeping. this is fixed by eliminating closed selectables from the list (they were probably already removed via callback)

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-28 15:26:11 +01:00
HoneyryderChuck
cb78091e03 don't bookkeep retry attempts when errors happen on just-checked-out open connections
in case of multiple connections to the same server, all of which the server may have closed at the same time, a request can fail multiple times after checkout before a new connection is started, on which it may succeed. this patch ensures that the prior attempts do not exhaust the number of possible retries on the request

it does so by marking the request as ping when the connection it's being sent to is marked as inactive; this leverages the logic of gating retries bookkeeping in such a case

Closes https://github.com/HoneyryderChuck/httpx/issues/92
2025-05-28 15:25:50 +01:00
HoneyryderChuck
6fa69ba475 Merge branch 'duplicate-method-def' into 'master'
Fix duplicate `option_pool_options` method

See merge request os85/httpx!393
2025-05-21 15:30:34 +00:00
Earlopain
4a78e78d32
Fix duplicate option_pool_options method
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:237: warning: method redefined; discarding old option_pool_options (StandardError)
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:221: warning: previous definition of option_pool_options was here
2025-05-21 12:49:54 +02:00
HoneyryderChuck
0e393987d0 bump version to 1.5.0 2025-05-16 14:04:08 +01:00
HoneyryderChuck
12483fa7c8 missing ivar sigs in tcp class 2025-05-16 11:15:28 +01:00
HoneyryderChuck
d955ba616a deselect idle connections on session termination
session may be interrupted earlier than the connection has finished
the handshake; in such a case, simulate early termination.

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-15 00:31:15 +01:00
HoneyryderChuck
804d5b878b Merge branch 'debug-redact' into 'master'
added :debug_redact option

See merge request os85/httpx!387
2025-05-14 23:01:28 +00:00
HoneyryderChuck
75702165fd remove ping check when querying for repeatable request status
this should be dependent on the exception only, as connections may have closed before ping was released

this addresses https://github.com/HoneyryderChuck/httpx/issues/87\#issuecomment-2866564479
2025-05-14 23:52:18 +01:00
HoneyryderChuck
120bbad126 clear write buffer on connect errors
leaving bytes around messes up the termination handshake and may raise other unwanted errors
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35446e9fe1 fixes for connection coalescing flow
the whole "found connection not open" branch was removed, as currently,
a mergeable connection must not be closed; this means that only
open/inactive connections will be picked up from selector/pool, as
they're the only coalescable connections (have addresses/ssl cert
state). this may be extended to support closed connections though, as
remaining ssl/addresses are enough to make it coalescable at that point,
and then it's just a matter of idling it, so it'll be simpler than it is
today.

coalesced connection gets closed via Connection#terminate at the end
now, in order to propagate whether it was a cloned connection.

added log messages in order to monitor coalescing handshake from logs.
2025-05-13 16:21:06 +01:00
HoneyryderChuck
3ed41ef2bf pool: do not decrement conn counter when returning existing connection, nor increment it when acquiring
this variable is supposed to monitor new connections being created or dropped, existing connection management shouldn't affect it
2025-05-13 16:21:06 +01:00
HoneyryderChuck
9ffbceff87 rename Connection#coalesced_connection=(conn) to Connection.coalesce!(conn) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
757c9ae32c making tcp state transition logs less ambiguous
also show transition states in connected
2025-05-13 16:21:06 +01:00
HoneyryderChuck
5d88ccedf9 redact ping payload as well 2025-05-13 16:21:06 +01:00
HoneyryderChuck
85808b6569 adding logs to select-on-socket phase 2025-05-13 16:21:06 +01:00
HoneyryderChuck
d5483a4264 reconnectable errors: include HTTP/2 parser errors and openssl errors 2025-05-13 16:21:06 +01:00
HoneyryderChuck
540430c00e assert for request in a faraday test (sometimes this is nil, for some reason) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
3a417a4623 added :debug_redact option
when true, text passed to log messages considered sensitive (wrapped in a +#log_redact+ call) will be logged as "[REDACTED]"
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35c18a1b9b options: metaprogram integer options into the same definition 2025-05-13 16:20:28 +01:00
HoneyryderChuck
cf19fe5221 Merge branch 'improv' into 'master'
sig improvements

See merge request os85/httpx!390
2025-05-13 15:18:50 +00:00
HoneyryderChuck
f9c2fc469a options: freeze more ivars by default 2025-05-13 15:52:57 +01:00
HoneyryderChuck
9b513faab4 aligning implementation of the #resolve function in all implementations 2025-05-13 15:52:57 +01:00
HoneyryderChuck
0be39faefc added some missing sigs + type safe code 2025-05-13 15:44:21 +01:00
HoneyryderChuck
08c5f394ba fixed usage of nonexistent var 2025-05-13 15:13:02 +01:00
HoneyryderChuck
55411178ce resolver: moved @connections ivar + init into parent class
also, establishing the selectable interface for resolvers
2025-05-13 15:13:02 +01:00
HoneyryderChuck
a5c83e84d3 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!389
2025-05-13 14:10:56 +00:00
HoneyryderChuck
d7e15c4441 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-05-13 11:02:13 +01:00
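The wake-up mechanism this commit describes can be sketched with plain stdlib primitives (simplified, not httpx's actual internals): a producer thread appends to a mutex-guarded buffer, then writes a byte to a pipe whose read end the select loop is watching.

```ruby
# A worker thread buffers a chunk under a mutex, then pokes the pipe;
# the select loop, blocked on IO.select, wakes up and drains the buffer.
pipe_read, pipe_write = IO.pipe
buffer = []
lock = Mutex.new

producer = Thread.new do
  lock.synchronize { buffer << "payload chunk" }
  pipe_write.write("\0") # wake up the select loop
end

ready, = IO.select([pipe_read], nil, nil, 5) # would otherwise sit idle
pipe_read.read_nonblock(1) if ready          # consume the wake-up byte
drained = lock.synchronize { buffer.slice!(0..-1) }
producer.join
```

The pipe is the key piece: without it, data buffered from another thread would sit unsent until some unrelated socket event woke the loop.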
HoneyryderChuck
012255e49c Merge branch 'ruby-3.5-cgi' into 'master'
Only require from `cgi` what is required

See merge request os85/httpx!391
2025-05-10 00:20:33 +00:00
HoneyryderChuck
d20506acb8 Merge branch 'httpx-issue-350' into 'master'
In file (any serialized) store need to response.finish! on get

Closes #350

See merge request os85/httpx!392
2025-05-10 00:13:41 +00:00
Paul Duey
28399f1b88 In file (any serialized) store need to response.finish! on get 2025-05-09 17:22:39 -04:00
Earlopain
953101afde
Only require from cgi what is required
In Ruby 3.5, most of the `cgi` gem will be removed and moved to a bundled gem.

Luckily, the escape/unescape methods have been left around. So, only the require path needs to be adjusted to avoid a warning.
`cgi/escape` was available since Ruby 2.3

I also moved the require to the file that actually uses it.

https://bugs.ruby-lang.org/issues/21258
2025-05-09 18:54:41 +02:00
HoneyryderChuck
055ee47b83 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!383
2025-04-29 22:44:44 +00:00
HoneyryderChuck
dbad275c65 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-04-29 23:25:41 +01:00
HoneyryderChuck
fe69231e6c Merge branch 'gh-86' into 'master'
persistent plugin: by default, do not retry requests which failed due to a request timeout

See merge request os85/httpx!385
2025-04-29 09:41:45 +00:00
HoneyryderChuck
4c61df768a persistent plugin: by default, do not retry requests which failed due to a request timeout
that isn't a connection-related type of failure, and it confuses users when it gets retried, as the connection was fine; the request was just slow

Fixes https://github.com/HoneyryderChuck/httpx/issues/86
2025-04-27 16:47:50 +01:00
HoneyryderChuck
aec150b030 Merge branch 'issue-347' into 'master'
:callbacks plugin fix: copy callbacks to new session when using the session builder methods

Closes #347 and #348

See merge request os85/httpx!386
2025-04-26 15:12:42 +00:00
HoneyryderChuck
29a43c4bc3 callbacks plugin fix: errors raised in .on_request_error callback should bubble up to user code
this was not happening for errors happening during name resolution, particularly when HEv2 was used, as the second resolver was kept open and didn't stop the selector loop

Closes #348
2025-04-26 03:11:55 +01:00
HoneyryderChuck
34c2fee60c :callbacks plugin fix: copy callbacks to new session when using the session builder methods
such as '.with' or '.wrap', which create a new session object on the fly
2025-04-26 02:34:56 +01:00
HoneyryderChuck
c62966361e moving can_buffer_more_requests? to the private section
it's only used internally
2025-04-26 01:42:55 +01:00
HoneyryderChuck
2b87a3d5e5 selector: make APIs expecting connections more strict, improve sigs by using interface 2025-04-26 01:42:55 +01:00
HoneyryderChuck
3dd767cdc2 response_cache: also cache request headers, for vary algo computation 2025-04-26 01:42:55 +01:00
HoneyryderChuck
a9255c52aa response_cache plugin: adding more rdoc documentation to methods 2025-04-26 01:42:55 +01:00
HoneyryderChuck
32031e8a03 response_cache plugin: rename cached_response? to not_modified?, more accurate 2025-04-26 01:42:55 +01:00
HoneyryderChuck
f328646c08 Merge branch 'gh-84' into 'master'
adding missing datadog span decoration

See merge request os85/httpx!384
2025-04-26 00:40:49 +00:00
HoneyryderChuck
0484dd76c8 fix for wrong query string encoding when passed an empty :params input
Fixes https://github.com/HoneyryderChuck/httpx/issues/85
2025-04-26 00:20:28 +01:00
HoneyryderChuck
17c1090b7a more aggressive timeouts in tests 2025-04-26 00:10:48 +01:00
HoneyryderChuck
87f4ce4b03 adding missing datadog span decoration
including header tags, and other missing span tags
2025-04-25 23:46:11 +01:00
HoneyryderChuck
1ec7442322 Merge branch 'improv-tests' 2025-04-14 17:35:15 +01:00
HoneyryderChuck
723959cf92 wrong option docs 2025-04-13 01:27:27 +01:00
HoneyryderChuck
10b4b9c7c0 remove unused method 2025-04-13 01:27:05 +01:00
HoneyryderChuck
1b39bcd3a3 set appropriate coverage key, use it as command 2025-04-13 01:08:18 +01:00
HoneyryderChuck
44a2041ea8 added missing response cache store sigs 2025-04-13 01:07:18 +01:00
HoneyryderChuck
b63f9f1ae2 native: realign log calls, so coverage does not misreport them 2025-04-13 01:06:54 +01:00
HoneyryderChuck
467dd5e7e5 file store: testing path when the same request is stored twice
also, testing usage of symbol response cache store options.
2025-04-13 01:05:42 +01:00
HoneyryderChuck
c626fae3da adding test to force usage of max_requests conditionals under http1 2025-04-13 01:05:08 +01:00
HoneyryderChuck
7f6b78540b Merge branch 'issue-328' into 'master'
pool option: max_connections

Closes #328

See merge request os85/httpx!371
2025-04-12 22:43:18 +00:00
HoneyryderChuck
b120ce4657 new pool option: max_connections
this new option declares how many max inflight-or-idle open connections a session may hold. connections get recycled in case a new one is needed and the pool has closed connections to discard. the same pool timeout error applies as for max_connections_per_origin
2025-04-12 23:29:08 +01:00
HoneyryderChuck
32c36bb4ee Merge branch 'issue-341' into 'master'
response_cache plugin: return cached response from store unless stale

Closes #341

See merge request os85/httpx!382
2025-04-12 21:45:35 +00:00
HoneyryderChuck
cc0626429b prevent overlap of test dirs/files across test instances 2025-04-12 22:09:12 +01:00
HoneyryderChuck
a0e2c1258a allow setting :response_cache_store with a symbol (:store, :file_store)
cleaner to select from one of the two available options
2025-04-12 22:09:12 +01:00
HoneyryderChuck
6bd3c15384 fixing cacheable_response? to exclude headers and freshness
it's called with a fresh response already
2025-04-12 22:09:12 +01:00
HoneyryderChuck
0d23c464f5 simplifying response cache store API
#get, #set, #clear, that's all you need. this can now be any bespoke class implementing these primitives
2025-04-12 22:09:12 +01:00
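A store implementing that three-method API can be as small as this in-memory sketch (class name hypothetical; httpx keys the store by request, simplified to a plain string key here):

```ruby
# Minimal store honoring the simplified API: #get, #set, #clear.
class MemoryStore
  def initialize
    @entries = {}
  end

  def get(key)
    @entries[key] # nil on cache miss
  end

  def set(key, response)
    @entries[key] = response
  end

  def clear
    @entries.clear
  end
end
```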
HoneyryderChuck
a75b89db74 response_cache plugin: adding filesystem-based store
it stores the cached responses in the filesystem
2025-04-12 22:09:12 +01:00
HoneyryderChuck
7173616154 response cache: fix vary header handling by supporting a defined set of headers
the cache key will also be determined by the supported vary header values, when present; this means easier lookups, and a one-level hash fetch, where the same url-verb request may have multiple entries depending on those headers

checking the response vary header is therefore done at cache response lookup; writes may override when they shouldn't, though, as a full match on supported vary headers will be performed, and one can't know the combo of vary headers in advance, which is why interested parties will have to be judicious with the new option
2025-04-12 22:09:12 +01:00
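The one-level lookup this commit describes could be illustrated as follows (hypothetical names and header set, not httpx's actual implementation): the key folds in the values of a fixed, supported set of vary headers alongside verb and url.

```ruby
require "digest"

# Hypothetical supported set; real deployments must choose it judiciously,
# since unsupported vary headers won't differentiate cache entries.
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, url, request_headers)
  vary_values = SUPPORTED_VARY_HEADERS.map { |name| request_headers[name].to_s }
  Digest::SHA1.hexdigest([verb, url, *vary_values].join("\n"))
end
```

Two requests to the same url-verb pair with different `accept` values then land on different entries with a single hash fetch.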
HoneyryderChuck
69f9557780 corrected equality comparison of response bodies 2025-04-12 22:09:12 +01:00
HoneyryderChuck
339af65cc1 response cache: store cached response in request, so that copying and cache invalidating work a bit OOTB 2025-04-12 22:09:12 +01:00
HoneyryderChuck
3df6edbcfc response_cache: an immutable response is always fresh 2025-04-12 22:09:11 +01:00
HoneyryderChuck
5c2f8ab0b1 response_cache plugin: return cached response from store unless stale
response age wasn't being taken into account, and the cache invalidation request was always being sent; a fresh response will now stay in the store until expired; when it expires, cache invalidation will be tried (if possible); if invalidated, the new response is put in the store; if validated, the body of the cached response is copied, and the cached response stays in the store
2025-04-12 22:09:11 +01:00
HoneyryderChuck
0c335fd03d Merge branch 'gh-82' into 'master'
persistent plugin: drop , allow retries for ping requests, regardless of idempotency property

See merge request os85/httpx!381
2025-04-12 09:14:32 +00:00
HoneyryderChuck
bf19cde364 fix: ping record to match must be kept in a different string
http-2 1.1.0 uses the string input as the ultimate buffer (when the input is not frozen), which will mutate the argument. in order to keep it around for further comparison, the string is duped
2025-04-11 16:25:58 +01:00
HoneyryderChuck
7e0ddb7ab2 persistent plugin: when errors happen during connection ping phase, make sure that only connection lost errors are going to be retriable 2025-04-11 14:41:36 +01:00
HoneyryderChuck
4cd3136922 connection: set request timeouts before sending the request to the parser
in situations where the connection is already open/active, the requests would be buffered before setting the timeouts, which would skip transition callbacks associated with writes, such as write timeouts and request timeouts
2025-04-11 14:41:36 +01:00
HoneyryderChuck
642122a0f5 persistent plugin: drop , allow retries for ping requests, regardless of idempotency property
the previous option was there to allow reconnecting on non-idempotent (e.g. POST) requests, but had the unfortunate side-effect of allowing retries for failures (e.g. timeouts) which had nothing to do with a failed connection; this mitigates it by enabling retries for ping-aware requests, i.e. if there is an error during PING, always retry
2025-04-11 14:41:36 +01:00
HoneyryderChuck
42d42a92b4 added missing test for close_on_fork option 2025-04-09 09:39:53 +01:00
HoneyryderChuck
fb6a509d98 removing duplicate sig 2025-04-06 21:54:03 +01:00
HoneyryderChuck
3c22f36a6c session refactor: remove @responses hash
this was being used as an internal cache for finished responses; it can however be superseded by Request#response, which fulfills the same role alongside the #finished? call. this allows us to drop a variable-size hash which would grow at least as large as the number of requests per call, and was inadvertently shared across threads when using the same session (at no danger of colliding, but it could perhaps cause problems in jruby?)

it also allows removing one callback
2025-04-04 11:05:27 +01:00
HoneyryderChuck
51b2693842 Merge branch 'gh-disc-71' into 'master'
:stream_bidi plugin

See merge request os85/httpx!365
2025-04-04 09:51:29 +00:00
HoneyryderChuck
1ab5855961 Merge branch 'gh-74' into 'master'
adding :close_on_fork option, which automatically closes sessions on fork

See merge request os85/httpx!377
2025-04-04 09:49:06 +00:00
HoneyryderChuck
f82816feb3 Merge branch 'issue-339' into 'master'
QUERY plugin

Closes #339

See merge request os85/httpx!374
2025-04-04 09:48:13 +00:00
HoneyryderChuck
ee229aa74c readapt some plugins so that supported verbs can be overridden by custom plugins 2025-04-04 09:32:38 +01:00
HoneyryderChuck
793e900ce8 added the :query plugin, which supports the QUERY http method
added as a plugin for explicit opt-in, as it's still an experimental feature (RFC in draft)
2025-04-04 09:32:38 +01:00
HoneyryderChuck
1241586eb4 introducing subplugins to plugins
subplugins are modules of plugins which register as post-plugins of other plugins

a specific plugin may want to have a side-effect on the functionality of another plugin, so they can use this to register it when the other plugin is loaded
2025-04-04 09:25:53 +01:00
HoneyryderChuck
cbf454ae13 Merge branch 'issue-336' into 'master'
ruby 3.4 features

Closes #336

See merge request os85/httpx!372
2025-04-04 08:24:28 +00:00
HoneyryderChuck
180d3b0e59 adding :close_on_fork option, which automatically closes sessions on fork
only for ruby 3.1 or higher. adapted from a similar feature from the connection_pool gem
2025-04-04 00:22:05 +01:00
HoneyryderChuck
84db0072fb new :stream_bidi plugin
this plugin is an HTTP/2 only plugin which enables bidirectional streaming

the client can continue writing request streams as response streams arrive midway

Closes https://github.com/HoneyryderChuck/httpx/discussions/71
2025-04-04 00:21:12 +01:00
HoneyryderChuck
c48f6c8e8f adding Request#can_buffer?
abstracts some logic around whether a request has request body bytes to buffer
2025-04-04 00:20:29 +01:00
HoneyryderChuck
870b8aed69 make .parser_type an instance method instead
allows plugins to override
2025-04-04 00:20:29 +01:00
HoneyryderChuck
56b8e9647a making multipart decoding code more robust 2025-04-04 00:18:53 +01:00
HoneyryderChuck
1f59688791 rename test servlet 2025-04-04 00:18:53 +01:00
HoneyryderChuck
e63c75a86c improvements in headers
using Hash#new(capacity: ) to better predict size; reduce the number of allocated arrays by passing the result of  to the store when possible, and only calling #downcased(str) once; #array_value will also not try to clean up errors in the passed data (it'll either fail loudly, or be fixed downstream)
2025-04-04 00:18:53 +01:00
HoneyryderChuck
3eaf58e258 refactoring timers to more efficiently deal with empty intervals
before, canceling a timer connected to an interval which would become empty deleted it from the main intervals store; this deletion now moves away from the request critical path, and pinging for intervals will drop elapsed-or-empty intervals before returning the shortest one

beyond that, the intervals store won't be constantly recreated if there's no need for it (i.e. nothing has elapsed), which reduces the gc pressure

searching for existing interval on #after now uses bsearch; since the list is ordered, this should make finding one more performant
2025-04-04 00:18:53 +01:00
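The bsearch lookup mentioned in this commit relies on the intervals list being kept sorted by duration; a simplified sketch (names hypothetical, not httpx's actual timer classes):

```ruby
# Find-minimum bsearch: the block returns true from the first element
# whose interval is >= the target, so the search is O(log n) instead of
# a linear scan.
Interval = Struct.new(:interval, :callbacks)

def find_interval(sorted_intervals, seconds)
  match = sorted_intervals.bsearch { |iv| iv.interval >= seconds }
  match if match && match.interval == seconds # nil unless an exact hit
end
```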
HoneyryderChuck
9ff62404a6 enabling warning messages 2025-04-04 00:18:53 +01:00
HoneyryderChuck
4d694f9517 ruby 3.4 feature: use String#append_as_bytes in buffers 2025-04-04 00:18:53 +01:00
HoneyryderChuck
22952f6a4a ruby 3.4: set string capacity for buffer-like string 2025-04-04 00:18:53 +01:00
HoneyryderChuck
7660e4c555 implement #inspect in a few places where output gets verbose
tweak some existing others
2025-04-04 00:18:53 +01:00
HoneyryderChuck
a9cc787210 ruby 3.4: use set_temporary_name to decorate plugin classes with more descriptive names 2025-04-04 00:18:53 +01:00
HoneyryderChuck
970830a025 bumping version to 1.4.4 2025-04-03 22:17:42 +01:00
HoneyryderChuck
7a3d38aeee Merge branch 'issue-343' into 'master'
session: discard connection callbacks if they're assigned to a different session already

Closes #343

See merge request os85/httpx!379
2025-04-03 18:53:39 +00:00
HoneyryderChuck
54bb617902 fixed regression test of 1.4.1 (it detected a different error, but the outcome is not a goaway error anymore, as persistent conns recover and retry) 2025-04-03 18:34:41 +01:00
HoneyryderChuck
cf08ae99f5 removing unneeded require in regression test which loaded webmock by mistake 2025-04-03 18:23:56 +01:00
HoneyryderChuck
c8ce4cd8c8 Merge branch 'down-issue-98' into 'master'
stream plugin: allow partial buffering of the response when calling things other than #each

See merge request os85/httpx!380
2025-04-03 17:23:21 +00:00
HoneyryderChuck
6658a2ce24 ssl socket: do not call tcp socket connect if already connected 2025-04-03 18:17:35 +01:00
HoneyryderChuck
7169f6aaaf stream plugin: allow partial buffering of the response when calling things other than #each
this allows calling #status or #headers on a stream response, without buffering the whole response, as it's happening now; this will only work for methods which do not rely on the whole payload to be available, but that should be ok for the stream plugin usage

Fixes https://github.com/janko/down/issues/98
2025-04-03 17:51:02 +01:00
HoneyryderChuck
ffc4824762 do not needlessly probe for readiness on a reconnected connection 2025-04-03 11:04:15 +01:00
HoneyryderChuck
8e050e846f decrementing the in-flight counter in a connection
sockets are sometimes needlessly probed on retries because the counter wasn't taking failed attempts into account
2025-04-03 11:04:15 +01:00
HoneyryderChuck
e40d3c9552 do not exhaust retry attempts when probing connections after keep alive timeout expires
since pools can keep multiple persistent connections which may have been terminated by the peer already, exhausting the single retry attempt from the persistent plugin may make the request fail before trying it on an actual live connection. in this patch, requests which are preceded by a PING frame used for probing are marked as such, and do not decrement the attempts counter when failing
2025-04-03 11:04:15 +01:00
HoneyryderChuck
ba60ef79a7 if checking out a connection in a closing state, assume that the channel is irrecoverable and hard-close it beforehand
one less callback to manage, which potentially leaks across session usages
2025-03-31 11:46:04 +01:00
HoneyryderChuck
ca49c9ef41 session: discard connection callbacks if they're assigned to a different session already
some connection callbacks are prone to be left behind; when they do, they may access objects that may have been locked by another thread, thereby corrupting state.
2025-03-28 18:26:17 +00:00
HoneyryderChuck
7010484b2a bump version to 1.4.3 2025-03-25 23:30:51 +00:00
HoneyryderChuck
06eba512a6 Merge branch 'issue-340' into 'master'
empty the write buffer on EOF errors in #read too

Closes #340

See merge request os85/httpx!373
2025-03-24 11:18:57 +00:00
HoneyryderChuck
f9ed0ab602 only run rbs tests in latest ruby 2025-03-19 23:55:00 +00:00
HoneyryderChuck
5632e522c2 internal telemetry reuses the loggable module, which is made to work in places where there are no options 2025-03-19 23:43:29 +00:00
HoneyryderChuck
cfdb719a8e extra subroutines in test http2 server 2025-03-19 23:42:28 +00:00
HoneyryderChuck
b2a1b9cded fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-19 23:42:10 +00:00
HoneyryderChuck
5917c63a70 add more error message context to settings timeout flaky test 2025-03-19 23:41:02 +00:00
HoneyryderChuck
6af8ad0132 missing sig for HTTP2 Connection 2025-03-19 23:30:36 +00:00
HoneyryderChuck
35ac13406d do not run yjit build for older rubies 2025-03-19 23:30:13 +00:00
HoneyryderChuck
d00c46d363 Merge branch 'gh-80' into 'master'
handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection

See merge request os85/httpx!375
2025-03-19 23:21:31 +00:00
HoneyryderChuck
a437de36e8 handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection
it was previously only handling 421 status codes for the same effect; this achieves parity with the frame-driven redirection
2025-03-19 23:11:51 +00:00
HoneyryderChuck
797fd28142 Merge branch 'faraday-multipart-uploadio-issue' into 'master'
fix: do not close request right after sending it, assume it may have to be retried

See merge request os85/httpx!378
2025-03-19 22:13:19 +00:00
HoneyryderChuck
6d4266d4a4 multipart: initialize @bytesize in the initializer (for object shape opt) 2025-03-19 16:59:25 +00:00
HoneyryderChuck
eb8c18ccda make << a part of Response interface (and ensure ErrorResponse deals with no internal @response) 2025-03-19 16:58:44 +00:00
HoneyryderChuck
4653b48602 fix: do not close request right after sending it, assume it may have to be retried
with the retries plugin, the request payload will be rewound, and that may not be possible if it is already closed. this was never detected so far because no request body transcoder closes internally, but the faraday multipart adapter does

the request is therefore closed alongside the response (when the latter is closed)

Fixes https://github.com/HoneyryderChuck/httpx/issues/75\#issuecomment-2731219586
2025-03-19 16:57:47 +00:00
HoneyryderChuck
8287a55b95 Merge branch 'gh-79' into 'master'
remove raise-error middleware from faraday tests

See merge request os85/httpx!376
2025-03-18 22:55:20 +00:00
HoneyryderChuck
9faed647bf remove raise-error middleware from faraday tests
proves that the adapter does not raise on http errors. also added a test to ensure that
2025-03-18 22:42:38 +00:00
HoneyryderChuck
5268f60021 fix sig issues coming from latest rbs 2025-03-18 18:30:53 +00:00
HoneyryderChuck
132e4b4ebe extra subroutines in test http2 server 2025-03-14 23:45:36 +00:00
HoneyryderChuck
b502247284 fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-14 23:45:27 +00:00
HoneyryderChuck
e5d852573a empty the write buffer on EOF errors in #read too
this avoids, during the HTTP/2 termination handshake, writing out the buffered bytes when the socket was already detected as closed via EOF (they would otherwise be misidentified as the GOAWAY frame)
2025-03-14 23:45:12 +00:00
HoneyryderChuck
d17ac7c8c3 webmock: reassign headers after callbacks
the headers may have been reassigned while the callbacks ran
2025-03-05 23:09:20 +00:00
HoneyryderChuck
b1c08f16d5 bump version to 1.4.2 2025-03-05 22:20:41 +00:00
HoneyryderChuck
f618c6447a tweaking hn script 2025-03-05 13:41:33 +00:00
HoneyryderChuck
4454b1bbcc Merge branch 'issue-334' into 'master'
ensure connection is cleaned up on parser-initiated forced reset

Closes #334

See merge request os85/httpx!363
2025-03-03 18:27:13 +00:00
HoneyryderChuck
88f8f5d287 fix: reset timeout callbacks when requests are routed to a different connection
this may happen in a few contexts, such as connection exhaustion, but more importantly, when a request is retried on a different connection; if the request successfully sets the callbacks before the connection raises an issue and the request is retried on a new one, the callbacks from the faulty connection are carried with it and triggered at a time when that connection is back in the connection pool, or worse, used in a different thread

this fix relies on the :idle transition callback, which is called before the request is rerouted
2025-03-03 18:21:04 +00:00
HoneyryderChuck
999b6a603a adding reproduction of the report bug on issue-334 2025-03-03 18:12:03 +00:00
HoneyryderChuck
f8d05b0e82 conn: on eof error, clean up write buffer
socket is closed, do not try to drain it while performing the handshake shutdown
2025-03-03 18:12:03 +00:00
HoneyryderChuck
a7f2271652 add more process context info to logging 2025-03-03 18:12:03 +00:00
HoneyryderChuck
55f1f6800b Merge branch 'gh-77' into 'master'
always raise an error when a non-recoverable error happens when sending the request

See merge request os85/httpx!370
2025-03-03 18:03:23 +00:00
HoneyryderChuck
3e736b1f05 Merge branch 'fix-hev2-overrides' into 'master'
fixes for happy eyeballs implementation

Closes #337

See merge request os85/httpx!368
2025-03-03 18:02:43 +00:00
HoneyryderChuck
f5497eec4f always raise an error when a non-recoverable error happens when sending the request
this should fall back to terminating the session immediately and closing its connections, instead of trying to fit the same exception into the request objects; there's no point in that

Closes https://github.com/HoneyryderChuck/httpx/issues/77
2025-03-03 16:45:43 +00:00
HoneyryderChuck
08015e0851 fixup! native resolver: refactored retries to use timer intervals 2025-03-01 01:12:39 +00:00
HoneyryderChuck
a0f472ba02 cleanly exit from Exception in the selector loop
was messing up RBS state
2025-03-01 01:03:24 +00:00
HoneyryderChuck
8bee6956eb adding Timer, making Timers#after return it, to allow single cancellation
the previous iteration relied on internal behaviour to delete the correct callback; in the process, logic to delete all callbacks from an interval was accidentally committed, which motivated this refactoring. the premise is: timeouts can cancel the timer; they set themselves as active until done; operation timeouts rely on the previous being ignored or not

a new error, OperationTimeoutError, was added for that effect
2025-03-01 01:03:24 +00:00
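The commit above can be illustrated with a toy model (a simplified sketch, not httpx's actual implementation): `Timers#after` returns a `Timer` handle, so a caller can cancel exactly the callback it registered instead of relying on internal bookkeeping to find it.

```ruby
# Toy model: Timers#after returns a Timer, enabling single cancellation.
class Timer
  def initialize(interval, callback)
    @interval = interval
    @callback = callback
    @cancelled = false
  end

  def cancel
    @cancelled = true
  end

  def fire
    @callback.call unless @cancelled
  end
end

class Timers
  def initialize
    @timers = []
  end

  # returns the Timer, so the caller can cancel just this one later
  def after(interval, &blk)
    timer = Timer.new(interval, blk)
    @timers << timer
    timer
  end

  def fire_all
    @timers.each(&:fire)
  end
end

fired = []
timers = Timers.new
timers.after(1) { fired << :read_timeout }
op = timers.after(2) { fired << :operation_timeout }
op.cancel # only this callback is cancelled; the other timer still fires
timers.fire_all
```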
HoneyryderChuck
97cbdf117d small update in output of hackernews script 2025-02-28 18:37:05 +00:00
HoneyryderChuck
383f2a01d8 fix choice of candidate on no_domain_found error
must pick up name from candidates and pass to #resolve
2025-02-28 18:37:05 +00:00
HoneyryderChuck
8a473b4ccd native resolver: propagate error to all connections and close resolver when socket error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
b6c8f70aaf fix: always prefer timer interval if values are the same 2025-02-28 18:37:05 +00:00
HoneyryderChuck
f5aa6142a0 selector: remove needless begin block 2025-02-28 18:37:05 +00:00
HoneyryderChuck
56d82e6370 connection: make sure it's purged on transition error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
41e95d5b86 fix log message repeating pattern 2025-02-28 18:37:05 +00:00
HoneyryderChuck
46a39f2b0d native: when resolving, purge closed connections, ignore the connection which is being resolved 2025-02-28 18:37:05 +00:00
HoneyryderChuck
8009fc11b7 native resolver: refactored retries to use timer intervals
there were a lot of issues with bookkeeping this at the connection level; in the end, the timers infra was a much better proxy for all of this; set timer after write; cancel it on reading data to parse
2025-02-28 18:37:05 +00:00
HoneyryderChuck
398c08eb4d native resolver: consume resolutions in a loop, do not stop after the first one
this was a busy loop on dns resolution; this should utilize the socket better
2025-02-28 18:37:05 +00:00
HoneyryderChuck
723fda297f close_or_resolve: purge the queriable connections list before figuring out the next step 2025-02-27 19:22:36 +00:00
HoneyryderChuck
35ee625827 fix: in the native resolver, do not fall for the first answer being an alias if the remainder carries IPs
discard alias, use IPs
2025-02-27 19:22:36 +00:00
HoneyryderChuck
210abfb2f5 fix: on the native resolution, do not keep reading from the socket if buffer has data 2025-02-27 19:22:36 +00:00
HoneyryderChuck
53bf6824f8 fix: do not apply the HEv2 resolution delay if the ip was not resolved via DNS
early resolution should trigger immediately
2025-02-27 19:22:36 +00:00
HoneyryderChuck
cb8a97c837 added how to test instructions 2025-02-27 19:22:36 +00:00
HoneyryderChuck
0063ab6093 selector: do not raise conventional error on select timeout when the interval came from a timer
assume that the timer will fire right afterwards, return early
2025-02-27 19:22:36 +00:00
HoneyryderChuck
7811cbf3a7 faraday adapter: use a default reason when none is matched by Net::HTTP::STATUS_CODES
Fixes https://github.com/HoneyryderChuck/httpx/issues/76
2025-02-22 22:28:57 +00:00
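A hedged illustration of the fallback above: `Net::HTTP::STATUS_CODES` (from the stdlib's `net/http/status`) has no entries for unofficial status codes, so a lookup with a default avoids ending up with `nil`. The helper name and default string are assumptions, not the adapter's exact code.

```ruby
require "net/http/status"

# Fall back to a default reason phrase when the status code is unknown
# to Net::HTTP::STATUS_CODES (e.g. unofficial codes like 599).
def reason_phrase(status)
  Net::HTTP::STATUS_CODES.fetch(status, "Unknown")
end
```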
HoneyryderChuck
7c21c33999 bump version to 1.4.1 2025-02-18 13:42:44 +00:00
HoneyryderChuck
e45edcbfce linting issue 2025-02-18 12:55:00 +00:00
HoneyryderChuck
7e705dc57e resolver: early exit for closed connections later, after updating addresses (in case they ever get reused) 2025-02-18 12:46:26 +00:00
HoneyryderChuck
dae4364664 fix for incorrect sig of #pin_connection 2025-02-18 12:45:37 +00:00
HoneyryderChuck
8dfd1edf85 suppressing annoying grpc logs where possible 2025-02-18 09:03:05 +00:00
HoneyryderChuck
d2fd20b3ec reassign current session/selector earlier in the reconnection lifecycle 2025-02-18 09:02:49 +00:00
HoneyryderChuck
28fdbb1a3d one less callback 2025-02-18 09:02:07 +00:00
HoneyryderChuck
23857f196a refactoring attribution of current session and selector
by setting it in select_connection instead
2025-02-18 09:02:01 +00:00
HoneyryderChuck
bf1ef451f2 compose file linting 2025-02-18 08:14:29 +00:00
HoneyryderChuck
d68e98be5a adapted hackernews script to deal with errors 2025-02-18 08:14:20 +00:00
HoneyryderChuck
fd57d72a22 add support in get.rb script for arbitrary url 2025-02-18 08:14:11 +00:00
HoneyryderChuck
a74bd9f397 use different names for happy eyeballs script 2025-02-18 08:14:02 +00:00
HoneyryderChuck
f76be1983b native resolver: fix stalled resolution on multiple requests to multiple origins
continue resolving when an error happens by immediately writing to the buffer afterwards
2025-02-18 08:13:47 +00:00
HoneyryderChuck
86cb30926f rewrote happy eyeballs implementation to not rely on callbacks
each connection will now check on its sibling and whether it's the original connection (containing the initial batch of requests); internal functions are then called to control how connections react to successful or failed resolutions, which reduces code repetition

the handling of coalesced connections is also simplified: when that happens, the sibling must also be closed. this allowed fixing some mismatches when handling this use case with callbacks
2025-02-18 08:13:35 +00:00
HoneyryderChuck
ed8fafd11d fix: do not schedule deferred HEv2 ipv4 tcp handshake if the connection has already been closed by the sibling connection 2025-02-18 08:12:07 +00:00
HoneyryderChuck
5333def40d Merge branch 'issue-338' into 'master'
IO.copy_stream changes yielded string on subsequent yields

Closes #338

See merge request os85/httpx!369
2025-02-14 00:27:31 +00:00
HoneyryderChuck
ab78e3189e webmock: fix for integrations which require the request to transition state, due to event emission
one of them being the otel plugin, see https://github.com/open-telemetry/opentelemetry-ruby-contrib/pull/1404
2025-02-14 00:16:53 +00:00
HoneyryderChuck
b26313d18e request body: fixed handling of files as request body
there's a bug (reported in https://bugs.ruby-lang.org/issues/21131) with IO.copy_stream, where yielded duped strings still change value on subsequent yields, breaking http2 framing, which requires two yields at the same time in the first iteration. this replaces it with #read calls; file handles will now be closed once done streaming, which is a change in behaviour
2025-02-14 00:16:53 +00:00
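A minimal sketch of the replacement described above: stream a file body in chunks via `#read` (instead of `IO.copy_stream`), so every yielded chunk is an independent string, and close the handle once streaming is done — the behaviour change the message mentions. The helper name and chunk size are illustrative.

```ruby
require "tempfile"

CHUNK_SIZE = 16_384

# Yield independent chunk strings via #read; close the IO when done.
def each_chunk(io)
  while (chunk = io.read(CHUNK_SIZE))
    yield chunk
  end
ensure
  io.close
end

file = Tempfile.new("body")
file.write("a" * 20_000)
file.rewind

chunks = []
each_chunk(file) { |c| chunks << c }
```

Each `#read` call returns a fresh string, so earlier chunks can't be mutated by later reads, unlike the buggy `IO.copy_stream` yields.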
HoneyryderChuck
2af9bc0626 multipart: force pathname parts to open in binmode 2025-02-13 19:17:14 +00:00
HoneyryderChuck
f573c1c50b transcode: body encoder is now a simple delegator
instead of implementing method missing; this makes it simpler impl-wise, and it'll also make comparing types easier, although not needed ATM
2025-02-13 19:16:45 +00:00
HoneyryderChuck
2d999063fc added tests to reproduce the issue of string changing on IO.copy_stream yield 2025-02-13 19:15:15 +00:00
HoneyryderChuck
1a44b8ea48 Merge branch 'gh-70' into 'master'
datadog plugin fixes

See merge request os85/httpx!364
2025-02-11 00:58:04 +00:00
HoneyryderChuck
8eeafaa008 omit faraday/datadog tests which uncovered a bug 2025-02-11 00:46:18 +00:00
HoneyryderChuck
0ec8e80f0f fixing datadog plugin not sending distributed headers
the headers were being set on the request object after the request was buffered and sent
2025-02-11 00:46:18 +00:00
HoneyryderChuck
f2bca9fcbf altered datadog tests in order to verify the distributed headers from the response body
and not from the request object, which reproduces the bug
2025-02-11 00:46:18 +00:00
HoneyryderChuck
6ca17c47a0 faraday: do not trace when configuration is disabled 2025-02-11 00:46:18 +00:00
HoneyryderChuck
016ed04f61 adding test for integration of datadog on top of faraday backed by httpx 2025-02-11 00:46:18 +00:00
HoneyryderChuck
5b59011a89 moving datadog setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
7548347421 moving faraday setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
43c4cf500e datadog: set port as integer in the port span tag
faraday sets it as a float and it doesn't seem to break because of it
2025-02-11 00:31:13 +00:00
HoneyryderChuck
aecb6f5ddd datadog plugin: fix error callback and general issues
also, made the handler a bit more functional style, which curbs some of the complexity
2025-02-11 00:31:13 +00:00
HoneyryderChuck
6ac3d346b9 Merge branch 'method-redefinition-warnings' into 'master'
Fix two method redefinition warnings

See merge request os85/httpx!367
2025-02-07 10:21:26 +00:00
Earlopain
946f93471c
Fix two method redefinition warnings
```
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/selector.rb:95: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/resolver/system.rb:54: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
```

In selector.rb, the definitions are identical, so I kept the delegator

For system.rb, it always returns true so I kept that one
2025-02-07 09:38:30 +01:00
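A minimal reproduction of the warning class being fixed: delegating `empty?` via `Forwardable` and then defining it again triggers `method redefined` under verbose mode. The fix is to keep a single definition per class — the delegator when both are identical, the explicit method when behaviour differs.

```ruby
require "forwardable"

class Selector
  extend Forwardable

  # single definition: the delegator
  def_delegator :@selectables, :empty?

  def initialize
    @selectables = []
  end

  # a second, identical definition here would emit under -W:
  #   warning: method redefined; discarding old empty?
end

selector = Selector.new
```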
HoneyryderChuck
f68ff945c1 Merge branch 'issue-335' into 'master'
raise error when httpx is used with a url not starting with http or https schemes

Closes #335

See merge request os85/httpx!366
2025-01-28 09:07:07 +00:00
HoneyryderChuck
9fa9dd5350 raise error when httpx is used with a url not starting with http or https schemes
this was previously done in connection initialization, which meant that the request would map to an error response with this error; however, the change to thread-safe pools in 1.4.0 caused a regression, where the uri is expected to have an origin before the connection is established; this is fixed by raising an error on request creation, which will need to be caught by the caller

Fixes #335
2025-01-28 00:36:00 +00:00
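A hedged sketch of the described check: reject request URIs whose scheme is not http(s) at request-creation time, so the caller gets an immediate exception instead of a later failure deep in connection setup. The helper and error class names are illustrative, not httpx's actual ones.

```ruby
require "uri"

class UnsupportedSchemeError < ArgumentError; end

# Validate the scheme up front, at request creation.
def validate_request_uri!(url)
  uri = URI(url)
  raise UnsupportedSchemeError, "#{url}: unsupported scheme" unless %w[http https].include?(uri.scheme)

  uri
end
```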
HoneyryderChuck
1c0cb0185c Merge branch 'issue-333' into 'master'
fix: handle multi goaway frames coming from server

Closes #333

See merge request os85/httpx!362
2025-01-13 13:00:18 +00:00
HoneyryderChuck
2a1338ca5b fix: handle multi goaway frames coming from server
nodejs servers, for example, seem to send them when shutting down on timeout; when both arrive in the same buffer, the first correctly closes the parser and emits the message, while the second, because the parser is already closed, emits an exception; the regression happened because the second exception used to be swallowed by the pool handler, but that's gone now, and errors on connection consumption get handled; this was worked around by clearing the pending requests queue in the parser when emitting errors, so that when the second error comes, there's no request to emit the error for

Closes #333
2025-01-12 00:16:31 +00:00
HoneyryderChuck
cb847f25ad Merge branch 'ruby-34' into 'master'
adding support for ruby 3.4

See merge request os85/httpx!360
2025-01-03 01:37:28 +00:00
HoneyryderChuck
44311d08a5 improve resolver logs to include record family in prefix
also, fixed some of the arithmetic associated with timeout logging
2025-01-02 23:49:01 +00:00
HoneyryderChuck
17003840d3 adding support for ruby 3.4 2025-01-02 23:38:51 +00:00
HoneyryderChuck
a4bebf91bc Merge branch 'chore/avoid-loading-datadog-dogstatsd' into 'master'
Do not load Datadog tracing when dogstatsd is present

See merge request os85/httpx!361
2025-01-02 23:01:07 +00:00
Hieu Nguyen
691215ca6f Do not load Datadog tracing when dogstatsd is present 2024-12-31 18:54:44 +08:00
HoneyryderChuck
999d86ae3e bump version to 1.4.0 2024-12-18 13:22:09 +00:00
HoneyryderChuck
a4c2fb92e7 improving coverage of modules 2024-12-18 11:10:04 +00:00
HoneyryderChuck
66d3a9e00d Merge branch 'improvs' 2024-12-10 15:09:22 +00:00
HoneyryderChuck
e418783ea9 more sig completeness 2024-12-10 15:09:00 +00:00
HoneyryderChuck
36ddd84c85 improve code around consuming request bodies (particularly body_encoder interface) 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f7a5b3ae90 define selector_store sigs 2024-12-10 15:09:00 +00:00
HoneyryderChuck
3afe853517 make #early_resolve return a boolean, instead of undefined across implementations 2024-12-10 15:09:00 +00:00
HoneyryderChuck
853ebd5e36 improve coverage, eliminate dead code 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f820b8cfcb Merge branch 'issue-325' into 'master'
XML plugin

Closes #325

See merge request os85/httpx!358
2024-12-08 13:14:43 +00:00
HoneyryderChuck
062fd5a7f4 reinstate and deprecate HTTPX::Response#xml method 2024-12-08 12:48:47 +00:00
HoneyryderChuck
70bf874f4a adding gem collection
includes nokogiri type sigs
2024-12-08 12:48:47 +00:00
HoneyryderChuck
bf9d847516 moved xml encoding/decoding + APIs into :xml plugin 2024-12-08 12:48:47 +00:00
HoneyryderChuck
d45cae096b fix: do not raise things which are not exceptions
this is a regression from a ractor compatibility commit, which ensured that errors raised while preparing the request / resolving name are caught and raised, but introduced a regression when name resolution retrieves a cached IP; this error only manifested in dual-stack situations, which can't be tested in CI yet

Closes #329
2024-12-07 20:00:40 +00:00
HoneyryderChuck
717b932e01 improved coverage of content digest plugin tests 2024-12-03 09:00:11 +00:00
HoneyryderChuck
da11cb320c Merge branch 'json-suffix' into 'master'
Accept more MIME types with json suffix

Closes #326

See merge request os85/httpx!357
2024-12-03 08:50:07 +00:00
sarna
4bf07e75ac Accept more MIME types with json suffix
Fixes #326 #327
2024-12-03 08:50:07 +00:00
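An illustrative version of the broadened check above: besides `application/json`, accept any MIME type carrying a `+json` structured-syntax suffix (RFC 6839), e.g. `application/problem+json`. The regex here is an assumption, not httpx's exact pattern.

```ruby
# Match "type/json" or "type/subtype+json" (RFC 6839 suffix),
# but not e.g. "application/jsonp".
JSON_MIME_RE = %r{\b\w+/(?:[\w.-]+\+)?json\b}i

def json_content_type?(content_type)
  content_type.to_s.match?(JSON_MIME_RE)
end
```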
HoneyryderChuck
3b52ef3c09 Merge branch 'simpler-selector' into 'master'
:pool option + thread-safe session-owned conn pool

See merge request os85/httpx!348
2024-12-02 14:26:17 +00:00
HoneyryderChuck
ac809d18cc content-digest: set validate_content_digest default to false; do not try to compute content-digest for requests with no body 2024-12-02 13:04:57 +00:00
HoneyryderChuck
85019e5493 Merge branch 'content_digest' into 'master'
Add support for `content-digest` headers (RFC9530)

See merge request os85/httpx!354
2024-12-02 12:37:40 +00:00
David Roetzel
95c1a264ee Add support for content-digest headers (RFC9530)
Closes #323
2024-12-02 12:37:40 +00:00
HoneyryderChuck
32313ef02e Merge branch 'fix-json-encode-with-oj' into 'master'
Fix incorrect hash key rendering with Oj JSON encoder

Closes #324

See merge request os85/httpx!356
2024-11-29 19:41:40 +00:00
Denis Sadomowski
ed9df06b38 fix rubocop offenses 2024-11-29 18:26:39 +01:00
Denis Sadomowski
b9086f37cf Compat mode for Oj.dump by default 2024-11-29 17:47:30 +01:00
Denis Sadomowski
d3ed551203 revert arguments to json_dump 2024-11-29 17:40:32 +01:00
Denis Sadomowski
1b0e9b49ef Fix incorrect hash key rendering with Oj JSON encoder 2024-11-28 16:19:17 +01:00
HoneyryderChuck
8797434ae7 Merge branch 'fix-hexdigest-on-compressed-bodies' into 'master'
aws sigv4: support calculation of hexdigest on top of compressed bodies in the correct way

See merge request os85/httpx!355
2024-11-27 18:06:39 +00:00
HoneyryderChuck
25c87f3b96 fix: do not try to rewind on bodies which respond to #each
also, raise an error when trying to calculate the hexdigest of endless bodies
2024-11-27 17:39:20 +00:00
HoneyryderChuck
26c63a43e0 aws sigv4: support calculation of hexdigest on top of compressed bodies in a more optimal way
before, compressed bodies were yielding chunks and buffering locally (the  variant in this snippet); they were also failing to rewind, due to lack of method (fixed in the last commit); in this change, support is added for bodies which can read and rewind (but do not map to a local path via ), such as compressed bodies, which at this point haven't been yet buffered; the procedure is then to buffer the compressed body into a tempfile, calculate the hexdigest then rewind the body and move on
2024-11-27 08:55:23 +00:00
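A sketch of the procedure the message outlines, with hypothetical names: for a body that can only `#read`/`#rewind` (no local file path), buffer it into a tempfile, compute the hexdigest over the buffered copy, then rewind the body so it can still be sent afterwards.

```ruby
require "digest"
require "stringio"
require "tempfile"

# Buffer a rewindable-but-pathless body into a tempfile and hash it.
def hexdigest_of(body, chunk_size: 16_384)
  tempfile = Tempfile.new("body")
  tempfile.binmode
  while (chunk = body.read(chunk_size))
    tempfile.write(chunk)
  end
  body.rewind # body remains usable for the actual request
  tempfile.rewind
  Digest::SHA256.hexdigest(tempfile.read)
ensure
  tempfile.close!
end

body = StringIO.new("deflated-payload")
digest = hexdigest_of(body)
```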
HoneyryderChuck
3217fc03f8 allow deflater bodies to rewind 2024-11-27 08:50:57 +00:00
HoneyryderChuck
b7b63c4460 removing unused bits 2024-11-27 08:50:26 +00:00
HoneyryderChuck
7d8388af28 add test for calculation of hexdigest on top of a compressed body 2024-11-27 08:49:57 +00:00
HoneyryderChuck
a53d7f1e01 raise error happening in request-to-connection paths
but only when the selector is empty, as there'll be nothing to select on, and this would fall into an infinite loop
2024-11-19 12:55:44 +00:00
HoneyryderChuck
c019f1b3a7 removing usage of global unshareable object in default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
594f6056da native resolver: treat tcp handshake errors as resolve errors 2024-11-19 12:55:44 +00:00
HoneyryderChuck
113e9fd4ef moving leftover option proc into private function 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e32d226151 refactor of internal resolver cache lookup access to make it a bit safer 2024-11-19 12:55:44 +00:00
HoneyryderChuck
a3246e506d freezing all default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
ccb22827a2 using find_index/delete_at instead of find/delete 2024-11-19 12:55:44 +00:00
HoneyryderChuck
94e154261b store selectors in thread-local variables
instead of fiber-local storage; this allows that, under fiber-scheduler based engines like async, requests on the same session with an open selector will reuse the latter, thereby ensuring connection reuse within the same thread

in normal conditions, that'll happen only if the user uses a session object and uses HTTPX::Session#wrap to keep the context open; it'll also work out of the box when using sessions with the  plugin. Otherwise, a new connection will be opened per fiber
2024-11-19 12:55:44 +00:00
HoneyryderChuck
c23561f80c linting... 2024-11-19 12:55:44 +00:00
HoneyryderChuck
681650e9a6 fixed long-standing reenqueue of request in the pending list 2024-11-19 12:55:44 +00:00
HoneyryderChuck
31f0543da2 minor improvement on handling do_init_connection 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5e3daadf9c changing the order of operations when handling misdirected requests
because the reconnection is to the same host, the previous connection is now closed first, in order to avoid a deadlock on the pool where the per-host conns are exhausted and the new connection can't be initiated because the older one hasn't been checked back in
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6b9a737756 introducing Connection#peer to point to the host to connect to
this eliminates the overuse of Connection#origin, which in the case of proxied connections was broken in the previous commit

the proxy implementation got simpler, despite this large changeset
2024-11-19 12:55:44 +00:00
HoneyryderChuck
1f9dcfb353 implement per-origin connection threshold per pool
defaulting to unbounded, in order to preserve current behaviour; this will cap the number of connections initiated for a given origin for a pool, which if not shared, will be per-origin; this will include connections from separate option profiles

a pool timeout is defined to checkout a connection when the limit is reached
2024-11-19 12:55:44 +00:00
HoneyryderChuck
d77e97d31d repositioned empty placeholder hash 2024-11-19 12:55:44 +00:00
HoneyryderChuck
69e7e533de synchronize access to connections in the pool
also fixed the coalescing case where the connection may come from the pool, and should therefore be removed from there and selected/checked back in accordingly as a result
2024-11-19 12:55:44 +00:00
HoneyryderChuck
840bb55ab3 do not return idle (result of either cloning or coalescing) connections back to the pool 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5223d51475 setting the connection pool locally to the session
allowing it to be plugin extended via pool_class and PoolMethods
2024-11-19 12:55:44 +00:00
HoneyryderChuck
8ffa04d4a8 making pool class a plugin extendable class 2024-11-19 12:55:44 +00:00
HoneyryderChuck
4a351bc095 adapted plugins to the new structure 2024-11-19 12:55:44 +00:00
HoneyryderChuck
11d197ff24 changed internal session structure, so that it uses local selectors directly
pools are then used only to fetch new connections; selectors are discarded when not needed anymore; HTTPX.wrap is patched for now, but would ideally be done away with in the future
2024-11-19 12:55:44 +00:00
HoneyryderChuck
12fbca468b rewrote Pool class to act as a connection pool, the way it was intended
this leaves synchronization out for the moment
2024-11-19 12:55:44 +00:00
HoneyryderChuck
79d5d16c1b moving session with pool test plugin to override on the session and drop pool changes 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e204bc6df0 passing connections to Pool#next_tick and Pool#next_timeout
refactoring towards not centralizing this information
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6783b378d3 bump version to 1.3.4 2024-11-19 12:53:34 +00:00
HoneyryderChuck
9d7681cb46 Merge branch 'webmock-form-tempfile' into 'master'
Fix webmock integration when posting tempfiles

Closes #320

See merge request os85/httpx!353
2024-11-06 13:58:04 +00:00
HoneyryderChuck
c6139e40db response body: protect against invalid charset in content-type header
Closes https://github.com/HoneyryderChuck/httpx/issues/66
2024-11-06 13:38:19 +00:00
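A hedged sketch of the guard described above: extract the charset from a content-type header and fall back to binary when Ruby's `Encoding.find` doesn't know it, instead of letting the `ArgumentError` bubble up. The helper name and fallback choice are assumptions.

```ruby
# Return the body encoding for a content-type header, tolerating
# invalid or unknown charsets.
def body_encoding(content_type)
  charset = content_type.to_s[/;\s*charset=([^;\s]+)/i, 1]
  return Encoding::BINARY unless charset

  Encoding.find(charset.delete('"'))
rescue ArgumentError # unknown charset name
  Encoding::BINARY
end
```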
Earlopain
a4b95db01c Fix webmock integration when posting tempfiles
The fix is two-fold and also allows them to be retryable

Closes https://gitlab.com/os85/httpx/-/issues/320
2024-11-06 13:27:45 +00:00
HoneyryderChuck
91b9e13cd0 bumped version to 1.3.3 2024-10-31 18:00:12 +00:00
HoneyryderChuck
8d5def5f02 Merge branch 'issue-319' into 'master'
fix for webmock request body expecting a string

Closes #319

See merge request os85/httpx!352
2024-10-31 17:58:42 +00:00
HoneyryderChuck
3e504fb511 fix for webmock request body expecting a string
when building the request signature, the body is preemptively converted to a string, which fulfills the expectation for webmock, despite it being a bit of a perf penalty if the request contains a multipart request body, as the body will be fully read into memory

Closes #319

Closes https://github.com/HoneyryderChuck/httpx/issues/65
2024-10-31 17:47:12 +00:00
HoneyryderChuck
492097d551 bumped version to 1.3.2 2024-10-30 11:50:49 +00:00
HoneyryderChuck
02ed2ae87d raise invalid uri if passed request uri does not contain the host part 2024-10-28 10:40:28 +00:00
HoneyryderChuck
599b6865da removing parentheses from regex 2024-10-25 15:54:04 +01:00
HoneyryderChuck
7c0e776044 coverage must be a regex 2024-10-25 13:58:58 +01:00
HoneyryderChuck
7ea0b32161 fix coverage badge generation 2024-10-25 13:55:51 +01:00
HoneyryderChuck
72b0267598 Merge branch 'issue-317' into 'master'
Support WebMock with form/multipart

Closes #317

See merge request os85/httpx!351
2024-10-25 12:55:25 +00:00
Alexey Romanov
4a966d4cb8 Add a regression test for WebMock with form/multipart 2024-10-25 13:43:12 +01:00
HoneyryderChuck
70f1ffc65d Merge branch 'github-issue-63' into 'master'
Prevent `NoMethodError` in the proxy plugin

See merge request os85/httpx!350
2024-10-21 09:23:50 +00:00
Alexey Romanov
fda0ea8b0e Prevent NoMethodError in the proxy plugin
When:
1. the proxy is autodetected from `http_proxy` etc. variables;
2. a request is made which bypasses the proxy (e.g. to an authority in `no_proxy`);
3. this request fails with one of `Proxy::PROXY_ERRORS` (timeout or a system error)

the `fetch_response` method tried to access the proxy URIs array which
isn't initialized by `proxy_options`. This change fixes the
`proxy_error?` check to avoid the issue.
2024-10-21 10:10:12 +01:00
HoneyryderChuck
2443ded12b update CI test certs 2024-09-27 09:16:06 +01:00
HoneyryderChuck
1db2d00d07 rename get tests 2024-09-06 09:43:25 +01:00
HoneyryderChuck
40b4884d87 bumped version to 1.3.1 2024-08-20 17:20:24 +01:00
HoneyryderChuck
823e7446f4 faraday: do not call on_complete when not defined
by default it's not filled in, but middlewares override it

Closes https://github.com/HoneyryderChuck/httpx/issues/61
2024-08-20 16:55:57 +01:00
HoneyryderChuck
83b4c73b92 protect against coalescing connections on the resolver
these could take connections out of the loop, thereby causing a busy loop in multiple-request scenarios
2024-08-19 16:45:55 +01:00
Diogo Vernier
9844a55205 fix CPU usage loop 2024-08-19 16:45:55 +01:00
HoneyryderChuck
6e1bc89256 Merge branch 'issue-312' into 'master'
allow further extension of the httpx session via faraday config block

Closes #312

See merge request os85/httpx!347
2024-08-19 15:45:41 +00:00
HoneyryderChuck
8ec0765bd7 Merge branch 'max-time' into 'master'
reuse request_timeout on response chains (redirects, retries)

See merge request os85/httpx!345
2024-08-19 15:45:24 +00:00
HoneyryderChuck
6b893872fb allow further extension of the httpx session via faraday config block
Closes #312
2024-08-01 11:41:10 +01:00
HoneyryderChuck
ca8346b193 adding options docs 2024-07-25 16:01:51 +01:00
HoneyryderChuck
7115f0cdce avoid enqueing requests after a period if the request is over
they may have been closed already, due to a timeout or the connection dropping. this condition affects delayed retries or redirect follow requests.
2024-07-25 11:59:02 +01:00
HoneyryderChuck
74fc7bf77d when bubbling up errors in the connection, handle request error directly
instead of expecting it to be contained within the connection, and therefore handled explicitly; sometimes it may not be
2024-07-25 11:59:02 +01:00
HoneyryderChuck
002459b9b6 fix: do not generate new connection on 407 check for proxies
instead, look for the correct conn in-session. this way connections do not leak with usage
2024-07-25 11:59:02 +01:00
HoneyryderChuck
1ee39870da deactivate connection before deferring a request in the future
this causes busy loops where request is buffered only in the future, and its connection may still be open for readiness probes
2024-07-25 11:59:02 +01:00
HoneyryderChuck
b8db28abd2 make request_timeout reset on returned response, rather than response callback
this makes it not reset on redirect or retried responses, and effectively makes it act as a max-time for individual transactions/requests
2024-07-25 11:59:02 +01:00
HoneyryderChuck
fafe7c140c splatting connections on pool.deactivate call, as per defined sig 2024-07-23 14:48:51 +01:00
HoneyryderChuck
047dc30487 do not use thread variables in mock response test plugin 2024-07-19 12:01:48 +01:00
HoneyryderChuck
7278647688 bump version to 1.3.0 2024-07-10 16:27:24 +01:00
HoneyryderChuck
09fbb32b9a fix: in test, use URI to build the uri with an ip address, as concatenating fails for IPv6 2024-07-10 16:10:21 +01:00
HoneyryderChuck
4e7ad8fd23 fix: cookies plugin should not make Session#build_request private
Closes #311
2024-07-10 15:52:56 +01:00
HoneyryderChuck
9a3ddfd0e4 change datadog v2 constraint to not test against beta version
Fixes #310
2024-07-10 15:50:14 +01:00
HoneyryderChuck
e250ea5118 Merge branch 'http-2-gem' into 'master'
switch from http-2-next to http-2

See merge request os85/httpx!344
2024-07-08 15:19:37 +00:00
HoneyryderChuck
2689adc390 Merge branch 'request-options' into 'master'
Options improvements

See merge request os85/httpx!324
2024-07-08 15:19:02 +00:00
HoneyryderChuck
ba31204227 switch from http-2-next to http-2
will be merged back to original repo soon
2024-06-28 15:49:58 +01:00
HoneyryderChuck
0b671fa2f9 simplify ErrorResponse by fetching options from the request, like Response 2024-06-11 18:49:18 +01:00
HoneyryderChuck
8b2ee0b466 remove form, json, xml and body from the Options class
Options becomes a bunch of session and connection level parameters, and requests do not need to maintain a separate Options object when they contain a body anymore; instead, the options object is shared with the session, while request-only parameters get passed down to the request and its body. This reduces allocations of Options, currently the heaviest object to manage.
2024-06-11 18:23:45 +01:00
HoneyryderChuck
b686119a6f do not try to cast to Options all the time, trust the internal structure 2024-06-11 18:23:12 +01:00
HoneyryderChuck
dcbd2f81e3 change internal buffer fetch using ivar getter 2024-06-11 18:21:54 +01:00
HoneyryderChuck
0fffa98e83 avoid traversing full intervals list, which is ordered by oldest intervals first
by using #drop_while
2024-06-11 18:21:54 +01:00
HoneyryderChuck
08ba389fd6 log more info on read for level 3 2024-06-11 18:21:54 +00:00
268 changed files with 8337 additions and 3324 deletions

.gitignore

@@ -16,3 +16,5 @@ public
 build
 .sass-cache
 wiki
+.gem_rbs_collection/
+rbs_collection.lock.yaml

.gitlab-ci.yml

@@ -39,7 +39,7 @@ cache:
 - vendor
 lint rubocop code:
-image: "ruby:3.3"
+image: "ruby:3.4"
 variables:
 BUNDLE_WITHOUT: test:coverage:assorted
 before_script:
@@ -47,7 +47,7 @@ lint rubocop code:
 script:
 - bundle exec rake rubocop
 lint rubocop wiki:
-image: "ruby:3.3"
+image: "ruby:3.4"
 rules:
 - if: $CI_PIPELINE_SOURCE == "schedule"
 variables:
@@ -61,7 +61,7 @@ lint rubocop wiki:
 - rubocop-md
 AllCops:
-TargetRubyVersion: 3.3
+TargetRubyVersion: 3.4
 DisabledByDefault: true
 FILE
 script:
@ -90,25 +90,28 @@ test ruby 3/1:
./spec.sh ruby 3.1
test ruby 3/2:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.2
test ruby 3/3:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.3
test ruby 3/4:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.4
test truffleruby:
<<: *test_settings
script:
./spec.sh truffleruby latest
allow_failure: true
regression tests:
image: "ruby:3.3"
image: "ruby:3.4"
variables:
BUNDLE_WITHOUT: lint:assorted
CI: 1
COVERAGE_KEY: "$RUBY_ENGINE-$RUBY_VERSION-regression-tests"
COVERAGE_KEY: "ruby-3.4-regression-tests"
artifacts:
paths:
- coverage/
@ -120,12 +123,12 @@ regression tests:
- bundle exec rake regression_tests
coverage:
coverage: '/\(\d+.\d+\%\) covered/'
coverage: '/Coverage: \d+.\d+\%/'
stage: prepare
variables:
BUNDLE_WITHOUT: lint:test:assorted
image: "ruby:3.3"
image: "ruby:3.4"
script:
- gem install simplecov --no-doc
# this is a workaround, because simplecov doesn't support relative paths.
@ -147,7 +150,7 @@ pages:
stage: deploy
needs:
- coverage
image: "ruby:3.3"
image: "ruby:3.4"
before_script:
- gem install hanna-nouveau
script:

View File

@ -92,6 +92,10 @@ Style/GlobalVars:
Exclude:
- lib/httpx/plugins/internal_telemetry.rb
Style/CommentedKeyword:
Exclude:
- integration_tests/faraday_datadog_test.rb
Style/RedundantBegin:
Enabled: false
@ -176,3 +180,7 @@ Performance/StringIdentifierArgument:
Style/Lambda:
Enabled: false
Style/TrivialAccessors:
Exclude:
- 'test/pool_test.rb'

View File

@ -9,8 +9,7 @@ gem "rake", "~> 13.0"
group :test do
if RUBY_VERSION >= "3.2.0"
# load from branch while there's no official release
gem "datadog", "~> 2.0.0.beta2"
gem "datadog", "~> 2.0"
else
gem "ddtrace"
end
@ -37,6 +36,11 @@ group :test do
gem "rbs"
gem "yajl-ruby", require: false
end
if RUBY_VERSION >= "3.4.0"
# TODO: remove this once websocket-driver-ruby declares this as dependency
gem "base64"
end
end
platform :mri, :truffleruby do
@ -53,6 +57,7 @@ group :test do
gem "aws-sdk-s3"
gem "faraday"
gem "faraday-multipart"
gem "idnx"
gem "oga"

View File

@ -157,7 +157,6 @@ All Rubies greater or equal to 2.7, and always latest JRuby and Truffleruby.
* Discuss your contribution in an issue
* Fork it
* Make your changes, add some tests
* Ensure all tests pass (`docker-compose -f docker-compose.yml -f docker-compose-ruby-{RUBY_VERSION}.yml run httpx bundle exec rake test`)
* Make your changes, add some tests (follow the instructions from [here](test/README.md))
* Open a Merge Request (that's Pull Request in Github-ish)
* Wait for feedback

View File

@ -0,0 +1,18 @@
# 1.3.0
## Dependencies
`http-2` v1.0.0 is replacing `http-2-next` as the HTTP/2 parser.
`http-2-next` was forked from `http-2` 5 years ago; its improvements have recently been merged back into `http-2`, so `http-2-next` will no longer be maintained.
## Improvements
Request-specific options (`:params`, `:form`, `:json` and `:xml`) are now separately kept by the request, which allows them to share `HTTPX::Options`, and reduce the number of copying / allocations.
This means that `HTTPX::Options` will raise an error if you initialize an object with such keys; this should not happen, as this class is considered internal and you should not be using it directly.
## Fixes
* support for the `datadog` gem v2.0.0 in its adapter has been unblocked, now that the gem has been released.
* loading the `:cookies` plugin was making `Session#build_request` private.

View File

@ -0,0 +1,17 @@
# 1.3.1
## Improvements
* `:request_timeout` will be applied to all HTTP interactions until the final response is returned to the caller. That includes:
* all redirect requests/responses (when using the `:follow_redirects` plugin)
* all retried requests/responses (when using the `:retries` plugin)
* intermediate requests (such as "100-continue")
* faraday adapter: allow further plugins of internal session (ex: `builder.adapter(:httpx) { |sess| sess.plugin(:follow_redirects) }...`)
## Bugfixes
* fix connection leak on proxy auth failed (407) handling
* fix busy loop on deferred requests for the duration interval
* do not further enqueue deferred requests if they have terminated meanwhile.
* fix busy loop caused by coalescing connections when one of them is on the DNS resolution phase still.
* faraday adapter: on parallel mode, skip calling `on_complete` when not defined.

View File

@ -0,0 +1,6 @@
# 1.3.2
## Bugfixes
* Prevent `NoMethodError` in an edge case when the `:proxy` plugin is autoloaded via env vars, the webmock adapter is used in tandem, and a real request fails.
* raise invalid uri error if passed request uri does not contain the host part (ex: `"https:/get"`)

View File

@ -0,0 +1,5 @@
# 1.3.3
## Bugfixes
* fixing a regression introduced in 1.3.2 associated with the webmock adapter, which expects matchable request bodies to be strings

View File

@ -0,0 +1,6 @@
# 1.3.4
## Bugfixes
* webmock adapter: fix tempfile usage in multipart requests.
* fix: fallback to binary encoding when parsing incoming invalid charset in HTTP "content-type" header.

View File

@ -0,0 +1,43 @@
# 1.4.0
## Features
### `:content_digest` plugin
The `:content_digest` can be used to calculate the digest of request payloads and set them in the `"content-digest"` header; it can also validate the integrity of responses which declare the same `"content-digest"` header.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Content-Digest
## Per-session connection pools
This architectural change moves away from per-thread shared connection pools to per-session (also thread-safe) connection pools. Unlike before, this enables connections from a session to be reused across threads, as well as limiting the number of connections that can be open on a given origin peer. This fixes long-standing issues, such as reusing connections under a fiber scheduler loop (such as the one from the gem `async`).
A new `:pool_options` option is introduced, which can be passed a hash with the following sub-options:
* `:max_connections_per_origin`: maximum number of connections a pool allows (unbounded by default, for backwards compatibility).
* `:pool_timeout`: the number of seconds a session will wait for a connection to be checked out (default: 5)
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools
## Improvements
* `:aws_sigv4` plugin: improved digest calculation on compressed request bodies by buffering content to a tempfile.
* `HTTPX::Response#json` will parse payload from extended json MIME types (like `application/ld+json`, `application/hal+json`, ...).
## Bugfixes
* `:aws_sigv4` plugin: do not try to rewind a request body which yields chunks.
* fixed request encoding when `:json` param is passed, and the `oj` gem is used (by using the `:compat` flag).
* native resolver: on message truncation, bubble up tcp handshake errors as resolve errors.
* allow `HTTPX::Response#json` to accept extended JSON mime types (such as responses with `content-type: application/ld+json`)
## Chore
* default options are now fully frozen (in case anyone relies on overriding them).
### `:xml` plugin
XML encoding/decoding (via `:xml` request param, and `HTTPX::Response#xml`) is now available via the `:xml` plugin.
Using `HTTPX::Response#xml` without the plugin will issue a deprecation warning.

View File

@ -0,0 +1,19 @@
# 1.4.1
## Bugfixes
* several `datadog` integration bugfixes
* only load the `datadog` integration when the `datadog` sdk is loaded (and not other gems that may define the `Datadog` module, like `dogstatsd`)
* do not trace if datadog integration is loaded but disabled
* distributed headers are now sent along (when the configuration is enabled, which it is by default)
* fix for handling multiple `GOAWAY` frames coming from the server (node.js servers seem to send multiple frames on connection timeout)
* fix regression for when a url is used with `httpx` which is not `http://` or `https://` (should raise `HTTPX::UnsupportedSchemaError`)
* worked around `IO.copy_stream` emitting incorrect bytes for HTTP/2 requests with bodies larger than the maximum supported frame size.
* multipart requests: make sure that a body declared as `Pathname` is opened for reading in binary mode.
* `webmock` integration: ensure that request events are emitted (for plugins and integrations relying on them, such as `datadog` and the OTel integration)
* native resolver: do not propagate successful name resolutions for connections which were already closed.
* native resolver: fixed name resolution stalling, in a multi-request to multi-origin scenario, when a resolution timeout would happen.
## Chore
* refactor of the happy eyeballs and connection coalescing logic to not rely on callbacks, and instead on instance variable management (makes code more straightforward to read).

View File

@ -0,0 +1,20 @@
# 1.4.2
## Bugfixes
* faraday: use default reason when none is matched by Net::HTTP::STATUS_CODES
* native resolver: keep sending DNS queries if the socket is available, to avoid busy loops on select
* native resolver fixes for Happy Eyeballs v2
* do not apply resolution delay if the IPv4 IP was not resolved via DNS
* ignore ALIAS if DNS response carries IP answers
* do not try to query for names already awaiting answer from the resolver
* make sure all types of errors are propagated to connections
* make sure next candidate is picked up if receiving NX_DOMAIN_NOT_FOUND error from resolver
* raise error happening before any request is flushed to respective connections (avoids loop on non-actionable selector termination).
* fix "NoMethodError: undefined method `after' for nil:NilClass", happening for requests flushed into persistent connections which errored, and were retried in a different connection before triggering the timeout callbacks from the previously-closed connection.
## Chore
* Refactor of timers to allow for explicit and more performant single timer interval cancellation.
* default log message restructured to include info about process, thread and caller.

View File

@ -0,0 +1,11 @@
# 1.4.3
## Bugfixes
* `webmock` adapter: reassign headers to signature after callbacks are called (these may change the headers before virtual send).
* do not close request (and its body) right after sending, instead only on response close
* prevents retries from failing under the `:retries` plugin
* fixes issue when using `faraday-multipart` request bodies
* retry request with HTTP/1 when receiving an HTTP/2 GOAWAY frame with `HTTP_1_1_REQUIRED` error code.
* fix wrong method call on HTTP/2 PING frame with unrecognized code.
* fix EOFError issues on connection termination for long running connections which may have already been terminated by peer and were wrongly trying to complete the HTTP/2 termination handshake.

View File

@ -0,0 +1,14 @@
# 1.4.4
## Improvements
* `:stream` plugin: the response will now be partially buffered, so that, for example, response status or headers can be inspected without buffering the full response body
* this fixes an issue in the `down` gem integration when used with the `:max_size` option.
* do not unnecessarily probe for connection liveness if no more requests are inflight, including failed ones.
* when using persistent connections, do not probe for liveness right after reconnecting after a keep alive timeout.
## Bugfixes
* `:persistent` plugin: do not exhaust retry attempts when probing for (and failing) connection liveness.
* since the introduction of per-session connection pools, multiple inactive connections for the same origin may sit in the pool after having been terminated by the peer server, causing requests to fail before a new connection could be established.
* prevent retrying to connect the TCP socket object when an SSLSocket object is already in place and connecting.

126
doc/release_notes/1_5_0.md Normal file
View File

@ -0,0 +1,126 @@
# 1.5.0
## Features
### `:stream_bidi` plugin
The `:stream_bidi` plugin enables bidirectional streaming support (an HTTP/2 only feature!). It builds on top of the `:stream` plugin, and uses its block-based syntax to process incoming frames, while allowing the user to pipe more data to the request (from the same, or another thread/fiber).
```ruby
http = HTTPX.plugin(:stream_bidi)
request = http.build_request(
"POST",
"https://your-origin.com/stream",
headers: { "content-type" => "application/x-ndjson" },
body: ["{\"message\":\"started\"}\n"]
)
chunks = []
response = http.request(request, stream: true)
Thread.start do
response.each do |chunk|
handle_data(chunk)
end
end
# now send data...
request << "{\"message\":\"foo\"}\n"
request << "{\"message\":\"bar\"}\n"
# ...
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Stream-Bidi
### `:query` plugin
The `:query` plugin adds public methods supporting the `QUERY` HTTP verb:
```ruby
http = HTTPX.plugin(:query)
http.query("https://example.com/gquery", body: "foo=bar") # QUERY /gquery ....
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Query
This functionality was added as a plugin for explicit opt-in, as it's experimental (the RFC for the new HTTP verb is still a draft).
### `:response_cache` plugin filesystem based store
The `:response_cache` plugin supports setting the filesystem as the response cache store (instead of just storing them in memory, which is the default `:store`).
```ruby
# cache store in the filesystem, writes to the temporary directory from the OS
http = HTTPX.plugin(:response_cache, response_cache_store: :file_store)
# if you want a separate location
http = HTTPX.plugin(:response_cache).with(response_cache_store: HTTPX::Plugins::ResponseCache::FileStore.new("/path/to/dir"))
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Response-Cache#:file_store
### `:close_on_fork` option
A new option `:close_on_fork` can be used to ensure that a session object which may have open connections will not leak them in case the process is forked (this can be the case for `:persistent` plugin sessions which were used before the fork):
```ruby
http = HTTPX.plugin(:persistent, close_on_fork: true)
# http may have open connections here
fork do
# http has no connections here
end
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools#Fork-Safety .
### `:debug_redact` option
The `:debug_redact` option will, when enabled, replace parts of the debug logs (enabled via `:debug` and `:debug_level` options) which may contain sensitive information, with the `"[REDACTED]"` placeholder.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Debugging .
### `:max_connections` pool option
A new `:max_connections` pool option (settable under `:pool_options`) can be used to define the **overall** maximum number of connections for a pool ("in-transit" or "at-rest"); this complements, and supersedes when used, the already existing `:max_connections_per_origin`, which does the same per connection origin.
```ruby
HTTPX.with(pool_options: { max_connections: 100 })
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools .
### Subplugins
An enhancement to the plugins architecture, it allows plugins to define submodules ("subplugins") which are loaded if another plugin is in use, or is loaded afterwards.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Custom-Plugins#Subplugins .
## Improvements
* `:persistent` plugin: several improvements around reconnections on failure:
* reconnections will only happen for "connection broken" errors (and will discard reconnection on timeouts)
* reconnections won't exhaust retries
* `:response_cache` plugin: several improvements:
* return cached response if not stale, send conditional request otherwise (it was always doing the latter).
* consider immutable (i.e. `"Cache-Control: immutable"`) responses as never stale.
* `:datadog` adapter: decorate spans with more tags (header, kind, component, etc...)
* timers operations have been improved to use more efficient algorithms and reduce object creation.
## Bugfixes
* ensure that setting request timeouts happens before the request is buffered (the latter could trigger a state transition required by the former).
* `:response_cache` plugin: fix `"Vary"` header handling by supporting a new plugin option, `:supported_vary_headers`, which defines which headers are taken into account for cache key calculation.
* fixed the encoded query string value when an empty hash is passed to the `:query` param and the URL already contains a query string.
* `:callbacks` plugin: ensure the callbacks from a session are copied when a new session is derived from it (via a `.plugin` call, for example).
* `:callbacks` plugin: errors raised from hostname resolution should bubble up to user code.
* fixed connection coalescing selector monitoring in cases where the coalescable connection is cloned, while other branches were simplified.
* clear the connection write buffer in corner cases where the remaining bytes may be interpreted as GOAWAY handshake frame (and may cause unintended writes to connections already identified as broken).
* remove idle connections from the selector when an error happens before the state changes (this may happen if the thread is interrupted during name resolution).
## Chore
`httpx` makes extensive use of features introduced in ruby 3.4, such as `Module#set_temporary_name` for otherwise plugin-generated anonymous classes (improves debugging and issue reporting), or `String#append_as_bytes` for a small but non-negligible perf boost in buffer operations. It falls back to the previous behaviour when used with ruby 3.3 or lower.
Also, in preparation for the upcoming ruby 3.5 release, the dependency on the `cgi` gem (which will be removed from stdlib) was dropped.

View File

@ -0,0 +1,6 @@
# 1.5.1
## Bugfixes
* connection errors on persistent connections which have just been checked out from the pool no longer count towards retry bookkeeping; the assumption is that, if a connection was checked into the pool in an open state, by the time it is eventually checked out it may have gone corrupt. This issue was more pronounced with `:persistent` plugin connections, which by design have a retry count of 1, and would thus often fail right after checkout without a legitimate request attempt.
* native resolver: fix issue with process interrupts during DNS request, which caused a busy loop when closing the selector.

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -0,0 +1,23 @@
version: '3'
services:
httpx:
image: ruby:3.4
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command: --conf /home/doh-nghttp.conf --no-ocsp --frontend '*,443'
doh-proxy:
image: publicarray/doh-proxy
environment:
- "UNBOUND_SERVICE_HOST=127.0.0.11"

View File

@ -26,6 +26,7 @@ services:
- AMZ_HOST=aws:4566
- WEBDAV_HOST=webdav
- DD_INSTRUMENTATION_TELEMETRY_ENABLED=false
- GRPC_VERBOSITY=ERROR
image: ruby:alpine
privileged: true
depends_on:
@ -40,8 +41,7 @@ services:
- altsvc-nghttp2
volumes:
- ./:/home
entrypoint:
/home/test/support/ci/build.sh
entrypoint: /home/test/support/ci/build.sh
sshproxy:
image: connesc/ssh-gateway
@ -66,51 +66,44 @@ services:
- ./test/support/ci/squid/proxy.conf:/etc/squid/squid.conf
- ./test/support/ci/squid/proxy-users-basic.txt:/etc/squid/proxy-users-basic.txt
- ./test/support/ci/squid/proxy-users-digest.txt:/etc/squid/proxy-users-digest.txt
command:
-d 3
command: -d 3
http2proxy:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 3300:80
depends_on:
- httpproxy
entrypoint:
/usr/local/bin/nghttpx
command:
--no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
entrypoint: /usr/local/bin/nghttpx
command: --no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 80:80
- 443:443
depends_on:
- httpbin
entrypoint:
/usr/local/bin/nghttpx
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command:
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
networks:
default:
aliases:
- another
altsvc-nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 81:80
- 444:443
depends_on:
- httpbin
entrypoint:
/usr/local/bin/nghttpx
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command:
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
networks:
default:
aliases:
@ -119,8 +112,7 @@ services:
environment:
- DEBUG=True
image: citizenstig/httpbin
command:
gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
command: gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
aws:
image: localstack/localstack

View File

@ -1,11 +1,20 @@
require "httpx"
URLS = %w[https://nghttp2.org/httpbin/get] * 1
if ARGV.empty?
URLS = %w[https://nghttp2.org/httpbin/get] * 1
else
URLS = ARGV
end
responses = HTTPX.get(*URLS)
Array(responses).each(&:raise_for_status)
puts "Status: \n"
puts Array(responses).map(&:status)
puts "Payload: \n"
puts Array(responses).map(&:to_s)
Array(responses).each do |res|
puts "URI: #{res.uri}"
case res
when HTTPX::ErrorResponse
puts "error: #{res.error}"
puts res.error.backtrace
else
puts "STATUS: #{res.status}"
puts res.to_s[0..2048]
end
end

View File

@ -17,20 +17,49 @@ end
Signal.trap("INFO") { print_status } unless ENV.key?("CI")
PAGES = (ARGV.first || 10).to_i
Thread.start do
frontpage = HTTPX.get("https://news.ycombinator.com").to_s
page_links = []
HTTPX.wrap do |http|
PAGES.times.each do |i|
frontpage = http.get("https://news.ycombinator.com?p=#{i+1}").to_s
html = Oga.parse_html(frontpage)
html = Oga.parse_html(frontpage)
links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? }
links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? }
links = links.select {|l| l.start_with?("https") }
links = links.select {|l| l.start_with?("https") }
puts links
puts "for page #{i+1}: #{links.size} links"
page_links.concat(links)
end
end
responses = HTTPX.get(*links)
puts "requesting #{page_links.size} links:"
responses = HTTPX.get(*page_links)
# page_links.each_with_index do |l, i|
# puts "#{responses[i].status}: #{l}"
# end
responses, error_responses = responses.partition { |r| r.is_a?(HTTPX::Response) }
puts "#{responses.size} responses (from #{page_links.size})"
puts "by group:"
responses.group_by(&:status).each do |st, res|
res.each do |r|
puts "#{st}: #{r.uri}"
end
end unless responses.empty?
unless error_responses.empty?
puts "error responses (#{error_responses.size})"
error_responses.group_by{ |r| r.error.class }.each do |kl, res|
res.each do |r|
puts "#{r.uri}: #{r.error}"
puts r.error.backtrace&.join("\n")
end
end
end
links.each_with_index do |l, i|
puts "#{responses[i].status}: #{l}"
end
end.join

View File

@ -7,8 +7,8 @@
#
require "httpx"
URLS = %w[http://badipv4.test.ipv6friday.org/] * 1
# URLS = %w[http://badipv6.test.ipv6friday.org/] * 1
# URLS = %w[https://ipv4.test-ipv6.com] * 1
URLS = %w[https://ipv6.test-ipv6.com] * 1
responses = HTTPX.get(*URLS, ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE})

View File

@ -32,7 +32,7 @@ Gem::Specification.new do |gem|
gem.require_paths = ["lib"]
gem.add_runtime_dependency "http-2-next", ">= 1.0.3"
gem.add_runtime_dependency "http-2", ">= 1.0.0"
gem.required_ruby_version = ">= 2.7.0"
end

View File

@ -0,0 +1,133 @@
# frozen_string_literal: true
module DatadogHelpers
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
ERROR_TAG = if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
private
def verify_instrumented_request(status, verb:, uri:, span: fetch_spans.first, service: datadog_service_name.to_s, error: nil)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert span.type == "http"
else
assert span.span_type == "http"
end
assert span.name == "#{datadog_service_name}.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == 80
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
if status && status >= 400
verify_http_error_span(span, status, error)
elsif error
verify_error_span(span)
else
assert span.status.zero?
assert span.get_tag("http.status_code") == status.to_s
# peer service
# assert span.get_tag("peer.service") == span.service
end
end
def verify_http_error_span(span, status, error)
assert span.get_tag("http.status_code") == status.to_s
assert span.get_tag("error.type") == error
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_error_span(span)
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_no_distributed_headers(request_headers)
assert !request_headers.key?("x-datadog-parent-id")
assert !request_headers.key?("x-datadog-trace-id")
assert !request_headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(request_headers, span: fetch_spans.first, sampling_priority: 1)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert request_headers["x-datadog-parent-id"] == span.id.to_s
else
assert request_headers["x-datadog-parent-id"] == span.span_id.to_s
end
assert request_headers["x-datadog-trace-id"] == trace_id(span)
assert request_headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.17.0")
def trace_id(span)
Datadog::Tracing::Utils::TraceId.to_low_order(span.trace_id).to_s
end
else
def trace_id(span)
span.trace_id.to_s
end
end
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(datadog_service_name, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
# Returns spans and caches it (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = (tracer.instance_variable_get(:@traces) || []).map(&:spans)
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
end
end

View File

@ -10,50 +10,51 @@ end
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/datadog"
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
require_relative "datadog_helpers"
class DatadogTest < Minitest::Test
include HTTPHelpers
include DatadogHelpers
def test_datadog_successful_get_request
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_successful_post_request
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/post", "http://#{httpbin}"))
response = HTTPX.post(uri, body: "bla")
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "POST", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_successful_multiple_requests
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
get_uri = URI(build_uri("/get", "http://#{httpbin}"))
post_uri = URI(build_uri("/post", "http://#{httpbin}"))
get_response, post_response = HTTPX.request([["GET", uri], ["POST", uri]])
get_response, post_response = HTTPX.request([["GET", get_uri], ["POST", post_uri]])
verify_status(get_response, 200)
verify_status(post_response, 200)
assert fetch_spans.size == 2, "expected to have 2 spans"
get_span, post_span = fetch_spans
verify_instrumented_request(get_response, span: get_span, verb: "GET", uri: uri)
verify_instrumented_request(post_response, span: post_span, verb: "POST", uri: uri)
verify_distributed_headers(get_response, span: get_span)
verify_distributed_headers(post_response, span: post_span)
verify_instrumented_request(get_response.status, span: get_span, verb: "GET", uri: get_uri)
verify_instrumented_request(post_response.status, span: post_span, verb: "POST", uri: post_uri)
verify_distributed_headers(request_headers(get_response), span: get_span)
verify_distributed_headers(request_headers(post_response), span: post_span)
verify_analytics_headers(get_span)
verify_analytics_headers(post_span)
end
@ -66,8 +67,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 500)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
def test_datadog_client_error_request
@ -78,8 +78,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 404)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
def test_datadog_some_other_error
@ -90,12 +89,11 @@ class DatadogTest < Minitest::Test
assert response.is_a?(HTTPX::ErrorResponse), "response should contain errors"
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
verify_distributed_headers(response)
verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
end
def test_datadog_host_config
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin"
http.split_by_domain = false
@ -105,12 +103,12 @@ class DatadogTest < Minitest::Test
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_split_by_domain
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog do |http|
http.split_by_domain = true
end
@ -119,13 +117,13 @@ class DatadogTest < Minitest::Test
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
@ -135,14 +133,14 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(response)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span)
end
def test_datadog_distributed_headers_sampling_priority
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
@ -153,34 +151,34 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_distributed_headers(response, span: span, sampling_priority: sampling_priority)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span)
end
def test_datadog_analytics_enabled
set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0)
end
def test_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5)
end
@ -194,7 +192,7 @@ class DatadogTest < Minitest::Test
assert fetch_spans.size == 3, "expected to have 3 spans"
fetch_spans.each do |span|
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
end
@ -208,128 +206,15 @@ class DatadogTest < Minitest::Test
def teardown
super
Datadog.registry[:httpx].reset_configuration!
Datadog.configuration.tracing[:httpx].enabled = false
end
def verify_instrumented_request(response, verb:, uri:, span: fetch_spans.first, service: "httpx", error: nil)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0.beta1")
assert span.type == "http"
else
assert span.span_type == "http"
end
assert span.name == "httpx.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == "80"
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
error_tag = if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
if error
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
elsif response.status >= 400
assert span.get_tag("http.status_code") == response.status.to_s
assert span.get_tag("error.type") == "HTTPX::HTTPError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
else
assert span.status.zero?
assert span.get_tag("http.status_code") == response.status.to_s
# peer service
assert span.get_tag("peer.service") == span.service
end
def datadog_service_name
:httpx
end
def verify_no_distributed_headers(response)
request = response.instance_variable_get(:@request)
assert !request.headers.key?("x-datadog-parent-id")
assert !request.headers.key?("x-datadog-trace-id")
assert !request.headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(response, span: fetch_spans.first, sampling_priority: 1)
request = response.instance_variable_get(:@request)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0.beta1")
assert request.headers["x-datadog-parent-id"] == span.id.to_s
else
assert request.headers["x-datadog-parent-id"] == span.span_id.to_s
end
assert request.headers["x-datadog-trace-id"] == trace_id(span)
assert request.headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.17.0")
def trace_id(span)
Datadog::Tracing::Utils::TraceId.to_low_order(span.trace_id).to_s
end
else
def trace_id(span)
span.trace_id.to_s
end
end
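The version gate above exists because recent datadog gem releases can generate 128-bit trace ids, while the `x-datadog-trace-id` header carries only the low-order 64 bits. A minimal sketch of that truncation (the helper name here is hypothetical; the gem's real implementation is `Datadog::Tracing::Utils::TraceId.to_low_order`):

```ruby
# Extract the low-order 64 bits of a (possibly 128-bit) trace id,
# mirroring what the x-datadog-trace-id header carries.
def low_order_trace_id(trace_id)
  trace_id & 0xFFFF_FFFF_FFFF_FFFF # keep the bottom 64 bits
end

# a 128-bit id: high 64 bits | low 64 bits
high = 0x1234_5678_9ABC_DEF0
low  = 0x0FED_CBA9_8765_4321
id128 = (high << 64) | low
low_order_trace_id(id128) # => 0x0FED_CBA9_8765_4321
```

For ids that already fit in 64 bits, the mask is a no-op, which is why the pre-1.17.0 branch can use `span.trace_id.to_s` directly.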
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(:httpx, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
# Returns spans and caches them (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = (tracer.instance_variable_get(:@traces) || []).map(&:spans)
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
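The nested comparison above is correct but deep; Ruby's `Array#<=>` compares element-wise and only moves to the next key on a tie, so the same ordering can be expressed as a single `sort_by` key. A self-contained sketch with stand-in span objects:

```ruby
# Stand-in for the tracer's span objects, with just the sort keys.
Span = Struct.new(:name, :resource, :start_time, :end_time)

spans = [
  Span.new("b", "r1", 2, 3),
  Span.new("a", "r2", 1, 2),
  Span.new("a", "r1", 1, 5),
  Span.new("a", "r1", 1, 2),
]

# Array comparison falls through name -> resource -> start_time -> end_time,
# equivalent to the nested if/else comparison in fetch_spans.
sorted = spans.sort_by { |s| [s.name, s.resource, s.start_time, s.end_time] }
sorted.map(&:name) # => ["a", "a", "a", "b"]
```

This only works when every key responds to `<=>` against its counterpart, which holds here since names, resources, and times are homogeneous per position.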
def request_headers(response)
body = json_body(response)
body["headers"].transform_keys(&:downcase)
end
end


@ -0,0 +1,198 @@
# frozen_string_literal: true
begin
# upcoming 2.0
require "datadog"
rescue LoadError
require "ddtrace"
end
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/faraday"
require_relative "datadog_helpers"
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
class FaradayDatadogTest < Minitest::Test
include HTTPHelpers
include DatadogHelpers
include FaradayHelpers
def test_faraday_datadog_successful_get_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_successful_post_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.post(uri, "bla")
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_server_error_request
set_datadog
uri = URI(build_uri("/status/500"))
ex = assert_raises(Faraday::ServerError) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 500")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_client_error_request
set_datadog
uri = URI(build_uri("/status/404"))
ex = assert_raises(Faraday::ResourceNotFound) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 404")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_some_other_error
set_datadog
uri = URI("http://unexisting/")
assert_raises(HTTPX::NativeResolveError) { faraday_connection.get(uri) }
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
end
def test_faraday_datadog_host_config
uri = URI(build_uri("/status/200"))
set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin"
http.split_by_domain = false
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_split_by_domain
uri = URI(build_uri("/status/200"))
set_datadog do |http|
http.split_by_domain = true
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_distributed_headers_sampling_priority
set_datadog
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_analytics_enabled
set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0)
end
def test_faraday_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5)
end
private
def setup
super
Datadog.registry[:faraday].reset_configuration!
end
def teardown
super
Datadog.registry[:faraday].reset_configuration!
end
def datadog_service_name
:faraday
end
def origin(orig = httpbin)
"http://#{orig}"
end
end


@ -133,7 +133,7 @@ class SentryTest < Minitest::Test
Sentry.init do |config|
config.traces_sample_rate = 1.0
config.logger = mock_logger
config.sdk_logger = mock_logger
config.dsn = DUMMY_DSN
config.transport.transport_class = Sentry::DummyTransport
config.background_worker_threads = 0


@ -155,6 +155,29 @@ class WebmockTest < Minitest::Test
assert_requested(:get, MOCK_URL_HTTP, query: hash_excluding("a" => %w[b c]))
end
def test_verification_that_expected_request_with_hash_as_body
stub_request(:post, MOCK_URL_HTTP).with(body: { foo: "bar" })
http_request(:post, MOCK_URL_HTTP, form: { foo: "bar" })
assert_requested(:post, MOCK_URL_HTTP, body: { foo: "bar" })
end
def test_verification_that_expected_request_occured_with_form_file
file = File.new(fixture_file_path)
stub_request(:post, MOCK_URL_HTTP)
http_request(:post, MOCK_URL_HTTP, form: { file: file })
# TODO: webmock does not support matching multipart request body
assert_requested(:post, MOCK_URL_HTTP)
end
def test_verification_that_expected_request_occured_with_form_tempfile
stub_request(:post, MOCK_URL_HTTP)
Tempfile.open("tmp") do |file|
http_request(:post, MOCK_URL_HTTP, form: { file: file })
end
# TODO: webmock does not support matching multipart request body
assert_requested(:post, MOCK_URL_HTTP)
end
def test_verification_that_non_expected_request_didnt_occur
expected_message = Regexp.new(
"The request GET #{MOCK_URL_HTTP}/ was not expected to execute but it executed 1 time\n\n" \
@ -191,6 +214,37 @@ class WebmockTest < Minitest::Test
end
end
def test_webmock_allows_real_request
WebMock.allow_net_connect!
uri = build_uri("/get?foo=bar")
response = HTTPX.get(uri)
verify_status(response, 200)
verify_body_length(response)
assert_requested(:get, uri, query: { "foo" => "bar" })
end
def test_webmock_allows_real_request_with_body
WebMock.allow_net_connect!
uri = build_uri("/post")
response = HTTPX.post(uri, form: { foo: "bar" })
verify_status(response, 200)
verify_body_length(response)
assert_requested(:post, uri, headers: { "Content-Type" => "application/x-www-form-urlencoded" }, body: "foo=bar")
end
def test_webmock_allows_real_request_with_file_body
WebMock.allow_net_connect!
uri = build_uri("/post")
response = HTTPX.post(uri, form: { image: File.new(fixture_file_path) })
verify_status(response, 200)
verify_body_length(response)
body = json_body(response)
verify_header(body["headers"], "Content-Type", "multipart/form-data")
verify_uploaded_image(body, "image", "image/jpeg")
# TODO: webmock does not support matching multipart request body
# assert_requested(:post, uri, headers: { "Content-Type" => "multipart/form-data" }, form: { "image" => File.new(fixture_file_path) })
end
def test_webmock_mix_mock_and_real_request
WebMock.allow_net_connect!
@ -280,4 +334,8 @@ class WebmockTest < Minitest::Test
def http_request(meth, *uris, **options)
HTTPX.__send__(meth, *uris, **options)
end
def scheme
"http://"
end
end


@ -2,28 +2,11 @@
require "httpx/version"
require "httpx/extensions"
require "httpx/errors"
require "httpx/utils"
require "httpx/punycode"
require "httpx/domain_name"
require "httpx/altsvc"
require "httpx/callbacks"
require "httpx/loggable"
require "httpx/transcoder"
require "httpx/timers"
require "httpx/pool"
require "httpx/headers"
require "httpx/request"
require "httpx/response"
require "httpx/options"
require "httpx/chainable"
# Top-Level Namespace
#
module HTTPX
EMPTY = [].freeze
EMPTY_HASH = {}.freeze
# All plugins should be stored under this module/namespace. Can register and load
# plugins.
@ -53,15 +36,31 @@ module HTTPX
m.synchronize { h[name] = mod }
end
end
extend Chainable
end
require "httpx/extensions"
require "httpx/errors"
require "httpx/utils"
require "httpx/punycode"
require "httpx/domain_name"
require "httpx/altsvc"
require "httpx/callbacks"
require "httpx/loggable"
require "httpx/transcoder"
require "httpx/timers"
require "httpx/pool"
require "httpx/headers"
require "httpx/request"
require "httpx/response"
require "httpx/options"
require "httpx/chainable"
require "httpx/session"
require "httpx/session_extensions"
# load integrations when possible
require "httpx/adapters/datadog" if defined?(DDTrace) || defined?(Datadog)
require "httpx/adapters/datadog" if defined?(DDTrace) || defined?(Datadog::Tracing)
require "httpx/adapters/sentry" if defined?(Sentry)
require "httpx/adapters/webmock" if defined?(WebMock)


@ -13,8 +13,17 @@ module Datadog::Tracing
TYPE_OUTBOUND = Datadog::Tracing::Metadata::Ext::HTTP::TYPE_OUTBOUND
TAG_PEER_SERVICE = Datadog::Tracing::Metadata::Ext::TAG_PEER_SERVICE
TAG_BASE_SERVICE = if Gem::Version.new(DATADOG_VERSION::STRING) < Gem::Version.new("1.15.0")
"_dd.base_service"
else
Datadog::Tracing::Contrib::Ext::Metadata::TAG_BASE_SERVICE
end
TAG_PEER_HOSTNAME = Datadog::Tracing::Metadata::Ext::TAG_PEER_HOSTNAME
TAG_KIND = Datadog::Tracing::Metadata::Ext::TAG_KIND
TAG_CLIENT = Datadog::Tracing::Metadata::Ext::SpanKind::TAG_CLIENT
TAG_COMPONENT = Datadog::Tracing::Metadata::Ext::TAG_COMPONENT
TAG_OPERATION = Datadog::Tracing::Metadata::Ext::TAG_OPERATION
TAG_URL = Datadog::Tracing::Metadata::Ext::HTTP::TAG_URL
TAG_METHOD = Datadog::Tracing::Metadata::Ext::HTTP::TAG_METHOD
TAG_TARGET_HOST = Datadog::Tracing::Metadata::Ext::NET::TAG_TARGET_HOST
@ -27,107 +36,115 @@ module Datadog::Tracing
# Enables tracing for httpx requests.
#
# A span will be created for each request transaction; the span is created lazily only when
# receiving a response, and it is fed the start time stored inside the tracer object.
# buffering a request, and it is fed the start time stored inside the tracer object.
#
module Plugin
class RequestTracer
include Contrib::HttpAnnotationHelper
module RequestTracer
extend Contrib::HttpAnnotationHelper
module_function
SPAN_REQUEST = "httpx.request"
# initializes the tracer object on the +request+.
def initialize(request)
@request = request
@start_time = nil
# initializes tracing on the +request+.
def call(request)
return unless configuration(request).enabled
span = nil
# request objects are reused when already buffered requests get rerouted to a different
# connection due to connection issues, or when they already got a response but need to
# be retried. In the former case the original span must be extended; the latter requires
# a new span.
request.on(:idle) { reset }
request.on(:idle) do
span = nil
end
# the span is initialized when the request is buffered in the parser, which is the closest
# one gets to actually sending the request.
request.on(:headers) { call }
request.on(:headers) do
next if span
span = initialize_span(request, now)
end
request.on(:response) do |response|
unless span
next unless response.is_a?(::HTTPX::ErrorResponse) && response.error.respond_to?(:connection)
# handles the case when the +error+ happened during name resolution, which means
# that the tracing start point hasn't been triggered yet; in such cases, the approximate
# initial resolving time is collected from the connection, and used as span start time,
# and the tracing object is inserted before the on response callback is called.
span = initialize_span(request, response.error.connection.init_time)
end
finish(response, span)
end
end
# sets up the span start time, while preparing the on response callback.
def call(*args)
return if @start_time
start(*args)
@request.once(:response, &method(:finish))
end
private
# just sets the span init time. It can be passed a +start_time+ in cases where
# this is collected outside the request transaction.
def start(start_time = now)
@start_time = start_time
end
# resets the start time for already finished request transactions.
def reset
return unless @start_time
start
end
# creates the span from the collected +@start_time+ to what the +response+ state
# contains. It also resets internal state to allow this object to be reused.
def finish(response)
return unless @start_time
span = initialize_span
return unless span
def finish(response, span)
if response.is_a?(::HTTPX::ErrorResponse)
span.set_error(response.error)
else
span.set_tag(TAG_STATUS_CODE, response.status.to_s)
span.set_error(::HTTPX::HTTPError.new(response)) if response.status >= 400 && response.status <= 599
span.set_tags(
Datadog.configuration.tracing.header_tags.response_tags(response.headers.to_h)
) if Datadog.configuration.tracing.respond_to?(:header_tags)
end
span.finish
ensure
@start_time = nil
end
# return a span initialized with the +@request+ state.
def initialize_span
verb = @request.verb
uri = @request.uri
def initialize_span(request, start_time)
verb = request.verb
uri = request.uri
span = create_span(@request)
config = configuration(request)
span = create_span(request, config, start_time)
span.resource = verb
# Add additional request specific tags to the span.
# Tag original global service name if not used
span.set_tag(TAG_BASE_SERVICE, Datadog.configuration.service) if span.service != Datadog.configuration.service
span.set_tag(TAG_URL, @request.path)
span.set_tag(TAG_KIND, TAG_CLIENT)
span.set_tag(TAG_COMPONENT, "httpx")
span.set_tag(TAG_OPERATION, "request")
span.set_tag(TAG_URL, request.path)
span.set_tag(TAG_METHOD, verb)
span.set_tag(TAG_TARGET_HOST, uri.host)
span.set_tag(TAG_TARGET_PORT, uri.port.to_s)
span.set_tag(TAG_TARGET_PORT, uri.port)
span.set_tag(TAG_PEER_HOSTNAME, uri.host)
# Tag as an external peer service
span.set_tag(TAG_PEER_SERVICE, span.service)
# span.set_tag(TAG_PEER_SERVICE, span.service)
if configuration[:distributed_tracing]
if config[:distributed_tracing]
propagate_trace_http(
Datadog::Tracing.active_trace.to_digest,
@request.headers
Datadog::Tracing.active_trace,
request.headers
)
end
# Set analytics sample rate
if Contrib::Analytics.enabled?(configuration[:analytics_enabled])
Contrib::Analytics.set_sample_rate(span, configuration[:analytics_sample_rate])
if Contrib::Analytics.enabled?(config[:analytics_enabled])
Contrib::Analytics.set_sample_rate(span, config[:analytics_sample_rate])
end
span.set_tags(
Datadog.configuration.tracing.header_tags.request_tags(request.headers.to_h)
) if Datadog.configuration.tracing.respond_to?(:header_tags)
span
rescue StandardError => e
Datadog.logger.error("error preparing span for http request: #{e}")
@ -138,34 +155,34 @@ module Datadog::Tracing
::Datadog::Core::Utils::Time.now.utc
end
def configuration
@configuration ||= Datadog.configuration.tracing[:httpx, @request.uri.host]
def configuration(request)
Datadog.configuration.tracing[:httpx, request.uri.host]
end
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0.beta1")
def propagate_trace_http(digest, headers)
Datadog::Tracing::Contrib::HTTP.inject(digest, headers)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
def propagate_trace_http(trace, headers)
Datadog::Tracing::Contrib::HTTP.inject(trace, headers)
end
def create_span(request)
def create_span(request, configuration, start_time)
Datadog::Tracing.trace(
SPAN_REQUEST,
service: service_name(request.uri.host, configuration, Datadog.configuration_for(self)),
service: service_name(request.uri.host, configuration),
type: TYPE_OUTBOUND,
start_time: @start_time
start_time: start_time
)
end
else
def propagate_trace_http(digest, headers)
Datadog::Tracing::Propagation::HTTP.inject!(digest, headers)
def propagate_trace_http(trace, headers)
Datadog::Tracing::Propagation::HTTP.inject!(trace.to_digest, headers)
end
def create_span(request)
def create_span(request, configuration, start_time)
Datadog::Tracing.trace(
SPAN_REQUEST,
service: service_name(request.uri.host, configuration, Datadog.configuration_for(self)),
service: service_name(request.uri.host, configuration),
span_type: TYPE_OUTBOUND,
start_time: @start_time
start_time: start_time
)
end
end
@ -178,7 +195,7 @@ module Datadog::Tracing
return unless Datadog::Tracing.enabled?
RequestTracer.new(self)
RequestTracer.call(self)
end
end
@ -190,22 +207,6 @@ module Datadog::Tracing
@init_time = ::Datadog::Core::Utils::Time.now.utc
end
# handles the case when the +error+ happened during name resolution, which means
# that the tracing logic hasn't been injected yet; in such cases, the approximate
# initial resolving time is collected from the connection, and used as span start time,
# and the tracing object is inserted before the on response callback is called.
def handle_error(error)
return super unless Datadog::Tracing.enabled?
return super unless error.respond_to?(:connection)
@pending.each do |request|
RequestTracer.new(request).call(error.connection.init_time)
end
super
end
end
end


@ -30,6 +30,7 @@ module Faraday
end
@connection = @connection.plugin(OnDataPlugin) if env.request.stream_response?
@connection = @config_block.call(@connection) || @connection if @config_block
@connection
end
@ -107,9 +108,11 @@ module Faraday
ssl_options
end
else
# :nocov:
def ssl_options_from_env(*)
{}
end
# :nocov:
end
end
@ -146,7 +149,7 @@ module Faraday
module ResponseMethods
def reason
Net::HTTP::STATUS_CODES.fetch(@status)
Net::HTTP::STATUS_CODES.fetch(@status, "Non-Standard status code")
end
end
end
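The second argument to `fetch` above matters because `Hash#fetch` raises `KeyError` when the key is absent: without the default, a non-standard status code from the server would crash `#reason` instead of degrading gracefully. A quick illustration against the stdlib table the adapter reads from:

```ruby
require "net/http"
require "net/http/status" # defines Net::HTTP::STATUS_CODES

codes = Net::HTTP::STATUS_CODES # { 200 => "OK", 404 => "Not Found", ... }

codes.fetch(200, "Non-Standard status code") # => "OK"
codes.fetch(799, "Non-Standard status code") # => "Non-Standard status code"
# codes.fetch(799) with no default would raise KeyError
```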
@ -212,7 +215,7 @@ module Faraday
Array(responses).each_with_index do |response, index|
handler = @handlers[index]
handler.on_response.call(response)
handler.on_complete.call(handler.env)
handler.on_complete.call(handler.env) if handler.on_complete
end
end
rescue ::HTTPX::TimeoutError => e


@ -20,7 +20,7 @@ module WebMock
WebMock::RequestSignature.new(
request.verb.downcase.to_sym,
uri.to_s,
body: request.body.each.to_a.join,
body: request.body.to_s,
headers: request.headers.to_h
)
end
@ -47,21 +47,27 @@ module WebMock
end
def build_error_response(request, exception)
HTTPX::ErrorResponse.new(request, exception, request.options)
HTTPX::ErrorResponse.new(request, exception)
end
end
module InstanceMethods
def init_connection(*)
connection = super
private
def do_init_connection(connection, selector)
super
connection.once(:unmock_connection) do
next unless connection.current_session == self
unless connection.addresses
connection.__send__(:callbacks)[:connect_error].clear
pool.__send__(:unregister_connection, connection)
# reset Happy Eyeballs, fail early
connection.sibling = nil
deselect_connection(connection, selector)
end
pool.__send__(:resolve_connection, connection)
resolve_connection(connection, selector)
end
connection
end
end
@ -100,6 +106,10 @@ module WebMock
super
end
def terminate
force_reset
end
def send(request)
request_signature = Plugin.build_webmock_request_signature(request)
WebMock::RequestRegistry.instance.requested_signatures.put(request_signature)
@ -108,8 +118,15 @@ module WebMock
response = Plugin.build_from_webmock_response(request, mock_response)
WebMock::CallbackRegistry.invoke_callbacks({ lib: :httpx }, request_signature, mock_response)
log { "mocking #{request.uri} with #{mock_response.inspect}" }
request.transition(:headers)
request.transition(:body)
request.transition(:trailers)
request.transition(:done)
response.finish!
request.response = response
request.emit(:response, response)
request_signature.headers = request.headers.to_h
response << mock_response.body.dup unless response.is_a?(HTTPX::ErrorResponse)
elsif WebMock.net_connect_allowed?(request_signature.uri)
if WebMock::CallbackRegistry.any_callbacks?


@ -14,8 +14,6 @@ module HTTPX
class Buffer
extend Forwardable
def_delegator :@buffer, :<<
def_delegator :@buffer, :to_s
def_delegator :@buffer, :to_str
@ -30,9 +28,22 @@ module HTTPX
attr_reader :limit
def initialize(limit)
@buffer = "".b
@limit = limit
if RUBY_VERSION >= "3.4.0"
def initialize(limit)
@buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)
@limit = limit
end
def <<(chunk)
@buffer.append_as_bytes(chunk)
end
else
def initialize(limit)
@buffer = "".b
@limit = limit
end
def_delegator :@buffer, :<<
end
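On Ruby 3.4+, the branch above preallocates the backing string at `capacity: limit` and appends with `String#append_as_bytes`, avoiding both intermediate reallocations and per-append encoding negotiation for binary payloads. The preallocation half works on any maintained Ruby; a small sketch (`append_as_bytes` itself needs 3.4):

```ruby
# Preallocate a binary backing string so appends up to `limit` bytes
# don't force intermediate reallocations.
limit = 4096
buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)

buffer << "hello" # ASCII-compatible data keeps the BINARY encoding
buffer.bytesize # => 5

# On Ruby 3.4+ the write path can skip encoding negotiation entirely:
#   buffer.append_as_bytes(chunk)
```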
def full?


@ -4,7 +4,7 @@ module HTTPX
module Callbacks
def on(type, &action)
callbacks(type) << action
self
action
end
def once(type, &block)
@ -12,20 +12,15 @@ module HTTPX
block.call(*args, &callback)
:delete
end
self
end
def only(type, &block)
callbacks(type).clear
on(type, &block)
end
def emit(type, *args)
log { "emit #{type.inspect} callbacks" } if respond_to?(:log)
callbacks(type).delete_if { |pr| :delete == pr.call(*args) } # rubocop:disable Style/YodaCondition
end
def callbacks_for?(type)
@callbacks.key?(type) && @callbacks[type].any?
@callbacks && @callbacks.key?(type) && @callbacks[type].any?
end
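The `:delete` sentinel is the whole once-mechanism: `once` wraps the block so it returns `:delete`, and `emit`'s `delete_if` drops any callback whose invocation returned it. A standalone re-implementation of that contract, independent of HTTPX:

```ruby
# Minimal re-implementation of the on/once/emit contract:
# emit removes callbacks that return the :delete sentinel.
class Notifier
  def initialize
    @callbacks = Hash.new { |h, k| h[k] = [] }
  end

  def on(type, &action)
    @callbacks[type] << action
    action # return the callback so callers can keep a handle to it
  end

  def once(type, &block)
    on(type) do |*args|
      block.call(*args)
      :delete # tells emit to drop this callback after one run
    end
  end

  def emit(type, *args)
    @callbacks[type].delete_if { |cb| :delete == cb.call(*args) }
  end
end

n = Notifier.new
seen = []
n.on(:ping)   { seen << :every }
n.once(:ping) { seen << :once }
n.emit(:ping)
n.emit(:ping)
seen # => [:every, :once, :every]
```

Returning the action from `#on` (as the diff changes HTTPX's `Callbacks#on` to do, instead of `self`) is what makes it possible to later identify and remove a specific handler.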
protected


@ -73,7 +73,7 @@ module HTTPX
].include?(callback)
warn "DEPRECATION WARNING: calling `.#{meth}` on plain HTTPX sessions is deprecated. " \
"Use HTTPX.plugin(:callbacks).#{meth} instead."
"Use `HTTPX.plugin(:callbacks).#{meth}` instead."
plugin(:callbacks).__send__(meth, *args, **options, &blk)
else
@ -101,4 +101,6 @@ module HTTPX
end
end
end
extend Chainable
end


@ -41,13 +41,22 @@ module HTTPX
def_delegator :@write_buffer, :empty?
attr_reader :type, :io, :origin, :origins, :state, :pending, :options, :ssl_session
attr_reader :type, :io, :origin, :origins, :state, :pending, :options, :ssl_session, :sibling
attr_writer :timers
attr_writer :current_selector
attr_accessor :family
attr_accessor :current_session, :family
protected :sibling
def initialize(uri, options)
@current_session = @current_selector =
@parser = @sibling = @coalesced_connection =
@io = @ssl_session = @timeout =
@connected_at = @response_received_at = nil
@exhausted = @cloned = @main_sibling = false
@options = Options.new(options)
@type = initialize_type(uri, @options)
@origins = [uri.origin]
@ -56,6 +65,9 @@ module HTTPX
@read_buffer = Buffer.new(@options.buffer_size)
@write_buffer = Buffer.new(@options.buffer_size)
@pending = []
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
on(:error, &method(:on_error))
if @options.io
# if there's an already open IO, get its
@ -66,15 +78,39 @@ module HTTPX
else
transition(:idle)
end
on(:close) do
next if @exhausted # it'll reset
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
# may be called after ":close" above, so after the connection has been checked back in.
# next unless @current_session
@intervals = []
next unless @current_session
@current_session.deselect_connection(self, @current_selector, @cloned)
end
on(:terminate) do
next if @exhausted # it'll reset
current_session = @current_session
current_selector = @current_selector
# may be called after ":close" above, so after the connection has been checked back in.
next unless current_session && current_selector
current_session.deselect_connection(self, current_selector)
end
on(:altsvc) do |alt_origin, origin, alt_params|
build_altsvc_connection(alt_origin, origin, alt_params)
end
self.addresses = @options.addresses if @options.addresses
end
def peer
@origin
end
# this is a semi-private method, to be used by the resolver
# to initiate the io object.
def addresses=(addrs)
@ -119,6 +155,14 @@ module HTTPX
) && @options == connection.options
end
# coalesces +self+ into +connection+.
def coalesce!(connection)
@coalesced_connection = connection
close_sibling
connection.merge(self)
end
# coalescable connections need to be mergeable!
# but internally, #mergeable? is called before #coalescable?
def coalescable?(connection)
@ -161,12 +205,23 @@ module HTTPX
end
end
def io_connected?
return @coalesced_connection.io_connected? if @coalesced_connection
@io && @io.state == :connected
end
def connecting?
@state == :idle
end
def inflight?
@parser && !@parser.empty? && !@write_buffer.empty?
@parser && (
# parser may be dealing with other requests (possibly started from a different fiber)
!@parser.empty? ||
# connection may be doing connection termination handshake
!@write_buffer.empty?
)
end
def interests
@ -182,6 +237,9 @@ module HTTPX
return @parser.interests if @parser
nil
rescue StandardError => e
emit(:error, e)
nil
end
@ -203,6 +261,10 @@ module HTTPX
consume
end
nil
rescue StandardError => e
@write_buffer.clear
emit(:error, e)
raise e
end
def close
@ -212,15 +274,22 @@ module HTTPX
end
def terminate
@connected_at = nil if @state == :closed
case @state
when :idle
purge_after_closed
emit(:terminate)
when :closed
@connected_at = nil
end
close
end
# bypasses the state machine to force closing of connections still connecting.
# **only** used for Happy Eyeballs v2.
def force_reset
def force_reset(cloned = false)
@state = :closing
@cloned = cloned
transition(:closed)
end
@ -233,6 +302,8 @@ module HTTPX
end
def send(request)
return @coalesced_connection.send(request) if @coalesced_connection
if @parser && !@write_buffer.full?
if @response_received_at && @keep_alive_timeout &&
Utils.elapsed_time(@response_received_at) > @keep_alive_timeout
@ -243,6 +314,7 @@ module HTTPX
@pending << request
transition(:active) if @state == :inactive
parser.ping
request.ping!
return
end
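The keep-alive branch above avoids writing a request onto a connection that has been idle past the server's keep-alive window: it pings first and marks the request so a retry is not counted against retry bookkeeping. The staleness test itself reduces to a monotonic-clock comparison; a hedged sketch (names are illustrative):

```ruby
# Returns true when the time elapsed since the last received response
# exceeds the keep-alive timeout, i.e. the server may have silently closed
# the connection and it should be probed (pinged) before reuse.
def stale_connection?(response_received_at, keep_alive_timeout,
                      now: Process.clock_gettime(Process::CLOCK_MONOTONIC))
  # a connection with no response yet, or no keep-alive limit, is never stale
  return false unless response_received_at && keep_alive_timeout

  (now - response_received_at) > keep_alive_timeout
end
```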
@ -253,6 +325,8 @@ module HTTPX
end
def timeout
return if @state == :closed || @state == :inactive
return @timeout if @timeout
return @options.timeout[:connect_timeout] if @state == :idle
@ -280,19 +354,49 @@ module HTTPX
end
def handle_socket_timeout(interval)
# remove the intervals which have elapsed
@intervals.delete_if(&:elapsed?)
unless @intervals.empty?
# other intervals are still pending; do not raise the timeout yet
return
end
error = HTTPX::TimeoutError.new(interval, "timed out while waiting on select")
error = OperationTimeoutError.new(interval, "timed out while waiting on select")
error.set_backtrace(caller)
on_error(error)
end
def sibling=(connection)
@sibling = connection
return unless connection
@main_sibling = connection.sibling.nil?
return unless @main_sibling
connection.sibling = self
end
def handle_connect_error(error)
return handle_error(error) unless @sibling && @sibling.connecting?
@sibling.merge(self)
force_reset(true)
end
def disconnect
return unless @current_session && @current_selector
emit(:close)
@current_session = nil
@current_selector = nil
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@origin=#{@origin} " \
"@state=#{@state} " \
"@pending=#{@pending.size} " \
"@io=#{@io}>"
end
# :nocov:
private
def connect
@ -337,8 +441,10 @@ module HTTPX
#
loop do
siz = @io.read(@window_size, @read_buffer)
log(level: 3, color: :cyan) { "IO READ: #{siz} bytes..." }
log(level: 3, color: :cyan) { "IO READ: #{siz} bytes... (wsize: #{@window_size}, rbuffer: #{@read_buffer.bytesize})" }
unless siz
@write_buffer.clear
ex = EOFError.new("descriptor closed")
ex.set_backtrace(caller)
on_error(ex)
@ -393,6 +499,8 @@ module HTTPX
end
log(level: 3, color: :cyan) { "IO WRITE: #{siz} bytes..." }
unless siz
@write_buffer.clear
ex = EOFError.new("descriptor closed")
ex.set_backtrace(caller)
on_error(ex)
@ -439,17 +547,21 @@ module HTTPX
def send_request_to_parser(request)
@inflight += 1
request.peer_address = @io.ip
parser.send(request)
set_request_timeouts(request)
parser.send(request)
return unless @state == :inactive
transition(:active)
# mark request as ping, as this inactive connection may have been
# closed by the server, and we don't want that to influence retry
# bookkeeping.
request.ping!
end
def build_parser(protocol = @io.protocol)
parser = self.class.parser_type(protocol).new(@write_buffer, @options)
parser = parser_type(protocol).new(@write_buffer, @options)
set_parser_callbacks(parser)
parser
end
@ -461,6 +573,7 @@ module HTTPX
end
@response_received_at = Utils.now
@inflight -= 1
response.finish!
request.emit(:response, response)
end
parser.on(:altsvc) do |alt_origin, origin, alt_params|
@ -473,8 +586,27 @@ module HTTPX
request.emit(:promise, parser, stream)
end
parser.on(:exhausted) do
@pending.concat(parser.pending)
emit(:exhausted)
@exhausted = true
current_session = @current_session
current_selector = @current_selector
begin
parser.close
@pending.concat(parser.pending)
ensure
@current_session = current_session
@current_selector = current_selector
end
case @state
when :closed
idling
@exhausted = false
when :closing
once(:closed) do
idling
@exhausted = false
end
end
end
parser.on(:origin) do |origin|
@origins |= [origin]
@ -490,8 +622,14 @@ module HTTPX
end
parser.on(:reset) do
@pending.concat(parser.pending) unless parser.empty?
current_session = @current_session
current_selector = @current_selector
reset
idling unless @pending.empty?
unless @pending.empty?
idling
@current_session = current_session
@current_selector = current_selector
end
end
parser.on(:current_timeout) do
@current_timeout = @timeout = parser.timeout
@ -499,15 +637,28 @@ module HTTPX
parser.on(:timeout) do |tout|
@timeout = tout
end
parser.on(:error) do |request, ex|
case ex
when MisdirectedRequestError
emit(:misdirected, request)
else
response = ErrorResponse.new(request, ex, @options)
request.response = response
request.emit(:response, response)
parser.on(:error) do |request, error|
case error
when :http_1_1_required
current_session = @current_session
current_selector = @current_selector
parser.close
other_connection = current_session.find_connection(@origin, current_selector,
@options.merge(ssl: { alpn_protocols: %w[http/1.1] }))
other_connection.merge(self)
request.transition(:idle)
other_connection.send(request)
next
when OperationTimeoutError
# request level timeouts should take precedence
next unless request.active_timeouts.empty?
end
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
end
@ -527,15 +678,17 @@ module HTTPX
# connect errors, exit gracefully
error = ConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
connecting? && callbacks_for?(:connect_error) ? emit(:connect_error, error) : handle_error(error)
handle_connect_error(error) if connecting?
@state = :closed
emit(:close)
rescue TLSError, HTTP2Next::Error::ProtocolError, HTTP2Next::Error::HandshakeError => e
purge_after_closed
disconnect
rescue TLSError, ::HTTP2::Error::ProtocolError, ::HTTP2::Error::HandshakeError => e
# connect errors, exit gracefully
handle_error(e)
connecting? && callbacks_for?(:connect_error) ? emit(:connect_error, e) : handle_error(e)
handle_connect_error(e) if connecting?
@state = :closed
emit(:close)
purge_after_closed
disconnect
end
def handle_transition(nextstate)
@ -543,12 +696,12 @@ module HTTPX
when :idle
@timeout = @current_timeout = @options.timeout[:connect_timeout]
@connected_at = nil
@connected_at = @response_received_at = nil
when :open
return if @state == :closed
@io.connect
emit(:tcp_open, self) if @io.state == :connected
close_sibling if @io.state == :connected
return unless @io.connected?
@ -560,6 +713,9 @@ module HTTPX
emit(:open)
when :inactive
return unless @state == :open
# do not deactivate connection in use
return if @inflight.positive?
when :closing
return unless @state == :idle || @state == :open
@ -577,7 +733,8 @@ module HTTPX
return unless @write_buffer.empty?
purge_after_closed
emit(:close) if @pending.empty?
disconnect if @pending.empty?
when :already_open
nextstate = :open
# the first check for given io readiness must still use a timeout.
@ -588,11 +745,30 @@ module HTTPX
return unless @state == :inactive
nextstate = :open
emit(:activate)
# activate
@current_session.select_connection(self, @current_selector)
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
end
def close_sibling
return unless @sibling
if @sibling.io_connected?
reset
# TODO: transition connection to closed
end
unless @sibling.state == :closed
merge(@sibling) unless @main_sibling
@sibling.force_reset(true)
end
@sibling = nil
end
def purge_after_closed
@io.close if @io
@read_buffer.clear
@ -612,12 +788,40 @@ module HTTPX
end
end
# returns an HTTPX::Connection for the negotiated Alternative Service (or none).
def build_altsvc_connection(alt_origin, origin, alt_params)
# do not allow security downgrades on altsvc negotiation
return if @origin.scheme == "https" && alt_origin.scheme != "https"
altsvc = AltSvc.cached_altsvc_set(origin, alt_params.merge("origin" => alt_origin))
# altsvc already exists, somehow it wasn't advertised, probably noop
return unless altsvc
alt_options = @options.merge(ssl: @options.ssl.merge(hostname: URI(origin).host))
connection = @current_session.find_connection(alt_origin, @current_selector, alt_options)
# advertised altsvc is the same origin being used, ignore
return if connection == self
connection.extend(AltSvc::ConnectionMixin) unless connection.is_a?(AltSvc::ConnectionMixin)
log(level: 1) { "#{origin} alt-svc: #{alt_origin}" }
connection.merge(self)
terminate
rescue UnsupportedSchemeError
altsvc["noop"] = true
nil
end
def build_socket(addrs = nil)
case @type
when "tcp"
TCP.new(@origin, addrs, @options)
TCP.new(peer, addrs, @options)
when "ssl"
SSL.new(@origin, addrs, @options) do |sock|
SSL.new(peer, addrs, @options) do |sock|
sock.ssl_session = @ssl_session
sock.session_new_cb do |sess|
@ssl_session = sess
@ -630,14 +834,14 @@ module HTTPX
path = String(path) if path
UNIX.new(@origin, path, @options)
UNIX.new(peer, path, @options)
else
raise Error, "unsupported transport (#{@type})"
end
end
def on_error(error)
if error.instance_of?(TimeoutError)
def on_error(error, request = nil)
if error.is_a?(OperationTimeoutError)
# inactive connections do not contribute to the select loop, therefore
# they should not fail due to such errors.
@ -650,39 +854,60 @@ module HTTPX
error = error.to_connection_error if connecting?
end
handle_error(error)
handle_error(error, request)
reset
end
def handle_error(error)
parser.handle_error(error) if @parser && parser.respond_to?(:handle_error)
while (request = @pending.shift)
response = ErrorResponse.new(request, error, request.options)
request.response = response
request.emit(:response, response)
def handle_error(error, request = nil)
parser.handle_error(error, request) if @parser && parser.respond_to?(:handle_error)
while (req = @pending.shift)
next if request && req == request
response = ErrorResponse.new(req, error)
req.response = response
req.emit(:response, response)
end
return unless request
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
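The rewritten `handle_error` above fans an error out to every queued request while skipping the one request that triggered it, which is answered separately (so the in-flight counter is decremented exactly once for it). A minimal model of that fan-out, with symbols standing in for request objects:

```ruby
# Drains +pending+, assigning an error outcome to each request except the
# one that originated the error; that request is answered last, once.
# Returns an ordered map of request => outcome for inspection.
def fail_pending(pending, error, failing_request = nil)
  outcomes = {}
  while (req = pending.shift)
    next if failing_request && req == failing_request

    outcomes[req] = [:error_response, error]
  end
  # the originating request gets its error response exactly once, at the end
  outcomes[failing_request] = [:error_response, error] if failing_request
  outcomes
end
```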
def set_request_timeouts(request)
write_timeout = request.write_timeout
set_request_write_timeout(request)
set_request_read_timeout(request)
set_request_request_timeout(request)
end
def set_request_read_timeout(request)
read_timeout = request.read_timeout
return if read_timeout.nil? || read_timeout.infinite?
set_request_timeout(:read_timeout, request, read_timeout, :done, :response) do
read_timeout_callback(request, read_timeout)
end
end
def set_request_write_timeout(request)
write_timeout = request.write_timeout
return if write_timeout.nil? || write_timeout.infinite?
set_request_timeout(:write_timeout, request, write_timeout, :headers, %i[done response]) do
write_timeout_callback(request, write_timeout)
end
end
def set_request_request_timeout(request)
request_timeout = request.request_timeout
unless write_timeout.nil? || write_timeout.infinite?
set_request_timeout(request, write_timeout, :headers, %i[done response]) do
write_timeout_callback(request, write_timeout)
end
end
unless read_timeout.nil? || read_timeout.infinite?
set_request_timeout(request, read_timeout, :done, :response) do
read_timeout_callback(request, read_timeout)
end
end
return if request_timeout.nil? || request_timeout.infinite?
set_request_timeout(request, request_timeout, :headers, :response) do
set_request_timeout(:request_timeout, request, request_timeout, :headers, :complete) do
read_timeout_callback(request, request_timeout, RequestTimeoutError)
end
end
@ -692,7 +917,8 @@ module HTTPX
@write_buffer.clear
error = WriteTimeoutError.new(request, nil, write_timeout)
on_error(error)
on_error(error, request)
end
def read_timeout_callback(request, read_timeout, error_type = ReadTimeoutError)
@ -702,35 +928,31 @@ module HTTPX
@write_buffer.clear
error = error_type.new(request, request.response, read_timeout)
on_error(error)
on_error(error, request)
end
def set_request_timeout(request, timeout, start_event, finish_events, &callback)
request.once(start_event) do
interval = @timers.after(timeout, callback)
def set_request_timeout(label, request, timeout, start_event, finish_events, &callback)
request.set_timeout_callback(start_event) do
timer = @current_selector.after(timeout, callback)
request.active_timeouts << label
Array(finish_events).each do |event|
# clean up request timeouts if the connection errors out
request.once(event) do
if @intervals.include?(interval)
interval.delete(callback)
@intervals.delete(interval) if interval.no_callbacks?
end
request.set_timeout_callback(event) do
timer.cancel
request.active_timeouts.delete(label)
end
end
@intervals << interval
end
end
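The new `set_request_timeout(label, ...)` above tracks which timeout kinds are live on a request via `active_timeouts`: starting a timeout pushes its label and arms a timer, and each finish event cancels the timer and pops the label. A simplified standalone model of that bookkeeping (all names here are illustrative, not the httpx API):

```ruby
# Stand-in for a selector timer: only records whether it was cancelled.
FakeTimer = Struct.new(:cancelled) do
  def cancel
    self.cancelled = true
  end
end

# Arms a labeled timeout: the label is recorded as active, and the returned
# finisher (what the finish events would invoke) cancels the timer and
# removes the label, so "is a request-level timeout pending?" reduces to
# checking the active list.
def set_timeout(active_timeouts, label)
  timer = FakeTimer.new(false)
  active_timeouts << label
  finish = lambda do
    timer.cancel
    active_timeouts.delete(label)
  end
  [timer, finish]
end
```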
class << self
def parser_type(protocol)
case protocol
when "h2" then HTTP2
when "http/1.1" then HTTP1
else
raise Error, "unsupported protocol (##{protocol})"
end
def parser_type(protocol)
case protocol
when "h2" then HTTP2
when "http/1.1" then HTTP1
else
raise Error, "unsupported protocol (##{protocol})"
end
end
end
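The hunk above moves `parser_type` from a singleton method to an instance method (so plugin-extended connections can override it); the dispatch itself is a plain case on the negotiated ALPN protocol. Sketched standalone, with symbols standing in for the `Connection::HTTP2` / `Connection::HTTP1` parser classes:

```ruby
# Maps a negotiated application protocol to the parser implementation that
# should drive the connection; anything else is a hard error.
def parser_type(protocol)
  case protocol
  when "h2" then :http2_parser        # stand-in for Connection::HTTP2
  when "http/1.1" then :http1_parser  # stand-in for Connection::HTTP1
  else
    raise ArgumentError, "unsupported protocol (#{protocol})"
  end
end
```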


@ -15,7 +15,7 @@ module HTTPX
attr_accessor :max_concurrent_requests
def initialize(buffer, options)
@options = Options.new(options)
@options = options
@max_concurrent_requests = @options.max_concurrent_requests || MAX_REQUESTS
@max_requests = @options.max_requests
@parser = Parser::HTTP1.new(self)
@ -93,7 +93,7 @@ module HTTPX
concurrent_requests_limit = [@max_concurrent_requests, requests_limit].min
@requests.each_with_index do |request, idx|
break if idx >= concurrent_requests_limit
next if request.state == :done
next unless request.can_buffer?
handle(request)
end
@ -119,7 +119,7 @@ module HTTPX
@parser.http_version.join("."),
headers)
log(color: :yellow) { "-> HEADLINE: #{response.status} HTTP/#{@parser.http_version.join(".")}" }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{v}" }.join("\n") }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v)}" }.join("\n") }
@request.response = response
on_complete if response.finished?
@ -131,7 +131,7 @@ module HTTPX
response = @request.response
log(level: 2) { "trailer headers received" }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{v.join(", ")}" }.join("\n") }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v.join(", "))}" }.join("\n") }
response.merge_headers(h)
end
@ -141,12 +141,12 @@ module HTTPX
return unless request
log(color: :green) { "-> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "-> #{chunk.inspect}" }
log(level: 2, color: :green) { "-> #{log_redact(chunk.inspect)}" }
response = request.response
response << chunk
rescue StandardError => e
error_response = ErrorResponse.new(request, e, request.options)
error_response = ErrorResponse.new(request, e)
request.response = error_response
dispatch
end
@ -171,7 +171,6 @@ module HTTPX
@request = nil
@requests.shift
response = request.response
response.finish! unless response.is_a?(ErrorResponse)
emit(:response, request, response)
if @parser.upgrade?
@ -197,7 +196,7 @@ module HTTPX
end
end
def handle_error(ex)
def handle_error(ex, request = nil)
if (ex.is_a?(EOFError) || ex.is_a?(TimeoutError)) && @request && @request.response &&
!@request.response.headers.key?("content-length") &&
!@request.response.headers.key?("transfer-encoding")
@ -211,11 +210,15 @@ module HTTPX
if @pipelining
catch(:called) { disable }
else
@requests.each do |request|
emit(:error, request, ex)
@requests.each do |req|
next if request && request == req
emit(:error, req, ex)
end
@pending.each do |request|
emit(:error, request, ex)
@pending.each do |req|
next if request && request == req
emit(:error, req, ex)
end
end
end
@ -358,7 +361,7 @@ module HTTPX
while (chunk = request.drain_body)
log(color: :green) { "<- DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "<- #{chunk.inspect}" }
log(level: 2, color: :green) { "<- #{log_redact(chunk.inspect)}" }
@buffer << chunk
throw(:buffer_full, request) if @buffer.full?
end
@ -378,9 +381,9 @@ module HTTPX
def join_headers2(headers)
headers.each do |field, value|
buffer = "#{capitalized(field)}: #{value}#{CRLF}"
log(color: :yellow) { "<- HEADER: #{buffer.chomp}" }
@buffer << buffer
field = capitalized(field)
log(color: :yellow) { "<- HEADER: #{[field, log_redact(value)].join(": ")}" }
@buffer << "#{field}: #{value}#{CRLF}"
end
end


@ -1,18 +1,24 @@
# frozen_string_literal: true
require "securerandom"
require "http/2/next"
require "http/2"
module HTTPX
class Connection::HTTP2
include Callbacks
include Loggable
MAX_CONCURRENT_REQUESTS = HTTP2Next::DEFAULT_MAX_CONCURRENT_STREAMS
MAX_CONCURRENT_REQUESTS = ::HTTP2::DEFAULT_MAX_CONCURRENT_STREAMS
class Error < Error
def initialize(id, code)
super("stream #{id} closed with error: #{code}")
def initialize(id, error)
super("stream #{id} closed with error: #{error}")
end
end
class PingError < Error
def initialize
super(0, :ping_error)
end
end
@ -25,7 +31,7 @@ module HTTPX
attr_reader :streams, :pending
def initialize(buffer, options)
@options = Options.new(options)
@options = options
@settings = @options.http2_settings
@pending = []
@streams = {}
@ -52,6 +58,8 @@ module HTTPX
if @connection.state == :closed
return unless @handshake_completed
return if @buffer.empty?
return :w
end
@ -92,16 +100,10 @@ module HTTPX
@connection << data
end
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send(request)
def send(request, head = false)
unless can_buffer_more_requests?
@pending << request
return
head ? @pending.unshift(request) : @pending << request
return false
end
unless (stream = @streams[request])
stream = @connection.new_stream
@ -111,47 +113,57 @@ module HTTPX
end
handle(request, stream)
true
rescue HTTP2Next::Error::StreamLimitExceeded
rescue ::HTTP2::Error::StreamLimitExceeded
@pending.unshift(request)
false
end
def consume
@streams.each do |request, stream|
next if request.state == :done
next unless request.can_buffer?
handle(request, stream)
end
end
def handle_error(ex)
if ex.instance_of?(TimeoutError) && !@handshake_completed && @connection.state != :closed
def handle_error(ex, request = nil)
if ex.is_a?(OperationTimeoutError) && !@handshake_completed && @connection.state != :closed
@connection.goaway(:settings_timeout, "closing due to settings timeout")
emit(:close_handshake)
settings_ex = SettingsTimeoutError.new(ex.timeout, ex.message)
settings_ex.set_backtrace(ex.backtrace)
ex = settings_ex
end
@streams.each_key do |request|
emit(:error, request, ex)
@streams.each_key do |req|
next if request && request == req
emit(:error, req, ex)
end
@pending.each do |request|
emit(:error, request, ex)
while (req = @pending.shift)
next if request && request == req
emit(:error, req, ex)
end
end
def ping
ping = SecureRandom.gen_random(8)
@connection.ping(ping)
@connection.ping(ping.dup)
ensure
@pings << ping
end
private
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send_pending
while (request = @pending.shift)
# TODO: this request should go back to top of stack
break unless send(request)
break unless send(request, true)
end
end
@ -168,7 +180,7 @@ module HTTPX
end
def init_connection
@connection = HTTP2Next::Client.new(@settings)
@connection = ::HTTP2::Client.new(@settings)
@connection.on(:frame, &method(:on_frame))
@connection.on(:frame_sent, &method(:on_frame_sent))
@connection.on(:frame_received, &method(:on_frame_received))
@ -214,12 +226,12 @@ module HTTPX
extra_headers = set_protocol_headers(request)
if request.headers.key?("host")
log { "forbidden \"host\" header found (#{request.headers["host"]}), will use it as authority..." }
log { "forbidden \"host\" header found (#{log_redact(request.headers["host"])}), will use it as authority..." }
extra_headers[":authority"] = request.headers["host"]
end
log(level: 1, color: :yellow) do
request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
"\n#{request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")}"
end
stream.headers(request.headers.each(extra_headers), end_stream: request.body.empty?)
end
@ -231,7 +243,7 @@ module HTTPX
end
log(level: 1, color: :yellow) do
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
stream.headers(request.trailers.each, end_stream: true)
end
@ -242,13 +254,13 @@ module HTTPX
chunk = @drains.delete(request) || request.drain_body
while chunk
next_chunk = request.drain_body
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{chunk.inspect}" }
stream.data(chunk, end_stream: !(next_chunk || request.trailers? || request.callbacks_for?(:trailers)))
send_chunk(request, stream, chunk, next_chunk)
if next_chunk && (@buffer.full? || request.body.unbounded_body?)
@drains[request] = next_chunk
throw(:buffer_full)
end
chunk = next_chunk
end
@ -257,6 +269,16 @@ module HTTPX
on_stream_refuse(stream, request, error)
end
def send_chunk(request, stream, chunk, next_chunk)
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{log_redact(chunk.inspect)}" }
stream.data(chunk, end_stream: end_stream?(request, next_chunk))
end
def end_stream?(request, next_chunk)
!(next_chunk || request.trailers? || request.callbacks_for?(:trailers))
end
######
# HTTP/2 Callbacks
######
@ -270,7 +292,7 @@ module HTTPX
end
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
_, status = h.shift
headers = request.options.headers_class.new(h)
@ -283,14 +305,14 @@ module HTTPX
def on_stream_trailers(stream, response, h)
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
response.merge_headers(h)
end
def on_stream_data(stream, request, data)
log(level: 1, color: :green) { "#{stream.id}: <- DATA: #{data.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: <- #{data.inspect}" }
log(level: 2, color: :green) { "#{stream.id}: <- #{log_redact(data.inspect)}" }
request.response << data
end
@ -307,26 +329,33 @@ module HTTPX
@streams.delete(request)
if error
ex = Error.new(stream.id, error)
ex.set_backtrace(caller)
response = ErrorResponse.new(request, ex, request.options)
request.response = response
emit(:response, request, response)
case error
when :http_1_1_required
emit(:error, request, error)
else
ex = Error.new(stream.id, error)
ex.set_backtrace(caller)
response = ErrorResponse.new(request, ex)
request.response = response
emit(:response, request, response)
end
else
response = request.response
if response && response.is_a?(Response) && response.status == 421
ex = MisdirectedRequestError.new(response)
ex.set_backtrace(caller)
emit(:error, request, ex)
emit(:error, request, :http_1_1_required)
else
emit(:response, request, response)
end
end
send(@pending.shift) unless @pending.empty?
return unless @streams.empty? && exhausted?
close
emit(:exhausted) unless @pending.empty?
if @pending.empty?
close
else
emit(:exhausted)
end
end
def on_frame(bytes)
@ -344,7 +373,12 @@ module HTTPX
is_connection_closed = @connection.state == :closed
if error
@buffer.clear if is_connection_closed
if error == :no_error
case error
when :http_1_1_required
while (request = @pending.shift)
emit(:error, request, error)
end
when :no_error
ex = GoawayError.new
@pending.unshift(*@streams.keys)
@drains.clear
@ -352,8 +386,11 @@ module HTTPX
else
ex = Error.new(0, error)
end
ex.set_backtrace(caller)
handle_error(ex)
if ex
ex.set_backtrace(caller)
handle_error(ex)
end
end
return unless is_connection_closed && @streams.empty?
@ -363,8 +400,15 @@ module HTTPX
def on_frame_sent(frame)
log(level: 2) { "#{frame[:stream]}: frame was sent!" }
log(level: 2, color: :blue) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
@ -372,15 +416,22 @@ module HTTPX
def on_frame_received(frame)
log(level: 2) { "#{frame[:stream]}: frame was received!" }
log(level: 2, color: :magenta) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
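Both frame-logging hunks above apply the same rule: DATA payloads are logged only by byte size, HEADERS and PING payloads go through redaction, and every other frame is logged verbatim. A standalone sketch of that rule (`redact` stands in for `log_redact`):

```ruby
# Produces a log-safe copy of an HTTP/2 frame hash: body bytes are reduced
# to their size, sensitive payloads (headers, ping) are redacted, and
# control frames pass through unchanged.
def loggable_frame(frame, redact: ->(_) { "[REDACTED]" })
  case frame[:type]
  when :data
    frame.merge(payload: frame[:payload].bytesize)
  when :headers, :ping
    frame.merge(payload: redact.call(frame[:payload]))
  else
    frame
  end
end
```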
def on_altsvc(origin, frame)
log(level: 2) { "#{frame[:stream]}: altsvc frame was received" }
log(level: 2) { "#{frame[:stream]}: #{frame.inspect}" }
log(level: 2) { "#{frame[:stream]}: #{log_redact(frame.inspect)}" }
alt_origin = URI.parse("#{frame[:proto]}://#{frame[:host]}:#{frame[:port]}")
params = { "ma" => frame[:max_age] }
emit(:altsvc, origin, alt_origin, origin, params)
@ -395,11 +446,9 @@ module HTTPX
end
def on_pong(ping)
if @pings.delete(ping.to_s)
emit(:pong)
else
close(:protocol_error, "ping payload did not match")
end
raise PingError unless @pings.delete(ping.to_s)
emit(:pong)
end
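The `ping`/`on_pong` pair above keeps a ledger of outstanding ping payloads; a PONG must echo one of them, and an unknown payload is now surfaced as a `PingError` instead of a silent close. A minimal standalone model of that bookkeeping (class and error names here are illustrative):

```ruby
# Tracks outstanding PING payloads: each outgoing ping is remembered, and a
# pong must match (and consume) one of them; anything else is a protocol
# violation.
class PingLedger
  UnknownPong = Class.new(StandardError)

  def initialize
    @pings = []
  end

  def ping(payload = Random.new.bytes(8))
    @pings << payload
    payload
  end

  def pong(payload)
    raise UnknownPong unless @pings.delete(payload)

    :pong
  end
end
```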
end
end


@ -29,6 +29,9 @@ module HTTPX
end
end
# Raise when it can't acquire a connection from the pool.
class PoolTimeoutError < TimeoutError; end
# Error raised when there was a timeout establishing the connection to a server.
# This may be raised due to timeouts during TCP and TLS (when applicable) connection
# establishment.
@ -65,6 +68,9 @@ module HTTPX
# Error raised when there was a timeout while resolving a domain to an IP.
class ResolveTimeoutError < TimeoutError; end
# Error raised when there was a timeout while waiting for readiness of the socket the request is related to.
class OperationTimeoutError < TimeoutError; end
# Error raised when there was an error while resolving a domain to an IP.
class ResolveError < Error; end
@ -100,8 +106,4 @@ module HTTPX
@response.status
end
end
# error raised when a request was sent to a server which can't produce a response, and
# has therefore returned an HTTP response using the 421 status code.
class MisdirectedRequestError < HTTPError; end
end


@ -11,20 +11,32 @@ module HTTPX
end
def initialize(headers = nil)
if headers.nil? || headers.empty?
@headers = headers.to_h
return
end
@headers = {}
return unless headers
headers.each do |field, value|
array_value(value).each do |v|
add(downcased(field), v)
field = downcased(field)
value = array_value(value)
current = @headers[field]
if current.nil?
@headers[field] = value
else
current.concat(value)
end
end
end
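The rewritten constructor above downcases each field and coerces its value to an array once, then concatenates values for duplicate fields instead of re-inserting per value. Sketched as a standalone helper (a simplification of the headers class, not its full API):

```ruby
# Builds a case-insensitive header map from [field, value] pairs: duplicate
# fields accumulate their values in arrival order instead of overwriting.
def build_headers(pairs)
  headers = {}
  pairs.each do |field, value|
    field = field.to_s.downcase
    value = Array(value)
    if (current = headers[field])
      current.concat(value)
    else
      headers[field] = value
    end
  end
  headers
end
```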
# cloned initialization
def initialize_clone(orig)
def initialize_clone(orig, **kwargs)
super
@headers = orig.instance_variable_get(:@headers).clone
@headers = orig.instance_variable_get(:@headers).clone(**kwargs)
end
# dupped initialization
@ -39,17 +51,6 @@ module HTTPX
super
end
def same_headers?(headers)
@headers.empty? || begin
headers.each do |k, v|
next unless key?(k)
return false unless v == self[k]
end
true
end
end
# merges headers with another header-quack.
# the merge rule is, if the header already exists,
# ignore what the +other+ headers has. Otherwise, set
@ -119,6 +120,10 @@ module HTTPX
other == to_hash
end
def empty?
@headers.empty?
end
# the headers store in Hash format
def to_hash
Hash[to_a]
@ -137,7 +142,8 @@ module HTTPX
# :nocov:
def inspect
to_hash.inspect
"#<#{self.class}:#{object_id} " \
"#{to_hash.inspect}>"
end
# :nocov:
@ -160,12 +166,7 @@ module HTTPX
private
def array_value(value)
case value
when Array
value.map { |val| String(val).strip }
else
[String(value).strip]
end
Array(value)
end
def downcased(field)


@ -9,7 +9,8 @@ module HTTPX
# rubocop:disable Style/MutableConstant
TLS_OPTIONS = { alpn_protocols: %w[h2 http/1.1].freeze }
# https://github.com/jruby/jruby-openssl/issues/284
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby"
# TODO: remove when dropping support for jruby-openssl < 0.15.4
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby" && JOpenSSL::VERSION < "0.15.4"
# rubocop:enable Style/MutableConstant
TLS_OPTIONS.freeze
@ -92,9 +93,12 @@ module HTTPX
end
def connect
super
return if @state == :negotiated ||
@state != :connected
return if @state == :negotiated
unless @state == :connected
super
return unless @state == :connected
end
unless @io.is_a?(OpenSSL::SSL::SSLSocket)
if (hostname_is_ip = (@ip == @sni_hostname))


@ -17,7 +17,7 @@ module HTTPX
@state = :idle
@addresses = []
@hostname = origin.host
@options = Options.new(options)
@options = options
@fallback_protocol = @options.fallback_protocol
@port = origin.port
@interests = :w
@ -75,9 +75,18 @@ module HTTPX
@io = build_socket
end
try_connect
rescue Errno::EHOSTUNREACH,
Errno::ENETUNREACH => e
raise e if @ip_index <= 0
log { "failed connecting to #{@ip} (#{e.message}), evict from cache and trying next..." }
Resolver.cached_lookup_evict(@hostname, @ip)
@ip_index -= 1
@io = build_socket
retry
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError,
IOError => e
raise e if @ip_index <= 0
@ -167,7 +176,12 @@ module HTTPX
# :nocov:
def inspect
"#<#{self.class}: #{@ip}:#{@port} (state: #{@state})>"
"#<#{self.class}:#{object_id} " \
"#{@ip}:#{@port} " \
"@state=#{@state} " \
"@hostname=#{@hostname} " \
"@addresses=#{@addresses}>"
end
# :nocov:
@ -195,12 +209,9 @@ module HTTPX
end
def log_transition_state(nextstate)
case nextstate
when :connected
"Connected to #{host} (##{@io.fileno})"
else
"#{host} #{@state} -> #{nextstate}"
end
label = host
label = "#{label}(##{@io.fileno})" if nextstate == :connected
"#{label} #{@state} -> #{nextstate}"
end
end
end


@ -12,7 +12,7 @@ module HTTPX
@addresses = []
@hostname = origin.host
@state = :idle
@options = Options.new(options)
@options = options
@fallback_protocol = @options.fallback_protocol
if @options.io
@io = case @options.io
@ -48,7 +48,7 @@ module HTTPX
transition(:connected)
rescue Errno::EINPROGRESS,
Errno::EALREADY,
::IO::WaitReadable
IO::WaitReadable
end
def expired?
@@ -57,7 +57,7 @@ module HTTPX
# :nocov:
def inspect
"#<#{self.class}(path: #{@path}): (state: #{@state})>"
"#<#{self.class}:#{object_id} @path=#{@path}) @state=#{@state})>"
end
# :nocov:

View File

@@ -13,22 +13,44 @@ module HTTPX
white: 37,
}.freeze
def log(level: @options.debug_level, color: nil, &msg)
return unless @options.debug
return unless @options.debug_level >= level
USE_DEBUG_LOG = ENV.key?("HTTPX_DEBUG")
debug_stream = @options.debug
def log(
level: @options.debug_level,
color: nil,
debug_level: @options.debug_level,
debug: @options.debug,
&msg
)
return unless debug_level >= level
message = (+"" << msg.call << "\n")
debug_stream = debug || ($stderr if USE_DEBUG_LOG)
return unless debug_stream
klass = self.class
until (class_name = klass.name)
klass = klass.superclass
end
message = +"(pid:#{Process.pid}, " \
"tid:#{Thread.current.object_id}, " \
"fid:#{Fiber.current.object_id}, " \
"self:#{class_name}##{object_id}) "
message << msg.call << "\n"
message = "\e[#{COLORS[color]}m#{message}\e[0m" if color && debug_stream.respond_to?(:isatty) && debug_stream.isatty
debug_stream << message
end
def log_exception(ex, level: @options.debug_level, color: nil)
return unless @options.debug
return unless @options.debug_level >= level
def log_exception(ex, level: @options.debug_level, color: nil, debug_level: @options.debug_level, debug: @options.debug)
log(level: level, color: color, debug_level: debug_level, debug: debug) { ex.full_message }
end
log(level: level, color: color) { ex.full_message }
def log_redact(text, should_redact = @options.debug_redact)
return text.to_s unless should_redact
"[REDACTED]"
end
end
end
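
The reworked `#log` gating can be exercised on its own: nothing is written unless a debug stream is configured and the configured level covers the message level, and color only applies on a tty. This is a simplified sketch with fewer parameters than the diff's signature:

```ruby
COLORS = { black: 30, red: 31, green: 32, yellow: 33, blue: 34 }.freeze

# Minimal sketch of the reworked #log: the stream and level are passed in
# per call; ANSI color is applied only when the stream is a tty.
def log(stream, level: 1, debug_level: 1, color: nil)
  return unless stream
  return unless debug_level >= level

  message = +"" << yield << "\n"
  message = "\e[#{COLORS[color]}m#{message}\e[0m" if color && stream.respond_to?(:isatty) && stream.isatty
  stream << message
end
```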

View File

@@ -18,21 +18,30 @@ module HTTPX
# https://github.com/ruby/resolv/blob/095f1c003f6073730500f02acbdbc55f83d70987/lib/resolv.rb#L408
ip_address_families = begin
list = Socket.ip_address_list
if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? && !a.ipv6_unique_local? }
if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? }
[Socket::AF_INET6, Socket::AF_INET]
else
[Socket::AF_INET]
end
rescue NotImplementedError
[Socket::AF_INET]
end.freeze
SET_TEMPORARY_NAME = ->(mod, pl = nil) do
if mod.respond_to?(:set_temporary_name) # ruby 3.4 only
name = mod.name || "#{mod.superclass.name}(plugin)"
name = "#{name}/#{pl}" if pl
mod.set_temporary_name(name)
end
end
DEFAULT_OPTIONS = {
:max_requests => Float::INFINITY,
:debug => ENV.key?("HTTPX_DEBUG") ? $stderr : nil,
:debug => nil,
:debug_level => (ENV["HTTPX_DEBUG"] || 1).to_i,
:ssl => {},
:http2_settings => { settings_enable_push: 0 },
:debug_redact => ENV.key?("HTTPX_DEBUG_REDACT"),
:ssl => EMPTY_HASH,
:http2_settings => { settings_enable_push: 0 }.freeze,
:fallback_protocol => "http/1.1",
:supported_compression_formats => %w[gzip deflate],
:decompress_response_body => true,
@@ -47,23 +56,26 @@ module HTTPX
write_timeout: WRITE_TIMEOUT,
request_timeout: REQUEST_TIMEOUT,
},
:headers_class => Class.new(Headers),
:headers_class => Class.new(Headers, &SET_TEMPORARY_NAME),
:headers => {},
:window_size => WINDOW_SIZE,
:buffer_size => BUFFER_SIZE,
:body_threshold_size => MAX_BODY_THRESHOLD_SIZE,
:request_class => Class.new(Request),
:response_class => Class.new(Response),
:request_body_class => Class.new(Request::Body),
:response_body_class => Class.new(Response::Body),
:connection_class => Class.new(Connection),
:options_class => Class.new(self),
:request_class => Class.new(Request, &SET_TEMPORARY_NAME),
:response_class => Class.new(Response, &SET_TEMPORARY_NAME),
:request_body_class => Class.new(Request::Body, &SET_TEMPORARY_NAME),
:response_body_class => Class.new(Response::Body, &SET_TEMPORARY_NAME),
:pool_class => Class.new(Pool, &SET_TEMPORARY_NAME),
:connection_class => Class.new(Connection, &SET_TEMPORARY_NAME),
:options_class => Class.new(self, &SET_TEMPORARY_NAME),
:transport => nil,
:addresses => nil,
:persistent => false,
:resolver_class => (ENV["HTTPX_RESOLVER"] || :native).to_sym,
:resolver_options => { cache: true },
:resolver_options => { cache: true }.freeze,
:pool_options => EMPTY_HASH,
:ip_families => ip_address_families,
:close_on_fork => false,
}.freeze
class << self
@@ -90,8 +102,9 @@ module HTTPX
#
# :debug :: an object which log messages are written to (must respond to <tt><<</tt>)
# :debug_level :: the log level of messages (can be 1, 2, or 3).
# :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::IO::SSL)
# :http2_settings :: a hash of options to be passed to a HTTP2Next::Connection (ex: <tt>{ max_concurrent_streams: 2 }</tt>)
# :debug_redact :: whether header/body payload should be redacted (defaults to <tt>false</tt>).
# :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::SSL)
# :http2_settings :: a hash of options to be passed to a HTTP2::Connection (ex: <tt>{ max_concurrent_streams: 2 }</tt>)
# :fallback_protocol :: version of HTTP protocol to use by default in the absence of protocol negotiation
# like ALPN (defaults to <tt>"http/1.1"</tt>)
# :supported_compression_formats :: list of compressions supported by the transcoder layer (defaults to <tt>%w[gzip deflate]</tt>).
@@ -110,6 +123,7 @@ module HTTPX
# :request_body_class :: class used to instantiate a request body
# :response_body_class :: class used to instantiate a response body
# :connection_class :: class used to instantiate connections
# :pool_class :: class used to instantiate the session connection pool
# :options_class :: class used to instantiate options
# :transport :: type of transport to use (set to "unix" for UNIX sockets)
# :addresses :: bucket of peer addresses (can be a list of IP addresses, a hash of domain to list of addresses;
@@ -118,31 +132,44 @@ module HTTPX
# :persistent :: whether to persist connections in between requests (defaults to <tt>false</tt>)
# :resolver_class :: which resolver to use (defaults to <tt>:native</tt>, can also be <tt>:system</tt> for
# using getaddrinfo or <tt>:https</tt> for DoH resolver, or a custom class)
# :resolver_options :: hash of options passed to the resolver
# :resolver_options :: hash of options passed to the resolver. Accepted keys depend on the resolver type.
# :pool_options :: hash of options passed to the connection pool (See Pool#initialize).
# :ip_families :: which socket families are supported (system-dependent)
# :origin :: HTTP origin to set on requests with relative path (ex: "https://api.serv.com")
# :base_path :: path to prefix given relative paths with (ex: "/v2")
# :max_concurrent_requests :: max number of requests which can be set concurrently
# :max_requests :: max number of requests which can be made on a socket before it reconnects.
# :params :: hash or array of key-values which will be encoded and set in the query string of request uris.
# :form :: hash or array of key-values which will be form-or-multipart-encoded in the request body payload.
# :json :: hash or array of key-values which will be JSON-encoded in the request body payload.
# :xml :: Nokogiri XML nodes which will be encoded in requests body payload.
# :close_on_fork :: whether the session automatically closes when the process is forked (defaults to <tt>false</tt>).
# it only works if the session is persistent (and ruby 3.1 or higher is used).
#
# This list of options are enhanced with each loaded plugin, see the plugin docs for details.
def initialize(options = {})
do_initialize(options)
defaults = DEFAULT_OPTIONS.merge(options)
defaults.each do |k, v|
next if v.nil?
option_method_name = :"option_#{k}"
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
end
freeze
end
def freeze
super
@origin.freeze
@base_path.freeze
@timeout.freeze
@headers.freeze
@addresses.freeze
@supported_compression_formats.freeze
@ssl.freeze
@http2_settings.freeze
@pool_options.freeze
@resolver_options.freeze
@ip_families.freeze
super
end
def option_origin(value)
@@ -165,41 +192,6 @@ module HTTPX
Array(value).map(&:to_s)
end
def option_max_concurrent_requests(value)
raise TypeError, ":max_concurrent_requests must be positive" unless value.positive?
value
end
def option_max_requests(value)
raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
def option_window_size(value)
value = Integer(value)
raise TypeError, ":window_size must be positive" unless value.positive?
value
end
def option_buffer_size(value)
value = Integer(value)
raise TypeError, ":buffer_size must be positive" unless value.positive?
value
end
def option_body_threshold_size(value)
bytes = Integer(value)
raise TypeError, ":body_threshold_size must be positive" unless bytes.positive?
bytes
end
def option_transport(value)
transport = value.to_s
raise TypeError, "#{transport} is an unsupported transport type" unless %w[unix].include?(transport)
@@ -215,20 +207,47 @@ module HTTPX
Array(value)
end
# number options
%i[
max_concurrent_requests max_requests window_size buffer_size
body_threshold_size debug_level
].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into an Integer before setting the +#{option}+ option.
def option_#{option}(value) # def option_max_requests(v)
value = Integer(value) unless value.infinite?
raise TypeError, ":#{option} must be positive" unless value.positive? # raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
OUT
end
# hashable options
%i[ssl http2_settings resolver_options pool_options].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into a Hash before setting the +#{option}+ option.
def option_#{option}(value) # def option_ssl(v)
Hash[value]
end
OUT
end
%i[
params form json xml body ssl http2_settings
request_class response_class headers_class request_body_class
response_body_class connection_class options_class
io fallback_protocol debug debug_level resolver_class resolver_options
pool_class
io fallback_protocol debug debug_redact resolver_class
compress_request_body decompress_response_body
persistent
persistent close_on_fork
].each do |method_name|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# sets +v+ as the value of the +#{method_name}+ option
def option_#{method_name}(v); v; end # def option_smth(v); v; end
OUT
end
REQUEST_BODY_IVARS = %i[@headers @params @form @xml @json @body].freeze
REQUEST_BODY_IVARS = %i[@headers].freeze
def ==(other)
super || options_equals?(other)
@@ -249,14 +268,6 @@ module HTTPX
end
end
OTHER_LOOKUP = ->(obj, k, ivar_map) {
case obj
when Hash
obj[ivar_map[k]]
else
obj.instance_variable_get(k)
end
}
def merge(other)
ivar_map = nil
other_ivars = case other
@@ -269,12 +280,12 @@ module HTTPX
return self if other_ivars.empty?
return self if other_ivars.all? { |ivar| instance_variable_get(ivar) == OTHER_LOOKUP[other, ivar, ivar_map] }
return self if other_ivars.all? { |ivar| instance_variable_get(ivar) == access_option(other, ivar, ivar_map) }
opts = dup
other_ivars.each do |ivar|
v = OTHER_LOOKUP[other, ivar, ivar_map]
v = access_option(other, ivar, ivar_map)
unless v
opts.instance_variable_set(ivar, v)
@@ -302,31 +313,42 @@ module HTTPX
def extend_with_plugin_classes(pl)
if defined?(pl::RequestMethods) || defined?(pl::RequestClassMethods)
@request_class = @request_class.dup
SET_TEMPORARY_NAME[@request_class, pl]
@request_class.__send__(:include, pl::RequestMethods) if defined?(pl::RequestMethods)
@request_class.extend(pl::RequestClassMethods) if defined?(pl::RequestClassMethods)
end
if defined?(pl::ResponseMethods) || defined?(pl::ResponseClassMethods)
@response_class = @response_class.dup
SET_TEMPORARY_NAME[@response_class, pl]
@response_class.__send__(:include, pl::ResponseMethods) if defined?(pl::ResponseMethods)
@response_class.extend(pl::ResponseClassMethods) if defined?(pl::ResponseClassMethods)
end
if defined?(pl::HeadersMethods) || defined?(pl::HeadersClassMethods)
@headers_class = @headers_class.dup
SET_TEMPORARY_NAME[@headers_class, pl]
@headers_class.__send__(:include, pl::HeadersMethods) if defined?(pl::HeadersMethods)
@headers_class.extend(pl::HeadersClassMethods) if defined?(pl::HeadersClassMethods)
end
if defined?(pl::RequestBodyMethods) || defined?(pl::RequestBodyClassMethods)
@request_body_class = @request_body_class.dup
SET_TEMPORARY_NAME[@request_body_class, pl]
@request_body_class.__send__(:include, pl::RequestBodyMethods) if defined?(pl::RequestBodyMethods)
@request_body_class.extend(pl::RequestBodyClassMethods) if defined?(pl::RequestBodyClassMethods)
end
if defined?(pl::ResponseBodyMethods) || defined?(pl::ResponseBodyClassMethods)
@response_body_class = @response_body_class.dup
SET_TEMPORARY_NAME[@response_body_class, pl]
@response_body_class.__send__(:include, pl::ResponseBodyMethods) if defined?(pl::ResponseBodyMethods)
@response_body_class.extend(pl::ResponseBodyClassMethods) if defined?(pl::ResponseBodyClassMethods)
end
if defined?(pl::PoolMethods)
@pool_class = @pool_class.dup
SET_TEMPORARY_NAME[@pool_class, pl]
@pool_class.__send__(:include, pl::PoolMethods)
end
if defined?(pl::ConnectionMethods)
@connection_class = @connection_class.dup
SET_TEMPORARY_NAME[@connection_class, pl]
@connection_class.__send__(:include, pl::ConnectionMethods)
end
return unless defined?(pl::OptionsMethods)
@@ -337,16 +359,12 @@ module HTTPX
private
def do_initialize(options = {})
defaults = DEFAULT_OPTIONS.merge(options)
defaults.each do |k, v|
next if v.nil?
option_method_name = :"option_#{k}"
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
def access_option(obj, k, ivar_map)
case obj
when Hash
obj[ivar_map[k]]
else
obj.instance_variable_get(k)
end
end
end
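
The consolidated numeric-option validation above (one `class_eval` loop replacing the per-option methods it removes) can be reproduced standalone. `MiniOptions` is a hypothetical stand-in for `HTTPX::Options`, and numeric inputs are assumed:

```ruby
# Sketch of the diff's metaprogrammed numeric option validation.
class MiniOptions
  %i[max_requests window_size buffer_size].each do |option|
    class_eval(<<-OUT, __FILE__, __LINE__ + 1)
      # coerces +value+ to Integer (infinity passes through) and requires it to be positive
      def option_#{option}(value)
        value = Integer(value) unless value.infinite?
        raise TypeError, ":#{option} must be positive" unless value.positive?
        value
      end
    OUT
  end
end
```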

View File

@@ -23,7 +23,7 @@ module HTTPX
def reset!
@state = :idle
@headers.clear
@headers = {}
@content_length = nil
@_has_trailers = nil
end

View File

@@ -72,6 +72,9 @@ module HTTPX
end
end
# adds support for the following options:
#
# :aws_profile :: AWS account profile to retrieve credentials from.
module OptionsMethods
def option_aws_profile(value)
String(value)

View File

@@ -12,6 +12,7 @@ module HTTPX
module AWSSigV4
Credentials = Struct.new(:username, :password, :security_token)
# Signs requests using AWS SigV4 signing.
class Signer
def initialize(
service:,
@@ -88,7 +89,7 @@ module HTTPX
sts = "#{algo_line}" \
"\n#{datetime}" \
"\n#{credential_scope}" \
"\n#{hexdigest(creq)}"
"\n#{OpenSSL::Digest.new(@algorithm).hexdigest(creq)}"
# signature
k_date = hmac("#{upper_provider_prefix}#{@credentials.password}", date)
@@ -109,22 +110,38 @@ module HTTPX
private
def hexdigest(value)
if value.respond_to?(:to_path)
# files, pathnames
OpenSSL::Digest.new(@algorithm).file(value.to_path).hexdigest
elsif value.respond_to?(:each)
digest = OpenSSL::Digest.new(@algorithm)
digest = OpenSSL::Digest.new(@algorithm)
mb_buffer = value.each.with_object("".b) do |chunk, buffer|
buffer << chunk
break if buffer.bytesize >= 1024 * 1024
if value.respond_to?(:read)
if value.respond_to?(:to_path)
# files, pathnames
digest.file(value.to_path).hexdigest
else
# gzipped request bodies
raise Error, "request body must be rewindable" unless value.respond_to?(:rewind)
buffer = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
begin
IO.copy_stream(value, buffer)
buffer.flush
digest.file(buffer.to_path).hexdigest
ensure
value.rewind
buffer.close
buffer.unlink
end
end
else
# error on endless generators
raise Error, "hexdigest for endless enumerators is not supported" if value.unbounded_body?
mb_buffer = value.each.with_object("".b) do |chunk, b|
b << chunk
break if b.bytesize >= 1024 * 1024
end
digest.update(mb_buffer)
value.rewind
digest.hexdigest
else
OpenSSL::Digest.new(@algorithm).hexdigest(value)
digest.hexdigest(mb_buffer)
end
end
@@ -141,7 +158,7 @@ module HTTPX
def load_dependencies(*)
require "set"
require "digest/sha2"
require "openssl"
require "cgi/escape"
end
def configure(klass)
@@ -149,6 +166,9 @@ module HTTPX
end
end
# adds support for the following options:
#
# :sigv4_signer :: instance of HTTPX::Plugins::AWSSigV4 used to sign requests.
module OptionsMethods
def option_sigv4_signer(value)
value.is_a?(Signer) ? value : Signer.new(value)
@@ -160,7 +180,7 @@ module HTTPX
with(sigv4_signer: Signer.new(**options))
end
def build_request(*, _)
def build_request(*)
request = super
return request if request.headers.key?("authorization")
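
The reworked `hexdigest` (hash file-backed bodies via `Digest#file`, spill other rewindable IOs to a `Tempfile` first, as the diff does for gzipped request bodies) can be reproduced outside the plugin. The `"sha256"` default and the error class are assumptions for this sketch:

```ruby
require "openssl"
require "tempfile"

# Sketch of the diff's streaming hexdigest: path-backed bodies are hashed
# directly, other readable bodies are copied to a Tempfile and hashed from
# disk, then rewound; plain strings are hashed in memory.
def hexdigest(value, algorithm = "sha256")
  digest = OpenSSL::Digest.new(algorithm)
  if value.respond_to?(:read)
    if value.respond_to?(:to_path)
      # files, pathnames
      digest.file(value.to_path).hexdigest
    else
      raise ArgumentError, "body must be rewindable" unless value.respond_to?(:rewind)

      buffer = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
      begin
        IO.copy_stream(value, buffer)
        buffer.flush
        digest.file(buffer.to_path).hexdigest
      ensure
        value.rewind
        buffer.close
        buffer.unlink
      end
    end
  else
    digest.hexdigest(value)
  end
end
```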

View File

@@ -8,6 +8,13 @@ module HTTPX
# https://gitlab.com/os85/httpx/-/wikis/Events
#
module Callbacks
CALLBACKS = %i[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].freeze
# connection closed user-space errors happen after errors can be surfaced to requests,
# so they need to pierce through the scheduler, which is only possible by simulating an
# interrupt.
@@ -16,27 +23,38 @@ module HTTPX
module InstanceMethods
include HTTPX::Callbacks
%i[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].each do |meth|
CALLBACKS.each do |meth|
class_eval(<<-MOD, __FILE__, __LINE__ + 1)
def on_#{meth}(&blk) # def on_connection_opened(&blk)
on(:#{meth}, &blk) # on(:connection_opened, &blk)
self # self
end # end
MOD
end
private
def init_connection(uri, options)
connection = super
def branch(options, &blk)
super(options).tap do |sess|
CALLBACKS.each do |cb|
next unless callbacks_for?(cb)
sess.callbacks(cb).concat(callbacks(cb))
end
sess.wrap(&blk) if blk
end
end
def do_init_connection(connection, selector)
super
connection.on(:open) do
next unless connection.current_session == self
emit_or_callback_error(:connection_opened, connection.origin, connection.io.socket)
end
connection.on(:close) do
next unless connection.current_session == self
emit_or_callback_error(:connection_closed, connection.origin) if connection.used?
end
@@ -84,6 +102,12 @@ module HTTPX
rescue CallbackError => e
raise e.cause
end
def close(*)
super
rescue CallbackError => e
raise e.cause
end
end
end
register_plugin :callbacks, Callbacks
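
The `CALLBACKS` metaprogramming above (each event gets an `on_<event>` method that registers a block and returns `self` for chaining) follows a pattern that can be shown in isolation. `MiniCallbacks` is a hypothetical reduction, not the plugin itself:

```ruby
# Sketch of the callbacks metaprogramming: a class_eval loop defines one
# on_<event> registration method per event name.
module MiniCallbacks
  EVENTS = %i[connection_opened connection_closed].freeze

  def callbacks(event)
    (@callbacks ||= Hash.new { |h, k| h[k] = [] })[event]
  end

  def emit(event, *args)
    callbacks(event).each { |cb| cb.call(*args) }
  end

  EVENTS.each do |meth|
    class_eval(<<-MOD, __FILE__, __LINE__ + 1)
      def on_#{meth}(&blk) # def on_connection_opened(&blk)
        callbacks(:#{meth}) << blk
        self # returned so registrations can be chained
      end
    MOD
  end
end
```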

View File

@@ -32,15 +32,11 @@ module HTTPX
@circuit_store = CircuitStore.new(@options)
end
def initialize_dup(orig)
super
@circuit_store = orig.instance_variable_get(:@circuit_store).dup
end
%i[circuit_open].each do |meth|
class_eval(<<-MOD, __FILE__, __LINE__ + 1)
def on_#{meth}(&blk) # def on_circuit_open(&blk)
on(:#{meth}, &blk) # on(:circuit_open, &blk)
self # self
end # end
MOD
end
@@ -74,10 +70,11 @@ module HTTPX
short_circuit_responses
end
def on_response(request, response)
emit(:circuit_open, request) if try_circuit_open(request, response)
def set_request_callbacks(request)
super
request.on(:response) do |response|
emit(:circuit_open, request) if try_circuit_open(request, response)
end
end
def try_circuit_open(request, response)
@@ -97,6 +94,16 @@ module HTTPX
end
end
# adds support for the following options:
#
# :circuit_breaker_max_attempts :: the number of attempts the circuit allows, before it is opened (defaults to <tt>3</tt>).
# :circuit_breaker_reset_attempts_in :: the time a circuit stays open at most, before it resets (defaults to <tt>60</tt>).
# :circuit_breaker_break_on :: callable defining an alternative rule for a response to break
# (i.e. <tt>->(res) { res.status == 429 } </tt>)
# :circuit_breaker_break_in :: the time that must elapse before an open circuit can transit to the half-open state
# (defaults to <tt>60</tt>).
# :circuit_breaker_half_open_drip_rate :: the rate of requests a circuit allows to be performed when in an half-open state
# (defaults to <tt>1</tt>).
module OptionsMethods
def option_circuit_breaker_max_attempts(value)
attempts = Integer(value)
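
The state machine the option docs above describe (closed, then open after `max_attempts` failures, then half-open once `break_in` elapses) can be sketched minimally. `MiniCircuit` is illustrative only; timings and names are not the plugin's implementation:

```ruby
# Minimal sketch of the documented circuit states.
class MiniCircuit
  attr_reader :state

  def initialize(max_attempts:, break_in:)
    @max_attempts = max_attempts
    @break_in = break_in
    @attempts = 0
    @state = :closed
    @opened_at = nil
  end

  # record a failed response; the circuit opens at max_attempts failures
  def failure!(now = Time.now)
    @attempts += 1
    return unless @attempts >= @max_attempts

    @state = :open
    @opened_at = now
  end

  # an open circuit transits to half-open once break_in seconds elapse
  def allow?(now = Time.now)
    @state = :half_open if @state == :open && (now - @opened_at) >= @break_in
    @state != :open
  end
end
```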

View File

@@ -0,0 +1,202 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds `Content-Digest` headers to requests
# and can validate these headers on responses
#
# https://datatracker.ietf.org/doc/html/rfc9530
#
module ContentDigest
class Error < HTTPX::Error; end
# Error raised on response "content-digest" header validation.
class ValidationError < Error
attr_reader :response
def initialize(message, response)
super(message)
@response = response
end
end
class MissingContentDigestError < ValidationError; end
class InvalidContentDigestError < ValidationError; end
SUPPORTED_ALGORITHMS = {
"sha-256" => OpenSSL::Digest::SHA256,
"sha-512" => OpenSSL::Digest::SHA512,
}.freeze
class << self
def extra_options(options)
options.merge(encode_content_digest: true, validate_content_digest: false, content_digest_algorithm: "sha-256")
end
end
# adds support for the following options:
#
# :content_digest_algorithm :: the digest algorithm to use. Currently supports `sha-256` and `sha-512`. (defaults to `sha-256`)
# :encode_content_digest :: whether a <tt>Content-Digest</tt> header should be computed for the request;
# can also be a callable object (i.e. <tt>->(req) { ... }</tt>, defaults to <tt>true</tt>)
# :validate_content_digest :: whether a <tt>Content-Digest</tt> header in the response should be validated;
# can also be a callable object (i.e. <tt>->(res) { ... }</tt>, defaults to <tt>false</tt>)
module OptionsMethods
def option_content_digest_algorithm(value)
raise TypeError, ":content_digest_algorithm must be one of 'sha-256', 'sha-512'" unless SUPPORTED_ALGORITHMS.key?(value)
value
end
def option_encode_content_digest(value)
value
end
def option_validate_content_digest(value)
value
end
end
module ResponseBodyMethods
attr_reader :content_digest_buffer
def initialize(response, options)
super
return unless response.headers.key?("content-digest")
should_validate = options.validate_content_digest
should_validate = should_validate.call(response) if should_validate.respond_to?(:call)
return unless should_validate
@content_digest_buffer = Response::Buffer.new(
threshold_size: @options.body_threshold_size,
bytesize: @length,
encoding: @encoding
)
end
def write(chunk)
@content_digest_buffer.write(chunk) if @content_digest_buffer
super
end
def close
if @content_digest_buffer
@content_digest_buffer.close
@content_digest_buffer = nil
end
super
end
end
module InstanceMethods
def build_request(*)
request = super
return request if request.empty?
return request if request.headers.key?("content-digest")
perform_encoding = @options.encode_content_digest
perform_encoding = perform_encoding.call(request) if perform_encoding.respond_to?(:call)
return request unless perform_encoding
digest = base64digest(request.body)
request.headers.add("content-digest", "#{@options.content_digest_algorithm}=:#{digest}:")
request
end
private
def fetch_response(request, _, _)
response = super
return response unless response.is_a?(Response)
perform_validation = @options.validate_content_digest
perform_validation = perform_validation.call(response) if perform_validation.respond_to?(:call)
validate_content_digest(response) if perform_validation
response
rescue ValidationError => e
ErrorResponse.new(request, e)
end
def validate_content_digest(response)
content_digest_header = response.headers["content-digest"]
raise MissingContentDigestError.new("response is missing a `content-digest` header", response) unless content_digest_header
digests = extract_content_digests(content_digest_header)
included_algorithms = SUPPORTED_ALGORITHMS.keys & digests.keys
raise MissingContentDigestError.new("unsupported algorithms: #{digests.keys.join(", ")}", response) if included_algorithms.empty?
content_buffer = response.body.content_digest_buffer
included_algorithms.each do |algorithm|
digest = SUPPORTED_ALGORITHMS.fetch(algorithm).new
digest_received = digests[algorithm]
digest_computed =
if content_buffer.respond_to?(:to_path)
content_buffer.flush
digest.file(content_buffer.to_path).base64digest
else
digest.base64digest(content_buffer.to_s)
end
raise InvalidContentDigestError.new("#{algorithm} digest does not match content",
response) unless digest_received == digest_computed
end
end
def extract_content_digests(header)
header.split(",").to_h do |entry|
algorithm, digest = entry.split("=", 2)
raise Error, "#{entry} is an invalid digest format" unless algorithm && digest
[algorithm, digest.byteslice(1..-2)]
end
end
def base64digest(body)
digest = SUPPORTED_ALGORITHMS.fetch(@options.content_digest_algorithm).new
if body.respond_to?(:read)
if body.respond_to?(:to_path)
digest.file(body.to_path).base64digest
else
raise ContentDigestError, "request body must be rewindable" unless body.respond_to?(:rewind)
buffer = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
begin
IO.copy_stream(body, buffer)
buffer.flush
digest.file(buffer.to_path).base64digest
ensure
body.rewind
buffer.close
buffer.unlink
end
end
else
raise ContentDigestError, "base64digest for endless enumerators is not supported" if body.unbounded_body?
buffer = "".b
body.each { |chunk| buffer << chunk }
digest.base64digest(buffer)
end
end
end
end
register_plugin :content_digest, ContentDigest
end
end
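
The header format the new plugin emits and validates (RFC 9530: `<algo>=:<base64 digest>:`) can be sketched without the plugin; the helper names here are illustrative:

```ruby
require "openssl"

SUPPORTED = {
  "sha-256" => OpenSSL::Digest::SHA256,
  "sha-512" => OpenSSL::Digest::SHA512,
}.freeze

# builds an RFC 9530 Content-Digest header value for +body+
def encode_content_digest(body, algorithm = "sha-256")
  "#{algorithm}=:#{SUPPORTED.fetch(algorithm).base64digest(body)}:"
end

# recomputes the digest over +body+ and compares it to the header value
def valid_content_digest?(header, body)
  algorithm, digest = header.split("=", 2)
  received = digest.byteslice(1..-2) # strip the surrounding ':'
  SUPPORTED.key?(algorithm) && SUPPORTED.fetch(algorithm).base64digest(body) == received
end
```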

View File

@@ -40,23 +40,23 @@ module HTTPX
end
end
def build_request(*)
request = super
request.headers.set_cookie(request.options.cookies[request.uri])
request
end
private
def on_response(_request, response)
if response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])
def set_request_callbacks(request)
super
request.on(:response) do |response|
next unless response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])
log { "cookies: set-cookie is over #{Cookie::MAX_LENGTH}" } if set_cookie.bytesize > Cookie::MAX_LENGTH
@options.cookies.parse(set_cookie)
end
super
end
def build_request(*, _)
request = super
request.headers.set_cookie(request.options.cookies[request.uri])
request
end
end
@@ -70,6 +70,9 @@ module HTTPX
end
end
# adds support for the following options:
#
# :cookies :: cookie jar for the session (can be a Hash, an Array, an instance of HTTPX::Plugins::Cookies::CookieJar)
module OptionsMethods
def option_headers(*)
value = super

View File

@@ -59,8 +59,6 @@ module HTTPX
return @cookies.each(&blk) unless uri
uri = URI(uri)
now = Time.now
tpath = uri.path

View File

@@ -83,7 +83,7 @@ module HTTPX
scanner.skip(RE_WSP)
name, value = scan_name_value(scanner, true)
value = nil if name.empty?
value = nil if name && name.empty?
attrs = {}
@@ -98,15 +98,18 @@ module HTTPX
aname, avalue = scan_name_value(scanner, true)
next if aname.empty? || value.nil?
next if (aname.nil? || aname.empty?) || value.nil?
aname.downcase!
case aname
when "expires"
next unless avalue
# RFC 6265 5.2.1
(avalue &&= Time.parse(avalue)) || next
(avalue = Time.parse(avalue)) || next
when "max-age"
next unless avalue
# RFC 6265 5.2.2
next unless /\A-?\d+\z/.match?(avalue)
@@ -119,7 +122,7 @@ module HTTPX
# RFC 6265 5.2.4
# A relative path must be ignored rather than normalizing it
# to "/".
next unless avalue.start_with?("/")
next unless avalue && avalue.start_with?("/")
when "secure", "httponly"
# RFC 6265 5.2.5, 5.2.6
avalue = true

View File

@@ -20,6 +20,9 @@ module HTTPX
end
end
# adds support for the following options:
#
# :digest :: instance of HTTPX::Plugins::Authentication::Digest, used to authenticate requests in the session.
module OptionsMethods
def option_digest(value)
raise TypeError, ":digest must be a #{Authentication::Digest}" unless value.is_a?(Authentication::Digest)

View File

@@ -20,6 +20,11 @@ module HTTPX
end
end
# adds support for the following options:
#
# :expect_timeout :: time (in seconds) to wait for a 100-expect response,
# before retrying without the Expect header (defaults to <tt>2</tt>).
# :expect_threshold_size :: min threshold (in bytes) of the request payload to enable the 100-continue negotiation on.
module OptionsMethods
def option_expect_timeout(value)
seconds = Float(value)
@@ -79,7 +84,7 @@ module HTTPX
return if expect_timeout.nil? || expect_timeout.infinite?
set_request_timeout(request, expect_timeout, :expect, %i[body response]) do
set_request_timeout(:expect_timeout, request, expect_timeout, :expect, %i[body response]) do
# expect timeout expired
if request.state == :expect && !request.expects?
Expect.no_expect_store << request.origin
@@ -91,15 +96,16 @@ module HTTPX
end
module InstanceMethods
def fetch_response(request, connections, options)
response = @responses.delete(request)
def fetch_response(request, selector, options)
response = super
return unless response
if response.is_a?(Response) && response.status == 417 && request.headers.key?("expect")
response.close
request.headers.delete("expect")
request.transition(:idle)
send_request(request, connections, options)
send_request(request, selector, options)
return
end

View File

@@ -4,12 +4,17 @@ module HTTPX
InsecureRedirectError = Class.new(Error)
module Plugins
#
# This plugin adds support for following redirect (status 30X) responses.
# This plugin adds support for automatically following redirect (status 30X) responses.
#
# It has an upper bound of followed redirects (see *MAX_REDIRECTS*), after which it
# will return the last redirect response. It will **not** raise an exception.
# It has a default upper bound of followed redirects (see *MAX_REDIRECTS* and the *max_redirects* option),
# after which it will return the last redirect response. It will **not** raise an exception.
#
# It also doesn't follow insecure redirects (https -> http) by default (see *follow_insecure_redirects*).
# It doesn't follow insecure redirects (https -> http) by default (see *follow_insecure_redirects*).
#
# It doesn't propagate authorization related headers to requests redirecting to different origins
# (see *allow_auth_to_other_origins* to override).
#
# It allows customization of when to redirect via the *redirect_on* callback option.
#
# https://gitlab.com/os85/httpx/wikis/Follow-Redirects
#
@@ -20,6 +25,14 @@ module HTTPX
using URIExtensions
# adds support for the following options:
#
# :max_redirects :: max number of times a request will be redirected (defaults to <tt>3</tt>).
# :follow_insecure_redirects :: whether redirects to an "http://" URI, when coming from an "https://", are allowed
# (defaults to <tt>false</tt>).
# :allow_auth_to_other_origins :: whether auth-related headers, such as "Authorization", are propagated on redirection
# (defaults to <tt>false</tt>).
# :redirect_on :: optional callback which receives the redirect location and can halt the redirect chain if it returns <tt>false</tt>.
module OptionsMethods
def option_max_redirects(value)
num = Integer(value)
@@ -44,15 +57,16 @@ module HTTPX
end
module InstanceMethods
# returns a session with the *max_redirects* option set to +n+
def max_redirects(n)
with(max_redirects: n.to_i)
end
private
def fetch_response(request, connections, options)
def fetch_response(request, selector, options)
redirect_request = request.redirect_request
response = super(redirect_request, connections, options)
response = super(redirect_request, selector, options)
return unless response
max_redirects = redirect_request.max_redirects
@@ -71,40 +85,40 @@ module HTTPX
# build redirect request
request_body = redirect_request.body
redirect_method = "GET"
redirect_params = {}
if response.status == 305 && options.respond_to?(:proxy)
request_body.rewind
# The requested resource MUST be accessed through the proxy given by
# the Location field. The Location field gives the URI of the proxy.
retry_options = options.merge(headers: redirect_request.headers,
proxy: { uri: redirect_uri },
body: request_body,
max_redirects: max_redirects - 1)
redirect_options = options.merge(headers: redirect_request.headers,
proxy: { uri: redirect_uri },
max_redirects: max_redirects - 1)
redirect_params[:body] = request_body
redirect_uri = redirect_request.uri
options = retry_options
options = redirect_options
else
redirect_headers = redirect_request_headers(redirect_request.uri, redirect_uri, request.headers, options)
retry_opts = Hash[options].merge(max_redirects: max_redirects - 1)
redirect_opts = Hash[options]
redirect_params[:max_redirects] = max_redirects - 1
unless request_body.empty?
if response.status == 307
# The method and the body of the original request are reused to perform the redirected request.
redirect_method = redirect_request.verb
request_body.rewind
retry_opts[:body] = request_body
redirect_params[:body] = request_body
else
# redirects are **ALWAYS** GET, so remove body-related headers
REQUEST_BODY_HEADERS.each do |h|
redirect_headers.delete(h)
end
retry_opts.delete(:body)
redirect_params[:body] = nil
end
end
retry_opts[:headers] = redirect_headers.to_h
retry_options = options.class.new(retry_opts)
options = options.class.new(redirect_opts.merge(headers: redirect_headers.to_h))
end
redirect_uri = Utils.to_uri(redirect_uri)
@@ -114,34 +128,44 @@ module HTTPX
redirect_uri.scheme == "http"
error = InsecureRedirectError.new(redirect_uri.to_s)
error.set_backtrace(caller)
return ErrorResponse.new(request, error, options)
return ErrorResponse.new(request, error)
end
retry_request = build_request(redirect_method, redirect_uri, retry_options)
retry_request = build_request(redirect_method, redirect_uri, redirect_params, options)
request.redirect_request = retry_request
retry_after = response.headers["retry-after"]
redirect_after = response.headers["retry-after"]
if retry_after
if redirect_after
# Servers send the "Retry-After" header field to indicate how long the
# user agent ought to wait before making a follow-up request.
# When sent with any 3xx (Redirection) response, Retry-After indicates
# the minimum time that the user agent is asked to wait before issuing
# the redirected request.
#
retry_after = Utils.parse_retry_after(retry_after)
redirect_after = Utils.parse_retry_after(redirect_after)
log { "redirecting after #{retry_after} secs..." }
pool.after(retry_after) do
send_request(retry_request, connections, options)
retry_start = Utils.now
log { "redirecting after #{redirect_after} secs..." }
selector.after(redirect_after) do
if (response = request.response)
response.finish!
retry_request.response = response
# request has terminated abruptly meanwhile
retry_request.emit(:response, response)
else
log { "redirecting (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
send_request(retry_request, selector, options)
end
end
else
send_request(retry_request, connections, options)
send_request(retry_request, selector, options)
end
nil
end
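The branch above honors the Retry-After response header before re-issuing the redirected request. Per RFC 7231 the header value is either delta-seconds or an HTTP-date; the sketch below normalizes such a value to a wait in seconds. It is a stdlib-only, illustrative stand-in for `Utils.parse_retry_after`, whose exact semantics are assumed here, not copied:

```ruby
require "time"

# Normalize a Retry-After value to seconds to wait.
# RFC 7231 allows delta-seconds ("120") or an HTTP-date;
# past dates clamp to 0.
def parse_retry_after(value, now: Time.now)
  if value.match?(/\A\d+\z/)
    Integer(value)
  else
    [Time.httpdate(value) - now, 0].max
  end
end

parse_retry_after("120") # => 120
```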
# :nodoc:
def redirect_request_headers(original_uri, redirect_uri, headers, options)
headers = headers.dup
@@ -149,14 +173,14 @@ module HTTPX
return headers unless headers.key?("authorization")
unless original_uri.origin == redirect_uri.origin
headers = headers.dup
headers.delete("authorization")
end
return headers if original_uri.origin == redirect_uri.origin
headers.delete("authorization")
headers
end
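`redirect_request_headers` above keeps the "authorization" header only for same-origin redirects (unless :allow_auth_to_other_origins is set). The same rule, sketched standalone with stdlib URI and plain hashes standing in for `HTTPX::Headers`:

```ruby
require "uri"

# Treat the scheme/host/port triple as the origin
# (stdlib URI has no #origin; httpx adds one via a refinement).
def same_origin?(a, b)
  a, b = URI(a), URI(b)
  [a.scheme, a.host, a.port] == [b.scheme, b.host, b.port]
end

# Drop credentials when the redirect leaves the original origin,
# mirroring the allow_auth_to_other_origins: false default.
def redirect_headers(original_uri, redirect_uri, headers)
  headers = headers.dup
  headers.delete("authorization") unless same_origin?(original_uri, redirect_uri)
  headers
end

h = { "authorization" => "Bearer t0ken", "accept" => "*/*" }
redirect_headers("https://api.example.com/a", "https://api.example.com/b", h)
# => keeps "authorization" (same origin)
redirect_headers("https://api.example.com/a", "https://cdn.example.net/b", h)
# => "authorization" removed (cross-origin)
```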
# :nodoc:
def __get_location_from_response(response)
# @type var location_uri: http_uri
location_uri = URI(response.headers["location"])
@@ -166,12 +190,15 @@
end
module RequestMethods
# returns the top-most original HTTPX::Request from the redirect chain
attr_accessor :root_request
# returns the follow-up redirect request, or itself
def redirect_request
@redirect_request || self
end
# sets the follow-up redirect request
def redirect_request=(req)
@redirect_request = req
req.root_request = @root_request || self
@@ -179,7 +206,7 @@
end
def response
return super unless @redirect_request
return super unless @redirect_request && @response.nil?
@redirect_request.response
end
@@ -188,6 +215,16 @@
@options.max_redirects || MAX_REDIRECTS
end
end
module ConnectionMethods
private
def set_request_request_timeout(request)
return unless request.root_request.nil?
super
end
end
end
register_plugin :follow_redirects, FollowRedirects
end


@@ -110,10 +110,10 @@ module HTTPX
end
module RequestBodyMethods
def initialize(headers, _)
def initialize(*, **)
super
if (compression = headers["grpc-encoding"])
if (compression = @headers["grpc-encoding"])
deflater_body = self.class.initialize_deflater_body(@body, compression)
@body = Transcoder::GRPCEncoding.encode(deflater_body || @body, compressed: !deflater_body.nil?)
else


@@ -15,7 +15,7 @@ module HTTPX
end
def inspect
"#GRPC::Call(#{grpc_response})"
"#{self.class}(#{grpc_response})"
end
def to_s


@@ -29,6 +29,8 @@ module HTTPX
buf = outbuf if outbuf
buf = buf.b if buf.frozen?
buf.prepend([compressed_flag, buf.bytesize].pack("CL>"))
buf
end


@@ -25,26 +25,6 @@ module HTTPX
end
end
module InstanceMethods
def send_requests(*requests)
upgrade_request, *remainder = requests
return super unless VALID_H2C_VERBS.include?(upgrade_request.verb) && upgrade_request.scheme == "http"
connection = pool.find_connection(upgrade_request.uri, upgrade_request.options)
return super if connection && connection.upgrade_protocol == "h2c"
# build upgrade request
upgrade_request.headers.add("connection", "upgrade")
upgrade_request.headers.add("connection", "http2-settings")
upgrade_request.headers["upgrade"] = "h2c"
upgrade_request.headers["http2-settings"] = HTTP2Next::Client.settings_header(upgrade_request.options.http2_settings)
super(upgrade_request, *remainder)
end
end
class H2CParser < Connection::HTTP2
def upgrade(request, response)
# skip checks, it is assumed that this is the first
@@ -62,9 +42,38 @@
end
end
module RequestMethods
def valid_h2c_verb?
VALID_H2C_VERBS.include?(@verb)
end
end
module ConnectionMethods
using URIExtensions
def initialize(*)
super
@h2c_handshake = false
end
def send(request)
return super if @h2c_handshake
return super unless request.valid_h2c_verb? && request.scheme == "http"
return super if @upgrade_protocol == "h2c"
@h2c_handshake = true
# build upgrade request
request.headers.add("connection", "upgrade")
request.headers.add("connection", "http2-settings")
request.headers["upgrade"] = "h2c"
request.headers["http2-settings"] = ::HTTP2::Client.settings_header(request.options.http2_settings)
super
end
def upgrade_to_h2c(request, response)
prev_parser = @parser


@@ -13,6 +13,12 @@ module HTTPX
# by the end user in $http_init_time, different diff metrics can be shown. The "point of time" is calculated
# using the monotonic clock.
module InternalTelemetry
DEBUG_LEVEL = 3
def self.extra_options(options)
options.merge(debug_level: 3)
end
module TrackTimeMethods
private
@@ -28,16 +34,19 @@ module HTTPX
after_time = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
# $http_init_time = after_time
elapsed = after_time - prev_time
warn(+"\e[31m" << "[ELAPSED TIME]: #{label}: #{elapsed} (ms)" << "\e[0m")
end
end
# klass = self.class
module NativeResolverMethods
def transition(nextstate)
state = @state
val = super
meter_elapsed_time("Resolver::Native: #{state} -> #{nextstate}")
val
# until (class_name = klass.name)
# klass = klass.superclass
# end
log(
level: DEBUG_LEVEL,
color: :red,
debug_level: @options ? @options.debug_level : DEBUG_LEVEL,
debug: nil
) do
"[ELAPSED TIME]: #{label}: #{elapsed} (ms)" << "\e[0m"
end
end
end
@@ -51,13 +60,6 @@ module HTTPX
meter_elapsed_time("Session: initializing...")
super
meter_elapsed_time("Session: initialized!!!")
resolver_type = @options.resolver_class
resolver_type = Resolver.resolver_for(resolver_type)
return unless resolver_type <= Resolver::Native
resolver_type.prepend TrackTimeMethods
resolver_type.prepend NativeResolverMethods
@options = @options.merge(resolver_class: resolver_type)
end
def close(*)
@@ -76,31 +78,27 @@ module HTTPX
meter_elapsed_time("Session -> response") if response
response
end
def coalesce_connections(conn1, conn2, selector, *)
result = super
meter_elapsed_time("Connection##{conn2.object_id} coalescing to Connection##{conn1.object_id}") if result
result
end
end
module RequestMethods
module PoolMethods
def self.included(klass)
klass.prepend Loggable
klass.prepend TrackTimeMethods
super
end
def transition(nextstate)
prev_state = @state
super
meter_elapsed_time("Request##{object_id}[#{@verb} #{@uri}: #{prev_state}] -> #{@state}") if prev_state != @state
end
end
module ConnectionMethods
def self.included(klass)
klass.prepend TrackTimeMethods
super
end
def handle_transition(nextstate)
state = @state
super
meter_elapsed_time("Connection##{object_id}[#{@origin}]: #{state} -> #{nextstate}") if nextstate == @state
def checkin_connection(connection)
super.tap do
meter_elapsed_time("Pool##{object_id}: checked in connection for Connection##{connection.object_id}[#{connection.origin}]}")
end
end
end
end


@@ -155,7 +155,7 @@ module HTTPX
with(oauth_session: oauth_session.merge(access_token: access_token, refresh_token: refresh_token))
end
def build_request(*, _)
def build_request(*)
request = super
return request if request.headers.key?("authorization")


@@ -24,12 +24,49 @@ module HTTPX
else
1
end
klass.plugin(:retries, max_retries: max_retries, retry_change_requests: true)
klass.plugin(:retries, max_retries: max_retries)
end
def self.extra_options(options)
options.merge(persistent: true)
end
module InstanceMethods
private
def repeatable_request?(request, _)
super || begin
response = request.response
return false unless response && response.is_a?(ErrorResponse)
error = response.error
Retries::RECONNECTABLE_ERRORS.any? { |klass| error.is_a?(klass) }
end
end
def retryable_error?(ex)
super &&
# under the persistent plugin rules, requests are only retried for connection related errors,
# which do not include request timeout related errors. This only gets overridden if the end user
# manually changed +:max_retries+ to something else, which means it is aware of the
# consequences.
(!ex.is_a?(RequestTimeoutError) || @options.max_retries != 1)
end
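The rule described in the comment above can be isolated into a small predicate. The error classes below are hypothetical stand-ins for httpx's own hierarchy; what matters is the shape of the check: connection-level errors always retry, while request timeouts only retry when the user explicitly raised :max_retries beyond the plugin's default of 1:

```ruby
# Hypothetical stand-ins for httpx's error hierarchy.
class ConnectionError < StandardError; end
class RequestTimeoutError < StandardError; end

RECONNECTABLE_ERRORS = [ConnectionError, IOError, Errno::ECONNRESET].freeze

# Connection-level errors are always retriable under the persistent
# plugin; request timeouts only retry when the user explicitly set
# :max_retries to something other than the plugin default of 1.
def retryable_error?(error, max_retries:)
  return true if RECONNECTABLE_ERRORS.any? { |klass| error.is_a?(klass) }

  error.is_a?(RequestTimeoutError) && max_retries != 1
end
```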
def get_current_selector
super(&nil) || begin
return unless block_given?
default = yield
set_current_selector(default)
default
end
end
end
end
register_plugin :persistent, Persistent
end


@@ -1,7 +1,7 @@
# frozen_string_literal: true
module HTTPX
class HTTPProxyError < ConnectionError; end
class ProxyError < ConnectionError; end
module Plugins
#
@@ -15,7 +15,8 @@ module HTTPX
# https://gitlab.com/os85/httpx/wikis/Proxy
#
module Proxy
Error = HTTPProxyError
class ProxyConnectionError < ProxyError; end
PROXY_ERRORS = [TimeoutError, IOError, SystemCallError, Error].freeze
class << self
@@ -28,34 +29,62 @@ module HTTPX
def extra_options(options)
options.merge(supported_proxy_protocols: [])
end
def subplugins
{
retries: ProxyRetries,
}
end
end
class Parameters
attr_reader :uri, :username, :password, :scheme
attr_reader :uri, :username, :password, :scheme, :no_proxy
def initialize(uri:, scheme: nil, username: nil, password: nil, **extra)
@uri = uri.is_a?(URI::Generic) ? uri : URI(uri)
@username = username || @uri.user
@password = password || @uri.password
def initialize(uri: nil, scheme: nil, username: nil, password: nil, no_proxy: nil, **extra)
@no_proxy = Array(no_proxy) if no_proxy
@uris = Array(uri)
uri = @uris.first
return unless @username && @password
@username = username
@password = password
scheme ||= case @uri.scheme
when "socks5"
@uri.scheme
when "http", "https"
"basic"
else
return
@ns = 0
if uri
@uri = uri.is_a?(URI::Generic) ? uri : URI(uri)
@username ||= @uri.user
@password ||= @uri.password
end
@scheme = scheme
auth_scheme = scheme.to_s.capitalize
return unless @uri && @username && @password
require_relative "auth/#{scheme}" unless defined?(Authentication) && Authentication.const_defined?(auth_scheme, false)
@authenticator = nil
@scheme ||= infer_default_auth_scheme(@uri)
@authenticator = Authentication.const_get(auth_scheme).new(@username, @password, **extra)
return unless @scheme
@authenticator = load_authenticator(@scheme, @username, @password, **extra)
end
def shift
# TODO: this operation must be synchronized
@ns += 1
@uri = @uris[@ns]
return unless @uri
@uri = URI(@uri) unless @uri.is_a?(URI::Generic)
scheme = infer_default_auth_scheme(@uri)
return unless scheme != @scheme
@scheme = scheme
@username = username || @uri.user
@password = password || @uri.password
@authenticator = load_authenticator(scheme, @username, @password)
end
def can_authenticate?(*args)
@@ -87,11 +116,34 @@
super
end
end
private
def infer_default_auth_scheme(uri)
case uri.scheme
when "socks5"
uri.scheme
when "http", "https"
"basic"
end
end
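`infer_default_auth_scheme` picks the proxy auth scheme from the proxy URI: SOCKS5 proxies carry authentication in the SOCKS handshake itself, while http(s) proxies default to basic auth. A standalone sketch of that mapping:

```ruby
require "uri"

# Default proxy auth scheme by proxy URI scheme, as in
# Parameters#infer_default_auth_scheme: SOCKS5 proxies authenticate
# in-protocol; HTTP(S) proxies fall back to "basic".
def infer_default_auth_scheme(uri)
  case URI(uri).scheme
  when "socks5" then "socks5"
  when "http", "https" then "basic"
  end
end

infer_default_auth_scheme("http://proxy.local:3128")   # => "basic"
infer_default_auth_scheme("socks5://proxy.local:1080") # => "socks5"
infer_default_auth_scheme("ftp://proxy.local")         # => nil (unsupported)
```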
def load_authenticator(scheme, username, password, **extra)
auth_scheme = scheme.to_s.capitalize
require_relative "auth/#{scheme}" unless defined?(Authentication) && Authentication.const_defined?(auth_scheme, false)
Authentication.const_get(auth_scheme).new(username, password, **extra)
end
end
# adds support for the following options:
#
# :proxy :: proxy options defining *:uri*, *:username*, *:password* or
# *:scheme* (i.e. <tt>{ uri: "http://proxy" }</tt>)
module OptionsMethods
def option_proxy(value)
value.is_a?(Parameters) ? value : Hash[value]
value.is_a?(Parameters) ? value : Parameters.new(**Hash[value])
end
def option_supported_proxy_protocols(value)
@@ -102,91 +154,79 @@
end
module InstanceMethods
private
def find_connection(request, connections, options)
def find_connection(request_uri, selector, options)
return super unless options.respond_to?(:proxy)
uri = URI(request.uri)
if (next_proxy = request_uri.find_proxy)
return super(request_uri, selector, options.merge(proxy: Parameters.new(uri: next_proxy)))
end
proxy_opts = if (next_proxy = uri.find_proxy)
{ uri: next_proxy }
else
proxy = options.proxy
proxy = options.proxy
return super unless proxy
return super unless proxy
return super(request, connections, options.merge(proxy: nil)) unless proxy.key?(:uri)
next_proxy = proxy.uri
@_proxy_uris ||= Array(proxy[:uri])
raise ProxyError, "Failed to connect to proxy" unless next_proxy
next_proxy = @_proxy_uris.first
raise Error, "Failed to connect to proxy" unless next_proxy
raise ProxyError,
"#{next_proxy.scheme}: unsupported proxy protocol" unless options.supported_proxy_protocols.include?(next_proxy.scheme)
next_proxy = URI(next_proxy)
if (no_proxy = proxy.no_proxy)
no_proxy = no_proxy.join(",") if no_proxy.is_a?(Array)
raise Error,
"#{next_proxy.scheme}: unsupported proxy protocol" unless options.supported_proxy_protocols.include?(next_proxy.scheme)
# TODO: setting proxy to nil leaks the connection object in the pool
return super(request_uri, selector, options.merge(proxy: nil)) unless URI::Generic.use_proxy?(request_uri.host, next_proxy.host,
next_proxy.port, no_proxy)
end
if proxy.key?(:no_proxy)
super(request_uri, selector, options.merge(proxy: proxy))
end
no_proxy = proxy[:no_proxy]
no_proxy = no_proxy.join(",") if no_proxy.is_a?(Array)
private
return super(request, connections, options.merge(proxy: nil)) unless URI::Generic.use_proxy?(uri.host, next_proxy.host,
next_proxy.port, no_proxy)
def fetch_response(request, selector, options)
response = request.response # in case it goes wrong later
begin
response = super
if response.is_a?(ErrorResponse) && proxy_error?(request, response, options)
options.proxy.shift
# return last error response if no more proxies to try
return response if options.proxy.uri.nil?
log { "failed connecting to proxy, trying next..." }
request.transition(:idle)
send_request(request, selector, options)
return
end
proxy.merge(uri: next_proxy)
response
rescue ProxyError
# may happen if coupled with retries, and there are no more proxies to try, in which case
# it'll end up here
response
end
proxy = Parameters.new(**proxy_opts)
proxy_options = options.merge(proxy: proxy)
connection = pool.find_connection(uri, proxy_options) || init_connection(uri, proxy_options)
unless connections.nil? || connections.include?(connection)
connections << connection
set_connection_callbacks(connection, connections, options)
end
connection
end
def fetch_response(request, connections, options)
response = super
def proxy_error?(_request, response, options)
return false unless options.proxy
if response.is_a?(ErrorResponse) && proxy_error?(request, response)
@_proxy_uris.shift
# return last error response if no more proxies to try
return response if @_proxy_uris.empty?
log { "failed connecting to proxy, trying next..." }
request.transition(:idle)
send_request(request, connections, options)
return
end
response
end
def proxy_error?(_request, response)
error = response.error
case error
when NativeResolveError
return false unless @_proxy_uris && !@_proxy_uris.empty?
proxy_uri = URI(options.proxy.uri)
proxy_uri = URI(@_proxy_uris.first)
origin = error.connection.origin
peer = error.connection.peer
# failed resolving proxy domain
origin.host == proxy_uri.host && origin.port == proxy_uri.port
peer.host == proxy_uri.host && peer.port == proxy_uri.port
when ResolveError
return false unless @_proxy_uris && !@_proxy_uris.empty?
proxy_uri = URI(@_proxy_uris.first)
proxy_uri = URI(options.proxy.uri)
error.message.end_with?(proxy_uri.to_s)
when *PROXY_ERRORS
when ProxyConnectionError
# timeout errors connecting to proxy
true
else
@@ -204,25 +244,11 @@
# redefining the connection origin as the proxy's URI,
# as this will be used as the tcp peer ip.
proxy_uri = URI(@options.proxy.uri)
@origin.host = proxy_uri.host
@origin.port = proxy_uri.port
@proxy_uri = URI(@options.proxy.uri)
end
def coalescable?(connection)
return super unless @options.proxy
if @io.protocol == "h2" &&
@origin.scheme == "https" &&
connection.origin.scheme == "https" &&
@io.can_verify_peer?
# in proxied connections, .origin is the proxy; given names
# are stored in .origins, this is what is used.
origin = URI(connection.origins.first)
@io.verify_hostname(origin.host)
else
@origin == connection.origin
end
def peer
@proxy_uri || super
end
def connecting?
@@ -240,6 +266,14 @@
when :connecting
consume
end
rescue *PROXY_ERRORS => e
if connecting?
error = ProxyConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
raise error
end
raise e
end
def reset
@@ -248,7 +282,7 @@
@state = :open
super
emit(:close)
# emit(:close)
end
private
@@ -281,13 +315,29 @@
end
super
end
def purge_after_closed
super
@io = @io.proxy_io if @io.respond_to?(:proxy_io)
end
end
module ProxyRetries
module InstanceMethods
def retryable_error?(ex)
super || ex.is_a?(ProxyConnectionError)
end
end
end
end
register_plugin :proxy, Proxy
end
class ProxySSL < SSL
attr_reader :proxy_io
def initialize(tcp, request_uri, options)
@proxy_io = tcp
@io = tcp.to_io
super(request_uri, tcp.addresses, options)
@hostname = request_uri.host


@@ -23,24 +23,19 @@ module HTTPX
with(proxy: opts.merge(scheme: "ntlm"))
end
def fetch_response(request, connections, options)
def fetch_response(request, selector, options)
response = super
if response &&
response.is_a?(Response) &&
response.status == 407 &&
!request.headers.key?("proxy-authorization") &&
response.headers.key?("proxy-authenticate")
connection = find_connection(request, connections, options)
if connection.options.proxy.can_authenticate?(response.headers["proxy-authenticate"])
request.transition(:idle)
request.headers["proxy-authorization"] =
connection.options.proxy.authenticate(request, response.headers["proxy-authenticate"])
send_request(request, connections)
return
end
response.headers.key?("proxy-authenticate") && options.proxy.can_authenticate?(response.headers["proxy-authenticate"])
request.transition(:idle)
request.headers["proxy-authorization"] =
options.proxy.authenticate(request, response.headers["proxy-authenticate"])
send_request(request, selector, options)
return
end
response
@@ -65,11 +60,18 @@ module HTTPX
return unless @io.connected?
@parser || begin
@parser = self.class.parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
@parser = parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
parser = @parser
parser.extend(ProxyParser)
parser.on(:response, &method(:__http_on_connect))
parser.on(:close) { transition(:closing) }
parser.on(:close) do |force|
next unless @parser
if force
reset
emit(:terminate)
end
end
parser.on(:reset) do
if parser.empty?
reset
@@ -90,8 +92,9 @@
case @state
when :connecting
@parser.close
parser = @parser
@parser = nil
parser.close
when :idle
@parser.callbacks.clear
set_parser_callbacks(@parser)
@@ -135,6 +138,8 @@
else
pending = @pending + @parser.pending
while (req = pending.shift)
response.finish!
req.response = response
req.emit(:response, response)
end
reset
@@ -163,8 +168,8 @@
end
class ConnectRequest < Request
def initialize(uri, _options)
super("CONNECT", uri, {})
def initialize(uri, options)
super("CONNECT", uri, options)
@headers.delete("accept")
end


@@ -4,7 +4,7 @@
require "resolv"
require "ipaddr"
module HTTPX
class Socks4Error < HTTPProxyError; end
class Socks4Error < ProxyError; end
module Plugins
module Proxy
@@ -89,7 +89,7 @@ module HTTPX
def initialize(buffer, options)
@buffer = buffer
@options = Options.new(options)
@options = options
end
def close; end


@@ -1,7 +1,7 @@
# frozen_string_literal: true
module HTTPX
class Socks5Error < HTTPProxyError; end
class Socks5Error < ProxyError; end
module Plugins
module Proxy
@@ -141,7 +141,7 @@ module HTTPX
def initialize(buffer, options)
@buffer = buffer
@options = Options.new(options)
@options = options
end
def close; end


@@ -0,0 +1,35 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for using the experimental QUERY HTTP method
#
# https://gitlab.com/os85/httpx/wikis/Query
module Query
def self.subplugins
{
retries: QueryRetries,
}
end
module InstanceMethods
def query(*uri, **options)
request("QUERY", uri, **options)
end
end
module QueryRetries
module InstanceMethods
private
def repeatable_request?(request, options)
super || request.verb == "QUERY"
end
end
end
end
register_plugin :query, Query
end
end


@@ -10,21 +10,18 @@ module HTTPX
module ResponseCache
CACHEABLE_VERBS = %w[GET HEAD].freeze
CACHEABLE_STATUS_CODES = [200, 203, 206, 300, 301, 410].freeze
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding accept-language cookie origin].sort.freeze
private_constant :CACHEABLE_VERBS
private_constant :CACHEABLE_STATUS_CODES
class << self
def load_dependencies(*)
require_relative "response_cache/store"
require_relative "response_cache/file_store"
end
def cacheable_request?(request)
CACHEABLE_VERBS.include?(request.verb) &&
(
!request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
)
end
# whether the +response+ can be stored in the response cache.
# (i.e. has a cacheable body, does not contain directives prohibiting storage, etc...)
def cacheable_response?(response)
response.is_a?(Response) &&
(
@@ -39,82 +36,230 @@ module HTTPX
# directive prohibits caching. However, a cache that does not support
# the Range and Content-Range headers MUST NOT cache 206 (Partial
# Content) responses.
response.status != 206 && (
response.headers.key?("etag") || response.headers.key?("last-modified") || response.fresh?
)
response.status != 206
end
def cached_response?(response)
# whether the +response+ is a 304 Not Modified response.
def not_modified?(response)
response.is_a?(Response) && response.status == 304
end
def extra_options(options)
options.merge(response_cache_store: Store.new)
options.merge(
supported_vary_headers: SUPPORTED_VARY_HEADERS,
response_cache_store: :store,
)
end
end
# adds support for the following options:
#
# :supported_vary_headers :: array of header values that will be considered for a "vary" header based cache validation
# (defaults to {SUPPORTED_VARY_HEADERS}).
# :response_cache_store :: object where cached responses are fetched from or stored in; defaults to <tt>:store</tt> (in-memory
# cache), can be set to <tt>:file_store</tt> (file system cache store) as well, or any object which
# abides by the Cache Store Interface
#
# The Cache Store Interface requires implementation of the following methods:
#
# * +#get(request) -> response or nil+
# * +#set(request, response) -> void+
# * +#clear() -> void+
#
module OptionsMethods
def option_response_cache_store(value)
raise TypeError, "must be an instance of #{Store}" unless value.is_a?(Store)
case value
when :store
Store.new
when :file_store
FileStore.new
else
value
end
end
value
def option_supported_vary_headers(value)
Array(value).sort
end
end
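Per the Cache Store Interface documented above, any object responding to get/set/clear can be plugged in as :response_cache_store. A minimal in-memory sketch; thread safety and vary-aware matching, which the built-in Store handles, are deliberately omitted, and `Request` is a Struct stand-in for `HTTPX::Request`:

```ruby
# Stand-in for HTTPX::Request: only the cache key is needed here.
Request = Struct.new(:response_cache_key)

# Minimal object satisfying the Cache Store Interface
# (get(request), set(request, response), clear).
class HashStore
  def initialize
    @entries = {}
  end

  def get(request)
    @entries[request.response_cache_key]
  end

  def set(request, response)
    @entries[request.response_cache_key] = response
  end

  def clear
    @entries.clear
  end
end

store = HashStore.new
req = Request.new("cache-key-1")
store.set(req, :cached_response)
store.get(req) # => :cached_response
```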
module InstanceMethods
# wipes out all cached responses from the cache store.
def clear_response_cache
@options.response_cache_store.clear
end
def build_request(*)
request = super
return request unless ResponseCache.cacheable_request?(request) && @options.response_cache_store.cached?(request)
return request unless cacheable_request?(request)
@options.response_cache_store.prepare(request)
prepare_cache(request)
request
end
private
def send_request(request, *)
return request if request.response
super
end
def fetch_response(request, *)
response = super
return unless response
if ResponseCache.cached_response?(response)
if ResponseCache.not_modified?(response)
log { "returning cached response for #{request.uri}" }
cached_response = @options.response_cache_store.lookup(request)
response.copy_from_cached(cached_response)
else
@options.response_cache_store.cache(request, response)
response.copy_from_cached!
elsif request.cacheable_verb? && ResponseCache.cacheable_response?(response)
request.options.response_cache_store.set(request, response) unless response.cached?
end
response
end
# will either assign a still-fresh cached response to +request+, or set up its HTTP
# cache invalidation headers in case it's not fresh anymore.
def prepare_cache(request)
cached_response = request.options.response_cache_store.get(request)
return unless cached_response && match_by_vary?(request, cached_response)
cached_response.body.rewind
if cached_response.fresh?
cached_response = cached_response.dup
cached_response.mark_as_cached!
request.response = cached_response
request.emit(:response, cached_response)
return
end
request.cached_response = cached_response
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
def cacheable_request?(request)
request.cacheable_verb? &&
(
!request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
)
end
# whether the +response+ complies with the directives set by the +request+ "vary" header
# (true when none is available).
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.original_request
if vary == %w[*]
request.options.supported_vary_headers.each do |field|
return false unless request.headers[field] == original_request.headers[field]
end
return true
end
vary.all? do |field|
!original_request.headers.key?(field) || request.headers[field] == original_request.headers[field]
end
end
end
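`match_by_vary?` above can be read in isolation: a cached response only matches a request when the headers named in its "vary" field equal those of the originally cached request, and a wildcard vary falls back to comparing the :supported_vary_headers list. A sketch with plain hashes standing in for header objects:

```ruby
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding accept-language cookie origin].freeze

# Does the new request match the originally cached request under the
# response's +vary+ field list? ("*" compares the supported subset.)
def match_by_vary?(request_headers, original_headers, vary)
  return true unless vary

  if vary == ["*"]
    return SUPPORTED_VARY_HEADERS.all? { |f| request_headers[f] == original_headers[f] }
  end

  vary.all? do |field|
    !original_headers.key?(field) || request_headers[field] == original_headers[field]
  end
end
```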
module RequestMethods
# points to a previously cached Response corresponding to this request.
attr_accessor :cached_response
def initialize(*)
super
@cached_response = nil
end
def merge_headers(*)
super
@response_cache_key = nil
end
# returns whether this request is cacheable as per HTTP caching rules.
def cacheable_verb?
CACHEABLE_VERBS.include?(@verb)
end
# returns a unique cache key as a String identifying this request
def response_cache_key
@response_cache_key ||= Digest::SHA1.hexdigest("httpx-response-cache-#{@verb}-#{@uri}")
@response_cache_key ||= begin
keys = [@verb, @uri]
@options.supported_vary_headers.each do |field|
value = @headers[field]
keys << value if value
end
Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end
end
end
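`response_cache_key` folds the verb, the URI, and any present supported vary header values into a single SHA-1 digest, so variants negotiated via e.g. "accept-encoding" occupy distinct cache slots. The same computation, standalone:

```ruby
require "digest"

SUPPORTED_VARY_HEADERS = %w[accept accept-encoding accept-language cookie origin].freeze

# Vary-aware cache key, mirroring RequestMethods#response_cache_key:
# only supported vary headers that are actually present contribute.
def response_cache_key(verb, uri, headers)
  keys = [verb, uri]
  SUPPORTED_VARY_HEADERS.each do |field|
    value = headers[field]
    keys << value if value
  end
  Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end
```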
module ResponseMethods
def copy_from_cached(other)
# 304 responses do not have content-type, which is needed for decoding.
@headers = @headers.class.new(other.headers.merge(@headers))
attr_writer :original_request
@body = other.body.dup
def initialize(*)
super
@cached = false
end
# a copy of the request this response was originally cached from
def original_request
@original_request || @request
end
# whether this Response was duplicated from a previously cached {RequestMethods#cached_response}.
def cached?
@cached
end
# sets this Response as being duplicated from a previously cached response.
def mark_as_cached!
@cached = true
end
# eager-copies the response headers and body from {RequestMethods#cached_response}.
def copy_from_cached!
cached_response = @request.cached_response
return unless cached_response
# 304 responses do not have content-type, which is needed for decoding.
@headers = @headers.class.new(cached_response.headers.merge(@headers))
@body = cached_response.body.dup
@body.rewind
end
# A response is fresh if its age has not yet exceeded its freshness lifetime.
# Other {#cache_control} directives may influence the outcome, as per the rules
# from the {rfc}[https://www.rfc-editor.org/rfc/rfc7234]
def fresh?
if cache_control
return false if cache_control.include?("no-cache")
return true if cache_control.include?("immutable")
# check age: max-age
max_age = cache_control.find { |directive| directive.start_with?("s-maxage") }
@@ -132,15 +277,16 @@ module HTTPX
begin
expires = Time.httpdate(@headers["expires"])
rescue ArgumentError
return true
return false
end
return (expires - Time.now).to_i.positive?
end
true
false
end
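The freshness check above follows RFC 7234: cache-control lifetimes (s-maxage/max-age) win over Expires, and, as this diff changes, a response with no lifetime information at all is now treated as stale rather than fresh. A simplified sketch of the same decision, with s-maxage and a real Age computation omitted for brevity:

```ruby
require "time"

# Simplified RFC 7234 freshness: fresh while age < max-age lifetime;
# Expires is the fallback; with no lifetime information at all the
# response is treated as stale, as the patched #fresh? now does.
def fresh?(cache_control: [], age: 0, expires: nil, now: Time.now)
  return false if cache_control.include?("no-cache")
  return true if cache_control.include?("immutable")

  if (max_age = cache_control.find { |d| d.start_with?("max-age=") })
    return Integer(max_age.delete_prefix("max-age=")) > age
  end

  if expires
    begin
      return (Time.httpdate(expires) - now).positive?
    rescue ArgumentError
      return false # unparseable Expires counts as stale
    end
  end

  false
end
```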
# returns the "cache-control" directives as an Array of String(s).
def cache_control
return @cache_control if defined?(@cache_control)
@@ -151,24 +297,28 @@
end
end
# returns the "vary" header value as an Array of (String) headers.
def vary
return @vary if defined?(@vary)
@vary = begin
return unless @headers.key?("vary")
@headers["vary"].split(/ *, */)
@headers["vary"].split(/ *, */).map(&:downcase)
end
end
private
# returns the value of the "age" header as an Integer (time since epoch).
# if no "age" header exists, it returns the number of seconds since {#date}.
def age
return @headers["age"].to_i if @headers.key?("age")
(Time.now - date).to_i
end
# returns the value of the "date" header as a Time object
def date
@date ||= Time.httpdate(@headers["date"])
rescue NoMethodError, ArgumentError


@@ -0,0 +1,140 @@
# frozen_string_literal: true
require "pathname"
module HTTPX::Plugins
module ResponseCache
# Implementation of a file system based cache store.
#
# It stores cached responses in a file under a directory pointed to by the +dir+
# variable (defaults to the default temp directory from the OS), in a custom
# format (similar but different from HTTP/1.1 request/response framing).
class FileStore
CRLF = HTTPX::Connection::HTTP1::CRLF
attr_reader :dir
def initialize(dir = Dir.tmpdir)
@dir = Pathname.new(dir).join("httpx-response-cache")
FileUtils.mkdir_p(@dir)
end
def clear
FileUtils.rm_rf(@dir)
end
def get(request)
path = file_path(request)
return unless File.exist?(path)
File.open(path, mode: File::RDONLY | File::BINARY) do |f|
f.flock(File::Constants::LOCK_SH)
read_from_file(request, f)
end
end
def set(request, response)
path = file_path(request)
file_exists = File.exist?(path)
mode = file_exists ? File::RDWR : File::CREAT | File::Constants::WRONLY
File.open(path, mode: mode | File::BINARY) do |f|
f.flock(File::Constants::LOCK_EX)
if file_exists
cached_response = read_from_file(request, f)
if cached_response
next if cached_response == request.cached_response
cached_response.close
f.truncate(0)
f.rewind
end
end
# cache the request headers
f << request.verb << CRLF
f << request.uri << CRLF
request.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
# cache the response
f << response.status << CRLF
f << response.version << CRLF
response.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
response.body.rewind
IO.copy_stream(response.body, f)
end
end
private
def file_path(request)
@dir.join(request.response_cache_key)
end
def read_from_file(request, f)
# if it's an empty file
return if f.eof?
# read request data
verb = f.readline.delete_suffix!(CRLF)
uri = f.readline.delete_suffix!(CRLF)
request_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
request_headers[field] = value
end
status = f.readline.delete_suffix!(CRLF)
version = f.readline.delete_suffix!(CRLF)
response_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
response_headers[field] = value
end
original_request = request.options.request_class.new(verb, uri, request.options)
original_request.merge_headers(request_headers)
response = request.options.response_class.new(request, status, version, response_headers)
response.original_request = original_request
response.finish!
IO.copy_stream(f, response.body)
response
end
end
end
end
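The FileStore above frames request and response headers with CRLF separators, similar to HTTP/1.1 message framing. A minimal, self-contained sketch of that framing round-trip (helper names here are illustrative, not the plugin's API; the real store also locks the file and streams the body):

```ruby
require "stringio"

CRLF = "\r\n"

# serialize a verb/uri/headers triple in the CRLF-framed layout used by the store
def write_framed(io, verb, uri, headers)
  io << verb << CRLF << uri << CRLF
  headers.each { |field, value| io << field << ":" << value << CRLF }
  io << CRLF
end

# read it back: one "field:value" line per header until a bare CRLF ends the block
def read_framed(io)
  verb = io.readline.delete_suffix(CRLF)
  uri  = io.readline.delete_suffix(CRLF)
  headers = {}
  while (line = io.readline) != CRLF
    line = line.delete_suffix(CRLF)
    sep = line.index(":")
    headers[line[0...sep]] = line[(sep + 1)..]
  end
  [verb, uri, headers]
end

io = StringIO.new
write_framed(io, "GET", "https://example.com/", "accept" => "*/*")
io.rewind
p read_framed(io) # => ["GET", "https://example.com/", {"accept"=>"*/*"}]
```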

View File

@ -2,6 +2,7 @@
module HTTPX::Plugins
module ResponseCache
# Implementation of a thread-safe in-memory cache store.
class Store
def initialize
@store = {}
@ -12,80 +13,19 @@ module HTTPX::Plugins
@store_mutex.synchronize { @store.clear }
end
def lookup(request)
responses = _get(request)
return unless responses
responses.find(&method(:match_by_vary?).curry(2)[request])
end
def cached?(request)
lookup(request)
end
def cache(request, response)
return unless ResponseCache.cacheable_request?(request) && ResponseCache.cacheable_response?(response)
_set(request, response)
end
def prepare(request)
cached_response = lookup(request)
return unless cached_response
return unless match_by_vary?(request, cached_response)
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
private
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.instance_variable_get(:@request)
return request.headers.same_headers?(original_request.headers) if vary == %w[*]
vary.all? do |cache_field|
cache_field.downcase!
!original_request.headers.key?(cache_field) || request.headers[cache_field] == original_request.headers[cache_field]
end
end
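The Vary matching rule above can be checked in isolation: a cached response only matches a request when every header named in its Vary list carries the same value on both requests, and `Vary: *` demands identical header sets. A hedged sketch with plain hashes standing in for the header objects:

```ruby
# returns true when +request_headers+ agrees with +original_headers+ on every
# field listed in +vary+ (a nil vary means the response matches any request)
def vary_match?(request_headers, original_headers, vary)
  return true unless vary
  return request_headers == original_headers if vary == %w[*]

  vary.all? do |field|
    field = field.downcase
    !original_headers.key?(field) || request_headers[field] == original_headers[field]
  end
end

p vary_match?({ "accept" => "text/html" }, { "accept" => "text/html" }, %w[accept]) # => true
p vary_match?({ "accept" => "text/html" }, { "accept" => "application/json" }, %w[accept]) # => false
```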
def _get(request)
def get(request)
@store_mutex.synchronize do
responses = @store[request.response_cache_key]
return unless responses
responses.select! do |res|
!res.body.closed? && res.fresh?
end
responses
@store[request.response_cache_key]
end
end
def _set(request, response)
def set(request, response)
@store_mutex.synchronize do
responses = (@store[request.response_cache_key] ||= [])
cached_response = @store[request.response_cache_key]
responses.reject! do |res|
res.body.closed? || !res.fresh? || match_by_vary?(request, res)
end
cached_response.close if cached_response
responses << response
@store[request.response_cache_key] = response
end
end
end

View File

@ -3,7 +3,12 @@
module HTTPX
module Plugins
#
# This plugin adds support for retrying requests when certain errors happen.
# This plugin adds support for retrying requests when errors happen.
#
# It has a default max number of retries (see *MAX_RETRIES* and the *max_retries* option),
# after which it will return the last response, error or not. It will **not** raise an exception.
#
# It does not retry requests which are not considered idempotent (see *retry_change_requests* to override).
#
# https://gitlab.com/os85/httpx/wikis/Retries
#
@ -12,7 +17,9 @@ module HTTPX
# TODO: pass max_retries in a configure/load block
IDEMPOTENT_METHODS = %w[GET OPTIONS HEAD PUT DELETE].freeze
RETRYABLE_ERRORS = [
# subset of retryable errors which are safe to retry when reconnecting
RECONNECTABLE_ERRORS = [
IOError,
EOFError,
Errno::ECONNRESET,
@ -20,12 +27,15 @@ module HTTPX
Errno::EPIPE,
Errno::EINVAL,
Errno::ETIMEDOUT,
Parser::Error,
TLSError,
TimeoutError,
ConnectionError,
Connection::HTTP2::GoawayError,
TLSError,
Connection::HTTP2::Error,
].freeze
RETRYABLE_ERRORS = (RECONNECTABLE_ERRORS + [
Parser::Error,
TimeoutError,
]).freeze
DEFAULT_JITTER = ->(interval) { interval * ((rand + 1) * 0.5) }
if ENV.key?("HTTPX_NO_JITTER")
@ -38,6 +48,14 @@ module HTTPX
end
end
# adds support for the following options:
#
# :max_retries :: max number of times a request will be retried (defaults to <tt>3</tt>).
# :retry_change_requests :: whether idempotent requests are retried (defaults to <tt>false</tt>).
# :retry_after :: seconds after which a request is retried; can also be a callable object (i.e. <tt>->(req, res) { ... } </tt>)
# :retry_jitter :: number of seconds applied to *:retry_after* (must be a callable, i.e. <tt>->(retry_after) { ... } </tt>).
# :retry_on :: callable which alternatively defines a different rule for when a response is to be retried
# (i.e. <tt>->(res) { ... }</tt>).
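The interval math behind *:retry_jitter* can be verified standalone: DEFAULT_JITTER scales an interval by a random factor in [0.5, 1.0). A self-contained sketch of that formula:

```ruby
# same formula as DEFAULT_JITTER: interval * ((rand + 1) * 0.5)
# rand is in [0, 1), so the scaling factor lies in [0.5, 1.0)
jitter = ->(interval) { interval * ((rand + 1) * 0.5) }

samples = Array.new(1_000) { jitter.call(10.0) }
# every jittered value stays within [5.0, 10.0)
p samples.all? { |s| s >= 5.0 && s < 10.0 } # => true
```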
module OptionsMethods
def option_retry_after(value)
# return early if callable
@ -75,29 +93,30 @@ module HTTPX
end
module InstanceMethods
# returns a `:retries` plugin enabled session with +n+ maximum retries per request setting.
def max_retries(n)
with(max_retries: n.to_i)
with(max_retries: n)
end
private
def fetch_response(request, connections, options)
def fetch_response(request, selector, options)
response = super
if response &&
request.retries.positive? &&
__repeatable_request?(request, options) &&
repeatable_request?(request, options) &&
(
(
response.is_a?(ErrorResponse) && __retryable_error?(response.error)
response.is_a?(ErrorResponse) && retryable_error?(response.error)
) ||
(
options.retry_on && options.retry_on.call(response)
)
)
__try_partial_retry(request, response)
try_partial_retry(request, response)
log { "failed to get response, #{request.retries} tries to go..." }
request.retries -= 1
request.retries -= 1 unless request.ping? # do not exhaust retries on connection liveness probes
request.transition(:idle)
retry_after = options.retry_after
@ -111,12 +130,18 @@ module HTTPX
retry_start = Utils.now
log { "retrying after #{retry_after} secs..." }
pool.after(retry_after) do
log { "retrying (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
send_request(request, connections, options)
selector.after(retry_after) do
if (response = request.response)
response.finish!
# request has terminated abruptly meanwhile
request.emit(:response, response)
else
log { "retrying (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
send_request(request, selector, options)
end
end
else
send_request(request, connections, options)
send_request(request, selector, options)
end
return
@ -124,24 +149,26 @@ module HTTPX
response
end
def __repeatable_request?(request, options)
# returns whether +request+ can be retried.
def repeatable_request?(request, options)
IDEMPOTENT_METHODS.include?(request.verb) || options.retry_change_requests
end
def __retryable_error?(ex)
# returns whether the +ex+ exception happened for a retriable request.
def retryable_error?(ex)
RETRYABLE_ERRORS.any? { |klass| ex.is_a?(klass) }
end
def proxy_error?(request, response)
def proxy_error?(request, response, _)
super && !request.retries.positive?
end
#
# Atttempt to set the request to perform a partial range request.
# Attempt to set the request to perform a partial range request.
# This happens if the peer server accepts byte-range requests, and
# the last response contains some body payload.
#
def __try_partial_retry(request, response)
def try_partial_retry(request, response)
response = response.response if response.is_a?(ErrorResponse)
return unless response
@ -149,7 +176,7 @@ module HTTPX
unless response.headers.key?("accept-ranges") &&
response.headers["accept-ranges"] == "bytes" && # there's nothing else supported though...
(original_body = response.body)
response.close if response.respond_to?(:close)
response.body.close
return
end
@ -162,10 +189,13 @@ module HTTPX
end
module RequestMethods
# number of retries left.
attr_accessor :retries
# a response partially received before.
attr_writer :partial_response
# initializes the request instance, sets the number of retries for the request.
def initialize(*args)
super
@retries = @options.max_retries

View File

@ -87,6 +87,9 @@ module HTTPX
end
end
# adds support for the following options:
#
# :allowed_schemes :: list of URI schemes allowed (defaults to <tt>["https", "http"]</tt>)
module OptionsMethods
def option_allowed_schemes(value)
Array(value)
@ -100,7 +103,7 @@ module HTTPX
error = ServerSideRequestForgeryError.new("#{request.uri} URI scheme not allowed")
error.set_backtrace(caller)
response = ErrorResponse.new(request, error, request.options)
response = ErrorResponse.new(request, error)
request.emit(:response, response)
response
end

View File

@ -2,29 +2,43 @@
module HTTPX
class StreamResponse
attr_reader :request
def initialize(request, session)
@request = request
@options = @request.options
@session = session
@response = nil
@response_enum = nil
@buffered_chunks = []
end
def each(&block)
return enum_for(__method__) unless block
if (response_enum = @response_enum)
@response_enum = nil
# streaming already started, let's finish it
while (chunk = @buffered_chunks.shift)
block.call(chunk)
end
# consume enum til the end
begin
while (chunk = response_enum.next)
block.call(chunk)
end
rescue StopIteration
return
end
end
@request.stream = self
begin
@on_chunk = block
if @request.response
# if we've already started collecting the payload, yield it first
# before proceeding.
body = @request.response.body
body.each do |chunk|
on_chunk(chunk)
end
end
response = @session.request(@request)
response.raise_for_status
ensure
@ -59,38 +73,50 @@ module HTTPX
# :nocov:
def inspect
"#<StreamResponse:#{object_id}>"
"#<#{self.class}:#{object_id}>"
end
# :nocov:
def to_s
response.to_s
if @request.response
@request.response.to_s
else
@buffered_chunks.join
end
end
private
def response
return @response if @response
@request.response || begin
@response = @session.request(@request)
response_enum = each
while (chunk = response_enum.next)
@buffered_chunks << chunk
break if @request.response
end
@response_enum = response_enum
@request.response
end
end
def respond_to_missing?(meth, *args)
response.respond_to?(meth, *args) || super
def respond_to_missing?(meth, include_private)
if (response = @request.response)
response.respond_to_missing?(meth, include_private)
else
@options.response_class.method_defined?(meth) || (include_private && @options.response_class.private_method_defined?(meth))
end || super
end
def method_missing(meth, *args, &block)
def method_missing(meth, *args, **kwargs, &block)
return super unless response.respond_to?(meth)
response.__send__(meth, *args, &block)
response.__send__(meth, *args, **kwargs, &block)
end
end
module Plugins
#
# This plugin adds support for stream response (text/event-stream).
# This plugin adds support for streaming a response (useful for i.e. "text/event-stream" payloads).
#
# https://gitlab.com/os85/httpx/wikis/Stream
#

View File

@ -0,0 +1,315 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for bidirectional HTTP/2 streams.
#
# https://gitlab.com/os85/httpx/wikis/StreamBidi
#
# It is required that the request body allows chunks to be buffered (i.e. responds to +#<<(chunk)+).
module StreamBidi
# Extension of the Connection::HTTP2 class, which adds functionality to
# deal with a request that can't be drained and must be interleaved with
# the response streams.
#
# The stream keeps sending DATA frames while there's data; when there's none left,
# the stream is kept open; it must be explicitly closed by the end user.
#
class HTTP2Bidi < Connection::HTTP2
def initialize(*)
super
@lock = Thread::Mutex.new
end
%i[close empty? exhausted? send <<].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock-aware version of +#{lock_meth}+
def #{lock_meth}(*) # def close(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
private
%i[join_headers join_trailers join_body].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock-aware version of +#{lock_meth}+
def #{lock_meth}(*) # def join_headers(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
def handle_stream(stream, request)
request.on(:body) do
next unless request.headers_sent
handle(request, stream)
emit(:flush_buffer)
end
super
end
# when there are no more chunks, it marks the buffer as full.
def send_chunk(request, stream, chunk, next_chunk)
super
return if next_chunk
request.transition(:waiting_for_chunk)
throw(:buffer_full)
end
# sets end-stream flag when the request is closed.
def end_stream?(request, next_chunk)
request.closed? && next_chunk.nil?
end
end
# BidiBuffer is a Buffer which can receive data from threads other
# than the thread of the corresponding Connection/Session.
#
# It synchronizes access to a secondary internal +@oob_buffer+, which periodically
# is reconciled to the main internal +@buffer+.
class BidiBuffer < Buffer
def initialize(*)
super
@parent_thread = Thread.current
@oob_mutex = Thread::Mutex.new
@oob_buffer = "".b
end
# buffers the +chunk+ to be sent
def <<(chunk)
return super if Thread.current == @parent_thread
@oob_mutex.synchronize { @oob_buffer << chunk }
end
# reconciles the main and secondary buffer (which receives data from other threads).
def rebuffer
raise Error, "can only rebuffer while waiting on a response" unless Thread.current == @parent_thread
@oob_mutex.synchronize do
@buffer << @oob_buffer
@oob_buffer.clear
end
end
end
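BidiBuffer's reconciliation step can be sketched standalone: writer threads append to a mutex-guarded side buffer, and only the owning thread folds it into the main buffer. A hedged sketch (class and method names here are illustrative, not the real API):

```ruby
class OobBuffer
  def initialize
    @parent = Thread.current
    @mutex  = Thread::Mutex.new
    @main   = "".b
    @oob    = "".b
  end

  # the parent thread writes directly; other threads buffer out-of-band
  def <<(chunk)
    if Thread.current == @parent
      @main << chunk
    else
      @mutex.synchronize { @oob << chunk }
    end
    self
  end

  # only the parent thread may fold the side buffer into the main one
  def rebuffer
    raise "rebuffer called off the parent thread" unless Thread.current == @parent

    @mutex.synchronize do
      @main << @oob
      @oob.clear
    end
  end

  def to_s
    @main.dup
  end
end

buf = OobBuffer.new
buf << "a"
Thread.new { buf << "b" }.join
buf.rebuffer
p buf.to_s # => "ab"
```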
# Proxy to wake up the session main loop when one
# of the connections has buffered data to write. It abides by the HTTPX::_Selectable API,
# which allows it to be registered in the selector alongside actual HTTP-based
# HTTPX::Connection objects.
class Signal
def initialize
@closed = false
@pipe_read, @pipe_write = IO.pipe
end
def state
@closed ? :closed : :open
end
# noop
def log(**, &_); end
def to_io
@pipe_read.to_io
end
def wakeup
return if @closed
@pipe_write.write("\0")
end
def call
return if @closed
@pipe_read.readpartial(1)
end
def interests
return if @closed
:r
end
def timeout; end
def terminate
@pipe_write.close
@pipe_read.close
@closed = true
end
# noop (the owner connection will take care of it)
def handle_socket_timeout(interval); end
end
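The Signal class above is the classic self-pipe trick: the read end of a pipe sits in the selector's readable set, so any thread can wake the event loop by writing a byte to the write end. A minimal standalone sketch of the mechanism:

```ruby
# self-pipe: the read end is watched by IO.select; writing one byte
# from another thread makes the select call return
read_io, write_io = IO.pipe

waker = Thread.new do
  sleep 0.05
  write_io.write("\0") # wake the "event loop"
end

readable, = IO.select([read_io], nil, nil, 5)
woken = !readable.nil? && readable.include?(read_io)
read_io.readpartial(1) # drain the byte so the pipe is quiet again

waker.join
p woken # => true
```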
class << self
def load_dependencies(klass)
klass.plugin(:stream)
end
def extra_options(options)
options.merge(fallback_protocol: "h2")
end
end
module InstanceMethods
def initialize(*)
@signal = Signal.new
super
end
def close(selector = Selector.new)
@signal.terminate
selector.deregister(@signal)
super(selector)
end
def select_connection(connection, selector)
super
selector.register(@signal)
connection.signal = @signal
end
def deselect_connection(connection, *)
super
connection.signal = nil
end
end
# Adds synchronization to request operations which may buffer payloads from different
# threads.
module RequestMethods
attr_accessor :headers_sent
def initialize(*)
super
@headers_sent = false
@closed = false
@mutex = Thread::Mutex.new
end
def closed?
@closed
end
def can_buffer?
super && @state != :waiting_for_chunk
end
# overrides state management transitions to introduce an intermediate
# +:waiting_for_chunk+ state, which the request transitions to once payload
# is buffered.
def transition(nextstate)
headers_sent = @headers_sent
case nextstate
when :waiting_for_chunk
return unless @state == :body
when :body
case @state
when :headers
headers_sent = true
when :waiting_for_chunk
# HACK: to allow super to pass through
@state = :headers
end
end
super.tap do
# delay setting this up until after the first transition to :body
@headers_sent = headers_sent
end
end
def <<(chunk)
@mutex.synchronize do
if @drainer
@body.clear if @body.respond_to?(:clear)
@drainer = nil
end
@body << chunk
transition(:body)
end
end
def close
@mutex.synchronize do
return if @closed
@closed = true
end
# last chunk to send which ends the stream
self << ""
end
end
module RequestBodyMethods
def initialize(*, **)
super
@headers.delete("content-length")
end
def empty?
false
end
end
# overrides the declaration of +@write_buffer+, which is now a thread-safe buffer
# responding to the same API.
module ConnectionMethods
attr_writer :signal
def initialize(*)
super
@write_buffer = BidiBuffer.new(@options.buffer_size)
end
# rebuffers the +@write_buffer+ before calculating interests.
def interests
@write_buffer.rebuffer
super
end
private
def parser_type(protocol)
return HTTP2Bidi if protocol == "h2"
super
end
def set_parser_callbacks(parser)
super
parser.on(:flush_buffer) do
@signal.wakeup if @signal
end
end
end
end
register_plugin :stream_bidi, StreamBidi
end
end

View File

@ -28,7 +28,7 @@ module HTTPX
end
module InstanceMethods
def fetch_response(request, connections, options)
def fetch_response(request, selector, options)
response = super
if response
@ -45,7 +45,7 @@ module HTTPX
return response unless protocol_handler
log { "upgrading to #{upgrade_protocol}..." }
connection = find_connection(request, connections, options)
connection = find_connection(request.uri, selector, options)
# do not upgrade already upgraded connections
return if connection.upgrade_protocol == upgrade_protocol
@ -60,21 +60,22 @@ module HTTPX
response
end
def close(*args)
return super if args.empty?
connections, = args
pool.close(connections.reject(&:hijacked))
end
end
module ConnectionMethods
attr_reader :upgrade_protocol, :hijacked
def initialize(*)
super
@upgrade_protocol = nil
end
def hijack_io
@hijacked = true
# connection is taken away from selector and not given back to the pool.
@current_session.deselect_connection(self, @current_selector, true)
end
end
end

View File

@ -8,6 +8,10 @@ module HTTPX
# https://gitlab.com/os85/httpx/wikis/WebDav
#
module WebDav
def self.configure(klass)
klass.plugin(:xml)
end
module InstanceMethods
def copy(src, dest)
request("COPY", src, headers: { "destination" => @options.origin.merge(dest) })
@ -43,6 +47,8 @@ module HTTPX
ensure
unlock(path, lock_token)
end
response
end
def unlock(path, lock_token)

76
lib/httpx/plugins/xml.rb Normal file
View File

@ -0,0 +1,76 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin supports request XML encoding/response decoding using the nokogiri gem.
#
# https://gitlab.com/os85/httpx/wikis/XML
#
module XML
MIME_TYPES = %r{\b(application|text)/(.+\+)?xml\b}.freeze
module Transcoder
module_function
class Encoder
def initialize(xml)
@raw = xml
end
def content_type
charset = @raw.respond_to?(:encoding) && @raw.encoding ? @raw.encoding.to_s.downcase : "utf-8"
"application/xml; charset=#{charset}"
end
def bytesize
@raw.to_s.bytesize
end
def to_s
@raw.to_s
end
end
def encode(xml)
Encoder.new(xml)
end
def decode(response)
content_type = response.content_type.mime_type
raise HTTPX::Error, "invalid form mime type (#{content_type})" unless MIME_TYPES.match?(content_type)
Nokogiri::XML.method(:parse)
end
end
class << self
def load_dependencies(*)
require "nokogiri"
end
end
module ResponseMethods
# decodes the response payload into a Nokogiri::XML::Node object **if** the payload is valid
# "application/xml" (requires the "nokogiri" gem).
def xml
decode(Transcoder)
end
end
module RequestBodyClassMethods
# ..., xml: Nokogiri::XML::Node #=> xml encoder
def initialize_body(params)
if (xml = params.delete(:xml))
# @type var xml: Nokogiri::XML::Node | String
return Transcoder.encode(xml)
end
super
end
end
end
register_plugin(:xml, XML)
end
end
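Two pieces of the XML plugin above can be exercised without the nokogiri dependency: the MIME_TYPES pattern, and the Encoder's charset derivation (when the payload exposes +#encoding+, as Nokogiri documents do, that charset is advertised; otherwise utf-8 is assumed). A sketch, with +FakeDoc+ as a stand-in for a Nokogiri document:

```ruby
# same pattern as XML::MIME_TYPES: matches xml and +xml-suffixed types
MIME_TYPES = %r{\b(application|text)/(.+\+)?xml\b}

p MIME_TYPES.match?("application/atom+xml") # => true
p MIME_TYPES.match?("text/plain")           # => false

# mirrors the charset selection in XML::Transcoder::Encoder#content_type
def xml_content_type(raw)
  charset =
    if raw.respond_to?(:encoding) && raw.encoding
      raw.encoding.to_s.downcase
    else
      "utf-8"
    end
  "application/xml; charset=#{charset}"
end

FakeDoc = Struct.new(:encoding) # stands in for a Nokogiri::XML::Document
p xml_content_type(FakeDoc.new("ISO-8859-1")) # => "application/xml; charset=iso-8859-1"
p xml_content_type(FakeDoc.new(nil))          # => "application/xml; charset=utf-8"
```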

View File

@ -1,6 +1,5 @@
# frozen_string_literal: true
require "forwardable"
require "httpx/selector"
require "httpx/connection"
require "httpx/resolver"
@ -8,110 +7,34 @@ require "httpx/resolver"
module HTTPX
class Pool
using ArrayExtensions::FilterMap
extend Forwardable
using URIExtensions
def_delegator :@timers, :after
POOL_TIMEOUT = 5
def initialize
@resolvers = {}
@timers = Timers.new
@selector = Selector.new
# Sets up the connection pool with the given +options+, which can be the following:
#
# :max_connections :: the maximum number of connections held in the pool.
# :max_connections_per_origin :: the maximum number of connections held in the pool pointing to a given origin.
# :pool_timeout :: the number of seconds to wait for a connection to a given origin (before raising HTTPX::PoolTimeoutError)
#
def initialize(options)
@max_connections = options.fetch(:max_connections, Float::INFINITY)
@max_connections_per_origin = options.fetch(:max_connections_per_origin, Float::INFINITY)
@pool_timeout = options.fetch(:pool_timeout, POOL_TIMEOUT)
@resolvers = Hash.new { |hs, resolver_type| hs[resolver_type] = [] }
@resolver_mtx = Thread::Mutex.new
@connections = []
@connection_mtx = Thread::Mutex.new
@connections_counter = 0
@max_connections_cond = ConditionVariable.new
@origin_counters = Hash.new(0)
@origin_conds = Hash.new { |hs, orig| hs[orig] = ConditionVariable.new }
end
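The checkout path below blocks on a ConditionVariable when a cap is hit and raises a pool timeout when the wait expires. That bounded-checkout pattern can be sketched standalone (simplified to one global cap, no per-origin accounting; class names are illustrative):

```ruby
class BoundedPool
  TimeoutError = Class.new(StandardError)

  def initialize(max, timeout: 0.1)
    @max = max
    @count = 0
    @mutex = Thread::Mutex.new
    @cond = ConditionVariable.new
    @timeout = timeout
  end

  def checkout
    @mutex.synchronize do
      if @count == @max
        # wait for a checkin to signal; re-check the cap after the timed wait
        @cond.wait(@mutex, @timeout)
        raise TimeoutError, "no connection after #{@timeout}s" if @count == @max
      end
      @count += 1
    end
  end

  def checkin
    @mutex.synchronize do
      @count -= 1
      @cond.signal
    end
  end
end

pool = BoundedPool.new(1, timeout: 0.05)
pool.checkout
begin
  pool.checkout # cap reached, nobody checks in => times out
rescue BoundedPool::TimeoutError => e
  p e.class # => BoundedPool::TimeoutError
end
```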
def wrap
connections = @connections
@connections = []
begin
yield self
ensure
@connections.unshift(*connections)
end
end
def empty?
@connections.empty?
end
def next_tick
catch(:jump_tick) do
timeout = next_timeout
if timeout && timeout.negative?
@timers.fire
throw(:jump_tick)
end
begin
@selector.select(timeout, &:call)
@timers.fire
rescue TimeoutError => e
@timers.fire(e)
end
end
rescue StandardError => e
@connections.each do |connection|
connection.emit(:error, e)
end
rescue Exception # rubocop:disable Lint/RescueException
@connections.each(&:force_reset)
raise
end
def close(connections = @connections)
return if connections.empty?
connections = connections.reject(&:inflight?)
connections.each(&:terminate)
next_tick until connections.none? { |c| c.state != :idle && @connections.include?(c) }
# close resolvers
outstanding_connections = @connections
resolver_connections = @resolvers.each_value.flat_map(&:connections).compact
outstanding_connections -= resolver_connections
return unless outstanding_connections.empty?
@resolvers.each_value do |resolver|
resolver.close unless resolver.closed?
end
# for https resolver
resolver_connections.each(&:terminate)
next_tick until resolver_connections.none? { |c| c.state != :idle && @connections.include?(c) }
end
def init_connection(connection, _options)
connection.timers = @timers
connection.on(:activate) do
select_connection(connection)
end
connection.on(:exhausted) do
case connection.state
when :closed
connection.idling
@connections << connection
select_connection(connection)
when :closing
connection.once(:close) do
connection.idling
@connections << connection
select_connection(connection)
end
end
end
connection.on(:close) do
unregister_connection(connection)
end
connection.on(:terminate) do
unregister_connection(connection, true)
end
resolve_connection(connection) unless connection.family
end
def deactivate(connections)
connections.each do |connection|
connection.deactivate
deselect_connection(connection) if connection.state == :inactive
# connections returned by this function are not expected to return to the connection pool.
def pop_connection
@connection_mtx.synchronize do
drop_connection
end
end
@ -119,185 +42,144 @@ module HTTPX
# Many hostnames are reachable through the same IP, so we try to
# maximize pipelining by opening as few connections as possible.
#
def find_connection(uri, options)
conn = @connections.find do |connection|
connection.match?(uri, options)
end
def checkout_connection(uri, options)
return checkout_new_connection(uri, options) if options.io
return unless conn
@connection_mtx.synchronize do
acquire_connection(uri, options) || begin
if @connections_counter == @max_connections
# this takes precedence over per-origin
@max_connections_cond.wait(@connection_mtx, @pool_timeout)
case conn.state
when :closed
conn.idling
select_connection(conn)
when :closing
conn.once(:close) do
conn.idling
select_connection(conn)
acquire_connection(uri, options) || begin
if @connections_counter == @max_connections
# if no matching usable connection was found, the pool will make room and drop a closed connection. if none is found,
# this means that all of them are persistent or being used, so raise a timeout error.
conn = @connections.find { |c| c.state == :closed }
raise PoolTimeoutError.new(@pool_timeout,
"Timed out after #{@pool_timeout} seconds while waiting for a connection") unless conn
drop_connection(conn)
end
end
end
if @origin_counters[uri.origin] == @max_connections_per_origin
@origin_conds[uri.origin].wait(@connection_mtx, @pool_timeout)
return acquire_connection(uri, options) ||
raise(PoolTimeoutError.new(@pool_timeout,
"Timed out after #{@pool_timeout} seconds while waiting for a connection to #{uri.origin}"))
end
@connections_counter += 1
@origin_counters[uri.origin] += 1
checkout_new_connection(uri, options)
end
end
conn
end
def checkin_connection(connection)
return if connection.options.io
@connection_mtx.synchronize do
@connections << connection
@max_connections_cond.signal
@origin_conds[connection.origin.to_s].signal
end
end
def checkout_mergeable_connection(connection)
return if connection.options.io
@connection_mtx.synchronize do
idx = @connections.find_index do |ch|
ch != connection && ch.mergeable?(connection)
end
@connections.delete_at(idx) if idx
end
end
def reset_resolvers
@resolver_mtx.synchronize { @resolvers.clear }
end
def checkout_resolver(options)
resolver_type = options.resolver_class
resolver_type = Resolver.resolver_for(resolver_type)
@resolver_mtx.synchronize do
resolvers = @resolvers[resolver_type]
idx = resolvers.find_index do |res|
res.options == options
end
resolvers.delete_at(idx) if idx
end || checkout_new_resolver(resolver_type, options)
end
def checkin_resolver(resolver)
@resolver_mtx.synchronize do
resolvers = @resolvers[resolver.class]
resolver = resolver.multi
resolvers << resolver unless resolvers.include?(resolver)
end
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@max_connections_per_origin=#{@max_connections_per_origin} " \
"@pool_timeout=#{@pool_timeout} " \
"@connections=#{@connections.size}>"
end
# :nocov:
private
def resolve_connection(connection)
@connections << connection unless @connections.include?(connection)
if connection.addresses || connection.open?
#
# there are two cases in which we want to activate initialization of
# connection immediately:
#
# 1. when the connection already has addresses, i.e. it doesn't need to
# resolve a name (not the same as name being an IP, yet)
# 2. when the connection is initialized with an external already open IO.
#
connection.once(:connect_error, &connection.method(:handle_error))
on_resolver_connection(connection)
return
def acquire_connection(uri, options)
idx = @connections.find_index do |connection|
connection.match?(uri, options)
end
find_resolver_for(connection) do |resolver|
resolver << try_clone_connection(connection, resolver.family)
next if resolver.empty?
return unless idx
select_connection(resolver)
end
@connections.delete_at(idx)
end
def try_clone_connection(connection, family)
connection.family ||= family
return connection if connection.family == family
new_connection = connection.class.new(connection.origin, connection.options)
new_connection.family = family
connection.once(:tcp_open) { new_connection.force_reset }
connection.once(:connect_error) do |err|
if new_connection.connecting?
new_connection.merge(connection)
connection.emit(:cloned, new_connection)
connection.force_reset
else
connection.__send__(:handle_error, err)
end
end
new_connection.once(:tcp_open) do |new_conn|
if new_conn != connection
new_conn.merge(connection)
connection.force_reset
end
end
new_connection.once(:connect_error) do |err|
if connection.connecting?
# main connection has the requests
connection.merge(new_connection)
new_connection.emit(:cloned, connection)
new_connection.force_reset
else
new_connection.__send__(:handle_error, err)
end
end
init_connection(new_connection, connection.options)
new_connection
def checkout_new_connection(uri, options)
options.connection_class.new(uri, options)
end
def on_resolver_connection(connection)
@connections << connection unless @connections.include?(connection)
found_connection = @connections.find do |ch|
ch != connection && ch.mergeable?(connection)
end
return register_connection(connection) unless found_connection
if found_connection.open?
coalesce_connections(found_connection, connection)
throw(:coalesced, found_connection) unless @connections.include?(connection)
def checkout_new_resolver(resolver_type, options)
if resolver_type.multi?
Resolver::Multi.new(resolver_type, options)
else
found_connection.once(:open) do
coalesce_connections(found_connection, connection)
end
resolver_type.new(options)
end
end
def on_resolver_error(connection, error)
return connection.emit(:connect_error, error) if connection.connecting? && connection.callbacks_for?(:connect_error)
# drops and returns the +connection+ from the connection pool; if +connection+ is <tt>nil</tt> (default),
# the first available connection from the pool will be dropped.
def drop_connection(connection = nil)
if connection
@connections.delete(connection)
else
connection = @connections.shift
connection.emit(:error, error)
end
def on_resolver_close(resolver)
resolver_type = resolver.class
return if resolver.closed?
@resolvers.delete(resolver_type)
deselect_connection(resolver)
resolver.close unless resolver.closed?
end
def register_connection(connection)
select_connection(connection)
end
def unregister_connection(connection, cleanup = !connection.used?)
@connections.delete(connection) if cleanup
deselect_connection(connection)
end
def select_connection(connection)
@selector.register(connection)
end
def deselect_connection(connection)
@selector.deregister(connection)
end
def coalesce_connections(conn1, conn2)
return register_connection(conn2) unless conn1.coalescable?(conn2)
conn2.emit(:tcp_open, conn1)
conn1.merge(conn2)
@connections.delete(conn2)
end
def next_timeout
[
@timers.wait_interval,
*@resolvers.values.reject(&:closed?).filter_map(&:timeout),
*@connections.filter_map(&:timeout),
].compact.min
end
def find_resolver_for(connection)
connection_options = connection.options
resolver_type = connection_options.resolver_class
resolver_type = Resolver.resolver_for(resolver_type)
@resolvers[resolver_type] ||= begin
resolver_manager = if resolver_type.multi?
Resolver::Multi.new(resolver_type, connection_options)
else
resolver_type.new(connection_options)
end
resolver_manager.on(:resolve, &method(:on_resolver_connection))
resolver_manager.on(:error, &method(:on_resolver_error))
resolver_manager.on(:close, &method(:on_resolver_close))
resolver_manager
return unless connection
end
manager = @resolvers[resolver_type]
@connections_counter -= 1
@origin_conds.delete(connection.origin) if (@origin_counters[connection.origin.to_s] -= 1).zero?
(manager.is_a?(Resolver::Multi) && manager.early_resolve(connection)) || manager.resolvers.each do |resolver|
resolver.pool = self
yield resolver
end
manager
connection
end
end
end

View File

@ -8,11 +8,14 @@ module HTTPX
# as well as maintaining the state machine which manages streaming the request onto the wire.
class Request
extend Forwardable
include Loggable
include Callbacks
using URIExtensions
ALLOWED_URI_SCHEMES = %w[https http].freeze
# default value used for "user-agent" header, when not overridden.
USER_AGENT = "httpx.rb/#{VERSION}"
USER_AGENT = "httpx.rb/#{VERSION}".freeze # rubocop:disable Style/RedundantFreeze
# the upcased string HTTP verb for this request.
attr_reader :verb
@ -43,16 +46,52 @@ module HTTPX
attr_writer :persistent
attr_reader :active_timeouts
# will be +true+ when request body has been completely flushed.
def_delegator :@body, :empty?
# initializes the instance with the given +verb+, an absolute or relative +uri+, and the
# request options.
def initialize(verb, uri, options = {})
# closes the body
def_delegator :@body, :close
# initializes the instance with the given +verb+ (an uppercase String, e.g. 'GET'),
# an absolute or relative +uri+ (either as String or URI::HTTP object), the
# request +options+ (instance of HTTPX::Options) and an optional Hash of +params+.
#
# Besides any of the options documented in HTTPX::Options (which would override or merge with what
# +options+ sets), it accepts also the following:
#
# :params :: hash or array of key-values which will be encoded and set in the query string of request uris.
# :body :: to be encoded in the request body payload. can be a String, an IO object (i.e. a File), or an Enumerable.
# :form :: hash or array of key-values which will be form-urlencoded or multipart-encoded in the request body payload.
# :json :: hash or array of key-values which will be JSON-encoded in the request body payload.
# :xml :: Nokogiri XML nodes which will be encoded in the request body payload.
#
# :body, :form, :json and :xml are all mutually exclusive, i.e. only one of them gets picked up.
def initialize(verb, uri, options, params = EMPTY_HASH)
@verb = verb.to_s.upcase
@options = Options.new(options)
@uri = Utils.to_uri(uri)
if @uri.relative?
@headers = options.headers.dup
merge_headers(params.delete(:headers)) if params.key?(:headers)
@headers["user-agent"] ||= USER_AGENT
@headers["accept"] ||= "*/*"
# forego compression in the Range request case
if @headers.key?("range")
@headers.delete("accept-encoding")
else
@headers["accept-encoding"] ||= options.supported_compression_formats
end
@query_params = params.delete(:params) if params.key?(:params)
@body = options.request_body_class.new(@headers, options, **params)
@options = @body.options
if @uri.relative? || @uri.host.nil?
origin = @options.origin
raise(Error, "invalid URI: #{@uri}") unless origin
@ -61,28 +100,37 @@ module HTTPX
@uri = origin.merge("#{base_path}#{@uri}")
end
@headers = @options.headers.dup
@headers["user-agent"] ||= USER_AGENT
@headers["accept"] ||= "*/*"
raise UnsupportedSchemeError, "#{@uri}: #{@uri.scheme}: unsupported URI scheme" unless ALLOWED_URI_SCHEMES.include?(@uri.scheme)
@body = @options.request_body_class.new(@headers, @options)
@state = :idle
@response = nil
@peer_address = nil
@ping = false
@persistent = @options.persistent
@active_timeouts = []
end
# the read timeout defied for this requet.
# whether request has been buffered with a ping
def ping?
@ping
end
# marks the request as having been buffered with a ping
def ping!
@ping = true
end
# the read timeout defined for this request.
def read_timeout
@options.timeout[:read_timeout]
end
# the write timeout defied for this requet.
# the write timeout defined for this request.
def write_timeout
@options.timeout[:write_timeout]
end
# the request timeout defied for this requet.
# the request timeout defined for this request.
def request_timeout
@options.timeout[:request_timeout]
end
@ -91,10 +139,12 @@ module HTTPX
@persistent
end
# if the request contains trailer headers
def trailers?
defined?(@trailers)
end
# returns an instance of HTTPX::Headers containing the trailer headers
def trailers
@trailers ||= @options.headers_class.new
end
@ -106,6 +156,11 @@ module HTTPX
:w
end
def can_buffer?
@state != :done
end
# merges +h+ into the instance of HTTPX::Headers of the request.
def merge_headers(h)
@headers = @headers.merge(h)
end
@ -172,7 +227,7 @@ module HTTPX
return @query if defined?(@query)
query = []
if (q = @options.params)
if (q = @query_params) && !q.empty?
query << Transcoder::Form.encode(q)
end
query << @uri.query if @uri.query
@ -197,7 +252,7 @@ module HTTPX
# :nocov:
def inspect
"#<HTTPX::Request:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"#{@verb} " \
"#{uri} " \
"@headers=#{@headers} " \
@ -210,10 +265,13 @@ module HTTPX
case nextstate
when :idle
@body.rewind
@ping = false
@response = nil
@drainer = nil
@active_timeouts.clear
when :headers
return unless @state == :idle
when :body
return unless @state == :headers ||
@state == :expect
@ -234,7 +292,9 @@ module HTTPX
return unless @state == :body
when :done
return if @state == :expect
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
emit(@state, self)
nil
@ -244,6 +304,15 @@ module HTTPX
def expects?
@headers["expect"] == "100-continue" && @informational_status == 100 && !@response
end
def set_timeout_callback(event, &callback)
clb = once(event, &callback)
# reset timeout callbacks when requests get rerouted to a different connection
once(:idle) do
callbacks(event).delete(clb)
end
end
end
end
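The new `Request#initialize` signature above takes body-related `params` directly instead of reading them from `Options`, with `:body`, `:form` and `:json` being mutually exclusive. A standalone sketch of that first-match-wins selection (the `pick_encoder` helper is illustrative, not part of httpx):

```ruby
# Hypothetical sketch of the mutually exclusive body-param selection
# described in the diff above: first matching key picks the encoder.
def pick_encoder(params)
  if params.key?(:body)
    :raw
  elsif params.key?(:form)
    :form_urlencoded
  elsif params.key?(:json)
    :json
  end
end

pick_encoder(json: { foo: "bar" })          # :json
pick_encoder(body: "bla", json: { foo: 1 }) # :raw, since :body wins
```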

View File

@ -4,30 +4,44 @@ module HTTPX
# Implementation of the HTTP Request body as a delegator which iterates (responds to +each+) payload chunks.
class Request::Body < SimpleDelegator
class << self
def new(_, options)
return options.body if options.body.is_a?(self)
def new(_, options, body: nil, **params)
if body.is_a?(self)
# request derives its options from body
body.options = options.merge(params)
return body
end
super
end
end
# inits the instance with the request +headers+ and +options+, which contain the payload definition.
def initialize(headers, options)
@headers = headers
attr_accessor :options
# forego compression in the Range request case
if @headers.key?("range")
@headers.delete("accept-encoding")
else
@headers["accept-encoding"] ||= options.supported_compression_formats
# inits the instance with the request +headers+, +options+ and +params+, which contain the payload definition.
# it wraps the given body with the appropriate encoder on initialization.
#
# ..., json: { foo: "bar" }) #=> json encoder
# ..., form: { foo: "bar" }) #=> form urlencoded encoder
# ..., form: { foo: Pathname.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { foo: File.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { body: "bla" }) #=> raw data encoder
def initialize(h, options, **params)
@headers = h
@body = self.class.initialize_body(params)
@options = options.merge(params)
if @body
if @options.compress_request_body && @headers.key?("content-encoding")
@headers.get("content-encoding").each do |encoding|
@body = self.class.initialize_deflater_body(@body, encoding)
end
end
@headers["content-type"] ||= @body.content_type
@headers["content-length"] = @body.bytesize unless unbounded_body?
end
initialize_body(options)
return if @body.nil?
@headers["content-type"] ||= @body.content_type
@headers["content-length"] = @body.bytesize unless unbounded_body?
super(@body)
end
@ -38,7 +52,11 @@ module HTTPX
body = stream(@body)
if body.respond_to?(:read)
::IO.copy_stream(body, ProcIO.new(block))
while (chunk = body.read(16_384))
block.call(chunk)
end
# TODO: use copy_stream once bug is resolved: https://bugs.ruby-lang.org/issues/21131
# IO.copy_stream(body, ProcIO.new(block))
elsif body.respond_to?(:each)
body.each(&block)
else
@ -46,6 +64,10 @@ module HTTPX
end
end
def close
@body.close if @body.respond_to?(:close)
end
# if the +@body+ is rewindable, it rewinds it.
def rewind
return if empty?
@ -94,39 +116,25 @@ module HTTPX
# :nocov:
def inspect
"#<HTTPX::Request::Body:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"#{unbounded_body? ? "stream" : "@bytesize=#{bytesize}"}>"
end
# :nocov:
private
# wraps the given body with the appropriate encoder.
#
# ..., json: { foo: "bar" }) #=> json encoder
# ..., form: { foo: "bar" }) #=> form urlencoded encoder
# ..., form: { foo: Pathname.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { foo: File.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { body: "bla" }) #=> raw data encoder
def initialize_body(options)
@body = if options.body
Transcoder::Body.encode(options.body)
elsif options.form
Transcoder::Form.encode(options.form)
elsif options.json
Transcoder::JSON.encode(options.json)
elsif options.xml
Transcoder::Xml.encode(options.xml)
end
return unless @body && options.compress_request_body && @headers.key?("content-encoding")
@headers.get("content-encoding").each do |encoding|
@body = self.class.initialize_deflater_body(@body, encoding)
end
end
class << self
def initialize_body(params)
if (body = params.delete(:body))
# @type var body: bodyIO
Transcoder::Body.encode(body)
elsif (form = params.delete(:form))
# @type var form: Transcoder::urlencoded_input
Transcoder::Form.encode(form)
elsif (json = params.delete(:json))
# @type var json: _ToJson
Transcoder::JSON.encode(json)
end
end
# returns the +body+ wrapped with the correct deflater according to the given +encoding+.
def initialize_deflater_body(body, encoding)
case encoding
@ -142,17 +150,4 @@ module HTTPX
end
end
end
# Wrapper yielder which can be used with functions which expect an IO writer.
class ProcIO
def initialize(block)
@block = block
end
# Implementation of the IO write protocol, which yields the given chunk to +@block+.
def write(data)
@block.call(data.dup)
data.bytesize
end
end
end
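The hunk above replaces `IO.copy_stream` with a manual 16 KiB read loop to work around the linked ruby bug (#21131). The loop in isolation, using `StringIO` as a stand-in for the request body IO:

```ruby
require "stringio"

# Manual chunked copy, as in the diff's copy_stream workaround:
# read fixed-size chunks until EOF and hand each one to the block.
def each_chunk(io, chunk_size = 16_384)
  while (chunk = io.read(chunk_size))
    yield chunk
  end
end

chunks = []
each_chunk(StringIO.new("a" * 40_000)) { |c| chunks << c }
chunks.map(&:bytesize) # => [16384, 16384, 7232]
```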

View File

@ -53,8 +53,8 @@ module HTTPX
def cached_lookup(hostname)
now = Utils.now
@lookup_mutex.synchronize do
lookup(hostname, now)
lookup_synchronize do |lookups|
lookup(hostname, lookups, now)
end
end
@ -63,37 +63,49 @@ module HTTPX
entries.each do |entry|
entry["TTL"] += now
end
@lookup_mutex.synchronize do
lookup_synchronize do |lookups|
case family
when Socket::AF_INET6
@lookups[hostname].concat(entries)
lookups[hostname].concat(entries)
when Socket::AF_INET
@lookups[hostname].unshift(*entries)
lookups[hostname].unshift(*entries)
end
entries.each do |entry|
next unless entry["name"] != hostname
case family
when Socket::AF_INET6
@lookups[entry["name"]] << entry
lookups[entry["name"]] << entry
when Socket::AF_INET
@lookups[entry["name"]].unshift(entry)
lookups[entry["name"]].unshift(entry)
end
end
end
end
# do not use directly!
def lookup(hostname, ttl)
return unless @lookups.key?(hostname)
def cached_lookup_evict(hostname, ip)
ip = ip.to_s
entries = @lookups[hostname] = @lookups[hostname].select do |address|
lookup_synchronize do |lookups|
entries = lookups[hostname]
return unless entries
entries.delete_if { |entry| entry["data"] == ip }
end
end
# do not use directly!
def lookup(hostname, lookups, ttl)
return unless lookups.key?(hostname)
entries = lookups[hostname] = lookups[hostname].select do |address|
address["TTL"] > ttl
end
ips = entries.flat_map do |address|
if address.key?("alias")
lookup(address["alias"], ttl)
if (als = address["alias"])
lookup(als, lookups, ttl)
else
IPAddr.new(address["data"])
end
@ -103,12 +115,11 @@ module HTTPX
end
def generate_id
@identifier_mutex.synchronize { @identifier = (@identifier + 1) & 0xFFFF }
id_synchronize { @identifier = (@identifier + 1) & 0xFFFF }
end
def encode_dns_query(hostname, type: Resolv::DNS::Resource::IN::A, message_id: generate_id)
Resolv::DNS::Message.new.tap do |query|
query.id = message_id
Resolv::DNS::Message.new(message_id).tap do |query|
query.rd = 1
query.add_question(hostname, type)
end.encode
@ -150,5 +161,13 @@ module HTTPX
[:ok, addresses]
end
def lookup_synchronize
@lookup_mutex.synchronize { yield(@lookups) }
end
def id_synchronize(&block)
@identifier_mutex.synchronize(&block)
end
end
end

View File

@ -2,11 +2,14 @@
require "resolv"
require "uri"
require "cgi"
require "forwardable"
require "httpx/base64"
module HTTPX
# Implementation of a DoH name resolver (https://www.youtube.com/watch?v=unMXvnY2FNM).
# It wraps an HTTPX::Connection object which integrates with the main session in the
# same manner as other performed HTTP requests.
#
class Resolver::HTTPS < Resolver::Resolver
extend Forwardable
using URIExtensions
@ -27,14 +30,13 @@ module HTTPX
use_get: false,
}.freeze
def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate
def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate, :inflight?, :handle_socket_timeout
def initialize(_, options)
super
@resolver_options = DEFAULTS.merge(@options.resolver_options)
@queries = {}
@requests = {}
@connections = []
@uri = URI(@resolver_options[:uri])
@uri_addresses = nil
@resolver = Resolv::DNS.new
@ -43,7 +45,7 @@ module HTTPX
end
def <<(connection)
return if @uri.origin == connection.origin.to_s
return if @uri.origin == connection.peer.to_s
@uri_addresses ||= HTTPX::Resolver.nolookup_resolve(@uri.host) || @resolver.getaddresses(@uri.host)
@ -66,28 +68,29 @@ module HTTPX
end
def resolver_connection
@resolver_connection ||= @pool.find_connection(@uri, @options) || begin
@building_connection = true
connection = @options.connection_class.new(@uri, @options.merge(ssl: { alpn_protocols: %w[h2] }))
@pool.init_connection(connection, @options)
# only explicity emit addresses if connection didn't pre-resolve, i.e. it's not an IP.
emit_addresses(connection, @family, @uri_addresses) unless connection.addresses
@building_connection = false
connection
# TODO: leaks connection object into the pool
@resolver_connection ||= @current_session.find_connection(@uri, @current_selector,
@options.merge(ssl: { alpn_protocols: %w[h2] })).tap do |conn|
emit_addresses(conn, @family, @uri_addresses) unless conn.addresses
end
end
private
def resolve(connection = @connections.first, hostname = nil)
return if @building_connection
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.first
return unless connection
hostname ||= @queries.key(connection)
if hostname.nil?
hostname = connection.origin.host
log { "resolver: resolve IDN #{connection.origin.non_ascii_hostname} as #{hostname}" } if connection.origin.non_ascii_hostname
hostname = connection.peer.host
log do
"resolver #{FAMILY_TYPES[@record_type]}: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}"
end if connection.peer.non_ascii_hostname
hostname = @resolver.generate_candidates(hostname).each do |name|
@queries[name.to_s] = connection
@ -95,7 +98,7 @@ module HTTPX
else
@queries[hostname] = connection
end
log { "resolver: query #{FAMILY_TYPES[RECORD_TYPES[@family]]} for #{hostname}" }
log { "resolver #{FAMILY_TYPES[@record_type]}: query for #{hostname}" }
begin
request = build_request(hostname)
@ -106,7 +109,7 @@ module HTTPX
@connections << connection
rescue ResolveError, Resolv::DNS::EncodeError => e
reset_hostname(hostname)
emit_resolve_error(connection, connection.origin.host, e)
emit_resolve_error(connection, connection.peer.host, e)
end
end
@ -115,7 +118,7 @@ module HTTPX
rescue StandardError => e
hostname = @requests.delete(request)
connection = reset_hostname(hostname)
emit_resolve_error(connection, connection.origin.host, e)
emit_resolve_error(connection, connection.peer.host, e)
else
# @type var response: HTTPX::Response
parse(request, response)
@ -154,7 +157,7 @@ module HTTPX
when :decode_error
host = @requests.delete(request)
connection = reset_hostname(host)
emit_resolve_error(connection, connection.origin.host, result)
emit_resolve_error(connection, connection.peer.host, result)
end
end
@ -174,7 +177,7 @@ module HTTPX
alias_address = answers[address["alias"]]
if alias_address.nil?
reset_hostname(address["name"])
if catch(:coalesced) { early_resolve(connection, hostname: address["alias"]) }
if early_resolve(connection, hostname: address["alias"])
@connections.delete(connection)
else
resolve(connection, address["alias"])
@ -199,7 +202,7 @@ module HTTPX
@queries.delete_if { |_, conn| connection == conn }
Resolver.cached_lookup_set(hostname, @family, addresses) if @resolver_options[:cache]
emit_addresses(connection, @family, addresses.map { |addr| addr["data"] })
catch(:coalesced) { emit_addresses(connection, @family, addresses.map { |addr| addr["data"] }) }
end
end
return if @connections.empty?
@ -219,7 +222,7 @@ module HTTPX
uri.query = URI.encode_www_form(params)
request = rklass.new("GET", uri, @options)
else
request = rklass.new("POST", uri, @options.merge(body: [payload]))
request = rklass.new("POST", uri, @options, body: [payload])
request.headers["content-type"] = "application/dns-message"
end
request.headers["accept"] = "application/dns-message"

View File

@ -8,27 +8,49 @@ module HTTPX
include Callbacks
using ArrayExtensions::FilterMap
attr_reader :resolvers
attr_reader :resolvers, :options
def initialize(resolver_type, options)
@current_selector = nil
@current_session = nil
@options = options
@resolver_options = @options.resolver_options
@resolvers = options.ip_families.map do |ip_family|
resolver = resolver_type.new(ip_family, options)
resolver.on(:resolve, &method(:on_resolver_connection))
resolver.on(:error, &method(:on_resolver_error))
resolver.on(:close) { on_resolver_close(resolver) }
resolver.multi = self
resolver
end
@errors = Hash.new { |hs, k| hs[k] = [] }
end
def current_selector=(s)
@current_selector = s
@resolvers.each { |r| r.__send__(__method__, s) }
end
def current_session=(s)
@current_session = s
@resolvers.each { |r| r.__send__(__method__, s) }
end
def log(*args, **kwargs, &blk)
@resolvers.each { |r| r.__send__(__method__, *args, **kwargs, &blk) }
end
def closed?
@resolvers.all?(&:closed?)
end
def empty?
@resolvers.all?(&:empty?)
end
def inflight?
@resolvers.any?(&:inflight?)
end
def timeout
@resolvers.filter_map(&:timeout).min
end
@ -42,10 +64,11 @@ module HTTPX
end
def early_resolve(connection)
hostname = connection.origin.host
hostname = connection.peer.host
addresses = @resolver_options[:cache] && (connection.addresses || HTTPX::Resolver.nolookup_resolve(hostname))
return unless addresses
return false unless addresses
resolved = false
addresses.group_by(&:family).sort { |(f1, _), (f2, _)| f2 <=> f1 }.each do |family, addrs|
# try to match the resolver by family. However, there are cases where that's not possible, as when
# the system does not have IPv6 connectivity, but it does support IPv6 via loopback/link-local.
@ -55,21 +78,20 @@ module HTTPX
# it does not matter which resolver it is, as early-resolve code is shared.
resolver.emit_addresses(connection, family, addrs, true)
resolved = true
end
resolved
end
private
def lazy_resolve(connection)
@resolvers.each do |resolver|
resolver << @current_session.try_clone_connection(connection, @current_selector, resolver.family)
next if resolver.empty?
def on_resolver_connection(connection)
emit(:resolve, connection)
end
def on_resolver_error(connection, error)
emit(:error, connection, error)
end
def on_resolver_close(resolver)
emit(:close, resolver)
@current_session.select_resolver(resolver, @current_selector)
end
end
end
end
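The `early_resolve` path above groups cached addresses by family and sorts the families in descending order, so IPv6 is tried first (`AF_INET6` is numerically greater than `AF_INET` on common platforms). In isolation:

```ruby
require "ipaddr"
require "socket"

addresses = [IPAddr.new("127.0.0.1"), IPAddr.new("::1"), IPAddr.new("192.0.2.1")]

# group_by(&:family) plus a descending sort puts the AF_INET6 group
# before AF_INET, as in the early_resolve hunk above.
grouped = addresses.group_by(&:family).sort { |(f1, _), (f2, _)| f2 <=> f1 }

grouped.map { |family, addrs| [family, addrs.map(&:to_s)] }
# first group holds the IPv6 addresses (here just "::1")
```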

View File

@ -4,6 +4,9 @@ require "forwardable"
require "resolv"
module HTTPX
# Implements a pure ruby name resolver, which abides by the Selectable API.
# It delegates DNS payload encoding/decoding to the +resolv+ stdlib gem.
#
class Resolver::Native < Resolver::Resolver
extend Forwardable
using URIExtensions
@ -34,7 +37,7 @@ module HTTPX
@search = Array(@resolver_options[:search]).map { |srch| srch.scan(/[^.]+/) }
@_timeouts = Array(@resolver_options[:timeouts])
@timeouts = Hash.new { |timeouts, host| timeouts[host] = @_timeouts.dup }
@connections = []
@name = nil
@queries = {}
@read_buffer = "".b
@write_buffer = Buffer.new(@resolver_options[:packet_size])
@ -45,6 +48,10 @@ module HTTPX
transition(:closed)
end
def terminate
emit(:close, self)
end
def closed?
@state == :closed
end
@ -58,19 +65,6 @@ module HTTPX
when :open
consume
end
nil
rescue Errno::EHOSTUNREACH => e
@ns_index += 1
nameserver = @nameserver
if nameserver && @ns_index < nameserver.size
log { "resolver: failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})" }
transition(:idle)
@timeouts.clear
else
handle_error(e)
end
rescue NativeResolveError => e
handle_error(e)
end
def interests
@ -105,9 +99,7 @@ module HTTPX
@timeouts.values_at(*hosts).reject(&:empty?).map(&:first).min
end
def handle_socket_timeout(interval)
do_retry(interval)
end
def handle_socket_timeout(interval); end
private
@ -120,51 +112,89 @@ module HTTPX
end
def consume
dread if calculate_interests == :r
do_retry
dwrite if calculate_interests == :w
loop do
dread if calculate_interests == :r
break unless calculate_interests == :w
# do_retry
dwrite
break unless calculate_interests == :r
end
rescue Errno::EHOSTUNREACH => e
@ns_index += 1
nameserver = @nameserver
if nameserver && @ns_index < nameserver.size
log { "resolver #{FAMILY_TYPES[@record_type]}: failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})" }
transition(:idle)
@timeouts.clear
retry
else
handle_error(e)
emit(:close, self)
end
rescue NativeResolveError => e
handle_error(e)
close_or_resolve
retry unless closed?
end
def do_retry(loop_time = nil)
return if @queries.empty? || !@start_timeout
def schedule_retry
h = @name
loop_time ||= Utils.elapsed_time(@start_timeout)
return unless h
query = @queries.first
connection = @queries[h]
return unless query
timeouts = @timeouts[h]
timeout = timeouts.shift
h, connection = query
host = connection.origin.host
timeout = (@timeouts[host][0] -= loop_time)
@timer = @current_selector.after(timeout) do
next unless @connections.include?(connection)
return unless timeout <= 0
do_retry(h, connection, timeout)
end
end
@timeouts[host].shift
def do_retry(h, connection, interval)
timeouts = @timeouts[h]
if !@timeouts[host].empty?
log { "resolver: timeout after #{timeout}s, retry(#{@timeouts[host].first}) #{host}..." }
if !timeouts.empty?
log { "resolver #{FAMILY_TYPES[@record_type]}: timeout after #{interval}s, retry (with #{timeouts.first}s) #{h}..." }
# must downgrade to tcp AND retry on same host as last
downgrade_socket
resolve(connection, h)
elsif @ns_index + 1 < @nameserver.size
# try on the next nameserver
@ns_index += 1
log { "resolver: failed resolving #{host} on nameserver #{@nameserver[@ns_index - 1]} (timeout error)" }
log do
"resolver #{FAMILY_TYPES[@record_type]}: failed resolving #{h} on nameserver #{@nameserver[@ns_index - 1]} (timeout error)"
end
transition(:idle)
@timeouts.clear
resolve(connection, h)
else
@timeouts.delete(host)
@timeouts.delete(h)
reset_hostname(h, reset_candidates: false)
return unless @queries.empty?
unless @queries.empty?
resolve(connection)
return
end
@connections.delete(connection)
host = connection.peer.host
# This loop_time passed to the exception is bogus. Ideally we would pass the total
# resolve timeout, including from the previous retries.
raise ResolveTimeoutError.new(loop_time, "Timed out while resolving #{connection.origin.host}")
ex = ResolveTimeoutError.new(interval, "Timed out while resolving #{host}")
ex.set_backtrace(ex ? ex.backtrace : caller)
emit_resolve_error(connection, host, ex)
close_or_resolve
end
end
@ -213,7 +243,7 @@ module HTTPX
parse(@read_buffer)
end
return if @state == :closed
return if @state == :closed || !@write_buffer.empty?
end
end
@ -231,11 +261,15 @@ module HTTPX
return unless siz.positive?
schedule_retry if @write_buffer.empty?
return if @state == :closed
end
end
def parse(buffer)
@timer.cancel
code, result = Resolver.decode_dns_answer(buffer)
case code
@ -246,12 +280,17 @@ module HTTPX
hostname, connection = @queries.first
reset_hostname(hostname, reset_candidates: false)
unless @queries.value?(connection)
@connections.delete(connection)
raise NativeResolveError.new(connection, connection.origin.host, "name or service not known")
end
other_candidate, _ = @queries.find { |_, conn| conn == connection }
resolve
if other_candidate
resolve(connection, other_candidate)
else
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.peer.host, "name or service not known")
ex.set_backtrace(ex ? ex.backtrace : caller)
emit_resolve_error(connection, connection.peer.host, ex)
close_or_resolve
end
when :message_truncated
# TODO: what to do if it's already tcp??
return if @socket_type == :tcp
@ -265,13 +304,13 @@ module HTTPX
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.origin.host, "unknown DNS error (error code #{result})")
ex = NativeResolveError.new(connection, connection.peer.host, "unknown DNS error (error code #{result})")
raise ex
when :decode_error
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.origin.host, result.message)
ex = NativeResolveError.new(connection, connection.peer.host, result.message)
ex.set_backtrace(result.backtrace)
raise ex
end
@ -283,7 +322,7 @@ module HTTPX
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
raise NativeResolveError.new(connection, connection.origin.host)
raise NativeResolveError.new(connection, connection.peer.host)
else
address = addresses.first
name = address["name"]
@ -306,12 +345,14 @@ module HTTPX
connection = @queries.delete(name)
end
if address.key?("alias") # CNAME
hostname_alias = address["alias"]
# clean up intermediate queries
@timeouts.delete(name) unless connection.origin.host == name
alias_addresses, addresses = addresses.partition { |addr| addr.key?("alias") }
if catch(:coalesced) { early_resolve(connection, hostname: hostname_alias) }
if addresses.empty? && !alias_addresses.empty? # CNAME
hostname_alias = alias_addresses.first["alias"]
# clean up intermediate queries
@timeouts.delete(name) unless connection.peer.host == name
if early_resolve(connection, hostname: hostname_alias)
@connections.delete(connection)
else
if @socket_type == :tcp
@ -320,24 +361,26 @@ module HTTPX
transition(:idle)
transition(:open)
end
log { "resolver: ALIAS #{hostname_alias} for #{name}" }
log { "resolver #{FAMILY_TYPES[@record_type]}: ALIAS #{hostname_alias} for #{name}" }
resolve(connection, hostname_alias)
return
end
else
reset_hostname(name, connection: connection)
@timeouts.delete(connection.origin.host)
@timeouts.delete(connection.peer.host)
@connections.delete(connection)
Resolver.cached_lookup_set(connection.origin.host, @family, addresses) if @resolver_options[:cache]
emit_addresses(connection, @family, addresses.map { |addr| addr["data"] })
Resolver.cached_lookup_set(connection.peer.host, @family, addresses) if @resolver_options[:cache]
catch(:coalesced) { emit_addresses(connection, @family, addresses.map { |addr| addr["data"] }) }
end
end
return emit(:close) if @connections.empty?
resolve
close_or_resolve
end
def resolve(connection = @connections.first, hostname = nil)
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.find { |c| !@queries.value?(c) }
raise Error, "no URI to resolve" unless connection
return unless @write_buffer.empty?
@ -345,8 +388,10 @@ module HTTPX
hostname ||= @queries.key(connection)
if hostname.nil?
hostname = connection.origin.host
log { "resolver: resolve IDN #{connection.origin.non_ascii_hostname} as #{hostname}" } if connection.origin.non_ascii_hostname
hostname = connection.peer.host
if connection.peer.non_ascii_hostname
log { "resolver #{FAMILY_TYPES[@record_type]}: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}" }
end
hostname = generate_candidates(hostname).each do |name|
@queries[name] = connection
@ -354,11 +399,17 @@ module HTTPX
else
@queries[hostname] = connection
end
log { "resolver: query #{@record_type.name.split("::").last} for #{hostname}" }
@name = hostname
log { "resolver #{FAMILY_TYPES[@record_type]}: query for #{hostname}" }
begin
@write_buffer << encode_dns_query(hostname)
rescue Resolv::DNS::EncodeError => e
reset_hostname(hostname, connection: connection)
@connections.delete(connection)
emit_resolve_error(connection, hostname, e)
close_or_resolve
end
end
@ -388,10 +439,10 @@ module HTTPX
case @socket_type
when :udp
log { "resolver: server: udp://#{ip}:#{port}..." }
log { "resolver #{FAMILY_TYPES[@record_type]}: server: udp://#{ip}:#{port}..." }
UDP.new(ip, port, @options)
when :tcp
log { "resolver: server: tcp://#{ip}:#{port}..." }
log { "resolver #{FAMILY_TYPES[@record_type]}: server: tcp://#{ip}:#{port}..." }
origin = URI("tcp://#{ip}:#{port}")
TCP.new(origin, [ip], @options)
end
@ -429,23 +480,41 @@ module HTTPX
@write_buffer.clear
@read_buffer.clear
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError,
IOError,
ConnectTimeoutError => e
# these errors may happen during TCP handshake
# treat them as resolve errors.
handle_error(e)
emit(:close, self)
end
def handle_error(error)
if error.respond_to?(:connection) &&
error.respond_to?(:host)
reset_hostname(error.host, connection: error.connection)
@connections.delete(error.connection)
emit_resolve_error(error.connection, error.host, error)
else
@queries.each do |host, connection|
reset_hostname(host, connection: connection)
@connections.delete(connection)
emit_resolve_error(connection, host, error)
end
while (connection = @connections.shift)
emit_resolve_error(connection, connection.peer.host, error)
end
end
end
def reset_hostname(hostname, connection: @queries.delete(hostname), reset_candidates: true)
@timeouts.delete(hostname)
return unless connection && reset_candidates
@ -455,5 +524,16 @@ module HTTPX
# reset timeouts
@timeouts.delete_if { |h, _| candidates.include?(h) }
end
def close_or_resolve
# drop already closed connections
@connections.shift until @connections.empty? || @connections.first.state != :closed
if (@connections - @queries.values).empty?
emit(:close, self)
else
resolve
end
end
end
end
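The retry scheduling above draws from a per-host list of intervals (`@timeouts`), shifting one interval per attempt and falling through to the next nameserver once the list is exhausted. The underlying data structure is just a Hash with a default block (interval values here are illustrative):

```ruby
# Per-host retry intervals, as in the native resolver: each new host
# gets its own copy of the configured timeout sequence.
RETRY_TIMEOUTS = [1, 2, 5].freeze

timeouts = Hash.new { |hash, host| hash[host] = RETRY_TIMEOUTS.dup }

first  = timeouts["example.com"].shift # 1 (first attempt)
second = timeouts["example.com"].shift # 2 (first retry)
timeouts["other.org"]                  # fresh copy: [1, 2, 5]
```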

View File

@ -4,6 +4,9 @@ require "resolv"
require "ipaddr"
module HTTPX
# Base class for all internal internet name resolvers. It handles basic blocks
# from the Selectable API.
#
class Resolver::Resolver
include Callbacks
include Loggable
@ -26,14 +29,27 @@ module HTTPX
end
end
attr_reader :family
attr_reader :family, :options
attr_writer :pool
attr_writer :current_selector, :current_session
attr_accessor :multi
def initialize(family, options)
@family = family
@record_type = RECORD_TYPES[family]
@options = Options.new(options)
@options = options
@connections = []
set_resolver_callbacks
end
def each_connection(&block)
return enum_for(__method__) unless block
return unless @connections
@connections.each(&block)
end
def close; end
@ -48,6 +64,10 @@ module HTTPX
true
end
def inflight?
false
end
def emit_addresses(connection, family, addresses, early_resolve = false)
addresses.map! do |address|
address.is_a?(IPAddr) ? address : IPAddr.new(address.to_s)
@ -56,17 +76,22 @@ module HTTPX
# double emission check, but allow early resolution to work
return if !early_resolve && connection.addresses && !addresses.intersect?(connection.addresses)
log { "resolver: answer #{FAMILY_TYPES[RECORD_TYPES[family]]} #{connection.origin.host}: #{addresses.inspect}" }
if @pool && # if triggered by early resolve, pool may not be here yet
!connection.io &&
connection.options.ip_families.size > 1 &&
family == Socket::AF_INET &&
addresses.first.to_s != connection.origin.host.to_s
log { "resolver: A response, applying resolution delay..." }
@pool.after(0.05) do
unless connection.state == :closed ||
# double emission check
(connection.addresses && addresses.intersect?(connection.addresses))
log do
"resolver #{FAMILY_TYPES[RECORD_TYPES[family]]}: " \
"answer #{connection.peer.host}: #{addresses.inspect} (early resolve: #{early_resolve})"
end
if !early_resolve && # do not apply resolution delay for non-dns name resolution
@current_selector && # just in case...
family == Socket::AF_INET && # resolution delay only applies to IPv4
!connection.io && # connection already has addresses and initiated/ended handshake
connection.options.ip_families.size > 1 && # no need to delay if not supporting dual stack IP
addresses.first.to_s != connection.peer.host.to_s # connection URL host is already the IP (early resolve included perhaps?)
log { "resolver #{FAMILY_TYPES[RECORD_TYPES[family]]}: applying resolution delay..." }
@current_selector.after(0.05) do
# double emission check
unless connection.addresses && addresses.intersect?(connection.addresses)
emit_resolved_connection(connection, addresses, early_resolve)
end
end
@ -81,6 +106,8 @@ module HTTPX
begin
connection.addresses = addresses
return if connection.state == :closed
emit(:resolve, connection)
rescue StandardError => e
if early_resolve
@ -92,20 +119,22 @@ module HTTPX
end
end
def early_resolve(connection, hostname: connection.origin.host)
def early_resolve(connection, hostname: connection.peer.host)
addresses = @resolver_options[:cache] && (connection.addresses || HTTPX::Resolver.nolookup_resolve(hostname))
return unless addresses
return false unless addresses
addresses = addresses.select { |addr| addr.family == @family }
return if addresses.empty?
return false if addresses.empty?
emit_addresses(connection, @family, addresses, true)
true
end
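`early_resolve` now reports whether it short-circuited name resolution. Its contract can be sketched in isolation; the cache hash below stands in for `HTTPX::Resolver.nolookup_resolve` and the emission step is elided:

```ruby
require "ipaddr"
require "socket"

# Stand-in for the early_resolve contract above: false when nothing usable
# is cached for the resolver's address family, true after emitting.
def early_resolve(cache, host, family)
  addresses = cache[host]
  return false unless addresses

  addresses = addresses.select { |addr| addr.family == family }
  return false if addresses.empty?

  # (the real method emits the addresses to the connection here)
  true
end

cache = { "example.com" => [IPAddr.new("::1"), IPAddr.new("127.0.0.1")] }
early_resolve(cache, "example.com", Socket::AF_INET)  # => true
early_resolve(cache, "missing.host", Socket::AF_INET) # => false
```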
def emit_resolve_error(connection, hostname = connection.origin.host, ex = nil)
emit(:error, connection, resolve_error(hostname, ex))
def emit_resolve_error(connection, hostname = connection.peer.host, ex = nil)
emit_connection_error(connection, resolve_error(hostname, ex))
end
def resolve_error(hostname, ex = nil)
@ -116,5 +145,25 @@ module HTTPX
error.set_backtrace(ex ? ex.backtrace : caller)
error
end
def set_resolver_callbacks
on(:resolve, &method(:resolve_connection))
on(:error, &method(:emit_connection_error))
on(:close, &method(:close_resolver))
end
def resolve_connection(connection)
@current_session.__send__(:on_resolver_connection, connection, @current_selector)
end
def emit_connection_error(connection, error)
return connection.handle_connect_error(error) if connection.connecting?
connection.emit(:error, error)
end
def close_resolver(resolver)
@current_session.__send__(:on_resolver_close, resolver, @current_selector)
end
end
end


@ -1,12 +1,19 @@
# frozen_string_literal: true
require "forwardable"
require "resolv"
module HTTPX
# Implementation of a synchronous name resolver which relies on the system resolver,
# which is libc's getaddrinfo function (abstracted in ruby via Addrinfo.getaddrinfo).
#
# Its main advantage is relying on the reference implementation for name resolution
# across most/all OSs which deploy ruby (it's what TCPSocket also uses), its main
# disadvantage is the inability to set timeouts / check the socket for readiness events,
# hence why it relies on the Timeout module, which poses a lot of problems for
# the selector loop, especially when the network is unstable.
#
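The blocking call the comment refers to fits in two lines; nothing in its signature accepts a timeout, which is what forces the thread-based workaround (plain stdlib usage, not httpx code):

```ruby
require "socket"

# Addrinfo.getaddrinfo blocks until libc's getaddrinfo returns; it exposes
# no per-call timeout, hence the Timeout/thread workaround described above.
addrs = Addrinfo.getaddrinfo("localhost", 80, Socket::AF_UNSPEC, Socket::SOCK_STREAM)
ips = addrs.map(&:ip_address).uniq
# ips typically contains "127.0.0.1" and/or "::1"
```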
class Resolver::System < Resolver::Resolver
using URIExtensions
extend Forwardable
RESOLV_ERRORS = [Resolv::ResolvError,
Resolv::DNS::Requester::RequestError,
@ -24,17 +31,14 @@ module HTTPX
attr_reader :state
def_delegator :@connections, :empty?
def initialize(options)
super(nil, options)
super(0, options)
@resolver_options = @options.resolver_options
resolv_options = @resolver_options.dup
timeouts = resolv_options.delete(:timeouts) || Resolver::RESOLVE_TIMEOUT
@_timeouts = Array(timeouts)
@timeouts = Hash.new { |tims, host| tims[host] = @_timeouts.dup }
resolv_options.delete(:cache)
@connections = []
@queries = []
@ips = []
@pipe_mutex = Thread::Mutex.new
@ -47,8 +51,12 @@ module HTTPX
yield self
end
def connections
EMPTY
def multi
self
end
def empty?
true
end
def close
@ -84,7 +92,7 @@ module HTTPX
return unless connection
@timeouts[connection.origin.host].first
@timeouts[connection.peer.host].first
end
def <<(connection)
@ -92,10 +100,22 @@ module HTTPX
resolve
end
def early_resolve(connection, **)
self << connection
true
end
def handle_socket_timeout(interval)
error = HTTPX::ResolveTimeoutError.new(interval, "timed out while waiting on select")
error.set_backtrace(caller)
on_error(error)
@queries.each do |host, connection|
@connections.delete(connection)
emit_resolve_error(connection, host, error)
end
while (connection = @connections.shift)
emit_resolve_error(connection, connection.peer.host, error)
end
end
private
@ -107,7 +127,7 @@ module HTTPX
when :open
return unless @state == :idle
@pipe_read, @pipe_write = ::IO.pipe
@pipe_read, @pipe_write = IO.pipe
when :closed
return unless @state == :open
@ -120,23 +140,29 @@ module HTTPX
def consume
return if @connections.empty?
while @pipe_read.ready? && (event = @pipe_read.getbyte)
if @pipe_read.wait_readable
event = @pipe_read.getbyte
case event
when DONE
*pair, addrs = @pipe_mutex.synchronize { @ips.pop }
@queries.delete(pair)
if pair
@queries.delete(pair)
family, connection = pair
@connections.delete(connection)
family, connection = pair
emit_addresses(connection, family, addrs)
catch(:coalesced) { emit_addresses(connection, family, addrs) }
end
when ERROR
*pair, error = @pipe_mutex.synchronize { @ips.pop }
@queries.delete(pair)
if pair && error
@queries.delete(pair)
@connections.delete(connection)
family, connection = pair
emit_resolve_error(connection, connection.origin.host, error)
_, connection = pair
emit_resolve_error(connection, connection.peer.host, error)
end
end
@connections.delete(connection) if @queries.empty?
end
return emit(:close, self) if @connections.empty?
@ -144,13 +170,20 @@ module HTTPX
resolve
end
def resolve(connection = @connections.first)
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.first
raise Error, "no URI to resolve" unless connection
return unless @queries.empty?
hostname = connection.origin.host
hostname ||= connection.peer.host
scheme = connection.origin.scheme
log { "resolver: resolve IDN #{connection.origin.non_ascii_hostname} as #{hostname}" } if connection.origin.non_ascii_hostname
log do
"resolver: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}"
end if connection.peer.non_ascii_hostname
transition(:open)
@ -164,7 +197,7 @@ module HTTPX
def async_resolve(connection, hostname, scheme)
families = connection.options.ip_families
log { "resolver: query for #{hostname}" }
timeouts = @timeouts[connection.origin.host]
timeouts = @timeouts[connection.peer.host]
resolve_timeout = timeouts.first
Thread.start do
@ -210,5 +243,11 @@ module HTTPX
def __addrinfo_resolve(host, scheme)
Addrinfo.getaddrinfo(host, scheme, Socket::AF_UNSPEC, Socket::SOCK_STREAM)
end
def emit_connection_error(_, error)
throw(:resolve_error, error)
end
def close_resolver(resolver); end
end
end


@ -52,9 +52,6 @@ module HTTPX
# copies the response body to a different location.
def_delegator :@body, :copy_to
# closes the body.
def_delegator :@body, :close
# the corresponding request uri.
def_delegator :@request, :uri
@ -74,6 +71,20 @@ module HTTPX
@content_type = nil
end
# dupped initialization
def initialize_dup(orig)
super
# if a response gets dupped, the body handle must also get dupped to prevent
# two responses from using the same file handle to read.
@body = orig.body.dup
end
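The comment's rationale is easy to reproduce with a bare StringIO: two references to one handle share a read position, while a re-opened handle does not (standalone sketch, unrelated to httpx internals):

```ruby
require "stringio"

# Two names for one handle: reading through either moves the shared cursor.
shared = StringIO.new("payload")
alias_ref = shared
shared.read(3)
alias_ref.pos # => 3 (the "copy" moved too)

# A freshly opened handle over the same content keeps its own cursor.
original = StringIO.new("payload")
fresh = StringIO.new(original.string)
original.read(3)
fresh.pos # => 0
```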
# closes the respective +@request+ and +@body+.
def close
@request.close
@body.close
end
# merges headers defined in +h+ into the response headers.
def merge_headers(h)
@headers = @headers.merge(h)
@ -123,7 +134,7 @@ module HTTPX
# :nocov:
def inspect
"#<Response:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"HTTP/#{version} " \
"@status=#{@status} " \
"@headers=#{@headers} " \
@ -166,10 +177,12 @@ module HTTPX
decode(Transcoder::Form)
end
# decodes the response payload into a Nokogiri::XML::Node object **if** the payload is valid
# "application/xml" (requires the "nokogiri" gem).
def xml
decode(Transcoder::Xml)
# TODO: remove at next major version.
warn "DEPRECATION WARNING: calling `.#{__method__}` on plain HTTPX responses is deprecated. " \
"Use HTTPX.plugin(:xml) sessions and call `.#{__method__}` in its responses instead."
require "httpx/plugins/xml"
decode(Plugins::XML::Transcoder)
end
private
@ -247,11 +260,11 @@ module HTTPX
# the IP address of the peer server.
def_delegator :@request, :peer_address
def initialize(request, error, options)
def initialize(request, error)
@request = request
@response = request.response if request.response.is_a?(Response)
@error = error
@options = Options.new(options)
@options = request.options
log_exception(@error)
end
@ -262,7 +275,7 @@ module HTTPX
# closes the error resources.
def close
@response.close if @response && @response.respond_to?(:close)
@response.close if @response
end
# always true for error responses.
@ -270,6 +283,8 @@ module HTTPX
true
end
def finish!; end
# raises the wrapped exception.
def raise_for_status
raise @error
@ -277,6 +292,8 @@ module HTTPX
# buffers lost chunks to error response
def <<(data)
return unless @response
@response << data
end
end


@ -11,18 +11,32 @@ module HTTPX
# Array of encodings contained in the response "content-encoding" header.
attr_reader :encodings
attr_reader :buffer
protected :buffer
# initialized with the corresponding HTTPX::Response +response+ and HTTPX::Options +options+.
def initialize(response, options)
@response = response
@headers = response.headers
@options = options
@window_size = options.window_size
@encoding = response.content_type.charset || Encoding::BINARY
@encodings = []
@length = 0
@buffer = nil
@reader = nil
@state = :idle
# initialize response encoding
@encoding = if (enc = response.content_type.charset)
begin
Encoding.find(enc)
rescue ArgumentError
Encoding::BINARY
end
else
Encoding::BINARY
end
initialize_inflaters
end
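The charset lookup now degrades to BINARY instead of raising when the `Content-Type` header carries a charset Ruby does not know. A standalone sketch of that fallback (the helper name is illustrative):

```ruby
# Fall back to BINARY for unknown or missing charsets, as in the
# initializer above, instead of letting Encoding.find raise.
def response_encoding(charset)
  return Encoding::BINARY unless charset

  Encoding.find(charset)
rescue ArgumentError # e.g. "charset=utf" sent by a buggy server
  Encoding::BINARY
end

response_encoding("utf-8") # => #<Encoding:UTF-8>
response_encoding("utf")   # => #<Encoding:ASCII-8BIT>
response_encoding(nil)     # => #<Encoding:ASCII-8BIT>
```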
@ -122,7 +136,7 @@ module HTTPX
if dest.respond_to?(:path) && @buffer.respond_to?(:path)
FileUtils.mv(@buffer.path, dest.path)
else
::IO.copy_stream(@buffer, dest)
IO.copy_stream(@buffer, dest)
end
end
@ -137,18 +151,17 @@ module HTTPX
end
def ==(other)
object_id == other.object_id || begin
if other.respond_to?(:read)
_with_same_buffer_pos { FileUtils.compare_stream(@buffer, other) }
else
to_s == other.to_s
end
end
super || case other
when Response::Body
@buffer == other.buffer
else
@buffer = other
end
end
# :nocov:
def inspect
"#<HTTPX::Response::Body:#{object_id} " \
"#<#{self.class}:#{object_id} " \
"@state=#{@state} " \
"@length=#{@length}>"
end
@ -215,19 +228,6 @@ module HTTPX
@state = nextstate
end
def _with_same_buffer_pos # :nodoc:
return yield unless @buffer && @buffer.respond_to?(:pos)
# @type ivar @buffer: StringIO | Tempfile
current_pos = @buffer.pos
@buffer.rewind
begin
yield
ensure
@buffer.pos = current_pos
end
end
class << self
def initialize_inflater_by_encoding(encoding, response, **kwargs) # :nodoc:
case encoding


@ -7,6 +7,9 @@ require "tempfile"
module HTTPX
# wraps and delegates to an internal buffer, which can be a StringIO or a Tempfile.
class Response::Buffer < SimpleDelegator
attr_reader :buffer
protected :buffer
# initializes buffer with the +threshold_size+ over which the payload gets buffered to a tempfile,
# the initial +bytesize+, and the +encoding+.
def initialize(threshold_size:, bytesize: 0, encoding: Encoding::BINARY)
@ -20,7 +23,14 @@ module HTTPX
def initialize_dup(other)
super
@buffer = other.instance_variable_get(:@buffer).dup
# create new descriptor in READ-ONLY mode
@buffer =
case other.buffer
when StringIO
StringIO.new(other.buffer.string, mode: File::RDONLY)
else
other.buffer.class.new(other.buffer.path, encoding: Encoding::BINARY, mode: File::RDONLY)
end
end
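The read-only re-open can be exercised with a plain StringIO: the duplicate gets its own cursor and cannot write through to the original (sketch; mirrors the StringIO branch above):

```ruby
require "stringio"

orig = StringIO.new(+"hello world")
copy = StringIO.new(orig.string, mode: File::RDONLY)

orig.read(5)
copy.pos # => 0, cursor independent of the original's

begin
  copy.write("x")
rescue IOError
  # read-only: writing through the duplicate is rejected
end
copy.read # => "hello world"
```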
# size in bytes of the buffered content.
@ -46,7 +56,7 @@ module HTTPX
end
when Tempfile
rewind
content = _with_same_buffer_pos { @buffer.read }
content = @buffer.read
begin
content.force_encoding(@encoding)
rescue ArgumentError # ex: unknown encoding name - utf
@ -61,6 +71,30 @@ module HTTPX
@buffer.unlink if @buffer.respond_to?(:unlink)
end
def ==(other)
super || begin
return false unless other.is_a?(Response::Buffer)
if @buffer.nil?
other.buffer.nil?
elsif @buffer.respond_to?(:read) &&
other.respond_to?(:read)
buffer_pos = @buffer.pos
other_pos = other.buffer.pos
@buffer.rewind
other.buffer.rewind
begin
FileUtils.compare_stream(@buffer, other.buffer)
ensure
@buffer.pos = buffer_pos
other.buffer.pos = other_pos
end
else
to_s == other.to_s
end
end
end
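The `==` above compares streams from the start while leaving both cursors where they were. That discipline can be sketched as a standalone helper (name is illustrative):

```ruby
require "fileutils"
require "stringio"

# Compare two IO-ish streams byte-for-byte from position 0, restoring each
# stream's cursor afterwards, as the ensure block above does.
def same_content?(a, b)
  a_pos = a.pos
  b_pos = b.pos
  a.rewind
  b.rewind
  begin
    FileUtils.compare_stream(a, b)
  ensure
    a.pos = a_pos
    b.pos = b_pos
  end
end

x = StringIO.new("abc")
y = StringIO.new("abc")
x.read(2) # move x's cursor mid-stream
same_content?(x, y) # => true
x.pos               # => 2 (restored)
```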
private
# initializes the buffer into a StringIO, or turns it into a Tempfile when the threshold
@ -76,21 +110,11 @@ module HTTPX
if aux
aux.rewind
::IO.copy_stream(aux, @buffer)
IO.copy_stream(aux, @buffer)
aux.close
end
__setobj__(@buffer)
end
def _with_same_buffer_pos # :nodoc:
current_pos = @buffer.pos
@buffer.rewind
begin
yield
ensure
@buffer.pos = current_pos
end
end
end
end


@ -2,70 +2,154 @@
require "io/wait"
class HTTPX::Selector
READABLE = %i[rw r].freeze
WRITABLE = %i[rw w].freeze
module HTTPX
class Selector
extend Forwardable
private_constant :READABLE
private_constant :WRITABLE
READABLE = %i[rw r].freeze
WRITABLE = %i[rw w].freeze
def initialize
@selectables = []
end
private_constant :READABLE
private_constant :WRITABLE
# deregisters +io+ from selectables.
def deregister(io)
@selectables.delete(io)
end
def_delegator :@timers, :after
# register +io+.
def register(io)
return if @selectables.include?(io)
def_delegator :@selectables, :empty?
@selectables << io
end
def initialize
@timers = Timers.new
@selectables = []
@is_timer_interval = false
end
private
def each(&blk)
@selectables.each(&blk)
end
def select_many(interval, &block)
selectables, r, w = nil
# first, we group IOs based on interest type. On call to #interests however,
# things might already happen, and new IOs might be registered, so we might
# have to start all over again. We do this until we group all selectables
begin
loop do
begin
r = nil
w = nil
selectables = @selectables
@selectables = []
selectables.delete_if do |io|
interests = io.interests
(r ||= []) << io if READABLE.include?(interests)
(w ||= []) << io if WRITABLE.include?(interests)
io.state == :closed
end
if @selectables.empty?
@selectables = selectables
# do not run event loop if there's nothing to wait on.
# this might happen if connect failed and connection was unregistered.
return if (!r || r.empty?) && (!w || w.empty?) && !selectables.empty?
break
else
@selectables.concat(selectables)
end
rescue StandardError
@selectables = selectables if selectables
raise
def next_tick
catch(:jump_tick) do
timeout = next_timeout
if timeout && timeout.negative?
@timers.fire
throw(:jump_tick)
end
begin
select(timeout) do |c|
c.log(level: 2) { "[#{c.state}] selected#{" after #{timeout} secs" unless timeout.nil?}..." }
c.call
end
@timers.fire
rescue TimeoutError => e
@timers.fire(e)
end
end
rescue StandardError => e
each_connection do |c|
c.emit(:error, e)
end
rescue Exception # rubocop:disable Lint/RescueException
each_connection do |conn|
conn.force_reset
conn.disconnect
end
raise
end
def terminate
# array may change during iteration
selectables = @selectables.reject(&:inflight?)
selectables.delete_if do |sel|
sel.terminate
sel.state == :closed
end
until selectables.empty?
next_tick
selectables &= @selectables
end
end
def find_resolver(options)
res = @selectables.find do |c|
c.is_a?(Resolver::Resolver) && options == c.options
end
res.multi if res
end
def each_connection(&block)
return enum_for(__method__) unless block
@selectables.each do |c|
case c
when Resolver::Resolver
c.each_connection(&block)
when Connection
yield c
end
end
end
def find_connection(request_uri, options)
each_connection.find do |connection|
connection.match?(request_uri, options)
end
end
def find_mergeable_connection(connection)
each_connection.find do |ch|
ch != connection && ch.mergeable?(connection)
end
end
# deregisters +io+ from selectables.
def deregister(io)
@selectables.delete(io)
end
# register +io+.
def register(io)
return if @selectables.include?(io)
@selectables << io
end
private
def select(interval, &block)
# do not cause an infinite loop here.
#
# this may happen if timeout calculation actually triggered an error which causes
# the connections to be reaped (such as the total timeout error) before #select
# gets called.
return if interval.nil? && @selectables.empty?
return select_one(interval, &block) if @selectables.size == 1
select_many(interval, &block)
end
def select_many(interval, &block)
r, w = nil
# first, we group IOs based on interest type. On call to #interests, however,
# things might already have happened, and new IOs might have been registered,
# so we might have to start all over again. We do this until all selectables
# are grouped.
@selectables.delete_if do |io|
interests = io.interests
io.log(level: 2) { "[#{io.state}] registering for select (#{interests})#{" for #{interval} seconds" unless interval.nil?}" }
(r ||= []) << io if READABLE.include?(interests)
(w ||= []) << io if WRITABLE.include?(interests)
io.state == :closed
end
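The sweep above leans on `Array#delete_if` returning true to evict closed selectables in the same pass that buckets the rest by interest. A self-contained sketch with stub selectables:

```ruby
# Stub selectable with just the two attributes the sweep consults.
Sel = Struct.new(:interests, :state)

selectables = [Sel.new(:r, :open), Sel.new(:rw, :open), Sel.new(:w, :closed)]
readable = %i[rw r]
writable = %i[rw w]

r = []
w = []
selectables.delete_if do |io|
  r << io if readable.include?(io.interests)
  w << io if writable.include?(io.interests)
  io.state == :closed # true evicts the entry
end

[r.size, w.size, selectables.size] # => [2, 2, 2]
```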
# TODO: what to do if there are no selectables?
@ -76,63 +160,65 @@ class HTTPX::Selector
[*r, *w].each { |io| io.handle_socket_timeout(interval) }
return
end
rescue IOError, SystemCallError
@selectables.reject!(&:closed?)
retry
if writers
readers.each do |io|
yield io
# so that we don't yield 2 times
writers.delete(io)
end if readers
writers.each(&block)
else
readers.each(&block) if readers
end
end
if writers
readers.each do |io|
yield io
def select_one(interval)
io = @selectables.first
# so that we don't yield 2 times
writers.delete(io)
end if readers
return unless io
writers.each(&block)
else
readers.each(&block) if readers
interests = io.interests
io.log(level: 2) { "[#{io.state}] registering for select (#{interests})#{" for #{interval} seconds" unless interval.nil?}" }
result = case interests
when :r then io.to_io.wait_readable(interval)
when :w then io.to_io.wait_writable(interval)
when :rw then io.to_io.wait(interval, :read_write)
when nil then return
end
unless result || interval.nil?
io.handle_socket_timeout(interval) unless @is_timer_interval
return
end
# raise TimeoutError.new(interval, "timed out while waiting on select")
yield io
# rescue IOError, SystemCallError
# @selectables.reject!(&:closed?)
# raise unless @selectables.empty?
end
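The interest-to-wait-call dispatch in `select_one` can be isolated into a few lines of `io/wait` usage (helper name is illustrative); a falsy return means the interval elapsed:

```ruby
require "io/wait"

# Dispatch on interest, as select_one above does; nil means timeout.
def wait_on(io, interests, interval)
  case interests
  when :r  then io.wait_readable(interval)
  when :w  then io.wait_writable(interval)
  when :rw then io.wait(interval, :read_write)
  end
end

r, w = IO.pipe
wait_on(r, :r, 0) # => nil: nothing buffered yet, 0s poll times out
w.write("x")
wait_on(r, :r, 0) # truthy: the pipe is now readable
```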
def next_timeout
@is_timer_interval = false
timer_interval = @timers.wait_interval
connection_interval = @selectables.filter_map(&:timeout).min
return connection_interval unless timer_interval
if connection_interval.nil? || timer_interval <= connection_interval
@is_timer_interval = true
return timer_interval
end
connection_interval
end
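`next_timeout` picks the sooner of the next timer deadline and the tightest connection timeout, and records which one won so the socket-timeout path can be skipped when a timer is about to fire. A pure-function sketch of that arbitration (the pair return is illustrative; the real method sets an ivar):

```ruby
# Returns [interval, timer_won] given the next timer deadline (or nil) and
# the per-connection timeouts, mirroring next_timeout above.
def next_timeout(timer_interval, connection_intervals)
  connection_interval = connection_intervals.min
  return [connection_interval, false] unless timer_interval

  if connection_interval.nil? || timer_interval <= connection_interval
    [timer_interval, true]
  else
    [connection_interval, false]
  end
end

next_timeout(nil, [5, 3]) # => [3, false]
next_timeout(2, [5, 3])   # => [2, true]
next_timeout(4, [])       # => [4, true]
```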
end
def select_one(interval)
io = @selectables.first
return unless io
interests = io.interests
result = case interests
when :r then io.to_io.wait_readable(interval)
when :w then io.to_io.wait_writable(interval)
when :rw then io.to_io.wait(interval, :read_write)
when nil then return
end
unless result || interval.nil?
io.handle_socket_timeout(interval)
return
end
# raise HTTPX::TimeoutError.new(interval, "timed out while waiting on select")
yield io
rescue IOError, SystemCallError
@selectables.reject!(&:closed?)
raise unless @selectables.empty?
end
def select(interval, &block)
# do not cause an infinite loop here.
#
# this may happen if timeout calculation actually triggered an error which causes
# the connections to be reaped (such as the total timeout error) before #select
# gets called.
return if interval.nil? && @selectables.empty?
return select_one(interval, &block) if @selectables.size == 1
select_many(interval, &block)
end
public :select
end


@ -9,16 +9,17 @@ module HTTPX
include Loggable
include Chainable
EMPTY_HASH = {}.freeze
# initializes the session with a set of +options+, which will be shared by all
# requests sent from it.
#
# When passed a block, it yields itself to it, then closes after the block is evaluated.
def initialize(options = EMPTY_HASH, &blk)
@options = self.class.default_options.merge(options)
@responses = {}
@persistent = @options.persistent
@pool = @options.pool_class.new(@options.pool_options)
@wrapped = false
@closing = false
INSTANCES[self] = self if @persistent && @options.close_on_fork && INSTANCES
wrap(&blk) if blk
end
@ -28,21 +29,54 @@ module HTTPX
# http.get("https://wikipedia.com")
# end # wikipedia connection closes here
def wrap
prev_persistent = @persistent
@persistent = true
pool.wrap do
begin
yield self
ensure
@persistent = prev_persistent
close unless @persistent
prev_wrapped = @wrapped
@wrapped = true
was_initialized = false
current_selector = get_current_selector do
selector = Selector.new
set_current_selector(selector)
was_initialized = true
selector
end
begin
yield self
ensure
unless prev_wrapped
if @persistent
deactivate(current_selector)
else
close(current_selector)
end
end
@wrapped = prev_wrapped
set_current_selector(nil) if was_initialized
end
end
# closes all the active connections from the session
def close(*args)
pool.close(*args)
# closes all the active connections from the session.
#
# when called directly without specifying +selector+, all available connections
# will be picked up from the connection pool and closed. Connections in use
# by other sessions, or by the same session in a different thread, will not be reaped.
def close(selector = Selector.new)
# throw resolvers away from the pool
@pool.reset_resolvers
# preparing to throw away connections
while (connection = @pool.pop_connection)
next if connection.state == :closed
select_connection(connection, selector)
end
begin
@closing = true
selector.terminate
ensure
@closing = false
end
end
# performs one, or multiple, requests; it accepts:
@ -65,10 +99,10 @@ module HTTPX
# resp1, resp2 = session.request(["POST", "https://server.org/a", form: { "foo" => "bar" }], ["GET", "https://server.org/b"])
# resp1, resp2 = session.request("GET", ["https://server.org/a", "https://server.org/b"], headers: { "x-api-token" => "TOKEN" })
#
def request(*args, **options)
def request(*args, **params)
raise ArgumentError, "must perform at least one request" if args.empty?
requests = args.first.is_a?(Request) ? args : build_requests(*args, options)
requests = args.first.is_a?(Request) ? args : build_requests(*args, params)
responses = send_requests(*requests)
return responses.first if responses.size == 1
@ -81,26 +115,108 @@ module HTTPX
#
# req = session.build_request("GET", "https://server.com")
# resp = session.request(req)
def build_request(verb, uri, options = EMPTY_HASH)
rklass = @options.request_class
options = @options.merge(options) unless options.is_a?(Options)
request = rklass.new(verb, uri, options)
def build_request(verb, uri, params = EMPTY_HASH, options = @options)
rklass = options.request_class
request = rklass.new(verb, uri, options, params)
request.persistent = @persistent
set_request_callbacks(request)
request
end
private
# returns the HTTPX::Pool object which manages the networking required to
# perform requests.
def pool
Thread.current[:httpx_connection_pool] ||= Pool.new
def select_connection(connection, selector)
pin_connection(connection, selector)
selector.register(connection)
end
# callback executed when a response for a given request has been received.
def on_response(request, response)
@responses[request] = response
def pin_connection(connection, selector)
connection.current_session = self
connection.current_selector = selector
end
alias_method :select_resolver, :select_connection
def deselect_connection(connection, selector, cloned = false)
connection.log(level: 2) do
"deregistering connection##{connection.object_id}(#{connection.state}) from selector##{selector.object_id}"
end
selector.deregister(connection)
# when connections coalesce
return if connection.state == :idle
return if cloned
return if @closing && connection.state == :closed
connection.log(level: 2) { "check-in connection##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
@pool.checkin_connection(connection)
end
def deselect_resolver(resolver, selector)
resolver.log(level: 2) do
"deregistering resolver##{resolver.object_id}(#{resolver.state}) from selector##{selector.object_id}"
end
selector.deregister(resolver)
return if @closing && resolver.closed?
resolver.log(level: 2) { "check-in resolver##{resolver.object_id}(#{resolver.state}) in pool##{@pool.object_id}" }
@pool.checkin_resolver(resolver)
end
def try_clone_connection(connection, selector, family)
connection.family ||= family
return connection if connection.family == family
new_connection = connection.class.new(connection.origin, connection.options)
new_connection.family = family
connection.sibling = new_connection
do_init_connection(new_connection, selector)
new_connection
end
# returns the HTTPX::Connection through which the +request+ should be sent through.
def find_connection(request_uri, selector, options)
if (connection = selector.find_connection(request_uri, options))
connection.idling if connection.state == :closed
connection.log(level: 2) { "found connection##{connection.object_id}(#{connection.state}) in selector##{selector.object_id}" }
return connection
end
connection = @pool.checkout_connection(request_uri, options)
connection.log(level: 2) { "found connection##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
case connection.state
when :idle
do_init_connection(connection, selector)
when :open
if options.io
select_connection(connection, selector)
else
pin_connection(connection, selector)
end
when :closing, :closed
connection.idling
select_connection(connection, selector)
else
pin_connection(connection, selector)
end
connection
end
private
def deactivate(selector)
selector.each_connection do |connection|
connection.deactivate
deselect_connection(connection, selector) if connection.state == :inactive
end
end
# callback executed when an HTTP/2 promise frame has been received.
@ -110,104 +226,54 @@ module HTTPX
end
# returns the corresponding HTTP::Response to the given +request+ if it has been received.
def fetch_response(request, _, _)
@responses.delete(request)
def fetch_response(request, _selector, _options)
response = request.response
return unless response && response.finished?
log(level: 2) { "response fetched" }
response
end
# returns the HTTPX::Connection through which the +request+ should be sent through.
def find_connection(request, connections, options)
uri = request.uri
connection = pool.find_connection(uri, options) || init_connection(uri, options)
unless connections.nil? || connections.include?(connection)
connections << connection
set_connection_callbacks(connection, connections, options)
end
connection
end
def send_request(request, connections, options = request.options)
error = catch(:resolve_error) do
connection = find_connection(request, connections, options)
connection.send(request)
end
return unless error.is_a?(Error)
request.emit(:response, ErrorResponse.new(request, error, options))
end
# sets the callbacks on the +connection+ required to process certain specific
# connection lifecycle events which deal with request rerouting.
def set_connection_callbacks(connection, connections, options, cloned: false)
connection.only(:misdirected) do |misdirected_request|
other_connection = connection.create_idle(ssl: { alpn_protocols: %w[http/1.1] })
other_connection.merge(connection)
catch(:coalesced) do
pool.init_connection(other_connection, options)
# sends the +request+ to the corresponding HTTPX::Connection
def send_request(request, selector, options = request.options)
error = begin
catch(:resolve_error) do
connection = find_connection(request.uri, selector, options)
connection.send(request)
end
set_connection_callbacks(other_connection, connections, options)
connections << other_connection
misdirected_request.transition(:idle)
other_connection.send(misdirected_request)
rescue StandardError => e
e
end
connection.only(:altsvc) do |alt_origin, origin, alt_params|
other_connection = build_altsvc_connection(connection, connections, alt_origin, origin, alt_params, options)
connections << other_connection if other_connection
end
connection.only(:cloned) do |cloned_conn|
set_connection_callbacks(cloned_conn, connections, options, cloned: true)
connections << cloned_conn
end unless cloned
end
return unless error && error.is_a?(Exception)
# returns an HTTPX::Connection for the negotiated Alternative Service (or none).
def build_altsvc_connection(existing_connection, connections, alt_origin, origin, alt_params, options)
# do not allow security downgrades on altsvc negotiation
return if existing_connection.origin.scheme == "https" && alt_origin.scheme != "https"
raise error unless error.is_a?(Error)
altsvc = AltSvc.cached_altsvc_set(origin, alt_params.merge("origin" => alt_origin))
# altsvc already exists, somehow it wasn't advertised, probably noop
return unless altsvc
alt_options = options.merge(ssl: options.ssl.merge(hostname: URI(origin).host))
connection = pool.find_connection(alt_origin, alt_options) || init_connection(alt_origin, alt_options)
# advertised altsvc is the same origin being used, ignore
return if connection == existing_connection
connection.extend(AltSvc::ConnectionMixin) unless connection.is_a?(AltSvc::ConnectionMixin)
set_connection_callbacks(connection, connections, alt_options)
log(level: 1) { "#{origin} alt-svc: #{alt_origin}" }
connection.merge(existing_connection)
existing_connection.terminate
connection
rescue UnsupportedSchemeError
altsvc["noop"] = true
nil
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
# returns a set of HTTPX::Request objects built from the given +args+ and +options+.
def build_requests(*args, options)
request_options = @options.merge(options)
def build_requests(*args, params)
requests = if args.size == 1
reqs = args.first
reqs.map do |verb, uri, opts = EMPTY_HASH|
build_request(verb, uri, request_options.merge(opts))
reqs.map do |verb, uri, ps = EMPTY_HASH|
request_params = params
request_params = request_params.merge(ps) unless ps.empty?
build_request(verb, uri, request_params)
end
else
verb, uris = args
if uris.respond_to?(:each)
uris.enum_for(:each).map do |uri, opts = EMPTY_HASH|
build_request(verb, uri, request_options.merge(opts))
uris.enum_for(:each).map do |uri, ps = EMPTY_HASH|
request_params = params
request_params = request_params.merge(ps) unless ps.empty?
build_request(verb, uri, request_params)
end
else
[build_request(verb, uris, request_options)]
[build_request(verb, uris, params)]
end
end
raise ArgumentError, "wrong number of URIs (given 0, expect 1..+1)" if requests.empty?
@ -216,71 +282,183 @@ module HTTPX
end
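The params plumbing in `build_requests` above layers per-request params over the shared ones, skipping the merge (and thus the copy) when a request brings none of its own. Sketched with plain hashes:

```ruby
# Shared params apply to every request; per-request params win on conflict.
shared = { headers: { "x-api-token" => "TOKEN" } }

specs = [
  ["GET", "https://server.org/a", {}],
  ["POST", "https://server.org/b", { form: { "foo" => "bar" } }],
]

built = specs.map do |verb, uri, ps|
  params = ps.empty? ? shared : shared.merge(ps)
  [verb, uri, params]
end

built[0][2].equal?(shared) # => true: no needless copy
built[1][2][:form]         # => {"foo"=>"bar"}
```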
def set_request_callbacks(request)
request.on(:response, &method(:on_response).curry(2)[request])
request.on(:promise, &method(:on_promise))
end
def init_connection(uri, options)
connection = options.connection_class.new(uri, options)
catch(:coalesced) do
pool.init_connection(connection, options)
connection
end
def do_init_connection(connection, selector)
resolve_connection(connection, selector) unless connection.family
end
# sends an array of HTTPX::Request +requests+, returns the respective array of HTTPX::Response objects.
def send_requests(*requests)
connections = _send_requests(requests)
receive_requests(requests, connections)
selector = get_current_selector { Selector.new }
begin
_send_requests(requests, selector)
receive_requests(requests, selector)
ensure
unless @wrapped
if @persistent
deactivate(selector)
else
close(selector)
end
end
end
end
# sends an array of HTTPX::Request objects
def _send_requests(requests)
connections = []
def _send_requests(requests, selector)
requests.each do |request|
send_request(request, connections)
send_request(request, selector)
end
connections
end
# returns the array of HTTPX::Response objects corresponding to the array of HTTPX::Request +requests+.
def receive_requests(requests, selector)
responses = [] # : Array[response]
# guarantee ordered responses
loop do
request = requests.first
return responses unless request
catch(:coalesced) { selector.next_tick } until (response = fetch_response(request, selector, request.options))
request.emit(:complete, response)
responses << response
requests.shift
break if requests.empty?
next unless selector.empty?
# in some cases, the pool of connections might have been drained because there was some
# handshake error, and the error responses have already been emitted, but there was no
# opportunity to traverse the requests, hence we're returning only a fraction of the errors
# we were supposed to. This effectively fetches the existing responses and returns them.
exit_from_loop = true
requests_to_remove = [] # : Array[Request]
requests.each do |req|
response = fetch_response(req, selector, request.options)
if exit_from_loop && response
req.emit(:complete, response)
responses << response
requests_to_remove << req
else
# fetch_response may resend requests. when that happens, we need to go back to the initial
# loop and process the selector. we still do a pass-through on the remainder of requests, so
# that every request that needs to be resent, is resent.
exit_from_loop = false
raise Error, "something went wrong, responses not found and requests not resent" if selector.empty?
end
end
break if exit_from_loop
requests -= requests_to_remove
end
responses
end
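The ordered-response loop above can be illustrated in isolation. This is a minimal sketch, not httpx's API: `drain_in_order` and the `ready` hash (standing in for responses the event loop already produced) are hypothetical names introduced here.

```ruby
# Sketch of the ordered-response drain pattern: responses are appended
# strictly in request order; when the event loop runs dry, any responses
# that were already emitted are collected in a single pass instead of
# silently returning only a prefix of them.
def drain_in_order(requests, ready)
  responses = []
  until requests.empty?
    head = requests.first
    if (response = ready[head])
      # head request has a response: emit it and move the window forward
      responses << response
      requests.shift
    else
      # nothing left to select on: collect whatever already exists, in order
      requests.each { |req| responses << ready[req] if ready.key?(req) }
      break
    end
  end
  responses
end
```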
def resolve_connection(connection, selector)
if connection.addresses || connection.open?
#
# there are two cases in which we want to activate initialization of
# connection immediately:
#
# 1. when the connection already has addresses, i.e. it doesn't need to
# resolve a name (not the same as name being an IP, yet)
# 2. when the connection is initialized with an external already open IO.
#
on_resolver_connection(connection, selector)
return
end
resolver = find_resolver_for(connection, selector)
resolver.early_resolve(connection) || resolver.lazy_resolve(connection)
end
def on_resolver_connection(connection, selector)
from_pool = false
found_connection = selector.find_mergeable_connection(connection) || begin
from_pool = true
@pool.checkout_mergeable_connection(connection)
end
return select_connection(connection, selector) unless found_connection
connection.log(level: 2) do
"try coalescing from #{from_pool ? "pool##{@pool.object_id}" : "selector##{selector.object_id}"} " \
"(conn##{found_connection.object_id}[#{found_connection.origin}])"
end
coalesce_connections(found_connection, connection, selector, from_pool)
end
def on_resolver_close(resolver, selector)
return if resolver.closed?
deselect_resolver(resolver, selector)
resolver.close unless resolver.closed?
end
def find_resolver_for(connection, selector)
if (resolver = selector.find_resolver(connection.options))
resolver.log(level: 2) { "found resolver##{connection.object_id}(#{connection.state}) in selector##{selector.object_id}" }
return resolver
end
resolver = @pool.checkout_resolver(connection.options)
resolver.log(level: 2) { "found resolver##{connection.object_id}(#{connection.state}) in pool##{@pool.object_id}" }
resolver.current_session = self
resolver.current_selector = selector
resolver
end
# coalesces +conn2+ into +conn1+. if +conn1+ was loaded from the connection pool
# (it is known via +from_pool+), then it adds it to the +selector+.
def coalesce_connections(conn1, conn2, selector, from_pool)
unless conn1.coalescable?(conn2)
conn2.log(level: 2) { "not coalescing with conn##{conn1.object_id}[#{conn1.origin}])" }
select_connection(conn2, selector)
if from_pool
conn1.log(level: 2) { "check-in connection##{conn1.object_id}(#{conn1.state}) in pool##{@pool.object_id}" }
@pool.checkin_connection(conn1)
end
return false
end
conn2.log(level: 2) { "coalescing with conn##{conn1.object_id}[#{conn1.origin}])" }
conn2.coalesce!(conn1)
select_connection(conn1, selector) if from_pool
conn2.disconnect
true
end
def get_current_selector
selector_store[self] || (yield if block_given?)
end
def set_current_selector(selector)
if selector
selector_store[self] = selector
else
selector_store.delete(self)
end
end
def selector_store
th_current = Thread.current
th_current.thread_variable_get(:httpx_persistent_selector_store) || begin
{}.compare_by_identity.tap do |store|
th_current.thread_variable_set(:httpx_persistent_selector_store, store)
end
end
end
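The `selector_store` above combines two Ruby mechanisms: `compare_by_identity`, so two equal-but-distinct sessions get separate entries, and thread variables, which (unlike fiber-local `Thread#[]`) are shared across fibers of the same thread. A standalone sketch, with an assumed key name that is not httpx's:

```ruby
# Per-thread, per-object store in the style of selector_store above.
class PerThreadStore
  KEY = :my_per_thread_store # hypothetical key, not httpx's

  def self.store
    th = Thread.current
    th.thread_variable_get(KEY) || begin
      # keyed by object identity, not #eql?/#hash
      {}.compare_by_identity.tap { |h| th.thread_variable_set(KEY, h) }
    end
  end
end
```

Each thread lazily builds its own hash, so no locking is needed around reads and writes from the owning thread.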
@@ -305,6 +483,7 @@ module HTTPX
# session_with_custom = session.plugin(CustomPlugin)
#
def plugin(pl, options = nil, &block)
label = pl
# raise Error, "Cannot add a plugin to a frozen config" if frozen?
pl = Plugins.load_plugin(pl) if pl.is_a?(Symbol)
if !@plugins.include?(pl)
@@ -329,9 +508,36 @@
@default_options = pl.extra_options(@default_options) if pl.respond_to?(:extra_options)
@default_options = @default_options.merge(options) if options
if pl.respond_to?(:subplugins)
pl.subplugins.transform_keys(&Plugins.method(:load_plugin)).each do |main_pl, sub_pl|
# in case the main plugin has already been loaded, then apply subplugin functionality
# immediately
next unless @plugins.include?(main_pl)
plugin(sub_pl, options, &block)
end
end
pl.configure(self, &block) if pl.respond_to?(:configure)
if label.is_a?(Symbol)
# in case an already-loaded plugin complements functionality of
# the plugin currently being loaded, load it now
@plugins.each do |registered_pl|
next if registered_pl == pl
next unless registered_pl.respond_to?(:subplugins)
sub_pl = registered_pl.subplugins[label]
next unless sub_pl
plugin(sub_pl, options, &block)
end
end
@default_options.freeze
set_temporary_name("#{superclass}/#{pl}") if respond_to?(:set_temporary_name) # ruby 3.4 only
elsif options
# this can happen when two plugins are loaded, and one of them calls the other under the hood,
# albeit changing some default.
@@ -340,9 +546,40 @@
@default_options.freeze
end
self
end
end
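The subplugin resolution above runs in two directions: loading a plugin applies its subplugins for plugins already loaded, and also applies already-loaded plugins' subplugins registered under the new plugin's label, so load order does not matter. A toy model (not httpx's real plugin system; `PluginLoader` and the registry shape are invented for illustration):

```ruby
# Toy model of order-independent subplugin resolution.
class PluginLoader
  attr_reader :loaded

  # registry maps a plugin label to its subplugins table:
  # { label => { other_label => subplugin_label } }
  def initialize(registry)
    @registry = registry
    @loaded = []
  end

  def plugin(label)
    return if @loaded.include?(label)

    @loaded << label
    # pass 1: apply this plugin's subplugins for plugins already loaded
    @registry.fetch(label, {}).each { |other, sub| plugin(sub) if @loaded.include?(other) }
    # pass 2: apply already-loaded plugins' subplugins registered under this label
    @loaded.dup.each do |other|
      next if other == label

      sub = @registry.fetch(other, {})[label]
      plugin(sub) if sub
    end
  end
end
```

Whichever of `:a` and `:b` is loaded second, the subplugin `:a_b` registered by `:a` for `:b` ends up loaded.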
# setup of the support for close_on_fork sessions.
# adapted from https://github.com/mperham/connection_pool/blob/main/lib/connection_pool.rb#L48
if Process.respond_to?(:fork)
INSTANCES = ObjectSpace::WeakMap.new
private_constant :INSTANCES
def self.after_fork
INSTANCES.each_value(&:close)
nil
end
if ::Process.respond_to?(:_fork)
module ForkTracker
def _fork
pid = super
Session.after_fork if pid.zero?
pid
end
end
Process.singleton_class.prepend(ForkTracker)
end
else
INSTANCES = nil
private_constant :INSTANCES
def self.after_fork
# noop
end
end
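The `ForkTracker` module above relies on `Module#prepend`: prepending onto `Process.singleton_class` puts the hook ahead of `Process._fork` in the method lookup chain, and `pid.zero?` identifies the child process. The same prepend-and-`super` technique can be shown on an ordinary class (names here are invented; no forking involved):

```ruby
# Generic version of the _fork hook pattern: prepend a module onto a
# singleton class so every call to the target method passes through it.
class Service
  def self.restart
    :restarted
  end
end

module RestartTracker
  CALLS = [] # records each pass-through, like Session.after_fork reacting to a fork

  def restart
    result = super # run the original method
    CALLS << result
    result
  end
end

Service.singleton_class.prepend(RestartTracker)
```

Because the hook calls `super`, the original behavior is preserved while the tracker observes every invocation.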
end
# session may be overridden by certain adapters.


@@ -7,17 +7,16 @@ module HTTPX
end
def after(interval_in_secs, cb = nil, &blk)
return unless interval_in_secs
callback = cb || blk
raise Error, "timer must have a callback" unless callback
# I'm assuming here that most requests will have the same
# request timeout, as in most cases they share common set of
# options. A user setting different request timeouts for 100s of
# requests will already have a hard time dealing with that.
unless (interval = @intervals.bsearch { |t| t.interval == interval_in_secs })
interval = Interval.new(interval_in_secs)
interval.on_empty { @intervals.delete(interval) }
@intervals << interval
@intervals.sort!
end
@@ -26,10 +25,12 @@
@next_interval_at = nil
Timer.new(interval, callback)
end
def wait_interval
drop_elapsed!
return if @intervals.empty?
@next_interval_at = Utils.now
@@ -43,11 +44,36 @@
elapsed_time = Utils.elapsed_time(@next_interval_at)
drop_elapsed!(elapsed_time)
@next_interval_at = nil if @intervals.empty?
end
private
def drop_elapsed!(elapsed_time = 0)
# check first, if not elapsed, then return
first_interval = @intervals.first
return unless first_interval && first_interval.elapsed?(elapsed_time)
# TODO: would be nice to have a drop_while!
@intervals = @intervals.drop_while { |interval| interval.elapse(elapsed_time) <= 0 }
end
class Timer
def initialize(interval, callback)
@interval = interval
@callback = callback
end
def cancel
@interval.delete(@callback)
end
end
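The bookkeeping above keeps intervals sorted by remaining time, shares one interval object among callbacks scheduled for the same delay, and hands the caller a handle it can cancel. A condensed sketch of that design (`MiniTimers` and its names are illustrative, not httpx's API):

```ruby
# Condensed model of the timer wheel: entries stay sorted by remaining
# time, same-delay callbacks share an entry, and elapsing fires and
# drops entries whose remaining time reached zero.
class MiniTimers
  Entry = Struct.new(:remaining, :callbacks)

  def initialize
    @entries = [] # kept sorted by remaining time, like @intervals above
  end

  def after(secs, &blk)
    entry = @entries.find { |e| e.remaining == secs }
    unless entry
      entry = Entry.new(secs, [])
      @entries << entry
      @entries.sort_by!(&:remaining)
    end
    entry.callbacks << blk
    blk # a handle the caller could delete from entry.callbacks to cancel
  end

  def elapse(secs)
    @entries.each { |e| e.remaining -= secs }
    fired, @entries = @entries.partition { |e| e.remaining <= 0 }
    fired.each { |e| e.callbacks.each(&:call) }
  end
end
```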
class Interval
include Comparable
@@ -56,11 +82,6 @@
def initialize(interval)
@interval = interval
@callbacks = []
@on_empty = nil
end
def on_empty(&blk)
@on_empty = blk
end
def <=>(other)
@@ -83,18 +104,20 @@
def delete(callback)
@callbacks.delete(callback)
@on_empty.call if @callbacks.empty?
end
def no_callbacks?
@callbacks.empty?
end
def elapsed?(elapsed = 0)
(@interval - elapsed) <= 0 || @callbacks.empty?
end
def elapse(elapsed)
# same as elapsing
return 0 if @callbacks.empty?
@interval -= elapsed
if @interval <= 0


@@ -86,7 +86,6 @@ end
require "httpx/transcoder/body"
require "httpx/transcoder/form"
require "httpx/transcoder/json"
require "httpx/transcoder/chunker"
require "httpx/transcoder/deflate"
require "httpx/transcoder/gzip"
