Compare commits

...

396 Commits

Author SHA1 Message Date
HoneyryderChuck
0261449b39 fixed sig for callbacks_for 2025-08-08 17:06:03 +01:00
HoneyryderChuck
84c8126cd9 callback_for: check for ivar existence first
Closes #353
2025-08-08 16:30:17 +01:00
HoneyryderChuck
ff3f1f726f fix warning about argument potentially being ignored 2025-08-07 12:34:59 +01:00
HoneyryderChuck
b8b710470c fix sentry deprecation 2025-08-07 12:30:31 +01:00
HoneyryderChuck
0f3e3ab068 remove trailing :: from IO module usage, as there's no more internal module 2025-08-07 12:30:21 +01:00
HoneyryderChuck
095fbb3463 using local aws for the max requests tests
reduce exposure to httpbin.org even more
2025-08-07 12:12:50 +01:00
HoneyryderChuck
7790589c1f linting issue 2025-08-07 11:28:18 +01:00
HoneyryderChuck
dd8608ec3b small improvement in max requests tests to make them tolerant to multi-homed networks 2025-08-07 11:22:29 +01:00
HoneyryderChuck
8205b351aa removing usage of httpbin.org peer in tests wherever possible
it has been quite unstable, 503'ing often
2025-08-07 11:21:59 +01:00
HoneyryderChuck
5992628926 update nghttp2 used in CI tests 2025-08-07 11:21:02 +01:00
HoneyryderChuck
39370b5883 Merge branch 'issue-337' into 'master'
fix for issues blocking reconnection in proxy mode

Closes #337

See merge request os85/httpx!397
2025-07-30 09:49:51 +00:00
HoneyryderChuck
1801a7815c http2 parser: fix calculation when connection closes and there's no termination handshake 2025-07-18 17:48:23 +01:00
HoneyryderChuck
0953e4f91a fix for #receive_requests bailout routine when out of selectables
the routine was using #fetch_response, which may return nil, and wasn't handling that case, so it could return nil instead of a response/error response object. since, depending on the plugins, #fetch_response may reroute requests, handling it allows staying in the loop in case there are selectables to process again as a result
2025-07-18 17:48:23 +01:00
HoneyryderChuck
a78a3f0b7c proxy fixes: allow proxy connection errors to be retriable
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration. in order to protect from it, the proxy plugin now treats proxy connection errors (socket/timeout errors happening until the tunnel is established) as retriable, while ignoring other proxy errors. meanwhile, the naming of errors was simplified: there's now an HTTPX::ProxyError replacing HTTPX::HTTPProxyError (which is a breaking change).
2025-07-18 17:48:23 +01:00
HoneyryderChuck
aeb8fe5382 fix proxy ssl reconnection
when a proxied ssl connection was lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. in order to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new ProxySSL connection on reconnect
2025-07-18 17:48:23 +01:00
HoneyryderChuck
03170b6c89 promote certain transition logs to regular code (under level 3)
not really useful as metered telemetry, but would have been useful for other bugs
2025-07-18 17:48:23 +01:00
HoneyryderChuck
814d607a45 Revert "options: initialize all possible options to improve object shape"
This reverts commit f64c3ab5990b68f850d0d190535a45162929f0af.
2025-07-18 17:47:08 +01:00
HoneyryderChuck
5502332e7e logging when connections are deregistered from the selector/pool
also, logging when a response is fetched in the session
2025-07-18 17:46:43 +01:00
HoneyryderChuck
f3b68950d6 adding current fiber id to log message tags 2025-07-18 17:45:21 +01:00
HoneyryderChuck
2c4638784f Merge branch 'fix-shape' into 'master'
object shape improvements

See merge request os85/httpx!396
2025-07-14 15:38:19 +00:00
HoneyryderChuck
b0016525e3 recover from network unreachable errors when using cached IPs
while this type of error is avoided when doing HEv2, the IPs remain
in the cache; this means that, once the same host is reached, the
IPs are loaded onto the same socket, and if the issue is IPv6
connectivity, it'll break outside of the HEv2 flow.

this error is now protected inside the connect block, so that other
IPs in the list can be tried after; the IP is then evicted from the
cache.

the HEv2-related regression test is disabled in CI, as it's currently
unreliable in Gitlab CI, which allows resolving the IPv6 address
but does not allow connecting to it
2025-07-14 15:44:47 +01:00
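The eviction flow described above can be sketched in plain Ruby (the `connect_with_eviction` helper and its block are illustrative stand-ins, not httpx's actual connect path): each cached IP is tried in order, and an IP raising "network unreachable" is dropped from the cache so the next one gets a chance.

```ruby
# Sketch of trying cached IPs in order and evicting ones that raise
# Errno::ENETUNREACH, so the remaining IPs in the list can be tried.
# `connect` is a hypothetical stand-in for the real socket connect.
def connect_with_eviction(cached_ips, &connect)
  cached_ips.dup.each do |ip|
    begin
      return connect.call(ip)
    rescue Errno::ENETUNREACH
      # evict the unreachable IP from the cache and move on
      cached_ips.delete(ip)
    end
  end
  raise Errno::ENETUNREACH
end

ips = ["2001:db8::1", "192.0.2.1"]
result = connect_with_eviction(ips) do |ip|
  # simulate broken IPv6 connectivity
  raise Errno::ENETUNREACH if ip.include?(":")
  "connected to #{ip}"
end
```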
HoneyryderChuck
49555694fe remove check for non unique local ipv6 which is disabling HEv2
not sure anymore under which condition this was done...
2025-07-14 11:57:02 +01:00
HoneyryderChuck
93e5efa32e http2 stream header logs: initial newline to align values and make debug logs clearer 2025-07-14 11:50:22 +01:00
HoneyryderChuck
8b3c1da507 removed ivar left behind and used nowhere 2025-07-14 11:50:22 +01:00
HoneyryderChuck
d64f247e11 fix for Connection too many object shapes
some more ivars which were not initialized in the first place were leading to the warning in CI mode
2025-07-14 11:50:22 +01:00
HoneyryderChuck
f64c3ab599 options: initialize all possible options to improve object shape
Options#merge works by duping-then-filling ivars, but because not all of them were initialized on object creation, each merge could add more object shapes for the same class, which defeats one of the most recent ruby optimizations

this was fixed by caching all possible option names at the class level, and using that as a reference in the initialize method to nilify all unreferenced options
2025-07-14 11:50:22 +01:00
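The pattern that commit describes can be sketched in plain Ruby (`MyOptions` and its option names are hypothetical, not httpx's actual Options class): cache every option name at the class level, and nilify unset ones in the constructor so all instances define the same ivar set, and therefore share a single object shape even after merges.

```ruby
# Sketch of the object-shape-friendly options pattern described above.
# MyOptions is an illustrative stand-in for HTTPX::Options.
class MyOptions
  # class-level registry of every possible option name
  OPTION_NAMES = %i[timeout headers ssl].freeze

  def initialize(opts = {})
    # initialize every known ivar (nilifying unreferenced options),
    # so every instance gets the same object shape
    OPTION_NAMES.each do |name|
      instance_variable_set(:"@#{name}", opts[name])
    end
  end

  def merge(other)
    # dup-then-fill: the ivar set never changes, so no new shapes appear
    dup.tap do |merged|
      OPTION_NAMES.each do |name|
        value = other[name]
        merged.instance_variable_set(:"@#{name}", value) unless value.nil?
      end
    end
  end

  def [](name)
    instance_variable_get(:"@#{name}")
  end
end

a = MyOptions.new(timeout: 5)
b = a.merge(headers: { "accept" => "text/html" })
```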
HoneyryderChuck
af03ddba3b options: inlining logic from do_initialize in constructor 2025-07-14 09:10:52 +01:00
HoneyryderChuck
7012ca1f27 fixed previous commit, as the tag is not available before 1.15 2025-07-03 16:39:54 +01:00
HoneyryderChuck
d405f8905f fixed ddtrace compatibility for versions under 1.13.0 2025-07-03 16:23:27 +01:00
HoneyryderChuck
3ff10f142a replace h2 upgrade peer with a custom implementation
the remote one has been failing for some time
2025-06-09 22:56:30 +01:00
HoneyryderChuck
51ce9d10a4 bump version to 1.5.1 2025-06-09 09:04:05 +01:00
HoneyryderChuck
6bde11b09c Merge branch 'gh-92' into 'master'
don't bookkeep retry attempts when errors happen on just-checked-out open connections

See merge request os85/httpx!394
2025-05-28 17:54:03 +00:00
HoneyryderChuck
0c2808fa25 prevent needless closing loop when process is interrupted during DNS request
the native resolver needs to be unselected. it already was, but it was still taken into account for bookkeeping. this removes it by eliminating closed selectables from the list (which were probably already removed from the list via callback)

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-28 15:26:11 +01:00
HoneyryderChuck
cb78091e03 don't bookkeep retry attempts when errors happen on just-checked-out open connections
in case of multiple connections to the same server, where the server may have closed all of them at the same time, a request may fail multiple times after checkout before a new connection is started on which it can succeed. this patch prevents those prior attempts from exhausting the number of possible retries for the request

it does so by marking the request as ping when the connection it's being sent to is marked as inactive; this leverages the logic of gating retries bookkeeping in such a case

Closes https://github.com/HoneyryderChuck/httpx/issues/92
2025-05-28 15:25:50 +01:00
HoneyryderChuck
6fa69ba475 Merge branch 'duplicate-method-def' into 'master'
Fix duplicate `option_pool_options` method

See merge request os85/httpx!393
2025-05-21 15:30:34 +00:00
Earlopain
4a78e78d32
Fix duplicate option_pool_options method
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:237: warning: method redefined; discarding old option_pool_options (StandardError)
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:221: warning: previous definition of option_pool_options was here
2025-05-21 12:49:54 +02:00
HoneyryderChuck
0e393987d0 bump version to 1.5.0 2025-05-16 14:04:08 +01:00
HoneyryderChuck
12483fa7c8 missing ivar sigs in tcp class 2025-05-16 11:15:28 +01:00
HoneyryderChuck
d955ba616a deselect idle connections on session termination
session may be interrupted earlier than the connection has finished
the handshake; in such a case, simulate early termination.

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-15 00:31:15 +01:00
HoneyryderChuck
804d5b878b Merge branch 'debug-redact' into 'master'
added :debug_redact option

See merge request os85/httpx!387
2025-05-14 23:01:28 +00:00
HoneyryderChuck
75702165fd remove ping check when querying for repeatable request status
this should be dependent on the exception only, as connections may have closed before ping was released

this addresses https://github.com/HoneyryderChuck/httpx/issues/87#issuecomment-2866564479
2025-05-14 23:52:18 +01:00
HoneyryderChuck
120bbad126 clear write buffer on connect errors
leaving bytes around messes up the termination handshake and may raise other unwanted errors
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35446e9fe1 fixes for connection coalescing flow
the whole "found connection not open" branch was removed, as currently,
a mergeable connection must not be closed; this means that only
open/inactive connections will be picked up from selector/pool, as
they're the only coalescable connections (have addresses/ssl cert
state). this may be extended to support closed connections though, as
remaining ssl/addresses are enough to make it coalescable at that point,
and then it's just a matter of idling it, so it'll be simpler than it is
today.

coalesced connection gets closed via Connection#terminate at the end
now, in order to propagate whether it was a cloned connection.

added log messages in order to monitor coalescing handshake from logs.
2025-05-13 16:21:06 +01:00
HoneyryderChuck
3ed41ef2bf pool: do not decrement conn counter when returning existing connection, nor increment it when acquiring
this variable is supposed to monitor new connections being created or dropped, existing connection management shouldn't affect it
2025-05-13 16:21:06 +01:00
HoneyryderChuck
9ffbceff87 rename Connection#coalesced_connection=(conn) to Connection.coalesce!(conn) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
757c9ae32c making tcp state transition logs less ambiguous
also show transition states in connected
2025-05-13 16:21:06 +01:00
HoneyryderChuck
5d88ccedf9 redact ping payload as well 2025-05-13 16:21:06 +01:00
HoneyryderChuck
85808b6569 adding logs to select-on-socket phase 2025-05-13 16:21:06 +01:00
HoneyryderChuck
d5483a4264 reconnectable errors: include HTTP/2 parser errors and openssl errors 2025-05-13 16:21:06 +01:00
HoneyryderChuck
540430c00e assert for request in a faraday test (sometimes this is nil, for some reason) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
3a417a4623 added :debug_redact option
when true, text passed to log messages that is considered sensitive (wrapped in a +#log_redact+ call) will be logged as "[REDACTED]"
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35c18a1b9b options: metaprogram integer options into the same definition 2025-05-13 16:20:28 +01:00
HoneyryderChuck
cf19fe5221 Merge branch 'improv' into 'master'
sig improvements

See merge request os85/httpx!390
2025-05-13 15:18:50 +00:00
HoneyryderChuck
f9c2fc469a options: freeze more ivars by default 2025-05-13 15:52:57 +01:00
HoneyryderChuck
9b513faab4 aligning implementation of the #resolve function in all implementations 2025-05-13 15:52:57 +01:00
HoneyryderChuck
0be39faefc added some missing sigs + type safe code 2025-05-13 15:44:21 +01:00
HoneyryderChuck
08c5f394ba fixed usage of nonexistent var 2025-05-13 15:13:02 +01:00
HoneyryderChuck
55411178ce resolver: moved @connections ivar + init into parent class
also, establishing the selectable interface for resolvers
2025-05-13 15:13:02 +01:00
HoneyryderChuck
a5c83e84d3 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!389
2025-05-13 14:10:56 +00:00
HoneyryderChuck
d7e15c4441 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-05-13 11:02:13 +01:00
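The wakeup mechanism that commit describes can be sketched with plain Ruby primitives (`WakeupBuffer` and its method names are made up, not httpx's internals): a writer thread appends to a mutex-guarded buffer and writes a byte to a pipe, which wakes the `IO.select` loop on the session thread.

```ruby
# Illustrative sketch of waking a select loop from another thread via an
# IO pipe plus a mutex-guarded buffer. Not httpx's actual implementation.
class WakeupBuffer
  def initialize
    @mutex = Mutex.new
    @chunks = []
    @read_io, @write_io = IO.pipe
  end

  # called from any thread: buffer the payload, then wake the select loop
  def push(chunk)
    @mutex.synchronize { @chunks << chunk }
    @write_io.write("\0")
  end

  # called from the select loop thread: block until woken, then drain
  def wait_and_drain(timeout = 1)
    ready, = IO.select([@read_io], nil, nil, timeout)
    return [] unless ready

    @read_io.readpartial(1024) # clear accumulated wakeup bytes
    @mutex.synchronize { @chunks.slice!(0..-1) }
  end
end

buf = WakeupBuffer.new
Thread.new { buf.push("hello") }.join
drained = buf.wait_and_drain
```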
HoneyryderChuck
012255e49c Merge branch 'ruby-3.5-cgi' into 'master'
Only require from `cgi` what is required

See merge request os85/httpx!391
2025-05-10 00:20:33 +00:00
HoneyryderChuck
d20506acb8 Merge branch 'httpx-issue-350' into 'master'
In file (any serialized) store need to response.finish! on get

Closes #350

See merge request os85/httpx!392
2025-05-10 00:13:41 +00:00
Paul Duey
28399f1b88 In file (any serialized) store need to response.finish! on get 2025-05-09 17:22:39 -04:00
Earlopain
953101afde
Only require from cgi what is required
In Ruby 3.5, most of the `cgi` gem will be removed and moved to a bundled gem.

Luckily, the escape/unescape methods have been left around. So, only the require path needs to be adjusted to avoid a warning.
`cgi/escape` has been available since Ruby 2.3

I also moved the require to the file that actually uses it.

https://bugs.ruby-lang.org/issues/21258
2025-05-09 18:54:41 +02:00
HoneyryderChuck
055ee47b83 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!383
2025-04-29 22:44:44 +00:00
HoneyryderChuck
dbad275c65 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-04-29 23:25:41 +01:00
HoneyryderChuck
fe69231e6c Merge branch 'gh-86' into 'master'
persistent plugin: by default, do not retry requests which failed due to a request timeout

See merge request os85/httpx!385
2025-04-29 09:41:45 +00:00
HoneyryderChuck
4c61df768a persistent plugin: by default, do not retry requests which failed due to a request timeout
that isn't a connection-related type of failure, and it confuses users when it gets retried, as the connection was fine; the request was just slow

Fixes https://github.com/HoneyryderChuck/httpx/issues/86
2025-04-27 16:47:50 +01:00
HoneyryderChuck
aec150b030 Merge branch 'issue-347' into 'master'
:callbacks plugin fix: copy callbacks to new session when using the session builder methods

Closes #347 and #348

See merge request os85/httpx!386
2025-04-26 15:12:42 +00:00
HoneyryderChuck
29a43c4bc3 callbacks plugin fix: errors raised in .on_request_error callback should bubble up to user code
this was not happening for errors happening during name resolution, particularly when HEv2 was used, as the second resolver was kept open and didn't stop the selector loop

Closes #348
2025-04-26 03:11:55 +01:00
HoneyryderChuck
34c2fee60c :callbacks plugin fix: copy callbacks to new session when using the session builder methods
such as '.with' or '.wrap', which create a new session object on the fly
2025-04-26 02:34:56 +01:00
HoneyryderChuck
c62966361e moving can_buffer_more_requests? to private section 2025-04-26 01:42:55 +01:00
it's only used internally
2025-04-26 01:42:55 +01:00
HoneyryderChuck
2b87a3d5e5 selector: make APIs expecting connections more strict, improve sigs by using interface 2025-04-26 01:42:55 +01:00
HoneyryderChuck
3dd767cdc2 response_cache: also cache request headers, for vary algo computation 2025-04-26 01:42:55 +01:00
HoneyryderChuck
a9255c52aa response_cache plugin: adding more rdoc documentation to methods 2025-04-26 01:42:55 +01:00
HoneyryderChuck
32031e8a03 response_cache plugin: rename cached_response? to not_modified?, more accurate 2025-04-26 01:42:55 +01:00
HoneyryderChuck
f328646c08 Merge branch 'gh-84' into 'master'
adding missing datadog span decoration

See merge request os85/httpx!384
2025-04-26 00:40:49 +00:00
HoneyryderChuck
0484dd76c8 fix for wrong query string encoding when passed an empty :params input
Fixes https://github.com/HoneyryderChuck/httpx/issues/85
2025-04-26 00:20:28 +01:00
HoneyryderChuck
17c1090b7a more aggressive timeouts in tests 2025-04-26 00:10:48 +01:00
HoneyryderChuck
87f4ce4b03 adding missing datadog span decoration
including header tags, and other missing span tags
2025-04-25 23:46:11 +01:00
HoneyryderChuck
1ec7442322 Merge branch 'improv-tests' 2025-04-14 17:35:15 +01:00
HoneyryderChuck
723959cf92 wrong option docs 2025-04-13 01:27:27 +01:00
HoneyryderChuck
10b4b9c7c0 remove unused method 2025-04-13 01:27:05 +01:00
HoneyryderChuck
1b39bcd3a3 set appropriate coverage key, use it as command 2025-04-13 01:08:18 +01:00
HoneyryderChuck
44a2041ea8 added missing response cache store sigs 2025-04-13 01:07:18 +01:00
HoneyryderChuck
b63f9f1ae2 native: realign log calls, so coverage does not misreport them 2025-04-13 01:06:54 +01:00
HoneyryderChuck
467dd5e7e5 file store: testing path when the same request is stored twice
also, testing usage of symbol response cache store options.
2025-04-13 01:05:42 +01:00
HoneyryderChuck
c626fae3da adding test to force usage of max_requests conditionals under http1 2025-04-13 01:05:08 +01:00
HoneyryderChuck
7f6b78540b Merge branch 'issue-328' into 'master'
pool option: max_connections

Closes #328

See merge request os85/httpx!371
2025-04-12 22:43:18 +00:00
HoneyryderChuck
b120ce4657 new pool option: max_connections
this new option declares how many max inflight-or-idle open connections a session may hold. connections get recycled in case a new one is needed and the pool has closed connections to discard. the same pool timeout error applies as for max_connections_per_origin
2025-04-12 23:29:08 +01:00
HoneyryderChuck
32c36bb4ee Merge branch 'issue-341' into 'master'
response_cache plugin: return cached response from store unless stale

Closes #341

See merge request os85/httpx!382
2025-04-12 21:45:35 +00:00
HoneyryderChuck
cc0626429b prevent overlap of test dirs/files across test instances 2025-04-12 22:09:12 +01:00
HoneyryderChuck
a0e2c1258a allow setting :response_cache_store with a symbol (:store, :file_store)
cleaner to select from one of the two available options
2025-04-12 22:09:12 +01:00
HoneyryderChuck
6bd3c15384 fixing cacheable_response? to exclude headers and freshness
it's called with a fresh response already
2025-04-12 22:09:12 +01:00
HoneyryderChuck
0d23c464f5 simplifying response cache store API
#get, #set, #clear, that's all you need. this can now be some bespoke custom class implementing these primitives
2025-04-12 22:09:12 +01:00
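A bespoke store implementing those three primitives could look like the following minimal in-memory sketch (the shipped stores also handle serialization and freshness, which this deliberately omits):

```ruby
# Minimal in-memory store exposing the #get/#set/#clear primitives
# mentioned above. A sketch, not one of httpx's shipped stores.
class MemoryStore
  def initialize
    @entries = {}
    @mutex = Mutex.new
  end

  def get(request)
    @mutex.synchronize { @entries[request] }
  end

  def set(request, response)
    @mutex.synchronize { @entries[request] = response }
  end

  def clear
    @mutex.synchronize { @entries.clear }
  end
end

store = MemoryStore.new
store.set("GET:https://example.com", "cached-response")
```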
HoneyryderChuck
a75b89db74 response_cache plugin: adding filesystem-based store
it stores the cached responses in the filesystem
2025-04-12 22:09:12 +01:00
HoneyryderChuck
7173616154 response cache: fix vary header handling by supporting a defined set of headers
the cache key will be also determined by the supported vary headers values, when present; this means easier lookups, and one level hash fetch, where the same url-verb request may have multiple entries depending on those headers

checking the response vary header will therefore be done at cache response lookup; writes may override when they shouldn't, though, as a full match on supported vary headers will be performed, and one can't know in advance the combo of vary headers, which is why interested parties will have to be judicious with the new option
2025-04-12 22:09:12 +01:00
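The one-level lookup that commit describes can be sketched as deriving the cache key from verb, url, and the values of a configured set of vary headers (the header set and hashing scheme here are illustrative, not httpx's exact implementation):

```ruby
# Sketch of a cache key determined by verb, url, and the values of a
# defined set of supported vary headers, enabling one-level hash lookups.
require "digest"

SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, url, headers)
  # same url+verb with different vary-header values → different entries
  vary_values = SUPPORTED_VARY_HEADERS.map { |h| headers[h].to_s }
  Digest::SHA1.hexdigest([verb.to_s, url, *vary_values].join("\n"))
end

k1 = cache_key(:get, "https://example.com", { "accept" => "text/html" })
k2 = cache_key(:get, "https://example.com", { "accept" => "application/json" })
k3 = cache_key(:get, "https://example.com", { "accept" => "text/html" })
```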
HoneyryderChuck
69f9557780 corrected equality comparison of response bodies 2025-04-12 22:09:12 +01:00
HoneyryderChuck
339af65cc1 response cache: store cached response in request, so that copying and cache invalidating work a bit OOTB 2025-04-12 22:09:12 +01:00
HoneyryderChuck
3df6edbcfc response_cache: an immutable response is always fresh 2025-04-12 22:09:11 +01:00
HoneyryderChuck
5c2f8ab0b1 response_cache plugin: return cached response from store unless stale
response age wasn't being taken into account, and a cache invalidation request was always being sent; a fresh response will stay in the store until expired; when it expires, cache invalidation will be tried (if possible); if invalidated, the new response is put in the store; if validated, the body of the cached response is copied, and the cached response stays in the store
2025-04-12 22:09:11 +01:00
HoneyryderChuck
0c335fd03d Merge branch 'gh-82' into 'master'
persistent plugin: drop , allow retries for ping requests, regardless of idempotency property

See merge request os85/httpx!381
2025-04-12 09:14:32 +00:00
HoneyryderChuck
bf19cde364 fix: ping record to match must be kept in a different string
http-2 1.1.0 uses the string input as the ultimate buffer (when input not frozen), which will mutate the argument. in order to keep it around for further comparison, the string is dupped
2025-04-11 16:25:58 +01:00
HoneyryderChuck
7e0ddb7ab2 persistent plugin: when errors happen during connection ping phase, make sure that only connection lost errors are going to be retriable 2025-04-11 14:41:36 +01:00
HoneyryderChuck
4cd3136922 connection: set request timeouts before sending the request to the parser
in situations where the connection is already open/active, the requests would be buffered before setting the timeouts, which would skip transition callbacks associated with writes, such as write timeouts and request timeouts
2025-04-11 14:41:36 +01:00
HoneyryderChuck
642122a0f5 persistent plugin: drop , allow retries for ping requests, regardless of idempotency property
the previous option was there to allow reconnecting on non-idempotent (e.g. POST) requests, but had the unfortunate side-effect of allowing retries for failures (e.g. timeouts) which had nothing to do with a failed connection; this mitigates it by enabling retries for ping-aware requests, i.e. if there is an error during PING, always retry
2025-04-11 14:41:36 +01:00
HoneyryderChuck
42d42a92b4 added missing test for close_on_fork option 2025-04-09 09:39:53 +01:00
HoneyryderChuck
fb6a509d98 removing duplicate sig 2025-04-06 21:54:03 +01:00
HoneyryderChuck
3c22f36a6c session refactor: remove @responses hash
this was being used as an internal cache for finished responses; it can however be superseded by Request#response, which fulfills the same role alongside the #finished? call; this allows us to drop one variable-size hash which would grow at least as large as the number of requests per call, and was inadvertently shared across threads when using the same session (even at no danger of colliding, but could perhaps cause problems in JRuby?)

it also allows removing one callback
2025-04-04 11:05:27 +01:00
HoneyryderChuck
51b2693842 Merge branch 'gh-disc-71' into 'master'
:stream_bidi plugin

See merge request os85/httpx!365
2025-04-04 09:51:29 +00:00
HoneyryderChuck
1ab5855961 Merge branch 'gh-74' into 'master'
adding  option, which automatically closes sessions on fork

See merge request os85/httpx!377
2025-04-04 09:49:06 +00:00
HoneyryderChuck
f82816feb3 Merge branch 'issue-339' into 'master'
QUERY plugin

Closes #339

See merge request os85/httpx!374
2025-04-04 09:48:13 +00:00
HoneyryderChuck
ee229aa74c readapt some plugins so that supported verbs can be overridden by custom plugins 2025-04-04 09:32:38 +01:00
HoneyryderChuck
793e900ce8 added the :query plugin, which supports the QUERY http method
added as a plugin for explicit opt-in, as it's still an experimental feature (RFC in draft)
2025-04-04 09:32:38 +01:00
HoneyryderChuck
1241586eb4 introducing subplugins to plugins
subplugins are modules of plugins which register as post-plugins of other plugins

a specific plugin may want to have a side-effect on the functionality of another plugin, so they can use this to register it when the other plugin is loaded
2025-04-04 09:25:53 +01:00
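The subplugin idea can be sketched in plain Ruby (illustrative `MiniSession` registry, not httpx's plugin API): a module is registered against another plugin and only takes effect once that plugin is loaded.

```ruby
# Sketch of subplugin registration: a module that is applied as a
# side-effect of another plugin being loaded. Names are made up.
module MiniSession
  @plugins = []
  @subplugins = Hash.new { |h, k| h[k] = [] }

  class << self
    attr_reader :plugins

    # register `mod` to be applied whenever `other` gets loaded
    def register_subplugin(other, mod)
      @subplugins[other] << mod
      # if the other plugin is already loaded, apply immediately
      @plugins.concat(@subplugins.delete(other)) if @plugins.include?(other)
    end

    def plugin(mod)
      @plugins << mod
      # apply any subplugins waiting on this plugin
      @plugins.concat(@subplugins.delete(mod) || [])
      self
    end
  end
end

retries = Module.new
retries_hook = Module.new
MiniSession.register_subplugin(retries, retries_hook)
MiniSession.plugin(retries)
```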
HoneyryderChuck
cbf454ae13 Merge branch 'issue-336' into 'master'
ruby 3.4 features

Closes #336

See merge request os85/httpx!372
2025-04-04 08:24:28 +00:00
HoneyryderChuck
180d3b0e59 adding option, which automatically closes sessions on fork
only for ruby 3.1 or higher. adapted from a similar feature from the connection_pool gem
2025-04-04 00:22:05 +01:00
HoneyryderChuck
84db0072fb new :stream_bidi plugin
this plugin is an HTTP/2 only plugin which enables bidirectional streaming

the client can continue writing request streams as response streams arrive midway

Closes https://github.com/HoneyryderChuck/httpx/discussions/71
2025-04-04 00:21:12 +01:00
HoneyryderChuck
c48f6c8e8f adding Request#can_buffer?
abstracts some logic around whether a request has request body bytes to buffer
2025-04-04 00:20:29 +01:00
HoneyryderChuck
870b8aed69 make .parser_type an instance method instead
allows plugins to override
2025-04-04 00:20:29 +01:00
HoneyryderChuck
56b8e9647a making multipart decoding code more robust 2025-04-04 00:18:53 +01:00
HoneyryderChuck
1f59688791 rename test servlet 2025-04-04 00:18:53 +01:00
HoneyryderChuck
e63c75a86c improvements in headers
using Hash#new(capacity: ) to better predict size; reduce the number of allocated arrays by passing the result of  to the store when possible, and only calling #downcased(str) once; #array_value will also not try to clean up errors in the passed data (it'll either fail loudly, or be fixed downstream)
2025-04-04 00:18:53 +01:00
HoneyryderChuck
3eaf58e258 refactoring timers to more efficiently deal with empty intervals
before, canceling a timer connected to an interval which would become empty would delete it from the main intervals store; this deletion now moves away from the request critical path, and pinging for intervals will drop elapsed-or-empty before returning the shortest one

beyond that, the intervals store won't be constantly recreated if there's no need for it (i.e. nothing has elapsed), which reduces gc pressure

searching for existing interval on #after now uses bsearch; since the list is ordered, this should make finding one more performant
2025-04-04 00:18:53 +01:00
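The bsearch lookup mentioned above can be sketched on a sorted list of interval durations (the `Intervals` class here is illustrative, not httpx's Timers implementation):

```ruby
# Sketch of finding/inserting an interval in a sorted list with bsearch,
# keeping lookups O(log n). Illustrative, not httpx's Timers class.
class Intervals
  def initialize
    @intervals = [] # kept sorted by interval duration
  end

  def after(seconds)
    # find-minimum bsearch relies on the list being sorted
    existing = @intervals.bsearch { |i| i >= seconds }
    return existing if existing == seconds

    # no exact match: insert while preserving order
    index = @intervals.bsearch_index { |i| i > seconds } || @intervals.size
    @intervals.insert(index, seconds)
    seconds
  end

  def shortest
    @intervals.first
  end
end

timers = Intervals.new
timers.after(5)
timers.after(1)
timers.after(3)
```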
HoneyryderChuck
9ff62404a6 enabling warning messages 2025-04-04 00:18:53 +01:00
HoneyryderChuck
4d694f9517 ruby 3.4 feature: use String#append_as_bytes in buffers 2025-04-04 00:18:53 +01:00
HoneyryderChuck
22952f6a4a ruby 3.4: set string capacity for buffer-like string 2025-04-04 00:18:53 +01:00
HoneyryderChuck
7660e4c555 implement #inspect in a few places where output gets verbose
tweak some existing others
2025-04-04 00:18:53 +01:00
HoneyryderChuck
a9cc787210 ruby 3.4: use set_temporary_name to decorate plugin classes with more descriptive names 2025-04-04 00:18:53 +01:00
HoneyryderChuck
970830a025 bumping version to 1.4.4 2025-04-03 22:17:42 +01:00
HoneyryderChuck
7a3d38aeee Merge branch 'issue-343' into 'master'
session: discard connection callbacks if they're assigned to a different session already

Closes #343

See merge request os85/httpx!379
2025-04-03 18:53:39 +00:00
HoneyryderChuck
54bb617902 fixed regression test of 1.4.1 (it detected a different error, but the outcome is not a goaway error anymore, as persistent conns recover and retry) 2025-04-03 18:34:41 +01:00
HoneyryderChuck
cf08ae99f5 removing unneeded require in regression test which loaded webmock by mistake 2025-04-03 18:23:56 +01:00
HoneyryderChuck
c8ce4cd8c8 Merge branch 'down-issue-98' into 'master'
stream plugin: allow partial buffering of the response when calling things other than #each

See merge request os85/httpx!380
2025-04-03 17:23:21 +00:00
HoneyryderChuck
6658a2ce24 ssl socket: do not call tcp socket connect if already connected 2025-04-03 18:17:35 +01:00
HoneyryderChuck
7169f6aaaf stream plugin: allow partial buffering of the response when calling things other than #each
this allows calling #status or #headers on a stream response without buffering the whole response, as happens now; it will only work for methods which do not rely on the whole payload being available, but that should be ok for the stream plugin usage

Fixes https://github.com/janko/down/issues/98
2025-04-03 17:51:02 +01:00
HoneyryderChuck
ffc4824762 do not needlessly probe for readiness on a reconnected connection 2025-04-03 11:04:15 +01:00
HoneyryderChuck
8e050e846f decrementing the in-flight counter in a connection
sockets are sometimes needlessly probed on retries because the counter wasn't taking failed attempts into account
2025-04-03 11:04:15 +01:00
HoneyryderChuck
e40d3c9552 do not exhaust retry attempts when probing connections after keep alive timeout expires
since pools can keep multiple persistent connections which may have been terminated by the peer already, exhausting the one retry attempt from the persistent plugin may make the request fail before trying it on an actual live connection. in this patch, requests which are preceded by a PING frame used for probing are marked as such, and do not decrement the attempts counter when failing
2025-04-03 11:04:15 +01:00
HoneyryderChuck
ba60ef79a7 if checking out a connection in a closing state, assume that the channel is irrecoverable and hard-close it beforehand
one less callback to manage, which potentially leaks across session usages
2025-03-31 11:46:04 +01:00
HoneyryderChuck
ca49c9ef41 session: discard connection callbacks if they're assigned to a different session already
some connection callbacks are prone to be left behind; when they do, they may access objects that may have been locked by another thread, thereby corrupting state.
2025-03-28 18:26:17 +00:00
HoneyryderChuck
7010484b2a bump version to 1.4.3 2025-03-25 23:30:51 +00:00
HoneyryderChuck
06eba512a6 Merge branch 'issue-340' into 'master'
empty the write buffer on EOF errors in #read too

Closes #340

See merge request os85/httpx!373
2025-03-24 11:18:57 +00:00
HoneyryderChuck
f9ed0ab602 only run rbs tests in latest ruby 2025-03-19 23:55:00 +00:00
HoneyryderChuck
5632e522c2 internal telemetry reuses the loggable module, which is made to work in places where there are no options 2025-03-19 23:43:29 +00:00
HoneyryderChuck
cfdb719a8e extra subroutines in test http2 server 2025-03-19 23:42:28 +00:00
HoneyryderChuck
b2a1b9cded fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-19 23:42:10 +00:00
HoneyryderChuck
5917c63a70 add more error message context to settings timeout flaky test 2025-03-19 23:41:02 +00:00
HoneyryderChuck
6af8ad0132 missing sig for HTTP2 Connection 2025-03-19 23:30:36 +00:00
HoneyryderChuck
35ac13406d do not run yjit build for older rubies 2025-03-19 23:30:13 +00:00
HoneyryderChuck
d00c46d363 Merge branch 'gh-80' into 'master'
handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection

See merge request os85/httpx!375
2025-03-19 23:21:31 +00:00
HoneyryderChuck
a437de36e8 handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection
it was previously only handling 421 status codes for the same effect; this achieves parity with the frame-driven redirection
2025-03-19 23:11:51 +00:00
HoneyryderChuck
797fd28142 Merge branch 'faraday-multipart-uploadio-issue' into 'master'
fix: do not close request right after sending it, assume it may have to be retried

See merge request os85/httpx!378
2025-03-19 22:13:19 +00:00
HoneyryderChuck
6d4266d4a4 multipart: initialize @bytesize in the initializer (for object shape opt) 2025-03-19 16:59:25 +00:00
HoneyryderChuck
eb8c18ccda make << a part of Response interface (and ensure ErrorResponse deals with no internal @response) 2025-03-19 16:58:44 +00:00
HoneyryderChuck
4653b48602 fix: do not close request right after sending it, assume it may have to be retried
with the retries plugin, the request payload will be rewound, and that may not be possible if already closed. this was never detected so far because no request body transcoder internally closes, but the faraday multipart adapter does

the request is therefore closed alongside the response (when the latter is closed)

Fixes https://github.com/HoneyryderChuck/httpx/issues/75#issuecomment-2731219586
2025-03-19 16:57:47 +00:00
HoneyryderChuck
8287a55b95 Merge branch 'gh-79' into 'master'
remove raise-error middleware from faraday tests

See merge request os85/httpx!376
2025-03-18 22:55:20 +00:00
HoneyryderChuck
9faed647bf remove raise-error middleware from faraday tests
proves that the adapter does not raise on http errors. also added a test to ensure that
2025-03-18 22:42:38 +00:00
HoneyryderChuck
5268f60021 fix sig issues coming from latest rbs 2025-03-18 18:30:53 +00:00
HoneyryderChuck
132e4b4ebe extra subroutines in test http2 server 2025-03-14 23:45:36 +00:00
HoneyryderChuck
b502247284 fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-14 23:45:27 +00:00
HoneyryderChuck
e5d852573a empty the write buffer on EOF errors in #read too
this avoids, on the HTTP/2 termination handshake, writing the bytes regardless when the socket was shown as closed due to EOF (due to them being misidentified as the GOAWAY frame)
2025-03-14 23:45:12 +00:00
HoneyryderChuck
d17ac7c8c3 webmock: reassign headers after callbacks
these may have been reassigned during them
2025-03-05 23:09:20 +00:00
HoneyryderChuck
b1c08f16d5 bump version to 1.4.2 2025-03-05 22:20:41 +00:00
HoneyryderChuck
f618c6447a tweaking hn script 2025-03-05 13:41:33 +00:00
HoneyryderChuck
4454b1bbcc Merge branch 'issue-334' into 'master'
ensure connection is cleaned up on parser-initiated forced reset

Closes #334

See merge request os85/httpx!363
2025-03-03 18:27:13 +00:00
HoneyryderChuck
88f8f5d287 fix: reset timeout callbacks when requests are routed to a different connection
this may happen in a few contexts, such as connection exhaustion, but more importantly, when a request is retried in a different connection; if the request successfully sets the callbacks before the connection raises an issue and the request is retried in a new one, the callbacks from the faulty connection are carried with it, and triggered at a time when the connection is back in the connection pool, or worse, used in a different thread

this fix relies on the :idle transition callback, which is called before the request is routed around
2025-03-03 18:21:04 +00:00
HoneyryderChuck
999b6a603a adding reproduction of the reported bug on issue-334 2025-03-03 18:12:03 +00:00
HoneyryderChuck
f8d05b0e82 conn: on eof error, clean up write buffer
socket is closed, do not try to drain it while performing the handshake shutdown
2025-03-03 18:12:03 +00:00
HoneyryderChuck
a7f2271652 add more process context info to logging 2025-03-03 18:12:03 +00:00
HoneyryderChuck
55f1f6800b Merge branch 'gh-77' into 'master'
always raise an error when a non-recoverable error happens when sending the request

See merge request os85/httpx!370
2025-03-03 18:03:23 +00:00
HoneyryderChuck
3e736b1f05 Merge branch 'fix-hev2-overrides' into 'master'
fixes for happy eyeballs implementation

Closes #337

See merge request os85/httpx!368
2025-03-03 18:02:43 +00:00
HoneyryderChuck
f5497eec4f always raise an error when a non-recoverable error happens when sending the request
this should fall back to terminating the session immediately and closing its connections, instead of trying to fit the same exception into the request objects, no point in that

Closes https://github.com/HoneyryderChuck/httpx/issues/77
2025-03-03 16:45:43 +00:00
HoneyryderChuck
08015e0851 fixup! native resolver: refactored retries to use timer intervals 2025-03-01 01:12:39 +00:00
HoneyryderChuck
a0f472ba02 cleanly exit from Exception in the selector loop
was messing up RBS state
2025-03-01 01:03:24 +00:00
HoneyryderChuck
8bee6956eb adding Timer, making Timers#after return it, to allow single cancellation
the previous iteration relied on internal behaviour to delete the correct callback; in the process, logic to delete all callbacks from an interval was accidentally committed, which motivated this refactoring. the premise is: timeouts can cancel the timer; they set themselves as active until done; operation timeouts rely on the previous to be ignored or not.

a new error, OperationTimeoutError, was added for that effect
2025-03-01 01:03:24 +00:00
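The idea behind returning the timer from `Timers#after` can be illustrated with a minimal sketch (hypothetical names, not httpx's internals): the returned handle allows a caller to cancel a single callback, without deleting every callback registered for the same interval.

```ruby
# Sketch: Timers#after returns a Timer handle, enabling single cancellation.
class Timer
  def initialize(fires_at, callback)
    @fires_at = fires_at
    @callback = callback
    @cancelled = false
  end

  def cancel
    @cancelled = true
  end

  def fire(now)
    @callback.call if !@cancelled && now >= @fires_at
  end
end

class Timers
  def initialize
    @timers = []
  end

  def after(seconds, &callback)
    timer = Timer.new(monotonic_now + seconds, callback)
    @timers << timer
    timer # returning the handle allows cancelling just this callback
  end

  def fire(now = monotonic_now)
    @timers.each { |timer| timer.fire(now) }
  end

  private

  def monotonic_now
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end
```

With this shape, two callbacks registered for the same interval are independent: cancelling one handle leaves the other untouched.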
HoneyryderChuck
97cbdf117d small update in output of hackernews script 2025-02-28 18:37:05 +00:00
HoneyryderChuck
383f2a01d8 fix choice of candidate on no_domain_found error
must pick up name from candidates and pass to #resolve
2025-02-28 18:37:05 +00:00
HoneyryderChuck
8a473b4ccd native resolver: propagate error to all connections and close resolver when socket error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
b6c8f70aaf fix: always prefer timer interval if values are the same 2025-02-28 18:37:05 +00:00
HoneyryderChuck
f5aa6142a0 selector: remove needless begin block 2025-02-28 18:37:05 +00:00
HoneyryderChuck
56d82e6370 connection: make sure it's purged on transition error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
41e95d5b86 fix log message repeating pattern 2025-02-28 18:37:05 +00:00
HoneyryderChuck
46a39f2b0d native: when resolving, purge closed connections, ignore the connection which is being resolved 2025-02-28 18:37:05 +00:00
HoneyryderChuck
8009fc11b7 native resolver: refactored retries to use timer intervals
there were a lot of issues with bookkeeping this at the connection level; in the end, the timers infra was a much better proxy for all of this; set timer after write; cancel it on reading data to parse
2025-02-28 18:37:05 +00:00
HoneyryderChuck
398c08eb4d native resolver: consume resolutions in a loop, do not stop after the first one
this was a busy loop on dns resolution; this should utilize the socket better
2025-02-28 18:37:05 +00:00
HoneyryderChuck
723fda297f close_or_resolve: purge the queriable connections list before figuring out the next step 2025-02-27 19:22:36 +00:00
HoneyryderChuck
35ee625827 fix: in the native resolver, do not fall for the first answer being an alias if the remainder carries IPs
discard alias, use IPs
2025-02-27 19:22:36 +00:00
HoneyryderChuck
210abfb2f5 fix: on the native resolution, do not keep reading from the socket if buffer has data 2025-02-27 19:22:36 +00:00
HoneyryderChuck
53bf6824f8 fix: do not apply the HEv2 resolution delay if the ip was not resolved via DNS
early resolution should trigger immediately
2025-02-27 19:22:36 +00:00
HoneyryderChuck
cb8a97c837 added how to test instructions 2025-02-27 19:22:36 +00:00
HoneyryderChuck
0063ab6093 selector: do not raise conventional error on select timeout when the interval came from a timer
assume that the timer will fire right afterwards, return early
2025-02-27 19:22:36 +00:00
HoneyryderChuck
7811cbf3a7 faraday adapter: use a default reason when none is matched by Net::HTTP::STATUS_CODES
Fixes https://github.com/HoneyryderChuck/httpx/issues/76
2025-02-22 22:28:57 +00:00
HoneyryderChuck
7c21c33999 bump version to 1.4.1 2025-02-18 13:42:44 +00:00
HoneyryderChuck
e45edcbfce linting issue 2025-02-18 12:55:00 +00:00
HoneyryderChuck
7e705dc57e resolver: early exit for closed connections later, after updating addresses (in case they ever get reused) 2025-02-18 12:46:26 +00:00
HoneyryderChuck
dae4364664 fix for incorrect sig of #pin_connection 2025-02-18 12:45:37 +00:00
HoneyryderChuck
8dfd1edf85 suppressing annoying grpc logs where possible 2025-02-18 09:03:05 +00:00
HoneyryderChuck
d2fd20b3ec reassigning current session/selector earlier in the reconnection lifecycle 2025-02-18 09:02:49 +00:00
HoneyryderChuck
28fdbb1a3d one less callback 2025-02-18 09:02:07 +00:00
HoneyryderChuck
23857f196a refactoring attribution of current session and selector
by setting it in select_connection instead
2025-02-18 09:02:01 +00:00
HoneyryderChuck
bf1ef451f2 compose file linting 2025-02-18 08:14:29 +00:00
HoneyryderChuck
d68e98be5a adapted hackernews script to deal with errors 2025-02-18 08:14:20 +00:00
HoneyryderChuck
fd57d72a22 add support in get.rb script for arbitrary url 2025-02-18 08:14:11 +00:00
HoneyryderChuck
a74bd9f397 use different names for happy eyeballs script 2025-02-18 08:14:02 +00:00
HoneyryderChuck
f76be1983b native resolver: fix stalled resolution on multiple requests to multiple origins
continue resolving when an error happens by immediately writing to the buffer afterwards
2025-02-18 08:13:47 +00:00
HoneyryderChuck
86cb30926f rewrote happy eyeballs implementation to not rely on callbacks
each connection will now check on its sibling and whether it's the original connection (containing the initial batch of requests); internal functions are then called to control how connections react to successful or failed resolutions, which reduces code repetition

the handling of coalesced connections is also simplified, as when that happens, the sibling must also be closed. this allowed fixing some mismatches when handling this use case with callbacks
2025-02-18 08:13:35 +00:00
HoneyryderChuck
ed8fafd11d fix: do not schedule deferred HEv2 ipv4 tcp handshake if the connection has already been closed by the sibling connection 2025-02-18 08:12:07 +00:00
HoneyryderChuck
5333def40d Merge branch 'issue-338' into 'master'
IO.copy_stream changes yielded string on subsequent yields

Closes #338

See merge request os85/httpx!369
2025-02-14 00:27:31 +00:00
HoneyryderChuck
ab78e3189e webmock: fix for integrations which require the request to transition state, due to event emission
one of them being the otel plugin, see https://github.com/open-telemetry/opentelemetry-ruby-contrib/pull/1404
2025-02-14 00:16:53 +00:00
HoneyryderChuck
b26313d18e request body: fixed handling of files as request body
there's a bug (reported in https://bugs.ruby-lang.org/issues/21131) with IO.copy_stream, where yielded duped strings still change value on subsequent yields, which breaks http2 framing, as it requires two yields at the same time in the first iteration. this replaces it with #read calls; file handles will now be closed once done streaming, which is a change in behaviour
2025-02-14 00:16:53 +00:00
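The workaround described above can be sketched as follows (a simplified, hypothetical helper, not the gem's actual request-body code): instead of streaming via `IO.copy_stream`, read fixed-size chunks with `#read`, so every yielded chunk is a fresh string that stays stable if buffered, and close the file handle once streaming is done.

```ruby
require "stringio"

# Sketch: read chunks explicitly instead of IO.copy_stream; each #read
# call returns a new string, so retained chunks won't be mutated later.
def each_chunk(io, chunk_size: 16_384)
  while (chunk = io.read(chunk_size))
    yield chunk # a new string per iteration, safe to keep around
  end
ensure
  io.close # file handles are closed once done streaming
end

chunks = []
each_chunk(StringIO.new("a" * 20), chunk_size: 8) { |c| chunks << c }
chunks.size # => 3 (two full 8-byte chunks plus a 4-byte tail)
```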
HoneyryderChuck
2af9bc0626 multipart: force pathname parts to open in binmode 2025-02-13 19:17:14 +00:00
HoneyryderChuck
f573c1c50b transcode: body encoder is now a simple delegator
instead of implementing method missing; this makes it simpler impl-wise, and it'll also make comparing types easier, although not needed ATM
2025-02-13 19:16:45 +00:00
HoneyryderChuck
2d999063fc added tests to reproduce the issue of string changing on IO.copy_stream yield 2025-02-13 19:15:15 +00:00
HoneyryderChuck
1a44b8ea48 Merge branch 'gh-70' into 'master'
datadog plugin fixes

See merge request os85/httpx!364
2025-02-11 00:58:04 +00:00
HoneyryderChuck
8eeafaa008 omit faraday/datadog tests which uncovered a bug 2025-02-11 00:46:18 +00:00
HoneyryderChuck
0ec8e80f0f fixing datadog plugin not sending distributed headers
the headers were being set on the request object after the request was buffered and sent
2025-02-11 00:46:18 +00:00
HoneyryderChuck
f2bca9fcbf altered datadog tests in order to verify the distributed headers from the response body
and not from the request object, which reproduces the bug
2025-02-11 00:46:18 +00:00
HoneyryderChuck
6ca17c47a0 faraday: do not trace when configuration is disabled 2025-02-11 00:46:18 +00:00
HoneyryderChuck
016ed04f61 adding test for integration of datadog on top of faraday backed by httpx 2025-02-11 00:46:18 +00:00
HoneyryderChuck
5b59011a89 moving datadog setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
7548347421 moving faraday setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
43c4cf500e datadog: set port as integer in the port span tag
faraday sets it as a float and it doesn't seem to break because of it
2025-02-11 00:31:13 +00:00
HoneyryderChuck
aecb6f5ddd datadog plugin: fix error callback and general issues
also, made the handler a bit more functional style, which curbs some of the complexity
2025-02-11 00:31:13 +00:00
HoneyryderChuck
6ac3d346b9 Merge branch 'method-redefinition-warnings' into 'master'
Fix two method redefinition warnings

See merge request os85/httpx!367
2025-02-07 10:21:26 +00:00
Earlopain
946f93471c
Fix two method redefinition warnings
```
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/selector.rb:95: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/resolver/system.rb:54: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
```

In selector.rb, the definitions are identical, so I kept the delegator

For system.rb, it always returns true so I kept that one
2025-02-07 09:38:30 +01:00
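The cause of the warnings above can be reproduced with a minimal sketch (hypothetical `TinySelector` class, not httpx code): defining `empty?` both via `def_delegator` and an explicit method makes Ruby discard one definition and warn under `-W`. Keeping only the delegator, as the merge request does for selector.rb, avoids it:

```ruby
require "forwardable"

# Hypothetical minimal class illustrating the fix: empty? is defined
# exactly once, via the delegator, so no "method redefined" warning.
class TinySelector
  extend Forwardable

  def_delegator :@selectables, :empty?

  def initialize
    @selectables = []
  end
end

TinySelector.new.empty? # => true (delegates to Array#empty?)
```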
HoneyryderChuck
f68ff945c1 Merge branch 'issue-335' into 'master'
raise error when httpx is used with an url not starting with http or https schemes

Closes #335

See merge request os85/httpx!366
2025-01-28 09:07:07 +00:00
HoneyryderChuck
9fa9dd5350 raise error when httpx is used with an url not starting with http or https schemes
this was previously done in connection initialization, which means that the request would map to an error response with this error; however, the change to thread-safe pools in 1.4.0 caused a regression, where the uri is expected to have an origin before the connection is established; this is fixed by raising an error on request creation, which will need to be caught by the caller

Fixes #335
2025-01-28 00:36:00 +00:00
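The check described above can be sketched as follows (hypothetical helper and error name, for illustration only): rejecting non-http(s) schemes at request-creation time gives the caller an exception to catch, instead of a request that breaks later during connection establishment.

```ruby
require "uri"

# Hypothetical sketch: validate the scheme when the request is created.
class UnsupportedSchemeError < StandardError; end

def validate_request_uri!(url)
  uri = URI(url)
  unless %w[http https].include?(uri.scheme)
    raise UnsupportedSchemeError, "#{url}: unsupported scheme"
  end
  uri
end

validate_request_uri!("https://example.com") # => URI::HTTPS
# validate_request_uri!("ftp://example.com") raises UnsupportedSchemeError
```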
HoneyryderChuck
1c0cb0185c Merge branch 'issue-333' into 'master'
fix: handle multi goaway frames coming from server

Closes #333

See merge request os85/httpx!362
2025-01-13 13:00:18 +00:00
HoneyryderChuck
2a1338ca5b fix: handle multi goaway frames coming from server
nodejs servers, for example, seem to send them when shutting down servers on timeout; when both arrive in the same buffer, the first correctly closes the parser and emits the message, while the second, because the parser is already closed, will emit an exception; the regression happened because the second exception used to be swallowed by the pool handler, but now that's gone, and errors on connection consumption get handled; this was worked around by clearing the queue when the parser emits the errors for pending requests, so that when the second error comes, there's no request to emit the error for

Closes #333
2025-01-12 00:16:31 +00:00
HoneyryderChuck
cb847f25ad Merge branch 'ruby-34' into 'master'
adding support for ruby 3.4

See merge request os85/httpx!360
2025-01-03 01:37:28 +00:00
HoneyryderChuck
44311d08a5 improve resolver logs to include record family in prefix
also, fixed some of the arithmetic associated with timeout logging
2025-01-02 23:49:01 +00:00
HoneyryderChuck
17003840d3 adding support for ruby 3.4 2025-01-02 23:38:51 +00:00
HoneyryderChuck
a4bebf91bc Merge branch 'chore/avoid-loading-datadog-dogstatsd' into 'master'
Do not load Datadog tracing when dogstatsd is present

See merge request os85/httpx!361
2025-01-02 23:01:07 +00:00
Hieu Nguyen
691215ca6f Do not load Datadog tracing when dogstatsd is present 2024-12-31 18:54:44 +08:00
HoneyryderChuck
999d86ae3e bump version to 1.4.0 2024-12-18 13:22:09 +00:00
HoneyryderChuck
a4c2fb92e7 improving coverage of modules 2024-12-18 11:10:04 +00:00
HoneyryderChuck
66d3a9e00d Merge branch 'improvs' 2024-12-10 15:09:22 +00:00
HoneyryderChuck
e418783ea9 more sig completeness 2024-12-10 15:09:00 +00:00
HoneyryderChuck
36ddd84c85 improve code around consuming request bodies (particularly body_encoder interface) 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f7a5b3ae90 define selector_store sigs 2024-12-10 15:09:00 +00:00
HoneyryderChuck
3afe853517 make #early_resolve return a boolean, instead of undefined across implementations 2024-12-10 15:09:00 +00:00
HoneyryderChuck
853ebd5e36 improve coverage, eliminate dead code 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f820b8cfcb Merge branch 'issue-325' into 'master'
XML plugin

Closes #325

See merge request os85/httpx!358
2024-12-08 13:14:43 +00:00
HoneyryderChuck
062fd5a7f4 reinstate and deprecate HTTPX::Response#xml method 2024-12-08 12:48:47 +00:00
HoneyryderChuck
70bf874f4a adding gem collection
includes nokogiri type sigs
2024-12-08 12:48:47 +00:00
HoneyryderChuck
bf9d847516 moved xml encoding/decoding + APIs into :xml plugin 2024-12-08 12:48:47 +00:00
HoneyryderChuck
d45cae096b fix: do not raise things which are not exceptions
this is a regression from a ractor compatibility commit, which ensured that errors raised while preparing the request / resolving name are caught and raised, but introduced a regression when name resolution retrieves a cached IP; this error only manifested in dual-stack situations, which can't be tested in CI yet

Closes #329
2024-12-07 20:00:40 +00:00
HoneyryderChuck
717b932e01 improved coverage of content digest plugin tests 2024-12-03 09:00:11 +00:00
HoneyryderChuck
da11cb320c Merge branch 'json-suffix' into 'master'
Accept more MIME types with json suffix

Closes #326

See merge request os85/httpx!357
2024-12-03 08:50:07 +00:00
sarna
4bf07e75ac Accept more MIME types with json suffix
Fixes #326 #327
2024-12-03 08:50:07 +00:00
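The "+json" structured-syntax suffix (RFC 6839) lets vendor MIME types such as `application/vnd.api+json` or `application/hal+json` be treated as JSON. A minimal sketch of such a matcher (hypothetical helper, not the gem's actual implementation):

```ruby
# Hypothetical sketch: accept plain application/json plus "+json"
# suffixed subtypes (RFC 6839), e.g. application/vnd.api+json.
JSON_MIME = %r{\bapplication/(?:[\w.+-]+\+)?json\b}i

def json_content?(content_type)
  !!(content_type =~ JSON_MIME)
end

json_content?("application/json; charset=utf-8") # => true
json_content?("application/vnd.api+json")        # => true
json_content?("text/html")                       # => false
```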
HoneyryderChuck
3b52ef3c09 Merge branch 'simpler-selector' into 'master'
:pool option + thread-safe session-owned conn pool

See merge request os85/httpx!348
2024-12-02 14:26:17 +00:00
HoneyryderChuck
ac809d18cc content-digest: set validate_content_digest default to false; do not try to compute content-digest for requests with no body 2024-12-02 13:04:57 +00:00
HoneyryderChuck
85019e5493 Merge branch 'content_digest' into 'master'
Add support for `content-digest` headers (RFC9530)

See merge request os85/httpx!354
2024-12-02 12:37:40 +00:00
David Roetzel
95c1a264ee Add support for content-digest headers (RFC9530)
Closes #323
2024-12-02 12:37:40 +00:00
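RFC 9530 represents the digest as a structured field, with the base64-encoded digest wrapped in colons. A minimal sketch of the header shape (hypothetical helper, not the plugin's actual API):

```ruby
require "digest"

# Hypothetical helper showing the RFC 9530 header value shape:
# content-digest: sha-256=:<base64 of the SHA-256 digest>:
def content_digest(body)
  "sha-256=:#{Digest::SHA256.base64digest(body)}:"
end

content_digest("hello")
# => "sha-256=:LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=:"
```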
HoneyryderChuck
32313ef02e Merge branch 'fix-json-encode-with-oj' into 'master'
Fix incorrect hash key rendering with Oj JSON encoder

Closes #324

See merge request os85/httpx!356
2024-11-29 19:41:40 +00:00
Denis Sadomowski
ed9df06b38 fix rubocop offenses 2024-11-29 18:26:39 +01:00
Denis Sadomowski
b9086f37cf Compat mode for Oj.dump by default 2024-11-29 17:47:30 +01:00
Denis Sadomowski
d3ed551203 revert arguments to json_dump 2024-11-29 17:40:32 +01:00
Denis Sadomowski
1b0e9b49ef Fix incorrect hash key rendering with Oj JSON encoder 2024-11-28 16:19:17 +01:00
HoneyryderChuck
8797434ae7 Merge branch 'fix-hexdigest-on-compressed-bodies' into 'master'
aws sigv4: support calculation of hexdigest on top of compressed bodies in a correct way

See merge request os85/httpx!355
2024-11-27 18:06:39 +00:00
HoneyryderChuck
25c87f3b96 fix: do not try to rewind on bodies which respond to #each
also, raise an error when trying to calculate hexdigest on endless bodies
2024-11-27 17:39:20 +00:00
HoneyryderChuck
26c63a43e0 aws sigv4: support calculation of hexdigest on top of compressed bodies in a more optimal way
before, compressed bodies were yielding chunks and buffering locally (the  variant in this snippet); they were also failing to rewind, due to lack of method (fixed in the last commit); in this change, support is added for bodies which can read and rewind (but do not map to a local path via ), such as compressed bodies, which at this point haven't been buffered yet; the procedure is then to buffer the compressed body into a tempfile, calculate the hexdigest, then rewind the body and move on
2024-11-27 08:55:23 +00:00
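The read-and-rewind part of the procedure described above can be sketched as follows (simplified, hypothetical helper; the real change additionally buffers through a tempfile so the body can still be sent afterwards):

```ruby
require "digest"
require "stringio"

# Simplified sketch: stream the body in chunks into the digest, then
# rewind so the body remains readable when the request is sent.
def hexdigest(body)
  digest = Digest::SHA256.new
  while (chunk = body.read(16_384))
    digest.update(chunk)
  end
  body.rewind
  digest.hexdigest
end

body = StringIO.new("payload")
hexdigest(body) # same value as Digest::SHA256.hexdigest("payload")
body.read       # body is readable again after rewinding
```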
HoneyryderChuck
3217fc03f8 allow deflater bodies to rewind 2024-11-27 08:50:57 +00:00
HoneyryderChuck
b7b63c4460 removing unused bits 2024-11-27 08:50:26 +00:00
HoneyryderChuck
7d8388af28 add test for calculation of hexdigest on top of a compressed body 2024-11-27 08:49:57 +00:00
HoneyryderChuck
a53d7f1e01 raise error happening in request-to-connection paths
but only when the selector is empty, as there'll be nothing to select on, and this would fall into an infinite loop
2024-11-19 12:55:44 +00:00
HoneyryderChuck
c019f1b3a7 removing usage of global unshareable object in default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
594f6056da native resolver: treat tcp handshake errors as resolve errors 2024-11-19 12:55:44 +00:00
HoneyryderChuck
113e9fd4ef moving leftover option proc into private function 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e32d226151 refactor of internal resolver cache lookup access to make it a bit safer 2024-11-19 12:55:44 +00:00
HoneyryderChuck
a3246e506d freezing all default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
ccb22827a2 using find_index/delete_at instead of find/delete 2024-11-19 12:55:44 +00:00
HoneyryderChuck
94e154261b store selectors in thread-local variables
instead of fiber-local storage; this ensures that under fiber-scheduler based engines, like async, requests on the same session with an open selector will reuse the latter, thereby ensuring connection reuse within the same thread

in normal conditions, that'll happen only if the user uses a session object and uses HTTPX::Session#wrap to keep the context open; it'll also work OOTB when using sessions with the  plugin. Otherwise, a new connection will be opened per fiber
2024-11-19 12:55:44 +00:00
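The distinction relied upon above can be shown in a few lines: `Thread#[]` is fiber-local storage, while `Thread#thread_variable_get`/`set` is truly thread-local, so only the latter is visible across fibers of the same thread.

```ruby
# Thread#[] is fiber-local; thread_variable_get/set is thread-local.
Thread.current[:fiber_local] = "outer"
Thread.current.thread_variable_set(:thread_local, "outer")

fiber_local, thread_local = Fiber.new do
  [Thread.current[:fiber_local], Thread.current.thread_variable_get(:thread_local)]
end.resume

fiber_local  # => nil     (not visible inside the new fiber)
thread_local # => "outer" (shared across fibers of the same thread)
```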
HoneyryderChuck
c23561f80c linting... 2024-11-19 12:55:44 +00:00
HoneyryderChuck
681650e9a6 fixed long-standing reenqueue of request in the pending list 2024-11-19 12:55:44 +00:00
HoneyryderChuck
31f0543da2 minor improvement on handling do_init_connection 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5e3daadf9c changing the order of operations handling misdirected requests
because you're reconnecting to the same host, the previous connection is now closed first, in order to avoid a deadlock on the pool where the per-host conns are exhausted, and the new connection can't be initiated because the older one hasn't been checked back in
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6b9a737756 introducing Connection#peer to point to the host to connect to
this eliminates the overuse of Connection#origin, which in the case of proxied connections was broken in the previous commit

the proxy implementation got simpler, despite this large changeset
2024-11-19 12:55:44 +00:00
HoneyryderChuck
1f9dcfb353 implement per-origin connection threshold per pool
defaulting to unbounded, in order to preserve current behaviour; this will cap the number of connections initiated for a given origin for a pool, which if not shared, will be per-origin; this will include connections from separate option profiles

a pool timeout is defined to check out a connection when the limit is reached
2024-11-19 12:55:44 +00:00
HoneyryderChuck
d77e97d31d repositioned empty placeholder hash 2024-11-19 12:55:44 +00:00
HoneyryderChuck
69e7e533de synchronize access to connections in the pool
also fixed the coalescing case where the connection may come from the pool, and should therefore be removed from there and selected/checked back in accordingly as a result
2024-11-19 12:55:44 +00:00
HoneyryderChuck
840bb55ab3 do not return idle (result of either cloning or coalescing) connections back to the pool 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5223d51475 setting the connection pool locally to the session
allowing it to be plugin extended via pool_class and PoolMethods
2024-11-19 12:55:44 +00:00
HoneyryderChuck
8ffa04d4a8 making pool class a plugin extendable class 2024-11-19 12:55:44 +00:00
HoneyryderChuck
4a351bc095 adapted plugins to the new structure 2024-11-19 12:55:44 +00:00
HoneyryderChuck
11d197ff24 changed internal session structure, so that it uses local selectors directly
pools are then used only to fetch new connections; selectors are discarded when not needed anymore; HTTPX.wrap is for now patched, but would ideally be done away with in the future
2024-11-19 12:55:44 +00:00
HoneyryderChuck
12fbca468b rewrote Pool class to act as a connection pool, the way it was intended
this leaves synchronization out ftm
2024-11-19 12:55:44 +00:00
HoneyryderChuck
79d5d16c1b moving session with pool test plugin to override on the session and drop pool changes 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e204bc6df0 passing connections to Pool#next_tick and Pool#next_timeout
refactoring towards not centralizing this information
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6783b378d3 bump version to 1.3.4 2024-11-19 12:53:34 +00:00
HoneyryderChuck
9d7681cb46 Merge branch 'webmock-form-tempfile' into 'master'
Fix webmock integration when posting tempfiles

Closes #320

See merge request os85/httpx!353
2024-11-06 13:58:04 +00:00
HoneyryderChuck
c6139e40db response body: protect against invalid charset in content-type header
Closes https://github.com/HoneyryderChuck/httpx/issues/66
2024-11-06 13:38:19 +00:00
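A minimal sketch of the kind of guard described above (hypothetical helper, not the gem's actual response-body code): look up the charset with `Encoding.find` and fall back to a default when the name is invalid, instead of letting `ArgumentError` propagate.

```ruby
# Hypothetical sketch: tolerate an invalid charset in content-type.
def body_encoding(content_type, default: Encoding::BINARY)
  charset = content_type[/;\s*charset=([^;]+)/i, 1]
  return default unless charset

  Encoding.find(charset.strip)
rescue ArgumentError # unknown encoding name
  default
end

body_encoding("text/html; charset=utf-8")   # => Encoding::UTF_8
body_encoding("text/html; charset=bogus!!") # falls back to the default
```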
Earlopain
a4b95db01c Fix webmock integration when posting tempfiles
The fix is two-fold and also allows them to be retryable

Closes https://gitlab.com/os85/httpx/-/issues/320
2024-11-06 13:27:45 +00:00
HoneyryderChuck
91b9e13cd0 bumped version to 1.3.3 2024-10-31 18:00:12 +00:00
HoneyryderChuck
8d5def5f02 Merge branch 'issue-319' into 'master'
fix for webmock request body expecting a string

Closes #319

See merge request os85/httpx!352
2024-10-31 17:58:42 +00:00
HoneyryderChuck
3e504fb511 fix for webmock request body expecting a string
when building the request signature, the body is preemptively converted to a string, which fulfills the expectation for webmock, despite it being a bit of a perf penalty if the request contains a multipart request body, as the body will be fully read to memory

Closes #319

Closes https://github.com/HoneyryderChuck/httpx/issues/65
2024-10-31 17:47:12 +00:00
HoneyryderChuck
492097d551 bumped version to 1.3.2 2024-10-30 11:50:49 +00:00
HoneyryderChuck
02ed2ae87d raise invalid uri if passed request uri does not contain the host part 2024-10-28 10:40:28 +00:00
HoneyryderChuck
599b6865da removing parentheses from regex 2024-10-25 15:54:04 +01:00
HoneyryderChuck
7c0e776044 coverage must be a regex 2024-10-25 13:58:58 +01:00
HoneyryderChuck
7ea0b32161 fix coverage badge generation 2024-10-25 13:55:51 +01:00
HoneyryderChuck
72b0267598 Merge branch 'issue-317' into 'master'
Support WebMock with form/multipart

Closes #317

See merge request os85/httpx!351
2024-10-25 12:55:25 +00:00
Alexey Romanov
4a966d4cb8 Add a regression test for WebMock with form/multipart 2024-10-25 13:43:12 +01:00
HoneyryderChuck
70f1ffc65d Merge branch 'github-issue-63' into 'master'
Prevent `NoMethodError` in the proxy plugin

See merge request os85/httpx!350
2024-10-21 09:23:50 +00:00
Alexey Romanov
fda0ea8b0e Prevent NoMethodError in the proxy plugin
When:
1. the proxy is autodetected from `http_proxy` etc. variables;
2. a request is made which bypasses the proxy (e.g. to an authority in `no_proxy`);
3. this request fails with one of `Proxy::PROXY_ERRORS` (timeout or a system error)

the `fetch_response` method tried to access the proxy URIs array which
isn't initialized by `proxy_options`. This change fixes the
`proxy_error?` check to avoid the issue.
2024-10-21 10:10:12 +01:00
HoneyryderChuck
2443ded12b update CI test certs 2024-09-27 09:16:06 +01:00
HoneyryderChuck
1db2d00d07 rename get tests 2024-09-06 09:43:25 +01:00
HoneyryderChuck
40b4884d87 bumped version to 1.3.1 2024-08-20 17:20:24 +01:00
HoneyryderChuck
823e7446f4 faraday: do not call on_complete when not defined
by default it's not filled in, but middlewares override it

Closes https://github.com/HoneyryderChuck/httpx/issues/61
2024-08-20 16:55:57 +01:00
HoneyryderChuck
83b4c73b92 protect against coalescing connections on the resolver
these could take connections out of the loop, thereby causing a busy loop, on multiple request scenarios
2024-08-19 16:45:55 +01:00
Diogo Vernier
9844a55205 fix CPU usage loop 2024-08-19 16:45:55 +01:00
HoneyryderChuck
6e1bc89256 Merge branch 'issue-312' into 'master'
allow further extension of the httpx session via faraday config block

Closes #312

See merge request os85/httpx!347
2024-08-19 15:45:41 +00:00
HoneyryderChuck
8ec0765bd7 Merge branch 'max-time' into 'master'
reuse request_timeout on response chains (redirects, retries)

See merge request os85/httpx!345
2024-08-19 15:45:24 +00:00
HoneyryderChuck
6b893872fb allow further extension of the httpx session via faraday config block
Closes #312
2024-08-01 11:41:10 +01:00
HoneyryderChuck
ca8346b193 adding options docs 2024-07-25 16:01:51 +01:00
HoneyryderChuck
7115f0cdce avoid enqueuing requests after a period if the request is over
they may have been closed already, due to a timeout or connection dropping. this condition affects delayed retry or redirect follow requests.
2024-07-25 11:59:02 +01:00
HoneyryderChuck
74fc7bf77d when bubbling up errors in the connection, handle request error directly
instead of expecting it to be contained within it, and therefore handled explicitly. sometimes it may not be.
2024-07-25 11:59:02 +01:00
HoneyryderChuck
002459b9b6 fix: do not generate new connection on 407 check for proxies
instead, look for the correct conn in-session. this way, connections are not leaked with usage
2024-07-25 11:59:02 +01:00
HoneyryderChuck
1ee39870da deactivate connection before deferring a request in the future
this causes busy loops where request is buffered only in the future, and its connection may still be open for readiness probes
2024-07-25 11:59:02 +01:00
HoneyryderChuck
b8db28abd2 make request_timeout reset on returned response, rather than response callback
this makes it not reset on redirect or retried responses, and effectively makes it act as a max-time for individual transactions/requests
2024-07-25 11:59:02 +01:00
HoneyryderChuck
fafe7c140c splatting connections on pool.deactivate call, as per defined sig 2024-07-23 14:48:51 +01:00
HoneyryderChuck
047dc30487 do not use thread variables in mock response test plugin 2024-07-19 12:01:48 +01:00
HoneyryderChuck
7278647688 bump version to 1.3.0 2024-07-10 16:27:24 +01:00
HoneyryderChuck
09fbb32b9a fix: in test, use URI to build uri with ip address, as concatenating fails for IPv6 2024-07-10 16:10:21 +01:00
HoneyryderChuck
4e7ad8fd23 fix: cookies plugin should not make Session#build_request private
Closes #311
2024-07-10 15:52:56 +01:00
HoneyryderChuck
9a3ddfd0e4 change datadog v2 constraint to not test against beta version
Fixes #310
2024-07-10 15:50:14 +01:00
HoneyryderChuck
e250ea5118 Merge branch 'http-2-gem' into 'master'
switch from http-2-next to http-2

See merge request os85/httpx!344
2024-07-08 15:19:37 +00:00
HoneyryderChuck
2689adc390 Merge branch 'request-options' into 'master'
Options improvements

See merge request os85/httpx!324
2024-07-08 15:19:02 +00:00
HoneyryderChuck
ba31204227 switch from http-2-next to http-2
will be merged back to original repo soon
2024-06-28 15:49:58 +01:00
HoneyryderChuck
581b749e89 bumped version to 1.2.6 2024-06-17 10:58:39 +01:00
HoneyryderChuck
7562346357 fix: do not try fetching the retry-after on error responses
Closes #307
2024-06-11 19:09:47 +01:00
HoneyryderChuck
e7aa53365e typing retries #fetch_response 2024-06-11 19:08:44 +01:00
HoneyryderChuck
0b671fa2f9 simplify ErrorResponse by fetching options from the request, like Response 2024-06-11 18:49:18 +01:00
HoneyryderChuck
8b2ee0b466 remove form, json, xml and body from the Options class
Options become a bunch of session and connection level parameters, and requests do not need to maintain a separate Options object when they contain a body anymore; instead, the object is shared with the session, while request-only parameters get passed downwards to the request and its body. This reduces allocations of Options, currently the heaviest object to manage.
2024-06-11 18:23:45 +01:00
HoneyryderChuck
b686119a6f do not try to cast to Options all the time, trust the internal structure 2024-06-11 18:23:12 +01:00
HoneyryderChuck
dcbd2f81e3 change internal buffer fetch using ivar getter 2024-06-11 18:21:54 +01:00
HoneyryderChuck
0fffa98e83 avoid traversing full intervals list, which is ordered by oldest intervals first
by using #drop_while
2024-06-11 18:21:54 +01:00
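Since the intervals list is ordered oldest-first, `#drop_while` can stop at the first interval that hasn't elapsed yet, instead of filtering the whole list. A small sketch (hypothetical deadlines, for illustration):

```ruby
# With intervals sorted ascending, drop_while removes the elapsed ones
# and stops scanning at the first still-pending deadline.
intervals = [1, 3, 5, 8, 13] # hypothetical deadlines, oldest first
now = 5

pending = intervals.drop_while { |deadline| deadline <= now }
pending # => [8, 13]
```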
HoneyryderChuck
08ba389fd6 log more info on read for level 3 2024-06-11 18:21:54 +01:00
HoneyryderChuck
587271ff77 improving sigs 2024-06-11 18:21:22 +01:00
HoneyryderChuck
7062b3c49b Merge branch 'gh-52' into 'master'
native resolver: moved timeouts reset out of idle transition, retry alias

See merge request os85/httpx!342
2024-06-11 16:24:52 +00:00
HoneyryderChuck
b1cec40743 native: retry last tried name for a given DNS query
this prevents a timeout on an alias query from restarting the resolution from scratch with the original hostname
2024-06-11 16:36:36 +01:00
HoneyryderChuck
2d6fde2e5d downgrade to udp when retrying dns queries 2024-06-11 16:27:51 +01:00
HoneyryderChuck
3a3188efff adding a log msg when transitioning to resolving an alias 2024-06-11 16:27:51 +01:00
HoneyryderChuck
7928624639 native resolver: moved timeouts reset out of idle transition
in order to reuse the idle transition in other situations, and also because the only case where resetting timeouts is required is when retrying on a separate nameserver
2024-06-11 16:27:51 +01:00
HoneyryderChuck
d61df6d84f fixing resolver options extension on tests (although it wasn't breaking anything) 2024-06-11 16:27:51 +01:00
HoneyryderChuck
c388d8ec9a slow dns server: support for single hostname slowness 2024-06-11 16:27:51 +01:00
HoneyryderChuck
ad02ad5327 test dns server for tcp queries 2024-06-11 16:27:51 +01:00
HoneyryderChuck
af6ce5dca4 fixing redirect_on sig 2024-06-09 19:40:28 +01:00
HoneyryderChuck
68dd8e223f Merge branch 'gh-53' into 'master'
remove body-related headers on POST-to-GET redirects

See merge request os85/httpx!343
2024-06-09 15:06:21 +00:00
HoneyryderChuck
d9fbd5194e fixup! adding tests for POST-to-GET redirection, both for 307 and not 2024-06-05 18:09:13 +01:00
HoneyryderChuck
0ba7112a9f remove body-related headers on POST-to-GET redirects
except in the case where method and body must be preserved on redirects, as in 307 case
2024-06-05 17:58:05 +01:00
HoneyryderChuck
0c262bc19d adding tests for POST-to-GET redirection, both for 307 and not 2024-06-05 17:58:05 +01:00
HoneyryderChuck
b03a46d25e Merge branch 'gh-54' into 'master'
set options from env on the request

See merge request os85/httpx!341
2024-06-05 12:51:04 +00:00
HoneyryderChuck
69f58bc358 lock ffi for older ruby 2024-06-04 11:20:04 +01:00
HoneyryderChuck
41c1aace80 set options from env on the request
faraday users may use the yielded block to set different options per request
2024-06-03 18:11:04 +01:00
HoneyryderChuck
423f05173c bump version to 1.2.5 2024-05-14 15:24:58 +01:00
HoneyryderChuck
d82008ddcf Merge branch 'fix-stream-plugin' into 'master'
stream plugin: reverted back to yielding buffered payloads for streamed responses

See merge request os85/httpx!340
2024-05-14 14:21:09 +00:00
HoneyryderChuck
19f46574cb reduce payload size in timeout test 2024-05-14 15:04:10 +01:00
HoneyryderChuck
713887cf08 reordered connection init in case the uri is not an HTTP uri 2024-05-13 18:10:19 +01:00
HoneyryderChuck
a3cfcc71ec stream plugin: reverted back to yielding buffered payloads for streamed responses
the bug this was removed for no longer seems to depend on this behaviour, and this at least allows the down integration to not change significantly.
2024-05-13 18:10:19 +01:00
HoneyryderChuck
0f431500c0 Merge branch 'gh-47' into 'master'
response cache plugin: fix to use correct last-modified header

See merge request os85/httpx!339
2024-05-02 16:35:20 +00:00
HoneyryderChuck
9d03dab83d missing require for uri lib 2024-05-02 17:22:29 +01:00
HoneyryderChuck
7e7c06597a upgrade test datadog to v2 beta 2024-05-02 17:02:37 +01:00
HoneyryderChuck
83157412e7 response cache plugin: merge headers from cached response
some are required for other features, such as the convenience decoding methods

Fixes https://github.com/HoneyryderChuck/httpx/issues/47
2024-05-02 16:58:16 +01:00
HoneyryderChuck
461dac06d5 response cache plugin: fix to use correct last-modified header
Fixes https://github.com/HoneyryderChuck/httpx/issues/49
2024-05-02 16:57:13 +01:00
HoneyryderChuck
d60cfb7e44 bumped version to 1.2.4 2024-04-02 15:59:34 +01:00
HoneyryderChuck
20c8dde9ef fixed usage of String#split when forming key/value pairs
it is sometimes used to parse values that contain = characters, such as base64 strings
2024-04-02 09:39:52 +01:00
HoneyryderChuck
594640c10c removed irb call left behind... 2024-03-25 09:47:28 +00:00
HoneyryderChuck
1f7a251925 updated datadog 2.0 prerelease tag 2024-03-22 18:49:08 +00:00
HoneyryderChuck
7ab251f755 Merge branch 'issue-305' into 'master'
fix: datadog not generating new span on retried requests

Closes #305

See merge request os85/httpx!337
2024-03-22 14:42:53 +00:00
HoneyryderChuck
3d9779cc63 adapt to datadog gem upcoming changes
names changes from ddtrace to datadog, as well as namespace
2024-03-22 13:44:16 +00:00
HoneyryderChuck
b234465219 ci: show bundler logs 2024-03-22 12:51:15 +00:00
HoneyryderChuck
51a8b508ac fix: datadog not generating new span on retried requests
spans initiation gate wasn't being reset in the case of retries, which
reuse the same object; a redesign was done to ensure the span initiates
before really sending the request, is reused when the request object is
reset and reused, and when the error happens outside of the request
transaction, such as during name resolution.
2024-03-22 12:51:15 +00:00
HoneyryderChuck
b86529655f Merge branch 'gh-43' into 'master'
fix: recover from connection lost leaving process hanging on persistent connections

See merge request os85/httpx!335
2024-03-17 10:10:43 +00:00
HoneyryderChuck
4434daa5ea fix: recover from connection lost leaving process hanging on persistent connections
the recovery model for long-running connections is to mark requests as pending, ping the connection to fill the write buffer, and move on. since the last changes which improved connection object reuse, the way the procedures were stacked created a conundrum, where the inactive connection would move to idle before being activated, so it'd never go back to the connection pool; this switches the operations, so an inactive connection activates first and is picked up by the pool, before ping-and-reconnect happens
2024-03-15 15:39:57 +00:00
HoneyryderChuck
dec17e8d85 Merge branch 'issue-304' into 'master'
allows for returning buffering to error response on loop error

Closes #304

See merge request os85/httpx!334
2024-03-14 14:10:23 +00:00
HoneyryderChuck
c6a63b55a9 allows for returning buffering to error response on loop error
in some situations, on unexpected loop errors, the read buffer may still contain response bytes which couldn't be buffered to the error response after the error propagated; this makes it possible by delegating the bytes to the wrapped response
2024-03-11 23:07:05 +00:00
Tony Hsu
be5a91ce2e ddtrace 2.0 changes 2024-03-11 22:46:42 +00:00
HoneyryderChuck
c4445074ad bump version to 1.2.3 2024-03-04 11:59:40 +00:00
HoneyryderChuck
b1146b9f55 Merge branch 'master' of gitlab.com:os85/httpx 2024-03-01 16:53:33 +00:00
HoneyryderChuck
78d67cd364 wrong ruby engine cond 2024-02-29 15:58:42 +00:00
HoneyryderChuck
2fbec7ab6a Merge branch 'issue-296' into 'master'
elapsing timeouts: guard against mutation of callbacks while looping

Closes #296

See merge request os85/httpx!329
2024-02-29 14:48:00 +00:00
HoneyryderChuck
fbfd17351f disable ssh proxy tests for truffleruby as well 2024-02-29 14:47:54 +00:00
HoneyryderChuck
3c914f741d remove unused var 2024-02-29 14:15:36 +00:00
HoneyryderChuck
ad14df6a7a Merge branch 'issue-287' into 'master'
native resolver will cleanly go from tcp to udp on CNAME resolution

Closes #287

See merge request os85/httpx!331
2024-02-29 14:08:03 +00:00
HoneyryderChuck
cf43257006 documenting delegated methods in Response 2024-02-28 14:38:01 +00:00
Mostafa Dahab
06076fc908 Allow zero max retries 2024-02-28 14:37:38 +00:00
HoneyryderChuck
d5c9a518d8 Merge branch 'github-pr-41' into 'master'
datadog: do not set lazy for newer versions (deprecated)

See merge request os85/httpx!332
2024-02-28 13:42:18 +00:00
HoneyryderChuck
d5eee7f2d1 Merge branch 'issue-299' into 'master'
fix for not allowing default oauth auth method when setting grant type and scope

Closes #299

See merge request os85/httpx!330
2024-02-27 21:20:35 +00:00
HoneyryderChuck
ab51dcbbc1 datadog: do not set lazy for newer versions (deprecated) 2024-02-27 12:23:34 +00:00
HoneyryderChuck
8982dc0fe4 remove regression test 0.19.3
peers used for the test changed their TLS certificate config, and I can't find replacement peers.
2024-02-27 11:40:47 +00:00
HoneyryderChuck
8e3d5f4094 fix: native resolver will cleanly go from tcp to udp on CNAME resolution
if a CNAME came in a tcp dns response, the follow-up dns query would be erased and never performed; this fixes it by keeping the buffer state when falling back to udp
2024-02-26 18:11:24 +00:00
HoneyryderChuck
77006fd0c9 fix for not allowing default oauth auth method when setting grant type and scope
the oauth plugin already documents defaulting to client_secret_basic
2024-02-26 16:33:09 +00:00
HoneyryderChuck
bab19efcfe fix: make sure happy eyeballs cloned connections set the session callbacks
fixed an issue where a 421 response would not call the misdirected callback: it wasn't being re-set in the cloned connection, so it would never be called, and the connection would hang...
2024-02-25 23:24:30 +00:00
HoneyryderChuck
f1bccaae2e elapsing timeouts: guard against mutation of callbacks while looping
triggering timer callbacks may call Connection#consume, which may trigger the interval cleanup process of the timer callback. this does not usually happen, but it can happen in the context of multiple requests to the same host using the expect plugin
2024-02-09 14:46:27 +00:00
284 changed files with 8839 additions and 3362 deletions

4
.gitignore vendored
View File

@ -15,4 +15,6 @@ tmp
public public
build build
.sass-cache .sass-cache
wiki wiki
.gem_rbs_collection/
rbs_collection.lock.yaml

View File

@ -8,7 +8,7 @@ image:
name: docker/compose:latest name: docker/compose:latest
variables: variables:
# this variable enables caching withing docker-in-docker # this variable enables caching within docker-in-docker
# https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor
MOUNT_POINT: /builds/$CI_PROJECT_PATH/vendor MOUNT_POINT: /builds/$CI_PROJECT_PATH/vendor
# bundler-specific # bundler-specific
@ -39,7 +39,7 @@ cache:
- vendor - vendor
lint rubocop code: lint rubocop code:
image: "ruby:3.3" image: "ruby:3.4"
variables: variables:
BUNDLE_WITHOUT: test:coverage:assorted BUNDLE_WITHOUT: test:coverage:assorted
before_script: before_script:
@ -47,7 +47,7 @@ lint rubocop code:
script: script:
- bundle exec rake rubocop - bundle exec rake rubocop
lint rubocop wiki: lint rubocop wiki:
image: "ruby:3.3" image: "ruby:3.4"
rules: rules:
- if: $CI_PIPELINE_SOURCE == "schedule" - if: $CI_PIPELINE_SOURCE == "schedule"
variables: variables:
@ -61,7 +61,7 @@ lint rubocop wiki:
- rubocop-md - rubocop-md
AllCops: AllCops:
TargetRubyVersion: 3.3 TargetRubyVersion: 3.4
DisabledByDefault: true DisabledByDefault: true
FILE FILE
script: script:
@ -90,25 +90,28 @@ test ruby 3/1:
./spec.sh ruby 3.1 ./spec.sh ruby 3.1
test ruby 3/2: test ruby 3/2:
<<: *test_settings <<: *test_settings
<<: *yjit_matrix
script: script:
./spec.sh ruby 3.2 ./spec.sh ruby 3.2
test ruby 3/3: test ruby 3/3:
<<: *test_settings <<: *test_settings
<<: *yjit_matrix
script: script:
./spec.sh ruby 3.3 ./spec.sh ruby 3.3
test ruby 3/4:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.4
test truffleruby: test truffleruby:
<<: *test_settings <<: *test_settings
script: script:
./spec.sh truffleruby latest ./spec.sh truffleruby latest
allow_failure: true allow_failure: true
regression tests: regression tests:
image: "ruby:3.3" image: "ruby:3.4"
variables: variables:
BUNDLE_WITHOUT: lint:assorted BUNDLE_WITHOUT: lint:assorted
CI: 1 CI: 1
COVERAGE_KEY: "$RUBY_ENGINE-$RUBY_VERSION-regression-tests" COVERAGE_KEY: "ruby-3.4-regression-tests"
artifacts: artifacts:
paths: paths:
- coverage/ - coverage/
@ -120,12 +123,12 @@ regression tests:
- bundle exec rake regression_tests - bundle exec rake regression_tests
coverage: coverage:
coverage: '/\(\d+.\d+\%\) covered/' coverage: '/Coverage: \d+.\d+\%/'
stage: prepare stage: prepare
variables: variables:
BUNDLE_WITHOUT: lint:test:assorted BUNDLE_WITHOUT: lint:test:assorted
image: "ruby:3.3" image: "ruby:3.4"
script: script:
- gem install simplecov --no-doc - gem install simplecov --no-doc
# this is a workaround, because simplecov doesn't support relative paths. # this is a workaround, because simplecov doesn't support relative paths.
@ -147,7 +150,7 @@ pages:
stage: deploy stage: deploy
needs: needs:
- coverage - coverage
image: "ruby:3.3" image: "ruby:3.4"
before_script: before_script:
- gem install hanna-nouveau - gem install hanna-nouveau
script: script:

View File

@ -92,6 +92,10 @@ Style/GlobalVars:
Exclude: Exclude:
- lib/httpx/plugins/internal_telemetry.rb - lib/httpx/plugins/internal_telemetry.rb
Style/CommentedKeyword:
Exclude:
- integration_tests/faraday_datadog_test.rb
Style/RedundantBegin: Style/RedundantBegin:
Enabled: false Enabled: false
@ -176,3 +180,7 @@ Performance/StringIdentifierArgument:
Style/Lambda: Style/Lambda:
Enabled: false Enabled: false
Style/TrivialAccessors:
Exclude:
- 'test/pool_test.rb'

15
Gemfile
View File

@ -8,7 +8,11 @@ gemspec
gem "rake", "~> 13.0" gem "rake", "~> 13.0"
group :test do group :test do
gem "ddtrace" if RUBY_VERSION >= "3.2.0"
gem "datadog", "~> 2.0"
else
gem "ddtrace"
end
gem "http-form_data", ">= 2.0.0" gem "http-form_data", ">= 2.0.0"
gem "minitest" gem "minitest"
gem "minitest-proveit" gem "minitest-proveit"
@ -32,6 +36,11 @@ group :test do
gem "rbs" gem "rbs"
gem "yajl-ruby", require: false gem "yajl-ruby", require: false
end end
if RUBY_VERSION >= "3.4.0"
# TODO: remove this once websocket-driver-ruby declares this as dependency
gem "base64"
end
end end
platform :mri, :truffleruby do platform :mri, :truffleruby do
@ -48,12 +57,16 @@ group :test do
gem "aws-sdk-s3" gem "aws-sdk-s3"
gem "faraday" gem "faraday"
gem "faraday-multipart"
gem "idnx" gem "idnx"
gem "oga" gem "oga"
gem "webrick" if RUBY_VERSION >= "3.0.0" gem "webrick" if RUBY_VERSION >= "3.0.0"
# https://github.com/TwP/logging/issues/247 # https://github.com/TwP/logging/issues/247
gem "syslog" if RUBY_VERSION >= "3.3.0" gem "syslog" if RUBY_VERSION >= "3.3.0"
# https://github.com/ffi/ffi/issues/1103
# ruby 2.7 only, it seems
gem "ffi", "< 1.17.0" if Gem::VERSION < "3.3.22"
end end
group :lint do group :lint do

View File

@ -157,7 +157,6 @@ All Rubies greater or equal to 2.7, and always latest JRuby and Truffleruby.
* Discuss your contribution in an issue * Discuss your contribution in an issue
* Fork it * Fork it
* Make your changes, add some tests * Make your changes, add some tests (follow the instructions from [here](test/README.md))
* Ensure all tests pass (`docker-compose -f docker-compose.yml -f docker-compose-ruby-{RUBY_VERSION}.yml run httpx bundle exec rake test`)
* Open a Merge Request (that's Pull Request in Github-ish) * Open a Merge Request (that's Pull Request in Github-ish)
* Wait for feedback * Wait for feedback

View File

@ -0,0 +1,16 @@
# 1.2.3
## Improvements
* `:retries` plugin: allow `:max_retries` set to 0 (allows for a soft disable of retries when using the faraday adapter).
## Bugfixes
* `:oauth` plugin: fix for default auth method being ignored when setting grant type and scope as options only.
* ensure happy eyeballs-initiated cloned connections also set session callbacks (caused issues when server would respond with a 421 response, an event requiring a valid internal callback).
* native resolver cleanly transitions from tcp to udp after truncated DNS query (causing issues on follow-up CNAME resolution).
* elapsing timeouts now guard against mutation of callbacks while looping (prevents skipping callbacks in situations where a previous one would remove itself from the collection).
## Chore
* datadog adapter: do not call `.lazy` on options (avoids deprecation warning, to be removed in ddtrace 2.0)

View File

@ -0,0 +1,8 @@
# 1.2.4
## Bugfixes
* fixed issue related to inability to buffer payload to error responses (which may happen on certain error handling situations).
* fixed recovery from a lost persistent connection leaving the process hanging, due to the ping being sent while the connection was still marked as inactive.
* fixed datadog integration, which was not generating new spans on retried requests (when `:retries` plugin is enabled).
* fixed splitting strings into key/value pairs in cases where the value contains a "=", such as in certain base64 payloads.
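The `String#split` pitfall behind this fix can be shown with plain Ruby (the pair below is illustrative):

```ruby
# Splitting "key=value" pairs must cap the number of fields at 2;
# otherwise values that themselves contain "=" (e.g. base64 padding)
# get truncated.
pair = "token=dGVzdA=="

k, v = pair.split("=")     # drops the trailing "==" padding
p [k, v]                   # => ["token", "dGVzdA"]

k, v = pair.split("=", 2)  # preserves the full value
p [k, v]                   # => ["token", "dGVzdA=="]
```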

View File

@ -0,0 +1,7 @@
# 1.2.5
## Bugfixes
* fix for usage of correct `last-modified` header in `response_cache` plugin.
* fix usage of decoding helper methods (i.e. `response.json`) with `response_cache` plugin.
* `stream` plugin: reverted back to yielding buffered payloads for streamed responses (broke `down` integration)

View File

@ -0,0 +1,13 @@
# 1.2.6
## Improvements
* `native` resolver: when timing out on DNS query for an alias, retry the DNS query for the alias (instead of the original hostname).
## Bugfixes
* `faraday` adapter: set `env` options on the request object, so they are available when the request is yielded.
* `follow_redirects` plugin: remove body-related headers (`content-length`, `content-type`) on POST-to-GET redirects.
* `follow_redirects` plugin: maintain verb (and body) of original request when the response status code is 307.
* `native` resolver: when timing out on TCP-based name resolution, downgrade to UDP before retrying.
* `rate_limiter` plugin: do not try fetching the retry-after of error responses.
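For context on the `retry-after` handling above: the header may carry either delta-seconds or an HTTP-date (RFC 9110). A minimal parsing sketch using only the stdlib (the helper name is hypothetical, not `httpx` API):

```ruby
require "time"

# hypothetical helper: returns the number of seconds to wait,
# whether the header value is delta-seconds or an HTTP-date
def retry_after_seconds(value, now: Time.now)
  if value.match?(/\A\d+\z/)
    Integer(value)
  else
    [Time.httpdate(value) - now, 0].max
  end
end

retry_after_seconds("120") # => 120
```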

View File

@ -0,0 +1,18 @@
# 1.3.0
## Dependencies
`http-2` v1.0.0 is replacing `http-2-next` as the HTTP/2 parser.
`http-2-next` was forked from `http-2` 5 years ago; its improvements have recently been merged back into `http-2`, so `http-2-next` will no longer be maintained.
## Improvements
Request-specific options (`:params`, `:form`, `:json` and `:xml`) are now separately kept by the request, which allows them to share `HTTPX::Options`, and reduce the number of copying / allocations.
This means that `HTTPX::Options` will raise an error if you initialize an object with such keys; this should not happen, as this class is considered internal and you should not be using it directly.
## Fixes
* support for the `datadog` gem v2.0.0 in its adapter has been unblocked, now that the gem has been released.
* loading the `:cookies` plugin was making `Session#build_request` private.

View File

@ -0,0 +1,17 @@
# 1.3.1
## Improvements
* `:request_timeout` will be applied to all HTTP interactions until the final responses returned to the caller. That includes:
* all redirect requests/responses (when using the `:follow_redirects` plugin)
* all retried requests/responses (when using the `:retries` plugin)
* intermediate requests (such as "100-continue")
* faraday adapter: allow adding further plugins to the internal session (ex: `builder.adapter(:httpx) { |sess| sess.plugin(:follow_redirects) }...`)
## Bugfixes
* fix connection leak on proxy auth failed (407) handling
* fix busy loop on deferred requests for the duration interval
* do not further enqueue deferred requests if they have terminated meanwhile.
* fix busy loop caused by coalescing connections when one of them is still in the DNS resolution phase.
* faraday adapter: on parallel mode, skip calling `on_complete` when not defined.

View File

@ -0,0 +1,6 @@
# 1.3.2
## Bugfixes
* Prevent `NoMethodError` in an edge case when the `:proxy` plugin is autoloaded via env vars, the webmock adapter is used in tandem, and a real request fails.
* raise invalid uri error if passed request uri does not contain the host part (ex: `"https:/get"`)

View File

@ -0,0 +1,5 @@
# 1.3.3
## Bugfixes
* fixing a regression introduced in 1.3.2 associated with the webmock adapter, which expects matchable request bodies to be strings

View File

@ -0,0 +1,6 @@
# 1.3.4
## Bugfixes
* webmock adapter: fix tempfile usage in multipart requests.
* fix: fall back to binary encoding when parsing an invalid charset in an incoming HTTP "content-type" header.

View File

@ -0,0 +1,43 @@
# 1.4.0
## Features
### `:content_digest` plugin
The `:content_digest` can be used to calculate the digest of request payloads and set them in the `"content-digest"` header; it can also validate the integrity of responses which declare the same `"content-digest"` header.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Content-Digest
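The header format follows the RFC 9530 structured-field notation; a minimal sketch of how such a digest value can be computed with the stdlib (the helper is illustrative, not the plugin's actual API):

```ruby
require "digest"

# illustrative helper: builds a "content-digest" header value
# for a request body, using the sha-256 algorithm
def content_digest(body)
  "sha-256=:#{Digest::SHA256.base64digest(body)}:"
end

content_digest("") # => "sha-256=:47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=:"
```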
## Per-session connection pools
This architectural change moves away from per-thread shared connection pools, and into per-session (also thread-safe) connection pools. Unlike before, this enables connections from a session to be reused across threads, as well as limiting the number of connections that can be open to a given origin peer. This fixes long-standing issues, such as reusing connections under a fiber scheduler loop (such as the one from the `async` gem).
A new `:pool_options` option is introduced, which can be passed a hash with the following sub-options:
* `:max_connections_per_origin`: maximum number of connections a pool allows (unbounded by default, for backwards compatibility).
* `:pool_timeout`: the number of seconds a session will wait for a connection to be checked out (default: 5)
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools
## Improvements
* `:aws_sigv4` plugin: improved digest calculation on compressed request bodies by buffering content to a tempfile.
* `HTTPX::Response#json` will parse payload from extended json MIME types (like `application/ld+json`, `application/hal+json`, ...).
## Bugfixes
* `:aws_sigv4` plugin: do not try to rewind a request body which yields chunks.
* fixed request encoding when `:json` param is passed, and the `oj` gem is used (by using the `:compat` flag).
* native resolver: on message truncation, bubble up tcp handshake errors as resolve errors.
* allow `HTTPX::Response#json` to accept extended JSON mime types (such as responses with `content-type: application/ld+json`)
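"Extended" JSON MIME types follow the `+json` structured-syntax suffix convention (RFC 6839); a regexp sketch of the matching logic (the pattern is illustrative, not the one used internally):

```ruby
# matches "application/json" as well as suffixed types
# like "application/ld+json" or "application/hal+json"
JSON_MIME = %r{\bapplication/(?:[\w.-]+\+)?json\b}i

JSON_MIME.match?("application/json")     # => true
JSON_MIME.match?("application/ld+json")  # => true
JSON_MIME.match?("text/html")            # => false
```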
## Chore
* default options are now fully frozen (in case anyone relies on overriding them).
### `:xml` plugin
XML encoding/decoding (via `:xml` request param, and `HTTPX::Response#xml`) is now available via the `:xml` plugin.
Using `HTTPX::Response#xml` without the plugin will issue a deprecation warning.

View File

@ -0,0 +1,19 @@
# 1.4.1
## Bugfixes
* several `datadog` integration bugfixes
* only load the `datadog` integration when the `datadog` sdk is loaded (and not other gems that may define the `Datadog` module, like `dogstatsd`)
* do not trace if datadog integration is loaded but disabled
* distributed headers are now sent along (when the configuration is enabled, which it is by default)
* fix for handling multiple `GOAWAY` frames coming from the server (node.js servers seem to send multiple frames on connection timeout)
* fix regression for when a url is used with `httpx` which is not `http://` or `https://` (should raise `HTTPX::UnsupportedSchemaError`)
* worked around `IO.copy_stream` emitting incorrect bytes for HTTP/2 requests with bodies larger than the maximum supported frame size.
* multipart requests: make sure that a body declared as `Pathname` is opened for reading in binary mode.
* `webmock` integration: ensure that request events are emitted (for plugins and integrations relying on them, such as `datadog` and the OTel integration)
* native resolver: do not propagate successful name resolutions for connections which were already closed.
* native resolver: fixed name resolution stalling, in a multi-request to multi-origin scenario, when a resolution timeout would happen.
## Chore
* refactor of the happy eyeballs and connection coalescing logic to not rely on callbacks, and instead on instance variable management (makes code more straightforward to read).

View File

@ -0,0 +1,20 @@
# 1.4.2
## Bugfixes
* faraday: use default reason when none is matched by Net::HTTP::STATUS_CODES
* native resolver: keep sending DNS queries if the socket is available, to avoid busy loops on select
* native resolver fixes for Happy Eyeballs v2
* do not apply resolution delay if the IPv4 IP was not resolved via DNS
* ignore ALIAS if DNS response carries IP answers
* do not try to query for names already awaiting answer from the resolver
* make sure all types of errors are propagated to connections
* make sure next candidate is picked up if receiving NX_DOMAIN_NOT_FOUND error from resolver
* raise error happening before any request is flushed to respective connections (avoids loop on non-actionable selector termination).
* fix "NoMethodError: undefined method `after' for nil:NilClass", happening for requests flushed into persistent connections which errored, and were retried in a different connection before triggering the timeout callbacks from the previously-closed connection.
## Chore
* Refactor of timers to allow for explicit and more performant single timer interval cancellation.
* default log message restructured to include info about process, thread and caller.

View File

@ -0,0 +1,11 @@
# 1.4.3
## Bugfixes
* `webmock` adapter: reassign headers to signature after callbacks are called (these may change the headers before virtual send).
* do not close request (and its body) right after sending, instead only on response close
* prevents retries from failing under the `:retries` plugin
* fixes issue when using `faraday-multipart` request bodies
* retry request with HTTP/1 when receiving an HTTP/2 GOAWAY frame with `HTTP_1_1_REQUIRED` error code.
* fix wrong method call on HTTP/2 PING frame with unrecognized code.
* fix EOFError issues on connection termination for long running connections which may have already been terminated by peer and were wrongly trying to complete the HTTP/2 termination handshake.

View File

@ -0,0 +1,14 @@
# 1.4.4
## Improvements
* `:stream` plugin: responses will now be partially buffered, so that e.g. the response status or headers can be inspected before the full response body is buffered
* this fixes an issue in the `down` gem integration when used with the `:max_size` option.
* do not unnecessarily probe for connection liveness if no more requests are inflight, including failed ones.
* when using persistent connections, do not probe for liveness right after reconnecting after a keep alive timeout.
## Bugfixes
* `:persistent` plugin: do not exhaust retry attempts when probing for (and failing) connection liveness.
* since the introduction of per-session connection pools, and consequently the possibility of multiple inactive connections for the same origin sitting in the pool after being terminated by the peer server, requests could fail before being able to establish a new connection.
* prevent retrying to connect the TCP socket object when an SSLSocket object is already in place and connecting.

126
doc/release_notes/1_5_0.md Normal file
View File

@ -0,0 +1,126 @@
# 1.5.0
## Features
### `:stream_bidi` plugin
The `:stream_bidi` plugin enables bidirectional streaming support (an HTTP/2 only feature!). It builds on top of the `:stream` plugin, and uses its block-based syntax to process incoming frames, while allowing the user to pipe more data to the request (from the same, or another thread/fiber).
```ruby
http = HTTPX.plugin(:stream_bidi)
request = http.build_request(
"POST",
"https://your-origin.com/stream",
headers: { "content-type" => "application/x-ndjson" },
body: ["{\"message\":\"started\"}\n"]
)
chunks = []
response = http.request(request, stream: true)
Thread.start do
response.each do |chunk|
handle_data(chunk)
end
end
# now send data...
request << "{\"message\":\"foo\"}\n"
request << "{\"message\":\"bar\"}\n"
# ...
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Stream-Bidi
### `:query` plugin
The `:query` plugin adds public methods supporting the `QUERY` HTTP verb:
```ruby
http = HTTPX.plugin(:query)
http.query("https://example.com/gquery", body: "foo=bar") # QUERY /gquery ....
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Query
This functionality was added as a plugin for explicit opt-in, as it's experimental (the RFC for the new HTTP verb is still in draft).
### `:response_cache` plugin filesystem based store
The `:response_cache` plugin supports setting the filesystem as the response cache store (instead of just storing them in memory, which is the default `:store`).
```ruby
# cache store in the filesystem, writes to the temporary directory from the OS
http = HTTPX.plugin(:response_cache, response_cache_store: :file_store)
# if you want a separate location
http = HTTPX.plugin(:response_cache).with(response_cache_store: HTTPX::Plugins::ResponseCache::FileStore.new("/path/to/dir"))
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Response-Cache#:file_store
### `:close_on_fork` option
A new `:close_on_fork` option can be used to ensure that a session object which may have open connections will not leak them in case the process is forked (as can be the case with `:persistent` plugin-enabled sessions which have had usage before the fork):
```ruby
http = HTTPX.plugin(:persistent, close_on_fork: true)
# http may have open connections here
fork do
# http has no connections here
end
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools#Fork-Safety .
### `:debug_redact` option
The `:debug_redact` option will, when enabled, replace parts of the debug logs (enabled via `:debug` and `:debug_level` options) which may contain sensitive information, with the `"[REDACTED]"` placeholder.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Debugging .
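A sketch of the redaction idea in plain Ruby (the header list and helper are assumptions for illustration; the plugin's internals differ):

```ruby
# mask the values of well-known sensitive headers in a debug log line
SENSITIVE_HEADERS = %w[authorization proxy-authorization cookie set-cookie].freeze

def redact_line(line)
  line.sub(/\A(#{SENSITIVE_HEADERS.join("|")}):\s*.+\z/i) { "#{$1}: [REDACTED]" }
end

redact_line("authorization: Bearer s3cr3t") # => "authorization: [REDACTED]"
redact_line("accept: */*")                  # => "accept: */*"
```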
### `:max_connections` pool option
A new `:max_connections` pool option (settable under `:pool_options`) can be used to define the **overall** maximum number of connections for a pool ("in-transit" or "at-rest"); it complements, and when used supersedes, the already existing `:max_connections_per_origin`, which does the same per connection origin.
```ruby
HTTPX.with(pool_options: { max_connections: 100 })
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools .
### Subplugins
An enhancement to the plugins architecture, it allows plugins to define submodules ("subplugins") which are loaded if another plugin is in use, or is loaded afterwards.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Custom-Plugins#Subplugins .
## Improvements
* `:persistent` plugin: several improvements around reconnections on failure:
* reconnections will only happen for "connection broken" errors (and will discard reconnection on timeouts)
* reconnections won't exhaust retries
* `:response_cache` plugin: several improvements:
* return cached response if not stale, send conditional request otherwise (it was always doing the latter).
* consider immutable (i.e. `"Cache-Control: immutable"`) responses as never stale.
* `:datadog` adapter: decorate spans with more tags (header, kind, component, etc...)
* timers operations have been improved to use more efficient algorithms and reduce object creation.
## Bugfixes
* ensure that setting request timeouts happens before the request is buffered (the latter could trigger a state transition required by the former).
* `:response_cache` plugin: fix `"Vary"` header handling by supporting a new plugin option, `:supported_vary_headers`, which defines which headers are taken into account for cache key calculation.
* fixed query string encoded value when passed an empty hash to the `:query` param and the URL already contains query string.
* `:callbacks` plugin: ensure the callbacks from a session are copied when a new session is derived from it (via a `.plugin` call, for example).
* `:callbacks` plugin: errors raised from hostname resolution should bubble up to user code.
* fixed connection coalescing selector monitoring in cases where the coalescable connection is cloned; other branches were simplified along the way.
* clear the connection write buffer in corner cases where the remaining bytes may be interpreted as a GOAWAY handshake frame (and may cause unintended writes to connections already identified as broken).
* remove idle connections from the selector when an error happens before the state changes (this may happen if the thread is interrupted during name resolution).
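The `"Vary"` fix above hinges on the cache key: it must be computed from request headers *before* the response (and its `Vary` header) is known, so only a pre-agreed `:supported_vary_headers` set can participate. A toy sketch of that idea (an assumption for illustration, not httpx's internals):

```ruby
require "digest"

# Only headers in this list contribute to the cache key; anything else
# (e.g. user-agent) is ignored, so two requests differing only in
# unsupported headers share a cache entry.
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, uri, request_headers)
  vary_part = SUPPORTED_VARY_HEADERS.map do |name|
    "#{name}:#{request_headers[name]}"
  end.join(";")
  Digest::SHA1.hexdigest("#{verb} #{uri} #{vary_part}")
end
```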
## Chore
`httpx` makes extensive use of features introduced in ruby 3.4, such as `Module#set_temporary_name` for otherwise-anonymous plugin-generated classes (which improves debugging and issue reporting), or `String#append_as_bytes` for a small but non-negligible perf boost in buffer operations. It falls back to the previous behaviour on ruby 3.3 or lower.
Also, in preparation for the upcoming ruby 3.5 release, the dependency on the `cgi` gem (which will be removed from stdlib) was dropped.
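The version-guarded fallback described above can be sketched like this (`String#append_as_bytes` is a real ruby 3.4 API; the `BufferOps` wrapper is a made-up name for illustration):

```ruby
# Prefer the ruby 3.4 byte-append API when available; otherwise fall
# back to concatenating a binary-encoded copy of the chunk.
module BufferOps
  if String.method_defined?(:append_as_bytes)
    def self.append(buffer, chunk)
      buffer.append_as_bytes(chunk) # 3.4+: appends raw bytes, no encoding negotiation
    end
  else
    def self.append(buffer, chunk)
      buffer << chunk.b # pre-3.4 fallback
    end
  end
end
```

Doing the feature detection once, at module definition time, avoids paying a per-call `respond_to?` check on a hot buffer path.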

View File

@ -0,0 +1,6 @@
# 1.5.1
## Bugfixes
* connection errors on persistent connections which have just been checked out of the pool no longer count towards retry bookkeeping; the assumption is that, if a connection was checked into the pool in an open state, it may well have gone stale by the time it is eventually checked out. This issue was particularly exacerbated for `:persistent` plugin connections, which by design retry only once, and would therefore often fail immediately after checkout without a legitimate request attempt.
* native resolver: fix issue with process interrupts during DNS request, which caused a busy loop when closing the selector.

View File

@ -9,7 +9,7 @@ services:
- doh - doh
doh: doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on: depends_on:
- doh-proxy - doh-proxy
entrypoint: entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh - doh
doh: doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on: depends_on:
- doh-proxy - doh-proxy
entrypoint: entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh - doh
doh: doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on: depends_on:
- doh-proxy - doh-proxy
entrypoint: entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh - doh
doh: doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on: depends_on:
- doh-proxy - doh-proxy
entrypoint: entrypoint:

View File

@ -9,7 +9,7 @@ services:
- doh - doh
doh: doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on: depends_on:
- doh-proxy - doh-proxy
entrypoint: entrypoint:

View File

@ -0,0 +1,23 @@
version: '3'
services:
httpx:
image: ruby:3.4
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command: --conf /home/doh-nghttp.conf --no-ocsp --frontend '*,443'
doh-proxy:
image: publicarray/doh-proxy
environment:
- "UNBOUND_SERVICE_HOST=127.0.0.11"

View File

@ -26,6 +26,7 @@ services:
- AMZ_HOST=aws:4566 - AMZ_HOST=aws:4566
- WEBDAV_HOST=webdav - WEBDAV_HOST=webdav
- DD_INSTRUMENTATION_TELEMETRY_ENABLED=false - DD_INSTRUMENTATION_TELEMETRY_ENABLED=false
- GRPC_VERBOSITY=ERROR
image: ruby:alpine image: ruby:alpine
privileged: true privileged: true
depends_on: depends_on:
@ -40,8 +41,7 @@ services:
- altsvc-nghttp2 - altsvc-nghttp2
volumes: volumes:
- ./:/home - ./:/home
entrypoint: entrypoint: /home/test/support/ci/build.sh
/home/test/support/ci/build.sh
sshproxy: sshproxy:
image: connesc/ssh-gateway image: connesc/ssh-gateway
@ -66,51 +66,44 @@ services:
- ./test/support/ci/squid/proxy.conf:/etc/squid/squid.conf - ./test/support/ci/squid/proxy.conf:/etc/squid/squid.conf
- ./test/support/ci/squid/proxy-users-basic.txt:/etc/squid/proxy-users-basic.txt - ./test/support/ci/squid/proxy-users-basic.txt:/etc/squid/proxy-users-basic.txt
- ./test/support/ci/squid/proxy-users-digest.txt:/etc/squid/proxy-users-digest.txt - ./test/support/ci/squid/proxy-users-digest.txt:/etc/squid/proxy-users-digest.txt
command: command: -d 3
-d 3
http2proxy: http2proxy:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
ports: ports:
- 3300:80 - 3300:80
depends_on: depends_on:
- httpproxy - httpproxy
entrypoint: entrypoint: /usr/local/bin/nghttpx
/usr/local/bin/nghttpx command: --no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
command:
--no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
nghttp2: nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
ports: ports:
- 80:80 - 80:80
- 443:443 - 443:443
depends_on: depends_on:
- httpbin - httpbin
entrypoint: entrypoint: /usr/local/bin/nghttpx
/usr/local/bin/nghttpx
volumes: volumes:
- ./test/support/ci:/home - ./test/support/ci:/home
command: command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
networks: networks:
default: default:
aliases: aliases:
- another - another
altsvc-nghttp2: altsvc-nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1 image: registry.gitlab.com/os85/httpx/nghttp2:3
ports: ports:
- 81:80 - 81:80
- 444:443 - 444:443
depends_on: depends_on:
- httpbin - httpbin
entrypoint: entrypoint: /usr/local/bin/nghttpx
/usr/local/bin/nghttpx
volumes: volumes:
- ./test/support/ci:/home - ./test/support/ci:/home
command: command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
networks: networks:
default: default:
aliases: aliases:
@ -119,8 +112,7 @@ services:
environment: environment:
- DEBUG=True - DEBUG=True
image: citizenstig/httpbin image: citizenstig/httpbin
command: command: gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
aws: aws:
image: localstack/localstack image: localstack/localstack

View File

@ -1,11 +1,20 @@
require "httpx" require "httpx"
URLS = %w[https://nghttp2.org/httpbin/get] * 1 if ARGV.empty?
URLS = %w[https://nghttp2.org/httpbin/get] * 1
else
URLS = ARGV
end
responses = HTTPX.get(*URLS) responses = HTTPX.get(*URLS)
Array(responses).each(&:raise_for_status) Array(responses).each do |res|
puts "Status: \n" puts "URI: #{res.uri}"
puts Array(responses).map(&:status) case res
puts "Payload: \n" when HTTPX::ErrorResponse
puts Array(responses).map(&:to_s) puts "error: #{res.error}"
puts res.error.backtrace
else
puts "STATUS: #{res.status}"
puts res.to_s[0..2048]
end
end

View File

@ -17,20 +17,49 @@ end
Signal.trap("INFO") { print_status } unless ENV.key?("CI") Signal.trap("INFO") { print_status } unless ENV.key?("CI")
PAGES = (ARGV.first || 10).to_i
Thread.start do Thread.start do
frontpage = HTTPX.get("https://news.ycombinator.com").to_s page_links = []
HTTPX.wrap do |http|
PAGES.times.each do |i|
frontpage = http.get("https://news.ycombinator.com?p=#{i+1}").to_s
html = Oga.parse_html(frontpage) html = Oga.parse_html(frontpage)
links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? } links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? }
links = links.select {|l| l.start_with?("https") } links = links.select {|l| l.start_with?("https") }
puts links puts "for page #{i+1}: #{links.size} links"
page_links.concat(links)
end
end
responses = HTTPX.get(*links) puts "requesting #{page_links.size} links:"
responses = HTTPX.get(*page_links)
# page_links.each_with_index do |l, i|
# puts "#{responses[i].status}: #{l}"
# end
responses, error_responses = responses.partition { |r| r.is_a?(HTTPX::Response) }
puts "#{responses.size} responses (from #{page_links.size})"
puts "by group:"
responses.group_by(&:status).each do |st, res|
res.each do |r|
puts "#{st}: #{r.uri}"
end
end unless responses.empty?
unless error_responses.empty?
puts "error responses (#{error_responses.size})"
error_responses.group_by{ |r| r.error.class }.each do |kl, res|
res.each do |r|
puts "#{r.uri}: #{r.error}"
puts r.error.backtrace&.join("\n")
end
end
end
links.each_with_index do |l, i|
puts "#{responses[i].status}: #{l}"
end
end.join end.join

View File

@ -7,8 +7,8 @@
# #
require "httpx" require "httpx"
URLS = %w[http://badipv4.test.ipv6friday.org/] * 1 # URLS = %w[https://ipv4.test-ipv6.com] * 1
# URLS = %w[http://badipv6.test.ipv6friday.org/] * 1 URLS = %w[https://ipv6.test-ipv6.com] * 1
responses = HTTPX.get(*URLS, ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE}) responses = HTTPX.get(*URLS, ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE})

View File

@ -32,7 +32,7 @@ Gem::Specification.new do |gem|
gem.require_paths = ["lib"] gem.require_paths = ["lib"]
gem.add_runtime_dependency "http-2-next", ">= 1.0.3" gem.add_runtime_dependency "http-2", ">= 1.0.0"
gem.required_ruby_version = ">= 2.7.0" gem.required_ruby_version = ">= 2.7.0"
end end

View File

@ -1,3 +1,3 @@
# Integration # Integration
This section is to test certain cases where we can't reliably reproduce in our test environments, but can be ran locally. This section is to test certain cases where we can't reliably reproduce in our test environments, but can be ran locally.

View File

@ -0,0 +1,133 @@
# frozen_string_literal: true
module DatadogHelpers
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
ERROR_TAG = if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
private
def verify_instrumented_request(status, verb:, uri:, span: fetch_spans.first, service: datadog_service_name.to_s, error: nil)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert span.type == "http"
else
assert span.span_type == "http"
end
assert span.name == "#{datadog_service_name}.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == 80
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
if status && status >= 400
verify_http_error_span(span, status, error)
elsif error
verify_error_span(span)
else
assert span.status.zero?
assert span.get_tag("http.status_code") == status.to_s
# peer service
# assert span.get_tag("peer.service") == span.service
end
end
def verify_http_error_span(span, status, error)
assert span.get_tag("http.status_code") == status.to_s
assert span.get_tag("error.type") == error
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_error_span(span)
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_no_distributed_headers(request_headers)
assert !request_headers.key?("x-datadog-parent-id")
assert !request_headers.key?("x-datadog-trace-id")
assert !request_headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(request_headers, span: fetch_spans.first, sampling_priority: 1)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert request_headers["x-datadog-parent-id"] == span.id.to_s
else
assert request_headers["x-datadog-parent-id"] == span.span_id.to_s
end
assert request_headers["x-datadog-trace-id"] == trace_id(span)
assert request_headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.17.0")
def trace_id(span)
Datadog::Tracing::Utils::TraceId.to_low_order(span.trace_id).to_s
end
else
def trace_id(span)
span.trace_id.to_s
end
end
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(datadog_service_name, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
# Returns spans and caches it (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = (tracer.instance_variable_get(:@traces) || []).map(&:spans)
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
end
end

View File

@ -1,51 +1,60 @@
# frozen_string_literal: true # frozen_string_literal: true
require "ddtrace" begin
# upcoming 2.0
require "datadog"
rescue LoadError
require "ddtrace"
end
require "test_helper" require "test_helper"
require "support/http_helpers" require "support/http_helpers"
require "httpx/adapters/datadog" require "httpx/adapters/datadog"
require_relative "datadog_helpers"
class DatadogTest < Minitest::Test class DatadogTest < Minitest::Test
include HTTPHelpers include HTTPHelpers
include DatadogHelpers
def test_datadog_successful_get_request def test_datadog_successful_get_request
set_datadog set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri) response = HTTPX.get(uri)
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri) verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(response) verify_distributed_headers(request_headers(response))
end end
def test_datadog_successful_post_request def test_datadog_successful_post_request
set_datadog set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/post", "http://#{httpbin}"))
response = HTTPX.post(uri, body: "bla") response = HTTPX.post(uri, body: "bla")
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "POST", uri: uri) verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(response) verify_distributed_headers(request_headers(response))
end end
def test_datadog_successful_multiple_requests def test_datadog_successful_multiple_requests
set_datadog set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}")) get_uri = URI(build_uri("/get", "http://#{httpbin}"))
post_uri = URI(build_uri("/post", "http://#{httpbin}"))
get_response, post_response = HTTPX.request([["GET", uri], ["POST", uri]]) get_response, post_response = HTTPX.request([["GET", get_uri], ["POST", post_uri]])
verify_status(get_response, 200) verify_status(get_response, 200)
verify_status(post_response, 200) verify_status(post_response, 200)
assert fetch_spans.size == 2, "expected to have 2 spans" assert fetch_spans.size == 2, "expected to have 2 spans"
get_span, post_span = fetch_spans get_span, post_span = fetch_spans
verify_instrumented_request(get_response, span: get_span, verb: "GET", uri: uri) verify_instrumented_request(get_response.status, span: get_span, verb: "GET", uri: get_uri)
verify_instrumented_request(post_response, span: post_span, verb: "POST", uri: uri) verify_instrumented_request(post_response.status, span: post_span, verb: "POST", uri: post_uri)
verify_distributed_headers(get_response, span: get_span) verify_distributed_headers(request_headers(get_response), span: get_span)
verify_distributed_headers(post_response, span: post_span) verify_distributed_headers(request_headers(post_response), span: post_span)
verify_analytics_headers(get_span) verify_analytics_headers(get_span)
verify_analytics_headers(post_span) verify_analytics_headers(post_span)
end end
@ -58,8 +67,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 500) verify_status(response, 500)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri) verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
verify_distributed_headers(response)
end end
def test_datadog_client_error_request def test_datadog_client_error_request
@ -70,8 +78,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 404) verify_status(response, 404)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri) verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
verify_distributed_headers(response)
end end
def test_datadog_some_other_error def test_datadog_some_other_error
@ -82,12 +89,11 @@ class DatadogTest < Minitest::Test
assert response.is_a?(HTTPX::ErrorResponse), "response should contain errors" assert response.is_a?(HTTPX::ErrorResponse), "response should contain errors"
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError") verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
verify_distributed_headers(response)
end end
def test_datadog_host_config def test_datadog_host_config
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog(describe: /#{uri.host}/) do |http| set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin" http.service_name = "httpbin"
http.split_by_domain = false http.split_by_domain = false
@ -97,12 +103,12 @@ class DatadogTest < Minitest::Test
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: "httpbin", verb: "GET", uri: uri) verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(response) verify_distributed_headers(request_headers(response))
end end
def test_datadog_split_by_domain def test_datadog_split_by_domain
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog do |http| set_datadog do |http|
http.split_by_domain = true http.split_by_domain = true
end end
@ -111,13 +117,13 @@ class DatadogTest < Minitest::Test
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: uri.host, verb: "GET", uri: uri) verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(response) verify_distributed_headers(request_headers(response))
end end
def test_datadog_distributed_headers_disabled def test_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false) set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10 sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do response = trace_with_sampling_priority(sampling_priority) do
@ -127,14 +133,14 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri) verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(response) verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span) verify_analytics_headers(span)
end end
def test_datadog_distributed_headers_sampling_priority def test_datadog_distributed_headers_sampling_priority
set_datadog set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10 sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do response = trace_with_sampling_priority(sampling_priority) do
@ -145,37 +151,51 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri) verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(response, span: span, sampling_priority: sampling_priority) verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span) verify_analytics_headers(span)
end end
def test_datadog_analytics_enabled def test_datadog_analytics_enabled
set_datadog(analytics_enabled: true) set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri) response = HTTPX.get(uri)
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri) verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0) verify_analytics_headers(span, sample_rate: 1.0)
end end
def test_datadog_analytics_sample_rate def test_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5) set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200", "http://#{httpbin}")) uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri) response = HTTPX.get(uri)
verify_status(response, 200) verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans" assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri) verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5) verify_analytics_headers(span, sample_rate: 0.5)
end end
def test_datadog_per_request_span_with_retries
set_datadog
uri = URI(build_uri("/status/404", "http://#{httpbin}"))
http = HTTPX.plugin(:retries, max_retries: 2, retry_on: ->(r) { r.status == 404 })
response = http.get(uri)
verify_status(response, 404)
assert fetch_spans.size == 3, "expected to have 3 spans"
fetch_spans.each do |span|
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
end
private private
def setup def setup
@ -186,120 +206,15 @@ class DatadogTest < Minitest::Test
def teardown def teardown
super super
Datadog.registry[:httpx].reset_configuration! Datadog.registry[:httpx].reset_configuration!
Datadog.configuration.tracing[:httpx].enabled = false
end end
def verify_instrumented_request(response, verb:, uri:, span: fetch_spans.first, service: "httpx", error: nil) def datadog_service_name
assert span.span_type == "http" :httpx
assert span.name == "httpx.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == "80"
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
error_tag = if defined?(::DDTrace) && Gem::Version.new(::DDTrace::VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
if error
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
elsif response.status >= 400
assert span.get_tag("http.status_code") == response.status.to_s
assert span.get_tag("error.type") == "HTTPX::HTTPError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
else
assert span.status.zero?
assert span.get_tag("http.status_code") == response.status.to_s
# peer service
assert span.get_tag("peer.service") == span.service
end
end end
def verify_no_distributed_headers(response) def request_headers(response)
request = response.instance_variable_get(:@request) body = json_body(response)
body["headers"].transform_keys(&:downcase)
assert !request.headers.key?("x-datadog-parent-id")
assert !request.headers.key?("x-datadog-trace-id")
assert !request.headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(response, span: fetch_spans.first, sampling_priority: 1)
request = response.instance_variable_get(:@request)
assert request.headers["x-datadog-parent-id"] == span.span_id.to_s
assert request.headers["x-datadog-trace-id"] == trace_id(span)
assert request.headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
if defined?(::DDTrace) && Gem::Version.new(::DDTrace::VERSION::STRING) >= Gem::Version.new("1.17.0")
def trace_id(span)
Datadog::Tracing::Utils::TraceId.to_low_order(span.trace_id).to_s
end
else
def trace_id(span)
span.trace_id.to_s
end
end
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(:httpx, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
# Returns spans and caches it (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = (tracer.instance_variable_get(:@traces) || []).map(&:spans)
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
end end
end end

View File

@ -0,0 +1,198 @@
# frozen_string_literal: true
begin
# upcoming 2.0
require "datadog"
rescue LoadError
require "ddtrace"
end
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/faraday"
require_relative "datadog_helpers"
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
class FaradayDatadogTest < Minitest::Test
include HTTPHelpers
include DatadogHelpers
include FaradayHelpers
def test_faraday_datadog_successful_get_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_successful_post_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.post(uri, "bla")
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_server_error_request
set_datadog
uri = URI(build_uri("/status/500"))
ex = assert_raises(Faraday::ServerError) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 500")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_client_error_request
set_datadog
uri = URI(build_uri("/status/404"))
ex = assert_raises(Faraday::ResourceNotFound) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 404")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_some_other_error
set_datadog
uri = URI("http://unexisting/")
assert_raises(HTTPX::NativeResolveError) { faraday_connection.get(uri) }
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
end
def test_faraday_datadog_host_config
uri = URI(build_uri("/status/200"))
set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin"
http.split_by_domain = false
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_split_by_domain
uri = URI(build_uri("/status/200"))
set_datadog do |http|
http.split_by_domain = true
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_distributed_headers_sampling_priority
set_datadog
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_analytics_enabled
set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0)
end
def test_faraday_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5)
end
private
def setup
super
Datadog.registry[:faraday].reset_configuration!
end
def teardown
super
Datadog.registry[:faraday].reset_configuration!
end
def datadog_service_name
:faraday
end
def origin(orig = httpbin)
"http://#{orig}"
end
end


@@ -133,7 +133,7 @@ class SentryTest < Minitest::Test
     Sentry.init do |config|
       config.traces_sample_rate = 1.0
-      config.logger = mock_logger
+      config.sdk_logger = mock_logger
       config.dsn = DUMMY_DSN
       config.transport.transport_class = Sentry::DummyTransport
       config.background_worker_threads = 0


@@ -155,6 +155,29 @@ class WebmockTest < Minitest::Test
     assert_requested(:get, MOCK_URL_HTTP, query: hash_excluding("a" => %w[b c]))
   end

+  def test_verification_that_expected_request_with_hash_as_body
+    stub_request(:post, MOCK_URL_HTTP).with(body: { foo: "bar" })
+    http_request(:post, MOCK_URL_HTTP, form: { foo: "bar" })
+    assert_requested(:post, MOCK_URL_HTTP, body: { foo: "bar" })
+  end
+
+  def test_verification_that_expected_request_occured_with_form_file
+    file = File.new(fixture_file_path)
+    stub_request(:post, MOCK_URL_HTTP)
+    http_request(:post, MOCK_URL_HTTP, form: { file: file })
+    # TODO: webmock does not support matching multipart request body
+    assert_requested(:post, MOCK_URL_HTTP)
+  end
+
+  def test_verification_that_expected_request_occured_with_form_tempfile
+    stub_request(:post, MOCK_URL_HTTP)
+    Tempfile.open("tmp") do |file|
+      http_request(:post, MOCK_URL_HTTP, form: { file: file })
+    end
+    # TODO: webmock does not support matching multipart request body
+    assert_requested(:post, MOCK_URL_HTTP)
+  end
+
   def test_verification_that_non_expected_request_didnt_occur
     expected_message = Regexp.new(
       "The request GET #{MOCK_URL_HTTP}/ was not expected to execute but it executed 1 time\n\n" \
@@ -191,6 +214,37 @@ class WebmockTest < Minitest::Test
     end
   end

+  def test_webmock_allows_real_request
+    WebMock.allow_net_connect!
+    uri = build_uri("/get?foo=bar")
+    response = HTTPX.get(uri)
+    verify_status(response, 200)
+    verify_body_length(response)
+    assert_requested(:get, uri, query: { "foo" => "bar" })
+  end
+
+  def test_webmock_allows_real_request_with_body
+    WebMock.allow_net_connect!
+    uri = build_uri("/post")
+    response = HTTPX.post(uri, form: { foo: "bar" })
+    verify_status(response, 200)
+    verify_body_length(response)
+    assert_requested(:post, uri, headers: { "Content-Type" => "application/x-www-form-urlencoded" }, body: "foo=bar")
+  end
+
+  def test_webmock_allows_real_request_with_file_body
+    WebMock.allow_net_connect!
+    uri = build_uri("/post")
+    response = HTTPX.post(uri, form: { image: File.new(fixture_file_path) })
+    verify_status(response, 200)
+    verify_body_length(response)
+    body = json_body(response)
+    verify_header(body["headers"], "Content-Type", "multipart/form-data")
+    verify_uploaded_image(body, "image", "image/jpeg")
+    # TODO: webmock does not support matching multipart request body
+    # assert_requested(:post, uri, headers: { "Content-Type" => "multipart/form-data" }, form: { "image" => File.new(fixture_file_path) })
+  end
+
   def test_webmock_mix_mock_and_real_request
     WebMock.allow_net_connect!
@@ -280,4 +334,8 @@ class WebmockTest < Minitest::Test
   def http_request(meth, *uris, **options)
     HTTPX.__send__(meth, *uris, **options)
   end
+
+  def scheme
+    "http://"
+  end
 end


@@ -2,28 +2,11 @@

 require "httpx/version"

-require "httpx/extensions"
-
-require "httpx/errors"
-require "httpx/utils"
-require "httpx/punycode"
-require "httpx/domain_name"
-require "httpx/altsvc"
-require "httpx/callbacks"
-require "httpx/loggable"
-require "httpx/transcoder"
-require "httpx/timers"
-require "httpx/pool"
-require "httpx/headers"
-require "httpx/request"
-require "httpx/response"
-require "httpx/options"
-require "httpx/chainable"
-
 # Top-Level Namespace
 #
 module HTTPX
   EMPTY = [].freeze
+  EMPTY_HASH = {}.freeze

   # All plugins should be stored under this module/namespace. Can register and load
   # plugins.
@@ -53,15 +36,31 @@ module HTTPX
       m.synchronize { h[name] = mod }
     end
   end
-
-  extend Chainable
 end

+require "httpx/extensions"
+
+require "httpx/errors"
+require "httpx/utils"
+require "httpx/punycode"
+require "httpx/domain_name"
+require "httpx/altsvc"
+require "httpx/callbacks"
+require "httpx/loggable"
+require "httpx/transcoder"
+require "httpx/timers"
+require "httpx/pool"
+require "httpx/headers"
+require "httpx/request"
+require "httpx/response"
+require "httpx/options"
+require "httpx/chainable"
 require "httpx/session"
 require "httpx/session_extensions"

 # load integrations when possible
-require "httpx/adapters/datadog" if defined?(DDTrace) || defined?(Datadog)
+require "httpx/adapters/datadog" if defined?(DDTrace) || defined?(Datadog::Tracing)
 require "httpx/adapters/sentry" if defined?(Sentry)
 require "httpx/adapters/webmock" if defined?(WebMock)


@@ -7,12 +7,23 @@ require "datadog/tracing/contrib/patcher"
 module Datadog::Tracing
   module Contrib
     module HTTPX
+      DATADOG_VERSION = defined?(::DDTrace) ? ::DDTrace::VERSION : ::Datadog::VERSION
+
       METADATA_MODULE = Datadog::Tracing::Metadata

       TYPE_OUTBOUND = Datadog::Tracing::Metadata::Ext::HTTP::TYPE_OUTBOUND
-      TAG_PEER_SERVICE = Datadog::Tracing::Metadata::Ext::TAG_PEER_SERVICE
+
+      TAG_BASE_SERVICE = if Gem::Version.new(DATADOG_VERSION::STRING) < Gem::Version.new("1.15.0")
+        "_dd.base_service"
+      else
+        Datadog::Tracing::Contrib::Ext::Metadata::TAG_BASE_SERVICE
+      end
+      TAG_PEER_HOSTNAME = Datadog::Tracing::Metadata::Ext::TAG_PEER_HOSTNAME
+
+      TAG_KIND = Datadog::Tracing::Metadata::Ext::TAG_KIND
+      TAG_CLIENT = Datadog::Tracing::Metadata::Ext::SpanKind::TAG_CLIENT
+      TAG_COMPONENT = Datadog::Tracing::Metadata::Ext::TAG_COMPONENT
+      TAG_OPERATION = Datadog::Tracing::Metadata::Ext::TAG_OPERATION
       TAG_URL = Datadog::Tracing::Metadata::Ext::HTTP::TAG_URL
       TAG_METHOD = Datadog::Tracing::Metadata::Ext::HTTP::TAG_METHOD
       TAG_TARGET_HOST = Datadog::Tracing::Metadata::Ext::NET::TAG_TARGET_HOST
@@ -22,94 +33,179 @@ module Datadog::Tracing
       # HTTPX Datadog Plugin
       #
-      # Enables tracing for httpx requests. A span will be created for each individual requests,
-      # and it'll trace since the moment it is fed to the connection, until the moment the response is
-      # fed back to the session.
+      # Enables tracing for httpx requests.
+      #
+      # A span will be created for each request transaction; the span is created lazily only when
+      # buffering a request, and it is fed the start time stored inside the tracer object.
       #
       module Plugin
-        class RequestTracer
-          include Contrib::HttpAnnotationHelper
+        module RequestTracer
+          extend Contrib::HttpAnnotationHelper
+
+          module_function

           SPAN_REQUEST = "httpx.request"

-          def initialize(request)
-            @request = request
-          end
-
-          def call
-            return unless Datadog::Tracing.enabled?
-
-            @request.on(:response, &method(:finish))
-
-            verb = @request.verb
-            uri = @request.uri
-
-            @span = Datadog::Tracing.trace(
-              SPAN_REQUEST,
-              service: service_name(@request.uri.host, configuration, Datadog.configuration_for(self)),
-              span_type: TYPE_OUTBOUND
-            )
-
-            @span.resource = verb
-
-            # Add additional request specific tags to the span.
-            @span.set_tag(TAG_URL, @request.path)
-            @span.set_tag(TAG_METHOD, verb)
-            @span.set_tag(TAG_TARGET_HOST, uri.host)
-            @span.set_tag(TAG_TARGET_PORT, uri.port.to_s)
-
-            # Tag as an external peer service
-            @span.set_tag(TAG_PEER_SERVICE, @span.service)
-
-            Datadog::Tracing::Propagation::HTTP.inject!(Datadog::Tracing.active_trace,
-                                                        @request.headers) if @configuration[:distributed_tracing]
-
-            # Set analytics sample rate
-            if Contrib::Analytics.enabled?(@configuration[:analytics_enabled])
-              Contrib::Analytics.set_sample_rate(@span, @configuration[:analytics_sample_rate])
-            end
-          rescue StandardError => e
-            Datadog.logger.error("error preparing span for http request: #{e}")
-            Datadog.logger.error(e.backtrace)
-          end
-
-          def finish(response)
-            return unless @span
-
+          # initializes tracing on the +request+.
+          def call(request)
+            return unless configuration(request).enabled
+
+            span = nil
+
+            # request objects are reused, when already buffered requests get rerouted to a different
+            # connection due to connection issues, or when they already got a response, but need to
+            # be retried. In such situations, the original span needs to be extended for the former,
+            # while a new one is required for the latter.
+            request.on(:idle) do
+              span = nil
+            end
+            # the span is initialized when the request is buffered in the parser, which is the closest
+            # one gets to actually sending the request.
+            request.on(:headers) do
+              next if span

+              span = initialize_span(request, now)
+            end
+
+            request.on(:response) do |response|
+              unless span
+                next unless response.is_a?(::HTTPX::ErrorResponse) && response.error.respond_to?(:connection)
+
+                # handles the case when the +error+ happened during name resolution, which means
+                # that the tracing start point hasn't been triggered yet; in such cases, the approximate
+                # initial resolving time is collected from the connection, and used as span start time,
+                # and the tracing object is inserted before the on response callback is called.
+                span = initialize_span(request, response.error.connection.init_time)
+              end
+
+              finish(response, span)
+            end
+          end
+
+          def finish(response, span)
             if response.is_a?(::HTTPX::ErrorResponse)
-              @span.set_error(response.error)
+              span.set_error(response.error)
             else
-              @span.set_tag(TAG_STATUS_CODE, response.status.to_s)
-
-              @span.set_error(::HTTPX::HTTPError.new(response)) if response.status >= 400 && response.status <= 599
-            end
-
-            @span.finish
-          end
-
-          private
-
-          def configuration
-            @configuration ||= Datadog.configuration.tracing[:httpx, @request.uri.host]
+              span.set_tag(TAG_STATUS_CODE, response.status.to_s)
+
+              span.set_error(::HTTPX::HTTPError.new(response)) if response.status >= 400 && response.status <= 599
+
+              span.set_tags(
+                Datadog.configuration.tracing.header_tags.response_tags(response.headers.to_h)
+              ) if Datadog.configuration.tracing.respond_to?(:header_tags)
+            end
+
+            span.finish
+          end
+
+          # return a span initialized with the +request+ state.
+          def initialize_span(request, start_time)
+            verb = request.verb
+            uri = request.uri
+
+            config = configuration(request)
+
+            span = create_span(request, config, start_time)
+
+            span.resource = verb
+
+            # Tag original global service name if not used
+            span.set_tag(TAG_BASE_SERVICE, Datadog.configuration.service) if span.service != Datadog.configuration.service
+
+            span.set_tag(TAG_KIND, TAG_CLIENT)
+            span.set_tag(TAG_COMPONENT, "httpx")
+            span.set_tag(TAG_OPERATION, "request")
+
+            span.set_tag(TAG_URL, request.path)
+            span.set_tag(TAG_METHOD, verb)
+
+            span.set_tag(TAG_TARGET_HOST, uri.host)
+            span.set_tag(TAG_TARGET_PORT, uri.port)
+
+            span.set_tag(TAG_PEER_HOSTNAME, uri.host)
+
+            # Tag as an external peer service
+            # span.set_tag(TAG_PEER_SERVICE, span.service)
+
+            if config[:distributed_tracing]
+              propagate_trace_http(
+                Datadog::Tracing.active_trace,
+                request.headers
+              )
+            end
+
+            # Set analytics sample rate
+            if Contrib::Analytics.enabled?(config[:analytics_enabled])
+              Contrib::Analytics.set_sample_rate(span, config[:analytics_sample_rate])
+            end
+
+            span.set_tags(
+              Datadog.configuration.tracing.header_tags.request_tags(request.headers.to_h)
+            ) if Datadog.configuration.tracing.respond_to?(:header_tags)
+
+            span
+          rescue StandardError => e
+            Datadog.logger.error("error preparing span for http request: #{e}")
+            Datadog.logger.error(e.backtrace)
+          end
+
+          def now
+            ::Datadog::Core::Utils::Time.now.utc
+          end
+
+          def configuration(request)
+            Datadog.configuration.tracing[:httpx, request.uri.host]
+          end
+
+          if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
+            def propagate_trace_http(trace, headers)
+              Datadog::Tracing::Contrib::HTTP.inject(trace, headers)
+            end
+
+            def create_span(request, configuration, start_time)
+              Datadog::Tracing.trace(
+                SPAN_REQUEST,
+                service: service_name(request.uri.host, configuration),
+                type: TYPE_OUTBOUND,
+                start_time: start_time
+              )
+            end
+          else
+            def propagate_trace_http(trace, headers)
+              Datadog::Tracing::Propagation::HTTP.inject!(trace.to_digest, headers)
+            end
+
+            def create_span(request, configuration, start_time)
+              Datadog::Tracing.trace(
+                SPAN_REQUEST,
+                service: service_name(request.uri.host, configuration),
+                span_type: TYPE_OUTBOUND,
+                start_time: start_time
+              )
+            end
           end
         end

         module RequestMethods
-          def __datadog_enable_trace!
-            return if @__datadog_enable_trace
-
-            RequestTracer.new(self).call
-            @__datadog_enable_trace = true
+          # intercepts request initialization to inject the tracing logic.
+          def initialize(*)
+            super
+
+            return unless Datadog::Tracing.enabled?
+
+            RequestTracer.call(self)
           end
         end

         module ConnectionMethods
-          def send(request)
-            request.__datadog_enable_trace!
-
+          attr_reader :init_time
+
+          def initialize(*)
             super
+
+            @init_time = ::Datadog::Core::Utils::Time.now.utc
           end
         end
       end
@@ -126,7 +222,7 @@ module Datadog::Tracing
         option :distributed_tracing, default: true
         option :split_by_domain, default: false

-        if Gem::Version.new(DDTrace::VERSION::STRING) >= Gem::Version.new("1.13.0")
+        if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
           option :enabled do |o|
             o.type :bool
             o.env "DD_TRACE_HTTPX_ENABLED"
@@ -169,25 +265,25 @@ module Datadog::Tracing
               "httpx"
             )
           end
-          o.lazy
+          o.lazy unless Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
         end
       else
         option :service_name do |o|
           o.default do
             ENV.fetch("DD_TRACE_HTTPX_SERVICE_NAME", "httpx")
           end
-          o.lazy
+          o.lazy unless Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
         end
       end

       option :distributed_tracing, default: true

-      if Gem::Version.new(DDTrace::VERSION::STRING) >= Gem::Version.new("1.15.0")
+      if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.15.0")
         option :error_handler do |o|
           o.type :proc
           o.default_proc(&DEFAULT_ERROR_HANDLER)
         end
-      elsif Gem::Version.new(DDTrace::VERSION::STRING) >= Gem::Version.new("1.13.0")
+      elsif Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
         option :error_handler do |o|
           o.type :proc
           o.experimental_default_proc(&DEFAULT_ERROR_HANDLER)

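The tracer rewrite above keys span creation to request events: a span is built lazily on `:headers`, discarded on `:idle` (so a retried request gets a fresh span), and closed on `:response`. A toy model of that lifecycle, using a hypothetical `FakeRequest` event emitter in place of httpx's request object (not the httpx API):

```ruby
# Lazy span creation keyed to request events: (re)built on :headers,
# dropped on :idle (retry), recorded as finished on :response.
class FakeRequest # hypothetical stand-in for HTTPX::Request
  def initialize
    @handlers = Hash.new { |h, k| h[k] = [] }
  end

  def on(type, &blk)
    @handlers[type] << blk
  end

  def emit(type, *args)
    @handlers[type].each { |blk| blk.call(*args) }
  end
end

spans = []
request = FakeRequest.new
span = nil

request.on(:idle)     { span = nil }                        # request re-enqueued: forget the span
request.on(:headers)  { span ||= "span-#{spans.size + 1}" } # buffered: open a span lazily
request.on(:response) { spans << span }                     # response in: close the current span

request.emit(:headers)
request.emit(:response) # first attempt traced
request.emit(:idle)     # retry resets tracing state
request.emit(:headers)
request.emit(:response) # retry gets its own span

puts spans.inspect # ["span-1", "span-2"]
```

The `span ||=` guard mirrors the adapter's `next if span`: re-emitting `:headers` on the same attempt (a reroute to another connection) extends the existing span instead of opening a second one.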

@@ -30,6 +30,7 @@ module Faraday
         end

         @connection = @connection.plugin(OnDataPlugin) if env.request.stream_response?
+        @connection = @config_block.call(@connection) || @connection if @config_block

         @connection
       end
@@ -61,6 +62,7 @@ module Faraday
         request_options = {
           headers: env.request_headers,
           body: env.body,
+          **options_from_env(env),
         }
         [meth.to_s.upcase, env.url, request_options]
       end
@@ -106,9 +108,11 @@ module Faraday
           ssl_options
         end
       else
+        # :nocov:
         def ssl_options_from_env(*)
           {}
         end
+        # :nocov:
       end
     end
@@ -145,7 +149,7 @@ module Faraday
     module ResponseMethods
       def reason
-        Net::HTTP::STATUS_CODES.fetch(@status)
+        Net::HTTP::STATUS_CODES.fetch(@status, "Non-Standard status code")
       end
     end
   end
@@ -211,7 +215,7 @@ module Faraday
       Array(responses).each_with_index do |response, index|
         handler = @handlers[index]
         handler.on_response.call(response)
-        handler.on_complete.call(handler.env)
+        handler.on_complete.call(handler.env) if handler.on_complete
       end
     end
   rescue ::HTTPX::TimeoutError => e


@@ -20,7 +20,7 @@ module WebMock
       WebMock::RequestSignature.new(
         request.verb.downcase.to_sym,
         uri.to_s,
-        body: request.body.each.to_a.join,
+        body: request.body.to_s,
         headers: request.headers.to_h
       )
     end
@@ -47,21 +47,27 @@ module WebMock
       end

       def build_error_response(request, exception)
-        HTTPX::ErrorResponse.new(request, exception, request.options)
+        HTTPX::ErrorResponse.new(request, exception)
       end
     end

     module InstanceMethods
-      def init_connection(*)
-        connection = super
+      private
+
+      def do_init_connection(connection, selector)
+        super

         connection.once(:unmock_connection) do
+          next unless connection.current_session == self
+
           unless connection.addresses
-            connection.__send__(:callbacks)[:connect_error].clear
-            pool.__send__(:unregister_connection, connection)
+            # reset Happy Eyeballs, fail early
+            connection.sibling = nil
+            deselect_connection(connection, selector)
           end
-          pool.__send__(:resolve_connection, connection)
+
+          resolve_connection(connection, selector)
         end
-        connection
       end
     end
@@ -100,6 +106,10 @@ module WebMock
         super
       end

+      def terminate
+        force_reset
+      end
+
       def send(request)
         request_signature = Plugin.build_webmock_request_signature(request)
         WebMock::RequestRegistry.instance.requested_signatures.put(request_signature)
@@ -108,8 +118,15 @@ module WebMock
           response = Plugin.build_from_webmock_response(request, mock_response)
           WebMock::CallbackRegistry.invoke_callbacks({ lib: :httpx }, request_signature, mock_response)
           log { "mocking #{request.uri} with #{mock_response.inspect}" }
+          request.transition(:headers)
+          request.transition(:body)
+          request.transition(:trailers)
+          request.transition(:done)
+          response.finish!
           request.response = response
           request.emit(:response, response)
+          request_signature.headers = request.headers.to_h
           response << mock_response.body.dup unless response.is_a?(HTTPX::ErrorResponse)
         elsif WebMock.net_connect_allowed?(request_signature.uri)
           if WebMock::CallbackRegistry.any_callbacks?


@@ -131,9 +131,9 @@ module HTTPX
         scanner.skip(/;/)
         break if scanner.eos? || scanner.scan(/ *, */)
       end

-      alt_params = Hash[alt_params.map { |field| field.split("=") }]
+      alt_params = Hash[alt_params.map { |field| field.split("=", 2) }]

-      alt_proto, alt_authority = alt_service.split("=")
+      alt_proto, alt_authority = alt_service.split("=", 2)
       alt_origin = parse_altsvc_origin(alt_proto, alt_authority)
       return unless alt_origin

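The `split("=", 2)` changes above matter because an Alt-Svc value may itself contain `=` characters (base64-padded tokens, nested parameters); a limit of 2 splits only on the first separator and keeps the rest of the value intact. A standalone illustration with a made-up parameter (not the httpx parser itself):

```ruby
# Splitting an Alt-Svc field without a limit shreds values containing "=";
# a limit of 2 splits only on the first separator.
field = 'quic="ma=3600"' # hypothetical param whose value contains "="

unlimited = field.split("=")    # three fragments: the value is broken apart
limited   = field.split("=", 2) # two fragments: key and the intact value

puts unlimited.inspect # ["quic", "\"ma", "3600\""]
puts limited.inspect   # ["quic", "\"ma=3600\""]
```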

@@ -14,8 +14,6 @@ module HTTPX
   class Buffer
     extend Forwardable

-    def_delegator :@buffer, :<<
-
     def_delegator :@buffer, :to_s

     def_delegator :@buffer, :to_str
@@ -30,9 +28,22 @@ module HTTPX
     attr_reader :limit

-    def initialize(limit)
-      @buffer = "".b
-      @limit = limit
+    if RUBY_VERSION >= "3.4.0"
+      def initialize(limit)
+        @buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)
+        @limit = limit
+      end
+
+      def <<(chunk)
+        @buffer.append_as_bytes(chunk)
+      end
+    else
+      def initialize(limit)
+        @buffer = "".b
+        @limit = limit
+      end
+
+      def_delegator :@buffer, :<<
     end

     def full?

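The version gate above exists because `String#append_as_bytes` only ships with Ruby 3.4; older rubies keep delegating `<<` to the underlying string. The other half of the change, preallocating the string with `capacity:`, works on any supported Ruby. A minimal standalone sketch of the pre-sized bounded buffer idea (a hypothetical `TinyBuffer`, not the httpx class):

```ruby
# A bounded binary buffer: preallocating capacity avoids repeated
# reallocations while the buffer grows towards its limit.
class TinyBuffer
  attr_reader :limit

  def initialize(limit)
    # capacity: reserves +limit+ bytes up front
    @buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)
    @limit = limit
  end

  def <<(chunk)
    # append_as_bytes would be used here on ruby >= 3.4
    @buffer << chunk
  end

  def full?
    @buffer.bytesize >= @limit
  end
end

buf = TinyBuffer.new(4)
buf << "ab"
puts buf.full? # false
buf << "cd"
puts buf.full? # true
```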

@@ -4,7 +4,7 @@ module HTTPX
   module Callbacks
     def on(type, &action)
       callbacks(type) << action
-      self
+      action
     end

     def once(type, &block)
@@ -12,20 +12,15 @@ module HTTPX
         block.call(*args, &callback)
         :delete
       end
-      self
     end

-    def only(type, &block)
-      callbacks(type).clear
-      on(type, &block)
-    end
-
     def emit(type, *args)
+      log { "emit #{type.inspect} callbacks" } if respond_to?(:log)
+
       callbacks(type).delete_if { |pr| :delete == pr.call(*args) } # rubocop:disable Style/YodaCondition
     end

     def callbacks_for?(type)
-      @callbacks.key?(type) && @callbacks[type].any?
+      @callbacks && @callbacks.key?(type) && @callbacks[type].any?
     end

     protected

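The `once` semantics above hinge on `emit` deleting any handler whose invocation returns `:delete`; one-shot callbacks are just wrappers that always return that sentinel. A self-contained sketch of the pattern (mirroring the module's shape, with a hypothetical `Emitter` host class):

```ruby
# Callback registry where emit drops any handler returning :delete,
# which is how one-shot (once) callbacks expire after a single firing.
module Callbacks
  def on(type, &action)
    callbacks(type) << action
    action
  end

  def once(type, &block)
    on(type) do |*args, &cb|
      block.call(*args, &cb)
      :delete # sentinel: emit removes this wrapper after it runs
    end
  end

  def emit(type, *args)
    callbacks(type).delete_if { |pr| :delete == pr.call(*args) }
  end

  def callbacks(type)
    (@callbacks ||= Hash.new { |h, k| h[k] = [] })[type]
  end
end

class Emitter # hypothetical host object
  include Callbacks
end

e = Emitter.new
hits = []
e.on(:ping)   { hits << :always }
e.once(:ping) { hits << :single }
e.emit(:ping)
e.emit(:ping)
puts hits.inspect # [:always, :single, :always]
```

Returning `action` from `on` (the change in the diff) lets callers keep a handle to the registered proc, e.g. to remove it later, where previously `self` was returned for chaining.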

@@ -73,7 +73,7 @@ module HTTPX
         ].include?(callback)
           warn "DEPRECATION WARNING: calling `.#{meth}` on plain HTTPX sessions is deprecated. " \
-               "Use HTTPX.plugin(:callbacks).#{meth} instead."
+               "Use `HTTPX.plugin(:callbacks).#{meth}` instead."
           plugin(:callbacks).__send__(meth, *args, **options, &blk)
         else
@@ -101,4 +101,6 @@ module HTTPX
       end
     end
   end
+
+  extend Chainable
 end


@@ -41,21 +41,33 @@ module HTTPX
     def_delegator :@write_buffer, :empty?

-    attr_reader :type, :io, :origin, :origins, :state, :pending, :options, :ssl_session
+    attr_reader :type, :io, :origin, :origins, :state, :pending, :options, :ssl_session, :sibling

-    attr_writer :timers
+    attr_writer :current_selector

-    attr_accessor :family
+    attr_accessor :current_session, :family
+
+    protected :sibling

     def initialize(uri, options)
-      @origins = [uri.origin]
-      @origin = Utils.to_uri(uri.origin)
+      @current_session = @current_selector =
+        @parser = @sibling = @coalesced_connection =
+          @io = @ssl_session = @timeout =
+            @connected_at = @response_received_at = nil
+
+      @exhausted = @cloned = @main_sibling = false
+
       @options = Options.new(options)
       @type = initialize_type(uri, @options)
+      @origins = [uri.origin]
+      @origin = Utils.to_uri(uri.origin)
       @window_size = @options.window_size
       @read_buffer = Buffer.new(@options.buffer_size)
       @write_buffer = Buffer.new(@options.buffer_size)
       @pending = []
+      @inflight = 0
+      @keep_alive_timeout = @options.timeout[:keep_alive_timeout]

       on(:error, &method(:on_error))

       if @options.io
         # if there's an already open IO, get its
@@ -66,15 +78,39 @@ module HTTPX
       else
         transition(:idle)
       end

-      @inflight = 0
-      @keep_alive_timeout = @options.timeout[:keep_alive_timeout]
-      @intervals = []
+      on(:close) do
+        next if @exhausted # it'll reset
+
+        # may be called after ":close" above, so after the connection has been checked back in.
+        next unless @current_session
+
+        @current_session.deselect_connection(self, @current_selector, @cloned)
+      end
+      on(:terminate) do
+        next if @exhausted # it'll reset
+
+        current_session = @current_session
+        current_selector = @current_selector
+
+        # may be called after ":close" above, so after the connection has been checked back in.
+        next unless current_session && current_selector
+
+        current_session.deselect_connection(self, current_selector)
+      end
+
+      on(:altsvc) do |alt_origin, origin, alt_params|
+        build_altsvc_connection(alt_origin, origin, alt_params)
+      end

       self.addresses = @options.addresses if @options.addresses
     end

+    def peer
+      @origin
+    end
+
     # this is a semi-private method, to be used by the resolver
     # to initiate the io object.
     def addresses=(addrs)
@@ -119,6 +155,14 @@ module HTTPX
       ) && @options == connection.options
     end

+    # coalesces +self+ into +connection+.
+    def coalesce!(connection)
+      @coalesced_connection = connection
+
+      close_sibling
+      connection.merge(self)
+    end
+
     # coalescable connections need to be mergeable!
     # but internally, #mergeable? is called before #coalescable?
     def coalescable?(connection)
@@ -161,12 +205,23 @@ module HTTPX
       end
     end

+    def io_connected?
+      return @coalesced_connection.io_connected? if @coalesced_connection
+
+      @io && @io.state == :connected
+    end
+
     def connecting?
       @state == :idle
     end

     def inflight?
-      @parser && !@parser.empty? && !@write_buffer.empty?
+      @parser && (
+        # parser may be dealing with other requests (possibly started from a different fiber)
+        !@parser.empty? ||
+        # connection may be doing connection termination handshake
+        !@write_buffer.empty?
+      )
     end

     def interests
@@ -182,6 +237,9 @@ module HTTPX
       return @parser.interests if @parser

       nil
+    rescue StandardError => e
+      emit(:error, e)
+      nil
     end
@@ -203,6 +261,10 @@ module HTTPX
         consume
       end

       nil
+    rescue StandardError => e
+      @write_buffer.clear
+      emit(:error, e)
+      raise e
     end

     def close
@@ -212,15 +274,22 @@ module HTTPX
     end

     def terminate
-      @connected_at = nil if @state == :closed
+      case @state
+      when :idle
+        purge_after_closed
+
+        emit(:terminate)
+      when :closed
+        @connected_at = nil
+      end

       close
     end

     # bypasses the state machine to force closing of connections still connecting.
     # **only** used for Happy Eyeballs v2.
-    def force_reset
+    def force_reset(cloned = false)
       @state = :closing
+      @cloned = cloned
       transition(:closed)
     end
@@ -233,6 +302,8 @@ module HTTPX
     end

     def send(request)
+      return @coalesced_connection.send(request) if @coalesced_connection
+
       if @parser && !@write_buffer.full?
         if @response_received_at && @keep_alive_timeout &&
            Utils.elapsed_time(@response_received_at) > @keep_alive_timeout
@@ -241,8 +312,9 @@ module HTTPX
           # for such cases, we want to ping for availability before deciding to shovel requests.
           log(level: 3) { "keep alive timeout expired, pinging connection..." }
           @pending << request
-          parser.ping
           transition(:active) if @state == :inactive
+          parser.ping
+          request.ping!
           return
         end
@@ -253,6 +325,8 @@ module HTTPX
     end

     def timeout
+      return if @state == :closed || @state == :inactive
+
       return @timeout if @timeout

       return @options.timeout[:connect_timeout] if @state == :idle
@@ -280,19 +354,49 @@ module HTTPX
     end

     def handle_socket_timeout(interval)
-      @intervals.delete_if(&:elapsed?)
-
-      unless @intervals.empty?
-        # remove the intervals which will elapse
-        return
-      end
-
-      error = HTTPX::TimeoutError.new(interval, "timed out while waiting on select")
+      error = OperationTimeoutError.new(interval, "timed out while waiting on select")
       error.set_backtrace(caller)
       on_error(error)
     end

+    def sibling=(connection)
+      @sibling = connection
+
+      return unless connection
+
+      @main_sibling = connection.sibling.nil?
+
+      return unless @main_sibling
+
+      connection.sibling = self
+    end
+
+    def handle_connect_error(error)
+      return handle_error(error) unless @sibling && @sibling.connecting?
+
+      @sibling.merge(self)
+
+      force_reset(true)
+    end
+
+    def disconnect
+      return unless @current_session && @current_selector
+
+      emit(:close)
+      @current_session = nil
+      @current_selector = nil
+    end
+
+    # :nocov:
+    def inspect
+      "#<#{self.class}:#{object_id} " \
+        "@origin=#{@origin} " \
+        "@state=#{@state} " \
+        "@pending=#{@pending.size} " \
+        "@io=#{@io}>"
+    end
+    # :nocov:
+
     private

     def connect
@@ -337,8 +441,10 @@ module HTTPX
       #
       loop do
         siz = @io.read(@window_size, @read_buffer)
-        log(level: 3, color: :cyan) { "IO READ: #{siz} bytes..." }
+        log(level: 3, color: :cyan) { "IO READ: #{siz} bytes... (wsize: #{@window_size}, rbuffer: #{@read_buffer.bytesize})" }
         unless siz
+          @write_buffer.clear
+
           ex = EOFError.new("descriptor closed")
           ex.set_backtrace(caller)
           on_error(ex)
@@ -393,6 +499,8 @@ module HTTPX
         end
         log(level: 3, color: :cyan) { "IO WRITE: #{siz} bytes..." }
         unless siz
+          @write_buffer.clear
+
           ex = EOFError.new("descriptor closed")
           ex.set_backtrace(caller)
           on_error(ex)
@@ -439,17 +547,21 @@ module HTTPX
     def send_request_to_parser(request)
       @inflight += 1
       request.peer_address = @io.ip
-      parser.send(request)

       set_request_timeouts(request)

+      parser.send(request)
+
       return unless @state == :inactive

       transition(:active)
+      # mark request as ping, as this inactive connection may have been
+      # closed by the server, and we don't want that to influence retry
+      # bookkeeping.
+      request.ping!
     end

     def build_parser(protocol = @io.protocol)
-      parser = self.class.parser_type(protocol).new(@write_buffer, @options)
+      parser = parser_type(protocol).new(@write_buffer, @options)
       set_parser_callbacks(parser)
       parser
     end
@@ -461,6 +573,7 @@ module HTTPX
end
@response_received_at = Utils.now
@inflight -= 1
response.finish!
request.emit(:response, response)
end
parser.on(:altsvc) do |alt_origin, origin, alt_params|
@@ -473,8 +586,27 @@ module HTTPX
request.emit(:promise, parser, stream)
end
parser.on(:exhausted) do
@exhausted = true
current_session = @current_session
current_selector = @current_selector
begin
parser.close
@pending.concat(parser.pending)
ensure
@current_session = current_session
@current_selector = current_selector
end
case @state
when :closed
idling
@exhausted = false
when :closing
once(:closed) do
idling
@exhausted = false
end
end
end
parser.on(:origin) do |origin|
@origins |= [origin]
@@ -490,8 +622,14 @@ module HTTPX
end
parser.on(:reset) do
@pending.concat(parser.pending) unless parser.empty?
current_session = @current_session
current_selector = @current_selector
reset
unless @pending.empty?
idling
@current_session = current_session
@current_selector = current_selector
end
end
parser.on(:current_timeout) do
@current_timeout = @timeout = parser.timeout
@@ -499,15 +637,28 @@ module HTTPX
parser.on(:timeout) do |tout|
@timeout = tout
end
parser.on(:error) do |request, error|
case error
when :http_1_1_required
current_session = @current_session
current_selector = @current_selector
parser.close
other_connection = current_session.find_connection(@origin, current_selector,
@options.merge(ssl: { alpn_protocols: %w[http/1.1] }))
other_connection.merge(self)
request.transition(:idle)
other_connection.send(request)
next
when OperationTimeoutError
# request level timeouts should take precedence
next unless request.active_timeouts.empty?
end
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
end
@@ -527,15 +678,17 @@ module HTTPX
# connect errors, exit gracefully
error = ConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
handle_connect_error(error) if connecting?
@state = :closed
purge_after_closed
disconnect
rescue TLSError, ::HTTP2::Error::ProtocolError, ::HTTP2::Error::HandshakeError => e
# connect errors, exit gracefully
handle_error(e)
handle_connect_error(e) if connecting?
@state = :closed
purge_after_closed
disconnect
end
def handle_transition(nextstate)
@@ -543,12 +696,12 @@ module HTTPX
when :idle
@timeout = @current_timeout = @options.timeout[:connect_timeout]
@connected_at = @response_received_at = nil
when :open
return if @state == :closed
@io.connect
close_sibling if @io.state == :connected
return unless @io.connected?
@@ -560,6 +713,9 @@ module HTTPX
emit(:open)
when :inactive
return unless @state == :open
# do not deactivate connection in use
return if @inflight.positive?
when :closing
return unless @state == :idle || @state == :open
@@ -577,7 +733,8 @@ module HTTPX
return unless @write_buffer.empty?
purge_after_closed
disconnect if @pending.empty?
when :already_open
nextstate = :open
# the first check for given io readiness must still use a timeout.
@@ -588,11 +745,30 @@ module HTTPX
return unless @state == :inactive
nextstate = :open
# activate
@current_session.select_connection(self, @current_selector)
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
end
def close_sibling
return unless @sibling
if @sibling.io_connected?
reset
# TODO: transition connection to closed
end
unless @sibling.state == :closed
merge(@sibling) unless @main_sibling
@sibling.force_reset(true)
end
@sibling = nil
end
def purge_after_closed
@io.close if @io
@read_buffer.clear
@@ -612,12 +788,40 @@ module HTTPX
end
end
# returns an HTTPX::Connection for the negotiated Alternative Service (or none).
def build_altsvc_connection(alt_origin, origin, alt_params)
# do not allow security downgrades on altsvc negotiation
return if @origin.scheme == "https" && alt_origin.scheme != "https"
altsvc = AltSvc.cached_altsvc_set(origin, alt_params.merge("origin" => alt_origin))
# altsvc already exists, somehow it wasn't advertised, probably noop
return unless altsvc
alt_options = @options.merge(ssl: @options.ssl.merge(hostname: URI(origin).host))
connection = @current_session.find_connection(alt_origin, @current_selector, alt_options)
# advertised altsvc is the same origin being used, ignore
return if connection == self
connection.extend(AltSvc::ConnectionMixin) unless connection.is_a?(AltSvc::ConnectionMixin)
log(level: 1) { "#{origin} alt-svc: #{alt_origin}" }
connection.merge(self)
terminate
rescue UnsupportedSchemeError
altsvc["noop"] = true
nil
end
def build_socket(addrs = nil)
case @type
when "tcp"
TCP.new(peer, addrs, @options)
when "ssl"
SSL.new(peer, addrs, @options) do |sock|
sock.ssl_session = @ssl_session
sock.session_new_cb do |sess|
@ssl_session = sess
@@ -626,14 +830,18 @@ module HTTPX
end
end
when "unix"
path = Array(addrs).first
path = String(path) if path
UNIX.new(peer, path, @options)
else
raise Error, "unsupported transport (#{@type})"
end
end
def on_error(error, request = nil)
if error.is_a?(OperationTimeoutError)
# inactive connections do not contribute to the select loop, therefore
# they should not fail due to such errors.
@@ -646,39 +854,60 @@ module HTTPX
error = error.to_connection_error if connecting?
end
handle_error(error, request)
reset
end
def handle_error(error, request = nil)
parser.handle_error(error, request) if @parser && parser.respond_to?(:handle_error)
while (req = @pending.shift)
next if request && req == request
response = ErrorResponse.new(req, error)
req.response = response
req.emit(:response, response)
end
return unless request
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
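The reworked `handle_error` above drains the pending queue but skips the request that triggered the error, which is failed separately with its own response. That queue fan-out can be sketched standalone (the helper name and tuple return are illustrative, not the httpx API):

```ruby
# Drain the pending queue, skipping the request that caused the error,
# and pair every other request with the error.
def fan_out_error(pending, error, failing_request = nil)
  failed = []
  while (req = pending.shift)
    next if failing_request && req == failing_request

    failed << [req, error]
  end
  failed
end
```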
def set_request_timeouts(request)
set_request_write_timeout(request)
set_request_read_timeout(request)
set_request_request_timeout(request)
end
def set_request_read_timeout(request)
read_timeout = request.read_timeout
return if read_timeout.nil? || read_timeout.infinite?
set_request_timeout(:read_timeout, request, read_timeout, :done, :response) do
read_timeout_callback(request, read_timeout)
end
end
def set_request_write_timeout(request)
write_timeout = request.write_timeout
return if write_timeout.nil? || write_timeout.infinite?
set_request_timeout(:write_timeout, request, write_timeout, :headers, %i[done response]) do
write_timeout_callback(request, write_timeout)
end
end
def set_request_request_timeout(request)
request_timeout = request.request_timeout
return if request_timeout.nil? || request_timeout.infinite?
set_request_timeout(:request_timeout, request, request_timeout, :headers, :complete) do
read_timeout_callback(request, request_timeout, RequestTimeoutError)
end
end
@@ -688,7 +917,8 @@ module HTTPX
@write_buffer.clear
error = WriteTimeoutError.new(request, nil, write_timeout)
on_error(error, request)
end
def read_timeout_callback(request, read_timeout, error_type = ReadTimeoutError)
@@ -698,35 +928,31 @@ module HTTPX
@write_buffer.clear
error = error_type.new(request, request.response, read_timeout)
on_error(error, request)
end
def set_request_timeout(label, request, timeout, start_event, finish_events, &callback)
request.set_timeout_callback(start_event) do
timer = @current_selector.after(timeout, callback)
request.active_timeouts << label
Array(finish_events).each do |event|
# clean up request timeouts if the connection errors out
request.set_timeout_callback(event) do
timer.cancel
request.active_timeouts.delete(label)
end
end
end
end
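`set_request_timeout` above now keys each timer by a label and records it in `request.active_timeouts`, so the error path can check whether a request-level timeout is pending. A minimal sketch of that bookkeeping (hypothetical `Timer` and request stand-ins, not the httpx classes):

```ruby
# Starting a timeout records its label on the request; the finish callback
# cancels the timer and removes the label again.
Timer = Struct.new(:cancelled) do
  def cancel
    self.cancelled = true
  end
end

FakeRequest = Struct.new(:active_timeouts)

def start_timeout(label, request)
  timer = Timer.new(false)
  request.active_timeouts << label
  finish = lambda do
    timer.cancel
    request.active_timeouts.delete(label)
  end
  [timer, finish]
end
```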
def parser_type(protocol)
case protocol
when "h2" then HTTP2
when "http/1.1" then HTTP1
else
raise Error, "unsupported protocol (#{protocol})"
end
end
end
end
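`parser_type` above is now an instance method dispatching on the negotiated ALPN protocol. The same case-dispatch shape, reduced to symbols instead of the real parser classes:

```ruby
# Illustrative sketch of the ALPN-protocol dispatch; placeholder values
# stand in for the HTTP2/HTTP1 parser classes.
def parser_for(protocol)
  case protocol
  when "h2" then :http2
  when "http/1.1" then :http1
  else
    raise ArgumentError, "unsupported protocol (#{protocol})"
  end
end
```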
View File
@@ -15,7 +15,7 @@ module HTTPX
attr_accessor :max_concurrent_requests
def initialize(buffer, options)
@options = options
@max_concurrent_requests = @options.max_concurrent_requests || MAX_REQUESTS
@max_requests = @options.max_requests
@parser = Parser::HTTP1.new(self)
@@ -93,7 +93,7 @@ module HTTPX
concurrent_requests_limit = [@max_concurrent_requests, requests_limit].min
@requests.each_with_index do |request, idx|
break if idx >= concurrent_requests_limit
next unless request.can_buffer?
handle(request)
end
@@ -119,7 +119,7 @@ module HTTPX
@parser.http_version.join("."),
headers)
log(color: :yellow) { "-> HEADLINE: #{response.status} HTTP/#{@parser.http_version.join(".")}" }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v)}" }.join("\n") }
@request.response = response
on_complete if response.finished?
@@ -131,7 +131,7 @@ module HTTPX
response = @request.response
log(level: 2) { "trailer headers received" }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v.join(", "))}" }.join("\n") }
response.merge_headers(h)
end
@@ -141,12 +141,12 @@ module HTTPX
return unless request
log(color: :green) { "-> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "-> #{log_redact(chunk.inspect)}" }
response = request.response
response << chunk
rescue StandardError => e
error_response = ErrorResponse.new(request, e)
request.response = error_response
dispatch
end
@@ -171,7 +171,6 @@ module HTTPX
@request = nil
@requests.shift
response = request.response
emit(:response, request, response)
if @parser.upgrade?
@@ -197,7 +196,7 @@ module HTTPX
end
end
def handle_error(ex, request = nil)
if (ex.is_a?(EOFError) || ex.is_a?(TimeoutError)) && @request && @request.response &&
!@request.response.headers.key?("content-length") &&
!@request.response.headers.key?("transfer-encoding")
@@ -211,11 +210,15 @@ module HTTPX
if @pipelining
catch(:called) { disable }
else
@requests.each do |req|
next if request && request == req
emit(:error, req, ex)
end
@pending.each do |req|
next if request && request == req
emit(:error, req, ex)
end
end
end
@@ -245,7 +248,7 @@ module HTTPX
return unless keep_alive
parameters = Hash[keep_alive.split(/ *, */).map do |pair|
pair.split(/ *= */, 2)
end]
@max_requests = parameters["max"].to_i - 1 if parameters.key?("max")
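The hunk above limits the `=` split to two parts. A standalone sketch of that Keep-Alive parsing (the helper name is illustrative, not the httpx API) shows why the limit matters: without it, a parameter value containing `=` would be truncated.

```ruby
# Sketch of the Keep-Alive parameter parsing above; the limit of 2 on the
# "=" split keeps any "=" characters inside a parameter value intact.
def parse_keep_alive(header)
  Hash[header.split(/ *, */).map { |pair| pair.split(/ *= */, 2) }]
end
```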
@@ -358,7 +361,7 @@ module HTTPX
while (chunk = request.drain_body)
log(color: :green) { "<- DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "<- #{log_redact(chunk.inspect)}" }
@buffer << chunk
throw(:buffer_full, request) if @buffer.full?
end
@@ -378,15 +381,16 @@ module HTTPX
def join_headers2(headers)
headers.each do |field, value|
field = capitalized(field)
log(color: :yellow) { "<- HEADER: #{[field, log_redact(value)].join(": ")}" }
@buffer << "#{field}: #{value}#{CRLF}"
end
end
UPCASED = {
"www-authenticate" => "WWW-Authenticate",
"http2-settings" => "HTTP2-Settings",
"content-md5" => "Content-MD5",
}.freeze
def capitalized(field)
View File
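The `UPCASED` table above special-cases fields whose canonical form isn't plain per-token capitalization. A minimal sketch of that lookup-with-fallback scheme (simplified, not the httpx implementation of `capitalized`):

```ruby
# Exceptions map first, generic Token-By-Token capitalization otherwise.
UPCASED = {
  "www-authenticate" => "WWW-Authenticate",
  "http2-settings" => "HTTP2-Settings",
  "content-md5" => "Content-MD5",
}.freeze

def capitalized(field)
  UPCASED[field] || field.split("-").map(&:capitalize).join("-")
end
```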
@@ -1,18 +1,24 @@
# frozen_string_literal: true
require "securerandom"
require "http/2"
module HTTPX
class Connection::HTTP2
include Callbacks
include Loggable
MAX_CONCURRENT_REQUESTS = ::HTTP2::DEFAULT_MAX_CONCURRENT_STREAMS
class Error < Error
def initialize(id, error)
super("stream #{id} closed with error: #{error}")
end
end
class PingError < Error
def initialize
super(0, :ping_error)
end
end
@@ -25,7 +31,7 @@ module HTTPX
attr_reader :streams, :pending
def initialize(buffer, options)
@options = options
@settings = @options.http2_settings
@pending = []
@streams = {}
@@ -52,6 +58,8 @@ module HTTPX
if @connection.state == :closed
return unless @handshake_completed
return if @buffer.empty?
return :w
end
@@ -92,16 +100,10 @@ module HTTPX
@connection << data
end
def send(request, head = false)
unless can_buffer_more_requests?
head ? @pending.unshift(request) : @pending << request
return false
end
unless (stream = @streams[request])
stream = @connection.new_stream
@@ -111,47 +113,57 @@ module HTTPX
end
handle(request, stream)
true
rescue ::HTTP2::Error::StreamLimitExceeded
@pending.unshift(request)
false
end
def consume
@streams.each do |request, stream|
next unless request.can_buffer?
handle(request, stream)
end
end
def handle_error(ex, request = nil)
if ex.is_a?(OperationTimeoutError) && !@handshake_completed && @connection.state != :closed
@connection.goaway(:settings_timeout, "closing due to settings timeout")
emit(:close_handshake)
settings_ex = SettingsTimeoutError.new(ex.timeout, ex.message)
settings_ex.set_backtrace(ex.backtrace)
ex = settings_ex
end
@streams.each_key do |req|
next if request && request == req
emit(:error, req, ex)
end
while (req = @pending.shift)
next if request && request == req
emit(:error, req, ex)
end
end
def ping
ping = SecureRandom.gen_random(8)
@connection.ping(ping.dup)
ensure
@pings << ping
end
private
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send_pending
while (request = @pending.shift)
break unless send(request, true)
end
end
@@ -168,7 +180,7 @@ module HTTPX
end
def init_connection
@connection = ::HTTP2::Client.new(@settings)
@connection.on(:frame, &method(:on_frame))
@connection.on(:frame_sent, &method(:on_frame_sent))
@connection.on(:frame_received, &method(:on_frame_received))
@@ -214,12 +226,12 @@ module HTTPX
extra_headers = set_protocol_headers(request)
if request.headers.key?("host")
log { "forbidden \"host\" header found (#{log_redact(request.headers["host"])}), will use it as authority..." }
extra_headers[":authority"] = request.headers["host"]
end
log(level: 1, color: :yellow) do
"\n#{request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")}"
end
stream.headers(request.headers.each(extra_headers), end_stream: request.body.empty?)
end
@@ -231,7 +243,7 @@ module HTTPX
end
log(level: 1, color: :yellow) do
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
stream.headers(request.trailers.each, end_stream: true)
end
@@ -242,13 +254,13 @@ module HTTPX
chunk = @drains.delete(request) || request.drain_body
while chunk
next_chunk = request.drain_body
send_chunk(request, stream, chunk, next_chunk)
if next_chunk && (@buffer.full? || request.body.unbounded_body?)
@drains[request] = next_chunk
throw(:buffer_full)
end
chunk = next_chunk
end
@@ -257,6 +269,16 @@ module HTTPX
on_stream_refuse(stream, request, error)
end
def send_chunk(request, stream, chunk, next_chunk)
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{log_redact(chunk.inspect)}" }
stream.data(chunk, end_stream: end_stream?(request, next_chunk))
end
def end_stream?(request, next_chunk)
!(next_chunk || request.trailers? || request.callbacks_for?(:trailers))
end
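The extracted `end_stream?` predicate above decides whether a DATA frame carries the END_STREAM flag. A reduced sketch of the same rule (plain booleans standing in for the request's trailer state, not the httpx `Request` class):

```ruby
# A DATA frame ends the stream only when there is no further body chunk
# and no trailer headers still to be sent.
def end_stream?(next_chunk, trailers_pending)
  !(next_chunk || trailers_pending)
end
```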
######
# HTTP/2 Callbacks
######
@@ -270,7 +292,7 @@ module HTTPX
end
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
_, status = h.shift
headers = request.options.headers_class.new(h)
@@ -283,14 +305,14 @@ module HTTPX
def on_stream_trailers(stream, response, h)
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
response.merge_headers(h)
end
def on_stream_data(stream, request, data)
log(level: 1, color: :green) { "#{stream.id}: <- DATA: #{data.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: <- #{log_redact(data.inspect)}" }
request.response << data
end
@@ -307,26 +329,33 @@ module HTTPX
@streams.delete(request)
if error
case error
when :http_1_1_required
emit(:error, request, error)
else
ex = Error.new(stream.id, error)
ex.set_backtrace(caller)
response = ErrorResponse.new(request, ex)
request.response = response
emit(:response, request, response)
end
else
response = request.response
if response && response.is_a?(Response) && response.status == 421
emit(:error, request, :http_1_1_required)
else
emit(:response, request, response)
end
end
send(@pending.shift) unless @pending.empty?
return unless @streams.empty? && exhausted?
if @pending.empty?
close
else
emit(:exhausted)
end
end
def on_frame(bytes)
@@ -344,7 +373,12 @@ module HTTPX
is_connection_closed = @connection.state == :closed
if error
@buffer.clear if is_connection_closed
case error
when :http_1_1_required
while (request = @pending.shift)
emit(:error, request, error)
end
when :no_error
ex = GoawayError.new
@pending.unshift(*@streams.keys)
@drains.clear
@@ -352,8 +386,11 @@ module HTTPX
else
ex = Error.new(0, error)
end
if ex
ex.set_backtrace(caller)
handle_error(ex)
end
end
return unless is_connection_closed && @streams.empty?
@@ -363,8 +400,15 @@ module HTTPX
def on_frame_sent(frame)
log(level: 2) { "#{frame[:stream]}: frame was sent!" }
log(level: 2, color: :blue) do
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
@@ -372,15 +416,22 @@ module HTTPX
def on_frame_received(frame)
log(level: 2) { "#{frame[:stream]}: frame was received!" }
log(level: 2, color: :magenta) do
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
def on_altsvc(origin, frame)
log(level: 2) { "#{frame[:stream]}: altsvc frame was received" }
log(level: 2) { "#{frame[:stream]}: #{log_redact(frame.inspect)}" }
alt_origin = URI.parse("#{frame[:proto]}://#{frame[:host]}:#{frame[:port]}")
params = { "ma" => frame[:max_age] }
emit(:altsvc, origin, alt_origin, origin, params)
@@ -395,11 +446,9 @@ module HTTPX
end
def on_pong(ping)
raise PingError unless @pings.delete(ping.to_s)
emit(:pong)
end
end
end
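The `ping`/`on_pong` pair above keeps a list of sent PING payloads and requires every PONG to match one of them. A self-contained sketch of that bookkeeping (the `PingRegistry` class is illustrative, not the httpx API):

```ruby
require "securerandom"

# Remember each sent PING payload; a PONG must match a remembered payload,
# otherwise it's a protocol error.
class PingRegistry
  def initialize
    @pings = []
  end

  def ping
    payload = SecureRandom.gen_random(8)
    @pings << payload
    payload
  end

  def pong(payload)
    raise "ping payload did not match" unless @pings.delete(payload)

    :pong
  end
end
```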
View File
@@ -29,6 +29,9 @@ module HTTPX
end
end
# Error raised when it can't acquire a connection from the pool.
class PoolTimeoutError < TimeoutError; end
# Error raised when there was a timeout establishing the connection to a server.
# This may be raised due to timeouts during TCP and TLS (when applicable) connection
# establishment.
@@ -65,6 +68,9 @@ module HTTPX
# Error raised when there was a timeout while resolving a domain to an IP.
class ResolveTimeoutError < TimeoutError; end
# Error raised when there was a timeout waiting for readiness of the socket the request is related to.
class OperationTimeoutError < TimeoutError; end
# Error raised when there was an error while resolving a domain to an IP.
class ResolveError < Error; end
@@ -100,8 +106,4 @@ module HTTPX
@response.status
end
end
end
View File
@ -11,20 +11,32 @@ module HTTPX
end end
def initialize(headers = nil) def initialize(headers = nil)
if headers.nil? || headers.empty?
@headers = headers.to_h
return
end
@headers = {} @headers = {}
return unless headers
headers.each do |field, value| headers.each do |field, value|
array_value(value).each do |v| field = downcased(field)
add(downcased(field), v)
value = array_value(value)
current = @headers[field]
if current.nil?
@headers[field] = value
else
current.concat(value)
end end
end end
end end
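The rewritten initializer above downcases field names up front and concatenates values when the same field repeats, instead of adding entries one by one. A self-contained sketch of that seeding logic (the helper name is illustrative):

```ruby
# Seed a header store from a hash or an array of [field, value]
# pairs: field names are downcased, and repeated fields accumulate
# their values rather than overwriting each other.
def seed_headers(headers)
  store = {}
  return store if headers.nil? || headers.empty?

  headers.each do |field, value|
    field = field.to_s.downcase
    value = Array(value).map(&:to_s)

    current = store[field]
    if current.nil?
      store[field] = value
    else
      current.concat(value)
    end
  end
  store
end
```

For example, `[["Accept", "text/html"], ["accept", "application/json"]]` collapses into a single `"accept"` entry with both values.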
# cloned initialization # cloned initialization
def initialize_clone(orig) def initialize_clone(orig, **kwargs)
super super
@headers = orig.instance_variable_get(:@headers).clone @headers = orig.instance_variable_get(:@headers).clone(**kwargs)
end end
# dupped initialization # dupped initialization
@ -39,17 +51,6 @@ module HTTPX
super super
end end
def same_headers?(headers)
@headers.empty? || begin
headers.each do |k, v|
next unless key?(k)
return false unless v == self[k]
end
true
end
end
# merges headers with another header-quack. # merges headers with another header-quack.
# the merge rule is, if the header already exists, # the merge rule is, if the header already exists,
# ignore what the +other+ headers has. Otherwise, set # ignore what the +other+ headers has. Otherwise, set
@ -119,6 +120,10 @@ module HTTPX
other == to_hash other == to_hash
end end
def empty?
@headers.empty?
end
# the headers store in Hash format # the headers store in Hash format
def to_hash def to_hash
Hash[to_a] Hash[to_a]
@ -137,7 +142,8 @@ module HTTPX
# :nocov: # :nocov:
def inspect def inspect
to_hash.inspect "#<#{self.class}:#{object_id} " \
"#{to_hash.inspect}>"
end end
# :nocov: # :nocov:
@ -160,12 +166,7 @@ module HTTPX
private private
def array_value(value) def array_value(value)
case value Array(value)
when Array
value.map { |val| String(val).strip }
else
[String(value).strip]
end
end end
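The simplification relies on `Kernel#Array`'s wrapping semantics; note that, unlike the old branching version, it no longer coerces elements with `String()` or strips them:

```ruby
# Kernel#Array wraps scalars, passes arrays through untouched,
# and turns nil into an empty array.
Array("gzip")
Array(%w[gzip deflate])
Array(nil)
```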
def downcased(field) def downcased(field)

View File

@ -9,7 +9,8 @@ module HTTPX
# rubocop:disable Style/MutableConstant # rubocop:disable Style/MutableConstant
TLS_OPTIONS = { alpn_protocols: %w[h2 http/1.1].freeze } TLS_OPTIONS = { alpn_protocols: %w[h2 http/1.1].freeze }
# https://github.com/jruby/jruby-openssl/issues/284 # https://github.com/jruby/jruby-openssl/issues/284
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby" # TODO: remove when dropping support for jruby-openssl < 0.15.4
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby" && JOpenSSL::VERSION < "0.15.4"
# rubocop:enable Style/MutableConstant # rubocop:enable Style/MutableConstant
TLS_OPTIONS.freeze TLS_OPTIONS.freeze
@ -92,9 +93,12 @@ module HTTPX
end end
def connect def connect
super return if @state == :negotiated
return if @state == :negotiated ||
@state != :connected unless @state == :connected
super
return unless @state == :connected
end
unless @io.is_a?(OpenSSL::SSL::SSLSocket) unless @io.is_a?(OpenSSL::SSL::SSLSocket)
if (hostname_is_ip = (@ip == @sni_hostname)) if (hostname_is_ip = (@ip == @sni_hostname))

View File

@ -17,7 +17,7 @@ module HTTPX
@state = :idle @state = :idle
@addresses = [] @addresses = []
@hostname = origin.host @hostname = origin.host
@options = Options.new(options) @options = options
@fallback_protocol = @options.fallback_protocol @fallback_protocol = @options.fallback_protocol
@port = origin.port @port = origin.port
@interests = :w @interests = :w
@ -75,9 +75,18 @@ module HTTPX
@io = build_socket @io = build_socket
end end
try_connect try_connect
rescue Errno::EHOSTUNREACH,
Errno::ENETUNREACH => e
raise e if @ip_index <= 0
log { "failed connecting to #{@ip} (#{e.message}), evict from cache and trying next..." }
Resolver.cached_lookup_evict(@hostname, @ip)
@ip_index -= 1
@io = build_socket
retry
rescue Errno::ECONNREFUSED, rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL, Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError, SocketError,
IOError => e IOError => e
raise e if @ip_index <= 0 raise e if @ip_index <= 0
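The new rescue clause treats unreachable-network errors as a signal to evict the cached address and fall through to the next candidate IP. That retry loop can be sketched without the connection machinery (the `connector` and `on_evict` lambdas are stand-ins for `try_connect` and `Resolver.cached_lookup_evict`):

```ruby
# Try each candidate address in order; on unreachable-host/network
# errors, report the stale entry for cache eviction and move on.
# Raises the last error once all candidates are exhausted.
def connect_with_fallback(addresses, connector, on_evict: ->(_ip) {})
  last_error = nil
  addresses.each do |ip|
    return connector.call(ip)
  rescue Errno::EHOSTUNREACH, Errno::ENETUNREACH => e
    last_error = e
    on_evict.call(ip)
  end
  raise last_error
end
```

This mirrors why `EHOSTUNREACH`/`ENETUNREACH` got their own rescue clause: unlike a plain `ECONNREFUSED`, they indicate the cached address itself is no good.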
@ -167,7 +176,12 @@ module HTTPX
# :nocov: # :nocov:
def inspect def inspect
"#<#{self.class}: #{@ip}:#{@port} (state: #{@state})>" "#<#{self.class}:#{object_id} " \
"#{@ip}:#{@port} " \
"@state=#{@state} " \
"@hostname=#{@hostname} " \
"@addresses=#{@addresses} " \
"@state=#{@state}>"
end end
# :nocov: # :nocov:
@ -195,12 +209,9 @@ module HTTPX
end end
def log_transition_state(nextstate) def log_transition_state(nextstate)
case nextstate label = host
when :connected label = "#{label}(##{@io.fileno})" if nextstate == :connected
"Connected to #{host} (##{@io.fileno})" "#{label} #{@state} -> #{nextstate}"
else
"#{host} #{@state} -> #{nextstate}"
end
end end
end end
end end

View File

@ -8,11 +8,11 @@ module HTTPX
alias_method :host, :path alias_method :host, :path
def initialize(origin, addresses, options) def initialize(origin, path, options)
@addresses = [] @addresses = []
@hostname = origin.host @hostname = origin.host
@state = :idle @state = :idle
@options = Options.new(options) @options = options
@fallback_protocol = @options.fallback_protocol @fallback_protocol = @options.fallback_protocol
if @options.io if @options.io
@io = case @options.io @io = case @options.io
@ -26,8 +26,10 @@ module HTTPX
@path = @io.path @path = @io.path
@keep_open = true @keep_open = true
@state = :connected @state = :connected
elsif path
@path = path
else else
@path = addresses.first raise Error, "No path given where to store the socket"
end end
@io ||= build_socket @io ||= build_socket
end end
@ -46,7 +48,7 @@ module HTTPX
transition(:connected) transition(:connected)
rescue Errno::EINPROGRESS, rescue Errno::EINPROGRESS,
Errno::EALREADY, Errno::EALREADY,
::IO::WaitReadable IO::WaitReadable
end end
def expired? def expired?
@ -55,7 +57,7 @@ module HTTPX
# :nocov: # :nocov:
def inspect def inspect
"#<#{self.class}(path: #{@path}): (state: #{@state})>" "#<#{self.class}:#{object_id} @path=#{@path}) @state=#{@state})>"
end end
# :nocov: # :nocov:

View File

@ -13,22 +13,44 @@ module HTTPX
white: 37, white: 37,
}.freeze }.freeze
def log(level: @options.debug_level, color: nil, &msg) USE_DEBUG_LOG = ENV.key?("HTTPX_DEBUG")
return unless @options.debug
return unless @options.debug_level >= level
debug_stream = @options.debug def log(
level: @options.debug_level,
color: nil,
debug_level: @options.debug_level,
debug: @options.debug,
&msg
)
return unless debug_level >= level
message = (+"" << msg.call << "\n") debug_stream = debug || ($stderr if USE_DEBUG_LOG)
return unless debug_stream
klass = self.class
until (class_name = klass.name)
klass = klass.superclass
end
message = +"(pid:#{Process.pid}, " \
"tid:#{Thread.current.object_id}, " \
"fid:#{Fiber.current.object_id}, " \
"self:#{class_name}##{object_id}) "
message << msg.call << "\n"
message = "\e[#{COLORS[color]}m#{message}\e[0m" if color && debug_stream.respond_to?(:isatty) && debug_stream.isatty message = "\e[#{COLORS[color]}m#{message}\e[0m" if color && debug_stream.respond_to?(:isatty) && debug_stream.isatty
debug_stream << message debug_stream << message
end end
def log_exception(ex, level: @options.debug_level, color: nil) def log_exception(ex, level: @options.debug_level, color: nil, debug_level: @options.debug_level, debug: @options.debug)
return unless @options.debug log(level: level, color: color, debug_level: debug_level, debug: debug) { ex.full_message }
return unless @options.debug_level >= level end
log(level: level, color: color) { ex.full_message } def log_redact(text, should_redact = @options.debug_redact)
return text.to_s unless should_redact
"[REDACTED]"
end end
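The new `log_redact` helper is small enough to reproduce standalone; when redaction is enabled, the whole payload is replaced rather than selectively masked:

```ruby
# Replace a loggable payload wholesale when redaction is on,
# otherwise stringify it as-is.
def log_redact(text, should_redact)
  return text.to_s unless should_redact

  "[REDACTED]"
end
```

Call sites pass headers or body chunks through this before interpolating them into debug log lines, so sensitive values never reach the debug stream when `HTTPX_DEBUG_REDACT` is set.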
end end
end end

View File

@ -18,21 +18,30 @@ module HTTPX
# https://github.com/ruby/resolv/blob/095f1c003f6073730500f02acbdbc55f83d70987/lib/resolv.rb#L408 # https://github.com/ruby/resolv/blob/095f1c003f6073730500f02acbdbc55f83d70987/lib/resolv.rb#L408
ip_address_families = begin ip_address_families = begin
list = Socket.ip_address_list list = Socket.ip_address_list
if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? && !a.ipv6_unique_local? } if list.any? { |a| a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal? }
[Socket::AF_INET6, Socket::AF_INET] [Socket::AF_INET6, Socket::AF_INET]
else else
[Socket::AF_INET] [Socket::AF_INET]
end end
rescue NotImplementedError rescue NotImplementedError
[Socket::AF_INET] [Socket::AF_INET]
end.freeze
SET_TEMPORARY_NAME = ->(mod, pl = nil) do
if mod.respond_to?(:set_temporary_name) # ruby 3.4 only
name = mod.name || "#{mod.superclass.name}(plugin)"
name = "#{name}/#{pl}" if pl
mod.set_temporary_name(name)
end
end end
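`Module#set_temporary_name` (Ruby 3.4+) gives anonymous classes a readable name in `inspect` output and backtraces; the `respond_to?` guard keeps older rubies working. A minimal demonstration of the feature the lambda wraps:

```ruby
# Anonymous subclasses normally have a nil name; on Ruby 3.4+ they
# can be given a temporary display name without defining a constant.
klass = Class.new(String)

if klass.respond_to?(:set_temporary_name)
  klass.set_temporary_name("String(plugin)")
end
```

On 3.4+, `klass.name` then returns `"String(plugin)"`; on older rubies it stays `nil`, which is why the guard is needed.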
DEFAULT_OPTIONS = { DEFAULT_OPTIONS = {
:max_requests => Float::INFINITY, :max_requests => Float::INFINITY,
:debug => ENV.key?("HTTPX_DEBUG") ? $stderr : nil, :debug => nil,
:debug_level => (ENV["HTTPX_DEBUG"] || 1).to_i, :debug_level => (ENV["HTTPX_DEBUG"] || 1).to_i,
:ssl => {}, :debug_redact => ENV.key?("HTTPX_DEBUG_REDACT"),
:http2_settings => { settings_enable_push: 0 }, :ssl => EMPTY_HASH,
:http2_settings => { settings_enable_push: 0 }.freeze,
:fallback_protocol => "http/1.1", :fallback_protocol => "http/1.1",
:supported_compression_formats => %w[gzip deflate], :supported_compression_formats => %w[gzip deflate],
:decompress_response_body => true, :decompress_response_body => true,
@ -47,23 +56,26 @@ module HTTPX
write_timeout: WRITE_TIMEOUT, write_timeout: WRITE_TIMEOUT,
request_timeout: REQUEST_TIMEOUT, request_timeout: REQUEST_TIMEOUT,
}, },
:headers_class => Class.new(Headers), :headers_class => Class.new(Headers, &SET_TEMPORARY_NAME),
:headers => {}, :headers => {},
:window_size => WINDOW_SIZE, :window_size => WINDOW_SIZE,
:buffer_size => BUFFER_SIZE, :buffer_size => BUFFER_SIZE,
:body_threshold_size => MAX_BODY_THRESHOLD_SIZE, :body_threshold_size => MAX_BODY_THRESHOLD_SIZE,
:request_class => Class.new(Request), :request_class => Class.new(Request, &SET_TEMPORARY_NAME),
:response_class => Class.new(Response), :response_class => Class.new(Response, &SET_TEMPORARY_NAME),
:request_body_class => Class.new(Request::Body), :request_body_class => Class.new(Request::Body, &SET_TEMPORARY_NAME),
:response_body_class => Class.new(Response::Body), :response_body_class => Class.new(Response::Body, &SET_TEMPORARY_NAME),
:connection_class => Class.new(Connection), :pool_class => Class.new(Pool, &SET_TEMPORARY_NAME),
:options_class => Class.new(self), :connection_class => Class.new(Connection, &SET_TEMPORARY_NAME),
:options_class => Class.new(self, &SET_TEMPORARY_NAME),
:transport => nil, :transport => nil,
:addresses => nil, :addresses => nil,
:persistent => false, :persistent => false,
:resolver_class => (ENV["HTTPX_RESOLVER"] || :native).to_sym, :resolver_class => (ENV["HTTPX_RESOLVER"] || :native).to_sym,
:resolver_options => { cache: true }, :resolver_options => { cache: true }.freeze,
:pool_options => EMPTY_HASH,
:ip_families => ip_address_families, :ip_families => ip_address_families,
:close_on_fork => false,
}.freeze }.freeze
class << self class << self
@ -90,8 +102,9 @@ module HTTPX
# #
# :debug :: an object which log messages are written to (must respond to <tt><<</tt>) # :debug :: an object which log messages are written to (must respond to <tt><<</tt>)
# :debug_level :: the log level of messages (can be 1, 2, or 3). # :debug_level :: the log level of messages (can be 1, 2, or 3).
# :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::IO::SSL) # :debug_redact :: whether header/body payload should be redacted (defaults to <tt>false</tt>).
# :http2_settings :: a hash of options to be passed to a HTTP2Next::Connection (ex: <tt>{ max_concurrent_streams: 2 }</tt>) # :ssl :: a hash of options which can be set as params of OpenSSL::SSL::SSLContext (see HTTPX::SSL)
# :http2_settings :: a hash of options to be passed to a HTTP2::Connection (ex: <tt>{ max_concurrent_streams: 2 }</tt>)
# :fallback_protocol :: version of HTTP protocol to use by default in the absence of protocol negotiation # :fallback_protocol :: version of HTTP protocol to use by default in the absence of protocol negotiation
# like ALPN (defaults to <tt>"http/1.1"</tt>) # like ALPN (defaults to <tt>"http/1.1"</tt>)
# :supported_compression_formats :: list of compressions supported by the transcoder layer (defaults to <tt>%w[gzip deflate]</tt>). # :supported_compression_formats :: list of compressions supported by the transcoder layer (defaults to <tt>%w[gzip deflate]</tt>).
@ -110,6 +123,7 @@ module HTTPX
# :request_body_class :: class used to instantiate a request body # :request_body_class :: class used to instantiate a request body
# :response_body_class :: class used to instantiate a response body # :response_body_class :: class used to instantiate a response body
# :connection_class :: class used to instantiate connections # :connection_class :: class used to instantiate connections
# :pool_class :: class used to instantiate the session connection pool
# :options_class :: class used to instantiate options # :options_class :: class used to instantiate options
# :transport :: type of transport to use (set to "unix" for UNIX sockets) # :transport :: type of transport to use (set to "unix" for UNIX sockets)
# :addresses :: bucket of peer addresses (can be a list of IP addresses, a hash of domain to list of addresses; # :addresses :: bucket of peer addresses (can be a list of IP addresses, a hash of domain to list of addresses;
@ -118,31 +132,44 @@ module HTTPX
# :persistent :: whether to persist connections in between requests (defaults to <tt>true</tt>) # :persistent :: whether to persist connections in between requests (defaults to <tt>true</tt>)
# :resolver_class :: which resolver to use (defaults to <tt>:native</tt>, can also be <tt>:system</tt> for # :resolver_class :: which resolver to use (defaults to <tt>:native</tt>, can also be <tt>:system</tt> for
# using getaddrinfo or <tt>:https</tt> for DoH resolver, or a custom class) # using getaddrinfo or <tt>:https</tt> for DoH resolver, or a custom class)
# :resolver_options :: hash of options passed to the resolver # :resolver_options :: hash of options passed to the resolver. Accepted keys depend on the resolver type.
# :pool_options :: hash of options passed to the connection pool (See Pool#initialize).
# :ip_families :: which socket families are supported (system-dependent) # :ip_families :: which socket families are supported (system-dependent)
# :origin :: HTTP origin to set on requests with relative path (ex: "https://api.serv.com") # :origin :: HTTP origin to set on requests with relative path (ex: "https://api.serv.com")
# :base_path :: path to prefix given relative paths with (ex: "/v2") # :base_path :: path to prefix given relative paths with (ex: "/v2")
# :max_concurrent_requests :: max number of requests which can be set concurrently # :max_concurrent_requests :: max number of requests which can be set concurrently
# :max_requests :: max number of requests which can be made on socket before it reconnects. # :max_requests :: max number of requests which can be made on socket before it reconnects.
# :params :: hash or array of key-values which will be encoded and set in the query string of request uris. # :close_on_fork :: whether the session automatically closes when the process is forked (defaults to <tt>false</tt>).
# :form :: hash or array of key-values which will be form-or-multipart-encoded in requests body payload. # it only works if the session is persistent (and ruby 3.1 or higher is used).
# :json :: hash or array of key-values which will be JSON-encoded in requests body payload.
# :xml :: Nokogiri XML nodes which will be encoded in requests body payload.
# #
# This list of options are enhanced with each loaded plugin, see the plugin docs for details. # This list of options are enhanced with each loaded plugin, see the plugin docs for details.
def initialize(options = {}) def initialize(options = {})
do_initialize(options) defaults = DEFAULT_OPTIONS.merge(options)
defaults.each do |k, v|
next if v.nil?
option_method_name = :"option_#{k}"
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
end
freeze freeze
end end
def freeze def freeze
super
@origin.freeze @origin.freeze
@base_path.freeze @base_path.freeze
@timeout.freeze @timeout.freeze
@headers.freeze @headers.freeze
@addresses.freeze @addresses.freeze
@supported_compression_formats.freeze @supported_compression_formats.freeze
@ssl.freeze
@http2_settings.freeze
@pool_options.freeze
@resolver_options.freeze
@ip_families.freeze
super
end end
def option_origin(value) def option_origin(value)
@ -165,41 +192,6 @@ module HTTPX
Array(value).map(&:to_s) Array(value).map(&:to_s)
end end
def option_max_concurrent_requests(value)
raise TypeError, ":max_concurrent_requests must be positive" unless value.positive?
value
end
def option_max_requests(value)
raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
def option_window_size(value)
value = Integer(value)
raise TypeError, ":window_size must be positive" unless value.positive?
value
end
def option_buffer_size(value)
value = Integer(value)
raise TypeError, ":buffer_size must be positive" unless value.positive?
value
end
def option_body_threshold_size(value)
bytes = Integer(value)
raise TypeError, ":body_threshold_size must be positive" unless bytes.positive?
bytes
end
def option_transport(value) def option_transport(value)
transport = value.to_s transport = value.to_s
raise TypeError, "#{transport} is an unsupported transport type" unless %w[unix].include?(transport) raise TypeError, "#{transport} is an unsupported transport type" unless %w[unix].include?(transport)
@ -215,20 +207,47 @@ module HTTPX
Array(value) Array(value)
end end
# number options
%i[
max_concurrent_requests max_requests window_size buffer_size
body_threshold_size debug_level
].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into an Integer before setting the +#{option}+ option.
def option_#{option}(value) # def option_max_requests(v)
value = Integer(value) unless value.infinite?
raise TypeError, ":#{option} must be positive" unless value.positive? # raise TypeError, ":max_requests must be positive" unless value.positive?
value
end
OUT
end
# hashable options
%i[ssl http2_settings resolver_options pool_options].each do |option|
class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# converts +v+ into a Hash before setting the +#{option}+ option.
def option_#{option}(value) # def option_ssl(v)
Hash[value]
end
OUT
end
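The `class_eval` loops above stamp out one validator per option name. Roughly what a single generated integer validator expands to (the `is_a?(Float)` guard here is an illustrative way to let the `Float::INFINITY` default through, as the library does with `infinite?`):

```ruby
# Coerce the option to an Integer (letting Float::INFINITY pass
# through) and reject non-positive values.
def option_window_size(value)
  value = Integer(value) unless value.is_a?(Float) && value.infinite?
  raise TypeError, ":window_size must be positive" unless value.positive?

  value
end
```

The hashable-option loop is analogous, with `Hash[value]` as the coercion step.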
%i[ %i[
params form json xml body ssl http2_settings
request_class response_class headers_class request_body_class request_class response_class headers_class request_body_class
response_body_class connection_class options_class response_body_class connection_class options_class
io fallback_protocol debug debug_level resolver_class resolver_options pool_class
io fallback_protocol debug debug_redact resolver_class
compress_request_body decompress_response_body compress_request_body decompress_response_body
persistent persistent close_on_fork
].each do |method_name| ].each do |method_name|
class_eval(<<-OUT, __FILE__, __LINE__ + 1) class_eval(<<-OUT, __FILE__, __LINE__ + 1)
# sets +v+ as the value of the +#{method_name}+ option
def option_#{method_name}(v); v; end # def option_smth(v); v; end def option_#{method_name}(v); v; end # def option_smth(v); v; end
OUT OUT
end end
REQUEST_BODY_IVARS = %i[@headers @params @form @xml @json @body].freeze REQUEST_BODY_IVARS = %i[@headers].freeze
def ==(other) def ==(other)
super || options_equals?(other) super || options_equals?(other)
@ -249,14 +268,6 @@ module HTTPX
end end
end end
OTHER_LOOKUP = ->(obj, k, ivar_map) {
case obj
when Hash
obj[ivar_map[k]]
else
obj.instance_variable_get(k)
end
}
def merge(other) def merge(other)
ivar_map = nil ivar_map = nil
other_ivars = case other other_ivars = case other
@ -269,12 +280,12 @@ module HTTPX
return self if other_ivars.empty? return self if other_ivars.empty?
return self if other_ivars.all? { |ivar| instance_variable_get(ivar) == OTHER_LOOKUP[other, ivar, ivar_map] } return self if other_ivars.all? { |ivar| instance_variable_get(ivar) == access_option(other, ivar, ivar_map) }
opts = dup opts = dup
other_ivars.each do |ivar| other_ivars.each do |ivar|
v = OTHER_LOOKUP[other, ivar, ivar_map] v = access_option(other, ivar, ivar_map)
unless v unless v
opts.instance_variable_set(ivar, v) opts.instance_variable_set(ivar, v)
@ -302,31 +313,42 @@ module HTTPX
def extend_with_plugin_classes(pl) def extend_with_plugin_classes(pl)
if defined?(pl::RequestMethods) || defined?(pl::RequestClassMethods) if defined?(pl::RequestMethods) || defined?(pl::RequestClassMethods)
@request_class = @request_class.dup @request_class = @request_class.dup
SET_TEMPORARY_NAME[@request_class, pl]
@request_class.__send__(:include, pl::RequestMethods) if defined?(pl::RequestMethods) @request_class.__send__(:include, pl::RequestMethods) if defined?(pl::RequestMethods)
@request_class.extend(pl::RequestClassMethods) if defined?(pl::RequestClassMethods) @request_class.extend(pl::RequestClassMethods) if defined?(pl::RequestClassMethods)
end end
if defined?(pl::ResponseMethods) || defined?(pl::ResponseClassMethods) if defined?(pl::ResponseMethods) || defined?(pl::ResponseClassMethods)
@response_class = @response_class.dup @response_class = @response_class.dup
SET_TEMPORARY_NAME[@response_class, pl]
@response_class.__send__(:include, pl::ResponseMethods) if defined?(pl::ResponseMethods) @response_class.__send__(:include, pl::ResponseMethods) if defined?(pl::ResponseMethods)
@response_class.extend(pl::ResponseClassMethods) if defined?(pl::ResponseClassMethods) @response_class.extend(pl::ResponseClassMethods) if defined?(pl::ResponseClassMethods)
end end
if defined?(pl::HeadersMethods) || defined?(pl::HeadersClassMethods) if defined?(pl::HeadersMethods) || defined?(pl::HeadersClassMethods)
@headers_class = @headers_class.dup @headers_class = @headers_class.dup
SET_TEMPORARY_NAME[@headers_class, pl]
@headers_class.__send__(:include, pl::HeadersMethods) if defined?(pl::HeadersMethods) @headers_class.__send__(:include, pl::HeadersMethods) if defined?(pl::HeadersMethods)
@headers_class.extend(pl::HeadersClassMethods) if defined?(pl::HeadersClassMethods) @headers_class.extend(pl::HeadersClassMethods) if defined?(pl::HeadersClassMethods)
end end
if defined?(pl::RequestBodyMethods) || defined?(pl::RequestBodyClassMethods) if defined?(pl::RequestBodyMethods) || defined?(pl::RequestBodyClassMethods)
@request_body_class = @request_body_class.dup @request_body_class = @request_body_class.dup
SET_TEMPORARY_NAME[@request_body_class, pl]
@request_body_class.__send__(:include, pl::RequestBodyMethods) if defined?(pl::RequestBodyMethods) @request_body_class.__send__(:include, pl::RequestBodyMethods) if defined?(pl::RequestBodyMethods)
@request_body_class.extend(pl::RequestBodyClassMethods) if defined?(pl::RequestBodyClassMethods) @request_body_class.extend(pl::RequestBodyClassMethods) if defined?(pl::RequestBodyClassMethods)
end end
if defined?(pl::ResponseBodyMethods) || defined?(pl::ResponseBodyClassMethods) if defined?(pl::ResponseBodyMethods) || defined?(pl::ResponseBodyClassMethods)
@response_body_class = @response_body_class.dup @response_body_class = @response_body_class.dup
SET_TEMPORARY_NAME[@response_body_class, pl]
@response_body_class.__send__(:include, pl::ResponseBodyMethods) if defined?(pl::ResponseBodyMethods) @response_body_class.__send__(:include, pl::ResponseBodyMethods) if defined?(pl::ResponseBodyMethods)
@response_body_class.extend(pl::ResponseBodyClassMethods) if defined?(pl::ResponseBodyClassMethods) @response_body_class.extend(pl::ResponseBodyClassMethods) if defined?(pl::ResponseBodyClassMethods)
end end
if defined?(pl::PoolMethods)
@pool_class = @pool_class.dup
SET_TEMPORARY_NAME[@pool_class, pl]
@pool_class.__send__(:include, pl::PoolMethods)
end
if defined?(pl::ConnectionMethods) if defined?(pl::ConnectionMethods)
@connection_class = @connection_class.dup @connection_class = @connection_class.dup
SET_TEMPORARY_NAME[@connection_class, pl]
@connection_class.__send__(:include, pl::ConnectionMethods) @connection_class.__send__(:include, pl::ConnectionMethods)
end end
return unless defined?(pl::OptionsMethods) return unless defined?(pl::OptionsMethods)
@ -337,16 +359,12 @@ module HTTPX
private private
def do_initialize(options = {}) def access_option(obj, k, ivar_map)
defaults = DEFAULT_OPTIONS.merge(options) case obj
defaults.each do |k, v| when Hash
next if v.nil? obj[ivar_map[k]]
else
option_method_name = :"option_#{k}" obj.instance_variable_get(k)
raise Error, "unknown option: #{k}" unless respond_to?(option_method_name)
value = __send__(option_method_name, v)
instance_variable_set(:"@#{k}", value)
end end
end end
end end

View File

@ -23,7 +23,7 @@ module HTTPX
def reset! def reset!
@state = :idle @state = :idle
@headers.clear @headers = {}
@content_length = nil @content_length = nil
@_has_trailers = nil @_has_trailers = nil
end end
@ -75,6 +75,7 @@ module HTTPX
buffer = @buffer buffer = @buffer
while (idx = buffer.index("\n")) while (idx = buffer.index("\n"))
# @type var line: String
line = buffer.byteslice(0..idx) line = buffer.byteslice(0..idx)
raise Error, "wrong header format" if line.start_with?("\s", "\t") raise Error, "wrong header format" if line.start_with?("\s", "\t")
@ -101,9 +102,11 @@ module HTTPX
separator_index = line.index(":") separator_index = line.index(":")
raise Error, "wrong header format" unless separator_index raise Error, "wrong header format" unless separator_index
# @type var key: String
key = line.byteslice(0..(separator_index - 1)) key = line.byteslice(0..(separator_index - 1))
key.rstrip! # was lstripped previously! key.rstrip! # was lstripped previously!
# @type var value: String
value = line.byteslice((separator_index + 1)..-1) value = line.byteslice((separator_index + 1)..-1)
value.strip! value.strip!
raise Error, "wrong header format" if value.nil? raise Error, "wrong header format" if value.nil?
@ -118,6 +121,7 @@ module HTTPX
@observer.on_data(chunk) @observer.on_data(chunk)
end end
elsif @content_length elsif @content_length
# @type var data: String
data = @buffer.byteslice(0, @content_length) data = @buffer.byteslice(0, @content_length)
@buffer = @buffer.byteslice(@content_length..-1) || "".b @buffer = @buffer.byteslice(@content_length..-1) || "".b
@content_length -= data.bytesize @content_length -= data.bytesize

View File

@ -30,7 +30,8 @@ module HTTPX
auth_info = authenticate[/^(\w+) (.*)/, 2] auth_info = authenticate[/^(\w+) (.*)/, 2]
params = auth_info.split(/ *, */) params = auth_info.split(/ *, */)
.to_h { |val| val.split("=") }.transform_values { |v| v.delete("\"") } .to_h { |val| val.split("=", 2) }
.transform_values { |v| v.delete("\"") }
nonce = params["nonce"] nonce = params["nonce"]
nc = next_nonce nc = next_nonce

View File

@ -72,6 +72,9 @@ module HTTPX
end end
end end
# adds support for the following options:
#
# :aws_profile :: AWS account profile to retrieve credentials from.
module OptionsMethods module OptionsMethods
def option_aws_profile(value) def option_aws_profile(value)
String(value) String(value)

View File

@ -12,6 +12,7 @@ module HTTPX
module AWSSigV4 module AWSSigV4
Credentials = Struct.new(:username, :password, :security_token) Credentials = Struct.new(:username, :password, :security_token)
# Signs requests using AWS sigv4 signing.
class Signer class Signer
def initialize( def initialize(
service:, service:,
@ -88,7 +89,7 @@ module HTTPX
sts = "#{algo_line}" \ sts = "#{algo_line}" \
"\n#{datetime}" \ "\n#{datetime}" \
"\n#{credential_scope}" \ "\n#{credential_scope}" \
"\n#{hexdigest(creq)}" "\n#{OpenSSL::Digest.new(@algorithm).hexdigest(creq)}"
# signature # signature
k_date = hmac("#{upper_provider_prefix}#{@credentials.password}", date) k_date = hmac("#{upper_provider_prefix}#{@credentials.password}", date)
@ -109,22 +110,38 @@ module HTTPX
private private
def hexdigest(value) def hexdigest(value)
if value.respond_to?(:to_path) digest = OpenSSL::Digest.new(@algorithm)
# files, pathnames
OpenSSL::Digest.new(@algorithm).file(value.to_path).hexdigest
elsif value.respond_to?(:each)
digest = OpenSSL::Digest.new(@algorithm)
mb_buffer = value.each.with_object("".b) do |chunk, buffer| if value.respond_to?(:read)
buffer << chunk if value.respond_to?(:to_path)
break if buffer.bytesize >= 1024 * 1024 # files, pathnames
digest.file(value.to_path).hexdigest
else
# gzipped request bodies
raise Error, "request body must be rewindable" unless value.respond_to?(:rewind)
buffer = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
begin
IO.copy_stream(value, buffer)
buffer.flush
digest.file(buffer.to_path).hexdigest
ensure
value.rewind
buffer.close
buffer.unlink
end
end
else
# error on endless generators
raise Error, "hexdigest for endless enumerators is not supported" if value.unbounded_body?
mb_buffer = value.each.with_object("".b) do |chunk, b|
b << chunk
break if b.bytesize >= 1024 * 1024
end end
digest.update(mb_buffer) digest.hexdigest(mb_buffer)
value.rewind
digest.hexdigest
else
OpenSSL::Digest.new(@algorithm).hexdigest(value)
end end
end end
@ -141,7 +158,7 @@ module HTTPX
def load_dependencies(*) def load_dependencies(*)
require "set" require "set"
require "digest/sha2" require "digest/sha2"
require "openssl" require "cgi/escape"
end end
def configure(klass) def configure(klass)
@ -149,6 +166,9 @@ module HTTPX
end end
end end
# adds support for the following options:
#
# :sigv4_signer :: instance of HTTPX::Plugins::AWSSigV4 used to sign requests.
module OptionsMethods module OptionsMethods
def option_sigv4_signer(value) def option_sigv4_signer(value)
value.is_a?(Signer) ? value : Signer.new(value) value.is_a?(Signer) ? value : Signer.new(value)
@ -160,7 +180,7 @@ module HTTPX
with(sigv4_signer: Signer.new(**options)) with(sigv4_signer: Signer.new(**options))
end end
def build_request(*, _) def build_request(*)
request = super request = super
return request if request.headers.key?("authorization") return request if request.headers.key?("authorization")
@ -197,8 +217,8 @@ module HTTPX
params.each.with_index.sort do |a, b| params.each.with_index.sort do |a, b|
a, a_offset = a a, a_offset = a
b, b_offset = b b, b_offset = b
a_name, a_value = a.split("=") a_name, a_value = a.split("=", 2)
b_name, b_value = b.split("=") b_name, b_value = b.split("=", 2)
if a_name == b_name if a_name == b_name
if a_value == b_value if a_value == b_value
a_offset <=> b_offset a_offset <=> b_offset

View File

@ -8,6 +8,13 @@ module HTTPX
# https://gitlab.com/os85/httpx/-/wikis/Events # https://gitlab.com/os85/httpx/-/wikis/Events
# #
module Callbacks module Callbacks
CALLBACKS = %i[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].freeze
# connection closed user-space errors happen after errors can be surfaced to requests, # connection closed user-space errors happen after errors can be surfaced to requests,
# so they need to pierce through the scheduler, which is only possible by simulating an # so they need to pierce through the scheduler, which is only possible by simulating an
# interrupt. # interrupt.
@ -16,27 +23,38 @@ module HTTPX
module InstanceMethods module InstanceMethods
include HTTPX::Callbacks include HTTPX::Callbacks
%i[ CALLBACKS.each do |meth|
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].each do |meth|
class_eval(<<-MOD, __FILE__, __LINE__ + 1) class_eval(<<-MOD, __FILE__, __LINE__ + 1)
def on_#{meth}(&blk) # def on_connection_opened(&blk) def on_#{meth}(&blk) # def on_connection_opened(&blk)
on(:#{meth}, &blk) # on(:connection_opened, &blk) on(:#{meth}, &blk) # on(:connection_opened, &blk)
self # self
end # end end # end
MOD MOD
end end
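Hoisting the event names into a `CALLBACKS` constant lets the same list drive both the generated `on_*` registration methods and the callback copying in `branch`. The metaprogramming pattern itself, reduced to a toy event bus (names here are illustrative, not the plugin's API):

```ruby
# Generate one chainable on_<event> registration method per event
# name, as the plugin does with class_eval over CALLBACKS.
class Bus
  EVENTS = %i[connection_opened connection_closed].freeze

  def initialize
    @callbacks = Hash.new { |h, k| h[k] = [] }
  end

  EVENTS.each do |meth|
    class_eval(<<-MOD, __FILE__, __LINE__ + 1)
      def on_#{meth}(&blk)          # def on_connection_opened(&blk)
        @callbacks[:#{meth}] << blk #   @callbacks[:connection_opened] << blk
        self                        #   self
      end                           # end
    MOD
  end

  def emit(event, *args)
    @callbacks[event].each { |cb| cb.call(*args) }
  end
end
```

Returning `self` from each generated method is what makes registrations chainable, matching the plugin's generated helpers.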
private private
def init_connection(uri, options) def branch(options, &blk)
connection = super super(options).tap do |sess|
CALLBACKS.each do |cb|
next unless callbacks_for?(cb)
sess.callbacks(cb).concat(callbacks(cb))
end
sess.wrap(&blk) if blk
end
end
def do_init_connection(connection, selector)
super
connection.on(:open) do connection.on(:open) do
next unless connection.current_session == self
emit_or_callback_error(:connection_opened, connection.origin, connection.io.socket) emit_or_callback_error(:connection_opened, connection.origin, connection.io.socket)
end end
connection.on(:close) do connection.on(:close) do
next unless connection.current_session == self
emit_or_callback_error(:connection_closed, connection.origin) if connection.used? emit_or_callback_error(:connection_closed, connection.origin) if connection.used?
end end
@ -84,6 +102,12 @@ module HTTPX
rescue CallbackError => e rescue CallbackError => e
raise e.cause raise e.cause
end end
def close(*)
super
rescue CallbackError => e
raise e.cause
end
end end
end end
register_plugin :callbacks, Callbacks register_plugin :callbacks, Callbacks
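The `CALLBACKS.each { class_eval(...) }` pattern above generates one `on_<event>` method per event name. A hedged, self-contained sketch of the same metaprogramming on a toy class (the `Callbacks` module here is a stand-in for `HTTPX::Callbacks`, assumed to expose `#on` and `#emit` with this shape):

```ruby
module Callbacks
  # minimal event registry standing in for HTTPX::Callbacks
  def on(type, &blk)
    (@callbacks ||= Hash.new { |h, k| h[k] = [] })[type] << blk
    self
  end

  def emit(type, *args)
    (@callbacks || {}).fetch(type, []).each { |cb| cb.call(*args) }
  end
end

class Client
  include Callbacks

  CALLBACKS = %i[connection_opened connection_closed].freeze

  # same trick as the plugin: define on_connection_opened, on_connection_closed, ...
  CALLBACKS.each do |meth|
    class_eval(<<-MOD, __FILE__, __LINE__ + 1)
      def on_#{meth}(&blk)   # def on_connection_opened(&blk)
        on(:#{meth}, &blk)   #   on(:connection_opened, &blk)
      end                    # end
    MOD
  end
end

client = Client.new
opened = []
client.on_connection_opened { |origin| opened << origin }
client.emit(:connection_opened, "https://example.com")
# opened now holds the emitted origin
```

Returning `self` from `on` is what lets the generated `on_*` methods chain, mirroring the plugin's fluent style.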


@@ -32,15 +32,11 @@ module HTTPX
         @circuit_store = CircuitStore.new(@options)
       end

-      def initialize_dup(orig)
-        super
-        @circuit_store = orig.instance_variable_get(:@circuit_store).dup
-      end
-
       %i[circuit_open].each do |meth|
         class_eval(<<-MOD, __FILE__, __LINE__ + 1)
           def on_#{meth}(&blk)   # def on_circuit_open(&blk)
             on(:#{meth}, &blk)   #   on(:circuit_open, &blk)
             self                 #   self
           end                    # end
         MOD
       end
@@ -74,10 +70,11 @@ module HTTPX
         short_circuit_responses
       end

-      def on_response(request, response)
-        emit(:circuit_open, request) if try_circuit_open(request, response)
+      def set_request_callbacks(request)
         super
+        request.on(:response) do |response|
+          emit(:circuit_open, request) if try_circuit_open(request, response)
+        end
       end

       def try_circuit_open(request, response)
@@ -97,6 +94,16 @@ module HTTPX
       end
     end

+    # adds support for the following options:
+    #
+    # :circuit_breaker_max_attempts :: the number of attempts the circuit allows, before it is opened (defaults to <tt>3</tt>).
+    # :circuit_breaker_reset_attempts_in :: the time a circuit stays open at most, before it resets (defaults to <tt>60</tt>).
+    # :circuit_breaker_break_on :: callable defining an alternative rule for a response to break
+    #                              (i.e. <tt>->(res) { res.status == 429 }</tt>)
+    # :circuit_breaker_break_in :: the time that must elapse before an open circuit can transit to the half-open state
+    #                              (defaults to <tt>60</tt>).
+    # :circuit_breaker_half_open_drip_rate :: the rate of requests a circuit allows to be performed when in an half-open state
+    #                                         (defaults to <tt>1</tt>).
     module OptionsMethods
       def option_circuit_breaker_max_attempts(value)
         attempts = Integer(value)
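The `:circuit_breaker_half_open_drip_rate` option documented above admits only a fraction of requests while a circuit is probing recovery. A hedged sketch of that gating idea (the helper name and shape are illustrative, not httpx's internals):

```ruby
# Illustrative only: decide whether a request may pass through a half-open
# circuit, admitting roughly +drip_rate+ of attempts. A rate of 1 (the
# documented default) lets every request through.
def allow_half_open_request?(drip_rate, rng: Random.new)
  raise ArgumentError, "drip rate must be within (0, 1]" unless drip_rate.positive? && drip_rate <= 1

  rng.rand <= drip_rate
end
```

With a lower rate (say `0.1`), most probes short-circuit immediately and only a trickle reaches the origin while it recovers.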


@@ -0,0 +1,202 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds `Content-Digest` headers to requests
# and can validate these headers on responses
#
# https://datatracker.ietf.org/doc/html/rfc9530
#
module ContentDigest
class Error < HTTPX::Error; end
# Error raised on response "content-digest" header validation.
class ValidationError < Error
attr_reader :response
def initialize(message, response)
super(message)
@response = response
end
end
class MissingContentDigestError < ValidationError; end
class InvalidContentDigestError < ValidationError; end
SUPPORTED_ALGORITHMS = {
"sha-256" => OpenSSL::Digest::SHA256,
"sha-512" => OpenSSL::Digest::SHA512,
}.freeze
class << self
def extra_options(options)
options.merge(encode_content_digest: true, validate_content_digest: false, content_digest_algorithm: "sha-256")
end
end
# add support for the following options:
#
# :content_digest_algorithm :: the digest algorithm to use. Currently supports `sha-256` and `sha-512`. (defaults to `sha-256`)
# :encode_content_digest :: whether a <tt>Content-Digest</tt> header should be computed for the request;
# can also be a callable object (i.e. <tt>->(req) { ... }</tt>, defaults to <tt>true</tt>)
# :validate_content_digest :: whether a <tt>Content-Digest</tt> header in the response should be validated;
# can also be a callable object (i.e. <tt>->(res) { ... }</tt>, defaults to <tt>false</tt>)
module OptionsMethods
def option_content_digest_algorithm(value)
raise TypeError, ":content_digest_algorithm must be one of 'sha-256', 'sha-512'" unless SUPPORTED_ALGORITHMS.key?(value)
value
end
def option_encode_content_digest(value)
value
end
def option_validate_content_digest(value)
value
end
end
module ResponseBodyMethods
attr_reader :content_digest_buffer
def initialize(response, options)
super
return unless response.headers.key?("content-digest")
should_validate = options.validate_content_digest
should_validate = should_validate.call(response) if should_validate.respond_to?(:call)
return unless should_validate
@content_digest_buffer = Response::Buffer.new(
threshold_size: @options.body_threshold_size,
bytesize: @length,
encoding: @encoding
)
end
def write(chunk)
@content_digest_buffer.write(chunk) if @content_digest_buffer
super
end
def close
if @content_digest_buffer
@content_digest_buffer.close
@content_digest_buffer = nil
end
super
end
end
module InstanceMethods
def build_request(*)
request = super
return request if request.empty?
return request if request.headers.key?("content-digest")
perform_encoding = @options.encode_content_digest
perform_encoding = perform_encoding.call(request) if perform_encoding.respond_to?(:call)
return request unless perform_encoding
digest = base64digest(request.body)
request.headers.add("content-digest", "#{@options.content_digest_algorithm}=:#{digest}:")
request
end
private
def fetch_response(request, _, _)
response = super
return response unless response.is_a?(Response)
perform_validation = @options.validate_content_digest
perform_validation = perform_validation.call(response) if perform_validation.respond_to?(:call)
validate_content_digest(response) if perform_validation
response
rescue ValidationError => e
ErrorResponse.new(request, e)
end
def validate_content_digest(response)
content_digest_header = response.headers["content-digest"]
raise MissingContentDigestError.new("response is missing a `content-digest` header", response) unless content_digest_header
digests = extract_content_digests(content_digest_header)
included_algorithms = SUPPORTED_ALGORITHMS.keys & digests.keys
raise MissingContentDigestError.new("unsupported algorithms: #{digests.keys.join(", ")}", response) if included_algorithms.empty?
content_buffer = response.body.content_digest_buffer
included_algorithms.each do |algorithm|
digest = SUPPORTED_ALGORITHMS.fetch(algorithm).new
digest_received = digests[algorithm]
digest_computed =
if content_buffer.respond_to?(:to_path)
content_buffer.flush
digest.file(content_buffer.to_path).base64digest
else
digest.base64digest(content_buffer.to_s)
end
raise InvalidContentDigestError.new("#{algorithm} digest does not match content",
response) unless digest_received == digest_computed
end
end
def extract_content_digests(header)
header.split(",").to_h do |entry|
algorithm, digest = entry.split("=", 2)
raise Error, "#{entry} is an invalid digest format" unless algorithm && digest
[algorithm, digest.byteslice(1..-2)]
end
end
def base64digest(body)
digest = SUPPORTED_ALGORITHMS.fetch(@options.content_digest_algorithm).new
if body.respond_to?(:read)
if body.respond_to?(:to_path)
digest.file(body.to_path).base64digest
else
raise ContentDigestError, "request body must be rewindable" unless body.respond_to?(:rewind)
buffer = Tempfile.new("httpx", encoding: Encoding::BINARY, mode: File::RDWR)
begin
IO.copy_stream(body, buffer)
buffer.flush
digest.file(buffer.to_path).base64digest
ensure
body.rewind
buffer.close
buffer.unlink
end
end
else
raise ContentDigestError, "base64digest for endless enumerators is not supported" if body.unbounded_body?
buffer = "".b
body.each { |chunk| buffer << chunk }
digest.base64digest(buffer)
end
end
end
end
register_plugin :content_digest, ContentDigest
end
end
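The header value the plugin builds above follows RFC 9530's dictionary format: `<algorithm>=:<base64 digest>:`. A standalone sketch of the computation for a string body, using only OpenSSL (helper name is illustrative):

```ruby
require "openssl"

# Build an RFC 9530 Content-Digest header value for an in-memory request body.
def content_digest_for(body, algorithm: "sha-256")
  digest_class = {
    "sha-256" => OpenSSL::Digest::SHA256,
    "sha-512" => OpenSSL::Digest::SHA512,
  }.fetch(algorithm)
  # the base64 digest sits between colons, as extract_content_digests expects
  "#{algorithm}=:#{digest_class.new.base64digest(body)}:"
end

header = content_digest_for("hello")
```

The colon-wrapped value is why `extract_content_digests` strips the first and last byte with `byteslice(1..-2)` before comparing digests.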


@@ -40,23 +40,23 @@ module HTTPX
         end
       end

-      def build_request(*)
-        request = super
-        request.headers.set_cookie(request.options.cookies[request.uri])
-        request
-      end
-
       private

-      def on_response(_request, response)
-        if response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])
+      def set_request_callbacks(request)
+        super
+        request.on(:response) do |response|
+          next unless response && response.respond_to?(:headers) && (set_cookie = response.headers["set-cookie"])

           log { "cookies: set-cookie is over #{Cookie::MAX_LENGTH}" } if set_cookie.bytesize > Cookie::MAX_LENGTH
           @options.cookies.parse(set_cookie)
         end
-        super
+      end
+
+      def build_request(*, _)
+        request = super
+        request.headers.set_cookie(request.options.cookies[request.uri])
+        request
       end
     end
@@ -70,6 +70,9 @@ module HTTPX
       end
     end

+    # adds support for the following options:
+    #
+    # :cookies :: cookie jar for the session (can be a Hash, an Array, an instance of HTTPX::Plugins::Cookies::CookieJar)
     module OptionsMethods
       def option_headers(*)
         value = super


@@ -59,8 +59,6 @@ module HTTPX
       return @cookies.each(&blk) unless uri

-      uri = URI(uri)
-
       now = Time.now
       tpath = uri.path


@@ -83,7 +83,7 @@ module HTTPX
         scanner.skip(RE_WSP)
         name, value = scan_name_value(scanner, true)
-        value = nil if name.empty?
+        value = nil if name && name.empty?

         attrs = {}
@@ -98,15 +98,18 @@ module HTTPX
           aname, avalue = scan_name_value(scanner, true)
-          next if aname.empty? || value.nil?
+          next if (aname.nil? || aname.empty?) || value.nil?

           aname.downcase!

           case aname
           when "expires"
+            next unless avalue
+
             # RFC 6265 5.2.1
-            (avalue &&= Time.parse(avalue)) || next
+            (avalue = Time.parse(avalue)) || next
           when "max-age"
+            next unless avalue
+
             # RFC 6265 5.2.2
             next unless /\A-?\d+\z/.match?(avalue)
@@ -119,7 +122,7 @@ module HTTPX
             # RFC 6265 5.2.4
             # A relative path must be ignored rather than normalizing it
             # to "/".
-            next unless avalue.start_with?("/")
+            next unless avalue && avalue.start_with?("/")
           when "secure", "httponly"
             # RFC 6265 5.2.5, 5.2.6
             avalue = true
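The `max-age` guard above implements RFC 6265 5.2.2: the attribute value must be an optionally-signed run of digits, and anything else means the attribute is ignored. A hedged, standalone sketch of that rule (helper name is illustrative):

```ruby
# RFC 6265 5.2.2: a Max-Age value is a digit string with an optional
# leading "-"; malformed values make the attribute a no-op (return nil).
def parse_max_age(avalue)
  return nil unless avalue
  return nil unless /\A-?\d+\z/.match?(avalue)

  Integer(avalue)
end
```

Negative values are deliberately accepted here; per the RFC they clamp the expiry to the earliest representable time, i.e. they delete the cookie.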


@@ -20,6 +20,9 @@ module HTTPX
       end
     end

+    # adds support for the following options:
+    #
+    # :digest :: instance of HTTPX::Plugins::Authentication::Digest, used to authenticate requests in the session.
     module OptionsMethods
       def option_digest(value)
         raise TypeError, ":digest must be a #{Authentication::Digest}" unless value.is_a?(Authentication::Digest)


@@ -20,6 +20,11 @@ module HTTPX
       end
     end

+    # adds support for the following options:
+    #
+    # :expect_timeout :: time (in seconds) to wait for a 100-expect response,
+    #                    before retrying without the Expect header (defaults to <tt>2</tt>).
+    # :expect_threshold_size :: min threshold (in bytes) of the request payload to enable the 100-continue negotiation on.
     module OptionsMethods
       def option_expect_timeout(value)
         seconds = Float(value)
@@ -79,7 +84,7 @@ module HTTPX
         return if expect_timeout.nil? || expect_timeout.infinite?

-        set_request_timeout(request, expect_timeout, :expect, %i[body response]) do
+        set_request_timeout(:expect_timeout, request, expect_timeout, :expect, %i[body response]) do
           # expect timeout expired
           if request.state == :expect && !request.expects?
             Expect.no_expect_store << request.origin
@@ -91,15 +96,16 @@ module HTTPX
     end

     module InstanceMethods
-      def fetch_response(request, connections, options)
-        response = @responses.delete(request)
+      def fetch_response(request, selector, options)
+        response = super
         return unless response

         if response.is_a?(Response) && response.status == 417 && request.headers.key?("expect")
           response.close
           request.headers.delete("expect")
           request.transition(:idle)
-          send_request(request, connections, options)
+          send_request(request, selector, options)
           return
         end


@@ -4,21 +4,35 @@ module HTTPX
   InsecureRedirectError = Class.new(Error)

   module Plugins
     #
-    # This plugin adds support for following redirect (status 30X) responses.
+    # This plugin adds support for automatically following redirect (status 30X) responses.
     #
-    # It has an upper bound of followed redirects (see *MAX_REDIRECTS*), after which it
-    # will return the last redirect response. It will **not** raise an exception.
+    # It has a default upper bound of followed redirects (see *MAX_REDIRECTS* and the *max_redirects* option),
+    # after which it will return the last redirect response. It will **not** raise an exception.
     #
-    # It also doesn't follow insecure redirects (https -> http) by default (see *follow_insecure_redirects*).
+    # It doesn't follow insecure redirects (https -> http) by default (see *follow_insecure_redirects*).
+    #
+    # It doesn't propagate authorization related headers to requests redirecting to different origins
+    # (see *allow_auth_to_other_origins* to override).
+    #
+    # It allows customization of when to redirect via the *redirect_on* callback option.
     #
     # https://gitlab.com/os85/httpx/wikis/Follow-Redirects
     #
     module FollowRedirects
       MAX_REDIRECTS = 3
       REDIRECT_STATUS = (300..399).freeze
+      REQUEST_BODY_HEADERS = %w[transfer-encoding content-encoding content-type content-length content-language content-md5 trailer].freeze

       using URIExtensions

+      # adds support for the following options:
+      #
+      # :max_redirects :: max number of times a request will be redirected (defaults to <tt>3</tt>).
+      # :follow_insecure_redirects :: whether redirects to an "http://" URI, when coming from an "https://" one, are allowed
+      #                               (defaults to <tt>false</tt>).
+      # :allow_auth_to_other_origins :: whether auth-related headers, such as "Authorization", are propagated on redirection
+      #                                 (defaults to <tt>false</tt>).
+      # :redirect_on :: optional callback which receives the redirect location and can halt the redirect chain if it returns <tt>false</tt>.
       module OptionsMethods
         def option_max_redirects(value)
           num = Integer(value)
@@ -43,15 +57,16 @@ module HTTPX
       end

       module InstanceMethods
+        # returns a session with the *max_redirects* option set to +n+
         def max_redirects(n)
           with(max_redirects: n.to_i)
         end

         private

-        def fetch_response(request, connections, options)
+        def fetch_response(request, selector, options)
           redirect_request = request.redirect_request
-          response = super(redirect_request, connections, options)
+          response = super(redirect_request, selector, options)
           return unless response

           max_redirects = redirect_request.max_redirects
@@ -60,7 +75,6 @@ module HTTPX
           return response unless REDIRECT_STATUS.include?(response.status) && response.headers.key?("location")
           return response unless max_redirects.positive?

-          # build redirect request
           redirect_uri = __get_location_from_response(response)

           if options.redirect_on
@@ -68,25 +82,43 @@ module HTTPX
             return response unless redirect_allowed
           end

+          # build redirect request
+          request_body = redirect_request.body
+          redirect_method = "GET"
+          redirect_params = {}
+
           if response.status == 305 && options.respond_to?(:proxy)
+            request_body.rewind
             # The requested resource MUST be accessed through the proxy given by
             # the Location field. The Location field gives the URI of the proxy.
-            retry_options = options.merge(headers: redirect_request.headers,
-                                          proxy: { uri: redirect_uri },
-                                          body: redirect_request.body,
-                                          max_redirects: max_redirects - 1)
+            redirect_options = options.merge(headers: redirect_request.headers,
+                                             proxy: { uri: redirect_uri },
+                                             max_redirects: max_redirects - 1)
+            redirect_params[:body] = request_body
             redirect_uri = redirect_request.uri
-            options = retry_options
+            options = redirect_options
           else
             redirect_headers = redirect_request_headers(redirect_request.uri, redirect_uri, request.headers, options)
+            redirect_opts = Hash[options]
+            redirect_params[:max_redirects] = max_redirects - 1

-            # redirects are **ALWAYS** GET
-            retry_opts = Hash[options].merge(
-              headers: redirect_headers.to_h,
-              body: redirect_request.body,
-              max_redirects: max_redirects - 1
-            )
-            retry_options = options.class.new(retry_opts)
+            unless request_body.empty?
+              if response.status == 307
+                # The method and the body of the original request are reused to perform the redirected request.
+                redirect_method = redirect_request.verb
+                request_body.rewind
+                redirect_params[:body] = request_body
+              else
+                # redirects are **ALWAYS** GET, so remove body-related headers
+                REQUEST_BODY_HEADERS.each do |h|
+                  redirect_headers.delete(h)
+                end
+                redirect_params[:body] = nil
+              end
+            end
+
+            options = options.class.new(redirect_opts.merge(headers: redirect_headers.to_h))
           end

           redirect_uri = Utils.to_uri(redirect_uri)
@@ -96,48 +128,61 @@ module HTTPX
              redirect_uri.scheme == "http"
             error = InsecureRedirectError.new(redirect_uri.to_s)
             error.set_backtrace(caller)
-            return ErrorResponse.new(request, error, options)
+            return ErrorResponse.new(request, error)
           end

-          retry_request = build_request("GET", redirect_uri, retry_options)
+          retry_request = build_request(redirect_method, redirect_uri, redirect_params, options)

           request.redirect_request = retry_request

-          retry_after = response.headers["retry-after"]
+          redirect_after = response.headers["retry-after"]

-          if retry_after
+          if redirect_after
             # Servers send the "Retry-After" header field to indicate how long the
             # user agent ought to wait before making a follow-up request.
             # When sent with any 3xx (Redirection) response, Retry-After indicates
             # the minimum time that the user agent is asked to wait before issuing
             # the redirected request.
             #
-            retry_after = Utils.parse_retry_after(retry_after)
+            redirect_after = Utils.parse_retry_after(redirect_after)

-            log { "redirecting after #{retry_after} secs..." }
-            pool.after(retry_after) do
-              send_request(retry_request, connections, options)
+            retry_start = Utils.now
+            log { "redirecting after #{redirect_after} secs..." }
+            selector.after(redirect_after) do
+              if (response = request.response)
+                response.finish!
+                retry_request.response = response
+                # request has terminated abruptly meanwhile
+                retry_request.emit(:response, response)
+              else
+                log { "redirecting (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
+                send_request(retry_request, selector, options)
+              end
             end
           else
-            send_request(retry_request, connections, options)
+            send_request(retry_request, selector, options)
           end
           nil
         end

+        # :nodoc:
         def redirect_request_headers(original_uri, redirect_uri, headers, options)
+          headers = headers.dup
+
           return headers if options.allow_auth_to_other_origins

           return headers unless headers.key?("authorization")

-          unless original_uri.origin == redirect_uri.origin
-            headers = headers.dup
-            headers.delete("authorization")
-          end
+          return headers if original_uri.origin == redirect_uri.origin
+
+          headers.delete("authorization")

           headers
         end

+        # :nodoc:
         def __get_location_from_response(response)
+          # @type var location_uri: http_uri
           location_uri = URI(response.headers["location"])
           location_uri = response.uri.merge(location_uri) if location_uri.relative?
           location_uri
@@ -145,12 +190,15 @@ module HTTPX
       end

       module RequestMethods
+        # returns the top-most original HTTPX::Request from the redirect chain
         attr_accessor :root_request

+        # returns the follow-up redirect request, or itself
         def redirect_request
           @redirect_request || self
         end

+        # sets the follow-up redirect request
         def redirect_request=(req)
           @redirect_request = req
           req.root_request = @root_request || self
@@ -158,7 +206,7 @@ module HTTPX
         end

         def response
-          return super unless @redirect_request
+          return super unless @redirect_request && @response.nil?

           @redirect_request.response
         end
@@ -167,6 +215,16 @@ module HTTPX
           @options.max_redirects || MAX_REDIRECTS
         end
       end
+
+      module ConnectionMethods
+        private
+
+        def set_request_request_timeout(request)
+          return unless request.root_request.nil?
+
+          super
+        end
+      end
     end
     register_plugin :follow_redirects, FollowRedirects
   end
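The redirect rules changed here distinguish 307 (method and body are reused) from the other 3xx statuses (downgraded to GET, with body-describing headers removed). A hedged, standalone sketch of that decision, with an illustrative helper name and a plain Hash standing in for httpx's headers object:

```ruby
REQUEST_BODY_HEADERS = %w[transfer-encoding content-encoding content-type content-length
                          content-language content-md5 trailer].freeze

# Illustrative: compute the method and headers for the follow-up request.
def redirect_method_and_headers(status, verb, headers)
  if status == 307
    # RFC 9110: 307 preserves the request method and body
    [verb, headers]
  else
    # other redirects downgrade to GET, so body metadata must be dropped
    [:get, headers.reject { |name, _| REQUEST_BODY_HEADERS.include?(name) }]
  end
end
```

Dropping `content-length`/`content-type` on the downgrade matters: forwarding them with an empty GET body can make strict servers reject the redirected request.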


@@ -110,10 +110,10 @@ module HTTPX
       end

       module RequestBodyMethods
-        def initialize(headers, _)
+        def initialize(*, **)
           super
-          if (compression = headers["grpc-encoding"])
+          if (compression = @headers["grpc-encoding"])
             deflater_body = self.class.initialize_deflater_body(@body, compression)
             @body = Transcoder::GRPCEncoding.encode(deflater_body || @body, compressed: !deflater_body.nil?)
           else
@@ -124,6 +124,7 @@ module HTTPX
       module InstanceMethods
         def with_channel_credentials(ca_path, key = nil, cert = nil, **ssl_opts)
+          # @type var ssl_params: ::Hash[::Symbol, untyped]
           ssl_params = {
             **ssl_opts,
             ca_file: ca_path,


@@ -15,7 +15,7 @@ module HTTPX
     end

     def inspect
-      "#GRPC::Call(#{grpc_response})"
+      "#{self.class}(#{grpc_response})"
     end

     def to_s


@@ -29,6 +29,8 @@ module HTTPX
         buf = outbuf if outbuf
+        buf = buf.b if buf.frozen?
+
         buf.prepend([compressed_flag, buf.bytesize].pack("CL>"))
         buf
       end
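The `pack("CL>")` prefix above is the gRPC length-prefixed message framing: a 1-byte compressed flag followed by a 4-byte big-endian payload length (`C` = unsigned 8-bit, `L>` = unsigned 32-bit big-endian). A standalone sketch with an illustrative helper name:

```ruby
# gRPC wire framing: | flag (1 byte) | length (4 bytes, big-endian) | payload |
def grpc_frame(payload, compressed: false)
  buf = payload.b # binary, unfrozen copy (mirrors the frozen-string guard above)
  buf.prepend([compressed ? 1 : 0, buf.bytesize].pack("CL>"))
  buf
end
```

The same format string round-trips the prefix: `frame.unpack("CL>")` yields the flag and the payload length back.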


@@ -25,26 +25,6 @@ module HTTPX
       end
     end

-    module InstanceMethods
-      def send_requests(*requests)
-        upgrade_request, *remainder = requests
-        return super unless VALID_H2C_VERBS.include?(upgrade_request.verb) && upgrade_request.scheme == "http"
-
-        connection = pool.find_connection(upgrade_request.uri, upgrade_request.options)
-        return super if connection && connection.upgrade_protocol == "h2c"
-
-        # build upgrade request
-        upgrade_request.headers.add("connection", "upgrade")
-        upgrade_request.headers.add("connection", "http2-settings")
-        upgrade_request.headers["upgrade"] = "h2c"
-        upgrade_request.headers["http2-settings"] = HTTP2Next::Client.settings_header(upgrade_request.options.http2_settings)
-
-        super(upgrade_request, *remainder)
-      end
-    end
-
     class H2CParser < Connection::HTTP2
       def upgrade(request, response)
         # skip checks, it is assumed that this is the first
@@ -62,9 +42,38 @@ module HTTPX
       end
     end

+    module RequestMethods
+      def valid_h2c_verb?
+        VALID_H2C_VERBS.include?(@verb)
+      end
+    end
+
     module ConnectionMethods
       using URIExtensions

+      def initialize(*)
+        super
+        @h2c_handshake = false
+      end
+
+      def send(request)
+        return super if @h2c_handshake
+        return super unless request.valid_h2c_verb? && request.scheme == "http"
+        return super if @upgrade_protocol == "h2c"
+
+        @h2c_handshake = true
+
+        # build upgrade request
+        request.headers.add("connection", "upgrade")
+        request.headers.add("connection", "http2-settings")
+        request.headers["upgrade"] = "h2c"
+        request.headers["http2-settings"] = ::HTTP2::Client.settings_header(request.options.http2_settings)
+
+        super
+      end
+
       def upgrade_to_h2c(request, response)
         prev_parser = @parser
View File

@@ -13,6 +13,12 @@ module HTTPX
   # by the end user in $http_init_time, different diff metrics can be shown. The "point of time" is calculated
   # using the monotonic clock.
   module InternalTelemetry
+    DEBUG_LEVEL = 3
+
+    def self.extra_options(options)
+      options.merge(debug_level: 3)
+    end
+
     module TrackTimeMethods
       private
@@ -28,16 +34,19 @@ module HTTPX
         after_time = Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
         # $http_init_time = after_time
         elapsed = after_time - prev_time
-        warn(+"\e[31m" << "[ELAPSED TIME]: #{label}: #{elapsed} (ms)" << "\e[0m")
-      end
-    end
-
-    module NativeResolverMethods
-      def transition(nextstate)
-        state = @state
-        val = super
-        meter_elapsed_time("Resolver::Native: #{state} -> #{nextstate}")
-        val
+        # klass = self.class
+
+        # until (class_name = klass.name)
+        #   klass = klass.superclass
+        # end
+        log(
+          level: DEBUG_LEVEL,
+          color: :red,
+          debug_level: @options ? @options.debug_level : DEBUG_LEVEL,
+          debug: nil
+        ) do
+          "[ELAPSED TIME]: #{label}: #{elapsed} (ms)"
+        end
       end
     end
@@ -51,13 +60,6 @@ module HTTPX
         meter_elapsed_time("Session: initializing...")
         super
         meter_elapsed_time("Session: initialized!!!")
-        resolver_type = @options.resolver_class
-        resolver_type = Resolver.resolver_for(resolver_type)
-        return unless resolver_type <= Resolver::Native
-
-        resolver_type.prepend TrackTimeMethods
-        resolver_type.prepend NativeResolverMethods
-        @options = @options.merge(resolver_class: resolver_type)
       end

       def close(*)
@@ -76,31 +78,27 @@ module HTTPX
         meter_elapsed_time("Session -> response") if response
         response
       end
+
+      def coalesce_connections(conn1, conn2, selector, *)
+        result = super
+        meter_elapsed_time("Connection##{conn2.object_id} coalescing to Connection##{conn1.object_id}") if result
+        result
+      end
     end

-    module RequestMethods
+    module PoolMethods
       def self.included(klass)
+        klass.prepend Loggable
         klass.prepend TrackTimeMethods
         super
       end

-      def transition(nextstate)
-        prev_state = @state
-        super
-        meter_elapsed_time("Request##{object_id}[#{@verb} #{@uri}: #{prev_state}] -> #{@state}") if prev_state != @state
-      end
-    end
-
-    module ConnectionMethods
-      def self.included(klass)
-        klass.prepend TrackTimeMethods
-        super
-      end
-
-      def handle_transition(nextstate)
-        state = @state
-        super
-        meter_elapsed_time("Connection##{object_id}[#{@origin}]: #{state} -> #{nextstate}") if nextstate == @state
+      def checkin_connection(connection)
+        super.tap do
+          meter_elapsed_time("Pool##{object_id}: checked in connection for Connection##{connection.object_id}[#{connection.origin}]}")
+        end
       end
     end
   end


@@ -16,7 +16,7 @@ module HTTPX
       SUPPORTED_AUTH_METHODS = %w[client_secret_basic client_secret_post].freeze

       class OAuthSession
-        attr_reader :token_endpoint_auth_method, :grant_type, :client_id, :client_secret, :access_token, :refresh_token, :scope
+        attr_reader :grant_type, :client_id, :client_secret, :access_token, :refresh_token, :scope

         def initialize(
           issuer:,
@@ -28,7 +28,7 @@ module HTTPX
           token_endpoint: nil,
           response_type: nil,
           grant_type: nil,
-          token_endpoint_auth_method: "client_secret_basic"
+          token_endpoint_auth_method: nil
         )
           @issuer = URI(issuer)
           @client_id = client_id
@@ -43,10 +43,10 @@ module HTTPX
           end
           @access_token = access_token
           @refresh_token = refresh_token
-          @token_endpoint_auth_method = String(token_endpoint_auth_method)
+          @token_endpoint_auth_method = String(token_endpoint_auth_method) if token_endpoint_auth_method
           @grant_type = grant_type || (@refresh_token ? "refresh_token" : "client_credentials")

-          unless SUPPORTED_AUTH_METHODS.include?(@token_endpoint_auth_method)
+          unless @token_endpoint_auth_method.nil? || SUPPORTED_AUTH_METHODS.include?(@token_endpoint_auth_method)
             raise Error, "#{@token_endpoint_auth_method} is not a supported auth method"
           end
@@ -59,8 +59,12 @@ module HTTPX
           @token_endpoint || "#{@issuer}/token"
         end

+        def token_endpoint_auth_method
+          @token_endpoint_auth_method || "client_secret_basic"
+        end
+
         def load(http)
-          return if @token_endpoint_auth_method && @grant_type && @scope
+          return if @grant_type && @scope

           metadata = http.get("#{@issuer}/.well-known/oauth-authorization-server").raise_for_status.json
@@ -123,11 +127,11 @@ module HTTPX
           # auth
           case oauth_session.token_endpoint_auth_method
-          when "client_secret_basic"
-            headers["authorization"] = Authentication::Basic.new(oauth_session.client_id, oauth_session.client_secret).authenticate
           when "client_secret_post"
             form_post["client_id"] = oauth_session.client_id
             form_post["client_secret"] = oauth_session.client_secret
+          when "client_secret_basic"
+            headers["authorization"] = Authentication::Basic.new(oauth_session.client_id, oauth_session.client_secret).authenticate
           end

           case grant_type
@@ -151,7 +155,7 @@ module HTTPX
           with(oauth_session: oauth_session.merge(access_token: access_token, refresh_token: refresh_token))
         end

-        def build_request(*, _)
+        def build_request(*)
           request = super
           return request if request.headers.key?("authorization")

@@ -24,12 +24,49 @@ module HTTPX
        else
          1
        end
-        klass.plugin(:retries, max_retries: max_retries, retry_change_requests: true)
+        klass.plugin(:retries, max_retries: max_retries)
      end
      def self.extra_options(options)
        options.merge(persistent: true)
      end
module InstanceMethods
private
def repeatable_request?(request, _)
super || begin
response = request.response
return false unless response && response.is_a?(ErrorResponse)
error = response.error
Retries::RECONNECTABLE_ERRORS.any? { |klass| error.is_a?(klass) }
end
end
def retryable_error?(ex)
super &&
# under the persistent plugin rules, requests are only retried for connection related errors,
# which do not include request timeout related errors. This only gets overriden if the end user
# manually changed +:max_retries+ to something else, which means it is aware of the
# consequences.
(!ex.is_a?(RequestTimeoutError) || @options.max_retries != 1)
end
def get_current_selector
super(&nil) || begin
return unless block_given?
default = yield
set_current_selector(default)
default
end
end
end
    end
    register_plugin :persistent, Persistent
  end


@@ -1,7 +1,7 @@
 # frozen_string_literal: true
 module HTTPX
-  class HTTPProxyError < ConnectionError; end
+  class ProxyError < ConnectionError; end
  module Plugins
    #
@@ -15,7 +15,8 @@ module HTTPX
    # https://gitlab.com/os85/httpx/wikis/Proxy
    #
    module Proxy
-      Error = HTTPProxyError
+      class ProxyConnectionError < ProxyError; end
      PROXY_ERRORS = [TimeoutError, IOError, SystemCallError, Error].freeze
      class << self
@@ -28,34 +29,62 @@ module HTTPX
        def extra_options(options)
          options.merge(supported_proxy_protocols: [])
        end
def subplugins
{
retries: ProxyRetries,
}
end
      end
      class Parameters
-        attr_reader :uri, :username, :password, :scheme
+        attr_reader :uri, :username, :password, :scheme, :no_proxy
-        def initialize(uri:, scheme: nil, username: nil, password: nil, **extra)
-          @uri = uri.is_a?(URI::Generic) ? uri : URI(uri)
-          @username = username || @uri.user
-          @password = password || @uri.password
-          return unless @username && @password
-          scheme ||= case @uri.scheme
-                     when "socks5"
-                       @uri.scheme
-                     when "http", "https"
-                       "basic"
-                     else
-                       return
-                     end
-          @scheme = scheme
-          auth_scheme = scheme.to_s.capitalize
-          require_relative "auth/#{scheme}" unless defined?(Authentication) && Authentication.const_defined?(auth_scheme, false)
-          @authenticator = Authentication.const_get(auth_scheme).new(@username, @password, **extra)
+        def initialize(uri: nil, scheme: nil, username: nil, password: nil, no_proxy: nil, **extra)
+          @no_proxy = Array(no_proxy) if no_proxy
+          @uris = Array(uri)
+          uri = @uris.first
+          @username = username
+          @password = password
+          @ns = 0
+          if uri
+            @uri = uri.is_a?(URI::Generic) ? uri : URI(uri)
+            @username ||= @uri.user
+            @password ||= @uri.password
+          end
+          @scheme = scheme
+          return unless @uri && @username && @password
+          @authenticator = nil
+          @scheme ||= infer_default_auth_scheme(@uri)
+          return unless @scheme
+          @authenticator = load_authenticator(@scheme, @username, @password, **extra)
+        end
+        def shift
+          # TODO: this operation must be synchronized
+          @ns += 1
+          @uri = @uris[@ns]
+          return unless @uri
+          @uri = URI(@uri) unless @uri.is_a?(URI::Generic)
+          scheme = infer_default_auth_scheme(@uri)
+          return unless scheme != @scheme
+          @scheme = scheme
+          @username = username || @uri.user
+          @password = password || @uri.password
+          @authenticator = load_authenticator(scheme, @username, @password)
        end
        def can_authenticate?(*args)
@@ -87,11 +116,34 @@ module HTTPX
          super
        end
        end
private
def infer_default_auth_scheme(uri)
case uri.scheme
when "socks5"
uri.scheme
when "http", "https"
"basic"
end
end
def load_authenticator(scheme, username, password, **extra)
auth_scheme = scheme.to_s.capitalize
require_relative "auth/#{scheme}" unless defined?(Authentication) && Authentication.const_defined?(auth_scheme, false)
Authentication.const_get(auth_scheme).new(username, password, **extra)
end
      end
# adds support for the following options:
#
# :proxy :: proxy options defining *:uri*, *:username*, *:password* or
# *:scheme* (i.e. <tt>{ uri: "http://proxy" }</tt>)
      module OptionsMethods
        def option_proxy(value)
-          value.is_a?(Parameters) ? value : Hash[value]
+          value.is_a?(Parameters) ? value : Parameters.new(**Hash[value])
        end
        def option_supported_proxy_protocols(value)
@@ -102,91 +154,79 @@ module HTTPX
        end
      module InstanceMethods
-        private
-        def find_connection(request, connections, options)
+        def find_connection(request_uri, selector, options)
          return super unless options.respond_to?(:proxy)
-          uri = URI(request.uri)
-          proxy_opts = if (next_proxy = uri.find_proxy)
-                         { uri: next_proxy }
-                       else
-                         proxy = options.proxy
-                         return super unless proxy
-                         return super(request, connections, options.merge(proxy: nil)) unless proxy.key?(:uri)
-                         @_proxy_uris ||= Array(proxy[:uri])
-                         next_proxy = @_proxy_uris.first
-                         raise Error, "Failed to connect to proxy" unless next_proxy
-                         next_proxy = URI(next_proxy)
-                         raise Error,
-                               "#{next_proxy.scheme}: unsupported proxy protocol" unless options.supported_proxy_protocols.include?(next_proxy.scheme)
-                         if proxy.key?(:no_proxy)
-                           no_proxy = proxy[:no_proxy]
-                           no_proxy = no_proxy.join(",") if no_proxy.is_a?(Array)
-                           return super(request, connections, options.merge(proxy: nil)) unless URI::Generic.use_proxy?(uri.host, next_proxy.host,
-                                                                                                                        next_proxy.port, no_proxy)
-                         end
-                         proxy.merge(uri: next_proxy)
-                       end
-          proxy = Parameters.new(**proxy_opts)
-          proxy_options = options.merge(proxy: proxy)
-          connection = pool.find_connection(uri, proxy_options) || init_connection(uri, proxy_options)
-          unless connections.nil? || connections.include?(connection)
-            connections << connection
-            set_connection_callbacks(connection, connections, options)
-          end
-          connection
-        end
+          if (next_proxy = request_uri.find_proxy)
+            return super(request_uri, selector, options.merge(proxy: Parameters.new(uri: next_proxy)))
+          end
+          proxy = options.proxy
+          return super unless proxy
+          next_proxy = proxy.uri
+          raise ProxyError, "Failed to connect to proxy" unless next_proxy
+          raise ProxyError,
+                "#{next_proxy.scheme}: unsupported proxy protocol" unless options.supported_proxy_protocols.include?(next_proxy.scheme)
+          if (no_proxy = proxy.no_proxy)
+            no_proxy = no_proxy.join(",") if no_proxy.is_a?(Array)
+            # TODO: setting proxy to nil leaks the connection object in the pool
+            return super(request_uri, selector, options.merge(proxy: nil)) unless URI::Generic.use_proxy?(request_uri.host, next_proxy.host,
+                                                                                                          next_proxy.port, no_proxy)
+          end
+          super(request_uri, selector, options.merge(proxy: proxy))
+        end
+        private
-        def fetch_response(request, connections, options)
-          response = super
-          if response.is_a?(ErrorResponse) && proxy_error?(request, response)
-            @_proxy_uris.shift
-            # return last error response if no more proxies to try
-            return response if @_proxy_uris.empty?
-            log { "failed connecting to proxy, trying next..." }
-            request.transition(:idle)
-            send_request(request, connections, options)
-            return
-          end
-          response
-        end
+        def fetch_response(request, selector, options)
+          response = request.response # in case it goes wrong later
+          begin
+            response = super
+            if response.is_a?(ErrorResponse) && proxy_error?(request, response, options)
+              options.proxy.shift
+              # return last error response if no more proxies to try
+              return response if options.proxy.uri.nil?
+              log { "failed connecting to proxy, trying next..." }
+              request.transition(:idle)
+              send_request(request, selector, options)
+              return
+            end
+            response
+          rescue ProxyError
+            # may happen if coupled with retries, and there are no more proxies to try, in which case
+            # it'll end up here
            response
          end
        end
-        def proxy_error?(_request, response)
+        def proxy_error?(_request, response, options)
+          return false unless options.proxy
          error = response.error
          case error
          when NativeResolveError
-            return false unless @_proxy_uris && !@_proxy_uris.empty?
-            proxy_uri = URI(@_proxy_uris.first)
-            origin = error.connection.origin
+            proxy_uri = URI(options.proxy.uri)
+            peer = error.connection.peer
            # failed resolving proxy domain
-            origin.host == proxy_uri.host && origin.port == proxy_uri.port
+            peer.host == proxy_uri.host && peer.port == proxy_uri.port
          when ResolveError
-            return false unless @_proxy_uris && !@_proxy_uris.empty?
-            proxy_uri = URI(@_proxy_uris.first)
+            proxy_uri = URI(options.proxy.uri)
            error.message.end_with?(proxy_uri.to_s)
-          when *PROXY_ERRORS
+          when ProxyConnectionError
            # timeout errors connecting to proxy
            true
          else
@@ -204,25 +244,11 @@ module HTTPX
        # redefining the connection origin as the proxy's URI,
        # as this will be used as the tcp peer ip.
-        proxy_uri = URI(@options.proxy.uri)
-        @origin.host = proxy_uri.host
-        @origin.port = proxy_uri.port
+        @proxy_uri = URI(@options.proxy.uri)
      end
-      def coalescable?(connection)
-        return super unless @options.proxy
-        if @io.protocol == "h2" &&
-           @origin.scheme == "https" &&
-           connection.origin.scheme == "https" &&
-           @io.can_verify_peer?
-          # in proxied connections, .origin is the proxy ; Given names
-          # are stored in .origins, this is what is used.
-          origin = URI(connection.origins.first)
-          @io.verify_hostname(origin.host)
-        else
-          @origin == connection.origin
-        end
-      end
+      def peer
+        @proxy_uri || super
+      end
      def connecting?
@@ -240,6 +266,14 @@ module HTTPX
        when :connecting
          consume
        end
rescue *PROXY_ERRORS => e
if connecting?
error = ProxyConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
raise error
end
raise e
      end
      def reset
@@ -248,7 +282,7 @@ module HTTPX
        @state = :open
        super
-        emit(:close)
+        # emit(:close)
      end
      private
@@ -281,13 +315,29 @@ module HTTPX
        end
        super
      end
def purge_after_closed
super
@io = @io.proxy_io if @io.respond_to?(:proxy_io)
end
end
module ProxyRetries
module InstanceMethods
def retryable_error?(ex)
super || ex.is_a?(ProxyConnectionError)
end
end
    end
  end
  register_plugin :proxy, Proxy
end
class ProxySSL < SSL
attr_reader :proxy_io
  def initialize(tcp, request_uri, options)
    @proxy_io = tcp
    @io = tcp.to_io
    super(request_uri, tcp.addresses, options)
    @hostname = request_uri.host


@@ -23,24 +23,19 @@ module HTTPX
        with(proxy: opts.merge(scheme: "ntlm"))
      end
-      def fetch_response(request, connections, options)
+      def fetch_response(request, selector, options)
        response = super
        if response &&
           response.is_a?(Response) &&
           response.status == 407 &&
           !request.headers.key?("proxy-authorization") &&
-           response.headers.key?("proxy-authenticate")
-          connection = find_connection(request, connections, options)
-          if connection.options.proxy.can_authenticate?(response.headers["proxy-authenticate"])
-            request.transition(:idle)
-            request.headers["proxy-authorization"] =
-              connection.options.proxy.authenticate(request, response.headers["proxy-authenticate"])
-            send_request(request, connections)
-            return
-          end
+           response.headers.key?("proxy-authenticate") && options.proxy.can_authenticate?(response.headers["proxy-authenticate"])
+          request.transition(:idle)
+          request.headers["proxy-authorization"] =
+            options.proxy.authenticate(request, response.headers["proxy-authenticate"])
+          send_request(request, selector, options)
+          return
        end
        response
@@ -65,11 +60,18 @@ module HTTPX
        return unless @io.connected?
        @parser || begin
-          @parser = self.class.parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
+          @parser = parser_type(@io.protocol).new(@write_buffer, @options.merge(max_concurrent_requests: 1))
          parser = @parser
          parser.extend(ProxyParser)
          parser.on(:response, &method(:__http_on_connect))
-          parser.on(:close) { transition(:closing) }
+          parser.on(:close) do |force|
next unless @parser
if force
reset
emit(:terminate)
end
end
          parser.on(:reset) do
            if parser.empty?
              reset
@@ -90,8 +92,9 @@ module HTTPX
        case @state
        when :connecting
-          @parser.close
+          parser = @parser
          @parser = nil
parser.close
        when :idle
          @parser.callbacks.clear
          set_parser_callbacks(@parser)
@@ -135,6 +138,8 @@ module HTTPX
        else
          pending = @pending + @parser.pending
          while (req = pending.shift)
response.finish!
req.response = response
            req.emit(:response, response)
          end
          reset
@@ -163,8 +168,8 @@ module HTTPX
      end
      class ConnectRequest < Request
-        def initialize(uri, _options)
-          super("CONNECT", uri, {})
+        def initialize(uri, options)
+          super("CONNECT", uri, options)
          @headers.delete("accept")
        end


@@ -4,7 +4,7 @@
 require "resolv"
 require "ipaddr"
 module HTTPX
-  class Socks4Error < HTTPProxyError; end
+  class Socks4Error < ProxyError; end
  module Plugins
    module Proxy
@@ -89,7 +89,7 @@ module HTTPX
        def initialize(buffer, options)
          @buffer = buffer
-          @options = Options.new(options)
+          @options = options
        end
        def close; end


@@ -1,7 +1,7 @@
 # frozen_string_literal: true
 module HTTPX
-  class Socks5Error < HTTPProxyError; end
+  class Socks5Error < ProxyError; end
  module Plugins
    module Proxy
@@ -141,7 +141,7 @@ module HTTPX
        def initialize(buffer, options)
          @buffer = buffer
-          @options = Options.new(options)
+          @options = options
        end
        def close; end


@@ -0,0 +1,35 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for using the experimental QUERY HTTP method
#
# https://gitlab.com/os85/httpx/wikis/Query
module Query
def self.subplugins
{
retries: QueryRetries,
}
end
module InstanceMethods
def query(*uri, **options)
request("QUERY", uri, **options)
end
end
module QueryRetries
module InstanceMethods
private
def repeatable_request?(request, options)
super || request.verb == "QUERY"
end
end
end
end
register_plugin :query, Query
end
end


@@ -39,6 +39,8 @@ module HTTPX
      # the redirected request.
      #
      def retry_after_rate_limit(_, response)
+        return unless response.is_a?(Response)
        retry_after = response.headers["retry-after"]
        return unless retry_after
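Editor's note: the rate-limiter change above reads the `retry-after` header off the response. As a hedged, standalone sketch (the helper name here is illustrative, not httpx API), RFC 9110 allows that header in either delta-seconds or HTTP-date form:

```ruby
require "time"

# Illustrative helper (not part of httpx): interprets a "retry-after" value
# per RFC 9110, as either delta-seconds or an HTTP date to wait until.
def parse_retry_after(value, now: Time.now)
  # delta-seconds form, e.g. "120"
  return Integer(value) if value.match?(/\A\d+\z/)

  # HTTP-date form, e.g. "Fri, 31 Dec 1999 23:59:59 GMT"
  (Time.httpdate(value) - now).to_i
end

parse_retry_after("120") # => 120
```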


@@ -10,21 +10,18 @@ module HTTPX
    module ResponseCache
      CACHEABLE_VERBS = %w[GET HEAD].freeze
      CACHEABLE_STATUS_CODES = [200, 203, 206, 300, 301, 410].freeze
+      SUPPORTED_VARY_HEADERS = %w[accept accept-encoding accept-language cookie origin].sort.freeze
      private_constant :CACHEABLE_VERBS
      private_constant :CACHEABLE_STATUS_CODES
      class << self
        def load_dependencies(*)
          require_relative "response_cache/store"
+          require_relative "response_cache/file_store"
        end
-        def cacheable_request?(request)
-          CACHEABLE_VERBS.include?(request.verb) &&
-            (
-              !request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
-            )
-        end
+        # whether the +response+ can be stored in the response cache.
+        # (i.e. has a cacheable body, does not contain directives prohibiting storage, etc...)
        def cacheable_response?(response)
          response.is_a?(Response) &&
            (
@@ -39,79 +36,230 @@ module HTTPX
          # directive prohibits caching. However, a cache that does not support
          # the Range and Content-Range headers MUST NOT cache 206 (Partial
          # Content) responses.
-          response.status != 206 && (
-            response.headers.key?("etag") || response.headers.key?("last-modified-at") || response.fresh?
-          )
+          response.status != 206
        end
-        def cached_response?(response)
+        # whether the +response+
+        def not_modified?(response)
          response.is_a?(Response) && response.status == 304
        end
        def extra_options(options)
-          options.merge(response_cache_store: Store.new)
+          options.merge(
+            supported_vary_headers: SUPPORTED_VARY_HEADERS,
+            response_cache_store: :store,
+          )
        end
      end
# adds support for the following options:
#
# :supported_vary_headers :: array of header values that will be considered for a "vary" header based cache validation
# (defaults to {SUPPORTED_VARY_HEADERS}).
# :response_cache_store :: object where cached responses are fetch from or stored in; defaults to <tt>:store</tt> (in-memory
# cache), can be set to <tt>:file_store</tt> (file system cache store) as well, or any object which
# abides by the Cache Store Interface
#
# The Cache Store Interface requires implementation of the following methods:
#
# * +#get(request) -> response or nil+
# * +#set(request, response) -> void+
# * +#clear() -> void+)
#
      module OptionsMethods
        def option_response_cache_store(value)
-          raise TypeError, "must be an instance of #{Store}" unless value.is_a?(Store)
-          value
+          case value
+          when :store
+            Store.new
+          when :file_store
+            FileStore.new
+          else
+            value
+          end
        end
+        def option_supported_vary_headers(value)
+          Array(value).sort
+        end
      end
      module InstanceMethods
        # wipes out all cached responses from the cache store.
        def clear_response_cache
          @options.response_cache_store.clear
        end
        def build_request(*)
          request = super
-          return request unless ResponseCache.cacheable_request?(request) && @options.response_cache_store.cached?(request)
-          @options.response_cache_store.prepare(request)
+          return request unless cacheable_request?(request)
+          prepare_cache(request)
          request
        end
private
def send_request(request, *)
return request if request.response
super
end
        def fetch_response(request, *)
          response = super
          return unless response
-          if ResponseCache.cached_response?(response)
+          if ResponseCache.not_modified?(response)
            log { "returning cached response for #{request.uri}" }
-            cached_response = @options.response_cache_store.lookup(request)
-            response.copy_from_cached(cached_response)
-          else
-            @options.response_cache_store.cache(request, response)
+            response.copy_from_cached!
+          elsif request.cacheable_verb? && ResponseCache.cacheable_response?(response)
+            request.options.response_cache_store.set(request, response) unless response.cached?
          end
          response
        end
# will either assign a still-fresh cached response to +request+, or set up its HTTP
# cache invalidation headers in case it's not fresh anymore.
def prepare_cache(request)
cached_response = request.options.response_cache_store.get(request)
return unless cached_response && match_by_vary?(request, cached_response)
cached_response.body.rewind
if cached_response.fresh?
cached_response = cached_response.dup
cached_response.mark_as_cached!
request.response = cached_response
request.emit(:response, cached_response)
return
end
request.cached_response = cached_response
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
def cacheable_request?(request)
request.cacheable_verb? &&
(
!request.headers.key?("cache-control") || !request.headers.get("cache-control").include?("no-store")
)
end
# whether the +response+ complies with the directives set by the +request+ "vary" header
# (true when none is available).
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.original_request
if vary == %w[*]
request.options.supported_vary_headers.each do |field|
return false unless request.headers[field] == original_request.headers[field]
end
return true
end
vary.all? do |field|
!original_request.headers.key?(field) || request.headers[field] == original_request.headers[field]
end
end
      end
      module RequestMethods
# points to a previously cached Response corresponding to this request.
attr_accessor :cached_response
def initialize(*)
super
@cached_response = nil
end
def merge_headers(*)
super
@response_cache_key = nil
end
# returns whether this request is cacheable as per HTTP caching rules.
def cacheable_verb?
CACHEABLE_VERBS.include?(@verb)
end
# returns a unique cache key as a String identifying this request
        def response_cache_key
-          @response_cache_key ||= Digest::SHA1.hexdigest("httpx-response-cache-#{@verb}-#{@uri}")
+          @response_cache_key ||= begin
keys = [@verb, @uri]
@options.supported_vary_headers.each do |field|
value = @headers[field]
keys << value if value
end
Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end
          end
        end
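Editor's note: the new vary-aware `response_cache_key` above folds the verb, the URI, and the values of the supported "vary" request headers into the digest. A hedged, standalone sketch of the same idea in plain Ruby (the free-standing method is illustrative, not the httpx API):

```ruby
require "digest"

# Illustrative re-implementation of the vary-aware cache key: requests that
# differ in a supported "vary" header value hash to different cache entries.
def response_cache_key(verb, uri, headers, supported_vary_headers)
  keys = [verb, uri]
  supported_vary_headers.each do |field|
    value = headers[field]
    keys << value if value
  end
  Digest::SHA1.hexdigest("httpx-response-cache-#{keys.join("-")}")
end

html = response_cache_key("GET", "https://example.com", { "accept" => "text/html" }, %w[accept])
json = response_cache_key("GET", "https://example.com", { "accept" => "application/json" }, %w[accept])
html == json # => false
```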
      module ResponseMethods
-        def copy_from_cached(other)
-          @body = other.body.dup
+        attr_writer :original_request
def initialize(*)
super
@cached = false
end
# a copy of the request this response was originally cached from
def original_request
@original_request || @request
end
# whether this Response was duplicated from a previously {RequestMethods#cached_response}.
def cached?
@cached
end
# sets this Response as being duplicated from a previously cached response.
def mark_as_cached!
@cached = true
end
# eager-copies the response headers and body from {RequestMethods#cached_response}.
def copy_from_cached!
cached_response = @request.cached_response
return unless cached_response
# 304 responses do not have content-type, which are needed for decoding.
@headers = @headers.class.new(cached_response.headers.merge(@headers))
@body = cached_response.body.dup
          @body.rewind
        end
        # A response is fresh if its age has not yet exceeded its freshness lifetime.
+        # other (#cache_control} directives may influence the outcome, as per the rules
+        # from the {rfc}[https://www.rfc-editor.org/rfc/rfc7234]
        def fresh?
          if cache_control
            return false if cache_control.include?("no-cache")
+            return true if cache_control.include?("immutable")
            # check age: max-age
            max_age = cache_control.find { |directive| directive.start_with?("s-maxage") }
@@ -129,15 +277,16 @@ module HTTPX
            begin
              expires = Time.httpdate(@headers["expires"])
            rescue ArgumentError
-              return true
+              return false
            end
            return (expires - Time.now).to_i.positive?
          end
-          true
+          false
        end
# returns the "cache-control" directives as an Array of String(s).
        def cache_control
          return @cache_control if defined?(@cache_control)
@@ -148,24 +297,28 @@ module HTTPX
          end
        end
# returns the "vary" header value as an Array of (String) headers.
        def vary
          return @vary if defined?(@vary)
          @vary = begin
            return unless @headers.key?("vary")
-            @headers["vary"].split(/ *, */)
+            @headers["vary"].split(/ *, */).map(&:downcase)
          end
        end
        private
# returns the value of the "age" header as an Integer (time since epoch).
        # if no "age" header exists, it returns the number of seconds since {#date}.
        def age
          return @headers["age"].to_i if @headers.key?("age")
          (Time.now - date).to_i
        end
# returns the value of the "date" header as a Time object
        def date
          @date ||= Time.httpdate(@headers["date"])
        rescue NoMethodError, ArgumentError


@@ -0,0 +1,140 @@
# frozen_string_literal: true
require "pathname"
module HTTPX::Plugins
module ResponseCache
# Implementation of a file system based cache store.
#
# It stores cached responses in a file under a directory pointed by the +dir+
# variable (defaults to the default temp directory from the OS), in a custom
# format (similar but different from HTTP/1.1 request/response framing).
class FileStore
CRLF = HTTPX::Connection::HTTP1::CRLF
attr_reader :dir
def initialize(dir = Dir.tmpdir)
@dir = Pathname.new(dir).join("httpx-response-cache")
FileUtils.mkdir_p(@dir)
end
def clear
FileUtils.rm_rf(@dir)
end
def get(request)
path = file_path(request)
return unless File.exist?(path)
File.open(path, mode: File::RDONLY | File::BINARY) do |f|
f.flock(File::Constants::LOCK_SH)
read_from_file(request, f)
end
end
def set(request, response)
path = file_path(request)
file_exists = File.exist?(path)
mode = file_exists ? File::RDWR : File::CREAT | File::Constants::WRONLY
File.open(path, mode: mode | File::BINARY) do |f|
f.flock(File::Constants::LOCK_EX)
if file_exists
cached_response = read_from_file(request, f)
if cached_response
next if cached_response == request.cached_response
cached_response.close
f.truncate(0)
f.rewind
end
end
# cache the request headers
f << request.verb << CRLF
f << request.uri << CRLF
request.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
# cache the response
f << response.status << CRLF
f << response.version << CRLF
response.headers.each do |field, value|
f << field << ":" << value << CRLF
end
f << CRLF
response.body.rewind
IO.copy_stream(response.body, f)
end
end
private
def file_path(request)
@dir.join(request.response_cache_key)
end
def read_from_file(request, f)
# if it's an empty file
return if f.eof?
# read request data
verb = f.readline.delete_suffix!(CRLF)
uri = f.readline.delete_suffix!(CRLF)
request_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
request_headers[field] = value
end
status = f.readline.delete_suffix!(CRLF)
version = f.readline.delete_suffix!(CRLF)
response_headers = {}
while (line = f.readline) != CRLF
line.delete_suffix!(CRLF)
sep_index = line.index(":")
field = line.byteslice(0..(sep_index - 1))
value = line.byteslice((sep_index + 1)..-1)
response_headers[field] = value
end
original_request = request.options.request_class.new(verb, uri, request.options)
original_request.merge_headers(request_headers)
response = request.options.response_class.new(request, status, version, response_headers)
response.original_request = original_request
response.finish!
IO.copy_stream(f, response.body)
response
end
end
end
end
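Editor's note: the new `FileStore` above frames each cached entry as colon-separated `field:value` lines terminated by a bare CRLF, similar to (but not the same as) HTTP/1.1 framing. A hedged, standalone sketch of the header-reading loop against a `StringIO` (the helper name is illustrative, not httpx API):

```ruby
require "stringio"

CRLF = "\r\n"

# Illustrative reader for the CRLF-framed header block used by the file store:
# consumes "field:value" lines until a bare CRLF, leaving the body unread.
def read_cached_headers(io)
  headers = {}
  while (line = io.readline) != CRLF
    line.delete_suffix!(CRLF)
    sep_index = line.index(":")
    headers[line.byteslice(0..(sep_index - 1))] = line.byteslice((sep_index + 1)..-1)
  end
  headers
end

io = StringIO.new("content-type:text/plain#{CRLF}etag:abc#{CRLF}#{CRLF}hello")
headers = read_cached_headers(io)
headers["etag"] # => "abc"
io.read         # => "hello"
```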


@@ -2,6 +2,7 @@
module HTTPX::Plugins
  module ResponseCache
+    # Implementation of a thread-safe in-memory cache store.
    class Store
      def initialize
        @store = {}
@@ -12,80 +13,19 @@ module HTTPX::Plugins
        @store_mutex.synchronize { @store.clear }
      end
-      def lookup(request)
+      def get(request)
responses = _get(request)
return unless responses
responses.find(&method(:match_by_vary?).curry(2)[request])
end
def cached?(request)
lookup(request)
end
def cache(request, response)
return unless ResponseCache.cacheable_request?(request) && ResponseCache.cacheable_response?(response)
_set(request, response)
end
def prepare(request)
cached_response = lookup(request)
return unless cached_response
return unless match_by_vary?(request, cached_response)
if !request.headers.key?("if-modified-since") && (last_modified = cached_response.headers["last-modified"])
request.headers.add("if-modified-since", last_modified)
end
if !request.headers.key?("if-none-match") && (etag = cached_response.headers["etag"]) # rubocop:disable Style/GuardClause
request.headers.add("if-none-match", etag)
end
end
private
def match_by_vary?(request, response)
vary = response.vary
return true unless vary
original_request = response.instance_variable_get(:@request)
return request.headers.same_headers?(original_request.headers) if vary == %w[*]
vary.all? do |cache_field|
cache_field.downcase!
!original_request.headers.key?(cache_field) || request.headers[cache_field] == original_request.headers[cache_field]
end
end
def _get(request)
        @store_mutex.synchronize do
-          responses = @store[request.response_cache_key]
+          @store[request.response_cache_key]
return unless responses
responses.select! do |res|
!res.body.closed? && res.fresh?
end
responses
        end
      end
-      def _set(request, response)
+      def set(request, response)
        @store_mutex.synchronize do
-          responses = (@store[request.response_cache_key] ||= [])
-          responses.reject! do |res|
-            res.body.closed? || !res.fresh? || match_by_vary?(request, res)
-          end
-          responses << response
+          cached_response = @store[request.response_cache_key]
+          cached_response.close if cached_response
+          @store[request.response_cache_key] = response
        end
      end
    end


@@ -3,7 +3,12 @@
module HTTPX
  module Plugins
    #
-    # This plugin adds support for retrying requests when certain errors happen.
+    # This plugin adds support for retrying requests when errors happen.
#
# It has a default max number of retries (see *MAX_RETRIES* and the *max_retries* option),
# after which it will return the last response, error or not. It will **not** raise an exception.
#
    # It does not retry requests which are not considered idempotent (see *retry_change_requests* to override).
    #
    # https://gitlab.com/os85/httpx/wikis/Retries
    #
@@ -12,7 +17,9 @@ module HTTPX
      # TODO: pass max_retries in a configure/load block
      IDEMPOTENT_METHODS = %w[GET OPTIONS HEAD PUT DELETE].freeze
-      RETRYABLE_ERRORS = [
# subset of retryable errors which are safe to retry when reconnecting
RECONNECTABLE_ERRORS = [
IOError, IOError,
EOFError, EOFError,
Errno::ECONNRESET, Errno::ECONNRESET,
@ -20,12 +27,15 @@ module HTTPX
Errno::EPIPE, Errno::EPIPE,
Errno::EINVAL, Errno::EINVAL,
Errno::ETIMEDOUT, Errno::ETIMEDOUT,
Parser::Error,
TLSError,
TimeoutError,
ConnectionError, ConnectionError,
Connection::HTTP2::GoawayError, TLSError,
Connection::HTTP2::Error,
].freeze ].freeze
RETRYABLE_ERRORS = (RECONNECTABLE_ERRORS + [
Parser::Error,
TimeoutError,
]).freeze
DEFAULT_JITTER = ->(interval) { interval * ((rand + 1) * 0.5) } DEFAULT_JITTER = ->(interval) { interval * ((rand + 1) * 0.5) }
if ENV.key?("HTTPX_NO_JITTER") if ENV.key?("HTTPX_NO_JITTER")
@ -38,6 +48,14 @@ module HTTPX
end
end
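The `DEFAULT_JITTER` lambda shown above spreads retries across the second half of the nominal interval: `rand + 1` lies in `[1.0, 2.0)`, so multiplying by `0.5` yields a factor in `[0.5, 1.0)`. A quick standalone check:

```ruby
# DEFAULT_JITTER as defined above: maps an interval to a random value
# in [interval / 2, interval), so concurrent retries don't fire in lockstep.
DEFAULT_JITTER = ->(interval) { interval * ((rand + 1) * 0.5) }

jittered = DEFAULT_JITTER.call(10)
```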
# adds support for the following options:
#
# :max_retries :: max number of times a request will be retried (defaults to <tt>3</tt>).
# :retry_change_requests :: whether idempotent requests are retried (defaults to <tt>false</tt>).
# :retry_after :: seconds after which a request is retried; can also be a callable object (i.e. <tt>->(req, res) { ... } </tt>)
# :retry_jitter :: number of seconds applied to *:retry_after* (must be a callable, i.e. <tt>->(retry_after) { ... } </tt>).
# :retry_on :: callable which alternatively defines a different rule for when a response is to be retried
# (i.e. <tt>->(res) { ... }</tt>).
module OptionsMethods
def option_retry_after(value)
  # return early if callable
@@ -58,7 +76,7 @@ module HTTPX
def option_max_retries(value)
  num = Integer(value)
-  raise TypeError, ":max_retries must be positive" unless num.positive?
+  raise TypeError, ":max_retries must be positive" unless num >= 0
  num
end
@@ -75,29 +93,30 @@ module HTTPX
end

module InstanceMethods
+# returns a `:retries` plugin enabled session with +n+ maximum retries per request setting.
def max_retries(n)
-  with(max_retries: n.to_i)
+  with(max_retries: n)
end

private

-def fetch_response(request, connections, options)
+def fetch_response(request, selector, options)
  response = super
  if response &&
     request.retries.positive? &&
-     __repeatable_request?(request, options) &&
+     repeatable_request?(request, options) &&
     (
       (
-        response.is_a?(ErrorResponse) && __retryable_error?(response.error)
+        response.is_a?(ErrorResponse) && retryable_error?(response.error)
       ) ||
       (
         options.retry_on && options.retry_on.call(response)
       )
     )
-    __try_partial_retry(request, response)
+    try_partial_retry(request, response)
    log { "failed to get response, #{request.retries} tries to go..." }
-    request.retries -= 1
+    request.retries -= 1 unless request.ping? # do not exhaust retries on connection liveness probes
    request.transition(:idle)

    retry_after = options.retry_after
@@ -111,12 +130,18 @@ module HTTPX
      retry_start = Utils.now
      log { "retrying after #{retry_after} secs..." }
-      pool.after(retry_after) do
+      selector.after(retry_after) do
-        log { "retrying (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
-        send_request(request, connections, options)
+        if (response = request.response)
+          response.finish!
+          # request has terminated abruptly meanwhile
+          request.emit(:response, response)
+        else
+          log { "retrying (elapsed time: #{Utils.elapsed_time(retry_start)})!!" }
+          send_request(request, selector, options)
+        end
      end
    else
-      send_request(request, connections, options)
+      send_request(request, selector, options)
    end
    return
@@ -124,24 +149,26 @@ module HTTPX
  response
end
-def __repeatable_request?(request, options)
+# returns whether +request+ can be retried.
+def repeatable_request?(request, options)
  IDEMPOTENT_METHODS.include?(request.verb) || options.retry_change_requests
end

-def __retryable_error?(ex)
+# returns whether the +ex+ exception happened for a retriable request.
+def retryable_error?(ex)
  RETRYABLE_ERRORS.any? { |klass| ex.is_a?(klass) }
end
-def proxy_error?(request, response)
+def proxy_error?(request, response, _)
  super && !request.retries.positive?
end

#
-# Atttempt to set the request to perform a partial range request.
+# Attempt to set the request to perform a partial range request.
# This happens if the peer server accepts byte-range requests, and
# the last response contains some body payload.
#
-def __try_partial_retry(request, response)
+def try_partial_retry(request, response)
  response = response.response if response.is_a?(ErrorResponse)
  return unless response
@@ -149,7 +176,7 @@ module HTTPX
  unless response.headers.key?("accept-ranges") &&
         response.headers["accept-ranges"] == "bytes" && # there's nothing else supported though...
         (original_body = response.body)
-    response.close if response.respond_to?(:close)
+    response.body.close
    return
  end
@@ -162,10 +189,13 @@ module HTTPX
end

module RequestMethods
+# number of retries left.
attr_accessor :retries

+# a response partially received before.
attr_writer :partial_response

+# initializes the request instance, sets the number of retries for the request.
def initialize(*args)
  super
  @retries = @options.max_retries

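The partial-retry idea above amounts to: when the server advertised `Accept-Ranges: bytes` and part of the payload was already received, the retry can ask only for the remaining bytes via a `Range` header. A hypothetical sketch of just the header construction (the helper name is illustrative, not httpx's API):

```ruby
# Build a Range header asking for everything from the given byte offset on,
# per the "bytes=<first-byte-pos>-" form of RFC 9110 range specifiers.
def range_header_for_retry(bytes_already_received)
  { "range" => "bytes=#{bytes_already_received}-" }
end
```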

@@ -87,6 +87,9 @@ module HTTPX
end
end

+# adds support for the following options:
+#
+# :allowed_schemes :: list of URI schemes allowed (defaults to <tt>["https", "http"]</tt>)
module OptionsMethods
def option_allowed_schemes(value)
  Array(value)
@@ -100,7 +103,7 @@ module HTTPX
  error = ServerSideRequestForgeryError.new("#{request.uri} URI scheme not allowed")
  error.set_backtrace(caller)
-  response = ErrorResponse.new(request, error, request.options)
+  response = ErrorResponse.new(request, error)
  request.emit(:response, response)
  response
end


@@ -2,23 +2,46 @@
module HTTPX
class StreamResponse
+attr_reader :request

def initialize(request, session)
  @request = request
+  @options = @request.options
  @session = session
-  @response = nil
+  @response_enum = nil
+  @buffered_chunks = []
end

def each(&block)
  return enum_for(__method__) unless block

+  if (response_enum = @response_enum)
+    @response_enum = nil
+    # streaming already started, let's finish it
+    while (chunk = @buffered_chunks.shift)
+      block.call(chunk)
+    end
+    # consume enum til the end
+    begin
+      while (chunk = response_enum.next)
+        block.call(chunk)
+      end
+    rescue StopIteration
+      return
+    end
+  end

  @request.stream = self

  begin
    @on_chunk = block
+    response = @session.request(@request)
    response.raise_for_status
  ensure
-    response.close if @response
    @on_chunk = nil
  end
end
@@ -50,38 +73,50 @@
# :nocov:
def inspect
-  "#<StreamResponse:#{object_id}>"
+  "#<#{self.class}:#{object_id}>"
end
# :nocov:

def to_s
-  response.to_s
+  if @request.response
+    @request.response.to_s
+  else
+    @buffered_chunks.join
+  end
end

private

def response
-  return @response if @response

  @request.response || begin
-    @response = @session.request(@request)
+    response_enum = each
+    while (chunk = response_enum.next)
+      @buffered_chunks << chunk
+      break if @request.response
+    end
+    @response_enum = response_enum
+    @request.response
  end
end

-def respond_to_missing?(meth, *args)
-  response.respond_to?(meth, *args) || super
+def respond_to_missing?(meth, include_private)
+  if (response = @request.response)
+    response.respond_to_missing?(meth, include_private)
+  else
+    @options.response_class.method_defined?(meth) || (include_private && @options.response_class.private_method_defined?(meth))
+  end || super
end

-def method_missing(meth, *args, &block)
+def method_missing(meth, *args, **kwargs, &block)
  return super unless response.respond_to?(meth)

-  response.__send__(meth, *args, &block)
+  response.__send__(meth, *args, **kwargs, &block)
end
end

module Plugins
#
-# This plugin adds support for stream response (text/event-stream).
+# This plugin adds support for streaming a response (useful for i.e. "text/event-stream" payloads).
#
# https://gitlab.com/os85/httpx/wikis/Stream
#


@ -0,0 +1,315 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin adds support for bidirectional HTTP/2 streams.
#
# https://gitlab.com/os85/httpx/wikis/StreamBidi
#
# It is required that the request body allows chunks to be buffered (i.e., responds to +#<<(chunk)+).
module StreamBidi
# Extension of the Connection::HTTP2 class, which adds functionality to
# deal with a request that can't be drained and must be interleaved with
# the response streams.
#
# The stream keeps sending DATA frames while there's data; when there isn't,
# the stream is kept open; it must be explicitly closed by the end user.
#
class HTTP2Bidi < Connection::HTTP2
def initialize(*)
super
@lock = Thread::Mutex.new
end
%i[close empty? exhausted? send <<].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock.aware version of +#{lock_meth}+
def #{lock_meth}(*) # def close(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
private
%i[join_headers join_trailers join_body].each do |lock_meth|
class_eval(<<-METH, __FILE__, __LINE__ + 1)
# lock.aware version of +#{lock_meth}+
def #{lock_meth}(*) # def join_headers(*)
return super if @lock.owned?
# small race condition between
# checking for ownership and
# acquiring lock.
# TODO: fix this at the parser.
@lock.synchronize { super }
end
METH
end
def handle_stream(stream, request)
request.on(:body) do
next unless request.headers_sent
handle(request, stream)
emit(:flush_buffer)
end
super
end
# when there are no more chunks, it marks the buffer as full.
def send_chunk(request, stream, chunk, next_chunk)
super
return if next_chunk
request.transition(:waiting_for_chunk)
throw(:buffer_full)
end
# sets end-stream flag when the request is closed.
def end_stream?(request, next_chunk)
request.closed? && next_chunk.nil?
end
end
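The `class_eval` loops above generate lock-aware wrappers: a re-entrant call (lock already owned by the current thread) skips synchronization, while any other caller acquires the mutex first. The same pattern, sketched with `define_method` instead of string eval (names are illustrative):

```ruby
# A box whose mutating methods are wrapped to be "lock-aware":
# re-entrant calls bypass the mutex, outside calls synchronize on it.
class LockedBox
  def initialize
    @lock = Thread::Mutex.new
    @items = []
  end

  %i[push pop].each do |meth|
    define_method(meth) do |*args|
      if @lock.owned?
        @items.public_send(meth, *args)
      else
        @lock.synchronize { @items.public_send(meth, *args) }
      end
    end
  end
end
```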
# BidiBuffer is a Buffer which can receive data from threads other
# than the thread of the corresponding Connection/Session.
#
# It synchronizes access to a secondary internal +@oob_buffer+, which periodically
# is reconciled to the main internal +@buffer+.
class BidiBuffer < Buffer
def initialize(*)
super
@parent_thread = Thread.current
@oob_mutex = Thread::Mutex.new
@oob_buffer = "".b
end
# buffers the +chunk+ to be sent
def <<(chunk)
return super if Thread.current == @parent_thread
@oob_mutex.synchronize { @oob_buffer << chunk }
end
# reconciles the main and secondary buffer (which receives data from other threads).
def rebuffer
raise Error, "can only rebuffer while waiting on a response" unless Thread.current == @parent_thread
@oob_mutex.synchronize do
@buffer << @oob_buffer
@oob_buffer.clear
end
end
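The pattern BidiBuffer implements can be shown generically: writers on foreign threads append to a mutex-guarded side buffer, and the owner thread periodically folds ("rebuffers") that side buffer into the main buffer it alone reads. A standalone sketch (class and method names are illustrative):

```ruby
# Owner thread writes straight to @main; other threads go through the
# mutex-guarded @oob side buffer, reconciled by the owner via #rebuffer.
class SideBuffer
  def initialize
    @owner = Thread.current
    @main = "".b
    @oob = "".b
    @mutex = Thread::Mutex.new
  end

  def <<(chunk)
    if Thread.current == @owner
      @main << chunk
    else
      @mutex.synchronize { @oob << chunk }
    end
    self
  end

  # called by the owner thread before reading @main
  def rebuffer
    @mutex.synchronize do
      @main << @oob
      @oob.clear
    end
    @main
  end
end
```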
end
# Proxy to wake up the session main loop when one
# of the connections has buffered data to write. It abides by the HTTPX::_Selectable API,
# which allows it to be registered in the selector alongside actual HTTP-based
# HTTPX::Connection objects.
class Signal
def initialize
@closed = false
@pipe_read, @pipe_write = IO.pipe
end
def state
@closed ? :closed : :open
end
# noop
def log(**, &_); end
def to_io
@pipe_read.to_io
end
def wakeup
return if @closed
@pipe_write.write("\0")
end
def call
return if @closed
@pipe_read.readpartial(1)
end
def interests
return if @closed
:r
end
def timeout; end
def terminate
@pipe_write.close
@pipe_read.close
@closed = true
end
# noop (the owner connection will take care of it)
def handle_socket_timeout(interval); end
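The Signal class above relies on the classic "self-pipe" trick: writing a byte to a pipe makes its read end selectable, which wakes up a loop blocked in `IO.select`. A minimal standalone version of the same idea (names are illustrative):

```ruby
# Self-pipe wakeup: a waker thread writes one byte, unblocking IO.select
# on the read end; the selector then drains the byte with readpartial.
read_io, write_io = IO.pipe

waker = Thread.new do
  sleep 0.05
  write_io.write("\0") # wakeup: makes read_io readable
end

ready, = IO.select([read_io], nil, nil, 5) # blocks until woken or 5s timeout
token = ready ? ready.first.readpartial(1) : nil
waker.join
```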
end
class << self
def load_dependencies(klass)
klass.plugin(:stream)
end
def extra_options(options)
options.merge(fallback_protocol: "h2")
end
end
module InstanceMethods
def initialize(*)
@signal = Signal.new
super
end
def close(selector = Selector.new)
@signal.terminate
selector.deregister(@signal)
super(selector)
end
def select_connection(connection, selector)
super
selector.register(@signal)
connection.signal = @signal
end
def deselect_connection(connection, *)
super
connection.signal = nil
end
end
# Adds synchronization to request operations which may buffer payloads from different
# threads.
module RequestMethods
attr_accessor :headers_sent
def initialize(*)
super
@headers_sent = false
@closed = false
@mutex = Thread::Mutex.new
end
def closed?
@closed
end
def can_buffer?
super && @state != :waiting_for_chunk
end
# overrides state management transitions to introduce an intermediate
# +:waiting_for_chunk+ state, which the request transitions to once payload
# is buffered.
def transition(nextstate)
headers_sent = @headers_sent
case nextstate
when :waiting_for_chunk
return unless @state == :body
when :body
case @state
when :headers
headers_sent = true
when :waiting_for_chunk
# HACK: to allow super to pass through
@state = :headers
end
end
super.tap do
# delay setting this up until after the first transition to :body
@headers_sent = headers_sent
end
end
def <<(chunk)
@mutex.synchronize do
if @drainer
@body.clear if @body.respond_to?(:clear)
@drainer = nil
end
@body << chunk
transition(:body)
end
end
def close
@mutex.synchronize do
return if @closed
@closed = true
end
# last chunk to send which ends the stream
self << ""
end
end
module RequestBodyMethods
def initialize(*, **)
super
@headers.delete("content-length")
end
def empty?
false
end
end
# overrides the declaration of +@write_buffer+, which is now a thread-safe buffer
# responding to the same API.
module ConnectionMethods
attr_writer :signal
def initialize(*)
super
@write_buffer = BidiBuffer.new(@options.buffer_size)
end
# rebuffers the +@write_buffer+ before calculating interests.
def interests
@write_buffer.rebuffer
super
end
private
def parser_type(protocol)
return HTTP2Bidi if protocol == "h2"
super
end
def set_parser_callbacks(parser)
super
parser.on(:flush_buffer) do
@signal.wakeup if @signal
end
end
end
end
register_plugin :stream_bidi, StreamBidi
end
end


@@ -28,7 +28,7 @@ module HTTPX
end

module InstanceMethods
-def fetch_response(request, connections, options)
+def fetch_response(request, selector, options)
  response = super
  if response
@@ -45,7 +45,7 @@ module HTTPX
  return response unless protocol_handler

  log { "upgrading to #{upgrade_protocol}..." }
-  connection = find_connection(request, connections, options)
+  connection = find_connection(request.uri, selector, options)

  # do not upgrade already upgraded connections
  return if connection.upgrade_protocol == upgrade_protocol
@@ -60,21 +60,22 @@ module HTTPX
  response
end

-def close(*args)
-  return super if args.empty?
-  connections, = args
-  pool.close(connections.reject(&:hijacked))
-end
end

module ConnectionMethods
attr_reader :upgrade_protocol, :hijacked

+def initialize(*)
+  super
+  @upgrade_protocol = nil
+end

def hijack_io
  @hijacked = true
+
+  # connection is taken away from selector and not given back to the pool.
+  @current_session.deselect_connection(self, @current_selector, true)
end
end
end

View File

@@ -8,6 +8,10 @@ module HTTPX
# https://gitlab.com/os85/httpx/wikis/WebDav
#
module WebDav
+def self.configure(klass)
+  klass.plugin(:xml)
+end

module InstanceMethods
def copy(src, dest)
  request("COPY", src, headers: { "destination" => @options.origin.merge(dest) })
@@ -43,6 +47,8 @@ module HTTPX
ensure
  unlock(path, lock_token)
end
+
+response
end

def unlock(path, lock_token)

lib/httpx/plugins/xml.rb (new file)

@ -0,0 +1,76 @@
# frozen_string_literal: true
module HTTPX
module Plugins
#
# This plugin supports request XML encoding/response decoding using the nokogiri gem.
#
# https://gitlab.com/os85/httpx/wikis/XML
#
module XML
MIME_TYPES = %r{\b(application|text)/(.+\+)?xml\b}.freeze
module Transcoder
module_function
class Encoder
def initialize(xml)
@raw = xml
end
def content_type
charset = @raw.respond_to?(:encoding) && @raw.encoding ? @raw.encoding.to_s.downcase : "utf-8"
"application/xml; charset=#{charset}"
end
def bytesize
@raw.to_s.bytesize
end
def to_s
@raw.to_s
end
end
def encode(xml)
Encoder.new(xml)
end
def decode(response)
content_type = response.content_type.mime_type
raise HTTPX::Error, "invalid xml mime type (#{content_type})" unless MIME_TYPES.match?(content_type)
Nokogiri::XML.method(:parse)
end
end
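The `MIME_TYPES` pattern above accepts both plain XML types and structured-suffix types ("+xml", per RFC 6839). A quick standalone check of the same regexp:

```ruby
# (application|text)/(.+\+)?xml matches plain and "+xml"-suffixed types,
# but not unrelated types such as application/json.
MIME_TYPES = %r{\b(application|text)/(.+\+)?xml\b}

xml_like = %w[application/xml text/xml application/xhtml+xml].all? { |t| MIME_TYPES.match?(t) }
not_xml = MIME_TYPES.match?("application/json")
```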
class << self
def load_dependencies(*)
require "nokogiri"
end
end
module ResponseMethods
# decodes the response payload into a Nokogiri::XML::Node object **if** the payload is valid
# "application/xml" (requires the "nokogiri" gem).
def xml
decode(Transcoder)
end
end
module RequestBodyClassMethods
# ..., xml: Nokogiri::XML::Node #=> xml encoder
def initialize_body(params)
if (xml = params.delete(:xml))
# @type var xml: Nokogiri::XML::Node | String
return Transcoder.encode(xml)
end
super
end
end
end
register_plugin(:xml, XML)
end
end


@@ -1,6 +1,5 @@
# frozen_string_literal: true

-require "forwardable"
require "httpx/selector"
require "httpx/connection"
require "httpx/resolver"
@@ -8,110 +7,34 @@ require "httpx/resolver"
module HTTPX
class Pool
using ArrayExtensions::FilterMap
-extend Forwardable
+using URIExtensions

-def_delegator :@timers, :after
+POOL_TIMEOUT = 5

-def initialize
-  @resolvers = {}
-  @timers = Timers.new
-  @selector = Selector.new
+# Sets up the connection pool with the given +options+, which can be the following:
+#
+# :max_connections :: the maximum number of connections held in the pool.
+# :max_connections_per_origin :: the maximum number of connections held in the pool pointing to a given origin.
+# :pool_timeout :: the number of seconds to wait for a connection to a given origin (before raising HTTPX::PoolTimeoutError)
+#
+def initialize(options)
+  @max_connections = options.fetch(:max_connections, Float::INFINITY)
+  @max_connections_per_origin = options.fetch(:max_connections_per_origin, Float::INFINITY)
+  @pool_timeout = options.fetch(:pool_timeout, POOL_TIMEOUT)
+  @resolvers = Hash.new { |hs, resolver_type| hs[resolver_type] = [] }
+  @resolver_mtx = Thread::Mutex.new
  @connections = []
+  @connection_mtx = Thread::Mutex.new
+  @connections_counter = 0
+  @max_connections_cond = ConditionVariable.new
+  @origin_counters = Hash.new(0)
+  @origin_conds = Hash.new { |hs, orig| hs[orig] = ConditionVariable.new }
end
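The bounded-checkout mechanics the new pool options describe (a counter guarded by a mutex, with a `ConditionVariable` wait that gives up after `pool_timeout` seconds) can be sketched in miniature. All names here are illustrative, and a plain `RuntimeError` stands in for `HTTPX::PoolTimeoutError`:

```ruby
# Minimal bounded pool: checkout blocks on a ConditionVariable when full,
# and raises if no slot frees up within the timeout.
class TinyPool
  Entry = Struct.new(:name)

  def initialize(max_connections:, pool_timeout:)
    @max = max_connections
    @timeout = pool_timeout
    @count = 0
    @mutex = Thread::Mutex.new
    @cond = ConditionVariable.new
  end

  def checkout
    @mutex.synchronize do
      if @count == @max
        @cond.wait(@mutex, @timeout) # releases the lock while waiting
        raise "pool timeout after #{@timeout}s" if @count == @max
      end
      @count += 1
      Entry.new("conn-#{@count}")
    end
  end

  def checkin
    @mutex.synchronize do
      @count -= 1
      @cond.signal # wake one waiter
    end
  end
end
```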
-def wrap
-  connections = @connections
-  @connections = []
-
-  begin
-    yield self
-  ensure
-    @connections.unshift(*connections)
-  end
-end
+# connections returned by this function are not expected to return to the connection pool.
+def pop_connection
+  @connection_mtx.synchronize do
+    drop_connection
+  end
+end

-def empty?
-  @connections.empty?
-end
-
-def next_tick
-  catch(:jump_tick) do
-    timeout = next_timeout
-    if timeout && timeout.negative?
-      @timers.fire
-      throw(:jump_tick)
-    end
-
-    begin
-      @selector.select(timeout, &:call)
-      @timers.fire
-    rescue TimeoutError => e
-      @timers.fire(e)
-    end
-  end
-rescue StandardError => e
-  @connections.each do |connection|
-    connection.emit(:error, e)
-  end
-rescue Exception # rubocop:disable Lint/RescueException
-  @connections.each(&:force_reset)
-  raise
-end
-
-def close(connections = @connections)
-  return if connections.empty?
-
-  connections = connections.reject(&:inflight?)
-  connections.each(&:terminate)
-  next_tick until connections.none? { |c| c.state != :idle && @connections.include?(c) }
-
-  # close resolvers
-  outstanding_connections = @connections
-  resolver_connections = @resolvers.each_value.flat_map(&:connections).compact
-  outstanding_connections -= resolver_connections
-
-  return unless outstanding_connections.empty?
-
-  @resolvers.each_value do |resolver|
-    resolver.close unless resolver.closed?
-  end
-
-  # for https resolver
-  resolver_connections.each(&:terminate)
-  next_tick until resolver_connections.none? { |c| c.state != :idle && @connections.include?(c) }
-end
-
-def init_connection(connection, _options)
-  connection.timers = @timers
-  connection.on(:activate) do
-    select_connection(connection)
-  end
-  connection.on(:exhausted) do
-    case connection.state
-    when :closed
-      connection.idling
-      @connections << connection
-      select_connection(connection)
-    when :closing
-      connection.once(:close) do
-        connection.idling
-        @connections << connection
-        select_connection(connection)
-      end
-    end
-  end
-  connection.on(:close) do
-    unregister_connection(connection)
-  end
-  connection.on(:terminate) do
-    unregister_connection(connection, true)
-  end
-
-  resolve_connection(connection) unless connection.family
-end
-
-def deactivate(connections)
-  connections.each do |connection|
-    connection.deactivate
-    deselect_connection(connection) if connection.state == :inactive
-  end
-end
@@ -119,183 +42,144 @@ module HTTPX
# Many hostnames are reachable through the same IP, so we try to
# maximize pipelining by opening as few connections as possible.
#
-def find_connection(uri, options)
-  conn = @connections.find do |connection|
-    connection.match?(uri, options)
-  end
-
-  return unless conn
-
-  case conn.state
-  when :closed
-    conn.idling
-    select_connection(conn)
-  when :closing
-    conn.once(:close) do
-      conn.idling
-      select_connection(conn)
-    end
-  end
-
-  conn
-end
+def checkout_connection(uri, options)
+  return checkout_new_connection(uri, options) if options.io
+
+  @connection_mtx.synchronize do
+    acquire_connection(uri, options) || begin
+      if @connections_counter == @max_connections
+        # this takes precedence over per-origin
+        @max_connections_cond.wait(@connection_mtx, @pool_timeout)
+
+        acquire_connection(uri, options) || begin
+          if @connections_counter == @max_connections
+            # if no matching usable connection was found, the pool will make room and drop a closed connection. if none is found,
+            # this means that all of them are persistent or being used, so raise a timeout error.
+            conn = @connections.find { |c| c.state == :closed }
+
+            raise PoolTimeoutError.new(@pool_timeout,
+                                       "Timed out after #{@pool_timeout} seconds while waiting for a connection") unless conn
+
+            drop_connection(conn)
+          end
+        end
+      end
+
+      if @origin_counters[uri.origin] == @max_connections_per_origin
+        @origin_conds[uri.origin].wait(@connection_mtx, @pool_timeout)
+
+        return acquire_connection(uri, options) ||
+               raise(PoolTimeoutError.new(@pool_timeout,
+                                          "Timed out after #{@pool_timeout} seconds while waiting for a connection to #{uri.origin}"))
+      end
+
+      @connections_counter += 1
+      @origin_counters[uri.origin] += 1
+
+      checkout_new_connection(uri, options)
+    end
+  end
+end
def checkin_connection(connection)
return if connection.options.io
@connection_mtx.synchronize do
@connections << connection
@max_connections_cond.signal
@origin_conds[connection.origin.to_s].signal
end
end
def checkout_mergeable_connection(connection)
return if connection.options.io
@connection_mtx.synchronize do
idx = @connections.find_index do |ch|
ch != connection && ch.mergeable?(connection)
end
@connections.delete_at(idx) if idx
end
end
def reset_resolvers
@resolver_mtx.synchronize { @resolvers.clear }
end
def checkout_resolver(options)
resolver_type = options.resolver_class
resolver_type = Resolver.resolver_for(resolver_type)
@resolver_mtx.synchronize do
resolvers = @resolvers[resolver_type]
idx = resolvers.find_index do |res|
res.options == options
end
resolvers.delete_at(idx) if idx
end || checkout_new_resolver(resolver_type, options)
end
def checkin_resolver(resolver)
@resolver_mtx.synchronize do
resolvers = @resolvers[resolver.class]
resolver = resolver.multi
resolvers << resolver unless resolvers.include?(resolver)
end
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@max_connections_per_origin=#{@max_connections_per_origin} " \
"@pool_timeout=#{@pool_timeout} " \
"@connections=#{@connections.size}>"
end
# :nocov:
private

+def acquire_connection(uri, options)
+  idx = @connections.find_index do |connection|
+    connection.match?(uri, options)
+  end
+
+  return unless idx
+
+  @connections.delete_at(idx)
+end
+
+def checkout_new_connection(uri, options)
+  options.connection_class.new(uri, options)
+end
+
+def checkout_new_resolver(resolver_type, options)
+  if resolver_type.multi?
+    Resolver::Multi.new(resolver_type, options)
+  else
+    resolver_type.new(options)
+  end
+end
+
+# drops and returns the +connection+ from the connection pool; if +connection+ is <tt>nil</tt> (default),
+# the first available connection from the pool will be dropped.
+def drop_connection(connection = nil)
+  if connection
+    @connections.delete(connection)
+  else
+    connection = @connections.shift
+
+    return unless connection
+  end
+
+  @connections_counter -= 1
+  @origin_conds.delete(connection.origin) if (@origin_counters[connection.origin.to_s] -= 1).zero?
+
+  connection
+end

-def resolve_connection(connection)
-  @connections << connection unless @connections.include?(connection)
-
-  if connection.addresses || connection.open?
-    #
-    # there are two cases in which we want to activate initialization of
-    # connection immediately:
-    #
-    # 1. when the connection already has addresses, i.e. it doesn't need to
-    # resolve a name (not the same as name being an IP, yet)
-    # 2. when the connection is initialized with an external already open IO.
-    #
-    connection.once(:connect_error, &connection.method(:handle_error))
-    on_resolver_connection(connection)
-    return
-  end
-
-  find_resolver_for(connection) do |resolver|
-    resolver << try_clone_connection(connection, resolver.family)
-    next if resolver.empty?
-
-    select_connection(resolver)
-  end
-end
-
-def try_clone_connection(connection, family)
-  connection.family ||= family
-
-  return connection if connection.family == family
-
-  new_connection = connection.class.new(connection.origin, connection.options)
-  new_connection.family = family
-
-  connection.once(:tcp_open) { new_connection.force_reset }
-  connection.once(:connect_error) do |err|
-    if new_connection.connecting?
-      new_connection.merge(connection)
-      connection.force_reset
-    else
-      connection.__send__(:handle_error, err)
-    end
-  end
-
-  new_connection.once(:tcp_open) do |new_conn|
-    if new_conn != connection
-      new_conn.merge(connection)
-      connection.force_reset
-    end
-  end
-  new_connection.once(:connect_error) do |err|
-    if connection.connecting?
-      # main connection has the requests
-      connection.merge(new_connection)
-      new_connection.force_reset
-    else
-      new_connection.__send__(:handle_error, err)
-    end
-  end
-
-  init_connection(new_connection, connection.options)
-  new_connection
-end
-
-def on_resolver_connection(connection)
-  @connections << connection unless @connections.include?(connection)
-  found_connection = @connections.find do |ch|
-    ch != connection && ch.mergeable?(connection)
-  end
-  return register_connection(connection) unless found_connection
-
-  if found_connection.open?
-    coalesce_connections(found_connection, connection)
-    throw(:coalesced, found_connection) unless @connections.include?(connection)
-  else
-    found_connection.once(:open) do
-      coalesce_connections(found_connection, connection)
-    end
-  end
-end
-
-def on_resolver_error(connection, error)
-  return connection.emit(:connect_error, error) if connection.connecting? && connection.callbacks_for?(:connect_error)
-
-  connection.emit(:error, error)
-end
-
-def on_resolver_close(resolver)
-  resolver_type = resolver.class
-
-  return if resolver.closed?
-
-  @resolvers.delete(resolver_type)
-
-  deselect_connection(resolver)
-  resolver.close unless resolver.closed?
-end
-
-def register_connection(connection)
-  select_connection(connection)
-end
-
-def unregister_connection(connection, cleanup = !connection.used?)
-  @connections.delete(connection) if cleanup
-  deselect_connection(connection)
-end
-
-def select_connection(connection)
-  @selector.register(connection)
-end
-
-def deselect_connection(connection)
-  @selector.deregister(connection)
-end
-
-def coalesce_connections(conn1, conn2)
-  return register_connection(conn2) unless conn1.coalescable?(conn2)
-
-  conn2.emit(:tcp_open, conn1)
-  conn1.merge(conn2)
-  @connections.delete(conn2)
-end
-
-def next_timeout
-  [
-    @timers.wait_interval,
-    *@resolvers.values.reject(&:closed?).filter_map(&:timeout),
-    *@connections.filter_map(&:timeout),
-  ].compact.min
-end
-
-def find_resolver_for(connection)
-  connection_options = connection.options
-  resolver_type = connection_options.resolver_class
-  resolver_type = Resolver.resolver_for(resolver_type)
-
-  @resolvers[resolver_type] ||= begin
-    resolver_manager = if resolver_type.multi?
-      Resolver::Multi.new(resolver_type, connection_options)
-    else
-      resolver_type.new(connection_options)
-    end
-    resolver_manager.on(:resolve, &method(:on_resolver_connection))
-    resolver_manager.on(:error, &method(:on_resolver_error))
-    resolver_manager.on(:close, &method(:on_resolver_close))
-    resolver_manager
-  end
-
-  manager = @resolvers[resolver_type]
-
-  (manager.is_a?(Resolver::Multi) && manager.early_resolve(connection)) || manager.resolvers.each do |resolver|
-    resolver.pool = self
-    yield resolver
-  end
-
-  manager
-end
end
end
end


@ -8,11 +8,14 @@ module HTTPX
# as well as maintaining the state machine which manages streaming the request onto the wire. # as well as maintaining the state machine which manages streaming the request onto the wire.
class Request class Request
extend Forwardable extend Forwardable
include Loggable
include Callbacks include Callbacks
using URIExtensions using URIExtensions
ALLOWED_URI_SCHEMES = %w[https http].freeze
# default value used for "user-agent" header, when not overridden. # default value used for "user-agent" header, when not overridden.
USER_AGENT = "httpx.rb/#{VERSION}" USER_AGENT = "httpx.rb/#{VERSION}".freeze # rubocop:disable Style/RedundantFreeze
# the upcased string HTTP verb for this request. # the upcased string HTTP verb for this request.
attr_reader :verb attr_reader :verb
@ -43,16 +46,52 @@ module HTTPX
attr_writer :persistent attr_writer :persistent
attr_reader :active_timeouts
# will be +true+ when request body has been completely flushed. # will be +true+ when request body has been completely flushed.
def_delegator :@body, :empty? def_delegator :@body, :empty?
# initializes the instance with the given +verb+, an absolute or relative +uri+, and the # closes the body
# request options. def_delegator :@body, :close
def initialize(verb, uri, options = {})
# initializes the instance with the given +verb+ (an uppercase String, ex. 'GET'),
# an absolute or relative +uri+ (either as String or URI::HTTP object), the
# request +options+ (instance of HTTPX::Options) and an optional Hash of +params+.
#
# Besides any of the options documented in HTTPX::Options (which would override or merge with what
# +options+ sets), it accepts also the following:
#
# :params :: hash or array of key-values which will be encoded and set in the query string of request uris.
# :body :: to be encoded in the request body payload. can be a String, an IO object (i.e. a File), or an Enumerable.
# :form :: hash or array of key-values which will be form-urlencoded- or multipart-encoded in requests body payload.
# :json :: hash or array of key-values which will be JSON-encoded in requests body payload.
# :xml :: Nokogiri XML nodes which will be encoded in requests body payload.
#
# :body, :form, :json and :xml are all mutually exclusive, i.e. only one of them gets picked up.
def initialize(verb, uri, options, params = EMPTY_HASH)
@verb = verb.to_s.upcase @verb = verb.to_s.upcase
@options = Options.new(options)
@uri = Utils.to_uri(uri) @uri = Utils.to_uri(uri)
if @uri.relative?
@headers = options.headers.dup
merge_headers(params.delete(:headers)) if params.key?(:headers)
@headers["user-agent"] ||= USER_AGENT
@headers["accept"] ||= "*/*"
# forego compression in the Range request case
if @headers.key?("range")
@headers.delete("accept-encoding")
else
@headers["accept-encoding"] ||= options.supported_compression_formats
end
@query_params = params.delete(:params) if params.key?(:params)
@body = options.request_body_class.new(@headers, options, **params)
@options = @body.options
if @uri.relative? || @uri.host.nil?
origin = @options.origin origin = @options.origin
raise(Error, "invalid URI: #{@uri}") unless origin raise(Error, "invalid URI: #{@uri}") unless origin
@ -61,28 +100,37 @@ module HTTPX
@uri = origin.merge("#{base_path}#{@uri}") @uri = origin.merge("#{base_path}#{@uri}")
end end
@headers = @options.headers.dup raise UnsupportedSchemeError, "#{@uri}: #{@uri.scheme}: unsupported URI scheme" unless ALLOWED_URI_SCHEMES.include?(@uri.scheme)
@headers["user-agent"] ||= USER_AGENT
@headers["accept"] ||= "*/*"
@body = @options.request_body_class.new(@headers, @options)
@state = :idle @state = :idle
@response = nil @response = nil
@peer_address = nil @peer_address = nil
@ping = false
@persistent = @options.persistent @persistent = @options.persistent
@active_timeouts = []
end end
# the read timeout defied for this requet. # whether request has been buffered with a ping
def ping?
@ping
end
# marks the request as having been buffered with a ping
def ping!
@ping = true
end
# the read timeout defined for this request.
def read_timeout def read_timeout
@options.timeout[:read_timeout] @options.timeout[:read_timeout]
end end
# the write timeout defied for this requet. # the write timeout defined for this request.
def write_timeout def write_timeout
@options.timeout[:write_timeout] @options.timeout[:write_timeout]
end end
# the request timeout defied for this requet. # the request timeout defined for this request.
def request_timeout def request_timeout
@options.timeout[:request_timeout] @options.timeout[:request_timeout]
end end
@ -91,10 +139,12 @@ module HTTPX
@persistent @persistent
end end
# if the request contains trailer headers
def trailers? def trailers?
defined?(@trailers) defined?(@trailers)
end end
# returns an instance of HTTPX::Headers containing the trailer headers
def trailers def trailers
@trailers ||= @options.headers_class.new @trailers ||= @options.headers_class.new
end end
@ -106,6 +156,11 @@ module HTTPX
:w :w
end end
def can_buffer?
@state != :done
end
# merges +h+ into the instance of HTTPX::Headers of the request.
def merge_headers(h) def merge_headers(h)
@headers = @headers.merge(h) @headers = @headers.merge(h)
end end
@ -172,7 +227,7 @@ module HTTPX
return @query if defined?(@query) return @query if defined?(@query)
query = [] query = []
if (q = @options.params) if (q = @query_params) && !q.empty?
query << Transcoder::Form.encode(q) query << Transcoder::Form.encode(q)
end end
query << @uri.query if @uri.query query << @uri.query if @uri.query
@ -197,7 +252,7 @@ module HTTPX
# :nocov: # :nocov:
def inspect def inspect
"#<HTTPX::Request:#{object_id} " \ "#<#{self.class}:#{object_id} " \
"#{@verb} " \ "#{@verb} " \
"#{uri} " \ "#{uri} " \
"@headers=#{@headers} " \ "@headers=#{@headers} " \
@ -210,10 +265,13 @@ module HTTPX
case nextstate case nextstate
when :idle when :idle
@body.rewind @body.rewind
@ping = false
@response = nil @response = nil
@drainer = nil @drainer = nil
@active_timeouts.clear
when :headers when :headers
return unless @state == :idle return unless @state == :idle
when :body when :body
return unless @state == :headers || return unless @state == :headers ||
@state == :expect @state == :expect
@ -234,7 +292,9 @@ module HTTPX
return unless @state == :body return unless @state == :body
when :done when :done
return if @state == :expect return if @state == :expect
end end
log(level: 3) { "#{@state}] -> #{nextstate}" }
@state = nextstate @state = nextstate
emit(@state, self) emit(@state, self)
nil nil
@ -244,6 +304,15 @@ module HTTPX
def expects? def expects?
@headers["expect"] == "100-continue" && @informational_status == 100 && !@response @headers["expect"] == "100-continue" && @informational_status == 100 && !@response
end end
def set_timeout_callback(event, &callback)
clb = once(event, &callback)
# reset timeout callbacks when requests get rerouted to a different connection
once(:idle) do
callbacks(event).delete(clb)
end
end
end end
end end
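The header-defaulting rules introduced in Request#initialize above (fill in "user-agent" and "accept" when absent, and forego compression for Range requests) can be sketched in isolation. USER_AGENT and SUPPORTED_COMPRESSION below are illustrative stand-ins, not the library's own values.

```ruby
USER_AGENT = "httpx.rb/1.0"
SUPPORTED_COMPRESSION = "gzip, deflate"

def apply_default_headers(headers)
  headers["user-agent"] ||= USER_AGENT
  headers["accept"] ||= "*/*"
  if headers.key?("range")
    # a ranged request over a compressed payload would make byte offsets ambiguous
    headers.delete("accept-encoding")
  else
    headers["accept-encoding"] ||= SUPPORTED_COMPRESSION
  end
  headers
end

h = apply_default_headers({ "range" => "bytes=0-99" })
```

The Range special-case mirrors the hunk above, where the accept-encoding handling moved from Request::Body into Request#initialize.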


@ -4,30 +4,44 @@ module HTTPX
# Implementation of the HTTP Request body as a delegator which iterates (responds to +each+) payload chunks. # Implementation of the HTTP Request body as a delegator which iterates (responds to +each+) payload chunks.
class Request::Body < SimpleDelegator class Request::Body < SimpleDelegator
class << self class << self
def new(_, options) def new(_, options, body: nil, **params)
return options.body if options.body.is_a?(self) if body.is_a?(self)
# request derives its options from body
body.options = options.merge(params)
return body
end
super super
end end
end end
# inits the instance with the request +headers+ and +options+, which contain the payload definition. attr_accessor :options
def initialize(headers, options)
@headers = headers
# forego compression in the Range request case # inits the instance with the request +headers+, +options+ and +params+, which contain the payload definition.
if @headers.key?("range") # it wraps the given body with the appropriate encoder on initialization.
@headers.delete("accept-encoding") #
else # ..., json: { foo: "bar" }) #=> json encoder
@headers["accept-encoding"] ||= options.supported_compression_formats # ..., form: { foo: "bar" }) #=> form urlencoded encoder
# ..., form: { foo: Pathname.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { foo: File.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { body: "bla") }) #=> raw data encoder
def initialize(h, options, **params)
@headers = h
@body = self.class.initialize_body(params)
@options = options.merge(params)
if @body
if @options.compress_request_body && @headers.key?("content-encoding")
@headers.get("content-encoding").each do |encoding|
@body = self.class.initialize_deflater_body(@body, encoding)
end
end
@headers["content-type"] ||= @body.content_type
@headers["content-length"] = @body.bytesize unless unbounded_body?
end end
initialize_body(options)
return if @body.nil?
@headers["content-type"] ||= @body.content_type
@headers["content-length"] = @body.bytesize unless unbounded_body?
super(@body) super(@body)
end end
@ -38,7 +52,11 @@ module HTTPX
body = stream(@body) body = stream(@body)
if body.respond_to?(:read) if body.respond_to?(:read)
::IO.copy_stream(body, ProcIO.new(block)) while (chunk = body.read(16_384))
block.call(chunk)
end
# TODO: use copy_stream once bug is resolved: https://bugs.ruby-lang.org/issues/21131
# IO.copy_stream(body, ProcIO.new(block))
elsif body.respond_to?(:each) elsif body.respond_to?(:each)
body.each(&block) body.each(&block)
else else
@ -46,6 +64,10 @@ module HTTPX
end end
end end
def close
@body.close if @body.respond_to?(:close)
end
# if the +@body+ is rewindable, it rewinds it.
def rewind def rewind
return if empty? return if empty?
@ -94,39 +116,25 @@ module HTTPX
# :nocov: # :nocov:
def inspect def inspect
"#<HTTPX::Request::Body:#{object_id} " \ "#<#{self.class}:#{object_id} " \
"#{unbounded_body? ? "stream" : "@bytesize=#{bytesize}"}>" "#{unbounded_body? ? "stream" : "@bytesize=#{bytesize}"}>"
end end
# :nocov: # :nocov:
private
# wraps the given body with the appropriate encoder.
#
# ..., json: { foo: "bar" }) #=> json encoder
# ..., form: { foo: "bar" }) #=> form urlencoded encoder
# ..., form: { foo: Pathname.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { foo: File.open("path/to/file") }) #=> multipart urlencoded encoder
# ..., form: { body: "bla") }) #=> raw data encoder
def initialize_body(options)
@body = if options.body
Transcoder::Body.encode(options.body)
elsif options.form
Transcoder::Form.encode(options.form)
elsif options.json
Transcoder::JSON.encode(options.json)
elsif options.xml
Transcoder::Xml.encode(options.xml)
end
return unless @body && options.compress_request_body && @headers.key?("content-encoding")
@headers.get("content-encoding").each do |encoding|
@body = self.class.initialize_deflater_body(@body, encoding)
end
end
class << self class << self
def initialize_body(params)
if (body = params.delete(:body))
# @type var body: bodyIO
Transcoder::Body.encode(body)
elsif (form = params.delete(:form))
# @type var form: Transcoder::urlencoded_input
Transcoder::Form.encode(form)
elsif (json = params.delete(:json))
# @type var body: _ToJson
Transcoder::JSON.encode(json)
end
end
# returns the +body+ wrapped with the correct deflater according to the given +encoding+.
def initialize_deflater_body(body, encoding) def initialize_deflater_body(body, encoding)
case encoding case encoding
@ -142,17 +150,4 @@ module HTTPX
end end
end end
end end
# Wrapper yielder which can be used with functions which expect an IO writer.
class ProcIO
def initialize(block)
@block = block
end
# Implementation of the IO write protocol, which yields the given chunk to +@block+.
def write(data)
@block.call(data.dup)
data.bytesize
end
end
end end
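The class-level initialize_body shown above picks exactly one encoder, with :body taking precedence over :form over :json, and deletes the chosen key so it isn't merged into the request options. A minimal stand-in for that dispatch, using stdlib encoders in place of the real Transcoder modules:

```ruby
require "json"
require "uri"

# first of :body/:form/:json present wins; the key is removed from params
def initialize_body(params)
  if (body = params.delete(:body))
    body.to_s                  # raw payload (Transcoder::Body in httpx)
  elsif (form = params.delete(:form))
    URI.encode_www_form(form)  # urlencoded (Transcoder::Form in httpx)
  elsif (json = params.delete(:json))
    JSON.generate(json)        # Transcoder::JSON in httpx
  end
end
```

The deletion is what makes `@options = options.merge(params)` safe afterwards: body-related keys never leak into the merged options.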


@ -53,8 +53,8 @@ module HTTPX
def cached_lookup(hostname) def cached_lookup(hostname)
now = Utils.now now = Utils.now
@lookup_mutex.synchronize do lookup_synchronize do |lookups|
lookup(hostname, now) lookup(hostname, lookups, now)
end end
end end
@ -63,37 +63,49 @@ module HTTPX
entries.each do |entry| entries.each do |entry|
entry["TTL"] += now entry["TTL"] += now
end end
@lookup_mutex.synchronize do lookup_synchronize do |lookups|
case family case family
when Socket::AF_INET6 when Socket::AF_INET6
@lookups[hostname].concat(entries) lookups[hostname].concat(entries)
when Socket::AF_INET when Socket::AF_INET
@lookups[hostname].unshift(*entries) lookups[hostname].unshift(*entries)
end end
entries.each do |entry| entries.each do |entry|
next unless entry["name"] != hostname next unless entry["name"] != hostname
case family case family
when Socket::AF_INET6 when Socket::AF_INET6
@lookups[entry["name"]] << entry lookups[entry["name"]] << entry
when Socket::AF_INET when Socket::AF_INET
@lookups[entry["name"]].unshift(entry) lookups[entry["name"]].unshift(entry)
end end
end end
end end
end end
# do not use directly! def cached_lookup_evict(hostname, ip)
def lookup(hostname, ttl) ip = ip.to_s
return unless @lookups.key?(hostname)
entries = @lookups[hostname] = @lookups[hostname].select do |address| lookup_synchronize do |lookups|
entries = lookups[hostname]
return unless entries
lookups.delete_if { |entry| entry["data"] == ip }
end
end
# do not use directly!
def lookup(hostname, lookups, ttl)
return unless lookups.key?(hostname)
entries = lookups[hostname] = lookups[hostname].select do |address|
address["TTL"] > ttl address["TTL"] > ttl
end end
ips = entries.flat_map do |address| ips = entries.flat_map do |address|
if address.key?("alias") if (als = address["alias"])
lookup(address["alias"], ttl) lookup(als, lookups, ttl)
else else
IPAddr.new(address["data"]) IPAddr.new(address["data"])
end end
@ -103,12 +115,11 @@ module HTTPX
end end
def generate_id def generate_id
@identifier_mutex.synchronize { @identifier = (@identifier + 1) & 0xFFFF } id_synchronize { @identifier = (@identifier + 1) & 0xFFFF }
end end
def encode_dns_query(hostname, type: Resolv::DNS::Resource::IN::A, message_id: generate_id) def encode_dns_query(hostname, type: Resolv::DNS::Resource::IN::A, message_id: generate_id)
Resolv::DNS::Message.new.tap do |query| Resolv::DNS::Message.new(message_id).tap do |query|
query.id = message_id
query.rd = 1 query.rd = 1
query.add_question(hostname, type) query.add_question(hostname, type)
end.encode end.encode
@ -150,5 +161,13 @@ module HTTPX
[:ok, addresses] [:ok, addresses]
end end
def lookup_synchronize
@lookup_mutex.synchronize { yield(@lookups) }
end
def id_synchronize(&block)
@identifier_mutex.synchronize(&block)
end
end end
end end
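The TTL pruning and CNAME chasing that lookup performs above can be reduced to the following sketch; "TTL" holds an absolute deadline here (the real code adds `now` on insert), and the mutex/lookup_synchronize layer is omitted.

```ruby
# prune expired entries for hostname, recursing through "alias" (CNAME) records
def lookup(hostname, lookups, now)
  return unless lookups.key?(hostname)

  entries = lookups[hostname] = lookups[hostname].select { |e| e["TTL"] > now }
  entries.flat_map do |e|
    if (als = e["alias"])
      lookup(als, lookups, now) || []
    else
      [e["data"]]
    end
  end
end

lookups = {
  "example.com" => [{ "alias" => "cdn.example.net", "TTL" => 100 }],
  "cdn.example.net" => [{ "data" => "93.184.216.34", "TTL" => 100 },
                        { "data" => "93.184.216.35", "TTL" => 10 }],
}
lookup("example.com", lookups, 50) # expired cdn entry is pruned as a side effect
```

Passing `lookups` in explicitly matches the refactor above, where the method now operates on the hash yielded by lookup_synchronize instead of touching @lookups directly.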


@ -2,11 +2,14 @@
require "resolv" require "resolv"
require "uri" require "uri"
require "cgi"
require "forwardable" require "forwardable"
require "httpx/base64" require "httpx/base64"
module HTTPX module HTTPX
# Implementation of a DoH name resolver (https://www.youtube.com/watch?v=unMXvnY2FNM).
# It wraps an HTTPX::Connection object which integrates with the main session in the
# same manner as other performed HTTP requests.
#
class Resolver::HTTPS < Resolver::Resolver class Resolver::HTTPS < Resolver::Resolver
extend Forwardable extend Forwardable
using URIExtensions using URIExtensions
@ -27,14 +30,13 @@ module HTTPX
use_get: false, use_get: false,
}.freeze }.freeze
def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate def_delegators :@resolver_connection, :state, :connecting?, :to_io, :call, :close, :terminate, :inflight?, :handle_socket_timeout
def initialize(_, options) def initialize(_, options)
super super
@resolver_options = DEFAULTS.merge(@options.resolver_options) @resolver_options = DEFAULTS.merge(@options.resolver_options)
@queries = {} @queries = {}
@requests = {} @requests = {}
@connections = []
@uri = URI(@resolver_options[:uri]) @uri = URI(@resolver_options[:uri])
@uri_addresses = nil @uri_addresses = nil
@resolver = Resolv::DNS.new @resolver = Resolv::DNS.new
@ -43,7 +45,7 @@ module HTTPX
end end
def <<(connection) def <<(connection)
return if @uri.origin == connection.origin.to_s return if @uri.origin == connection.peer.to_s
@uri_addresses ||= HTTPX::Resolver.nolookup_resolve(@uri.host) || @resolver.getaddresses(@uri.host) @uri_addresses ||= HTTPX::Resolver.nolookup_resolve(@uri.host) || @resolver.getaddresses(@uri.host)
@ -66,28 +68,29 @@ module HTTPX
end end
def resolver_connection def resolver_connection
@resolver_connection ||= @pool.find_connection(@uri, @options) || begin # TODO: leaks connection object into the pool
@building_connection = true @resolver_connection ||= @current_session.find_connection(@uri, @current_selector,
connection = @options.connection_class.new(@uri, @options.merge(ssl: { alpn_protocols: %w[h2] })) @options.merge(ssl: { alpn_protocols: %w[h2] })).tap do |conn|
@pool.init_connection(connection, @options) emit_addresses(conn, @family, @uri_addresses) unless conn.addresses
# only explicitly emit addresses if connection didn't pre-resolve, i.e. it's not an IP.
emit_addresses(connection, @family, @uri_addresses) unless connection.addresses
@building_connection = false
connection
end end
end end
private private
def resolve(connection = @connections.first, hostname = nil) def resolve(connection = nil, hostname = nil)
return if @building_connection @connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.first
return unless connection return unless connection
hostname ||= @queries.key(connection) hostname ||= @queries.key(connection)
if hostname.nil? if hostname.nil?
hostname = connection.origin.host hostname = connection.peer.host
log { "resolver: resolve IDN #{connection.origin.non_ascii_hostname} as #{hostname}" } if connection.origin.non_ascii_hostname log do
"resolver #{FAMILY_TYPES[@record_type]}: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}"
end if connection.peer.non_ascii_hostname
hostname = @resolver.generate_candidates(hostname).each do |name| hostname = @resolver.generate_candidates(hostname).each do |name|
@queries[name.to_s] = connection @queries[name.to_s] = connection
@ -95,7 +98,7 @@ module HTTPX
else else
@queries[hostname] = connection @queries[hostname] = connection
end end
log { "resolver: query #{FAMILY_TYPES[RECORD_TYPES[@family]]} for #{hostname}" } log { "resolver #{FAMILY_TYPES[@record_type]}: query for #{hostname}" }
begin begin
request = build_request(hostname) request = build_request(hostname)
@ -106,7 +109,7 @@ module HTTPX
@connections << connection @connections << connection
rescue ResolveError, Resolv::DNS::EncodeError => e rescue ResolveError, Resolv::DNS::EncodeError => e
reset_hostname(hostname) reset_hostname(hostname)
emit_resolve_error(connection, connection.origin.host, e) emit_resolve_error(connection, connection.peer.host, e)
end end
end end
@ -115,7 +118,7 @@ module HTTPX
rescue StandardError => e rescue StandardError => e
hostname = @requests.delete(request) hostname = @requests.delete(request)
connection = reset_hostname(hostname) connection = reset_hostname(hostname)
emit_resolve_error(connection, connection.origin.host, e) emit_resolve_error(connection, connection.peer.host, e)
else else
# @type var response: HTTPX::Response # @type var response: HTTPX::Response
parse(request, response) parse(request, response)
@ -154,7 +157,7 @@ module HTTPX
when :decode_error when :decode_error
host = @requests.delete(request) host = @requests.delete(request)
connection = reset_hostname(host) connection = reset_hostname(host)
emit_resolve_error(connection, connection.origin.host, result) emit_resolve_error(connection, connection.peer.host, result)
end end
end end
@ -174,7 +177,7 @@ module HTTPX
alias_address = answers[address["alias"]] alias_address = answers[address["alias"]]
if alias_address.nil? if alias_address.nil?
reset_hostname(address["name"]) reset_hostname(address["name"])
if catch(:coalesced) { early_resolve(connection, hostname: address["alias"]) } if early_resolve(connection, hostname: address["alias"])
@connections.delete(connection) @connections.delete(connection)
else else
resolve(connection, address["alias"]) resolve(connection, address["alias"])
@ -199,7 +202,7 @@ module HTTPX
@queries.delete_if { |_, conn| connection == conn } @queries.delete_if { |_, conn| connection == conn }
Resolver.cached_lookup_set(hostname, @family, addresses) if @resolver_options[:cache] Resolver.cached_lookup_set(hostname, @family, addresses) if @resolver_options[:cache]
emit_addresses(connection, @family, addresses.map { |addr| addr["data"] }) catch(:coalesced) { emit_addresses(connection, @family, addresses.map { |addr| addr["data"] }) }
end end
end end
return if @connections.empty? return if @connections.empty?
@ -219,7 +222,7 @@ module HTTPX
uri.query = URI.encode_www_form(params) uri.query = URI.encode_www_form(params)
request = rklass.new("GET", uri, @options) request = rklass.new("GET", uri, @options)
else else
request = rklass.new("POST", uri, @options.merge(body: [payload])) request = rklass.new("POST", uri, @options, body: [payload])
request.headers["content-type"] = "application/dns-message" request.headers["content-type"] = "application/dns-message"
end end
request.headers["accept"] = "application/dns-message" request.headers["accept"] = "application/dns-message"


@ -8,27 +8,49 @@ module HTTPX
include Callbacks include Callbacks
using ArrayExtensions::FilterMap using ArrayExtensions::FilterMap
attr_reader :resolvers attr_reader :resolvers, :options
def initialize(resolver_type, options) def initialize(resolver_type, options)
@current_selector = nil
@current_session = nil
@options = options @options = options
@resolver_options = @options.resolver_options @resolver_options = @options.resolver_options
@resolvers = options.ip_families.map do |ip_family| @resolvers = options.ip_families.map do |ip_family|
resolver = resolver_type.new(ip_family, options) resolver = resolver_type.new(ip_family, options)
resolver.on(:resolve, &method(:on_resolver_connection)) resolver.multi = self
resolver.on(:error, &method(:on_resolver_error))
resolver.on(:close) { on_resolver_close(resolver) }
resolver resolver
end end
@errors = Hash.new { |hs, k| hs[k] = [] } @errors = Hash.new { |hs, k| hs[k] = [] }
end end
def current_selector=(s)
@current_selector = s
@resolvers.each { |r| r.__send__(__method__, s) }
end
def current_session=(s)
@current_session = s
@resolvers.each { |r| r.__send__(__method__, s) }
end
def log(*args, **kwargs, &blk)
@resolvers.each { |r| r.__send__(__method__, *args, **kwargs, &blk) }
end
def closed? def closed?
@resolvers.all?(&:closed?) @resolvers.all?(&:closed?)
end end
def empty?
@resolvers.all?(&:empty?)
end
def inflight?
@resolvers.any?(&:inflight?)
end
def timeout def timeout
@resolvers.filter_map(&:timeout).min @resolvers.filter_map(&:timeout).min
end end
@ -42,10 +64,11 @@ module HTTPX
end end
def early_resolve(connection) def early_resolve(connection)
hostname = connection.origin.host hostname = connection.peer.host
addresses = @resolver_options[:cache] && (connection.addresses || HTTPX::Resolver.nolookup_resolve(hostname)) addresses = @resolver_options[:cache] && (connection.addresses || HTTPX::Resolver.nolookup_resolve(hostname))
return unless addresses return false unless addresses
resolved = false
addresses.group_by(&:family).sort { |(f1, _), (f2, _)| f2 <=> f1 }.each do |family, addrs| addresses.group_by(&:family).sort { |(f1, _), (f2, _)| f2 <=> f1 }.each do |family, addrs|
# try to match the resolver by family. However, there are cases where that's not possible, as when # try to match the resolver by family. However, there are cases where that's not possible, as when
# the system does not have IPv6 connectivity, but it does support IPv6 via loopback/link-local. # the system does not have IPv6 connectivity, but it does support IPv6 via loopback/link-local.
@ -55,21 +78,20 @@ module HTTPX
# it does not matter which resolver it is, as early-resolve code is shared. # it does not matter which resolver it is, as early-resolve code is shared.
resolver.emit_addresses(connection, family, addrs, true) resolver.emit_addresses(connection, family, addrs, true)
resolved = true
end end
resolved
end end
private def lazy_resolve(connection)
@resolvers.each do |resolver|
resolver << @current_session.try_clone_connection(connection, @current_selector, resolver.family)
next if resolver.empty?
def on_resolver_connection(connection) @current_session.select_resolver(resolver, @current_selector)
emit(:resolve, connection) end
end
def on_resolver_error(connection, error)
emit(:error, connection, error)
end
def on_resolver_close(resolver)
emit(:close, resolver)
end end
end end
end end
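The family bucketing in early_resolve above groups cached addresses by IP family and iterates them in descending family order, which puts IPv6 first on common platforms (where Socket::AF_INET6 > Socket::AF_INET numerically). A self-contained sketch:

```ruby
require "ipaddr"
require "socket"

# group addresses by IP family, IPv6 bucket first
def addresses_by_family(addresses)
  addresses.map { |a| IPAddr.new(a) }
           .group_by(&:family)
           .sort { |(f1, _), (f2, _)| f2 <=> f1 }
end

buckets = addresses_by_family(["127.0.0.1", "::1", "192.168.0.1"])
```

Each resolver is then matched by family, with the IPv4 resolver used as a fallback when no IPv6 resolver is around, as the comment in the hunk notes.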


@ -4,6 +4,9 @@ require "forwardable"
require "resolv" require "resolv"
module HTTPX module HTTPX
# Implements a pure ruby name resolver, which abides by the Selectable API.
# It delegates DNS payload encoding/decoding to the +resolv+ stdlib gem.
#
class Resolver::Native < Resolver::Resolver class Resolver::Native < Resolver::Resolver
extend Forwardable extend Forwardable
using URIExtensions using URIExtensions
@ -34,7 +37,7 @@ module HTTPX
@search = Array(@resolver_options[:search]).map { |srch| srch.scan(/[^.]+/) } @search = Array(@resolver_options[:search]).map { |srch| srch.scan(/[^.]+/) }
@_timeouts = Array(@resolver_options[:timeouts]) @_timeouts = Array(@resolver_options[:timeouts])
@timeouts = Hash.new { |timeouts, host| timeouts[host] = @_timeouts.dup } @timeouts = Hash.new { |timeouts, host| timeouts[host] = @_timeouts.dup }
@connections = [] @name = nil
@queries = {} @queries = {}
@read_buffer = "".b @read_buffer = "".b
@write_buffer = Buffer.new(@resolver_options[:packet_size]) @write_buffer = Buffer.new(@resolver_options[:packet_size])
@ -45,6 +48,10 @@ module HTTPX
transition(:closed) transition(:closed)
end end
def terminate
emit(:close, self)
end
def closed? def closed?
@state == :closed @state == :closed
end end
@ -58,18 +65,6 @@ module HTTPX
when :open when :open
consume consume
end end
nil
rescue Errno::EHOSTUNREACH => e
@ns_index += 1
nameserver = @nameserver
if nameserver && @ns_index < nameserver.size
log { "resolver: failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})" }
transition(:idle)
else
handle_error(e)
end
rescue NativeResolveError => e
handle_error(e)
end end
def interests def interests
@ -104,9 +99,7 @@ module HTTPX
@timeouts.values_at(*hosts).reject(&:empty?).map(&:first).min @timeouts.values_at(*hosts).reject(&:empty?).map(&:first).min
end end
def handle_socket_timeout(interval) def handle_socket_timeout(interval); end
do_retry(interval)
end
private private
@ -119,48 +112,89 @@ module HTTPX
end end
def consume def consume
dread if calculate_interests == :r loop do
do_retry dread if calculate_interests == :r
dwrite if calculate_interests == :w
break unless calculate_interests == :w
# do_retry
dwrite
break unless calculate_interests == :r
end
rescue Errno::EHOSTUNREACH => e
@ns_index += 1
nameserver = @nameserver
if nameserver && @ns_index < nameserver.size
log { "resolver #{FAMILY_TYPES[@record_type]}: failed resolving on nameserver #{@nameserver[@ns_index - 1]} (#{e.message})" }
transition(:idle)
@timeouts.clear
retry
else
handle_error(e)
emit(:close, self)
end
rescue NativeResolveError => e
handle_error(e)
close_or_resolve
retry unless closed?
end end
def do_retry(loop_time = nil) def schedule_retry
return if @queries.empty? || !@start_timeout h = @name
loop_time ||= Utils.elapsed_time(@start_timeout) return unless h
query = @queries.first connection = @queries[h]
return unless query timeouts = @timeouts[h]
timeout = timeouts.shift
h, connection = query @timer = @current_selector.after(timeout) do
host = connection.origin.host next unless @connections.include?(connection)
timeout = (@timeouts[host][0] -= loop_time)
return unless timeout <= 0 do_retry(h, connection, timeout)
end
end
@timeouts[host].shift def do_retry(h, connection, interval)
timeouts = @timeouts[h]
if !@timeouts[host].empty? if !timeouts.empty?
log { "resolver: timeout after #{timeout}s, retry(#{@timeouts[host].first}) #{host}..." } log { "resolver #{FAMILY_TYPES[@record_type]}: timeout after #{interval}s, retry (with #{timeouts.first}s) #{h}..." }
resolve(connection) # must downgrade to tcp AND retry on same host as last
downgrade_socket
resolve(connection, h)
elsif @ns_index + 1 < @nameserver.size elsif @ns_index + 1 < @nameserver.size
# try on the next nameserver # try on the next nameserver
@ns_index += 1 @ns_index += 1
log { "resolver: failed resolving #{host} on nameserver #{@nameserver[@ns_index - 1]} (timeout error)" } log do
"resolver #{FAMILY_TYPES[@record_type]}: failed resolving #{h} on nameserver #{@nameserver[@ns_index - 1]} (timeout error)"
end
transition(:idle) transition(:idle)
resolve(connection) @timeouts.clear
resolve(connection, h)
else else
@timeouts.delete(host) @timeouts.delete(h)
reset_hostname(h, reset_candidates: false) reset_hostname(h, reset_candidates: false)
return unless @queries.empty? unless @queries.empty?
resolve(connection)
return
end
@connections.delete(connection) @connections.delete(connection)
host = connection.peer.host
# This loop_time passed to the exception is bogus. Ideally we would pass the total # This loop_time passed to the exception is bogus. Ideally we would pass the total
# resolve timeout, including from the previous retries. # resolve timeout, including from the previous retries.
raise ResolveTimeoutError.new(loop_time, "Timed out while resolving #{connection.origin.host}") ex = ResolveTimeoutError.new(interval, "Timed out while resolving #{host}")
ex.set_backtrace(ex ? ex.backtrace : caller)
emit_resolve_error(connection, host, ex)
close_or_resolve
end end
end end
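The retry ladder in do_retry above can be reduced to a small decision function: drain the host's per-attempt timeout queue first, then rotate to the next nameserver, and only fail the host once both are exhausted. The names and return shape below are illustrative, not the library's API.

```ruby
# decide the next step for a host given its remaining timeouts and the
# position in the nameserver list (mutates timeouts, as the real code does)
def next_action(timeouts, ns_index, nameservers)
  if !timeouts.empty?
    [:retry_same_nameserver, timeouts.shift]
  elsif ns_index + 1 < nameservers.size
    [:next_nameserver, ns_index + 1]
  else
    [:resolve_timeout_error, nil]
  end
end
```

Scheduling the attempt via a selector timer (as schedule_retry does above) rather than measuring elapsed time per loop is what removed the loop_time bookkeeping from the old do_retry.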
@ -187,10 +221,9 @@ module HTTPX
next unless @large_packet.full? next unless @large_packet.full?
parse(@large_packet.to_s) parse(@large_packet.to_s)
@socket_type = @resolver_options.fetch(:socket_type, :udp)
@large_packet = nil @large_packet = nil
transition(:closed) # downgrade to udp again
downgrade_socket
return return
else else
size = @read_buffer[0, 2].unpack1("n") size = @read_buffer[0, 2].unpack1("n")
@ -210,7 +243,7 @@ module HTTPX
parse(@read_buffer) parse(@read_buffer)
end end
return if @state == :closed return if @state == :closed || !@write_buffer.empty?
end end
end end
@ -228,11 +261,15 @@ module HTTPX
return unless siz.positive? return unless siz.positive?
schedule_retry if @write_buffer.empty?
return if @state == :closed return if @state == :closed
end end
end end
def parse(buffer) def parse(buffer)
@timer.cancel
code, result = Resolver.decode_dns_answer(buffer) code, result = Resolver.decode_dns_answer(buffer)
case code case code
@ -243,12 +280,17 @@ module HTTPX
hostname, connection = @queries.first hostname, connection = @queries.first
reset_hostname(hostname, reset_candidates: false) reset_hostname(hostname, reset_candidates: false)
unless @queries.value?(connection) other_candidate, _ = @queries.find { |_, conn| conn == connection }
@connections.delete(connection)
raise NativeResolveError.new(connection, connection.origin.host, "name or service not known")
end
resolve if other_candidate
resolve(connection, other_candidate)
else
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.peer.host, "name or service not known")
ex.set_backtrace(ex ? ex.backtrace : caller)
emit_resolve_error(connection, connection.peer.host, ex)
close_or_resolve
end
when :message_truncated
# TODO: what to do if it's already tcp??
return if @socket_type == :tcp
@@ -262,13 +304,13 @@ module HTTPX
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.peer.host, "unknown DNS error (error code #{result})")
raise ex
when :decode_error
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
ex = NativeResolveError.new(connection, connection.peer.host, result.message)
ex.set_backtrace(result.backtrace)
raise ex
end
@@ -280,7 +322,7 @@ module HTTPX
hostname, connection = @queries.first
reset_hostname(hostname)
@connections.delete(connection)
raise NativeResolveError.new(connection, connection.peer.host)
else
address = addresses.first
name = address["name"]
@@ -303,30 +345,42 @@ module HTTPX
connection = @queries.delete(name)
end
alias_addresses, addresses = addresses.partition { |addr| addr.key?("alias") }
if addresses.empty? && !alias_addresses.empty? # CNAME
hostname_alias = alias_addresses.first["alias"]
# clean up intermediate queries
@timeouts.delete(name) unless connection.peer.host == name
if early_resolve(connection, hostname: hostname_alias)
@connections.delete(connection)
else
if @socket_type == :tcp
# must downgrade to udp if tcp
@socket_type = @resolver_options.fetch(:socket_type, :udp)
transition(:idle)
transition(:open)
end
log { "resolver #{FAMILY_TYPES[@record_type]}: ALIAS #{hostname_alias} for #{name}" }
resolve(connection, hostname_alias)
return
end
else
reset_hostname(name, connection: connection)
@timeouts.delete(connection.peer.host)
@connections.delete(connection)
Resolver.cached_lookup_set(connection.peer.host, @family, addresses) if @resolver_options[:cache]
catch(:coalesced) { emit_addresses(connection, @family, addresses.map { |addr| addr["data"] }) }
end
end
close_or_resolve
end
def resolve(connection = nil, hostname = nil)
@connections.shift until @connections.empty? || @connections.first.state != :closed
connection ||= @connections.find { |c| !@queries.value?(c) }
raise Error, "no URI to resolve" unless connection
return unless @write_buffer.empty?
@@ -334,8 +388,10 @@ module HTTPX
hostname ||= @queries.key(connection)
if hostname.nil?
hostname = connection.peer.host
if connection.peer.non_ascii_hostname
log { "resolver #{FAMILY_TYPES[@record_type]}: resolve IDN #{connection.peer.non_ascii_hostname} as #{hostname}" }
end
hostname = generate_candidates(hostname).each do |name|
@queries[name] = connection
@@ -343,11 +399,17 @@ module HTTPX
else
@queries[hostname] = connection
end
@name = hostname
log { "resolver #{FAMILY_TYPES[@record_type]}: query for #{hostname}" }
begin
@write_buffer << encode_dns_query(hostname)
rescue Resolv::DNS::EncodeError => e
reset_hostname(hostname, connection: connection)
@connections.delete(connection)
emit_resolve_error(connection, hostname, e)
close_or_resolve
end
end
@@ -377,15 +439,23 @@ module HTTPX
case @socket_type
when :udp
log { "resolver #{FAMILY_TYPES[@record_type]}: server: udp://#{ip}:#{port}..." }
UDP.new(ip, port, @options)
when :tcp
log { "resolver #{FAMILY_TYPES[@record_type]}: server: tcp://#{ip}:#{port}..." }
origin = URI("tcp://#{ip}:#{port}")
TCP.new(origin, [ip], @options)
end
end
def downgrade_socket
return unless @socket_type == :tcp
@socket_type = @resolver_options.fetch(:socket_type, :udp)
transition(:idle)
transition(:open)
end
def transition(nextstate)
case nextstate
when :idle
@@ -393,7 +463,6 @@ module HTTPX
@io.close
@io = nil
end
when :open
return unless @state == :idle
@@ -411,23 +480,41 @@ module HTTPX
@write_buffer.clear
@read_buffer.clear
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError,
IOError,
ConnectTimeoutError => e
# these errors may happen during TCP handshake
# treat them as resolve errors.
handle_error(e)
emit(:close, self)
end
def handle_error(error)
if error.respond_to?(:connection) &&
error.respond_to?(:host)
reset_hostname(error.host, connection: error.connection)
@connections.delete(error.connection)
emit_resolve_error(error.connection, error.host, error)
else
@queries.each do |host, connection|
reset_hostname(host, connection: connection)
@connections.delete(connection)
emit_resolve_error(connection, host, error)
end
while (connection = @connections.shift)
emit_resolve_error(connection, connection.peer.host, error)
end
end
end
def reset_hostname(hostname, connection: @queries.delete(hostname), reset_candidates: true)
@timeouts.delete(hostname)
return unless connection && reset_candidates
@@ -437,5 +524,16 @@ module HTTPX
# reset timeouts
@timeouts.delete_if { |h, _| candidates.include?(h) }
end
def close_or_resolve
# drop already closed connections
@connections.shift until @connections.empty? || @connections.first.state != :closed
if (@connections - @queries.values).empty?
emit(:close, self)
else
resolve
end
end
end
end


@@ -4,6 +4,9 @@ require "resolv"
require "ipaddr"
module HTTPX
# Base class for all internal internet name resolvers. It handles basic blocks
# from the Selectable API.
#
class Resolver::Resolver
include Callbacks
include Loggable
@@ -26,14 +29,27 @@ module HTTPX
end
end
attr_reader :family, :options
attr_writer :current_selector, :current_session
attr_accessor :multi
def initialize(family, options)
@family = family
@record_type = RECORD_TYPES[family]
@options = options
@connections = []
set_resolver_callbacks
end
def each_connection(&block)
return enum_for(__method__) unless block
return unless @connections
@connections.each(&block)
end
def close; end
@@ -48,6 +64,10 @@ module HTTPX
true
end
def inflight?
false
end
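As an aside, `each_connection` above uses the `enum_for(__method__)` idiom: called without a block, the method returns an Enumerator that re-invokes itself (note the idiom only works with an explicit `return`). A self-contained sketch with a stand-in class:

```ruby
# Illustrative stand-in; not the httpx class.
class ConnectionSet
  def initialize(connections)
    @connections = connections
  end

  def each_connection(&block)
    # without a block, hand back an Enumerator over this very method
    return enum_for(__method__) unless block

    @connections.each(&block)
  end
end

set = ConnectionSet.new(%i[a b])
set.each_connection.to_a # => [:a, :b]
```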
def emit_addresses(connection, family, addresses, early_resolve = false)
addresses.map! do |address|
address.is_a?(IPAddr) ? address : IPAddr.new(address.to_s)
@@ -56,17 +76,22 @@ module HTTPX
# double emission check, but allow early resolution to work
return if !early_resolve && connection.addresses && !addresses.intersect?(connection.addresses)
log do
"resolver #{FAMILY_TYPES[RECORD_TYPES[family]]}: " \
"answer #{connection.peer.host}: #{addresses.inspect} (early resolve: #{early_resolve})"
end
if !early_resolve && # do not apply resolution delay for non-dns name resolution
@current_selector && # just in case...
family == Socket::AF_INET && # resolution delay only applies to IPv4
!connection.io && # connection already has addresses and initiated/ended handshake
connection.options.ip_families.size > 1 && # no need to delay if not supporting dual stack IP
addresses.first.to_s != connection.peer.host.to_s # connection URL host is already the IP (early resolve included perhaps?)
log { "resolver #{FAMILY_TYPES[RECORD_TYPES[family]]}: applying resolution delay..." }
@current_selector.after(0.05) do
# double emission check
unless connection.addresses && addresses.intersect?(connection.addresses)
emit_resolved_connection(connection, addresses, early_resolve)
end
end
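The rewritten condition above applies a resolution delay: on dual-stack hosts, an IPv4 (A) answer is held back for 50ms so that an IPv6 (AAAA) answer, if one arrives, can connect first, in the spirit of Happy Eyeballs v2 (RFC 8305). A thread-based sketch under illustrative names (httpx itself defers via the selector's timer, not a thread):

```ruby
require "socket"

RESOLUTION_DELAY = 0.05 # seconds; RFC 8305 recommends 50ms

# Defers emission of IPv4 addresses on dual-stack hosts; IPv6 answers
# (and single-stack hosts) are emitted immediately.
def schedule_emit(family, dual_stack:, &emit)
  if dual_stack && family == Socket::AF_INET
    # hold the A answer back; a real event loop would use a timer, not a thread
    Thread.new do
      sleep(RESOLUTION_DELAY)
      emit.call
    end
  else
    emit.call
  end
end

# usage: an IPv6 answer fires at once, the IPv4 one 50ms later
# schedule_emit(Socket::AF_INET6, dual_stack: true) { connect_v6 }
# schedule_emit(Socket::AF_INET, dual_stack: true) { connect_v4 }
```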
@@ -81,6 +106,8 @@ module HTTPX
begin
connection.addresses = addresses
return if connection.state == :closed
emit(:resolve, connection)
rescue StandardError => e
if early_resolve
@@ -92,20 +119,22 @@ module HTTPX
end
end
def early_resolve(connection, hostname: connection.peer.host)
addresses = @resolver_options[:cache] && (connection.addresses || HTTPX::Resolver.nolookup_resolve(hostname))
return false unless addresses
addresses = addresses.select { |addr| addr.family == @family }
return false if addresses.empty?
emit_addresses(connection, @family, addresses, true)
true
end
def emit_resolve_error(connection, hostname = connection.peer.host, ex = nil)
emit_connection_error(connection, resolve_error(hostname, ex))
end
def resolve_error(hostname, ex = nil)
@@ -116,5 +145,25 @@ module HTTPX
error.set_backtrace(ex ? ex.backtrace : caller)
error
end
def set_resolver_callbacks
on(:resolve, &method(:resolve_connection))
on(:error, &method(:emit_connection_error))
on(:close, &method(:close_resolver))
end
def resolve_connection(connection)
@current_session.__send__(:on_resolver_connection, connection, @current_selector)
end
def emit_connection_error(connection, error)
return connection.handle_connect_error(error) if connection.connecting?
connection.emit(:error, error)
end
def close_resolver(resolver)
@current_session.__send__(:on_resolver_close, resolver, @current_selector)
end
end
end
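The `set_resolver_callbacks` method introduced above relies on the event-emitter pattern from `HTTPX::Callbacks`: events emitted on the resolver are dispatched to handlers registered with `#on`, which is how `:resolve`/`:error`/`:close` events get routed back into the session. A simplified sketch of that wiring (`MiniCallbacks` and `CallbackResolver` are illustrative stand-ins, not the real implementation):

```ruby
# Minimal event-emitter mixin in the style of HTTPX::Callbacks.
module MiniCallbacks
  def on(event, &blk)
    (@callbacks ||= Hash.new { |h, k| h[k] = [] })[event] << blk
    self
  end

  def emit(event, *args)
    (@callbacks || {}).fetch(event, []).each { |cb| cb.call(*args) }
  end
end

class CallbackResolver
  include MiniCallbacks

  def initialize
    # same pattern as set_resolver_callbacks: bind own methods as handlers
    on(:resolve, &method(:resolve_connection))
  end

  def resolved
    @resolved ||= []
  end

  private

  def resolve_connection(connection)
    resolved << connection
  end
end

r = CallbackResolver.new
r.emit(:resolve, :conn1) # routed into #resolve_connection
```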
