Compare commits

...

749 Commits

Author SHA1 Message Date
HoneyryderChuck
0261449b39 fixed sig for callbacks_for 2025-08-08 17:06:03 +01:00
HoneyryderChuck
84c8126cd9 callback_for: check for ivar existence first
Closes #353
2025-08-08 16:30:17 +01:00
HoneyryderChuck
ff3f1f726f fix warning about argument potentially being ignored 2025-08-07 12:34:59 +01:00
HoneyryderChuck
b8b710470c fix sentry deprecation 2025-08-07 12:30:31 +01:00
HoneyryderChuck
0f3e3ab068 remove trailing :: from IO module usage, as there's no more internal module 2025-08-07 12:30:21 +01:00
HoneyryderChuck
095fbb3463 using local aws for the max requests tests
reduce exposure to httpbin.org even more
2025-08-07 12:12:50 +01:00
HoneyryderChuck
7790589c1f linting issue 2025-08-07 11:28:18 +01:00
HoneyryderChuck
dd8608ec3b small improv in max requests tests to make it tolerant to multi-homed networks 2025-08-07 11:22:29 +01:00
HoneyryderChuck
8205b351aa removing usage of httpbin.org peer in tests wherever possible
it has been quite unstable, 503'ing often
2025-08-07 11:21:59 +01:00
HoneyryderChuck
5992628926 update nghttp2 used in CI tests 2025-08-07 11:21:02 +01:00
HoneyryderChuck
39370b5883 Merge branch 'issue-337' into 'master'
fix for issues blocking reconnection in proxy mode

Closes #337

See merge request os85/httpx!397
2025-07-30 09:49:51 +00:00
HoneyryderChuck
1801a7815c http2 parser: fix calculation when connection closes and there's no termination handshake 2025-07-18 17:48:23 +01:00
HoneyryderChuck
0953e4f91a fix for #receive_requests bailout routine when out of selectables
the routine was using #fetch_response, which may return nil, and wasn't handling that, potentially returning nil instead of a response/error response object. since, depending on the plugins, #fetch_response may reroute requests, this allows staying in the loop in case there are selectables to process again as a result of it
2025-07-18 17:48:23 +01:00
HoneyryderChuck
a78a3f0b7c proxy fixes: allow proxy connection errors to be retriable
when coupled with the retries plugin, the exception is raised inside send_request, which breaks the integration. in order to protect from it, the proxy plugin will guard against proxy connection errors (socket/timeout errors happening until the tunnel is established) and allow them to be retried, while ignoring other proxy errors. meanwhile, the naming of errors was simplified, and there's now an HTTPX::ProxyError replacing HTTPX::HTTPProxyError (which is a breaking change).
2025-07-18 17:48:23 +01:00
HoneyryderChuck
aeb8fe5382 fix proxy ssl reconnection
when a proxied ssl connection would be lost, standard reconnection wouldn't work, as it would not pick up the information from the internal tcp socket. in order to fix this, the connection retrieves the proxied io on reset/purge, which makes it establish a new ProxySSL connection on reconnect
2025-07-18 17:48:23 +01:00
HoneyryderChuck
03170b6c89 promote certain transition logs to regular logs (under level 3)
not really useful as metered telemetry, but would have been useful for other bugs
2025-07-18 17:48:23 +01:00
HoneyryderChuck
814d607a45 Revert "options: initialize all possible options to improve object shape"
This reverts commit f64c3ab5990b68f850d0d190535a45162929f0af.
2025-07-18 17:47:08 +01:00
HoneyryderChuck
5502332e7e logging when connections are deregistered from the selector/pool
also, logging when a response is fetched in the session
2025-07-18 17:46:43 +01:00
HoneyryderChuck
f3b68950d6 adding current fiber id to log message tags 2025-07-18 17:45:21 +01:00
HoneyryderChuck
2c4638784f Merge branch 'fix-shape' into 'master'
object shape improvements

See merge request os85/httpx!396
2025-07-14 15:38:19 +00:00
HoneyryderChuck
b0016525e3 recover from network unreachable errors when using cached IPs
while this type of error is avoided when doing HEv2, the IPs remain
in the cache; this means that, once the same host is reached, the
IPs are loaded onto the same socket, and if the issue is IPv6
connectivity, it'll break outside of the HEv2 flow.

this error is now rescued inside the connect block, so that other
IPs in the list can be tried afterwards; the IP is then evicted from
the cache.

the HEv2 related regression test is disabled in CI, as it's currently
not reliable in Gitlab CI, which allows resolving the IPv6 address
but does not allow connecting to it
2025-07-14 15:44:47 +01:00
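The recovery flow in the commit above — try each cached IP, rescue network-unreachable errors inside the connect step, evict the failing IP — can be sketched generically (hypothetical helper, not httpx's actual code; the connect step is stubbed with a block):

```ruby
# Hypothetical sketch: iterate over the cached IPs for a host, rescue
# network-unreachable errors raised while connecting, and evict the
# failing IP from the cache so subsequent attempts skip it.
def connect_with_fallback(ip_cache, host)
  ip_cache[host].dup.each do |ip|
    return yield(ip)
  rescue Errno::ENETUNREACH, Errno::EHOSTUNREACH
    ip_cache[host].delete(ip) # evict the unreachable IP from the cache
  end
  raise "no reachable address for #{host}"
end

cache = { "example.com" => ["fd00::1", "93.184.216.34"] }
conn = connect_with_fallback(cache, "example.com") do |ip|
  # simulate the broken-IPv6-connectivity scenario from the commit message
  raise Errno::ENETUNREACH if ip.include?(":")
  "connected to #{ip}"
end
```

After the failed IPv6 attempt, only the reachable address remains cached.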
HoneyryderChuck
49555694fe remove check for non unique local ipv6 which is disabling HEv2
not sure anymore under which condition this was done...
2025-07-14 11:57:02 +01:00
HoneyryderChuck
93e5efa32e http2 stream header logs: initial newline to align values and make debug logs clearer 2025-07-14 11:50:22 +01:00
HoneyryderChuck
8b3c1da507 removed ivar left behind and used nowhere 2025-07-14 11:50:22 +01:00
HoneyryderChuck
d64f247e11 fix for Connection too many object shapes
some more ivars which were not initialized in the first place were leading to the warning in CI mode
2025-07-14 11:50:22 +01:00
HoneyryderChuck
f64c3ab599 options: initialize all possible options to improve object shape
Options#merge works by duping-then-filling ivars, but since not all of them were initialized on object creation, each merge could add more object shapes for the same class, which defeats one of the most recent ruby optimizations

this was fixed by caching all possible option names at the class level, and using that as a reference in the initialize method to nilify all unreferenced options
2025-07-14 11:50:22 +01:00
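The technique in the commit above can be sketched generically (hypothetical ShapedOptions class and option names, not httpx's actual Options): by nil-initializing every known option up front, every instance, including merged ones, keeps a single object shape:

```ruby
class ShapedOptions
  # all possible option names cached at the class level
  OPTION_NAMES = %i[timeout headers max_requests].freeze

  OPTION_NAMES.each { |name| attr_reader(name) }

  def initialize(opts = {})
    # nilify all unreferenced options first, so every instance defines the
    # same set of ivars in the same order (a single object shape)
    OPTION_NAMES.each { |name| instance_variable_set(:"@#{name}", nil) }
    opts.each { |name, value| instance_variable_set(:"@#{name}", value) }
  end

  def merge(opts)
    # dup-then-fill: the dup carries the full ivar set, so no new shape appears
    dup.tap do |merged|
      opts.each { |name, value| merged.instance_variable_set(:"@#{name}", value) }
    end
  end
end

base = ShapedOptions.new(timeout: 5)
merged = base.merge(headers: { "accept" => "application/json" })
```

Both objects end up with identical ivar layouts, so repeated merges cannot multiply shapes.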
HoneyryderChuck
af03ddba3b options: inlining logic from do_initialize in constructor 2025-07-14 09:10:52 +01:00
HoneyryderChuck
7012ca1f27 fixed previous commit, as the tag is not available before 1.15 2025-07-03 16:39:54 +01:00
HoneyryderChuck
d405f8905f fixed ddtrace compatibility for versions under 1.13.0 2025-07-03 16:23:27 +01:00
HoneyryderChuck
3ff10f142a replace h2 upgrade peer with a custom implementation
the remote one has been failing for some time
2025-06-09 22:56:30 +01:00
HoneyryderChuck
51ce9d10a4 bump version to 1.5.1 2025-06-09 09:04:05 +01:00
HoneyryderChuck
6bde11b09c Merge branch 'gh-92' into 'master'
don't bookkeep retry attempts when errors happen on just-checked-out open connections

See merge request os85/httpx!394
2025-05-28 17:54:03 +00:00
HoneyryderChuck
0c2808fa25 prevent needless closing loop when process is interrupted during DNS request
the native resolver needs to be unselected. it already was, but it was still being taken into account for bookkeeping. this removes it from the list by eliminating closed selectables (which were probably already removed from the list via callback)

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-28 15:26:11 +01:00
HoneyryderChuck
cb78091e03 don't bookkeep retry attempts when errors happen on just-checked-out open connections
in case of multiple connections to the same server, where the server may have closed all of them at the same time, a request may fail multiple times after checkout before starting a new connection where it may succeed. this patch allows those prior attempts not to exhaust the number of possible retries on the request

it does so by marking the request as ping when the connection it's being sent to is marked as inactive; this leverages the logic of gating retries bookkeeping in such a case

Closes https://github.com/HoneyryderChuck/httpx/issues/92
2025-05-28 15:25:50 +01:00
HoneyryderChuck
6fa69ba475 Merge branch 'duplicate-method-def' into 'master'
Fix duplicate `option_pool_options` method

See merge request os85/httpx!393
2025-05-21 15:30:34 +00:00
Earlopain
4a78e78d32
Fix duplicate option_pool_options method
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:237: warning: method redefined; discarding old option_pool_options (StandardError)
> /usr/local/bundle/bundler/gems/httpx-0e393987d027/lib/httpx/options.rb:221: warning: previous definition of option_pool_options was here
2025-05-21 12:49:54 +02:00
HoneyryderChuck
0e393987d0 bump version to 1.5.0 2025-05-16 14:04:08 +01:00
HoneyryderChuck
12483fa7c8 missing ivar sigs in tcp class 2025-05-16 11:15:28 +01:00
HoneyryderChuck
d955ba616a deselect idle connections on session termination
the session may be interrupted before the connection has finished
the handshake; in such a case, simulate early termination.

Closes https://github.com/HoneyryderChuck/httpx/issues/91
2025-05-15 00:31:15 +01:00
HoneyryderChuck
804d5b878b Merge branch 'debug-redact' into 'master'
added :debug_redact option

See merge request os85/httpx!387
2025-05-14 23:01:28 +00:00
HoneyryderChuck
75702165fd remove ping check when querying for repeatable request status
this should be dependent on the exception only, as connections may have closed before ping was released

this addresses https://github.com/HoneyryderChuck/httpx/issues/87\#issuecomment-2866564479
2025-05-14 23:52:18 +01:00
HoneyryderChuck
120bbad126 clear write buffer on connect errors
leaving bytes around messes up the termination handshake and may raise other unwanted errors
2025-05-13 16:21:06 +01:00
HoneyryderChuck
35446e9fe1 fixes for connection coalescing flow
the whole "found connection not open" branch was removed, as currently,
a mergeable connection must not be closed; this means that only
open/inactive connections will be picked up from selector/pool, as
they're the only coalescable connections (have addresses/ssl cert
state). this may be extended to support closed connections though, as
remaining ssl/addresses are enough to make it coalescable at that point,
and then it's just a matter of idling it, so it'll be simpler than it is
today.

coalesced connection gets closed via Connection#terminate at the end
now, in order to propagate whether it was a cloned connection.

added log messages in order to monitor coalescing handshake from logs.
2025-05-13 16:21:06 +01:00
HoneyryderChuck
3ed41ef2bf pool: do not decrement conn counter when returning existing connection, nor increment it when acquiring
this variable is supposed to monitor new connections being created or dropped, existing connection management shouldn't affect it
2025-05-13 16:21:06 +01:00
HoneyryderChuck
9ffbceff87 rename Connection#coalesced_connection=(conn) to Connection.coalesce!(conn) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
757c9ae32c making tcp state transition logs less ambiguous
also show transition states in connected
2025-05-13 16:21:06 +01:00
HoneyryderChuck
5d88ccedf9 redact ping payload as well 2025-05-13 16:21:06 +01:00
HoneyryderChuck
85808b6569 adding logs to select-on-socket phase 2025-05-13 16:21:06 +01:00
HoneyryderChuck
d5483a4264 reconnectable errors: include HTTP/2 parser errors and openssl errors 2025-05-13 16:21:06 +01:00
HoneyryderChuck
540430c00e assert for request in a faraday test (sometimes this is nil, for some reason) 2025-05-13 16:21:06 +01:00
HoneyryderChuck
3a417a4623 added :debug_redact option
when true, text passed to log messages considered sensitive (wrapped in a +#log_redact+ call) will be logged as "[REDACTED]"
2025-05-13 16:21:06 +01:00
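A minimal sketch of the option's behavior (hypothetical standalone helper; the real logic lives in httpx's logging layer):

```ruby
# Hypothetical log_redact helper: returns the text untouched unless
# redaction is enabled, in which case a placeholder is logged instead.
def log_redact(text, debug_redact: false)
  debug_redact ? "[REDACTED]" : text
end

plain  = log_redact("Authorization: Bearer s3cr3t")
hidden = log_redact("Authorization: Bearer s3cr3t", debug_redact: true)
```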
HoneyryderChuck
35c18a1b9b options: meta prog integer options into the same definition 2025-05-13 16:20:28 +01:00
HoneyryderChuck
cf19fe5221 Merge branch 'improv' into 'master'
sig improvements

See merge request os85/httpx!390
2025-05-13 15:18:50 +00:00
HoneyryderChuck
f9c2fc469a options: freeze more ivars by default 2025-05-13 15:52:57 +01:00
HoneyryderChuck
9b513faab4 aligning implementation of the #resolve function in all implementations 2025-05-13 15:52:57 +01:00
HoneyryderChuck
0be39faefc added some missing sigs + type safe code 2025-05-13 15:44:21 +01:00
HoneyryderChuck
08c5f394ba fixed usage of a nonexistent var 2025-05-13 15:13:02 +01:00
HoneyryderChuck
55411178ce resolver: moved @connections ivar + init into parent class
also, establishing the selectable interface for resolvers
2025-05-13 15:13:02 +01:00
HoneyryderChuck
a5c83e84d3 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!389
2025-05-13 14:10:56 +00:00
HoneyryderChuck
d7e15c4441 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-05-13 11:02:13 +01:00
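The wakeup mechanism described above — synchronized buffering plus an IO pipe that interrupts the select loop — can be sketched in plain Ruby (illustrative only, not the plugin's code):

```ruby
# a thread-safe buffer plus a self-pipe: the producer thread buffers a
# payload and writes one byte to the pipe, waking up a select loop that
# is blocked waiting on sockets.
read_io, write_io = IO.pipe
buffer = Queue.new # synchronized queue standing in for the request buffer

producer = Thread.new do
  buffer << "payload from another thread"
  write_io.write("\0") # wake up the main select loop
end

ready, = IO.select([read_io], nil, nil, 5)
raise "select loop was not woken up" unless ready

read_io.read_nonblock(1) # drain the wakeup byte
chunk = buffer.pop
producer.join
```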
HoneyryderChuck
012255e49c Merge branch 'ruby-3.5-cgi' into 'master'
Only require from `cgi` what is required

See merge request os85/httpx!391
2025-05-10 00:20:33 +00:00
HoneyryderChuck
d20506acb8 Merge branch 'httpx-issue-350' into 'master'
In file (any serialized) store need to response.finish! on get

Closes #350

See merge request os85/httpx!392
2025-05-10 00:13:41 +00:00
Paul Duey
28399f1b88 In file (any serialized) store need to response.finish! on get 2025-05-09 17:22:39 -04:00
Earlopain
953101afde
Only require from cgi what is required
In Ruby 3.5, most of the `cgi` gem will be removed and moved to a bundled gem.

Luckily, the escape/unescape methods have been left around. So, only the require path needs to be adjusted to avoid a warning.
`cgi/escape` was available since Ruby 2.3

I also moved the require to the file that actually uses it.

https://bugs.ruby-lang.org/issues/21258
2025-05-09 18:54:41 +02:00
HoneyryderChuck
055ee47b83 Merge branch 'stream-bidi-thread' into 'master'
stream_bidi: allows payload to be buffered to requests from other threads

See merge request os85/httpx!383
2025-04-29 22:44:44 +00:00
HoneyryderChuck
dbad275c65 stream_bidi: allows payload to be buffered to requests from other threads
this is achieved by inserting some synchronization primitives when buffering the content, and waking up the main select loop, via an IO pipe
2025-04-29 23:25:41 +01:00
HoneyryderChuck
fe69231e6c Merge branch 'gh-86' into 'master'
persistent plugin: by default, do not retry requests which failed due to a request timeout

See merge request os85/httpx!385
2025-04-29 09:41:45 +00:00
HoneyryderChuck
4c61df768a persistent plugin: by default, do not retry requests which failed due to a request timeout
that isn't a connection-related type of failure, and it confuses users when it gets retried, as the connection was fine, the request was just slow

Fixes https://github.com/HoneyryderChuck/httpx/issues/86
2025-04-27 16:47:50 +01:00
HoneyryderChuck
aec150b030 Merge branch 'issue-347' into 'master'
:callbacks plugin fix: copy callbacks to new session when using the session builder methods

Closes #347 and #348

See merge request os85/httpx!386
2025-04-26 15:12:42 +00:00
HoneyryderChuck
29a43c4bc3 callbacks plugin fix: errors raised in .on_request_error callback should bubble up to user code
this was not happening for errors happening during name resolution, particularly when HEv2 was used, as the second resolver was kept open and didn't stop the selector loop

Closes #348
2025-04-26 03:11:55 +01:00
HoneyryderChuck
34c2fee60c :callbacks plugin fix: copy callbacks to new session when using the session builder methods
such as '.with' or '.wrap', which create a new session object on the fly
2025-04-26 02:34:56 +01:00
HoneyryderChuck
c62966361e moving can_buffer_more_requests? to the private section
it's only used internally
2025-04-26 01:42:55 +01:00
HoneyryderChuck
2b87a3d5e5 selector: make APIs expecting connections more strict, improve sigs by using interface 2025-04-26 01:42:55 +01:00
HoneyryderChuck
3dd767cdc2 response_cache: also cache request headers, for vary algo computation 2025-04-26 01:42:55 +01:00
HoneyryderChuck
a9255c52aa response_cache plugin: adding more rdoc documentation to methods 2025-04-26 01:42:55 +01:00
HoneyryderChuck
32031e8a03 response_cache plugin: rename cached_response? to not_modified?, more accurate 2025-04-26 01:42:55 +01:00
HoneyryderChuck
f328646c08 Merge branch 'gh-84' into 'master'
adding missing datadog span decoration

See merge request os85/httpx!384
2025-04-26 00:40:49 +00:00
HoneyryderChuck
0484dd76c8 fix for wrong query string encoding when passed an empty :params input
Fixes https://github.com/HoneyryderChuck/httpx/issues/85
2025-04-26 00:20:28 +01:00
HoneyryderChuck
17c1090b7a more aggressive timeouts in tests 2025-04-26 00:10:48 +01:00
HoneyryderChuck
87f4ce4b03 adding missing datadog span decoration
including header tags, and other missing span tags
2025-04-25 23:46:11 +01:00
HoneyryderChuck
1ec7442322 Merge branch 'improv-tests' 2025-04-14 17:35:15 +01:00
HoneyryderChuck
723959cf92 wrong option docs 2025-04-13 01:27:27 +01:00
HoneyryderChuck
10b4b9c7c0 remove unused method 2025-04-13 01:27:05 +01:00
HoneyryderChuck
1b39bcd3a3 set appropriate coverage key, use it as command 2025-04-13 01:08:18 +01:00
HoneyryderChuck
44a2041ea8 added missing response cache store sigs 2025-04-13 01:07:18 +01:00
HoneyryderChuck
b63f9f1ae2 native: realign log calls, so coverage does not misreport them 2025-04-13 01:06:54 +01:00
HoneyryderChuck
467dd5e7e5 file store: testing path when the same request is stored twice
also, testing usage of symbol response cache store options.
2025-04-13 01:05:42 +01:00
HoneyryderChuck
c626fae3da adding test to force usage of max_requests conditionals under http1 2025-04-13 01:05:08 +01:00
HoneyryderChuck
7f6b78540b Merge branch 'issue-328' into 'master'
pool option: max_connections

Closes #328

See merge request os85/httpx!371
2025-04-12 22:43:18 +00:00
HoneyryderChuck
b120ce4657 new pool option: max_connections
this new option declares the maximum number of in-flight-or-idle open connections a session may hold. connections get recycled in case a new one is needed and the pool has closed connections to discard. the same pool timeout error applies as for max_connections_per_origin
2025-04-12 23:29:08 +01:00
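The recycling behavior described above can be sketched with a toy pool (illustrative names, not httpx's pool implementation):

```ruby
class TinyPool
  PoolTimeoutError = Class.new(StandardError)
  Connection = Struct.new(:closed)

  def initialize(max_connections:)
    @max_connections = max_connections
    @connections = []
  end

  def acquire
    # recycle: discard closed connections to make room for a new one
    @connections.reject!(&:closed) if @connections.size >= @max_connections
    if @connections.size >= @max_connections
      # same kind of pool timeout error as for max_connections_per_origin
      raise PoolTimeoutError, "pool limit of #{@max_connections} reached"
    end

    conn = Connection.new(false)
    @connections << conn
    conn
  end
end

pool = TinyPool.new(max_connections: 2)
first = pool.acquire
pool.acquire
first.closed = true   # the peer closed one of the connections
third = pool.acquire  # succeeds: the closed connection gets discarded
```

A fourth acquire with both remaining connections open would raise the pool timeout error.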
HoneyryderChuck
32c36bb4ee Merge branch 'issue-341' into 'master'
response_cache plugin: return cached response from store unless stale

Closes #341

See merge request os85/httpx!382
2025-04-12 21:45:35 +00:00
HoneyryderChuck
cc0626429b prevent overlap of test dirs/files across test instances 2025-04-12 22:09:12 +01:00
HoneyryderChuck
a0e2c1258a allow setting :response_cache_store with a symbol (:store, :file_store)
cleaner to select from one of the two available options
2025-04-12 22:09:12 +01:00
HoneyryderChuck
6bd3c15384 fixing cacheable_response? to exclude headers and freshness
it's called with a fresh response already
2025-04-12 22:09:12 +01:00
HoneyryderChuck
0d23c464f5 simplifying response cache store API
#get, #set, #clear, that's all you need. this can now be some bespoke custom class implementing these primitives
2025-04-12 22:09:12 +01:00
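Under that contract, a bespoke store only needs the three primitives; a minimal in-memory sketch (whether #set receives the request and response exactly as shown is an assumption):

```ruby
# hypothetical bespoke cache store: any object responding to #get, #set
# and #clear can back the response cache under the simplified API
class MemoryStore
  def initialize
    @entries = {}
  end

  def get(request)
    @entries[request]
  end

  def set(request, response)
    @entries[request] = response
  end

  def clear
    @entries.clear
  end
end

store = MemoryStore.new
store.set("GET https://example.com/", "cached response")
```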
HoneyryderChuck
a75b89db74 response_cache plugin: adding filesystem-based store
it stores the cached responses in the filesystem
2025-04-12 22:09:12 +01:00
HoneyryderChuck
7173616154 response cache: fix vary header handling by supporting a defined set of headers
the cache key will also be determined by the supported vary header values, when present; this means easier lookups, and a one-level hash fetch, where the same url-verb request may have multiple entries depending on those headers

checking the response vary header is therefore done at cache response lookup; writes may override when they shouldn't, though, as a full match on supported vary headers is performed, and one can't know the combo of vary headers in advance, which is why interested parties will have to be judicious with the new option
2025-04-12 22:09:12 +01:00
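The key derivation described above can be sketched as follows (hypothetical function and header set; httpx's actual key computation may differ):

```ruby
require "digest"

# only a defined set of vary headers participates in the key, so the same
# url-and-verb pair can hold one entry per combination of their values
SUPPORTED_VARY_HEADERS = %w[accept accept-encoding].freeze

def cache_key(verb, url, headers)
  vary_values = SUPPORTED_VARY_HEADERS.map { |name| headers.fetch(name, "").downcase }
  Digest::SHA1.hexdigest([verb, url, *vary_values].join("\0"))
end

json_key = cache_key("GET", "https://example.com/", { "accept" => "application/json" })
html_key = cache_key("GET", "https://example.com/", { "accept" => "text/html" })
```

Differing values for a supported vary header yield distinct keys, so both variants can be stored side by side.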
HoneyryderChuck
69f9557780 corrected equality comparison of response bodies 2025-04-12 22:09:12 +01:00
HoneyryderChuck
339af65cc1 response cache: store cached response in request, so that copying and cache invalidating work a bit OOTB 2025-04-12 22:09:12 +01:00
HoneyryderChuck
3df6edbcfc response_cache: an immutable response is always fresh 2025-04-12 22:09:11 +01:00
HoneyryderChuck
5c2f8ab0b1 response_cache plugin: return cached response from store unless stale
response age wasn't being taken into account, and a cache invalidation request was always being sent; a fresh response will stay in the store until expired; when it expires, cache invalidation will be tried (if possible); if invalidated, the new response is put in the store; if validated, the body of the cached response is copied, and the cached response stays in the store
2025-04-12 22:09:11 +01:00
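The freshness rule described above boils down to comparing the response's age with its allowed lifetime; a simplified sketch (hypothetical helper, ignoring Cache-Control parsing):

```ruby
# a stored response is served directly while its age is below max_age;
# once it goes stale, revalidation is attempted instead
def fresh?(stored_at, max_age, now:)
  (now - stored_at) < max_age
end

stored_at = Time.utc(2025, 4, 12, 22, 0, 0)
still_fresh = fresh?(stored_at, 60, now: stored_at + 30)
gone_stale  = fresh?(stored_at, 60, now: stored_at + 61)
```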
HoneyryderChuck
0c335fd03d Merge branch 'gh-82' into 'master'
persistent plugin: drop , allow retries for ping requests, regardless of idempotency property

See merge request os85/httpx!381
2025-04-12 09:14:32 +00:00
HoneyryderChuck
bf19cde364 fix: ping record to match must be kept in a different string
http-2 1.1.0 uses the string input as the ultimate buffer (when the input is not frozen), which will mutate the argument. in order to keep it around for further comparison, the string is duped
2025-04-11 16:25:58 +01:00
HoneyryderChuck
7e0ddb7ab2 persistent plugin: when errors happen during connection ping phase, make sure that only connection lost errors are going to be retriable 2025-04-11 14:41:36 +01:00
HoneyryderChuck
4cd3136922 connection: set request timeouts before sending the request to the parser
in situations where the connection is already open/active, the requests would be buffered before setting the timeouts, which would skip transition callbacks associated with writes, such as write timeouts and request timeouts
2025-04-11 14:41:36 +01:00
HoneyryderChuck
642122a0f5 persistent plugin: drop , allow retries for ping requests, regardless of idempotency property
the previous option was there to allow reconnecting on non-idempotent (i.e. POST) requests, but had the unfortunate side-effect of allowing retries for failures (i.e. timeouts) which had nothing to do with a failed connection error; this mitigates it by enabling retries for ping-aware requests, i.e. if there is an error during PING, always retry
2025-04-11 14:41:36 +01:00
HoneyryderChuck
42d42a92b4 added missing test for close_on_fork option 2025-04-09 09:39:53 +01:00
HoneyryderChuck
fb6a509d98 removing duplicate sig 2025-04-06 21:54:03 +01:00
HoneyryderChuck
3c22f36a6c session refactor: remove @responses hash
this was being used as an internal cache for finished responses; it can however be superseded by Request#response, which fulfills the same role alongside the #finished? call; this allows us to drop one variable-size hash which would grow at least as large as the number of requests per call, and was inadvertently shared across threads when using the same session (at no danger of colliding, but it could perhaps cause problems in jruby?)

it also allows removing one more callback
2025-04-04 11:05:27 +01:00
HoneyryderChuck
51b2693842 Merge branch 'gh-disc-71' into 'master'
:stream_bidi plugin

See merge request os85/httpx!365
2025-04-04 09:51:29 +00:00
HoneyryderChuck
1ab5855961 Merge branch 'gh-74' into 'master'
adding  option, which automatically closes sessions on fork

See merge request os85/httpx!377
2025-04-04 09:49:06 +00:00
HoneyryderChuck
f82816feb3 Merge branch 'issue-339' into 'master'
QUERY plugin

Closes #339

See merge request os85/httpx!374
2025-04-04 09:48:13 +00:00
HoneyryderChuck
ee229aa74c readapt some plugins so that supported verbs can be overridden by custom plugins 2025-04-04 09:32:38 +01:00
HoneyryderChuck
793e900ce8 added the :query plugin, which supports the QUERY http method
added as a plugin for explicit opt-in, as it's still an experimental feature (RFC in draft)
2025-04-04 09:32:38 +01:00
HoneyryderChuck
1241586eb4 introducing subplugins to plugins
subplugins are modules of plugins which register as post-plugins of other plugins

a specific plugin may want to have a side-effect on the functionality of another plugin, so they can use this to register it when the other plugin is loaded
2025-04-04 09:25:53 +01:00
HoneyryderChuck
cbf454ae13 Merge branch 'issue-336' into 'master'
ruby 3.4 features

Closes #336

See merge request os85/httpx!372
2025-04-04 08:24:28 +00:00
HoneyryderChuck
180d3b0e59 adding option, which automatically closes sessions on fork
only for ruby 3.1 or higher. adapted from a similar feature in the connection_pool gem
2025-04-04 00:22:05 +01:00
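The mechanism hinted at above (borrowed in spirit from connection_pool) relies on the Process._fork hook added in Ruby 3.1; a sketch with hypothetical names:

```ruby
SESSIONS = [] # stands in for the set of sessions tracked by the option

module CloseOnFork
  def _fork
    pid = super
    # pid is zero in the child process: close inherited sessions there
    SESSIONS.each(&:close) if pid.zero?
    pid
  end
end

# Process._fork (Ruby 3.1+) is invoked by fork/Process.fork/Kernel#fork,
# so prepending onto Process's singleton class intercepts all of them
Process.singleton_class.prepend(CloseOnFork)
```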
HoneyryderChuck
84db0072fb new :stream_bidi plugin
this plugin is an HTTP/2 only plugin which enables bidirectional streaming

the client can continue writing request streams as response streams arrive midway

Closes https://github.com/HoneyryderChuck/httpx/discussions/71
2025-04-04 00:21:12 +01:00
HoneyryderChuck
c48f6c8e8f adding Request#can_buffer?
abstracts some logic around whether a request has request body bytes to buffer
2025-04-04 00:20:29 +01:00
HoneyryderChuck
870b8aed69 make .parser_type an instance method instead
allows plugins to override
2025-04-04 00:20:29 +01:00
HoneyryderChuck
56b8e9647a making multipart decoding code more robust 2025-04-04 00:18:53 +01:00
HoneyryderChuck
1f59688791 rename test servlet 2025-04-04 00:18:53 +01:00
HoneyryderChuck
e63c75a86c improvements in headers
using Hash#new(capacity: ) to better predict size; reduce the number of allocated arrays by passing the result of  to the store when possible, and only calling #downcased(str) once; #array_value will also not try to clean up errors in the passed data (it'll either fail loudly, or be fixed downstream)
2025-04-04 00:18:53 +01:00
HoneyryderChuck
3eaf58e258 refactoring timers to more efficiently deal with empty intervals
before, canceling a timer connected to an interval which would become empty would delete it from the main intervals store; this deletion now moves off the request critical path, and pinging for intervals will drop elapsed-or-empty ones before returning the shortest one

beyond that, the intervals store won't be constantly recreated if there's no need for it (i.e. nothing has elapsed), which reduces GC pressure

searching for existing interval on #after now uses bsearch; since the list is ordered, this should make finding one more performant
2025-04-04 00:18:53 +01:00
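The bsearch lookup mentioned above relies on the intervals list staying sorted; a minimal sketch:

```ruby
# on a sorted list of interval lengths, Array#bsearch (find-minimum mode)
# locates the first interval >= the target in O(log n) instead of a
# linear scan (callers would still verify an exact match)
intervals = [1.0, 2.5, 4.0] # kept ordered by interval length

def find_interval(intervals, interval)
  intervals.bsearch { |i| i >= interval }
end

found   = find_interval(intervals, 2.5)
missing = find_interval(intervals, 5.0)
```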
HoneyryderChuck
9ff62404a6 enabling warning messages 2025-04-04 00:18:53 +01:00
HoneyryderChuck
4d694f9517 ruby 3.4 feature: use String#append_as_bytes in buffers 2025-04-04 00:18:53 +01:00
HoneyryderChuck
22952f6a4a ruby 3.4: set string capacity for buffer-like string 2025-04-04 00:18:53 +01:00
HoneyryderChuck
7660e4c555 implement #inspect in a few places where output gets verbose
tweak some existing others
2025-04-04 00:18:53 +01:00
HoneyryderChuck
a9cc787210 ruby 3.4: use set_temporary_name to decorate plugin classes with more descriptive names 2025-04-04 00:18:53 +01:00
HoneyryderChuck
970830a025 bumping version to 1.4.4 2025-04-03 22:17:42 +01:00
HoneyryderChuck
7a3d38aeee Merge branch 'issue-343' into 'master'
session: discard connection callbacks if they're assigned to a different session already

Closes #343

See merge request os85/httpx!379
2025-04-03 18:53:39 +00:00
HoneyryderChuck
54bb617902 fixed regression test of 1.4.1 (it detected a different error, but the outcome is not a goaway error anymore, as persistent conns recover and retry) 2025-04-03 18:34:41 +01:00
HoneyryderChuck
cf08ae99f5 removing unneeded require in regression test which loads webmock by mistake 2025-04-03 18:23:56 +01:00
HoneyryderChuck
c8ce4cd8c8 Merge branch 'down-issue-98' into 'master'
stream plugin: allow partial buffering of the response when calling things other than #each

See merge request os85/httpx!380
2025-04-03 17:23:21 +00:00
HoneyryderChuck
6658a2ce24 ssl socket: do not call tcp socket connect if already connected 2025-04-03 18:17:35 +01:00
HoneyryderChuck
7169f6aaaf stream plugin: allow partial buffering of the response when calling things other than #each
this allows calling #status or #headers on a stream response without buffering the whole response, as happens now; this will only work for methods which do not rely on the whole payload being available, but that should be ok for the stream plugin usage

Fixes https://github.com/janko/down/issues/98
2025-04-03 17:51:02 +01:00
HoneyryderChuck
ffc4824762 do not needlessly probe for readiness on a reconnected connection 2025-04-03 11:04:15 +01:00
HoneyryderChuck
8e050e846f decrementing the in-flight counter in a connection
sockets are sometimes needlessly probed on retries because the counter wasn't taking failed attempts into account
2025-04-03 11:04:15 +01:00
HoneyryderChuck
e40d3c9552 do not exhaust retry attempts when probing connections after keep alive timeout expires
since pools can keep multiple persistent connections which may have already been terminated by the peer, exhausting the one retry attempt from the persistent plugin may make the request fail before trying it on an actual live connection. in this patch, requests which are preceded by a PING frame used for probing are marked as such, and do not decrement the attempts counter when failing
2025-04-03 11:04:15 +01:00
HoneyryderChuck
ba60ef79a7 if checking out a connection in a closing state, assume that the channel is irrecoverable and hard-close it beforehand
one less callback to manage, which potentially leaks across session usages
2025-03-31 11:46:04 +01:00
HoneyryderChuck
ca49c9ef41 session: discard connection callbacks if they're assigned to a different session already
some connection callbacks are prone to be left behind; when they are, they may access objects that may have been locked by another thread, thereby corrupting state.
2025-03-28 18:26:17 +00:00
HoneyryderChuck
7010484b2a bump version to 1.4.3 2025-03-25 23:30:51 +00:00
HoneyryderChuck
06eba512a6 Merge branch 'issue-340' into 'master'
empty the write buffer on EOF errors in #read too

Closes #340

See merge request os85/httpx!373
2025-03-24 11:18:57 +00:00
HoneyryderChuck
f9ed0ab602 only run rbs tests in latest ruby 2025-03-19 23:55:00 +00:00
HoneyryderChuck
5632e522c2 internal telemetry reuses the loggable module, which is made to work in places where there are no options 2025-03-19 23:43:29 +00:00
HoneyryderChuck
cfdb719a8e extra subroutines in test http2 server 2025-03-19 23:42:28 +00:00
HoneyryderChuck
b2a1b9cded fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-19 23:42:10 +00:00
HoneyryderChuck
5917c63a70 add more error message context to settings timeout flaky test 2025-03-19 23:41:02 +00:00
HoneyryderChuck
6af8ad0132 missing sig for HTTP2 Connection 2025-03-19 23:30:36 +00:00
HoneyryderChuck
35ac13406d do not run yjit build for older rubies 2025-03-19 23:30:13 +00:00
HoneyryderChuck
d00c46d363 Merge branch 'gh-80' into 'master'
handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection

See merge request os85/httpx!375
2025-03-19 23:21:31 +00:00
HoneyryderChuck
a437de36e8 handle HTTP_1_1_REQUIRED stream GOAWAY error code by retrying on new HTTP/1.1 connection
it was previously only handling 421 status codes for the same effect; this achieves parity with the frame-driven redirection
2025-03-19 23:11:51 +00:00
HoneyryderChuck
797fd28142 Merge branch 'faraday-multipart-uploadio-issue' into 'master'
fix: do not close request right after sending it, assume it may have to be retried

See merge request os85/httpx!378
2025-03-19 22:13:19 +00:00
HoneyryderChuck
6d4266d4a4 multipart: initialize @bytesize in the initializer (for object shape opt) 2025-03-19 16:59:25 +00:00
HoneyryderChuck
eb8c18ccda make << a part of Response interface (and ensure ErrorResponse deals with no internal @response) 2025-03-19 16:58:44 +00:00
HoneyryderChuck
4653b48602 fix: do not close request right after sending it, assume it may have to be retried
with the retries plugin, the request payload will be rewound, and that may not be possible if it's already closed. this was never detected so far because no request body transcoder closes internally, but the faraday multipart adapter does

the request is therefore closed alongside the response (when the latter is closed)

Fixes https://github.com/HoneyryderChuck/httpx/issues/75\#issuecomment-2731219586
2025-03-19 16:57:47 +00:00
HoneyryderChuck
8287a55b95 Merge branch 'gh-79' into 'master'
remove raise-error middleware from faraday tests

See merge request os85/httpx!376
2025-03-18 22:55:20 +00:00
HoneyryderChuck
9faed647bf remove raise-error middleware from faraday tests
proves that the adapter does not raise on http errors. also added a test to ensure that
2025-03-18 22:42:38 +00:00
HoneyryderChuck
5268f60021 fix sig issues coming from latest rbs 2025-03-18 18:30:53 +00:00
HoneyryderChuck
132e4b4ebe extra subroutines in test http2 server 2025-03-14 23:45:36 +00:00
HoneyryderChuck
b502247284 fixed wrong API call on missing corresponding client PING frame
the function used did not exist; instead, an exception will be raised
2025-03-14 23:45:27 +00:00
HoneyryderChuck
e5d852573a empty the write buffer on EOF errors in #read too
this avoids, during the HTTP/2 termination handshake, writing the buffered bytes when the socket was already detected as closed due to EOF (they would otherwise be misidentified as the GOAWAY frame)
2025-03-14 23:45:12 +00:00
HoneyryderChuck
d17ac7c8c3 webmock: reassign headers after callbacks
the headers may have been reassigned during the callbacks
2025-03-05 23:09:20 +00:00
HoneyryderChuck
b1c08f16d5 bump version to 1.4.2 2025-03-05 22:20:41 +00:00
HoneyryderChuck
f618c6447a tweaking hn script 2025-03-05 13:41:33 +00:00
HoneyryderChuck
4454b1bbcc Merge branch 'issue-334' into 'master'
ensure connection is cleaned up on parser-initiated forced reset

Closes #334

See merge request os85/httpx!363
2025-03-03 18:27:13 +00:00
HoneyryderChuck
88f8f5d287 fix: reset timeout callbacks when requests are routed to a different connection
this may happen in a few contexts, such as connection exhaustion, but more importantly, when a request is retried on a different connection; if the request successfully sets the callbacks before the connection raises an issue and the request is retried on a new one, the callbacks from the faulty connection are carried with it, and triggered at a time when the connection is back in the connection pool, or worse, used in a different thread

this fix relies on the :idle transition callback, which is called before the request is routed around
2025-03-03 18:21:04 +00:00
HoneyryderChuck
999b6a603a adding reproduction of the reported bug on issue-334 2025-03-03 18:12:03 +00:00
HoneyryderChuck
f8d05b0e82 conn: on eof error, clean up write buffer
socket is closed, do not try to drain it while performing the handshake shutdown
2025-03-03 18:12:03 +00:00
HoneyryderChuck
a7f2271652 add more process context info to logging 2025-03-03 18:12:03 +00:00
HoneyryderChuck
55f1f6800b Merge branch 'gh-77' into 'master'
always raise an error when a non-recoverable error happens when sending the request

See merge request os85/httpx!370
2025-03-03 18:03:23 +00:00
HoneyryderChuck
3e736b1f05 Merge branch 'fix-hev2-overrides' into 'master'
fixes for happy eyeballs implementation

Closes #337

See merge request os85/httpx!368
2025-03-03 18:02:43 +00:00
HoneyryderChuck
f5497eec4f always raise an error when a non-recoverable error happens when sending the request
this should fallback to terminating the session immediately and closing its connections, instead of trying to fit the same exception into the request objects, no point in that

Closes https://github.com/HoneyryderChuck/httpx/issues/77
2025-03-03 16:45:43 +00:00
HoneyryderChuck
08015e0851 fixup! native resolver: refactored retries to use timer intervals 2025-03-01 01:12:39 +00:00
HoneyryderChuck
a0f472ba02 cleanly exit from Exception in the selector loop
was messing up RBS state
2025-03-01 01:03:24 +00:00
HoneyryderChuck
8bee6956eb adding Timer, making Timers#after return it, to allow single cancellation
the previous iteration relied on internal behaviour to delete the correct callback; in the process, logic to delete all callbacks from an interval was accidentally committed, which motivated this refactoring. the premise is: timeouts can cancel the timer; they set themselves as active until done; operation timeouts rely on the previous being ignored or not.

a new error, OperationTimeoutError, was added for that effect
2025-03-01 01:03:24 +00:00
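The single-cancellation idea from the commit above can be sketched in plain Ruby. This is an illustrative sketch, not httpx's actual `Timer`/`Timers` implementation: the class and method names mirror the commit message, but all internals here are assumptions.

```ruby
# Sketch: a timer registry whose #after returns a handle supporting
# single cancellation, instead of relying on internal bookkeeping
# to find and delete the right callback later.
class Timer
  attr_reader :fires_at

  def initialize(fires_at, callback)
    @fires_at = fires_at
    @callback = callback
    @cancelled = false
  end

  def cancel
    @cancelled = true
  end

  def fire
    @callback.call unless @cancelled
  end
end

class Timers
  def initialize
    @timers = []
  end

  # returns the Timer itself, so the caller can cancel just this one
  def after(interval, &callback)
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    timer = Timer.new(now + interval, callback)
    @timers << timer
    timer
  end

  # fires all timers due at +now+, keeping the rest registered
  def fire_due(now = Process.clock_gettime(Process::CLOCK_MONOTONIC))
    due, @timers = @timers.partition { |t| t.fires_at <= now }
    due.each(&:fire)
  end
end
```

Returning the handle means a timeout can cancel exactly its own timer, which avoids the accidental delete-all-callbacks-for-an-interval behaviour the commit describes.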
HoneyryderChuck
97cbdf117d small update in output of hackernews script 2025-02-28 18:37:05 +00:00
HoneyryderChuck
383f2a01d8 fix choice of candidate on no_domain_found error
must pick up name from candidates and pass to #resolve
2025-02-28 18:37:05 +00:00
HoneyryderChuck
8a473b4ccd native resolver: propagate error to all connections and close resolver when socket error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
b6c8f70aaf fix: always prefer timer interval if values are the same 2025-02-28 18:37:05 +00:00
HoneyryderChuck
f5aa6142a0 selector: remove needless begin block 2025-02-28 18:37:05 +00:00
HoneyryderChuck
56d82e6370 connection: make sure it's purged on transition error 2025-02-28 18:37:05 +00:00
HoneyryderChuck
41e95d5b86 fix log message repeating pattern 2025-02-28 18:37:05 +00:00
HoneyryderChuck
46a39f2b0d native: when resolving, purge closed connections, ignore the connection which is being resolved 2025-02-28 18:37:05 +00:00
HoneyryderChuck
8009fc11b7 native resolver: refactored retries to use timer intervals
there were a lot of issues with bookkeeping this at the connection level; in the end, the timers infra was a much better proxy for all of this; set timer after write; cancel it on reading data to parse
2025-02-28 18:37:05 +00:00
HoneyryderChuck
398c08eb4d native resolver: consume resolutions in a loop, do not stop after the first one
this was a busy loop on dns resolution; this should utilize the socket better
2025-02-28 18:37:05 +00:00
HoneyryderChuck
723fda297f close_or_resolve: purge the queriable connections list before figuring out the next step 2025-02-27 19:22:36 +00:00
HoneyryderChuck
35ee625827 fix: in the native resolver, do not fall for the first answer being an alias if the remainder carries IPs
discard alias, use IPs
2025-02-27 19:22:36 +00:00
HoneyryderChuck
210abfb2f5 fix: on the native resolution, do not keep reading from the socket if buffer has data 2025-02-27 19:22:36 +00:00
HoneyryderChuck
53bf6824f8 fix: do not apply the HEv2 resolution delay if the ip was not resolved via DNS
early resolution should trigger immediately
2025-02-27 19:22:36 +00:00
HoneyryderChuck
cb8a97c837 added how to test instructions 2025-02-27 19:22:36 +00:00
HoneyryderChuck
0063ab6093 selector: do not raise conventional error on select timeout when the interval came from a timer
assume that the timer will fire right afterwards, return early
2025-02-27 19:22:36 +00:00
HoneyryderChuck
7811cbf3a7 faraday adapter: use a default reason when none is matched by Net::HTTP::STATUS_CODES
Fixes https://github.com/HoneyryderChuck/httpx/issues/76
2025-02-22 22:28:57 +00:00
HoneyryderChuck
7c21c33999 bump version to 1.4.1 2025-02-18 13:42:44 +00:00
HoneyryderChuck
e45edcbfce linting issue 2025-02-18 12:55:00 +00:00
HoneyryderChuck
7e705dc57e resolver: early exit for closed connections later, after updating addresses (in case they ever get reused) 2025-02-18 12:46:26 +00:00
HoneyryderChuck
dae4364664 fix for incorrect sig of #pin_connection 2025-02-18 12:45:37 +00:00
HoneyryderChuck
8dfd1edf85 suppressing annoying grpc logs where possible 2025-02-18 09:03:05 +00:00
HoneyryderChuck
d2fd20b3ec reassign current session/selector earlier in the reconnection lifecycle 2025-02-18 09:02:49 +00:00
HoneyryderChuck
28fdbb1a3d one less callback 2025-02-18 09:02:07 +00:00
HoneyryderChuck
23857f196a refactoring attribution of current session and selector
by setting it in select_connection instead
2025-02-18 09:02:01 +00:00
HoneyryderChuck
bf1ef451f2 compose file linting 2025-02-18 08:14:29 +00:00
HoneyryderChuck
d68e98be5a adapted hackernews script to deal with errors 2025-02-18 08:14:20 +00:00
HoneyryderChuck
fd57d72a22 add support in get.rb script for arbitrary url 2025-02-18 08:14:11 +00:00
HoneyryderChuck
a74bd9f397 use different names for happy eyeballs script 2025-02-18 08:14:02 +00:00
HoneyryderChuck
f76be1983b native resolver: fix stalled resolution on multiple requests to multiple origins
continue resolving when an error happens by immediately writing to the buffer afterwards
2025-02-18 08:13:47 +00:00
HoneyryderChuck
86cb30926f rewrote happy eyeballs implementation to not rely on callbacks
each connection will now check on its sibling and whether it's the original connection (containing the initial batch of requests); internal functions are then called to control how connections react to successful or failed resolutions, which reduces code repetition

the handling of coalesced connections is also simplified: when that happens, the sibling must also be closed. this allowed fixing some mismatches when handling this use case with callbacks
2025-02-18 08:13:35 +00:00
HoneyryderChuck
ed8fafd11d fix: do not schedule deferred HEv2 ipv4 tcp handshake if the connection has already been closed by the sibling connection 2025-02-18 08:12:07 +00:00
HoneyryderChuck
5333def40d Merge branch 'issue-338' into 'master'
IO.copy_stream changes yielded string on subsequent yields

Closes #338

See merge request os85/httpx!369
2025-02-14 00:27:31 +00:00
HoneyryderChuck
ab78e3189e webmock: fix for integrations which require the request to transition state, due to event emission
one of them being the otel plugin, see https://github.com/open-telemetry/opentelemetry-ruby-contrib/pull/1404
2025-02-14 00:16:53 +00:00
HoneyryderChuck
b26313d18e request body: fixed handling of files as request body
there's a bug (reported in https://bugs.ruby-lang.org/issues/21131) with IO.copy_stream, where yielded duped strings still change value on subsequent yields, which breaks http2 framing, as it requires two yields at the same time in the first iteration. this replaces it with #read calls; file handles will now be closed once done streaming, which is a change in behaviour
2025-02-14 00:16:53 +00:00
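The replacement strategy from the commit above can be sketched in plain Ruby. This is an illustrative sketch under assumptions: the method name `each_chunk` and the chunk size are invented here, not httpx's actual streaming code; the point is that each `IO#read` call returns an independent string, and the handle is closed once streaming is done (the behaviour change the commit mentions).

```ruby
require "tempfile"

# Sketch: stream a file body via explicit #read calls instead of
# IO.copy_stream, yielding an independent string per chunk, and
# closing the file handle when streaming finishes.
def each_chunk(io, chunk_size: 16_384)
  # IO#read returns nil at EOF, ending the loop
  while (chunk = io.read(chunk_size))
    yield chunk
  end
ensure
  io.close
end
```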
HoneyryderChuck
2af9bc0626 multipart: force pathname parts to open in binmode 2025-02-13 19:17:14 +00:00
HoneyryderChuck
f573c1c50b transcode: body encoder is now a simple delegator
instead of implementing method_missing; this makes it simpler impl-wise, and it'll also make comparing types easier, although not needed ATM
2025-02-13 19:16:45 +00:00
HoneyryderChuck
2d999063fc added tests to reproduce the issue of string changing on IO.copy_stream yield 2025-02-13 19:15:15 +00:00
HoneyryderChuck
1a44b8ea48 Merge branch 'gh-70' into 'master'
datadog plugin fixes

See merge request os85/httpx!364
2025-02-11 00:58:04 +00:00
HoneyryderChuck
8eeafaa008 omit faraday/datadog tests which uncovered a bug 2025-02-11 00:46:18 +00:00
HoneyryderChuck
0ec8e80f0f fixing datadog plugin not sending distributed headers
the headers were being set on the request object after the request was buffered and sent
2025-02-11 00:46:18 +00:00
HoneyryderChuck
f2bca9fcbf altered datadog tests in order to verify the distributed headers from the response body
and not from the request object, which reproduces the bug
2025-02-11 00:46:18 +00:00
HoneyryderChuck
6ca17c47a0 faraday: do not trace when configuration is disabled 2025-02-11 00:46:18 +00:00
HoneyryderChuck
016ed04f61 adding test for integration of datadog on top of faraday backed by httpx 2025-02-11 00:46:18 +00:00
HoneyryderChuck
5b59011a89 moving datadog setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
7548347421 moving faraday setup test to support mixin 2025-02-11 00:31:13 +00:00
HoneyryderChuck
43c4cf500e datadog: set port as integer in the port span tag
(faraday sets it as a float and it doesn't seem to break because of it)
2025-02-11 00:31:13 +00:00
HoneyryderChuck
aecb6f5ddd datadog plugin: fix error callback and general issues
also, made the handler a bit more functional style, which curbs some of the complexity
2025-02-11 00:31:13 +00:00
HoneyryderChuck
6ac3d346b9 Merge branch 'method-redefinition-warnings' into 'master'
Fix two method redefinition warnings

See merge request os85/httpx!367
2025-02-07 10:21:26 +00:00
Earlopain
946f93471c
Fix two method redefinition warnings
```
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/selector.rb:95: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
/usr/local/bundle/gems/httpx-1.4.0/lib/httpx/resolver/system.rb:54: warning: method redefined; discarding old empty?
/usr/local/lib/ruby/3.4.0/forwardable.rb:231: warning: previous definition of empty? was here
```

In selector.rb, the definitions are identical, so I kept the delegator

For system.rb, it always returns true so I kept that one
2025-02-07 09:38:30 +01:00
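The warning pattern fixed above can be reproduced with a small sketch. The class name `Selectables` is invented for illustration; what matters is that `Forwardable` defines the delegator method and a later explicit `def` discards it, which is what Ruby warns about in verbose mode.

```ruby
require "forwardable"

# Sketch: a delegator for empty? followed by an explicit definition.
# Under $VERBOSE (ruby -w), the second definition triggers
# "warning: method redefined; discarding old empty?".
class Selectables
  extend Forwardable

  # first definition, via Forwardable
  def_delegator :@store, :empty?

  def initialize
    @store = []
  end

  # second definition: discards the delegator above; keeping only one
  # of the two definitions (as the merge request did) silences the warning
  def empty?
    @store.empty?
  end
end
```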
HoneyryderChuck
f68ff945c1 Merge branch 'issue-335' into 'master'
raise error when httpx is used with a url not starting with the http or https schemes

Closes #335

See merge request os85/httpx!366
2025-01-28 09:07:07 +00:00
HoneyryderChuck
9fa9dd5350 raise error when httpx is used with a url not starting with the http or https schemes
this was previously done in connection initialization, which means that the request would map to an error response with this error; however, the change to thread-safe pools in 1.4.0 caused a regression, where the uri is expected to have an origin before the connection is established; this is fixed by raising an error on request creation, which will need to be caught by the caller

Fixes #335
2025-01-28 00:36:00 +00:00
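The validate-at-request-creation approach described above can be sketched as follows. This is a hedged illustration: the error class and method name are invented here (httpx raises its own error type), but the shape of the check, raising before any connection is established, matches the commit's description.

```ruby
require "uri"

# Illustrative error class; httpx defines its own.
class UnsupportedSchemeError < StandardError; end

# Sketch: validate the scheme at request-creation time, so the caller
# gets an exception instead of the connection layer failing later
# on a uri without an origin.
def validate_request_uri!(url)
  uri = URI.parse(url)
  unless %w[http https].include?(uri.scheme)
    raise UnsupportedSchemeError, "#{url}: unsupported scheme"
  end
  uri
end
```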
HoneyryderChuck
1c0cb0185c Merge branch 'issue-333' into 'master'
fix: handle multi goaway frames coming from server

Closes #333

See merge request os85/httpx!362
2025-01-13 13:00:18 +00:00
HoneyryderChuck
2a1338ca5b fix: handle multi goaway frames coming from server
nodejs servers, for example, seem to send them when shutting down servers on timeout; when receiving both in the same buffer, the first correctly closes the parser and emits the message, while the second, because the parser is already closed, will emit an exception; the regression happened because the second exception used to be swallowed by the pool handler, but now that's gone, and errors on connection consumption get handled; this was worked around by clearing the queue on the parser when emitting the errors for pending requests, so that when the second error comes, there's no request to emit the error for

Closes #333
2025-01-12 00:16:31 +00:00
HoneyryderChuck
cb847f25ad Merge branch 'ruby-34' into 'master'
adding support for ruby 3.4

See merge request os85/httpx!360
2025-01-03 01:37:28 +00:00
HoneyryderChuck
44311d08a5 improve resolver logs to include record family in prefix
also, fixed some of the arithmetic associated with logging timeout logs
2025-01-02 23:49:01 +00:00
HoneyryderChuck
17003840d3 adding support for ruby 3.4 2025-01-02 23:38:51 +00:00
HoneyryderChuck
a4bebf91bc Merge branch 'chore/avoid-loading-datadog-dogstatsd' into 'master'
Do not load Datadog tracing when dogstatsd is present

See merge request os85/httpx!361
2025-01-02 23:01:07 +00:00
Hieu Nguyen
691215ca6f Do not load Datadog tracing when dogstatsd is present 2024-12-31 18:54:44 +08:00
HoneyryderChuck
999d86ae3e bump version to 1.4.0 2024-12-18 13:22:09 +00:00
HoneyryderChuck
a4c2fb92e7 improving coverage of modules 2024-12-18 11:10:04 +00:00
HoneyryderChuck
66d3a9e00d Merge branch 'improvs' 2024-12-10 15:09:22 +00:00
HoneyryderChuck
e418783ea9 more sig completeness 2024-12-10 15:09:00 +00:00
HoneyryderChuck
36ddd84c85 improve code around consuming request bodies (particularly body_encoder interface) 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f7a5b3ae90 define selector_store sigs 2024-12-10 15:09:00 +00:00
HoneyryderChuck
3afe853517 make #early_resolve return a boolean, instead of undefined across implementations 2024-12-10 15:09:00 +00:00
HoneyryderChuck
853ebd5e36 improve coverage, eliminate dead code 2024-12-10 15:09:00 +00:00
HoneyryderChuck
f820b8cfcb Merge branch 'issue-325' into 'master'
XML plugin

Closes #325

See merge request os85/httpx!358
2024-12-08 13:14:43 +00:00
HoneyryderChuck
062fd5a7f4 reinstate and deprecate HTTPX::Response#xml method 2024-12-08 12:48:47 +00:00
HoneyryderChuck
70bf874f4a adding gem collection
includes nokogiri type sigs
2024-12-08 12:48:47 +00:00
HoneyryderChuck
bf9d847516 moved xml encoding/decoding + APIs into :xml plugin 2024-12-08 12:48:47 +00:00
HoneyryderChuck
d45cae096b fix: do not raise things which are not exceptions
this is a regression from a ractor compatibility commit, which ensured that errors raised while preparing the request / resolving name are caught and raised, but introduced a regression when name resolution retrieves a cached IP; this error only manifested in dual-stack situations, which can't be tested in CI yet

Closes #329
2024-12-07 20:00:40 +00:00
HoneyryderChuck
717b932e01 improved coverage of content digest plugin tests 2024-12-03 09:00:11 +00:00
HoneyryderChuck
da11cb320c Merge branch 'json-suffix' into 'master'
Accept more MIME types with json suffix

Closes #326

See merge request os85/httpx!357
2024-12-03 08:50:07 +00:00
sarna
4bf07e75ac Accept more MIME types with json suffix
Fixes #326 #327
2024-12-03 08:50:07 +00:00
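The broader MIME matching from the merge above can be sketched with a single regexp. This is an assumption-laden illustration, not httpx's actual matcher: the idea is to accept RFC 6839 structured syntax suffixes such as `application/problem+json` or `application/hal+json`, not just `application/json`.

```ruby
# Sketch: treat any type/subtype whose subtype is "json" or ends in
# "+json" as JSON, including an optional parameter section.
def json_content_type?(content_type)
  content_type.to_s.match?(%r{\A(?:application|text)/(?:[\w.\-]+\+)?json\b}i)
end
```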
HoneyryderChuck
3b52ef3c09 Merge branch 'simpler-selector' into 'master'
:pool option + thread-safe session-owned conn pool

See merge request os85/httpx!348
2024-12-02 14:26:17 +00:00
HoneyryderChuck
ac809d18cc content-digest: set validate_content_digest default to false; do not try to compute content-digest for requests with no body 2024-12-02 13:04:57 +00:00
HoneyryderChuck
85019e5493 Merge branch 'content_digest' into 'master'
Add support for `content-digest` headers (RFC9530)

See merge request os85/httpx!354
2024-12-02 12:37:40 +00:00
David Roetzel
95c1a264ee Add support for content-digest headers (RFC9530)
Closes #323
2024-12-02 12:37:40 +00:00
HoneyryderChuck
32313ef02e Merge branch 'fix-json-encode-with-oj' into 'master'
Fix incorrect hash key rendering with Oj JSON encoder

Closes #324

See merge request os85/httpx!356
2024-11-29 19:41:40 +00:00
Denis Sadomowski
ed9df06b38 fix rubocop offenses 2024-11-29 18:26:39 +01:00
Denis Sadomowski
b9086f37cf Compat mode for Oj.dump by default 2024-11-29 17:47:30 +01:00
Denis Sadomowski
d3ed551203 revert arguments to json_dump 2024-11-29 17:40:32 +01:00
Denis Sadomowski
1b0e9b49ef Fix incorrect hash key rendering with Oj JSON encoder 2024-11-28 16:19:17 +01:00
HoneyryderChuck
8797434ae7 Merge branch 'fix-hexdigest-on-compressed-bodies' into 'master'
aws sigv4: support calculation of hexdigest on top of compressed bodies in the correct way

See merge request os85/httpx!355
2024-11-27 18:06:39 +00:00
HoneyryderChuck
25c87f3b96 fix: do not try to rewind on bodies which respond to #each
also, error when trying to calculate hexdigest on endless bodies
2024-11-27 17:39:20 +00:00
HoneyryderChuck
26c63a43e0 aws sigv4: support calculation of hexdigest on top of compressed bodies in a more optimal way
before, compressed bodies were yielding chunks and buffering locally (the  variant in this snippet); they were also failing to rewind, due to a missing method (fixed in the last commit); in this change, support is added for bodies which can read and rewind (but do not map to a local path via ), such as compressed bodies, which at this point haven't yet been buffered; the procedure is then to buffer the compressed body into a tempfile, calculate the hexdigest, then rewind the body and move on
2024-11-27 08:55:23 +00:00
HoneyryderChuck
3217fc03f8 allow deflater bodies to rewind 2024-11-27 08:50:57 +00:00
HoneyryderChuck
b7b63c4460 removing unused bits 2024-11-27 08:50:26 +00:00
HoneyryderChuck
7d8388af28 add test for calculation of hexdigest on top of a compressed body 2024-11-27 08:49:57 +00:00
HoneyryderChuck
a53d7f1e01 raise error happening in request-to-connection paths
but only when the selector is empty, as there'll be nothing to select on, and this would fall into an infinite loop
2024-11-19 12:55:44 +00:00
HoneyryderChuck
c019f1b3a7 removing usage of global unshareable object in default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
594f6056da native resolver: treat tcp handshake errors as resolve errors 2024-11-19 12:55:44 +00:00
HoneyryderChuck
113e9fd4ef moving leftover option proc into private function 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e32d226151 refactor of internal resolver cache lookup access to make it a bit safer 2024-11-19 12:55:44 +00:00
HoneyryderChuck
a3246e506d freezing all default options 2024-11-19 12:55:44 +00:00
HoneyryderChuck
ccb22827a2 using find_index/delete_at instead of find/delete 2024-11-19 12:55:44 +00:00
HoneyryderChuck
94e154261b store selectors in thread-local variables
instead of fiber-local storage; this allows that, under fiber-scheduler based engines like async, requests on the same session with an open selector will reuse the latter, thereby ensuring connection reuse within the same thread

in normal conditions, that'll happen only if the user uses a session object and uses HTTPX::Session#wrap to keep the context open; it'll also work OOTB when using sessions with the  plugin. Otherwise, a new connection will be opened per fiber
2024-11-19 12:55:44 +00:00
HoneyryderChuck
c23561f80c linting... 2024-11-19 12:55:44 +00:00
HoneyryderChuck
681650e9a6 fixed long-standing reenqueue of request in the pending list 2024-11-19 12:55:44 +00:00
HoneyryderChuck
31f0543da2 minor improvement on handling do_init_connection 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5e3daadf9c changing the order of operations handling misdirected requests
because you're reconnecting to the same host, the previous connection is now closed first, in order to avoid a deadlock on the pool where the per-host conns are exhausted and the new connection can't be initiated because the older one hasn't been checked back in
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6b9a737756 introducing Connection#peer to point to the host to connect to
this eliminates the overuse of Connection#origin, which in the case of proxied connections was broken in the previous commit

the proxy implementation got simpler, despite this large changeset
2024-11-19 12:55:44 +00:00
HoneyryderChuck
1f9dcfb353 implement per-origin connection threshold per pool
defaulting to unbounded, in order to preserve current behaviour; this will cap the number of connections initiated for a given origin for a pool, which if not shared, will be per-origin; this will include connections from separate option profiles

a pool timeout is defined to check out a connection when the limit is reached
2024-11-19 12:55:44 +00:00
HoneyryderChuck
d77e97d31d repositioned empty placeholder hash 2024-11-19 12:55:44 +00:00
HoneyryderChuck
69e7e533de synchronize access to connections in the pool
also fixed the coalescing case where the connection may come from the pool, and should therefore be removed from there and selected/checked back in accordingly as a result
2024-11-19 12:55:44 +00:00
HoneyryderChuck
840bb55ab3 do not return idle (result of either cloning or coalescing) connections back to the pool 2024-11-19 12:55:44 +00:00
HoneyryderChuck
5223d51475 setting the connection pool locally to the session
allowing it to be plugin extended via pool_class and PoolMethods
2024-11-19 12:55:44 +00:00
HoneyryderChuck
8ffa04d4a8 making pool class a plugin extendable class 2024-11-19 12:55:44 +00:00
HoneyryderChuck
4a351bc095 adapted plugins to the new structure 2024-11-19 12:55:44 +00:00
HoneyryderChuck
11d197ff24 changed internal session structure, so that it uses local selectors directly
pools are then used only to fetch new connections; selectors are discarded when not needed anymore; HTTPX.wrap is for now patched, but would ideally be done away with in the future
2024-11-19 12:55:44 +00:00
HoneyryderChuck
12fbca468b rewrote Pool class to act as a connection pool, the way it was intended
this leaves synchronization out ftm
2024-11-19 12:55:44 +00:00
HoneyryderChuck
79d5d16c1b moving session with pool test plugin to override on the session and drop pool changes 2024-11-19 12:55:44 +00:00
HoneyryderChuck
e204bc6df0 passing connections to Pool#next_tick and Pool#next_timeout
refactoring towards not centralizing this information
2024-11-19 12:55:44 +00:00
HoneyryderChuck
6783b378d3 bump version to 1.3.4 2024-11-19 12:53:34 +00:00
HoneyryderChuck
9d7681cb46 Merge branch 'webmock-form-tempfile' into 'master'
Fix webmock integration when posting tempfiles

Closes #320

See merge request os85/httpx!353
2024-11-06 13:58:04 +00:00
HoneyryderChuck
c6139e40db response body: protect against invalid charset in content-type header
Closes https://github.com/HoneyryderChuck/httpx/issues/66
2024-11-06 13:38:19 +00:00
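The charset protection from the commit above can be sketched like this. A hedged illustration: the method name, header parsing, and the binary fallback are assumptions for the example; the core idea is that `Encoding.find` raises `ArgumentError` on an unknown charset and must be rescued.

```ruby
# Sketch: resolve the charset declared in a content-type header,
# falling back to binary when no charset is declared or it's unknown.
def charset_for(content_type)
  charset = content_type.to_s[/;\s*charset=([^;\s"]+)/i, 1]
  return Encoding::BINARY unless charset

  Encoding.find(charset)
rescue ArgumentError
  # invalid charset in the header; don't blow up on the response body
  Encoding::BINARY
end
```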
Earlopain
a4b95db01c Fix webmock integration when posting tempfiles
The fix is two-fold and also allows them to be retryable

Closes https://gitlab.com/os85/httpx/-/issues/320
2024-11-06 13:27:45 +00:00
HoneyryderChuck
91b9e13cd0 bumped version to 1.3.3 2024-10-31 18:00:12 +00:00
HoneyryderChuck
8d5def5f02 Merge branch 'issue-319' into 'master'
fix for webmock request body expecting a string

Closes #319

See merge request os85/httpx!352
2024-10-31 17:58:42 +00:00
HoneyryderChuck
3e504fb511 fix for webmock request body expecting a string
when building the request signature, the body is preemptively converted to a string, which fulfills the expectation for webmock, despite it being a bit of a perf penalty if the request contains a multipart request body, as the body will be fully read to memory

Closes #319

Closes https://github.com/HoneyryderChuck/httpx/issues/65
2024-10-31 17:47:12 +00:00
HoneyryderChuck
492097d551 bumped version to 1.3.2 2024-10-30 11:50:49 +00:00
HoneyryderChuck
02ed2ae87d raise invalid uri if passed request uri does not contain the host part 2024-10-28 10:40:28 +00:00
HoneyryderChuck
599b6865da removing parentheses from regex 2024-10-25 15:54:04 +01:00
HoneyryderChuck
7c0e776044 coverage must be a regex 2024-10-25 13:58:58 +01:00
HoneyryderChuck
7ea0b32161 fix coverage badge generation 2024-10-25 13:55:51 +01:00
HoneyryderChuck
72b0267598 Merge branch 'issue-317' into 'master'
Support WebMock with form/multipart

Closes #317

See merge request os85/httpx!351
2024-10-25 12:55:25 +00:00
Alexey Romanov
4a966d4cb8 Add a regression test for WebMock with form/multipart 2024-10-25 13:43:12 +01:00
HoneyryderChuck
70f1ffc65d Merge branch 'github-issue-63' into 'master'
Prevent `NoMethodError` in the proxy plugin

See merge request os85/httpx!350
2024-10-21 09:23:50 +00:00
Alexey Romanov
fda0ea8b0e Prevent NoMethodError in the proxy plugin
When:
1. the proxy is autodetected from `http_proxy` etc. variables;
2. a request is made which bypasses the proxy (e.g. to an authority in `no_proxy`);
3. this request fails with one of `Proxy::PROXY_ERRORS` (timeout or a system error)

the `fetch_response` method tried to access the proxy URIs array which
isn't initialized by `proxy_options`. This change fixes the
`proxy_error?` check to avoid the issue.
2024-10-21 10:10:12 +01:00
HoneyryderChuck
2443ded12b update CI test certs 2024-09-27 09:16:06 +01:00
HoneyryderChuck
1db2d00d07 rename get tests 2024-09-06 09:43:25 +01:00
HoneyryderChuck
40b4884d87 bumped version to 1.3.1 2024-08-20 17:20:24 +01:00
HoneyryderChuck
823e7446f4 faraday: do not call on_complete when not defined
by default it's not filled in, but middlewares override it

Closes https://github.com/HoneyryderChuck/httpx/issues/61
2024-08-20 16:55:57 +01:00
HoneyryderChuck
83b4c73b92 protect against coalescing connections on the resolver
these could take connections out of the loop, thereby causing a busy loop, on multiple request scenarios
2024-08-19 16:45:55 +01:00
Diogo Vernier
9844a55205 fix CPU usage loop 2024-08-19 16:45:55 +01:00
HoneyryderChuck
6e1bc89256 Merge branch 'issue-312' into 'master'
allow further extension of the httpx session via faraday config block

Closes #312

See merge request os85/httpx!347
2024-08-19 15:45:41 +00:00
HoneyryderChuck
8ec0765bd7 Merge branch 'max-time' into 'master'
reuse request_timeout on response chains (redirects, retries)

See merge request os85/httpx!345
2024-08-19 15:45:24 +00:00
HoneyryderChuck
6b893872fb allow further extension of the httpx session via faraday config block
Closes #312
2024-08-01 11:41:10 +01:00
HoneyryderChuck
ca8346b193 adding options docs 2024-07-25 16:01:51 +01:00
HoneyryderChuck
7115f0cdce avoid enqueuing requests after a period if the request is over
they may have been closed already, due to a timeout or connection dropping. this condition affects delayed retry or redirect follow requests.
2024-07-25 11:59:02 +01:00
HoneyryderChuck
74fc7bf77d when bubbling up errors in the connection, handle request error directly
instead of expecting it to be contained within it, and therefore handled explicitly. sometimes they may not.
2024-07-25 11:59:02 +01:00
HoneyryderChuck
002459b9b6 fix: do not generate new connection on 407 check for proxies
instead, look for the correct conn in-session. this no longer leaks connections with usage
2024-07-25 11:59:02 +01:00
HoneyryderChuck
1ee39870da deactivate connection before deferring a request in the future
this causes busy loops where request is buffered only in the future, and its connection may still be open for readiness probes
2024-07-25 11:59:02 +01:00
HoneyryderChuck
b8db28abd2 make request_timeout reset on returned response, rather than response callback
this makes it not reset on redirect or retried responses, and effectively makes it act as a max-time for individual transactions/requests
2024-07-25 11:59:02 +01:00
HoneyryderChuck
fafe7c140c splatting connections on pool.deactivate call, as per defined sig 2024-07-23 14:48:51 +01:00
HoneyryderChuck
047dc30487 do not use thread variables in mock response test plugin 2024-07-19 12:01:48 +01:00
HoneyryderChuck
7278647688 bump version to 1.3.0 2024-07-10 16:27:24 +01:00
HoneyryderChuck
09fbb32b9a fix: in test, use URI to build uri with ip address, as concatenating fails for IPv6 2024-07-10 16:10:21 +01:00
HoneyryderChuck
4e7ad8fd23 fix: cookies plugin should not make Session#build_request private
Closes #311
2024-07-10 15:52:56 +01:00
HoneyryderChuck
9a3ddfd0e4 change datadog v2 constraint to not test against beta version
Fixes #310
2024-07-10 15:50:14 +01:00
HoneyryderChuck
e250ea5118 Merge branch 'http-2-gem' into 'master'
switch from http-2-next to http-2

See merge request os85/httpx!344
2024-07-08 15:19:37 +00:00
HoneyryderChuck
2689adc390 Merge branch 'request-options' into 'master'
Options improvements

See merge request os85/httpx!324
2024-07-08 15:19:02 +00:00
HoneyryderChuck
ba31204227 switch from http-2-next to http-2
will be merged back to original repo soon
2024-06-28 15:49:58 +01:00
HoneyryderChuck
581b749e89 bumped version to 1.2.6 2024-06-17 10:58:39 +01:00
HoneyryderChuck
7562346357 fix: do not try fetching the retry-after on error responses
Closes #307
2024-06-11 19:09:47 +01:00
HoneyryderChuck
e7aa53365e typing retries #fetch_response 2024-06-11 19:08:44 +01:00
HoneyryderChuck
0b671fa2f9 simplify ErrorResponse by fetching options from the request, like Response 2024-06-11 18:49:18 +01:00
HoneyryderChuck
8b2ee0b466 remove form, json, xml and body from the Options class
Options becomes a bunch of session and connection level parameters, and requests do not need to maintain a separate Options object when they contain a body anymore; instead, the object is shared with the session, while request-only parameters get passed downwards to the request and its body. This reduces allocations of Options, currently the heaviest object to manage.
2024-06-11 18:23:45 +01:00
HoneyryderChuck
b686119a6f do not try to cast to Options all the time, trust the internal structure 2024-06-11 18:23:12 +01:00
HoneyryderChuck
dcbd2f81e3 change internal buffer fetch using ivar getter 2024-06-11 18:21:54 +01:00
HoneyryderChuck
0fffa98e83 avoid traversing full intervals list, which is ordered by oldest intervals first
by using #drop_while
2024-06-11 18:21:54 +01:00
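The `#drop_while` optimization above can be sketched as follows. Illustrative only (the real intervals are objects, not integers): on a list ordered oldest-first, `#drop_while` stops at the first non-elapsed entry instead of traversing the full list the way `#reject` or `#select` would.

```ruby
# Sketch: prune elapsed entries from a list ordered oldest-first.
# drop_while short-circuits at the first entry that is still pending.
def prune_elapsed(intervals, now)
  intervals.drop_while { |interval| interval <= now }
end
```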
HoneyryderChuck
08ba389fd6 log more info on read for level 3 2024-06-11 18:21:54 +00:00
HoneyryderChuck
587271ff77 improving sigs 2024-06-11 18:21:22 +01:00
HoneyryderChuck
7062b3c49b Merge branch 'gh-52' into 'master'
native resolver: moved timeouts reset out of idle transition, retry alias

See merge request os85/httpx!342
2024-06-11 16:24:52 +00:00
HoneyryderChuck
b1cec40743 native: retry last tried name for a given DNS query
this prevents a timeout on an alias query from restarting the resolution from scratch from the original name
2024-06-11 16:36:36 +01:00
HoneyryderChuck
2d6fde2e5d downgrade to udp when retrying dns queries 2024-06-11 16:27:51 +01:00
HoneyryderChuck
3a3188efff adding a log msg when transitioning to resolving an alias 2024-06-11 16:27:51 +01:00
HoneyryderChuck
7928624639 native resolver: moved timeouts reset out of idle transition
in order to reuse the idle transition in other situations, and also because the only case where it's required to reset timeouts is when retrying on a separate nameserver
2024-06-11 16:27:51 +01:00
HoneyryderChuck
d61df6d84f fixing resolver options extension on tests (although it wasn't breaking anything) 2024-06-11 16:27:51 +01:00
HoneyryderChuck
c388d8ec9a slow dns server: support for single hostname slowness 2024-06-11 16:27:51 +01:00
HoneyryderChuck
ad02ad5327 test dns server for tcp queries 2024-06-11 16:27:51 +01:00
HoneyryderChuck
af6ce5dca4 fixing redirect_on sig 2024-06-09 19:40:28 +01:00
HoneyryderChuck
68dd8e223f Merge branch 'gh-53' into 'master'
remove body-related headers on POST-to-GET redirects

See merge request os85/httpx!343
2024-06-09 15:06:21 +00:00
HoneyryderChuck
d9fbd5194e fixup! adding tests for POST-to-GET redirection, both for 307 and not 2024-06-05 18:09:13 +01:00
HoneyryderChuck
0ba7112a9f remove body-related headers on POST-to-GET redirects
except in the case where method and body must be preserved on redirects, as in the 307 case
2024-06-05 17:58:05 +01:00
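The header filtering described above can be sketched in a few lines (the constant and helper are illustrative, not httpx's API): when a redirect turns POST into GET, body-describing headers no longer apply, while 307 preserves method and body and therefore keeps them.

```ruby
# Illustrative sketch: drop body-related headers on POST-to-GET
# redirects; 307 preserves method and body, so headers stay as-is.
BODY_HEADERS = %w[content-type content-length transfer-encoding].freeze

def redirect_headers(headers, status)
  return headers.dup if status == 307 # method/body preserved

  headers.reject { |field, _| BODY_HEADERS.include?(field) }
end
```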
HoneyryderChuck
0c262bc19d adding tests for POST-to-GET redirection, both for 307 and not 2024-06-05 17:58:05 +01:00
HoneyryderChuck
b03a46d25e Merge branch 'gh-54' into 'master'
set options from env on the request

See merge request os85/httpx!341
2024-06-05 12:51:04 +00:00
HoneyryderChuck
69f58bc358 lock ffi for older ruby 2024-06-04 11:20:04 +01:00
HoneyryderChuck
41c1aace80 set options from env on the request
faraday users may use the yielded block to set different options per request
2024-06-03 18:11:04 +01:00
HoneyryderChuck
423f05173c bump version to 1.2.5 2024-05-14 15:24:58 +01:00
HoneyryderChuck
d82008ddcf Merge branch 'fix-stream-plugin' into 'master'
stream plugin: reverted back to yielding buffered payloads for streamed responses

See merge request os85/httpx!340
2024-05-14 14:21:09 +00:00
HoneyryderChuck
19f46574cb reduce payload size in timeout test 2024-05-14 15:04:10 +01:00
HoneyryderChuck
713887cf08 reordered connection init inn case the uri is not an HTTP uri 2024-05-13 18:10:19 +01:00
HoneyryderChuck
a3cfcc71ec stream plugin: reverted back to yielding buffered payloads for streamed responses
the bug this was removed for no longer seems to hinge on this behaviour, and this at least allows the down integration to not change significantly.
2024-05-13 18:10:19 +01:00
HoneyryderChuck
0f431500c0 Merge branch 'gh-47' into 'master'
response cache plugin: fix to use correct last-modified header

See merge request os85/httpx!339
2024-05-02 16:35:20 +00:00
HoneyryderChuck
9d03dab83d missing require for uri lib 2024-05-02 17:22:29 +01:00
HoneyryderChuck
7e7c06597a upgrade test datadog to v2 beta 2024-05-02 17:02:37 +01:00
HoneyryderChuck
83157412e7 response cache plugin: merge headers from cached response
some are required for other features, such as the convenience decoding methods

Fixes https://github.com/HoneyryderChuck/httpx/issues/47
2024-05-02 16:58:16 +01:00
HoneyryderChuck
461dac06d5 response cache plugin: fix to use correct last-modified header
Fixes https://github.com/HoneyryderChuck/httpx/issues/49
2024-05-02 16:57:13 +01:00
HoneyryderChuck
d60cfb7e44 bumped version to 1.2.4 2024-04-02 15:59:34 +01:00
HoneyryderChuck
20c8dde9ef fixed usage of String#split when forming key/value pairs
some of its uses parse values where there are = characters, such as base64 strings
2024-04-02 09:39:52 +01:00
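The class of bug above is easy to reproduce in plain Ruby: without a limit, `String#split` cuts at every separator, which mangles values that legitimately contain `=` (such as base64 strings), whereas a limit of 2 splits only at the first one.

```ruby
pair = "token=dGVzdA=="

# Unbounded split also cuts inside the base64 value:
pair.split("=")    # => ["token", "dGVzdA"]

# A limit of 2 splits only at the first "=", keeping the value intact:
pair.split("=", 2) # => ["token", "dGVzdA=="]
```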
HoneyryderChuck
594640c10c removed irb call left behind... 2024-03-25 09:47:28 +00:00
HoneyryderChuck
1f7a251925 updated datadog 2.0 prerelease tag 2024-03-22 18:49:08 +00:00
HoneyryderChuck
7ab251f755 Merge branch 'issue-305' into 'master'
fix: datadog not generating new span on retried requests

Closes #305

See merge request os85/httpx!337
2024-03-22 14:42:53 +00:00
HoneyryderChuck
3d9779cc63 adapt to datadog gem upcoming changes
names changes from ddtrace to datadog, as well as namespace
2024-03-22 13:44:16 +00:00
HoneyryderChuck
b234465219 ci: show bundler logs 2024-03-22 12:51:15 +00:00
HoneyryderChuck
51a8b508ac fix: datadog not generating new span on retried requests
the span's initiation gate wasn't being reset in the case of retries, which
reuse the same object; a redesign was done to ensure the span initiates
before the request is actually sent, is reused when the request object is
reset and reused, and is still created when the error happens outside of the
request transaction, such as during name resolution.
2024-03-22 12:51:15 +00:00
HoneyryderChuck
b86529655f Merge branch 'gh-43' into 'master'
fix: recover from connection lost leaving process hanging on persistent connections

See merge request os85/httpx!335
2024-03-17 10:10:43 +00:00
HoneyryderChuck
4434daa5ea fix: recover from connection lost leaving process hanging on persistent connections
the recovery model of long running connections is to mark requests as pending, ping the connection to fill the write buffer, and move on. since the last changes, which improved connection object reuse, the way the procedures were stacked created a conundrum, where the inactive connection would move to idle before being activated, so it'd never go back to the connection pool; this switches operations, so an inactive connection activates first and is picked up by the pool, before ping-and-reconnect happens
2024-03-15 15:39:57 +00:00
HoneyryderChuck
dec17e8d85 Merge branch 'issue-304' into 'master'
allows for returning buffering to error response on loop error

Closes #304

See merge request os85/httpx!334
2024-03-14 14:10:23 +00:00
HoneyryderChuck
c6a63b55a9 allows for returning buffering to error response on loop error
in some situations, on unexpected loop errors, the read buffer may still contain response bytes which couldn't be buffered to the error response after the error has propagated; this makes it possible by delegating bytes to the wrapped response
2024-03-11 23:07:05 +00:00
Tony Hsu
be5a91ce2e ddtrace 2.0 changes 2024-03-11 22:46:42 +00:00
HoneyryderChuck
c4445074ad bump version to 1.2.3 2024-03-04 11:59:40 +00:00
HoneyryderChuck
b1146b9f55 Merge branch 'master' of gitlab.com:os85/httpx 2024-03-01 16:53:33 +00:00
HoneyryderChuck
78d67cd364 wrong ruby engine cond 2024-02-29 15:58:42 +00:00
HoneyryderChuck
2fbec7ab6a Merge branch 'issue-296' into 'master'
elapsing timeouts: guard against mutation of callbacks while looping

Closes #296

See merge request os85/httpx!329
2024-02-29 14:48:00 +00:00
HoneyryderChuck
fbfd17351f disable ssh proxy tests for truffleruby as well 2024-02-29 14:47:54 +00:00
HoneyryderChuck
3c914f741d remove unused var 2024-02-29 14:15:36 +00:00
HoneyryderChuck
ad14df6a7a Merge branch 'issue-287' into 'master'
native resolver will cleanly go from tcp to udp on CNAME resolution

Closes #287

See merge request os85/httpx!331
2024-02-29 14:08:03 +00:00
HoneyryderChuck
cf43257006 documenting delegated methods in Response 2024-02-28 14:38:01 +00:00
Mostafa Dahab
06076fc908 Allow zero max retries 2024-02-28 14:37:38 +00:00
HoneyryderChuck
d5c9a518d8 Merge branch 'github-pr-41' into 'master'
datadog: do not set lazy for newer versions (deprecated)

See merge request os85/httpx!332
2024-02-28 13:42:18 +00:00
HoneyryderChuck
d5eee7f2d1 Merge branch 'issue-299' into 'master'
fix for not allowing default oauth auth method when setting grant type and scope

Closes #299

See merge request os85/httpx!330
2024-02-27 21:20:35 +00:00
HoneyryderChuck
ab51dcbbc1 datadog: do not set lazy for newer versions (deprecated) 2024-02-27 12:23:34 +00:00
HoneyryderChuck
8982dc0fe4 remove regression test 0.19.3
peers used for the test changed their TLS certificate config, and I can't find replacement peers.
2024-02-27 11:40:47 +00:00
HoneyryderChuck
8e3d5f4094 fix: native resolver will cleanly go from tcp to udp on CNAME resolution
if a CNAME came on a tcp dns response, the follow-up dns query would be erased and never done; this fixes it by keeping buffer state on falling back to udp
2024-02-26 18:11:24 +00:00
HoneyryderChuck
77006fd0c9 fix for not allowing default oauth auth method when setting grant type and scope
the oauth plugin already documents defaulting to client_secret_basic
2024-02-26 16:33:09 +00:00
HoneyryderChuck
bab19efcfe fix: make sure happy eyeballs cloned connections set the session callbacks
fixed an issue where a 421 response would not call the misredirected callback, as it wasn't being re-set in the cloned connection, therefore it would never be called, and would hang...
2024-02-25 23:24:30 +00:00
HoneyryderChuck
f1bccaae2e elapsing timeouts: guard against mutation of callbacks while looping
triggering timer callbacks may call Connection#consume, which may trigger interval cleanup process of the timer callback. this does not happen usually, but it can happen in the context of multiple requests to the same host using the expect plugin
2024-02-09 14:46:27 +00:00
HoneyryderChuck
b5b59b10d7 bump version to 1.2.2 2024-02-02 16:38:48 +00:00
Earlopain
91fba0a971 Fix initial headers always being an instance of the default header class 2024-02-02 16:24:54 +00:00
HoneyryderChuck
a839c2d6f1 oauth: do not bail out on token endpoint (it's the rest that matters) 2024-02-02 10:22:11 +00:00
HoneyryderChuck
3cf07839cc Merge branch 'master' of gitlab.com:os85/httpx 2024-02-01 18:29:04 +00:00
HoneyryderChuck
112dc10dba Merge branch 'another-3.3-warning' into 'master'
Fix another warning when running tests under Ruby 3.3

See merge request os85/httpx!328
2024-02-01 17:12:03 +00:00
HoneyryderChuck
b086c237ee versioning declaration of syslog gem, which is not a bundled gem anymore 2024-02-01 17:11:18 +00:00
HoneyryderChuck
ffd20d73c8 remove needless usage of ||= 2024-02-01 17:08:45 +00:00
Earlopain
861f7a0d34
Fix another warning when running tests under Ruby 3.3 2024-02-01 16:38:21 +01:00
HoneyryderChuck
7a7ad75ef7 Merge branch 'test-warning-ruby-3.3' into 'master'
Fix a warning when running tests on Ruby 3.3

See merge request os85/httpx!323
2024-02-01 11:21:17 +00:00
HoneyryderChuck
2f513526d3 Merge branch 'bugfix/issue-289' into 'master'
Fix OAuthSession#load returning when all inst_vars are assigned

See merge request os85/httpx!321
2024-02-01 11:19:35 +00:00
HoneyryderChuck
566b804b65 Merge branch 'issue-288' into 'master'
cleanly close the connection on HTTP2 protocol error

Closes #288, #294, and #295

See merge request os85/httpx!325
2024-02-01 11:18:18 +00:00
HoneyryderChuck
5a08853e7a rescue from IOError and terminate the connection
contrary to what was probably assumed, IOError is not a SocketError...

Closes #295
2024-01-26 02:19:03 +00:00
HoneyryderChuck
dd0473e7cf cleanly close the connection on HTTP2 protocol error
when the HTTP/2 connection suffers an irrecoverable error on the protocol level, it has to be terminated
2024-01-26 02:19:02 +00:00
HoneyryderChuck
067e32923c Merge branch 'issue-292' into 'master'
bookkeep pool connections on Session#wrap

Closes #292

See merge request os85/httpx!326
2024-01-26 02:18:24 +00:00
HoneyryderChuck
f3a241fcc1 ci: do not use links in docker-compose
deprecated for a while, and gitlab ci / dind does not seem to support it anymore.
2024-01-26 02:11:03 +00:00
HoneyryderChuck
4ad2c50143 bookkeep pool connections on Session#wrap
in order to only close connections initiated during the Session#wrap block, pool also wraps the block so that existing connections do not roll over

Closes #292
2024-01-26 02:11:03 +00:00
Earlopain
194b5ae3dc
Fix a warning when running tests on Ruby 3.3 2024-01-20 13:52:57 +01:00
HoneyryderChuck
0633daaf8e Merge branch 'plugin-no-method-error' into 'master'
Fix error message when options method itself raises NoMethodError

See merge request os85/httpx!322
2024-01-18 22:36:42 +00:00
Earlopain
7dd06c5e87
Fix error message when options method itself raises NoMethodError
This would previously say that the option was unknown
2024-01-18 12:37:02 +01:00
mereghost
8d30ce1588
Adds nil return to avoid rubocop linting violations 2024-01-14 16:15:14 +01:00
mereghost
9187692615
Avoid overwriting already set OAuthSession instance variables 2024-01-14 16:08:08 +01:00
mereghost
99621de555
Invert the inst_vars check on OAuthSession. Fix inst_var reference 2024-01-14 11:21:42 +01:00
HoneyryderChuck
e9d5b75298 bump version to 1.2.1 2024-01-13 16:03:39 +00:00
HoneyryderChuck
994049da8c fix decoding issue in test by not allowing reuse of the same response object
buffering more chunks after decoding response payload leads to dubious results in ruby 3.3, and is, from a usability perspective, not even something httpx should allow
2024-01-13 15:39:10 +00:00
HoneyryderChuck
84d01b5358 prevent HTTP/2 handshake on corrupted socket fix to loop
consume may call on_error, which ends up in #handle_transition again. To prevent this infinite loop, the state is set before the handshake packet is buffered.
2024-01-13 00:58:52 +00:00
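The state-before-buffer ordering above is a standard reentrancy guard; a minimal, self-contained sketch (the `Handshake` class is hypothetical, not httpx's) of why flipping the state before triggering the side effect breaks the loop:

```ruby
# Illustrative reentrancy guard: the transition sets the state first,
# so a re-entrant call (e.g. via an error callback) becomes a no-op.
class Handshake
  attr_reader :state, :log

  def initialize
    @state = :idle
    @log = []
  end

  def handle_transition(next_state)
    return if @state == next_state # re-entry stops here

    @state = next_state            # flip state before the side effect...
    @log << next_state
    handle_transition(next_state)  # ...so re-entry cannot loop forever
  end
end

h = Handshake.new
h.handle_transition(:open)
h.log # => [:open]
```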
HoneyryderChuck
ff914d380d Merge branch 'ruby-3.3' into 'master'
Fix compatibility with Ruby 3.3

See merge request os85/httpx!317
2024-01-13 00:33:38 +00:00
HoneyryderChuck
9d04c6747c Merge branch 'issue-288' into 'master'
fix: recover from socket/ssl errors on conn reset handshake

Closes #288

See merge request os85/httpx!320
2024-01-13 00:30:23 +00:00
HoneyryderChuck
8e0a5665f0 fix: recover from socket/ssl errors on conn reset handshake
if a socket or ssl session gets corrupted, it's a certainty that the HTTP/2 goaway frame will fail to be sent. in such cases, the error should be ignored. given that this is already handled in the transition routine, one moves the handshake push there
2024-01-13 00:20:52 +00:00
HoneyryderChuck
dc7b41e7da test reproducing ssl error blocking clean connection exit 2024-01-13 00:20:52 +00:00
HoneyryderChuck
b1fc1907ab Merge branch 'issue-286' into 'master'
https resolver: try remaining candidates on domain not found

Closes #286

See merge request os85/httpx!319
2024-01-03 22:22:19 +00:00
HoneyryderChuck
c1a25d34d3 Merge branch 'pin-rubocop' into 'master'
Pin RuboCop

See merge request os85/httpx!316
2024-01-03 15:57:24 +00:00
HoneyryderChuck
5a9113e445 https resolver: try remaining candidates on domain not found
in the face of multiple dns name candidates, the https resolver was not behaving like the native resolver in recursively trying them upon receiving domain-not-found types of errors
2024-01-03 15:51:40 +00:00
HoneyryderChuck
cc4b8d4c9e Merge branch 'github-34' into 'master'
fix Response#content_type doc

See merge request os85/httpx!318
2024-01-03 15:10:37 +00:00
HoneyryderChuck
890d4b8d50 linting test names 2024-01-03 14:26:09 +00:00
Earlopain
9afc138e25
Fix compatibility with Ruby 3.3 2024-01-03 13:07:02 +01:00
Earlopain
76737b3b99
Pin rubocop 2024-01-03 12:36:23 +01:00
Earlopain
5b570c21fb
Add ruby 3.3 to CI 2024-01-03 12:28:35 +01:00
HoneyryderChuck
31ec7a2ecf doc improvements 2023-12-27 14:51:09 +00:00
HoneyryderChuck
2e32aa6707 fix Response#content_type doc
https://github.com/HoneyryderChuck/httpx/issues/34
2023-12-27 14:49:55 +00:00
HoneyryderChuck
5feba82ffb fixing docs typos & syntax 2023-12-14 18:16:56 +00:00
HoneyryderChuck
1be8fdd1f0 bumped version to 1.2.0 2023-12-14 18:01:35 +00:00
HoneyryderChuck
4848e5be14 fix connection init call as per the new signature 2023-12-07 18:27:15 +00:00
HoneyryderChuck
c4b6df2637 Merge branch 'opts' into 'master'
Opts

See merge request os85/httpx!312
2023-12-06 15:13:51 +00:00
HoneyryderChuck
874bb6f1cf improve h2c to not misuse max_concurrent_requests
the h2c plugin was relying heavily on rewriting connection options to only allow the first request to upgrade; this changes it by instead changing the parser on the first incoming request, so that if it's an upgrade and contains the header, it blocks the remainder until the upgrade succeeds or fails, and if it's not, it reverts the max concurrent requests anyway
2023-12-06 14:24:33 +00:00
HoneyryderChuck
7842d075ad removed unreachable code 2023-12-05 23:20:48 +00:00
HoneyryderChuck
1bd7831c85 removed patch for jruby < 9.4.5.0 2023-12-05 23:20:14 +00:00
HoneyryderChuck
5816debef5 improved coverage of ssrf filter 2023-12-05 22:49:02 +00:00
HoneyryderChuck
97c44a37ae added webmock test for plain-text response with content-encoding header 2023-12-05 19:41:49 +00:00
HoneyryderChuck
3c060a4e8c simplifying connection initialization, while moving conn type calculation to init process 2023-12-05 19:41:49 +00:00
HoneyryderChuck
fb7302c361 simplify chunking applying logic 2023-12-05 17:36:50 +00:00
HoneyryderChuck
4670c94241 HTTP1: do not use intermediate buffer for constructing header 2023-12-05 17:36:50 +00:00
HoneyryderChuck
864a6cd2ae Merge branch 'issue-283' into 'master'
webmock fix: do not try to decode mocked responses

Closes #283

See merge request os85/httpx!314
2023-12-05 13:29:05 +00:00
HoneyryderChuck
815f3bd638 Merge branch 'github-26' into 'master'
promote HTTPProxyError to a ConnectionError so it is retriable

See merge request os85/httpx!311
2023-12-05 04:27:19 +00:00
HoneyryderChuck
c2e4e5030b Merge branch 'github-27' into 'master'
added tests for alternative/multiple requests APIs

See merge request os85/httpx!315
2023-12-05 04:26:41 +00:00
HoneyryderChuck
086e6bc970 added tests for alternative/multiple requests APIs 2023-12-05 00:34:56 +00:00
HoneyryderChuck
58fb2c2191 webmock fix: deregister mocked connection when real requests are turned on
when it's a real request on a webmocked connection, mocked state needs to go away, which includes the registered connect_error callback; this should be better refactored to not rely on private API, but for now, this moves the needle forward
2023-12-05 00:16:56 +00:00
HoneyryderChuck
8268b12a77 webmock fix: do not try to decode mocked responses
mocked responses are set up in plain text; in some cases, such as vcr integrations, they're auto-registered after the first successful request, where the content-encoding header is retained but the body has been decoded; when so, they're marked as mocked, and therefore the decoding step is skipped
2023-12-05 00:15:12 +00:00
HoneyryderChuck
290db4847a promote HTTPProxyError to a ConnectionError so it is retriable
while not all of them are recoverable, the ones that aren't are raised very early in the request establishment phase; for the ones that are, such as socks4 or 5 connection phase errors, they're retried
2023-12-03 04:49:11 +00:00
HoneyryderChuck
1e146e711c Merge branch 'issue-263' into 'master'
move callbacks to plugin

Closes #276 and #263

See merge request os85/httpx!292
2023-12-03 04:46:41 +00:00
HoneyryderChuck
f88322cdff connection: register callbacks before resolving, so force-reset works on early error
the ssrf filter tests surfaced an issue with these errors, which were leaving connections in the loop, a problem even more exacerbated now that inactive connections are kept. these are the kind of connections that can be immediately discarded
2023-12-03 04:39:56 +00:00
HoneyryderChuck
7a96cbe228 using force_reset on Exception handling from pool
it's what makes more sense, since these are supposed to be irrecoverable errors, so there's no point in a close handshake
2023-12-03 01:10:46 +00:00
HoneyryderChuck
7143245c37 raise error on user-code errors in callbacks
when users of the library code bugs in callbacks, they should not be ignored (as they were before this change), but they should also not be treated as timeouts and such, in that they should not be wrapped in an error response. they should fail loudly, i.e. raise

Closes #276
2023-12-02 00:49:52 +00:00
HoneyryderChuck
885bf947b5 deprecating callback methods on raw sessions 2023-12-02 00:49:52 +00:00
HoneyryderChuck
e29a91e7f7 move session callbacks to plugin 2023-12-02 00:49:51 +00:00
HoneyryderChuck
7878595460 skip ssrf test due to jruby bug 2023-12-02 00:46:27 +00:00
HoneyryderChuck
7a1cdd2c3d new rubocop, new needless linting... 2023-12-02 00:11:52 +00:00
HoneyryderChuck
9bab254710 Merge branch 'add-before-redirect-hook' into 'master'
Add `before_redirect` to `follow_redirects` plugin

Closes #272

See merge request os85/httpx!296
2023-12-02 00:05:32 +00:00
HoneyryderChuck
b32f936365 Merge branch 'issue-250' into 'master'
ssrf filter plugin

Closes #250

See merge request os85/httpx!291
2023-12-02 00:04:38 +00:00
HoneyryderChuck
4809e1d0d0 Merge branch 'options-improvements' into 'master'
adding new Options#merge implementation

See merge request os85/httpx!297
2023-11-29 17:17:12 +00:00
HoneyryderChuck
529daa3c6f Merge branch 'resolve-eden' into 'master'
Remove eden, keep single store of connections

See merge request os85/httpx!295
2023-11-29 17:16:59 +00:00
HoneyryderChuck
37314ec930 Merge branch 'ci-wiki-rubocop' into 'master'
Add job to run rubocop on code blocks in the wiki

See merge request os85/httpx!305
2023-11-27 22:38:23 +00:00
Earlopain
b38d8805a6
Add job to run rubocop on code blocks in the wiki 2023-11-26 13:18:23 +01:00
HoneyryderChuck
b2cfe285b4 Merge branch 'fix-links' into 'master'
Update a bunch of links

See merge request os85/httpx!309
2023-11-26 00:46:42 +00:00
HoneyryderChuck
36cab0c1af Merge branch 'rubocop-from-bundle' 2023-11-26 00:45:24 +00:00
HoneyryderChuck
793840f762 fixed integration test lint 2023-11-26 00:39:57 +00:00
HoneyryderChuck
a784941932 load rubocop from bundler, use it in CI, keep cache 2023-11-26 00:39:57 +00:00
HoneyryderChuck
ae14d6a9fe Merge branch 'ci-separate-rubocop-job' into 'master'
Run rubocop within its own job on CI

See merge request os85/httpx!310
2023-11-26 00:29:54 +00:00
HoneyryderChuck
f1bd41fada fixing datadog trace id extraction in tests
ddtrace 1.17.0 enables 128-bit trace ids by default
2023-11-26 00:26:48 +00:00
Earlopain
2760e588ac
Run rubocop within its own job on CI 2023-11-24 14:14:30 +01:00
Earlopain
c60ad23618
Fix example syntax error in readme
Fix the resulting rubocop offenses
2023-11-24 13:59:01 +01:00
Earlopain
9b3691b2bc
Update a bunch of links
Fix 404s, avoid redirects
2023-11-24 13:57:47 +01:00
HoneyryderChuck
1c64a31ac8 bump version to 1.1.5 2023-11-23 13:44:40 +00:00
HoneyryderChuck
290da6f1fe Merge branch 'github-23' into 'master'
ignore 103 early hints responses

See merge request os85/httpx!308
2023-11-22 23:45:55 +00:00
HoneyryderChuck
ea46cb08a4 Merge branch 'rb2p7' into 'master'
Allow pattern matching for Ruby 2.7

See merge request os85/httpx!307
2023-11-22 23:45:26 +00:00
HoneyryderChuck
8ec98064a1 ignore 103 early hints responses
these are interesting for browsers, but I can't seem to find a use-case for an http client. it also breaks under HTTP/2, where the final response would have the 103 headers and the 200 response body
2023-11-22 23:30:38 +00:00
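The skipping behaviour above boils down to passing over informational 1xx statuses until a final status arrives; a trivial sketch (the helper is illustrative):

```ruby
# Illustrative sketch: 1xx statuses (such as 103 early hints) carry no
# final result; keep reading until the first final (>= 200) status.
def final_status(statuses)
  statuses.find { |status| status >= 200 }
end

final_status([103, 200]) # => 200
```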
Brian Koh
b8f0d0fbcd Allow pattern matching for Ruby 2.7 2023-11-22 23:14:50 +08:00
HoneyryderChuck
911a27b20a Merge branch 'issue-280' into 'master'
stream plugin: fix #each_line not yielding last chunk

Closes #281, #282, and #280

See merge request os85/httpx!306
2023-11-22 11:29:58 +00:00
HoneyryderChuck
a586dd0d44 disabling runtime type-checking for webmock and ddtrace tests
the pattern used to override the session class doesn't seem to be supported by rbs runtime type checking code
2023-11-22 11:15:25 +00:00
HoneyryderChuck
79756e4ac4 small cleanup in type definitions and webmock testing 2023-11-22 11:07:54 +00:00
HoneyryderChuck
354bba3179 making grpc code more shape-friendly 2023-11-21 10:21:44 +00:00
HoneyryderChuck
b0dfe68ebe stream plugin: do not cache intermediate responses
this had the effect of storing redirect responses and using them solely for inferences in the each-chunk block, instead of the final response

Closes #282
2023-11-21 10:21:13 +00:00
HoneyryderChuck
fa513a9ac9 stream plugin: fix #each loop when used with webmock
when response would be called inside the #each block, the webmock trigger would inject the body before attaching the response object to the request, thereby retriggering #each in a loop

Closes #281
2023-11-21 10:08:29 +00:00
HoneyryderChuck
716e98af5b stream plugin: fix #each_line not yielding last chunk
the last line of the payload wasn't being yielded unless the last character of the payload was a newline. this was overlooked for a time due to the stream plugin being built for the text/event-stream mime type, which follows that rule, as per what the tests cover.
2023-11-20 22:38:47 +00:00
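A minimal, self-contained sketch of the fixed behaviour (not httpx's actual implementation): complete lines are yielded as chunks arrive, and whatever remains in the buffer is flushed once the payload ends, even without a trailing newline.

```ruby
# Illustrative line-splitting over streamed chunks.
def each_line(chunks)
  return enum_for(__method__, chunks) unless block_given?

  buffer = +""
  chunks.each do |chunk|
    buffer << chunk
    # Yield every complete line currently in the buffer.
    while (idx = buffer.index("\n"))
      yield buffer.slice!(0, idx + 1).chomp
    end
  end
  # The previously-missing step: flush the trailing partial line.
  yield buffer unless buffer.empty?
end

each_line(["a\nb", "c"]).to_a # => ["a", "bc"]
```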
HoneyryderChuck
6437b4b5fb bump version to 1.1.4 2023-11-20 10:16:04 +00:00
HoneyryderChuck
ce5c2c2f21 Merge branch 'master' of gitlab.com:os85/httpx 2023-11-20 10:09:53 +00:00
HoneyryderChuck
4eb1ccb532 Merge branch 'issue-278' into 'master'
stream plugin fix: do not preempt request

Closes #278

See merge request os85/httpx!304
2023-11-20 10:03:48 +00:00
HoneyryderChuck
b0e1e2e837 datadog: use Gem::Version for comparisons 2023-11-20 10:02:43 +00:00
HoneyryderChuck
ee66b7e5cc stream plugin fix: do not preempt request
while stream requests are lazy, they were nonetheless being enqueued before any function was called. this was not great behaviour, as they could perhaps never be called; it also interfered with how other plugins inferred finished responses, such as the webmock adapter and follow_redirects. Another flaw in the grpc plugin was fixed as a result, given that bidirectional streams were actually being buffered
2023-11-19 23:58:27 +00:00
HoneyryderChuck
b82e57c281 add test for integration of webmock with follow_redirects and stream plugins 2023-11-19 22:43:30 +00:00
HoneyryderChuck
aa4f267a29 altsvc: pre-purge was just removing requests, whereas below they were being rerouted already 2023-11-19 22:38:27 +00:00
HoneyryderChuck
ef3ae2a38e fix grpc logic merging metadata with credentials
header merge logic changed, and because Headers#initialize and Headers#merge logic is a bit different, it's safer to account everything as having string keys
2023-11-19 22:38:27 +00:00
HoneyryderChuck
78c29804a1 fixing cookie-header-to-jar logic on options merge
because options can be now duped without being initialized
2023-11-19 22:38:27 +00:00
HoneyryderChuck
cce68bcd98 moved altsvc-specific connection behaviour to mixin
this mixin applies only for connections built via Session#build_altsvc_connection. This moves out logic which was always being called on the hot path for connections which hadn't been alt-svc enabled, which improves the Connection#match? bottleneck.
2023-11-19 22:38:25 +00:00
HoneyryderChuck
a27f735eb8 adding new Options#merge implementation
the new merge strategy tries to avoid allocating new objects, whereas the old one relied on transforming objects to hashes for merging, then back to Options objects, which just generated too much garbage. So the new one keeps the merging object as a hash if it can, and tries to bail out if there's nothing new to merge (empty or same objects). If there is new stuff, a shallow dup is called (dup will not dup all attributes by default, more on that later), and new attributes are then passed through the transformation-then-set pipeline (which duplicates this logic in two places now)

Because .dup creates a full shallow copy, extending classes for plugins need to be taken into account, and these must also be duped when extendable. This has the benefit of sharing more classes across plugins
2023-11-19 22:37:20 +00:00
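The bail-out strategy described above can be sketched with a hypothetical, much-simplified `Options`: return `self` when there's nothing new to merge, and allocate only when the merge actually changes something.

```ruby
# Illustrative sketch of allocation-avoiding merge (not httpx's actual
# Options class).
class Options
  attr_reader :to_h

  def initialize(opts = {})
    @to_h = opts
  end

  def merge(other)
    h = other.is_a?(Options) ? other.to_h : other
    return self if h.empty? || h == @to_h # nothing new: no allocation

    Options.new(@to_h.merge(h)) # shallow copy only when needed
  end
end

opts = Options.new(timeout: 5)
opts.merge({}).equal?(opts) # => true, same object reused
```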
HoneyryderChuck
abe4997d44 moving logic to compare options ivars into separate function 2023-11-19 22:37:20 +00:00
HoneyryderChuck
1c7881eda3 Options#== improvement: bailout early when there's mismatch of ivars 2023-11-19 22:37:20 +00:00
HoneyryderChuck
5be39fe60e moar options merge tests 2023-11-19 22:37:20 +00:00
HoneyryderChuck
02c1917004 Merge branch 'fix-auth-plugin-links' into 'master'
Fix auth plugins wiki links

See merge request os85/httpx!303
2023-11-19 22:36:44 +00:00
Earlopain
20164c647b
Fix auth plugins wiki links 2023-11-18 18:36:52 +01:00
Earlopain
8290afc737
Add redirect_on to follow_redirects plugin
Returning false from this callable will result in the redirect not
being followed.

Closes #272
2023-11-18 11:36:01 +01:00
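The gate added above can be sketched generically (the helper name is hypothetical, not httpx's internals): a user-supplied callable receives the redirect location, and returning false vetoes following it.

```ruby
require "uri"

# Illustrative sketch of the redirect_on gate.
def follow_redirect?(location, redirect_on)
  return true unless redirect_on # no gate configured: follow as before

  redirect_on.call(location) != false
end

same_host = ->(uri) { uri.host == "example.com" }
follow_redirect?(URI("https://example.com/next"), same_host) # => true
follow_redirect?(URI("https://evil.test/next"), same_host)   # => false
```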
HoneyryderChuck
95681aa86e bump version to 1.1.3 2023-11-17 23:58:30 +00:00
HoneyryderChuck
c7431f1b19 ssrf filter plugin
a plugin which makes requests fail when they are crafted to
use IPs considered internal or reserved for specific usages. these SSRF
vulnerabilities happen when one allows requests with urls input by an
external user.

This plugin is inspired, and heavily makes use of routines existing in
the ssrf_filter gem: https://github.com/arkadiyt/ssrf_filter/ .
2023-11-17 23:40:01 +00:00
HoneyryderChuck
6106f5cd43 allow early resolve errors to bubble up the session just like lack of nameserver 2023-11-17 23:39:59 +00:00
HoneyryderChuck
b6611ec321 bugfix: protect all find-connection-and-send-request calls from early-resolve errors
httpx uses throw/catch in order to save from so-called early resolve errors, i.e. errors which may happen before the name resolution process is either early-complete or set up, such as when there are no nameservers (internet turned off), and the requests were piped into the connection, which means they're outside of the 'on_error' callback reach. these errors were only covered on the initial send flow, i.e. in other situations when new connections may have to be established and may early-fail, the throw would not be caught, and would reach user code
2023-11-17 23:38:39 +00:00
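The throw/catch mechanism described above can be sketched generically (names are illustrative, not httpx's): the resolve step throws a tagged error instead of raising, and every call site that may trigger name resolution must wrap itself in a matching catch.

```ruby
# Illustrative sketch of the early-resolve-error rescue pattern.
def resolve(host)
  throw(:resolve_error, StandardError.new("no nameserver")) if host.empty?
  "93.184.216.34"
end

def send_request(host)
  error = catch(:resolve_error) do
    return resolve(host) # happy path: returns the resolved address
  end
  error # the thrown error surfaces here instead of reaching user code
end
```

An uncaught `throw` raises `UncaughtThrowError` at the call site, which is exactly the "reach user code" failure the commit guards every send path against.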
HoneyryderChuck
9636e58bec Merge branch 'issue-277' into 'master'
fix usage of IPv6 in urls in systems with IPv6 set up but no outer connectivity

Closes #277

See merge request os85/httpx!302
2023-11-17 17:16:21 +00:00
HoneyryderChuck
ca602ed936 fix usage of IPv6 in urls in systems with IPv6 set up but no outer connectivity
the name resolution code was making the usage of IPs dependent on the existence of a DNS resolver, but there are situations where users use the IP directly, and in such a case, when IPv4-only DNS is possible **but** IPv6 loopback/link-local is available, one should still provide support for it
2023-11-17 16:58:53 +00:00
HoneyryderChuck
fb6b5d0887 Merge branch 'add-rubocop-md' into 'master'
Add rubocop-md to check ruby code blocks

See merge request os85/httpx!301
2023-11-17 16:50:37 +00:00
HoneyryderChuck
5faf8fa050 Merge branch 'issue-273' into 'master'
remove authorization header when redirecting to different-origin urls

Closes #273

See merge request os85/httpx!300
2023-11-17 15:25:16 +00:00
HoneyryderChuck
ffb24f71c6 remove authorization header when redirecting to different-origin urls
this is an old vuln fixed in curl (https://github.com/advisories/GHSA-7xmh-mw7w-rr97), which has been fixed for a long time, where credentials via authorization header would be resent on all follow location requests; this limits it to same-origin redirects; an option, "auth_to_other_origins", can be used to keep original behaviour
2023-11-17 15:16:52 +00:00
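The same-origin check described above can be sketched as follows (the helper and keyword are illustrative; the commit names an "auth_to_other_origins" option to opt back into the old behaviour):

```ruby
require "uri"

# Illustrative sketch: keep the authorization header only when the
# redirect stays on the same scheme/host/port origin.
def keep_authorization?(from, to, auth_to_other_origins: false)
  return true if auth_to_other_origins # opt back into the old behaviour

  [from.scheme, from.host, from.port] == [to.scheme, to.host, to.port]
end

keep_authorization?(URI("https://api.example/a"), URI("https://api.example/b"))   # => true
keep_authorization?(URI("https://api.example/a"), URI("https://other.example/b")) # => false
```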
HoneyryderChuck
a9ecbec6f1 Merge branch 'issue-271' into 'master'
fix: stream + follow_redirects plugins working now

Closes #271

See merge request os85/httpx!299
2023-11-17 15:06:37 +00:00
Earlopain
5f8bc74f0b
Add rubocop-md to check ruby code blocks 2023-11-17 09:33:05 +01:00
HoneyryderChuck
8b80f15ee7 bugfix: allow stream responses to decode compressed content as well 2023-11-16 17:00:26 +00:00
HoneyryderChuck
0d24204b83 Merge branch 'remove-mutex_m' into 'master'
Remove dependency on mutex_m

Closes #274

See merge request os85/httpx!298
2023-11-16 12:38:22 +00:00
Earlopain
ac21f563de Remove dependency on mutex_m 2023-11-16 12:38:22 +00:00
HoneyryderChuck
55c71e2b80 remove unreachable code (@response never set) 2023-11-16 11:29:34 +00:00
HoneyryderChuck
c150bd1341 fix: stream + follow_redirects plugins working now
stream responses weren't following redirects when both plugins were
loaded. This was due to the stream callback object not being passed
across the redirect chain.
2023-11-16 11:29:34 +00:00
HoneyryderChuck
ce7eb0b91a out with eden connections, keep closed around
connection bookkeeping on the pool changes: all connections are kept around, even the ones that close during the scope of the requests; new requests may then find them, reset them, and reselect them. this is a major improvement, as objects get reused more, so less gc and object movement. this also changes the way the pool terminates, as connections now follow a termination protocol, instead of just closing (which they can while the scope is open)
2023-11-15 10:37:38 +00:00
HoneyryderChuck
b24ed83a8b try to flush HTTP/2 final frame on transition to closed
improves certain jumps to closed where one can skip yet another callback
2023-11-15 10:37:38 +00:00
HoneyryderChuck
0d9a8d76fc always emit :close event implicitly
this code was scattered all over for no reason; only one instance was not doing it, which needs refactoring
2023-11-15 10:37:38 +00:00
HoneyryderChuck
187bdbc20f refactor exhausted event handling
now, when a connection gets exhausted, the same object is reused, and reconnection & reselection is handled without having to redrive all requests again, so less work to do
2023-11-15 10:37:38 +00:00
HoneyryderChuck
bb3183a0b8 redo reset flow coming from parser
this is only used by http/1 connections; still, this is now adapted to the new reality of picking up closed-but-in-the-loop connections, in that the reset process picks up requests left off, transitions to closed, then moves back to idle if there's a request backlog to deal with
2023-11-15 10:37:38 +00:00
HoneyryderChuck
100394b29c adding :close_handshake_timeout timeout option
used to monitor readiness of connection to write the last goaway frame from HTTP/2
2023-11-15 10:37:37 +00:00
HoneyryderChuck
7345c19d5d pass project name to wiki layout 2023-11-14 23:20:38 +00:00
HoneyryderChuck
801e0aa907 remove versioning for 0.x from the readme 2023-11-14 14:20:48 +00:00
HoneyryderChuck
0910c2749b bumped version to 1.1.2 2023-11-14 13:40:04 +00:00
HoneyryderChuck
300cb83ab8 Merge branch 'issue-265' into 'master'
fix super call in sentry adapter

Closes #265

See merge request os85/httpx!294
2023-11-12 15:56:34 +00:00
HoneyryderChuck
ca6fa4605b sentry: do not propagate trace when sdk options are not set correctly 2023-11-12 15:42:26 +00:00
HoneyryderChuck
1bebb179ce load httpx sentry patch for tests 2023-11-12 15:31:38 +00:00
HoneyryderChuck
8632da0a22 name sentry patch 2023-11-12 12:06:42 +00:00
HoneyryderChuck
a864db0182 CI: support recent localstack health payload change 2023-11-12 11:31:45 +00:00
HoneyryderChuck
fcf41b990e fix super call in sentry adapter 2023-11-10 18:28:55 +00:00
HoneyryderChuck
4c01dd0b9b do not force to close a connection which has been closed already 2023-11-06 23:34:10 +00:00
HoneyryderChuck
bea2c4d5c6 eden connections should only reset to idle once they are picked up 2023-11-06 23:33:55 +00:00
HoneyryderChuck
f442e81414 bump version to 1.1.1 2023-11-06 17:17:05 +00:00
HoneyryderChuck
18f2bea9b0 Merge branch 'issue-261' into 'master'
reset timer baseline interval when adding new timers

Closes #261

See merge request os85/httpx!290
2023-11-06 16:36:53 +00:00
HoneyryderChuck
f6bee9e6e4 Merge branch 'issue-257' into 'master'
DNS retries to native resolver

Closes #257

See merge request os85/httpx!293
2023-11-06 12:03:52 +00:00
HoneyryderChuck
d9a52ec795 readding DNS retries to native resolver
when they fail once, the whole thing crumbles, which breaks rate limit strategies from some known software

Fixes #257
2023-11-06 09:56:40 +00:00
Nogweii
4b074a6d8a fix squid crashing on my Arch laptop 2023-11-04 16:55:11 +00:00
HoneyryderChuck
791a94322f resolver: fix for when nested lookup call returns nil 2023-11-04 16:54:59 +00:00
HoneyryderChuck
3cd063b153 Merge branch 'issue-gh-18' into 'master'
Fix close callback leak

See merge request os85/httpx!289
2023-11-04 16:35:01 +00:00
HoneyryderChuck
9a64fadb56 updating example scripts 2023-11-04 16:22:53 +00:00
HoneyryderChuck
e178bc9f20 remove duplicated conn close handler, it's set already in init_connection 2023-11-04 16:22:53 +00:00
HoneyryderChuck
4ef2d9c3ce do not remove ivars anymore 2023-11-04 16:22:53 +00:00
HoneyryderChuck
39d0356340 no consumer of connection reset event, so no emission required 2023-11-04 02:22:32 +00:00
HoneyryderChuck
1e05cdbe62 http/1.1 fix: close connection even if the server does not respect connection: close in request 2023-11-04 02:21:03 +00:00
HoneyryderChuck
e27301013d patching the setup of the on close callback instead
the previous patch allowed the callback to be called only once, whereas this one will be long-lived for the duration of the connection
2023-11-03 22:48:55 +00:00
HoneyryderChuck
f477871bfa reset timer baseline interval when adding new timers
due to how read timeouts are added on request transitions, timers may
enter the pool **before** a new tick happens, and are therefore
accounted for when the timers are fired after the current tick.

This patch resets the timer, which will force a new tick before they may
fire again.
2023-11-03 12:12:05 +00:00
HoneyryderChuck
fac8a62037 bail out on dns answer when connection already closed
there are situations where a connection may already be closed before the dns response is received for it. one such example is connection coalescing: when happy eyeballs takes over, the first address arrives, a coalescing situation is detected, and then the connection and its happy eyeballs cousin are both closed, **before** the coalesced connection has been resolved
2023-11-02 23:11:27 +00:00
Thomas Hurst
ec7b845c67 Fix close callback leak
Per Github issue #18, this causes a linear performance decrease, with
each connection slightly slowing the next.
2023-11-02 02:00:01 +00:00
HoneyryderChuck
ce07b2ff50 bumped version to 1.1.0 2023-10-30 11:27:20 +00:00
HoneyryderChuck
c2bd6c8540 Merge branch 'issue-260' into 'master'
add support in json mime type checker for application/hal+json

Closes #260

See merge request os85/httpx!288
2023-10-30 10:35:28 +00:00
HoneyryderChuck
1aa2b08db7 add support in json mime type checker for application/hal+json 2023-10-30 10:27:32 +00:00
HoneyryderChuck
14c94e6d14 Merge branch 'issue-251' into 'master'
Add Response#peer_address and ErrorResponse#peer_address

Closes #251

See merge request os85/httpx!286
2023-10-30 10:13:52 +00:00
HoneyryderChuck
8f54afe7b3 Merge branch 'issue-252' into 'master'
test for follow redirect with relative paths

Closes #252

See merge request os85/httpx!287
2023-10-30 10:13:42 +00:00
HoneyryderChuck
9465a077b1 Add Response#peer_address and ErrorResponse#peer_address
responses can now expose the IP address used to connect to the peer
server to fetch the response from.
2023-10-30 09:52:30 +00:00
HoneyryderChuck
168e530dab tolerate not having openssl installed 2023-10-29 22:56:37 +00:00
HoneyryderChuck
159fa74a3f add test to verify that redirect plugin can follow relative and with .. paths 2023-10-29 22:46:45 +00:00
HoneyryderChuck
5bb74ec465 expose the io object via the TCP#socket method, which will be an SSLSocket for SSL case 2023-10-29 22:34:39 +00:00
HoneyryderChuck
949bcdbc2a Merge branch 'issue-255' into 'master'
Fixing timeouts performance regression

See merge request os85/httpx!282
2023-10-28 22:30:14 +00:00
HoneyryderChuck
ceaa994eba patch jruby until 9.4.5.0 is released 2023-10-27 18:19:59 +01:00
HoneyryderChuck
489c7280ec cleaned up timeout setup logic by using a shared function for the set/unset phases 2023-10-25 07:44:40 +01:00
HoneyryderChuck
c5fc8aeb19 simplify initialization of request buffer 2023-10-25 07:44:40 +01:00
HoneyryderChuck
d5e469d6c6 removing threshold size var from req body 2023-10-25 07:44:40 +01:00
HoneyryderChuck
bc99188c80 adding persistent= setter to Request
this avoids the creation of another options object
2023-10-25 07:44:40 +01:00
HoneyryderChuck
6176afbf2c removing unneeded var 2023-10-24 22:53:59 +01:00
HoneyryderChuck
1cc9d4f04b fixing recovering from exhausted connections for HTTP/1
this had been working for a while, but was silently failing in HTTP/1, due to our inability to test it in CI (the HTTP/1 setup is not yet using keep-alive)
2023-10-24 22:53:22 +01:00
HoneyryderChuck
62217f6a76 added connect nonblock state to internal logs 2023-10-24 22:53:22 +01:00
HoneyryderChuck
e4facd9b7a defaulting max requests to infinity
this limit wasn't doing any favours to anyone, particularly during benchmarks
2023-10-24 22:53:22 +01:00
HoneyryderChuck
ba8b4a4bc9 optimization: try connecting on #call
this avoids a needless select syscall on connection establishment
2023-10-24 22:53:22 +01:00
HoneyryderChuck
82a0c8cf11 fix faraday adapter timeout setup
do not set them all as operation timeout
2023-10-24 22:53:22 +01:00
HoneyryderChuck
bdc9478aa8 do not use INFINITY for timeouts
it isn't a valid input for IO#wait family of functions; instead, use nil
2023-10-24 22:53:22 +01:00
HoneyryderChuck
8bd4dc1fbd fix timers overhead causing spurious wakeups on the select loop
the change to read/write cancellation-driven timeouts as the default
timeout strategy revealed a performance regression; because these were
built on Timers, which never got unsubscribed, this meant that they were
kept beyond the duration of the request they were created for, and
needlessly got picked up for the next timeout tick.

This was fixed by adding a callback on timer intervals, which
unsubscribes them from the timer group when called; these would then be
activated after the timeout is not needed anymore (request send /
response received), thereby removing the overhead on subsequent
requests.

An additional intervals array is also kept in the connection itself;
timeouts from timers are signalled via socket wait calls, however they
were always resulting in timeouts, even when they shouldn't (ex: expect
timeout and send full response payload as a result), and with the wrong
exception class in some cases. By keeping intervals from its requests
around, and monitoring whether there are relevant request triggers, the
connection can therefore handle a timeout or bail out (so that timers
can fire the correct callback).
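a minimal sketch (all names hypothetical, not the library's internals) of the unsubscribe-on-completion idea described above: registering a timeout returns a canceller, so a timer tied to one request is removed once the request completes and no longer wakes up the select loop on later ticks.

```ruby
# toy timer group: intervals carry a callback, and registration hands
# back a lambda that unsubscribes the interval again
class TimerGroup
  Interval = Struct.new(:deadline, :callback)

  def initialize
    @intervals = []
  end

  def after(deadline, &callback)
    interval = Interval.new(deadline, callback)
    @intervals << interval
    -> { @intervals.delete(interval) } # canceller
  end

  # fire every interval whose deadline has passed
  def fire(now)
    due, @intervals = @intervals.partition { |i| i.deadline <= now }
    due.each { |i| i.callback.call }
  end
end

fired = []
timers = TimerGroup.new
cancel = timers.after(1.0) { fired << :request1_timeout }
timers.after(2.0) { fired << :request2_timeout }

cancel.call      # request 1 finished in time: its timer is unsubscribed
timers.fire(3.0) # only the still-subscribed timer fires
fired # => [:request2_timeout]
```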
2023-10-24 22:53:22 +01:00
HoneyryderChuck
dbc7536724 fix for timeouts performance regression
since v1, performance has regressed due to the new default timeouts,
which are based on timers. That's because they were not being cleaned
up after requests were done, and were causing spurious wakeups in the
select loop after the fact.

Fixed by cleaning up each timer after each relevant request event.
2023-10-24 22:53:22 +01:00
HoneyryderChuck
062109a5bc Merge branch 'doc-improvements' into 'master'
doc to most public accessible classes

See merge request os85/httpx!279
2023-10-24 16:05:34 +00:00
HoneyryderChuck
09a3df54c4 Merge branch 'issue-github-19' into 'master'
set ndots to 1 when none parsed from resolv.conf

See merge request os85/httpx!283
2023-10-24 14:04:54 +00:00
Jonas Mueller
554b5a663c Return domain name early if ASCII only 2023-10-24 15:00:11 +01:00
HoneyryderChuck
0cb169afab set ndots to 1 when none parsed from resolv.conf
as per https://www.man7.org/linux/man-pages/man5/resolv.conf.5.html, default is 1
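a rough sketch of the rule, assuming a plain-text resolv.conf as input (helper name illustrative): when no "options ndots:N" entry is present, fall back to the documented default of 1.

```ruby
# return the ndots value from a resolv.conf string, defaulting to 1
def parse_ndots(resolv_conf)
  resolv_conf[/^options\s.*\bndots:(\d+)/, 1]&.to_i || 1
end

parse_ndots("nameserver 127.0.0.53\n")     # => 1
parse_ndots("options ndots:5 timeout:2\n") # => 5
```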
2023-10-21 00:02:25 +01:00
HoneyryderChuck
61ce888e47 Merge branch 'fix-readme' into 'master'
Fix basic_auth doc in README.md

See merge request os85/httpx!280
2023-10-16 21:11:34 +00:00
Chulki Lee
e8f1657821 Update file README.md 2023-10-16 17:48:45 +00:00
HoneyryderChuck
f089d57d7d added rdoc to most public accessible classes 2023-10-14 16:02:11 +01:00
HoneyryderChuck
2de2b026be bump release to 1.0.2 2023-10-13 18:12:20 +01:00
HoneyryderChuck
9d3dd72b80 fixing datadog min version 2023-10-13 18:06:33 +01:00
HoneyryderChuck
c1da8d29fc readded support for older datadog versions... 2023-10-13 17:52:42 +01:00
HoneyryderChuck
1fa9846f56 set min versio of http-2-next to 1.0.1 2023-10-13 17:04:11 +01:00
HoneyryderChuck
ba6fc820b7 bump version to 0.24.7 2023-10-13 16:55:09 +01:00
HoneyryderChuck
16ecdd2b57 readded support for older datadog versions... 2023-10-13 16:54:22 +01:00
HoneyryderChuck
2896134f67 Merge branch 'http-2-next-patch' into 'master'
http/2: do not interpret MAX_CONCURRENT_STREAMS as request cap

See merge request os85/httpx!278
2023-10-13 15:07:34 +00:00
HoneyryderChuck
97a34cfcbc wip: using master 2023-10-13 15:56:10 +01:00
HoneyryderChuck
ca75148e86 http/2: do not interpret MAX_CONCURRENT_STREAMS as request cap
a misinterpretation of the spec on http-2-next led to the introduction
of the max_requests option, a cap of requests on a given connection,
which in http/2 case, would be initialized with MAX_CONCURRENT_STREAMS,
which means something else.

This has been fixed already in http-2-next, and this is the summary of
changes required to support it.

The `max_requests` option is kept, as it can still be useful from a user
perspective, but the default in http/2 is now INFINITY, which disables
it effectively. The HTTP/1 cap is bumped to 200, but it may be
lowered as well soon.
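a toy illustration (not the library's code) of the distinction above: MAX_CONCURRENT_STREAMS caps streams open *at the same time*, not the total number of requests a connection may ever serve.

```ruby
# split requests into waves, each wave holding at most
# max_concurrent_streams in-flight streams
def schedule_in_waves(requests, max_concurrent_streams:)
  waves = []
  pending = requests.dup
  waves << pending.shift(max_concurrent_streams) until pending.empty?
  waves
end

waves = schedule_in_waves((1..10).to_a, max_concurrent_streams: 4)
waves.map(&:size) # => [4, 4, 2], all 10 requests still served
```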
2023-10-13 15:56:10 +01:00
HoneyryderChuck
834873638d Merge branch 'iaddict-master-patch-44799' into 'master'
Fix and enhance grpc method definition

See merge request os85/httpx!277
2023-10-12 22:16:34 +00:00
HoneyryderChuck
4618845a97 fix datadog version used 2023-10-12 23:07:01 +01:00
HoneyryderChuck
5db6e28534 linting updates 2023-10-12 22:39:26 +01:00
HoneyryderChuck
fb86669872 fixing datadog integration, removing support for versions older than 1.13.0 2023-10-12 18:56:41 +01:00
Thomas Steinhausen
013f24ba80 Test camel case grpc procedure names
Show that a camel case procedure can be called with an underscored name.
2023-10-11 10:54:27 +02:00
Thomas Steinhausen
96eae65da1 Fix and enhance rpc method definition
Snake case named procedures could not be called. Now two methods are defined, where one is underscore-named and the second has the original procedure name as called on the service.
2023-10-10 17:55:48 +00:00
HoneyryderChuck
a3ac1993e9 bumped version to 1.0.1 2023-10-04 15:07:02 +01:00
HoneyryderChuck
5ca0dcdf8d Merge branch 'issue-249' into 'master'
bugfix: do not inflate empty chunks

Closes #249

See merge request os85/httpx!276
2023-10-04 14:06:31 +00:00
HoneyryderChuck
8a66233148 bugfix: do not inflate empty chunks
Closes #249
2023-10-04 14:56:01 +01:00
HoneyryderChuck
377abc84c7 bump version to 1.0.0 2023-10-03 13:46:52 +01:00
HoneyryderChuck
ede4ccdf30 bumping timeout for webdav lock 2023-10-03 12:19:12 +01:00
HoneyryderChuck
7e06957cc2 using ghcr.io/graalvm/truffleruby-community namespace for truffleruby image 2023-10-03 10:57:29 +01:00
HoneyryderChuck
ad7da6edfa bumping http-2-next version 2023-10-03 10:53:50 +01:00
HoneyryderChuck
62868f64b3 Merge branch 'c-breaker' 2023-09-29 15:01:00 +01:00
HoneyryderChuck
09be632cd9 circuit breaker: use Enumerator#with_object, treat uris as strings to avoid allocation 2023-09-29 10:29:09 +01:00
HoneyryderChuck
803718108e protect circuit store access with a mutex
a session object may be used from different threads, we want the same rules to apply in such a case
2023-09-28 22:56:03 +01:00
HoneyryderChuck
f8020b9c10 bump version to 0.24.6 2023-09-28 12:40:04 +01:00
HoneyryderChuck
11210e3a23 Merge branch 'v1' into 'master'
circuit breaker improvs

See merge request os85/httpx!275
2023-09-28 11:32:22 +00:00
HoneyryderChuck
c48969996e fix for jruby returning empty string in alpn_protocol 2023-09-28 10:57:43 +01:00
HoneyryderChuck
c7ccc9eaf6 prepare for base64 being removed from default gems 2023-09-27 18:12:56 +01:00
HoneyryderChuck
e4869e1a4b circuit-breaker plugin. fix half-open decision to emit real request
the previous logic was relying on a random order which didn't work in practice; instead, one now reuses the max-attempts to define how many requests happen in the half-open state, and the drip rate defines how many of them will be real
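a sketch of the half-open drip described above (function and symbol names hypothetical): max_attempts fixes how many probe slots the half-open state allows, and the drip rate the fraction of those that carry a real request.

```ruby
# build the probe slots for the half-open state: a deterministic mix
# of real requests and short-circuited ones
def half_open_slots(max_attempts, drip_rate)
  real = (max_attempts * drip_rate).round
  Array.new(real, :real) + Array.new(max_attempts - real, :short_circuit)
end

slots = half_open_slots(10, 0.2)
slots.count(:real)          # => 2
slots.count(:short_circuit) # => 8
```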
2023-09-27 18:12:56 +01:00
HoneyryderChuck
dd84195db6 bump coverage by testing more edge cases
mime type detector using file, no idnx
2023-09-27 11:59:52 +01:00
HoneyryderChuck
d856ae81e0 added missing release notes 2023-09-27 11:59:52 +01:00
HoneyryderChuck
1494ba872a Merge branch 'v1' into 'master'
1.0.0

See merge request os85/httpx!270
2023-09-20 17:19:53 +00:00
HoneyryderChuck
685e6e4c7f allow multipart requests to accept tempfile
in fact, anything responding to .path, .eof?, .rewind and .read can be accepted
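since the duck type above needs only .path, .eof?, .rewind and .read, a plain object works as well as a Tempfile. A hypothetical minimal example (class name illustrative, not part of httpx):

```ruby
require "stringio"

# minimal object satisfying the multipart duck type
class FakeUpload
  attr_reader :path

  def initialize(path, data)
    @path = path
    @io = StringIO.new(data)
  end

  def eof?
    @io.eof?
  end

  def rewind
    @io.rewind
  end

  def read(*args)
    @io.read(*args)
  end
end

file = FakeUpload.new("report.csv", "a,b\n1,2\n")
file.read    # => "a,b\n1,2\n"
file.eof?    # => true
file.rewind
file.read(3) # => "a,b"
```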
2023-09-20 17:57:41 +01:00
HoneyryderChuck
085cec0c8e improve coverage and simplified faraday adapter
and some other modules
2023-09-20 17:57:41 +01:00
HoneyryderChuck
288ac05508 fix: proxy plugin broke when processing a 305 use proxy redirect
the proxy plugin contained an enhancement, when used with the follow_redirects plugin, which retries a request over the received proxy. This contained a bug, which was now caught with the added test
2023-09-20 17:57:41 +01:00
HoneyryderChuck
c777aa779e test socks5 no auth methods error path 2023-09-20 17:57:41 +01:00
HoneyryderChuck
d55bfec80c fix: system resolv timeout raise ResolveTimeoutError instead of ResolveError 2023-09-20 17:57:41 +01:00
HoneyryderChuck
e88956a16f improving coverage of tests for proxy module 2023-09-20 17:57:41 +01:00
HoneyryderChuck
aab30279ac allow default errors catch up besides retry on 2023-09-20 17:57:41 +01:00
HoneyryderChuck
2f9247abfb use default HTTP/2 handshake strategy for grpc 2023-09-20 17:57:41 +01:00
HoneyryderChuck
0d58408c58 compression plugins for gzip and deflate supported by default
most of the code was moved to the transcoder layer.

The `compression_threshold_size` option has been removed.

The `:compression/brotli` plugin becomes only `:brotli`, and depends on
the new transcoding APIs.

options to skip compression and decompression were added.
2023-09-20 17:57:41 +01:00
HoneyryderChuck
3f73d2e3ce multipart supported by default
the plugin was now moved to the transcoder layer, where it is available
from the get-go.
2023-09-20 17:57:41 +01:00
HoneyryderChuck
896914e189 lint change 2023-09-20 17:57:41 +01:00
HoneyryderChuck
4f587c5508 renaming authentication modules to just auth
* `:authentication` plugin becomes `:auth`
  * `authentication` helper becomes `authorization`
* `:basic_authentication` plugin becomes `:basic_auth`
  * `:basic_authentication` helper is removed
* `:digest_authentication` plugin becomes `:digest_auth`
  * `:digest_authentication` helper is removed
* `:ntlm_authentication` plugin becomes `:ntlm_auth`
  * `:ntlm_authentication` helper is removed
2023-09-20 17:57:41 +01:00
HoneyryderChuck
a9cb0a69a2 setting :read_timeout and :write_timeout by default 2023-09-20 17:57:41 +01:00
HoneyryderChuck
6baca35422 support has been removed 2023-09-20 17:57:41 +01:00
HoneyryderChuck
b4c5e75705 drop faraday adapter support for faraday lower than v1 2023-09-20 17:57:41 +01:00
HoneyryderChuck
d859c3a1eb remove support for older (< v1) versions of ddtrace in the datadog plugin 2023-09-20 17:57:41 +01:00
HoneyryderChuck
b7f5a3dfad adding release notes with latest updates 2023-09-20 17:57:41 +01:00
HoneyryderChuck
8cd1aac99c remove deprecated APIs 2023-09-20 17:57:39 +01:00
HoneyryderChuck
f0f6b5f7e2 removed punycode ruby implementation inherited from domain_name
it's IDNA 2003 compliant only, and people can already load idnx
optionally.
2023-09-20 17:57:05 +01:00
HoneyryderChuck
acbc22e79f test against jruby 9.4 2023-09-20 17:57:05 +01:00
HoneyryderChuck
134bef69e0 removed overrides and refinements of methods prior to 2.7 2023-09-20 17:57:05 +01:00
HoneyryderChuck
477c3601fc eliminated blocks testing for ruby < 2.7 2023-09-20 17:57:05 +01:00
HoneyryderChuck
f0dabb9a83 rearranged deps to adapt to the new constraints 2023-09-20 17:57:05 +01:00
HoneyryderChuck
7407adefb9 set min ruby gemspec constraint 2023-09-20 17:57:05 +01:00
HoneyryderChuck
91bfa84c12 removed ruby < 2.7 from CI 2023-09-20 17:57:05 +01:00
HoneyryderChuck
7473af6d9d removed punycode ruby implementation inherited from domain_name
it's IDNA 2003 compliant only, and people can already load idnx
optionally.
2023-09-20 17:57:05 +01:00
HoneyryderChuck
4292644870 Merge branch 'issue-247' into 'master'
fix Session class assertions not prepared for class overrides

Closes #247

See merge request os85/httpx!274
2023-09-19 16:07:08 +00:00
HoneyryderChuck
2e11ee5b32 fix Session class assertions not prepared for class overrides
Some plugins override the Session class, however there may be instances
of the original Session around, therefore the assertions need to somehow
point to the original Session class to still be able to work.

Closes #247
2023-09-19 09:33:01 +01:00
HoneyryderChuck
0c8398b3db bumped version to 0.24.5 2023-09-17 22:53:35 +01:00
HoneyryderChuck
52e738b586 Merge branch 'issue-246' into 'master'
fix bug in DoH impl when the request returned no answer

Closes #246

See merge request os85/httpx!273
2023-09-14 18:44:29 +00:00
HoneyryderChuck
c0afc295a5 fix bug in DoH impl when the request returned no answer (Closes #246) 2023-09-14 13:32:04 +01:00
HoneyryderChuck
ed7c56f12c
Merge branch 'test-for-no-sni-and-san-check' into 'master'
Test for no sni and san check

See merge request os85/httpx!272
2023-09-10 20:11:58 +00:00
HoneyryderChuck
be7075beb8 fix for san check in order to support IPv6 SAN check 2023-09-10 01:09:56 +01:00
HoneyryderChuck
f9a6aab475 add the no-sni-with-san-check test 2023-09-08 23:29:12 +01:00
HoneyryderChuck
cc441b33d8 force variable that may be nil to array for Array#& 2023-09-06 22:52:19 +01:00
HoneyryderChuck
b8d97cc414 bump version to 0.24.4 2023-09-06 22:40:42 +01:00
HoneyryderChuck
eab39a5f99
Merge branch 'issue-243' into 'master'
TLS: support session resumption

Closes #243

See merge request os85/httpx!271
2023-09-06 21:40:10 +00:00
HoneyryderChuck
5ffab53364 disable http2 goaway test for jruby (ssl socket hanging on reconnection, can't figure out the reason yet) 2023-09-06 22:09:56 +01:00
HoneyryderChuck
b24421e18c forcing hostname verification for jruby, since it's turned off by default
https://github.com/jruby/jruby-openssl/issues/284
2023-09-06 22:09:56 +01:00
HoneyryderChuck
487a747544 allow reuse of previously closed connections within the scope of a session
when closed, connections are now placed in a place called eden_connections; whenever a connection is matched for, after checking the live connections and finding none, a match is looked for in the eden connections; the match is accepted **if** the IP is considered fresh (the input is validated in the cache, or the input was an ip or in /etc/hosts, or it's an external socket) and, if a TLS connection, the stored TLS session did not expire; if these conditions do not match, the connection is dropped from the eden and a new connection will be started instead; this will therefore allow reusing ruby objects, reusing TLS sessions, and still respect the DNS cache
2023-09-06 22:09:56 +01:00
HoneyryderChuck
ef2f0cc998 ssl: support session resumption on reconnections with same session
when connections get reset due to max number of requests being reached,
the same TLS session is going to be reused, as long as it's valid.

This change is ported from the same feature in net-http, including [the
tls 1.3
improvements](ddf5c52b5f)
2023-09-06 22:09:56 +01:00
HoneyryderChuck
f03d9bb648 fix: ssl handshake correct handling of ip addresses
besides not setting session sni hostname, which it was already doing,
the verify_hostname is set to false to avoid warnings, and the
post_connection_check is still allowed to proceed, to check that the
certificate returned includes the IP address.

port of the similar net-http change found
[here](fa68e64bee)

also omitting certain steps in the initializer if the ssl socket is
initiated outside of the httpx context and passed as an option.
2023-09-06 22:09:56 +01:00
HoneyryderChuck
0f234c2d7b moved rbs out of the general group 2023-09-05 23:02:51 +01:00
HoneyryderChuck
f4171e3cf5 Merge branch 'fix/digest-improvements' into 'master'
Fix/digest improvements

See merge request os85/httpx!256
2023-08-16 00:24:32 +00:00
HoneyryderChuck
9c831205e0 linting 2023-08-16 01:08:41 +01:00
HoneyryderChuck
a429a6af22 .digest_auth can now support prior hashed ha1 password via :hashed
kwarg

this is something common to store in htdigest files for example, and is
a format supported by webrick as well.
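for context, htdigest files store HA1 = MD5("user:realm:password"), which is the precomputed value the :hashed kwarg lets callers pass instead of the plaintext password. A sketch (helper name illustrative):

```ruby
require "digest"

# derive the HA1 digest as stored in htdigest files
def ha1(user, realm, password)
  Digest::MD5.hexdigest("#{user}:#{realm}:#{password}")
end

stored = ha1("alice", "wonderland", "s3cr3t")
stored.length # => 32 (hex characters)
# a verifier holding only `stored` never needs the plaintext again:
stored == ha1("alice", "wonderland", "s3cr3t") # => true
```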
2023-08-16 00:35:46 +01:00
HoneyryderChuck
73484df323 add tests for -sess digest auth 2023-08-08 23:11:02 +01:00
Jonas Müller
819e11f680 Fix typo in README.md 2023-08-04 14:04:56 +01:00
HoneyryderChuck
9b2c8e773d faraday test fix: old faraday does not support stringio inputs 2023-08-01 10:36:54 +01:00
HoneyryderChuck
607fa42672 fix faraday test again... 2023-08-01 10:06:05 +01:00
HoneyryderChuck
0ce42ba694 use string in faraday test 2023-08-01 09:51:29 +01:00
HoneyryderChuck
463bf15ba8 bump version to 0.24.3 2023-07-31 19:21:26 +01:00
HoneyryderChuck
835a851dd6 Merge branch 'faraday-tests' 2023-07-31 19:14:58 +01:00
HoneyryderChuck
1b9422e828 supporting faraday bind request option 2023-07-31 16:09:59 +01:00
HoneyryderChuck
2ef2b5f797 fix: set ssl verify to none when verify field is false (and ignore when nil) 2023-07-31 16:09:59 +01:00
HoneyryderChuck
7be554dc62 fix: faraday timeouts not being correctly mapped to httpx timeouts 2023-07-31 16:09:59 +01:00
HoneyryderChuck
b7a850f6da turn httpx timeout errors into faraday errors 2023-07-31 16:09:59 +01:00
HoneyryderChuck
b7d421fdcd fix for accessing wrong ivar 2023-07-31 16:09:59 +01:00
HoneyryderChuck
93b4ac8542 added tests to faraday adapter, for timeout and proxy based features 2023-07-31 16:09:59 +01:00
HoneyryderChuck
892dd6d37f bump version to 0.24.2 2023-07-30 23:35:27 +01:00
HoneyryderChuck
6ae05006c6 fixes and improvements on the faraday adapter
* implement `Faraday::Adapter#build_connection` (adapter seems to
  expect it)
* implement `Faraday::Adapter#close` (adapter seems to expect it)
* use `Faraday::Adapter#request_timeout` to translate faraday timeouts
  to httpx timeouts;
* ensure that the same HTTPX session object gets reused

In the process, also had to tweak the parallel manager, by
reimplementing the faraday APIs I was required to implement in the first
place, in order to be able to reuse something (which just shows that
this faraday parallel API was poorly thought out).
2023-07-28 23:45:33 +01:00
HoneyryderChuck
f0167925ec fixing cheatsheet indication 2023-07-28 23:45:03 +01:00
HoneyryderChuck
afead02c46 eliminate deprecated MiniTest module 2023-07-27 00:02:11 +01:00
HoneyryderChuck
baab52f440 lax error check 2023-07-05 23:18:29 +01:00
HoneyryderChuck
1c04bf7cdb Merge branch 'master' of gitlab.com:os85/httpx 2023-07-05 23:03:39 +01:00
HoneyryderChuck
4b058cc837 replaced endpoint used to test udp-to-tcp dns upgrade 2023-07-05 22:55:08 +01:00
HoneyryderChuck
5bc2949a49 Merge branch 'issue-239' into 'master'
added #bearer_auth helper in authentication plugin

Closes #239

See merge request os85/httpx!265
2023-07-03 21:31:39 +00:00
HoneyryderChuck
1a2db03c26 resolver_options: allow nameserver to be a hash discriminating dns servers by socket family 2023-07-03 22:30:53 +01:00
HoneyryderChuck
17a26be1a9 added #bearer_auth helper in authentication plugin 2023-07-02 22:23:07 +01:00
HoneyryderChuck
3ec44fd56a Merge branch 'native-resolver-bug-multi' into 'master'
fix for multi-hostname resolution with aliases failing

See merge request os85/httpx!264
2023-06-29 19:18:53 +00:00
HoneyryderChuck
ee6c5b231f fix for multi-hostname resolution with aliases failing
the state of the multiple alias hops was cleaned up from the context,
which broke the next query access.
2023-06-29 17:48:59 +01:00
HoneyryderChuck
255fc98d44 bumped version to 0.24.1 2023-06-27 16:09:20 +01:00
HoneyryderChuck
4f0b41a791 revert rubocop regression change (fixed in latest) 2023-06-27 10:17:22 +01:00
HoneyryderChuck
e4338979a6 rewrite regression test for proxy to use a local proxy using webrick 2023-06-26 20:19:39 +01:00
HoneyryderChuck
85f0ac8ed3 bugfix: fix wrong super call for unexisting super method, when using the datadog plugin 2023-06-26 19:31:51 +01:00
HoneyryderChuck
e25ac201d2 updated cheatsheet examples 2023-06-26 19:27:48 +01:00
HoneyryderChuck
38b871aa8e Merge branch 'issue-238' into 'master'
proxy: fix incorrect connection #send definition never calling super

Closes #238

See merge request os85/httpx!262
2023-06-26 18:24:08 +00:00
HoneyryderChuck
0b18bb63e8 lint issue 2023-06-26 16:45:27 +01:00
HoneyryderChuck
afbde420a7 proxy: fix incorrect connection #send definition never calling super
this made several plugins unusable with the proxy plugin, because a lot
of them are dependent on Connection#send being called and overwritten.
This was done to avoid piping requests when intermediate
connect-level parsers are in place. So, in this flow, when the conn is
in the initial state, the original send is called; when not, which
should almost never happen, a second list is created, which is then
piped back to the original send when the connection is established.
2023-06-26 16:45:27 +01:00
HoneyryderChuck
244563720a proxy plugin: fail early if proxy scheme is not supported
The error which appeared when erroneously using "https" in a
proxy url was too cryptic.
2023-06-26 15:54:16 +01:00
HoneyryderChuck
886c091901 Merge branch 'fix-rubygems-website' into 'master'
Fix wrong homepage URL in rubygems

See merge request os85/httpx!263
2023-06-26 11:58:13 +00:00
Arash Mousavi
11942b2c74 Update file httpx.gemspec 2023-06-26 10:59:57 +00:00
HoneyryderChuck
b2848ea718 remove Style/RedundantCurrentDirectoryInPath cop support due to upstream bug 2023-06-25 01:22:50 +01:00
HoneyryderChuck
b9ee892b20 Merge branch 'datadog-improvements' into 'master'
datadog plugin: support env-based service name, set distributed tracing by default to true

See merge request os85/httpx!258
2023-06-15 22:27:59 +00:00
HoneyryderChuck
af457255ca datadog plugin: support env-based service name, set distributed tracing by default to true 2023-06-15 17:28:06 +01:00
HoneyryderChuck
0397d6d814 omit ntlm tests from recent ruby test pipelines (ciphers unsupported in recent openssl default mode) 2023-06-15 15:31:39 +01:00
HoneyryderChuck
4d61ba1cc2 bumped version to 0.24.0 2023-06-15 12:58:38 +01:00
HoneyryderChuck
23fe515eac Merge branch 'moar-coverage' into 'master'
Moar coverage

See merge request os85/httpx!257
2023-06-14 14:53:14 +00:00
HoneyryderChuck
75bf8de36a fix: do not delete the algo digest header, required for negotiation 2023-06-13 17:44:54 +01:00
HoneyryderChuck
d24cf98785 add test for case when body only responds to #length 2023-06-13 17:43:00 +01:00
HoneyryderChuck
896253bcbc testing response cache internal store 2023-06-13 17:34:57 +01:00
HoneyryderChuck
32188352a5 test jitter with retries plugin 2023-06-13 17:13:51 +01:00
HoneyryderChuck
b9b2715b10 improve coverage of altsvc and resolver modules 2023-06-13 16:54:19 +01:00
HoneyryderChuck
7c1d7083ab Merge branch 'issue-236' into 'master'
`:response_cache` and `:circuit_breaker` improvements

Closes #236

See merge request os85/httpx!255
2023-06-12 21:23:03 +00:00
HoneyryderChuck
bed0d03b9c do not cache closed responses, moar thread-safety
instead, copy the body; the buffer will keep a pointer to the same
original source, be it a string (for stringio) or a file descriptor
(for a file)
2023-06-12 20:42:57 +01:00
HoneyryderChuck
0555132740 integrate mutex_m in signatures 2023-06-12 20:42:57 +01:00
HoneyryderChuck
9342f983d5 improved coverage of response_cache plugin
and fixed a bug in the process
2023-06-12 20:42:57 +01:00
HoneyryderChuck
52082359f0 response_cache: eliminate stale responses from cache
on read/write from/to the store, take time to eliminate stale responses from the given cache key
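a sketch of evicting expired entries on store access, as described above (store shape and names hypothetical, not the plugin's own):

```ruby
# toy response store that prunes entries past their TTL on lookup
class ResponseStore
  Entry = Struct.new(:response, :expires_at)

  def initialize
    @store = {}
  end

  def set(key, response, ttl:, now: Time.now)
    @store[key] = Entry.new(response, now + ttl)
  end

  def get(key, now: Time.now)
    # take the opportunity to drop anything past its TTL
    @store.delete_if { |_, entry| entry.expires_at <= now }
    @store[key]&.response
  end
end

store = ResponseStore.new
store.set("GET /a", "cached body", ttl: 60, now: Time.at(0))
store.get("GET /a", now: Time.at(30))  # => "cached body"
store.get("GET /a", now: Time.at(120)) # => nil
```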
2023-06-12 20:42:57 +01:00
HoneyryderChuck
59cc0037fc response_cache: make the response store thread safe
The store is held by the session, which can be used in multithread
scenarios.
2023-06-12 20:42:57 +01:00
HoneyryderChuck
eb0291ed87 :circuit_breaker plugin: added support for .on_circuit_open callback
called when a circuit is open.

```ruby
HTTPX.plugin(:circuit_breaker).on_circuit_open do |req|
  # ... do smth
end
2023-06-12 20:42:57 +01:00
HoneyryderChuck
03059786b6 Merge branch 'fix-localhost-multihome-happy-eyeballs-connect' into 'master'
fix for happy eyeballs with early-resolved IPs

See merge request os85/httpx!254
2023-06-12 19:39:05 +00:00
HoneyryderChuck
1475f9a2ec fix for happy eyeballs with early-resolved IPs
for instance, in multi-homed networks, `/etc/hosts` will have both
"127.0.0.1" and "::1" mapped to localhost; still, only one of
them may be reachable, if a server binds only to "127.0.0.1", for
example. In such cases, the early exit placed to prevent the loop
from b0777c61e was preventing the dual-stack IP resolver from passing the
second set of responses, thereby potentially making only the
unreachable IP accessible to the connection.
2023-06-12 20:10:27 +01:00
HoneyryderChuck
8daf49a505 bumped version to 0.23.4 2023-06-09 00:07:32 +01:00
HoneyryderChuck
73468e5424 Merge branch 'issue-237' into 'master'
fix Response::Body#read which rewinds on every call

Closes #237

See merge request os85/httpx!253
2023-06-08 23:02:47 +00:00
HoneyryderChuck
46ce583de3 fix Response::Body#read which rewinds on every call
As per the ruby IO reader protocol, which Response::Body was aimed at
supporting since the beginning, the call to #rewind was impeding it
from consuming the body buffer, and instead delivering the same
substring every time.
2023-06-08 23:47:32 +01:00
HoneyryderChuck
f066bc534f fixed Response::Body#read test, which didn't really test for equality, and was therefore broken 2023-06-08 23:24:34 +01:00
HoneyryderChuck
709101cf0f Merge branch 'issue-43' into 'master'
event callbacks

Closes #43

See merge request os85/httpx!229
2023-05-31 19:30:53 +00:00
HoneyryderChuck
0d969a7a3c errors in response chunk handling will now bubble up and force the connection to close 2023-05-31 20:17:27 +01:00
HoneyryderChuck
0f988e3e9f adding session lifecycle callbacks 2023-05-31 20:06:59 +01:00
HoneyryderChuck
9bcae578d7 recover from errors on response chunk processing
first attempt at more granular error handling: during response chunk processing, errors will be handled in a way where the current response stops being fetched; for http/1, the connection is fully reset, for http/2, the individual stream is cancelled
2023-05-31 11:24:21 +01:00
HoneyryderChuck
45c8dcb36b Merge branch 'issue-231' into 'master'
`oauth` plugin

Closes #210

See merge request os85/httpx!252
2023-05-27 22:37:24 +00:00
HoneyryderChuck
5655c602c7 the oauth plugin 2023-05-25 16:45:25 +01:00
HoneyryderChuck
af38476a14 test for oauth plugin 2023-05-25 16:37:22 +01:00
HoneyryderChuck
2dda42cf9f tidying up resolv examples 2023-05-25 16:37:22 +01:00
HoneyryderChuck
e4b9557c8e bumped version to 0.23.3 2023-05-23 10:38:44 +02:00
HoneyryderChuck
6bdf827c65 Merge branch 'issue-235' into 'master'
Native resolver fixes

Closes #235

See merge request os85/httpx!251
2023-05-22 01:08:11 +00:00
HoneyryderChuck
ddffe33bcd removing ruby 2.3 from CI 2023-05-22 01:58:02 +02:00
HoneyryderChuck
f193e164ff cleaning up resolver test artifacts 2023-05-22 01:09:40 +02:00
HoneyryderChuck
af2da64c62 bugfix: make sure that packet size is not exceeded when receiving a short packet via the tcp socket resolver 2023-05-22 00:43:43 +02:00
HoneyryderChuck
1433f35186 moar tests for native resolver paths 2023-05-22 00:42:52 +02:00
HoneyryderChuck
507339907c bugfix: e is an undefined variable 2023-05-21 23:46:28 +02:00
HoneyryderChuck
1fb4046d52 added test exercising the dns error path 2023-05-21 23:45:38 +02:00
HoneyryderChuck
c71d4048af bumped version to 0.23.2 2023-05-05 17:25:47 +01:00
HoneyryderChuck
877e561a45 fix unavailable hostname variable
Fixes #234
2023-05-05 14:57:42 +00:00
HoneyryderChuck
1765ddf0f8 fixed test match 2023-05-01 01:19:43 +01:00
HoneyryderChuck
5ad314607d bump version to 0.23.1 2023-05-01 00:45:56 +01:00
HoneyryderChuck
b154d97438 readd error message on failed resolution 2023-05-01 00:45:35 +01:00
HoneyryderChuck
07624e529f Merge branch 'issue-233' into 'master'
Revert "dns errors: raise error immediately on nxdomain error"

Closes #233

See merge request os85/httpx!250
2023-04-30 23:42:24 +00:00
HoneyryderChuck
a772ab42d0 fix for no candidate queries after the first fails. 2023-05-01 00:20:27 +01:00
HoneyryderChuck
b13b0f86eb Revert "dns errors: raise error immediately on nxdomain error"
This reverts commit 04c5b39600e36ebb38884ff9285cdd66d933d70e.
2023-04-29 23:15:46 +01:00
414 changed files with 15624 additions and 7148 deletions

4
.gitignore vendored
View File

@ -15,4 +15,6 @@ tmp
public
build
.sass-cache
wiki
wiki
.gem_rbs_collection/
rbs_collection.lock.yaml

View File

@ -8,7 +8,7 @@ image:
name: docker/compose:latest
variables:
# this variable enables caching withing docker-in-docker
# this variable enables caching within docker-in-docker
# https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor
MOUNT_POINT: /builds/$CI_PROJECT_PATH/vendor
# bundler-specific
@ -38,33 +38,40 @@ cache:
paths:
- vendor
lint rubocop code:
image: "ruby:3.4"
variables:
BUNDLE_WITHOUT: test:coverage:assorted
before_script:
- bundle install
script:
- bundle exec rake rubocop
lint rubocop wiki:
image: "ruby:3.4"
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
variables:
BUNDLE_ONLY: lint
before_script:
- git clone https://gitlab.com/os85/httpx.wiki.git
- bundle install
- |
cat > .rubocop-wiki.yml << FILE
require:
- rubocop-md
AllCops:
TargetRubyVersion: 3.4
DisabledByDefault: true
FILE
script:
- bundle exec rubocop httpx.wiki --config .rubocop-wiki.yml
test jruby:
<<: *test_settings
script:
./spec.sh jruby 9.0.0.0
allow_failure: true
test ruby 2/3:
<<: *test_settings
script:
./spec.sh ruby 2.3
test ruby 2/4:
<<: *test_settings
only:
- master
script:
./spec.sh ruby 2.4
test ruby 2/5:
<<: *test_settings
only:
- master
script:
./spec.sh ruby 2.5
test ruby 2/6:
<<: *test_settings
only:
- master
script:
./spec.sh ruby 2.6
test ruby 2/7:
<<: *test_settings
script:
@ -83,20 +90,28 @@ test ruby 3/1:
./spec.sh ruby 3.1
test ruby 3/2:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.2
test ruby 3/3:
<<: *test_settings
script:
./spec.sh ruby 3.3
test ruby 3/4:
<<: *test_settings
<<: *yjit_matrix
script:
./spec.sh ruby 3.4
test truffleruby:
<<: *test_settings
script:
./spec.sh truffleruby latest
allow_failure: true
regression tests:
image: "ruby:3.2"
image: "ruby:3.4"
variables:
BUNDLE_WITHOUT: assorted
BUNDLE_WITHOUT: lint:assorted
CI: 1
COVERAGE_KEY: "$RUBY_ENGINE-$RUBY_VERSION-regression-tests"
COVERAGE_KEY: "ruby-3.4-regression-tests"
artifacts:
paths:
- coverage/
@ -108,12 +123,12 @@ regression tests:
- bundle exec rake regression_tests
coverage:
coverage: '/\(\d+.\d+\%\) covered/'
coverage: '/Coverage: \d+.\d+\%/'
stage: prepare
variables:
BUNDLE_WITHOUT: test:assorted
BUNDLE_WITHOUT: lint:test:assorted
image: "ruby:3.2"
image: "ruby:3.4"
script:
- gem install simplecov --no-doc
# this is a workaround, because simplecov doesn't support relative paths.
@ -135,7 +150,7 @@ pages:
stage: deploy
needs:
- coverage
image: "ruby:3.2"
image: "ruby:3.4"
before_script:
- gem install hanna-nouveau
script:

View File

@ -1,6 +1,8 @@
inherit_from: .rubocop_todo.yml
require: rubocop-performance
require:
- rubocop-performance
- rubocop-md
AllCops:
NewCops: enable
@ -23,9 +25,10 @@ AllCops:
- 'vendor/**/*'
- 'www/**/*'
- 'lib/httpx/extensions.rb'
- 'lib/httpx/punycode.rb'
# Do not lint ffi block, for openssl parity
- 'test/extensions/response_pattern_match.rb'
# Old release notes
- !ruby/regexp /doc/release_notes/0_.*.md/
Metrics/ClassLength:
Enabled: false
@ -89,6 +92,10 @@ Style/GlobalVars:
Exclude:
- lib/httpx/plugins/internal_telemetry.rb
Style/CommentedKeyword:
Exclude:
- integration_tests/faraday_datadog_test.rb
Style/RedundantBegin:
Enabled: false
@ -118,6 +125,9 @@ Style/HashSyntax:
Style/AndOr:
Enabled: False
Style/ArgumentsForwarding:
Enabled: False
Naming/MethodParameterName:
Enabled: false
@ -170,3 +180,7 @@ Performance/StringIdentifierArgument:
Style/Lambda:
Enabled: false
Style/TrivialAccessors:
Exclude:
- 'test/pool_test.rb'

View File

@ -11,7 +11,7 @@ Metrics/ModuleLength:
Max: 325
Metrics/BlockLength:
Max: 200
Max: 500
Metrics/BlockNesting:
Enabled: False
@ -38,4 +38,4 @@ Naming/AccessorMethodName:
Enabled: false
Performance/MethodObjectAsBlock:
Enabled: false
Enabled: false

View File

@ -6,5 +6,5 @@ SimpleCov.start do
add_filter "/integration_tests/"
add_filter "/regression_tests/"
add_filter "/lib/httpx/plugins/internal_telemetry.rb"
add_filter "/lib/httpx/punycode.rb"
add_filter "/lib/httpx/base64.rb"
end

View File

@ -14,7 +14,7 @@ require "httpx"
response = HTTPX.get("https://google.com/")
# Will print response.body
puts response.to_s
puts response
```
## Multiple HTTP Requests
@ -24,7 +24,7 @@ require "httpx"
uri = "https://google.com"
responses = HTTPX.new(uri, uri)
responses = HTTPX.get(uri, uri)
# OR
HTTPX.wrap do |client|
@ -37,17 +37,17 @@ end
## Headers
```ruby
HTTPX.headers("user-agent" => "My Ruby Script").get("https://google.com")
HTTPX.with(headers: { "user-agent" => "My Ruby Script" }).get("https://google.com")
```
## HTTP Methods
```ruby
HTTP.get("https://myapi.com/users/1")
HTTP.post("https://myapi.com/users")
HTTP.patch("https://myapi.com/users/1")
HTTP.put("https://myapi.com/users/1")
HTTP.delete("https://myapi.com/users/1")
HTTPX.get("https://myapi.com/users/1")
HTTPX.post("https://myapi.com/users")
HTTPX.patch("https://myapi.com/users/1")
HTTPX.put("https://myapi.com/users/1")
HTTPX.delete("https://myapi.com/users/1")
```
## HTTP Authentication
@ -56,13 +56,13 @@ HTTP.delete("https://myapi.com/users/1")
require "httpx"
# Basic Auth
response = HTTPX.plugin(:basic_authentication).basic_authentication("username", "password").get("https://google.com")
response = HTTPX.plugin(:basic_auth).basic_auth("username", "password").get("https://google.com")
# Digest Auth
response = HTTPX.plugin(:digest_authentication).digest_authentication("username", "password").get("https://google.com")
response = HTTPX.plugin(:digest_auth).digest_auth("username", "password").get("https://google.com")
# Token Auth
response = HTTPX.plugin(:authentication).authentication("eyrandomtoken").get("https://google.com")
# Bearer Token Auth
response = HTTPX.plugin(:auth).authorization("eyrandomtoken").get("https://google.com")
```
@ -74,31 +74,27 @@ require "httpx"
response = HTTPX.get("https://google.com/")
response.status # => 301
response.headers["location"] #=> "https://www.google.com/"
response.body # => "<HTML><HEAD><meta http-equiv=\"content-type\" ....
response["cache-control"] # => public, max-age=2592000
response.headers["cache-control"] #=> public, max-age=2592000
response.body.to_s #=> "<HTML><HEAD><meta http-equiv=\"content-type\" ....
```
## POST form request
## POST `application/x-www-form-urlencoded` request
```ruby
require "httpx"
uri = URI.parse("http://example.com/search")
# Shortcut
response = HTTPX.post(uri, form: {"q" => "My query", "per_page" => "50"})
response = HTTPX.post(uri, form: { "q" => "My query", "per_page" => "50" })
```
## File upload - input type="file" style
## File `multipart/form-data` upload - input type="file" style
```ruby
require "httpx"
# uses http_form_data API: https://github.com/httprb/form_data
path = "/path/to/your/testfile.txt"
HTTPX.plugin(:multipart).post("http://something.com/uploads", form: {
name: HTTP::FormData::File.new(path)
})
file_to_upload = Pathname.new("/path/to/your/testfile.txt")
HTTPX.plugin(:multipart).post("http://something.com/uploads", form: { name: file_to_upload })
```
## SSL/HTTPS request
@ -108,8 +104,7 @@ Update: There are some good reasons why this code example is bad. It introduces
```ruby
require "httpx"
response = HTTPX.with(ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE }).get("https://secure.com/")
response = HTTPX.with(ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE }).get("https://secure.com/")
```
## SSL/HTTPS request with PEM certificate
@ -118,11 +113,11 @@ response = HTTPX.with(ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE }).get("htt
require "httpx"
pem = File.read("/path/to/my.pem")
HTTPX.with(ssl: {
HTTPX.with_ssl(
cert: OpenSSL::X509::Certificate.new(pem),
key: OpenSSL::PKey::RSA.new(pem),
verify_mode: OpenSSL::SSL::VERIFY_PEER
}).get("https://secure.com/")
verify_mode: OpenSSL::SSL::VERIFY_PEER,
).get("https://secure.com/")
```
## Cookies
@ -132,8 +127,7 @@ require "httpx"
HTTPX.plugin(:cookies).wrap do |client|
session_response = client.get("https://translate.google.com/")
response_cookies = session_response.cookie_jar
response = client.cookies(response_cookies).get("https://translate.google.com/#auto|en|Pardon")
response = client.get("https://translate.google.com/#auto|en|Pardon")
puts response
end
```
@ -143,9 +137,14 @@ end
```ruby
require "httpx"
response = HTTPX.plugin(:compression).get("https://www.google.com")
puts response.headers["content-encoding"] #=> "gzip"
response = HTTPX.get("https://www.google.com")
puts response.headers["content-encoding"] #=> "gzip"
puts response #=> uncompressed payload
# uncompressed request payload
HTTPX.post("https://myapi.com/users", body: super_large_text_payload)
# gzip-compressed request payload
HTTPX.post("https://myapi.com/users", headers: { "content-encoding" => %w[gzip] }, body: super_large_text_payload)
```
## Proxy
@ -171,11 +170,10 @@ HTTPX.get("https://google.com")
require "httpx"
HTTPX.with(resolver_class: :https).get("https://google.com")
# by default it uses cloudflare DoH server.
# This example switches the resolver to Quad9's DoH server
HTTPX.with(resolver_class: :https, resolver_options: {uri: "https://9.9.9.9/dns-query"}).get("https://google.com")
HTTPX.with(resolver_class: :https, resolver_options: { uri: "https://9.9.9.9/dns-query" }).get("https://google.com")
```
## Follow Redirects
@ -183,7 +181,9 @@ HTTPX.with(resolver_class: :https, resolver_options: {uri: "https://9.9.9.9/dns-
```ruby
require "httpx"
HTTPX.plugin(:follow_redirects).with(follow_insecure_redirects: false, max_redirects: 4).get("https://www.google.com")
HTTPX.plugin(:follow_redirects)
.with(follow_insecure_redirects: false, max_redirects: 4)
.get("https://www.google.com")
```
## Timeouts
@ -191,12 +191,12 @@ HTTPX.plugin(:follow_redirects).with(follow_insecure_redirects: false, max_redir
```ruby
require "httpx"
HTTPX.with(timeout: {connect_timeout: 10, operation_timeout: 3}).get("https://google.com")
# full E2E request/response timeout, 10 sec to connect to peer
HTTPX.with(timeout: { connect_timeout: 10, request_timeout: 3 }).get("https://google.com")
```
## Retries
```ruby
require "httpx"
HTTPX.plugin(:retries).max_retries(5).get("https://www.google.com")
@ -214,4 +214,3 @@ HTTPX.get("https://google.com") #=> udp://10.0.1.2:53...
HTTPX.with(debug_level: 1, debug: $stderr).get("https://google.com")
```

116
Gemfile
View File

@ -5,56 +5,42 @@ ruby RUBY_VERSION
source "https://rubygems.org"
gemspec
if RUBY_VERSION < "2.2.0"
gem "rake", "~> 12.3"
else
gem "rake", "~> 13.0"
end
gem "rake", "~> 13.0"
group :test do
if RUBY_VERSION >= "3.2.0"
gem "datadog", "~> 2.0"
else
gem "ddtrace"
end
gem "http-form_data", ">= 2.0.0"
gem "minitest"
gem "minitest-proveit"
gem "ruby-ntlm"
gem "sentry-ruby" if RUBY_VERSION >= "2.4.0"
gem "spy"
if RUBY_VERSION < "2.3.0"
gem "webmock", "< 3.15.0"
elsif RUBY_VERSION < "2.4.0"
gem "webmock", "< 3.17.0"
else
gem "webmock"
end
gem "nokogiri"
gem "ruby-ntlm"
gem "sentry-ruby"
gem "spy"
gem "webmock"
gem "websocket-driver"
gem "net-ssh", "~> 4.2.0" if RUBY_VERSION < "2.2.0"
gem "ddtrace"
platform :mri do
if RUBY_VERSION >= "2.3.0"
if RUBY_VERSION < "2.5.0"
gem "google-protobuf", "< 3.19.2"
elsif RUBY_VERSION < "2.7.0"
gem "google-protobuf", "< 3.22.0"
end
if RUBY_VERSION <= "2.6.0"
gem "grpc", "< 1.49.0"
else
gem "grpc"
end
gem "logging"
gem "marcel", require: false
gem "mimemagic", require: false
gem "ruby-filemagic", require: false
end
gem "grpc"
gem "logging"
gem "marcel", require: false
gem "mimemagic", require: false
gem "ruby-filemagic", require: false
if RUBY_VERSION >= "3.0.0"
gem "multi_json", require: false
gem "oj", require: false
gem "rbs"
gem "yajl-ruby", require: false
end
if RUBY_VERSION >= "3.4.0"
# TODO: remove this once websocket-driver-ruby declares this as dependency
gem "base64"
end
end
platform :mri, :truffleruby do
@ -65,63 +51,39 @@ group :test do
gem "net-ssh-gateway"
end
platform :mri_21 do
gem "rbnacl"
end
platform :mri_23 do
if RUBY_VERSION >= "2.3.0"
gem "openssl", "< 2.0.6" # force usage of openssl version we patch against
end
gem "msgpack", "<= 1.3.3"
end
platform :jruby do
gem "jruby-openssl" # , git: "https://github.com/jruby/jruby-openssl.git", branch: "master"
gem "ruby-debug"
end
gem "aws-sdk-s3"
gem "faraday"
gem "idnx" if RUBY_VERSION >= "2.4.0"
gem "multipart-post", "< 2.2.0" if RUBY_VERSION < "2.3.0"
gem "faraday-multipart"
gem "idnx"
gem "oga"
if RUBY_VERSION >= "3.0.0"
gem "rbs"
gem "rubocop"
gem "rubocop-performance"
gem "webrick"
end
gem "webrick" if RUBY_VERSION >= "3.0.0"
# https://github.com/TwP/logging/issues/247
gem "syslog" if RUBY_VERSION >= "3.3.0"
# https://github.com/ffi/ffi/issues/1103
# ruby 2.7 only, it seems
gem "ffi", "< 1.17.0" if Gem::VERSION < "3.3.22"
end
group :lint do
gem "rubocop", "~> 1.59.0"
gem "rubocop-md"
gem "rubocop-performance", "~> 1.19.0"
end
group :coverage do
if RUBY_VERSION < "2.2.0"
gem "simplecov", "< 0.11.0"
elsif RUBY_VERSION < "2.3"
gem "simplecov", "< 0.11.0"
elsif RUBY_VERSION < "2.4"
gem "simplecov", "< 0.19.0"
elsif RUBY_VERSION < "2.5"
gem "simplecov", "< 0.21.0"
else
gem "simplecov"
end
gem "simplecov"
end
group :assorted do
if RUBY_VERSION < "2.2.0"
gem "pry", "~> 0.12.2"
else
gem "pry"
end
gem "pry"
platform :mri do
if RUBY_VERSION < "2.2.0"
gem "pry-byebug", "~> 3.4.3"
else
gem "debug" if RUBY_VERSION >= "3.1.0"
gem "pry-byebug"
end
gem "debug" if RUBY_VERSION >= "3.1.0"
gem "pry-byebug"
end
end

View File

@ -189,51 +189,3 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
* lib/httpx/domain_name.rb
This file is derived from the implementation of punycode available at
here:
https://www.verisign.com/en_US/channel-resources/domain-registry-products/idn-sdks/index.xhtml
Copyright (C) 2000-2002 Verisign Inc., All rights reserved.
Redistribution and use in source and binary forms, with or
without modification, are permitted provided that the following
conditions are met:
1) Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2) Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
3) Neither the name of the VeriSign Inc. nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
This software is licensed under the BSD open source license. For more
information visit www.opensource.org.
Authors:
John Colosi (VeriSign)
Srikanth Veeramachaneni (VeriSign)
Nagesh Chigurupati (Verisign)
Praveen Srinivasan(Verisign)

View File

@ -19,7 +19,7 @@ And also:
* Compression (gzip, deflate, brotli)
* Streaming Requests
* Authentication (Basic Auth, Digest Auth, NTLM)
* Auth (Basic Auth, Digest Auth, NTLM)
* Expect 100-continue
* Multipart Requests
* Advanced Cookie handling
@ -46,7 +46,7 @@ And that's the simplest one there is. But you can also do:
HTTPX.post("http://example.com", form: { user: "john", password: "pass" })
http = HTTPX.with(headers: { "x-my-name" => "joe" })
http.patch(("http://example.com/file", body: File.open("path/to/file")) # request body is streamed
http.patch("http://example.com/file", body: File.open("path/to/file")) # request body is streamed
```
If you want to do some more things with the response, you can get an `HTTPX::Response`:
@ -61,7 +61,7 @@ puts body #=> #<HTTPX::Response ...
You can also send as many requests as you want simultaneously:
```ruby
page1, page2, page3 =`HTTPX.get("https://news.ycombinator.com/news", "https://news.ycombinator.com/news?p=2", "https://news.ycombinator.com/news?p=3")
page1, page2, page3 = HTTPX.get("https://news.ycombinator.com/news", "https://news.ycombinator.com/news?p=2", "https://news.ycombinator.com/news?p=3")
```
## Installation
@ -107,26 +107,26 @@ HTTPX.get(
```ruby
response = HTTPX.get("https://www.google.com", params: { q: "me" })
response = HTTPX.post("https://www.nghttp2.org/httpbin/post", form: {name: "John", age: "22"})
response = HTTPX.plugin(:basic_authentication)
.basic_authentication("user", "pass")
response = HTTPX.post("https://www.nghttp2.org/httpbin/post", form: { name: "John", age: "22" })
response = HTTPX.plugin(:basic_auth)
.basic_auth("user", "pass")
.get("https://www.google.com")
# more complex client objects can be cached, and are thread-safe
http = HTTPX.plugin(:compression).plugin(:expect).with(headers: { "x-pvt-token" => "TOKEN"})
http = HTTPX.plugin(:expect).with(headers: { "x-pvt-token" => "TOKEN" })
http.get("https://example.com") # the above options will apply
http.post("https://example2.com", form: {name: "John", age: "22"}) # same, plus the form POST body
http.post("https://example2.com", form: { name: "John", age: "22" }) # same, plus the form POST body
```
### Lightweight
It ships with most features published as a plugin, making vanilla `httpx` lightweight and dependency-free, while allowing you to "pay for what you use"
The plugin system is similar to the ones used by [sequel](https://github.com/jeremyevans/sequel), [roda](https://github.com/jeremyevans/roda) or [shrine](https://github.com/janko-m/shrine).
The plugin system is similar to the ones used by [sequel](https://github.com/jeremyevans/sequel), [roda](https://github.com/jeremyevans/roda) or [shrine](https://github.com/shrinerb/shrine).
### Advanced DNS features
`HTTPX` ships with custom DNS resolver implementations, including a native Happy Eyeballs resolver immplementation, and a DNS-over-HTTPS resolver.
`HTTPX` ships with custom DNS resolver implementations, including a native Happy Eyeballs resolver implementation, and a DNS-over-HTTPS resolver.
## User-driven test suite
@ -134,9 +134,9 @@ The test suite runs against [httpbin proxied over nghttp2](https://nghttp2.org/h
## Supported Rubies
All Rubies greater or equal to 2.1, and always latest JRuby and Truffleruby.
All Rubies greater or equal to 2.7, and always latest JRuby and Truffleruby.
**Note**: This gem is tested against all latest patch versions, i.e. if you're using 2.2.0 and you experience some issue, please test it against 2.2.10 (latest patch version of 2.2) before creating an issue.
**Note**: This gem is tested against all latest patch versions, i.e. if you're using 3.3.0 and you experience some issue, please test it against 3.3.$latest before creating an issue.
## Resources
| | |
@ -149,24 +149,14 @@ All Rubies greater or equal to 2.1, and always latest JRuby and Truffleruby.
## Caveats
### ALPN support
ALPN negotiation is required for "auto" HTTP/2 "https" requests. This is available in ruby since version 2.3 .
### Known bugs
* Doesn't work with ruby 2.4.0 for Windows (see [#36](https://gitlab.com/os85/httpx/issues/36)).
* Using `total_timeout` along with the `:persistent` plugin [does not work as you might expect](https://gitlab.com/os85/httpx/-/wikis/Timeouts#total_timeout).
## Versioning Policy
Although 0.x software, `httpx` is considered API-stable and production-ready, i.e. current API or options may be subject to deprecation and emit log warnings, but can only effectively be removed in a major version change.
`httpx` follows Semantic Versioning.
## Contributing
* Discuss your contribution in an issue
* Fork it
* Make your changes, add some tests
* Ensure all tests pass (`docker-compose -f docker-compose.yml -f docker-compose-ruby-{RUBY_VERSION}.yml run httpx bundle exec rake test`)
* Make your changes, add some tests (follow the instructions from [here](test/README.md))
* Open a Merge Request (that's Pull Request in Github-ish)
* Wait for feedback

View File

@ -100,6 +100,7 @@ task :prepare_website => %w[rdoc prepare_jekyll_data] do
header = "---\n" \
"layout: #{layout}\n" \
"title: #{title}\n" \
"project: httpx\n" \
"---\n\n"
File.write(path, header + data)
end

View File

@ -0,0 +1,5 @@
# 0.23.1
## Bugfixes
* fixed regression causing dns candidate names not being tried after first one fails.

View File

@ -0,0 +1,5 @@
# 0.23.2
## Bugfixes
* fix missing variable on code path in the native resolver.

View File

@ -0,0 +1,6 @@
# 0.23.3
## Bugfixes
* native resolver: fix missing exception variable in the DNS error code path.
* native resolver: fixed short DNS packet handling when using TCP.

View File

@ -0,0 +1,5 @@
# 0.23.4
## Bugfixes
* fix `Response::Body#read` which rewinds on every call.

View File

@ -0,0 +1,48 @@
# 0.24.0
## Features
### `:oauth` plugin
The `:oauth` plugin manages a given OAuth session: it ships with convenience methods to generate a new access token, which it then injects into all requests.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/OAuth
### session callbacks
HTTP request/response lifecycle events can now be intercepted via public API callback methods:
```ruby
HTTPX.on_request_completed do |request|
puts "request to #{request.uri} sent"
end.get(...)
```
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Events on which events and callback methods are supported.
### `:circuit_breaker` plugin `on_circuit_open` callback
A callback has been introduced for the `:circuit_breaker` plugin, which is triggered when a circuit is opened.
```ruby
http = HTTPX.plugin(:circuit_breaker).on_circuit_open do |req|
puts "circuit opened for #{req.uri}"
end
http.get(...)
```
## Improvements
Several `:response_cache` features have been improved:
* `:response_cache` plugin: response cache store has been made thread-safe.
* cached response sharing across threads is made safer, as stringio/tempfile instances are copied instead of shared (without copying the underlying string/file).
* stale cached responses are eliminated on cache store lookup/store operations.
* already closed responses are evicted from the cache store.
* fallback for lack of compatible response "date" header has been fixed to return a `Time` object.
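The staleness rule described above can be sketched in plain Ruby. This is an assumption based on standard HTTP caching semantics (freshness derived from the `"date"` header and `"cache-control: max-age"`), not the plugin's actual internals:

```ruby
require "time"

# Minimal sketch: a cached response is considered stale once the time
# elapsed since its "date" header exceeds its max-age freshness lifetime.
# (Simplified from RFC 7234 freshness rules; the plugin may differ.)
def stale_response?(date_header, max_age, now = Time.now)
  age = now - Time.httpdate(date_header)
  age > max_age
end

stale_response?((Time.now - 120).httpdate, 60) #=> true
```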
## Bugfixes
* Ability to recover from errors happening during response chunk processing (required for overriding behaviour and response chunk callbacks); error bubbling up will result in the connection being closed.
* Happy eyeballs support for multi-homed early-resolved domain names (such as `localhost` under `/etc/hosts`) was broken, as it would only try the first given IP; so, if given `::1` and the connection failed, it wouldn't try `127.0.0.1`, which would have succeeded.
* `:digest_authentication` plugin was removing the "algorithm" header on `-sess` declared algorithms, which is required for HTTP digest auth negotiation.

View File

@ -0,0 +1,12 @@
# 0.24.1
## Improvements
* datadog adapter: support `:service_name` configuration option.
* datadog adapter: set `:distributed_tracing` to `true` by default.
* `:proxy` plugin: when the proxy uri uses an unsupported scheme (i.e.: "scp://125.24.2.1"), a more user-friendly error is raised (instead of the previous broken stacktrace).
## Bugfixes
* datadog adapter: fix tracing enable call, which was wrongly calling `super`.
* `:proxy` plugin: fix for bug which was turning off plugins overriding `HTTPX::Connection#send` (such as the datadog adapter).

View File

@ -0,0 +1,12 @@
# 0.24.2
## Improvements
* besides an array, `:resolver_options` can now receive a hash for `:nameserver`, which **must** be indexed by IP family (`Socket::AF_INET6` or `Socket::AF_INET`); each group of nameservers will be used for emitting DNS queries of that IP family.
* `:authentication` plugin: added `#bearer_auth` helper, which receives a token and sets it as `"Bearer $TOKEN"` in the `"authorization"` header.
* `faraday` adapter: now implements `#build_connection` and `#close`, will now interact with `faraday` native timeouts (`:read`, `:write` and `:connect`).
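The nameserver-per-family option can be sketched as follows; the resolver addresses below are illustrative placeholders, and the commented-out call assumes the `httpx` gem is loaded:

```ruby
require "socket"

# nameservers grouped by IP family, as the :resolver_options hash now accepts
# (the addresses here are placeholder examples, not defaults)
nameservers = {
  Socket::AF_INET6 => %w[2606:4700:4700::1111],
  Socket::AF_INET  => %w[1.1.1.1],
}

# hypothetical usage (requires the httpx gem):
# HTTPX.with(resolver_options: { nameserver: nameservers }).get("https://example.com")
```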
## Bugfixes
* fixed native resolver bug where queries involving an intermediate alias would be kept after the original query and mess with re-queries.

View File

@ -0,0 +1,12 @@
# 0.24.3
## Improvements
* faraday adapter: reraise httpx timeout errors as faraday errors.
* faraday adapter: support `:bind` option, which expects a host and port to connect to.
## Bugfixes
* faraday adapter: fix `#close` implementation using the wrong ivar.
* faraday adapter: fix `request_timeout` translation of faraday timeouts into httpx timeouts.
* faraday adapter: `ssl: { verify: false }` was being ignored, and certificate verification was still proceeding.

View File

@ -0,0 +1,18 @@
# 0.24.4
## Improvements
* `digest_authentication` plugin now supports passing hashed HA1s (commonly stored in htdigest files, for example) by setting the `:hashed` kwarg to `true` in the `.digest_auth` call.
* ex: `http.digest_auth(user, get_hashed_passwd_from_htdigest(user), hashed: true)`
* TLS session resumption is now supported
* whenever possible, `httpx` sessions will recycle used connections so that, in the case of TLS connections, the first session will keep being reused, thereby diminishing the overhead of subsequent TLS handshakes on the same host.
* TLS sessions are only reused in the scope of the same `httpx` session, unless the `:persistent` plugin is used, in which case, the persisted `httpx` session will always try to resume TLS sessions.
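The hashed HA1 mentioned above is the standard digest-auth HA1 value, MD5 of `"user:realm:password"` (a sketch assuming RFC 7616 semantics; the credentials and realm below are placeholders):

```ruby
require "digest"

# HA1 as stored in htdigest files: MD5("user:realm:password")
user     = "scott"
realm    = "protected area"
password = "tiger"
ha1 = Digest::MD5.hexdigest("#{user}:#{realm}:#{password}")

# hypothetical usage (requires the httpx gem):
# HTTPX.plugin(:digest_auth).digest_auth(user, ha1, hashed: true).get("https://example.com/protected")
```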
## Bugfixes
* When explicitly using IP addresses in the URL host, the TLS handshake will now verify if the IP address is included in the certificate.
* the IP address will still not be used for SNI, as per RFC 6066, section 3.
* ex: `http.get("https://10.12.0.12/get")`
* if you want the prior behavior, set `HTTPX.with(ssl: {verify_hostname: false})`
* Turn TLS hostname verification on for `jruby` (it's turned off by default).
* if you want the prior behavior, set `HTTPX.with(ssl: {verify_hostname: false})`

View File

@ -0,0 +1,6 @@
# 0.24.5
## Bugfixes
* fix for SSL handshake post connection SAN check using IPv6 address.
* fix bug in DoH impl when the request returned no answer.

View File

@ -0,0 +1,5 @@
# 0.24.6
## Bugfixes
* fix Session class assertions not prepared for class overrides, which could break some plugins which override the Session class on load (such as `datadog` or `webmock` adapters).

View File

@ -0,0 +1,10 @@
# 0.24.7
## dependencies
`http-2-next` last supported version for the 0.x series is the last version before v1. This should ensure that older versions of `httpx` won't be affected by any of the recent breaking changes.
## Bugfixes
* `grpc`: setup of rpc calls from camel-cased symbols has been fixed. As an improvement, the GRPC-enabled session will now support both snake-cased, as well as camel-cased calls.
* `datadog` adapter has now been patched to support the most recent breaking changes of `ddtrace` configuration DSL (`env_to_bool` is no longer supported).

View File

@ -0,0 +1,60 @@
# 1.0.0
## Breaking changes
* the minimum supported ruby version is 2.7.0 .
* The fallback support for IDNA 2003 has been removed. If you require this feature, install the [idnx gem](https://github.com/HoneyryderChuck/idnx), which `httpx` automatically integrates with when available (and supports IDNA 2008).
* `:total_timeout` option has been removed (no session-wide timeout supported, use `:request_timeout`).
* `:read_timeout` and `:write_timeout` are now set to 60 seconds by default, and preferred over `:operation_timeout`;
* the exception being the `:stream` plugin, as the response is theoretically endless (so `:read_timeout` is unset).
* The `:multipart` plugin is removed, as its functionality and API are now loaded by default (no API changes).
* The `:compression` plugin is removed, as its functionality and API are now loaded by default (no API changes).
* `:compression_threshold_size` was removed (formats in `"content-encoding"` request header will always encode the request body).
* the new `:compress_request_body` and `:decompress_response_body` can be set to `false` to (respectively) disable compression of passed input body, or decompression of the response body.
* `:retries` plugin: the `:retry_on` condition will **not** replace default retryable error checks; it will now instead be triggered **only if** no retryable error has been found.
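The compression toggles described above can be combined with a `"content-encoding"` request header; a sketch, assuming the input body is already gzip-compressed:

```ruby
require "httpx"

# request bodies are compressed whenever the "content-encoding" request
# header declares a format; :compress_request_body => false opts out of
# that for input that is already compressed.
http = HTTPX.with(
  headers: { "content-encoding" => "gzip" },
  compress_request_body: false # body passed to requests is assumed gzipped
)

# responses can likewise be kept compressed, skipping decompression:
raw = HTTPX.with(decompress_response_body: false)
```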
### plugins
* `:authentication` plugin becomes `:auth`.
* `.authentication` helper becomes `.authorization`.
* `:basic_authentication` plugin becomes `:basic_auth`.
* `:basic_authentication` helper is removed.
* `:digest_authentication` plugin becomes `:digest_auth`.
* `:digest_authentication` helper is removed.
* `:ntlm_authentication` plugin becomes `:ntlm_auth`.
* `:ntlm_authentication` helper is removed.
* OAuth plugin: `:oauth_authentication` helper is renamed to `:oauth_auth`.
* `:compression/brotli` plugin becomes `:brotli`.
### Support removed for deprecated APIs
* The deprecated `HTTPX::Client` constant lookup has been removed (use `HTTPX::Session` instead).
* The deprecated `HTTPX.timeout({...})` function has been removed (use `HTTPX.with(timeout: {...})` instead).
* The deprecated `HTTPX.headers({...})` function has been removed (use `HTTPX.with(headers: {...})` instead).
* The deprecated `HTTPX.plugins(...)` function has been removed (use `HTTPX.plugin(...).plugin(...)...` instead).
* The deprecated `:transport_options` option, which was only valid for UNIX connections, has been removed (use `:addresses` instead).
* The deprecated `def_option(...)` function, previously used to define additional options in plugins, has been removed (use `def option_$new_option` instead).
* The deprecated `:loop_timeout` timeout option has been removed.
* `:stream` plugin: the deprecated `HTTPX::InstanceMethods::StreamResponse` has been removed (use `HTTPX::StreamResponse` instead).
* The deprecated usage of symbols to indicate HTTP verbs (i.e. `HTTPX.request(:get, ...)` or `HTTPX.build_request(:get, ...)`) is not supported anymore (use the upcase string always, i.e. `HTTPX.request("GET", ...)` or `HTTPX.build_request("GET", ...)`, instead).
* The deprecated `HTTPX::ErrorResponse#status` method has been removed (use `HTTPX::ErrorResponse#error` instead).
### dependencies
* `http-2-next` minimum supported version is 1.0.0.
* `:datadog` adapter only supports `ddtrace` gem 1.x or higher.
* `:faraday` adapter only supports `faraday` gem 1.x or higher.
## Improvements
* `circuit_breaker`: the drip rate of real requests during the "half-open" stage of a circuit will reliably distribute real requests (as per the drip rate) over `max_attempts`, before the circuit is closed.
## Bugfixes
* Tempfiles are now correctly identified as file inputs for multipart requests.
* fixed `proxy` plugin behaviour when loaded with the `follow_redirects` plugin and processing a 305 response (request needs to be retried on a different proxy).
## Chore
* `:grpc` plugin: connection won't buffer requests before the HTTP/2 handshake is completed, i.e. it works the same as plain `httpx` HTTP/2 connection establishment.
* if you are relying on this, you can keep the old behavior this way: `HTTPX.plugin(:grpc, http2_settings: { wait_for_handshake: false })`.

View File

@ -0,0 +1,5 @@
# 1.0.1
## Bugfixes
* do not try to inflate empty chunks (it triggered an error during response decoding).

View File

@ -0,0 +1,7 @@
# 1.0.2
## bugfixes
* bump `http-2-next` to 1.0.1, which fixes a bug where http/2 connection interprets MAX_CONCURRENT_STREAMS as request cap.
* `grpc`: setup of rpc calls from camel-cased symbols has been fixed. As an improvement, the GRPC-enabled session will now support both snake-cased, as well as camel-cased calls.
* `datadog` adapter has now been patched to support the most recent breaking changes of `ddtrace` configuration DSL (`env_to_bool` is no longer supported).

View File

@ -0,0 +1,32 @@
# 1.1.0
## Features
A function, `#peer_address`, was added to the response object, which returns the IP (either a string or an `IPAddr` object) from the socket used to get the response from.
```ruby
response = HTTPX.get("https://example.com")
response.peer_address #=> #<IPAddr: IPv4:93.184.216.34/255.255.255.255>
```
error responses will also expose an IP address via `#peer_address`, as long as a connection happened before the error.
## Improvements
* A performance regression involving the new default timeouts has been fixed, which could cause significant overhead in "multiple requests in sequence" scenarios, and was clearly visible in benchmarks.
* this regression will still be seen in jruby due to a bug, whose fix will be released in jruby 9.4.5.0.
* HTTP/1.1 connections are now set to handle as many requests as they can by default (instead of the past default of max 200, at which point they'd be recycled).
* tolerate the absence of `openssl` in the installed ruby, like `net-http` does.
* `on_connection_opened` and `on_connection_closed` will yield the `OpenSSL::SSL::SSLSocket` instance for `https` backed origins (instead of always the `Socket` instance).
## Bugfixes
* when using the `:native` resolver (default option), a default of 1 for ndots is set, for systems which do not set one.
* replaced usage of `Float::INFINITY` with `nil` for timeout defaults, as the former can't be used in IO wait functions.
* `faraday` adapter timeout setup now maps to `:read_timeout` and `:write_timeout` options from `httpx`.
* fixed HTTP/1.1 connection recycling on number of max requests exhausted.
* `response.json` will now work when "content-type" header is set to "application/hal+json".
## Chore
* when using the `:cookies` plugin, the warning message to install the `idnx` gem will only be emitted if the cookie domain is an IDN (this message was being shown all the time since the v1 release).

View File

@ -0,0 +1,17 @@
# 1.1.1
## improvements
* (Re-)enabled default retries in DNS name queries; these had been disabled as a result of revamping timeouts, which resulted in queries being sent only once. That is very little for UDP-based traffic, and breaks when DNS rate-limiting software is in use. The query is retried just once, for now.
## bugfixes
* reset timers when adding new intervals, as these may be added as a result of after-select connection handling, and must wait for the next tick cycle (before the patch, they were triggering too soon).
* fixed "on close" callback leak on connection reuse, which caused linear performance regression in benchmarks performing one request per connection.
* fixed hanging connection when an HTTP/1.1 request emitted a `"connection: close"` header but the server would not emit one (the client now closes the connection).
* fixed recursive DNS cached lookups which may have already expired, and which created nil entries in the returned address list.
* dns system resolver is now able to retry on failure.
## chore
* removed duplicated callback unregistering of connections.

View File

@ -0,0 +1,12 @@
# 1.1.2
## improvements
* only moving eden connections to idle when they're recycled.
## bugfixes
* skip closing a connection which is already closed during reset.
* sentry adapter: fixed `super` call which didn't have a super method (this prevented using sentry-enabled sessions with the `:retries` plugin).
* sentry adapter: fixed registering of the sentry config.
* sentry adapter: do not propagate traces when relevant sdk options are disabled (such as `propagate_traces`).

View File

@ -0,0 +1,18 @@
# 1.1.3
## improvements
## security
* when using `:follow_redirects` plugin, the "authorization" header will be removed when following redirect responses to a different origin.
## bugfixes
* fixed `:stream` plugin not following redirect responses when used with the `:follow_redirects` plugin.
* fixed `:stream` plugin not doing content decoding when responses were e.g. gzip-compressed.
* fixed bug preventing usage of IPv6 loopback or link-local addresses in the request URL in systems with no IPv6 internet connectivity (the request was left hanging).
* protect all code which may initiate a new connection from abrupt errors (such as internet turned off), as it was done on the initial request call.
## chore
internal usage of `mutex_m` has been removed (`mutex_m` is going to be deprecated in ruby 3.3).

View File

@ -0,0 +1,6 @@
# 1.1.4
## bugfixes
* datadog adapter: use `Gem::Version` to invoke the correct configuration API.
* stream plugin: do not preempt request enqueuing (this was making integration with the `:follow_redirects` plugin fail when set up with `webmock`).

View File

@ -0,0 +1,12 @@
# 1.1.5
## improvements
* pattern matching support for responses has been backported to ruby 2.7 as well.
## bugfixes
* `stream` plugin: fix for `HTTPX::StreamResponse#each_line` not yielding the last line of the payload when not delimiter-terminated.
* `stream` plugin: fix `webmock` adapter integration when methods calls would happen in the `HTTPX::StreamResponse#each` block.
* `stream` plugin: fix `:follow_redirects` plugin integration which was caching the redirect response and using it for method calls inside the `HTTPX::StreamResponse#each` block.
* "103 early hints" responses will be ignored when processing the response (they were causing the response returned by sessions to hold the 103's headers, instead of those of the following 200 response, while keeping the 200 response body).

View File

@ -0,0 +1,49 @@
# 1.2.0
## Features
### `:ssrf_filter` plugin
The `:ssrf_filter` plugin prevents server-side request forgery attacks by blocking requests to the internal network. This is useful when the URLs used to perform requests aren't under the developer's control (such as when they are inserted via a web application form).
```ruby
http = HTTPX.plugin(:ssrf_filter)
# this works
response = http.get("https://example.com")
# this doesn't
response = http.get("http://localhost:3002")
response = http.get("http://[::1]:3002")
response = http.get("http://169.254.169.254/latest/meta-data/")
```
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/SSRF-Filter
### `:callbacks` plugin
The session callbacks introduced in v0.24.0 are now in their own plugin. Older code will still work and emit a deprecation warning.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Callbacks
### `:redirect_on` option for `:follow_redirects` plugin
This option allows passing a callback which, when returning `false`, can interrupt the redirect loop.
```ruby
http = HTTPX.plugin(:follow_redirects).with(redirect_on: ->(location_uri) { BLACKLIST_HOSTS.include?(location_uri.host) })
```
### `:close_handshake_timeout` timeout
A new `:timeout` option, `:close_handshake_timeout`, is added, which monitors connection readiness when performing the HTTP/2 connection termination handshake.
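A sketch of setting the new timeout (the 2-second value is illustrative):

```ruby
require "httpx"

# bound the HTTP/2 termination (GOAWAY) handshake wait to 2 seconds
http = HTTPX.with(timeout: { close_handshake_timeout: 2 })
```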
## Improvements
* The internal "eden connections" concept was removed, and connection objects are now kept and reused during the lifetime of a session, even when closed. This simplified the connection pool implementation and improved performance.
* requests using sessions with both the `:proxy` and `:retries` plugins enabled will now retry on errors related to proxy connection establishment.
## Bugfixes
* webmock adapter: mocked responses storing decoded payloads won't try to decode them again (fixes vcr/webmock integrations).
* webmock adapter: fix issue related with making real requests over webmock-enabled connection.

View File

@ -0,0 +1,6 @@
# 1.2.1
## Bugfixes
* DoH resolver: try resolving other candidates on "domain not found" error (same behaviour as with native resolver).
* Allow HTTP/2 connections to exit cleanly when TLS session gets corrupted and termination handshake can't be performed.

View File

@ -0,0 +1,10 @@
# 1.2.2
## Bugfixes
* only raise "unknown option" error when option is not supported, not anymore when error happens in the setup of a support option.
* usage of `HTTPX::Session#wrap` within a thread with other sessions using the `:persistent` plugin won't inadvertently terminate its open connections.
* terminate connections on `IOError` (`SocketError` does not cover them).
* terminate connections on HTTP/2 protocol and handshake errors, which happen during establishment or termination of an HTTP/2 connection (they were previously being kept around, although they'd become irrecoverable).
* `:oauth` plugin: fixed the check preventing the OAuth metadata server integration path from being exercised.
* fix instantiation of the options headers object with the wrong headers class.

View File

@ -0,0 +1,16 @@
# 1.2.3
## Improvements
* `:retries` plugin: allow `:max_retries` set to 0 (allows for a soft disable of retries when using the faraday adapter).
## Bugfixes
* `:oauth` plugin: fix for default auth method being ignored when setting grant type and scope as options only.
* ensure happy eyeballs-initiated cloned connections also set session callbacks (caused issues when server would respond with a 421 response, an event requiring a valid internal callback).
* native resolver cleanly transitions from tcp to udp after truncated DNS query (causing issues on follow-up CNAME resolution).
* elapsing timeouts now guard against mutation of callbacks while looping (prevents skipping callbacks in situations where a previous one would remove itself from the collection).
## Chore
* datadog adapter: do not call `.lazy` on options (avoids deprecation warning, to be removed in ddtrace 2.0)

View File

@ -0,0 +1,8 @@
# 1.2.4
## Bugfixes
* fixed issue related to inability to buffer payload to error responses (which may happen on certain error handling situations).
* fixed recovery from a lost persistent connection leaving the process hanging, due to a ping being sent while the connection was still marked as inactive.
* fixed datadog integration, which was not generating new spans on retried requests (when `:retries` plugin is enabled).
* fixed splitting of strings into key-value pairs in cases where the value contains a "=", such as in certain base64 payloads.

View File

@ -0,0 +1,7 @@
# 1.2.5
## Bugfixes
* fix for usage of correct `last-modified` header in `response_cache` plugin.
* fix usage of decoding helper methods (i.e. `response.json`) with `response_cache` plugin.
* `stream` plugin: reverted back to yielding buffered payloads for streamed responses (broke `down` integration)

View File

@ -0,0 +1,13 @@
# 1.2.6
## Improvements
* `native` resolver: when timing out on DNS query for an alias, retry the DNS query for the alias (instead of the original hostname).
## Bugfixes
* `faraday` adapter: set `env` options on the request object, so they are available in the request object when yielded.
* `follow_redirects` plugin: remove body-related headers (`content-length`, `content-type`) on POST-to-GET redirects.
* `follow_redirects` plugin: maintain verb (and body) of original request when the response status code is 307.
* `native` resolver: when timing out on TCP-based name resolution, downgrade to UDP before retrying.
* `rate_limiter` plugin: do not try fetching the retry-after of error responses.

View File

@ -0,0 +1,18 @@
# 1.3.0
## Dependencies
`http-2` v1.0.0 is replacing `http-2-next` as the HTTP/2 parser.
`http-2-next` was forked from `http-2` 5 years ago; its improvements have recently been merged back into `http-2`, so `http-2-next` will therefore no longer be maintained.
## Improvements
Request-specific options (`:params`, `:form`, `:json` and `:xml`) are now separately kept by the request, which allows them to share `HTTPX::Options`, and reduce the number of copying / allocations.
This means that `HTTPX::Options` will throw an error if you initialize an object with such keys; this should not happen, as this class is considered internal and you should not be using it directly.
## Fixes
* support for the `datadog` gem v2.0.0 in its adapter has been unblocked, now that the gem has been released.
* loading the `:cookies` plugin was making the `Session#build_request` private.

View File

@ -0,0 +1,17 @@
# 1.3.1
## Improvements
* `:request_timeout` will be applied to all HTTP interactions until the final responses returned to the caller. That includes:
* all redirect requests/responses (when using the `:follow_redirects` plugin)
* all retried requests/responses (when using the `:retries` plugin)
* intermediate requests (such as "100-continue")
* faraday adapter: allow further plugins of internal session (ex: `builder.adapter(:httpx) { |sess| sess.plugin(:follow_redirects) }...`)
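Given the semantics above, a single `:request_timeout` now caps the whole redirect/retry chain; a sketch (the 10-second budget is illustrative):

```ruby
require "httpx"

# the 10s budget covers the original request plus all redirect-following
# and retried requests, until the final response reaches the caller
http = HTTPX.plugin(:follow_redirects)
            .plugin(:retries)
            .with(timeout: { request_timeout: 10 })
```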
## Bugfixes
* fix connection leak on proxy auth failed (407) handling
* fix busy loop on deferred requests for the duration interval
* do not further enqueue deferred requests if they have terminated meanwhile.
* fix busy loop caused by coalescing connections when one of them is on the DNS resolution phase still.
* faraday adapter: on parallel mode, skip calling `on_complete` when not defined.

View File

@ -0,0 +1,6 @@
# 1.3.2
## Bugfixes
* Prevent `NoMethodError` in an edge case when the `:proxy` plugin is autoloaded via env vars, the webmock adapter is used in tandem, and a real request fails.
* raise invalid uri error if passed request uri does not contain the host part (ex: `"https:/get"`)

View File

@ -0,0 +1,5 @@
# 1.3.3
## Bugfixes
* fixing a regression introduced in 1.3.2 associated with the webmock adapter, which expects matchable request bodies to be strings

View File

@ -0,0 +1,6 @@
# 1.3.4
## Bugfixes
* webmock adapter: fix tempfile usage in multipart requests.
* fix: fallback to binary encoding when parsing incoming invalid charset in HTTP "content-type" header.

View File

@ -0,0 +1,43 @@
# 1.4.0
## Features
### `:content_digest` plugin
The `:content_digest` can be used to calculate the digest of request payloads and set them in the `"content-digest"` header; it can also validate the integrity of responses which declare the same `"content-digest"` header.
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Content-Digest
## Per-session connection pools
This architectural change moves away from per-thread shared connection pools, and into per-session (also thread-safe) connection pools. Unlike before, this enables connections from a session to be reused across threads, as well as limiting the number of connections that can be open on a given origin peer. This fixes long-standing issues, such as reusing connections under a fiber scheduler loop (such as the one from the gem `async`).
A new `:pool_options` option is introduced, which can be passed a hash with the following sub-options:
* `:max_connections_per_origin`: maximum number of connections a pool allows (unbounded by default, for backwards compatibility).
* `:pool_timeout`: the number of seconds a session will wait for a connection to be checked out (default: 5)
More info under https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools
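The sub-options above can be combined; a sketch with illustrative values:

```ruby
require "httpx"

http = HTTPX.with(
  pool_options: {
    max_connections_per_origin: 4, # at most 4 connections per origin
    pool_timeout: 3                # wait up to 3s to check out a connection
  }
)
```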
## Improvements
* `:aws_sigv4` plugin: improved digest calculation on compressed request bodies by buffering content to a tempfile.
* `HTTPX::Response#json` will parse payload from extended json MIME types (like `application/ld+json`, `application/hal+json`, ...).
## Bugfixes
* `:aws_sigv4` plugin: do not try to rewind a request body which yields chunks.
* fixed request encoding when `:json` param is passed, and the `oj` gem is used (by using the `:compat` flag).
* native resolver: on message truncation, bubble up tcp handshake errors as resolve errors.
* allow `HTTPX::Response#json` to accept extended JSON mime types (such as responses with `content-type: application/ld+json`)
## Chore
* default options are now fully frozen (in case anyone relies on overriding them).
### `:xml` plugin
XML encoding/decoding (via `:xml` request param, and `HTTPX::Response#xml`) is now available via the `:xml` plugin.
Using `HTTPX::Response#xml` without the plugin will issue a deprecation warning.

View File

@ -0,0 +1,19 @@
# 1.4.1
## Bugfixes
* several `datadog` integration bugfixes
* only load the `datadog` integration when the `datadog` sdk is loaded (and not other gems that may define the `Datadog` module, like `dogstatsd`)
* do not trace if datadog integration is loaded but disabled
* distributed headers are now sent along (when the configuration is enabled, which it is by default)
* fix for handling multiple `GOAWAY` frames coming from the server (node.js servers seem to send multiple frames on connection timeout)
* fix regression for when a url is used with `httpx` which is not `http://` or `https://` (should raise `HTTPX::UnsupportedSchemaError`)
* worked around `IO.copy_stream` emitting incorrect bytes for HTTP/2 requests whose bodies are larger than the maximum supported frame size.
* multipart requests: make sure that a body declared as `Pathname` is opened for reading in binary mode.
* `webmock` integration: ensure that request events are emitted (for plugins and integrations relying on them, such as `datadog` and the OTel integration)
* native resolver: do not propagate successful name resolutions for connections which were already closed.
* native resolver: fixed name resolution stalling, in a multi-request to multi-origin scenario, when a resolution timeout would happen.
## Chore
* refactor of the happy eyeballs and connection coalescing logic to not rely on callbacks, and instead on instance variable management (makes code more straightforward to read).

View File

@ -0,0 +1,20 @@
# 1.4.2
## Bugfixes
* faraday: use default reason when none is matched by Net::HTTP::STATUS_CODES
* native resolver: keep sending DNS queries if the socket is available, to avoid busy loops on select
* native resolver fixes for Happy Eyeballs v2
* do not apply resolution delay if the IPv4 IP was not resolved via DNS
* ignore ALIAS if DNS response carries IP answers
* do not try to query for names already awaiting answer from the resolver
* make sure all types of errors are propagated to connections
* make sure next candidate is picked up if receiving NX_DOMAIN_NOT_FOUND error from resolver
* raise error happening before any request is flushed to respective connections (avoids loop on non-actionable selector termination).
* fix "NoMethodError: undefined method `after' for nil:NilClass", happening for requests flushed into persistent connections which errored, and were retried in a different connection before triggering the timeout callbacks from the previously-closed connection.
## Chore
* Refactor of timers to allow for explicit and more performant single timer interval cancellation.
* default log message restructured to include info about process, thread and caller.

View File

@ -0,0 +1,11 @@
# 1.4.3
## Bugfixes
* `webmock` adapter: reassign headers to signature after callbacks are called (these may change the headers before virtual send).
* do not close request (and its body) right after sending, instead only on response close
* prevents retries from failing under the `:retries` plugin
* fixes issue when using `faraday-multipart` request bodies
* retry request with HTTP/1 when receiving an HTTP/2 GOAWAY frame with `HTTP_1_1_REQUIRED` error code.
* fix wrong method call on HTTP/2 PING frame with unrecognized code.
* fix EOFError issues on connection termination for long running connections which may have already been terminated by peer and were wrongly trying to complete the HTTP/2 termination handshake.

View File

@ -0,0 +1,14 @@
# 1.4.4
## Improvements
* `:stream` plugin: the response will now be partially buffered, in order to allow inspecting e.g. the response status or headers without buffering the full response body
* this fixes an issue in the `down` gem integration when used with the `:max_size` option.
* do not unnecessarily probe for connection liveness if no more requests are inflight, including failed ones.
* when using persistent connections, do not probe for liveness right after reconnecting after a keep alive timeout.
## Bugfixes
* `:persistent` plugin: do not exhaust retry attempts when probing for (and failing) connection liveness.
* since the introduction of per-session connection pools, and consequently due to the possibility of multiple inactive connections for the same origin being in the pool (which may have been terminated by the peer server), requests would fail before being able to establish a new connection.
* prevent retrying to connect the TCP socket object when an SSLSocket object is already in place and connecting.

126
doc/release_notes/1_5_0.md Normal file
View File

@ -0,0 +1,126 @@
# 1.5.0
## Features
### `:stream_bidi` plugin
The `:stream_bidi` plugin enables bidirectional streaming support (an HTTP/2 only feature!). It builds on top of the `:stream` plugin, and uses its block-based syntax to process incoming frames, while allowing the user to pipe more data to the request (from the same, or another thread/fiber).
```ruby
http = HTTPX.plugin(:stream_bidi)
request = http.build_request(
"POST",
"https://your-origin.com/stream",
headers: { "content-type" => "application/x-ndjson" },
body: ["{\"message\":\"started\"}\n"]
)
chunks = []
response = http.request(request, stream: true)
Thread.start do
response.each do |chunk|
handle_data(chunk)
end
end
# now send data...
request << "{\"message\":\"foo\"}\n"
request << "{\"message\":\"bar\"}\n"
# ...
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Stream-Bidi
### `:query` plugin
The `:query` plugin adds public methods supporting the `QUERY` HTTP verb:
```ruby
http = HTTPX.plugin(:query)
http.query("https://example.com/gquery", body: "foo=bar") # QUERY /gquery ....
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Query
This functionality was added as a plugin for explicit opt-in, as it's experimental (the RFC for the new HTTP verb is still in draft).
### `:response_cache` plugin filesystem based store
The `:response_cache` plugin supports setting the filesystem as the response cache store (instead of just storing them in memory, which is the default `:store`).
```ruby
# cache store in the filesystem, writes to the temporary directory from the OS
http = HTTPX.plugin(:response_cache, response_cache_store: :file_store)
# if you want a separate location
http = HTTPX.plugin(:response_cache).with(response_cache_store: HTTPX::Plugins::ResponseCache::FileStore.new("/path/to/dir"))
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Response-Cache#:file_store
### `:close_on_fork` option
A new option, `:close_on_fork`, can be used to ensure that a session object which may have open connections will not leak them in case the process is forked (this can be the case for sessions with the `:persistent` plugin enabled which have had usage before fork):
```ruby
http = HTTPX.plugin(:persistent, close_on_fork: true)
# http may have open connections here
fork do
# http has no connections here
end
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools#Fork-Safety .
### `:debug_redact` option
The `:debug_redact` option will, when enabled, replace parts of the debug logs (enabled via `:debug` and `:debug_level` options) which may contain sensitive information, with the `"[REDACTED]"` placeholder.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Debugging .
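A sketch enabling redacted debug logs (the log destination and level value are illustrative):

```ruby
require "httpx"

http = HTTPX.with(
  debug: $stderr,     # where debug logs are written
  debug_level: 2,     # verbosity of the debug logs
  debug_redact: true  # sensitive parts are logged as "[REDACTED]"
)
```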
### `:max_connections` pool option
A new `:max_connections` pool option (settable under `:pool_options`) can be used to define the maximum **overall** number of connections for a pool ("in-transit" or "at-rest"); this complements, and supersedes when used, the already existing `:max_connections_per_origin`, which does the same per connection origin.
```ruby
HTTPX.with(pool_options: { max_connections: 100 })
```
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Connection-Pools .
### Subplugins
An enhancement to the plugins architecture, it allows plugins to define submodules ("subplugins") which are loaded if another plugin is in use, or is loaded afterwards.
You can read more about it in https://honeyryderchuck.gitlab.io/httpx/wiki/Custom-Plugins#Subplugins .
## Improvements
* `:persistent` plugin: several improvements around reconnections on failure:
* reconnections will only happen for "connection broken" errors (and will discard reconnection on timeouts)
* reconnections won't exhaust retries
* `:response_cache` plugin: several improvements:
* return cached response if not stale, send conditional request otherwise (it was always doing the latter).
* consider immutable (i.e. `"Cache-Control: immutable"`) responses as never stale.
* `:datadog` adapter: decorate spans with more tags (header, kind, component, etc...)
* timers operations have been improved to use more efficient algorithms and reduce object creation.
## Bugfixes
* ensure that setting request timeouts happens before the request is buffered (the latter could trigger a state transition required by the former).
* `:response_cache` plugin: fix `"Vary"` header handling by supporting a new plugin option, `:supported_vary_headers`, which defines which headers are taken into account for cache key calculation.
* fixed query string encoded value when passed an empty hash to the `:query` param and the URL already contains query string.
* `:callbacks` plugin: ensure the callbacks from a session are copied when a new session is derived from it (via a `.plugin` call, for example).
* `:callbacks` plugin: errors raised from hostname resolution should bubble up to user code.
* fixed connection coalescing selector monitoring in cases where the coalescable connection is cloned, while other branches were simplified.
* clear the connection write buffer in corner cases where the remaining bytes may be interpreted as GOAWAY handshake frame (and may cause unintended writes to connections already identified as broken).
* remove idle connections from the selector when an error happens before the state changes (this may happen if the thread is interrupted during name resolution).
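The `:supported_vary_headers` plugin option from the bugfix list above can be set when loading the plugin; a sketch (the header list is illustrative):

```ruby
require "httpx"

# only "accept" and "accept-encoding" participate in cache key calculation
# when responses carry a "vary" header
http = HTTPX.plugin(
  :response_cache,
  supported_vary_headers: %w[accept accept-encoding]
)
```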
## Chore
`httpx` makes extensive use of features introduced in ruby 3.4, such as `Module#set_temporary_name` for otherwise plugin-generated anonymous classes (improves debugging and issue reporting), or `String#append_as_bytes` for a small but non-negligible perf boost in buffer operations. It falls back to the previous behaviour when used with ruby 3.3 or lower.
Also, in preparation for the upcoming ruby 3.5 release, the dependency on the `cgi` gem (which will be removed from stdlib) was removed.

View File

@ -0,0 +1,6 @@
# 1.5.1
## Bugfixes
* connection errors on persistent connections which have just been checked out from the pool no longer account for retries bookkeeping; the assumption should be that, if a connection has been checked into the pool in an open state, chances are, when it eventually gets checked out, it may be corrupt. This issue was more exacerbated in `:persistent` plugin connections, which by design have a retry of 1, thus failing often immediately after check out without a legitimate request try.
* native resolver: fix issue with process interrupts during DNS request, which caused a busy loop when closing the selector.
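The retry bookkeeping rule from the first bullet can be sketched as: an error on a connection that was just checked out of the pool in an open state does not consume a retry attempt, since the failure likely reflects a stale pooled socket rather than a real request failure. Names here (`Attempt`, `consume_retry?`) are illustrative, not httpx internals:

```ruby
# reused_connection: was this a pooled connection checked out in open state?
# error: the error (if any) observed on the attempt
Attempt = Struct.new(:reused_connection, :error)

# returns whether this attempt should count against the retry budget
def consume_retry?(attempt)
  return false unless attempt.error
  # a failure right after checkout of a pooled connection is treated as
  # a stale-connection artifact, not a legitimate failed try
  return false if attempt.reused_connection

  true
end

puts consume_retry?(Attempt.new(true, :reset))  # => false
puts consume_retry?(Attempt.new(false, :reset)) # => true
```

Under this rule, a `:persistent` connection with a retry budget of 1 still gets a genuine retry on a fresh connection after a stale checkout fails.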

View File

@ -1,7 +1,7 @@
version: '3'
services:
httpx:
image: jruby:9.3
image: jruby:9.4
environment:
- JRUBY_OPTS=--debug
entrypoint:

View File

@ -1,4 +0,0 @@
version: '3'
services:
httpx:
image: ruby:2.1

View File

@ -1,4 +0,0 @@
version: '3'
services:
httpx:
image: ruby:2.2

View File

@ -1,8 +0,0 @@
version: '3'
services:
httpx:
image: ruby:2.3
environment:
- HTTPBIN_COALESCING_HOST=another
links:
- "nghttp2:another"

View File

@ -1,8 +0,0 @@
version: '3'
services:
httpx:
image: ruby:2.4
environment:
- HTTPBIN_COALESCING_HOST=another
links:
- "nghttp2:another"

View File

@ -1,8 +0,0 @@
version: '3'
services:
httpx:
image: ruby:2.5
environment:
- HTTPBIN_COALESCING_HOST=another
links:
- "nghttp2:another"

View File

@ -5,13 +5,11 @@ services:
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
links:
- "nghttp2:another"
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -5,13 +5,11 @@ services:
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
links:
- "nghttp2:another"
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -5,13 +5,11 @@ services:
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
links:
- "nghttp2:another"
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -5,13 +5,11 @@ services:
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
links:
- "nghttp2:another"
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -1,17 +1,15 @@
version: '3'
services:
httpx:
image: ruby:2.6
image: ruby:3.3
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
links:
- "nghttp2:another"
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint:

View File

@ -0,0 +1,23 @@
version: '3'
services:
httpx:
image: ruby:3.4
environment:
- HTTPBIN_COALESCING_HOST=another
- HTTPX_RESOLVER_URI=https://doh/dns-query
depends_on:
- doh
doh:
image: registry.gitlab.com/os85/httpx/nghttp2:3
depends_on:
- doh-proxy
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command: --conf /home/doh-nghttp.conf --no-ocsp --frontend '*,443'
doh-proxy:
image: publicarray/doh-proxy
environment:
- "UNBOUND_SERVICE_HOST=127.0.0.11"

View File

@ -1,7 +1,7 @@
version: '3'
services:
httpx:
image: ghcr.io/graalvm/truffleruby:latest
image: ghcr.io/graalvm/truffleruby-community:latest
entrypoint:
- bash
- /home/test/support/ci/build.sh

View File

@ -26,6 +26,7 @@ services:
- AMZ_HOST=aws:4566
- WEBDAV_HOST=webdav
- DD_INSTRUMENTATION_TELEMETRY_ENABLED=false
- GRPC_VERBOSITY=ERROR
image: ruby:alpine
privileged: true
depends_on:
@ -37,13 +38,10 @@ services:
- aws
- ws-echo-server
- webdav
- altsvc-nghttp2
volumes:
- ./:/home
links:
- "altsvc-nghttp2:another2"
- "aws:test.aws"
entrypoint:
/home/test/support/ci/build.sh
entrypoint: /home/test/support/ci/build.sh
sshproxy:
image: connesc/ssh-gateway
@ -51,8 +49,6 @@ services:
- ./test/support/ssh:/config
depends_on:
- nghttp2
links:
- "nghttp2:another"
socksproxy:
image: qautomatron/docker-3proxy
@ -61,8 +57,6 @@ services:
- "3129:3129"
volumes:
- ./test/support/ci:/etc/3proxy
links:
- "nghttp2:another"
httpproxy:
image: sameersbn/squid:3.5.27-2
@ -72,56 +66,53 @@ services:
- ./test/support/ci/squid/proxy.conf:/etc/squid/squid.conf
- ./test/support/ci/squid/proxy-users-basic.txt:/etc/squid/proxy-users-basic.txt
- ./test/support/ci/squid/proxy-users-digest.txt:/etc/squid/proxy-users-digest.txt
links:
- "nghttp2:another"
command:
-d 3
command: -d 3
http2proxy:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 3300:80
depends_on:
- httpproxy
entrypoint:
/usr/local/bin/nghttpx
command:
--no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
entrypoint: /usr/local/bin/nghttpx
command: --no-ocsp --frontend '*,80;no-tls' --backend 'httpproxy,3128' --http2-proxy
nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 80:80
- 443:443
depends_on:
- httpbin
entrypoint:
/usr/local/bin/nghttpx
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command:
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443'
networks:
default:
aliases:
- another
altsvc-nghttp2:
image: registry.gitlab.com/os85/httpx/nghttp2:1
image: registry.gitlab.com/os85/httpx/nghttp2:3
ports:
- 81:80
- 444:443
depends_on:
- httpbin
entrypoint:
/usr/local/bin/nghttpx
entrypoint: /usr/local/bin/nghttpx
volumes:
- ./test/support/ci:/home
command:
--conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
command: --conf /home/nghttp.conf --no-ocsp --frontend '*,80;no-tls' --frontend '*,443' --altsvc "h2,443,nghttp2"
networks:
default:
aliases:
- another2
httpbin:
environment:
- DEBUG=True
image: citizenstig/httpbin
command:
gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
command: gunicorn --bind=0.0.0.0:8000 --workers=6 --access-logfile - --error-logfile - --log-level debug --capture-output httpbin:app
aws:
image: localstack/localstack
@ -133,6 +124,10 @@ services:
- 4566:4566
volumes:
- ./test/support/ci/aws:/docker-entrypoint-initaws.d
networks:
default:
aliases:
- test.aws
ws-echo-server:
environment:
@ -146,4 +141,4 @@ services:
environment:
- AUTH_TYPE=Basic
- USERNAME=user
- PASSWORD=pass
- PASSWORD=pass

View File

@ -1,11 +1,20 @@
require "httpx"
URLS = %w[https://nghttp2.org/httpbin/get] * 1
if ARGV.empty?
URLS = %w[https://nghttp2.org/httpbin/get] * 1
else
URLS = ARGV
end
responses = HTTPX.get(*URLS)
Array(responses).each(&:raise_for_status)
puts "Status: \n"
puts Array(responses).map(&:status)
puts "Payload: \n"
puts Array(responses).map(&:to_s)
Array(responses).each do |res|
puts "URI: #{res.uri}"
case res
when HTTPX::ErrorResponse
puts "error: #{res.error}"
puts res.error.backtrace
else
puts "STATUS: #{res.status}"
puts res.to_s[0..2048]
end
end

View File

@ -17,23 +17,49 @@ end
Signal.trap("INFO") { print_status } unless ENV.key?("CI")
PAGES = (ARGV.first || 10).to_i
Thread.start do
frontpage = HTTPX.get("https://news.ycombinator.com").to_s
html = Oga.parse_html(frontpage)
links = html.css('.itemlist a.storylink').map{|link| link.get('href') }
links = links.select {|l| l.start_with?("https") }
puts links
responses = HTTPX.get(*links)
links.each_with_index do |l, i|
puts "#{responses[i].status}: #{l}"
end
page_links = []
HTTPX.wrap do |http|
PAGES.times.each do |i|
frontpage = http.get("https://news.ycombinator.com?p=#{i+1}").to_s
html = Oga.parse_html(frontpage)
links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? }
links = links.select {|l| l.start_with?("https") }
puts "for page #{i+1}: #{links.size} links"
page_links.concat(links)
end
end
puts "requesting #{page_links.size} links:"
responses = HTTPX.get(*page_links)
# page_links.each_with_index do |l, i|
# puts "#{responses[i].status}: #{l}"
# end
responses, error_responses = responses.partition { |r| r.is_a?(HTTPX::Response) }
puts "#{responses.size} responses (from #{page_links.size})"
puts "by group:"
responses.group_by(&:status).each do |st, res|
res.each do |r|
puts "#{st}: #{r.uri}"
end
end unless responses.empty?
unless error_responses.empty?
puts "error responses (#{error_responses.size})"
error_responses.group_by{ |r| r.error.class }.each do |kl, res|
res.each do |r|
puts "#{r.uri}: #{r.error}"
puts r.error.backtrace&.join("\n")
end
end
end
end.join

View File

@ -1,7 +1,7 @@
require "httpx"
require "oga"
http = HTTPX.plugin(:compression).plugin(:persistent).with(timeout: { operation_timeut: 5, connect_timeout: 5})
http = HTTPX.plugin(:persistent).with(timeout: { request_timeout: 5 })
PAGES = (ARGV.first || 10).to_i
pages = PAGES.times.map do |page|
@ -16,10 +16,11 @@ Array(http.get(*pages)).each_with_index.map do |response, i|
end
html = Oga.parse_html(response.to_s)
# binding.irb
page_links = html.css('.itemlist a.titlelink').map{|link| link.get('href') }
page_links = html.css('.athing .title a').map{|link| link.get('href') }.select { |link| URI(link).absolute? }
puts "page(#{i+1}): #{page_links.size}"
if page_links.size == 0
puts "error(#{response.status}) on page #{i+1}"
next
end
# page_links.each do |link|
# puts "link: #{link}"
@ -31,6 +32,11 @@ end
links = links.each_with_index do |pages, i|
puts "Page: #{i+1}\t Links: #{pages.size}"
pages.each do |page|
puts "URL: #{page.uri} (#{page.status})"
case page
in status:
puts "URL: #{page.uri} (#{status})"
in error:
puts "URL: #{page.uri} (#{error.message})"
end
end
end

View File

@ -7,8 +7,8 @@
#
require "httpx"
URLS = %w[http://badipv4.test.ipv6friday.org/] * 1
# URLS = %w[http://badipv6.test.ipv6friday.org/] * 1
# URLS = %w[https://ipv4.test-ipv6.com] * 1
URLS = %w[https://ipv6.test-ipv6.com] * 1
responses = HTTPX.get(*URLS, ssl: { verify_mode: OpenSSL::SSL::VERIFY_NONE})

View File

@ -6,11 +6,9 @@ include HTTPX
URLS = %w[http://nghttp2.org https://nghttp2.org/blog/]# * 3
client = HTTPX.plugin(:proxy)
client = client.with_proxy(uri: "http://61.7.174.110:54132")
responses = client.get(URLS)
client = client.with_proxy(uri: "http://134.209.29.120:8080")
responses = client.get(*URLS)
puts responses.map(&:status)
# response = client.get(URLS.first)
# puts response.status

View File

@ -0,0 +1,8 @@
require "socket"
puts Process.pid
sleep 10
puts Addrinfo.getaddrinfo("www.google.com", 80).inspect
sleep 10
puts Addrinfo.getaddrinfo("www.google.com", 80).inspect
sleep 60

View File

@ -0,0 +1,40 @@
# frozen_string_literal: true
require "resolv"
require "httpx"
host = "127.0.0.11"
port = 53
# srv_hostname = "aerserv-bc-us-east.bidswitch.net"
record_type = Resolv::DNS::Resource::IN::A
# # addresses = nil
# # Resolv::DNS.open(nameserver: host) do |dns|
# # require "pry-byebug"; binding.pry
# # addresses = dns.getresources(srv_hostname, record_type)
# # end
# message_id = 1
# buffer = HTTPX::Resolver.encode_dns_query(srv_hostname, type: record_type, message_id: message_id)
# io = TCPSocket.new(host, port)
# buffer[0, 2] = [buffer.size, message_id].pack("nn")
# io.write(buffer.to_s)
# data, _ = io.readpartial(2048)
# size = data[0, 2].unpack1("n")
# answer = data[2..-1]
# answer << io.readpartial(size) if size > answer.bytesize
# addresses = HTTPX::Resolver.decode_dns_answer(answer)
# puts "(#{srv_hostname}) addresses: #{addresses}"
srv_hostname = "www.sfjewjfwigiewpgwwg-native-1.com"
socket = UDPSocket.new
buffer = HTTPX::Resolver.encode_dns_query(srv_hostname, type: record_type)
socket.send(buffer.to_s, 0, host, port)
recv, _ = socket.recvfrom(512)
puts "received #{recv.bytesize} bytes..."
addresses = HTTPX::Resolver.decode_dns_answer(recv)
puts "(#{srv_hostname}) addresses: #{addresses}"

View File

@ -0,0 +1,23 @@
require "httpx"
host = "1.1.1.1"
port = 53
hostname = "google.com"
srv_hostname = "_https._tcp.#{hostname}"
record_type = Resolv::DNS::Resource::IN::SRV
addresses = nil
Resolv::DNS.open(nameserver: host) do |dns|
addresses = dns.getresources(srv_hostname, record_type)
end
# buffer = HTTPX::Resolver.encode_dns_query(hostname, type: record_type)
# io = UDPSocket.new(Socket::AF_INET)
# size = io.send(buffer.to_s, 0, Socket.sockaddr_in(port, host.to_s))
# data, _ = io.recvfrom(2048)
# addresses = HTTPX::Resolver.decode_dns_answer(data)
puts "(#{hostname}) addresses: #{addresses}"

View File

@ -20,10 +20,10 @@ Gem::Specification.new do |gem|
gem.metadata = {
"bug_tracker_uri" => "https://gitlab.com/os85/httpx/issues",
"changelog_uri" => "https://os85.gitlab.io/httpx/#release-notes",
"documentation_uri" => "https://os85.gitlab.io/httpx/rdoc/",
"changelog_uri" => "https://honeyryderchuck.gitlab.io/httpx/#release-notes",
"documentation_uri" => "https://honeyryderchuck.gitlab.io/httpx/rdoc/",
"source_code_uri" => "https://gitlab.com/os85/httpx",
"homepage_uri" => "https://os85.gitlab.io/httpx/",
"homepage_uri" => "https://honeyryderchuck.gitlab.io/httpx/",
"rubygems_mfa_required" => "true",
}
@ -32,5 +32,7 @@ Gem::Specification.new do |gem|
gem.require_paths = ["lib"]
gem.add_runtime_dependency "http-2-next", ">= 0.4.1"
gem.add_runtime_dependency "http-2", ">= 1.0.0"
gem.required_ruby_version = ">= 2.7.0"
end

View File

@ -1,3 +1,3 @@
# Integration
This section tests certain cases that we can't reliably reproduce in our test environments, but which can be run locally.
This section tests certain cases that we can't reliably reproduce in our test environments, but which can be run locally.

View File

@ -0,0 +1,133 @@
# frozen_string_literal: true
module DatadogHelpers
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
ERROR_TAG = if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
private
def verify_instrumented_request(status, verb:, uri:, span: fetch_spans.first, service: datadog_service_name.to_s, error: nil)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert span.type == "http"
else
assert span.span_type == "http"
end
assert span.name == "#{datadog_service_name}.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == 80
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
if status && status >= 400
verify_http_error_span(span, status, error)
elsif error
verify_error_span(span)
else
assert span.status.zero?
assert span.get_tag("http.status_code") == status.to_s
# peer service
# assert span.get_tag("peer.service") == span.service
end
end
def verify_http_error_span(span, status, error)
assert span.get_tag("http.status_code") == status.to_s
assert span.get_tag("error.type") == error
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_error_span(span)
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(ERROR_TAG).nil?
assert span.status == 1
end
def verify_no_distributed_headers(request_headers)
assert !request_headers.key?("x-datadog-parent-id")
assert !request_headers.key?("x-datadog-trace-id")
assert !request_headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(request_headers, span: fetch_spans.first, sampling_priority: 1)
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
assert request_headers["x-datadog-parent-id"] == span.id.to_s
else
assert request_headers["x-datadog-parent-id"] == span.span_id.to_s
end
assert request_headers["x-datadog-trace-id"] == trace_id(span)
assert request_headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.17.0")
def trace_id(span)
Datadog::Tracing::Utils::TraceId.to_low_order(span.trace_id).to_s
end
else
def trace_id(span)
span.trace_id.to_s
end
end
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(datadog_service_name, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
# Returns spans and caches it (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = (tracer.instance_variable_get(:@traces) || []).map(&:spans)
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
end
end

View File

@ -1,51 +1,60 @@
# frozen_string_literal: true
require "ddtrace"
begin
# upcoming 2.0
require "datadog"
rescue LoadError
require "ddtrace"
end
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/datadog"
require_relative "datadog_helpers"
class DatadogTest < Minitest::Test
include HTTPHelpers
include DatadogHelpers
def test_datadog_successful_get_request
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_successful_post_request
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/post", "http://#{httpbin}"))
response = HTTPX.post(uri, body: "bla")
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "POST", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_successful_multiple_requests
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
get_uri = URI(build_uri("/get", "http://#{httpbin}"))
post_uri = URI(build_uri("/post", "http://#{httpbin}"))
get_response, post_response = HTTPX.request([["GET", uri], ["POST", uri]])
get_response, post_response = HTTPX.request([["GET", get_uri], ["POST", post_uri]])
verify_status(get_response, 200)
verify_status(post_response, 200)
assert fetch_spans.size == 2, "expected to have 2 spans"
get_span, post_span = fetch_spans
verify_instrumented_request(get_response, span: get_span, verb: "GET", uri: uri)
verify_instrumented_request(post_response, span: post_span, verb: "POST", uri: uri)
verify_distributed_headers(get_response, span: get_span)
verify_distributed_headers(post_response, span: post_span)
verify_instrumented_request(get_response.status, span: get_span, verb: "GET", uri: get_uri)
verify_instrumented_request(post_response.status, span: post_span, verb: "POST", uri: post_uri)
verify_distributed_headers(request_headers(get_response), span: get_span)
verify_distributed_headers(request_headers(post_response), span: post_span)
verify_analytics_headers(get_span)
verify_analytics_headers(post_span)
end
@ -58,8 +67,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 500)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
def test_datadog_client_error_request
@ -70,8 +78,7 @@ class DatadogTest < Minitest::Test
verify_status(response, 404)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
def test_datadog_some_other_error
@ -82,12 +89,11 @@ class DatadogTest < Minitest::Test
assert response.is_a?(HTTPX::ErrorResponse), "response should contain errors"
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
verify_distributed_headers(response)
verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
end
def test_datadog_host_config
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin"
http.split_by_domain = false
@ -97,12 +103,12 @@ class DatadogTest < Minitest::Test
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_split_by_domain
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
set_datadog do |http|
http.split_by_domain = true
end
@ -111,13 +117,13 @@ class DatadogTest < Minitest::Test
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(response)
verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
@ -127,14 +133,14 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(response)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span)
end
def test_datadog_distributed_headers_sampling_priority
set_datadog
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
@ -145,37 +151,51 @@ class DatadogTest < Minitest::Test
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_distributed_headers(response, span: span, sampling_priority: sampling_priority)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span)
end
def test_datadog_analytics_enabled
set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0)
end
def test_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200", "http://#{httpbin}"))
uri = URI(build_uri("/get", "http://#{httpbin}"))
response = HTTPX.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response, span: span, verb: "GET", uri: uri)
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5)
end
def test_datadog_per_request_span_with_retries
set_datadog
uri = URI(build_uri("/status/404", "http://#{httpbin}"))
http = HTTPX.plugin(:retries, max_retries: 2, retry_on: ->(r) { r.status == 404 })
response = http.get(uri)
verify_status(response, 404)
assert fetch_spans.size == 3, "expected to have 3 spans"
fetch_spans.each do |span|
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri, error: "HTTPX::HTTPError")
end
end
private
def setup
@ -186,144 +206,15 @@ class DatadogTest < Minitest::Test
def teardown
super
Datadog.registry[:httpx].reset_configuration!
Datadog.configuration.tracing[:httpx].enabled = false
end
def verify_instrumented_request(response, verb:, uri:, span: fetch_spans.first, service: "httpx", error: nil)
assert span.span_type == "http"
assert span.name == "httpx.request"
assert span.service == service
assert span.get_tag("out.host") == uri.host
assert span.get_tag("out.port") == "80"
assert span.get_tag("http.method") == verb
assert span.get_tag("http.url") == uri.path
error_tag = if defined?(::DDTrace) && Gem::Version.new(::DDTrace::VERSION::STRING) >= Gem::Version.new("1.8.0")
"error.message"
else
"error.msg"
end
if error
assert span.get_tag("error.type") == "HTTPX::NativeResolveError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
elsif response.status >= 400
assert span.get_tag("http.status_code") == response.status.to_s
assert span.get_tag("error.type") == "HTTPX::HTTPError"
assert !span.get_tag(error_tag).nil?
assert span.status == 1
else
assert span.status.zero?
assert span.get_tag("http.status_code") == response.status.to_s
# peer service
assert span.get_tag("peer.service") == span.service
end
def datadog_service_name
:httpx
end
def verify_no_distributed_headers(response)
request = response.instance_variable_get(:@request)
assert !request.headers.key?("x-datadog-parent-id")
assert !request.headers.key?("x-datadog-trace-id")
assert !request.headers.key?("x-datadog-sampling-priority")
end
def verify_distributed_headers(response, span: fetch_spans.first, sampling_priority: 1)
request = response.instance_variable_get(:@request)
assert request.headers["x-datadog-parent-id"] == span.span_id.to_s
assert request.headers["x-datadog-trace-id"] == span.trace_id.to_s
assert request.headers["x-datadog-sampling-priority"] == sampling_priority.to_s
end
def verify_analytics_headers(span, sample_rate: nil)
assert span.get_metric("_dd1.sr.eausr") == sample_rate
end
if defined?(::DDTrace) && Gem::Version.new(::DDTrace::VERSION::STRING) >= Gem::Version.new("1.0.0")
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.tracing.instrument(:httpx, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog::Tracing.send(:tracer)
def tr.write(trace)
@traces ||= []
@traces << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do
tracer.active_trace.sampling_priority = priority
yield
end
end
else
def set_datadog(options = {}, &blk)
Datadog.configure do |c|
c.use(:httpx, options, &blk)
end
tracer # initialize tracer patches
end
def tracer
@tracer ||= begin
tr = Datadog.tracer
def tr.write(trace)
@spans ||= []
@spans << trace
end
tr
end
end
def trace_with_sampling_priority(priority)
tracer.trace("foo.bar") do |span|
span.context.sampling_priority = priority
yield
end
end
end
# Returns spans and caches it (similar to +let(:spans)+).
def spans
@spans ||= fetch_spans
end
# Retrieves and sorts all spans in the current tracer instance.
# This method does not cache its results.
def fetch_spans
spans = if defined?(::DDTrace) && Gem::Version.new(::DDTrace::VERSION::STRING) >= Gem::Version.new("1.0.0")
(tracer.instance_variable_get(:@traces) || []).map(&:spans)
else
tracer.instance_variable_get(:@spans) || []
end
spans.flatten.sort! do |a, b|
if a.name == b.name
if a.resource == b.resource
if a.start_time == b.start_time
a.end_time <=> b.end_time
else
a.start_time <=> b.start_time
end
else
a.resource <=> b.resource
end
else
a.name <=> b.name
end
end
def request_headers(response)
body = json_body(response)
body["headers"].transform_keys(&:downcase)
end
end

View File

@ -0,0 +1,198 @@
# frozen_string_literal: true
begin
# upcoming 2.0
require "datadog"
rescue LoadError
require "ddtrace"
end
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/faraday"
require_relative "datadog_helpers"
DATADOG_VERSION = defined?(DDTrace) ? DDTrace::VERSION : Datadog::VERSION
class FaradayDatadogTest < Minitest::Test
include HTTPHelpers
include DatadogHelpers
include FaradayHelpers
def test_faraday_datadog_successful_get_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_successful_post_request
set_datadog
uri = URI(build_uri("/status/200"))
response = faraday_connection.post(uri, "bla")
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, verb: "POST", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_server_error_request
set_datadog
uri = URI(build_uri("/status/500"))
ex = assert_raises(Faraday::ServerError) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 500")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_client_error_request
set_datadog
uri = URI(build_uri("/status/404"))
ex = assert_raises(Faraday::ResourceNotFound) do
faraday_connection.tap do |conn|
adapter_handler = conn.builder.handlers.last
conn.builder.insert_before adapter_handler, Faraday::Response::RaiseError
end.get(uri)
end
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(ex.response[:status], verb: "GET", uri: uri, error: "Error 404")
verify_distributed_headers(request_headers(ex.response))
end
def test_faraday_datadog_some_other_error
set_datadog
uri = URI("http://unexisting/")
assert_raises(HTTPX::NativeResolveError) { faraday_connection.get(uri) }
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(nil, verb: "GET", uri: uri, error: "HTTPX::NativeResolveError")
end
def test_faraday_datadog_host_config
uri = URI(build_uri("/status/200"))
set_datadog(describe: /#{uri.host}/) do |http|
http.service_name = "httpbin"
http.split_by_domain = false
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: "httpbin", verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_split_by_domain
uri = URI(build_uri("/status/200"))
set_datadog do |http|
http.split_by_domain = true
end
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
verify_instrumented_request(response.status, service: uri.host, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response))
end
def test_faraday_datadog_distributed_headers_disabled
set_datadog(distributed_tracing: false)
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_no_distributed_headers(request_headers(response))
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_distributed_headers_sampling_priority
set_datadog
uri = URI(build_uri("/status/200"))
sampling_priority = 10
response = trace_with_sampling_priority(sampling_priority) do
faraday_connection.get(uri)
end
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_distributed_headers(request_headers(response), span: span, sampling_priority: sampling_priority)
verify_analytics_headers(span)
end unless ENV.key?("CI") # TODO: https://github.com/DataDog/dd-trace-rb/issues/4308
def test_faraday_datadog_analytics_enabled
set_datadog(analytics_enabled: true)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 1.0)
end
def test_faraday_datadog_analytics_sample_rate
set_datadog(analytics_enabled: true, analytics_sample_rate: 0.5)
uri = URI(build_uri("/status/200"))
response = faraday_connection.get(uri)
verify_status(response, 200)
assert !fetch_spans.empty?, "expected to have spans"
span = fetch_spans.last
verify_instrumented_request(response.status, span: span, verb: "GET", uri: uri)
verify_analytics_headers(span, sample_rate: 0.5)
end
private
def setup
super
Datadog.registry[:faraday].reset_configuration!
end
def teardown
super
Datadog.registry[:faraday].reset_configuration!
end
def datadog_service_name
:faraday
end
def origin(orig = httpbin)
"http://#{orig}"
end
end
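The `verify_distributed_headers` assertions above check that Datadog's propagation headers made it onto the outgoing request. As a minimal standalone sketch of what that injection amounts to (the header names are Datadog's standard propagation format; the `inject_datadog_headers` helper itself is hypothetical, not part of httpx or dd-trace-rb):

```ruby
# Hypothetical sketch: inject Datadog-style trace context into a
# request-headers hash, the way distributed tracing propagation does.
def inject_datadog_headers(headers, trace_id:, span_id:, sampling_priority: nil)
  headers["x-datadog-trace-id"] = trace_id.to_s
  headers["x-datadog-parent-id"] = span_id.to_s
  # only set when a priority was explicitly sampled upstream
  headers["x-datadog-sampling-priority"] = sampling_priority.to_s if sampling_priority
  headers
end

headers = inject_datadog_headers({}, trace_id: 123, span_id: 456, sampling_priority: 10)
```

With `distributed_tracing: false` (as in the test above), no such headers are written, which is what `verify_no_distributed_headers` asserts.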

View File

@ -1,150 +1,149 @@
# frozen_string_literal: true

require "logger"
require "stringio"
require "sentry-ruby"
require "test_helper"
require "support/http_helpers"
require "httpx/adapters/sentry"

class SentryTest < Minitest::Test
  include HTTPHelpers

  DUMMY_DSN = "http://12345:67890@sentry.localdomain/sentry/42"

  def test_sentry_send_yes_pii
    before_pii = Sentry.configuration.send_default_pii
    begin
      Sentry.configuration.send_default_pii = true
      transaction = Sentry.start_transaction
      Sentry.get_current_scope.set_span(transaction)

      uri = build_uri("/get")
      response = HTTPX.get(uri, params: { "foo" => "bar" })
      verify_status(response, 200)
      verify_spans(transaction, response, description: "GET #{uri}?foo=bar")
      crumb = Sentry.get_current_scope.breadcrumbs.peek
      assert crumb.category == "httpx"
      assert crumb.data == { status: 200, method: "GET", url: "#{uri}?foo=bar" }
    ensure
      Sentry.configuration.send_default_pii = before_pii
    end
  end

  def test_sentry_send_no_pii
    before_pii = Sentry.configuration.send_default_pii
    begin
      Sentry.configuration.send_default_pii = false
      transaction = Sentry.start_transaction
      Sentry.get_current_scope.set_span(transaction)

      uri = build_uri("/get")
      response = HTTPX.get(uri, params: { "foo" => "bar" })
      verify_status(response, 200)
      verify_spans(transaction, response, description: "GET #{uri}")
      crumb = Sentry.get_current_scope.breadcrumbs.peek
      assert crumb.category == "httpx"
      assert crumb.data == { status: 200, method: "GET", url: uri }
    ensure
      Sentry.configuration.send_default_pii = before_pii
    end
  end

  def test_sentry_post_request
    before_pii = Sentry.configuration.send_default_pii
    begin
      Sentry.configuration.send_default_pii = true
      transaction = Sentry.start_transaction
      Sentry.get_current_scope.set_span(transaction)

      uri = build_uri("/post")
      response = HTTPX.post(uri, form: { foo: "bar" })
      verify_status(response, 200)
      verify_spans(transaction, response, verb: "POST")
      crumb = Sentry.get_current_scope.breadcrumbs.peek
      assert crumb.category == "httpx"
      assert crumb.data == { status: 200, method: "POST", url: uri, body: "foo=bar" }
    ensure
      Sentry.configuration.send_default_pii = before_pii
    end
  end

  def test_sentry_multiple_requests
    transaction = Sentry.start_transaction
    Sentry.get_current_scope.set_span(transaction)

    responses = HTTPX.get(build_uri("/status/200"), build_uri("/status/404"))
    verify_status(responses[0], 200)
    verify_status(responses[1], 404)
    verify_spans(transaction, *responses)
  end

  def test_sentry_server_error_request
    transaction = Sentry.start_transaction
    Sentry.get_current_scope.set_span(transaction)

    uri = URI("http://unexisting/")
    response = HTTPX.get(uri)
    verify_error_response(response, /name or service not known/)
    assert response.is_a?(HTTPX::ErrorResponse), "response should contain errors"
    verify_spans(transaction, response, verb: "GET")
    crumb = Sentry.get_current_scope.breadcrumbs.peek
    assert crumb.category == "httpx"
    assert crumb.data == { error: "name or service not known", method: "GET", url: uri.to_s }
  end

  private

  def verify_spans(transaction, *responses, verb: nil, description: nil)
    assert transaction.span_recorder.spans.count == responses.size + 1
    assert transaction.span_recorder.spans[0] == transaction

    response_spans = transaction.span_recorder.spans[1..-1]

    responses.each_with_index do |response, idx|
      request_span = response_spans[idx]
      assert request_span.op == "httpx.client"
      assert !request_span.start_timestamp.nil?
      assert !request_span.timestamp.nil?
      assert request_span.start_timestamp != request_span.timestamp
      assert request_span.description == (description || "#{verb || "GET"} #{response.uri}")
      if response.is_a?(HTTPX::ErrorResponse)
        assert request_span.data == { error: response.error.message }
      else
        assert request_span.data == { status: response.status }
      end
    end
  end

  def setup
    super
    mock_io = StringIO.new
    mock_logger = Logger.new(mock_io)
    Sentry.init do |config|
      config.traces_sample_rate = 1.0
      config.sdk_logger = mock_logger
      config.dsn = DUMMY_DSN
      config.transport.transport_class = Sentry::DummyTransport
      # so the events will be sent synchronously for testing
      config.background_worker_threads = 0
      config.breadcrumbs_logger = [:http_logger]
      config.enabled_patches << :httpx
    end
  end

  def origin
    "https://#{httpbin}"
  end
end
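The tests above read the most recent breadcrumb with `breadcrumbs.peek`. Sentry keeps breadcrumbs in a bounded buffer; a minimal standalone sketch of such a buffer (the `BreadcrumbBuffer` class below is illustrative, not sentry-ruby's implementation, though it mirrors the `record`/`peek` shape used in the tests):

```ruby
# Hypothetical sketch of a bounded breadcrumb buffer with the
# record/peek interface the tests above rely on.
class BreadcrumbBuffer
  def initialize(max_size = 100)
    @max_size = max_size
    @buffer = []
  end

  def record(crumb)
    @buffer << crumb
    @buffer.shift if @buffer.size > @max_size # drop the oldest crumb
  end

  # the most recently recorded crumb
  def peek
    @buffer.last
  end

  def size
    @buffer.size
  end
end

crumbs = BreadcrumbBuffer.new(2)
crumbs.record({ category: "httpx", data: { status: 200 } })
crumbs.record({ category: "httpx", data: { status: 404 } })
crumbs.record({ category: "httpx", data: { status: 500 } })
```

Because the buffer is bounded, `peek` always reflects the last instrumented request, which is why each test can assert on a single crumb.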

View File

@ -26,6 +26,7 @@ class WebmockTest < Minitest::Test
end
def teardown
super
WebMock.reset!
WebMock.allow_net_connect!
WebMock.disable!
@ -49,6 +50,14 @@ class WebmockTest < Minitest::Test
assert_equal(@exception_class.new("exception message"), response.error)
end
def test_response_not_decoded
request = stub_request(:get, MOCK_URL_HTTP).to_return(body: "body", headers: { content_encoding: "gzip" })
response = HTTPX.get(MOCK_URL_HTTP)
assert_equal("body", response.body.to_s)
assert_requested(request)
end
def test_to_timeout
response = http_request(:get, MOCK_URL_HTTP_TIMEOUT)
assert_requested(@stub_timeout)
@ -87,7 +96,7 @@ class WebmockTest < Minitest::Test
expected_message = "The request GET #{MOCK_URL_HTTP}/ was expected to execute 1 time but it executed 0 times" \
"\n\nThe following requests were made:\n\nNo requests were made.\n" \
"============================================================"
assert_raise_with_message(Minitest::Assertion, expected_message) do
assert_requested(:get, MOCK_URL_HTTP)
end
end
@ -96,7 +105,7 @@ class WebmockTest < Minitest::Test
expected_message = "The request ANY #{MOCK_URL_HTTP}/ was expected to execute 1 time but it executed 0 times" \
"\n\nThe following requests were made:\n\nNo requests were made.\n" \
"============================================================"
assert_raise_with_message(Minitest::Assertion, expected_message) do
assert_requested(@stub_http)
end
end
@ -146,13 +155,36 @@ class WebmockTest < Minitest::Test
assert_requested(:get, MOCK_URL_HTTP, query: hash_excluding("a" => %w[b c]))
end
def test_verification_that_expected_request_with_hash_as_body
stub_request(:post, MOCK_URL_HTTP).with(body: { foo: "bar" })
http_request(:post, MOCK_URL_HTTP, form: { foo: "bar" })
assert_requested(:post, MOCK_URL_HTTP, body: { foo: "bar" })
end
def test_verification_that_expected_request_occured_with_form_file
file = File.new(fixture_file_path)
stub_request(:post, MOCK_URL_HTTP)
http_request(:post, MOCK_URL_HTTP, form: { file: file })
# TODO: webmock does not support matching multipart request body
assert_requested(:post, MOCK_URL_HTTP)
end
def test_verification_that_expected_request_occured_with_form_tempfile
stub_request(:post, MOCK_URL_HTTP)
Tempfile.open("tmp") do |file|
http_request(:post, MOCK_URL_HTTP, form: { file: file })
end
# TODO: webmock does not support matching multipart request body
assert_requested(:post, MOCK_URL_HTTP)
end
def test_verification_that_non_expected_request_didnt_occur
expected_message = Regexp.new(
"The request GET #{MOCK_URL_HTTP}/ was not expected to execute but it executed 1 time\n\n" \
"The following requests were made:\n\nGET #{MOCK_URL_HTTP}/ with headers .+ was made 1 time\n\n" \
"============================================================"
)
assert_raise_with_message(Minitest::Assertion, expected_message) do
http_request(:get, "http://www.example.com/")
assert_not_requested(:get, "http://www.example.com")
end
@ -164,7 +196,7 @@ class WebmockTest < Minitest::Test
"The following requests were made:\n\nGET #{MOCK_URL_HTTP}/ with headers .+ was made 1 time\n\n" \
"============================================================"
)
assert_raise_with_message(Minitest::Assertion, expected_message) do
http_request(:get, "#{MOCK_URL_HTTP}/")
refute_requested(:get, MOCK_URL_HTTP)
end
@ -176,12 +208,43 @@ class WebmockTest < Minitest::Test
"The following requests were made:\n\nGET #{MOCK_URL_HTTP}/ with headers .+ was made 1 time\n\n" \
"============================================================"
)
assert_raise_with_message(Minitest::Assertion, expected_message) do
http_request(:get, "#{MOCK_URL_HTTP}/")
assert_not_requested(@stub_http)
end
end
def test_webmock_allows_real_request
WebMock.allow_net_connect!
uri = build_uri("/get?foo=bar")
response = HTTPX.get(uri)
verify_status(response, 200)
verify_body_length(response)
assert_requested(:get, uri, query: { "foo" => "bar" })
end
def test_webmock_allows_real_request_with_body
WebMock.allow_net_connect!
uri = build_uri("/post")
response = HTTPX.post(uri, form: { foo: "bar" })
verify_status(response, 200)
verify_body_length(response)
assert_requested(:post, uri, headers: { "Content-Type" => "application/x-www-form-urlencoded" }, body: "foo=bar")
end
def test_webmock_allows_real_request_with_file_body
WebMock.allow_net_connect!
uri = build_uri("/post")
response = HTTPX.post(uri, form: { image: File.new(fixture_file_path) })
verify_status(response, 200)
verify_body_length(response)
body = json_body(response)
verify_header(body["headers"], "Content-Type", "multipart/form-data")
verify_uploaded_image(body, "image", "image/jpeg")
# TODO: webmock does not support matching multipart request body
# assert_requested(:post, uri, headers: { "Content-Type" => "multipart/form-data" }, form: { "image" => File.new(fixture_file_path) })
end
def test_webmock_mix_mock_and_real_request
WebMock.allow_net_connect!
@ -214,6 +277,49 @@ class WebmockTest < Minitest::Test
assert_not_requested(:get, "http://#{httpbin}")
end
def test_webmock_follow_redirects_with_stream_plugin_each
session = HTTPX.plugin(:follow_redirects).plugin(:stream)
redirect_url = "#{MOCK_URL_HTTP}/redirect"
initial_request = stub_request(:get, MOCK_URL_HTTP).to_return(status: 302, headers: { location: redirect_url }, body: "redirecting")
redirect_request = stub_request(:get, redirect_url).to_return(status: 200, body: "body")
response = session.get(MOCK_URL_HTTP, stream: true)
body = "".b
response.each do |chunk|
next if (300..399).cover?(response.status)
body << chunk
end
assert_equal("body", body)
assert_requested(initial_request)
assert_requested(redirect_request)
end
def test_webmock_with_stream_plugin_each
session = HTTPX.plugin(:stream)
request = stub_request(:get, MOCK_URL_HTTP).to_return(body: "body")
body = "".b
response = session.get(MOCK_URL_HTTP, stream: true)
response.each do |chunk|
next if (300..399).cover?(response.status)
body << chunk
end
assert_equal("body", body)
assert_requested(request)
end
def test_webmock_with_stream_plugin_each_line
session = HTTPX.plugin(:stream)
request = stub_request(:get, MOCK_URL_HTTP).to_return(body: "First line\nSecond line")
response = session.get(MOCK_URL_HTTP, stream: true)
assert_equal(["First line", "Second line"], response.each_line.to_a)
assert_requested(request)
end
private
def assert_raise_with_message(e, message, &block)
@ -228,4 +334,8 @@ class WebmockTest < Minitest::Test
def http_request(meth, *uris, **options)
HTTPX.__send__(meth, *uris, **options)
end
def scheme
"http://"
end
end
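The `assert_requested`/`assert_not_requested` checks above boil down to counting executed requests against stub signatures. A minimal standalone sketch of that idea (the `StubRegistry` class is hypothetical, not WebMock's internals; the error message format loosely follows the expectations asserted in the tests):

```ruby
# Hypothetical sketch of webmock-style request verification: executed
# requests are tallied by (method, url) signature and later compared
# against the expected execution count.
class StubRegistry
  def initialize
    @requests = Hash.new(0)
  end

  # called once per executed request
  def record(meth, url)
    @requests[[meth, url]] += 1
  end

  # how many times a request matching the signature was made
  def times_executed(meth, url)
    @requests[[meth, url]]
  end

  def assert_requested(meth, url, times: 1)
    executed = times_executed(meth, url)
    return if executed == times

    raise "The request #{meth.to_s.upcase} #{url} was expected to execute " \
          "#{times} time but it executed #{executed} times"
  end
end

registry = StubRegistry.new
registry.record(:get, "http://www.example.com/")
```

The real library additionally normalizes URLs and matches on headers, query, and body, but the counting core is the same.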

View File

@ -2,6 +2,42 @@
require "httpx/version"
# Top-Level Namespace
#
module HTTPX
EMPTY = [].freeze
EMPTY_HASH = {}.freeze
# All plugins should be stored under this module/namespace. Can register and load
# plugins.
#
module Plugins
@plugins = {}
@plugins_mutex = Thread::Mutex.new
# Loads a plugin based on a name. If the plugin hasn't been loaded, tries to load
# it from the load path under "httpx/plugins/" directory.
#
def self.load_plugin(name)
h = @plugins
m = @plugins_mutex
unless (plugin = m.synchronize { h[name] })
require "httpx/plugins/#{name}"
raise "Plugin #{name} hasn't been registered" unless (plugin = m.synchronize { h[name] })
end
plugin
end
# Registers a plugin (+mod+) in the central store indexed by +name+.
#
def self.register_plugin(name, mod)
h = @plugins
m = @plugins_mutex
m.synchronize { h[name] = mod }
end
end
end
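The registry above guards a plain Hash with a `Thread::Mutex` so plugins can be registered and looked up from multiple threads. The pattern can be exercised in isolation; a minimal sketch (`Registry` and `DummyPlugin` are illustrative names, not the HTTPX API, and the real `load_plugin`'s `require "httpx/plugins/#{name}"` fallback is omitted):

```ruby
# Minimal sketch of a mutex-guarded plugin registry, mirroring the
# register/load pattern above. Not part of HTTPX.
module Registry
  @plugins = {}
  @plugins_mutex = Thread::Mutex.new

  # Returns the registered plugin, or raises if the name is unknown.
  def self.load_plugin(name)
    plugin = @plugins_mutex.synchronize { @plugins[name] }
    raise "Plugin #{name} hasn't been registered" unless plugin

    plugin
  end

  # Stores +mod+ under +name+; safe to call from multiple threads.
  def self.register_plugin(name, mod)
    @plugins_mutex.synchronize { @plugins[name] = mod }
  end
end

DummyPlugin = Module.new
Registry.register_plugin(:dummy, DummyPlugin)
```

Keeping the mutex in a separate ivar (instead of extending the Hash with `Mutex_m`, as the removed code did) avoids the deprecated `mutex_m` stdlib dependency.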
require "httpx/extensions"
require "httpx/errors"
@ -20,55 +56,11 @@ require "httpx/response"
require "httpx/options"
require "httpx/chainable"
# :nocov:
def self.const_missing(const_name)
super unless const_name == :Client
warn "DEPRECATION WARNING: the class #{self}::Client is deprecated. Use #{self}::Session instead."
Session
end
# :nocov:
extend Chainable
end
require "httpx/session"
require "httpx/session_extensions"
# load integrations when possible
require "httpx/adapters/datadog" if defined?(DDTrace) || defined?(Datadog::Tracing)
require "httpx/adapters/sentry" if defined?(Sentry)
require "httpx/adapters/webmock" if defined?(WebMock)

View File

@ -1,177 +1,211 @@
# frozen_string_literal: true
require "datadog/tracing/contrib/integration"
require "datadog/tracing/contrib/configuration/settings"
require "datadog/tracing/contrib/patcher"

module Datadog::Tracing
  module Contrib
    module HTTPX
      DATADOG_VERSION = defined?(::DDTrace) ? ::DDTrace::VERSION : ::Datadog::VERSION

      METADATA_MODULE = Datadog::Tracing::Metadata

      TYPE_OUTBOUND = Datadog::Tracing::Metadata::Ext::HTTP::TYPE_OUTBOUND

      TAG_BASE_SERVICE = if Gem::Version.new(DATADOG_VERSION::STRING) < Gem::Version.new("1.15.0")
        "_dd.base_service"
      else
        Datadog::Tracing::Contrib::Ext::Metadata::TAG_BASE_SERVICE
      end
      TAG_PEER_HOSTNAME = Datadog::Tracing::Metadata::Ext::TAG_PEER_HOSTNAME

      TAG_KIND = Datadog::Tracing::Metadata::Ext::TAG_KIND
      TAG_CLIENT = Datadog::Tracing::Metadata::Ext::SpanKind::TAG_CLIENT
      TAG_COMPONENT = Datadog::Tracing::Metadata::Ext::TAG_COMPONENT
      TAG_OPERATION = Datadog::Tracing::Metadata::Ext::TAG_OPERATION
      TAG_URL = Datadog::Tracing::Metadata::Ext::HTTP::TAG_URL
      TAG_METHOD = Datadog::Tracing::Metadata::Ext::HTTP::TAG_METHOD
      TAG_TARGET_HOST = Datadog::Tracing::Metadata::Ext::NET::TAG_TARGET_HOST
      TAG_TARGET_PORT = Datadog::Tracing::Metadata::Ext::NET::TAG_TARGET_PORT
      TAG_STATUS_CODE = Datadog::Tracing::Metadata::Ext::HTTP::TAG_STATUS_CODE

      # HTTPX Datadog Plugin
      #
      # Enables tracing for httpx requests.
      #
      # A span will be created for each request transaction; the span is created lazily only when
      # buffering a request, and it is fed the start time stored inside the tracer object.
      #
      module Plugin
        module RequestTracer
          extend Contrib::HttpAnnotationHelper

          module_function

          SPAN_REQUEST = "httpx.request"

          # initializes tracing on the +request+.
          def call(request)
            return unless configuration(request).enabled

            span = nil

            # request objects are reused, when already buffered requests get rerouted to a different
            # connection due to connection issues, or when they already got a response, but need to
            # be retried. In such situations, the original span needs to be extended for the former,
            # while a new is required for the latter.
            request.on(:idle) do
              span = nil
            end
            # the span is initialized when the request is buffered in the parser, which is the closest
            # one gets to actually sending the request.
            request.on(:headers) do
              next if span

              span = initialize_span(request, now)
            end

            request.on(:response) do |response|
              unless span
                next unless response.is_a?(::HTTPX::ErrorResponse) && response.error.respond_to?(:connection)

                # handles the case when the +error+ happened during name resolution, which means
                # that the tracing start point hasn't been triggered yet; in such cases, the approximate
                # initial resolving time is collected from the connection, and used as span start time,
                # and the tracing object in inserted before the on response callback is called.
                span = initialize_span(request, response.error.connection.init_time)
              end

              finish(response, span)
            end
          end

          def finish(response, span)
            if response.is_a?(::HTTPX::ErrorResponse)
              span.set_error(response.error)
            else
              span.set_tag(TAG_STATUS_CODE, response.status.to_s)

              span.set_error(::HTTPX::HTTPError.new(response)) if response.status >= 400 && response.status <= 599

              span.set_tags(
                Datadog.configuration.tracing.header_tags.response_tags(response.headers.to_h)
              ) if Datadog.configuration.tracing.respond_to?(:header_tags)
            end

            span.finish
          end

          # return a span initialized with the +request+ state.
          def initialize_span(request, start_time)
            verb = request.verb
            uri = request.uri

            config = configuration(request)

            span = create_span(request, config, start_time)

            span.resource = verb

            # Tag original global service name if not used
            span.set_tag(TAG_BASE_SERVICE, Datadog.configuration.service) if span.service != Datadog.configuration.service

            span.set_tag(TAG_KIND, TAG_CLIENT)

            span.set_tag(TAG_COMPONENT, "httpx")
            span.set_tag(TAG_OPERATION, "request")

            span.set_tag(TAG_URL, request.path)
            span.set_tag(TAG_METHOD, verb)

            span.set_tag(TAG_TARGET_HOST, uri.host)
            span.set_tag(TAG_TARGET_PORT, uri.port)

            span.set_tag(TAG_PEER_HOSTNAME, uri.host)

            # Tag as an external peer service
            # span.set_tag(TAG_PEER_SERVICE, span.service)

            if config[:distributed_tracing]
              propagate_trace_http(
                Datadog::Tracing.active_trace,
                request.headers
              )
            end

            # Set analytics sample rate
            if Contrib::Analytics.enabled?(config[:analytics_enabled])
              Contrib::Analytics.set_sample_rate(span, config[:analytics_sample_rate])
            end

            span.set_tags(
              Datadog.configuration.tracing.header_tags.request_tags(request.headers.to_h)
            ) if Datadog.configuration.tracing.respond_to?(:header_tags)

            span
          rescue StandardError => e
            Datadog.logger.error("error preparing span for http request: #{e}")
            Datadog.logger.error(e.backtrace)
          end

          def now
            ::Datadog::Core::Utils::Time.now.utc
          end

          def configuration(request)
            Datadog.configuration.tracing[:httpx, request.uri.host]
          end

          if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("2.0.0")
            def propagate_trace_http(trace, headers)
              Datadog::Tracing::Contrib::HTTP.inject(trace, headers)
            end

            def create_span(request, configuration, start_time)
              Datadog::Tracing.trace(
                SPAN_REQUEST,
                service: service_name(request.uri.host, configuration),
                type: TYPE_OUTBOUND,
                start_time: start_time
              )
            end
          else
            def propagate_trace_http(trace, headers)
              Datadog::Tracing::Propagation::HTTP.inject!(trace.to_digest, headers)
            end

            def create_span(request, configuration, start_time)
              Datadog::Tracing.trace(
                SPAN_REQUEST,
                service: service_name(request.uri.host, configuration),
                span_type: TYPE_OUTBOUND,
                start_time: start_time
              )
            end
          end
        end

        module RequestMethods
          # intercepts request initialization to inject the tracing logic.
          def initialize(*)
            super

            return unless Datadog::Tracing.enabled?

            RequestTracer.call(self)
          end
        end

        module ConnectionMethods
          attr_reader :init_time

          def initialize(*)
            super

            @init_time = ::Datadog::Core::Utils::Time.now.utc
          end
        end
      end
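The tracer above creates the span lazily on the `:headers` event, resets it on `:idle` (the request is being reused for a retry), and closes it on `:response`. A standalone sketch of that callback lifecycle (`FakeRequest` and `Span` below are hypothetical stand-ins, not the httpx event API):

```ruby
# Hypothetical sketch of the lazy span lifecycle driven by request events.
class FakeRequest
  def initialize
    @callbacks = Hash.new { |h, k| h[k] = [] }
  end

  def on(event, &blk)
    @callbacks[event] << blk
  end

  def emit(event, *args)
    @callbacks[event].each { |cb| cb.call(*args) }
  end
end

Span = Struct.new(:start_time, :finished) do
  def finish
    self.finished = true
  end
end

spans = []
request = FakeRequest.new
span = nil

request.on(:idle) { span = nil } # request reused: drop the stale span
request.on(:headers) do # closest point to actually sending the request
  next if span

  span = Span.new(Time.now, false)
  spans << span
end
request.on(:response) { |_res| span.finish }

request.emit(:headers)
request.emit(:headers) # re-buffering the same request does not open a second span
request.emit(:response, :ok)
request.emit(:idle)    # retry: the next :headers will open a fresh span
request.emit(:headers)
```

This is why a rerouted request extends its original span while a retried one gets a new span: only `:idle` clears the closure-captured `span`.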
@ -179,7 +213,7 @@ module TRACING_MODULE # rubocop:disable Naming/ClassAndModuleCamelCase
module Configuration
# Default settings for httpx
#
class Settings < Datadog::Tracing::Contrib::Configuration::Settings
DEFAULT_ERROR_HANDLER = lambda do |response|
Datadog::Ext::HTTP::ERROR_RANGE.cover?(response.status)
end
@ -188,29 +222,82 @@ module TRACING_MODULE # rubocop:disable Naming/ClassAndModuleCamelCase
option :split_by_domain, default: false

if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
  option :enabled do |o|
    o.type :bool
    o.env "DD_TRACE_HTTPX_ENABLED"
    o.default true
  end

  option :analytics_enabled do |o|
    o.type :bool
    o.env "DD_TRACE_HTTPX_ANALYTICS_ENABLED"
    o.default false
  end

  option :analytics_sample_rate do |o|
    o.type :float
    o.env "DD_TRACE_HTTPX_ANALYTICS_SAMPLE_RATE"
    o.default 1.0
  end
else
  option :enabled do |o|
    o.default { env_to_bool("DD_TRACE_HTTPX_ENABLED", true) }
    o.lazy
  end

  option :analytics_enabled do |o|
    o.default { env_to_bool(%w[DD_TRACE_HTTPX_ANALYTICS_ENABLED DD_HTTPX_ANALYTICS_ENABLED], false) }
    o.lazy
  end

  option :analytics_sample_rate do |o|
    o.default { env_to_float(%w[DD_TRACE_HTTPX_ANALYTICS_SAMPLE_RATE DD_HTTPX_ANALYTICS_SAMPLE_RATE], 1.0) }
    o.lazy
  end
end

if defined?(Datadog::Tracing::Contrib::SpanAttributeSchema)
  option :service_name do |o|
    o.default do
      Datadog::Tracing::Contrib::SpanAttributeSchema.fetch_service_name(
        "DD_TRACE_HTTPX_SERVICE_NAME",
        "httpx"
      )
    end
    o.lazy unless Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
  end
else
  option :service_name do |o|
    o.default do
      ENV.fetch("DD_TRACE_HTTPX_SERVICE_NAME", "httpx")
    end
    o.lazy unless Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
  end
end

option :distributed_tracing, default: true

if Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.15.0")
  option :error_handler do |o|
    o.type :proc
    o.default_proc(&DEFAULT_ERROR_HANDLER)
  end
elsif Gem::Version.new(DATADOG_VERSION::STRING) >= Gem::Version.new("1.13.0")
  option :error_handler do |o|
    o.type :proc
    o.experimental_default_proc(&DEFAULT_ERROR_HANDLER)
  end
else
  option :error_handler, default: DEFAULT_ERROR_HANDLER
end
end
end
# Patcher enables patching of 'httpx' with datadog components.
#
module Patcher
include Datadog::Tracing::Contrib::Patcher
module_function
@ -233,7 +320,6 @@ module TRACING_MODULE # rubocop:disable Naming/ClassAndModuleCamelCase
class Integration
include Contrib::Integration
# MINIMUM_VERSION = Gem::Version.new('0.11.0')
MINIMUM_VERSION = Gem::Version.new("0.10.2")
register_as :httpx
@ -250,14 +336,8 @@ module TRACING_MODULE # rubocop:disable Naming/ClassAndModuleCamelCase
super && version >= MINIMUM_VERSION
end
if defined?(::DDTrace) && ::DDTrace::VERSION::STRING >= "1.0.0"
def new_configuration
Configuration::Settings.new
end
else
def default_configuration
Configuration::Settings.new
end
def new_configuration
Configuration::Settings.new
end
def patcher
@ -7,69 +7,112 @@ require "faraday"
module Faraday
class Adapter
class HTTPX < Faraday::Adapter
# :nocov:
SSL_ERROR = if defined?(Faraday::SSLError)
Faraday::SSLError
else
Faraday::Error::SSLError
end
CONNECTION_FAILED_ERROR = if defined?(Faraday::ConnectionFailed)
Faraday::ConnectionFailed
else
Faraday::Error::ConnectionFailed
end
# :nocov:
unless Faraday::RequestOptions.method_defined?(:stream_response?)
module RequestOptionsExtensions
refine Faraday::RequestOptions do
def stream_response?
false
end
end
end
using RequestOptionsExtensions
end
module RequestMixin
using ::HTTPX::HashExtensions
def build_connection(env)
return @connection if defined?(@connection)
@connection = ::HTTPX.plugin(:persistent).plugin(ReasonPlugin)
@connection = @connection.with(@connection_options) unless @connection_options.empty?
connection_opts = options_from_env(env)
if (bind = env.request.bind)
@bind = TCPSocket.new(bind[:host], bind[:port])
connection_opts[:io] = @bind
end
@connection = @connection.with(connection_opts)
if (proxy = env.request.proxy)
proxy_options = { uri: proxy.uri }
proxy_options[:username] = proxy.user if proxy.user
proxy_options[:password] = proxy.password if proxy.password
@connection = @connection.plugin(:proxy).with(proxy: proxy_options)
end
@connection = @connection.plugin(OnDataPlugin) if env.request.stream_response?
@connection = @config_block.call(@connection) || @connection if @config_block
@connection
end
def close
@connection.close if @connection
@bind.close if @bind
end
private
def connect(env, &blk)
connection(env, &blk)
rescue ::HTTPX::TLSError => e
raise Faraday::SSLError, e
rescue Errno::ECONNABORTED,
Errno::ECONNREFUSED,
Errno::ECONNRESET,
Errno::EHOSTUNREACH,
Errno::EINVAL,
Errno::ENETUNREACH,
Errno::EPIPE,
::HTTPX::ConnectionError => e
raise Faraday::ConnectionFailed, e
end
def build_request(env)
meth = env[:method]
request_options = {
headers: env.request_headers,
body: env.body,
**options_from_env(env),
}
[meth.to_s.upcase, env.url, request_options]
end
def options_from_env(env)
timeout_options = {
connect_timeout: env.request.open_timeout,
operation_timeout: env.request.timeout,
}.compact
timeout_options = {}
req_opts = env.request
if (sec = request_timeout(:read, req_opts))
timeout_options[:read_timeout] = sec
end
options = {
ssl: {},
if (sec = request_timeout(:write, req_opts))
timeout_options[:write_timeout] = sec
end
if (sec = request_timeout(:open, req_opts))
timeout_options[:connect_timeout] = sec
end
{
ssl: ssl_options_from_env(env),
timeout: timeout_options,
}
end
options[:ssl][:verify_mode] = OpenSSL::SSL::VERIFY_PEER if env.ssl.verify
options[:ssl][:ca_file] = env.ssl.ca_file if env.ssl.ca_file
options[:ssl][:ca_path] = env.ssl.ca_path if env.ssl.ca_path
options[:ssl][:cert_store] = env.ssl.cert_store if env.ssl.cert_store
options[:ssl][:cert] = env.ssl.client_cert if env.ssl.client_cert
options[:ssl][:key] = env.ssl.client_key if env.ssl.client_key
options[:ssl][:ssl_version] = env.ssl.version if env.ssl.version
options[:ssl][:verify_depth] = env.ssl.verify_depth if env.ssl.verify_depth
options[:ssl][:min_version] = env.ssl.min_version if env.ssl.min_version
options[:ssl][:max_version] = env.ssl.max_version if env.ssl.max_version
if defined?(::OpenSSL)
def ssl_options_from_env(env)
ssl_options = {}
options
unless env.ssl.verify.nil?
ssl_options[:verify_mode] = env.ssl.verify ? OpenSSL::SSL::VERIFY_PEER : OpenSSL::SSL::VERIFY_NONE
end
ssl_options[:ca_file] = env.ssl.ca_file if env.ssl.ca_file
ssl_options[:ca_path] = env.ssl.ca_path if env.ssl.ca_path
ssl_options[:cert_store] = env.ssl.cert_store if env.ssl.cert_store
ssl_options[:cert] = env.ssl.client_cert if env.ssl.client_cert
ssl_options[:key] = env.ssl.client_key if env.ssl.client_key
ssl_options[:ssl_version] = env.ssl.version if env.ssl.version
ssl_options[:verify_depth] = env.ssl.verify_depth if env.ssl.verify_depth
ssl_options[:min_version] = env.ssl.min_version if env.ssl.min_version
ssl_options[:max_version] = env.ssl.max_version if env.ssl.max_version
ssl_options
end
else
# :nocov:
def ssl_options_from_env(*)
{}
end
# :nocov:
end
end
@ -100,30 +143,15 @@ module Faraday
end
module ReasonPlugin
if RUBY_VERSION < "2.5"
def self.load_dependencies(*)
require "webrick"
end
else
def self.load_dependencies(*)
require "net/http/status"
end
def self.load_dependencies(*)
require "net/http/status"
end
module ResponseMethods
if RUBY_VERSION < "2.5"
def reason
WEBrick::HTTPStatus::StatusMessage.fetch(@status)
end
else
def reason
Net::HTTP::STATUS_CODES.fetch(@status)
end
end
end
end
def self.session
@session ||= ::HTTPX.plugin(:compression).plugin(:persistent).plugin(ReasonPlugin)
module ResponseMethods
def reason
Net::HTTP::STATUS_CODES.fetch(@status, "Non-Standard status code")
end
end
end
class ParallelManager
@ -158,8 +186,9 @@ module Faraday
include RequestMixin
def initialize
def initialize(options)
@handlers = []
@connection_options = options
end
def enqueue(request)
@ -173,40 +202,51 @@ module Faraday
env = @handlers.last.env
session = HTTPX.session.with(options_from_env(env))
session = session.plugin(:proxy).with(proxy: { uri: env.request.proxy }) if env.request.proxy
session = session.plugin(OnDataPlugin) if env.request.stream_response?
connect(env) do |session|
requests = @handlers.map { |handler| session.build_request(*build_request(handler.env)) }
requests = @handlers.map { |handler| session.build_request(*build_request(handler.env)) }
if env.request.stream_response?
requests.each do |request|
request.response_on_data = env.request.on_data
end
end
if env.request.stream_response?
requests.each do |request|
request.response_on_data = env.request.on_data
responses = session.request(*requests)
Array(responses).each_with_index do |response, index|
handler = @handlers[index]
handler.on_response.call(response)
handler.on_complete.call(handler.env) if handler.on_complete
end
end
rescue ::HTTPX::TimeoutError => e
raise Faraday::TimeoutError, e
end
responses = session.request(*requests)
Array(responses).each_with_index do |response, index|
handler = @handlers[index]
handler.on_response.call(response)
handler.on_complete.call(handler.env)
end
# from Faraday::Adapter#connection
def connection(env)
conn = build_connection(env)
return conn unless block_given?
yield conn
end
private
# from Faraday::Adapter#request_timeout
def request_timeout(type, options)
key = Faraday::Adapter::TIMEOUT_KEYS[type]
options[key] || options[:timeout]
end
end
self.supports_parallel = true
class << self
def setup_parallel_manager
ParallelManager.new
def setup_parallel_manager(options = {})
ParallelManager.new(options)
end
end
def initialize(app, options = {})
super(app)
@session_options = options
end
def call(env)
super
if parallel?(env)
@ -224,38 +264,30 @@ module Faraday
return handler
end
session = HTTPX.session
session = session.with(@session_options) unless @session_options.empty?
session = session.with(options_from_env(env))
session = session.plugin(:proxy).with(proxy: { uri: env.request.proxy }) if env.request.proxy
session = session.plugin(OnDataPlugin) if env.request.stream_response?
request = session.build_request(*build_request(env))
request.response_on_data = env.request.on_data if env.request.stream_response?
response = session.request(request)
# do not call #raise_for_status for HTTP 4xx or 5xx, as faraday has a middleware for that.
response.raise_for_status unless response.is_a?(::HTTPX::Response)
response = connect_and_request(env)
save_response(env, response.status, response.body.to_s, response.headers, response.reason) do |response_headers|
response_headers.merge!(response.headers)
end
@app.call(env)
rescue ::HTTPX::TLSError => e
raise SSL_ERROR, e
rescue Errno::ECONNABORTED,
Errno::ECONNREFUSED,
Errno::ECONNRESET,
Errno::EHOSTUNREACH,
Errno::EINVAL,
Errno::ENETUNREACH,
Errno::EPIPE,
::HTTPX::ConnectionError => e
raise CONNECTION_FAILED_ERROR, e
end
private
def connect_and_request(env)
connect(env) do |session|
request = session.build_request(*build_request(env))
request.response_on_data = env.request.on_data if env.request.stream_response?
response = session.request(request)
# do not call #raise_for_status for HTTP 4xx or 5xx, as faraday has a middleware for that.
response.raise_for_status unless response.is_a?(::HTTPX::Response)
response
end
rescue ::HTTPX::TimeoutError => e
raise Faraday::TimeoutError, e
end
def parallel?(env)
env[:parallel_manager]
end
@ -27,6 +27,11 @@ module HTTPX::Plugins
def set_sentry_trace_header(request, sentry_span)
return unless sentry_span
config = ::Sentry.configuration
url = request.uri.to_s
return unless config.propagate_traces && config.trace_propagation_targets.any? { |target| url.match?(target) }
trace = ::Sentry.get_current_client.generate_sentry_trace(sentry_span)
request.headers[::Sentry::SENTRY_TRACE_HEADER_NAME] = trace if trace
end
@ -91,7 +96,7 @@ module HTTPX::Plugins
module RequestMethods
def __sentry_enable_trace!
return super if @__sentry_enable_trace
return if @__sentry_enable_trace
Tracer.call(self)
@__sentry_enable_trace = true
@ -108,7 +113,7 @@ module HTTPX::Plugins
end
end
Sentry.register_patch do
Sentry.register_patch(:httpx) do
sentry_session = HTTPX.plugin(HTTPX::Plugins::Sentry)
HTTPX.send(:remove_const, :Session)
@ -2,13 +2,8 @@
module WebMock
module HttpLibAdapters
if RUBY_VERSION < "2.5"
require "webrick/httpstatus"
HTTP_REASONS = WEBrick::HTTPStatus::StatusMessage
else
require "net/http/status"
HTTP_REASONS = Net::HTTP::STATUS_CODES
end
require "net/http/status"
HTTP_REASONS = Net::HTTP::STATUS_CODES
#
# HTTPX plugin for webmock.
@ -25,7 +20,7 @@ module WebMock
WebMock::RequestSignature.new(
request.verb.downcase.to_sym,
uri.to_s,
body: request.body.each.to_a.join,
body: request.body.to_s,
headers: request.headers.to_h
)
end
@ -43,27 +38,53 @@ module WebMock
return build_error_response(request, webmock_response.exception) if webmock_response.exception
response = request.options.response_class.new(request,
webmock_response.status[0],
"2.0",
webmock_response.headers)
response << webmock_response.body.dup
response
request.options.response_class.new(request,
webmock_response.status[0],
"2.0",
webmock_response.headers).tap do |res|
res.mocked = true
end
end
def build_error_response(request, exception)
HTTPX::ErrorResponse.new(request, exception, request.options)
HTTPX::ErrorResponse.new(request, exception)
end
end
module InstanceMethods
def build_connection(*)
connection = super
private
def do_init_connection(connection, selector)
super
connection.once(:unmock_connection) do
pool.__send__(:resolve_connection, connection)
pool.__send__(:unregister_connection, connection) unless connection.addresses
next unless connection.current_session == self
unless connection.addresses
# reset Happy Eyeballs, fail early
connection.sibling = nil
deselect_connection(connection, selector)
end
resolve_connection(connection, selector)
end
connection
end
end
module ResponseMethods
attr_accessor :mocked
def initialize(*)
super
@mocked = false
end
end
module ResponseBodyMethods
def decode_chunk(chunk)
return chunk if @response.mocked
super
end
end
@ -85,6 +106,10 @@ module WebMock
super
end
def terminate
force_reset
end
def send(request)
request_signature = Plugin.build_webmock_request_signature(request)
WebMock::RequestRegistry.instance.requested_signatures.put(request_signature)
@ -93,8 +118,16 @@ module WebMock
response = Plugin.build_from_webmock_response(request, mock_response)
WebMock::CallbackRegistry.invoke_callbacks({ lib: :httpx }, request_signature, mock_response)
log { "mocking #{request.uri} with #{mock_response.inspect}" }
request.transition(:headers)
request.transition(:body)
request.transition(:trailers)
request.transition(:done)
response.finish!
request.response = response
request.emit(:response, response)
request_signature.headers = request.headers.to_h
response << mock_response.body.dup unless response.is_a?(HTTPX::ErrorResponse)
elsif WebMock.net_connect_allowed?(request_signature.uri)
if WebMock::CallbackRegistry.any_callbacks?
request.on(:response) do |resp|
@ -4,7 +4,59 @@ require "strscan"
module HTTPX
module AltSvc
@altsvc_mutex = Mutex.new
# makes connections able to accept requests destined to the primary service.
module ConnectionMixin
using URIExtensions
def send(request)
request.headers["alt-used"] = @origin.authority if @parser && !@write_buffer.full? && match_altsvcs?(request.uri)
super
end
def match?(uri, options)
return false if !used? && (@state == :closing || @state == :closed)
match_altsvcs?(uri) && match_altsvc_options?(uri, options)
end
private
# checks if this connection is an alternative service of
# +uri+
def match_altsvcs?(uri)
@origins.any? { |origin| altsvc_match?(uri, origin) } ||
AltSvc.cached_altsvc(@origin).any? do |altsvc|
origin = altsvc["origin"]
altsvc_match?(origin, uri.origin)
end
end
def match_altsvc_options?(uri, options)
return @options == options unless @options.ssl.all? do |k, v|
v == (k == :hostname ? uri.host : options.ssl[k])
end
@options.options_equals?(options, Options::REQUEST_BODY_IVARS + %i[@ssl])
end
def altsvc_match?(uri, other_uri)
other_uri = URI(other_uri)
uri.origin == other_uri.origin || begin
case uri.scheme
when "h2"
(other_uri.scheme == "https" || other_uri.scheme == "h2") &&
uri.host == other_uri.host &&
uri.port == other_uri.port
else
false
end
end
end
end
@altsvc_mutex = Thread::Mutex.new
@altsvcs = Hash.new { |h, k| h[k] = [] }
module_function
@ -46,7 +98,7 @@ module HTTPX
altsvc = response.headers["alt-svc"]
# https://tools.ietf.org/html/rfc7838#section-3
# https://datatracker.ietf.org/doc/html/rfc7838#section-3
# A field value containing the special value "clear" indicates that the
# origin requests all alternatives for that origin to be invalidated
# (including those specified in the same response, in case of an
@ -79,9 +131,9 @@ module HTTPX
scanner.skip(/;/)
break if scanner.eos? || scanner.scan(/ *, */)
end
alt_params = Hash[alt_params.map { |field| field.split("=") }]
alt_params = Hash[alt_params.map { |field| field.split("=", 2) }]
alt_proto, alt_authority = alt_service.split("=")
alt_proto, alt_authority = alt_service.split("=", 2)
alt_origin = parse_altsvc_origin(alt_proto, alt_authority)
return unless alt_origin
@ -98,29 +150,14 @@ module HTTPX
end
end
# :nocov:
if RUBY_VERSION < "2.2"
def parse_altsvc_origin(alt_proto, alt_origin)
alt_scheme = parse_altsvc_scheme(alt_proto) or return
def parse_altsvc_origin(alt_proto, alt_origin)
alt_scheme = parse_altsvc_scheme(alt_proto)
alt_origin = alt_origin[1..-2] if alt_origin.start_with?("\"") && alt_origin.end_with?("\"")
if alt_origin.start_with?(":")
alt_origin = "#{alt_scheme}://dummy#{alt_origin}"
uri = URI.parse(alt_origin)
uri.host = nil
uri
else
URI.parse("#{alt_scheme}://#{alt_origin}")
end
end
else
def parse_altsvc_origin(alt_proto, alt_origin)
alt_scheme = parse_altsvc_scheme(alt_proto) or return
alt_origin = alt_origin[1..-2] if alt_origin.start_with?("\"") && alt_origin.end_with?("\"")
return unless alt_scheme
URI.parse("#{alt_scheme}://#{alt_origin}")
end
alt_origin = alt_origin[1..-2] if alt_origin.start_with?("\"") && alt_origin.end_with?("\"")
URI.parse("#{alt_scheme}://#{alt_origin}")
end
# :nocov:
end
end
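The change from `split("=")` to `split("=", 2)` above is what keeps a parameter value intact when the value itself contains `=`. A standalone illustration (the `token` field is hypothetical, not from the Alt-Svc grammar):

```ruby
# Without a limit, String#split breaks on every "=" and a value containing
# "=" spills into extra pieces, which would mis-pair Hash[...] entries.
unlimited = "token=abc=def".split("=")
# => ["token", "abc", "def"]

# With limit 2, everything after the first "=" stays in the value.
key, value = "token=abc=def".split("=", 2)
# key   => "token"
# value => "abc=def"
```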

lib/httpx/base64.rb Normal file

@ -0,0 +1,27 @@
# frozen_string_literal: true
if RUBY_VERSION < "3.3.0"
require "base64"
elsif !defined?(Base64)
module HTTPX
# require "base64" will not be a default gem after ruby 3.4.0
module Base64
module_function
def decode64(str)
str.unpack1("m")
end
def strict_encode64(bin)
[bin].pack("m0")
end
def urlsafe_encode64(bin, padding: true)
str = strict_encode64(bin)
str.chomp!("==") or str.chomp!("=") unless padding
str.tr!("+/", "-_")
str
end
end
end
end
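The ponyfill above builds on `Array#pack` and `String#unpack1` directly, so no `base64` gem is needed on Ruby 3.4+. A quick sanity check of the underlying directives:

```ruby
# "m0" is strict base64 encoding without a trailing newline;
# "m" (no 0) is the lenient form, which appends a newline.
["hi"].pack("m0")   # => "aGk="
["hi"].pack("m")    # => "aGk=\n"

# unpack1 with "m" decodes leniently (ignores whitespace/newlines).
"aGk=".unpack1("m") # => "hi"
```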
@ -3,11 +3,17 @@
require "forwardable"
module HTTPX
# Internal class to abstract a string buffer, by wrapping a string and providing the
# minimum possible API and functionality required.
#
# buffer = Buffer.new(640)
# buffer.full? #=> false
# buffer << "aa"
# buffer.capacity #=> 638
#
class Buffer
extend Forwardable
def_delegator :@buffer, :<<
def_delegator :@buffer, :to_s
def_delegator :@buffer, :to_str
@ -22,9 +28,22 @@ module HTTPX
attr_reader :limit
def initialize(limit)
@buffer = "".b
@limit = limit
if RUBY_VERSION >= "3.4.0"
def initialize(limit)
@buffer = String.new("", encoding: Encoding::BINARY, capacity: limit)
@limit = limit
end
def <<(chunk)
@buffer.append_as_bytes(chunk)
end
else
def initialize(limit)
@buffer = "".b
@limit = limit
end
def_delegator :@buffer, :<<
end
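The `String.new(capacity:)` form used in the Ruby 3.4+ branch preallocates the backing storage up front, and `append_as_bytes` additionally skips encoding negotiation on each append. The pre-3.4 fallback keeps the plain binary string; a minimal check of that path:

```ruby
# preallocate a binary buffer; capacity is a hint, not a hard limit
buffer = String.new("", encoding: Encoding::BINARY, capacity: 640)
buffer << "aa"
buffer.bytesize # => 2
buffer.encoding # => Encoding::BINARY (a.k.a. ASCII-8BIT)
```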
def full?
@ -4,6 +4,7 @@ module HTTPX
module Callbacks
def on(type, &action)
callbacks(type) << action
action
end
def once(type, &block)
@ -13,17 +14,13 @@ module HTTPX
end
end
def only(type, &block)
callbacks(type).clear
on(type, &block)
end
def emit(type, *args)
log { "emit #{type.inspect} callbacks" } if respond_to?(:log)
callbacks(type).delete_if { |pr| :delete == pr.call(*args) } # rubocop:disable Style/YodaCondition
end
def callbacks_for?(type)
@callbacks.key?(type) && @callbacks[type].any?
@callbacks && @callbacks.key?(type) && @callbacks[type].any?
end
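The `emit` implementation above relies on a convention: a callback that returns `:delete` removes itself, which is how `once` gets one-shot semantics. The pattern can be exercised in isolation (class name hypothetical, reduced from the module above):

```ruby
class MiniCallbacks
  def initialize
    @callbacks = Hash.new { |h, k| h[k] = [] }
  end

  def on(type, &action)
    @callbacks[type] << action
    action
  end

  # one-shot: the wrapper runs the block, then asks emit to drop it
  def once(type, &block)
    on(type) do |*args|
      block.call(*args)
      :delete
    end
  end

  def emit(type, *args)
    @callbacks[type].delete_if { |pr| :delete == pr.call(*args) }
  end

  def callbacks_for?(type)
    @callbacks.key?(type) && @callbacks[type].any?
  end
end

cb = MiniCallbacks.new
calls = []
cb.once(:close) { calls << :fired }
cb.emit(:close)
cb.emit(:close)
calls                     # => [:fired]  (second emit finds no callback)
cb.callbacks_for?(:close) # => false
```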
protected
@ -1,6 +1,8 @@
# frozen_string_literal: true
module HTTPX
# Session mixin, implements most of the APIs that the users call.
# delegates to a default session when extended.
module Chainable
%w[head get post put delete trace options connect patch].each do |meth|
class_eval(<<-MOD, __FILE__, __LINE__ + 1)
@ -10,80 +12,95 @@ module HTTPX
MOD
end
# delegates to the default session (see HTTPX::Session#request).
def request(*args, **options)
branch(default_options).request(*args, **options)
end
# :nocov:
def timeout(**args)
warn ":#{__method__} is deprecated, use :with_timeout instead"
with(timeout: args)
end
def headers(headers)
warn ":#{__method__} is deprecated, use :with_headers instead"
with(headers: headers)
end
# :nocov:
def accept(type)
with(headers: { "accept" => String(type) })
end
# delegates to the default session (see HTTPX::Session#wrap).
def wrap(&blk)
branch(default_options).wrap(&blk)
end
# returns a new instance loaded with the +pl+ plugin and +options+.
def plugin(pl, options = nil, &blk)
klass = is_a?(Session) ? self.class : Session
klass = is_a?(S) ? self.class : Session
klass = Class.new(klass)
klass.instance_variable_set(:@default_options, klass.default_options.merge(default_options))
klass.plugin(pl, options, &blk).new
end
# deprecated
# :nocov:
def plugins(pls)
warn ":#{__method__} is deprecated, use :plugin instead"
klass = is_a?(Session) ? self.class : Session
klass = Class.new(klass)
klass.instance_variable_set(:@default_options, klass.default_options.merge(default_options))
klass.plugins(pls).new
end
# :nocov:
# returns a new instance loaded with +options+.
def with(options, &blk)
branch(default_options.merge(options), &blk)
end
private
# returns default instance of HTTPX::Options.
def default_options
@options || Session.default_options
end
# returns a default instance of HTTPX::Session.
def branch(options, &blk)
return self.class.new(options, &blk) if is_a?(Session)
return self.class.new(options, &blk) if is_a?(S)
Session.new(options, &blk)
end
def method_missing(meth, *args, **options)
return super unless meth =~ /\Awith_(.+)/
def method_missing(meth, *args, **options, &blk)
case meth
when /\Awith_(.+)/
option = Regexp.last_match(1)
option = Regexp.last_match(1)
return super unless option
return super unless option
with(option.to_sym => (args.first || options))
with(option.to_sym => args.first || options)
when /\Aon_(.+)/
callback = Regexp.last_match(1)
return super unless %w[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].include?(callback)
warn "DEPRECATION WARNING: calling `.#{meth}` on plain HTTPX sessions is deprecated. " \
"Use `HTTPX.plugin(:callbacks).#{meth}` instead."
plugin(:callbacks).__send__(meth, *args, **options, &blk)
else
super
end
end
def respond_to_missing?(meth, *)
return super unless meth =~ /\Awith_(.+)/
case meth
when /\Awith_(.+)/
option = Regexp.last_match(1)
option = Regexp.last_match(1)
default_options.respond_to?(option) || super
when /\Aon_(.+)/
callback = Regexp.last_match(1)
default_options.respond_to?(option) || super
%w[
connection_opened connection_closed
request_error
request_started request_body_chunk request_completed
response_started response_body_chunk response_completed
].include?(callback) || super
else
super
end
end
end
extend Chainable
end
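The `method_missing` dispatch above maps a call like `with_timeout(...)` onto `with(timeout: ...)`, taking either the first positional argument or the keyword hash as the option value. A reduced sketch of just that mechanism (class name hypothetical; `with` here simply returns the merged options):

```ruby
class MiniChainable
  # stand-in for the real session builder: just return the options hash
  def with(options)
    options
  end

  def method_missing(meth, *args, **options)
    if (md = /\Awith_(.+)/.match(meth.to_s))
      # with_foo(value) => with(foo: value); with_foo(k: v) => with(foo: { k: v })
      with(md[1].to_sym => args.first || options)
    else
      super
    end
  end

  def respond_to_missing?(meth, *)
    meth.to_s.start_with?("with_") || super
  end
end

chain = MiniChainable.new
chain.with_timeout(request_timeout: 5) # => { timeout: { request_timeout: 5 } }
chain.respond_to?(:with_headers)       # => true
```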
@ -33,7 +33,6 @@ module HTTPX
include Callbacks
using URIExtensions
using NumericExtensions
require "httpx/connection/http2"
require "httpx/connection/http1"
@ -42,21 +41,33 @@ module HTTPX
def_delegator :@write_buffer, :empty?
attr_reader :type, :io, :origin, :origins, :state, :pending, :options
attr_reader :type, :io, :origin, :origins, :state, :pending, :options, :ssl_session, :sibling
attr_writer :timers
attr_writer :current_selector
attr_accessor :family
attr_accessor :current_session, :family
def initialize(type, uri, options)
@type = type
protected :sibling
def initialize(uri, options)
@current_session = @current_selector =
@parser = @sibling = @coalesced_connection =
@io = @ssl_session = @timeout =
@connected_at = @response_received_at = nil
@exhausted = @cloned = @main_sibling = false
@options = Options.new(options)
@type = initialize_type(uri, @options)
@origins = [uri.origin]
@origin = Utils.to_uri(uri.origin)
@options = Options.new(options)
@window_size = @options.window_size
@read_buffer = Buffer.new(@options.buffer_size)
@write_buffer = Buffer.new(@options.buffer_size)
@pending = []
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
on(:error, &method(:on_error))
if @options.io
# if there's an already open IO, get its
@ -67,14 +78,39 @@ module HTTPX
else
transition(:idle)
end
on(:close) do
next if @exhausted # it'll reset
@inflight = 0
@keep_alive_timeout = @options.timeout[:keep_alive_timeout]
@total_timeout = @options.timeout[:total_timeout]
# may be called after ":close" above, so after the connection has been checked back in.
# next unless @current_session
next unless @current_session
@current_session.deselect_connection(self, @current_selector, @cloned)
end
on(:terminate) do
next if @exhausted # it'll reset
current_session = @current_session
current_selector = @current_selector
# may be called after ":close" above, so after the connection has been checked back in.
next unless current_session && current_selector
current_session.deselect_connection(self, current_selector)
end
on(:altsvc) do |alt_origin, origin, alt_params|
build_altsvc_connection(alt_origin, origin, alt_params)
end
self.addresses = @options.addresses if @options.addresses
end
def peer
@origin
end
# this is a semi-private method, to be used by the resolver
# to initiate the io object.
def addresses=(addrs)
@ -90,27 +126,27 @@ module HTTPX
end
def match?(uri, options)
return false if @state == :closing || @state == :closed
return false if exhausted?
return false if !used? && (@state == :closing || @state == :closed)
(
(
@origins.include?(uri.origin) &&
# if there is more than one origin to match, it means that this connection
# was the result of coalescing. To prevent blind trust in the case where the
# origin came from an ORIGIN frame, we're going to verify the hostname with the
# SSL certificate
(@origins.size == 1 || @origin == uri.origin || (@io.is_a?(SSL) && @io.verify_hostname(uri.host)))
) && @options == options
) || (match_altsvcs?(uri) && match_altsvc_options?(uri, options))
@origins.include?(uri.origin) &&
# if there is more than one origin to match, it means that this connection
# was the result of coalescing. To prevent blind trust in the case where the
# origin came from an ORIGIN frame, we're going to verify the hostname with the
# SSL certificate
(@origins.size == 1 || @origin == uri.origin || (@io.is_a?(SSL) && @io.verify_hostname(uri.host)))
) && @options == options
end
def expired?
return false unless @io
@io.expired?
end
def mergeable?(connection)
return false if @state == :closing || @state == :closed || !@io
return false if exhausted?
return false unless connection.addresses
(
@ -119,6 +155,14 @@ module HTTPX
) && @options == connection.options
end
# coalesces +self+ into +connection+.
def coalesce!(connection)
@coalesced_connection = connection
close_sibling
connection.merge(self)
end
# coalescable connections need to be mergeable!
# but internally, #mergeable? is called before #coalescable?
def coalescable?(connection)
@ -133,11 +177,17 @@ module HTTPX
end
def create_idle(options = {})
self.class.new(@type, @origin, @options.merge(options))
self.class.new(@origin, @options.merge(options))
end
def merge(connection)
@origins |= connection.instance_variable_get(:@origins)
if connection.ssl_session
@ssl_session = connection.ssl_session
@io.session_new_cb do |sess|
@ssl_session = sess
end if @io
end
connection.purge_pending do |req|
send(req)
end
@ -155,22 +205,10 @@ module HTTPX
end
end
# checks if this connection is an alternative service of
# +uri+
def match_altsvcs?(uri)
@origins.any? { |origin| uri.altsvc_match?(origin) } ||
AltSvc.cached_altsvc(@origin).any? do |altsvc|
origin = altsvc["origin"]
origin.altsvc_match?(uri.origin)
end
end
def io_connected?
return @coalesced_connection.io_connected? if @coalesced_connection
def match_altsvc_options?(uri, options)
return @options == options unless @options.ssl[:hostname] == uri.host
dup_options = @options.merge(ssl: { hostname: nil })
dup_options.ssl.delete(:hostname)
dup_options == options
@io && @io.state == :connected
end
def connecting?
@ -178,7 +216,12 @@ module HTTPX
end
def inflight?
@parser && !@parser.empty? && !@write_buffer.empty?
@parser && (
# parser may be dealing with other requests (possibly started from a different fiber)
!@parser.empty? ||
# connection may be doing connection termination handshake
!@write_buffer.empty?
)
end
def interests
@ -194,6 +237,9 @@ module HTTPX
return @parser.interests if @parser
nil
rescue StandardError => e
emit(:error, e)
nil
end
@ -203,16 +249,22 @@ module HTTPX
def call
case @state
when :idle
connect
consume
when :closed
return
when :closing
consume
transition(:closed)
emit(:close)
when :open
consume
end
nil
rescue StandardError => e
@write_buffer.clear
emit(:error, e)
raise e
end
def close
@ -221,24 +273,38 @@ module HTTPX
@parser.close if @parser
end
def terminate
case @state
when :idle
purge_after_closed
emit(:terminate)
when :closed
@connected_at = nil
end
close
end
# bypasses the state machine to force closing of connections still connecting.
# **only** used for Happy Eyeballs v2.
def force_reset
def force_reset(cloned = false)
@state = :closing
@cloned = cloned
transition(:closed)
emit(:close)
end
def reset
return if @state == :closing || @state == :closed
transition(:closing)
transition(:closed)
emit(:close)
end
def send(request)
if @parser && !@write_buffer.full?
request.headers["alt-used"] = @origin.authority if match_altsvcs?(request.uri)
return @coalesced_connection.send(request) if @coalesced_connection
if @parser && !@write_buffer.full?
if @response_received_at && @keep_alive_timeout &&
Utils.elapsed_time(@response_received_at) > @keep_alive_timeout
# when pushing a request into an existing connection, we have to check whether there
@ -246,8 +312,9 @@ module HTTPX
# for such cases, we want to ping for availability before deciding to shovel requests.
log(level: 3) { "keep alive timeout expired, pinging connection..." }
@pending << request
parser.ping
transition(:active) if @state == :inactive
parser.ping
request.ping!
return
end
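The branch above pings instead of shovelling the request whenever the connection has sat idle longer than the keep-alive window. The elapsed-time test it performs can be sketched with a monotonic clock (helper name hypothetical):

```ruby
# true when the connection sat idle longer than the keep-alive window;
# nil timestamps/timeouts mean "no constraint", so never expired
def keep_alive_expired?(response_received_at, keep_alive_timeout)
  return false unless response_received_at && keep_alive_timeout

  now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  (now - response_received_at) > keep_alive_timeout
end

last = Process.clock_gettime(Process::CLOCK_MONOTONIC) - 10
keep_alive_expired?(last, 5)  # => true  (idle 10s > 5s window)
keep_alive_expired?(last, 30) # => false
keep_alive_expired?(nil, 5)   # => false (no response received yet)
```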
@ -258,28 +325,26 @@ module HTTPX
end
def timeout
if @total_timeout
return @total_timeout unless @connected_at
return if @state == :closed || @state == :inactive
elapsed_time = @total_timeout - Utils.elapsed_time(@connected_at)
if elapsed_time.negative?
ex = TotalTimeoutError.new(@total_timeout, "Timed out after #{@total_timeout} seconds")
ex.set_backtrace(caller)
on_error(ex)
return
end
return elapsed_time
end
return @timeout if defined?(@timeout)
return @timeout if @timeout
return @options.timeout[:connect_timeout] if @state == :idle
@options.timeout[:operation_timeout]
end
def idling
purge_after_closed
@write_buffer.clear
transition(:idle)
@parser = nil if @parser
end
def used?
@connected_at
end
def deactivate
transition(:inactive)
end
@ -288,28 +353,65 @@ module HTTPX
@state == :open || @state == :inactive
end
def raise_timeout_error(interval)
error = HTTPX::TimeoutError.new(interval, "timed out while waiting on select")
def handle_socket_timeout(interval)
error = OperationTimeoutError.new(interval, "timed out while waiting on select")
error.set_backtrace(caller)
on_error(error)
end
def sibling=(connection)
@sibling = connection
return unless connection
@main_sibling = connection.sibling.nil?
return unless @main_sibling
connection.sibling = self
end
def handle_connect_error(error)
return handle_error(error) unless @sibling && @sibling.connecting?
@sibling.merge(self)
force_reset(true)
end
def disconnect
return unless @current_session && @current_selector
emit(:close)
@current_session = nil
@current_selector = nil
end
# :nocov:
def inspect
"#<#{self.class}:#{object_id} " \
"@origin=#{@origin} " \
"@state=#{@state} " \
"@pending=#{@pending.size} " \
"@io=#{@io}>"
end
# :nocov:
private
def connect
transition(:open)
end
def exhausted?
@parser && parser.exhausted?
end
def consume
return unless @io
catch(:called) do
epiped = false
loop do
# connection may have
return if @state == :idle
parser.consume
# we exit if there's no more requests to process
@ -339,8 +441,10 @@ module HTTPX
#
loop do
siz = @io.read(@window_size, @read_buffer)
log(level: 3, color: :cyan) { "IO READ: #{siz} bytes..." }
log(level: 3, color: :cyan) { "IO READ: #{siz} bytes... (wsize: #{@window_size}, rbuffer: #{@read_buffer.bytesize})" }
unless siz
@write_buffer.clear
ex = EOFError.new("descriptor closed")
ex.set_backtrace(caller)
on_error(ex)
@ -395,6 +499,8 @@ module HTTPX
end
log(level: 3, color: :cyan) { "IO WRITE: #{siz} bytes..." }
unless siz
@write_buffer.clear
ex = EOFError.new("descriptor closed")
ex.set_backtrace(caller)
on_error(ex)
@ -440,17 +546,22 @@ module HTTPX
def send_request_to_parser(request)
@inflight += 1
parser.send(request)
request.peer_address = @io.ip
set_request_timeouts(request)
parser.send(request)
return unless @state == :inactive
transition(:active)
# mark request as ping, as this inactive connection may have been
# closed by the server, and we don't want that to influence retry
# bookkeeping.
request.ping!
end
def build_parser(protocol = @io.protocol)
parser = self.class.parser_type(protocol).new(@write_buffer, @options)
parser = parser_type(protocol).new(@write_buffer, @options)
set_parser_callbacks(parser)
parser
end
@ -462,6 +573,7 @@ module HTTPX
end
@response_received_at = Utils.now
@inflight -= 1
response.finish!
request.emit(:response, response)
end
parser.on(:altsvc) do |alt_origin, origin, alt_params|
@ -474,32 +586,49 @@ module HTTPX
request.emit(:promise, parser, stream)
end
parser.on(:exhausted) do
emit(:exhausted)
@exhausted = true
current_session = @current_session
current_selector = @current_selector
begin
parser.close
@pending.concat(parser.pending)
ensure
@current_session = current_session
@current_selector = current_selector
end
case @state
when :closed
idling
@exhausted = false
when :closing
once(:closed) do
idling
@exhausted = false
end
end
end
parser.on(:origin) do |origin|
@origins |= [origin]
end
parser.on(:close) do |force|
transition(:closing)
if force || @state == :idle
transition(:closed)
emit(:close)
if force
reset
emit(:terminate)
end
end
parser.on(:close_handshake) do
consume
end
parser.on(:reset) do
if parser.empty?
reset
else
transition(:closing)
transition(:closed)
emit(:reset)
@parser.reset if @parser
transition(:idle)
transition(:open)
@pending.concat(parser.pending) unless parser.empty?
current_session = @current_session
current_selector = @current_selector
reset
unless @pending.empty?
idling
@current_session = current_session
@current_selector = current_selector
end
end
parser.on(:current_timeout) do
@ -508,15 +637,28 @@ module HTTPX
parser.on(:timeout) do |tout|
@timeout = tout
end
parser.on(:error) do |request, ex|
case ex
when MisdirectedRequestError
emit(:misdirected, request)
else
response = ErrorResponse.new(request, ex, @options)
request.response = response
request.emit(:response, response)
parser.on(:error) do |request, error|
case error
when :http_1_1_required
current_session = @current_session
current_selector = @current_selector
parser.close
other_connection = current_session.find_connection(@origin, current_selector,
@options.merge(ssl: { alpn_protocols: %w[http/1.1] }))
other_connection.merge(self)
request.transition(:idle)
other_connection.send(request)
next
when OperationTimeoutError
# request level timeouts should take precedence
next unless request.active_timeouts.empty?
end
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
end
@ -531,19 +673,22 @@ module HTTPX
Errno::ENETUNREACH,
Errno::EPIPE,
Errno::ENOENT,
SocketError => e
SocketError,
IOError => e
# connect errors, exit gracefully
error = ConnectionError.new(e.message)
error.set_backtrace(e.backtrace)
connecting? && callbacks_for?(:connect_error) ? emit(:connect_error, error) : handle_error(error)
handle_connect_error(error) if connecting?
@state = :closed
emit(:close)
rescue TLSError => e
purge_after_closed
disconnect
rescue TLSError, ::HTTP2::Error::ProtocolError, ::HTTP2::Error::HandshakeError => e
# connect errors, exit gracefully
handle_error(e)
connecting? && callbacks_for?(:connect_error) ? emit(:connect_error, e) : handle_error(e)
handle_connect_error(e) if connecting?
@state = :closed
emit(:close)
purge_after_closed
disconnect
end
def handle_transition(nextstate)
@ -551,11 +696,12 @@ module HTTPX
when :idle
@timeout = @current_timeout = @options.timeout[:connect_timeout]
@connected_at = @response_received_at = nil
when :open
return if @state == :closed
@io.connect
emit(:tcp_open, self) if @io.state == :connected
close_sibling if @io.state == :connected
return unless @io.connected?
@ -567,92 +713,203 @@ module HTTPX
emit(:open)
when :inactive
return unless @state == :open
when :closing
return unless @state == :open
# do not deactivate connection in use
return if @inflight.positive?
when :closing
return unless @state == :idle || @state == :open
unless @write_buffer.empty?
# preset state before handshake, as error callbacks
# may take it back here.
@state = nextstate
# handshakes, try sending
consume
@write_buffer.clear
return
end
when :closed
return unless @state == :closing
return unless @write_buffer.empty?
purge_after_closed
disconnect if @pending.empty?
when :already_open
nextstate = :open
# the first check for given io readiness must still use a timeout.
# connect is the reasonable choice in such a case.
@timeout = @options.timeout[:connect_timeout]
send_pending
when :active
return unless @state == :inactive
nextstate = :open
emit(:activate)
# activate
@current_session.select_connection(self, @current_selector)
end
log(level: 3) { "#{@state} -> #{nextstate}" }
@state = nextstate
end
def close_sibling
return unless @sibling
if @sibling.io_connected?
reset
# TODO: transition connection to closed
end
unless @sibling.state == :closed
merge(@sibling) unless @main_sibling
@sibling.force_reset(true)
end
@sibling = nil
end
def purge_after_closed
@io.close if @io
@read_buffer.clear
remove_instance_variable(:@timeout) if defined?(@timeout)
@timeout = nil
end
def initialize_type(uri, options)
options.transport || begin
case uri.scheme
when "http"
"tcp"
when "https"
"ssl"
else
raise UnsupportedSchemeError, "#{uri}: #{uri.scheme}: unsupported URI scheme"
end
end
end
# returns an HTTPX::Connection for the negotiated Alternative Service (or none).
def build_altsvc_connection(alt_origin, origin, alt_params)
# do not allow security downgrades on altsvc negotiation
return if @origin.scheme == "https" && alt_origin.scheme != "https"
altsvc = AltSvc.cached_altsvc_set(origin, alt_params.merge("origin" => alt_origin))
# altsvc already exists, somehow it wasn't advertised, probably noop
return unless altsvc
alt_options = @options.merge(ssl: @options.ssl.merge(hostname: URI(origin).host))
connection = @current_session.find_connection(alt_origin, @current_selector, alt_options)
# advertised altsvc is the same origin being used, ignore
return if connection == self
connection.extend(AltSvc::ConnectionMixin) unless connection.is_a?(AltSvc::ConnectionMixin)
log(level: 1) { "#{origin} alt-svc: #{alt_origin}" }
connection.merge(self)
terminate
rescue UnsupportedSchemeError
altsvc["noop"] = true
nil
end
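The downgrade guard at the top of `build_altsvc_connection` can be exercised in isolation. This is a sketch with a hypothetical `downgrade?` helper, not the library's API:

```ruby
require "uri"

# A security downgrade occurs when an https origin is pointed at a
# non-https alternative service; such Alt-Svc advertisements are ignored.
def downgrade?(origin, alt_origin)
  origin.scheme == "https" && alt_origin.scheme != "https"
end

downgrade?(URI("https://example.com"), URI("http://alt.example.com"))  # => true
downgrade?(URI("https://example.com"), URI("https://alt.example.com")) # => false
```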
def build_socket(addrs = nil)
transport_type = case @type
when "tcp" then TCP
when "ssl" then SSL
when "unix" then UNIX
else
raise Error, "unsupported transport (#{@type})"
case @type
when "tcp"
TCP.new(peer, addrs, @options)
when "ssl"
SSL.new(peer, addrs, @options) do |sock|
sock.ssl_session = @ssl_session
sock.session_new_cb do |sess|
@ssl_session = sess
sock.ssl_session = sess
end
end
when "unix"
path = Array(addrs).first
path = String(path) if path
UNIX.new(peer, path, @options)
else
raise Error, "unsupported transport (#{@type})"
end
transport_type.new(@origin, addrs, @options)
end
def on_error(error)
if error.instance_of?(TimeoutError)
def on_error(error, request = nil)
if error.is_a?(OperationTimeoutError)
if @total_timeout && @connected_at &&
Utils.elapsed_time(@connected_at) > @total_timeout
ex = TotalTimeoutError.new(@total_timeout, "Timed out after #{@total_timeout} seconds")
ex.set_backtrace(error.backtrace)
error = ex
else
# inactive connections do not contribute to the select loop, therefore
# they should not fail due to such errors.
return if @state == :inactive
# inactive connections do not contribute to the select loop, therefore
# they should not fail due to such errors.
return if @state == :inactive
if @timeout
@timeout -= error.timeout
return unless @timeout <= 0
end
error = error.to_connection_error if connecting?
if @timeout
@timeout -= error.timeout
return unless @timeout <= 0
end
error = error.to_connection_error if connecting?
end
handle_error(error)
handle_error(error, request)
reset
end
def handle_error(error)
parser.handle_error(error) if @parser && parser.respond_to?(:handle_error)
while (request = @pending.shift)
response = ErrorResponse.new(request, error, request.options)
request.response = response
request.emit(:response, response)
def handle_error(error, request = nil)
parser.handle_error(error, request) if @parser && parser.respond_to?(:handle_error)
while (req = @pending.shift)
next if request && req == request
response = ErrorResponse.new(req, error)
req.response = response
req.emit(:response, response)
end
return unless request
@inflight -= 1
response = ErrorResponse.new(request, error)
request.response = response
request.emit(:response, response)
end
def set_request_timeouts(request)
write_timeout = request.write_timeout
request.once(:headers) do
@timers.after(write_timeout) { write_timeout_callback(request, write_timeout) }
end unless write_timeout.nil? || write_timeout.infinite?
set_request_write_timeout(request)
set_request_read_timeout(request)
set_request_request_timeout(request)
end
def set_request_read_timeout(request)
read_timeout = request.read_timeout
request.once(:done) do
@timers.after(read_timeout) { read_timeout_callback(request, read_timeout) }
end unless read_timeout.nil? || read_timeout.infinite?
return if read_timeout.nil? || read_timeout.infinite?
set_request_timeout(:read_timeout, request, read_timeout, :done, :response) do
read_timeout_callback(request, read_timeout)
end
end
def set_request_write_timeout(request)
write_timeout = request.write_timeout
return if write_timeout.nil? || write_timeout.infinite?
set_request_timeout(:write_timeout, request, write_timeout, :headers, %i[done response]) do
write_timeout_callback(request, write_timeout)
end
end
def set_request_request_timeout(request)
request_timeout = request.request_timeout
request.once(:headers) do
@timers.after(request_timeout) { read_timeout_callback(request, request_timeout, RequestTimeoutError) }
end unless request_timeout.nil? || request_timeout.infinite?
return if request_timeout.nil? || request_timeout.infinite?
set_request_timeout(:request_timeout, request, request_timeout, :headers, :complete) do
read_timeout_callback(request, request_timeout, RequestTimeoutError)
end
end
def write_timeout_callback(request, write_timeout)
@ -660,7 +917,8 @@ module HTTPX
@write_buffer.clear
error = WriteTimeoutError.new(request, nil, write_timeout)
on_error(error)
on_error(error, request)
end
def read_timeout_callback(request, read_timeout, error_type = ReadTimeoutError)
@ -670,18 +928,32 @@ module HTTPX
@write_buffer.clear
error = error_type.new(request, request.response, read_timeout)
on_error(error)
on_error(error, request)
end
class << self
def parser_type(protocol)
case protocol
when "h2" then HTTP2
when "http/1.1" then HTTP1
else
raise Error, "unsupported protocol (##{protocol})"
def set_request_timeout(label, request, timeout, start_event, finish_events, &callback)
request.set_timeout_callback(start_event) do
timer = @current_selector.after(timeout, callback)
request.active_timeouts << label
Array(finish_events).each do |event|
# clean up request timeouts if the connection errors out
request.set_timeout_callback(event) do
timer.cancel
request.active_timeouts.delete(label)
end
end
end
end
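`set_request_timeout` arms a timer on a start event and disarms it on any finish event, tracking the label in `active_timeouts`. A self-contained sketch of that pattern, with a toy `SketchTimer` standing in for the selector's real timer object:

```ruby
# Toy timer: records cancellation instead of scheduling anything.
class SketchTimer
  def initialize
    @cancelled = false
  end

  def cancel
    @cancelled = true
  end

  def cancelled?
    @cancelled
  end
end

active_timeouts = []
# start event (e.g. :headers) fires: arm the timer, record the label
timer = SketchTimer.new
active_timeouts << :read_timeout
# finish event (e.g. :response) fires: disarm and clean up bookkeeping
timer.cancel
active_timeouts.delete(:read_timeout)
```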
def parser_type(protocol)
case protocol
when "h2" then HTTP2
when "http/1.1" then HTTP1
else
raise Error, "unsupported protocol (##{protocol})"
end
end
end
end


@ -7,15 +7,17 @@ module HTTPX
include Callbacks
include Loggable
MAX_REQUESTS = 100
MAX_REQUESTS = 200
CRLF = "\r\n"
attr_reader :pending, :requests
attr_accessor :max_concurrent_requests
def initialize(buffer, options)
@options = Options.new(options)
@options = options
@max_concurrent_requests = @options.max_concurrent_requests || MAX_REQUESTS
@max_requests = @options.max_requests || MAX_REQUESTS
@max_requests = @options.max_requests
@parser = Parser::HTTP1.new(self)
@buffer = buffer
@version = [1, 1]
@ -47,6 +49,7 @@ module HTTPX
@max_requests = @options.max_requests || MAX_REQUESTS
@parser.reset!
@handshake_completed = false
@pending.concat(@requests) unless @requests.empty?
end
def close
@ -90,7 +93,7 @@ module HTTPX
concurrent_requests_limit = [@max_concurrent_requests, requests_limit].min
@requests.each_with_index do |request, idx|
break if idx >= concurrent_requests_limit
next if request.state == :done
next unless request.can_buffer?
handle(request)
end
@ -116,7 +119,7 @@ module HTTPX
@parser.http_version.join("."),
headers)
log(color: :yellow) { "-> HEADLINE: #{response.status} HTTP/#{@parser.http_version.join(".")}" }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{v}" }.join("\n") }
log(color: :yellow) { response.headers.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v)}" }.join("\n") }
@request.response = response
on_complete if response.finished?
@ -128,38 +131,46 @@ module HTTPX
response = @request.response
log(level: 2) { "trailer headers received" }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{v.join(", ")}" }.join("\n") }
log(color: :yellow) { h.each.map { |f, v| "-> HEADER: #{f}: #{log_redact(v.join(", "))}" }.join("\n") }
response.merge_headers(h)
end
def on_data(chunk)
return unless @request
request = @request
return unless request
log(color: :green) { "-> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "-> #{chunk.inspect}" }
response = @request.response
log(level: 2, color: :green) { "-> #{log_redact(chunk.inspect)}" }
response = request.response
response << chunk
rescue StandardError => e
error_response = ErrorResponse.new(request, e)
request.response = error_response
dispatch
end
def on_complete
return unless @request
request = @request
return unless request
log(level: 2) { "parsing complete" }
dispatch
end
def dispatch
if @request.expects?
request = @request
if request.expects?
@parser.reset!
return handle(@request)
return handle(request)
end
request = @request
@request = nil
@requests.shift
response = request.response
response.finish!
emit(:response, request, response)
if @parser.upgrade?
@ -169,12 +180,23 @@ module HTTPX
@parser.reset!
@max_requests -= 1
manage_connection(response)
if response.is_a?(ErrorResponse)
disable
else
manage_connection(request, response)
end
send(@pending.shift) unless @pending.empty?
if exhausted?
@pending.concat(@requests)
@requests.clear
emit(:exhausted)
else
send(@pending.shift) unless @pending.empty?
end
end
def handle_error(ex)
def handle_error(ex, request = nil)
if (ex.is_a?(EOFError) || ex.is_a?(TimeoutError)) && @request && @request.response &&
!@request.response.headers.key?("content-length") &&
!@request.response.headers.key?("transfer-encoding")
@ -188,23 +210,28 @@ module HTTPX
if @pipelining
catch(:called) { disable }
else
@requests.each do |request|
emit(:error, request, ex)
@requests.each do |req|
next if request && request == req
emit(:error, req, ex)
end
@pending.each do |request|
emit(:error, request, ex)
@pending.each do |req|
next if request && request == req
emit(:error, req, ex)
end
end
end
def ping
reset
emit(:reset)
emit(:exhausted)
end
private
def manage_connection(response)
def manage_connection(request, response)
connection = response.headers["connection"]
case connection
when /keep-alive/i
@ -221,7 +248,7 @@ module HTTPX
return unless keep_alive
parameters = Hash[keep_alive.split(/ *, */).map do |pair|
pair.split(/ *= */)
pair.split(/ *= */, 2)
end]
@max_requests = parameters["max"].to_i - 1 if parameters.key?("max")
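The `split(/ *= */, 2)` change above limits each pair to a single split, so an `=` inside a parameter value stays intact. The parsing can be checked standalone:

```ruby
# Parse a Keep-Alive response header into a parameter hash.
keep_alive = "timeout=5, max=100"
parameters = Hash[keep_alive.split(/ *, */).map { |pair| pair.split(/ *= */, 2) }]
# parameters => {"timeout"=>"5", "max"=>"100"}
max_requests = parameters["max"].to_i - 1 # one request was just consumed => 99
```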
@ -234,7 +261,7 @@ module HTTPX
disable
when nil
# In HTTP/1.1, it's keep alive by default
return if response.version == "1.1"
return if response.version == "1.1" && request.headers["connection"] != "close"
disable
end
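The `when nil` branch encodes the HTTP/1.1 default: absent a `Connection` header, 1.1 connections are persistent while 1.0 connections are not. A standalone sketch with a hypothetical helper (the real code also consults the request's own `connection` header, as the hunk above shows):

```ruby
# Decide persistence from the response version and Connection header.
def keep_alive?(version, connection_header)
  if connection_header
    !connection_header.downcase.include?("close")
  else
    version == "1.1" # HTTP/1.1 is keep-alive by default; HTTP/1.0 is not
  end
end

keep_alive?("1.1", nil)     # => true
keep_alive?("1.0", nil)     # => false
keep_alive?("1.1", "close") # => false
```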
@ -242,6 +269,7 @@ module HTTPX
def disable
disable_pipelining
reset
emit(:reset)
throw(:called)
end
@ -272,29 +300,31 @@ module HTTPX
request.body.chunk!
end
connection = request.headers["connection"]
extra_headers = {}
connection ||= if request.options.persistent
# when in a persistent connection, the request can't be at
# the edge of a renegotiation
if @requests.index(request) + 1 < @max_requests
"keep-alive"
unless request.headers.key?("connection")
connection_value = if request.persistent?
# when in a persistent connection, the request can't be at
# the edge of a renegotiation
if @requests.index(request) + 1 < @max_requests
"keep-alive"
else
"close"
end
else
"close"
end
else
# when it's not a persistent connection, it sets "Connection: close" always
# on the last request of the possible batch (either allowed max requests,
# or if smaller, the size of the batch itself)
requests_limit = [@max_requests, @requests.size].min
if request == @requests[requests_limit - 1]
"close"
else
"keep-alive"
# when it's not a persistent connection, it sets "Connection: close" always
# on the last request of the possible batch (either allowed max requests,
# or if smaller, the size of the batch itself)
requests_limit = [@max_requests, @requests.size].min
if request == @requests[requests_limit - 1]
"close"
else
"keep-alive"
end
end
extra_headers["connection"] = connection_value
end
extra_headers = { "connection" => connection }
extra_headers["host"] = request.authority unless request.headers.key?("host")
extra_headers
end
@ -331,7 +361,7 @@ module HTTPX
while (chunk = request.drain_body)
log(color: :green) { "<- DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "<- #{chunk.inspect}" }
log(level: 2, color: :green) { "<- #{log_redact(chunk.inspect)}" }
@buffer << chunk
throw(:buffer_full, request) if @buffer.full?
end
@ -350,18 +380,17 @@ module HTTPX
end
def join_headers2(headers)
buffer = "".b
headers.each do |field, value|
buffer << "#{capitalized(field)}: #{value}" << CRLF
log(color: :yellow) { "<- HEADER: #{buffer.chomp}" }
@buffer << buffer
buffer.clear
field = capitalized(field)
log(color: :yellow) { "<- HEADER: #{[field, log_redact(value)].join(": ")}" }
@buffer << "#{field}: #{value}#{CRLF}"
end
end
UPCASED = {
"www-authenticate" => "WWW-Authenticate",
"http2-settings" => "HTTP2-Settings",
"content-md5" => "Content-MD5",
}.freeze
def capitalized(field)


@ -1,18 +1,24 @@
# frozen_string_literal: true
require "securerandom"
require "http/2/next"
require "http/2"
module HTTPX
class Connection::HTTP2
include Callbacks
include Loggable
MAX_CONCURRENT_REQUESTS = HTTP2Next::DEFAULT_MAX_CONCURRENT_STREAMS
MAX_CONCURRENT_REQUESTS = ::HTTP2::DEFAULT_MAX_CONCURRENT_STREAMS
class Error < Error
def initialize(id, code)
super("stream #{id} closed with error: #{code}")
def initialize(id, error)
super("stream #{id} closed with error: #{error}")
end
end
class PingError < Error
def initialize
super(0, :ping_error)
end
end
@ -25,7 +31,7 @@ module HTTPX
attr_reader :streams, :pending
def initialize(buffer, options)
@options = Options.new(options)
@options = options
@settings = @options.http2_settings
@pending = []
@streams = {}
@ -35,7 +41,7 @@ module HTTPX
@handshake_completed = false
@wait_for_handshake = @settings.key?(:wait_for_handshake) ? @settings.delete(:wait_for_handshake) : true
@max_concurrent_requests = @options.max_concurrent_requests || MAX_CONCURRENT_REQUESTS
@max_requests = @options.max_requests || 0
@max_requests = @options.max_requests
init_connection
end
@ -52,10 +58,12 @@ module HTTPX
if @connection.state == :closed
return unless @handshake_completed
return if @buffer.empty?
return :w
end
unless (@connection.state == :connected && @handshake_completed)
unless @connection.state == :connected && @handshake_completed
return @buffer.empty? ? :r : :rw
end
@ -73,8 +81,11 @@ module HTTPX
end
def close
@connection.goaway unless @connection.state == :closed
emit(:close)
unless @connection.state == :closed
@connection.goaway
emit(:timeout, @options.timeout[:close_handshake_timeout])
end
emit(:close, true)
end
def empty?
@ -82,29 +93,17 @@ module HTTPX
end
def exhausted?
return false if @max_requests.zero? && @connection.active_stream_count.zero?
@connection.active_stream_count >= @max_requests
!@max_requests.positive?
end
def <<(data)
@connection << data
end
def can_buffer_more_requests?
if @handshake_completed
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
else
!@wait_for_handshake &&
@streams.size < @max_concurrent_requests
end
end
def send(request)
def send(request, head = false)
unless can_buffer_more_requests?
@pending << request
return
head ? @pending.unshift(request) : @pending << request
return false
end
unless (stream = @streams[request])
stream = @connection.new_stream
@ -114,47 +113,57 @@ module HTTPX
end
handle(request, stream)
true
rescue HTTP2Next::Error::StreamLimitExceeded
rescue ::HTTP2::Error::StreamLimitExceeded
@pending.unshift(request)
emit(:exhausted)
false
end
def consume
@streams.each do |request, stream|
next if request.state == :done
next unless request.can_buffer?
handle(request, stream)
end
end
def handle_error(ex)
if ex.instance_of?(TimeoutError) && !@handshake_completed && @connection.state != :closed
def handle_error(ex, request = nil)
if ex.is_a?(OperationTimeoutError) && !@handshake_completed && @connection.state != :closed
@connection.goaway(:settings_timeout, "closing due to settings timeout")
emit(:close_handshake)
settings_ex = SettingsTimeoutError.new(ex.timeout, ex.message)
settings_ex.set_backtrace(ex.backtrace)
ex = settings_ex
end
@streams.each_key do |request|
emit(:error, request, ex)
@streams.each_key do |req|
next if request && request == req
emit(:error, req, ex)
end
@pending.each do |request|
emit(:error, request, ex)
while (req = @pending.shift)
next if request && request == req
emit(:error, req, ex)
end
end
def ping
ping = SecureRandom.gen_random(8)
@connection.ping(ping)
@connection.ping(ping.dup)
ensure
@pings << ping
end
private
def can_buffer_more_requests?
(@handshake_completed || !@wait_for_handshake) &&
@streams.size < @max_concurrent_requests &&
@streams.size < @max_requests
end
def send_pending
while (request = @pending.shift)
break unless send(request)
break unless send(request, true)
end
end
@ -171,8 +180,7 @@ module HTTPX
end
def init_connection
@connection = HTTP2Next::Client.new(@settings)
@connection.max_streams = @max_requests if @connection.respond_to?(:max_streams=) && @max_requests.positive?
@connection = ::HTTP2::Client.new(@settings)
@connection.on(:frame, &method(:on_frame))
@connection.on(:frame_sent, &method(:on_frame_sent))
@connection.on(:frame_received, &method(:on_frame_received))
@ -218,12 +226,12 @@ module HTTPX
extra_headers = set_protocol_headers(request)
if request.headers.key?("host")
log { "forbidden \"host\" header found (#{request.headers["host"]}), will use it as authority..." }
log { "forbidden \"host\" header found (#{log_redact(request.headers["host"])}), will use it as authority..." }
extra_headers[":authority"] = request.headers["host"]
end
log(level: 1, color: :yellow) do
request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
"\n#{request.headers.merge(extra_headers).each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")}"
end
stream.headers(request.headers.each(extra_headers), end_stream: request.body.empty?)
end
@ -235,7 +243,7 @@ module HTTPX
end
log(level: 1, color: :yellow) do
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{v}" }.join("\n")
request.trailers.each.map { |k, v| "#{stream.id}: -> HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
stream.headers(request.trailers.each, end_stream: true)
end
@ -246,13 +254,13 @@ module HTTPX
chunk = @drains.delete(request) || request.drain_body
while chunk
next_chunk = request.drain_body
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{chunk.inspect}" }
stream.data(chunk, end_stream: !(next_chunk || request.trailers? || request.callbacks_for?(:trailers)))
send_chunk(request, stream, chunk, next_chunk)
if next_chunk && (@buffer.full? || request.body.unbounded_body?)
@drains[request] = next_chunk
throw(:buffer_full)
end
chunk = next_chunk
end
@ -261,6 +269,16 @@ module HTTPX
on_stream_refuse(stream, request, error)
end
def send_chunk(request, stream, chunk, next_chunk)
log(level: 1, color: :green) { "#{stream.id}: -> DATA: #{chunk.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: -> #{log_redact(chunk.inspect)}" }
stream.data(chunk, end_stream: end_stream?(request, next_chunk))
end
def end_stream?(request, next_chunk)
!(next_chunk || request.trailers? || request.callbacks_for?(:trailers))
end
######
# HTTP/2 Callbacks
######
@ -274,7 +292,7 @@ module HTTPX
end
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
_, status = h.shift
headers = request.options.headers_class.new(h)
@ -287,14 +305,14 @@ module HTTPX
def on_stream_trailers(stream, response, h)
log(color: :yellow) do
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{v}" }.join("\n")
h.map { |k, v| "#{stream.id}: <- HEADER: #{k}: #{log_redact(v)}" }.join("\n")
end
response.merge_headers(h)
end
def on_stream_data(stream, request, data)
log(level: 1, color: :green) { "#{stream.id}: <- DATA: #{data.bytesize} bytes..." }
log(level: 2, color: :green) { "#{stream.id}: <- #{data.inspect}" }
log(level: 2, color: :green) { "#{stream.id}: <- #{log_redact(data.inspect)}" }
request.response << data
end
@ -311,25 +329,33 @@ module HTTPX
@streams.delete(request)
if error
ex = Error.new(stream.id, error)
ex.set_backtrace(caller)
response = ErrorResponse.new(request, ex, request.options)
emit(:response, request, response)
case error
when :http_1_1_required
emit(:error, request, error)
else
ex = Error.new(stream.id, error)
ex.set_backtrace(caller)
response = ErrorResponse.new(request, ex)
request.response = response
emit(:response, request, response)
end
else
response = request.response
if response && response.is_a?(Response) && response.status == 421
ex = MisdirectedRequestError.new(response)
ex.set_backtrace(caller)
emit(:error, request, ex)
emit(:error, request, :http_1_1_required)
else
emit(:response, request, response)
end
end
send(@pending.shift) unless @pending.empty?
return unless @streams.empty? && exhausted?
close
emit(:exhausted) unless @pending.empty?
if @pending.empty?
close
else
emit(:exhausted)
end
end
def on_frame(bytes)
@ -339,14 +365,7 @@ module HTTPX
def on_settings(*)
@handshake_completed = true
emit(:current_timeout)
if @max_requests.zero?
@max_requests = @connection.remote_settings[:settings_max_concurrent_streams]
@connection.max_streams = @max_requests if @connection.respond_to?(:max_streams=) && @max_requests.positive?
end
@max_concurrent_requests = [@max_concurrent_requests, @max_requests].min
@max_concurrent_requests = [@max_concurrent_requests, @connection.remote_settings[:settings_max_concurrent_streams]].min
send_pending
end
@ -354,7 +373,12 @@ module HTTPX
is_connection_closed = @connection.state == :closed
if error
@buffer.clear if is_connection_closed
if error == :no_error
case error
when :http_1_1_required
while (request = @pending.shift)
emit(:error, request, error)
end
when :no_error
ex = GoawayError.new
@pending.unshift(*@streams.keys)
@drains.clear
@ -362,8 +386,11 @@ module HTTPX
else
ex = Error.new(0, error)
end
ex.set_backtrace(caller)
handle_error(ex)
if ex
ex.set_backtrace(caller)
handle_error(ex)
end
end
return unless is_connection_closed && @streams.empty?
@ -373,8 +400,15 @@ module HTTPX
def on_frame_sent(frame)
log(level: 2) { "#{frame[:stream]}: frame was sent!" }
log(level: 2, color: :blue) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
@ -382,15 +416,22 @@ module HTTPX
def on_frame_received(frame)
log(level: 2) { "#{frame[:stream]}: frame was received!" }
log(level: 2, color: :magenta) do
payload = frame
payload = payload.merge(payload: frame[:payload].bytesize) if frame[:type] == :data
payload =
case frame[:type]
when :data
frame.merge(payload: frame[:payload].bytesize)
when :headers, :ping
frame.merge(payload: log_redact(frame[:payload]))
else
frame
end
"#{frame[:stream]}: #{payload}"
end
end
def on_altsvc(origin, frame)
log(level: 2) { "#{frame[:stream]}: altsvc frame was received" }
log(level: 2) { "#{frame[:stream]}: #{frame.inspect}" }
log(level: 2) { "#{frame[:stream]}: #{log_redact(frame.inspect)}" }
alt_origin = URI.parse("#{frame[:proto]}://#{frame[:host]}:#{frame[:port]}")
params = { "ma" => frame[:max_age] }
emit(:altsvc, origin, alt_origin, origin, params)
@ -405,11 +446,9 @@ module HTTPX
end
def on_pong(ping)
if @pings.delete(ping.to_s)
emit(:pong)
else
close(:protocol_error, "ping payload did not match")
end
raise PingError unless @pings.delete(ping.to_s)
emit(:pong)
end
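The pong handler above only accepts payloads previously recorded by `ping`; anything else raises the new `PingError`. The bookkeeping can be sketched on its own:

```ruby
require "securerandom"

# Outstanding PING payloads are remembered at send time; a PONG must echo
# one of them back, otherwise it is treated as a protocol error.
pings = []
payload = SecureRandom.gen_random(8)
pings << payload

pong = payload.dup # a well-behaved peer echoes the 8-byte payload verbatim
matched = !pings.delete(pong.to_s).nil? # => true
```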
end
end


@ -51,8 +51,6 @@ module HTTPX
# non-canonical domain.
attr_reader :domain
DOT = "." # :nodoc:
class << self
def new(domain)
return domain if domain.is_a?(self)
@ -63,8 +61,12 @@ module HTTPX
# Normalizes a _domain_ using the Punycode algorithm as necessary.
# The result will be a downcased, ASCII-only string.
def normalize(domain)
domain = domain.ascii_only? ? domain : domain.chomp(DOT).unicode_normalize(:nfc)
Punycode.encode_hostname(domain).downcase
unless domain.ascii_only?
domain = domain.chomp(".").unicode_normalize(:nfc)
domain = Punycode.encode_hostname(domain)
end
domain.downcase
end
end
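The rewritten `normalize` only runs NFC normalization and Punycode encoding for non-ASCII input; ASCII hostnames are merely downcased. The stdlib part of that path can be exercised directly (the Punycode step itself is the library's own encoder, elided here):

```ruby
# ASCII hostnames skip normalization entirely and are only downcased.
ascii = "Example.COM"
ascii = ascii.chomp(".").unicode_normalize(:nfc) unless ascii.ascii_only?
ascii.downcase # => "example.com"

# Non-ASCII hostnames are NFC-normalized first; the real implementation
# would Punycode-encode the result before downcasing.
non_ascii = "Exämple.de."
non_ascii = non_ascii.chomp(".").unicode_normalize(:nfc) unless non_ascii.ascii_only?
non_ascii.downcase # => "exämple.de"
```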
@ -73,7 +75,7 @@ module HTTPX
def initialize(hostname)
hostname = String(hostname)
raise ArgumentError, "domain name must not start with a dot: #{hostname}" if hostname.start_with?(DOT)
raise ArgumentError, "domain name must not start with a dot: #{hostname}" if hostname.start_with?(".")
begin
@ipaddr = IPAddr.new(hostname)
@ -84,7 +86,7 @@ module HTTPX
end
@hostname = DomainName.normalize(hostname)
tld = if (last_dot = @hostname.rindex(DOT))
tld = if (last_dot = @hostname.rindex("."))
@hostname[(last_dot + 1)..-1]
else
@hostname
@ -94,7 +96,7 @@ module HTTPX
@domain = if last_dot
# fallback - accept cookies down to second level
# cf. http://www.dkim-reputation.org/regdom-libs/
if (penultimate_dot = @hostname.rindex(DOT, last_dot - 1))
if (penultimate_dot = @hostname.rindex(".", last_dot - 1))
@hostname[(penultimate_dot + 1)..-1]
else
@hostname
@ -126,17 +128,12 @@ module HTTPX
@domain && self <= domain && domain <= @domain
end
# def ==(other)
# other = DomainName.new(other)
# other.hostname == @hostname
# end
def <=>(other)
other = DomainName.new(other)
othername = other.hostname
if othername == @hostname
0
elsif @hostname.end_with?(othername) && @hostname[-othername.size - 1, 1] == DOT
elsif @hostname.end_with?(othername) && @hostname[-othername.size - 1, 1] == "."
# The other is higher
-1
else


@ -1,20 +1,27 @@
# frozen_string_literal: true
module HTTPX
# the default exception class for exceptions raised by HTTPX.
class Error < StandardError; end
class UnsupportedSchemeError < Error; end
class ConnectionError < Error; end
# Error raised when there was a timeout. Its subclasses allow for finer-grained
# control of which timeout happened.
class TimeoutError < Error
# The timeout value which caused this error to be raised.
attr_reader :timeout
# initializes the timeout exception with the +timeout+ causing the error, and the
# error +message+ for it.
def initialize(timeout, message)
@timeout = timeout
super(message)
end
# clones this error into a HTTPX::ConnectionTimeoutError.
def to_connection_error
ex = ConnectTimeoutError.new(@timeout, message)
ex.set_backtrace(backtrace)
@ -22,13 +29,22 @@ module HTTPX
end
end
class TotalTimeoutError < TimeoutError; end
# Raised when a connection can't be acquired from the pool.
class PoolTimeoutError < TimeoutError; end
# Error raised when there was a timeout establishing the connection to a server.
# This may be raised due to timeouts during TCP and TLS (when applicable) connection
# establishment.
class ConnectTimeoutError < TimeoutError; end
# Error raised when there was a timeout while sending a request, or receiving a response
# from the server.
class RequestTimeoutError < TimeoutError
# The HTTPX::Request request object this exception refers to.
attr_reader :request
# initializes the exception with the +request+ and +response+ it refers to, and
# the +timeout+ causing the error.
def initialize(request, response, timeout)
@request = request
@response = response
@ -40,19 +56,31 @@ module HTTPX
end
end
# Error raised when there was a timeout while receiving a response from the server.
class ReadTimeoutError < RequestTimeoutError; end
# Error raised when there was a timeout while sending a request to the server.
class WriteTimeoutError < RequestTimeoutError; end
# Error raised when there was a timeout while waiting for the HTTP/2 settings frame from the server.
class SettingsTimeoutError < TimeoutError; end
# Error raised when there was a timeout while resolving a domain to an IP.
class ResolveTimeoutError < TimeoutError; end
# Error raised when there was a timeout while waiting for readiness of the socket the request is related to.
class OperationTimeoutError < TimeoutError; end
# Error raised when there was an error while resolving a domain to an IP.
class ResolveError < Error; end
# Error raised when there was an error while resolving a domain to an IP
# using an HTTPX::Resolver::Native resolver.
class NativeResolveError < ResolveError
attr_reader :connection, :host
# initializes the exception with the +connection+ it refers to, the +host+ domain
# which failed to resolve, and the error +message+.
def initialize(connection, host, message = "Can't resolve #{host}")
@connection = connection
@host = host
@@ -60,18 +88,22 @@ module HTTPX
end
end
# The exception class for HTTP responses with 4xx or 5xx status.
class HTTPError < Error
# The HTTPX::Response response object this exception refers to.
attr_reader :response
# Creates the instance and assigns the HTTPX::Response +response+.
def initialize(response)
@response = response
super("HTTP Error: #{@response.status} #{@response.headers}\n#{@response.body}")
end
# The HTTP response status.
#
# error.status #=> 404
def status
@response.status
end
end
class MisdirectedRequestError < HTTPError; end
end
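The timeout hierarchy above converts a generic timeout into a connection-specific one while preserving the backtrace. A minimal standalone sketch of that conversion pattern (the class names mirror the diff, but this snippet does not depend on the httpx gem):

```ruby
# Minimal sketch of the timeout-conversion pattern above.
class TimeoutError < StandardError
  attr_reader :timeout

  def initialize(timeout, message)
    @timeout = timeout
    super(message)
  end

  # clones this error into a ConnectTimeoutError, keeping the backtrace
  def to_connection_error
    ex = ConnectTimeoutError.new(@timeout, message)
    ex.set_backtrace(backtrace)
    ex
  end
end

class ConnectTimeoutError < TimeoutError; end

begin
  raise TimeoutError.new(5, "timed out after 5s")
rescue TimeoutError => e
  conn_err = e.to_connection_error
  puts conn_err.class    # ConnectTimeoutError
  puts conn_err.timeout  # 5
end
```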


@@ -3,96 +3,6 @@
require "uri"
module HTTPX
unless Method.method_defined?(:curry)
# Backport
#
# Ruby 2.1 and lower implement curry only for Procs.
#
# Why not using Refinements? Because they don't work for Method (tested with ruby 2.1.9).
#
module CurryMethods
# Backport for the Method#curry method, which is part of ruby core since 2.2.
#
def curry(*args)
to_proc.curry(*args)
end
end
Method.__send__(:include, CurryMethods)
end
unless String.method_defined?(:+@)
# Backport for +"", to initialize unfrozen strings from the string literal.
#
module LiteralStringExtensions
def +@
frozen? ? dup : self
end
end
String.__send__(:include, LiteralStringExtensions)
end
unless Numeric.method_defined?(:positive?)
# Ruby 2.3 Backport (Numeric#positive?)
#
module PosMethods
def positive?
self > 0
end
end
Numeric.__send__(:include, PosMethods)
end
unless Numeric.method_defined?(:negative?)
# Ruby 2.3 Backport (Numeric#negative?)
#
module NegMethods
def negative?
self < 0
end
end
Numeric.__send__(:include, NegMethods)
end
module NumericExtensions
# Ruby 2.4 backport
refine Numeric do
def infinite?
self == Float::INFINITY
end unless Numeric.method_defined?(:infinite?)
end
end
module StringExtensions
refine String do
# Ruby 2.5 backport
def delete_suffix!(suffix)
suffix = Backports.coerce_to_str(suffix)
chomp! if frozen?
len = suffix.length
if len > 0 && index(suffix, -len)
self[-len..-1] = ''
self
else
nil
end
end unless String.method_defined?(:delete_suffix!)
end
end
module HashExtensions
refine Hash do
# Ruby 2.4 backport
def compact
h = {}
each do |key, value|
h[key] = value unless value == nil
end
h
end unless Hash.method_defined?(:compact)
end
end
module ArrayExtensions
module FilterMap
refine Array do
@@ -108,16 +18,6 @@ module HTTPX
end unless Array.method_defined?(:filter_map)
end
module Sum
refine Array do
# Ruby 2.6 backport
def sum(accumulator = 0, &block)
values = block_given? ? map(&block) : self
values.inject(accumulator, :+)
end
end unless Array.method_defined?(:sum)
end
module Intersect
refine Array do
# Ruby 3.1 backport
@@ -133,30 +33,6 @@
end
end
module IOExtensions
refine IO do
# Ruby 2.3 backport
# provides a fallback for rubies where IO#wait isn't implemented,
# but IO#wait_readable and IO#wait_writable are.
def wait(timeout = nil, _mode = :read_write)
r, w = IO.select([self], [self], nil, timeout)
return unless r || w
self
end unless IO.method_defined?(:wait) && IO.instance_method(:wait).arity == 2
end
end
module RegexpExtensions
refine(Regexp) do
# Ruby 2.4 backport
def match?(*args)
!match(*args).nil?
end
end
end
module URIExtensions
# uri 0.11 backport, ships with ruby 3.1
refine URI::Generic do
@@ -178,21 +54,6 @@ module HTTPX
def origin
"#{scheme}://#{authority}"
end unless URI::HTTP.method_defined?(:origin)
def altsvc_match?(uri)
uri = URI.parse(uri)
origin == uri.origin || begin
case scheme
when "h2"
(uri.scheme == "https" || uri.scheme == "h2") &&
host == uri.host &&
(port || default_port) == (uri.port || uri.default_port)
else
false
end
end
end
end
end
end
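The backports that remain in this file use refinements, so the patched method is defined only when the running Ruby lacks it and is visible only in files that opt in with `using`. A small illustration of that guard-plus-refinement pattern, using only core Ruby:

```ruby
# Refinement-based backport pattern: define the method only when missing,
# and make it visible only where `using` activates the refinement.
module ArrayIntersectBackport
  refine Array do
    # Ruby 3.1 backport
    def intersect?(other)
      !(self & other).empty?
    end unless Array.method_defined?(:intersect?)
  end
end

using ArrayIntersectBackport

puts [1, 2, 3].intersect?([3, 4])  # true
puts [1, 2].intersect?([3, 4])     # false
```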


@@ -11,20 +11,32 @@ module HTTPX
end
def initialize(headers = nil)
if headers.nil? || headers.empty?
@headers = headers.to_h
return
end
@headers = {}
return unless headers
headers.each do |field, value|
array_value(value).each do |v|
add(downcased(field), v)
field = downcased(field)
value = array_value(value)
current = @headers[field]
if current.nil?
@headers[field] = value
else
current.concat(value)
end
end
end
# cloned initialization
def initialize_clone(orig)
def initialize_clone(orig, **kwargs)
super
@headers = orig.instance_variable_get(:@headers).clone
@headers = orig.instance_variable_get(:@headers).clone(**kwargs)
end
# dupped initialization
@@ -39,17 +51,6 @@ module HTTPX
super
end
def same_headers?(headers)
@headers.empty? || begin
headers.each do |k, v|
next unless key?(k)
return false unless v == self[k]
end
true
end
end
# merges headers with another header-quack.
# the merge rule is, if the header already exists,
# ignore what the +other+ headers has. Otherwise, set
@@ -119,6 +120,10 @@ module HTTPX
other == to_hash
end
def empty?
@headers.empty?
end
# the headers store in Hash format
def to_hash
Hash[to_a]
@@ -137,7 +142,8 @@ module HTTPX
# :nocov:
def inspect
to_hash.inspect
"#<#{self.class}:#{object_id} " \
"#{to_hash.inspect}>"
end
# :nocov:
@@ -160,12 +166,7 @@ module HTTPX
private
def array_value(value)
case value
when Array
value.map { |val| String(val).strip }
else
[String(value).strip]
end
Array(value)
end
def downcased(field)


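The rewritten `Headers#initialize` above folds repeated header fields into a single array per downcased field name, with `Array(value)` normalizing scalars and `nil`. A standalone sketch of that folding logic (`fold_headers` is a hypothetical helper for illustration, not part of the gem):

```ruby
# Hypothetical helper mirroring the folding logic in Headers#initialize:
# downcase the field, coerce the value to an array, concat into the store.
def fold_headers(pairs)
  folded = {}
  pairs.each do |field, value|
    field = field.to_s.downcase
    value = Array(value)  # Array(nil) => [], Array("gzip") => ["gzip"]
    (folded[field] ||= []).concat(value)
  end
  folded
end

p fold_headers([["Accept", "text/html"], ["ACCEPT", "application/json"]])
# {"accept"=>["text/html", "application/json"]}
```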
@@ -4,4 +4,8 @@ require "socket"
require "httpx/io/udp"
require "httpx/io/tcp"
require "httpx/io/unix"
require "httpx/io/ssl"
begin
require "httpx/io/ssl"
rescue LoadError
end
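Wrapping the require in `begin/rescue LoadError` makes the SSL transport optional on Ruby builds compiled without the openssl extension. The same guard pattern in isolation (using `"openssl"` as the stand-in, since `"httpx/io/ssl"` needs the gem installed):

```ruby
# Guarded require: degrade gracefully when an optional extension is missing.
ssl_available =
  begin
    require "openssl"
    true
  rescue LoadError
    false
  end

puts ssl_available ? "TLS support loaded" : "running without TLS support"
```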


@@ -4,26 +4,49 @@ require "openssl"
module HTTPX
TLSError = OpenSSL::SSL::SSLError
IPRegex = Regexp.union(Resolv::IPv4::Regex, Resolv::IPv6::Regex)
class SSL < TCP
using RegexpExtensions unless Regexp.method_defined?(:match?)
# rubocop:disable Style/MutableConstant
TLS_OPTIONS = { alpn_protocols: %w[h2 http/1.1].freeze }
# https://github.com/jruby/jruby-openssl/issues/284
# TODO: remove when dropping support for jruby-openssl < 0.15.4
TLS_OPTIONS[:verify_hostname] = true if RUBY_ENGINE == "jruby" && JOpenSSL::VERSION < "0.15.4"
# rubocop:enable Style/MutableConstant
TLS_OPTIONS.freeze
TLS_OPTIONS = if OpenSSL::SSL::SSLContext.instance_methods.include?(:alpn_protocols)
{ alpn_protocols: %w[h2 http/1.1].freeze }.freeze
else
{}.freeze
end
attr_writer :ssl_session
def initialize(_, _, options)
super
@ctx = OpenSSL::SSL::SSLContext.new
ctx_options = TLS_OPTIONS.merge(options.ssl)
@sni_hostname = ctx_options.delete(:hostname) || @hostname
@ctx.set_params(ctx_options) unless ctx_options.empty?
@state = :negotiated if @keep_open
@hostname_is_ip = IPRegex.match?(@sni_hostname)
if @keep_open && @io.is_a?(OpenSSL::SSL::SSLSocket)
# externally initiated ssl socket
@ctx = @io.context
@state = :negotiated
else
@ctx = OpenSSL::SSL::SSLContext.new
@ctx.set_params(ctx_options) unless ctx_options.empty?
unless @ctx.session_cache_mode.nil? # a dummy method on JRuby
@ctx.session_cache_mode =
OpenSSL::SSL::SSLContext::SESSION_CACHE_CLIENT | OpenSSL::SSL::SSLContext::SESSION_CACHE_NO_INTERNAL_STORE
end
yield(self) if block_given?
end
@verify_hostname = @ctx.verify_hostname
end
if OpenSSL::SSL::SSLContext.method_defined?(:session_new_cb=)
def session_new_cb(&pr)
@ctx.session_new_cb = proc { |_, sess| pr.call(sess) }
end
else
# session_new_cb not implemented under JRuby
def session_new_cb; end
end
def protocol
@@ -32,6 +55,20 @@ module HTTPX
super
end
if RUBY_ENGINE == "jruby"
# in jruby, alpn_protocol may return ""
# https://github.com/jruby/jruby-openssl/issues/287
def protocol
proto = @io.alpn_protocol
return super if proto.nil? || proto.empty?
proto
rescue StandardError
super
end
end
def can_verify_peer?
@ctx.verify_mode == OpenSSL::SSL::VERIFY_PEER
end
@@ -43,85 +80,57 @@ module HTTPX
OpenSSL::SSL.verify_certificate_identity(@io.peer_cert, host)
end
def close
super
# allow reconnections
# connect only works if initial @io is a socket
@io = @io.io if @io.respond_to?(:io)
end
def connected?
@state == :negotiated
end
def expired?
super || ssl_session_expired?
end
def ssl_session_expired?
@ssl_session.nil? || Process.clock_gettime(Process::CLOCK_REALTIME) >= (@ssl_session.time.to_f + @ssl_session.timeout)
end
def connect
super
return if @state == :negotiated ||
@state != :connected
return if @state == :negotiated
unless @state == :connected
super
return unless @state == :connected
end
unless @io.is_a?(OpenSSL::SSL::SSLSocket)
if (hostname_is_ip = (@ip == @sni_hostname))
# IPv6 address would be "[::1]", must turn to "0000:0000:0000:0000:0000:0000:0000:0001" for cert SAN check
@sni_hostname = @ip.to_string
# IP addresses in SNI is not valid per RFC 6066, section 3.
@ctx.verify_hostname = false
end
@io = OpenSSL::SSL::SSLSocket.new(@io, @ctx)
@io.hostname = @sni_hostname unless @hostname_is_ip
@io.hostname = @sni_hostname unless hostname_is_ip
@io.session = @ssl_session unless ssl_session_expired?
@io.sync_close = true
end
try_ssl_connect
end
if RUBY_VERSION < "2.3"
# :nocov:
def try_ssl_connect
@io.connect_nonblock
@io.post_connection_check(@sni_hostname) if @ctx.verify_mode != OpenSSL::SSL::VERIFY_NONE && !@hostname_is_ip
transition(:negotiated)
@interests = :w
rescue ::IO::WaitReadable
def try_ssl_connect
ret = @io.connect_nonblock(exception: false)
log(level: 3, color: :cyan) { "TLS CONNECT: #{ret}..." }
case ret
when :wait_readable
@interests = :r
rescue ::IO::WaitWritable
return
when :wait_writable
@interests = :w
return
end
def read(_, buffer)
super
rescue ::IO::WaitWritable
buffer.clear
0
end
def write(*)
super
rescue ::IO::WaitReadable
0
end
# :nocov:
else
def try_ssl_connect
case @io.connect_nonblock(exception: false)
when :wait_readable
@interests = :r
return
when :wait_writable
@interests = :w
return
end
@io.post_connection_check(@sni_hostname) if @ctx.verify_mode != OpenSSL::SSL::VERIFY_NONE && !@hostname_is_ip
transition(:negotiated)
@interests = :w
end
# :nocov:
if OpenSSL::VERSION < "2.0.6"
def read(size, buffer)
@io.read_nonblock(size, buffer)
buffer.bytesize
rescue ::IO::WaitReadable,
::IO::WaitWritable
buffer.clear
0
rescue EOFError
nil
end
end
# :nocov:
@io.post_connection_check(@sni_hostname) if @ctx.verify_mode != OpenSSL::SSL::VERIFY_NONE && @verify_hostname
transition(:negotiated)
@interests = :w
end
private
@@ -130,6 +139,7 @@ module HTTPX
case nextstate
when :negotiated
return unless @state == :connected
when :closed
return unless @state == :negotiated ||
@state == :connected


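The `try_ssl_connect` rewrite above drops the Ruby 2.2-era `rescue IO::WaitReadable/IO::WaitWritable` branches in favor of `connect_nonblock(exception: false)`, which returns `:wait_readable`/`:wait_writable` symbols that can be case-matched to set I/O interests. The same return-value style, illustrated on a plain pipe since a TLS handshake needs a live peer:

```ruby
# `exception: false` non-blocking style: no exceptions for would-block,
# just symbols to case-match (here with read_nonblock on an empty pipe).
r, w = IO.pipe
interests = nil

case (ret = r.read_nonblock(16, exception: false))
when :wait_readable
  interests = :r     # nothing buffered yet: register read interest
when nil
  interests = nil    # EOF
else
  interests = :done  # ret holds the bytes that were read
end

puts interests.inspect  # :r (the pipe is empty, so the read would block)
w.close
r.close
```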
@@ -17,7 +17,7 @@ module HTTPX
@state = :idle
@addresses = []
@hostname = origin.host
@options = Options.new(options)
@options = options
@fallback_protocol = @options.fallback_protocol
@port = origin.port
@interests = :w
@@ -38,7 +38,10 @@ module HTTPX
add_addresses(addresses)
end
@ip_index = @addresses.size - 1
# @io ||= build_socket
end
def socket
@io
end
def add_addresses(addrs)
@@ -72,10 +75,20 @@ module HTTPX
@io = build_socket
end
try_connect
rescue Errno::EHOSTUNREACH,
Errno::ENETUNREACH => e
raise e if @ip_index <= 0
log { "failed connecting to #{@ip} (#{e.message}), evict from cache and trying next..." }
Resolver.cached_lookup_evict(@hostname, @ip)
@ip_index -= 1
@io = build_socket
retry
rescue Errno::ECONNREFUSED,
Errno::EADDRNOTAVAIL,
Errno::EHOSTUNREACH,
SocketError => e
SocketError,
IOError => e
raise e if @ip_index <= 0
log { "failed connecting to #{@ip} (#{e.message}), trying next..." }
@@ -91,84 +104,45 @@ module HTTPX
retry
end
if RUBY_VERSION < "2.3"
# :nocov:
def try_connect
@io.connect_nonblock(Socket.sockaddr_in(@port, @ip.to_s))
rescue ::IO::WaitWritable, Errno::EALREADY
@interests = :w
rescue ::IO::WaitReadable
def try_connect
ret = @io.connect_nonblock(Socket.sockaddr_in(@port, @ip.to_s), exception: false)
log(level: 3, color: :cyan) { "TCP CONNECT: #{ret}..." }
case ret
when :wait_readable
@interests = :r
rescue Errno::EISCONN
transition(:connected)
@interests = :w
else
transition(:connected)
return
when :wait_writable
@interests = :w
return
end
private :try_connect
transition(:connected)
@interests = :w
rescue Errno::EALREADY
@interests = :w
end
private :try_connect
def read(size, buffer)
@io.read_nonblock(size, buffer)
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
rescue ::IO::WaitReadable
def read(size, buffer)
ret = @io.read_nonblock(size, buffer, exception: false)
if ret == :wait_readable
buffer.clear
0
rescue EOFError
nil
return 0
end
return if ret.nil?
def write(buffer)
siz = @io.write_nonblock(buffer)
log { "WRITE: #{siz} bytes..." }
buffer.shift!(siz)
siz
rescue ::IO::WaitWritable
0
rescue EOFError
nil
end
# :nocov:
else
def try_connect
case @io.connect_nonblock(Socket.sockaddr_in(@port, @ip.to_s), exception: false)
when :wait_readable
@interests = :r
return
when :wait_writable
@interests = :w
return
end
transition(:connected)
@interests = :w
rescue Errno::EALREADY
@interests = :w
end
private :try_connect
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
end
def read(size, buffer)
ret = @io.read_nonblock(size, buffer, exception: false)
if ret == :wait_readable
buffer.clear
return 0
end
return if ret.nil?
def write(buffer)
siz = @io.write_nonblock(buffer, exception: false)
return 0 if siz == :wait_writable
return if siz.nil?
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
end
log { "WRITE: #{siz} bytes..." }
def write(buffer)
siz = @io.write_nonblock(buffer, exception: false)
return 0 if siz == :wait_writable
return if siz.nil?
log { "WRITE: #{siz} bytes..." }
buffer.shift!(siz)
siz
end
buffer.shift!(siz)
siz
end
def close
@@ -189,9 +163,25 @@ module HTTPX
@state == :idle || @state == :closed
end
def expired?
# do not mess with external sockets
return false if @options.io
return true unless @addresses
resolver_addresses = Resolver.nolookup_resolve(@hostname)
(Array(resolver_addresses) & @addresses).empty?
end
# :nocov:
def inspect
"#<#{self.class}: #{@ip}:#{@port} (state: #{@state})>"
"#<#{self.class}:#{object_id} " \
"#{@ip}:#{@port} " \
"@hostname=#{@hostname} " \
"@addresses=#{@addresses} " \
"@state=#{@state}>"
end
# :nocov:
@@ -219,12 +209,9 @@ module HTTPX
end
def log_transition_state(nextstate)
case nextstate
when :connected
"Connected to #{host} (##{@io.fileno})"
else
"#{host} #{@state} -> #{nextstate}"
end
label = host
label = "#{label}(##{@io.fileno})" if nextstate == :connected
"#{label} #{@state} -> #{nextstate}"
end
end
end
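The rewritten `TCP#try_connect` uses the same `exception: false` flow: `:wait_readable`/`:wait_writable` set the interests, and a plain return transitions to `:connected`. A sketch of that flow against a local listener, so no external network is involved (the `SO_ERROR` check at the end is a common way to confirm the handshake succeeded):

```ruby
require "socket"

# Non-blocking TCP connect with `exception: false`, against a local server.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

sock = Socket.new(:INET, :STREAM)
addr = Socket.sockaddr_in(port, "127.0.0.1")

case sock.connect_nonblock(addr, exception: false)
when :wait_writable
  sock.wait_writable(1)  # connection in progress; wait until writable
end

# SO_ERROR is 0 once the three-way handshake completed successfully
soerr = sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_ERROR).int
puts soerr.zero? ? "connected to 127.0.0.1:#{port}" : "connect failed (#{soerr})"

sock.close
server.close
```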


@@ -23,45 +23,19 @@ module HTTPX
true
end
if RUBY_VERSION < "2.3"
# :nocov:
def close
@io.close
rescue StandardError
nil
end
# :nocov:
else
def close
@io.close
end
def close
@io.close
end
# :nocov:
if (RUBY_ENGINE == "truffleruby" && RUBY_ENGINE_VERSION < "21.1.0") ||
RUBY_VERSION < "2.3"
if RUBY_ENGINE == "jruby"
# In JRuby, sendmsg_nonblock is not implemented
def write(buffer)
siz = @io.sendmsg_nonblock(buffer.to_s, 0, Socket.sockaddr_in(@port, @host.to_s))
siz = @io.send(buffer.to_s, 0, @host, @port)
log { "WRITE: #{siz} bytes..." }
buffer.shift!(siz)
siz
rescue ::IO::WaitWritable
0
rescue EOFError
nil
end
def read(size, buffer)
data, _ = @io.recvfrom_nonblock(size)
buffer.replace(data)
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
rescue ::IO::WaitReadable
0
rescue IOError
end
else
def write(buffer)
siz = @io.sendmsg_nonblock(buffer.to_s, 0, Socket.sockaddr_in(@port, @host.to_s), exception: false)
return 0 if siz == :wait_writable
@@ -72,26 +46,17 @@ module HTTPX
buffer.shift!(siz)
siz
end
def read(size, buffer)
ret = @io.recvfrom_nonblock(size, 0, buffer, exception: false)
return 0 if ret == :wait_readable
return if ret.nil?
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
rescue IOError
end
end
# In JRuby, sendmsg_nonblock is not implemented
def write(buffer)
siz = @io.send(buffer.to_s, 0, @host, @port)
log { "WRITE: #{siz} bytes..." }
buffer.shift!(siz)
siz
end if RUBY_ENGINE == "jruby"
# :nocov:
def read(size, buffer)
ret = @io.recvfrom_nonblock(size, 0, buffer, exception: false)
return 0 if ret == :wait_readable
return if ret.nil?
log { "READ: #{buffer.bytesize} bytes..." }
buffer.bytesize
rescue IOError
end
end
end
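The UDP `#read` above keeps the same `exception: false` shape, with `recvfrom_nonblock` writing into a reusable buffer. A self-contained sketch of that read path over loopback sockets:

```ruby
require "socket"

# Non-blocking UDP read with `exception: false`, over loopback.
reader = UDPSocket.new
reader.bind("127.0.0.1", 0)
port = reader.addr[1]

writer = UDPSocket.new
writer.send("ping", 0, "127.0.0.1", port)

buffer = "".b
ret = reader.recvfrom_nonblock(512, 0, buffer, exception: false)
while ret == :wait_readable  # datagram not delivered yet: wait and retry
  reader.wait_readable(1)
  ret = reader.recvfrom_nonblock(512, 0, buffer, exception: false)
end

puts buffer  # ping
writer.close
reader.close
```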

Some files were not shown because too many files have changed in this diff.