Due to how read timeouts are added on request transitions, timers may enter the pool **before** a new tick happens, and are therefore accounted for when the timers are fired after the current tick. This patch resets the timer, which forces a new tick to pass before it may fire again.
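A minimal sketch of the idea, using hypothetical names (`TimerPool`, `Interval`) rather than the library's actual internals: an interval has its clock reset when it enters the pool, so it cannot be charged for time that elapsed during the tick that was already in progress.

```ruby
# Illustrative only: TimerPool/Interval are assumed names, not the real API.
class Interval
  attr_reader :deadline

  def initialize(timeout, &callback)
    @timeout = timeout
    @callback = callback
    reset
  end

  # restart the countdown from "now"; called whenever the interval (re)enters the pool
  def reset
    @deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + @timeout
  end

  def elapsed?(now)
    now >= @deadline
  end

  def fire
    @callback.call
  end
end

class TimerPool
  def initialize
    @intervals = []
  end

  # resetting on insertion forces at least one full tick before the timer may fire
  def add(interval)
    interval.reset
    @intervals << interval
    interval
  end

  # called once per tick of the event loop
  def fire_elapsed
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    elapsed, @intervals = @intervals.partition { |ival| ival.elapsed?(now) }
    elapsed.each(&:fire)
  end
end
```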
There are situations where a connection may already be closed by the time the DNS response for it is received. One such example is connection coalescing: Happy Eyeballs takes over, the first address arrives, a coalescing opportunity is detected, and then the connection and its Happy Eyeballs cousin are both closed **before** the DNS query has been resolved.
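A hedged sketch of the guard this implies (`on_dns_response` and the `Connection` struct are illustrative, not the actual resolver API): a late DNS answer for a connection that has already been torn down is simply dropped.

```ruby
# Illustrative names only; the real resolver/connection API differs.
Connection = Struct.new(:state, :addresses) do
  def closed?
    state == :closed
  end

  def connect
    self.state = :connecting
  end
end

def on_dns_response(connection, addresses)
  # the connection (and its Happy Eyeballs cousin) may have been closed while
  # the DNS query was still in flight, e.g. after coalescing onto another connection
  return if connection.closed?

  connection.addresses = addresses
  connection.connect
end

stale = Connection.new(:closed, nil)
on_dns_response(stale, ["192.0.2.1"]) # no-op: the late answer is ignored
```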
This had been working for a while, but was silently failing in HTTP/1 due to our inability to test it in CI (the HTTP/1 setup does not yet use keep-alive).
The change to read/write cancellation-driven timeouts as the default timeout strategy revealed a performance regression: because these were built on Timers, which never got unsubscribed, they were kept beyond the duration of the request they were created for, and needlessly got picked up on the next timeout tick.
This was fixed by adding a callback to timer intervals which unsubscribes them from the timer group when invoked; the callback is triggered once the timeout is no longer needed (request sent / response received), thereby removing the overhead on subsequent requests.
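A minimal sketch of that fix, under assumed names (`TimerGroup`, `Interval#on_cancel` are illustrative, not the actual implementation): the interval carries a cancellation hook that removes it from its group, and the request invokes that hook as soon as the timeout is no longer needed.

```ruby
# Assumed names only; not the library's real timer API.
class Interval
  def initialize(timeout, &on_timeout)
    @timeout = timeout
    @on_timeout = on_timeout
    @on_cancel = nil
  end

  def on_cancel(&blk)
    @on_cancel = blk
  end

  # called by the request once it has been sent / the response has arrived
  def cancel
    @on_cancel&.call(self)
  end

  def fire
    @on_timeout.call
  end
end

class TimerGroup
  def initialize
    @intervals = []
  end

  def subscribe(timeout, &on_timeout)
    interval = Interval.new(timeout, &on_timeout)
    # unsubscribe the interval from this group once it is cancelled,
    # so it is never scanned again on subsequent timeout ticks
    interval.on_cancel { |ival| @intervals.delete(ival) }
    @intervals << interval
    interval
  end

  def size
    @intervals.size
  end
end

group = TimerGroup.new
interval = group.subscribe(5) { raise "read timeout" }
# ... response received, timeout no longer needed ...
interval.cancel
p group.size # => 0, no stale timers left for the next tick
```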
An additional intervals array is also kept in the connection itself. Timeouts from timers are signalled via socket wait calls; however, these were always being treated as timeouts, even when they shouldn't have been (example: an expect timeout, where the full response payload gets sent as a result), and in some cases with the wrong exception class. By keeping the intervals from its requests around and monitoring whether there is a relevant request trigger, the connection can either handle the timeout or bail out (so that the timers can fire the correct callback).
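The following sketch, under assumed names (`Connection#handle_wait_timeout`, `Interval`, and `ReadTimeoutError` are hypothetical), shows the decision this enables: a socket wait deadline only becomes a timeout error when one of the connection's tracked intervals has actually elapsed; otherwise the connection bails out and leaves the timers to fire their own callbacks.

```ruby
# Hypothetical names; the real connection/timer types differ.
class ReadTimeoutError < StandardError; end

Interval = Struct.new(:deadline, :error_class) do
  def elapsed?(now)
    now >= deadline
  end
end

class Connection
  def initialize
    @intervals = [] # intervals registered by this connection's in-flight requests
  end

  def register_interval(interval)
    @intervals << interval
  end

  # called when the socket wait call returns without readiness (deadline passed)
  def handle_wait_timeout(now)
    elapsed = @intervals.select { |ival| ival.elapsed?(now) }

    # no relevant request trigger: bail out and let the timers fire the
    # correct callback (with the correct exception class) themselves
    return false if elapsed.empty?

    # one of this connection's requests really timed out
    raise elapsed.first.error_class, "request timed out"
  end
end

now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
conn = Connection.new
conn.register_interval(Interval.new(now + 60, ReadTimeoutError))
p conn.handle_wait_timeout(now) # => false: not this connection's timeout, bail out
```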
Since v1, performance had regressed due to the new default timeouts, which are based on timers. That's because they were not being cleaned up after requests were done with them, and were causing spurious wakeups in the select loop after the fact.
Fixed by cleaning up each timer after each relevant request event.