Age | Commit message | Author
|
listener wrapper (#5424)
Co-authored-by: WeidiDeng <weidi_deng@icloud.com>
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
|
|
* reverseproxy: Begin refactor to enable dynamic upstreams
Streamed here: https://www.youtube.com/watch?v=hj7yzXb11jU
* Implement SRV and A/AAAA upstream sources
Also get upstreams at every retry loop iteration instead of just once
before the loop. See #4442.
* Minor tweaks from review
* Limit size of upstreams caches
* Add doc notes deprecating LookupSRV
* Provision dynamic upstreams
Still WIP, preparing to preserve health checker functionality
* Rejigger health checks
Move active health check results into handler-specific Upstreams.
Improve documentation regarding health checks and upstreams.
* Deprecation notice
* Add Caddyfile support, use `caddy.Duration`
* Interface guards
* Implement custom resolvers, add resolvers to http transport Caddyfile
* SRV: fix Caddyfile `name` inline arg, remove proto condition
* Use pointer receiver
* Add debug logs
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
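For context, a minimal sketch of the SRV and A/AAAA sources described above, using only the standard library's resolver; the upstream type and function names here are illustrative, not the module's actual implementation. Resolving inside the retry loop, as the commit notes, lets newly published records be picked up between attempts, and net.DefaultResolver could be swapped for a custom net.Resolver to mirror the custom-resolvers feature.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"strconv"
	"time"
)

// upstream is a hypothetical stand-in for a dialable backend address.
type upstream struct{ Dial string }

// srvUpstreams resolves a DNS SRV name into upstreams. Doing this inside the
// retry loop (rather than once before it) means newly registered backends are
// visible between attempts.
func srvUpstreams(ctx context.Context, service, proto, name string) ([]upstream, error) {
	_, srvs, err := net.DefaultResolver.LookupSRV(ctx, service, proto, name)
	if err != nil {
		return nil, err
	}
	ups := make([]upstream, 0, len(srvs))
	for _, srv := range srvs {
		ups = append(ups, upstream{Dial: net.JoinHostPort(srv.Target, strconv.Itoa(int(srv.Port)))})
	}
	return ups, nil
}

// aUpstreams resolves A/AAAA records for a host and pairs them with a fixed port.
func aUpstreams(ctx context.Context, host, port string) ([]upstream, error) {
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, host)
	if err != nil {
		return nil, err
	}
	ups := make([]upstream, 0, len(ips))
	for _, ip := range ips {
		ups = append(ups, upstream{Dial: net.JoinHostPort(ip.IP.String(), port)})
	}
	return ups, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	ups, err := aUpstreams(ctx, "example.com", "443")
	fmt.Println(ups, err)
}
```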
|
|
* reverseproxy: Fix dial placeholders, SRV, active health checks
Supersedes #3776
Partially reverts or updates #3756, #3693, and #3695
* reverseproxy: add integration tests
Co-authored-by: Mohammed Al Sahaf <msaa1990@gmail.com>
|
|
* reverseproxy: construct active health-check transport from scratch (Fixes #3691)
* reverseproxy: do upstream health-check on the correct alternative port
* reverseproxy: add integration test for health-check on alternative port
* reverseproxy: put back the custom transport for health-check http client
* reverseproxy: cleanup health-check integration test
* reverseproxy: fix health-check of unix socket upstreams
* reverseproxy: skip unix socket tests on Windows
* tabs > spaces
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* make the linter (and @francislavoie) happy
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* One more lint fix
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
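As a rough illustration of the approach (not the handler's real code), an active health checker can build its own http.Client and http.Transport instead of reusing the proxy's transport, and dial an alternative port for the check; the altPort parameter and the /healthz path below are assumptions made for this sketch.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// healthClient builds a fresh HTTP client for active health checks. A dedicated
// transport keeps health-check settings independent of the transport used for
// proxied requests.
func healthClient(altPort string) *http.Client {
	dialer := &net.Dialer{Timeout: 5 * time.Second}
	transport := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			// If an alternative health-check port is configured, swap it in
			// while keeping the upstream's host.
			if altPort != "" {
				host, _, err := net.SplitHostPort(addr)
				if err != nil {
					return nil, err
				}
				addr = net.JoinHostPort(host, altPort)
			}
			return dialer.DialContext(ctx, network, addr)
		},
	}
	return &http.Client{
		Transport: transport,
		Timeout:   5 * time.Second,
		// Judge the upstream itself; do not follow redirects away from it.
		CheckRedirect: func(*http.Request, []*http.Request) error { return http.ErrUseLastResponse },
	}
}

func main() {
	client := healthClient("9180") // hypothetical alternative health-check port
	resp, err := client.Get("http://127.0.0.1:8080/healthz")
	if err == nil {
		defer resp.Body.Close()
	}
	fmt.Println(resp, err)
}
```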
|
|
Now use context cancellation to stop active health checker, which is
simpler than and just as effective as using a separate stop channel.
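A minimal sketch of this pattern, assuming a hypothetical check function and interval: the checker loops on a ticker and exits when its context is canceled, with no separate stop channel.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// activeHealthCheckLoop runs check() on every tick until ctx is canceled.
// Cancellation replaces the separate stop channel used previously.
func activeHealthCheckLoop(ctx context.Context, interval time.Duration, check func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			check()
		case <-ctx.Done():
			return // module is being cleaned up; stop checking
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go activeHealthCheckLoop(ctx, 100*time.Millisecond, func() { fmt.Println("checking upstreams...") })
	time.Sleep(350 * time.Millisecond)
	cancel() // stops the checker
	time.Sleep(50 * time.Millisecond)
}
```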
|
|
Either Dial or LookupSRV will be set, but if we rely on Dial always
being set, we could run into bugs.
Note: Health checks don't support SRV upstreams.
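To make the constraint concrete, here is an illustrative simplified upstream and helper (not the handler's actual types) showing why callers must branch on which field is set instead of assuming Dial is always populated.

```go
package main

import "fmt"

// upstream mirrors the either/or constraint described above: exactly one of
// Dial or LookupSRV is expected to be set. Simplified for illustration.
type upstream struct {
	Dial      string // static "host:port" to dial
	LookupSRV string // SRV name to resolve at dial time
}

// hostPort returns something dialable without assuming Dial is always set.
func hostPort(u upstream) (string, error) {
	if u.LookupSRV != "" {
		// SRV upstreams are resolved at dial time (and skipped by health checks).
		return "", fmt.Errorf("upstream %q must be resolved via SRV lookup", u.LookupSRV)
	}
	if u.Dial == "" {
		return "", fmt.Errorf("upstream has neither Dial nor LookupSRV set")
	}
	return u.Dial, nil
}

func main() {
	fmt.Println(hostPort(upstream{Dial: "10.0.0.1:8080"}))
	fmt.Println(hostPort(upstream{LookupSRV: "_api._tcp.example.internal"}))
}
```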
|
|
* reverse_proxy: Begin SRV lookup support (WIP)
* reverse_proxy: Finish adding support for SRV-based backends (#3179)
|
|
* Fix typo
* Fix typo, thanks to Spell Checker in VS Code
|
|
Previously, all matchers in a route would be evaluated before any
handlers were executed, and a composite route of the matching routes
would be created. This made rewrites especially tricky, since the only
way to defer later matchers' evaluation was to wrap them in a subroute,
or to invoke a "rehandle" which often caused bugs.
Instead, this new sequential design evaluates each route's matchers, then
its handlers, in lock-step: matchers, handlers, matchers, handlers...
If the first matching route consists of a rewrite, then the second route
will be evaluated against the rewritten request, rather than the original
one, and so on.
This should do away with any need for rehandling.
I've also taken this opportunity to avoid adding new values to the
request context in the handler chain, as this creates a copy of the
Request struct, which may lead to bugs, as it has in the past
(see PR #1542, PR #1481, and maybe issue #2463). We now add all the
expected context values in the top-level handler at the server, then
any new values can be added to the variable table via the VarsCtxKey
context key, or just the GetVar/SetVar functions. In particular, we are
using this facility to convey dial information in the reverse proxy.
Had to be careful in one place as the middleware compilation logic has
changed, and moved a bit. We no longer compile a middleware chain per-
request; instead, we can compile it at provision-time, and defer only the
evaluation of matchers to request-time, which should slightly improve
performance. Doing this, however, we take advantage of multiple function
closures, and we also changed the use of HandlerFunc (function pointer)
to Handler (interface)... this led to a situation that, if we aren't
careful, allows one request routed a certain way to permanently change
the "next" handler for all/most other requests! We avoid this by making
a copy of the interface value (which is a lightweight pointer copy) and
using exclusively that within our wrapped handlers. This way, the
original stack frame is preserved in a "read-only" fashion. The comments
in the code describe this phenomenon.
This may very well be a breaking change for some configurations, however
I do not expect it to impact many people. I will make it clear in the
release notes that this change has occurred.
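To make the closure hazard concrete, here is a self-contained sketch with simplified stand-ins for the handler types rather than Caddy's own: the route wrapper is compiled once at provision time, matchers run at request time, and the per-request copy of the next interface value is what keeps one request's chain from rebinding the "next" handler that later requests see.

```go
package main

import (
	"fmt"
	"net/http"
)

// Simplified stand-ins for the handler types discussed above (not Caddy's own).
type Handler interface {
	ServeHTTP(http.ResponseWriter, *http.Request) error
}

type HandlerFunc func(http.ResponseWriter, *http.Request) error

func (f HandlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request) error { return f(w, r) }

// Middleware wraps a "next" handler, producing a new handler.
type Middleware func(Handler) Handler

// wrapRoute is compiled once at provision time. The returned Middleware runs
// the route's matcher and then its handlers at request time, in lock-step with
// any routes that come after it.
func wrapRoute(matches func(*http.Request) bool, mid []Middleware) Middleware {
	return func(next Handler) Handler {
		return HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
			// Copy the interface value (a lightweight, pointer-sized copy).
			// The fold below reassigns only nextCopy; reassigning the captured
			// next instead would let one request permanently change the "next"
			// handler that every later request sees through this closure.
			nextCopy := next

			// Matchers run here, at request time, possibly against a request
			// that an earlier route has already rewritten.
			if !matches(r) {
				return nextCopy.ServeHTTP(w, r)
			}

			// Chain this route's handlers in front of the copy of next.
			for i := len(mid) - 1; i >= 0; i-- {
				nextCopy = mid[i](nextCopy)
			}
			return nextCopy.ServeHTTP(w, r)
		})
	}
}

func main() {
	logging := Middleware(func(next Handler) Handler {
		return HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
			fmt.Println("handling", r.URL.Path)
			return next.ServeHTTP(w, r)
		})
	})
	terminal := HandlerFunc(func(http.ResponseWriter, *http.Request) error { return nil })
	route := wrapRoute(func(r *http.Request) bool { return r.URL.Path == "/api" }, []Middleware{logging})
	chain := route(terminal)
	_ = chain // serve requests with chain.ServeHTTP(w, r)
}
```

The vars mechanism mentioned above is omitted from the sketch: per-request values such as the reverse proxy's dial information go into the variable table the server sets up once (via the VarsCtxKey context key or the GetVar/SetVar functions), rather than into new context values that would copy the Request again.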
|
|
The interface was only making things difficult; a concrete pointer is
probably best.
|
|
These will be used in the new automated documentation system
|
|
* fix OOM issue caught by fuzzing
* use ParsedAddress as the struct name for the result of ParseNetworkAddress
* simplify code using the ParsedAddress type
* minor cleanups
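As a simplified illustration only (this stand-in is not the real ParsedAddress, whose fields and parsing rules may differ), parsing a network address once up front gives the rest of the code a typed value to work with instead of re-parsing strings everywhere.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// parsedAddress is a simplified stand-in for the result of parsing a network
// address like "tcp/localhost:8080-8082" once, up front.
type parsedAddress struct {
	Network   string
	Host      string
	StartPort uint
	EndPort   uint
}

// parseNetworkAddress is an illustrative parser for "network/host:port[-port]".
func parseNetworkAddress(addr string) (parsedAddress, error) {
	pa := parsedAddress{Network: "tcp"}
	if i := strings.Index(addr, "/"); i >= 0 {
		pa.Network, addr = addr[:i], addr[i+1:]
	}
	host, ports, err := net.SplitHostPort(addr)
	if err != nil {
		return pa, err
	}
	pa.Host = host
	start, end, _ := strings.Cut(ports, "-")
	if end == "" {
		end = start
	}
	s, err := strconv.ParseUint(start, 10, 16)
	if err != nil {
		return pa, err
	}
	e, err := strconv.ParseUint(end, 10, 16)
	if err != nil {
		return pa, err
	}
	pa.StartPort, pa.EndPort = uint(s), uint(e)
	return pa, nil
}

func main() {
	pa, err := parseNetworkAddress("tcp/localhost:8080-8082")
	if err != nil {
		panic(err)
	}
	// Callers can range over the ports without re-parsing the original string.
	for p := pa.StartPort; p <= pa.EndPort; p++ {
		fmt.Println(pa.Network, net.JoinHostPort(pa.Host, strconv.Itoa(int(p))))
	}
}
```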
|
|
This PR enables the use of placeholders in an upstream's Dial address.
A Dial address must represent precisely one socket after replacements.
See also #998 and #1639.
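A small sketch of the rule stated above, with a made-up placeholder syntax standing in for the real replacer: after replacements, the Dial value must resolve to exactly one host:port, which can be verified with net.SplitHostPort.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// checkDial replaces placeholders in a Dial address and verifies that the
// result is precisely one socket address (one host and one port). The {name}
// syntax here is only a stand-in for the real replacer.
func checkDial(dial string, values map[string]string) (string, error) {
	pairs := make([]string, 0, len(values)*2)
	for k, v := range values {
		pairs = append(pairs, "{"+k+"}", v)
	}
	replaced := strings.NewReplacer(pairs...).Replace(dial)

	// After replacements, the address must be exactly one host:port.
	host, port, err := net.SplitHostPort(replaced)
	if err != nil {
		return "", fmt.Errorf("dial address %q is not a single host:port: %v", replaced, err)
	}
	if host == "" || port == "" {
		return "", fmt.Errorf("dial address %q is missing a host or port", replaced)
	}
	return replaced, nil
}

func main() {
	// Hypothetical placeholder resolved per request.
	fmt.Println(checkDial("{backend_host}:9000", map[string]string{"backend_host": "10.0.0.7"}))
}
```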
|
|
My goodness that was complicated
Blessed be request.Context
Sort of