* feat: Multiple 'to' upstreams in reverse-proxy cmd
* Repeat --to for multiple upstreams, rather than comma-separating in a single flag
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
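For illustration only (the addresses are made up), each upstream now gets its own flag rather than a comma-separated list:
```
caddy reverse-proxy --from :8080 --to 10.0.0.1:9000 --to 10.0.0.2:9000
```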
* reverseproxy: Close hijacked conns on reload/quit
We also send a Close control message to both ends of
WebSocket connections. I have tested this many times in
my dev environment with consistent success, although
the variety of scenarios was limited.
* Oops... actually call Close() this time
* CloseMessage --> closeMessage
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* Use httpguts, duh
* Use map instead of sync.Map
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
* Break up code, use lazy reading, and pool bufio.Writer
* Close the underlying connection when an operation fails
* Allocate bufWriter and streamWriter only once
* Refactor record writing
* Rebase from master
* Handle errors
* Fix type assertion
Also reduce some duplication
* Refactor client and clientCloser for logging
Should reduce allocations
* Minor cosmetic adjustments; apply Apache license
* Appease the linter
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
Co-authored-by: flga <flga@users.noreply.github.com>
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
This allows users to, for example, get upstreams from multiple SRV
endpoints in order (such as primary and secondary clusters).
Also, gofmt went to town on the comments, sigh
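A rough JSON sketch of the idea (the SRV names below are invented; field names are as I understand the module): a `multi` dynamic upstream source queries each listed source in order, so the primary cluster's records come before the secondary's:
```
"dynamic_upstreams": {
	"source": "multi",
	"sources": [
		{ "source": "srv", "name": "_web._tcp.primary.internal" },
		{ "source": "srv", "name": "_web._tcp.secondary.internal" }
	]
}
```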
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
Use of non-cryptographic random numbers in the load balancing
is intentional.
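The randomness only decides which upstream receives a given request, as in the `random` load balancing policy, so a fast, non-cryptographic source is appropriate. A minimal example:
```
reverse_proxy 10.0.0.1:9000 10.0.0.2:9000 {
	lb_policy random
}
```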
* reverseproxy: Implement retry count, alternative to try_duration
* Add Caddyfile support for `retry_match`
* Refactor to deduplicate matcher parsing logic
* Fix lint
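A minimal Caddyfile sketch, assuming the subdirective is spelled `lb_retries` (a count of attempts rather than a time budget); `retry_match` additionally restricts which requests are considered safe to retry:
```
reverse_proxy node1:8080 node2:8080 {
	lb_retries 2
}
```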
Turns out the NTLM transport uses it. Oops.
See https://caddy.community/t/using-forward-auth-and-writing-my-own-authenticator-in-php/16410; apparently `forward_auth` didn't work when `copy_headers` wasn't used. This is because we were skipping adding a handler to the routes in the "good response handler", which caused the logic in `reverseproxy.go` to ignore the response handler since it was empty. Instead, we can always put in the `header` handler: even with an empty `Set` operation it's just a no-op, but it satisfies that condition in the proxy code.
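For context, this is roughly the kind of minimal setup (hostname and path are examples) that misbehaved before the fix, since it sets no `copy_headers`:
```
forward_auth localhost:9000 {
	uri /auth/verify
}
```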
* reverseproxy: Fix panic when TLS is not configured
* Refactor and simplify setScheme
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
* Make reverse proxy TLS server name replaceable for SNI upstreams.
* Reverted previous TLS server name replacement, and implemented thread safe version.
* Move TLS server name replacement into its own function
* Moved SNI servername replacement into httptransport.
* Solve issue when dynamic upstreams use wrong protocol upstream.
* Revert previous commit.
Old commit was: Solve issue when dynamic upstreams use wrong protocol upstream.
Id: 3c9806ccb63e66bdcac8e1ed4520c9d135cb011d
* Added SkipTLSPorts option to http transport.
* Fix typo in test config file.
* Rename config option as suggested by Matt
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
* Update code to match renamed config option.
* Fix typo in config option name.
* Fix another typo that I missed.
* Fix tests not completing due to an apparent wrong ordering of options.
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
* Make reverse proxy TLS server name replaceable for SNI upstreams.
* Reverted previous TLS server name replacement, and implemented thread safe version.
* Move TLS server name replacement into its own function
* Moved SNI servername replacement into httptransport.
* Solve issue when dynamic upstreams use wrong protocol upstream.
* Revert previous commit.
Old commit was: Solve issue when dynamic upstreams use wrong protocol upstream.
Id: 3c9806ccb63e66bdcac8e1ed4520c9d135cb011d
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
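The practical effect is that `tls_server_name` can carry a placeholder that is replaced per request. A sketch (the upstream address is illustrative):
```
reverse_proxy https://10.0.0.5:8443 {
	transport http {
		tls_server_name {http.request.host}
	}
}
```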
* Add renegotiation option in reverseproxy tls client
* Update modules/caddyhttp/reverseproxy/httptransport.go
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
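A sketch of enabling it from the Caddyfile, assuming the option is spelled `tls_renegotiation` and accepts the same levels as the JSON `renegotiation` field (`never`, `once`, `freely`):
```
reverse_proxy https://legacy.example.com {
	transport http {
		tls_renegotiation once
	}
}
```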
* reverseproxy: Correct the `tls_server_name` docs
* Update modules/caddyhttp/reverseproxy/httptransport.go
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
Closes #4823
In v2.5.0, upstream health was fixed such that whether an upstream is
considered healthy or not is mostly up to each individual handler's
config. Since "healthy" is an opinion, it is not a global value.
I unintentionally left in the "healthy" field in the API endpoint for
checking upstreams, and it is now misleading (see #4792).
However, num_requests and fails remain, so health can be determined by
the API client, rather than having it be opaquely (and unhelpfully)
determined for the client.
If we do restore this value later on, it'd need to be replicated once
per reverse_proxy handler according to their individual configs.
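For reference, a response from `GET /reverse_proxy/upstreams` on the admin API now looks roughly like this (addresses and counts are examples), leaving the health judgment to the client:
```
[
	{"address": "10.0.1.1:80", "num_requests": 4, "fails": 2},
	{"address": "10.0.1.2:80", "num_requests": 5, "fails": 0}
]
```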
Added flag --internal-certs.
When set, the internal CA will be used for cert generation even for non-local domains.
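For example (the domain is illustrative):
```
caddy reverse-proxy --from example.com --to localhost:9000 --internal-certs
```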
Context: https://caddy.community/t/caddy-2-5-dynamic-upstreams-and-consul-srv-dns/15839
I realized it probably makes sense to allow `:53` to be omitted, since it's the default port for DNS.
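So a dynamic SRV source can now list a resolver without the port. A sketch (the SRV name and resolver address are made up):
```
reverse_proxy {
	dynamic srv {
		name _grpc._tcp.service.consul
		resolvers 10.0.0.53
	}
}
```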
* reverseproxy: Improve hashing LB policies with HRW
Previously, if a list of upstreams changed, hash-based LB policies
would be greatly affected because the hash relied on the position of
upstreams in the pool. Highest Random Weight or "rendezvous" hashing
is apparently robust to pool changes. It runs in O(n) instead of
O(log n), but n is usually very small.
* Fix bug and update tests
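In short, HRW scores every upstream by hashing the request key together with that upstream and picks the highest score, so adding or removing one upstream only remaps the keys that belonged to it. The affected policies are the hash-based ones, for example:
```
reverse_proxy node1:8080 node2:8080 node3:8080 {
	lb_policy uri_hash
}
```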
* reverseproxy: Add `_ms` placeholders for proxy durations
* Add http.request.duration_ms
Also add comments, and change duration_sec to duration_ms
* Add response.duration_ms for consistency
* Add missing godoc comment
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
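One of the new placeholders in use, exposing upstream latency in a response header (the header name is arbitrary; `latency_ms` is the millisecond variant of the existing latency placeholder):
```
reverse_proxy localhost:9000 {
	header_down X-Upstream-Latency-Ms {http.reverse_proxy.upstream.latency_ms}
}
```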
* reverseproxy: Sync up `handleUpgradeResponse` with stdlib
I had left this as a TODO for when we bump to a minimum of Go 1.17, but I should've realized it was under `internal`, so it couldn't be used directly.
Copied the functions we needed for parity. Hopefully this is ok!
* Add tests and fix godoc comments
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
Should fix #4659
Fix for dc4d147388547515f77447d594024386b732e7d4
Hopefully fix #4645
reverseproxy: New `copy_response` handler for `handle_response` routes (#4391)
* reverseproxy: New `copy_response` handler for `handle_response` routes
Followup to #4298 and #4388.
This adds a new `copy_response` handler, usable only in `reverse_proxy`'s `handle_response` routes, to actually copy the proxy response downstream.
Previously, if `handle_response` was used (with routes, not the status code mode), it was impossible to use the upstream's response body at all, because we would always close the body, expecting the routes to write a new body from scratch.
To implement this, I had to refactor `h.reverseProxy()` to move all the code that came after the `HandleResponse` loop into a new function. This new function `h.finalizeResponse()` takes care of preparing the response by removing extra headers, dealing with trailers, then copying the headers and body downstream.
Since basically what we want `copy_response` to do is invoke `h.finalizeResponse()` at a configurable point in time, we need to pass down the proxy handler, the response, and some other state via a new `req.WithContext(ctx)`. Wrapping a new context is pretty much the only way we have to jump a few layers in the HTTP middleware chain and let a handler pick up this information. Feels a bit dirty, but it works.
Also fixed a bug with the `http.reverse_proxy.upstream.duration` placeholder, it always had the same duration as `http.reverse_proxy.upstream.latency`, but the former was meant to be the time taken for the roundtrip _plus_ copying/writing the response.
* Delete the "Content-Length" header if we aren't copying
Fixes a bug where the Content-Length would mismatch the actual bytes written if we skipped copying the response, so we'd get a message like this when using curl:
```
curl: (18) transfer closed with 18 bytes remaining to read
```
To replicate:
```
{
	admin off
	debug
}

:8881 {
	reverse_proxy 127.0.0.1:8882 {
		@200 status 200
		handle_response @200 {
			header Foo bar
		}
	}
}

:8882 {
	header Content-Type application/json
	respond `{"hello": "world"}` 200
}
```
* Implement `copy_response_headers`, with include/exclude list support
* Apply suggestions from code review
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
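A sketch of the new handler in use (status code and header are arbitrary): intercept a particular upstream status, tweak a header, then copy the upstream's response downstream as-is:
```
reverse_proxy app:8080 {
	@notfound status 404
	handle_response @notfound {
		header Cache-Control no-store
		copy_response
	}
}
```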