* Make reverse proxy TLS server name replaceable for SNI upstreams.
* Reverted the previous TLS server name replacement and implemented a thread-safe version (sketched below).
* Moved TLS server name replacement into its own function.
* Moved SNI server name replacement into httptransport.
* Solved an issue where dynamic upstreams used the wrong upstream protocol.
* Reverted previous commit.
Old commit was: Solve issue when dynamic upstreams use wrong protocol upstream.
Id: 3c9806ccb63e66bdcac8e1ed4520c9d135cb011d
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
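A minimal sketch of the thread-safe idea, assuming the transport clones its `tls.Config` per connection instead of mutating the shared one (names here are illustrative, not Caddy's actual code):
```
package main

import "crypto/tls"

// tlsConfigForServerName returns a per-connection TLS config with the
// replaced SNI server name; cloning avoids data races on the shared config.
func tlsConfigForServerName(base *tls.Config, serverName string) *tls.Config {
	cfg := base.Clone() // Clone is safe to call concurrently
	cfg.ServerName = serverName
	return cfg
}
```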
* Add renegotiation option in the reverse proxy TLS client (see the sketch below)
* Update modules/caddyhttp/reverseproxy/httptransport.go
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
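For context, a sketch of what such an option maps to in `crypto/tls` (the exact Caddy option name and plumbing are not shown here):
```
package main

import "crypto/tls"

// A client config that permits the upstream server to request renegotiation
// once per connection, which some legacy TLS servers require.
var upstreamTLS = &tls.Config{
	Renegotiation: tls.RenegotiateOnceAsClient,
}
```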
* reverseproxy: Correct the `tls_server_name` docs
* Update modules/caddyhttp/reverseproxy/httptransport.go
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
Closes #4823
In v2.5.0, upstream health was fixed such that whether an upstream is
considered healthy is mostly up to each individual handler's config.
Since "healthy" is an opinion, it is not a global value.
I unintentionally left the "healthy" field in the API endpoint for
checking upstreams, and it is now misleading (see #4792).
However, num_requests and fails remain, so health can be determined by
the API client rather than being opaquely (and unhelpfully) determined
for the client (see the sketch below).
If we do restore this value later on, it would need to be replicated once
per reverse_proxy handler according to their individual configs.
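A sketch of how an API client might form its own health opinion from the remaining fields (the struct shape, JSON tags, and threshold here are assumptions, not the endpoint's documented schema):
```
package main

// upstreamStatus mirrors the fields mentioned above; JSON tags are assumed.
type upstreamStatus struct {
	NumRequests int `json:"num_requests"`
	Fails       int `json:"fails"`
}

// healthyFor is one possible client-side policy: tolerate up to maxFails.
func healthyFor(u upstreamStatus, maxFails int) bool {
	return u.Fails < maxFails
}
```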
Added flag --internal-certs.
When set, the internal CA will be used for certificate generation even for non-local domains.
Context: https://caddy.community/t/caddy-2-5-dynamic-upstreams-and-consul-srv-dns/15839
I realized it probably makes sense to allow `:53` to be omitted, since it's the default port for DNS.
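A minimal sketch of that normalization, using a hypothetical helper (not Caddy's actual parsing code):
```
package main

import "net"

// withDefaultDNSPort appends the default DNS port when none was given;
// net.SplitHostPort failing on a bare host is the cue to add ":53".
func withDefaultDNSPort(addr string) string {
	if _, _, err := net.SplitHostPort(addr); err != nil {
		return net.JoinHostPort(addr, "53") // also brackets IPv6 literals
	}
	return addr
}
```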
* reverseproxy: Improve hashing LB policies with HRW
Previously, if the list of upstreams changed, hash-based LB policies
were greatly affected because the hash relied on the position of
upstreams in the pool. Highest Random Weight or "rendezvous" hashing
is apparently robust to pool changes. It runs in O(n) instead of
O(log n), but n is usually very small anyway (see the sketch below).
* Fix bug and update tests
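A compact sketch of rendezvous hashing, assuming an FNV-1a hash over the key plus each upstream's address (illustrative, not the exact Caddy implementation):
```
package main

import "hash/fnv"

// hostByHashing picks the upstream whose hash with the key is highest.
// Adding or removing one upstream only remaps the keys that scored
// highest on that upstream, which is what makes HRW robust to pool changes.
func hostByHashing(pool []string, key string) string {
	var best string
	var bestHash uint32
	for _, upstream := range pool {
		h := fnv.New32a()
		h.Write([]byte(key))
		h.Write([]byte(upstream))
		if sum := h.Sum32(); best == "" || sum > bestHash {
			best, bestHash = upstream, sum
		}
	}
	return best
}
```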
* reverseproxy: Add `_ms` placeholders for proxy durations
* Add http.request.duration_ms
Also add comments, and change duration_sec to duration_ms
* Add response.duration_ms for consistency
* Add missing godoc comment
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
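The millisecond values above presumably derive from a `time.Duration` along these lines (a sketch; the placeholder wiring itself is omitted):
```
package main

import "time"

// durationMs renders a duration as fractional milliseconds, the shape
// implied by the new *_ms placeholders.
func durationMs(d time.Duration) float64 {
	return float64(d) / float64(time.Millisecond)
}
```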
* reverseproxy: Sync up `handleUpgradeResponse` with stdlib
I had left this as a TODO for when we bump to a minimum of Go 1.17, but I should've realized it was under `internal` in the stdlib, so it couldn't be used directly.
Copied the functions we needed for parity. Hopefully this is ok!
* Add tests and fix godoc comments
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
Should fix #4659
Fix for dc4d147388547515f77447d594024386b732e7d4
Hopefully fix #4645
reverseproxy: New `copy_response` handler for `handle_response` routes (#4391)
* reverseproxy: New `copy_response` handler for `handle_response` routes
Followup to #4298 and #4388.
This adds a new `copy_response` handler, which may only be used inside `reverse_proxy`'s `handle_response` routes, and which actually copies the proxy's response downstream.
Previously, if `handle_response` was used (with routes, not the status code mode), it was impossible to use the upstream's response body at all, because we would always close the body, expecting the routes to write a new body from scratch.
To implement this, I had to refactor `h.reverseProxy()` to move all the code that came after the `HandleResponse` loop into a new function. This new function `h.finalizeResponse()` takes care of preparing the response by removing extra headers, dealing with trailers, then copying the headers and body downstream.
Since what we essentially want `copy_response` to do is invoke `h.finalizeResponse()` at a configurable point in time, we need to pass the proxy handler, the response, and some other state down via a new `req.WithContext(ctx)`. Wrapping a new context is pretty much the only way we have to jump a few layers in the HTTP middleware chain and let a handler pick up this information (see the context-passing sketch after this entry). It feels a bit dirty, but it works.
Also fixed a bug with the `http.reverse_proxy.upstream.duration` placeholder: it always had the same value as `http.reverse_proxy.upstream.latency`, but the former was meant to be the time taken for the roundtrip _plus_ copying/writing the response.
* Delete the "Content-Length" header if we aren't copying
Fixes a bug where the Content-Length would mismatch the actual bytes written if we skipped copying the response, so curl would report:
```
curl: (18) transfer closed with 18 bytes remaining to read
```
To replicate:
```
{
admin off
debug
}
:8881 {
reverse_proxy 127.0.0.1:8882 {
@200 status 200
handle_response @200 {
header Foo bar
}
}
}
:8882 {
header Content-Type application/json
respond `{"hello": "world"}` 200
}
```
* Implement `copy_response_headers`, with include/exclude list support
* Apply suggestions from code review
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
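A sketch of the context-passing technique described above, with illustrative names (not Caddy's actual keys or types):
```
package main

import (
	"context"
	"net/http"
)

type proxyCtxKey struct{}

// proxyState carries what an inner handler needs to finalize the response.
type proxyState struct {
	res *http.Response
	// in practice this would also carry the proxy handler itself
}

// withProxyState stashes the state on the request for handlers further
// down the middleware chain.
func withProxyState(r *http.Request, st *proxyState) *http.Request {
	return r.WithContext(context.WithValue(r.Context(), proxyCtxKey{}, st))
}

// proxyStateFrom is what a handler like copy_response would call to pick
// the state back up and copy the response downstream.
func proxyStateFrom(r *http.Request) (*proxyState, bool) {
	st, ok := r.Context().Value(proxyCtxKey{}).(*proxyState)
	return st, ok
}
```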
* reverseproxy: Begin refactor to enable dynamic upstreams
Streamed here: https://www.youtube.com/watch?v=hj7yzXb11jU
* Implement SRV and A/AAAA upstream sources (see the SRV sketch after this entry)
Also get upstreams at every retry loop iteration instead of just once
before the loop. See #4442.
* Minor tweaks from review
* Limit size of upstreams caches
* Add doc notes deprecating LookupSRV
* Provision dynamic upstreams
Still WIP, preparing to preserve health checker functionality
* Rejigger health checks
Move active health check results into handler-specific Upstreams.
Improve documentation regarding health checks and upstreams.
* Deprecation notice
* Add Caddyfile support, use `caddy.Duration`
* Interface guards
* Implement custom resolvers, add resolvers to http transport Caddyfile
* SRV: fix Caddyfile `name` inline arg, remove proto condition
* Use pointer receiver
* Add debug logs
Co-authored-by: Francis Lavoie <lavofr@gmail.com>
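A sketch of the SRV flavor of such an upstream source, resolved fresh on each attempt (a hypothetical helper; Caddy's actual Upstreams interface and caching are not shown):
```
package main

import (
	"context"
	"net"
	"strconv"
	"strings"
)

// srvUpstreams resolves a DNS SRV record into "host:port" upstream addresses.
// Calling this on every retry iteration keeps the pool fresh, per the commit.
func srvUpstreams(ctx context.Context, service, proto, name string) ([]string, error) {
	_, records, err := net.DefaultResolver.LookupSRV(ctx, service, proto, name)
	if err != nil {
		return nil, err
	}
	addrs := make([]string, 0, len(records))
	for _, rec := range records {
		host := strings.TrimSuffix(rec.Target, ".")
		addrs = append(addrs, net.JoinHostPort(host, strconv.Itoa(int(rec.Port))))
	}
	return addrs, nil
}
```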
* reverseproxy: Make shallow-ish clone of the request
* Refactor request cloning into separate function
Co-authored-by: Matthew Holt <mholt@users.noreply.github.com>
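A sketch of what a shallow-ish clone means here, assuming the header map is the only field deep-copied (illustrative, not the exact refactored function):
```
package main

import "net/http"

// cloneRequestShallow copies the request struct and its header map, but
// deliberately shares the body; a body cannot generally be deep-copied.
func cloneRequestShallow(r *http.Request) *http.Request {
	clone := new(http.Request)
	*clone = *r // shallow copy of every field
	clone.Header = make(http.Header, len(r.Header))
	for k, vv := range r.Header {
		clone.Header[k] = append([]string(nil), vv...)
	}
	return clone
}
```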
Fixes #4481
* fastcgi: Fix a TODO, prevent zap using reflection for logging env
* Update modules/caddyhttp/reverseproxy/fastcgi/fastcgi.go
Co-authored-by: Mohammed Al Sahaf <msaa1990@gmail.com>
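One common way to keep zap off the reflection path for a map of env vars is a `zapcore.ObjectMarshaler`; a sketch under that assumption (type and function names illustrative):
```
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// loggableEnv encodes the FastCGI env pairs field-by-field so zap does
// not fall back to reflection.
type loggableEnv map[string]string

func (e loggableEnv) MarshalLogObject(enc zapcore.ObjectEncoder) error {
	for k, v := range e {
		enc.AddString(k, v)
	}
	return nil
}

func logEnv(logger *zap.Logger, env map[string]string) {
	logger.Debug("roundtrip", zap.Object("env", loggableEnv(env)))
}
```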
* reverseproxy: Adjust defaults, document defaults
Related to some of the issues in https://github.com/caddyserver/caddy/issues/4245 and a complaint about the proxy transport defaults not being properly documented in https://caddy.community/t/default-values-for-directives/14254/6.
- Dug into the stdlib to find the actual defaults for some of the timeouts and buffer limits, documenting them in godoc so the JSON docs get them next release.
- Moved the keep-alive and dial-timeout defaults from `reverseproxy.go` to `httptransport.go`. It doesn't make sense to set defaults in the proxy, because then any time the transport is configured with non-defaults, the keep-alive and dial-timeout defaults are lost!
- Sped up the dial timeout from 10s to 3s; in practice it rarely makes sense to wait a whole 10s for dialing. A shorter timeout helps a lot with load balancer retries, so using something lower helps the user experience.
* reverseproxy: Make keepalive interval configurable via Caddyfile
* fastcgi: DialTimeout default for fastcgi transport too
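Roughly where those defaults land on the transport, as a sketch (the 3s dial timeout comes from this entry; the keep-alive interval shown is an assumed placeholder):
```
package main

import (
	"net"
	"net/http"
	"time"
)

// Setting defaults on the transport itself means they survive when other
// transport fields are customized (the problem described above).
var defaultTransport = &http.Transport{
	DialContext: (&net.Dialer{
		Timeout:   3 * time.Second,  // dial timeout, sped up from 10s
		KeepAlive: 30 * time.Second, // keep-alive probe interval (assumed)
	}).DialContext,
}
```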
* caddyhttp: Sanitize scheme and host on incoming requests
* reverseproxy: Sanitize the URL scheme and host before proxying
* Apply suggestions from code review
Co-authored-by: Matt Holt <mholt@users.noreply.github.com>
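A sketch of the sanitizing idea, assuming the server derives the URL's scheme and host from the connection rather than trusting whatever arrived in the request line (not Caddy's exact code):
```
package main

import "net/http"

// sanitizeRequestURL resets URL parts a client could spoof: scheme comes
// from whether the connection used TLS, host from the Host header.
func sanitizeRequestURL(r *http.Request) {
	if r.TLS != nil {
		r.URL.Scheme = "https"
	} else {
		r.URL.Scheme = "http"
	}
	r.URL.Host = r.Host
}
```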
The question would only invite bad answers, so it's better
to just say what the option actually does.
Debug is the correct log level for this.
From reading through the code, I think this code path is now obsoleted by the changes made in https://github.com/caddyserver/caddy/pull/4266.
Basically, `h.flushInterval()` will set the flush interval to `-1` if we're in a bi-directional stream, and the recent PR ensured that `h.copyResponse()` properly flushes headers immediately when the flush interval is non-zero. So there should now be no need to call Flush before calling `h.copyResponse()` (see the sketch below).
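A sketch of the semantics being relied on, with illustrative signatures (the real logic lives in the proxy's copy of the stdlib code):
```
package main

import (
	"net/http"
	"time"
)

// flushIntervalFor mirrors the rule described above: a stream of unknown
// length gets -1, meaning "flush after every write".
func flushIntervalFor(res *http.Response, configured time.Duration) time.Duration {
	if res.ContentLength == -1 {
		return -1
	}
	return configured
}

// With any non-zero interval, headers get flushed immediately before the
// body copy begins, so no separate Flush call is needed beforehand.
func beginCopy(w http.ResponseWriter, interval time.Duration) {
	if interval != 0 {
		if f, ok := w.(http.Flusher); ok {
			f.Flush()
		}
	}
}
```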
I went through the commits that touched stdlib's `reverseproxy.go` file and copied over all the changes that apply to the code that was copied into Caddy.
The commits I pulled changes from:
- https://github.com/golang/go/commit/2cc347382f4df3fb40d8d81ec9331f0748b1c394
- https://github.com/golang/go/commit/a5cea062b305c8502bdc959c0eec279dbcd4391f
- https://github.com/golang/go/commit/ecdbffd4ec68b509998792f120868fec319de59b
- https://github.com/golang/go/commit/21898524f66c075d7cfb64a38f17684140e57675
- https://github.com/golang/go/commit/ca3c0df1f8e07337ba4048b191bf905118ebe251
- https://github.com/golang/go/commit/9c017ff30dd21bbdcdb11f39458d3944db530d7e
This may also fix https://github.com/caddyserver/caddy/issues/4247 because of the change to `copyResponse` to set `mlw.flushPending = true` right away.
The fastcgi changes came from v1 and don't make sense in v2.
Also fix a comment about the default value for reverse proxy keep-alive.
Also split the Caddyfile subdirective keepalive_idle_conns into two properties so the total conns and conns_per_host can be set separately (see the sketch below).
This is technically a breaking change, but anyone it breaks probably had a broken config already, and silently fixing it wouldn't help them fix their configs.
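On the Go side these map to two independent transport knobs, sketched with illustrative values (the second subdirective name is inferred from the description):
```
package main

import "net/http"

// One limit caps the whole idle pool; the other caps idle connections
// kept per upstream host.
var transport = &http.Transport{
	MaxIdleConns:        100, // keepalive_idle_conns
	MaxIdleConnsPerHost: 32,  // keepalive_idle_conns_per_host (assumed name)
}
```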