See https://caddy.community/t/v2-matcher-or-in-not/7355/
|
|
This is more congruent with its module name. A change that affects only
code, not configurations.
|
|
Either Dial or LookupSRV will be set, but if we rely on Dial always
being set, we could run into bugs.
Note: Health checks don't support SRV upstreams.
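A minimal sketch of the idea, with hypothetical field and helper names (the real upstream type differs): the address is resolved from whichever of the two fields is set, instead of assuming Dial is always present.
```go
package reverseproxy

import (
	"context"
	"fmt"
	"net"
	"strconv"
	"strings"
)

// Upstream is a hypothetical stand-in for the proxy's upstream config;
// exactly one of Dial or LookupSRV is expected to be set.
type Upstream struct {
	Dial      string `json:"dial,omitempty"`
	LookupSRV string `json:"lookup_srv,omitempty"`
}

// resolveAddr returns a dialable host:port. It checks LookupSRV first so
// nothing relies on Dial always being populated.
func (u Upstream) resolveAddr(ctx context.Context) (string, error) {
	if u.LookupSRV != "" {
		_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "", "", u.LookupSRV)
		if err != nil {
			return "", err
		}
		if len(srvs) == 0 {
			return "", fmt.Errorf("no SRV targets found for %s", u.LookupSRV)
		}
		target := strings.TrimSuffix(srvs[0].Target, ".")
		return net.JoinHostPort(target, strconv.Itoa(int(srvs[0].Port))), nil
	}
	return u.Dial, nil
}
```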
|
|
* reverse_proxy: Begin SRV lookup support (WIP)
* reverse_proxy: Finish adding support for SRV-based backends (#3179)
|
|
Brotli encoder, jsonc and json5 config adapters, and the unfinished
HTTP cache handler are removed.
They will be available in separate repos.
|
|
|
|
Adds `Alt-Svc` to the list of headers that get removed when proxying
to a backend.
This fixes the issue of having the contents of the Alt-Svc header
duplicated when proxying to another Caddy server.
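Illustratively (this is not the handler's actual code), the idea is that Alt-Svc joins the set of hop-by-hop-style headers stripped at the proxy boundary, so an upstream Caddy's own Alt-Svc value is not layered on top of this server's:
```go
package proxyutil

import "net/http"

// hopHeaders is an illustrative list of headers that should not survive
// a proxy hop; Alt-Svc is included so an upstream server's value isn't
// duplicated alongside this server's own advertisement.
var hopHeaders = []string{
	"Alt-Svc",
	"Connection",
	"Keep-Alive",
	"Proxy-Authenticate",
	"Proxy-Authorization",
	"Te",
	"Trailer",
	"Transfer-Encoding",
	"Upgrade",
}

// stripHopHeaders deletes the hop-by-hop headers from h, whether h is an
// outgoing request's header set or an upstream response's.
func stripHopHeaders(h http.Header) {
	for _, name := range hopHeaders {
		h.Del(name)
	}
}
```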
|
|
* pki: Initial commit of PKI app (WIP) (see #2502 and #3021)
* pki: Ability to use root/intermediates, and sign with root
* pki: Fix benign misnamings left over from copy+paste
* pki: Only install root if not already trusted
* Make HTTPS port the default; all names use auto-HTTPS; bug fixes
* Fix build - what happened to our CI tests??
* Fix go.mod
|
|
This is a breaking change primarily in two areas:
- Storage paths for certificates have changed
- Slight changes to JSON config parameters
Huge improvements in this commit, to be detailed more in
the release notes.
The upcoming PKI app will be powered by Smallstep libraries.
|
|
This makes it more convenient to configure quick proxies that use HTTPS
but also introduces a lot of logical complexity. We have to do a lot of
verification for consistency and errors.
Path and query string are not supported (i.e. no rewriting).
Scheme and port can be inferred from each other for HTTP(S)/80/443.
If omitted, the scheme defaults to HTTP.
Any explicit transport config must be consistent with the upstream
schemes, and the upstream schemes must all match each other.
But, this change allows a config that used to require this:
    reverse_proxy example.com:443 {
        transport http {
            tls
        }
    }
to be reduced to this:
    reverse_proxy https://example.com
which is really nice syntactic sugar (and is reminiscent of Caddy 1).
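A hedged sketch of the inference rule described above (the handler's actual validation is more involved; this helper is purely illustrative):
```go
package upstreams

// inferSchemeAndPort is an illustrative version of the rule above:
// scheme and port can be derived from each other for the conventional
// http/80 and https/443 pairings, and the scheme defaults to http
// when neither piece implies otherwise.
func inferSchemeAndPort(scheme, port string) (string, string) {
	if scheme == "" {
		if port == "443" {
			scheme = "https"
		} else {
			scheme = "http"
		}
	}
	if port == "" {
		if scheme == "https" {
			port = "443"
		} else {
			port = "80"
		}
	}
	return scheme, port
}
```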
|
|
"Transparent mode" is the default, just like the actual handler.
|
|
* Fix typo
* Fix typo, thanks to Spell Checker in VS Code
|
|
This reverts commit 86b785e51cccd5df18611c380962cbd4faf38af5.
|
|
Fixes https://caddy.community/t/v2-health-checks-are-going-to-the-wrong-upstream/7084?u=matt
... I think
|
|
Now multiple instances of the same matcher can be used within a named
matcher without overwriting previous ones.
|
|
* v2: add documentation for circuit breaker config and "random selection" load balancing policy
* v2: rename circuit breaker config inline key from `type` to `breaker` to avoid json key clash between the `circuit_breaker` type and the `type` field of the generic circuit breaker Config struct used by circuit breaking implementations
* v2: restore the circuit breaker inline key to `type` and rename the circuit breaker config field from `Type` to `Factor`
|
|
The fix that was initially put forth in #2971 was good, but only for
up to one layer of nesting. The real problem was that we forgot to
increment nesting when already inside a block if we saw another open
curly brace that opens another block (dispenser.go L157-158).
The new 'handle' directive allows HTTP Caddyfiles to be designed more
like nginx location blocks if the user prefers. Inside a handle block,
directives are still ordered just like they are outside of them, but
handler blocks at a given level of nesting are mutually exclusive.
This work benefited from some refactoring and cleanup.
|
|
I am not sure if the query_string one is necessary or useful yet. We
can always add it later if needed.
|
|
Previously, all matchers in a route would be evaluated before any
handlers were executed, and a composite route of the matching routes
would be created. This made rewrites especially tricky, since the only
way to defer later matchers' evaluation was to wrap them in a subroute,
or to invoke a "rehandle" which often caused bugs.
Instead, this new sequential design evaluates each route's matchers then
its handlers in lock-step; matcher-handlers-matcher-handlers...
If the first matching route consists of a rewrite, then the second route
will be evaluated against the rewritten request, rather than the original
one, and so on.
This should do away with any need for rehandling.
I've also taken this opportunity to avoid adding new values to the
request context in the handler chain, as this creates a copy of the
Request struct, which may lead to bugs, as it has in the past
(see PR #1542, PR #1481, and maybe issue #2463). We now add all the
expected context values in the top-level handler at the server, then
any new values can be added to the variable table via the VarsCtxKey
context key, or just the GetVar/SetVar functions. In particular, we are
using this facility to convey dial information in the reverse proxy.
Had to be careful in one place as the middleware compilation logic has
changed, and moved a bit. We no longer compile a middleware chain per-
request; instead, we can compile it at provision-time, and defer only the
evaluation of matchers to request-time, which should slightly improve
performance. Doing this, however, we take advantage of multiple function
closures, and we also changed the use of HandlerFunc (function pointer)
to Handler (interface)... this led to a situation which, if we weren't
careful, would allow one request routed a certain way to permanently
change the "next" handler for all/most other requests! We avoid this by
making
a copy of the interface value (which is a lightweight pointer copy) and
using exclusively that within our wrapped handlers. This way, the
original stack frame is preserved in a "read-only" fashion. The comments
in the code describe this phenomenon.
This may very well be a breaking change for some configurations, however
I do not expect it to impact many people. I will make it clear in the
release notes that this change has occurred.
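A minimal sketch of the vars-table approach, with simplified signatures (the real functions and the context key type differ): one map is placed in the context by the top-level handler at the server, and later handlers store values such as dial information in that map instead of adding new context values, so the Request struct is only copied once.
```go
package vartable

import (
	"context"
	"net/http"
)

type ctxKey string

// VarsCtxKey is the single context key under which the variable table lives.
const VarsCtxKey ctxKey = "vars"

// withVars attaches an empty variable table to the request; this is the
// only place in the chain that copies the *http.Request for context values.
func withVars(r *http.Request) *http.Request {
	vars := make(map[string]interface{})
	return r.WithContext(context.WithValue(r.Context(), VarsCtxKey, vars))
}

// SetVar records a value (for example, dial information for the reverse
// proxy) without creating another copy of the request.
func SetVar(ctx context.Context, key string, value interface{}) {
	ctx.Value(VarsCtxKey).(map[string]interface{})[key] = value
}

// GetVar reads a value previously stored by SetVar.
func GetVar(ctx context.Context, key string) interface{} {
	return ctx.Value(VarsCtxKey).(map[string]interface{})[key]
}
```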
|
|
Allows specifying CA certs by filename in
`reverse_proxy.transport`.
Example:
```
reverse_proxy /api api:443 {
    transport http {
        tls
        tls_trusted_ca_certs certs/rootCA.pem
    }
}
```
|
|
* v2: housekeeping: update tools
* v2: housekeeping: adhere to US locale in spelling
* v2: housekeeping: simplify code
|
|
The interface was only making things difficult; a concrete pointer is
probably best.
|
|
These will be used in the new automated documentation system
|
|
(Try saying "patch path match" ten times fast)
|
|
Also some minor cleanup/improvements discovered along the way
|
|
This commit goes a long way toward making automated documentation of
Caddy config and Caddy modules possible. It's a broad, sweeping change,
but mostly internal. It allows us to automatically generate docs for all
Caddy modules (including future third-party ones) and make them viewable
on a web page; the documentation also doubles as godoc comments.
As such, this commit makes significant progress in migrating the docs
from our temporary wiki page toward our new website which is still under
construction.
With this change, all host modules will use ctx.LoadModule() and pass in
both the struct pointer and the field name as a string. This allows the
reflect package to read the struct tag from that field so that it can
get the necessary information like the module namespace and the inline
key.
This has the nice side-effect of unifying the code and documentation. It
also simplifies module loading, and handles several variations on field
types for raw module fields (i.e. variations on json.RawMessage, such as
arrays and maps).
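For illustration, here is a hypothetical host module following the pattern just described (Gizmo, the gizmo.greeters namespace, and the Greeter interface are made up; ctx.LoadModule and the caddy struct tag are the real mechanism):
```go
package gizmo

import (
	"encoding/json"

	"github.com/caddyserver/caddy/v2"
)

// Greeter is whatever interface guest modules in this namespace satisfy.
type Greeter interface {
	Greet() string
}

// Gizmo is a hypothetical host module. The struct tag tells ctx.LoadModule
// which namespace to load from and which JSON key names the guest module.
type Gizmo struct {
	GreeterRaw json.RawMessage `json:"greeter,omitempty" caddy:"namespace=gizmo.greeters inline_key=greeter"`

	greeter Greeter
}

// Provision loads the guest module by passing the struct pointer and the
// field name; the module namespace and inline key come from the struct tag.
func (g *Gizmo) Provision(ctx caddy.Context) error {
	mod, err := ctx.LoadModule(g, "GreeterRaw")
	if err != nil {
		return err
	}
	g.greeter = mod.(Greeter)
	return nil
}
```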
I also renamed ModuleInfo.Name -> ModuleInfo.ID, to make it clear that
the ID is the "full name" which includes both the module namespace and
the name. This clarity is helpful when describing module hierarchy.
As of this change, Caddy modules are no longer an experimental design.
I think the architecture is good enough to go forward.
|
|
Also add godoc for Caddyfile syntax for file_server
|
|
This is a bad idea, but some backends apparently require it. See
discussion in #176.
|
|
This makes it easier to use multiple instances on the same machine
|
|
* fix OOM issue caught by fuzzing
* use ParsedAddress as the struct name for the result of ParseNetworkAddress (see the sketch after this list)
* simplify code using the ParsedAddress type
* minor cleanups
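A hedged usage sketch of the parsing helper (the field names on the returned ParsedAddress are assumptions, and the address string is only an example of the network/host:port form):
```go
package main

import (
	"fmt"
	"log"

	"github.com/caddyserver/caddy/v2"
)

func main() {
	// Parse a network address of the form network/host:port[-port];
	// the field names used below are illustrative.
	addr, err := caddy.ParseNetworkAddress("udp/localhost:5000-5010")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addr.Network, addr.Host, addr.StartPort, addr.EndPort)
}
```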
|