ingress-nginx is dead, long live the Gateway API
You might be aware that Ingress NGINX is retiring next month (that’s tomorrow!). I really didn’t want to bother with replacing it, but I also didn’t particularly like the sound of running something that was barely maintained to begin with and will now never receive another patch. After a half-hearted attempt to understand why I would want to use Istio, I’ve completed a migration to Envoy Gateway and will briefly document the changes in manifest form in case anyone else wants to copy-paste.
This first bit’s Flux CD-specific, but it gives you the info you need either way: grab the HelmRelease. I use a custom cluster domain (not cluster.local), so I needed to configure it; missing this caused a bit of wailing and gnashing of teeth while I wondered why it wouldn’t come up.
---
apiVersion: v1
kind: Namespace
metadata:
  name: envoy-gateway-system
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: envoy-gateway
  namespace: envoy-gateway-system
spec:
  type: oci
  interval: 24h
  url: oci://docker.io/envoyproxy
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: envoy-gateway
  namespace: envoy-gateway-system
spec:
  chart:
    spec:
      chart: gateway-helm
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: envoy-gateway
        namespace: envoy-gateway-system
  interval: 24h
  values:
    kubernetesClusterDomain: cassax.hrzn.ee
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
spec:
  interval: 24h
  path: "./flux/sources/envoy-gateway"
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
Here I’ve configured it to grab a kustomization from ./flux/sources/envoy-gateway (relative to the git repository the whole Flux system is configured from).
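If that directory has no kustomization.yaml, Flux’s kustomize-controller generates one on the fly covering every manifest it finds there; to pin the set down explicitly, a minimal one looks like this (the filename in resources is an assumption; list whatever files you split the manifests into):

```yaml
# ./flux/sources/envoy-gateway/kustomization.yaml: a minimal sketch.
# The manifest filename below is an assumption.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - envoy-gateway.yaml
```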
The kustomization’s manifest starts with this:
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort
        patch:
          type: StrategicMerge
          value:
            spec:
              externalIPs:
                - 51.161.136.132
              externalTrafficPolicy: Local
      envoyDeployment:
        pod:
          nodeSelector:
            kubernetes.io/hostname: kala
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy
    namespace: envoy-gateway-system
I don’t have a LoadBalancer (the cluster is just one node on a VPS), so the Envoy proxy becomes a NodePort service with the IP of the interface we want it to listen on, the same way I used to do it for Ingress NGINX. Likewise carried across: externalTrafficPolicy: Local ensures the client source IP is preserved, and the pod node selector targets the node I can actually serve internet traffic from. (Technically redundant right now: when I originally set up Ingress NGINX there were multiple nodes and only kala was publicly routable, and maybe that’ll be the case again in the future.)
The GatewayClass ties the proxy to the gateway we’ll define now:
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: envoy-gateway-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  gatewayClassName: envoy
  addresses:
    - type: IPAddress
      value: 51.161.136.132
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
    - name: https-kivikakk-ee
      protocol: HTTPS
      port: 443
      hostname: kivikakk.ee
      tls:
        mode: Terminate
        certificateRefs:
          - name: tls-kivikakk-ee
      allowedRoutes:
        namespaces:
          from: All
    # ...
We need to listen on port 80 for ACME (and HTTP to HTTPS redirects), and then we define listeners for every SNI’able host.
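Each of the listeners elided above follows the kivikakk.ee shape. For instance, a listener for nossa.ee (the host the routes further down attach to by sectionName) would look like this, assuming a tls-nossa-ee certificate Secret that mirrors the tls-kivikakk-ee naming:

```yaml
# One more listener in the same Gateway, following the pattern above.
# The certificate Secret name is an assumption (mirroring tls-kivikakk-ee).
- name: https-nossa-ee
  protocol: HTTPS
  port: 443
  hostname: nossa.ee
  tls:
    mode: Terminate
    certificateRefs:
      - name: tls-nossa-ee
  allowedRoutes:
    namespaces:
      from: All
```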
The HTTP to HTTPS redirect is the first HTTPRoute and it’s simple:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: envoy-gateway-system
spec:
  parentRefs:
    - name: eg
      sectionName: http
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
Actual services get routes like this; I define these in the individual services’ manifests:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nossa.ee.
    external-dns.alpha.kubernetes.io/ttl: 24h
  name: nossa
  namespace: nossa
spec:
  hostnames:
    - nossa.ee
  parentRefs:
    - name: eg
      namespace: envoy-gateway-system
      sectionName: https-nossa-ee
  rules:
    - backendRefs:
        - name: nossa
          port: 80
(Actually, I use Timoni to make these things out of CUE because it’s way less painful and means I can’t accidentally e.g. define that port number incorrectly, relative to the port the Service sits on.)
www. redirects look like this:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nossa-www-redirect
  namespace: nossa
spec:
  hostnames:
    - www.nossa.ee
  parentRefs:
    - name: eg
      namespace: envoy-gateway-system
      sectionName: https-www-nossa-ee
  rules:
    - filters:
        - requestRedirect:
            hostname: nossa.ee
            statusCode: 301
          type: RequestRedirect
It was a lot less painful than I’d feared, and I’m a lot happier doing this than Istio (it seemed like SO much) or my fallback option, HAProxy Ingress (I’d rather get onto the Gateway API).
But!
I tripped over two things.
First, nóssa is protected by Anubis. enbi needs to access it via git+https to build things. I also want to pull from/push to it from kala (the cluster node) itself. OOTB this was newly failing: Anubis was giving 500s, matching this issue: X-Forwarded-For & X-Real-Ip handling doesn’t properly respect private IPs #1270.
X-Forwarded-For and X-Real-Ip are a mess, and I dare you to read Envoy’s HTTP header manipulation documentation and come away feeling pure of soul.
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-headers
  namespace: envoy-gateway-system
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  headers:
    earlyRequestHeaders:
      set:
        - name: X-Real-Ip
          value: "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%"
%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT% is documented under the “Advanced” part of Envoy’s configuration reference, with a delightful note that it might be inferred from XFF anyway. Seems not to, though: I get external IPs logged by Anubis for x-real-ip when I make external requests, and cluster-internal ones from kala or from a different pod. Setting XFF to something fake doesn’t show up at all when external, and while it shows up in Anubis’s logs under x-forwarded-for when setting it on internal requests, x-real-ip is unaffected.
Second, I use MinIO for the “static hosting” of a bunch of sites; it’s convenient, this blog engine uses its API for asset admin, Annie can push her sites with it, etc. (I do want to get off it since it’s undergoing AI enshittification.)
I have one MinIO instance with buckets served out of subdirectories; e.g. https://s3.hrzn.ee/comrak.ee/index.html.
I used to use nginx.ingress.kubernetes.io/rewrite-target: /comrak.ee$uri on the comrak.ee ingress with a backend of the https-minio service, so a request for https://comrak.ee/index.html would translate internally to /comrak.ee/index.html. (And add a rewrite configuration snippet to handle / to /index.html explicitly.)
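For comparison, the retired Ingress looked roughly like this. This is a reconstruction, not the exact manifest: the configuration-snippet body and the https-minio backend port are assumptions.

```yaml
# Sketch of the old ingress-nginx resource, reconstructed for comparison.
# The configuration-snippet body and the backend port are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: static-comrak.ee
  namespace: minio-kala
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /comrak.ee$uri
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/$ /comrak.ee/index.html break;
spec:
  ingressClassName: nginx
  rules:
    - host: comrak.ee
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: https-minio
                port:
                  number: 443
```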
Porting this naïvely did not work: I don’t know if it’s a Gateway spec thing or an Envoy Gateway thing, but a filter with replacePrefixMatch: /comrak.ee/ when matching a PathPrefix of / would nonetheless strip the trailing /; requests for https://comrak.ee/index.html would get rewritten to request /comrak.eeindex.html. Specifying an empty string for the PathPrefix made no difference. Adding a second / at the end of the replacement worked. This feels somehow demeaning, but what can you do:
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: comrak.ee.
    external-dns.alpha.kubernetes.io/ttl: 24h
  name: static-comrak.ee
  namespace: minio-kala
spec:
  hostnames:
    - comrak.ee
  parentRefs:
    - name: eg
      namespace: envoy-gateway-system
      sectionName: https-comrak-ee
  rules:
    - backendRefs:
        - name: minio
          port: 80
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              replaceFullPath: /comrak.ee/index.html
              type: ReplaceFullPath
      matches:
        - path:
            type: Exact
            value: /
    - backendRefs:
        - name: minio
          port: 80
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              replacePrefixMatch: /comrak.ee// # yuk
              type: ReplacePrefixMatch
      matches:
        - path:
            type: PathPrefix
            value: /
That’s it, I’m done thinking about ingress for a long time again. If this happens again I’m going back to my cursed NixOS integrated ingress.