Typically in a Kubernetes environment, when we need to expose a service to the outside world, we use a load balancer, which in general means a load balancer from our cloud provider.
The problem with this approach is that our service will be reachable from the load balancer or, more accurately, from the network where the load balancer lives.
One way to improve security is to start using Cloudflare Tunnels, which are a fundamental component of Cloudflare’s suite of services designed to enhance the performance, reliability, and security of websites and applications. At its core, a Cloudflare Tunnel serves as a secure conduit between the origin server, where the application is hosted, and Cloudflare’s global network. This tunneling architecture introduces several key benefits that set it apart from traditional load balancing approaches.
By using CF tunnels, instead of publicly exposing an endpoint, what we do is “connect” a small daemon (the CF tunnel) to Cloudflare’s global network. So the connection is first established from our data center (cloud, on-prem, whatever) out to Cloudflare, and not the other way around.
One benefit of this approach is that we don’t need any inbound network ports open, as all the communication between our services and the outside world happens through the CF tunnel.
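Outside of Kubernetes, standing up such a tunnel is just a handful of `cloudflared` commands. A minimal sketch (the tunnel and hostname names here are illustrative, not from this lab):

```shell
# Authenticate cloudflared against your Cloudflare account
cloudflared tunnel login

# Create a named tunnel; this generates a credentials file
cloudflared tunnel create demo-tunnel

# Point a DNS record at the tunnel
cloudflared tunnel route dns demo-tunnel tunnel.example.com

# Start the connector; note the connection is dialed *outbound*
# from this machine to Cloudflare's edge
cloudflared tunnel run demo-tunnel
```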
k8s requirements
What do we need to run CF tunnels alongside our public services in k8s? Not much: just a couple of deployments and a service.
In my example, I’ll be using nginx as the backend, but anything else (a Flask service, an API, a static site, whatever!) could be used.
The diagram of the request will look like the following:
User (cloud) --> CF Global Network <-- CF Tunnel container --> nginx
This means that the connection is initiated from the CF tunnel to the CF global network. Once this happens, all the magic that CF tunnels provide happens as well.
k8s configuration
As mentioned, we need a couple of deployments and a service. To keep the deployment clean and simple, we can take advantage of namespaces: since they are free, we can create one namespace per group of services.
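For instance (the namespace name is illustrative):

```shell
# Create a dedicated namespace for this group of services
kubectl create namespace cf-tunnel-demo

# Make it the default namespace for subsequent kubectl commands
kubectl config set-context --current --namespace=cf-tunnel-demo
```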
In this example, I have created two different deployments and one service:
> kubectl get deployments.apps
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
cf-tunnel-nginx-test   1/1     1            1           2m48s
cf-tunnel-test         1/1     1            1           2m48s
> kubectl get services
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.100.115.29   <none>        80/TCP    4m23s
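A sketch of what those two deployments and the service could look like, applied inline. The labels, image tags, and the token placeholder are assumptions for illustration; the actual manifests for this lab are in the linked repository:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cf-tunnel-nginx-test
spec:
  replicas: 1
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  selector: { app: nginx }
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cf-tunnel-test
spec:
  replicas: 1
  selector:
    matchLabels: { app: cloudflared }
  template:
    metadata:
      labels: { app: cloudflared }
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          # <TUNNEL_TOKEN> comes from the Cloudflare Zero Trust dashboard
          args: ["tunnel", "--no-autoupdate", "run", "--token", "<TUNNEL_TOKEN>"]
EOF
```

Note that the nginx service stays as `ClusterIP`: nothing is exposed outside the cluster, and the `cloudflared` pod reaches nginx over the cluster network.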
Cloudflare configuration
The configuration in Cloudflare is as simple as follows:
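For a dashboard-managed tunnel this is just mapping a public hostname to the in-cluster service in the Zero Trust UI. For a locally-managed tunnel, the equivalent is a `cloudflared` config file; a sketch (the tunnel ID and paths are placeholders):

```shell
cat > config.yml <<'EOF'
tunnel: <TUNNEL_ID>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  # Route the public hostname to the ClusterIP service inside the cluster
  - hostname: tunnel.borisquiroz.dev
    service: http://nginx:80
  # Catch-all rule, required as the last ingress entry
  - service: http_status:404
EOF
```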
Verification
In the code I have added a response header including the hostname of the container that handled the response:
> curl -I https://tunnel.borisquiroz.dev
HTTP/2 200
date: Sun, 19 Nov 2023 02:55:19 GMT
content-type: text/html
accept-ranges: bytes
last-modified: Tue, 24 Oct 2023 13:46:47 GMT
x-container-hostname: cf-tunnel-nginx-test-869bdffcd4-kpgmx
cf-cache-status: DYNAMIC
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=45hKRrpt4aV2DZunX9FOYuqc4ryouUzOTjVf9ngTd%2FTETC0ZZV%2BvU9n97mVpjPWfZKYhPmiLf%2FSr5k1R%2F9s6S7tzBbLAktHo95fAOZZcRL5XMAGPNUzjPPGBOdCE1ZRFTsS746S7FbaY"}],"group":"cf-nel","max_age":604800}
nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
server: cloudflare
cf-ray: 82852c758ef22def-SCL
alt-svc: h3=":443"; ma=86400
Or, just to make it clearer:
> curl -sI https://tunnel.borisquiroz.dev | grep x-container-hostname
x-container-hostname: cf-tunnel-nginx-test-869bdffcd4-kpgmx
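One way such a header can be set (a sketch; the repository’s actual code may differ) is with nginx’s `add_header` directive and the built-in `$hostname` variable, which inside a pod resolves to the pod name. The snippet could be wired into the nginx deployment via a ConfigMap:

```shell
# Illustrative: an nginx server block that echoes the container (pod)
# hostname on every response, stored as a ConfigMap to be mounted at
# /etc/nginx/conf.d in the nginx deployment.
kubectl create configmap nginx-conf --from-literal=default.conf='
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        add_header x-container-hostname $hostname always;
    }
}'
```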
Show me the code
All the code used for this lab is available here.