Providing access to Kubernetes through tunnels in one of the largest cities in Lithuania
The core function of the Webhook Relay service is receiving and processing HTTP requests. When used with bidirectional tunnels, it’s a 1:1 relationship that simply exposes the underlying service to the internet. Our client software can run on any machine (Windows, macOS, Linux x86 and ARM) as well as in Docker containers and Kubernetes deployments.
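As an illustration, running the client inside Kubernetes might look roughly like the sketch below. This is an assumption-heavy example, not a verified configuration from this case study: the image name and credential variable names are guesses, and the real values should be taken from the official documentation.

```yaml
# Hypothetical sketch: running the Webhook Relay agent as a Deployment.
# The image name and credential variable names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhookrelayd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhookrelayd
  template:
    metadata:
      labels:
        app: webhookrelayd
    spec:
      containers:
        - name: webhookrelayd
          image: webhookrelay/webhookrelayd   # assumed image name
          env:
            - name: KEY                       # assumed variable names for
              valueFrom:                      # the access key/secret pair
                secretKeyRef:
                  name: webhookrelay-credentials
                  key: key
            - name: SECRET
              valueFrom:
                secretKeyRef:
                  name: webhookrelay-credentials
                  key: secret
```

The credentials live in a regular Kubernetes Secret, so the same manifest can be applied unchanged across environments.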
When used with webhook forwarding, the relationship becomes N:N: multiple public endpoints can route to multiple destinations, whether internal (on a private network) or public. All HTTP requests can be transformed on arrival to our system or just before being dispatched, so a single webhook can be turned into a new request tailored for any API.
Webhook Relay masterplan
- Remove all friction from exposing internal services to the internet for easy access.
- Securely let traffic into sensitive systems (CI/CD) using unidirectional forwarding.
- Transform webhooks, when needed.
Kaunas public transport company
Kaunas is the second-largest city in Lithuania, with big plans for both its physical and digital infrastructure. The city has invested heavily in modernizing its vehicles as well as rebuilding its software stack.
The first step was to start offering e-tickets via mobile application payments - that’s how Žiogas was born. Alongside the new mobile app, a modern, cloud-based backend was designed and deployed on GKE (Google Kubernetes Engine).
With a lot of workloads running on Kubernetes, the need for staging and development environments was as high as ever. However, due to the existing infrastructure, getting load balancers to work with Kubernetes, or even getting access to the Kubernetes API server for kubectl from outside, was not trivial. A decision was made to:
- Use TLS-pass-through tunnels to access Kubernetes, as authentication and authorization are performed using certificates.
- Deploy an ingress controller that provides access to the workload APIs and web dashboards.
- Integrate CI/CD with GitHub via webhooks, with easy access to the build server itself using tunnels.
Use case: TLS-pass-through to enable access for kubectl
Deploying a development/staging environment was simple; enabling easy access that doesn’t require users to SSH into a jump server and then download manifests was not.
This is where Webhook Relay’s TLS-pass-through tunnels came into play. We launched a standalone webhookrelayd container on the same machine, which opened a direct tunnel to the Kubernetes API server. Then we only had to update the kubeconfig to point at the public tunnel hostname instead of the internally running Kubernetes API server, and we had access.
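The kubeconfig change amounts to swapping the server address. A minimal sketch is below; the tunnel hostname is a made-up placeholder, not the real endpoint, and the certificate data is elided:

```yaml
# Sketch of the relevant kubeconfig fragment. The tunnel hostname
# is a hypothetical placeholder.
apiVersion: v1
kind: Config
clusters:
  - name: staging
    cluster:
      # Previously an internal address; now the public
      # TLS-pass-through tunnel hostname:
      server: https://k8s-staging.example.webrelay.io
      certificate-authority-data: <base64 CA certificate>
users:
  - name: staging-admin
    user:
      # Client certificates keep working because the tunnel passes
      # TLS through untouched - the API server still performs
      # authentication and authorization.
      client-certificate-data: <base64 client certificate>
      client-key-data: <base64 client key>
contexts:
  - name: staging
    context:
      cluster: staging
      user: staging-admin
current-context: staging
```

Because the tunnel never terminates TLS, the certificate-based authentication described above is preserved end to end.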
Use case: Tunnel-based ingress controller for the workloads
Having access to Kubernetes via kubectl was great, but access to the workloads themselves was crucial, as the product mostly consists of a back-office web portal for system administration and an end-user-facing API.
As this was a closed environment, services couldn’t be exposed directly to the internet for testing and development purposes. Hence, we deployed the Webhook Relay ingress controller, which let us achieve several goals:
- HTTPS endpoints for our workloads without having to configure NAT or routing, or get a domain.
- Easy access to any new workload, as there’s no shortage of custom subdomains.
- The same configuration principles in staging and production - just through ingress.yaml files. The only difference is the ingress controller class.
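Such an ingress.yaml could look roughly like the sketch below. The ingress class name and hostname are assumptions for illustration, not values taken from this deployment:

```yaml
# Illustrative sketch only: the ingress class annotation and the
# hostname are assumed, not confirmed by this case study.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: back-office
  annotations:
    kubernetes.io/ingress.class: webhookrelay   # assumed class name
spec:
  rules:
    - host: back-office.example.webrelay.io     # assumed subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: back-office   # existing ClusterIP service
                port:
                  number: 80
```

Switching between staging and production then only requires changing the ingress class (and host), while the rest of the manifest stays identical.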
- A closed corporate environment without public access can now expose services through an ingress controller.
- kubectl can reach the API server to create, view and manage deployments, services and other resources.
- CI/CD utilizes vast internal resources.
- Excellent integration with GitHub enables a modern CI pipeline.
- Tunnels make private infrastructure feel like a cloud environment.
Webhook Relay allowed the team to speed up its development and testing process. Our client could finally utilize large servers for Kubernetes that would otherwise have cost thousands of dollars in a cloud environment.
All this while retaining a cloud-like user experience: easy access to Kubernetes and on-demand access to any existing or new workload.