Webhooks have become an essential component of modern application integration, allowing systems to communicate and exchange information in real-time. However, ensuring the security of webhooks is crucial to protect against potential vulnerabilities and attacks. In this research report, we will explore the best practices for securing webhooks, including encryption, authentication, message verification, and more. By following these practices, you can enhance the security of your webhook implementation and protect your data from potential threats.
Encrypting data sent through webhooks is a fundamental security measure to protect the confidentiality of the information transmitted. It is recommended to use the secure HTTP protocol, HTTPS, instead of HTTP. HTTPS encrypts all communication between the sender and receiver, making it harder for third parties to intercept and access the data.
Signing webhooks using a hash-based message authentication code (HMAC) ensures the authenticity and integrity of the messages. HMAC uses a shared secret key between the webhook provider and consumer to create a signature for each message. The consumer can then verify the signature to ensure that the message has not been tampered with during transit.
You can view our example of how to sign and verify webhooks using HMAC here.
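The linked example is not reproduced here, but a minimal sketch of HMAC-SHA256 signing and verification in Python (the key and payload are illustrative, not Webhook Relay's actual scheme) could look like this:

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    # HMAC-SHA256 over the raw request body, hex-encoded
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    # compare_digest is constant-time, preventing timing attacks
    return hmac.compare_digest(sign(secret, payload), signature)

secret = b"shared-secret"            # hypothetical shared key
body = b'{"event": "invoice.paid"}'  # hypothetical webhook body
signature = sign(secret, body)
```

The provider computes the signature and sends it in a header; the consumer recomputes it over the received body and compares.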
Authenticating the source of webhook messages is essential to prevent unauthorized access and ensure that requests are coming from the intended source. One common method is to include an authentication token in the webhook request header. The consumer can check for this token to verify the legitimacy of the payload. Additionally, the consumer can whitelist the IP address of the webhook provider to only accept requests from known sources.
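As an illustration, a consumer-side check combining a header token with an IP allowlist might look like the following Python sketch (the header name, token value and IP addresses are hypothetical):

```python
import hmac

EXPECTED_TOKEN = "s3cr3t-token"              # hypothetical shared token
ALLOWED_IPS = {"192.0.2.10", "192.0.2.11"}   # hypothetical provider IPs

def is_authorized(headers: dict, remote_ip: str) -> bool:
    token = headers.get("X-Auth-Token", "")
    # compare_digest keeps the token comparison constant-time
    return hmac.compare_digest(token, EXPECTED_TOKEN) and remote_ip in ALLOWED_IPS
```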
In Webhook Relay you can do this by going to the bucket details and clicking on the authentication tab:
Alternatively you can select “basic” which means a standard username and password authentication will be applied.
Adding timestamps to webhook messages helps prevent replay attacks, where an attacker intercepts and resends a legitimate message at a later time. By including a timestamp in the message and verifying it on the consumer side, the consumer can ensure that the message is current and reject any outdated or replayed messages.
Timestamps should be paired with the HMAC check to ensure that the attacker is not just changing the timestamp to a current one.
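A sketch of this combined check in Python; the timestamp encoding and the 5-minute tolerance are illustrative choices, not a prescribed scheme:

```python
import hashlib
import hmac
import time

TOLERANCE_SECONDS = 300  # reject anything older (or newer) than 5 minutes

def sign(secret, timestamp, payload):
    # The timestamp is part of the signed content, so an attacker cannot
    # swap in a fresh timestamp without invalidating the signature.
    msg = str(timestamp).encode() + b"." + payload
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(secret, timestamp, payload, signature, now=None):
    now = int(time.time()) if now is None else now
    if abs(now - timestamp) > TOLERANCE_SECONDS:
        return False  # stale or replayed message
    return hmac.compare_digest(sign(secret, timestamp, payload), signature)
```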
Certificate pinning is a technique used to ensure the authenticity of the server’s certificate during the SSL/TLS handshake. By pinning the server’s certificate in the code, the consumer can verify that the connection is established with the correct server and prevent attacks with fake or compromised certificates.
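A minimal illustration of the comparison step in Python, assuming the pinned SHA-256 fingerprint of the server's DER-encoded certificate was captured out of band (the certificate bytes here are placeholders):

```python
import hashlib

# Hypothetical pinned value: the SHA-256 fingerprint of the server's
# DER-encoded certificate, recorded when the consumer was deployed.
PINNED_FINGERPRINT = hashlib.sha256(b"example-der-certificate").hexdigest()

def certificate_matches_pin(der_cert: bytes) -> bool:
    # In a real client, der_cert would come from
    # ssl.SSLSocket.getpeercert(binary_form=True) after the handshake.
    return hashlib.sha256(der_cert).hexdigest() == PINNED_FINGERPRINT
```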
Webhooks are typically used for sending notifications about events and are not suitable for transmitting highly sensitive data such as passwords or credit card information. It is recommended to avoid sending sensitive data through webhooks and instead use more secure methods like direct API calls with proper authentication and encryption.
A typical workflow here is:
In this case, the CI/CD system should not be receiving any sensitive data from the webhook. It should only be receiving the event type and the repository URL. The CI/CD system should then use its own credentials to clone the repository.
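A sketch of such a minimal consumer in Python; the payload field names (`event`, `repository.url`) are hypothetical and will differ per provider:

```python
import json

def extract_build_job(raw_body: bytes) -> dict:
    # Keep only the non-sensitive fields the CI/CD system needs;
    # everything else in the payload is deliberately ignored.
    payload = json.loads(raw_body)
    return {
        "event": payload.get("event"),
        "repository_url": payload.get("repository", {}).get("url"),
    }
```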
Implementing logging for all webhook messages sent out is essential for auditing, monitoring, and detecting security incidents. By logging webhook messages, you can keep a record of every message sent and analyze them for any security-related issues or suspicious activities.
When routing webhooks through our platform, you can define either additional outputs (your auditing/logging system) or push webhooks through a function to a data warehouse.
Using a subscription model with expiration dates adds an extra layer of security to your webhook implementation. By allowing users to provide an expiration date for their subscription, you can limit the timeframe for potential attacks. Once the subscription expires, the webhook consumer can stop accepting requests from that particular source.
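A minimal Python sketch of enforcing subscription expiry on the consumer side (the store and subscription IDs are illustrative):

```python
import time

# Hypothetical subscription store: subscription ID -> expiry (unix seconds)
SUBSCRIPTIONS = {"sub-123": 1_700_000_000}

def accept_webhook(sub_id, now=None):
    # Reject messages from sources whose subscription has expired
    now = time.time() if now is None else now
    expiry = SUBSCRIPTIONS.get(sub_id)
    return expiry is not None and now < expiry
```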
Securing webhooks is crucial to protect against potential vulnerabilities and attacks. By following the best practices outlined in this research report, including encrypting data, signing webhooks, authenticating connections, adding timestamps, using certificate pinning, avoiding sensitive data transmission, implementing logging, and using a subscription model, you can enhance the security of your webhook implementation. It is important to tailor the security measures to the nature of the information being sent and to implement overlapping layers of security for comprehensive protection.
Remember that security is an ongoing process, and it is essential to regularly review and update your security measures to stay ahead of potential threats. By prioritizing webhook security, you can ensure the integrity and confidentiality of your data and maintain the trust of your users.
At Webhook Relay all webhooks are encrypted in transit and we provide a number of ways to verify the authenticity of the message. You can monitor, audit and inspect webhook payloads, statuses and your server responses to them. You can use password or HMAC authentication with just a few clicks or implement a custom, provider-specific authentication method with Functions.
You can use this form to:
Example form on https://synpse.net/contact/
In order to start adding rows to Airtable, you will need to prepare a few things:
Your table URL in the browser looks like `https://airtable.com/appXXXX/tblXXXX/viwtXXXX`. From here, `appXXXX` is the base ID and `tblXXXX` is the table ID. You will need to create a forwarding config to a public destination here: https://my.webhookrelay.com/new-public-destination. The URL will be `https://api.airtable.com/v0/appXXXX/tblXXXX`.
🚨 Don’t just copy/paste the URL from the browser! They are not the same: your browser uses `https://airtable.com/appXXXX/tblXXXX/viwtXXXX`, but we need to send webhooks to `https://api.airtable.com/v0/appXXXX/tblXXXX`. 🚨
Don’t try to send requests to this yet as we will need to set up an Airtable webhook integration first.
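The URL rewrite described above can be sketched in Python (a convenience helper for illustration, not part of Webhook Relay or Airtable):

```python
def airtable_api_url(browser_url: str) -> str:
    # Convert a browser URL (https://airtable.com/<base>/<table>/<view>)
    # into the API endpoint (https://api.airtable.com/v0/<base>/<table>)
    path = browser_url.split("airtable.com/", 1)[1]
    base_id, table_id = path.split("/")[:2]
    return "https://api.airtable.com/v0/" + base_id + "/" + table_id
```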
Next step is to transform the incoming HTML form request into a webhook that will add or create a new record in your Airtable table:
```lua
local json = require("json")
local time = require("time")

-- Only POST requests are allowed
if r.RequestMethod ~= "POST" then
  r:StopForwarding()
  return
end

-- Taking fields from the HTML form and
-- using them to prepare the Airtable webhook
local payload = {
  records = {
    {
      fields = {
        Email   = r.RequestFormData.email[1],
        Name    = r.RequestFormData.name[1],
        Added   = time.format(time.unix(), "2006-01-02", "UTC"),
        Message = r.RequestFormData.message[1],
      }
    }
  }
}

-- Encoding payload to JSON
local encoded_payload, err = json.encode(payload)
if err then error(err) end

r:SetRequestHeader("Authorization", "Bearer " .. cfg:GetValue("SECRET_API_TOKEN"))
r:SetRequestHeader("Content-Type", "application/json")
r:SetRequestBody(encoded_payload)
r:SetRequestMethod("POST")
```
Once added, create a personal access token with the `data.records:write` scope and set Access to the base you want to add records to. Then open the CONFIG VARIABLES tab of the function and create a new variable `SECRET_API_TOKEN` with the value of your personal access token.

We will use a simple HTML script. Replace the `https://XXXXXX.hooks.webhookrelay.com` URL with your own Webhook Relay input endpoint. You can find it in the Bucket details:
```html
<iframe name="form-i" style="display:none;"></iframe>
<!-- Forwarding to Webhook Relay which transforms requests and adds it to the Airtable as a new row -->
<form action="https://XXXXXX.hooks.webhookrelay.com" method="post" target="form-i">
  <div>
    <label for="name">Name:</label>
    <input type="text" id="name" name="name" required>
  </div>
  <div>
    <label for="email">Email:</label>
    <input type="email" id="email" name="email" required>
  </div>
  <div>
    <label for="message">Message: </label>
    <br><textarea name="message" id="message" rows="10" cols="40"></textarea>
  </div>
  <input type="submit" value="Submit">
</form>
```
I have made a simple CodePen example here https://codepen.io/defiant77/pen/gOQeRMm which you can open and edit, replacing the webhook URL with the one from your bucket.
Fill in the form and click submit. In your Airtable you should be able to see the new record. If you can’t, check out webhook logs in Webhook Relay’s bucket details.
There are very few moving parts here but if something doesn’t work, things to check:
- the form `action` URL is pointing at the bucket that you have created
- the personal access token has the correct scope (`data.records:write`) and access is allowed to the base that you are working with

That’s it! 😃 You can also make a very similar integration for Google Sheets, as the process is the same - using a webhook as an API call.
It’s great to receive Stripe emails on new payments; however, it’s harder to differentiate new customers from recurring ones when you are mostly selling subscriptions.
In this short tutorial we will set up a function that receives a webhook from Stripe and transforms it into an email which is then dispatched via Mailgun.
In order to start receiving webhooks and doing things with them you will first need to create a bucket here https://my.webhookrelay.com/buckets.
Once created, copy the input URL, which looks like `https://xyz.hooks.webhookrelay.com`; we will need it for Stripe.
When you have logged into your Stripe dashboard, head to webhook admin page https://dashboard.stripe.com/webhooks and add a new endpoint with the input URL from the previous step.
It should listen for `customer.subscription.created` events.
Our function will do several things: in this example it will filter out all subscriptions that are on a plan named “free”, prepare some variables from the webhook, and send an email via Mailgun.
```lua
mailgun = require('mailgun')
json = require('json')

-- First, decoding the Stripe payload
local request_payload, err = json.decode(r.RequestBody)
if err then error(err) end

-- You can use product IDs or any other fields that you find useful here
local plan = request_payload["data"]["object"]["plan"]["nickname"]
local amount = request_payload["data"]["object"]["plan"]["amount"]
local subscription_id = request_payload["data"]["object"]["id"]
local customer_id = request_payload["data"]["object"]["customer"]

-- If you have a free plan, you can ignore it
if plan == "Free" then
  -- request is not important, don't forward it
  r:StopForwarding()
  return
end

-- Preparing the Mailgun client
err = mailgun.initialize(cfg:GetValue('domain'), cfg:GetValue('api_key'), 'us')
if err then error(err) end

-- Preparing the email message!
local text = string.format([[
  Hooray! Customer subscribed to '%s' ($%s).

  Subscription: https://dashboard.stripe.com/subscriptions/%s
  Customer: https://dashboard.stripe.com/customers/%s

  Kind regards,
  Webhook Relay Function
]], plan, amount/100, subscription_id, customer_id)

err = mailgun.send('from-address@example.com', 'Hooray! New Paying Customer!', text, 'to-address@example.com')
if err then error(err) end

r:SetRequestBody('sent')
```
Now, let’s go back to our Bucket, click on the Input and attach this function:
Once the function is in place, you are ready to start receiving the emails.
That’s it, on the next new subscription you will receive an email with the details! :) Some ideas for the follow up work:
Enjoy!
A rough timeline of the outage (all times EST):
After receiving the first healthcheck notification that the service went down, we thought the request might simply have been interrupted on the client side, or have hit a server that got evicted due to a memory or CPU utilization issue; we knew these errors are transient and that recovery should be quick and automatic. However, after checking the main dashboard and seeing it offline, we immediately started mitigating the outage.
The main problem with this outage was the lack of information provided by the GKE console. It didn’t look completely right but it also didn’t look wrong. What did work:
What didn’t work:
- `kubectl` was able to list and retrieve objects, however it couldn’t view logs or do port forwarding
- `kubectl` couldn’t update objects

So the cluster was running and the workloads were running, however some workload info was definitely being returned stale. Running workloads were having trouble accessing various helper services that Google Cloud provides. It took us a while to pinpoint the problem: the Kubernetes API server certificates. On older GKE clusters, Google used to issue 5-year certificates (new clusters now come with a 30-year expiration) and the certificate rotation never happened. You can also manually start the rotation with:
```shell
gcloud container clusters update <cluster name> --zone europe-west1-d --start-credential-rotation
```
But unfortunately this operation, while succeeding, didn’t actually do anything. We found a useful command which can show the certificate validity of your GKE cluster:
```shell
gcloud container clusters describe <cluster name> --zone europe-north1-a \
  --format "value(masterAuth.clusterCaCertificate)" \
  | base64 --decode \
  | openssl x509 -text \
  | grep Validity -A 2
```
GKE has multiple availability zones that increase your application’s reliability in case one of the zones fails. However, in this case that didn’t matter :)
Manual certificate rotation didn’t help. Upgrading the control plane also didn’t help. The pods were running but all the backing services were not accessible: backend services could not connect to Cloud SQL, PubSub or GCS. Once we noticed the authentication errors, we started moving services to a new cluster.
The downtime would have been much shorter had we known that the managed GKE cluster was totalled; it always seemed that it was about to start working. With knowledge of the real cluster state, we would have made different choices: while a complete migration to a new cluster is not without its issues (moving persistent disks, detaching load balancers), it would have been a lot quicker.
The main lesson for us was that it’s important to time-box the recovery of existing infrastructure. While throughout the outage it looked like services were about to start working, we should have pulled the plug on the cluster much sooner and started from scratch with a new one. The sunk-cost fallacy made us reluctant to ditch the salvage efforts, even though doing so would have been the right call.
As part of the work during the outage we have improved our deployment manifests to be able to quickly switch between clusters without a complicated persistent disk data migration.
In this tutorial, you’ll see how easy it is to install and run a modern, all-in-one dockerized Jenkins with Synpse which provides:
We will split this article into several sections - installation, administration, and webhook configuration. Let’s get started.
Webhook Relay and Synpse have paid tiers with increased quotas and support plans; however a free tier should be enough for setting this up.
First, install the Synpse agent on your server. You can view installation instructions here: https://docs.synpse.net/agent/install/linux-docker. This will provide us with lightweight deployment capabilities. Once installed, add the label `type: controller` to that device in your Synpse dashboard.
Next, let’s create an application that will run on the server:
```yaml
name: jenkins
description: CI/CD server
scheduling:
  type: Conditional
  selectors:
    type: controller
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      user: root
      privileged: true
      ports:
        - 8080:8080
        - 50000:50000
      volumes:
        # Persistent volumes
        - /data/jenkins-compose:/var/jenkins_home
        - /var/run/docker.sock:/var/run/docker.sock
```
This will download the image and start the container. Once it shows up as ready, open the `http://device-ip:8080` address. You will then be asked for the initial admin password, which you can find through Synpse’s built-in web SSH terminal:
Enter your initial admin password and proceed with the plugin installation:
You should also set up Jenkins agents to do the heavy lifting; they can be additional containers, either in the same application spec or in separate applications that the Jenkins server connects to. We will explore that route in a separate blog post.
It’s important to be able to receive webhooks without exposing our Jenkins server to the internet. For this, we will need to deploy a container next to the Jenkins server which will help with request forwarding.
Let’s start by getting the tokens from https://my.webhookrelay.com/tokens and creating two secrets in Synpse named webhookrelayKey and webhookrelaySecret that contain your authentication tokens. Then, go to https://my.webhookrelay.com/buckets page and create a bucket named jenkins. Now, we can add the new container to the Jenkins server app:
```yaml
name: jenkins
description: CI/CD server
scheduling:
  type: Conditional
  selectors:
    type: controller
spec:
  containers:
    - name: jenkins
      image: jenkins/jenkins:lts
      user: root
      privileged: true
      ports:
        - 8080:8080
        - 50000:50000
      volumes:
        # Persistent volumes
        - /data/jenkins-compose:/var/jenkins_home
        - /var/run/docker.sock:/var/run/docker.sock
    # Webhook Relay forwarding container. This container allows
    # us to receive the webhooks that are hitting public server endpoints
    - name: webhookrelay
      image: webhookrelay/webhookrelayd:latest
      env:
        - name: RELAY_KEY
          fromSecret: webhookrelayKey
        - name: RELAY_SECRET
          fromSecret: webhookrelaySecret
        - name: BUCKETS
          value: jenkins
```
Click save and after a few seconds we should see two containers running:
The easiest way to start receiving GitHub webhooks is by using this plugin https://plugins.jenkins.io/github. To install:
Add default GitHub server (don’t bother adding credentials as we are using public repo anyway):
When you want Jenkins to do something - create a job. In this case, we will be using Freestyle project:
We have to configure several sections here - Source Code Management and Build Triggers. First, set repository (in this case it’s my demo app repo repository):
Next step is setting a build trigger to GitHub hook trigger for GITScm polling:
This means that once Jenkins receives a webhook, it can identify which repo changed and trigger a pull and job execution.
Once things are running, go back to your bucket details in Webhook Relay and add Jenkins container as the destination. It should be:
- Name: `jenkins`
- Destination: `http://jenkins:8080/github-webhook/`
- Type: `internal`
The port always needs to match the port on which the application is running inside the container. For example, if you map 8888:8080 for the Jenkins server, the destination should still use port 8080, as that is what’s available internally.
Go to your repository settings and add the input endpoint URL from your Webhook Relay bucket:
Once you push to your repository, you should see a few things:
In this tutorial, we deployed the main Jenkins server as a Docker container with persistent volumes. We also configured GitHub webhooks to trigger builds without requiring a public IP or firewall configuration.
In our upcoming blog post, we will explore ways of connecting more agents to the server.
It is always useful to know how your business or projects are doing, and for that there are a bunch of tools available, such as Excel spreadsheets, Google DataStudio, Apache Superset, etc. I personally am a fan of Metabase as it is the easiest to deploy and use. When paired with the right technologies, this setup works for anything from a small organization to a big company.
In this article, we will use a setup that works exactly the same way on both an Intel NUC (used for some of my projects) and on a large VM managed by VMware.
Once you log into Synpse Cloud, select your project and then head to the “Devices” page. From there you will be able to find the auto-generated command that you need to run on the device to add it to your project.
There are multiple ways to do it; initially you can just SSH into the machine via the local network. Once you run the command, after a few seconds (depending on your internet speed) you should see the magic happen and the device appear in your “Devices” page in Synpse :)
Our Metabase deployment will need to be reachable from outside so we can actually view reports. For this, we are creating a Webhook Relay tunnel that will be established by a `webhookrelayd` container.
Go to your https://my.webhookrelay.com/tunnels page and create a new tunnel with these details:
- Tunnel name: `whr-metabase` (the `webhookrelayd` agent will need to know which tunnel to use)

Head to the tokens page here https://my.webhookrelay.com/tokens and create a new pair. Save the key and secret before closing the window.
Go to the secrets page in Synpse and create both `relayKey` and `relaySecret` secrets:
Last step is to create a Synpse application.
```yaml
name: metabase
description: metabase + WHR
scheduling:
  type: Conditional
  selectors:
    type: controller
spec:
  containers:
    - name: metabase
      image: metabase/metabase:latest
      volumes:
        - /data/metabase:/metabase-data
      env:
        - name: MB_DB_FILE
          value: /metabase-data/metabase.db
        - name: MB_REDIRECT_ALL_REQUESTS_TO_HTTPS
          value: "false"
    - name: relayd
      image: webhookrelay/webhookrelayd:1
      args:
        - --mode
        - tunnel
        - -t
        - whr-metabase # <--- if you have chosen a different name for the tunnel, change it here too
      env:
        - name: RELAY_KEY
          fromSecret: relayKey
        - name: RELAY_SECRET
          fromSecret: relaySecret
```
Once deployed, use your tunnel URL to access it. You can also configure Google OAuth to make the login easier, however that’s out of scope in this article.
Enjoy!
As of 1st May 2021, we will be changing our subscription prices to incorporate an increase across all paid plans for new customers. This modest adjustment — our first since we started out in 2017 — equates to an additional $0.49 for Basic, $20 for Standard, $30 for Business and $100 for Pro, per month respectively. Our new prices will be as follows:
| Plans    | Old Pricing | New Pricing |
|----------|-------------|-------------|
| Basic    | $4.50       | $4.99       |
| Standard | $19.99      | $39.99      |
| Business | $49.99      | $79.99      |
| Pro      | $149.99     | $249.99     |
The circumstances prompting this change include:
We, at WHR, feel grateful to be able to rely on the support of our existing customers. And as a thank you, we will exclude them from any imminent price increase. By capping the subscription price for our existing customers and by continuing to offer WHR’s free starter version for our new users, we seek to keep WHR affordable.
Please note, any customers signing up for our services before 1st of May 2021 will continue to pay our old prices.
The core tenet of the Webhook Relay service is receiving and processing HTTP requests. When used with bidirectional tunnels, it’s a 1:1 relationship that simply exposes the underlying service to the internet. Our client software can run on any machine (Windows, macOS, Linux x86 and ARM) as well as in Docker containers and Kubernetes deployments.
When used with webhook forwarding, the relationship becomes N:N: there can be multiple public endpoints routing to multiple destinations, either internal (private network) or public. All HTTP requests can be transformed on arrival to our system or before being dispatched. A single webhook can be transformed into a new request tailored for any API.
Kaunas is the second biggest city in Lithuania, with big plans for both its physical and digital infrastructure. The city has invested heavily into modernizing vehicles as well as rebuilding their software stack.
The first step was to start offering e-tickets via mobile application payments - that’s how Žiogas was born. Alongside the new mobile app, a modern, cloud-based backend running on GKE (Google Kubernetes Engine) was designed and deployed.
With a lot of workloads running on Kubernetes, the need for staging and development environments was as high as ever. However, due to the existing infrastructure, getting load balancers to work with Kubernetes, or even getting access to the Kubernetes API server for kubectl from outside, was not trivial. A decision was made to:
Deploying a development/staging environment was simple; however, enabling easy access that doesn’t require users to SSH into a jump server and then download manifests wasn’t.
This is where Webhook Relay TLS-pass-through type tunnels came into play. We have launched a standalone webhookrelayd container on the same machine that could then open a direct tunnel to Kubernetes API server. Then, we only had to update the .kubeconfig to point to the public tunnel hostname instead of the internally running Kubernetes and we got the access.
Having access to Kubernetes via kubectl was great, but access to the workloads was crucial too, as the product mostly consists of a back-office web portal for system administration and an end-user facing API.
As this was a closed environment, services couldn’t be exposed directly to the internet for testing and development purposes. Hence, we have deployed Webhook Relay ingress controller that would let us achieve several goals:
Webhook Relay allowed the team to speed up its development and testing process. Our client could finally utilize large servers for Kubernetes that would otherwise have cost thousands of dollars on a cloud environment.
All this while retaining an excellent, cloud-like user experience: easy access to Kubernetes and on-demand access to any existing or new workload.
In this short tutorial, I will demonstrate how to configure Facebook webhooks and how Webhook Relay can solve the verification challenge for you.
You can read more about the Messenger Platform here: https://developers.facebook.com/docs/messenger-platform/introduction.
First, create a new Webhook Relay bucket. You can do that by visiting buckets page.
Here, you will see your “Default public endpoint”, which should look like `<random letters>.hooks.webhookrelay.com`.
Click on subscribe and then add your Webhook Relay public endpoint:
If you click on “Verify and Save”, it will show an error. Looking at the request in the Webhook Relay dashboard, we can see that the query had several values: `hub.mode=subscribe&hub.challenge=1903260781&hub.verify_token=my-facebook-token`.
We need to take the challenge value and return it in the response body.
The way to solve the Facebook verification challenge is to get the value from the URL query and set it to the response body. This can be achieved with the Webhook Relay Function.
Go to Functions page https://my.webhookrelay.com/functions and create a new function called “facebook-verification”.
Copy paste the code below:
```lua
local mode = r.RequestQuery["hub.mode"]

if mode == "subscribe" then
  r:SetResponseBody(r.RequestQuery["hub.challenge"])
end
```
Now, go back to the bucket that you have previously created, click on the Inputs and then “CONFIGURE” on your input:
Now, if you try “Save and Verify” again from the Facebook webhooks page, it should pass.
You can now try sending some “test” events from the platform:
Facebook also sends the token that you supplied when adding the configuration. In my example, it was `my-facebook-token`. We can get this value too and verify it in the Function:
```lua
local mode = r.RequestQuery["hub.mode"]

-- Shared secret - replace with yours!!
local my_shared_secret = "my-facebook-token"

-- If mode is subscribe, validating it
if (mode == "subscribe") then
  -- Getting verify token
  local token = r.RequestQuery["hub.verify_token"]
  -- Validating token
  if not(token == my_shared_secret) then
    r:SetResponseStatusCode(401)
    r:StopForwarding()
    return
  end
  -- Responding to challenge
  r:SetResponseBody(r.RequestQuery["hub.challenge"])
end
```
Update the function, remove Facebook subscription and then try again.
That’s it, you can now configure destinations either to your public or private application servers and start processing Facebook webhooks.
What is a CDN? Cloudflare has a nice explanation here: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
In short:
A content delivery network (CDN) refers to a geographically distributed group of servers that work together to provide fast delivery of Internet content.
In this article we will have a quick look into several CDN types and the potential pros/cons that you might encounter when setting one up yourself.
Cloudflare is one of the best CDNs out there and we are using it for our landing pages and numerous other projects. It’s a great DNS configuration service too that provides rich APIs. However, it’s good to understand what other types of CDNs are out there and which suits you best.
All CDNs have different pros and cons and all solutions are trying to achieve the same thing: load content faster.
Some of the CDN types you will encounter in the wild:
Push CDN is a setup where you upload your assets to a server (or a group of servers). An example of such a CDN is Google Cloud CDN. In this setup, you will have to create a load balancer and a storage bucket, and upload your content assets as part of the CI/CD pipeline where you build your frontend app. You will also need to create a new domain such as `cdn.example.com` that points to your CDN storage location.
Pros:
/js/chunk-2d22502a.0844b32d.js
.Cons:
No CDN, just cache control headers on your web server. This option might work for many cases, however, the first load can be painful if the user is far from your server location and you have a lot of static assets.
CDNs like BunnyCDN (affiliate link, great service) pull from your origin server but don’t try to proxy all your traffic. In this scenario, you serve your index.html, which then loads assets through the CDN domain instead of your own. Similarly to the “Push CDN” type, you will have to either serve assets from `cdn.example.com`, or, if you have a fancy global load balancer, configure certain paths to load files directly from CDN servers.
Pros:
Cons:
I couldn’t immediately find the docs, but if you have done this before, it’s quite straightforward:
```js
module.exports = {
  publicPath: process.env.NODE_ENV === 'production'
    ? 'https://cdn.your-domain-here.com/'
    : '/',
}
```
If you are using React, check out “Public Folder” docs here: https://create-react-app.dev/docs/using-the-public-folder/.
That’s it, generated assets will all have the prefix to load through the CDN. If you are using Nginx to serve your app, ensure that you are providing correct headers for js and css files. For example:
```nginx
location ~* \.(?:css|js)$ {
  expires 1y;
  add_header Cache-Control "public";
  access_log off;
}
```
I hope you will find this useful whenever you decide to add CDN for your website!
If you are familiar with Docker, you have probably also heard about Podman. Podman doesn’t try to do many of the things that Docker does; it’s a daemonless tool that provides an easy way to run, find and build OCI containers. We are happy to include documentation on how to run the Webhook Relay tunnelling agent with Podman, and also to announce that we are now providing agent images built on top of Red Hat’s Universal Base Image (UBI).
First, get a token key & secret pair from https://my.webhookrelay.com/tokens. Once you have it, we will launch a container via Podman using our ubi8-based image webhookrelay/webhookrelayd-ubi8:latest.
Podman has very similar run syntax to Docker’s so you will probably feel right at home if you have used Docker before:
```shell
podman run -it docker.io/library/busybox
```
You can find “podman run” command docs here. From that page let’s look at environment variable configuration as we will need to supply access token and secret:
```shell
podman run -d --env RELAY_KEY=your-token-key --env RELAY_SECRET=your-token-secret \
  --env BUCKETS=your-bucket-name --network host webhookrelay/webhookrelayd-ubi8
```
Here we also added the `RELAY_KEY`, `RELAY_SECRET` and `BUCKETS` environment variables, the `--network host` flag so the agent shares the host network, and `-d` to run the container in the background.
Now, we can check whether the container is running:
```shell
podman ps
CONTAINER ID    IMAGE                                              COMMAND  CREATED             STATUS                 PORTS  NAMES
<container ID>  docker.io/webhookrelay/webhookrelayd-ubi8:latest            About a minute ago  Up About a minute ago         gallant_bardeen
```
To view container logs:
```shell
podman logs <container ID>
2020-05-31 21:26:43.267 INFO using standard transport...
2020-05-31 21:26:43.356 INFO webhook relay ready... {"host": "my.webhookrelay.com:8080", "buckets": ["podman-test"]}
```
That’s it! To remove the container once you are done:
podman rm <container ID> -f
]]>Some people have probably already noticed one more endpoint in their input settings: https://my.webhookrelay.com/v1/webhooks/xxxx (the default one) and https://xxxx.hooks.webhookrelay.com, which is our new one. Building on our new virtual-host-based router, we can also finally allow input endpoints such as https://hooks.example.com (you can put in your own domain) and custom paths such as https://hooks.example.com/github, https://hooks.example.com/stripe, etc.
In this short article we will briefly look into what has changed and what kind of improvements we can expect.
When Webhook Relay was initially built, path-based routing solved the issue for systems like Jenkins and pretty much everything else we encountered. However, there were always some cases, such as forwarding to a destination like http://internal_server/api/hooks, that would result in an invalid signature.
First of all, path-based webhook endpoints such as https://my.webhookrelay.com/v1/webhooks/xxxx are not being deprecated; they have their use case and they already deliver millions of webhooks per day. However, for each endpoint, you will now also be able to assign custom subdomains and domains.
Input endpoints can now also utilize custom subdomains. Instead of an endpoint such as https://x1scmzopk2ogxxty3qvb4o.hooks.webhookrelay.com, you can now specify your own subdomain under .hooks.webhookrelay.com, for example https://dogfood-shop.hooks.webhookrelay.com. To get started, either click on “reserve domain” in your input details page or go to https://my.webhookrelay.com/domains and reserve it there. Once you have reserved it, you will be able to select it from the dropdown.
Just like with custom subdomains, for custom domains you will have to register your own domain either in the input details page or here. Once it’s registered, select it from the dropdown in the input details page.
Then, go to your DNS provider and configure a CNAME record pointing at hooks.webhookrelay.com. On the first webhook, Webhook Relay will provision a free TLS certificate for you.
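As a sketch, the record at your DNS provider (using the hypothetical hooks.example.com domain from earlier; the TTL is arbitrary) would look something like this in zone-file notation:

```
; CNAME pointing the custom webhook domain at Webhook Relay
hooks.example.com.   3600   IN   CNAME   hooks.webhookrelay.com.
```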
Main advantages of using custom domains for webhook forwarding:
You can find more information on domain configuration in the webhook forwarding documentation.
]]>Webhooks can be generated by services running on various backends:
To provide a static set of IP addresses for outgoing connections (so that your firewall can whitelist them), a company running on cloud infrastructure (such as Kubernetes, or serverless infrastructure like AWS Lambda) needs to use cloud provider services, such as Cloud NAT on GCP or a NAT Gateway on AWS, to route outgoing traffic (which can be a lot), or set up its own dedicated group of instances to forward all requests:
So, from the webhook producer’s side there are definitely multiple ways to achieve this. However, Cloud NAT on GCP can be quite costly, since all traffic would be routed through it. And with a separate pool of nodes, they would now have to manage those nodes (update the OS, software, configuration, etc.) for very little added value on their end.
This can explain why, over time, fewer services provide a set of static external IPs for organizations to whitelist. A more common (and probably better) way to ensure that webhooks are coming from the correct source is payload signatures. Some good examples are GitHub: https://developer.github.com/webhooks/securing/ and Stripe: https://stripe.com/docs/webhooks/signatures.
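To illustrate the idea (a hedged sketch, not GitHub’s or Stripe’s exact scheme; the secret and payload below are made up), both sides compute an HMAC over the raw request body with a shared secret, and the consumer compares its own result with the received signature:

```shell
# Shared secret, known to both the webhook producer and the consumer
secret='very-secret'
payload='{"action":"released"}'

# Producer side: sign the raw payload (GitHub's X-Hub-Signature-256 header
# carries a value computed in a similar way)
signature=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')

# Consumer side: recompute the HMAC over the received body and compare
expected=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
if [ "$signature" = "$expected" ]; then
    echo "signature valid"
else
    echo "signature mismatch"
fi
```

A real consumer would read the signature from a request header and use a constant-time comparison, but the core of the check is exactly this recomputation.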
The solution we will demonstrate today uses Webhook Relay to accept and forward webhooks from an internal or controlled static external IP. With an internal IP it’s simple: choose an installation method that you prefer here. In this guide, however, we will deploy a GCP server under your control that will act as a relay for all webhooks:
Pros of this setup:
Before we begin, ensure that you have these tools ready:
Clone our Terraform repository:
git clone https://github.com/webhookrelay/relay-tf.git
cd relay-tf
Go into the ‘gcp’ directory:
cd gcp
Initialize terraform modules:
terraform init
Authenticate to your GCP account:
gcloud auth application-default login
Instructions on how to use a service account can be found here: https://www.terraform.io/docs/providers/google/guides/getting_started.html#adding-credentials.
Let’s first create a bucket called static-ip here https://my.webhookrelay.com/buckets:
Then, create an output. Make sure ‘internal’ is chosen to route webhooks through our GCP agent. For the sake of this example, we will point the output destination at http://ifconfig.co/json to get our IP address:
Then, go to the access token page here https://my.webhookrelay.com/tokens and click “Create Token”. You will need these details for the terraform inputs file.
Create a new inputs.tfvars using your favorite text editor, and put the following in it:
project       = "<your google project id>"
relay_key     = "<key from https://my.webhookrelay.com/tokens>"
relay_secret  = "<secret from https://my.webhookrelay.com/tokens>"
relay_buckets = "static-ip"
Now, to deploy the relay agent:
terraform apply -var-file inputs.tfvars
It will print out the ssh command (that you can use to get into the instance) and the external IP:
Outputs:
external_ip_address = 35.185.28.228
gcloud_ssh = gcloud beta compute ssh --zone us-east1-b relay-agent-vm-c34ded9a750faf93 --project webhookrelay
Now, if you send requests to your bucket’s input with curl (or any other client; in a real-world scenario it will be sent by the webhook producer’s service):
curl https://my.webhookrelay.com/v1/webhooks/739903db-ef87-4427-8d78-caccc28253c9
they will be dispatched to the destination through the GCP server and will therefore come from the 35.185.28.228 IP (different if you have provisioned it yourself). This way you can whitelist the webhook source, or just deploy the Webhook Relay agent with Terraform into an existing private network.
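On the receiving side, whitelisting then becomes a few lines in your web server or firewall configuration. A minimal Nginx sketch (the location path and upstream port are made up; the IP is the agent’s external address from above):

```nginx
location /webhooks/ {
    allow 35.185.28.228;   # Webhook Relay agent's static egress IP
    deny  all;             # reject everything else
    proxy_pass http://localhost:4000;
}
```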
You can view the response from the http://ifconfig.co/json service in your bucket details page:
Which shows that the IP was in fact our external IP from the agent (35.185.28.228):
{ "ip": "35.185.28.228", "ip_decimal": 599334116, "country": "United States", "country_eu": false, "country_iso": "US", "hostname": "228.28.185.35.bc.googleusercontent.com", "latitude": 38.6583, "longitude": -77.2481, "asn": "AS15169", "asn_org": "GOOGLE", "user_agent": { "product": "curl", "version": "7.65.3", "raw_value": "curl/7.65.3" }}
That’s it, if you would like to learn more about Webhook Relay, check out our docs, examples and this blog :)
]]>In the previous article about controlling gadgets via IFTTT and Node-RED we explored ways to receive webhooks without public IP or configuring NAT and then performing certain actions. However, sometimes you need to respond back to the webhook producers or just other applications that expect success/error responses to properly function. Up until now you would have needed to use Webhook Relay tunnels but with the recent release, we allow sending dynamic responses back to the caller.
Pros:
This feature transforms Webhook Relay’s webhook forwarding from a unidirectional-only feature into a much more powerful tool.
Webhook responses work by pausing the HTTP response for up to 10 seconds while waiting for the reply. The rules are simple: your application must send back the meta object that it received with the webhook (it contains the unique request ID and bucket ID that are required by Webhook Relay to correctly respond).
We will create a simple API backend that will return current weather information for a selected city:
Here we will use several custom nodes:
Other nodes are from the standard palette.
Note: we need to preserve the payload.meta object from the original Webhook Relay message, as we will be using it to reply to the correct request. Your application has 10 seconds to send a reply, and there might be several requests in flight that Node-RED is dealing with.
1. Grab request metadata for later response (using the node-red-contrib-webhookrelay node).
We will have to join this later with the rest of the data to correctly respond to the caller.
2. Parse the URL query: since we receive HTTP requests with a query string like ?city=London&country=GB, we need to get these details into an object that the openweather node will understand. Here’s the code:
function getJsonFromUrl(url) {
    if (!url) url = location.search;
    var query = url.substr(0);
    var result = {};
    query.split("&").forEach(function(part) {
        var item = part.split("=");
        result[item[0]] = decodeURIComponent(item[1]);
    });
    return result;
}

return {
    location: getJsonFromUrl(msg.payload.query)
};
3. Request the weather.
4. Encode JSON data: here we take the response from the openweather node and encode it into a JSON string. This payload will be returned to the caller.
5. Put the message into a “path” for the later join (same as step 1):
6. Join metadata and data: time to join the weather data and the request metadata:
7. Form the API response: use a “function” node to grab the “meta” and “data” values from the previous nodes:
return {
    meta: msg.paths["meta"].meta, // original meta field from the payload (important: it carries the message ID)
    status: 200,                  // status code to return (200, 201, 400, etc.)
    body: msg.paths["data"].data, // response body
    headers: {
        'content-type': ['application/json'] // good practice to include the content type; browsers do their best to display it nicely
    }
};
That’s it! Connect the response node back to the Webhook Relay node and open your bucket’s input URL in a browser, or just use curl (if you are on Linux or Mac). Note that the URL must be quoted so the shell does not interpret the ampersand:
curl "https://my.webhookrelay.com/v1/webhooks/YOUR-INPUT-UUID?city=London&country=GB"
Try integrating different APIs. If the free tier is too low for you, message me and I will bump up your limits :)
Here’s the flow itself, feel free to import and play with it.
[{"id":"44a1295a.6a99d","type":"tab","label":"Node-RED API","disabled":false,"info":""},{"id":"5b239f76.3f86d8","type":"webhookrelay","z":"44a1295a.6a99d","buckets":"node-red-responses","x":160,"y":320,"wires":[["329e9d6b.728d6a","8858f09a.c7aee8"]]},{"id":"ee1d7dc4.e7c358","type":"function","z":"44a1295a.6a99d","name":"create response","func":"return {\n meta: msg.paths[\"meta\"].meta, // this is original meta field from the payload (it's important to include it so we have the message ID)\n status: 200, // status code to return (200, 201, 400, etc)\n\tbody: msg.paths[\"data\"].data, // body\n\theaders: {\n\t 'content-type': ['application/json']\n\t}\n};","outputs":1,"noerr":0,"x":1280,"y":320,"wires":[["5b239f76.3f86d8"]]},{"id":"c3ea1e3b.a8c708","type":"openweathermap","z":"44a1295a.6a99d","name":"","wtype":"current","lon":"","lat":"","city":"","country":"","language":"en","x":510,"y":460,"wires":[["27524c01.87056c"]]},{"id":"329e9d6b.728d6a","type":"function","z":"44a1295a.6a99d","name":"get city name","func":"function getJsonFromUrl(url) {\n if(!url) url = location.search;\n var query = url.substr(0);\n var result = {};\n query.split(\"&\").forEach(function(part) {\n var item = part.split(\"=\");\n result[item[0]] = decodeURIComponent(item[1]);\n });\n return result;\n}\n\nreturn {\n location: getJsonFromUrl(msg.payload.query)\n}","outputs":1,"noerr":0,"x":250,"y":460,"wires":[["c3ea1e3b.a8c708"]]},{"id":"27524c01.87056c","type":"json","z":"44a1295a.6a99d","name":"encode","property":"payload","action":"","pretty":false,"x":720,"y":460,"wires":[["13242f86.520e8"]]},{"id":"cf7bfe30.6a52b8","type":"wait-paths","z":"44a1295a.6a99d","name":"wait for meta and 
data","paths":"[\"data\",\"meta\"]","timeout":15000,"finalTimeout":60000,"x":1020,"y":320,"wires":[["ee1d7dc4.e7c358"]]},{"id":"13242f86.520e8","type":"change","z":"44a1295a.6a99d","name":"paths[\"data\"]","rules":[{"t":"move","p":"payload","pt":"msg","to":"paths[\"data\"].data","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":930,"y":460,"wires":[["cf7bfe30.6a52b8"]]},{"id":"8858f09a.c7aee8","type":"change","z":"44a1295a.6a99d","name":"paths[\"meta\"]","rules":[{"t":"move","p":"payload.meta","pt":"msg","to":"paths[\"meta\"].meta","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":570,"y":320,"wires":[["cf7bfe30.6a52b8"]]},{"id":"78312bed.0624fc","type":"comment","z":"44a1295a.6a99d","name":"1. grab request metadata for later response","info":"","x":660,"y":260,"wires":[]},{"id":"d13cd56d.3e6a08","type":"comment","z":"44a1295a.6a99d","name":"2. parse URL query","info":"","x":270,"y":400,"wires":[]},{"id":"b0668370.e60a88","type":"comment","z":"44a1295a.6a99d","name":"3. request weather","info":"","x":510,"y":400,"wires":[]},{"id":"a9a0d2e1.527298","type":"comment","z":"44a1295a.6a99d","name":"4. encode json","info":"","x":740,"y":400,"wires":[]},{"id":"fe57aa5.43ee4d8","type":"comment","z":"44a1295a.6a99d","name":"5. put message into a \"path\" for later join","info":"","x":1020,"y":400,"wires":[]},{"id":"d0d96c0d.930398","type":"comment","z":"44a1295a.6a99d","name":"6. join metadata and data","info":"","x":1030,"y":260,"wires":[]},{"id":"f25f94a5.7ad5d8","type":"comment","z":"44a1295a.6a99d","name":"7. form API response","info":"","x":1300,"y":260,"wires":[]}]
]]>The core tenet of the Webhook Relay service is receiving and processing HTTP requests. When used with bidirectional tunnels, it’s a 1:1 relationship that simply exposes the underlying service to the internet. Our client software can run on any machine (Windows, macOS, Linux x86 and ARM) as well as in Docker containers and Kubernetes deployments.
When used with webhook forwarding, the relationship becomes N:N, meaning there can be multiple public endpoints routing to multiple destinations, either internal (private network) or public. All HTTP requests can be transformed on arrival to our system or before being dispatched. A single webhook can be transformed into a new request tailored for any API.
Dotscience is a Machine Learning & Data Science platform that allows engineers and data scientists to easily utilize compute infrastructure and track their data in a reproducible way.
Since it’s an end-to-end Data Science platform, in this article we will only focus on Jupyter notebook service and ML model deployment for inference.
The potential problem with Dotscience runners was that they could be started anywhere: in a Kubernetes environment, on a simple virtual machine with Docker running in a cloud environment, or in a private data center. This required easy access to Jupyter, free from any restrictions, regardless of the environment.
The solution was to adopt Webhook Relay tunnels to enable Dotscience users to reach Jupyter notebook servers running on local or remote compute nodes. Dotscience agents would start Jupyter, Dotmesh (a data persistence daemon) and the Webhook Relay container. This container would open a tunnel and serve Jupyter notebooks under a URL similar to xyz.tasks.dotscience.com. The Dotscience dashboard would then open it in an iframe and supply an authentication token.
Tunnel management is completely automated with domains and routing configuration created during runtime. During normal operation, tunnels might last for a few minutes or even days. There can be thousands of tunnels created and deleted within hours. Since we are utilizing wildcard domains for the tunnels, TLS becomes a simple thing to manage.
As with Dotscience runners for Data Science and ML model training on Jupyter notebooks, there is a need for easy access to models deployed on Kubernetes clusters. Since Dotscience provides a deployer model where the user just starts the operator, they might not always be able to configure load balancing.
For this, the Webhook Relay ingress controller comes into play, providing access to the models whether they are running in GKE, EKS or Minikube:
Last year I wrote a blog post about combining several tools to automate simple NodeJS app updates on git push. Many users were solving similar problems by writing local web servers in Ruby, Python or PHP to receive webhooks and to do the update. I am happy to announce that we have decided to add this feature to the relay client. Now, to execute a bash script, you can:
relay forward --bucket my-bucket-name --relayer exec --command bash update.sh
And to launch a Python app on webhook:
relay forward --bucket my-bucket-name --relayer exec --command python my-app.py
This opens up some interesting possibilities to create pipelines that can react to pretty much anything that emits webhooks. In this article I will show you how to build a GitOps style pipeline that does Docker Compose update to sync with a docker-compose.yaml hosted on a git repository.
Prerequisites
Repository with scripts that I used for this article can be found here: https://github.com/webhookrelay/docker-compose-update-on-git-push.
The first step is the initial deployment. We will create a simple dockerized Python application (you can find it here) that connects to Redis, and deploy it:
version: '3'
services:
  web:
    image: "karolisr/python-counter:0.1.0"
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
Since we only want to update on git tags and not just any pushes, let’s configure a webhook and analyze the payload.
To achieve that, let’s first create a bucket with an internal output:
$ relay forward --bucket docker-compose-update-on-git-push http://localhost:4000
Forwarding:
https://my.webhookrelay.com/v1/webhooks/a956a9f7-2260-4bc2-a54b-3d896acf4206 -> http://localhost:4000
Starting webhook relay agent...
2019-08-28 23:14:41.773 INFO using standard transport...
2019-08-28 23:14:41.928 INFO webhook relay ready... {"host": "my.webhookrelay.com:8080", "buckets": ["8e977e70-09a6-464c-ad30-855e1cd5d9f9"]}
Here, the bucket will be used later to subscribe to GitHub requests, while the destination is just a mandatory argument that we don’t need in this case.
Grab that https://my.webhookrelay.com/v1/webhooks/*** URL and go to your repository’s settings -> webhooks section. Once there, set:
Select “Let me select individual events.” and tick “Releases”.
Now, go to your repository’s releases page (for example https://github.com/webhookrelay/docker-compose-update-on-git-push/releases) and make a new release 1.0.0. Then, if you visit the bucket details or logs page, you should see a webhook from GitHub. Open it and let’s inspect the payload. It’s quite lengthy, but we should be able to see
"action": "released",
near the top. To ensure that we only react to these events, create a rule:
If you tag another release, you should now see that only one webhook was forwarded:
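The effect of such a rule can be sketched locally with plain shell (the payload below is a heavily abbreviated stand-in for a real GitHub release event):

```shell
# Abbreviated sample of a GitHub "release" webhook payload
payload='{"action": "released", "release": {"tag_name": "1.0.0"}}'

# Crude extraction of the "action" field (for real payloads, prefer a JSON parser)
action=$(printf '%s' "$payload" | sed -n 's/.*"action": *"\([^"]*\)".*/\1/p')

if [ "$action" = "released" ]; then
    echo "forward webhook"
else
    echo "drop webhook"
fi
```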
Our update script is:
#!/bin/bash
git pull
docker-compose up -d
It will pull the latest compose file and update the containers. Now, let’s update the configuration in the relay.yml file (the access key & secret can be generated here):
version: v1
key: xxx    # your access key
secret: xxx # your access secret
buckets:
- docker-compose-update-on-git-push # your bucket name where github webhooks are sent
relayer:
  type: exec
  command: bash
  commandArgs:
  - /full/path/to/docker-compose-update-on-git-push/update.sh # <-- should be the full path to your update script
  timeout: 300
To start the relay, run:
relay run -c relay.yml
This runs the agent in your terminal. For production use cases, please use the background service mode, which ensures that the daemon is launched on OS startup.
Launch docker-compose:
docker-compose up -d
Check containers:
$ docker ps
CONTAINER ID  IMAGE                          COMMAND                 CREATED         STATUS         PORTS                   NAMES
26cd2219e18b  redis:alpine                   "docker-entrypoint.s…"  20 seconds ago  Up 3 seconds   6379/tcp                docker-compose-update-on-git-push_redis_1
63c8cd1ae7bb  karolisr/python-counter:0.1.0  "flask run"             20 seconds ago  Up 18 seconds  0.0.0.0:5000->5000/tcp  docker-compose-update-on-git-push_web_1
$ curl http://localhost:5000
I have been seen 1 times.
$ curl http://localhost:5000
I have been seen 2 times.
The next step is to build a new image 0.2.0 and push it to the registry. Once it’s available, we can update the docker-compose.yml in our GitHub repository and make a new release. For the sake of this example, let’s do this through the GitHub UI.
In a few seconds you should see a new container running:
$ docker ps
CONTAINER ID  IMAGE                          COMMAND                 CREATED         STATUS        PORTS                   NAMES
27b2542423ec  karolisr/python-counter:0.2.0  "flask run"             9 seconds ago   Up 7 seconds  0.0.0.0:5000->5000/tcp  docker-compose-update-on-git-push_web_1
26cd2219e18b  redis:alpine                   "docker-entrypoint.s…"  10 minutes ago  Up 9 minutes  6379/tcp                docker-compose-update-on-git-push_redis_1
Webhook Relay output rules can also validate Github signature:
This will ensure that only webhooks signed by Github will be processed.
As with any code executed on your machine, you have to be careful when automating tasks. Webhook Relay provides a unidirectional flow of webhooks into the machine. Your scripts and applications live on your machine and cannot be modified remotely through Webhook Relay. Coupled with authenticated webhook endpoints (configurable at the bucket level) or webhook payload checksum validation, you can build a secure update mechanism.
In general, this is an easy and simple way to update Docker Compose on your server without investing much time. You can just push Docker images, update the tag, and the relay agent will run the update.
]]>Usually, when I need a database I just pick Postgres, or embedded key-value stores such as the excellent boltdb, badger from dgraph, or Redis (if I need a KV store shared between several nodes). With flexibility comes the burden of maintenance and, sometimes, additional cost. In this article, we will explore a simple Golang backend service that uses Google Firestore as storage.
When I started working on a simple project called bin.webhookrelay.com, I picked Badger as a key-value store, attached a persistent disk to a Kubernetes pod and launched it.
bin.webhookrelay.com is a free service that allows you to capture webhook or API requests for testing purposes. It also lets you specify what response body and status code to return, as well as set an optional response delay.
The data model was (and still is) simple:
Most of the time, KV stores such as boltdb or badger are great for such use cases. Problems arise when you want to either scale horizontally or have a rolling update strategy, meaning that the number of instances of your application has to surge during the update. While Kubernetes is great for running pretty much any workload, an update where it has to detach a persistent disk and reattach it to a new pod can lead to downtime and generally slow updates. I always try to avoid such scenarios; however, the webhook bin service was suffering from exactly this.
This time, I decided to try out Cloud Firestore. You have probably already heard about Firestore (part of Google’s Firebase platform) and that it is very popular amongst mobile app developers who need a database for their Android and iOS apps. Apparently, it can also provide a really nice developer UX for backend applications! :)
For authentication, Golang Firestore client uses a standard mechanism that relies on a service account. Basically, you need to go to:
Docs can be found in the Google Cloud authentication section.
Application code is surprisingly simple. You get the client using Google application credentials and a project ID, and that’s it:
func NewFirestoreBinManager(opts *FirestoreBinManagerOpts) (*FirestoreBinManager, error) {
    ctx := context.Background()

    var options []option.ClientOption
    if opts.CredsFile != "" {
        options = append(options, option.WithCredentialsFile(opts.CredsFile))
    }
    // The credentials file option is optional; by default the client uses the
    // GOOGLE_APPLICATION_CREDENTIALS environment variable, which is the
    // default method to connect to Google services.
    client, err := firestore.NewClient(ctx, opts.ProjectID, options...)
    if err != nil {
        return nil, err
    }

    return &FirestoreBinManager{
        binsCollection: opts.BinsCollection, // our bins collection name
        reqsCollection: opts.ReqsCollection, // our requests collection name
        client:         client,
        pubsub:         opts.Pubsub,
        logger:         opts.Logger,
    }, nil
}
When creating a document, you can specify document ID and just pass in the whole golang struct without first marshaling it into JSON:
func (m *FirestoreBinManager) BinPut(ctx context.Context, b *bin.Bin) (err error) {
    _, err = m.client.Collection(m.binsCollection).Doc(b.GetId()).Set(ctx, b)
    return err
}
Note that we supply the collection name (Collection(m.binsCollection)), the document ID (Doc(b.GetId())), and set the struct fields (Set(ctx, b)).
This really saves time! An alternative with a KV store would be something like:
func (m *BinManager) BinPut(ctx context.Context, b *bin.Bin) (err error) {
    b.Requests = nil
    encoded, err := proto.Marshal(b)
    if err != nil {
        return err
    }
    return m.storage.Store(ctx, "bins/"+b.Id, encoded, nil)
}

...

// store package
func (s *Storage) Store(ctx context.Context, id string, data []byte, metadata map[string]string) error {
    err := s.db.Update(func(txn *badger.Txn) error {
        // Your code here…
        return txn.Set([]byte(id), data)
    })
    return err
}
Deleting a bin, in our case, means deleting both the bin document and all webhook requests associated with it:
func (m *FirestoreBinManager) BinDelete(ctx context.Context, binID string) error {
    _, err := m.client.Collection(m.binsCollection).Doc(binID).Delete(ctx)
    if err != nil {
        m.logger.Errorw("failed to delete bin doc by ref",
            "error", err,
        )
    }
    // Now, get all the requests and delete them in a batch request
    iter := m.client.Collection(m.reqsCollection).Where("Bin", "==", binID).Documents(ctx)
    numDeleted := 0
    batch := m.client.Batch()
    for {
        doc, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return fmt.Errorf("Failed to iterate: %v", err)
        }
        batch.Delete(doc.Ref)
        numDeleted++
    }
    // If there are no documents to delete, the process is over.
    if numDeleted == 0 {
        return nil
    }
    _, err = batch.Commit(ctx)
    return err
}
While it is very easy to store, retrieve and modify documents, some people will miss SQL-type queries that can aggregate, count records and do other useful operations in the database. For example, to track document counts, you will have to implement a solution similar to the one described here. My suggestion would be to spend more time planning your data structure and the operations you are planning to use before embarking on this journey :)
While a bit skeptical at first, I quickly started liking Firestore. Running a managed Postgres would let me switch cloud providers more easily, but it would also be more expensive. Keeping the storage interface small means that you can implement a Postgres (or any other database) driver in a matter of hours, so the most important things to look for are:
Useful resources:
]]>In this short guide we will configure Jenkins to start builds on GitHub pull requests. Subsequent builds will be triggered on any new commits and GitHub pull request status will show whether build succeeded or failed. This setup will work without configuring router, firewall or having a public IP. It will also work behind a corporate firewall.
In my case, I just grabbed a Vagrant box from https://app.vagrantup.com/ubuntu/boxes/xenial64:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", type: "dhcp"
end
Then:
vagrant up
vagrant ssh
And we have our VM. You can get the IP address by typing ifconfig in the terminal.
I mostly followed this guide: https://linuxize.com/post/how-to-install-jenkins-on-ubuntu-18-04/. The only caveat I encountered this time with Jenkins was a JDK version mismatch.
First, get your Jenkins initial admin password:
$ cat /var/lib/jenkins/secrets/initialAdminPassword
ce04d19270934633a7badcac3cfac316
Then, either open your node’s firewall (or check Vagrant port forwarding), or do the easy thing and connect with the relay:
To get the CLI, check instructions here. On a 64-bit Linux OS it’s:
curl -sSL https://storage.googleapis.com/webhookrelay/downloads/relay-linux-amd64 > relay \ && chmod +wx relay && sudo mv relay /usr/local/bin
Go to https://my.webhookrelay.com/tokens, click CREATE TOKEN and copy/paste login command into the terminal, it should be something like:
relay login -k <your key> -s <your secret>
$ relay connect :8080
Connecting: http://lsw7eq49jlhsuldvhpiyku.webrelay.io <----> http://127.0.0.1:8080
https://lsw7eq49jlhsuldvhpiyku.webrelay.io <----> http://127.0.0.1:8080
Now, open the browser. You should see a similar screen:
Follow the steps to configure your Jenkins initial admin user.
Plugin installation instructions can be found here.
Once you have it, add GitHub credentials - your username and GitHub token.
Go to your bucket configuration and create a bucket called github-webhooks. Configure it to forward all webhooks to http://localhost:8080/. This will ensure that webhooks reach the Jenkins server.
Once you have the relay CLI on the machine where you run Jenkins, type:
relay forward --bucket github-webhooks
This will start forwarding webhooks. There are alternative options to run the forwarding daemon, such as Docker container.
If you are creating the webhook configuration manually in GitHub, use the http://localhost:8080/ghprbhook destination, as that is the endpoint on which the plugin listens for webhooks. In the default case, Jenkins will automatically transform https://my.webhookrelay.com/v1/webhooks/21e13033-bd3d-47a2-bf15-6fd42d4b40a3 endpoints into https://my.webhookrelay.com/v1/webhooks/21e13033-bd3d-47a2-bf15-6fd42d4b40a3/ghprbhook, and Webhook Relay will preserve the extra /ghprbhook path.
Now, configure GitHub Pull Request Builder:
To create a new job, first select “Freestyle project”, then:
Add the project’s GitHub URL to the “GitHub project” field (the one you can enter into a browser, e.g. “https://github.com/rusenask/jenkins-test/"):
Configure Source Code Management section:
+refs/pull/*:refs/remotes/origin/pr/*
${ghprbActualCommit}
Configure Build Triggers with a list of admins and tick “Use github hooks for build triggering”:
Add your Build step configuration. This can be anything you want; usually people tend to use either a bash script, a Makefile target, or something specific to your programming language, such as go build:
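As a sketch of such a build step (the commands are placeholders; ghprbActualCommit is the environment variable exposed by the GitHub Pull Request Builder plugin, as used in the Source Code Management section above):

```shell
#!/bin/bash
# Fail fast: any failing command marks the Jenkins build (and the GitHub
# status indicator) as failed.
set -euo pipefail

echo "Commit under test: ${ghprbActualCommit:-unknown}"

# Replace with your real build/test commands, e.g.:
#   make test && make build
echo "build ok"
```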
Now, whenever you open a new pull request in GitHub, you should see a build being triggered:
You can view build status in your Jenkins instance as well. This build indicator in GitHub will either turn red or green based on the build status.
As we can see, there are several required steps to make sure your PRs get automatically built and tested when using Jenkins. Those can be split into two groups:
I hope you will find this guide useful!
P.S. Bonus troubleshooting below:
When we are talking about Jenkins, there are many ways for things to go wrong. Multiple plugin versions, corporate proxies and different operating systems contribute to all of this. I have compiled a short list of items for you to check if you encounter problems.
Make sure there’s an automatically created GitHub repository webhook configuration:
Normally, a connected agent should look like this:
$ relay forward --bucket github-webhooks
Filtering on bucket: github-webhooks
Starting webhook relay agent...
1.55523552627511e+09 info using standard transport...
1.5552355264042027e+09 info webhook relay ready... {"host": "my.webhookrelay.com:8080"}
If you are behind a corporate proxy, try adding the --ws flag to change the default transport type from gRPC to WebSocket:
$ relay forward --ws --bucket github-webhooks
Filtering on bucket: github-webhooks
Starting webhook relay agent...
1.5552387754607065e+09 info using websocket based transport...
1.5552387754607568e+09 info authenticating to 'wss://my.webhookrelay.com:443/v1/socket'...
1.5552387754608495e+09 info websocket reader process started...
1.555238775470567e+09 info subscribing to buckets: [github-webhooks]
There will be several sources of logs you can check out:
Jenkins logs at /log/all, and the relay agent output:
$ relay forward --bucket github-webhooks
Filtering on bucket: github-webhooks
Starting webhook relay agent...
1.55523552627511e+09 info using standard transport...
1.5552355264042027e+09 info webhook relay ready... {"host": "my.webhookrelay.com:8080"}
1.555236773074343e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.5552368184301443e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.5552368215106862e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.555236829308788e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.5552368314174337e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.555236920064973e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
1.5552369202506151e+09 info webhook request relayed {"destination": "http://localhost:8080/ghprbhook/", "method": "POST", "bucket": "github-webhooks", "status": 200, "retries": 0}
Some webhook providers do not offer fine-grained control over which events are sent via webhooks. Sometimes you end up in a situation where webhooks, based on their type, have to be delivered to different servers. We will have a look at a new Webhook Relay forwarding feature that allows users to define rules on outputs to help in these situations.
Each output can now have one or more multi-level rules and rule groups. In this short tutorial, we will create a rule to verify GitHub webhooks and route them to appropriate endpoints based on their contents.
First, go to the Webhook Relay buckets page and create a new bucket. Once you have it, you should be able to set your public input endpoint:
Buckets are a grouping mechanism that let you define multiple inputs and multiple output destinations.
To create a GitHub repository webhook, go to Settings -> Webhooks and set a few parameters:
https://my.webhookrelay.com/v1/webhooks/.....
very-secret
Open your bucket details page from the buckets page and create a new output (mine relays to a helper service on https://bin.webhookrelay.com):
Once you have your output, click on a triangle to define your rules. For now, we will create a single rule. Click on the dropdown next to “ADD RULE”, select header and then click the “ADD RULE” button.
In the parameter field, set X-Hub-Signature. This tells Webhook Relay where to find the value to match against. Then, from the dropdown, select payload SHA1 and set the last field to your secret value (mine was very-secret):
The configuration is now set. Try pushing to your repository; the webhook should pass through the service. Feel free to change your secret in either the Webhook Relay configuration or the GitHub webhook config and retry the push: it will be rejected.
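GitHub computes the X-Hub-Signature header as an HMAC-SHA1 hex digest of the raw request body, keyed with the webhook secret, and prefixes it with sha1=. The rule above lets Webhook Relay do this check for you, but it helps to see what is happening underneath. A minimal Python sketch of the verification (the function name is ours, for illustration only):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature: an HMAC-SHA1 hex digest of the raw body."""
    digest = hmac.new(secret.encode(), payload, hashlib.sha1).hexdigest()
    # compare_digest runs in constant time, which guards against timing attacks
    return hmac.compare_digest("sha1=" + digest, signature_header)

# Simulate what GitHub sends for a given secret and body
body = b'{"ref": "refs/tags/0.1.0"}'
signature = "sha1=" + hmac.new(b"very-secret", body, hashlib.sha1).hexdigest()

print(verify_github_signature("very-secret", body, signature))   # True
print(verify_github_signature("wrong-secret", body, signature))  # False
```

Note that the digest must be computed over the raw request body, before any JSON parsing or re-serialization, otherwise whitespace differences will break the match.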
At this point, we can determine whether webhooks are coming from GitHub or not. Next step - differentiating between pushes and tags.
If you create a new release or just push a git tag to your repository, new webhooks will be visible in your bucket. Go to the request details:
The part that’s interesting for us is "ref": "refs/tags/0.1.0":
{
  "ref": "refs/tags/0.1.0",
  "before": "0000000000000000000000000000000000000000",
  "after": "39237a56aab6ed53030b086e6d02d04aa232b337",
  "created": true,
  "deleted": false,
  "forced": false,
  ...
}
And the second webhook will be for the tag itself:
{
  "ref": "0.1.1",
  "ref_type": "tag",
  "master_branch": "master",
  "description": null,
  "pusher_type": "user",
  ...
}
Let’s create a rule that will catch any tag requests:
Now, if you push again, the webhook will not be relayed to the production-ci destination. Let’s modify our first rule to reject all tag webhooks. You can do that by creating a rule similar to the one we added to the production-ci output, but instead of contains, set it to does not contain. Now make a new tag and check your bucket logs:
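The contains / does not contain pair above is, in effect, a substring check on the payload. A rough Python equivalent of the routing decision (the helper name is ours, purely illustrative; the product evaluates the rules for you):

```python
import json

def is_tag_webhook(raw_body: bytes) -> bool:
    """Route based on whether the push event's ref points at a tag."""
    payload = json.loads(raw_body)
    # GitHub push events use "refs/heads/..." for branches
    # and "refs/tags/..." for tags
    return "refs/tags/" in payload.get("ref", "")

push_event = b'{"ref": "refs/heads/master", "created": false}'
tag_event = b'{"ref": "refs/tags/0.1.0", "created": true}'

print(is_tag_webhook(push_event))  # False -> relay to the default output
print(is_tag_webhook(tag_event))   # True  -> relay to production-ci
```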
As you can see, it’s trivial to set up multi-level rules that route webhooks to the desired destinations based on their contents, and also do some basic validation. If you need even more power in selecting the correct webhooks, check out our example implementation of the socket protocol here: https://github.com/webhookrelay/webhookrelay-ws-client. Using WebSockets, you can receive webhooks directly inside your application and process them accordingly. The library is already used in Node-RED workflows via our node (blog post here).
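The socket flow is roughly: connect, authenticate with your key and secret, then subscribe to one or more buckets. The message shapes below are assumptions modelled on the open-source client linked above, so treat this as a sketch and consult that repository for the authoritative protocol:

```python
import json

# NOTE: these frame shapes are assumptions based on the open-source
# webhookrelay-ws-client; check that repository for the real protocol.
def auth_message(key: str, secret: str) -> str:
    """First frame sent after connecting to the /v1/socket endpoint."""
    return json.dumps({"action": "auth", "key": key, "secret": secret})

def subscribe_message(buckets: list) -> str:
    """Sent once authenticated, to start receiving webhooks for these buckets."""
    return json.dumps({"action": "subscribe", "buckets": buckets})

print(auth_message("my-key", "my-secret"))
print(subscribe_message(["github-webhooks"]))
```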
I have also recently had a chance to look at the Rust programming language and found that implementing a webhook forwarding application can be quite interesting: https://github.com/rusenask/rust-ws-client.
Today we are expanding the Webhook Relay Home Assistant add-on’s support for portability across different domains by announcing an integration with the Cloudflare API to create and manage DNS records. This means you can transfer your domain management to Cloudflare and start enjoying new capabilities. Cloudflare has also recently announced their registrar, so instead of just managing your records (which they do for free) and providing a great proxy that speeds up your websites, they can now host your domain entirely, letting you avoid price increases from your original registrar.
The Webhook Relay Home Assistant add-on is a lightweight service that creates fast and secure tunnels for remote connections. It empowers users and expands their choices when ISPs or routers prevent incoming connections. With this add-on, users can easily use Alexa, Google Home, IFTTT and many other automation services that require Home Assistant to be reachable from the internet. The add-on itself is a single executable, just 7MB in size, and barely uses any CPU & RAM, making it an ideal option for running on low-power devices.
With the Cloudflare integration, besides being able to use any domain name with TLS pass-through tunnels, you get some additional benefits:
The add-on is installed in the same way as before (you can follow the official documentation). The only difference is that you will now need to get a Cloudflare API key. Follow these instructions, or:
Note that your Cloudflare API key will always remain on the device and will never be shared with the Webhook Relay cloud service.
Now, set:
{
  "key": "[YOUR TOKEN KEY]",
  "secret": "[YOUR TOKEN SECRET]",
  "forwarding": [],
  "tunnels": [
    {
      "name": "home-assistant",
      "destination": "http://127.0.0.1:8123/",
      "protocol": "tls",
      "domain": "home-assistant.example.com",
      "provider": "cloudflare"
    }
  ],
  "duck_dns": {
    "token": "",
    "accept_terms": false
  },
  "cloudflare": {
    "email": "your-email@example.com",
    "api_key": "[YOUR CLOUDFLARE API KEY]"
  },
  "tunnels_enabled": true,
  "forwarding_enabled": false
}
Don’t forget to set the new "provider": "cloudflare" field in the tunnel configuration.
We released a multi-architecture add-on with the initial 1.0.0 release that can work in any environment, as long as it can connect to the public Webhook Relay servers. These agents now run on everything from low-power Raspberry Pi devices to high-performance servers, providing secure tunnels. With TLS pass-through tunnels we created easy-to-use, secure-by-default tunnels where the agent does the heavy lifting of managing TLS certificates and decrypting traffic. In this configuration, Webhook Relay servers only see encrypted traffic, ensuring maximum privacy.
Webhook Relay is a modern tunnelling service available on multiple architectures, with both free and paid tiers. Self-hosted & enterprise options are available. Follow us on Twitter @webhookrelay.