Autobucket Operator
Building a Cloud Storage Kubernetes Operator with Go and Operator SDK
In the last article, we looked at Mutating Admission Webhooks as a way to extend Kubernetes. In this article we’ll explore another concept: Kubernetes Operators.
Kubernetes Operators
The Kubernetes docs define operators as:
Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components
This might seem a little abstract right now, but we’ll try to explore this concept further by actually implementing an Operator.
Autobucket Operator
In this article we’ll build “Autobucket Operator”, a Kubernetes operator that automatically manages Cloud Object Storage (like GCP Cloud Storage Buckets or S3 Buckets) for a Kubernetes Deployment. Here is a schema that represents the general idea:
Whenever a Kubernetes Deployment with a specific set of annotations is created, we’d like the operator controller to create a Bucket Custom Resource (CR), and whenever a Bucket CR is created, we’d like the operator controller to create a Cloud Storage bucket.
Let’s code
The companion repo for this article is available on GitHub, so you can follow along.
The Operator is built using Operator SDK/Kubebuilder and Golang. To get started we define a Bucket Custom Resource in Go code:
Kubebuilder then generates for us the corresponding Kubernetes Custom Resource Definition, which allows us to define a Bucket in yaml like this:
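A Bucket CR in YAML would look something like this — the group/version and field names are my best guess from the annotations and kubectl output elsewhere in the article, so check the repo’s config/samples directory for the authoritative version:

```yaml
apiVersion: ab.leclouddev.com/v1
kind: Bucket
metadata:
  name: bucket-text-api
spec:
  cloud: gcp
  fullName: ab-default-bucket-text-api
  onDeletePolicy: destroy
```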
However, we won’t really need to create the Bucket CR manually. The idea is to add custom annotations to a Deployment resource, and have the operator create and manage the Bucket Custom Resource automatically for us, as in this example:
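A minimal version of such a Deployment might look like this — the “ab.leclouddev.com/cloud” and “ab.leclouddev.com/on-delete-policy” annotation keys appear later in the article, while the rest of the manifest (image name included) is an illustrative placeholder; the real sample lives at config/samples/deployment.yaml in the repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bucket-text-api
  annotations:
    ab.leclouddev.com/cloud: gcp
    ab.leclouddev.com/on-delete-policy: destroy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bucket-text-api
  template:
    metadata:
      labels:
        app: bucket-text-api
    spec:
      containers:
        - name: bucket-text-api
          image: bucket-text-api:latest # placeholder image
```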
The example above is a pretty regular k8s Deployment, but if you look at the metadata.annotations section, you’ll see a series of custom annotations with keys starting with “ab.leclouddev.com”. These are the special instructions for our operator controller to create the Bucket CR.
But how does the controller actually work?
The Reconcile Loop
The operator controller watches Deployments, and whenever it finds one with the special annotation “ab.leclouddev.com/cloud”, it creates (if missing) a matching Bucket CR. Luckily, kubebuilder and controller-runtime do the heavy lifting for us here, and we basically just have to define the Deployment’s “Reconcile Loop”, which checks Deployments and reconciles the Bucket resources:
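The real Reconcile method fetches the Deployment with the controller-runtime client and creates the Bucket CR with an owner reference; stripped of all that plumbing, the decision at its heart can be sketched as a pure function. Note that the “ab-&lt;namespace&gt;-&lt;name&gt;” naming scheme is inferred from the bucket name shown later in the article, and the “ignore” default for the delete policy is my assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// bucketSpec mirrors the Bucket CR's spec fields.
type bucketSpec struct {
	Cloud          string
	FullName       string
	OnDeletePolicy string
}

// desiredBucket captures the core of the deployment reconcile loop,
// without controller-runtime plumbing: given a Deployment's namespace,
// name and annotations, should a Bucket CR exist, and with what spec?
// The "ab-<namespace>-<name>" scheme is inferred from the article's
// kubectl output; the "ignore" default is an assumption.
func desiredBucket(namespace, name string, annotations map[string]string) (*bucketSpec, bool) {
	cloud, ok := annotations["ab.leclouddev.com/cloud"]
	if !ok {
		// No cloud annotation: this Deployment is not managed by the operator.
		return nil, false
	}
	policy := annotations["ab.leclouddev.com/on-delete-policy"]
	if policy == "" {
		policy = "ignore" // safe default: never destroy data unless asked
	}
	fullName := strings.Join([]string{"ab", namespace, name}, "-")
	return &bucketSpec{Cloud: cloud, FullName: fullName, OnDeletePolicy: policy}, true
}

func main() {
	spec, ok := desiredBucket("default", "bucket-text-api", map[string]string{
		"ab.leclouddev.com/cloud":            "gcp",
		"ab.leclouddev.com/on-delete-policy": "destroy",
	})
	fmt.Println(ok, spec.FullName, spec.OnDeletePolicy)
}
```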
The code above mainly creates the Bucket resources if they’re missing. The next step is to create the actual Cloud Storage buckets when a Bucket CR is created. We’ll take care of that in the buckets controller/reconcile loop:
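The real bucket controller talks to the GCP storage SDK; abstracting that client behind an interface makes the reconcile step easy to sketch and test without a cloud account. This is a plumbing-free sketch of the idea, not the repo’s actual code — the key property is idempotency: if the status already has a CreatedAt timestamp, reconciling again does nothing:

```go
package main

import (
	"fmt"
	"time"
)

// bucketCreator abstracts the cloud storage client (the real operator
// uses the GCP storage SDK); an interface keeps the logic testable.
type bucketCreator interface {
	CreateBucket(name string) error
}

// bucket is a stand-in for the Bucket CR's relevant fields.
type bucket struct {
	FullName  string
	CreatedAt string // empty until the cloud bucket exists
}

// reconcileBucket sketches the bucket controller's reconcile step: if
// the status has no CreatedAt timestamp yet, create the cloud bucket
// and stamp the status; otherwise there is nothing to do.
func reconcileBucket(b *bucket, storage bucketCreator) error {
	if b.CreatedAt != "" {
		return nil // already created, reconcile is a no-op
	}
	if err := storage.CreateBucket(b.FullName); err != nil {
		return err
	}
	b.CreatedAt = time.Now().UTC().Format(time.RFC3339)
	return nil
}

// fakeStorage records created buckets in memory.
type fakeStorage struct{ created []string }

func (f *fakeStorage) CreateBucket(name string) error {
	f.created = append(f.created, name)
	return nil
}

func main() {
	st := &fakeStorage{}
	bk := &bucket{FullName: "ab-default-bucket-text-api"}
	_ = reconcileBucket(bk, st)
	_ = reconcileBucket(bk, st) // second call must be a no-op
	fmt.Println(len(st.created), bk.CreatedAt != "")
}
```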
This last piece of code creates the Cloud Storage buckets and updates our Bucket CR status with a creation timestamp, “CreatedAt”, so we can track whether our Cloud Storage buckets have been created yet.
How about testing?
As a big fan of automated testing, I find it comforting that we can easily write tests for our controllers using envtest (which runs a local k8s control plane so we can run our tests against it), the Ginkgo testing framework, and the Gomega matching/assertion library:
In the test above, we create a Deployment with our special annotations via the kubernetes API client, and then we check that the operator controller creates a Bucket CR with the right specs.
Does it work?
Let’s try our operator in a little demo. For this, I have created another github repository bucket-text-api which is a simple Go REST API that takes a JSON input and saves text to a Cloud Storage bucket.
For this demo I have created a Kubernetes cluster (v1.18.6), a GCP project, and a GCP service account (detailed GCP instructions here). Then we’ll install our resources and controller manager:
# install the k8s resources
$ make install
# deploy the controller manager
$ GCP_PROJECT=autobucket-demo make deploy
Our operator is now running in the cluster, so we’ll create the sample deployment (bucket-text-api):
$ kubectl apply -f config/samples/deployment.yaml
Let’s check that our deployment is running:
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
bucket-text-api 2/2 2 2 1m
and let’s check that our Bucket CR was created:
$ kubectl get bucket
NAME CLOUD FULLNAME CREATEDAT
bucket-text-api gcp ab-default-bucket-text-api 2020-11-19T15:31:34Z
As expected, the Bucket CR was created and we can see its full name and that it has a CreatedAt timestamp, which means our Cloud Storage bucket was also created.
Let’s check the Cloud Storage bucket is actually there using the gsutil tool:
$ gsutil ls -L -p autobucket-demo
gs://ab-default-bucket-text-api/ :
Storage class: STANDARD
Location type: multi-region
Location constraint: US
Versioning enabled: None
Logging configuration: None
Website configuration: None
CORS configuration: None
Lifecycle configuration: None
Requester Pays enabled: None
Labels: None
Default KMS key: None
Time created: Thu, 19 Nov 2020 15:31:34 GMT
Time updated: Thu, 19 Nov 2020 15:31:34 GMT
Great! Let’s now try to use our bucket-text-api app to save a text file to the bucket. The Deployment is exposed via a NodePort Service on port 30008 of the cluster nodes, so we can access it using:
$ curl --request POST \
--url http://<kubernetes-node-ip>:30008/save \
--header 'Content-Type: application/json' \
--data '{
"name": "test.txt",
"content": "hello operator !"
}'
And finally let’s see if the file is saved on the Cloud Storage bucket:
$ gsutil cat gs://ab-default-bucket-text-api/test.txt
hello operator !
The file is there and everything is working as expected 🎉.
CR Deletion and Finalizers
One last bit I didn’t mention yet is what happens when a Deployment or Bucket CR is deleted.
The operator provides a special deployment annotation, “ab.leclouddev.com/on-delete-policy”, which can be set to “destroy” or “ignore”. If it is set to “destroy”, as in our example above, the operator will delete the Cloud Storage bucket when the Bucket CR is deleted, and therefore also when the Deployment is deleted, since a Deployment deletion triggers a Bucket CR deletion (use this carefully, as you might lose data). This is implemented with Kubernetes Finalizers, which I highly encourage you to read up on, and you can check the full code here.
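Under the hood, finalizer handling boils down to list manipulation on metadata.finalizers plus cleanup ordering: while our finalizer is on the list, Kubernetes won’t remove the object, which gives the controller a chance to destroy the cloud bucket first. A sketch of that flow, stripped of client calls (the finalizer name here is illustrative, and containsString/removeString are the classic helpers from the kubebuilder docs):

```go
package main

import "fmt"

const bucketFinalizer = "ab.leclouddev.com/finalizer" // illustrative name

// containsString reports whether s is present in slice; finalizers are
// just strings on metadata.finalizers.
func containsString(slice []string, s string) bool {
	for _, item := range slice {
		if item == s {
			return true
		}
	}
	return false
}

// removeString returns slice without any occurrence of s.
func removeString(slice []string, s string) []string {
	out := make([]string, 0, len(slice))
	for _, item := range slice {
		if item != s {
			out = append(out, item)
		}
	}
	return out
}

// handleDeletion sketches the finalizer flow on deletion: destroy the
// cloud bucket if the policy says so, then strip the finalizer so
// Kubernetes can complete the deletion. On error we keep the finalizer,
// so the next reconcile retries the cleanup.
func handleDeletion(finalizers []string, onDeletePolicy string, destroy func() error) ([]string, error) {
	if !containsString(finalizers, bucketFinalizer) {
		return finalizers, nil // nothing to clean up
	}
	if onDeletePolicy == "destroy" {
		if err := destroy(); err != nil {
			return finalizers, err // keep the finalizer, retry next reconcile
		}
	}
	return removeString(finalizers, bucketFinalizer), nil
}

func main() {
	destroyed := false
	fins, _ := handleDeletion([]string{bucketFinalizer}, "destroy",
		func() error { destroyed = true; return nil })
	fmt.Println(destroyed, len(fins))
}
```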
Let’s try to delete the deployment:
$ kubectl delete deployment bucket-text-api
deployment.apps "bucket-text-api" deleted

$ kubectl get bucket
No resources found in default namespace.

$ gsutil ls -p autobucket-demo gs://ab-default-bucket-text-api
BucketNotFoundException: 404 gs://ab-default-bucket-text-api bucket does not exist.
Deleting the deployment also deleted the Bucket CR and the Cloud Storage bucket as expected.
Conclusion
We have seen in this example how Kubernetes Operators let us automate cloud infrastructure logic. But there are of course many more uses for operators, and you can check this list of operators in the wild.
While functional, this first version of Autobucket Operator is pretty basic and only handles Google Cloud Platform storage buckets, but there are already a few items on the to-do list, such as support for AWS S3 buckets and more advanced configuration options.
I hope you have found this article useful. As usual, please let me know if you have any questions or remarks, and if you’d like to contribute to Autobucket Operator, please open GitHub issues and send pull requests!