So, the other day I needed to come up with a proof of concept for running Apache Kafka on Kubernetes. My thinking was “ok, I’ll create a temporary K8s cluster, prove it works, then tear it down”. AWS was being a pain, so I thought to myself “hey, I’ll try GKE”. This is a log of my experience trying to use GKE (a.k.a. Google Container Engine; the “K” stands for Kubernetes).
Firstly, I tried to sign up using my work email to get the $300 free credit. This didn’t work for some reason (maybe due to my company already having a GCP account?).
Secondly, after I logged in it refused to create a cluster unless I created a “Project” first. God knows why this is needed, but hey, whatever. I created a project called “Kafka PoC” or “foobar” or something.
Thirdly, I decided to try creating a cluster with a larger-than-default instance size, because:
- The guide I was following used the “minimum” ZooKeeper settings from Yahoo
- I figured as it was going to be deleted, the extra cost wouldn’t be too much
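For reference, the gcloud equivalent of what I was clicking through in the console looks roughly like this (the cluster name, machine type, and node count here are illustrative, not the exact values I used):

```shell
# Create a GKE cluster with a bigger-than-default machine type.
# Cluster name, machine type, and node count are illustrative.
gcloud container clusters create kafka-poc-cluster \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --num-nodes 3
```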
This resulted in the following error:
Ok… so… that error message has pretty much no useful or actionable information. After hitting the “reload” button a few times and getting the same result, I gave up and tried to create the same cluster with the “default” instance size. This worked fine, but still left me without the recommended instance size I needed.
So, I tried again to create a cluster with the larger size and this time it worked! 🙂
The cluster came up; next it was time to connect to it. Generally speaking, Kubernetes has a pretty basic authn/authz setup. To connect to it, you basically need:

- the API server endpoint
- a username and password
and you can replace user/pass with a client certificate if you like. So, I clicked on the “Connect” button and got the following:
I ran the command, only to get back an error message complaining that the API hadn’t been enabled. This time however the error message was useful and even included a link to the GCP console to enable the API:
```
srdan@Srdans-MacBook-Pro:~$ gcloud container clusters get-credentials cluster-1 \
> --zone us-central1-a --project kafka-poc
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Google Container Engine API has not been used in project 012346006552 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/container/overview?project=012346006552 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
```
Except that clicking on the link just led to another error:
For the life of me, I couldn’t work out where to turn on the Container API to allow gcloud to give me the credentials I needed to access the cluster I had just created.
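As an aside, the API can also be enabled from the CLI, which sidesteps the console entirely (I believe the command below is the current gcloud incantation; older releases used a different command group):

```shell
# Enable the Container Engine API for the project from the CLI,
# instead of hunting for the right screen in the console.
gcloud services enable container.googleapis.com --project kafka-poc
```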
Even worse was that the web UI was showing that the API was in fact already enabled:
At this point, I gave up. I tore down the cluster and switched to using the kube-up.sh script with AWS.
UPDATE: After discussing the issue with a colleague who has more Google Cloud Platform experience than me, and trying to re-create the issue, I figured out that the error was caused by the “service account” not being activated.
I had to go into the IAM screen, generate a service account:
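The CLI equivalent of those IAM console steps is roughly the following (the service account name and display name here are illustrative):

```shell
# Create a service account in the project (name is illustrative).
gcloud iam service-accounts create kafka-poc-sa \
    --display-name "Kafka PoC service account"

# Generate a JSON key file for it, to be used with
# `gcloud auth activate-service-account` below.
gcloud iam service-accounts keys create kafka-poc-key.json \
    --iam-account kafka-poc-sa@kafka-poc.iam.gserviceaccount.com
```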
Then create a key for the Service Account and run:
```
gcloud auth activate-service-account --key-file Downloads/kafka-poc-8b0bd676d86d.json
Activated service account credentials for: [firstname.lastname@example.org]
```
This allowed the original gcloud command to get the Kubernetes credentials and add them to my kubeconfig file:
```
srdan@Srdans-MacBook-Pro:~$ gcloud container clusters get-credentials cluster-1 \
    --zone us-central1-a --project kafka-poc
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
```
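For reference, the kubeconfig entry that gets generated looks roughly like this (the endpoint and credentials below are placeholders, not real values; GKE names the entries `gke_<project>_<zone>_<cluster>`):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: gke_kafka-poc_us-central1-a_cluster-1
  cluster:
    server: https://<cluster-endpoint-ip>
    certificate-authority-data: <base64-encoded CA cert>
users:
- name: gke_kafka-poc_us-central1-a_cluster-1
  user:
    username: admin
    password: <generated password>
contexts:
- name: gke_kafka-poc_us-central1-a_cluster-1
  context:
    cluster: gke_kafka-poc_us-central1-a_cluster-1
    user: gke_kafka-poc_us-central1-a_cluster-1
current-context: gke_kafka-poc_us-central1-a_cluster-1
```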
Which allowed me to connect to my cluster!
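With the credentials in place, a quick sanity check confirms the connection (any kubectl command will do):

```shell
# Verify the kubeconfig entry works by listing the cluster's nodes...
kubectl get nodes

# ...and check that the current context points at the new cluster.
kubectl config current-context
```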
If you’re looking to get started with Google Container Engine (GKE), this is an excellent tutorial and place to start: https://github.com/rvowles/kubernetes-codelounge
In fact, working through that tutorial before attempting to spin up a cluster might have saved me a lot of time and grief.