Provision GKE Cluster with Terraform

Bukola Johnson
9 min read · Jun 7, 2021


A Guide to Deploying Kubernetes with Terraform

Google Kubernetes Engine (GKE) is a fully managed Kubernetes service for deploying, managing, and scaling containerized applications on Google Cloud.

In this guide, I will walk you through how to deploy a managed Kubernetes cluster on Google Cloud using Terraform.

Prerequisites

This guide assumes you already have some basic knowledge of Terraform and Kubernetes, though I will keep it as simple as possible for beginners to follow.

Overview of Tasks

  1. Set up a project on your Google Cloud account
  2. Enable the Compute Engine API and Kubernetes Engine API
  3. Create a Service account
  4. Install and initialize the gcloud SDK
  5. Install kubectl
  6. Install Terraform
  7. Create Terraform files needed for the cluster creation
  8. Provision the GKE cluster
  9. Interacting with the cluster using kubectl
  10. Destroy the cluster

Bonus: Clone the GitHub repo for the project

1. Set up a project on your Google Cloud account

There are several ways to create a new project in Google Cloud, but for this guide we will use the Console.

  • In the Cloud Console, open the project picker at the top of the page and click NEW PROJECT.
  • In the New Project window that appears, enter a project name and select a billing account as applicable. For the purpose of this guide, we are using the project name gke-terraform.
  • Click Create.

2. Enable the Compute Engine API and Kubernetes Engine API

These two APIs are required for terraform apply to work with this configuration. To enable them for our project:

  1. Go to the Cloud Console API Library.
  2. From the projects list, select the project we created in the previous step (gke-terraform)
  3. In the API Library, select the Compute Engine API and the Kubernetes Engine API. If you need help finding them, use the search field and search for each API separately (Compute Engine API, Kubernetes Engine API)
  4. On the API page, click ENABLE.
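
If you prefer the command line, these two APIs can also be enabled with gcloud once the SDK is installed and initialized (step 4). A quick sketch, using the project ID gke-terraform-314918 that appears later in this guide:

gcloud services enable compute.googleapis.com container.googleapis.com --project=gke-terraform-314918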

3. Create a Service account

The best practice is to have a separate service account for each service, so for the purpose of this guide we will create a dedicated service account for Terraform.

  • In the Cloud Console, go to the Service accounts page in IAM & Admin
  • Select our project (gke-terraform)
  • Click Create service account.
  • Enter the service account name: terraform
  • Click Create and continue to the next step.
  • Choose the IAM role Editor to grant the service account access to the project.
  • Click Continue.
  • Click Done to finish creating the service account.
  • Confirm the terraform service account is now created, as shown below
  • Click on the newly created service account and assign a key to it: click KEYS => ADD KEY => Create new key. Download the key in JSON format and save it securely somewhere on your system
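
If you prefer the command line, the same service account can also be created with gcloud after the SDK is set up (step 4). This is a sketch, not the console flow used above; terraform-key.json is just an example file name, and gke-terraform-314918 is the project ID used later in this guide:

gcloud iam service-accounts create terraform --display-name="terraform"
gcloud projects add-iam-policy-binding gke-terraform-314918 --member="serviceAccount:terraform@gke-terraform-314918.iam.gserviceaccount.com" --role="roles/editor"
gcloud iam service-accounts keys create terraform-key.json --iam-account=terraform@gke-terraform-314918.iam.gserviceaccount.com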

4. Install and initialize the gcloud SDK

To let Terraform run operations and create the GKE cluster in our GCP account, we first need to install and configure the gcloud SDK.

  • To install the gcloud SDK on macOS, you can use the Homebrew package manager by running the command below.

If you prefer to install the Google Cloud SDK directly or you use another operating system, check out the detailed guide here

brew install --cask google-cloud-sdk

Take note of the output below and add the relevant lines to your shell profile so that you will be able to run the next set of commands

==> Caveats
google-cloud-sdk is installed at /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk. Add your profile:

for bash users
source '/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.bash.inc'
source '/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/completion.bash.inc'

for zsh users
source '/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.zsh.inc'
source '/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/completion.zsh.inc'
  • Now let’s initialize gcloud to authorize the SDK to access GCP with the project we created in step 1 and add the SDK to our PATH. Run gcloud init and follow the prompts:
gcloud init
...
You must log in to continue. Would you like to log in (Y/n)?
  • Login with your GCP account
  • Select the project we created in step 1 — gke-terraform-314918
  • Select the default Zone and Region (optional)
  • Add your account to Application Default Credentials (ADC). This will allow Terraform to access these credentials to provision resources on GCloud
gcloud auth application-default login

5. Install kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

If you are on macOS, run the command below to install it with the Homebrew package manager. For other operating systems, check out the guide here

brew install kubernetes-cli
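
To confirm the installation, you can print the client version (a quick sanity check, nothing more):

kubectl version --client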

6. Install Terraform

On macOS, using the Homebrew package manager:

  • First, install the HashiCorp tap, a repository of all of HashiCorp’s Homebrew packages.
brew tap hashicorp/tap
  • Now, install Terraform with hashicorp/tap/terraform
brew install hashicorp/tap/terraform

For other ways of installing Terraform, please check the guide here
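
You can verify the installation by checking the version:

terraform -version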

7. Create Terraform files needed for the cluster creation

  • First, on your system, create an empty directory; we will call it terraform-gke

To make it easy to push your changes to GitHub later, make it a Git repository (check out this guide if you need help creating a GitHub repository)

  • Create a subdirectory called auth and move the service account key we downloaded in step 3 into the auth directory (remember to add the auth directory to your .gitignore file so you don’t expose your credentials when you eventually push your changes to GitHub)
  • In order to make requests against the GCP API, we need to authenticate. Supply the key to Terraform via the environment variable GOOGLE_APPLICATION_CREDENTIALS, setting its value to the location of the key file
export GOOGLE_APPLICATION_CREDENTIALS={{path}}
  • Next, we create the provider file named provider.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.52.0"
    }
  }
  required_version = "~> 0.14"
}
  • Create another file named terraform.tfvars; we will add the values for all our input variables to this file
project_id = "gke-terraform-314918"
region = "europe-west1"
  • After you have saved your variables file, initialize the Terraform workspace, which will download the provider and initialize it with the values provided in your terraform.tfvars file.
terraform init 

If everything works fine, you should see output like the one below

  • Now, let’s create a file named gke.tf, which will have the details to create the GKE cluster and a separately managed node pool (a sketch of this file follows this list)
  • Create a file named vpc.tf to provision a separate VPC and subnet for our GKE cluster (also sketched below)
  • Lastly, let’s make some output declarations in a new file called outputs.tf so we can export structured data about the resources (sketched below)
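
The complete files are in the GitHub repo linked at the end of this guide. Below is a minimal sketch of what each of the three files can look like; the machine type, CIDR range, and OAuth scopes are assumptions you can adapt, while the cluster name pattern, the node count of 2, and the output names are chosen to match what we reference later in this guide.

gke.tf (sketch): the GKE cluster and a separately managed node pool. It references the VPC and subnet defined in vpc.tf and the variables declared there.

variable "gke_num_nodes" {
  default     = 2
  description = "number of GKE nodes per zone"
}

# GKE cluster
resource "google_container_cluster" "primary" {
  name     = "${var.project_id}-gke"
  location = var.region

  # We manage the node pool separately, so remove the default one
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name
}

# Separately managed node pool
resource "google_container_node_pool" "primary_nodes" {
  name       = "${google_container_cluster.primary.name}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.primary.name
  node_count = var.gke_num_nodes

  node_config {
    machine_type = "n1-standard-1" # assumption, pick a size that suits your workload

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project_id
    }

    tags = ["gke-node", "${var.project_id}-gke"]

    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

vpc.tf (sketch): the input variables that terraform.tfvars fills in, the google provider configuration, and a dedicated VPC and subnet for the cluster.

# Input variables (values come from terraform.tfvars)
variable "project_id" {
  description = "GCP project id"
}

variable "region" {
  description = "GCP region"
}

# Provider configuration; credentials come from GOOGLE_APPLICATION_CREDENTIALS
provider "google" {
  project = var.project_id
  region  = var.region
}

# Dedicated VPC for the cluster
resource "google_compute_network" "vpc" {
  name                    = "${var.project_id}-vpc"
  auto_create_subnetworks = false
}

# Subnet for the cluster nodes
resource "google_compute_subnetwork" "subnet" {
  name          = "${var.project_id}-subnet"
  region        = var.region
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.10.0.0/24" # assumption, any free range works
}

outputs.tf (sketch): the values we query later with terraform output, in particular kubernetes_cluster_name and region, which step 9 uses.

output "project_id" {
  value       = var.project_id
  description = "GCP project id"
}

output "region" {
  value       = var.region
  description = "GCP region"
}

output "kubernetes_cluster_name" {
  value       = google_container_cluster.primary.name
  description = "GKE cluster name"
}

output "kubernetes_cluster_host" {
  value       = google_container_cluster.primary.endpoint
  description = "GKE cluster endpoint"
}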

8. Provision the GKE cluster

  • Now we will run terraform plan, which creates an execution plan. It reads the current state of any already-existing remote objects to make sure that the Terraform state is up to date.
terraform plan
  • Next, we will apply all the actions proposed in the plan above; this is what actually creates the GKE cluster
terraform apply 
  • Below is a snippet of the expected output

As shown in the output above, 4 resources were created for us in this task (a VPC, a subnet, a GKE cluster, and a GKE node pool)

  • Now let’s confirm the cluster creation in the GCP console. Navigate to Compute => Kubernetes Engine => Clusters.
    As seen below, our cluster was created successfully and named gke-terraform-314918-gke
  • Let’s confirm the node pools.
    Click on the cluster => click on Nodes.

You will see in the image above that we have 6 nodes, even though in our gke.tf file we asked Terraform for a node count of 2. This is because the node pool is regional: 2 nodes are provisioned in each of the region’s three zones to provide high availability.

9. Interacting with the cluster using kubectl

In order to interact with our newly created cluster, we need to add its credentials to our kubeconfig file (~/.kube/config). (To read more on kubeconfig, check here.)


  • Run the following command to retrieve the access credentials for your cluster and automatically configure kubectl.
gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)
  • Your output should be similar to the below
  • To confirm, verify the contexts by running
kubectl config get-contexts
  • Let’s try to get the nodes in our cluster using kubectl
kubectl get nodes
  • Let’s list all components on our cluster
kubectl get all --all-namespaces

(Below is a snippet of the output just to show you how it looks; there are many other cluster resources not shown here.)

Yay 💃 🕺, Congrats!!! 🍷
We have deployed a managed Google Kubernetes Engine (GKE) cluster on Google Cloud Platform (GCP) using Terraform.

10. Destroy the cluster

If you are just testing this, remember to destroy any resources you create once you are done with this tutorial so you don’t incur cost.

terraform destroy

(snippet from the output is shown below)

  • Confirm by typing yes and take a chill pill while all the resources are destroyed. This will take a while because the nodes have to be cordoned first.

We have come to the end of our tutorial. I hope you found this guide easy to follow.

I hope you are excited to try this out. Let me know how it goes after performing the steps in this guide.

I pushed all the files used in this guide to my GitHub repo here. You can clone it or write your own manifests using this tutorial as a guide.

Please drop a comment and let me know your thoughts after trying out this guide.

Do not forget the 👏 if you like this content
Also I will be glad if you hit the follow button so you get notified of my new posts.
You can also follow me on twitter
Thank you!!


Written by Bukola Johnson

I am a DevOps Engineer. Follow me to keep abreast with the latest technology news, industry insights, and DevOps trends.
