Google Cloud Build, CI/CD for static websites

Updated December 2018 · 23 minute read
The following assumes familiarity with Continuous Integration, Continuous Delivery and Continuous Deployment practices, the Google Cloud Platform and Git, and requires a working Hugo installation and an existing Hugo project.

Continuous Deployment aims to minimize the total time between a code change in development and its deployment to the production environment. In this practice there is no human intervention at all: every change that passes the pipeline is put into production automatically. This is achieved with infrastructure capable of automating the steps of testing, building and deploying.

This post describes a free solution for Continuous Deployment of static websites. In a previous post, “Continuous Deployment for Hugo websites”, I described a free solution built on services including GitLab, Docker Hub, and Firebase. My intention with this post is to add an alternative free solution based entirely on Google’s cloud infrastructure. To create a working example for demonstration, we will focus on the Hugo static site generator; however, by altering specific parts in the supplied files, you can apply this solution to static websites built by other generators (Jekyll, Pelican, Gatsby). For more solutions and ideas concerning Hugo sites, you can also look at the extensive section of Hugo’s documentation dedicated to Hosting & Deployment, with options and guides for various hosting, continuous integration and automatic deployment solutions.

Google Cloud Platform (GCP) is a suite of cloud computing products and services. Among these products/services, are Google’s Cloud Build, Cloud Source Repositories, Container Registry and Firebase, which can integrate together seamlessly to form a Continuous Deployment pipeline for our use case. At the time of writing, Google offers a free tier, including always free usage limits during and after the free trial, where one can start learning/building on GCP for free.

Services and tools for the job

Since we will be using products and services from Google, they are designed to integrate seamlessly with one another. You can read more on the advantages of using GCP for continuous delivery here. The GCP products and services we will use to create a Continuous Deployment pipeline for a Hugo static website are:

  1. Cloud Source Repositories. This will be used to host our remote repository, where we will store, manage, and track our code.
  2. Cloud Build. This will be used to define our custom workflow for building, testing, and deploying our website to Firebase.
  3. Container Registry. This will be used to store, manage, and secure our Docker container images and to fully automate our Docker pipelines for fast deployment.
  4. Cloud Key Management Service (KMS). Cloud KMS allows you to keep encryption keys in one central cloud service, for direct use by other cloud resources and applications. This will be used to encrypt our Firebase token.
  5. Cloud SDK. This is a Command-line interface for Google Cloud Platform products and services. It contains gcloud, gsutil, and bq, which you can use to access Google Cloud Build, Google Container Registry, and other products and services from the command-line. We will run these tools interactively and in our automated scripts.

Git will be our local choice of free and open source distributed version control system. It will be used to manage our local repository and to push/pull code to/from Google’s Cloud Source Repositories. One thing that should be noted here is that this solution is free only within certain limits. Once you exceed Google’s free tier limits, you will be charged according to each product’s/service’s pricing. This should not be a problem for small-scale, individual projects, but you should be familiar with the pricing of each product to make sure you don’t incur charges.


Our target work-flow, briefly, includes the following:

  1. Working locally with our website files. (Try new features, add/update posts, etc.)
  2. When we feel happy with what we have done, commit the changes. (Or commit as many times as we feel necessary)
  3. Push changes to remote repository. (usually origin/master)
  4. Done. (Automatic deployment)
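In Git terms, one round of this loop might look like the following sketch. The identity, file name, remote and tag are placeholders, and the actual deployment is fired by the tag-based trigger we set up later in step 7:

```shell
#!/usr/bin/env bash
# One iteration of the local work-flow: edit, commit, release.
set -e
site=$(mktemp -d)               # stand-in for your Hugo project directory
cd "$site"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name  "Your Name"
echo "draft post" > post.md               # 1. work on the site
git add .                                 # 2. stage and commit the changes
git commit -q -m "Add new post"
git tag -a rel-0.01 -m "release/v0.01"    # mark a release
# 3. push commits and the release tag (remote omitted in this sketch):
# git push origin master --follow-tags
git log --oneline | head -n1
```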

After step 3 in the above work-flow our site will be tested, built and deployed automatically, and as soon as our pipeline finishes successfully we will be able to see our latest changes on-line. The procedure involves the following:

  1. Initializing version control with Git in our website’s root directory. (local repository)
  2. Creating a relevant project in GCP for our remote repository, build pipelines and registry storage.
  3. Enabling required GCP APIs and installing Cloud SDK.
  4. Creating a build configuration pipeline in Cloud Build using Google’s Cloud resources, to test, build and deploy a Hugo site.
  5. Securing our Firebase CI token.
  6. Creating and pushing files to our Cloud Source Repository.
  7. Creating a Build Trigger to automate the build process.
  8. Setting appropriate account permissions to your cloudbuild service account.

Pipeline creation with GCP’s technologies involves the use of container images, which we can get from the public repositories of Docker Hub, or from shared/public official builder images in Google’s Container Registry. Another option is to create our own containers, again employing GCP’s CI/CD mechanisms to build and store container images using builders and the project’s registry storage, or to build them locally and upload them to a Docker Hub repository or Google’s Container Registry. For this guide we will use official builders, Docker Hub images, and GCP community builders to create our own container images for deploying a Hugo website. This choice satisfies particular requirements that could not be met otherwise at the time of writing, and also demonstrates a more complex use of GCP’s build services. Our build configuration will be one continuous procedure that starts by collecting version information on the most recent releases of the tools we use (Hugo, Firebase tools), updates the relevant files if necessary, pushes changes to our Source Repository, builds image(s), tests, and finally deploys our static site.

Step 1: Using git with our Hugo website

To proceed through this step you must have Git installed on your machine. If you are unfamiliar with version control using Git, you can try a free on-line course to familiarize yourself with it. To initialize a Git repository in your existing Hugo project, go to your project’s root directory (usually where your config.toml resides), create a file named .gitignore and add an exception for the public directory. You could try the following in your project’s root directory, if you are on Ubuntu or a similar Linux flavor:

touch .gitignore
echo "# Hugo default output directory
/public" >> .gitignore
git init
git add .
git commit -m "Initial commit"

This will create a new repository, add all your project files except the public directory, and create an initial commit. Also, if you are using Git for the first time, you should make sure you have set your personal information in Git’s configuration:

git config --global user.name "YOUR_USER_NAME"
git config --global user.email "YOUR_EMAIL"

In case you are already familiar with Git, you may ignore all this and proceed.

Step 2: Creating a GCP project

Usually, to create a new GCP project you would follow the instructions from here. However, for our use case it is easier to start off by following the instructions for creating a new Firebase project instead. Firebase is part of Google’s Cloud offering, and GCP underlies all of Google’s Cloud products, so creating a new Firebase project is essentially the same as creating a GCP project and enabling some APIs and permissions specific to Firebase-related projects.

Firebase offers a free tier for hosting static websites (which are mainly intended to be app-supporting sites) and also provides secure HTTP (HTTPS) by issuing a free SSL certificate. To get started with Firebase hosting, you can read the relevant guide in the Hugo documentation and Firebase’s getting started guides. Since we need a place where our deployed site will be hosted and served to the web, we will eventually create a Firebase project; and since such a project is a GCP project, following this route we benefit by not having to set up Firebase hosting in a bare GCP project. In a few brief steps:

  1. Create an account with Firebase.
  2. Globally install Firebase tools. Although not required for the creation of our CI/CD pipeline, Firebase’s console tools provide an alternative way to control and monitor your website’s deployment, and they are needed to generate an authentication token for use in non-interactive environments, which our CI/CD pipeline requires. To install the tools you must have Node.js installed, which you can get from here or by following the directions here. Then, to install the actual Firebase tools you can follow the instructions given here.
  3. Go to your Firebase console and create a new web project.
  4. This is also a good time to connect your custom domain from the Firebase UI > Hosting > Connect Domain.
  5. Locally, go to the root directory of your Hugo website project (usually where config.toml resides). Then initialize your Firebase project:

    firebase login
    firebase init

    Follow the directions to make your choices, i.e. choose Hosting, the name of the project you set up in the Firebase UI, the defaults for database rules, public as the default publish directory, and No for single-page app deployment.

  6. Generate an authentication token for use in our CI/CD pipeline, which is a non-interactive environment, using the Firebase console tools:

    firebase login:ci

    This generates a token, e.g. “1/AD7sdasdasdKJA824OvEFc1c89Xz2ilBlaBlaBla”. Take a note of the generated token, but keep it safe, as this is a private key that allows write access to your Firebase hosting.

We now have a new GCP project with Firebase hosting enabled. Next we need to enable more functionality by enabling more GCP APIs.

Step 3: Installing Cloud SDK and enabling GCP APIs

An essential set of tools for interacting with GCP projects right from your console is the Cloud SDK. It includes the gcloud tool, which manages authentication, local configuration, developer workflow, and interactions with the Cloud Platform APIs. After you’ve installed the Cloud SDK, you can also install language-specific cloud libraries and optional Cloud SDK components. The Cloud SDK documentation includes instructions for installing the tools on most OS platforms. Please follow the instructions to install and initialize the SDK for your OS. Having installed the Cloud SDK, you can add a component you may find helpful for your Cloud builds by running the following (on a Linux machine):

gcloud components install cloud-build-local

This will install the local builder, which allows you to test a build configuration locally, or even build locally and push your images to the Container Registry (this requires a local installation of Docker).

Enabling Billing

An important step prior to enabling APIs is to enable billing for your project. This does not mean that as soon as you enable billing you will start incurring charges. If, for example, you have available credits from the free tier, or you are within the free limits, you will not be charged (at least this is how things work at the time of writing). However, since the added functionality provided by most APIs has an associated pricing scheme, the vendor (Google) wants to be able to charge any fees that accrue when you are outside the free limits and have run out of free tier credits. In any case, you should familiarize yourself with the pricing schemes for each API and any other associated costs, so you won’t be surprised by charges. If billing is not enabled on your project, you may not be able to use some APIs until you enable it. See APIs and billing for more information.

Enabling GCP APIs

Enabling an API associates it with the current project, adds monitoring pages, and enables billing for that API if billing is enabled for the project. The APIs you must enable for our use case are:

  1. Cloud Source Repositories API
  2. Cloud Build API
  3. Container Registry API
  4. Cloud KMS API

Enabling the APIs should be fairly straightforward; in case you encounter difficulties, check the “quickstarts” in the relevant documentation for each Cloud API. Also, the Cloud Console might occasionally ask you to retry the requested action at a later time.

Step 4: Creating a build pipeline

To start a build on Cloud Build, you need to create a build configuration file. The file should be named cloudbuild.yaml, and it defines the fields needed for Cloud Build to perform your tasks. It can be written in JSON or YAML syntax, depending on how it will be supplied to Cloud Build. We are going to include cloudbuild.yaml in our repository, so it will be written in YAML. For a full list of all the fields you can include in a build config file, read here.
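To make the structure concrete, here is a minimal, illustrative build config sketch, not the one used in this guide: the step images and arguments below are placeholders. Each entry in steps runs in its own container image against a shared /workspace, and id/waitFor control ordering.

```yaml
steps:
# Step 1: runs in the git builder image; the image's entrypoint is git.
- name: 'gcr.io/cloud-builders/git'
  args: ['--version']
  id: 'show-git-version'
# Step 2: a plain Docker Hub image; waits for step 1 to finish.
- name: 'busybox'
  args: ['echo', 'workspace is shared between steps']
  waitFor: ['show-git-version']
```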

Automating builds requires the use of Build Triggers, which instruct Cloud Build to automatically build your image(s) whenever changes are pushed to the build source. A build trigger can be set to re-build your images on any change to the source repository, or only on changes that match certain criteria. For our use case we will configure Build Triggers to react when certain tags are pushed to our Source Repository. We will not differentiate among branches, as we will use only the master branch, both for building and updating our images and for deploying our website using our Registry images. To make use of Build Triggers we must include a cloudbuild.yaml file in our sources.

If you add a cloudbuild.yaml to the root directory of your repository, each push with a certain tag will trigger your CI pipeline. To build our Hugo site with GCP CI, we need a couple of container images to create our own Hugo container image, which we will then use for our Continuous Deployment pipeline. The image we will create has the Hugo extended version installed to support Sass/SCSS functionality. We also aim to build this image on a lightweight Linux base so that the final image is small and flexible.

The community-contributed images for Google Cloud Build are a very good starting point if you do not require the extended version of Hugo. They provide a repo with a cloudbuild.yaml and a Dockerfile for the latest version of Hugo, which you can use in your own cloudbuild.yaml to build your image if you do not need Sass/SCSS. In their Dockerfile they use a distroless image to run the Hugo static site generator once it has been downloaded. Distroless images are language-focused Docker images without the operating system. Apart from reducing the container image size, they are good practice to use when possible. These distroless images are available from the Google Container Registry. For our use case we will use a distroless base image, as we need the glibc runtime for the extended Hugo version. Using the distroless image reduces the built image size from ~30.2MB to ~16.2MB.

Build files

In the root directory of your Hugo website (you should be in the same directory as config.toml) create the following files:

  1. cloudbuild.yaml
  2. Dockerfile-firebase
  3. Dockerfile-hugo-xtnd
  4. firebase.bash
  5. imgbuilder.bash


You can get the cloudbuild.yaml file from my associated GitHub repo.

This file contains all the steps that Cloud Build will follow for our pipeline. There are in total ten steps in the cloudbuild.yaml. In the first step we start by checking our project’s Container Registry for existing images. We request the response in .json format and save it in an associated file. We will be building/using two images, one for the Hugo static site generator and one for Firebase tools, so we check our GCR for both images. We will use this response later in our update-images script.

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud container images list-tags gcr.io/$PROJECT_ID/hugo --format=json > gcrimagelist.json
    gcloud container images list-tags gcr.io/$PROJECT_ID/frbtools --format=json > gcrfrbimglist.json
  id: 'check-images'

In the second build-step we run a Python script (update-images.py) to: a) check the current release of Hugo, b) check the current release of Firebase tools, c) decide whether to update the image(s) or not, and d) update the relevant files in case a container image needs updating to a recent release. In this step you should replace [YOUR_EMAIL] with your email and [YOUR_NAME] with your name or a username.

- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    python --version
    git config --global user.email "[YOUR_EMAIL]"
    git config --global user.name "[YOUR_NAME]"
    pip3 install requests gitpython
    python ./update-images.py
  id: 'update'

The third build-step updates our Cloud Source Repository, with the latest changes from the previous step (if any).

- name: 'gcr.io/cloud-builders/git'
  args:
  - 'push'
  - '-f'
  - 'https://source.developers.google.com/p/$PROJECT_ID/r/[REPOSITORY_NAME]'
  - 'master'
  - '--follow-tags'
  id: 'sources-update'

The fourth build-step is a conditional build job. The imgbuilder.bash script will build and push to GCR a new Hugo image if there is no image available for the most recent Hugo release. This step uses the Dockerfile-hugo-xtnd file.

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - './imgbuilder.bash'
  - 'VRSNFILE=buildhugo.txt'
  - 'IMGNAME=hugo'
  - 'DCRFILE=Dockerfile-hugo-xtnd'

The fifth build-step is a simple test for our Hugo image.

- name: 'gcr.io/$PROJECT_ID/hugo'
  args: ['--help']
  id: 'test'

This sixth build-step is where our Hugo site is actually generated and will start after the test build step.

- name: 'gcr.io/$PROJECT_ID/hugo'
  id: 'site-builder'

In the seventh build-step we simply test the generated site code. If you have disabled RSS in your Hugo site configuration, this step must be changed or removed, as you will not have an index.xml generated by the previous build-step.

- name: busybox
  args: ['cat', 'public/index.xml']

In the eighth build-step we have another conditional build job. We use the imgbuilder.bash script again to conditionally build and push to GCR a new Firebase-tools image if there is no image available for the most recent Firebase-tools release. This build-step starts asynchronously after the sources-update (third) build-step. This step uses the Dockerfile-firebase file.

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - './imgbuilder.bash'
  - 'VRSNFILE=buildfrtls.txt'
  - 'IMGNAME=frbtools'
  - 'DCRFILE=Dockerfile-firebase'
  id: 'frbtools-image'
  waitFor: ['sources-update']

The ninth build-step just tests our Firebase-tools image. It runs firebase list using our FIREBASE_TOKEN, which is expected to produce a list of our Firebase projects.

- name: 'gcr.io/$PROJECT_ID/frbtools'
  args:
  - 'list'
  secretEnv: ['FIREBASE_TOKEN']
  waitFor: ['frbtools-image', 'test']

The last, tenth build-step is where we deploy our newly generated site code to our Firebase hosting. This build-step will use our Firebase-tools image to push our public directory to Firebase hosting and create a deployment message.

- name: 'gcr.io/$PROJECT_ID/frbtools'
  args:
  - '--project'
  - '$PROJECT_ID'
  - '--non-interactive'
  - 'deploy'
  - '--only'
  - 'hosting'
  - '--message'
  - 'Build $BUILD_ID, repo $REPO_NAME, sha1 $COMMIT_SHA, ref $BRANCH_NAME'
  secretEnv: ['FIREBASE_TOKEN']

The last part of cloudbuild.yaml instructs Cloud Build to decrypt the encrypted FIREBASE_TOKEN using the key firebase-token from the keyring cloudbuilder. This is achieved with the KMS API. Encrypting the FIREBASE_TOKEN is described in a later section. In this part you should replace [YOUR_PROJECT_ID] with your project ID and [YOUR_ENCRYPTED_FIREBASE_TOKEN] with the encrypted token string you will create when you encrypt your FIREBASE_TOKEN. We created the CI FIREBASE_TOKEN during the Firebase project procedure in step 2.

secrets:
- kmsKeyName: 'projects/[YOUR_PROJECT_ID]/locations/global/keyRings/cloudbuilder/cryptoKeys/firebase-token'
  secretEnv:
    FIREBASE_TOKEN: '[YOUR_ENCRYPTED_FIREBASE_TOKEN]'


You can get the Dockerfile-firebase file from my associated GitHub repo.

This is a Dockerfile used to build the latest Firebase-tools image, so we can use Firebase commands for our non-interactive website deployment. It is based on the cloud-builders-community/firebase builder. That builder’s usage information is relevant to our use case and is worth reading.


You can get the Dockerfile-hugo-xtnd file from my associated GitHub repo.

This is a Dockerfile used to build the latest Hugo (extended) image, so we can generate our static site for our non-interactive website deployment. It is based on the GitLab Pages Hugo example Dockerfile and on the cloud-builders-community/hugo builder. It is a multi-stage build Dockerfile: we initially use an Alpine Linux image to download the Hugo binary, verify its checksum and decompress it, and then use a distroless base image to run Hugo, which results in a minimal ~16MB container image.


You can get the firebase.bash file from my associated GitHub repo.

This script is used by Dockerfile-firebase as the ENTRYPOINT, so that Firebase commands passed to the image are supplemented with the required FIREBASE_TOKEN, which is passed as a secret environment variable when the image is used to deploy the site to Firebase (last build-step). This script comes from the cloud-builders-community/firebase builder. We created the CI FIREBASE_TOKEN during the Firebase project creation procedure in step 2. In a following section we will encrypt the FIREBASE_TOKEN for safe use in Cloud Build.


You can get the imgbuilder.bash file from my associated GitHub repo.

if [ -f "$VRSNFILE" ]; then
    tag=$(cat "$VRSNFILE")
    echo "Image gcr.io/${PROJECT}/${IMGNAME}:${tag} NOT found. Building new..."
    docker build -t "gcr.io/${PROJECT}/${IMGNAME}:${tag}" -t "gcr.io/${PROJECT}/${IMGNAME}" -f "$DCRFILE" .
    docker push "gcr.io/${PROJECT}/${IMGNAME}:${tag}"
    docker push "gcr.io/${PROJECT}/${IMGNAME}"
else
    line=$(sed -n -e "/$SEARCHSTR /p" "$DCRFILE")
    tag="${line##* }"
    echo "Image gcr.io/${PROJECT}/${IMGNAME}:${tag} exists in GCR, proceeding without new build..."
fi

This script is used to conditionally build the Hugo and Firebase-tools images. It checks for the presence of a file, which is automatically generated by the update script if updating is required. If the file is found, the new image tag is read from the file and the appropriate images are built and pushed to the Container Registry. If the file is not found, we proceed without building new images. This saves build time and prevents our GCR from being cluttered with identical images at each deployment. Free storage in the Container Registry is not unlimited, so using it sparingly is probably a good option, unless you don’t mind incurring charges.
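The conditional logic can be illustrated with a small, self-contained sketch; the file names and version values here are stand-ins, not the actual build artifacts:

```shell
#!/usr/bin/env bash
# Sketch of the conditional build decision: build only when a version
# file (written by the update script) is present.
decide_build() {
    local vrsnfile="$1"
    if [ -f "$vrsnfile" ]; then
        echo "build $(cat "$vrsnfile")"   # new release: build and push
    else
        echo "skip"                       # image up to date: do nothing
    fi
}

tmpdir=$(mktemp -d)
echo "0.53" > "$tmpdir/buildhugo.txt"
decide_build "$tmpdir/buildhugo.txt"   # prints: build 0.53
decide_build "$tmpdir/missing.txt"     # prints: skip
rm -rf "$tmpdir"
```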

You can get the update-images.py file from my associated GitHub repo. The idea for this script, and some parts of it, come from a similar script used in GitLab Pages for the Hugo example, found here.

rrelease = requests.get(GITHUB_API_REPOS + '/gohugoio/hugo/releases/latest')
if rrelease.status_code != 200:
    print('Failed to get Hugo latest release from GitHub')

release = rrelease.json()
print(f'Hugo Latest version is {release["name"]}')

rftrelease = requests.get(NPMREGISTRY.format('firebase-tools'))
if rftrelease.status_code != 200:
    print('Failed to get Firebase-tools latest release from registry.npmjs')

ftrelease = rftrelease.json()
print(f'Firebase-tools latest version is {ftrelease["latest"]}')

The update script first checks the current releases of Hugo and Firebase-tools from the GitHub repo and the npm registry respectively. Then it checks the available images in our GCR for both Hugo and Firebase-tools:

imagedata = gcrimagedata(GCRIMGFILE)
frbimgdata = gcrimagedata(GCRFRBIMGFILE)

Then it sets appropriate flags, depending on the comparison of the current versions to the existing versions in our GCR:

hgupdate = False
if imagedata:
    hgupdate = compare_version_tags(imagedata, release['name'][1:], 'Hugo')
else:
    hgupdate = True

frbupdate = False
if frbimgdata:
    frbupdate = compare_version_tags(frbimgdata, ftrelease['latest'], 'Firebase-tools')
else:
    frbupdate = True
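The core decision here — is the latest upstream release newer than the newest tag already in GCR? — can be sketched in shell with a version-aware sort. This is just an equivalent illustration of what the Python script’s compare_version_tags helper does, not its actual code:

```shell
#!/usr/bin/env bash
# Prints "update" if the latest release is strictly newer than the tag
# we already have, "current" otherwise. sort -V compares version strings.
needs_update() {
    local have="$1" latest="$2"
    if [ "$have" = "$latest" ]; then
        echo "current"
    elif [ "$(printf '%s\n%s\n' "$have" "$latest" | sort -V | tail -n1)" = "$latest" ]; then
        echo "update"
    else
        echo "current"   # local tag is somehow newer; nothing to do
    fi
}

needs_update "0.52.0" "0.53.0"   # prints: update
needs_update "0.53.0" "0.53.0"   # prints: current
```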

If our images are up to date with the current versions, the script exits; otherwise it starts the update for either or both images:

if (not hgupdate) and (not frbupdate):
    sys.exit(0)

if hgupdate:
    dockerfile = get_dockerfile(DOCKERFILE)

if frbupdate:
    frbdockerfile = get_dockerfile(FRBDOCKERFILE)

Finally, it commits the changes to the local repo, creates the relevant update config files for the next build-steps, and creates a new repo tag to mark the current update:

repo_commit_changes(hgupdate, frbupdate, release['name'][1:], ftrelease['latest'])

if hgupdate:
    write_notify(UPDATEFILE, release['name'][1:])

if frbupdate:
    write_notify(FRTUPDATEFILE, ftrelease['latest'])

create_repo_tag(hgupdate, frbupdate, release['name'][1:], ftrelease['latest'])

These are all the files you need to add to your Hugo project directory to make Cloud Build work as intended.

Step 5: Encrypting FIREBASE_TOKEN with Cloud KMS

You can use encrypted resources, like files or variables, to pass authorization tokens to your build steps. Cloud KMS allows you to encrypt and decrypt resources and also to easily manage access to, and rotation of, your encryption keys. To encrypt our FIREBASE_TOKEN we need to create a KeyRing and a CryptoKey; KeyRings are just groupings of CryptoKeys. If you have not already enabled the Cloud KMS API, you must enable it to proceed with this step. The following requires the use of gcloud commands in the CLI; the instructions are based on the usage instructions of the firebase-community-builder.


# create a keyring for cloudbuilder-related keys
gcloud kms keyrings create cloudbuilder --location global

# create a key for the firebase token
gcloud kms keys create firebase-token --location global --keyring cloudbuilder --purpose encryption

# create the encrypted token
echo -n $FIREBASE_TOKEN | gcloud kms encrypt \
  --plaintext-file=- \
  --ciphertext-file=- \
  --location=global \
  --keyring=cloudbuilder \
  --key=firebase-token | base64

The last command reads the token from the FIREBASE_TOKEN environment variable, so first set it to the token you generated in step 2 with the firebase login:ci command (e.g. export FIREBASE_TOKEN="<your token>"). The gcloud commands refer to the currently set project, so you must have set the project you created in step 2 prior to running the above commands (hint: use gcloud config set project [YOUR_PROJECT_ID]). The output of the last command is the encrypted FIREBASE_TOKEN string, which you can use to replace [YOUR_ENCRYPTED_FIREBASE_TOKEN] in cloudbuild.yaml.

Step 6: Creating and populating our Cloud Source Repository

Creating a Source Repository as our remote repo will allow us to push our files and create automatic build triggers for our Continuous Deployment pipeline. To create a new repository in Cloud Source Repositories, you must have the Cloud Source Repositories API enabled. The gcloud commands refer to the currently set project, so you must have set the project you created in step 2 prior to running the following commands (hint: use gcloud config set project [YOUR_PROJECT_ID]). Then at your terminal run:

gcloud source repos create [CLOUD_SOURCE_REPOSITORY_NAME]

Replace [CLOUD_SOURCE_REPOSITORY_NAME] with the name you choose for your new Cloud Source Repository. Then, from your local project’s root directory, run:

git config credential.'https://source.developers.google.com'.helper gcloud.sh
git remote add origin https://source.developers.google.com/p/[PROJECT_NAME]/r/[REPOSITORY_NAME]

In the above, replace [PROJECT_NAME] with the name of your GCP project and [REPOSITORY_NAME] with the name of the new Cloud Source Repository you just created. If you have already configured another remote for your local repo, you could change origin to google, to keep both. If you have your local files committed, you can now push them to your new remote:

git push --all origin

When finished you can open Source View in Cloud Console to view the code you just uploaded.

Step 7: Build triggers - Build Automation

To automatically initiate your build whenever changes are pushed to the build source, you can use a build trigger. We are going to set up a build trigger that initiates a build whenever a tag starting with rel- is pushed to the build source. If we used a trigger based on changes to the master branch, it could lead to a build loop whenever the third build-step pushed changes to our Cloud Source Repository. This approach also allows you to push commits without building or deploying your website.
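The trigger matches tag names against the regex rel-.*; you can preview locally which tag names would fire a build with a quick grep check. This is only an approximation of the trigger's matching, done on your own machine:

```shell
#!/usr/bin/env bash
# Check candidate tag names against the trigger regex rel-.*
for tag in rel-0.01 rel-2019-01 v0.01 release-1; do
    if echo "$tag" | grep -qE '^rel-.*'; then
        echo "$tag -> triggers a build"
    else
        echo "$tag -> ignored"
    fi
done
```

Only rel-0.01 and rel-2019-01 match; v0.01 and release-1 are ignored, so ordinary tags never deploy.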

To create a new build trigger open Build Triggers page in the GCP Console:

  1. Select your project if you have not done so already, and click Add trigger.
  2. Select Cloud Source Repository for your build source.
  3. Click Continue.
  4. Select the desired repository, then click Continue.

On the trigger setting page, enter the following:

  • Name is optional, add what you like.
  • Trigger type, select Tag
  • Tag (regex), type rel-.*
  • Build configuration, select cloudbuild.yaml
  • cloudbuild.yaml location, should have /cloudbuild.yaml
  • Click Save

You now have a build trigger setup that automates your build pipeline.

Step 8: cloudbuild service account permissions

A service account is a special Google account that executes builds on your behalf. The Cloud Build service account is of the form [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com. The place to view your project’s service accounts is the IAM menu in your GCP Console (you have to choose a project if one is not already selected). Your Cloud Build service account is automatically created when you enable the Cloud Build API and is given the Cloud Build Service Account role by default. For our use case we need to add two more roles to the cloudbuild service account:

  1. Cloud KMS CryptoKey Decrypter
  2. Source Repository Writer

To grant these roles to the Cloud Build service account:

  1. Open the IAM menu in your GCP Console.
  2. Choose a project if one is not already selected.
  3. In the list of members, look for your Cloud Build service account ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com).
  4. Click the pencil icon in that row.
  5. Click Add another role.
  6. In the pop-up that opens, first select the Cloud service in the left column and then the specific role for that service in the right column. For example, search the left column for Cloud KMS and then select Cloud KMS CryptoKey Decrypter in the right column.
  7. Repeat 5, 6 to add the second role.
  8. Click Save.

Having set up access permissions for your Cloud Build service account, you are now all set to trigger your build.

Triggering a build

Whenever you would like to trigger a build, the only thing you have to do is create a new tag in your local repository and push it to your remote:

git tag -a rel-0.01 -m "release/v0.01"
git push origin rel-0.01

Your build should now have started. Your site should be deployed to Firebase hosting in a couple of minutes, and your new images should be stored in your project’s Container Registry. If you would like to check the status of your builds, you can do so by opening the Cloud Build > build history menu. There you can see your full build history. If you click on a specific build, you can see its details, which is very useful for debugging your build procedure if something goes wrong. If your images are still up to date at your next build, you will also notice faster build times, as Google’s builders will use your stored images.

If you would like to push a tag together with a commit, having committed and created a new tag, you could use:

git push --follow-tags

It should be noted here that when your build successfully pushes a commit to your Source Repository (the changes will be in the Dockerfile versions), you should account for that in your local repo. If you have committed changes already, you could:

git pull --rebase

or just git pull after your build finishes and before you make any changes.

Side note: In my GitHub repo, you can find an example site with sample content, using the theme Coder by Luiz F. A. de Prá, that you can use to test the build files quickly with a basic working Hugo project. The sample Hugo project includes .scss style-sheets to test the extended Hugo image functionality.

Enjoy coding!

Above opinions and any mistakes are my own. I am not affiliated in any way with the companies or organizations mentioned above. The code samples provided are licensed under the Apache 2.0 License, and the rest of the content of this page is licensed under the Creative Commons Attribution 3.0 License, except where noted otherwise.
