Introduction
Sometimes your Google Cloud Run app needs to communicate with or consume other services. This can be as simple as reading an object in Cloud Storage, sending an email, or connecting to a database. What identity does Cloud Run use? Can I change that identity? How do I use this identity to secure my services?
In this article, I will cover these questions. We will create a service account, create and lock down a Cloud Storage Bucket, encrypt our secrets with Cloud KMS and deploy a Cloud Run instance that securely gets and decrypts secrets from Cloud Storage.
The default Cloud Run identity is the Compute Engine default service account. Unless you have changed this service account, it has the roles/editor role, which grants vast permissions across Google Cloud Platform. This service account is also shared with other services such as Compute Engine and Cloud Functions.
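If you want to see the service accounts in your project, including this default account, one quick way (purely for inspection; this is not part of the build scripts in this article) is:

gcloud iam service-accounts list

The Compute Engine default service account is listed as PROJECT_NUMBER-compute@developer.gserviceaccount.com.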
In the latest Cloud Run alpha release, Google has added a new command-line option, --service-account. Update June 11, 2019 – this command-line option is now Beta. This option allows you to specify a service account to use as the Cloud Run identity, which means you can use a different identity for each of your Cloud Run services. This is a big feature for Cloud Run. You do not need to create and download keys for this service account, so there is no key leakage or key management. This is inherently powerful and secure.
When storing parameters and secrets, it is very important to limit who/what can access these secrets. By using a unique identity, you can lock down and secure access to secrets.
This article just touches upon Cloud Run Identity. Other Google Cloud services, such as Pub/Sub, can use the Cloud Run Identity for authorization. You can also use this identity in your calls to your own services. In another article, I will discuss the low-level details of Cloud Run identity and how to verify identities.
In this article we will:
- Create a new service account. No permissions are assigned to this service account.
- Create a KMS Keyring and Key.
- Add the service account as an IAM member to the KMS key for decryption.
- Create a Cloud Storage Bucket and lock down access to only Project Owners and this service account.
- Add the service account as an IAM member to the storage bucket members list.
- Encrypt our secrets with KMS and copy to Cloud Storage.
- Configure Cloud Run to use this service account as an identity for service-to-service access.
- Our application in Cloud Run will access the encrypted secrets in Cloud Storage, decrypt using KMS and display for review.
Star of the Show
Google added the command-line option --service-account to the alpha version of the gcloud run deploy command. I cannot find when this feature was added to the Cloud SDK. I am testing with Cloud SDK 238.0.0, released May 28, 2019. As with all alpha and beta commands, do not use them in production.
Update: June 11, 2019. This command-line option is now in the beta commands. This was released in Cloud SDK version 250.0.0. Run the command gcloud components update to get the latest version.
This command-line option supports specifying the service account to use for the Cloud Run identity. When you request ADC (Application Default Credentials), this will be the service account used for your OAuth tokens. This feature means that you can create a service account with no permissions, no keys, no JSON file, etc. Then add this service account email address to the services you want to consume securely. Examples in this article are Cloud Storage and KMS.
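As a quick illustration, here is a minimal Python sketch of what ADC looks like from inside the container. It assumes the google-auth package is installed; on Cloud Run, the credentials come from the metadata server, so no key file is involved:

# Minimal sketch: show that ADC resolves credentials on Cloud Run without
# any key file. Assumes the google-auth package (pip install google-auth).
import google.auth
from google.auth.transport.requests import Request

# On Cloud Run, default() returns credentials backed by the metadata server
# for the service account supplied with --service-account.
credentials, project_id = google.auth.default()

# Obtain an access token (the Google client libraries normally do this for you).
credentials.refresh(Request())

print("Project:", project_id)
print("Access token acquired:", credentials.token is not None)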
Google’s description of --service-account:
Email address of the IAM service account associated with the revision of the service. The service account represents the identity of the running revision, and determines what permissions the revision has. If not provided, the revision will use the project’s default service account.
Getting Started
This article assumes that you have the CLI gcloud installed and configured with credentials. This article is CLI based and we will not be using the Google Cloud Console. Google’s GUI is excellent, but I prefer the CLI as I can create scripts, create better documentation, etc. Sometimes there are options that are not available in the GUI and you must use the CLI. Step 7 uses a new Cloud Run feature, --service-account, which is only available in the alpha and beta versions of the CLI.
Verify that the correct project is the default project:
gcloud config list core/project
If the correct project is not displayed, use this command to change the default project:
gcloud config set core/project [MY_PROJECT_ID]
You can list the projects in your account. Some security configurations will not allow you to list projects. In that case, you will need to specify the default project manually as shown above.
gcloud projects list
Enable the Cloud Run Service
gcloud services enable run.googleapis.com
Set the default region for Cloud Run. Today the choice is obvious, but more regions will be announced soon:
gcloud config set run/region us-central1
For Cloud Run on GKE:
gcloud config set run/cluster [CLUSTER]
gcloud config set run/cluster_location [CLUSTER_LOCATION]
If you develop for BOTH Cloud Run and Cloud Run on GKE, you cannot set the properties as described above, because they will conflict. Instead, supply the --region parameter as needed on the gcloud command line for Cloud Run, and supply the --cluster and --cluster-location parameters as needed on the gcloud command line for Cloud Run on GKE.
Software Requirements
- Google CLI gcloud – https://cloud.google.com/sdk/
- Python 3.x – https://www.python.org/downloads/
- jq – https://stedolan.github.io/jq/download/
- sed – https://www.gnu.org/software/sed/
- git – https://git-scm.com/downloads
Download Git Repository
I have published the files for this article on GitHub.
License: MIT License
Clone my repository to your system:
git clone https://github.com/jhanley-com/google-cloud-run-identity.git
My repository has build scripts for Linux and for Windows. I tested Linux with the Google Cloud Shell and Windows with Windows 10 Professional.
Provided that you have correctly set up the CLI gcloud, the build scripts will do everything automatically.
Linux setup:
- env.sh sets up the build environment. Review the settings in this file. You can override the CLI settings for some items.
- setup.sh and cleanup.sh build and destroy everything. There are smaller scripts that do specific things such as deploy to Cloud Run.
- Execute chmod +x *.sh to make each script executable.
Windows setup:
- env.bat sets up the build environment. Review the settings in this file. You can override the CLI settings for some items.
- setup.bat and cleanup.bat build and destroy everything. There are smaller scripts that do specific things such as deploy to Cloud Run.
Tip: Change to the scripts-linux or scripts-windows directory. For Linux, execute ./setup.sh. For Windows, execute setup.bat. Everything will be created, built and deployed.
The env.sh/env.bat shell script creates several environment variables that are used by other scripts. Edit env.sh/env.bat to tailor to your environment.
set GCP_PROJECT_ID=development-123456
set GCP_PROJECT_NUM=318067891239
set GCP_REGION=us-central1
set GCP_SERVICE_NAME=cloudrun-identity
set GCP_IMAGE_NAME=cloudrun-identity
set GCP_SA_NAME=cloud-run-identity
set GCP_SA=cloud-run-identity@development-123456.iam.gserviceaccount.com
set GCS_BUCKET_ROLE=legacyBucketReader
set GCS_OBJECT_ROLE=legacyObjectReader
set GCP_KMS_KEYRING=cloudrun-secrets
set GCP_KMS_KEYNAME=cloudrun-identity
set GCP_KMS_ROLE=roles/cloudkms.cryptoKeyDecrypter
set GCP_KMS_KEY_ID=projects/development-123456/locations/global/keyRings/cloudrun-secrets/cryptoKeys/cloudrun-identity
Step 1 – Create the Service Account
For this project, we will create a new service account. This service account will provide the identity for Cloud Run. Cloud Storage and KMS will use this identity for authorization.
The env.sh/env.bat file defines the name for the service account:
set GCP_SA_NAME=cloud-run-identity
set GCP_SA=%GCP_SA_NAME%@%GCP_PROJECT_ID%.iam.gserviceaccount.com
Create the service account:
gcloud iam service-accounts create %GCP_SA_NAME%
Step 2 – Create the KMS Keyring and Key
Create the KMS Keyring. Keyrings cannot be deleted, so this is a one-time operation.
gcloud kms keyrings create %GCP_KMS_KEYRING% --location global
Create the KMS Key.
gcloud kms keys create %GCP_KMS_KEYNAME% ^
    --location global ^
    --keyring %GCP_KMS_KEYRING% ^
    --purpose encryption
Step 3 – Setup KMS IAM Policy
We will now add the service account to the KMS policy for the keyring and key that we created. This will allow Cloud Run to decrypt data.
gcloud kms keys add-iam-policy-binding %GCP_KMS_KEYNAME% ^
    --location global ^
    --keyring %GCP_KMS_KEYRING% ^
    --member serviceAccount:%GCP_SA% ^
    --role %GCP_KMS_ROLE%
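If you want to double-check that the binding took effect, you can display the key's IAM policy (optional; not part of the build scripts):

gcloud kms keys get-iam-policy %GCP_KMS_KEYNAME% ^
    --location global ^
    --keyring %GCP_KMS_KEYRING%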
Step 4 – Encrypt the Secrets
For this article, I created a config.json file. This is to simulate storing database credentials:
{
    "DB_HOST": "127.0.0.1",
    "DB_PORT": "3306",
    "DB_USER": "Roberts",
    "DB_PASS": "Not-A-Secret"
}
Encrypt config.json using Cloud KMS and store the encrypted results in config.enc:
call gcloud kms encrypt ^
    --location=global ^
    --keyring %GCP_KMS_KEYRING% ^
    --key=%GCP_KMS_KEYNAME% ^
    --plaintext-file=config.json ^
    --ciphertext-file=config.enc
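Optionally, you can confirm that the ciphertext decrypts correctly before deploying. Note that this prints the plaintext secrets to your terminal:

gcloud kms decrypt ^
    --location=global ^
    --keyring %GCP_KMS_KEYRING% ^
    --key=%GCP_KMS_KEYNAME% ^
    --ciphertext-file=config.enc ^
    --plaintext-file=-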
Step 5 – Create the Cloud Storage Bucket
For this project, we will create a new storage bucket. This bucket will hold our secret file config.json.
The env.sh/env.bat file defines the name for the bucket:
set GCS_BUCKET_NAME=%GCP_PROJECT_ID%-cloudrun-identity
Change the default ACL for this bucket to private:
gsutil defacl set private gs://%GCS_BUCKET_NAME%
Enable versioning on this bucket. Versioning keeps noncurrent versions of overwritten or deleted objects, so data cannot easily be lost.
call gsutil versioning set on gs://%GCS_BUCKET_NAME%
Our secrets file is config.json. Copy this file to the bucket:
gsutil -h "Content-Type: application/json" cp config.json gs://%GCS_BUCKET_NAME%/config.json
Change the ACL for this object to private:
gsutil acl set private gs://%GCS_BUCKET_NAME%/config.json
We will assign two IAM roles on the bucket, allowing this service account to access both the bucket and the objects stored in the bucket. We are only granting read access. Note that we are granting these roles on the bucket, not on the project; the service account itself has no project-level IAM roles.
set GCS_BUCKET_ROLE=legacyBucketReader
set GCS_OBJECT_ROLE=legacyObjectReader
Assign the IAM role legacyBucketReader to the bucket:
gsutil iam ch serviceAccount:%GCP_SA%:%GCS_BUCKET_ROLE% gs://%GCS_BUCKET_NAME%/
Assign the IAM role legacyObjectReader to the config.json file:
gsutil iam ch serviceAccount:%GCP_SA%:%GCS_OBJECT_ROLE% gs://%GCS_BUCKET_NAME%/config.json
If you add additional objects to this bucket, repeat the last command to assign rights to access the object.
Summary. We have created a new bucket, copied our secret config.json file to the bucket and locked down permissions to access anything in this bucket. At this point, the only identities that can access this bucket are Project Owners and the new service account.
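To review the resulting bucket-level bindings (optional):

gsutil iam get gs://%GCS_BUCKET_NAME%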
Step 6 – Build the Docker Image
The env.sh/env.bat file defines the name for the image:
set GCP_IMAGE_NAME=cloudrun-identity
Use Cloud Build to build the image:
gcloud builds submit --tag gcr.io/%GCP_PROJECT_ID%/%GCP_IMAGE_NAME%
Step 7 – Deploy Image to Cloud Run:
call gcloud beta run deploy %GCP_SERVICE_NAME% ^
    --region %GCP_REGION% ^
    --image gcr.io/%GCP_PROJECT_ID%/%GCP_IMAGE_NAME% ^
    --allow-unauthenticated ^
    --set-env-vars BUCKET_NAME=%GCS_BUCKET_NAME% ^
    --service-account=%GCP_SA%
Step 8 – Verify that everything works
When the deploy command completes, you will see a message similar to the following. Make note of the service URL:
Service [cloudrun-identity] revision [cloudrun-identity-00001] has been deployed and is serving traffic at https://cloudrun-identity-x5yqob7qaq-uc.a.run.app
Open a web browser. Enter the URL. You should see a screen similar to this:
If you do not see a screen similar to this, but instead see not_defined for each parameter or a stack trace error message, go to the Debugging section.
Step 9 – Cleanup
Once you are finished with this example, execute cleanup.sh/cleanup.bat. This script will delete the bucket, the service account, the IAM permissions from KMS and the Cloud Run service.
The cleanup script does not delete the following items:
- The image stored in the Google Container Registry
- The KMS Keyring
- The KMS Key
Additional Thoughts
In this article, we encrypted our secrets file config.json and copied it to Cloud Storage. The example Python code loads the secrets file on every HTTP request. A better approach is to load the secrets file once when the container starts. This will reduce the response time for HTTP requests. However, this brings up the issue of how you rotate credentials.
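Here is a minimal Python sketch of the load-once approach. It assumes recent versions of the google-cloud-storage and google-cloud-kms client libraries; the object name config.enc and the KMS_KEY_ID environment variable are illustrative, not necessarily what the repository uses:

# Sketch: load and decrypt the secrets once at container start-up instead of
# on every request. Object name and KMS_KEY_ID environment variable are
# illustrative.
import json
import os

from google.cloud import kms, storage

BUCKET_NAME = os.environ["BUCKET_NAME"]
KMS_KEY_ID = os.environ["KMS_KEY_ID"]  # full resource name of the KMS key

def load_secrets():
    # Download the encrypted file using the Cloud Run identity (ADC).
    ciphertext = storage.Client().bucket(BUCKET_NAME).blob("config.enc").download_as_bytes()
    # Decrypt with Cloud KMS; the service account only needs decrypt rights.
    response = kms.KeyManagementServiceClient().decrypt(
        request={"name": KMS_KEY_ID, "ciphertext": ciphertext}
    )
    return json.loads(response.plaintext)

# Runs once when the container starts, not on every HTTP request.
SECRETS = load_secrets()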
There are several strategies that come to mind to rotate credentials:
- Unless necessary, do not immediately invalidate the current credentials when rotating. Instead, create new credentials that overlap the old credentials for a period of time.
- An option is to add “smarts” to the code so that when the current credentials no longer work, reload the secrets from Cloud Storage and try again. This will provide an automated retry when credential rotation occurs.
- When the Cloud Run container starts, it will load whatever credentials are stored in the secrets file. Update the secrets file with new credentials.
- Once the new credentials are in place, issue a new Cloud Run deploy command. This will cause all Cloud Run instances to start with a new version loading the new secrets file.
- After a short period of time, invalidate the old credentials.
- An option is to keep track of what time the current credentials were loaded inside the container. Perhaps every 15 minutes, reload the credentials. This will reduce the requirement for issuing a new deploy command. You would update the secrets file in Cloud Storage, wait for 15 minutes and then delete the old credentials. A sketch of this approach follows this list.
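Here is a sketch of that time-based reload, building on the load_secrets() helper from the previous sketch; the interval and names are illustrative:

# Sketch: reload the decrypted secrets when the cached copy is older than a
# chosen interval (for example, 15 minutes). Names are illustrative.
import time

RELOAD_INTERVAL_SECONDS = 15 * 60
_cache = {"secrets": None, "loaded_at": 0.0}

def get_secrets():
    # Refresh the cache if it is empty or too old; otherwise reuse it.
    if _cache["secrets"] is None or time.time() - _cache["loaded_at"] > RELOAD_INTERVAL_SECONDS:
        _cache["secrets"] = load_secrets()
        _cache["loaded_at"] = time.time()
    return _cache["secrets"]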
The service account we created has no IAM permissions other than being able to access one KMS key and one object in Cloud Storage. Cloud Run cannot access any other Google Cloud services such as Pub/Sub. Review what services your Cloud Run instance requires access to and add the corresponding permissions to the service account.
Debugging
The first step in debugging is to open the Stackdriver Logging section of the Google Cloud Console. My sample code logs messages, including error messages, to Stackdriver. This will help you pinpoint what is going wrong.
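If you prefer the CLI over the console, a command along these lines should list recent log entries for the service (the filter is an assumption based on the service name used in this article):

gcloud logging read "resource.type=cloud_run_revision AND resource.labels.service_name=%GCP_SERVICE_NAME%" --limit 20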
KMS Key Problems:
If you see a message like the following, it means that either the KMS key does not exist or the IAM permissions are missing.
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://cloudkms.googleapis.com/v1/projects/mystic-advice-218620/locations/global/keyRings/cloudrun-secrets/cryptoKeys/cloudrun-1:decrypt?alt=json returned "Permission 'cloudkms.cryptoKeyVersions.useToDecrypt' denied for resource 'projects/mystic-advice-218620/locations/global/keyRings/cloudrun-secrets/cryptoKeys/cloudrun-1'.">
Additional Information
- Google’s Seth Vargo wrote an article “Secrets in Serverless”. His article is excellent and focuses on Cloud Functions, which share many common configuration features with Cloud Run. Seth’s article started me on my journey to write this article.
- Ahmet’s Cloud Run FAQ is a must read and reference for everything Cloud Run.
- For a good 5-minute video introduction to Cloud KMS keys: Data Encryption and Managed Encryption Keys – Take5
Credits
I write free articles about technology. Recently, I learned about Pexels.com which provides free images. The image in this article is courtesy of Pixabay at Pexels.
I design software for enterprise-class systems and data centers. My background is 30+ years in storage (SCSI, FC, iSCSI, disk arrays, imaging) and virtualization, and 20+ years in identity, security, and forensics.
For the past 14+ years, I have been working in the cloud (AWS, Azure, Google, Alibaba, IBM, Oracle) designing hybrid and multi-cloud software solutions. I am an MVP/GDE with several.
September 22, 2019 at 9:07 PM
Hi John, thanks, I came across this article from StackOverflow and am wondering how you go about locking down a Cloud Run site to the public, but allowing you the developer to view the app. I find it odd that we can’t just have the service account given a role of invoker and be able to see the app we’re building on the web. I must not understand how this works on GCP, but just seems odd that --allow-unauthenticated can be passed as a flag but not something like --allow-myself-to-see-my-own-damn-app.
September 23, 2019 at 1:52 PM
Hi David,
The key is to create an Identity Token and include that token in the HTTP “Authorization: Bearer” header. Cloud Run then validates that the Identity Token has the correct Cloud Run Invoker role.
To experiment, you can generate an Identity Token with this command:
gcloud beta auth print-identity-token
You can use curl to test
curl -H "Authorization: Bearer REPLACE_WITH_IDENTITY_TOKEN" https://example.com
If you can write down more details on your challenge, I will look at creating a new article with source code to show how to do it with Cloud Run. The same details apply to Cloud Functions as well.