GCP Self Hosting (GKE)
This section explains how to set up the resources needed to self-host Zerve on a pre-existing GKE cluster in a Google Cloud Platform project.
General Infrastructure
At this time, GCP Self-Hosting is possible only with an existing GKE cluster.
GCS Bucket
Zerve requires object storage to store block state and user files. We recommend creating a dedicated bucket for this purpose using the installation steps below.
Artifact Registry
Zerve needs a repository in the Artifact Registry to store Docker images.
IAM Identities
Application Service Account
An identity that represents the Zerve application in your GCP project.
Zerve will use Service Account Impersonation to obtain short-lived credentials to perform operations within your GCP project, such as scheduling compute jobs or managing canvas storage.
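As an illustration of the mechanism (nothing you need to run for the installation), any caller that holds roles/iam.serviceAccountTokenCreator on this account can mint short-lived credentials for it with the gcloud CLI, using the application service account created in the setup steps below:
gcloud auth print-access-token \
  --impersonate-service-account "zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com"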
Execution Service Account
An identity for compute jobs that Zerve schedules to execute code blocks.
This service account can be used to grant users' code blocks access to other GCP resources in your organization.
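For example (illustrative only; the target project and role are placeholders you would choose yourself), granting this service account read access to BigQuery in another project would let executed code blocks query that data:
gcloud projects add-iam-policy-binding OTHER_PROJECT_ID \
  --member "serviceAccount:zerve-execution-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role roles/bigquery.dataViewer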
Build Service Account
An identity for build jobs.
This service account will be used to grant build jobs access to the GCS bucket and docker repository.
Google Kubernetes Engine
Zerve can use your existing GKE cluster to schedule build and compute jobs. This cluster does not have to be in the same project as the rest of Zerve's infrastructure.
Cluster requirements:
Version 1.28 or higher.
Workload Identity enabled.
DNS-based control plane endpoint enabled.
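A quick way to check these requirements against an existing cluster (CLUSTER_NAME and CLUSTER_LOCATION are placeholders) is:
gcloud container clusters describe CLUSTER_NAME \
  --location CLUSTER_LOCATION \
  --format 'value(currentMasterVersion,workloadIdentityConfig.workloadPool,controlPlaneEndpointsConfig.dnsEndpointConfig.endpoint)'
An empty value for either of the last two fields generally means Workload Identity or the DNS-based endpoint is not enabled.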
Setup Instructions
Cloud infrastructure
You can use the gcloud CLI to provision the necessary infrastructure. You can do this in a separate GCP project within your organization.
Set some common env vars used throughout the guide:
export ZERVE_ORG_ID=
export PROJECT_ID=
export REGION=
export BUCKET_NAME="zerve-$ZERVE_ORG_ID"
If using GKE for compute, set env vars for the project ID and node identity of the GKE cluster:
export COMPUTE_PROJECT_ID=
export COMPUTE_NODES_SA=
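If you are unsure which service account your nodes run as, one way to look it up (CLUSTER_NAME and CLUSTER_LOCATION are placeholders; individual node pools may override this value) is:
gcloud container clusters describe CLUSTER_NAME \
  --location CLUSTER_LOCATION \
  --format 'value(nodeConfig.serviceAccount)'
A value of default means the nodes use the Compute Engine default service account (PROJECT_NUMBER-compute@developer.gserviceaccount.com).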
Point your gcloud CLI to the general infrastructure project:
gcloud config set project "$PROJECT_ID"
Set up service accounts:
gcloud iam service-accounts create zerve-app-sa --display-name "Zerve application service account"
gcloud iam service-accounts create zerve-execution-sa --display-name "Zerve execution service account"
gcloud iam service-accounts create zerve-build-sa --display-name "Zerve build service account"
gcloud iam service-accounts add-iam-policy-binding "zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--member "serviceAccount:[email protected]" \
--role roles/iam.serviceAccountTokenCreator
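Optionally, read back the application service account's IAM policy to confirm the impersonation binding was applied:
gcloud iam service-accounts get-iam-policy \
  "zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --format 'yaml(bindings)'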
Set up the bucket:
gcloud storage buckets create "gs://$BUCKET_NAME" \
--location "$REGION" \
--public-access-prevention \
--uniform-bucket-level-access \
--enable-hierarchical-namespace
gcloud storage buckets update "gs://$BUCKET_NAME" --cors-file /dev/stdin <<- EOF
[
  {
    "origin": ["https://app.zerve.ai"],
    "responseHeader": ["*"],
    "method": ["GET", "PUT", "POST"],
    "maxAgeSeconds": 3600
  }
]
EOF
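You can verify the CORS configuration was applied by reading it back from the bucket:
gcloud storage buckets describe "gs://$BUCKET_NAME" --format "default(cors_config)"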
gcloud storage buckets add-iam-policy-binding "gs://$BUCKET_NAME" \
--member "serviceAccount:zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/storage.objectAdmin"
gcloud storage managed-folders create "gs://$BUCKET_NAME/build"
gcloud storage managed-folders add-iam-policy-binding "gs://$BUCKET_NAME/build" \
--member "serviceAccount:zerve-build-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role roles/storage.objectViewer
gcloud storage managed-folders create "gs://$BUCKET_NAME/canvases"
gcloud storage managed-folders add-iam-policy-binding "gs://$BUCKET_NAME/canvases" \
--member "serviceAccount:zerve-execution-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role roles/storage.objectAdmin
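To confirm both managed folders were created, you can list them:
gcloud storage managed-folders list "gs://$BUCKET_NAME"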
Set up the docker repository:
gcloud artifacts repositories create zerve \
--location "$REGION" \
--repository-format docker \
--mode standard-repository
gcloud artifacts repositories add-iam-policy-binding zerve \
--location "$REGION" \
--member "serviceAccount:zerve-build-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/artifactregistry.repoAdmin"
Set up logging permissions:
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:zerve-execution-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/logging.logWriter"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:zerve-build-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/logging.logWriter"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/logging.viewer"
If using GKE for compute:
Allow k8s service accounts to impersonate Zerve's IAM service accounts:
gcloud iam service-accounts add-iam-policy-binding "zerve-execution-sa@$PROJECT_ID.iam.gserviceaccount.com" \ --member "serviceAccount:$COMPUTE_PROJECT_ID.svc.id.goog[zerve/executor]" \ --role roles/iam.workloadIdentityUser
gcloud iam service-accounts add-iam-policy-binding "zerve-build-sa@$PROJECT_ID.iam.gserviceaccount.com" \ --member "serviceAccount:$COMPUTE_PROJECT_ID.svc.id.goog[zerve/builder]" \ --role roles/iam.workloadIdentityUser
Allow k8s nodes to pull images from the docker repository:
gcloud artifacts repositories add-iam-policy-binding zerve \
  --location "$REGION" \
  --member "serviceAccount:$COMPUTE_NODES_SA" \
  --role "roles/artifactregistry.reader"
Allow Zerve to connect to your GKE cluster:
gcloud iam roles create ClustersConnect \
  --project "$COMPUTE_PROJECT_ID" \
  --title "Clusters connect role" \
  --description "Allow connecting to clusters" \
  --permissions "container.clusters.connect"
gcloud projects add-iam-policy-binding "$COMPUTE_PROJECT_ID" \
  --member "serviceAccount:zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "projects/$COMPUTE_PROJECT_ID/roles/ClustersConnect"
Set up RBAC in your cluster by installing our helm chart:
cat <<- EOF > /tmp/values.yaml
builder:
  # Associate "builder" k8s SA with the corresponding IAM SA
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: zerve-build-sa@$PROJECT_ID.iam.gserviceaccount.com
executor:
  # Associate "executor" k8s SA with the corresponding IAM SA
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: zerve-execution-sa@$PROJECT_ID.iam.gserviceaccount.com
scheduler:
  # Grant Zerve application full access to the namespace
  user:
    enabled: true
    name: zerve-app-sa@$PROJECT_ID.iam.gserviceaccount.com
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: admin
EOF
helm install zerve oci://public.ecr.aws/x8w6c8k3/helm/zerve -n zerve -f /tmp/values.yaml --create-namespace
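As a minimal sanity check (assuming the chart's default service account names, builder and executor, which are also used in the Workload Identity bindings above), confirm the annotated service accounts exist in the zerve namespace:
kubectl -n zerve get serviceaccounts builder executor -o yaml | grep iam.gke.io/gcp-service-account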
Zerve Organization Self-Hosting Settings
Navigate to your organization's self-hosting settings in the Zerve app.
Fill out the form with the following values:
Project ID: the project ID where Zerve's general infrastructure resides, e.g. flying-banana-412312-r9.
Region: the region where Zerve's general infrastructure resides, e.g. europe-west2.
Bucket Name: the name of the GCS bucket, e.g. zerve.
Service Account: the email of the application service account, e.g. [email protected].
Docker Repository: the address of the docker repository, e.g. europe-west2-docker.pkg.dev/flying-banana-412312-r9/zerve.
If using GKE for compute, check the Kubernetes box under Compute options and fill out the following values:
Namespace: the namespace where Zerve's helm chart was installed, e.g. zerve.
Endpoint: the DNS-based control plane endpoint of the GKE cluster, e.g. https://gke-723981544496.europe-west2.gke.goog. To find it using the gcloud CLI, run the following command:
gcloud container clusters describe CLUSTER_NAME \
  --location "$REGION" \
  --format 'value(controlPlaneEndpointsConfig.dnsEndpointConfig.endpoint)'
Service Account Token: leave empty.
Certificate Authority Data: leave empty.