Self-Hosted Control Plane

Overview

A Self-Hosted Control Plane enables you to manage and run your jobs, pipelines, and builds in a scalable environment. This guide walks you through setting up the control plane, creating queues for runners, and integrating BuildKit if needed.

Prerequisites

  • Ensure that your Connector setup is complete before proceeding.

  • A Kubernetes cluster is required. If you don’t already have one, you can follow the steps in this guide to set one up.
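The steps below invoke several command-line tools. As a quick preflight check, you can confirm they are all installed (the tool list is an assumption based on the commands used later in this guide):

```shell
# Preflight check: report any required CLI that is not on PATH.
# (Tool list assumed from the commands used later in this guide.)
missing=""
for tool in eksctl kubectl aws helm; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "install before continuing:$missing"
else
  echo "all required tools found"
fi
```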

Steps to Set Up Self-Hosted Control Plane

1. (Optional) Set Up a Kubernetes Cluster

A Kubernetes cluster is required to orchestrate your data plane. If you don’t have one, follow the steps below to create one with eksctl; if you’d rather not operate a cluster yourself, you can contact us for an end-to-end managed service.

Replace the AWS account ID, region, and Karpenter namespace placeholders in the configuration below, then save it as cluster.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cicd-cluster
  region: ${AWS_DEFAULT_REGION}
  version: "1.31"
  tags:
    karpenter.sh/discovery: cicd-cluster

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: "${KARPENTER_NAMESPACE}"
    serviceAccountName: karpenter
    roleName: cicd-cluster-karpenter
    permissionPolicyARNs:
    - arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-cicd-cluster

iamIdentityMappings:
- arn: "arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-cicd-cluster"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes
  ## If you intend to run Windows workloads, the kube-proxy group should be specified.
  # For more information, see https://github.com/aws/karpenter/issues/5099.
  # - eks:kube-proxy-windows

managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2
  name: cicd-cluster-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10

addons:
- name: eks-pod-identity-agent

Now, run the following command to create the Kubernetes cluster:

eksctl create cluster -f cluster.yaml

Once the Kubernetes cluster is ready, install the Karpenter CloudFormation stack to provision the necessary resources for Karpenter.

curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v1.31/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml  > cloudformation.yaml \
&& aws cloudformation deploy \
  --stack-name "Karpenter-cicd-cluster" \
  --template-file cloudformation.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=cicd-cluster"

Unless your AWS account has already been onboarded to EC2 Spot, create the service-linked role for Spot to avoid the ServiceLinkedRoleCreationNotPermitted error:

aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true

2. Create the Control Plane

Once your connector is ready, you can proceed to create the control plane. This serves as the central management hub for your jobs and pipelines.

3. Deploy the Data Plane

Next, deploy the data plane into your Kubernetes cluster. Replace CONTROL_PLANE_ID and API_KEY with the values for your control plane, then run:

helm repo add saas https://helm.thesaas.company

helm upgrade -n saas saas saas/thesaas-company \
  --install \
  --create-namespace \
  --set id=CONTROL_PLANE_ID \
  --set secret.data.token=API_KEY

4. Check Control Plane Details

After the control plane is created, verify its status and details to ensure it’s operational.

5. Create a Queue for Runners

Now that the control plane is ready, you need to create a queue for the runner. This queue will manage the execution of tasks. Depending on your use cases, you can create multiple queues.

6. Monitor Queue Health

Once the queue is created, monitor its health status. A healthy queue will automatically register available runners.

7. Create a Pipeline Using the Queue

Next, create a pipeline and assign it to the queue by specifying the queue’s name in the runs_on field.
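As an illustration, a pipeline definition referencing the queue might look like the sketch below. The field names, steps, and queue name here are hypothetical; use your platform’s actual pipeline schema and the name of the queue you created:

```yaml
# Hypothetical pipeline sketch -- adjust field names to your pipeline schema.
name: build-and-test
runs_on: my-runner-queue   # must match the name of the queue created above
steps:
  - name: build
    run: make build
  - name: test
    run: make test
```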

8. (Optional) Set Up BuildKit for Multi-Arch Builds

If your project requires BuildKit for Docker builds, especially for multi-architecture support, you'll need to create a BuildKit instance. Note that multi-architecture BuildKit is available only for enterprise users.
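For reference, the client side of a multi-architecture build typically has the shape sketched below with docker buildx. The builder name, image tag, and platform list are illustrative, and the BuildKit instance itself is created through your control plane, so treat this only as an example of what such a build invocation looks like:

```shell
# Hypothetical client-side sketch of a multi-arch build; the builder name,
# image tag, and platform list are illustrative placeholders.
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="registry.example.com/app:latest"

# With docker buildx, creating a BuildKit-backed builder and running a
# multi-arch build and push looks like:
#   docker buildx create --name multiarch --use
#   docker buildx build --platform "$PLATFORMS" -t "$IMAGE" --push .
echo "would build $IMAGE for: $PLATFORMS"
```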

Conclusion

With the Self-Hosted Control Plane, runner queues, and optional BuildKit setup in place, your system is ready to manage and run jobs efficiently. You can scale by adding more queues, runners, or BuildKit instances as your project needs grow.

If you have any issues or require further assistance, please refer to the support documentation or contact your system administrator.
