clusterctl for Developers

This document describes how to use clusterctl during the development workflow.

Prerequisites

  • A Cluster API development setup (go, git, kind v0.9 or newer, Docker v19.03 or newer, etc.)
  • A local clone of the Cluster API GitHub repository
  • A local clone of the GitHub repositories for the providers you want to install

Build clusterctl

From the root of the local copy of Cluster API, you can build the clusterctl binary by running:

make clusterctl

The output of the build is saved in the bin/ folder; in order to use it you have to specify the full path, create an alias, or copy it into a folder under your $PATH.
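
For example, assuming the repository is cloned under ~/cluster-api (adjust the paths to your setup), either of the following would work:

# Option 1: create an alias (add it to your shell profile to make it permanent)
alias clusterctl-dev="$HOME/cluster-api/bin/clusterctl"

# Option 2: copy the binary into a folder that is already on your $PATH
cp ~/cluster-api/bin/clusterctl /usr/local/bin/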

Use local artifacts

By default, clusterctl uses the artifacts published in the providers' repositories; during the development workflow you may want to use artifacts from your local workstation instead.

There are two options to do so:

  • Use the overrides layer, when you want to override a single published artifact with a local one (see the sketch after this list).
  • Create a local repository, when you want to avoid using published artifacts and use the local ones instead.
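
As a sketch of the first option, clusterctl reads override files from a per-provider, per-version folder under $HOME/.cluster-api/overrides/; the provider name, version, and file name below are only an example:

# Assumed layout: $HOME/.cluster-api/overrides/<provider-name>/<version>/<file>
mkdir -p ~/.cluster-api/overrides/infrastructure-aws/v0.5.0
cp ./out/infrastructure-components.yaml ~/.cluster-api/overrides/infrastructure-aws/v0.5.0/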

If you want to create local artifacts, follow these instructions:

Build artifacts locally

In order to build artifacts for the CAPI core provider, the kubeadm bootstrap provider and the kubeadm control plane provider:

make docker-build REGISTRY=gcr.io/k8s-staging-cluster-api PULL_POLICY=IfNotPresent

In order to build the Docker provider artifacts:

make -C test/infrastructure/docker docker-build REGISTRY=gcr.io/k8s-staging-cluster-api PULL_POLICY=IfNotPresent
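
To double-check that the images were built as expected, you can list them with docker; the grep pattern below matches the REGISTRY value used above:

docker images | grep k8s-staging-cluster-api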

Create a clusterctl-settings.json file

Next, create a clusterctl-settings.json file and place it in your local copy of Cluster API. This file will be used by create-local-repository.py. Here is an example:

{
  "providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm", "infrastructure-aws", "infrastructure-docker"],
  "provider_repos": ["../cluster-api-provider-aws"]
}

providers (Array[]String, default=[]): A list of the providers to enable. See available providers for more details.

provider_repos (Array[]String, default=[]): A list of paths to all the providers you want to use. Each provider must have a clusterctl-settings.json file describing how to build the provider assets.

Create the local repository

Run the create-local-repository hack from the root of the local copy of Cluster API:

cmd/clusterctl/hack/create-local-repository.py

The script reads from the source folders for the providers you want to install, builds the providers' assets, and places them in a local repository folder located under $HOME/.cluster-api/dev-repository/. Additionally, the command output provides you with the clusterctl init command with all the necessary flags. The output should be similar to:

clusterctl local overrides generated from local repositories for the cluster-api, bootstrap-kubeadm, control-plane-kubeadm, infrastructure-docker, infrastructure-aws providers.
in order to use them, please run:

clusterctl init \
   --core cluster-api:v0.3.8 \
   --bootstrap kubeadm:v0.3.8 \
   --control-plane kubeadm:v0.3.8 \
   --infrastructure aws:v0.5.0 \
   --infrastructure docker:v0.3.8 \
   --config ~/.cluster-api/dev-repository/config.yaml

As you might notice, the command uses the $HOME/.cluster-api/dev-repository/config.yaml config file, which contains all the settings required to make clusterctl use the local repository.
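
If you want to inspect what the script produced, you can list the repository folder; the exact entries depend on the providers you enabled, so the listing below is only indicative:

ls ~/.cluster-api/dev-repository/
# bootstrap-kubeadm  cluster-api  config.yaml  control-plane-kubeadm  infrastructure-aws  infrastructure-docker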

Available providers

The following providers are currently defined in the script:

  • cluster-api
  • bootstrap-kubeadm
  • control-plane-kubeadm
  • infrastructure-docker

More providers can be added by editing the clusterctl-settings.json in your local copy of Cluster API; please note that each provider_repo should have its own clusterctl-settings.json file describing how to build the provider assets, e.g.:

{
  "name": "infrastructure-aws",
  "config": {
    "componentsFile": "infrastructure-components.yaml",
    "nextVersion": "v0.5.0"
  }
}

Create a kind management cluster

kind can provide a Kubernetes cluster to be used as a management cluster. See Install and/or configure a Kubernetes cluster for more information.
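
For example, a minimal kind cluster for this purpose could be created as follows; note that the Docker provider needs access to the host Docker socket from within the cluster, and the extraMounts configuration below is an assumption you may need to adjust to your setup:

cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
kind create cluster --config ./kind-cluster-with-extramounts.yaml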

Before running clusterctl init, you must ensure all the required images are available in the kind cluster.

This is always the case for images published in an image repository like Docker Hub or gcr.io, but it is not the case for images built locally; in this case, you can use kind load to move the images built locally, e.g.

kind load docker-image gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/capd-manager-amd64:dev

to make the controller images available for the kubelet in the management cluster.
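
If you want to verify that the images actually landed in the cluster, you can list the images known to the kind node's container runtime; the node name below assumes the default kind cluster name:

docker exec kind-control-plane crictl images | grep cluster-api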

When the kind cluster is ready and all the required images are in place, run the clusterctl init command generated by the create-local-repository.py script.

Optionally, you may want to check that the components are running properly. The exact list of components depends on which providers you have initialized. Below is an example of the output with the Docker provider installed:

kubectl get deploy -A | grep "cap\|cert"
capd-system                         capd-controller-manager                         1/1     1            1           25m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           25m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           25m
capi-system                         capi-controller-manager                         1/1     1            1           25m
cert-manager                        cert-manager                                    1/1     1            1           27m
cert-manager                        cert-manager-cainjector                         1/1     1            1           27m
cert-manager                        cert-manager-webhook                            1/1     1            1           27m

Additional Notes for the Docker Provider

Select the appropriate Kubernetes version

When selecting the --kubernetes-version, ensure that the corresponding kindest/node image is available.

For example, if there is no image on Docker Hub for version vX.Y.Z, then creating a CAPD workload cluster with --kubernetes-version=vX.Y.Z will fail. See issue 3795 for more details.
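
A quick way to check whether an image exists for a given version is to try pulling the corresponding tag before creating the workload cluster; the version below is just an example:

# This fails if no kindest/node image has been published for the given version
docker pull kindest/node:v1.19.1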

Get the kubeconfig for the workload cluster

The following command gets the kubeconfig file for connecting to a workload cluster:

clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
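
You can then use the retrieved kubeconfig to access the workload cluster, for example:

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes

Note that on macOS you may first need to apply the fix described in the next section.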

Fix kubeconfig (when using Docker on macOS)

When using Docker on macOS, you will need to take a couple of additional steps to get the correct kubeconfig for a workload cluster created with the Docker provider:

# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig
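
To confirm that the rewrite worked, you can print the server address now stored in the kubeconfig; it should point at 127.0.0.1 and the port exposed by the load balancer:

kubectl --kubeconfig=./capi-quickstart.kubeconfig config view -o jsonpath='{.clusters[0].cluster.server}'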