# Get Started with Enrolling a Kubernetes Cluster

This guide demonstrates how to enroll a Kubernetes cluster as a Teleport resource by deploying the Teleport Kubernetes Service on the Kubernetes cluster you want to protect.

## How it works

In this scenario, the Teleport Kubernetes Service pod detects that it is running on Kubernetes and enrolls the Kubernetes cluster automatically. The following diagram provides a simplified overview of this deployment scenario with the Teleport Kubernetes Service running on the Kubernetes cluster:

![Enroll a Kubernetes cluster](/docs/assets/images/enroll-kubernetes-93c9cdf0e25cf7e8b352cf58a6874837.png)

For information about other ways to enroll and discover Kubernetes clusters, see [Registering Kubernetes Clusters with Teleport](https://goteleport.com/docs/enroll-resources/kubernetes-access/register-clusters.md).

## Prerequisites

- A running Teleport cluster. If you want to get started with Teleport, [sign up](https://goteleport.com/signup) for a free trial or [set up a demo environment](https://goteleport.com/docs/get-started/deploy-community.md).

- The `tctl` and `tsh` clients.

  **Installing the `tctl` and `tsh` clients**

  1. Determine the version of your Teleport cluster. The `tctl` and `tsh` clients must be at most one major version behind your Teleport cluster version. Send a GET request to the Proxy Service at `/v1/webapi/find` and use a JSON query tool to obtain your cluster version. Replace teleport.example.com:443 with the web address of your Teleport Proxy Service:

     ```
     $ TELEPORT_DOMAIN=teleport.example.com:443
     $ TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')"
     ```
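
     You can sanity-check the `jq` filter against a sample payload before querying your Proxy Service. The response shape here is illustrative and trimmed; the real endpoint returns additional fields:

     ```
     $ echo '{"server_version":"18.7.3"}' | jq -r '.server_version'
     18.7.3
     ```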

  2. Follow the instructions for your platform to install `tctl` and `tsh` clients:

     **Mac**

     Download the signed macOS .pkg installer for Teleport, which includes the `tctl` and `tsh` clients:

     ```
     $ curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg
     ```

     In Finder, double-click the `pkg` file to begin installation.

     ---

     DANGER

     Using Homebrew to install Teleport is not supported. The Teleport package in Homebrew is not maintained by Teleport and we can't guarantee its reliability or security.

     ---

     **Windows - PowerShell**

     ```
     $ curl.exe -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-windows-amd64-bin.zip
     ```

     Unzip the archive and move the `tctl` and `tsh` clients to your `%PATH%`.

     NOTE: Do not place the `tctl` and `tsh` clients in the System32 directory, as this can cause issues when using WinSCP. Use `%SystemRoot%` (`C:\Windows`) or `%USERPROFILE%` (`C:\Users\<username>`) instead.

     **Linux**

     All Teleport Linux packages include the `tctl` and `tsh` clients. For more options (including RPM/DEB packages and downloads for i386/ARM/ARM64), see our [installation page](https://goteleport.com/docs/installation.md).

     ```
     $ curl -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ tar -xzf teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ cd teleport
     $ sudo ./install
     Teleport binaries have been copied to /usr/local/bin
     ```

- [Kubernetes](https://kubernetes.io) >= v1.17.0

- [Helm](https://helm.sh) >= 3.4.2

  Verify that Helm and Kubernetes are installed and up to date.

  ```
  $ helm version
  version.BuildInfo{Version:"v3.4.2"}

  $ kubectl version
  Client Version: version.Info{Major:"1", Minor:"17+"}
  Server Version: version.Info{Major:"1", Minor:"17+"}
  ```
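
  If you want to script the Helm version check, `sort -V` can compare version strings; a minimal sketch (the `3.9.0` value stands in for whatever `helm version` reports on your machine):

  ```
  $ min=3.4.2; have=3.9.0
  $ [ "$(printf '%s\n' "$min" "$have" | sort -V | head -n1)" = "$min" ] && echo "Helm $have meets the $min minimum"
  Helm 3.9.0 meets the 3.4.2 minimum
  ```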

- To check that you can connect to your Teleport cluster, sign in with `tsh login`, then verify that you can run `tctl` commands using your current credentials. For example, run the following command, replacing teleport.example.com with the domain name of the Teleport Proxy Service in your cluster and email\@example.com with your Teleport username:
  ```
  $ tsh login --proxy=teleport.example.com --user=email@example.com
  $ tctl status
  Cluster  teleport.example.com
  Version  18.7.3
  CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678
  ```
  If you can connect to the cluster and run the `tctl status` command, you can use your current credentials to run subsequent `tctl` commands from your workstation. If you host your own Teleport cluster, you can also run `tctl` commands on the computer that hosts the Teleport Auth Service for full permissions.

## Step 1/3. Create RBAC resources

To authenticate to a Kubernetes cluster using Teleport, you must have a Teleport role that grants access to the Kubernetes cluster you plan to interact with.

In this step, we show you how to create a Teleport role called `kube-access` that enables a user to send requests to any Teleport-protected Kubernetes cluster as a member of the `viewers` group. The Teleport Kubernetes Service impersonates the `viewers` group when proxying requests from the user.

1. Create a file called `kube-access.yaml` with the following content:

   ```
   kind: role
   metadata:
     name: kube-access
   version: v7
   spec:
     allow:
       kubernetes_labels:
         '*': '*'
       kubernetes_resources:
         - kind: '*'
           namespace: '*'
           name: '*'
           verbs: ['*']
       kubernetes_groups:
       - viewers
     deny: {}

   ```
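
   The wildcards above grant access to every enrolled Kubernetes cluster and every resource in them, which suits a demo. In production you would typically scope the role more narrowly. As an illustrative sketch (the `env: dev` label, `default` namespace, and role name are hypothetical), a read-only role limited to clusters labeled `env: dev` might look like:

   ```
   kind: role
   metadata:
     name: kube-access-dev
   version: v7
   spec:
     allow:
       kubernetes_labels:
         env: dev
       kubernetes_resources:
         - kind: '*'
           namespace: 'default'
           name: '*'
           verbs: ['get', 'list', 'watch']
       kubernetes_groups:
       - viewers
     deny: {}
   ```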

2. Apply your changes:

   ```
   $ tctl create -f kube-access.yaml
   ```

   ---

   TIP

   You can also create and edit roles using the Web UI. Go to **Access -> Roles** and click **Create New Role** or pick an existing role to edit.

   ---

3. Assign the `kube-access` role to your Teleport user by running the appropriate commands for your authentication provider:

   **Local User**

   1. Retrieve your local user's roles as a comma-separated list:

      ```
      $ ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')
      ```
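
      To see what this filter produces, you can run it against a sample `tsh status -f json` payload (trimmed to the fields the filter reads):

      ```
      $ echo '{"active":{"roles":["access","editor"]}}' | jq -r '.active.roles | join(",")'
      access,editor
      ```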

   2. Edit your local user to add the new role:

      ```
      $ tctl users update $(tsh status -f json | jq -r '.active.username') \
        --set-roles "${ROLES?},kube-access"
      ```

   3. Sign out of the Teleport cluster and sign in again to assume the new role.

   **GitHub**

   1. Open your `github` authentication connector in a text editor:

      ```
      $ tctl edit github/github
      ```

   2. Edit the `github` connector, adding `kube-access` to the `teams_to_roles` section.

      The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

      Here is an example:

      ```
        teams_to_roles:
          - organization: octocats
            team: admins
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes by saving and closing the file in your editor.

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

   **SAML**

   1. Retrieve your `saml` configuration resource:

      ```
      $ tctl get --with-secrets saml/mysaml > saml.yaml
      ```

      Note that the `--with-secrets` flag adds the value of `spec.signing_key_pair.private_key` to the `saml.yaml` file. Because this key contains a sensitive value, you should remove the saml.yaml file immediately after updating the resource.

   2. Edit `saml.yaml`, adding `kube-access` to the `attributes_to_roles` section.

      The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

      ```
        attributes_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes:

      ```
      $ tctl create -f saml.yaml
      ```

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

   **OIDC**

   1. Retrieve your `oidc` configuration resource:

      ```
      $ tctl get oidc/myoidc --with-secrets > oidc.yaml
      ```

      Note that the `--with-secrets` flag adds the value of `spec.signing_key_pair.private_key` to the `oidc.yaml` file. Because this key contains a sensitive value, you should remove the oidc.yaml file immediately after updating the resource.

   2. Edit `oidc.yaml`, adding `kube-access` to the `claims_to_roles` section.

      The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

      ```
        claims_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes:

      ```
      $ tctl create -f oidc.yaml
      ```

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

While you have authorized the `kube-access` role to access Kubernetes clusters as a member of the `viewers` group, this group does not yet have any permissions within the Kubernetes cluster itself. To grant those permissions, create a Kubernetes `RoleBinding` or `ClusterRoleBinding` for the `viewers` group.

1. Create a file called `viewers-bind.yaml` with the following contents:

   ```
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: viewers-crb
   subjects:
   - kind: Group
     # Bind the group "viewers" to the kubernetes_groups assigned in the "kube-access" role
     name: viewers
     apiGroup: rbac.authorization.k8s.io
   roleRef:
     kind: ClusterRole
     # "view" is a default ClusterRole that grants read-only access to resources
     # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
     name: view
     apiGroup: rbac.authorization.k8s.io

   ```

2. Apply the `ClusterRoleBinding` with `kubectl`:

   ```
   $ kubectl apply -f viewers-bind.yaml
   ```
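
The `ClusterRoleBinding` above grants the `viewers` group read access across all namespaces. To limit the group to a single namespace instead, you can create a namespaced `RoleBinding` that references the same `view` ClusterRole; a sketch, with `default` as an illustrative namespace:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: viewers-rb
  namespace: default
subjects:
- kind: Group
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```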

Your Teleport user now has permissions to assume membership in the `viewers` group when accessing your Kubernetes cluster, and the `viewers` group now has permissions to view resources in the cluster. The next step is to deploy the Teleport Kubernetes Service in the cluster to proxy user requests.

## Step 2/3. Follow guided enrollment instructions

In this step, you will deploy the Teleport Kubernetes Service on your Kubernetes cluster by copying a script from the Teleport Web UI and running it on your terminal.

1. Open the Teleport Web UI and sign in using your administrative account.

2. Click **Enroll New Resource**.

3. Type all or part of **Kubernetes** in the Search field to filter the resource types displayed, then click **Kubernetes**.

4. Copy the command to add the `teleport-agent` chart repository and paste it in a terminal on your workstation.

5. Type `teleport-agent` for the namespace where you will deploy the Teleport Kubernetes Service, enter the display name to use when connecting to this cluster, then click **Next**.

   After you click **Next**, Teleport generates a script to configure and enroll the Kubernetes cluster as a resource in the Teleport cluster.

6. Copy the command displayed in the Teleport Web UI and run it in your terminal.

   The Teleport Web UI displays "Successfully detected your new Kubernetes cluster" as confirmation that your cluster is enrolled. When you see this message, click **Next** to continue.

## Step 3/3. Test Kubernetes access

Now that you have deployed the Teleport Kubernetes Service on your Kubernetes cluster and enrolled the cluster as a Teleport resource, confirm that you can access your Kubernetes cluster as a member of the `viewers` group.

If you followed the previous steps in this guide, the **Set Up Access** view populates the **Kubernetes Groups** field with `viewers`.

To set up and test access:

1. Click **Next**.

2. Specify the `teleport-agent` namespace, the Kubernetes `viewers` group from the previous step, and your Teleport username.

3. Copy and run the commands displayed in the Teleport Web UI to interact with the Kubernetes cluster and verify access through Teleport. Alternatively, run the commands shown below:

   Authenticate to your Teleport cluster, replacing teleport.example.com with your cluster domain and admin\@example.com with your Teleport username:

   ```
   $ tsh login --proxy=teleport.example.com:443 --auth=local --user=admin@example.com teleport.example.com
   ```

   List Kubernetes clusters available for you to access:

   ```
   $ tsh kube ls
   ```

   Retrieve credentials to access your Kubernetes cluster, replacing Kubernetes-cluster-name with your Kubernetes cluster name:

   ```
   $ tsh kube login Kubernetes-cluster-name
   ```

   The Teleport Kubernetes Service proxies `kubectl` commands:

   ```
   $ kubectl get pods -n teleport-agent
   ```

   You should see the Teleport Kubernetes Service pod you deployed earlier:

   ```
   NAME               READY   STATUS    RESTARTS   AGE
   teleport-agent-0   1/1     Running   0          8m6s

   ```

4. Click **Finish**.

## Next steps

This guide demonstrated how to enroll a Kubernetes cluster by running the Teleport Kubernetes Service within the Kubernetes cluster.

- For information about discovering Kubernetes clusters hosted on cloud providers, see [Kubernetes Cluster Discovery](https://goteleport.com/docs/enroll-resources/auto-discovery/kubernetes.md).
- To learn about other ways you can register a Kubernetes cluster with Teleport, see [Registering Kubernetes Clusters with Teleport](https://goteleport.com/docs/enroll-resources/kubernetes-access/register-clusters.md).
- For a complete list of the parameters you can configure in the `teleport-kube-agent` helm chart, see the [Chart Reference](https://goteleport.com/docs/reference/helm-reference/teleport-kube-agent.md).
