# Teleport GKE Auto-Discovery

The Teleport Discovery Service can automatically register your Google Kubernetes Engine (GKE) clusters with Teleport. With Teleport Kubernetes Discovery, you can configure the Teleport Kubernetes Service and Discovery Service once, then create GKE clusters without needing to register them with Teleport after each creation.

In this guide, we will show you how to get started with Teleport Kubernetes Discovery for GKE.

## How it works

Teleport cluster auto-discovery involves two components:

1. The Teleport Discovery Service that watches for new clusters or changes to previously discovered clusters. It dynamically registers each discovered cluster as a `kube_cluster` resource in your Teleport cluster. It does not need connectivity to the clusters it discovers.
2. The Teleport Kubernetes Service that monitors the dynamic `kube_cluster` resources registered by the Discovery Service. It proxies communications between users and the cluster.

---

TIP

This guide presents the Discovery Service and Kubernetes Service running in the same process; however, both can run independently and on different machines.

For example, you can run an instance of the Kubernetes Service in the same private network as the clusters you want to register with your Teleport cluster, and an instance of the Discovery Service in any network you wish.

---

## Prerequisites

- A running Teleport cluster. If you want to get started with Teleport, [sign up](https://goteleport.com/signup) for a free trial or [set up a demo environment](https://goteleport.com/docs/get-started/deploy-community.md).

- The `tctl` and `tsh` clients.

  Installing `tctl` and `tsh` clients

  1. Determine the version of your Teleport cluster. The `tctl` and `tsh` clients must be at most one major version behind your Teleport cluster version. Send a GET request to the Proxy Service at `/v1/webapi/find` and use a JSON query tool to obtain your cluster version. Replace teleport.example.com:443 with the web address of your Teleport Proxy Service:

     ```
     $ TELEPORT_DOMAIN=teleport.example.com:443
     $ TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')"
     ```

  2. Follow the instructions for your platform to install `tctl` and `tsh` clients:

     **Mac**

     Download the signed macOS .pkg installer for Teleport, which includes the `tctl` and `tsh` clients:

     ```
     $ curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg
     ```

     In Finder, double-click the `.pkg` file to begin installation.

     ---

     DANGER

     Using Homebrew to install Teleport is not supported. The Teleport package in Homebrew is not maintained by Teleport and we can't guarantee its reliability or security.

     ---

     **Windows - PowerShell**

     ```
     $ curl.exe -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-windows-amd64-bin.zip
     ```

     Unzip the archive and move the `tctl` and `tsh` clients to your `%PATH%`.

     ---

     NOTE

     Do not place the `tctl` and `tsh` clients in the `System32` directory, as this can cause issues when using WinSCP. Use `%SystemRoot%` (`C:\Windows`) or `%USERPROFILE%` (`C:\Users\<username>`) instead.

     ---

     **Linux**

     All of the Teleport binaries in Linux installations include the `tctl` and `tsh` clients. For more options (including RPM/DEB packages and downloads for i386/ARM/ARM64) see our [installation page](https://goteleport.com/docs/installation.md).

     ```
     $ curl -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ tar -xzf teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ cd teleport
     $ sudo ./install
     Teleport binaries have been copied to /usr/local/bin
     ```

- A Google Cloud account with permissions to create GKE clusters, IAM roles, and service accounts.
- The `gcloud` CLI tool. Follow the [Google Cloud documentation page](https://cloud.google.com/sdk/docs/install-sdk) to install and authenticate to `gcloud`.
- One or more GKE clusters running. Your Kubernetes user must have permissions to create `ClusterRole` and `ClusterRoleBinding` resources in your clusters.
- A Linux host where you will run the Teleport Discovery and Kubernetes services. You can run this host on any cloud provider or even use a local machine.
- To check that you can connect to your Teleport cluster, sign in with `tsh login`, then verify that you can run `tctl` commands using your current credentials. For example, run the following command, assigning teleport.example.com to the domain name of the Teleport Proxy Service in your cluster and email@example.com to your Teleport username:
  ```
  $ tsh login --proxy=teleport.example.com --user=email@example.com
  $ tctl status
  Cluster  teleport.example.com
  Version  18.7.3
  CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678
  ```
  If you can connect to the cluster and run the `tctl status` command, you can use your current credentials to run subsequent `tctl` commands from your workstation. If you host your own Teleport cluster, you can also run `tctl` commands on the computer that hosts the Teleport Auth Service for full permissions.

## Step 1/3. Obtain Google Cloud credentials

The Teleport Discovery Service and Kubernetes Service use a Google Cloud service account to discover GKE clusters and manage access from Teleport users. In this step, you will create a service account and download a credentials file for the Teleport Discovery Service.

### Create an IAM role for the Discovery Service

The Teleport Discovery Service needs permissions to retrieve GKE clusters associated with your Google Cloud project.

To grant these permissions, create a file called `GKEKubernetesAutoDisc.yaml` with the following content:

```
title: GKE Cluster Discoverer
description: "Get and list GKE clusters"
stage: GA
includedPermissions:
- container.clusters.get
- container.clusters.list

```

Create the role, assigning the `--project` flag to the name of your Google Cloud project:

```
$ gcloud iam roles create GKEKubernetesAutoDisc \
  --project=google-cloud-project \
  --file=GKEKubernetesAutoDisc.yaml
```
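
To confirm the role was created with the expected permissions, you can describe it and check that the two `container.clusters` permissions appear in the output (`google-cloud-project` is the same placeholder as above):

```
$ gcloud iam roles describe GKEKubernetesAutoDisc --project=google-cloud-project
```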

### Create an IAM role for the Kubernetes Service

The Teleport Kubernetes Service needs Google Cloud IAM permissions in order to forward user traffic to your GKE clusters.

Create a file called `GKEAccessManager.yaml` with the following content:

```
title: GKE Cluster Access Manager
description: "Manage access to GKE clusters"
stage: GA
includedPermissions:
- container.clusters.connect
- container.clusters.get
- container.clusters.impersonate
- container.pods.get
- container.selfSubjectAccessReviews.create
- container.selfSubjectRulesReviews.create

```

Create the role, assigning the `--project` flag to the name of your Google Cloud project. If you receive a prompt indicating that certain permissions are in `TESTING`, enter `y`:

```
$ gcloud iam roles create GKEAccessManager \
  --project=google-cloud-project \
  --file=GKEAccessManager.yaml
```

### Create a service account

Now that you have declared roles for the Discovery Service and Kubernetes Service, create a service account so you can assign these roles.

Run the following command to create a service account called `teleport-discovery-kubernetes`:

```
$ gcloud iam service-accounts create teleport-discovery-kubernetes \
  --description="Teleport Discovery Service and Kubernetes Service" \
  --display-name="teleport-discovery-kubernetes"
```
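
If you want to confirm the account was created, you can list the service accounts in your project and look for `teleport-discovery-kubernetes` (replace `google-cloud-project` with your project name):

```
$ gcloud iam service-accounts list --project=google-cloud-project
```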

Grant the roles you defined earlier to your service account, assigning `PROJECT_ID` to the name of your Google Cloud project:

```
$ PROJECT_ID=google-cloud-project
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
   --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
   --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
   --member="serviceAccount:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
   --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"
```
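
To verify the bindings, you can print the roles granted to the service account; the output should include both custom roles you created earlier:

```
$ gcloud projects get-iam-policy ${PROJECT_ID?} \
   --flatten="bindings[].members" \
   --filter="bindings.members:teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com" \
   --format="table(bindings.role)"
```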

Deploying the Kubernetes Service and Discovery Service separately?

Create a service account for each service:

```
$ gcloud iam service-accounts create teleport-discovery-service \
  --description="Teleport Discovery Service" \
  --display-name="teleport-discovery-service"
$ gcloud iam service-accounts create teleport-kubernetes-service \
  --description="Teleport Kubernetes Service" \
  --display-name="teleport-kubernetes-service"
```

Grant the roles you defined earlier to your service account, assigning `PROJECT_ID` to the name of your Google Cloud project:

```
$ PROJECT_ID=google-cloud-project
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
   --member="serviceAccount:teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
   --role="projects/${PROJECT_ID?}/roles/GKEKubernetesAutoDisc"
$ gcloud projects add-iam-policy-binding ${PROJECT_ID?} \
   --member="serviceAccount:teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com" \
   --role="projects/${PROJECT_ID?}/roles/GKEAccessManager"
```

### Retrieve credentials for your Teleport services

Now that you have created a Google Cloud service account and attached roles to it, associate your service account with the Teleport Kubernetes Service and Discovery Service.

The process is different depending on whether you are deploying the Teleport Kubernetes Service and Discovery Service on Google Cloud or some other way (e.g., via Amazon EC2 or on a local network).

**Google Cloud**

Stop your VM so you can attach your service account to it:

```
$ gcloud compute instances stop vm-name --zone=google-cloud-zone
```

Attach your service account to the instance, assigning the name of your VM to vm-name and the name of your VM's zone to google-cloud-zone:

```
$ gcloud compute instances set-service-account vm-name \
   --service-account teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com \
   --zone google-cloud-zone \
   --scopes=cloud-platform
```

Running the Kubernetes and Discovery Services separately?

Stop each VM you plan to use to run the Teleport Kubernetes Service and Discovery Service.

Attach the `teleport-kubernetes-service` service account to the VM running the Kubernetes Service:

```
$ gcloud compute instances set-service-account ${VM1_NAME?} \
   --service-account teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com \
   --zone google-cloud-zone \
   --scopes=cloud-platform
```

Attach the `teleport-discovery-service` service account to the VM running the Discovery Service:

```
$ gcloud compute instances set-service-account ${VM2_NAME?} \
   --service-account teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com \
   --zone google-cloud-zone \
   --scopes=cloud-platform
```

---

WARNING

You must use the `scopes` flag in the `gcloud compute instances set-service-account` command. Otherwise, your Google Cloud VM will fail to obtain the required authorization to access the GKE API.

---

Once you have attached the service account, restart your VM:

```
$ gcloud compute instances start vm-name --zone google-cloud-zone
```
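
To confirm the service account is now attached, you can describe the instance (using the same vm-name and google-cloud-zone placeholders as above):

```
$ gcloud compute instances describe vm-name \
   --zone google-cloud-zone \
   --format="value(serviceAccounts[].email)"
```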

**Other Platform**

Download a credentials file for the service account used by the Discovery Service and Kubernetes Service:

```
$ PROJECT_ID=google-cloud-project
$ gcloud iam service-accounts keys create google-cloud-credentials.json \
    --iam-account=teleport-discovery-kubernetes@${PROJECT_ID?}.iam.gserviceaccount.com
```

Move your credentials file to the host running the Teleport Discovery Service and Kubernetes Service at the path `/var/lib/teleport/google-cloud-credentials.json`. We will use this credentials file when running the services later in this guide.
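
For example, assuming you can reach the host over SSH as `user@teleport-host` (both placeholder values), you could copy the file into place like so:

```
$ scp google-cloud-credentials.json user@teleport-host:
$ ssh user@teleport-host "sudo mkdir -p /var/lib/teleport && \
    sudo mv google-cloud-credentials.json /var/lib/teleport/"
```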

Deploying the Kubernetes Service and Discovery Service separately?

Download separate credentials files for each service:

```
$ PROJECT_ID=google-cloud-project
$ gcloud iam service-accounts keys create discovery-service-credentials.json \
    --iam-account=teleport-discovery-service@${PROJECT_ID?}.iam.gserviceaccount.com
$ gcloud iam service-accounts keys create kube-service-credentials.json \
    --iam-account=teleport-kubernetes-service@${PROJECT_ID?}.iam.gserviceaccount.com
```

Move `discovery-service-credentials.json` to the host running the Teleport Discovery Service at the path `/var/lib/teleport/google-cloud-credentials.json`.

Move `kube-service-credentials.json` to the host running the Teleport Kubernetes Service at the path `/var/lib/teleport/google-cloud-credentials.json`.

We will use these credentials files when running these services later in this guide.

## Step 2/3. Configure Teleport to discover GKE clusters

Now that you have created a service account that can discover GKE clusters and a cluster role that can manage access, configure the Teleport Discovery Service to detect GKE clusters and the Kubernetes Service to proxy user traffic.

### Install Teleport

Install Teleport on the host you are using to run the Kubernetes Service and Discovery Service:

To install a Teleport Agent on your Linux server:

The recommended installation method is the cluster install script. It will select the correct version, edition, and installation mode for your cluster.

1. Assign teleport.example.com:443 to your Teleport cluster hostname and port, but not the scheme (https://).

2. Run your cluster's install script:

   ```
   $ curl "https://teleport.example.com:443/scripts/install.sh" | sudo bash
   ```

### Create a join token

The Teleport Discovery Service and Kubernetes Service require an authentication token in order to join the cluster. Generate one by running the following `tctl` command:

```
$ tctl tokens add --type=discovery,kube --format=text
abcd123-insecure-do-not-use-this
```

Copy the token (e.g., `abcd123-insecure-do-not-use-this` above) and save it in `/tmp/token` on the machine that will run the Discovery Service and Kubernetes Service, for example:

```
$ echo abcd123-insecure-do-not-use-this | sudo tee /tmp/token
abcd123-insecure-do-not-use-this
```
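
Before starting the services, you can optionally confirm that the token is still active, since tokens created with `tctl tokens add` expire after a default time to live:

```
$ tctl tokens ls
```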

Running the Kubernetes and Discovery Services separately?

Generate separate tokens for the Kubernetes Service and Discovery Service by running the following `tctl` commands:

```
$ tctl tokens add --type=discovery --format=text
efgh456-insecure-do-not-use-this
$ tctl tokens add --type=kube --format=text
ijkl789-insecure-do-not-use-this
```

Copy each token (e.g., `efgh456-insecure-do-not-use-this` and `ijkl789-insecure-do-not-use-this` above) and save it in `/tmp/token` on the machine that will run the appropriate service.

### Configure the Kubernetes Service and Discovery Service

On the host running the Kubernetes Service and Discovery Service, create a Teleport configuration file with the following content at `/etc/teleport.yaml`:

---

WARNING

The Discovery Service exposes a configuration parameter, `discovery_service.discovery_group`, that allows you to group discovered resources into different sets. This parameter prevents Discovery Service instances watching different sets of cloud resources from colliding with each other and deleting resources created by other instances.

When running multiple Discovery Services, you must ensure that each service is configured with the same `discovery_group` value if they are watching the same cloud resources or a different value if they are watching different cloud resources.

It is possible to run a mix of configurations in the same Teleport cluster: some Discovery Services can watch the same cloud resources while others watch different ones. For example, a four-agent high-availability configuration analyzing data from two different cloud accounts would run with the following configuration:

- 2 Discovery Services configured with `discovery_group: "prod"` polling data from Production account.
- 2 Discovery Services configured with `discovery_group: "staging"` polling data from Staging account.

---

```
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: "teleport.example.com:443"
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: false
discovery_service:
  enabled: true
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with my project ID
      tags:
        "*" : "*"
kubernetes_service:
  enabled: true
  resources:
  - labels:
      "*": "*"

```

Running the Kubernetes Service and Discovery Service on separate hosts?

Follow the instructions in this section with two configuration files. The configuration file you will save at `/etc/teleport.yaml` on the Kubernetes Service host will include the following:

```
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: false
kubernetes_service:
  enabled: true
  resources:
  - labels:
      "*": "*"

```

On the Discovery Service host, the file will include the following:

```
version: v3
teleport:
  join_params:
    token_name: "/tmp/token"
    method: token
  proxy_server: teleport.example.com:443
auth_service:
  enabled: false
proxy_service:
  enabled: false
ssh_service:
  enabled: false
discovery_service:
  enabled: true
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"] # replace with my project ID
      tags:
        "*" : "*"

```

Edit this configuration for your environment as explained below.

#### `proxy_server`

Replace `teleport.example.com:443` with the host and port of your Teleport Proxy Service (e.g., `mytenant.teleport.sh:443` for a Teleport Cloud tenant).

#### `discovery_service.gcp`

Each item in `discovery_service.gcp` is a **matcher** for Kubernetes clusters running on GKE. The Discovery Service periodically executes a request to the Google Cloud API based on each matcher to list GKE clusters. In this case, we have declared a single matcher.

Each matcher searches for clusters that match *all* properties of the matcher, i.e., that belong to the specified locations and projects and have the specified tags. The Discovery Service registers GKE clusters that match *any* configured matcher.

This means that if you declare the following two matchers, the Discovery Service will register clusters in project `myproj-dev` running in `us-east1`, as well as clusters in project `myproj-prod` running in `us-east2`, but *not* clusters in `myproj-dev` running in `us-east2`:

```
discovery_service:
  enabled: true
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["us-east1"]
      project_ids: ["myproj-dev"]
      tags:
        "*" : "*"
    - types: ["gke"]
      locations: ["us-east2"]
      project_ids: ["myproj-prod"]
      tags:
        "*" : "*"

```

#### `discovery_service.gcp[0].types`

Each matcher's `types` field must be set to an array with a single string value, `gke`.

#### `discovery_service.gcp[0].project_ids`

In your matcher, replace `myproject` with the ID of your Google Cloud project.

Ensure that the `project_ids` field follows these rules:

- It must include at least one value.
- It must not combine the wildcard character (`*`) with other values.

##### Examples of valid configurations

- `["p1", "p2"]`
- `["*"]`
- `["p1"]`

##### Example of an invalid configuration

- `["p1", "*"]`

#### `discovery_service.gcp[0].locations`

Each matcher's `locations` field contains an array of Google Cloud region or zone names that the matcher will search for GKE clusters. The wildcard character, `*`, configures the matcher to search all locations.

#### `discovery_service.gcp[0].tags`

The `tags` field consists of a map where each key is a string that represents the key of a tag, and each value is either a single string or an array of strings, representing one tag value or a list of tag values.

A wildcard key or value matches any tag key or value in your Google Cloud account. If you include a non-wildcard value, the matcher registers only the GKE clusters that have the provided tag.
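
For example, a matcher like the following (a hypothetical illustration; the `env` tag and its values are assumptions) would register only GKE clusters tagged `env: prod` or `env: staging`:

```
discovery_service:
  enabled: true
  discovery_group: "gke-myproject"
  gcp:
    - types: ["gke"]
      locations: ["*"]
      project_ids: ["myproject"]
      tags:
        "env": ["prod", "staging"]
```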

### Start the Kubernetes Service and Discovery Service

On the host where you will run the Kubernetes Service, execute the following commands, depending on:

- Whether you installed Teleport using a package manager or via a TAR archive
- Whether you are running the Discovery and Kubernetes Service on Google Cloud or another platform

**Google Cloud**

If you installed Teleport with a package manager, on the host where you will run the Teleport Kubernetes Service and Discovery Service, start the Teleport service:

```
$ sudo systemctl start teleport
```

If you installed Teleport with a TAR archive, on the host where you will run the Teleport Kubernetes Service and Discovery Service, create a systemd service configuration for Teleport, enable the Teleport service, and start Teleport:

```
$ sudo teleport install systemd -o /etc/systemd/system/teleport.service
$ sudo systemctl enable teleport
$ sudo systemctl start teleport
```

**Other platform**

If you installed Teleport with a package manager, the installation process created a configuration for the init system `systemd` to run Teleport as a daemon. This service reads environment variables from a file at the path `/etc/default/teleport`. Teleport's built-in Google Cloud client reads the credentials file at the location given by the `GOOGLE_APPLICATION_CREDENTIALS` variable. In this case:

1. Ensure that `/etc/default/teleport` has the following content:

   ```
   GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"

   ```

2. Start the Teleport service:

   ```
   $ sudo systemctl enable teleport
   $ sudo systemctl start teleport
   ```

If you installed Teleport using a TAR archive:

1. On the host where you are running the Teleport Discovery Service and Kubernetes Service, create a systemd configuration that you can use to run Teleport in the background:

   ```
   $ sudo teleport install systemd -o /etc/systemd/system/teleport.service
   $ sudo systemctl enable teleport
   ```

   This service reads environment variables from a file at the path `/etc/default/teleport`. Teleport's built-in Google Cloud client reads the credentials file at the location given by the `GOOGLE_APPLICATION_CREDENTIALS` variable.

2. Ensure that `/etc/default/teleport` has the following content:

   ```
   GOOGLE_APPLICATION_CREDENTIALS="/var/lib/teleport/google-cloud-credentials.json"

   ```

3. Start the Discovery Service and Kubernetes Service:

   ```
   $ sudo systemctl start teleport
   ```
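
However you started the services, you can confirm that they joined your Teleport cluster by checking the agent logs and listing connected agents from your workstation:

```
$ sudo journalctl -u teleport --no-pager | tail -n 20
$ tctl inventory status --connected
```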

## Step 3/3. Connect to your GKE cluster

### Allow access to your Kubernetes cluster

Ensure that you are in the correct Kubernetes context for the cluster you would like to enable access to:

```
$ kubectl config current-context
```

Using the wrong context?

Retrieve all available contexts:

```
$ kubectl config get-contexts
```

Switch to your context, replacing `CONTEXT_NAME` with the name of your chosen context:

```
$ kubectl config use-context CONTEXT_NAME
Switched to context CONTEXT_NAME
```

To authenticate to a Kubernetes cluster via Teleport, your Teleport user's roles must allow access as at least one Kubernetes user or group.

1. Retrieve a list of your current user's Teleport roles. The example below requires the `jq` utility for parsing JSON:

   ```
   $ CURRENT_ROLES=$(tsh status -f json | jq -r '.active.roles | join ("\n")')
   ```

2. Retrieve the Kubernetes groups your roles allow you to access:

   ```
   $ echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \
     jq '.[0].spec.allow.kubernetes_groups[]?'
   ```

3. Retrieve the Kubernetes users your roles allow you to access:

   ```
   $ echo "$CURRENT_ROLES" | xargs -I{} tctl get roles/{} --format json | \
     jq '.[0].spec.allow.kubernetes_users[]?'
   ```

4. If the output of one of the previous two commands is non-empty, your user can access at least one Kubernetes user or group, so you can proceed to the next step.

5. If both lists are empty, create a Teleport role for the purpose of this guide that can view Kubernetes resources in your cluster.

   Create a file called `kube-access.yaml` with the following content:

   ```
   kind: role
   metadata:
     name: kube-access
   version: v7
   spec:
     allow:
       kubernetes_labels:
         '*': '*'
       kubernetes_resources:
         - kind: '*'
           namespace: '*'
           name: '*'
           verbs: ['*']
       kubernetes_groups:
       - viewers
     deny: {}

   ```

6. Apply your changes:

   ```
   $ tctl create -f kube-access.yaml
   ```

   ---

   TIP

   You can also create and edit roles using the Web UI. Go to **Access -> Roles** and click **Create New Role** or pick an existing role to edit.

   ---

7. Assign the `kube-access` role to your Teleport user by running the appropriate commands for your authentication provider:

   **Local User**

   1. Retrieve your local user's roles as a comma-separated list:

      ```
      $ ROLES=$(tsh status -f json | jq -r '.active.roles | join(",")')
      ```

   2. Edit your local user to add the new role:

      ```
      $ tctl users update $(tsh status -f json | jq -r '.active.username') \
        --set-roles "${ROLES?},kube-access"
      ```

   3. Sign out of the Teleport cluster and sign in again to assume the new role.

   **GitHub**

   1. Open your `github` authentication connector in a text editor:

      ```
      $ tctl edit github/github
      ```

   2. Edit the `github` connector, adding `kube-access` to the `teams_to_roles` section.

      The team you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the team must include your user account and should be the smallest team possible within your organization.

      Here is an example:

      ```
        teams_to_roles:
          - organization: octocats
            team: admins
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes by saving and closing the file in your editor.

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

   **SAML**

   1. Retrieve your `saml` configuration resource:

      ```
      $ tctl get --with-secrets saml/mysaml > saml.yaml
      ```

      Note that the `--with-secrets` flag adds the value of `spec.signing_key_pair.private_key` to the `saml.yaml` file. Because this key contains a sensitive value, you should remove the `saml.yaml` file immediately after updating the resource.

   2. Edit `saml.yaml`, adding `kube-access` to the `attributes_to_roles` section.

      The attribute you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

      ```
        attributes_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes:

      ```
      $ tctl create -f saml.yaml
      ```

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

   **OIDC**

   1. Retrieve your `oidc` configuration resource:

      ```
      $ tctl get oidc/myoidc --with-secrets > oidc.yaml
      ```

      Note that the `--with-secrets` flag adds the value of `spec.signing_key_pair.private_key` to the `oidc.yaml` file. Because this key contains a sensitive value, you should remove the `oidc.yaml` file immediately after updating the resource.

   2. Edit `oidc.yaml`, adding `kube-access` to the `claims_to_roles` section.

      The claim you should map to this role depends on how you have designed your organization's role-based access controls (RBAC). However, the group must include your user account and should be the smallest group possible within your organization.

      Here is an example:

      ```
        claims_to_roles:
          - name: "groups"
            value: "my-group"
            roles:
              - access
      +       - kube-access

      ```

   3. Apply your changes:

      ```
      $ tctl create -f oidc.yaml
      ```

   4. Sign out of the Teleport cluster and sign in again to assume the new role.

8. Configure the `viewers` group in your Kubernetes cluster to have the built-in `view` ClusterRole. When your Teleport user assumes the `kube-access` role and sends requests to the Kubernetes API server, the Teleport Kubernetes Service impersonates the `viewers` group and proxies the requests.

   Create a file called `viewers-bind.yaml` with the following contents, binding the built-in `view` ClusterRole with the `viewers` group you enabled your Teleport user to access:

   ```
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: viewers-crb
   subjects:
   - kind: Group
     # Bind the group "viewers", corresponding to the kubernetes_groups we assigned our "kube-access" role above
     name: viewers
     apiGroup: rbac.authorization.k8s.io
   roleRef:
     kind: ClusterRole
     # "view" is a default ClusterRole that grants read-only access to resources
     # See: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
     name: view
     apiGroup: rbac.authorization.k8s.io

   ```

9. Apply the `ClusterRoleBinding` with `kubectl`:

   ```
   $ kubectl apply -f viewers-bind.yaml
   ```
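
   To check that the binding took effect, you can optionally test the group's access using Kubernetes impersonation. This assumes your own Kubernetes user is allowed to impersonate users and groups, which cluster administrators typically are; `hypothetical-user` is a placeholder:

   ```
   $ kubectl auth can-i list pods --as="hypothetical-user" --as-group="viewers"
   yes
   ```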

### Access your cluster

When you ran the Discovery Service, it discovered your GKE cluster and registered the cluster with Teleport. You can confirm this by running the following `tctl` command:

```
$ tctl get kube_clusters
kind: kube_cluster
metadata:
  description: GKE cluster "mycluster-gke" in us-east1
  id: 0000000000000000000
  labels:
    location: us-east1
    project-id: myproject
    teleport.dev/cloud: GCP
    teleport.dev/origin: cloud
  name: mycluster-gke
spec:
  aws: {}
  azure: {}
version: v3
```

Run the following command to list the Kubernetes clusters that your Teleport user has access to. The list should now include your GKE cluster:

```
$ tsh kube ls
Kube Cluster Name Labels                                                                                    Selected
----------------- ----------------------------------------------------------------------------------------- --------
mycluster-gke     location=us-east1 project-id=myproject teleport.dev/cloud=GCP teleport.dev/origin=cloud
```

Log in to your cluster, replacing `mycluster-gke` with the name of a cluster you listed previously:

```
$ tsh kube login mycluster-gke
Logged into kubernetes cluster "mycluster-gke". Try 'kubectl version' to test the connection.
```
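
If the `kube-access` role's `viewers` group is the only Kubernetes group your Teleport user can assume, you should be able to read resources but not modify them. For example:

```
$ kubectl get pods --all-namespaces
```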

As you can see, Teleport GKE Auto-Discovery enabled you to access a GKE cluster in your Google Cloud account without requiring you to register that cluster manually within Teleport. When you create or remove clusters in GKE, Teleport will update its state to reflect the available clusters in your account.

## Troubleshooting

### Discovery Service troubleshooting

First, check whether any Kubernetes clusters have been discovered. To do this, run the `tctl get kube_clusters` command and check whether the expected Kubernetes clusters have been registered with your Teleport cluster.

If some Kubernetes clusters do not appear in the list, check whether the Discovery Service matcher labels match the tags of the missing clusters, or look in the Discovery Service logs for permission errors.

Check that the Discovery Service is running with credentials for the correct Google Cloud project, and that its service account can list GKE clusters in every project named in the `project_ids` field of your matchers.

Check whether more than one Discovery Service is running:

```
$ tctl inventory status --connected
```

If you are running multiple Discovery Services, you must ensure that each service is configured with the same `discovery_group` value if they are watching the same cloud Kubernetes clusters or a different value if they are watching different cloud Kubernetes clusters. If this is not configured correctly, a typical symptom is `kube_cluster` resources being intermittently deleted from your Teleport cluster's registry.

### Kubernetes Service troubleshooting

If the `tctl get kube_clusters` command returns the discovered clusters, but the `tsh kube ls` command does not include them, check that you have set the `kubernetes_service.resources` section correctly. The labels in this section must match the labels of the clusters you want to expose, as in the following example:

```
kubernetes_service:
  enabled: true
  resources:
  - labels:
      "env": "prod"

```

If the section is correctly configured but clusters still do not appear or return authentication errors, check that permissions have been correctly configured in your target cluster and that your Teleport user has the correct permissions to access Kubernetes clusters in Teleport.
