# Configuring Workload Identity and AWS OIDC Federation

Teleport Workload Identity issues flexible short-lived identities in JWT format. AWS OIDC Federation allows you to use these JWTs to authenticate to AWS services.

This is useful when a machine needs to securely authenticate with AWS services without the use of a long-lived credential: the machine can authenticate with Teleport without any shared secrets by using one of our delegated join methods.

In this guide, we'll configure Teleport Workload Identity and AWS to allow our workload to authenticate to the AWS S3 API and upload content to a bucket.

## How it works

This implementation differs from using the Teleport Application Service to protect AWS APIs in a few ways:

- Requests to AWS are not proxied through the Teleport Proxy Service, meaning reduced latency but also less visibility, as these requests will not be recorded in Teleport's audit log.
- Workload Identity works with any AWS client, including the AWS CLI and the AWS SDKs.
- Using the Teleport Application Service to access AWS does not work with Machine ID and therefore cannot be used when a machine needs to authenticate with AWS.

## OIDC Federation vs Roles Anywhere

The AWS platform offers two routes for workload identity federation: OIDC Federation and Roles Anywhere. Teleport Workload Identity supports both of these methods.

There are a number of differences between the two methods:

- OIDC Federation exchanges a JWT SVID for an AWS credential, whereas Roles Anywhere exchanges an X509 SVID for an AWS credential. The use of X509 SVIDs is generally considered more secure.
- OIDC Federation does not require the additional installation of an AWS credential helper alongside workloads, whereas Roles Anywhere does.
- OIDC Federation requires that your Teleport Proxy Service is reachable by AWS, whereas Roles Anywhere does not.

This guide covers configuring OIDC federation. For Roles Anywhere, see [Configuring Workload Identity and AWS Roles Anywhere](https://goteleport.com/docs/machine-workload-identity/workload-identity/aws-roles-anywhere.md).

## Prerequisites

- A running Teleport cluster. If you want to get started with Teleport, [sign up](https://goteleport.com/signup) for a free trial or [set up a demo environment](https://goteleport.com/docs/get-started/deploy-community.md).

- The `tctl` and `tsh` clients.

  **Installing `tctl` and `tsh` clients**

  1. Determine the version of your Teleport cluster. The `tctl` and `tsh` clients must be at most one major version behind your Teleport cluster version. Send a GET request to the Proxy Service at `/v1/webapi/find` and use a JSON query tool to obtain your cluster version. Replace `teleport.example.com:443` with the web address of your Teleport Proxy Service:

     ```
     $ TELEPORT_DOMAIN=teleport.example.com:443
     $ TELEPORT_VERSION="$(curl -s https://$TELEPORT_DOMAIN/v1/webapi/find | jq -r '.server_version')"
     ```

  2. Follow the instructions for your platform to install `tctl` and `tsh` clients:

     **Mac**

     Download the signed macOS .pkg installer for Teleport, which includes the `tctl` and `tsh` clients:

     ```
     $ curl -O https://cdn.teleport.dev/teleport-${TELEPORT_VERSION?}.pkg
     ```

     In Finder double-click the `pkg` file to begin installation.

     ---

     DANGER

     Using Homebrew to install Teleport is not supported. The Teleport package in Homebrew is not maintained by Teleport and we can't guarantee its reliability or security.

     ---

     **Windows - PowerShell**

     ```
     $ curl.exe -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-windows-amd64-bin.zip
     ```

     Unzip the archive and move the `tctl` and `tsh` clients to a directory in your `%PATH%`.

     NOTE: Do not place the `tctl` and `tsh` clients in the `System32` directory, as this can cause issues when using WinSCP. Use `%SystemRoot%` (`C:\Windows`) or `%USERPROFILE%` (`C:\Users\<username>`) instead.

     **Linux**

     All of the Teleport binaries in Linux installations include the `tctl` and `tsh` clients. For more options (including RPM/DEB packages and downloads for i386/ARM/ARM64) see our [installation page](https://goteleport.com/docs/installation.md).

     ```
     $ curl -O https://cdn.teleport.dev/teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ tar -xzf teleport-v${TELEPORT_VERSION?}-linux-amd64-bin.tar.gz
     $ cd teleport
     $ sudo ./install
     Teleport binaries have been copied to /usr/local/bin
     ```

* To check that you can connect to your Teleport cluster, sign in with `tsh login`, then verify that you can run `tctl` commands using your current credentials. For example, run the following command, assigning `teleport.example.com` to the domain name of the Teleport Proxy Service in your cluster and `email@example.com` to your Teleport username:
  ```
  $ tsh login --proxy=teleport.example.com --user=email@example.com
  $ tctl status
  Cluster  teleport.example.com
  Version  18.7.3
  CA pin   sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678
  ```
  If you can connect to the cluster and run the `tctl status` command, you can use your current credentials to run subsequent `tctl` commands from your workstation. If you host your own Teleport cluster, you can also run `tctl` commands on the computer that hosts the Teleport Auth Service for full permissions.
* `tbot` must already be installed and configured on the host where the workloads which need to access Teleport Workload Identity will run. For more information, see the [deployment guides](https://goteleport.com/docs/machine-workload-identity/deployment.md).

### Deciding on a SPIFFE ID structure

Within Teleport Workload Identity, all identities are represented using a SPIFFE ID. This is a URI that uniquely identifies the entity that the identity represents. The scheme is always `spiffe://`, and the host will be the name of your Teleport cluster. The structure of the path of this URI is up to you.

For the purposes of this guide, we will be granting access to AWS to the `spiffe://example.teleport.sh/svc/example-service` SPIFFE ID.

If you have already deployed Teleport Workload Identity, then you will already have a SPIFFE ID structure in place. If you have not, then you will need to decide on a structure for your SPIFFE IDs.

If you are only using Teleport Workload Identity with AWS OIDC Federation, you may structure your SPIFFE IDs so that they explicitly specify the AWS role they are allowed to assume. However, it often makes more sense to name the workload or person that will use the SPIFFE ID. See the [best practices guide](https://goteleport.com/docs/machine-workload-identity/workload-identity/best-practices.md) for further advice.
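As an illustration, a path scheme that names the workload might encode the service, or the environment and the service, like the following (the service names here are hypothetical):

```
spiffe://example.teleport.sh/svc/billing-api
spiffe://example.teleport.sh/env/prod/svc/billing-api
```

Whatever scheme you choose, keep it consistent: the AWS trust policies you configure later will match against these IDs exactly.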

## Step 1/4. Configure AWS

Configuring AWS OIDC Federation for the first time involves a few steps. Some of these may not be necessary if you have previously configured AWS OIDC Federation for your Teleport cluster.

### Create an OpenID Connect Identity Provider

First, you'll need to create an OIDC identity provider in AWS. This will define a trust relationship between AWS and your Teleport cluster's Workload Identity issuer. You can reuse this OIDC identity provider to grant different workloads using Teleport Workload Identity access to AWS services.

When configuring the provider, you need to specify the issuer URI. This will be the public address of your Teleport Proxy Service with the path `/workload-identity` appended. Your Teleport Proxy Service must be accessible by AWS in order for OIDC federation to work.

**Terraform**

Before you can configure the OIDC identity provider, you need to determine the thumbprint of the TLS certificate used by your Teleport Proxy Service. You can do this using `curl`:

```
$ curl https://example.teleport.sh/webapi/thumbprint
"example589ee4bf31a11b78c72b8d13f0example"
```
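If you prefer to cross-check the thumbprint independently, you can compute it from the certificate presented by your Proxy Service with `openssl`. This is a sketch that assumes your Proxy Service is reachable at `example.teleport.sh:443`; note that depending on your certificate chain, AWS may expect the thumbprint of an issuing CA certificate rather than the leaf, so treat the `/webapi/thumbprint` endpoint as authoritative:

```shell
$ HOST=example.teleport.sh
$ echo | openssl s_client -servername "$HOST" -connect "$HOST:443" 2>/dev/null \
    | openssl x509 -fingerprint -sha1 -noout \
    | cut -d= -f2 | tr -d ':' | tr '[:upper:]' '[:lower:]'
```

The result is the SHA-1 fingerprint, lower-cased with the colons removed, in the same format AWS expects.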

Insert the following into a Terraform configuration file which has already been configured to manage your AWS account:

```
resource "aws_iam_openid_connect_provider" "example_teleport_sh_workload_identity" {
  // Replace "example.teleport.sh" with the hostname used to access your
  // Teleport Proxy Service.
  url = "https://example.teleport.sh/workload-identity"

  client_id_list = [
    "sts.amazonaws.com",
  ]

  thumbprint_list = [
    // Replace with the thumbprint you determined using curl.
    "example589ee4bf31a11b78c72b8d13f0example"
  ]
}

```

**AWS Console**

1. Browse to IAM
2. Select "Identity Providers" from the sidebar
3. Select "Add provider"
4. Select "OpenID Connect" as the "Provider type".
5. Specify the public hostname of your Teleport Proxy Service, with `/workload-identity` appended, as the "Provider URL", e.g. `https://example.teleport.sh/workload-identity`
6. Specify `sts.amazonaws.com` as the Audience
7. Click "Add Provider".

### Create an S3 bucket

For the purposes of this guide, you'll be granting the workload access to an AWS S3 bucket. Before we can dive into configuring the RBAC, we'll need to create our bucket.

You can omit this step if you wish to grant access to a different service within AWS.

**Terraform**

```
// Create an S3 bucket
resource "aws_s3_bucket" "example" {
  // Replace "example" with a meaningful, unique name.
  bucket = "workload-id-demo"
}

```

**AWS Console**

1. Browse to S3
2. Select "Create bucket"
3. Enter a meaningful, unique name for your bucket, e.g. `workload-id-demo`
4. Leave other settings as default
5. Click "Create bucket".

### Configure RBAC

#### Create an IAM Policy

First, create an IAM policy that will grant access to the S3 bucket. Later, you'll attach this to a role that the workload will assume.

The examples in this guide will create an IAM policy that will grant full access to the example bucket. In a production environment, you should modify this to grant the least privileges necessary.

**Terraform**

Insert the following into your Terraform configuration file:

```
resource "aws_iam_policy" "example" {
  // Choose a unique, meaningful name that describes what the policy grants
  // access to.
  name        = "workload-id-s3-full-access"
  path        = "/"

  // This example policy grants full access to AWS S3. In production, you
  // may wish to grant a less permissive policy.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "s3:*"
      Effect = "Allow"
      Resource = [
        aws_s3_bucket.example.arn,
        "${aws_s3_bucket.example.arn}/*"
      ]
    }]
  })
}

```

**AWS Console**

1. Browse to IAM
2. Select "Policies" from the sidebar
3. Click "Create policy"
4. Choose the "S3" service
5. Under "Actions allowed", choose "All S3 actions"
6. Under "Resources", choose "Specific"
7. For "bucket" enter the name of the bucket you created earlier.
8. For "object" enter the name of the bucket you created earlier and select "All objects"
9. Click "Next"
10. Enter a unique and meaningful name for this policy, in our example, this will be `workload-id-s3-full-access`
11. Click "Create policy"

#### Create an IAM Role

Now, you'll create an IAM role. This role will be assumed by the workload after it authenticates to AWS using the JWT SVID issued by Teleport Workload Identity.

When creating the IAM role, you'll define a trust policy that controls which workload identities are able to assume the role. This policy will contain conditions which will be evaluated against the claims within the JWT SVID issued by Teleport Workload Identity. In our case, the only claim we want to evaluate is the `sub` claim, which will contain our workload's SPIFFE ID.

Finally, we'll attach the IAM policy we created earlier to this role to grant it the privileges specified within the policy.

**Terraform**

Insert the following into your Terraform configuration file:

```
// Create a role that the workload identity will assume.
resource "aws_iam_role" "example" {
  // Choose a unique, meaningful name that describes the role and the workload
  // that will assume it.
  name = "workload-id-demo"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Federated = aws_iam_openid_connect_provider.example_teleport_sh_workload_identity.arn
      }
      Action = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          // The condition keys are the provider URL without the "https://" scheme.
          "${replace(aws_iam_openid_connect_provider.example_teleport_sh_workload_identity.url, "https://", "")}:aud" = "sts.amazonaws.com"
          "${replace(aws_iam_openid_connect_provider.example_teleport_sh_workload_identity.url, "https://", "")}:sub" = "spiffe://example.teleport.sh/svc/example-service"
        }
      }
    }]
  })
}

// Attach the policy we created earlier to our role.
resource "aws_iam_role_policy_attachment" "example" {
    role       = aws_iam_role.example.name
    policy_arn = aws_iam_policy.example.arn
}

```

**AWS Console**

1. Browse to IAM
2. Select "Roles" from the sidebar
3. Click "Create role"
4. Select "Web identity" for the "Trusted entity type"
5. Select your identity provider
6. Select the `sts.amazonaws.com` audience
7. Click "Add condition"
8. Select `example.teleport.sh/workload-identity:sub` for the key
9. Select "StringEquals" for the condition
10. Enter the SPIFFE ID of your workload for the value. In our example, this will be `spiffe://example.teleport.sh/svc/example-service`
11. Click "Next"
12. Select the IAM policy you created earlier, and click "Next"
13. Enter a unique and meaningful name for this role, in our example, this will be `workload-id-demo`
14. Click "Create role"
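Whichever route you take, the resulting trust policy on the role should resemble the following (with `123456789012` standing in for your AWS account ID and `example.teleport.sh` for your Teleport Proxy Service hostname):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/example.teleport.sh/workload-identity"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "example.teleport.sh/workload-identity:aud": "sts.amazonaws.com",
          "example.teleport.sh/workload-identity:sub": "spiffe://example.teleport.sh/svc/example-service"
        }
      }
    }
  ]
}
```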

## Step 2/4. Configure Teleport RBAC

Now we need to configure Teleport to allow a JWT to be issued containing the SPIFFE ID we have chosen.

First, you'll create a Workload Identity resource to define the identity and its characteristics. Create a new file called `workload-identity.yaml`:

```
kind: workload_identity
version: v1
metadata:
  name: example-workload-identity
  labels:
    example: getting-started
spec:
  spiffe:
    id: /svc/example-service

```

Replace:

- `example-workload-identity` with a descriptive name for the Workload Identity.
- `/svc/example-service` with the path part of the SPIFFE ID you have chosen.

Apply this to your cluster using `tctl`:

```
$ tctl create -f workload-identity.yaml
```

Next, you'll create a role which grants access to this Workload Identity. Create `role.yaml` with the following content:

```
kind: role
version: v6
metadata:
  name: example-workload-identity-issuer
spec:
  allow:
    workload_identity_labels:
      example: ["getting-started"]
    rules:
    - resources:
      - workload_identity
      verbs:
      - list
      - read

```

Replace:

- `example-workload-identity-issuer` with a descriptive name for the role.
- The labels selector if you have modified the labels of the Workload Identity.

Apply this role to your Teleport cluster using `tctl`:

```
$ tctl create -f role.yaml
```

---

TIP

You can also create and edit roles using the Web UI. Go to **Access -> Roles** and click **Create New Role** or pick an existing role to edit.

---

You now need to assign this role to the bot:

```
$ tctl bots update my-bot --add-roles example-workload-identity-issuer
```

## Step 3/4. Issue Workload Identity JWTs

You'll now configure `tbot` to issue and renew the short-lived JWT SVIDs for your workload. It'll write the JWT as a file on disk, where you can then configure AWS clients and SDKs to read it.

Take your already deployed `tbot` service and configure it to issue SPIFFE SVIDs by adding the following to the `tbot` configuration file:

```
services:
  - type: workload-identity-jwt
    destination:
      type: directory
      path: /opt/workload-identity
    selector:
      name: example-workload-identity
    audiences: ["sts.amazonaws.com"]

```

Replace:

- `/opt/workload-identity` with the directory where you want the JWT to be written.
- `example-workload-identity` with the name of the Workload Identity you have created.

Restart your `tbot` service to apply the new configuration. You should see a file created at `/opt/workload-identity/jwt_svid` containing the JWT.
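Once the file exists, you can sanity-check the claims in the JWT before configuring AWS clients. JWT segments are base64url-encoded without padding, so this sketch restores the padding with a small helper before decoding; it assumes `jq` is installed and uses the output path configured above:

```shell
# Decode and pretty-print the payload (second segment) of the JWT SVID.
b64url_decode() {
  local s
  s=$(tr '_-' '/+')                                      # URL-safe -> standard base64
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done   # restore padding
  printf '%s' "$s" | base64 -d
}
cut -d. -f2 /opt/workload-identity/jwt_svid | b64url_decode | jq .
```

The `sub` claim should contain your workload's SPIFFE ID, and the `aud` claim should be `sts.amazonaws.com`.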

## Step 4/4. Configure AWS CLIs and SDKs

Finally, you need to configure the AWS CLIs and SDKs to use the JWT SVID for authentication.

This can be done using the configuration file located at `~/.aws/config` or by using environment variables.

To proceed, you'll need to know the ARN of the role you created earlier.

**Configuration File**

Add the following to your `~/.aws/config` file:

```
# You can replace "workload-id-demo" with a recognizable name that identifies
# your use-case.
[profile workload-id-demo]
# Replace with the ARN of the role you created earlier.
role_arn=arn:aws:iam::123456789012:role/workload-id-demo
# Replace with the directory and file name you configured `tbot` to write the
# JWT to.
web_identity_token_file=/opt/workload-identity/jwt_svid

```

**Environment Variables**

Configure the following environment variables:

- `AWS_ROLE_ARN`: The ARN of the role you created earlier, e.g. `arn:aws:iam::123456789012:role/workload-id-demo`
- `AWS_WEB_IDENTITY_TOKEN_FILE`: The path to the JWT file that `tbot` is writing, e.g. `/opt/workload-identity/jwt_svid`
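For example, in the shell that will run your AWS client (using the example values from this guide):

```shell
$ export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/workload-id-demo"
$ export AWS_WEB_IDENTITY_TOKEN_FILE="/opt/workload-identity/jwt_svid"
```

If you used the configuration file instead, pass `--profile workload-id-demo` to `aws` commands, or set the `AWS_PROFILE` environment variable.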

You can now test authenticating to the AWS S3 API. Create a file which you can upload to your bucket:

```
$ echo "Hello, World!" > hello.txt
```

Now, use the AWS CLI to upload this file to your bucket:

```
$ aws s3 cp hello.txt s3://workload-id-demo
```

If everything is configured correctly, you should see this file uploaded to your bucket:

```
$ aws s3 ls s3://workload-id-demo
```

Inspecting the audit logs in CloudTrail should show that the request was authenticated using Workload Identity, including the SPIFFE ID of the workload that made the request.

## Next steps

- [AWS OIDC Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html): The official AWS documentation for OIDC federation.
- [AWS CLI documentation](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-role.html#cli-configure-role-oidc): The official AWS CLI documentation for configuring a role to be assumed.
- [Workload Identity Overview](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md): Overview of Teleport Workload Identity.
- [JWT SVID Overview](https://goteleport.com/docs/machine-workload-identity/workload-identity/jwt-svids.md): Overview of the JWT SVIDs issued by Teleport Workload Identity.
- [Best Practices](https://goteleport.com/docs/machine-workload-identity/workload-identity/best-practices.md): Best practices for using Workload Identity in Production.
- Read the [configuration reference](https://goteleport.com/docs/reference/machine-workload-identity/configuration.md) to explore all the available configuration options.
