# Machine & Workload Identity Configuration Reference

This reference documents the options that can be configured in the `tbot` configuration file, which offers more control than configuring `tbot` using CLI parameters alone.

To run `tbot` with a configuration file, specify the path with the `-c` flag:

```
$ tbot start -c ./tbot.yaml
```

In this reference, the term **artifact** refers to an item that `tbot` writes to a destination as part of generating an output. Examples of artifacts include configuration files, certificates, and cryptographic key material. Artifacts are usually files, but the term *file* is deliberately avoided because a destination isn't required to be a filesystem.

```
# version specifies the version of the configuration file in use. `v2` is the
# most recent and should be used for all new bots. The rest of this example
# is in the `v2` schema.
version: v2

# debug enables verbose logging to stderr. If unspecified, this defaults to
# false.
debug: true

# auth_server specifies the address of the Auth Service instance that `tbot`
# should connect to. In most cases, you should prefer specifying the Proxy
# Service address with `proxy_server` instead.
auth_server: "teleport.example.com:3025"

# proxy_server specifies the address of the Teleport Proxy Service that `tbot` should
# connect to.
# It is recommended to use the address of your Teleport Proxy Service or, if
# using Teleport Cloud, the address of your Teleport Cloud instance.
proxy_server: "teleport.example.com:443" # or "example.teleport.sh:443" for Teleport Cloud

# credential_ttl specifies how long certificates generated by `tbot` should
# live for. It should be a positive, numeric value with an `m` (for minutes) or
# `h` (for hours) suffix. By default, this value is `1h`.
# This has a maximum value of `24h`.
#
# It can be overridden for most outputs and services to give them a shorter TTL
# than `tbot`'s internal certificates.
credential_ttl: "1h"

# renewal_interval specifies how often `tbot` should aim to renew the
# outputs it has generated. It should be a positive, numeric value with an
# `m` (for minutes) or `h` (for hours) suffix. The default value is `20m`.
# This value must be lower than `credential_ttl`.
# This value is ignored when `tbot` is running in one-shot mode.
#
# It can be overridden for most outputs and services to give them a shorter
# renewal interval than `tbot`'s internal certificates.
renewal_interval: "20m"

# oneshot configures `tbot` to exit immediately after generating the outputs.
# The default value is `false`. A value of `true` is useful in ephemeral environments, like
# CI/CD.
oneshot: false

# onboarding is a group of configuration options that control how `tbot` will
# authenticate with the Teleport cluster.
onboarding:
  # token specifies which join token, configured in the Teleport cluster,
  # should be used to join the Teleport cluster.
  #
  # This can also be an absolute path to a file containing the value you wish
  # to be used.
  # File path example:
  # token: /var/lib/teleport/tokenjoin
  token: "00000000000000000000000000000000"

  # join_method must be the join method associated with the specified token
  # above. This setting should match the value output when creating the bot using
  # `tctl`.
  #
  # Supported values include:
  # - `token`
  # - `azure`
  # - `gcp`
  # - `circleci`
  # - `github`
  # - `gitlab`
  # - `iam`
  # - `ec2`
  # - `kubernetes`
  # - `spacelift`
  # - `tpm`
  # - `terraform_cloud`
  join_method: "token"

  # ca_pins are used to validate the identity of the Teleport Auth Service on
  # first connect. This should not be specified when using Teleport Cloud or
  # connecting through a Teleport Proxy.
  ca_pins:
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"

  # ca_path is used to specify where a CA file can be found that can be used to
  # validate the identity of the Teleport Auth Service on first connect.
  # This should not be specified when using Teleport Cloud or connecting through a
  # Teleport Proxy. The ca_pins option should be preferred over ca_path.
  ca_path: "/path/to/ca.pem"

  # gitlab holds configuration specific to the "gitlab" join method.
  gitlab:
    # token_env_var_name allows the environment variable that contains the
    # GitLab ID token to be specified. If unspecified, this defaults to
    # "TBOT_GITLAB_JWT".
    #
    # Overriding this is useful when you need to use `tbot` to authenticate to
    # multiple Teleport clusters from a single GitLab CI job.
    token_env_var_name: "MY_GITLAB_ID_TOKEN"

  # bound_keypair holds parameters specific to the "bound_keypair" join method.
  bound_keypair:
    # registration_secret is an optional secret to use on first join in lieu of
    # a preregistered keypair. You can also set this in the
    # `TBOT_REGISTRATION_SECRET` environment variable.
    registration_secret: "secret"

    # registration_secret_path is an optional path to a file containing a
    # registration secret; conflicts with `registration_secret`.
    registration_secret_path: ./path/to/secret

    # static_key_path is an optional path to a file containing a static private
    # key.
    static_key_path: ./path/to/secret

# storage specifies the destination that `tbot` should use to store its
# internal state. This state is sensitive, and you should ensure that the
# destination you specify here can only be accessed by `tbot`.
#
# If unspecified, storage is set to a directory destination with a path
# of `/var/lib/teleport/bot`.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
storage:
  type: directory
  path: /var/lib/teleport/bot

# services specify which `tbot` sub-services should be enabled and how they
# should be configured.
#
# See the full list of supported services and their configuration options
# under the Services section of this reference page.
services:
  - type: identity
    destination:
      type: directory
      path: /opt/machine-id

# outputs is synonymous with `services` and exists for legacy compatibility. All
# services specified in both `services` and `outputs` will be enabled. You
# should not duplicate any entries between the two fields, and should prefer to
# keep configuration within the `services` field where possible.
outputs:
  - type: example

```

If no configuration file is provided, a simple configuration is generated from the provided CLI flags. Given the following sample command, as suggested by the output of `tctl bots add ...`:

```
$ tbot start \
   --destination-dir=./tbot-user \
   --token=00000000000000000000000000000000 \
   --ca-pin=sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678 \
   --proxy-server=example.teleport.sh:443
```

it uses a configuration equivalent to the following:

```
proxy_server: example.teleport.sh:443

onboarding:
  join_method: "token"
  token: "00000000000000000000000000000000"
  ca_pins:
    - "sha256:abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678abdc1245efgh5678"

storage:
  type: directory
  path: /var/lib/teleport/bot

services:
  - type: identity
    destination:
      type: directory
      path: ./tbot-user

```

## Output Services

Output services define what actions `tbot` should take when it runs. They describe the format of the certificates to be generated, the roles used to generate the certificates, and the destination where they should be written.

There are multiple types of output. Select the one that is most appropriate for your intended use case.

### `identity`

The `identity` output service can be used to authenticate:

- SSH access to your Teleport servers, using `tsh`, OpenSSH, and tools like Ansible.
- Administrative actions against your cluster using tools like `tsh` or `tctl`.
- Management of Teleport resources using the Teleport Terraform provider.
- Access to the Teleport API using the Teleport Go SDK.

See the [Getting Started guide](https://goteleport.com/docs/machine-workload-identity/getting-started.md) for an example of the `identity` output in context.

```
# type specifies the type of the output. For the identity output, this will
# always be `identity`.
type: identity
# ssh_config controls whether the identity output will attempt to generate an
# OpenSSH configuration file. This requires that `tbot` can connect to the
# Teleport Proxy Service. Must be "on" or "off". If unspecified, this defaults to
# "on".
ssh_config: "on"
# allow_reissue controls whether the certificates generated by the identity
# output can be reissued (e.g. used with `tsh apps login`/`tsh db login`). This
# defaults to `false` if unspecified. If you receive an error message indicating
# that the certificate cannot be reissued, set this to `true`.
allow_reissue: false

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```
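
For example, once the identity output has been written, the generated identity file can be used directly with `tsh`. The Proxy Service address, login, and node name below are illustrative; the path assumes the `/opt/machine-id` destination from the example above:

```
$ tsh ssh --proxy=teleport.example.com:443 -i /opt/machine-id/identity user@node
```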

### `application`

The `application` output service is used to generate credentials that can be used to access applications that have been configured with Teleport.

See the [Machine & Workload Identity with Applications guide](https://goteleport.com/docs/machine-workload-identity/access-guides/applications.md) for an example of the `application` output in context.

```
# type specifies the type of the output. For the application output, this will
# always be `application`.
type: application
# app_name specifies the application name, as configured in your Teleport
# cluster, that `tbot` should generate credentials for.
# This field must be specified.
app_name: grafana

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```
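
As a sketch of how the generated artifacts can be consumed, the `tlscert` and `key` artifacts written to the destination can be passed to `curl` to reach the application through your Proxy Service. The hostname below is illustrative:

```
$ curl --cert /opt/machine-id/tlscert --key /opt/machine-id/key https://grafana.teleport.example.com
```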

### `database`

The `database` output service is used to generate credentials that can be used to access databases that have been configured with Teleport.

See the [Machine & Workload Identity with Databases guide](https://goteleport.com/docs/machine-workload-identity/access-guides/databases.md) for an example of the `database` output in context.

```
# type specifies the type of the output. For the database output, this will
# always be `database`.
type: database
# service is the name of the database server, as configured in Teleport, that
# the output should generate credentials for. This field must be specified.
service: my-postgres-server
# database is the name of the specific database on the specified database
# server to generate credentials for. This field doesn't need to be specified
# for database types that don't support multiple individual databases.
database: my-database
# username is the name of the user on the specified database server to
# generate credentials for. This field doesn't need to be specified
# for database types that don't have users.
username: my-user
# format specifies the format to use for output artifacts. If
# unspecified, a default format is used. See the table titled "Supported
# formats" below for the full list of supported values.
format: tls

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```

#### Supported formats

You can provide the following values to the `format` configuration field in the `database` output type:

| `format`    | Description                                                                                                                                               |
| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Unspecified | Provides a certificate in `tlscert`, a private key in `key` and the CA in `teleport-database-ca.crt`. This is compatible with most clients and databases. |
| `mongo`     | Provides `mongo.crt` and `mongo.cas`. This is designed to be used with MongoDB clients.                                                                   |
| `cockroach` | Provides `cockroach/node.key`, `cockroach/node.crt`, and `cockroach/ca.crt`. This is designed to be used with CockroachDB clients.                        |
| `tls`       | Provides `tls.key`, `tls.crt`, and `tls.cas`. This is for generic clients that require the specific file extensions.                                      |
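
For example, a sketch of a `database` service entry requesting MongoDB-formatted artifacts; the service, database, and user names below are hypothetical:

```
services:
  - type: database
    service: my-mongo-server
    database: admin
    username: my-user
    format: mongo
    destination:
      type: directory
      path: /opt/machine-id
```

The resulting `mongo.crt` and `mongo.cas` artifacts can then be supplied to MongoDB clients.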

### `kubernetes`

The `kubernetes` output is used to generate credentials that can be used to access Kubernetes clusters that have been configured with Teleport.

It writes a `kubeconfig.yaml` to the output destination, which can be used with `kubectl`.

See the [Machine & Workload Identity with Kubernetes Clusters guide](https://goteleport.com/docs/machine-workload-identity/access-guides/kubernetes.md) for an example of the `kubernetes` output in context.

```
# type specifies the type of the output. For the kubernetes output, this will
# always be `kubernetes`.
type: kubernetes
# kubernetes_cluster is the name of the Kubernetes cluster, as configured in
# Teleport, that the output should generate credentials and a kubeconfig for.
# This field must be specified.
kubernetes_cluster: my-cluster
# disable_exec_plugin disables the default behaviour of using the `tbot` binary
# as a `kubectl` credentials exec plugin. This is useful in environments where
# `tbot` does not exist on the system that will consume the generated kubeconfig
# (e.g. when using the `kubernetes_secret` output type). This credentials exec
# plugin is used to automatically refresh the credentials within a single
# invocation of `kubectl`. Defaults to `false`.
disable_exec_plugin: false

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```
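
Once generated, the kubeconfig can be consumed directly with `kubectl`, assuming the `/opt/machine-id` destination above:

```
$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml get pods
```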

### `kubernetes/v2`

The `kubernetes/v2` output service can be used to access many Kubernetes clusters as individual contexts within the same `kubeconfig.yaml`.

```
type: kubernetes/v2

# selectors include one or more matching Kubernetes clusters. Each match will be
# included in the resulting `kubeconfig.yaml`, assuming the bot has permission
# to access the cluster.
selectors:
  # name includes an exact match by name. Note that wildcards are not currently
  # supported. Multiple name selectors can be specified if desired.
  - name: foo
    # default_namespace is the namespace that should be configured for the
    # context within the kubeconfig. This will be the namespace used by
    # `kubectl`/SDK if the user has not explicitly provided one.
    #
    # If unspecified, no default namespace is set within the kubeconfig and
    # `kubectl`/SDKs will use a default based on their own logic, which is
    # often the `default` namespace.
    default_namespace: my-namespace
  # labels include all clusters matching all of these labels. Multiple label
  # selectors can be provided if needed.
  - labels:
      env: dev

# The following configuration fields are available across most output types.
# Note that `roles` are not supported for this output type.

destination:
  type: directory
  path: /opt/machine-id

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# disable_exec_plugin disables the default behaviour of using the `tbot` binary
# as a `kubectl` credentials exec plugin. This is useful in environments where
# `tbot` does not exist on the system that will consume the generated kubeconfig
# (e.g. when using the `kubernetes_secret` output type). This credentials exec
# plugin is used to automatically refresh the credentials within a single
# invocation of `kubectl`. Defaults to `false`.
disable_exec_plugin: false

# context_name_template determines the format of context names in the generated
# kubeconfig. It is a Go template string that supports the following variables:
#
#   - {{.ClusterName}} - Name of the Teleport cluster
#   - {{.KubeName}} - Name of the Kubernetes cluster resource
#   - {{.Labels}} - Map of labels applied to the Kubernetes cluster
#      resource that can be indexed using `{{index .Labels "key"}}`
#
# By default, the following template will be used: "{{.ClusterName}}-{{.KubeName}}"
context_name_template: "{{.KubeName}}"

# relay_server specifies the Relay service address that tbot should use to route
# Kubernetes traffic to instead of the Teleport control plane. When set, all
# Kubernetes API connections are sent via this Relay; only Kubernetes clusters
# reachable through the specified Relay will be accessible and tbot will not fall
# back to the control plane. Provide either a hostname or host:port (e.g.
# relay.example.com or relay.example.com:443).
#
# Use of the relay_server option requires `tbot` 18.5.0 or later.
relay_server: relay.example.com

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

Each Kubernetes cluster matching a selector will result in a new context in the generated `kubeconfig.yaml`. This can be consumed like so:

```
$ kubectl --kubeconfig /opt/machine-id/kubeconfig.yaml --context=example.teleport.sh-foo get pods
```

The context name is `[Teleport cluster name]-[Kubernetes cluster name]`, so the command above runs `kubectl get pods` on the `foo` cluster.

If clusters are added or removed over time, the `kubeconfig.yaml` is updated at the bot's normal renewal interval. You can trigger an early renewal by restarting `tbot` or by signaling it with `pkill -USR1 tbot`.

### `kubernetes/argo-cd`

The `kubernetes/argo-cd` output service can be used to enable Argo CD to securely connect to external Kubernetes clusters.

It works by "declaratively" managing cluster credentials [using Kubernetes secrets](https://argo-cd.readthedocs.io/en/release-1.8/operator-manual/declarative-setup/#clusters). For each matching Kubernetes cluster, `tbot` will create and continuously update a secret containing connection details and short-lived credentials, labeled with `"argocd.argoproj.io/secret-type": "cluster"` for Argo CD to discover.

As such, it is only intended to be used within a Kubernetes cluster. See the [Machine & Workload Identity with Argo CD guide](https://goteleport.com/docs/machine-workload-identity/access-guides/argocd.md) for information on how to deploy it using the Helm chart.

---

WARNING

The `kubernetes/argo-cd` output type does not currently support configurations where the Teleport Proxy Service is behind a TLS-terminating load balancer.

---

```
type: kubernetes/argo-cd

# selectors include one or more matching Kubernetes clusters. Each matching
# cluster that the bot has permission to access will be registered with Argo CD
# by creating a Kubernetes secret.
selectors:
  # name includes an exact match by name. Note that wildcards are not currently
  # supported. Multiple name selectors can be specified if desired.
  - name: foo
  # labels include all clusters matching all of these labels. Multiple label
  # selectors can be provided if needed.
  - labels:
      env: dev

# secret_namespace is the Kubernetes namespace in which Argo CD cluster secrets
# will be created. It must match the namespace where Argo CD is running.
#
# By default, `tbot` will use the `POD_NAMESPACE` environment variable, or if
# that is empty: "default".
secret_namespace: "argocd"

# secret_name_prefix is the prefix that will be applied to Kubernetes secret
# names so they can be easily identified. The rest of the name will be derived
# from a hash of the target cluster name.
#
# By default, the prefix will be: "teleport.argocd-cluster".
secret_name_prefix: "argocd-cluster"

# secret_labels is a set of labels that will be applied to the cluster secrets
# in addition to the "argocd.argoproj.io/secret-type" label added for Argo CD
# discovery.
#
# Label values can be Go template strings with the following variables:
#
#   - {{.ClusterName}} - Name of the Teleport cluster
#   - {{.KubeName}} - Name of the Kubernetes cluster resource
#   - {{.Labels}} - Map of labels applied to the Kubernetes cluster
#      resource that can be indexed using `{{index .Labels "key"}}`
#
# If the label value is empty, the label will not be added to the secret.
secret_labels:
  department: engineering
  cluster-region: |-
    {{index .Labels "region"}}

# secret_annotations is a set of annotations that will be applied to the cluster
# secrets in addition to `tbot`'s own annotations:
#
#   - "teleport.dev/bot-name"                - Name of the Bot
#   - "teleport.dev/kubernetes-cluster-name" - Name of the Kubernetes cluster
#   - "teleport.dev/updated"                 - RFC3339-formatted timestamp
#   - "teleport.dev/tbot-version"            - Version of tbot running
#   - "teleport.dev/teleport-cluster-name"   - Name of the Teleport cluster
#
# Annotation values can be Go template strings with the following variables:
#
#   - {{.ClusterName}} - Name of the Teleport cluster
#   - {{.KubeName}} - Name of the Kubernetes cluster resource
#   - {{.Labels}} - Map of labels applied to the Kubernetes cluster
#      resource that can be indexed using `{{index .Labels "key"}}`
#
# If the annotation value is empty, the annotation will not be added to the secret
secret_annotations:
  creator: bob

# project is the Argo CD project with which the Kubernetes clusters will be
# associated.
project: edge-services

# namespaces optionally restricts which namespaces within the target Kubernetes
# clusters applications may be deployed into. By default, all namespaces are
# allowed.
namespaces:
  - dev
  - qa

# cluster_resources determines whether Argo CD will be allowed to operate on
# cluster-scoped resources within the target clusters. This option is only
# applicable when `namespaces` is non-empty.
cluster_resources: true

# cluster_name_template determines the format of cluster names in Argo CD. It is
# a Go template string that supports the following variables:
#
#   - {{.ClusterName}} - Name of the Teleport cluster
#   - {{.KubeName}} - Name of the Kubernetes cluster resource
#
# By default, the following template will be used: "{{.ClusterName}}-{{.KubeName}}"
cluster_name_template: "{{.KubeName}}"

# The following configuration fields are available across most output types.
# Note that `roles` and `destination` are not supported for this output type.

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

### `ssh_host`

The `ssh_host` output service is used to generate the artifacts required to configure an OpenSSH server with Teleport so that Teleport users can connect to it.

The output service generates the following artifacts:

- `ssh_host-cert.pub`: an SSH certificate signed by the Teleport host certificate authority.
- `ssh_host`: the private key associated with the SSH host certificate.
- `ssh_host-user-ca.pub`: an export of the configured Teleport certificate authorities in an OpenSSH-compatible format.
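
A minimal sketch of the `sshd_config` directives that consume these artifacts, assuming a directory destination of `/opt/machine-id`:

```
# /etc/ssh/sshd_config fragment
HostKey /opt/machine-id/ssh_host
HostCertificate /opt/machine-id/ssh_host-cert.pub
TrustedUserCAKeys /opt/machine-id/ssh_host-user-ca.pub
```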

```
# type specifies the type of the output. For the ssh host output, this will
# always be `ssh_host`.
type: ssh_host
# principals is the list of host names to include in the host certificates.
# These names should match the names that clients use to connect to the host.
principals:
  - host.example.com
# ca_type selects which CA type is exported to `ssh_host-user-ca.pub`.
# Supported values are `user` and `openssh`. If unspecified, this defaults to
# `user`.
ca_type: user

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for shorter than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```

### `workload-identity-x509`

The `workload-identity-x509` output service is used to issue an X509 workload identity credential and write it to a configured destination.

The output generates the following artifacts:

- `svid.pem`: the X509 SVID.
- `svid.key`: the private key associated with the X509 SVID.
- `bundle.pem`: the X509 bundle that contains the trust domain CAs.
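
Once issued, the SVID's contents, including the SPIFFE ID encoded as a URI SAN, can be inspected with OpenSSL, assuming a directory destination of `/opt/machine-id`:

```
$ openssl x509 -in /opt/machine-id/svid.pem -noout -text
```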

See [Workload Identity introduction](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md) for more information on Workload Identity functionality.

```
# type specifies the type of the output. For the X509 Workload Identity output,
# this will always be `workload-identity-x509`.
type: workload-identity-x509
# selector controls which WorkloadIdentity resource will be used to issue the
# workload identity credential. The selector can be either the name of a
# specific WorkloadIdentity resource or a label selector that can match
# multiple WorkloadIdentity resources.
#
# The selector must be set to either a name or labels, but not both.
selector:
  # name is used to select a specific WorkloadIdentity resource by its name.
  name: foo
  # labels is used to select multiple WorkloadIdentity resources by their labels.
  labels:
    app: [foo, bar]

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for a shorter period than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation, but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```

### `workload-identity-jwt`

The `workload-identity-jwt` output service is used to issue a JWT workload identity credential and write it to a configured destination.

The JWT workload identity credential is compatible with the [SPIFFE JWT SVID specification](https://github.com/spiffe/spiffe/blob/main/standards/JWT-SVID.md).

The output generates the following artifacts:

- `jwt_svid`: the JWT SVID.
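
When decoded, the payload of the JWT SVID contains the claims required by the SPIFFE JWT-SVID specification: `sub` (the workload's SPIFFE ID), `aud`, and `exp`. The following sketch uses placeholder values for illustration only:

```
{
  "sub": "spiffe://example.teleport.sh/svc/foo",
  "aud": ["example.com", "foo.example.com"],
  "exp": 1735689600
}
```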

See [Workload Identity introduction](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md) for more information on Workload Identity functionality.

```
# type specifies the type of the output. For the JWT Workload Identity output,
# this will always be `workload-identity-jwt`.
type: workload-identity-jwt
# audiences specifies the values that should be included in the `aud` claim of
# the JWT. Typically, this identifies the intended recipient of the JWT and
# contains a single value.
#
# At least one audience value must be specified.
audiences:
 - example.com
 - foo.example.com
# selector controls which WorkloadIdentity resource will be used to issue the
# workload identity credential. The selector can either be the name of a
# specific WorkloadIdentity resource or a label selector that matches multiple
# WorkloadIdentity resources.
#
# The selector must specify either a name or labels, but not both.
selector:
  # name selects a specific WorkloadIdentity resource by its name.
  name: foo
  # labels selects multiple WorkloadIdentity resources by their labels.
  labels:
    app: [foo, bar]

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for a shorter period than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation, but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```

### `workload-identity-aws-roles-anywhere`

The `workload-identity-aws-roles-anywhere` output service is used to issue an X509 workload identity credential, exchange it for short-lived AWS credentials using Roles Anywhere, and write these to a configured destination.

The credentials are written in the [AWS shared credentials file format](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds), which is compatible with the AWS CLI and SDKs.

The output service generates the following artifacts:

- `aws_credentials`: the CLI and SDK compatible AWS shared credentials file.
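
A file in the AWS shared credentials file format looks similar to the following sketch. The key values are illustrative placeholders, and the profile name is controlled by the `credential_profile_name` option described below:

```
[default]
aws_access_key_id     = ASIAEXAMPLEKEYID
aws_secret_access_key = example-secret-access-key
aws_session_token     = example-session-token
```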

See [Workload Identity introduction](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md) for more information on Workload Identity functionality.

```
# type specifies the type of the output. For the Workload Identity AWS Roles
# Anywhere output, this will always be `workload-identity-aws-roles-anywhere`.
type: workload-identity-aws-roles-anywhere
# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# role_arn is the ARN of the AWS role that the generated credentials should
# assume.
# Required.
role_arn: arn:aws:iam::123456789012:role/example-role
# profile_arn is the ARN of the AWS profile to be used during the Roles Anywhere
# exchange.
# Required.
profile_arn: arn:aws:rolesanywhere:us-east-1:123456789012:profile/0000000-0000-0000-0000-00000000000
# trust_anchor_arn is the ARN of the trust anchor that should be used during the
# Roles Anywhere exchange.
# Required.
trust_anchor_arn: arn:aws:rolesanywhere:us-east-1:123456789012:trust-anchor/0000000-0000-0000-0000-000000000000
# region is the AWS region to use for the Roles Anywhere exchange. If omitted,
# this defaults to the region set by `AWS_REGION` environment variable or the
# AWS configuration file.
region: us-east-1
# session_duration is the duration that the generated AWS credentials should be
# valid for. This may be up to 12 hours. If omitted, this defaults to 6 hours.
session_duration: 6h
# session_renewal_interval is the interval at which the AWS credentials should
# be renewed. This should be less than the session duration. If omitted, this
# defaults to 1 hour.
session_renewal_interval: 1h
# credential_profile_name is the name of the profile to write to in the AWS
# credentials file. If unspecified, this profile will be named `default`.
credential_profile_name: my-profile
# artifact_name is the name of the file that the AWS credentials should be
# written to. If unspecified, this defaults to `aws_credentials`.
artifact_name: my-credentials-file
# overwrite_credential_file controls whether the AWS credentials file should be
# overwritten if it already exists, or whether the profile added by tbot should
# be merged with any existing profiles in the file. If unspecified, this
# defaults to `false`.
overwrite_credential_file: false
# selector controls which WorkloadIdentity resource will be used to issue the
# workload identity credential. The selector can either be the name of a
# specific WorkloadIdentity resource or a label selector that matches multiple
# WorkloadIdentity resources.
#
# The selector must specify either a name or labels, but not both.
selector:
  # name selects a specific WorkloadIdentity resource by its name.
  name: foo
  # labels selects multiple WorkloadIdentity resources by their labels.
  labels:
    app: [foo, bar]

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

### `spiffe-svid`

---

WARNING

The use of this service has been deprecated as part of the introduction of the new Workload Identity configuration experience. You can replace the use of this output with the new `workload-identity-x509` or `workload-identity-jwt` service.

For further information, see [the new Workload Identity configuration experience and how to migrate](https://goteleport.com/docs/reference/machine-workload-identity/workload-identity/configuration-resource-migration.md).

---

The `spiffe-svid` output is used to generate a SPIFFE X509 SVID and write it to a configured destination.

The output generates the following artifacts:

- `svid.pem`: the X509 SVID.
- `svid.key`: the private key associated with the X509 SVID.
- `bundle.pem`: the X509 bundle that contains the trust domain CAs.

An artifact will also be generated for each entry in the `jwts` list, named according to that entry's `file_name`. Each of these artifacts contains only the JWT-SVID with the audience specified in `audience`.

See [Workload Identity](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md) for more information on how to use SPIFFE SVIDs.

```
# type specifies the type of the output. For the SPIFFE SVID output, this will
# always be `spiffe-svid`.
type: spiffe-svid
# svid specifies the properties of the SPIFFE SVID that should be requested.
svid:
  # path specifies the path element that should be requested for the SPIFFE ID.
  path: /svc/foo
  # sans specifies optional Subject Alternative Names (SANs) to include in the
  # generated X509 SVID. If omitted, no SANs are included.
  sans:
    # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
    dns:
      - foo.svc.example.com
    # ip specifies the IP SANs. If omitted, no IP SANs are included.
    ip:
      - 10.0.0.1
  # jwts controls the output of JWT-SVIDs. Each entry will be generated as a
  # separate artifact. If omitted, no JWT-SVIDs are generated.
  jwts:
      # audience specifies the audience that the JWT-SVID should be issued for.
      # this typically identifies the service that the JWT-SVID will be used to
      # authenticate to.
    - audience: https://example.com
      # file_name specifies the name of the file that the JWT-SVID should be
      # written to.
      file_name: example-jwt

# The following configuration fields are available across most output types.

# destination specifies where the output should write any generated artifacts
# such as certificates and configuration files.
#
# See the full list of supported destinations and their configuration options
# under the Destinations section of this reference page.
destination:
  type: directory
  path: /opt/machine-id
# roles specifies the roles that should be included in the certificates generated
# by the output. These roles must be roles that the bot has been granted
# permission to impersonate.
#
# If no roles are specified, all roles the bot is allowed to impersonate are used.
roles:
  - editor

# credential_ttl and renewal_interval override the credential TTL and renewal
# interval for this specific output, so that you can make its certificates valid
# for a shorter period than `tbot`'s internal certificates.
#
# This is particularly useful when running `tbot` in one-shot mode as part of a
# cron job, where you need `tbot`'s internal certificate to live long enough to
# be renewed on the next invocation, but don't want long-lived workload
# certificates on disk.
credential_ttl: 30m
renewal_interval: 15m

# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name


```

## Services

Services are configurable long-lived components that run within `tbot`. Unlike Output Services, they do not necessarily generate artifacts. Typically, services provide supporting functionality for machine-to-machine access, for example, opening tunnels or providing APIs.

### `workload-identity-api`

The `workload-identity-api` service opens a listener that provides a local workload identity API, intended to serve workload identity credentials (e.g. X509 and JWT SPIFFE SVIDs) to workloads running on the same host.

For more information, see the [Workload Identity API and Workload Attestation reference](https://goteleport.com/docs/reference/machine-workload-identity/workload-identity/workload-identity-api-and-workload-attestation.md).

The `workload-identity-api` service will not start if `tbot` has been configured to run in one-shot mode.
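
As a sketch, a minimal configuration for this service combines a `listen` address with a `selector`, following the same shape as the `selector` and listener fields documented elsewhere on this page. Treat the exact set of supported fields as an assumption here, and consult the linked reference for the full list of options:

```
# type specifies the type of the service.
type: workload-identity-api
# listen specifies the address that the service should listen on, either a TCP
# (`tcp://<address>:<port>`) or Unix socket (`unix:///<path>`) listener.
listen: unix:///opt/machine-id/workload.sock
# selector controls which WorkloadIdentity resource will be used to issue
# credentials to workloads, by name or by labels.
selector:
  name: foo
```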

### `spiffe-workload-api`

---

WARNING

The use of this service has been deprecated as part of the introduction of the new Workload Identity configuration experience. You can replace the use of this service with the new `workload-identity-api` service.

For further information, see [the new Workload Identity configuration experience and how to migrate](https://goteleport.com/docs/reference/machine-workload-identity/workload-identity/configuration-resource-migration.md).

---

The `spiffe-workload-api` service opens a listener for a service that implements the SPIFFE Workload API. This service is used to provide SPIFFE SVIDs to workloads.

See [Workload Identity](https://goteleport.com/docs/machine-workload-identity/workload-identity/introduction.md) for more information on the SPIFFE Workload API.

```
# type specifies the type of the service. For the SPIFFE Workload API service,
# this will always be `spiffe-workload-api`.
type: spiffe-workload-api
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: unix:///opt/machine-id/workload.sock
# attestors allows Workload Attestation to be configured for this Workload
# API.
attestors:
  # docker is configuration for the Docker Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  docker:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Docker container. If unspecified, this defaults to
    # false.
    enabled: true
    # addr is the address at which the Docker Engine daemon can be reached. It
    # must be in the form `unix://path/to/socket`, as connecting via TCP is not
    # currently supported. If unspecified, this defaults to the standard socket
    # location for "rootful" Docker installations: `unix:///var/run/docker.sock`.
    addr: unix:///var/run/docker.sock
  # kubernetes is configuration for the Kubernetes Workload Attestor. See
  # the Kubernetes Workload Attestor section for more information.
  kubernetes:
    # enabled specifies whether the Kubernetes Workload Attestor should be
    # enabled. If unspecified, this defaults to false.
    enabled: true
    # kubelet holds configuration relevant to the Kubernetes Workload Attestor's
    # interaction with the Kubelet API.
    kubelet:
      # read_only_port is the port on which the Kubelet API is exposed for
      # read-only operations. Since Kubernetes 1.16, the read-only port is
      # typically disabled by default and secure_port should be used instead.
      read_only_port: 10255
      # secure_port is the port on which the attestor should connect to the
      # Kubelet secure API. If unspecified, this defaults to `10250`. This is
      # mutually exclusive with `read_only_port`.
      secure_port: 10250
      # token_path is the path to the token file that the Kubelet API client
      # should use to authenticate with the Kubelet API. If unspecified, this
      # defaults to `/var/run/secrets/kubernetes.io/serviceaccount/token`.
      token_path: "/var/run/secrets/kubernetes.io/serviceaccount/token"
      # ca_path is the path to the CA file that the Kubelet API client should
      # use to validate the Kubelet API server's certificate. If unspecified,
      # this defaults to `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
      ca_path: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
      # skip_verify is used to disable verification of the Kubelet API server's
      # certificate. If unspecified, this defaults to false.
      #
      # If set to true, the value specified in ca_path is ignored.
      #
      # This is useful in cases where the Kubelet API server has not been issued
      # with a certificate signed by the Kubernetes cluster's CA. This is fairly
      # common with a number of Kubernetes distributions.
      skip_verify: true
      # anonymous is used to disable authentication with the Kubelet API. If
      # unspecified, this defaults to false. If set, the token_path field is
      # ignored.
      anonymous: false
  # podman is configuration for the Podman Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  podman:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Podman container and pod. If unspecified, this
    # defaults to false.
    enabled: true
    # addr is the address at which the Podman API Service can be reached. It
    # must be in the form `unix://path/to/socket`, as connecting via TCP is not
    # supported. This field is required and there is no default value. See the
    # Workload Identity API & Workload Attestation reference for more information.
    addr: unix:///run/podman/podman.sock
  # sigstore is configuration for the Sigstore Workload attestor. See the
  # Sigstore Workload Attestation page for more information.
  sigstore:
    # enabled specifies whether tbot will discover Sigstore signatures for the
    # workload's container image. If unspecified, this defaults to false.
    enabled: true
    # additional_registries optionally configures the OCI registries that will
    # be searched for signatures in addition to the workload container image's
    # source registry.
    additional_registries:
      # host of the OCI registry.
      - host: ghcr.io
    # credentials_path is the path to a Docker or Podman configuration file
    # containing per-registry credentials.
    credentials_path: /path/to/docker/config.json
    # allowed_private_network_prefixes are the private IP address prefixes (CIDR
    # blocks) that the Sigstore attestor is allowed to connect to. By default,
    # tbot will only connect to registries at publicly-routable IP addresses to
    # reduce the surface area for SSRF attacks.
    allowed_private_network_prefixes:
      - "192.168.1.42/32"
      - "fd12:3456:789a:1::1/128"
  # systemd is configuration for the Systemd Workload Attestor. See the Workload
  # Identity API & Workload Attestation reference for more information.
  systemd:
    # enabled specifies whether the workload's identity should be attested with
    # information about its Systemd service. If unspecified, this defaults to
    # false.
    enabled: true
  # unix is configuration for the Unix Workload Attestor.
  unix:
    # binary_hash_max_size_bytes is the maximum number of bytes that will be
    # read from a process' binary to calculate its SHA-256 checksum. If the
    # binary is larger than this, the `workload.unix.binary_hash` attribute
    # will be empty. If unspecified, this defaults to 1GiB. Set it to -1 to
    # make it unlimited.
    binary_hash_max_size_bytes: 1024
# svids specifies the SPIFFE SVIDs that the Workload API should provide.
svids:
    # path specifies the path element that should be requested for the SPIFFE
    # ID.
  - path: /svc/foo
    # hint is a free-form string which can be used to help workloads determine
    # which SVID to select when multiple are available. If omitted, no hint is
    # included.
    hint: my-hint
    # sans specifies optional Subject Alternative Names (SANs) to include in the
    # generated X509 SVID. If omitted, no SANs are included.
    sans:
      # dns specifies the DNS SANs. If omitted, no DNS SANs are included.
      dns:
        - foo.svc.example.com
      # ip specifies the IP SANs. If omitted, no IP SANs are included.
      ip:
        - 10.0.0.1
    # rules specifies a list of workload attestation rules. At least one of
    # these rules must be satisfied by the workload in order for it to receive
    # this SVID.
    #
    # If no rules are specified, the SVID will be issued to all workloads that
    # connect to this service.
    rules:
        # unix is a group of workload attestation criteria that are available
        # when the workload is running on the same host, and is connected to
        # the Workload API using a Unix socket.
        #
        # If any of the criteria in this group are specified, then workloads
        # that do not connect using a Unix socket will not receive this SVID.
      - unix:
          # uid is the ID of the user that the workload process must be running
          # as to receive this SVID.
          #
          # If unspecified, the UID is not checked.
          uid: 1000
          # pid is the ID that the workload process must have to receive this
          # SVID.
          #
          # If unspecified, the PID is not checked.
          pid: 1234
          # gid is the ID of the primary group that the workload process must be
          # running as to receive this SVID.
          #
          # If unspecified, the GID is not checked.
          gid: 50
# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

#### Envoy SDS

The `spiffe-workload-api` service endpoint also implements the Envoy SDS API. This allows it to act as a source of certificates and certificate authorities for the Envoy proxy.

As a forward proxy, Envoy can be used to attach an X.509 SVID to an outgoing connection from a workload that is not SPIFFE-enabled.

As a reverse proxy, Envoy can be used to terminate mTLS connections from SPIFFE-enabled clients. Envoy can validate that the client has presented a valid X.509 SVID and perform enforcement of authorization policies based on the SPIFFE ID contained within the SVID.

When acting as a reverse proxy for certain protocols, Envoy can be configured to attach a header indicating the identity of the client to a request before forwarding it to the service. This can then be used by the service to make authorization decisions based on the client's identity.

When configuring Envoy to use the SDS API exposed by the `spiffe-workload-api` service, three additional special names can be used to aid configuration:

- `default`: `tbot` will return the default SVID for the workload.
- `ROOTCA`: `tbot` will return the trust bundle for the trust domain that the workload is a member of.
- `ALL`: `tbot` will return the trust bundle for the trust domain that the workload is a member of, as well as the trust bundles of any trust domain that the trust domain is federated with.

The following is an example Envoy configuration that sources a certificate and trust bundle from the `spiffe-workload-api` service listening on `unix:///opt/machine-id/workload.sock`. It requires that a connecting client presents a valid SPIFFE SVID and forwards this information to the backend service in the `x-forwarded-client-cert` header.

```
node:
  id: "my-envoy-proxy"
  cluster: "my-cluster"
static_resources:
  listeners:
    - name: test_listener
      enable_reuse_port: false
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                common_http_protocol_options:
                  idle_timeout: 1s
                forward_client_cert_details: sanitize_set
                set_current_client_cert_details:
                  uri: true
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: my_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: my_service
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
              common_tls_context:
                # configure the certificate that the reverse proxy should present.
                tls_certificate_sds_secret_configs:
                  # `name` can be replaced with the desired SPIFFE ID if multiple
                  # SVIDs are available.
                  - name: "default"
                    sds_config:
                      resource_api_version: V3
                      api_config_source:
                        api_type: GRPC
                        transport_api_version: V3
                        grpc_services:
                          envoy_grpc:
                            cluster_name: tbot_agent
                # combined validation context "melds" two validation contexts
                # together. This is handy for extending the validation context
                # from the SDS source.
                combined_validation_context:
                  default_validation_context:
                    # You can use match_typed_subject_alt_names to configure
                    # rules that only allow connections from specific SPIFFE IDs.
                    match_typed_subject_alt_names: []
                  validation_context_sds_secret_config:
                    name: "ALL" # This can also be replaced with the trust domain name
                    sds_config:
                      resource_api_version: V3
                      api_config_source:
                        api_type: GRPC
                        transport_api_version: V3
                        grpc_services:
                          envoy_grpc:
                            cluster_name: tbot_agent
  clusters:
    # my_service is the example service that Envoy will forward traffic to.
    - name: my_service
      type: strict_dns
      load_assignment:
        cluster_name: my_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 127.0.0.1
                      port_value: 8090
    - name: tbot_agent
      http2_protocol_options: {}
      load_assignment:
        cluster_name: tbot_agent
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    pipe:
                      # Configure the path to the socket that `tbot` is
                      # listening on.
                      path: /opt/machine-id/workload.sock

```

### `database-tunnel`

The `database-tunnel` service opens a listener for a service that tunnels connections to a database server.

The tunnel authenticates connections for the client, meaning that any application which can connect to the listener will be able to connect to the database as the specified user. For this reason, we strongly recommend using the Unix socket listener type and configuring the socket's permissions so that only the intended applications can connect.
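
For example, the Unix socket form of the listener replaces the `tcp://` address used in the example below; the socket path here is an arbitrary placeholder:

```
listen: unix:///opt/machine-id/postgres.sock
```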

```
# type specifies the type of the service. For the database tunnel service, this
# will always be `database-tunnel`.
type: database-tunnel
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: tcp://127.0.0.1:25432
# service is the name of the database server, as configured in Teleport, that
# the service should open a tunnel to.
service: postgres-docker
# database is the name of the specific database on the specified database
# service.
database: postgres
# username is the name of the user on the specified database server to open a
# tunnel for.
username: postgres
# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

The `database-tunnel` service will not start if `tbot` has been configured to run in one-shot mode.

### `application-tunnel`

The `application-tunnel` service opens a listener that tunnels connections to an application in Teleport. It supports both HTTP and TCP applications. This is useful for applications that cannot be configured to use client certificates, for TCP applications, or when an L7 load balancer sits in front of your Teleport proxies.

The tunnel authenticates connections for the client, meaning that any client that connects to the listener will be able to access the application. For this reason, ensure that the listener is only accessible by the intended clients by using the Unix socket listener or binding to `127.0.0.1`.

```
# type specifies the type of the service. For the application tunnel service,
# this will always be `application-tunnel`.
type: application-tunnel
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: tcp://127.0.0.1:8084
# app_name is the name of the application, as configured in Teleport, that
# the service should open a tunnel to.
app_name: my-application
# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

The `application-tunnel` service will not start if `tbot` has been configured to run in one-shot mode.

### `ssh-multiplexer`

The `ssh-multiplexer` service opens a listener for a high-performance local SSH multiplexer. This is designed for use-cases which create a large number of SSH connections using Teleport, for example, Ansible.

This differs from using the `identity` output for SSH in a few ways:

- The `tbot` instance running the `ssh-multiplexer` service must be running on the same host as the SSH client.
- The `ssh-multiplexer` service is designed to be a long-running background service and cannot be used in one-shot mode. It must be running for SSH connections to be established and to remain open.
- Resource consumption is significantly reduced by multiplexing SSH connections through a smaller number of upstream connections to the Teleport Proxy Service.
- It's possible to configure the `ssh-multiplexer` service to connect to SSH servers through a [Teleport Relay](https://goteleport.com/docs/reference/architecture/relay.md).

Additionally, the `ssh-multiplexer` opens a socket that implements the SSH agent protocol. This allows the SSH client to authenticate without writing the sensitive private key to disk.

By default, the `ssh-multiplexer` service outputs an `ssh_config` which uses `tbot` itself as the ProxyCommand. You can further reduce the resource consumption of SSH connections by installing and specifying the `fdpass-teleport` binary.

```
# type specifies the type of the service. For the SSH multiplexer service, this
# will always be `ssh-multiplexer`.
type: ssh-multiplexer
# destination specifies where the tunnel should be opened and any artifacts
# should be written. It must be of type `directory`.
destination:
  type: directory
  path: /foo
# enable_resumption specifies whether the multiplexer should negotiate
# session resumption. This allows SSH connections to survive network
# interruptions. It does increase the memory resources used per connection.
#
# If unspecified, this defaults to true.
enable_resumption: true
# proxy_command specifies the command that should be used as the ProxyCommand
# in the generated SSH configuration.
#
# If unspecified, the ProxyCommand will be the currently running binary of tbot
# itself.
proxy_command:
  - /usr/local/bin/fdpass-teleport
# proxy_templates_path specifies a path to a proxy templates configuration file
# which should be used when resolving the Teleport node to connect to. This
# file must be accessible by the long-lived tbot process running the
# ssh-multiplexer.
#
# If unspecified, proxy templates will not be used.
proxy_templates_path: /etc/my-proxy-templates.yaml
# relay_server specifies the Relay service address that tbot should use to route
# SSH traffic instead of going through the Teleport control plane. When set, all
# SSH connections are sent via this Relay; only servers reachable through the
# specified Relay will be accessible and tbot will not fall back to the control
# plane. Provide either a hostname or host:port (e.g. relay.example.com or
# relay.example.com:443).
#
# Use of the relay_server option requires `tbot` 18.3.0 or later.
relay_server: relay.example.com
# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

Once configured, `tbot` will create the following artifacts in the specified destination:

- `ssh_config`: an SSH configuration file that will configure OpenSSH to use the multiplexer and agent.
- `known_hosts`: the known hosts file that will be used by OpenSSH to validate a server's identity.
- `v1.sock`: the Unix socket that the multiplexer listens on.
- `agent.sock`: the Unix socket that the SSH agent listens on.
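
Once these artifacts exist, OpenSSH can be pointed at the generated configuration directly with the `-F` flag. A minimal sketch, assuming the destination path `/opt/machine-id` from the examples below, and a node hostname and login that exist in your cluster:

```
ssh -F /opt/machine-id/ssh_config root@ubuntu.example.teleport.sh
```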

The `ssh-multiplexer` service will not start if `tbot` has been configured to run in one-shot mode.

#### Using the SSH multiplexer programmatically

To use the SSH multiplexer programmatically, your SSH client library will need to support one of two things:

- The ability to use a ProxyCommand with FDPass. If so, you can use the `ssh_config` file generated by `tbot` to configure the SSH client.
- The ability to accept an open socket to use as the connection to the SSH server. You will then need to manually connect to the socket and send the multiplexer request.

The `v1.sock` Unix Domain Socket implements the V1 Teleport SSH multiplexer protocol. The client must first send a short request message to indicate the desired target host and port, terminated with a null byte. The multiplexer will then begin to forward traffic to the target host and port. The client can then make an SSH connection.
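
As an illustration of the framing only, the following sketch exercises the request format over a local socket pair; no Teleport connection is involved, and the host and port values are placeholders:

```python
import socket

def send_mux_request(sock, host, port):
    # V1 request: "<host>:<port>" terminated by a single null byte.
    sock.sendall(f"{host}:{port}".encode("utf-8") + b"\x00")

def read_mux_request(sock):
    # Read bytes until the null terminator to recover the target.
    buf = b""
    while not buf.endswith(b"\x00"):
        buf += sock.recv(1)
    return buf[:-1].decode("utf-8")

client, server = socket.socketpair()
send_mux_request(client, "ubuntu.example.teleport.sh", 3022)
print(read_mux_request(server))  # ubuntu.example.teleport.sh:3022
```

In the real protocol, the client would send this request over `v1.sock` and then begin the SSH handshake on the same connection, as the full examples below demonstrate.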

Example in Python (Paramiko)

```
import os
import paramiko
import socket

host = "ubuntu.example.teleport.sh"
username = "root"
port = 3022
directory_destination = "/opt/machine-id"

# Connect to Mux Unix Domain Socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(os.path.join(directory_destination, "v1.sock"))
# Send the connection request specifying the server you wish to connect to
sock.sendall(f"{host}:{port}\x00".encode("utf-8"))

# We must set the env var as Paramiko does not make this configurable...
os.environ["SSH_AUTH_SOCK"] = os.path.join(directory_destination, "agent.sock")

ssh_config = paramiko.SSHConfig()
with open(os.path.join(directory_destination, "ssh_config")) as f:
    ssh_config.parse(f)

ssh_client = paramiko.SSHClient()

# Paramiko does not support known_hosts with CAs: https://github.com/paramiko/paramiko/issues/771
# Therefore, we must disable host key checking
ssh_client.set_missing_host_key_policy(paramiko.WarningPolicy())

ssh_client.connect(
    hostname=host,
    port=port,
    username=username,
    sock=sock
)

stdin, stdout, stderr = ssh_client.exec_command("hostname")
print(stdout.read().decode())

```

Example in Go

```
package main

import (
    "fmt"
    "net"
    "path/filepath"

    "golang.org/x/crypto/ssh"
    "golang.org/x/crypto/ssh/agent"
    "golang.org/x/crypto/ssh/knownhosts"
)

func main() {
    host := "ubuntu.example.teleport.sh"
    username := "root"
    directoryDestination := "/opt/machine-id"

    // Setup Agent and Known Hosts
    agentConn, err := net.Dial(
        "unix", filepath.Join(directoryDestination, "agent.sock"),
    )
    if err != nil {
        panic(err)
    }
    defer agentConn.Close()
    agentClient := agent.NewClient(agentConn)
    hostKeyCallback, err := knownhosts.New(
        filepath.Join(directoryDestination, "known_hosts"),
    )
    if err != nil {
        panic(err)
    }

    // Create SSH Config
    sshConfig := &ssh.ClientConfig{
        Auth: []ssh.AuthMethod{
            ssh.PublicKeysCallback(agentClient.Signers),
        },
        User:            username,
        HostKeyCallback: hostKeyCallback,
    }

    // Dial Unix Domain Socket and send multiplexing request
    conn, err := net.Dial(
        "unix", filepath.Join(directoryDestination, "v1.sock"),
    )
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    _, err = fmt.Fprintf(conn, "%s:0\x00", host)
    if err != nil {
        panic(err)
    }

    sshConn, sshChan, sshReq, err := ssh.NewClientConn(
        conn,
        // Port here doesn't matter because Multiplexer has already established
        // connection.
        fmt.Sprintf("%s:22", host),
        sshConfig,
    )
    if err != nil {
        panic(err)
    }
    sshClient := ssh.NewClient(sshConn, sshChan, sshReq)
    defer sshClient.Close()

    sshSess, err := sshClient.NewSession()
    if err != nil {
        panic(err)
    }
    defer sshSess.Close()

    out, err := sshSess.CombinedOutput("hostname")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}

```

### `application-proxy`

The `application-proxy` service opens a listener serving an HTTP proxy that forwards requests to HTTP applications enrolled in Teleport. It handles the process of attaching the necessary client certificate to the upstream connection on behalf of the client.

Unlike the `application-tunnel` service, which is explicitly bound to a specific application, the `application-proxy` service dynamically routes requests to the correct application. This makes it more suitable for use-cases where a client must connect to a large number of applications enrolled in Teleport, for example, for scraping Prometheus metric endpoints through Teleport.

The proxy authenticates connections for the client, meaning that any client that connects to the listener will be able to access applications through Teleport. For this reason, ensure that the listener is only accessible by the intended clients by using the Unix socket listener or binding to `127.0.0.1`.

```
# type specifies the type of the service. For the application proxy service,
# this will always be `application-proxy`.
type: application-proxy
# listen specifies the address that the service should listen on.
#
# Two types of listener are supported:
# - TCP: `tcp://<address>:<port>`
# - Unix socket: `unix:///<path>`
listen: tcp://127.0.0.1:8080
# name optionally overrides the name of the service used in logs and the `/readyz`
# endpoint. It must only contain letters, numbers, hyphens, underscores, and plus
# symbols.
name: my-service-name

```

The `application-proxy` service will not start if `tbot` has been configured to run in one-shot mode.

#### Limitations

There are a number of limitations in the initial implementation of the `application-proxy` service to be aware of:

- Only HTTP applications are supported. TCP applications are not supported at this time.
- Only HTTP/1 and HTTP/1.1 are supported. HTTP/2 is not supported at this time.

If these limitations are problematic, consider using the `application-tunnel` service instead, or reach out to the Teleport team to discuss your use-case.

#### Using the application proxy

The listener exposed by the `application-proxy` service is compatible with a wide range of HTTP clients and libraries. How this is configured will depend on the client or library being used. For many clients, this is possible using the `http_proxy` environment variable.

When using the `application-proxy`, either the `Host` header or authority specified within a request's target URI must match the name of the application as enrolled within Teleport.

For example, using `curl` to access the HTTP application enrolled in Teleport called `my-app`:

```
curl --proxy localhost:8080 http://my-app/example

```
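
As another example, Prometheus can scrape metric endpoints through the proxy by setting `proxy_url` in a scrape configuration. A minimal sketch, assuming the proxy is listening on `127.0.0.1:8080` and an enrolled HTTP application named `my-app`:

```
scrape_configs:
  - job_name: my-app
    proxy_url: http://127.0.0.1:8080
    static_configs:
      - targets: ["my-app"]
```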

## Destinations

A destination is somewhere that `tbot` can read and write artifacts.

Destinations are used in two places in the `tbot` configuration:

- Specifying where `tbot` should store its internal state.
- Specifying where an output should write its generated artifacts.

Destinations come in multiple types. Usually, the `directory` type is the most appropriate.

### `directory`

The `directory` destination type stores artifacts as files in a specified directory.

```
# type specifies the type of the destination. For the directory destination,
# this will always be `directory`.
type: directory

# path specifies the path to the directory that this destination should write
# to. This directory should already exist, or `tbot init` should be used to
# create it with the correct permissions.
path: /opt/machine-id

# symlinks configures the behaviour of symlink attack prevention.
# Requires Linux 5.6+.
# Supported values:
#   * try-secure (default): Attempt to securely read and write certificates
#     without symlinks, but fall back (with a warning) to insecure read
#     and write if the host doesn't support this.
#   * secure: Attempt to securely read and write certificates, with a hard error
#     if unsupported.
#   * insecure: Quietly allow symlinks in paths.
symlinks: try-secure

# acls configures whether Linux Access Control List (ACL) setup should occur for
# this destination.
# Requires Linux with a file system that supports ACLs.
# Supported values:
#   * try (default on Linux): Attempt to use ACLs, warn at runtime if ACLs
#     are configured but invalid.
#   * off (default on non-Linux): Do not attempt to use ACLs.
#   * required: Always use ACLs, produce a hard error at runtime if ACLs
#     are invalid.
acls: try

# readers is a list of users and groups that will be allowed by ACL to access
# this directory output. The `acls` parameter must be either `try` or
# `required`. File ACLs will be monitored and corrected at runtime to ensure
# they match this configuration.
# Individual entries may either specify `user` or `group`, but not both. `user`
# accepts an existing named user or a UID, and `group` accepts an existing named
# group or GID. UIDs and GIDs do not necessarily need to exist on the local
# system.
# An empty list of readers disables runtime ACL management.
readers:
- user: teleport
- user: 123
- group: teleport
- group: 456

```

### `memory`

The `memory` destination type stores artifacts in the process memory. When the process exits, nothing is persisted. This destination type is most suitable for ephemeral environments, but can also be used for testing.

Configuration:

```
# type specifies the type of the destination. For the memory destination, this
# will always be `memory`.
type: memory

```

### `kubernetes_secret`

The `kubernetes_secret` destination type stores artifacts in a Kubernetes secret. This allows them to be mounted into other containers deployed in Kubernetes.

The secret does not need to exist in advance; `tbot` will create it if it is missing. If the secret already exists, `tbot` will overwrite any other keys within it.

By default, the `kubernetes_secret` destination operates against the Kubernetes cluster that `tbot` itself is running in. In this mode, very little configuration is required. However, you must ensure that:

- `tbot` is running in Kubernetes with at most one replica. If using a `Deployment`, the `Recreate` strategy must be used to ensure only one instance exists at any time. Otherwise, multiple `tbot` agents configured with the same secret will compete to write to it, which can leave the secret in an inconsistent state or cause the writes to fail.
- The `tbot` pod is configured with a service account that allows it to read and write from the configured secret.
- The `POD_NAMESPACE` environment variable is configured with the name of the namespace that `tbot` is running in. This is best achieved with the [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/).
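
For example, the `POD_NAMESPACE` environment variable can be populated via the Downward API in the pod spec (a sketch of the container's `env` section):

```
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```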

Using the `tbot` helm chart is the easiest way to ensure that all of these requirements are fulfilled.

It is also possible for `tbot` to write to a secret in a Kubernetes cluster other than the one it is running in, or to do so when running entirely outside of Kubernetes. To configure this:

- Set `kubeconfig_path` to the path of a kubeconfig file that contains the configuration necessary for `tbot` to connect and authenticate to the Kubernetes cluster.
- If your kubeconfig file contains multiple contexts, use `kubeconfig_context` to select the desired one. If this is unset, the default context will be used.
- Set `namespace` to the namespace that the secret should be written into.

Configuration:

```
# type specifies the type of the destination. For the kubernetes_secret
# destination, this will always be `kubernetes_secret`.
type: kubernetes_secret
# name specifies the name of the Kubernetes Secret to write the artifacts to.
name: my-secret
# namespace specifies the Kubernetes namespace that the secret should be written
# to. If unspecified, this defaults to the value of the `POD_NAMESPACE`
# environment variable.
#
# When using the Helm chart, and specifying a namespace other than the one that
# `tbot` is running in, you must manually grant the `tbot` service account
# privileges to read and write to secrets in that namespace.
namespace: default
# labels specifies the labels to apply to the Kubernetes Secret. This field is
# optional.
labels:
  example: "foo"
# kubeconfig_path is the path to a kubeconfig to use for connecting to and
# authenticating to the Kubernetes API server. When running tbot inside a
# Kubernetes cluster, configuring this is unnecessary as the in-cluster
# credentials can be used.
#
# This can be useful when running tbot outside a Kubernetes cluster or
# when tbot needs to write secrets to a Kubernetes cluster that differs
# from the one it is running in.
#
# This may also be set using the KUBECONFIG environment variable. The value
# here within the configuration file will take precedence over the
# environment variable.
kubeconfig_path: /opt/kube.yaml
# kubeconfig_context overrides which context to use from the kubeconfig.
#
# This has no effect when relying on the default in-cluster config and can
# only be used when the KUBECONFIG environment variable or the
# `kubeconfig_path` field has been set.
#
# When unspecified, the context currently set within the Kubernetes config
# file will be used.
kubeconfig_context: my-cluster

```

## Bot resource

The `bot` resource is used to manage Machine & Workload Identity Bots. It is used to configure the access that is granted to a Bot.

```
kind: bot
version: v1
metadata:
  # name is a unique identifier for the bot in the cluster.
  name: robot
spec:
  # roles is a list of roles that the bot should be able to generate credentials
  # for.
  roles:
  - editor
  # traits controls the traits applied to the Bot user. These are fed into the
  # role templating system and can be used to grant a specific Bot access to
  # specific resources without the creation of a new role.
  traits:
  - name: logins
    values:
    - root

```

You can apply a file containing YAML that defines a `bot` resource using `tctl create -f ./bot.yaml`.
