# Export Teleport Audit Events to the Elastic Stack

Teleport's Event Handler plugin receives audit events from the Teleport Auth Service and forwards them to your log management solution, letting you perform historical analysis, detect unusual behavior, and form a better understanding of how users interact with your Teleport cluster.

In this guide, we will show you how to configure Teleport's Event Handler plugin to send your Teleport audit events to the Elastic Stack.

## How it works

The Teleport Event Handler authenticates to the Teleport Auth Service to receive audit events over a gRPC stream, then sends those events to Logstash, which stores them in Elasticsearch for visualization and alerting in Kibana.

## Prerequisites

- Logstash version 8.4.1 or above running on a Linux host. In this guide, you will also run the Event Handler plugin on this host.

- Elasticsearch and Kibana version 8.4.1 or above, either running via an Elastic Cloud account or on your own infrastructure. You will need permissions to create and manage users in Elasticsearch.

  We have tested this guide on the Elastic Stack version 8.4.1.

- A server, virtual machine, Kubernetes cluster, or Docker environment to run the Teleport Event Handler plugin.

This guide requires you to have completed one of the Event Handler setup guides:

- [Set up the Event Handler with tctl](https://goteleport.com/docs/zero-trust-access/export-audit-events/event-handler-setup.md)
- [Set up the Event Handler with the Teleport Kubernetes Operator](https://goteleport.com/docs/zero-trust-access/export-audit-events/event-handler-setup-operator.md)

The instructions below demonstrate a local test of the Event Handler plugin on your workstation. You will need to adjust paths, ports, and domains for other environments.

## Step 1/3. Configure a Logstash pipeline

The Event Handler plugin forwards audit logs from Teleport by sending HTTP requests to a user-configured endpoint. We will define a Logstash pipeline that handles these requests, extracts logs, and sends them to Elasticsearch.

### Create a role for the Event Handler plugin

Your Logstash pipeline will require permissions to create and manage Elasticsearch indexes and index lifecycle management policies, as well as to retrieve information about your Elasticsearch deployment. Create a role with these permissions so you can later assign it to the Elasticsearch user you will create for the Event Handler.

In Kibana, navigate to "Management" > "Roles" and click "Create role". Enter the name `teleport-plugin` for the new role. Under the "Elasticsearch" section, under "Cluster privileges", enter `manage_index_templates`, `manage_ilm`, and `monitor`.

Under "Index privileges", define an entry with `audit-events-*` in the "Indices" field and `write` and `manage` in the "Privileges" field. Click "Create role".

![Creating an Elasticsearch role](/docs/assets/images/create-role-734e828c7982c1d4b3bba2ece1f88f88.png)
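If you manage Elasticsearch roles outside of Kibana, the same role can be created with the Elasticsearch security API by sending a request body like the following to `PUT /_security/role/teleport-plugin` (a sketch; authenticate as a user permitted to manage security):

```
{
  "cluster": ["manage_index_templates", "manage_ilm", "monitor"],
  "indices": [
    {
      "names": ["audit-events-*"],
      "privileges": ["write", "manage"]
    }
  ]
}
```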

### Create an Elasticsearch user for the Event Handler

Create an Elasticsearch user that Logstash can authenticate as when making requests to the Elasticsearch API.

In Kibana, find the hamburger menu on the upper left and click "Management", then "Users" > "Create user". Enter `teleport` for the "Username" and provide a secure password.

Assign the user the `teleport-plugin` role we defined earlier.
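Alternatively, you can create this user with the Elasticsearch security API by sending a request body like the following to `PUT /_security/user/teleport` (a sketch; replace the placeholder password with a secure one):

```
{
  "password": "EXAMPLE_PASSWORD",
  "roles": ["teleport-plugin"]
}
```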

### Prepare TLS credentials for Logstash

Later in this guide, your Logstash pipeline will use an HTTP input to receive audit events from the Teleport Event Handler plugin.

Logstash's HTTP input can only use a private key in the unencrypted PKCS #8 format. When you ran `teleport-event-handler configure` earlier, the command generated an encrypted RSA key, so we will convert this key to PKCS #8.

You will need a password to decrypt the RSA key. To retrieve this, execute the following command in the directory where you ran `teleport-event-handler configure`:

```
$ cat fluent.conf | grep passphrase
private_key_passphrase "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
```

Convert the encrypted RSA key to an unencrypted PKCS #8 key. The command will prompt you for the password you retrieved:

```
$ openssl pkcs8 -topk8 -in server.key -nocrypt -out pkcs8.key
```

Enable Logstash to read the new key, plus the CA and certificate we generated earlier:

```
$ chmod +r pkcs8.key ca.crt server.crt
```
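If you want to rehearse the conversion end to end before touching your real credentials, you can run it in a scratch directory with a throwaway key (a sketch; `demo.key` and the demo passphrase stand in for your `server.key` and the passphrase from `fluent.conf`):

```
# Generate a throwaway AES-encrypted RSA key with a known passphrase.
openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:demo-passphrase -out demo.key
# Convert it to an unencrypted PKCS #8 key, supplying the passphrase non-interactively.
openssl pkcs8 -topk8 -in demo.key -passin pass:demo-passphrase -nocrypt -out demo-pkcs8.key
# Unencrypted PKCS #8 keys use the generic "BEGIN PRIVATE KEY" PEM header.
head -n 1 demo-pkcs8.key
```

An unencrypted PKCS #8 key begins with `-----BEGIN PRIVATE KEY-----`, whereas the original encrypted key begins with `-----BEGIN ENCRYPTED PRIVATE KEY-----`, so the header is a quick way to confirm the conversion succeeded.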

### Define an index template

When the Event Handler plugin sends audit events to Logstash, Logstash needs to know how to parse these events to forward them to Elasticsearch. You can define this logic using an index template, which Elasticsearch uses to construct an index for data it receives.

Create a file called `audit-events.json` with the following content:

```
{
  "index_patterns": ["audit-events-*"],
  "template": {
    "settings": {},
    "mappings": {
      "dynamic":"true"
    }
  }
}

```

This index template modifies any index with the pattern `audit-events-*`. Because it includes the `"dynamic": "true"` setting, it instructs Elasticsearch to define index fields dynamically based on the events it receives. This is useful for Teleport audit events, which use a variety of fields depending on the event type.
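If you are working on the Logstash host over SSH, you can create the file and sanity-check it from the shell (a sketch; assumes `python3` is available for the JSON check):

```
# Write the index template to audit-events.json.
cat > audit-events.json <<'EOF'
{
  "index_patterns": ["audit-events-*"],
  "template": {
    "settings": {},
    "mappings": {
      "dynamic": "true"
    }
  }
}
EOF
# Fail fast on malformed JSON before Logstash tries to install the template.
python3 -m json.tool audit-events.json > /dev/null && echo "audit-events.json is valid JSON"
```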

### Define a Logstash pipeline

On the host where you are running Logstash, create a file called `/etc/logstash/conf.d/teleport-audit.conf` to define a Logstash pipeline. This pipeline will receive logs on port `9601` and forward them to Elasticsearch. Give the file the following content:

```
input {
  http {
    port => 9601
    ssl =>  true
    ssl_certificate => "/home/server.crt"
    ssl_key =>  "/home/pkcs8.key"
    ssl_certificate_authorities => [
      "/home/ca.crt"
    ]
    ssl_verify_mode => "force_peer"
  }
}
output {
  elasticsearch {
    user => "teleport"
    password => "ELASTICSEARCH_PASSPHRASE"
    template_name => "audit-events"
    template => "/home/audit-events.json"
    index => "audit-events-%{+yyyy.MM.dd}"
    template_overwrite => true
  }
}

```

In the `input.http` section, update `ssl_certificate` and `ssl_certificate_authorities` to include the locations of the server certificate and certificate authority files that the `teleport-event-handler configure` command generated earlier.

Logstash will authenticate client certificates against the CA file and present a signed certificate to the Teleport Event Handler plugin.

Edit the `ssl_key` field to include the path to the `pkcs8.key` file we generated earlier.

In the `output.elasticsearch` section, edit the following fields depending on whether you are using Elastic Cloud or your own Elastic Stack deployment:

**Elastic Cloud**

Assign `cloud_auth` to a string with the content `teleport:PASSWORD`, replacing `PASSWORD` with the password you assigned to your `teleport` user earlier.

Visit `https://cloud.elastic.co/deployments`, find the "Cloud ID" field, copy the content, and add it as the value of `cloud_id` in your Logstash pipeline configuration. The `elasticsearch` section should resemble the following:

```
  elasticsearch {
    cloud_id => "CLOUD_ID"
    cloud_auth => "teleport:PASSWORD" 
    template_name => "audit-events"
    template => "/home/audit-events.json"
    index => "audit-events-%{+yyyy.MM.dd}"
    template_overwrite => true
  }

```

**Self-Hosted**

Assign `hosts` to a string indicating the hostname of your Elasticsearch host.

Assign `user` to `teleport` and `password` to the passphrase you created for your `teleport` user earlier.

The `elasticsearch` section should resemble the following:

```
  elasticsearch {
    hosts => "elasticsearch.example.com"
    user => "teleport" 
    password => "PASSWORD" 
    template_name => "audit-events"
    template => "/home/audit-events.json"
    index => "audit-events-%{+yyyy.MM.dd}"
    template_overwrite => true
  }

```

Finally, modify `template` to point to the path to the `audit-events.json` file you created earlier.

Because the index template we will create with this file applies to indices with the prefix `audit-events-*`, and we have configured our Logstash pipeline to create an index named `audit-events-%{+yyyy.MM.dd}`, Elasticsearch will automatically index fields from Teleport audit events.
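The `%{+yyyy.MM.dd}` suffix is a Logstash date pattern, so the pipeline writes each day's events to its own index. Purely as an illustration of the naming scheme, the equivalent with `date` looks like this:

```
# Illustration only: print today's index name in the same audit-events-YYYY.MM.dd form.
date +"audit-events-%Y.%m.%d"
```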

### Disable the Elastic Common Schema for your pipeline

The Elastic Common Schema (ECS) is a standard set of fields that Elastic Stack uses to parse and visualize data. Since we are configuring Elasticsearch to index all fields from your Teleport audit logs dynamically, we will disable the ECS for your Logstash pipeline.

On the host where you are running Logstash, edit `/etc/logstash/pipelines.yml` to add the following entry:

```
- pipeline.id: teleport-audit-logs
  path.config: "/etc/logstash/conf.d/teleport-audit.conf"
  pipeline.ecs_compatibility: disabled

```

This disables the ECS for your Teleport audit log pipeline.

---

TIP

If your `pipelines.yml` file defines an existing pipeline that includes `teleport-audit.conf`, e.g., by using a wildcard value in `path.config`, adjust the existing pipeline definition so it no longer applies to `teleport-audit.conf`.

---

### Run the Logstash pipeline

Restart Logstash:

```
$ sudo systemctl restart logstash
```

Make sure your Logstash pipeline started successfully by running the following command to tail Logstash's logs:

```
$ sudo journalctl -u logstash -f
```

When your Logstash pipeline initializes its `http` input and starts running, you should see a log similar to this:

```
Sep 15 18:27:13 myhost logstash[289107]: [2022-09-15T18:27:13,491][INFO ][logstash.inputs.http][main][33bdff0416b6a2b643e6f4ab3381a90c62b3aa05017770f4eb9416d797681024] Starting http input listener {:address=>"0.0.0.0:9601", :ssl=>"true"}

```

These logs indicate that your Logstash pipeline has connected to Elasticsearch and installed a new index template:

```
Sep 12 19:49:06 myhost logstash[33762]: [2022-09-12T19:49:06,309][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.4.1) {:es_version=>8}
Sep 12 19:50:00 myhost logstash[33762]: [2022-09-12T19:50:00,993][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"audit-events"}

```

**Pipeline not starting?**

If Logstash fails to initialize the pipeline, it may continue to attempt to contact Elasticsearch. In that case, you will see repeated logs like the one below:

```
Sep 12 19:43:04 myhost logstash[33762]: [2022-09-12T19:43:04,519][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://teleport:xxxxxx@127.0.0.1:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::ClientProtocolException] 127.0.0.1:9200 failed to respond"}

```

### Diagnosing the problem

To diagnose the cause of errors initializing your Logstash pipeline, search your Logstash `journalctl` logs for the following, which indicate that the pipeline is starting. The relevant error logs should come shortly after these:

```
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,146][INFO ][logstash.javapipeline][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/teleport-audit.conf"], :thread=>"#<Thread:0x1c1a3ee5 run>"}
Sep 12 18:15:52 myhost logstash[27906]: [2022-09-12T18:15:52,912][INFO ][logstash.javapipeline][main] Pipeline Java execution initialization time {"seconds"=>0.76}

```

### Disabling Elasticsearch TLS

This guide assumes that you have already configured Elasticsearch and Logstash to communicate with one another via TLS.

If your Elastic Stack deployment is in a sandboxed or low-security environment (e.g., a demo environment), and your `journalctl` logs for Logstash show that Elasticsearch is unreachable, you can disable TLS for communication between Logstash and Elasticsearch.

Edit the file `/etc/elasticsearch/elasticsearch.yml` to set `xpack.security.http.ssl.enabled` to `false`, then restart Elasticsearch.
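The relevant fragment of `/etc/elasticsearch/elasticsearch.yml` would look like the following (again, only appropriate for sandboxed or demo environments):

```
# Demo/sandbox environments only: serve the Elasticsearch HTTP API without TLS.
xpack.security.http.ssl:
  enabled: false
```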

## Step 2/3. Run the Event Handler plugin

In this section, you will modify the Event Handler configuration you generated and run the Event Handler to test your configuration.

### Configure the Event Handler

Edit the configuration for the Event Handler, depending on your installation method.

**Executable**

Earlier, we generated a file called `teleport-event-handler.toml` to configure the Teleport Event Handler. This file includes settings similar to the following:

```
storage = "./storage"
timeout = "10s"
batch = 20
# concurrency is the number of concurrent sessions to process. By default, this is set to 5.
concurrency = 5
# The window size configures the duration of the time window for the event handler
# to request events from Teleport. By default, this is set to 24 hours.
# Reduce the window size if the events backend cannot manage the event volume
# for the default window size.
# The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
window-size = "24h"
# types is a comma-separated list of event types to search when forwarding audit
# events. For example, to limit forwarded events to user logins
# and new Access Requests, you can assign this field to
# "user.login,access_request.create".
types = ""
# skip-event-types is a comma-separated list of audit log event types to skip.
# For example, to forward all audit events except for new app deletion events,
# you can include the following assignment:
# skip-event-types = ["app.delete"]
skip-event-types = []
# skip-session-types is a comma-separated list of session recording event types to skip.
# For example, to forward all session events except for malformed SQL packet
# events, you can include the following assignment:
# skip-session-types = ["db.session.malformed_packet"]
skip-session-types = []

[forward.fluentd]
ca = "/home/bob/event-handler/ca.crt"
cert = "/home/bob/event-handler/client.crt"
key = "/home/bob/event-handler/client.key"
url = "https://fluentd.example.com:8888/test.log"
session-url = "https://fluentd.example.com:8888/session"

[teleport]
addr = "teleport.example.com:443"
identity = "identity"

```

**Helm Chart**

Earlier, we generated a file called `teleport-plugin-event-handler-values.yaml` to configure the Teleport Event Handler. This file includes settings similar to the following:

```
eventHandler:
  storagePath: "./storage"
  timeout: "10s"
  batch: 20
  # concurrency is the number of concurrent sessions to process. By default, this is set to 5.
  concurrency: 5
  # The window size configures the duration of the time window for the event handler
  # to request events from Teleport. By default, this is set to 24 hours.
  # Reduce the window size if the events backend cannot manage the event volume
  # for the default window size.
  # The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
  windowSize: "24h"
  # types is a list of event types to search when forwarding audit
  # events. For example, to limit forwarded events to user logins
  # and new Access Requests, you can assign this field to:
  # ["user.login", "access_request.create"]
  types: []
  # skipEventTypes lists types of audit events to skip. For example, to forward all
  # audit events except for new app deletion events, you can assign this to:
  # ["app.delete"]
  skipEventTypes: []
  # skipSessionTypes lists types of session recording events to skip. For example,
  # to forward all session events except for malformed SQL packet events,
  # you can assign this to:
  # ["db.session.malformed_packet"]
  skipSessionTypes: []

teleport:
  address: teleport.example.com:443
  identitySecretName: teleport-event-handler-identity
  identitySecretPath: identity

fluentd:
  url: "https://fluentd.fluentd.svc.cluster.local/events.log"
  sessionUrl: "https://fluentd.fluentd.svc.cluster.local/session.log"
  certificate:
    secretName: "teleport-event-handler-client-tls"
    caPath: "ca.crt"
    certPath: "client.crt"
    keyPath: "client.key"

persistentVolumeClaim:
  enabled: true

```

**Helm Chart with Kubernetes Operator**

Your helm configuration file `teleport-plugin-event-handler-values.yaml` should contain settings similar to the following:

```
eventHandler:
  storagePath: "./storage"
  timeout: "10s"
  batch: 20
  # concurrency is the number of concurrent sessions to process. By default, this is set to 5.
  concurrency: 5
  # The window size configures the duration of the time window for the event handler
  # to request events from Teleport. By default, this is set to 24 hours.
  # Reduce the window size if the events backend cannot manage the event volume
  # for the default window size.
  # The window size should be specified as a duration string, parsed by Go's time.ParseDuration.
  windowSize: "24h"
  # types is a list of event types to search when forwarding audit
  # events. For example, to limit forwarded events to user logins
  # and new Access Requests, you can assign this field to:
  # ["user.login", "access_request.create"]
  types: []
  # skipEventTypes lists types of audit events to skip. For example, to forward all
  # audit events except for new app deletion events, you can assign this to:
  # ["app.delete"]
  skipEventTypes: []
  # skipSessionTypes lists types of session recording events to skip. For example,
  # to forward all session events except for malformed SQL packet events,
  # you can assign this to:
  # ["db.session.malformed_packet"]
  skipSessionTypes: []

crd:
  create: true
  namespace: operator-namespace

tbot:
  enabled: true
  clusterName: teleport.example.com
  teleportProxyAddress: teleport.example.com:443

fluentd:
  url: "https://fluentd.fluentd.svc.cluster.local/events.log"
  sessionUrl: "https://fluentd.fluentd.svc.cluster.local/session.log"
  certificate:
    secretName: "teleport-event-handler-client-tls"
    caPath: "ca.crt"
    certPath: "client.crt"
    keyPath: "client.key"

persistentVolumeClaim:
  enabled: true

```

Update the following fields.

**Executable**

**`[teleport]`**

`addr`: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud account, e.g. `teleport.example.com:443`.

`identity`: Fill this in with the path to the identity file you exported earlier.

If you are providing credentials to the Event Handler using a `tbot` binary that runs on a Linux server, make sure the value of `identity` in the Event Handler configuration is the same as the path of the identity file you configured `tbot` to generate, `/opt/machine-id/identity`.

**`[forward.fluentd]`**

`ca`: Include the path to the CA certificate, e.g. `/home/bob/event-handler/ca.crt`.

`cert`: Include the path to the Fluentd client certificate, e.g. `/home/bob/event-handler/client.crt`.

`key`: Include the path to the Fluentd client key, e.g. `/home/bob/event-handler/client.key`.

`url`: Include the Fluentd URL where the audit event logs will be sent.

`session-url`: Include the Fluentd URL where the session logs will be sent.

**Helm Chart**

**`teleport`**

`address`: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud account, e.g. `teleport.example.com:443`.

`identitySecretName`: Fill in the `identitySecretName` field with the name of the Kubernetes secret you created earlier.

`identitySecretPath`: Fill in the `identitySecretPath` field with the path of the identity file within the Kubernetes secret. If you have followed the instructions above, this will be `identity`.

**`fluentd`**

`url`: Include the Fluentd URL where the audit event logs will be sent.

`sessionUrl`: Include the Fluentd URL where the session logs will be sent.

`certificate.secretName`: Include the name of the Kubernetes secret containing the Fluentd client credentials. If you have followed the instructions above, this will be `teleport-event-handler-client-tls`.

`certificate.caPath`: Include the path to the CA certificate inside the secret.

`certificate.certPath`: Include the path to the Fluentd client certificate inside the secret.

`certificate.keyPath`: Include the path to the Fluentd client key inside the secret.

**Helm Chart with Kubernetes Operator**

**`crd`**

`namespace`: Include the namespace that the Teleport Kubernetes Operator is running in, e.g. `operator-namespace`.

`tokenSpecOverride`: Optionally include a specific join token specification for the bot user that `tbot` will authenticate as.

**`tbot`**

`clusterName`: Include the name of your Teleport cluster, e.g. `teleport.example.com`.

`teleportProxyAddress`: Include the hostname and HTTPS port of your Teleport Proxy Service or Teleport Enterprise Cloud account, e.g. `teleport.example.com:443`.

**`fluentd`**

`url`: Include the Fluentd URL where the audit event logs will be sent.

`sessionUrl`: Include the Fluentd URL where the session logs will be sent.

`certificate.secretName`: Include the name of the Kubernetes secret containing the Fluentd client credentials. If you have followed the instructions above, this will be `teleport-event-handler-client-tls`.

`certificate.caPath`: Include the path to the CA certificate inside the secret.

`certificate.certPath`: Include the path to the Fluentd client certificate inside the secret.

`certificate.keyPath`: Include the path to the Fluentd client key inside the secret.

Change `forward.fluentd.url` to the scheme, host, and port you configured for your Logstash `http` input earlier, `https://localhost:9601`. Change `forward.fluentd.session-url` to the same value with a root URL path: `https://localhost:9601/`.

### Start the Event Handler

Start the Teleport Event Handler by following the instructions below.

**Linux server**

Copy the `teleport-event-handler.toml` file to `/etc` on your Linux server, and update its settings to match your environment. Make sure to use absolute paths in settings such as `identity` and `storage`. Files and directories that the plugin uses, such as `/var/lib/teleport-event-handler`, should only be accessible to the system user that runs the `teleport-event-handler` service.

Next, create a systemd service definition at the path `/usr/lib/systemd/system/teleport-event-handler.service` with the following content:

```
[Unit]
Description=Teleport Event Handler
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/usr/local/bin/teleport-event-handler start --config=/etc/teleport-event-handler.toml --teleport-refresh-enabled=true
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/run/teleport-event-handler.pid

[Install]
WantedBy=multi-user.target

```

If you are not using Machine & Workload Identity to provide short-lived credentials to the Event Handler, you can remove the `--teleport-refresh-enabled=true` flag.

Enable and start the plugin:

```
$ sudo systemctl enable teleport-event-handler
$ sudo systemctl start teleport-event-handler
```

**Choose when to start exporting events**

When you run the `start` command, you can configure the time from which the Teleport Event Handler begins exporting events. This example starts exporting events from May 5, 2021:

```
$ teleport-event-handler start --config /etc/teleport-event-handler.toml --start-time "2021-05-05T00:00:00Z"
```

You can only set the start time once, when first running the Teleport Event Handler. If you want to change the time frame later, remove the plugin state directory that you specified in the `storage` field of the handler's configuration file.

Once the Teleport Event Handler starts, you will see notifications about scanned and forwarded events:

```
$ sudo journalctl -u teleport-event-handler
DEBU   Event sent id:f19cf375-4da6-4338-bfdc-e38334c60fd1 index:0 ts:2022-09-21
18:51:04.849 +0000 UTC type:cert.create event-handler/app.go:140
...
```

**Helm chart**

Run the following command on your workstation:

```
$ helm install teleport-plugin-event-handler teleport/teleport-plugin-event-handler \
  --values teleport-plugin-event-handler-values.yaml \
  --version 18.7.3
```

**Local Docker container**

Navigate to the directory where you ran the `configure` command earlier and execute the following command:

```
$ docker run --network host -v `pwd`:/opt/teleport-plugin -w /opt/teleport-plugin public.ecr.aws/gravitational/teleport-plugin-event-handler:18.7.3 start --config=teleport-event-handler.toml
```

This command joins the Event Handler container to the preset `host` network, which uses Docker host networking mode and removes network isolation, so the Event Handler can communicate with Logstash on `localhost`.

## Step 3/3. Create a data view in Kibana

Make it possible to explore your Teleport audit events in Kibana by creating a data view. In the Elastic Stack UI, find the hamburger menu on the upper left of the screen, then click "Management" > "Data Views". Click "Create data view".

For the "Name" field, use "Teleport Audit Events". In "Index pattern", use `audit-events-*` to select all indices created by our Logstash pipeline. In "Timestamp field", choose `time`, which Teleport adds to its audit events.

![Creating a data view](/docs/assets/images/data-view-create-d96e60dea95cdb5a0a40ed5777349899.png)

To use your data view, find the search box at the top of the Elastic Stack UI and enter "Discover". On the upper left of the screen, click the dropdown menu and select "Teleport Audit Events". You can now search and filter your Teleport audit events to better understand how users interact with your Teleport cluster.

![Creating a data view](/docs/assets/images/data-view-explore-f470f1261410cd5592130463f989581b.png)

For example, we can click the `event` field on the left sidebar and visualize the event types for your Teleport audit events over time:

![Creating a visualization](/docs/assets/images/lens-dacdb80c241bca8f5d8ba42760add284.png)

## Troubleshooting connection issues

If the Teleport Event Handler is displaying error logs while connecting to your Teleport Cluster, ensure that:

- The certificate the Teleport Event Handler uses to connect to your Teleport cluster has not expired. The certificate's lifetime is set by the `--ttl` flag in the `tctl auth sign` command, and defaults to 12 hours.
- In your Teleport Event Handler configuration file, you have provided the correct host *and* port for the Teleport Proxy Service.

## Next steps

- Now that you are exporting your audit events to the Elastic Stack, consult our [audit event reference](https://goteleport.com/docs/reference/deployment/monitoring/audit.md) so you can plan visualizations and alerts.
- To see all of the options you can set in the values file for the `teleport-plugin-event-handler` Helm chart, consult our [reference guide](https://goteleport.com/docs/reference/helm-reference/teleport-plugin-event-handler.md).
