Event stream setup


Krenalis uses NATS as its internal event stream to process and route events.

NATS is a lightweight messaging system designed for distributed applications. It is simple to operate, requires minimal resources, and delivers messages with very low latency, even under high throughput.

Krenalis uses NATS to:

  • process events asynchronously
  • decouple sources from destinations
  • retry event processing safely in case of failures
  • scale event pipelines independently
  • replicate events for high availability

This approach keeps event processing fast, reliable, and predictable as the system grows.

If you are running Krenalis via Docker Compose, a NATS instance is already included and configured automatically. This page explains how to set up NATS when you install Krenalis from source and manage your own NATS infrastructure.

Requirements

Krenalis requires NATS server v2.12 and supports different NATS configurations depending on your deployment needs.

Topology

You can run NATS in one of the following modes:

  • Single-node, for development or small installations
  • Clustered, for high availability and fault tolerance

Storage

Events can be stored using:

  • File-based storage, recommended for production
  • In-memory storage, suitable for testing

Authentication

Krenalis supports the following NATS authentication methods:

  • NKey (recommended)
  • Token
  • Username and password
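For reference, these methods map to the NATS server's standard authorization block. The fragments below are illustrative sketches only; the key, token, and password values are placeholders.

```
# NKey (recommended): the server stores only the public key.
authorization {
  users = [ { nkey: UDEXAMPLEPUBLICKEY } ]
}

# Token:
authorization {
  token: "example-token"
}

# Username and password:
authorization {
  users = [ { user: krenalis, password: "example-password" } ]
}
```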

How Krenalis uses NATS

At runtime, Krenalis automatically creates the required streams and consumers if they do not already exist. In particular:

  • An EVENTS stream stores all incoming events.
  • A dedicated consumer is created for each enabled source and destination pipeline.
  • Each event is replicated in the stream for every pipeline that needs to process it.
  • Events are removed only after processing is complete (for example, once data is written to a warehouse or delivered to a destination).
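You can observe these objects directly with the nats CLI (from the natscli project); a sketch, assuming the CLI is installed and can reach your server:

```shell
# Sketch: inspect the stream and consumers Krenalis creates.
# Skips cleanly if the nats CLI is not installed.
STREAM=EVENTS
if command -v nats >/dev/null 2>&1; then
  nats stream info "$STREAM" || true   # the stream storing all incoming events
  nats consumer ls "$STREAM" || true   # one consumer per enabled pipeline
else
  echo "nats CLI not available; skipping inspection"
fi
```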

Running managed NATS

You can use a managed NATS service instead of running your own servers. This option reduces operational overhead and offloads maintenance, upgrades, and monitoring.

Synadia provides managed NATS deployments in multiple AWS and GCP regions, as well as a supported self-hosted offering.

Running self-managed NATS

1. Install the NATS server

Install the NATS server using Go:

go install github.com/nats-io/nats-server/v2@latest

Make sure the nats-server binary (generally ~/go/bin/nats-server) is available in your PATH.
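If the binary is not found, you can add Go's default install directory to your PATH. A sketch for the current shell session; your Go bin directory may differ if GOBIN or GOPATH is set:

```shell
# Add Go's default install directory to PATH for this session.
export PATH="$HOME/go/bin:$PATH"

# Persist it for future sessions (bash shown; adjust for your shell):
# echo 'export PATH="$HOME/go/bin:$PATH"' >> ~/.bashrc
```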

2. Generate NKey credentials

NKey authentication uses a public/private key pair.

First, install the nk tool:

go install github.com/nats-io/nkeys/nk@latest

Generate a new user key pair:

nk -gen user -pubout

This command prints two lines:

  1. Private key → used by Krenalis. ⚠️ Keep it secret. It starts with SU.
  2. Public key → used in the NATS configuration file. It starts with U.
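If you want to capture the two keys in a script, you can split the output by key prefix. A sketch; the sample values below are placeholders standing in for real `nk -gen user -pubout` output, not actual credentials:

```shell
# Placeholder output standing in for `nk -gen user -pubout`:
NK_OUTPUT="SUAEXAMPLEPRIVATESEEDPLACEHOLDER
UDEXAMPLEPUBLICKEYPLACEHOLDER"

# Split by prefix: the private seed starts with SU, the public key with U.
PRIVATE_KEY=$(printf '%s\n' "$NK_OUTPUT" | grep '^SU')   # keep this secret
PUBLIC_KEY=$(printf '%s\n' "$NK_OUTPUT" | grep '^U')     # goes in nats.conf

echo "public key: $PUBLIC_KEY"
```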

3. Configure the NATS server

Below is an example NATS configuration with NKey authentication and persistent storage on disk.

Single-node

listen: 0.0.0.0:4222

authorization {
  users = [ { nkey: YOUR_PUBLIC_NKEY } ]
}

jetstream { store_dir: /var/lib/nats }

Replace YOUR_PUBLIC_NKEY (starting with U) with the public key generated in the previous step.
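As a sketch, the file can be generated from a script, reading the public key from an environment variable (NATS_PUBLIC_NKEY is an assumed name here, and the default value is a placeholder). The `nats-server -t` flag tests a configuration file and exits, which is a convenient sanity check before deploying:

```shell
# Sketch: generate a NATS config file, substituting the public NKey from an
# environment variable (placeholder default shown).
NATS_PUBLIC_NKEY="${NATS_PUBLIC_NKEY:-UDEXAMPLEPUBLICKEY}"
cat > /tmp/nats.conf <<EOF
listen: 0.0.0.0:4222

authorization {
  users = [ { nkey: ${NATS_PUBLIC_NKEY} } ]
}

jetstream { store_dir: /var/lib/nats }
EOF

# If nats-server is installed, validate the config without starting the server.
if command -v nats-server >/dev/null 2>&1; then
  nats-server -t -c /tmp/nats.conf
fi
```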

Cluster (High Availability)

To run NATS in a cluster, deploy multiple NATS servers on separate machines:

  • nodes must be able to communicate with each other
  • the storage replication factor must not exceed the total number of nodes (commonly 3 or 5)
  • each node with storage must have its own disk

Below is an example of a 3-node cluster where all three nodes are configured to store and replicate data.

Node 1
server_name: nats-node-1

cluster {
  listen: 0.0.0.0:6222
  routes = [
    nats://nats-node-2:6222
    nats://nats-node-3:6222
  ]
}

listen: 0.0.0.0:4222

authorization {
  users = [ { nkey: YOUR_PUBLIC_NKEY } ]
}

jetstream {
  store_dir: /var/lib/nats/storage
}

Replace YOUR_PUBLIC_NKEY (starting with U) with the public key generated in the previous step.

Node 2 and Node 3

Use the same configuration, changing:

  • server_name
  • routes to point to the other nodes
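For example, node 2's configuration would differ from node 1 only in these fields:

```
server_name: nats-node-2

cluster {
  listen: 0.0.0.0:6222
  routes = [
    nats://nats-node-1:6222
    nats://nats-node-3:6222
  ]
}
```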

Only nodes with a jetstream section will participate in storage replication.

For maximum availability, place nodes in different availability zones or regions.

4. Start the NATS server

Start NATS using the configuration file created in the previous step:

nats-server -c /etc/nats/nats.conf

Check the server logs (written to standard error by default) to confirm that the server started successfully.

5. Configure environment variables

Add the following environment variables to the .env file in the same directory as the Krenalis executable:

Single-node

KRENALIS_NATS_URL=nats://nats-node:4222
KRENALIS_NATS_NKEY=<YOUR_PRIVATE_NKEY>

Cluster

KRENALIS_NATS_URL=nats://nats-node-1:4222,nats://nats-node-2:4222,nats://nats-node-3:4222
KRENALIS_NATS_NKEY=<YOUR_PRIVATE_NKEY>
KRENALIS_NATS_REPLICAS=3

Replace <YOUR_PRIVATE_NKEY> with the private key generated earlier (it starts with SU).
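As a sketch, the single-node settings can be written from a script. The path and key below are placeholders; in practice the file is the .env next to the Krenalis executable, and the key is your real private NKey:

```shell
# Write the NATS settings into a .env file (placeholder path and values).
ENV_FILE=/tmp/krenalis.env
cat > "$ENV_FILE" <<'EOF'
KRENALIS_NATS_URL=nats://nats-node:4222
KRENALIS_NATS_NKEY=SUEXAMPLEPRIVATEKEYPLACEHOLDER
EOF
```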

Next steps

  • Starting Krenalis. Start Krenalis for the first time, which creates the database tables.