# Load into warehouse

## Store events into your data warehouse in real time.

When events are sent to Krenalis, they are stored in your data warehouse **quickly, reliably, and safely**, making them immediately available for analytics and downstream use cases.

Each workspace is provisioned with its own dedicated data warehouse database. Krenalis creates and manages a single central table that stores all events ingested from connected sources. To access this data, you should query the [`events` view](https://www.krenalis.com/docs/views-and-tables.md#event-view-schema), which provides a stable and consistent abstraction over the underlying table and is designed for analytics and integrations.

The sections below explain how Krenalis handles event ordering, delivery timing, and durability as events move from ingestion to the data warehouse.

## Ordering

When events are received by Krenalis, they are processed in a way that preserves their **relative order for each user**, making it possible to analyze behavior over time in a consistent and predictable way.

Events generated by the same user share the same `anonymous_id`. For these events, Krenalis preserves the sequence in which they are received and keeps them ordered throughout the entire pipeline, up to their storage in the data warehouse. This allows downstream analyses such as funnels, sessions, and timelines to rely on a stable event sequence.

To benefit from this behavior, events should be sent by the client in chronological order. If events arrive out of order, Krenalis cannot reconstruct their original sequence. In rare situations, such as a server restart, events received very close together in time may also be slightly reordered. For this reason, analytical queries should rely on event timestamps (`timestamp` or `sent_at`) rather than assuming a strictly ordered ingestion sequence.
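The recommendation above can be illustrated with a minimal Python sketch: instead of trusting arrival order, sort a user's events by their timestamp before building funnels or timelines. The sample events below are hypothetical; only the `anonymous_id`, `event`, and `timestamp` field names mirror the documented schema.

```python
from datetime import datetime, timezone

# Hypothetical events for one user, received slightly out of order.
events = [
    {"anonymous_id": "a1", "event": "Checkout",
     "timestamp": datetime(2024, 5, 1, 12, 0, 5, tzinfo=timezone.utc)},
    {"anonymous_id": "a1", "event": "Page View",
     "timestamp": datetime(2024, 5, 1, 12, 0, 1, tzinfo=timezone.utc)},
    {"anonymous_id": "a1", "event": "Add to Cart",
     "timestamp": datetime(2024, 5, 1, 12, 0, 3, tzinfo=timezone.utc)},
]

# Order by event timestamp, not by ingestion order.
timeline = sorted(events, key=lambda e: e["timestamp"])
print([e["event"] for e in timeline])
# → ['Page View', 'Add to Cart', 'Checkout']
```

The same principle applies in SQL: add an `ORDER BY timestamp` clause when querying the `events` view rather than assuming row order.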
## Deduplication

In addition to preserving order, Krenalis ensures that each event is stored **once and only once** in the data warehouse. Every event is uniquely identified by its `message_id`, which guarantees that duplicates are not written. This makes event ingestion idempotent: clients can safely retry requests without the risk of creating duplicated data.

All events sent to Krenalis must include a `messageId` field. This field is stored as `message_id` in the data warehouse and must be globally unique. Krenalis SDKs automatically generate `messageId` values as UUIDs, making them effectively unique even across different clients.

## Delivery timing

Krenalis is designed to make events available in the data warehouse in **near real time**, while continuing to operate reliably under sustained load. Once events are received, they move through the ingestion pipeline and are loaded into the data warehouse within a short time window. In most cases, events appear in the warehouse in **less than one second to a few seconds**, making them quickly available for analytics and activation.

The system is built to handle high throughput at scale. Depending on the characteristics of the machine it runs on, Krenalis can process **tens of thousands of events per second**, and ingestion performance naturally scales with available resources. This allows throughput to grow with demand without requiring special tuning on the client side.

Delivery timing can vary based on factors such as network conditions, event volume, and internal batching strategies. Some of these strategies intentionally trade a small amount of latency for higher efficiency and lower warehouse costs, resulting in a more stable and predictable system over time.

To understand delivery timing in detail, Krenalis exposes multiple timestamps for each event.
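As a sketch of how those timestamps can be used, the snippet below breaks client-to-warehouse latency into stages. The field names `timestamp`, `sent_at`, and `received_at` come from the documented schema; the sample values are hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical timestamps for a single event.
event = {
    # When the event occurred on the client.
    "timestamp":   datetime(2024, 5, 1, 12, 0, 0, 0, tzinfo=timezone.utc),
    # When the client actually sent it (after local batching).
    "sent_at":     datetime(2024, 5, 1, 12, 0, 0, 250000, tzinfo=timezone.utc),
    # When Krenalis received it.
    "received_at": datetime(2024, 5, 1, 12, 0, 0, 900000, tzinfo=timezone.utc),
}

client_delay  = event["sent_at"] - event["timestamp"]       # client-side buffering
network_delay = event["received_at"] - event["sent_at"]     # transit to Krenalis
total_latency = event["received_at"] - event["timestamp"]

print(total_latency.total_seconds())  # → 0.9
```

Running the same arithmetic in SQL over the `events` view gives a latency distribution for the whole pipeline rather than a single event.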
By comparing `timestamp`, `sent_at`, and `received_at`, you can observe ingestion behavior end to end and analyze latency at each stage of the pipeline.

## Durability and fault tolerance

From the moment events are accepted by Krenalis, they are **protected from data loss**. Once an event is acknowledged, it will not be lost, even if the data warehouse is temporarily unavailable, slow, or undergoing maintenance.

As events arrive, Krenalis first stores them in a persistent internal event stream. This stream acts as a durable buffer between event ingestion and warehouse loading, ensuring that incoming data is safely persisted before any downstream processing occurs.

Krenalis acknowledges a client request only after all events in that request have been successfully written to this persistent stream. At that point, the events are durably stored, and the client can safely assume that the data has been accepted and protected.

From there, events are processed and loaded into the data warehouse asynchronously. If the warehouse is temporarily unreachable or operating under reduced capacity, events remain safely stored and are delivered automatically as soon as the warehouse becomes available again. Events are removed from the persistent stream only after they have been successfully written to the data warehouse.

This design allows Krenalis to absorb temporary failures, decouple ingestion from delivery, and guarantee reliable event storage, without requiring retries or special handling from the client.
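The acknowledge-then-load flow described above can be modeled with a toy Python sketch. This is not Krenalis's implementation, just an illustration of the invariant: an event is acknowledged once it is in the persistent stream, and removed from the stream only after the warehouse write succeeds.

```python
from collections import deque

class DurableBufferSketch:
    """Toy model: ack after the event is in the persistent stream;
    remove it from the stream only after the warehouse write succeeds."""

    def __init__(self):
        self.stream = deque()   # stands in for the persistent event stream
        self.warehouse = []     # stands in for the warehouse table

    def ingest(self, event) -> bool:
        self.stream.append(event)   # durably persisted first...
        return True                 # ...then the client request is acknowledged

    def load_once(self, warehouse_up: bool):
        # Asynchronous loader: if the warehouse is down, events simply
        # stay buffered and are retried on the next pass.
        while self.stream and warehouse_up:
            event = self.stream[0]
            self.warehouse.append(event)  # write to the warehouse succeeds
            self.stream.popleft()         # only now is it removed from the stream

buf = DurableBufferSketch()
buf.ingest({"event": "Order Completed"})
buf.load_once(warehouse_up=False)   # outage: event stays safely buffered
buf.load_once(warehouse_up=True)    # recovery: delivered automatically
print(len(buf.stream), len(buf.warehouse))  # → 0 1
```

The key design choice mirrored here is ordering the two writes: durability is established before acknowledgment, and deletion happens strictly after successful delivery, so a crash at any point leaves the event either buffered or delivered, never lost.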