# Production
The Docker stack in this repository is a local development and testing tool. This page covers what to change for production.
## Authentication
Replace the test JWKS with a real identity provider:
- Test setup: Envoy validates JWTs against a local file (`e2e/fixtures/jwks.json`) with a committed RSA keypair.
- Production: Point Envoy's `remote_jwks` at your identity provider's JWKS endpoint (Auth0, Cognito, Keycloak, etc.). Update the `issuer` and `audiences` to match.
The DS server itself needs no changes. It is auth-agnostic by design.
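As an illustration, an Envoy `jwt_authn` filter pointed at a hosted provider might look like the sketch below. The provider name, issuer, audience, JWKS URL, and cluster name are all placeholders for your identity provider; adapt them and the matching cluster definition to your deployment.

```yaml
http_filters:
  - name: envoy.filters.http.jwt_authn
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
      providers:
        idp:                                    # placeholder provider name
          issuer: https://your-tenant.example-idp.com/
          audiences: ["https://api.example.com"]
          remote_jwks:
            http_uri:
              uri: https://your-tenant.example-idp.com/.well-known/jwks.json
              cluster: idp_jwks                 # cluster must be defined elsewhere in the config
              timeout: 5s
            cache_duration: 600s                # cache keys to limit JWKS fetches
      rules:
        - match: { prefix: "/" }
          requires: { provider_name: "idp" }
```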
## Persistent storage
The DS server supports multiple storage modes:
- `memory`: in-RAM only; no restart durability.
- `file-fast`/`file-durable`: file-backed per-stream logs.
- `acid`: sharded redb backend with ACID commits and immediate durability.
For production durability, use `file-durable` or `acid`, or run the sync layer to mirror data into Postgres.
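For example, a minimal environment for durable file-backed storage could look like this (the data directory path is an example, not a repository default):

```sh
DS_STORAGE__MODE=file-durable
DS_STORAGE__DATA_DIR=/var/lib/ds   # example path; per-stream logs live under it
```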
## Acid mode tuning
- `DS_STORAGE__MODE=acid` and `DS_STORAGE__DATA_DIR=/path/to/store` persist under `${DATA_DIR}/acid/`.
- `DS_STORAGE__ACID_SHARD_COUNT` controls write concurrency. Keep it a power of 2 (1..=256); the default is 16.
- Writes are serialized per shard (single writer per shard). Increase the shard count for highly concurrent write workloads.
- Acid mode commits with immediate durability (fsync-class semantics on each commit), prioritizing crash safety over raw append latency.
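Putting those variables together, a write-heavy acid deployment might be configured as below; the path and shard count are illustrative values, not recommendations:

```sh
DS_STORAGE__MODE=acid
DS_STORAGE__DATA_DIR=/var/lib/ds      # data persists under /var/lib/ds/acid/
DS_STORAGE__ACID_SHARD_COUNT=32       # power of 2 in 1..=256; default is 16
```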
## Electric SQL configuration
- Pin the Electric SQL version in your deployment (the stack uses `electricsql/electric:1.4.2`).
- Configure Postgres replication slots carefully. The defaults (`max_wal_senders=10`, `max_replication_slots=10`) work for development but may need tuning for production workloads.
- Ensure `wal_level=logical` is set in your Postgres configuration. This is required for Electric's logical replication.
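The Postgres settings above can be sketched as a `postgresql.conf` fragment (a `wal_level` change requires a server restart to take effect):

```
# postgresql.conf — settings for Electric's logical replication
wal_level = logical            # required; restart Postgres after changing
max_wal_senders = 10           # raise for more concurrent replication consumers
max_replication_slots = 10     # raise if you run additional slot consumers
```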
## Sync service resilience
The reference sync service (`e2e/sync/sync.mjs`) is a starting point, not production-ready:
- Offset persistence: The sync service should save its SSE offset (from the `id:` field) to a Postgres table so it can resume after restart without replaying the full stream.
- Error handling: Add retries with backoff for Postgres insert failures and SSE reconnection.
- Scaling: The sync service is a single process. For high-throughput streams, partition by stream name or run multiple instances with offset coordination.
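Two of these concerns can be sketched as small helpers. These are illustrative functions, not part of the repository's `sync.mjs`: one computes a capped exponential backoff delay, the other extracts the last SSE offset from a raw event chunk so it can be persisted and later sent back via the `Last-Event-ID` header on reconnect.

```javascript
// Capped exponential backoff: attempt 0 -> 500 ms, 1 -> 1 s, ... up to capMs.
function backoffMs(attempt, baseMs = 500, capMs = 30_000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Extract the last SSE offset from a raw event chunk ("id: <offset>" lines).
// Returns null when the chunk carries no id, so the caller keeps its old offset.
function lastEventId(chunk) {
  const ids = [...chunk.matchAll(/^id:\s*(\S+)$/gm)].map((m) => m[1]);
  return ids.length > 0 ? ids[ids.length - 1] : null;
}
```

On each successful Postgres batch insert, write the latest offset to a table in the same transaction; on reconnect, read it back and sleep `backoffMs(attempt)` between failed attempts, resetting `attempt` to 0 after a successful connection.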
## Memory limits
Tune the server's memory limits for your workload:
| Variable | Default | Notes |
|---|---|---|
| `DS_LIMITS__MAX_MEMORY_BYTES` | 100 MB | Total across all streams |
| `DS_LIMITS__MAX_STREAM_BYTES` | 10 MB | Per stream |
In production with the sync layer, streams are consumed and can be deleted after sync. When using the default in-memory mode, the store acts as a buffer, not long-term storage. For persistence without the sync layer, use file-durable or acid mode.
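For instance, a deployment with a larger buffer might raise both limits; the values below are examples to adapt to your workload, not recommendations:

```sh
DS_LIMITS__MAX_MEMORY_BYTES=536870912   # 512 MB total across all streams
DS_LIMITS__MAX_STREAM_BYTES=33554432    # 32 MB per stream
```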
## Monitoring
- Log and alert on sync service errors, SSE reconnections, and PG insert failures.
- The sync service logs events to stdout; aggregate with your preferred log pipeline.
- Use the Envoy admin dashboard (port 9901 in dev) for proxy metrics and connection debugging.
- The DS server logs via `tracing`, with levels configurable through `RUST_LOG`.
## CORS
The server defaults to `DS_HTTP__CORS_ORIGINS=*` (allow all). For production, restrict to your application's domain:

`DS_HTTP__CORS_ORIGINS=https://app.example.com cargo run`
Multiple origins can be comma-separated.
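For example, to allow two origins (the domains are placeholders):

```sh
DS_HTTP__CORS_ORIGINS=https://app.example.com,https://admin.example.com
```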
## TLS
Default model: terminate TLS at the proxy layer (Envoy, nginx, cloud load balancer) or at a CDN edge.
Optional model: the DS server can terminate TLS directly when both `DS_TLS__CERT_PATH` and `DS_TLS__KEY_PATH` are set.
Recommended topology matrix:
- Internet-facing traffic: terminate TLS at proxy/edge (this is still the default).
- Proxy -> DS server hop: use direct TLS on the DS server when you need encrypted in-cluster/intra-box traffic.
- mTLS: terminate and enforce mTLS at the proxy when possible; DS direct TLS currently covers server-side TLS termination.
HTTP/2 and HTTP/3 are typically negotiated at the proxy/edge. Enabling direct TLS on the DS server secures that hop, while proxy capabilities still govern external ALPN behavior.
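As a sketch, enabling direct TLS on the DS server only requires pointing the two variables at a PEM certificate and key; the paths below are examples:

```sh
DS_TLS__CERT_PATH=/etc/ds/tls/server.crt   # example path to the PEM certificate
DS_TLS__KEY_PATH=/etc/ds/tls/server.key    # example path to the PEM private key
```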