Features:
- Broker. Each broker is completely stateless, acting as the leader for any topic partition, transaction or consumer group. Storage is separate from the broker. Schema validation with open table support. Quick to start: the broker can be spun down between API requests and spun back up on demand.
- Proxy. A high-volume, low-latency proxy for Kafka traffic, adding security, multi-tenancy, schema validation, throttling or batching.
- CLI. A developer-friendly CLI that can be used to administer the broker or produce to schema-backed topics.
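Because Tansu speaks the Kafka wire protocol, any standard Kafka client (or the bundled CLI) can produce to a schema-backed topic. Below is a minimal sketch using the kafka-python client; the broker address (localhost:9092) and the topic name (orders) are illustrative assumptions:

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

# Illustrative broker address; Tansu speaks the standard Kafka wire
# protocol, so an ordinary Kafka client works unchanged.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
)

# If "orders" is a schema-backed topic, the broker validates this payload
# against the registered schema and rejects records that do not conform.
producer.send("orders", {"id": 1, "quantity": 3, "sku": "ABC-123"})
producer.flush()
```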
Storage:
- PostgreSQL. Multiple brokers can use the same PostgreSQL database as storage. Topic data is partitioned, splitting what is logically one large table into smaller physical pieces. Simple for existing operational teams to manage.
- SQLite. Super simple to set up. Embedded in the Tansu binary. Single broker only. Widely adopted and very fast. A single database file can easily reproduce an environment on demand.
- S3. AWS S3 is designed to exceed 99.999999999% (11 nines) data durability. Multiple brokers can use the same S3 bucket using conditional writes, without an additional coordinator (see the conditional-write sketch after this list).
- memory. Designed for ephemeral development or test environments. Quick to set up. Even quicker to tear down.
- Broker Schema Validation. Avro, Protocol Buffers and JSON Schema backed topics are automatically validated by the broker (see the validation sketch after this list). Validation is embedded in the broker, with no other moving parts.
- Open Table Format. Automatic conversion of schema-backed topics into Delta Lake, Apache Iceberg or Parquet open table/file formats. Sink topics can skip the Kafka metadata overhead, writing directly into the data lake (see the query sketch after this list).
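The coordinator-free S3 storage described above relies on conditional writes: a PUT that only succeeds when the object does not already exist, which gives brokers compare-and-set semantics directly against the bucket. Below is a minimal sketch of that primitive using boto3; the bucket name and object key are illustrative assumptions, and the IfNoneMatch parameter requires a recent boto3/botocore release:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # Conditional write: succeeds only if the object does not already
    # exist, so two brokers racing for the same key cannot both win.
    # Bucket and key names here are purely illustrative.
    s3.put_object(
        Bucket="tansu-example-bucket",
        Key="example-topic/segment-00000.log",
        Body=b"...",
        IfNoneMatch="*",
    )
    print("this writer won the object")
except ClientError as err:
    if err.response["Error"]["Code"] == "PreconditionFailed":
        print("another writer created the object first")
    else:
        raise
```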
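Broker schema validation amounts to checking each record's payload against the schema registered for the topic before the record is accepted. Below is a minimal sketch of the JSON Schema case using the jsonschema library; the schema and records are illustrative assumptions, and with Tansu this check runs inside the broker rather than in client code:

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

# Illustrative schema; with Tansu the schema is registered for the topic
# and the broker applies the equivalent check to every produced record.
ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "quantity": {"type": "integer", "minimum": 1},
        "sku": {"type": "string"},
    },
    "required": ["id", "quantity", "sku"],
}


def is_valid(record: dict) -> bool:
    """Return True when the record conforms to ORDER_SCHEMA."""
    try:
        validate(instance=record, schema=ORDER_SCHEMA)
        return True
    except ValidationError:
        return False


print(is_valid({"id": 1, "quantity": 3, "sku": "ABC-123"}))  # True
print(is_valid({"id": 1, "quantity": 0}))                    # False
```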
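Once a schema-backed topic has been written out in an open table/file format, the data can be queried with ordinary data lake tooling, without a Kafka consumer. Below is a minimal sketch reading Parquet output with pyarrow; the local path is an illustrative assumption and depends on how the sink is configured:

```python
import pyarrow.parquet as pq  # pip install pyarrow

# Illustrative path to a directory of Parquet files produced from a
# schema-backed topic; point this at wherever the sink writes.
table = pq.read_table("./lake/orders/")

print(table.schema)     # column names/types derived from the topic schema
print(table.num_rows)   # number of records written so far
```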
Articles:
- Tuning Tansu: 600,000 record/s with 13MB of RAM, tuning the broker with the null storage engine using cargo flamegraph
- Using flame graphs to remove a hot path, stop copying data and switch to a fast CRC32, which tuned a hot regular expression, stopped copying uncompressed data and used a faster CRC32 implementation with the SQLite storage engine
- Route, Layer and Process Kafka Messages with Tansu Services, the composable layers that are used to build the Tansu broker and proxy
- Apache Kafka protocol with serde, quote, syn and proc_macro2, a walk-through of the low-level Kafka protocol implementation used by Tansu
- Effortlessly Convert Kafka Messages to Apache Parquet with Tansu: A Step-by-Step Guide, using a schema-backed topic to write data into the Parquet file format
- Using Tansu with Tigris on Fly, spin up (and down!) a broker on demand
- Smoke Testing with the Bash Automated Testing System 🦇, a look at the integration tests that are part of the Tansu CI system
Examples: