Web Development Sunday, 22 March 2026

The Message Broker That Runs on 20MB and Starts in 10 Milliseconds


Kafka is brilliant. It's also absurdly heavy.

Running a Kafka cluster means managing ZooKeeper, configuring brokers, monitoring replication, tuning JVM garbage collection, and keeping an eye on disk usage across nodes. For large-scale systems processing millions of events per second, that overhead is worth it. For smaller workloads - a startup logging user events, a SaaS app handling webhooks, a team prototyping a data pipeline - it's overkill.

Enter Tansu. Announced at QCon London 2026, it's an open-source message broker written in Rust that's Kafka-compatible but uses just 20MB of RAM and starts in 10 milliseconds. It's stateless, scales to zero, and writes directly to object storage or data lakes. No cluster leadership. No replication overhead. No ZooKeeper.

For anyone who's ever thought "I just need a simple event stream," this is the answer they've been waiting for.

Why Kafka Became So Heavy

Kafka was built for LinkedIn's infrastructure in 2011. The use case was high-throughput, low-latency event streaming at massive scale. To achieve that, Kafka makes architectural choices that prioritise durability and performance over operational simplicity.

Messages are written to disk, replicated across brokers, and organised into partitions. Cluster coordination requires ZooKeeper (or KRaft, in newer versions). Producers and consumers need to understand partition assignments. Scaling means adding brokers, rebalancing partitions, and managing replication factors.
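The partition-awareness described above is easy to see in miniature: Kafka's default partitioner hashes the record key to pick a partition. Kafka proper uses murmur2 hashing; the sketch below substitutes CRC32 to stay dependency-free, but the property that matters is the same:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Pick a partition for a keyed record, Kafka-style.

    Kafka's default partitioner uses murmur2; CRC32 stands in here
    for illustration. What matters is determinism: the same key
    always maps to the same partition, which is what gives Kafka
    its per-key ordering guarantee.
    """
    return zlib.crc32(key) % num_partitions

p = partition_for(b"user-42", num_partitions=12)
assert p == partition_for(b"user-42", num_partitions=12)  # stable per key

# Changing the partition count remaps keys, which is why resizing a
# live Kafka topic disrupts ordering and forces consumer rebalances.
```

This hashing happens client-side as part of the Kafka protocol, so it carries over unchanged to any Kafka-compatible broker.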

For a team running Kafka in production, that's three to five nodes minimum, each with non-trivial memory and disk requirements. And even if your workload is small, you're still paying the operational cost of keeping that cluster healthy.

Tansu sidesteps all of this by being stateless. Instead of storing messages locally, it writes directly to pluggable storage backends - S3, Azure Blob, Google Cloud Storage, or a data lake. The broker itself holds nothing. If it crashes, you spin up another one. No state to recover. No partitions to rebalance.
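One way to picture a stateless broker is as a pure function from (topic, partition, offset) to object-store keys, with the object store holding all the state. The key layout below is purely illustrative, not Tansu's actual scheme:

```python
def segment_key(topic: str, partition: int, base_offset: int) -> str:
    """Map a record batch to a deterministic object-store key.

    Because the mapping is a pure function of topic, partition, and
    offset, any broker instance -- including one started milliseconds
    ago -- can locate the same data without local state or cluster
    coordination.
    """
    return f"topics/{topic}/partition={partition:04d}/{base_offset:020d}.batch"

print(segment_key("webhooks", 0, 1500))
# topics/webhooks/partition=0000/00000000000000001500.batch
```

Since every instance computes identical keys, adding or removing broker instances never requires moving data, which is what makes "no partitions to rebalance" possible.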

What This Means for Builders

The practical impact is cost and simplicity. A startup building an event-driven architecture can now use Kafka-compatible tooling without paying for a managed Kafka service or running a cluster. A single Tansu instance handles modest workloads with minimal resources. When traffic drops, it scales to zero. When traffic spikes, you add instances. No rebalancing. No downtime.

For prototyping, this is transformational. You can spin up a local Tansu broker, write events to it, and test your pipeline without Docker Compose files or waiting for a cluster to stabilise. A 10-millisecond startup time means you can include the broker in your test suite without slowing down CI.
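Because the wire protocol is Kafka's, the prototype code itself stays broker-agnostic. A minimal sketch of the serialization half, with the client call it would plug into shown as a comment (the localhost:9092 address and the kafka-python library are assumptions for illustration, not anything Tansu prescribes):

```python
import json

def encode_event(event: dict) -> bytes:
    """Serialize an event for any Kafka-protocol broker.

    sort_keys keeps the encoding deterministic, which makes
    golden-file assertions in CI straightforward.
    """
    return json.dumps(event, sort_keys=True).encode("utf-8")

# With a Kafka client library this would plug in as, e.g.:
#   KafkaProducer(bootstrap_servers="localhost:9092",
#                 value_serializer=encode_event)
# and behave the same whether the broker is Kafka or Tansu.

payload = encode_event({"user": "42", "action": "signup"})
print(payload)  # b'{"action": "signup", "user": "42"}'
```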

For production, the trade-offs are more nuanced. Tansu's stateless design means slightly higher latency than Kafka - you're writing to object storage, not local disk. For real-time systems where every millisecond counts, that matters. For event logging, webhook delivery, or batch processing, it doesn't.
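A rough model makes the trade-off concrete; every number below is illustrative, not a measured benchmark:

```python
def batched_write_model(put_latency_ms: float, linger_ms: float,
                        batch_size: int) -> tuple[float, float]:
    """Rough model of batching records into one object-storage PUT.

    Batching amortises the *per-request* cost across the batch, but
    each message still pays the full PUT latency (plus the time spent
    waiting for the batch to fill) end to end.
    """
    worst_case_latency_ms = linger_ms + put_latency_ms
    puts_per_message = 1 / batch_size
    return worst_case_latency_ms, puts_per_message

latency, puts = batched_write_model(put_latency_ms=50.0,
                                    linger_ms=10.0, batch_size=500)
print(latency, puts)  # 60.0 0.002
```

A 60-millisecond worst case is fatal for a real-time trading system and irrelevant for webhook delivery, which is exactly the dividing line drawn above.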

The bigger advantage is operational simplicity. No cluster to manage. No partitions to monitor. No brokers to upgrade in sequence. You deploy Tansu like any other stateless service. It either works or it doesn't. If it doesn't, you restart it. That's the entire runbook.
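In Kubernetes terms, that runbook amounts to an ordinary stateless Deployment. Everything in this sketch - the image name, port, and resource figures - is hypothetical, not official Tansu packaging:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tansu
spec:
  replicas: 1                # scale toward 0 when idle, up under load
  selector:
    matchLabels: {app: tansu}
  template:
    metadata:
      labels: {app: tansu}
    spec:
      containers:
        - name: broker
          image: example/tansu:latest   # placeholder image reference
          ports:
            - containerPort: 9092       # standard Kafka wire-protocol port
          resources:
            requests: {memory: 32Mi, cpu: 50m}   # headroom above ~20MB
```

No StatefulSet, no persistent volumes, no ordered rolling upgrades: a crashed pod is simply replaced.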

The Rust Factor

Tansu is written in Rust, which explains the low memory footprint and fast startup time. Rust's zero-cost abstractions and lack of garbage collection mean predictable performance without the JVM overhead that comes with Kafka.

This is part of a broader shift. More infrastructure tools are being built in Rust - Vector, Qdrant, Delta Lake's Rust bindings, now Tansu. The pattern is consistent: take the job a heavyweight Java or Go tool does, rebuild it in Rust, and end up with something that does the same work with a fraction of the resources.

For developers, that means leaner deployments, lower cloud bills, and fewer "why is the JVM using 4GB of RAM" debugging sessions. For infrastructure teams, it means simpler capacity planning and fewer nodes to manage.

Who This Isn't For

Tansu isn't replacing Kafka at LinkedIn or Netflix. If you're processing petabytes of events per day with sub-millisecond latency requirements, Kafka's architecture is precisely what you need. The operational complexity is justified by the performance characteristics.

But most teams aren't LinkedIn. Most event streams are measured in thousands of messages per second, not millions. Most workloads can tolerate 50-millisecond latency instead of 5-millisecond. For those use cases, Kafka's complexity is a liability, not a strength.

Tansu also requires rethinking some assumptions. If you're used to Kafka's partition-based parallelism, a stateless broker feels unfamiliar. If your architecture depends on Kafka's exactly-once semantics, you'll need to verify Tansu provides equivalent guarantees. The Kafka-compatible API helps, but "compatible" doesn't mean "identical."

The Bigger Picture

This fits a pattern we've seen before. Databases got simpler with SQLite. Web servers got leaner with Caddy. Object storage replaced SANs. The trend is toward tools that do one thing well, start fast, use minimal resources, and don't require a dedicated ops team to keep running.

Tansu takes that philosophy and applies it to message brokers. For teams who need event streaming but don't need Kafka's full feature set, it's a better fit. For teams prototyping or building side projects, it removes a significant barrier to entry.

And for anyone who's ever thought "surely there's a simpler way to do this" while configuring a Kafka cluster - yes, now there is.


Today's Sources

  • GeekWire: Report: Amazon is making another phone, this time for the AI era
  • GeekWire: Blue Origin adds 51,600 satellites to orbital data center race with Project Sunrise
  • Wired AI: I Tried DoorDash's Tasks App and Saw the Bleak Future of AI Gig Work
  • TechCrunch: Are AI tokens the new signing bonus or just a cost of doing business?
  • TechCrunch AI: Delve accused of misleading customers with 'fake compliance'
  • TechCrunch: Publisher pulls horror novel 'Shy Girl' over AI concerns
  • Quantum Zeitgeist: Quantum Computers Gain Speed with Network Achieving 100ps Synchronisation
  • Quantum Zeitgeist: Low-Power Lasers Now Control Material Vibrations for Faster Electronics
  • Phys.org Quantum Physics: Physicists find electronic agents that govern flat band quantum materials
  • Quantum Zeitgeist: Arthur D. Little Analyzes Optimism Surrounding Quantum Computing Development
  • InfoQ: QCon London 2026: Introducing Tansu.io - Rethinking Kafka for Lean Operations
  • Elementor: 10 Best WordPress AI Chatbots in 2026
  • Hacker News: Sashiko: An agentic Linux kernel code review system

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes