Actions in Karma: From Events to Execution

In Karma, every action is just another event. This post explains the pattern for turning anomalies and rules into commands, tracking their execution, and feeding the results back into the same event pipeline.
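A minimal sketch of that pattern, assuming a Kafka-backed event stream; the topic and event type names (karma.events, action.requested, action.completed) are placeholders, not taken from the post:

```python
import json
import uuid
from datetime import datetime, timezone

from kafka import KafkaProducer  # assumption: kafka-python client

# Assumption: one shared topic carries all Karma events, including actions.
EVENTS_TOPIC = "karma.events"

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def emit(event_type: str, payload: dict) -> dict:
    """Every action-related message is just another event on the same stream."""
    event = {
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,
        "ts": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    producer.send(EVENTS_TOPIC, event)
    return event

# An anomaly or rule match becomes a command...
requested = emit("action.requested", {"rule": "stale_feed", "command": "restart_ingest"})

# ...and its outcome flows back through the same pipeline as another event.
emit("action.completed", {"request_id": requested["event_id"], "status": "ok"})
producer.flush()
```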

August 9, 2025 · 2 min · Ted Strall

Splitting the Ledger and the Graph: Why Karma Uses Separate Pipelines for ClickHouse and Graph DB

Karma uses a single normalized event stream to feed both a ClickHouse ledger and an optional graph database — but through separate pipelines for flexibility, scalability, and clarity.
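One way to picture "separate pipelines over one stream" is two consumers with independent consumer groups on the same topic; a sketch under that assumption, with all names invented here:

```python
import json

from kafka import KafkaConsumer  # assumption: kafka-python client

# Assumption: both pipelines read the same normalized topic but keep independent
# offsets via separate consumer groups, so one can lag, be rebuilt, or be
# disabled entirely without affecting the other.
def make_consumer(group_id: str) -> KafkaConsumer:
    return KafkaConsumer(
        "karma.events",                      # shared normalized stream (illustrative name)
        bootstrap_servers="localhost:9092",
        group_id=group_id,
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

ledger_consumer = make_consumer("clickhouse-ledger")  # append-only ledger writer
graph_consumer = make_consumer("graph-projector")     # optional graph projection

# Each of these would run in its own process or service:
#   for msg in ledger_consumer: insert msg.value into the ClickHouse ledger
#   for msg in graph_consumer:  upsert nodes/edges into the graph database
```

Because each pipeline keeps its own offsets, the graph projection can be replayed or dropped without touching the ledger, which is one way to get the flexibility the post describes.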

August 9, 2025 · 2 min · Ted Strall

A Generic, Config-Driven CDC Pipeline from MongoDB to ClickHouse

When you already have systems tracking their own state in MongoDB, you can turn that into a real-time stream of structured events without rewriting application logic. This approach captures every meaningful change from Mongo, tags it with relevant metadata, and makes it instantly queryable in ClickHouse — all through a generic, reusable pattern. The idea:

- One fixed event envelope for all sources
- Dynamic tags/attributes defined in config files
- No code changes when onboarding new collections

1. The Fixed Event Envelope

Every CDC message has the same top-level structure, no matter what source system or collection it came from: ...
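The excerpt cuts off before the envelope itself, so the shape below is only a guess at what a fixed envelope plus per-collection config could look like; every field and collection name here is an assumption, not the post's schema:

```python
# Assumption: the real envelope fields differ; this only illustrates the split
# between a fixed top-level structure and config-driven tags/attributes.
envelope = {
    "event_id": "c7f3a1b2",            # unique id for the change event
    "source": "mongodb",               # fixed for this CDC pipeline
    "collection": "orders",            # which Mongo collection changed
    "op": "update",                    # insert / update / delete
    "ts": "2025-08-09T12:00:00Z",      # change timestamp
    "tags": {"team": "payments"},      # filled in from config, not code
    "payload": {"status": "shipped"},  # the changed document (or delta)
}

# Per-collection config: onboarding a new collection means adding an entry
# like this, not writing new code.
collection_config = {
    "orders": {"tags": {"team": "payments"}, "attributes": ["status", "total"]},
    "shipments": {"tags": {"team": "logistics"}, "attributes": ["carrier"]},
}
```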

August 9, 2025 · 3 min · Ted Strall

First Things to Do After Capturing MongoDB Change Streams in ClickHouse

Once your MongoDB change streams are flowing through Kafka and landing in ClickHouse, you’ve got a live, queryable event history for every state change in your systems. The obvious next step: start using it immediately — even before you build full-blown dashboards or machine learning models.

1. Detect Missing or Late Events

One of the fastest wins is catching when something doesn’t happen. If you know a collection normally sees certain events every day, you can query for absences: ...
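A hedged sketch of one such absence check, written against an invented events(collection, ts) table using the clickhouse-driver client; the post's own query is not shown in this excerpt:

```python
from datetime import datetime, timedelta

from clickhouse_driver import Client  # assumption: clickhouse-driver package

client = Client(host="localhost")

# Collections we expect to see at least one event from every day
# (table, column, and collection names are illustrative, not from the post).
EXPECTED = ["orders", "shipments", "invoices"]

rows = client.execute(
    """
    SELECT collection, max(ts) AS last_seen
    FROM events
    WHERE collection IN %(expected)s
    GROUP BY collection
    """,
    {"expected": EXPECTED},
)

# Assumes ts is stored as UTC, so naive datetimes compare cleanly.
last_seen = dict(rows)
cutoff = datetime.utcnow() - timedelta(days=1)

for collection in EXPECTED:
    ts = last_seen.get(collection)
    if ts is None:
        print(f"No events recorded for: {collection}")
    elif ts < cutoff:
        print(f"Late: {collection} last seen at {ts}")
```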

August 9, 2025 · 3 min · Ted Strall