Proposal: Early Ticket Prediction via Mongo–Kafka–ClickHouse

Extend the MongoDB → Kafka → ClickHouse pipeline with ServiceNow ticket data to provide early-warning signals for incidents, helping on-call engineers see problems before tickets are created.

August 30, 2025 · 2 min · Ted Strall

ClickHouse DDL & Queries for TicketSoon Pilot

ClickHouse schema definitions and example queries for the TicketSoon pilot, integrating MongoDB CDC, system events, and ServiceNow tickets into a unified event store.

August 30, 2025 · 2 min · Ted Strall

Discovering Schedules and Dependencies from Mongo Change Streams

Many systems already know a lot about themselves — you just have to listen. MongoDB change streams (CDC) emit a continuous feed of inserts, updates, and deletes. With a little routing into a fast analytical database like ClickHouse, you can let the system “discover itself”: jobs, runs, schedules, dependencies, and even the fingerprints of human intervention.

1. Capture the Raw Feed

First, set up a connector: MongoDB → Kafka → ClickHouse. In ClickHouse, land the JSON envelopes losslessly: ...
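A minimal sketch of what “landing the JSON envelopes losslessly” could look like: keep the raw change-stream message as an opaque string and parse it downstream. The table and column names here are illustrative assumptions, not the post’s actual schema:

```sql
-- Raw landing table: store the full CDC envelope untouched, parse later.
CREATE TABLE cdc_raw
(
    ingested_at DateTime64(3) DEFAULT now64(3),
    topic       LowCardinality(String),   -- Kafka topic the message came from
    envelope    String                    -- verbatim JSON change-stream event
)
ENGINE = MergeTree
ORDER BY (topic, ingested_at);
```

Keeping the envelope lossless means later discovery queries (schedules, dependencies) can re-parse fields that weren’t anticipated at ingest time.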

August 30, 2025 · 5 min · Ted Strall

Implementing Entropy in Karma: The First Step

A practical blueprint for the first entropy-capable version of Karma — using simple statistical measures and ClickHouse queries to detect surprise.
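As a rough illustration of “simple statistical measures and ClickHouse queries to detect surprise”: ClickHouse ships an `entropy` aggregate function, which can score how predictable a job’s behavior is. The table and columns below are hypothetical, not the post’s published schema:

```sql
-- Higher entropy = less predictable start times = more room for "surprise".
SELECT
    job_name,
    entropy(toHour(started_at)) AS start_hour_entropy
FROM job_runs
WHERE started_at >= now() - INTERVAL 30 DAY
GROUP BY job_name
ORDER BY start_hour_entropy DESC;
```

Jobs at the top of this list are the ones whose next run time carries the most information — exactly where a learned-expectation model has the least to work with.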

August 9, 2025 · 2 min · Ted Strall

Karma and Entropy: From Surprise to Self-Healing

How Karma uses information-theoretic entropy to detect operational drift, learn expectations, and close the loop toward self-healing systems.

August 9, 2025 · 2 min · Ted Strall

Karma: Current State and Next Steps

Karma now ingests, normalizes, and routes events from any CDC-like source into a shared ledger, optional graph, and an action loop — setting the stage for learned expectations and autonomous intervention.

August 9, 2025 · 2 min · Ted Strall

Actions in Karma: From Events to Execution

In Karma, every action is just another event. This post explains the pattern for turning anomalies and rules into commands, tracking their execution, and feeding the results back into the same event pipeline.
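One way to picture “every action is just another event”: an action request is written into the same event store as any other record, and its outcome arrives later through the same pipeline. The schema and names here are illustrative assumptions:

```sql
-- An action request, recorded in the same envelope as any other event.
INSERT INTO events (ts, source, event_type, payload)
VALUES
(
    now(),
    'karma.actions',
    'action.requested',
    '{"action": "restart_job", "target": "nightly_etl", "reason": "missed_run"}'
);

-- Its result later lands as another event in the same table:
-- event_type = 'action.completed' (or 'action.failed'), same pipeline.
```

Because the request and its result share one ledger, tracking execution and feeding outcomes back into anomaly detection needs no separate bookkeeping system.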

August 9, 2025 · 2 min · Ted Strall

Splitting the Ledger and the Graph: Why Karma Uses Separate Pipelines for ClickHouse and Graph DB

Karma uses a single normalized event stream to feed both a ClickHouse ledger and an optional graph database — but through separate pipelines for flexibility, scalability, and clarity.

August 9, 2025 · 2 min · Ted Strall

A Generic, Config-Driven CDC Pipeline from MongoDB to ClickHouse

When you already have systems tracking their own state in MongoDB, you can turn that into a real-time stream of structured events without rewriting application logic. This approach captures every meaningful change from Mongo, tags it with relevant metadata, and makes it instantly queryable in ClickHouse — all through a generic, reusable pattern. The idea:

- One fixed event envelope for all sources
- Dynamic tags/attributes defined in config files
- No code changes when onboarding new collections

1. The Fixed Event Envelope

Every CDC message has the same top-level structure, no matter what source system or collection it came from: ...
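A sketch of what a fixed envelope could look like as a ClickHouse table, with config-driven tags held in a `Map` column so new collections need no DDL change. Every field name here is an assumption for illustration, not the post’s published schema:

```sql
CREATE TABLE events
(
    event_id   UUID,
    ts         DateTime64(3),
    source     LowCardinality(String),   -- e.g. 'mongo.orders'
    entity     String,                   -- document / collection key
    event_type LowCardinality(String),   -- insert / update / delete
    tags       Map(String, String),      -- dynamic attributes from config
    payload    String                    -- raw JSON document, kept lossless
)
ENGINE = MergeTree
ORDER BY (source, entity, ts);
```

Onboarding a new collection then means adding a config entry that maps document fields into `tags` — the envelope, and the table, stay fixed.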

August 9, 2025 · 3 min · Ted Strall

First Things to Do After Capturing MongoDB Change Streams in ClickHouse

Once your MongoDB change streams are flowing through Kafka and landing in ClickHouse, you’ve got a live, queryable event history for every state change in your systems. The obvious next step: start using it immediately — even before you build full-blown dashboards or machine learning models.

1. Detect Missing or Late Events

One of the fastest wins is catching when something doesn’t happen. If you know a collection normally sees certain events every day, you can query for absences: ...
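A minimal sketch of such an absence query, assuming a unified events table with a `job_name` column — the names are illustrative, not the post’s actual schema:

```sql
-- Jobs that completed at least once in the last 7 days, but not yet today.
SELECT job_name
FROM events
WHERE event_type = 'run.completed'
  AND ts >= today() - 7
GROUP BY job_name
HAVING max(ts) < today();
```

The 7-day window defines “normally happens”; anything matching the `HAVING` clause is a candidate missing or late event worth alerting on.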

August 9, 2025 · 3 min · Ted Strall