If you'd told me at the beginning of 2025 that by January 2026 I'd have delivered a data warehouse migration, a full BI platform migration, and an entirely new real-time data layer for an AI agent, I would have asked what happened to the other two people who were supposed to help.
But that's what happened. Three major projects in sequence, each building on the last, none of them planned as a trilogy from the start.
Project 1: BigQuery to ClickHouse
The first project was migrating our Looker instance from BigQuery to ClickHouse as the underlying data warehouse. This meant refactoring all models, views, and explores to work with a fundamentally different database.
The most important technical discovery from this project was the aggregate awareness problem: Looker's optimization layer doesn't handle certain distinct aggregation types correctly with ClickHouse. I built a solution using ClickHouse native functions and documented it. Six months later, that solution would save me days on an unrelated project.
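The post doesn't show the workaround itself, but the shape of the problem is common: Looker's generated distinct-count SQL doesn't translate cleanly, while ClickHouse ships native aggregates that do the same job. A minimal sketch, assuming the fix looked something like swapping in those native functions (table and column names here are invented for illustration, not the production schema):

```sql
-- Hypothetical illustration of replacing a distinct-count measure that
-- Looker's aggregate awareness mishandles with ClickHouse-native aggregates.
SELECT
    company_id,
    uniqExact(candidate_id)    AS distinct_candidates_exact,   -- exact, more memory
    uniqCombined(candidate_id) AS distinct_candidates_approx   -- approximate, cheap
FROM hiring_events
GROUP BY company_id;
```

The trade-off between `uniqExact` and the approximate `uniq*` family is typical of ClickHouse: you choose explicitly between precision and resource cost rather than relying on the warehouse to decide for you.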
This was also where I started learning ClickHouse in earnest. BigQuery is a reliable Subaru: everything is tuned for you, it handles a lot, you just can't go too fast. ClickHouse is a Formula One car. When the configuration is right, it's breathtakingly fast. When one setting is wrong, you crash into a wall.
Project 2: Looker to Omni
The second project was replacing Looker with Omni as our embedded BI platform. I owned the full architecture: model structure, topic design, LookML-to-YAML migration, and coordination with platform engineering on the embedding.
This project pushed me into skills I hadn't needed before. Reviewing and contributing to TypeScript and Ruby code for the embedding layer. Learning enough about session token creation and user attribute passing to identify data-layer issues before they became production bugs. Working with feature flags to let both analytics environments coexist during the transition.
A year ago I didn't write TypeScript. I still wouldn't call myself a TypeScript developer. But I got close enough to the engineering work to keep the integration clean, and that made the difference between a migration that went smoothly and one that would have been a months-long debugging exercise.
Project 3: Real-time agent data layer
The third project was the most technically ambitious: building a real-time data model from scratch to support our LLM-powered hiring agent.
I've written about this in detail elsewhere, but the key insight was that the existing BI models were the wrong foundation. An agent data model needs to support unpredictable queries, complex filtering, and flexible aggregation at sub-second speed. The BI models were optimized for predefined dashboards. Starting fresh from source tables, with ClickPipes for streaming and ReplacingMergeTree for incremental updates, was the right call.
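The streaming-plus-incremental pattern described above can be sketched in ClickHouse DDL. This is a guess at the general shape, not the actual production model: the table, column, and version names are hypothetical, and the real schema surely carries more detail.

```sql
-- Hypothetical sketch: a table fed by a streaming pipeline (e.g. ClickPipes),
-- with ReplacingMergeTree collapsing repeated rows to the latest version.
CREATE TABLE candidate_events
(
    candidate_id UInt64,
    stage        LowCardinality(String),
    updated_at   DateTime64(3),
    _version     UInt64                 -- higher version wins on merge
)
ENGINE = ReplacingMergeTree(_version)
ORDER BY (candidate_id, stage);

-- Deduplication happens lazily at merge time; reads that need fully
-- collapsed rows can force it with FINAL:
SELECT candidate_id, stage, updated_at
FROM candidate_events FINAL
WHERE stage = 'offer';
```

The design choice worth noting is that `ReplacingMergeTree` makes upserts cheap for the writer and defers the cost of deduplication to background merges, which fits a streaming ingest path better than transactional updates would.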
The compounding effect from the earlier projects was real. The ClickHouse native function solution from project one solved the same aggregate awareness constraint when it appeared through Cube in project three. The cross-functional working relationships from project two meant I had established trust with the engineering team before I needed their help with the agent integration.
What compounding looks like
Looking back, the sequencing looks intentional. It wasn't. Each project was its own thing, driven by its own business need. But the skills and patterns accumulated.
Project one taught me ClickHouse. Project two taught me cross-functional collaboration and AI-assisted migration tooling. Project three required both, plus real-time architecture patterns I'd never built before.
The career arc is similar. Five years ago I was configuring Looker in my first data job. Before that, I was a process safety management consultant analyzing failure points in oil refineries. The thread between those jobs and what I'm doing now is the same thing: understand a complex system, identify where it breaks, and build something that holds up under pressure.

What I'd be honest about
I'm not going to pretend all of this went smoothly. A month after the agent data model shipped, I discovered an unexpected memory cost issue with ClickPipe inserts caused by expensive downstream joins. I still don't fully understand the internal mechanics of why the cost surfaced where it did. I had to audit every join, optimize what I could, and flag the rest for source table backfills.
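A plausible reproduction of that failure mode, for readers who haven't hit it: in ClickHouse, a materialized view attached to the insert path runs its query against every inserted block, and any join in that query pays its full cost, including memory for the right-hand side, on every insert. The names below are invented for the example; this is my reading of the symptom described, not the author's actual pipeline.

```sql
-- Hypothetical sketch of the expensive pattern: a materialized view that
-- enriches streamed rows via a join. The right-hand table (jobs) is loaded
-- into memory for each inserted block, so streaming inserts repeatedly pay
-- the join's memory cost.
CREATE MATERIALIZED VIEW enriched_events_mv TO enriched_events AS
SELECT
    e.candidate_id,
    e.stage,
    j.job_title
FROM raw_events AS e
LEFT JOIN jobs AS j ON e.job_id = j.job_id;   -- cost incurred per insert
```

Common mitigations include joining against a dictionary (`dictGet`) instead of a table, or moving the enrichment out of the insert path entirely and backfilling from source tables, which matches the remediation described above.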
My clearest growth area is still what I'd call the knowledge frontier problem: I don't always know what I don't know. ClickHouse is extraordinarily powerful, but its more advanced storage and materialization patterns have nuances that only reveal themselves in production. I'm still learning.
The other thing I'd be honest about is that keeping tickets current and maintaining project visibility while deep in technical work is something I'm still figuring out. When you're building the actual thing, the meta-work of communicating what you're doing and what's blocked can feel secondary. It isn't. I'm working on it.
If three major projects in a year taught me anything, it's that shipping is the beginning, not the end. The demo gets the applause. The month after the demo is where you find out what you actually built.