I spent the first part of my career making sure refineries didn't explode. Now I build real-time data infrastructure for AI agents. The through-line is more direct than it sounds.

In chemicals and refining, my job was moving, processing, and transforming physical material safely. A pump is designed for liquid, not gas; push the wrong phase through it and it seizes. Equipment has temperature and pressure limits; feed it material outside those parameters and the system breaks. Data infrastructure is the same problem in a different medium: we're moving, transforming, and storing digital information, and when a process or target is designed for one data type or structure and the wrong one arrives, the system breaks in exactly the same way.
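The analogy can be made concrete with a toy ingestion step. All of the names and the schema below are hypothetical, not from any real pipeline; the point is just that a loader "designed for" one shape of record seizes on the wrong one, the way a pump designed for liquid seizes on gas:

```python
# Toy illustration: a loader built for one shape of record. Feed it a
# record of the wrong "phase" (wrong field types) and it refuses to run.
# Hypothetical schema and field names, for illustration only.

EXPECTED_SCHEMA = {"event_id": int, "occurred_at": str, "amount_cents": int}

def validate(record: dict) -> dict:
    """Reject any record whose fields don't match the expected types."""
    for field, expected_type in EXPECTED_SCHEMA.items():
        if not isinstance(record.get(field), expected_type):
            raise TypeError(
                f"{field!r} expected {expected_type.__name__}, "
                f"got {type(record.get(field)).__name__}"
            )
    return record

# A well-typed record passes through untouched.
validate({"event_id": 1, "occurred_at": "2024-01-01", "amount_cents": 499})

# A string where an int is expected raises TypeError, the data
# equivalent of pushing gas through a liquid pump.
```

Failing loudly at the boundary, rather than somewhere deep in a downstream join, is the data version of respecting the equipment's design limits.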

The refinery work was about understanding complex, interconnected systems, asking the right questions before something fails, and building frameworks that prevent problems you haven't seen yet. I liked the data and modeling parts of that work more than anything else, and eventually followed that thread into analytics. In four years I went from a Data Analyst building Looker dashboards to a Senior Analytics Engineer owning the real-time data architecture behind a production LLM agent. My path wasn't planned, but my skill set transferred more directly than you'd expect: understand a complex system, find the failure points, build something that holds up under pressure.

This past year I delivered three major projects in sequence, each building on the one before it: a warehouse migration from BigQuery to ClickHouse, a full BI platform migration from Looker to Omni, and an entirely new real-time data layer built from scratch for an AI agent. ClickHouse is the most technically demanding system I've worked with in data. It rewards correct configuration enormously and penalizes incorrect configuration just as much. That's a familiar profile; complex industrial systems behave the same way.

Process safety has a concept called inherent safety: the best protection isn't recovery from failure, it's designing so the failure is less likely to occur. A month after the agent data model shipped, I found an unexpected memory cost issue I hadn't designed against, because I didn't know to look for it. That's not a ClickHouse problem. That's a depth problem. Building inherent safety into data systems means knowing the machine well enough to design around its edges before you hit them. I'm still building that depth.
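Translated to data systems, inherent safety amounts to a design choice: patch up bad input after it breaks something downstream, or make the invalid state unrepresentable at the boundary so the failure can't occur. A minimal sketch of the contrast, with hypothetical names not drawn from any production system:

```python
# Two ways of handling a malformed record. Hypothetical example:
# "recovery" patches over the failure; "inherent safety" designs it out.
from dataclasses import dataclass

# Recovery approach: accept anything, quietly clean up when it breaks.
# The bad data still entered the system; you just absorbed the damage.
def recover_after_failure(raw: dict) -> int:
    try:
        return int(raw["amount_cents"])
    except (KeyError, ValueError, TypeError):
        return 0  # silent, lossy patch-up

# Inherently safer approach: a type that cannot hold an invalid value,
# so only well-formed records ever exist past the ingestion boundary.
@dataclass(frozen=True)
class Payment:
    amount_cents: int

    def __post_init__(self):
        if not isinstance(self.amount_cents, int) or self.amount_cents < 0:
            raise ValueError(f"invalid amount_cents: {self.amount_cents!r}")

def ingest(raw: dict) -> Payment:
    return Payment(amount_cents=raw["amount_cents"])  # fails loudly, at the edge
```

The second version is the software analogue of choosing a design in which the hazardous state can't arise, rather than bolting on protection after the fact.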

What that background gave me more than anything is the ability to see the system as a whole: not just the parts, but how they fit together and how they could be configured differently. Risk analysis trains you to think beyond the intended use case: what else could break here, what are the edge cases, what other configurations might serve a wider range of needs. I bring that instinct to data modeling and agent infrastructure. It's less of a career pivot than it looks.