data_storey

Cecily Storey

I'm Cecily Storey, a Senior Analytics Engineer at Fountain, a frontline hiring and workforce management platform serving enterprise customers.

My job is to design and build the data layers that power everything from embedded customer analytics to the real-time data infrastructure behind Fountain's AI agent product. In the past year I've delivered a data warehouse semantic model migration (BigQuery to ClickHouse), a full BI platform migration (Looker to Omni), and a new real-time data model built from scratch to support an LLM-powered agent network. All three are in production.

I work daily with ClickHouse, dbt, Cube, Python, SQL, YAML, Dagster, and Claude Code. I design data models, evaluate warehouse engine tradeoffs, write and validate data quality tests, and build the semantic context that lets an LLM know what a field actually means. I also build the skills and resources that teach agents how to do their jobs: internal agents that navigate and operate our systems, so that every time I build something the process gets faster and the agents I work with get smarter, and the external-facing agent network that queries what I've built, interprets the results, and presents them to customers. Building that layer has taught me more about how LLMs actually process instructions than any tutorial ever could.

I have a point of view about how AI agents should be built, and it's not the dominant one. I think the industry is over-engineering agent behavior: hardcoding recommendation patterns, boxing LLMs into rigid decision trees, and trying to remove the need for human thinking. I don't buy the idea of one enormous, perfect prompt that returns exactly the thing you need at that specific moment. That approach produces worse outcomes: it's using an LLM to get a head start in a race while aiming at a brick wall. The value of an LLM is its ability to surface things humans haven't seen. If you prescribe every response, you've built an expensive vending machine.

I believe in AI as a thinking partner: not a replacement for thought, but a collaborator that can help you find better questions. That's how I work, and how I think products should be built.

I think about how data models should be structured and what changes when the end user is an LLM instead of a human analyst. Modeling patterns like OBTs (one big table), star schemas, and flat models have been written about extensively, and there are already great resources on when to choose one over another. But the capabilities of an LLM analyst change some of those long-held practices. How much pre-aggregation you need, how much flexibility to leave in the model, how precise the code must be, what your documentation needs to accomplish: these are different questions when the consumer is an LLM, a human, or both. LLMs don't need the formatting that makes something easier on human eyes; for them, it's just fluff and bloat.
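As a concrete illustration of what documentation written for an LLM consumer can look like, here is a minimal, hypothetical dbt-style schema fragment. The model and column names are invented for this sketch, not taken from any real project; the point is that grain, units, and NULL semantics are stated explicitly, because an LLM will otherwise take a column name literally.

```yaml
# Hypothetical dbt-style schema file. Model and column names are
# invented for illustration only.
models:
  - name: applicant_stage_events
    description: >
      One row per applicant per stage transition. The grain is stated
      explicitly so an LLM analyst knows when it must deduplicate
      before counting applicants.
    columns:
      - name: stage_duration_seconds
        description: >
          Seconds spent in the stage, measured from entry to exit.
          NULL means the applicant is still in the stage, not zero.
      - name: is_terminal_stage
        description: >
          True only for hired or rejected. Other stages can recur,
          so "latest stage" is not the same as "final outcome".
```

Notice there is no prose styling, no tables, no bolding: just the semantics a model needs to write a correct query.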

I write about these themes here.