My approach to working with AI is pretty simple: I don't assume it knows everything, and I don't try to make it do everything, especially not in a single magic prompt. I use it as a thinking partner. I go back and forth. I ask it questions and I challenge what it tells me. And only after we've worked through something together a few times do I consider turning it into something more structured, like a skill or a saved workflow.
When I was evaluating table engine options for the real-time agent data model I built this year, the decision wasn't obvious. ClickHouse offers several storage engines, and the tradeoffs depend on your specific data patterns. For data that updates constantly, you could rebuild the table from scratch every hour, which is clean and predictable, or run incremental updates on an orchestrator schedule, but neither of those is real time. For genuine real time, you can stream inserts into something like ClickHouse's ReplacingMergeTree, which is cheaper when it works but whose behavior depends on update frequency, table size, and query patterns.
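To make that tradeoff concrete: ReplacingMergeTree deduplicates rows that share a sorting key, keeping the row with the highest version column, but only when background merges run, so queries can see stale duplicates in between. A minimal Python sketch of that merge-time behavior, with a hypothetical agent-status table (the column names are mine, not from the model):

```python
def replacing_merge(rows, key, version):
    """Simulate ReplacingMergeTree's merge-time dedup: for each
    sorting key, keep only the row with the highest version."""
    latest = {}
    for row in rows:
        k = row[key]
        if k not in latest or row[version] >= latest[k][version]:
            latest[k] = row
    return sorted(latest.values(), key=lambda r: r[key])

# Before a merge runs, both inserts for agent_id=1 coexist on disk;
# after the merge, only the newest survives.
rows = [
    {"agent_id": 1, "status": "idle", "updated_at": 100},
    {"agent_id": 1, "status": "busy", "updated_at": 200},
    {"agent_id": 2, "status": "idle", "updated_at": 150},
]
merged = replacing_merge(rows, key="agent_id", version="updated_at")
```

The catch the sketch makes visible: correctness depends on when the merge happens, which is exactly why update frequency and query patterns matter so much in the real decision.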
I worked through these tradeoffs with Claude and with my manager. Not by asking "which one should I use?" but by exploring: what happens at our scale if update frequency spikes? What are the memory implications of this join pattern? The AI didn't make the decision, but it was fast, patient, and able to model scenarios I'd have spent days building manually.
One practice I've started doing regularly is spinning up a separate agent to write validation scripts. I'll have the main agent I'm working with on the data model, and then I'll create another one to independently query ClickHouse and compare its answers against the model's output. It's a second opinion that runs in minutes instead of hours. And because the validation agent is writing queries from scratch (not looking at my model), it catches assumptions I've baked in without realizing.
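The comparison step itself is mechanical; the value comes from the second agent writing its queries independently. A sketch of the kind of diff report I have the validation agent produce — the function and metric names here are illustrative, not a real tool:

```python
def compare_metrics(model_output, validation_output, tolerance=0.01):
    """Compare two metric dicts (name -> value) and report any
    metric that is missing or disagrees beyond a relative tolerance."""
    mismatches = []
    for name, expected in model_output.items():
        actual = validation_output.get(name)
        if actual is None:
            mismatches.append((name, expected, None))
        elif abs(actual - expected) > tolerance * max(abs(expected), 1e-9):
            mismatches.append((name, expected, actual))
    return mismatches

# Hypothetical numbers: the model and the independent query agree on
# one metric and disagree on the other beyond the 1% tolerance.
model = {"active_agents": 1200, "avg_session_secs": 340.0}
independent = {"active_agents": 1200, "avg_session_secs": 298.5}
diffs = compare_metrics(model, independent)
```

Anything in the mismatch list becomes a question for the main agent: is the model wrong, or did the validation query make a different (possibly better) assumption?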
For the Looker-to-Omni migration, I used Cursor with a multi-root workspace to build AI-assisted conversion rules. This was before Claude released its Skills feature; Cursor Rules were a precursor that let me define repeatable patterns and workflows. That was a different kind of AI use: not thinking, but acceleration. The systematic work of translating LookML to Omni YAML is exactly the kind of high-volume, pattern-based task where AI saves enormous amounts of time. The judgment calls still need a human, but most of a BI migration is rote conversion.
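A toy version of one such conversion rule shows why the work is so mechanizable: a LookML dimension block maps field-by-field into a dict you could dump as YAML. The target schema below is illustrative only — it is not Omni's real format, and the `${TABLE}` rewrite is a made-up example of the kind of syntax mapping a rule encodes:

```python
import re

def convert_dimension(lookml: str) -> dict:
    """Convert a simple LookML dimension block into a YAML-ready
    dict (illustrative target schema, not Omni's actual one)."""
    name = re.search(r"dimension:\s*(\w+)", lookml).group(1)
    sql = re.search(r"sql:\s*(.+?)\s*;;", lookml, re.S).group(1)
    dtype = re.search(r"type:\s*(\w+)", lookml)
    return {
        name: {
            # Hypothetical syntax mapping a conversion rule might apply.
            "sql": sql.replace("${TABLE}", "${table}"),
            "type": dtype.group(1) if dtype else "string",
        }
    }

block = """
dimension: agent_status {
  type: string
  sql: ${TABLE}.status ;;
}
"""
converted = convert_dimension(block)
```

Real dimensions have more fields and edge cases, which is where the rules file earns its keep: each pattern gets written once, reviewed once, and applied hundreds of times.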
I helped edit Greg Storey's book Creative Intelligence, which describes five cognitive modes people move between when solving problems: exploring, generating, evaluating, strategizing, synthesizing. What I've noticed is that the way I work with AI maps to those modes pretty directly. Exploring ClickHouse tradeoffs is one kind of thinking. Spinning up a validation agent to challenge my assumptions is another. Accelerating a migration is another. The value isn't in any single mode. It's in switching between them fluidly, and AI makes those transitions faster.
Building agent skills has changed how I prompt AI in my own work. LLMs take language extremely literally. I learned that the hard way when "use this resource" produced a 50% failure rate and "you must load this resource" brought it to near zero. That's made me more specific about criteria when I ask Claude to evaluate something, and more explicit about which steps are non-negotiable.
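The only reason I trust a number like "50% failure rate" is that I measured it rather than eyeballed it. A sketch of the harness shape, where `run_skill` is a stand-in for whatever invokes the agent and checks its transcript for the required step — the deterministic fake below exists only so the sketch runs:

```python
def failure_rate(run_skill, prompt, trials=20):
    """Run the same prompt repeatedly and return the fraction of
    runs where the required step was skipped."""
    failures = sum(1 for _ in range(trials) if not run_skill(prompt))
    return failures / trials

# Stand-in for a real agent call; a real harness would invoke the
# agent and inspect whether the resource-load step actually ran.
def fake_run(prompt):
    return prompt.startswith("You must")

soft = failure_rate(fake_run, "Use this resource when relevant.")
strict = failure_rate(fake_run, "You must load this resource first.")
```

With a real agent behind `run_skill`, the two rates are noisy rather than 1.0 and 0.0, but comparing phrasings across a fixed trial count is exactly how the "use this" versus "you must load this" gap showed up.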
I learn these things by interacting with AI and participating in the decision process, not by trying to use big magical prompts to have it do everything for me.