I see two camps forming in how companies think about building AI products.
Camp One believes success means engineering the agent so thoroughly that no human has to intervene. Every recommendation is prescribed. Every workflow is predetermined. The goal is to remove thinking from the equation and, ideally, to remove the need for a precise question too. A user asks something broad and non-specific, and the agent initiates a detailed prescribed workflow. But is that really what the user wanted or intended? And more importantly, is it what the user needed in order to make an informed decision?
I remember when this instinct showed up in consumer AI use too. There was a period when people were sharing enormous single prompts like they'd found a silver bullet: one perfect, all-encompassing instruction set that would return exactly what you needed for a specific task, no follow-up required. People would ask me excitedly, "Did you get it to work with just one prompt?" and "Did you see my <insert name> prompt?" My honest answer, the polite version, was: "Oh, how did that work out for you?" What I actually found successful looked nothing like that.
The Camp One instinct makes some sense. Users are still learning how to interact with AI. The public doesn't suddenly know how to get the most out of a feature just because a company added one. And if you can't count on users to ask good questions, maybe you engineer around that gap. It feels responsible. But there are two problems with that logic. The first is that it doesn't actually work. A prescribed workflow triggered by a vague question isn't giving users what they need; it's giving them what the builders predicted they'd need. The second is cost. Every broad query that fires off a detailed agent workflow, every unnecessary skill load, every token burned processing a bloated prescribed prompt: all of it adds up fast at scale. The company pays for it. In some pricing models, the user does too. You've built something expensive that delivers mediocre results, and you've billed everyone for the privilege.
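The cost point is easy to make concrete with back-of-the-envelope arithmetic. A minimal sketch, where the per-token price, query volume, and token counts are all assumptions chosen for illustration, not figures from any real product:

```python
# Back-of-the-envelope monthly cost comparison. Every number here is an
# assumption for illustration: a hypothetical $3 per million input tokens,
# a prescribed workflow that loads ~6,000 tokens of skills and prompt
# scaffolding per query, versus a lean ~800-token prompt.
PRICE_PER_MTOK = 3.00      # assumed input price, USD per million tokens
QUERIES_PER_DAY = 50_000   # assumed query volume at scale

def monthly_cost(tokens_per_query: int) -> float:
    """Rough monthly input-token cost in USD, over a 30-day month."""
    tokens = tokens_per_query * QUERIES_PER_DAY * 30
    return tokens / 1_000_000 * PRICE_PER_MTOK

prescribed = monthly_cost(6_000)  # bloated prescribed workflow
lean = monthly_cost(800)          # focused, user-driven prompt
print(f"prescribed: ${prescribed:,.0f}/mo, lean: ${lean:,.0f}/mo")
# prints "prescribed: $27,000/mo, lean: $3,600/mo"
```

The exact numbers don't matter; the ratio does. Anything multiplied into every query, like a prescribed prompt's scaffolding, scales linearly with volume, and the difference compounds into a real line item.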
Camp Two believes the human-AI collaboration is the product. The AI augments human thinking, helping us see new patterns, find new ways of creating, and learn new skills. Collaboration requires users to ask better questions. Users have to interact and engage in discovery. Exploring possibilities with AI is how you uncover patterns and solutions you hadn't previously thought of. The AI gives users capabilities they didn't have before, but it doesn't do all the work for them.
I'm in Camp Two. I got there by working both sides of the problem. I use AI tools every day in my own work, and I also build the skills and resources that a production LLM agent uses to do data work for customers. I see the collaboration from both ends, and the pattern is the same on both sides. The best outcomes happen when the human does work too.
When I use Claude as a thinking partner, I don't ask "which ClickHouse storage engine should I use?" I explore: what happens at our scale if update frequency spikes? What are the memory implications of this join pattern? I challenge what it tells me. I go back and forth. The AI didn't make the architecture decisions for the agent data model I built, but it was an effective sparring partner for testing my assumptions and helping me discover new solutions. Good problem-solving requires moving between different kinds of thinking: exploring possibilities, then evaluating them critically, then making a strategic call. That's true whether the tool is a whiteboard or an LLM.
On the agent side, I see what happens when teams try to skip the thinking part. They prescribe every outcome instead of teaching customers how to engage with the tool. So instead of helping people ask better questions, teams over-engineer the agent until it can only return what its builders already thought to put in. It looks impressive in a demo, but in daily use, it's less flexible than the spreadsheet it replaced.
The user has to do work too. That's not a limitation of the technology, it's the whole point.