When we started building our agent product, the architecture was heading toward a network of many specialized agents: a data agent, various operational agents, sub-process agents beneath those, and a master orchestrator passing work back and forth. The lead team saw challenges ahead.

The handoff problem

A customer asks a question that requires data analysis followed by an operational change. The master agent routes to the data agent. The data agent analyzes, summarizes its findings, and passes the summary to an operational agent. The operational agent tries to act on the summary. Then the customer asks a follow-up, and we're back to the data agent again.

It's a game of telephone. The operational agent isn't working from native research. It's working from another agent's summary. The experience was slow and choppy, and context got lost at every handoff. So we made the call: fewer agents, more skills.

What skills actually are

A skill is a set of instructions that an agent loads on demand rather than carrying in its base prompt all the time. If the agent needs to do something every single time, that goes in the prompt. If it depends on what the customer is asking, that's a skill.
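The on-demand loading described above can be sketched in a few lines. This is a minimal illustration, not our actual implementation; the skill names, trigger keywords, and instruction strings are all hypothetical.

```python
# Sketch of on-demand skill loading (hypothetical names throughout).
# The base prompt stays small; a skill's instructions are appended only
# when the customer's request actually calls for that skill.

BASE_PROMPT = "You are a support agent. Answer concisely."

SKILLS = {
    # skill name -> (trigger keywords, instructions loaded on demand)
    "data_analysis": (
        {"metric", "report", "trend"},
        "When analyzing data: pick the right tool and cite the query used.",
    ),
    "account_ops": (
        {"cancel", "upgrade", "billing"},
        "For account changes: confirm identity, then apply the change.",
    ),
}

def build_prompt(user_message: str) -> str:
    """Assemble the prompt: base instructions plus only the matching skills."""
    parts = [BASE_PROMPT]
    words = set(user_message.lower().split())
    for name, (triggers, instructions) in SKILLS.items():
        if words & triggers:  # load the skill only if the request needs it
            parts.append(f"Skill: {name}\n{instructions}")
    return "\n\n".join(parts)
```

The point of the shape is the asymmetry: the base prompt is always present, while everything keyed on the request stays out of the context window until it's needed.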

The difference matters because prompt size directly affects performance. The longer the prompt, the harder it is for the LLM to pay attention to all of it. We'd seen this repeatedly: when prompts got too large, the agent would start ignoring instructions, skipping steps, and improvising in ways that looked helpful but produced inconsistent results.

Converting the data agent to a data skill

I was asked to convert what we'd built for the standalone data agent into a set of skills and resources. The logic was straightforward: customers are going to ask for data at some point during a conversation, but not every question is about data. Loading the full data context every time wastes capacity.

The first data skill I built was universal: it tells the agent which data tools are available, when to use them, and how to use them. It references resources (library files, essentially) that load conditionally: one type of analysis loads one set of resources; a different analysis loads a different set. The skill also includes a decision framework for data display: once you have results, there's a hierarchy for how to present them, whether a table, a chart, or something else. If you choose a chart, you must load additional resources, because the charting tools require the request body in a very specific format. That word, "must," turned out to matter more than I expected (see the Prompt Bloat Trap article).

The result was a lean, portable skill. No handoff, no context loss from summarizing results across agents.