Agent-Native Systems and Skill Graphs

Most AI setups are prompt libraries with better formatting. Here's what a system looks like when it's built for how agents actually operate. Modular skills, structured context, and a graph that compounds.

ai · architecture · baseline-core

Heinrich (@arscontexta) posted an article on skill graphs that's making the rounds today, and it stopped me mid-scroll. Not because the concept was new to me, but because the architecture he's describing is the architecture I've been building for the past two months.

He's right. A single skill file can't hold a domain. Composable pieces that reference each other, progressive disclosure through metadata, agents that traverse a graph instead of reading one massive file. That's the pattern. I know because I shipped a system built on it.

Here's what I learned along the way.

The ceiling everyone hits

When I started building AI workflows in 2022, I hit the same wall Heinrich is talking about. One file, one capability. That works for simple tasks. Write a summary. Review some code. Generate a template.

But product work isn't simple. Research informs strategy. Strategy shapes what you design. Design decisions show up in specs. Specs drive sprint planning. Everything connects.

So I'd cram more into the skill file. Business context. User personas. Competitive landscape. Brand voice. The file would grow until it hit context limits or became unmanageable. One massive file trying to hold methodology, business knowledge, and reusable frameworks all at once.

That's the ceiling. The instinct is to write a better file. Tighter instructions, more examples, clearer structure. That helps, but it doesn't solve the fundamental problem: one file is doing too many jobs.

Heinrich calls the solution a skill graph. I call it an agent-native system. We're describing the same architecture from different angles.

Three layers, not one file

The answer I landed on has three layers.

The first is skills. A skill is a complete methodology for one domain. Not a prompt. A methodology. Research synthesis doesn't just say "do research." It defines how to plan a study, write a discussion guide, run interviews, synthesize findings, and validate insights. It teaches the agent how a senior product person actually does the work. Twelve of these, covering the full product lifecycle.

The second is context. Your business-specific knowledge, separate from any skill. Identity, voice, user personas, product details, competitive landscape, pricing, visual identity. Context doesn't tell the agent what to do. It tells the agent who you are. When a skill loads your context, the output stops being generic and starts sounding like your team wrote it.

The third is frameworks. Reusable methodologies that aren't owned by any single skill. Prioritization, decision-making, messaging, stakeholder communication, research methods. The prioritization framework gets used by strategic advisory and project management. The messaging framework gets used by product marketing and go-to-market. Shared patterns, loaded where they're needed.

Three layers, each modular, each composable. Skills provide methodology. Context makes it personal. Frameworks give structure. The pieces connect, but they're maintained independently.
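On disk, the three layers can be pictured as a plain directory tree. The file names here are hypothetical stand-ins, not the actual Baseline Core layout:

```
system/
├── skills/          # methodology: one complete domain per file
│   └── research-synthesis.md
├── context/         # who you are: identity, voice, personas, market
│   └── user-personas.md
└── frameworks/      # shared patterns used by multiple skills
    └── prioritization.md
```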

Heinrich talks about nodes in a graph where each file is one complete thought. That's exactly what these are. A skill is a node. A context file is a node. A framework is a node. The edges between them are what make the system work.

The canonical instruction file

This is the piece that doesn't come up enough in the skill graph conversation.

Modular pieces don't help if the agent doesn't know how to find them. This is the problem I see in most setups: files exist, but nothing tells the agent which ones matter for the current task.

The solution is a single instruction file that acts as the routing layer. One file the agent reads first, every time. It maps tasks to skills, tells the agent where to find manifests, defines the execution protocol, and sets the rules for how context loads.

You describe a task. The instruction file routes it. The agent knows exactly where to go.
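A routing section in such an instruction file can be as plain as a task-to-skill table. The entries below are hypothetical, just to show the shape:

```markdown
## Routing

| When the task involves…       | Load this skill    | Manifest                                |
| ----------------------------- | ------------------ | --------------------------------------- |
| user interviews, synthesis    | research-synthesis | skills/research-synthesis/manifest.yaml |
| positioning, launch messaging | product-marketing  | skills/product-marketing/manifest.yaml  |
```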

Heinrich describes an index file that "points attention" and helps the agent understand the landscape. That's the same idea. The difference is that a canonical instruction file goes further. It doesn't just describe what exists. It defines how the agent should behave, what sequence to follow, and what rules to enforce. It's the operating system for the skill graph.

This is the piece that makes the system agent-native. The agent isn't guessing what to load. It isn't scanning a folder hoping to find the right file. It reads one canonical file that tells it how the entire system works, and then it follows the path.

No human orchestration required.

The manifest pattern

Heinrich talks about YAML frontmatter that lets agents scan without reading full files. That's one implementation. Here's another.

Every skill has a manifest.yaml. A dedicated file that tells the agent what to load before doing any work.

A manifest lists three things:

always_load: The skill file and its core frameworks. These load every time the skill runs.

context: Business-specific files. Identity and voice load into every skill. Then each skill declares what extended context it needs. Research synthesis loads user personas and competitive landscape. UX design loads users, visual identity, and technical constraints. Each skill pulls in exactly the context that matters for its domain.

references: Detailed reference materials. Interview guides, design patterns, document templates, experimentation methods. These load when the task needs depth.
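Put together, a manifest in this pattern might look something like the following sketch. The file names are illustrative, not the actual Baseline Core manifests:

```yaml
# manifest.yaml (illustrative example, not a Baseline Core file)
always_load:
  - skills/research-synthesis.md
  - frameworks/research-methods.md
context:
  - context/identity.md
  - context/voice.md
  - context/user-personas.md
  - context/competitive-landscape.md
references:
  - references/interview-guides.md
  - references/synthesis-templates.md
```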

The agent reads the manifest first, before reading any content. It knows what exists, what matters, and what to skip.

This is what makes the system a graph, not a folder. Every skill node connects to context nodes, framework nodes, and reference nodes through its manifest. The manifests define the edges. The agent traverses them. The same progressive disclosure pattern Heinrich describes, implemented through dedicated manifest files rather than frontmatter.

Why context is a separate layer

This is the insight that changed how I built everything.

Most people building AI workflows focus on the skill. The methodology, the instructions, the system prompt. That matters. But it's half the equation.

A research synthesis skill without your user personas produces generic findings. A product marketing skill without your competitive landscape produces generic positioning. A UX design skill without your visual identity produces generic wireframes.

Good instructions with no context produce polished generic output. Average instructions with rich context produce specific, useful output. Good instructions with rich context produce output you actually ship.

That's why context is a separate layer. You write it once. Every skill benefits. When you improve your identity file, every skill that loads it gets better. When you add competitive context, strategy and marketing both level up. The investment compounds across the whole system.

Heinrich's article focuses on the graph structure. The linking patterns, the traversal, the topology. He's right about all of it. But the content of the nodes matters just as much as the connections between them. A beautifully connected graph of generic knowledge still produces generic output. Context is what makes the graph yours.

What agent-native actually means

There's a difference between a system that works with AI and a system designed for how agents actually operate.

Most AI workflows are human workflows with AI bolted on. You still decide what to load. You still paste in context. You still manage the orchestration. The AI does the writing, but you do the thinking about what it needs to do the writing.

An agent-native system flips that. The agent reads the instruction file. It identifies the right skill. It reads the manifest. It loads the context, the frameworks, the references. It plans the approach, asks clarifying questions, executes the work, validates the output.

The human says what they need. The system handles how to get there.

This only works if the architecture was designed for it. Modular files the agent can load independently. Manifests that declare dependencies explicitly. A canonical instruction file that defines the routing. Clear separation between methodology, knowledge, and patterns so the agent can compose them dynamically.
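The loop the agent runs can be sketched in a few lines of Python. Everything here (the routing table, the manifest contents, the file names) is a hypothetical stand-in for illustration, not the Baseline Core implementation:

```python
# Sketch of the agent-native loading sequence: route a task via the
# instruction file, then load what the chosen skill's manifest declares.
# All names below are illustrative.

ROUTES = {
    # canonical instruction file, reduced to a keyword -> skill map
    "research": "research-synthesis",
    "positioning": "product-marketing",
}

MANIFESTS = {
    # each skill's manifest.yaml, already parsed into a dict
    "research-synthesis": {
        "always_load": ["skills/research-synthesis.md",
                        "frameworks/research-methods.md"],
        "context": ["context/identity.md", "context/user-personas.md"],
        "references": ["references/interview-guides.md"],
    },
    "product-marketing": {
        "always_load": ["skills/product-marketing.md",
                        "frameworks/messaging.md"],
        "context": ["context/identity.md",
                    "context/competitive-landscape.md"],
        "references": ["references/launch-templates.md"],
    },
}

def plan_loads(task: str) -> list[str]:
    """Route a task to a skill, then list every file its manifest declares."""
    skill = next(s for kw, s in ROUTES.items() if kw in task.lower())
    manifest = MANIFESTS[skill]
    return (manifest["always_load"]
            + manifest["context"]
            + manifest["references"])

print(plan_loads("synthesize the research interviews from last week"))
```

The point of the sketch is the ordering: the routing decision and the load list come from files the agent reads, not from a human deciding what to paste in.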

That's what I mean by agent-native. Not "works with AI tools." Built for how agents think.

Building for agents, not for yourself

The biggest lesson from building this: stop thinking about what you need the AI to know. Start thinking about what the agent needs to navigate.

Agents don't read the way humans do. They don't benefit from long, comprehensive documents. They benefit from small, composable pieces with clear metadata that tells them what's relevant and what to skip.

A manifest is more useful than a longer skill file. A separate context file is more useful than context pasted into a prompt. An instruction file that defines routing is more useful than hoping the agent figures out which skill to use.

The primitives are simple. Markdown files. YAML manifests. One instruction file. No special tooling, no plugins, no platform dependencies. Just files organized in a way that agents can traverse.

The architecture is what matters. Get the architecture right and the system works with any AI tool. The same file structure works with Claude Code, Cursor, Windsurf, Codex, GitHub Copilot, JetBrains AI. The agent reads the instruction file and knows what to do.

Agent-native means the system is portable. It's not locked to a platform. It's locked to a pattern.

What I shipped

This architecture is the foundation of Baseline Core: 12 skills, 14 frameworks, 34 reference files, a full context scaffolding, and a canonical instruction file that ties it all together.

Heinrich's article convinced me to write this because the timing is right. People are starting to see that single-file skills aren't enough. Skill graphs, agent-native systems, whatever you call it, this is the direction AI workflows are heading.

I've been building this for two months. The system is open source, MIT licensed, and free.

```
$ npx @baseline-studio/cli init
```

If you want the context layer built out for your specific business, that's what I do at Baseline Studio. But the architecture is the point. The pattern of skills, context, frameworks, manifests, and a canonical instruction file. That pattern works for product work, for engineering, for legal, for sales, for anything complex enough that a single skill file can't hold it.

Agent-native systems aren't a feature. They're the foundation.