This isn’t that post.
## The First Version Was a Human Problem
A few weeks ago I wrote about centralizing AI agent configurations with the agent-config repository. The idea was solid: one Git repository to hold all your AI agent instructions, skills, commands, and subagents, with a `make symlinks` command to wire everything into place across Claude Code, GitHub Copilot, and OpenAI Codex.
The problem was baked into the premise. I designed it the way a human designer would: uniform rules, uniform structure, uniform format. One `agents/` directory. One `commands/` directory. One `make symlinks` call to rule them all.
That worked long enough to feel correct. Then I tried to add Gemini.
Gemini CLI doesn’t read Markdown slash commands. It reads TOML. My elegant uniformity immediately collided with reality: you can’t symlink a .md file into a tool that expects a .toml file and call it done. You’ve just broken Gemini while technically following your own architecture.
Here’s a concrete example of why. A shared command in Markdown looks like this:
```markdown
# commit

Analyze all uncommitted changes and create well-organized commits grouped
by related file paths...
```
That same command, correctly formatted for Gemini CLI, looks like this:
```toml
[[commands]]
name = "commit"
description = "Analyze all uncommitted changes and create well-organized commits grouped by related file paths..."
```
Same intent. Completely different representation. v1 never handled that distinction, because I assumed that if the concept was the same, the file could be too. That’s the wrong layer to standardize.
v1 was wrong because it treated representation as if it were intent.
## The v2 Architecture: Standardize Intent, Generate Format
The updated agent-config repository now has a three-stage build pipeline baked into the Makefile:
### Stage 1: Merge
Each agent’s final configuration is generated by merging the shared `GLOBAL.md` with agent-specific instruction files. The output goes into `agents/` as ready-to-deploy artifacts, not sources.
```shell
make generate-agents
```
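Conceptually, the merge stage is just concatenation with a defined precedence. Here is a minimal Python sketch, assuming a hypothetical layout with per-agent files under `instructions/` — the real repository’s file names and merge logic may differ:

```python
from pathlib import Path
import tempfile

# Hypothetical layout: shared GLOBAL.md plus per-agent files under
# instructions/ (the real repository's file names may differ).
root = Path(tempfile.mkdtemp())
(root / "GLOBAL.md").write_text("# Global rules\nAlways write tests.\n")
instructions = root / "instructions"
instructions.mkdir()
(instructions / "claude.md").write_text("# Claude specifics\nPrefer concise diffs.\n")

# Stage 1: concatenate the shared base with each agent-specific file
# into a ready-to-deploy artifact under agents/ (generated, not a source).
agents = root / "agents"
agents.mkdir()
for agent_file in sorted(instructions.glob("*.md")):
    merged = (root / "GLOBAL.md").read_text() + "\n" + agent_file.read_text()
    (agents / agent_file.name).write_text(merged)
```

The point of the shape is that `agents/` only ever holds generated output, so it can be deleted and rebuilt at any time.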
### Stage 2: Convert
Format conversion runs automatically for agents that need it. Gemini gets its commands converted from Markdown to TOML. Other agents get their formats left alone.
```shell
make convert-commands
```
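The conversion can be sketched in a few lines of Python, assuming the simple command shape shown earlier (an H1 with the command name, followed by the prompt body) and the TOML shape from the example above; the real converter likely handles more structure:

```python
# A minimal sketch of the Markdown-to-TOML conversion, assuming the
# simple command shape shown earlier: an H1 with the command name,
# followed by the prompt body. The real converter likely handles more.
def md_command_to_toml(md_text: str) -> str:
    lines = md_text.strip().splitlines()
    name = lines[0].lstrip("# ").strip()                  # "# commit" -> "commit"
    body = " ".join(l.strip() for l in lines[1:] if l.strip())
    escaped = body.replace('"', '\\"')                    # naive TOML escaping
    return f'[[commands]]\nname = "{name}"\ndescription = "{escaped}"\n'

md_source = (
    "# commit\n"
    "Analyze all uncommitted changes and create well-organized commits grouped\n"
    "by related file paths...\n"
)
toml_text = md_command_to_toml(md_source)
```

Same intent in, Gemini-readable representation out — which is the whole job of this stage.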
### Stage 3: Symlink
The correctly merged, correctly formatted files get symlinked into the locations each agent expects.
```shell
make symlinks
```
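A sketch of the symlink step in Python, with the idempotence and backup-before-overwrite behavior the Makefile provides. The target path (`~/.claude/CLAUDE.md`) is illustrative, standing in for whatever location each tool actually reads:

```python
from pathlib import Path
import shutil, tempfile

# Stage 3 sketch: link a generated artifact into the place an agent reads
# from. Idempotent, and backs up anything it would overwrite. The target
# path (~/.claude/CLAUDE.md) is illustrative.
def symlink_with_backup(source: Path, target: Path) -> None:
    target.parent.mkdir(parents=True, exist_ok=True)
    if target.is_symlink() or target.exists():
        if target.is_symlink() and target.resolve() == source.resolve():
            return                                      # already wired up: no-op
        shutil.move(str(target), str(target) + ".bak")  # back up before replacing
    target.symlink_to(source)

root = Path(tempfile.mkdtemp())
src = root / "agents" / "CLAUDE.md"
src.parent.mkdir()
src.write_text("merged config")

dst = root / "home" / ".claude" / "CLAUDE.md"
dst.parent.mkdir(parents=True)
dst.write_text("old hand-written config")    # pre-existing file gets backed up
symlink_with_backup(src, dst)                # backs up, then links
symlink_with_backup(src, dst)                # second run is a no-op
```

Because an already-correct link is a no-op, the stage can be re-run as often as you like without piling up backups.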
Or the entire pipeline in one shot:
```shell
make all
```
This is the same pattern we trust everywhere else in software: TypeScript compiles to JavaScript, Sass compiles to CSS, source templates render to environment-specific manifests. Config for AI agents isn’t a special case — it’s another build target problem. The insight I missed in v1 was that “one source of truth” and “different output formats” are not in conflict. A build system handles that every day.
## If You Used v1, Here’s the Upgrade
Pull the latest and run:
```shell
git pull
make all
```
`make all` regenerates merged agent files, runs format conversion, and recreates all symlinks. It’s idempotent: safe to run repeatedly, and it backs up any existing files before overwriting them.
## Why the Failure Was Worth More Than Getting It Right
The deeper lesson here isn’t architectural. It’s about how to use AI to work with failure, rather than around it.
My instinct in v1 was to normalize everything — find the common ground between Claude, Codex, Copilot, and Gemini and build to that. That instinct comes from years of writing software where consistency is a virtue. But AI agents are built by different teams, at different companies, with different philosophies, and their interfaces reflect that. Forcing uniformity onto them doesn’t make them consistent. It just makes some of them wrong.
The fix came out of a single conversation with AI, not a week of redesign. I described the problem — “Gemini needs TOML, Claude needs Markdown, here’s the current structure” — and worked through the solution collaboratively. The implementation was cleaner than what I would have designed solo, because I wasn’t anchored to my first instinct.
That’s the pattern worth repeating: use AI not just to write code, but to pressure-test your assumptions before they calcify.
In a traditional project cycle, a bad architectural assumption might cost you a sprint to unwind. With AI in the loop, the same rethink can cost you an afternoon. That compression changes the calculus on when to validate an idea versus when to just build it and see. The cost of being wrong drops enough that you can afford to be wrong more often, faster, and with less dread.
That’s not a consolation. That’s the mechanism.
When was the last time a failed first attempt taught you something that success would have hidden — and how much faster did AI help you get from failure to fix?
