<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agent-Config on Corey Daley</title><link>https://coreydaley.dev/tags/agent-config/</link><description>Recent content in Agent-Config on Corey Daley</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 11 Apr 2026 18:05:00 -0400</lastBuildDate><atom:link href="https://coreydaley.dev/tags/agent-config/rss.xml" rel="self" type="application/rss+xml"/><item><title>Before the First Commit: What Multi-Agent Sprint Planning Actually Catches</title><link>https://coreydaley.dev/posts/2026/04/before-first-commit-what-multi-agent-sprint-planning-catches/</link><pubDate>Sat, 11 Apr 2026 18:05:00 -0400</pubDate><guid>https://coreydaley.dev/posts/2026/04/before-first-commit-what-multi-agent-sprint-planning-catches/</guid><description>&lt;p&gt;What does a multi-agent sprint planning workflow actually produce? Not just a cleaner document, but real bugs caught in a plan before implementation begins. When the /sprint-plan command ran against a &amp;ldquo;simple&amp;rdquo; Go REST API project, the security review phase returned three critical findings: a logical contradiction that made the stated auth behavior impossible, a schema constraint that would silently break token revocation, and a SQLite pragma applied to only one connection in a pool.&lt;/p&gt;
&lt;p&gt;The post walks through the entire planning session for org-api — from seed prompt to approved sprint document — showing what each phase of the review pipeline produced and what changed as a result. The security findings came from reading the plan carefully, not from running any code. That&amp;rsquo;s the point.&lt;/p&gt;
&lt;p&gt;What step in your planning process is explicitly there to prove the plan wrong before implementation begins?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/04/before-first-commit-what-multi-agent-sprint-planning-catches/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/04/before-first-commit-what-multi-agent-sprint-planning-catches/&lt;/a&gt;
&lt;/p&gt;</description></item><item><title>From Config Hub to Competing Voices: How agent-config Became My AI Collaboration Stack</title><link>https://coreydaley.dev/posts/2026/04/agent-config-from-sharing-to-competing-voices/</link><pubDate>Sat, 11 Apr 2026 14:50:00 -0400</pubDate><guid>https://coreydaley.dev/posts/2026/04/agent-config-from-sharing-to-competing-voices/</guid><description>&lt;p&gt;I started agent-config as a shared configuration hub: one repository to rule Claude, Codex, Copilot, and Gemini. That lasted about two iterations before the cracks showed. Forcing every AI agent to share the same configuration format was the wrong abstraction — different tools, different philosophies, different file formats. The solution wasn&amp;rsquo;t more uniformity. It was a different model of collaboration entirely.&lt;/p&gt;
&lt;p&gt;Today agent-config is Claude-specific, but Codex is still central to how I work. The difference: Codex is no longer a configuration &lt;em&gt;target&lt;/em&gt;. It&amp;rsquo;s a competitive &lt;em&gt;collaborator&lt;/em&gt;. Sprint plans, blog posts, security audits — every significant output runs through a workflow where Claude and Codex produce independent drafts, critique each other&amp;rsquo;s work, and force synthesis from the tension. Two AI voices with different instincts produce better output than either would alone — just like a team of people with different backgrounds does.&lt;/p&gt;
&lt;p&gt;Is your multi-agent workflow built for sharing configuration, or for generating the productive disagreement that makes output actually better?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/04/agent-config-from-sharing-to-competing-voices/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/04/agent-config-from-sharing-to-competing-voices/&lt;/a&gt;
&lt;/p&gt;</description></item><item><title>When Your First Version Fails: Iterating on agent-config with AI</title><link>https://coreydaley.dev/posts/2026/03/agent-config-v2-failing-forward-with-ai/</link><pubDate>Sat, 07 Mar 2026 16:00:00 -0500</pubDate><guid>https://coreydaley.dev/posts/2026/03/agent-config-v2-failing-forward-with-ai/</guid><description>&lt;p&gt;I built agent-config v1 to centralize AI agent configurations across Claude, Codex, Copilot, and Gemini — and it failed. Not dramatically, but fundamentally: I tried to force every agent to follow the same rules in the same format, because that&amp;rsquo;s what my human instincts said made sense. The problem is those agents have completely different requirements. Gemini needs TOML. The others use Markdown. You can&amp;rsquo;t just symlink your way to consistency.&lt;/p&gt;
&lt;p&gt;v2 fixes this by letting AI handle the translation — automated merging of global and per-agent configs, format conversion per tool, and intelligent symlink setup.&lt;/p&gt;
&lt;p&gt;The real lesson isn&amp;rsquo;t about config management, though. It&amp;rsquo;s about failing fast, iterating faster with AI than you ever could alone, and trusting the tools to solve problems your instincts would have you paper over. Are you letting your human instincts slow down your AI iteration cycles?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/03/agent-config-v2-failing-forward-with-ai/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/03/agent-config-v2-failing-forward-with-ai/&lt;/a&gt;
&lt;/p&gt;</description></item></channel></rss>