<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Electron on Corey Daley</title><link>https://coreydaley.dev/tags/electron/</link><description>Recent content in Electron on Corey Daley</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Mon, 23 Mar 2026 19:55:00 -0400</lastBuildDate><atom:link href="https://coreydaley.dev/tags/electron/rss.xml" rel="self" type="application/rss+xml"/><item><title>Polyphon at v0.8.0: The End of the Prototype Phase</title><link>https://coreydaley.dev/posts/2026/03/polyphon-from-alpha-to-v0-8/</link><pubDate>Mon, 23 Mar 2026 19:55:00 -0400</pubDate><guid>https://coreydaley.dev/posts/2026/03/polyphon-from-alpha-to-v0-8/</guid><description>&lt;p&gt;When I shipped Polyphon v0.1.0-alpha.2, the pitch was simple: put multiple AI voices in one conversation and let them respond to each other. That was useful. But early usefulness and long-term trust are not the same thing.&lt;/p&gt;
&lt;p&gt;v0.8.0 is the release where Polyphon crosses that line. The features that made the difference weren&amp;rsquo;t the ones I planned at launch. Voices can now interact with real files, governed by per-voice sandboxing and explicit permission categories. Conversation history is encrypted at rest with SQLCipher&amp;rsquo;s whole-database AES-256 encryption, with optional password protection. FTS5 full-text search turns the archive into working memory you can actually retrieve from. These aren&amp;rsquo;t incremental improvements; they&amp;rsquo;re the features that decide whether a tool stays an interesting experiment or earns a place near real projects.&lt;/p&gt;
&lt;p&gt;What actually makes you trust an AI tool with real work: capability, privacy, or memory?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a href="https://coreydaley.dev/posts/2026/03/polyphon-from-alpha-to-v0-8/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/03/polyphon-from-alpha-to-v0-8/&lt;/a&gt;&lt;/p&gt;</description></item><item><title>I Built a Tool So AI Models Could Talk to Each Other</title><link>https://coreydaley.dev/posts/2026/03/launching-polyphon-orchestrating-multiple-ai-voices/</link><pubDate>Mon, 16 Mar 2026 12:30:00 -0400</pubDate><guid>https://coreydaley.dev/posts/2026/03/launching-polyphon-orchestrating-multiple-ai-voices/</guid><description>&lt;p&gt;Every AI power user I know runs the same manual workaround: ask Claude, ask GPT, copy the interesting parts of each into the other, then try to synthesize what you learned. The models are good. The coordination is not.&lt;/p&gt;
&lt;p&gt;I just shipped Polyphon v0.1.0-alpha.2 — a free, local-first desktop app that puts multiple AI voices in the same conversation so they can actually respond to each other. You&amp;rsquo;re the conductor. They&amp;rsquo;re the ensemble. Save a group of voices as a composition and reuse it whenever you need that ensemble again.&lt;/p&gt;
&lt;p&gt;What should a multi-agent conversation feel like when you&amp;rsquo;re not building a pipeline — when you just want to think out loud with several models at once?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a href="https://coreydaley.dev/posts/2026/03/launching-polyphon-orchestrating-multiple-ai-voices/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/03/launching-polyphon-orchestrating-multiple-ai-voices/&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>