<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ethics on Corey Daley</title><link>https://coreydaley.dev/tags/ethics/</link><description>Recent content in Ethics on Corey Daley</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 15 Feb 2026 17:46:25 -0500</lastBuildDate><atom:link href="https://coreydaley.dev/tags/ethics/rss.xml" rel="self" type="application/rss+xml"/><item><title>Beyond LeetCode: How AI is Transforming Technical Interviews</title><link>https://coreydaley.dev/posts/2026/02/beyond-leetcode-how-ai-is-transforming-technical-interviews/</link><pubDate>Sun, 15 Feb 2026 17:46:25 -0500</pubDate><guid>https://coreydaley.dev/posts/2026/02/beyond-leetcode-how-ai-is-transforming-technical-interviews/</guid><description>&lt;p&gt;The coding interview landscape is shifting dramatically as AI tools become standard in software development. Rather than memorizing algorithms, candidates may soon demonstrate their ability to work effectively with AI agents—breaking down projects, creating tasks in Jira, and collaborating to solve real-world problems.&lt;/p&gt;
&lt;p&gt;This shift raises important questions about what skills truly matter and how we evaluate engineering talent in an AI-augmented world.&lt;/p&gt;
&lt;p&gt;What does this mean for the future of technical hiring?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/02/beyond-leetcode-how-ai-is-transforming-technical-interviews/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/02/beyond-leetcode-how-ai-is-transforming-technical-interviews/&lt;/a&gt;
&lt;/p&gt;</description></item><item><title>The AI Divide: When Innovation Amplifies Inequality</title><link>https://coreydaley.dev/posts/2026/02/ai-ethics-resource-gap/</link><pubDate>Fri, 13 Feb 2026 15:45:00 -0500</pubDate><guid>https://coreydaley.dev/posts/2026/02/ai-ethics-resource-gap/</guid><description>&lt;p&gt;We&amp;rsquo;re witnessing something unprecedented: AI tools that can generate content, build applications, and automate creative work at scales previously unimaginable. But there&amp;rsquo;s a catch—the most powerful capabilities often come with price tags that not everyone can afford.&lt;/p&gt;
&lt;p&gt;From individual creators competing for attention to startups facing AI-augmented giants, the ability to pay for advanced AI is becoming a new axis of inequality.&lt;/p&gt;
&lt;p&gt;Is this just another chapter in technological progress, or are we creating a permanent divide between the AI haves and have-nots? How do we ensure AI benefits everyone, not just those who can afford it?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/02/ai-ethics-resource-gap/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/02/ai-ethics-resource-gap/&lt;/a&gt;
&lt;/p&gt;</description></item><item><title>The Ethics of AI-Generated Code in Open Source: A Balanced Perspective</title><link>https://coreydaley.dev/posts/2026/02/ethics-of-ai-generated-code-in-open-source/</link><pubDate>Fri, 13 Feb 2026 11:57:51 -0500</pubDate><guid>https://coreydaley.dev/posts/2026/02/ethics-of-ai-generated-code-in-open-source/</guid><description>&lt;p&gt;Here&amp;rsquo;s a question that&amp;rsquo;s been keeping me up at night: When does using AI coding assistants cross the line from productivity tool to ethical problem? I&amp;rsquo;ve been using tools like GitHub Copilot and Claude Code extensively, and I started wondering—if someone submits AI-generated code to open source projects and builds their reputation on it, is that fundamentally different from using Stack Overflow or IDE autocomplete?&lt;/p&gt;
&lt;p&gt;In my latest blog post, I explore both sides of this debate. On one hand, AI democratizes contributions and amplifies what we can accomplish. On the other, it raises serious questions about authenticity, trust, and what it means to truly &amp;lsquo;know&amp;rsquo; the code you&amp;rsquo;re responsible for. The middle ground is messy and context-dependent.&lt;/p&gt;
&lt;p&gt;Where do you draw the line? Should contributors be required to disclose AI usage?&lt;/p&gt;
&lt;p&gt;Read more at &lt;a
 href="https://coreydaley.dev/posts/2026/02/ethics-of-ai-generated-code-in-open-source/" target="_blank" rel="noopener noreferrer"&gt;https://coreydaley.dev/posts/2026/02/ethics-of-ai-generated-code-in-open-source/&lt;/a&gt;
&lt;/p&gt;</description></item></channel></rss>