<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Matt McKenna's Blog]]></title><description><![CDATA[Hi, I’m Matt, an Android GDE working at Block. Here I'll share insights on Android, Kotlin, and cutting edge topics to give back to the community that helped me]]></description><link>https://blog.mmckenna.me</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 17:25:02 GMT</lastBuildDate><atom:link href="https://blog.mmckenna.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[You Should Make a Hype Channel]]></title><description><![CDATA[During my time at Square, I started using a Slack channel as a living hype doc and worked with my manager, Stephen Pickens, to refine the approach. Together we put this document together to share the ]]></description><link>https://blog.mmckenna.me/you-should-make-a-hype-channel</link><guid isPermaLink="true">https://blog.mmckenna.me/you-should-make-a-hype-channel</guid><category><![CDATA[hype-doc]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[engineering-management]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Wed, 25 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/606f0345d741af6659cf8f11/797df15a-58d5-4091-ba82-f1e8f7762773.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>During my time at Square, I started using a Slack channel as a living hype doc and worked with my manager, <a href="https://www.linkedin.com/in/stephen-pickens-58069838/">Stephen Pickens</a>, to refine the approach. Together we put this document together to share the strategy with other individual contributors and engineering managers across Square. 
The practice ended up being genuinely impactful. Engineers who adopted it had an easier time advocating for themselves, and managers stayed more connected to their team's wins. As AI and LLMs gained access to work tools, these channels became a critical part of performance reviews, self-reflection, and general summaries of completed work.</p>
<p>We want to share it more broadly because we think it's useful well beyond Square, and it's one of the things we are most proud of building into our team culture.</p>
<hr />
<h2>Background</h2>
<p>Keeping track of your professional wins is important. Let's face it, it's easy to forget or overlook them when things get busy. The idea of a hype doc started at Square within the Women in Engineering community. A former Square engineer <a href="https://medium.com/square-corner-blog/you-are-your-own-best-hype-person-cf1e3a83c0c2">wrote about it on Medium</a>, breaking down why having a hype doc is a game-changer and how you can use it to advocate for yourself by keeping track of your own accomplishments. While people traditionally kept hype docs as personal Google documents, integrating them directly into Slack channels takes this practice to a whole new level of ease and effectiveness.</p>
<p>I started my hype doc in Slack¹ after struggling to maintain a traditional one. I wanted to keep track of my accomplishments, but found the usual approach full of friction:</p>
<ul>
<li><p>The doc lived outside of where work actually happened, which made it easy to forget.</p>
</li>
<li><p>I had to copy messages, links, or screenshots across different tools, often breaking context or losing detail.</p>
</li>
<li><p>It wasn't always clear what to include: should I paste a screenshot, a link, or both?</p>
</li>
<li><p><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Updates felt like a chore instead of something quick and lightweight.</mark></strong></p>
</li>
</ul>
<p>To make the process easier, I worked with my manager to figure out how a dedicated Slack channel could serve as a living, low-friction hype doc. This setup allowed us to stay aligned in real time, capture wins as they happened, and reference specific accomplishments when it mattered most without ever needing to leave the tools we were already using every day.</p>
<p>This small adjustment to reduce friction ended up creating a lightweight system that anyone can adopt. Let me break down why hype channels can be so valuable, starting with the benefits for engineers.</p>
<hr />
<h2>For Engineers</h2>
<p>Having your own dedicated hype doc channel in Slack has lots of practical benefits. Slack's real-time format makes it super easy to add your accomplishments on the fly. You can instantly forward messages, shout-outs, or project highlights into your hype channel, which means you're more likely to capture achievements as they happen. A hype channel also captures support work that happens in Slack and would otherwise go untracked in tools like GitHub (PRs) or Jira (tickets). Since Slack is already central to your everyday work, it's natural to update your hype doc regularly without extra hassle.</p>
<p>On top of that, this setup streamlines the entire process:</p>
<ul>
<li><p><strong>Automatic time tracking</strong> - Timestamps are automatically recorded, so there's no manual logging required.</p>
</li>
<li><p><strong>Real-time notifications</strong> - Interested parties can choose to be notified or tagged in real time, making it easy to "manage up" or tag collaborators.</p>
</li>
<li><p><strong>Easy search and summarizing</strong> - Whenever wins need to be resurfaced, Slack has an excellent search interface that makes it painless to find what you need.</p>
</li>
</ul>
<p>This habit boosts confidence, encourages proactively advocating for yourself, and helps you easily recall important accomplishments during performance reviews or when you're preparing for promotions.</p>
<p>One engineer I worked with created a Slack Workflow (IC Hype Tracker) to categorize &amp; track hype feedback in a way that maps directly to engineering level expectations. This makes it easy to quickly find relevant content in the hype channel when writing promo packets or reviewing performance with a manager.</p>
<p>Several engineers also made their hype doc channels open for transparency and as examples for others. 👏👏👏</p>
<hr />
<h2>Engineer Testimonials</h2>
<p>Here's what some of the engineers who adopted this practice had to say:</p>
<h3>Testimonial 1</h3>
<blockquote>
<p>I had previously kept a hype-doc using Google Docs, but transitioned to using a Slack channel to take advantage of some automation tricks and more easily track Slack-native content.</p>
</blockquote>
<p>This engineer configured the <a href="https://slack.com/help/articles/360000482666-Reacji-Channeler-for-Slack">Reacji Channeler Slack App</a> to cross-post hype-doc content by reacting to Slack messages with a custom emoji. It had a few positive effects:</p>
<ol>
<li><p>It removed friction from tracking hype-doc content.</p>
</li>
<li><p>It encouraged other individuals to track their own hype doc content by alerting followers of a message or conversation when hype doc content was logged.</p>
</li>
<li><p>It encouraged tracking smaller pieces of hype more often, creating a larger narrative pool to draw from when reviewing performance.</p>
</li>
</ol>
<h3>Testimonial 2</h3>
<blockquote>
<p>I tried using a notion doc and found myself just duplicate tracking my jira tickets.</p>
</blockquote>
<p>This engineer shared some tricks they use in their hype channel:</p>
<p><strong>Sharing private DMs.</strong> This shares only the single message, not an entire thread, but it's often sufficient for surfacing work with your EM that's brought to your attention in private DMs before you move it to a public channel, or if you're sent a kudos privately.</p>
<p><strong>Leveraging slackmoji reactions for categorization and retrieval.</strong> You can flag each message with a slackmoji and later search your hype doc for a specific emoji. You could use this for easy retrieval for impact, behavior, and betterment.</p>
<p><strong>Summarizing when forwarding.</strong> Each time you share a thread to your hype channel you can summarize what the hype is.</p>
<h3>Testimonial 3</h3>
<blockquote>
<p>I had tried, unsuccessfully, to keep a hype doc in Google docs for years. Creating a slack channel was so much easier. Most of the work happens, or is documented, in Slack. It's trivial to share a post to my hype channel, and it usually already has screenshots, videos, and links. This makes it that much easier when assembling a promo packet, because you can quickly jump to the post and gather all of the context. It's also obviously a great platform to discuss your hype entries with your manager as they happen.</p>
</blockquote>
<h3>Testimonial 4</h3>
<blockquote>
<p>Like others, I've struggled to keep up a hype doc and when it came time for promo or performance conversations I needed to spend a lot of after hours time getting these artifacts together. I just started using a Slack channel, but with the automations available via emoji reactions and lower friction my initial experience has been positive.</p>
</blockquote>
<hr />
<h2>For Managers</h2>
<p>Managers get plenty of value from Slack-based hype docs too. When managers join these hype channels, they get to celebrate their team's wins right away. This makes it easy for them to stay informed about their engineers' successes, strengths, and growth areas in real-time. This is especially helpful for managers with a lot of direct reports or multiple teams.</p>
<p>These channels are also incredibly helpful when it's time to give feedback. They provide a steady stream of concrete examples that make it easier to offer thoughtful, consistent input, and they help make evaluations simpler, fairer, and more transparent.</p>
<p>Managers can quickly forward notable achievements from a hype channel into larger team or leadership channels, making recognition visible across the company. Another big perk: if there's ever a change in management, simply adding the new manager to the existing Slack hype channel immediately gives them all the historical context they need.</p>
<p>Creating a dedicated Slack section for ICs' hype channels also helps maintain Slack organization.</p>
<hr />
<h2>Summary</h2>
<p>In short, turning hype docs into Slack channels makes tracking and celebrating accomplishments easy and effective. It simplifies the process for engineers, helps managers stay engaged and informed, and creates a positive, recognition-rich team culture. Adopting Slack-based hype docs contributes to happier, more motivated teams and a workplace where everyone's contributions are noticed, appreciated, and celebrated. Create yours today!</p>
<hr />
<h2>Footnotes</h2>
<ol>
<li>We used Slack and I refer to this throughout this post, but any tool you have at work where you can set up a channel for you and your manager should work!</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[No Permanent Overrides]]></title><description><![CDATA[I originally wrote this guide while working at Square to help my teams address a recurring problem: feature flag overrides applied during incident response were quietly becoming permanent, creating dr]]></description><link>https://blog.mmckenna.me/no-permanent-overrides</link><guid isPermaLink="true">https://blog.mmckenna.me/no-permanent-overrides</guid><category><![CDATA[  feature flags]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/606f0345d741af6659cf8f11/067ce01f-9b7e-438c-a59a-7363ee209410.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>I originally wrote this guide while working at Square to help my teams address a recurring problem: feature flag overrides applied during incident response were quietly becoming permanent, creating drift between production and staging environments. The advice below is a generalized version of that internal doc, applicable to any team using a feature flag provider.</p>
</blockquote>
<hr />
<h2>The Problem</h2>
<p>When users report regressions after a new feature rollout, we often respond quickly by removing the affected user from the feature flag rollout or rolling back the flag entirely. While this is the right call for immediate mitigation, the response often creates a long-term maintenance burden: we forget to re-enable the flag or re-enroll the user after the issue is resolved.</p>
<p><strong>These temporary overrides become permanent by accident.</strong></p>
<h2>Why This Matters</h2>
<p>This leads to fragmented experiences:</p>
<ul>
<li><p><strong>Invisible divergence:</strong> A user is running a different path than what we see in day-to-day development, making future bugs harder to reproduce or explain.</p>
</li>
<li><p><strong>Rollout inconsistency:</strong> Features meant to be universally available instead become fractured across the user base.</p>
</li>
<li><p><strong>Technical debt:</strong> Overrides accumulate and become difficult to audit, track, and reason about.</p>
</li>
</ul>
<h2>The Cost</h2>
<p>Permanent overrides erode our confidence in rollout data and introduce edge cases we didn't plan for. They can also block future rollouts, since we end up tiptoeing around previous exceptions rather than treating the root cause.</p>
<p>In short: they reduce predictability and slow us down.</p>
<h2>The Solution: Scheduled Rollback of Overrides</h2>
<p>Most modern feature flag providers (LaunchDarkly, Statsig, Flagsmith, Split, etc.) support scheduling the automatic removal of an override. This should be your default approach whenever you respond to a user-facing issue with an override.</p>
<p>Rather than relying on human follow-up or future cleanup work, make the override temporary at the moment it's applied while moving forward to fix the underlying issue.</p>
<h2>Example Workflow</h2>
<p>Let's say a user hits a bug due to a recent flag rollout:</p>
<ol>
<li><p><strong>Override with an expiration.</strong> Target the user in your feature flag provider to disable the flag, and schedule the override to be removed automatically, timed to when you expect the fix to ship.</p>
</li>
<li><p><strong>Fix the root cause.</strong> Plan and ship the fix through your normal release process.</p>
</li>
</ol>
<p>Because the override already has a time-to-live, the user re-enters the rollout path automatically once the fix is deployed. No manual cleanup or tracking required.</p>
<h2>How To Do It</h2>
<p>The exact steps vary by provider, but the general pattern is:</p>
<ol>
<li><p>Navigate to the flag in your provider's dashboard.</p>
</li>
<li><p>Add the affected user to an individual targeting or exception rule.</p>
</li>
<li><p>When adding the override, look for a <strong>"schedule removal"</strong> or <strong>"add expiration"</strong> option. Most providers surface this inline or through a calendar/date picker.</p>
</li>
<li><p>Set the expiration date to align with your expected fix deployment.</p>
</li>
<li><p>Confirm the scheduled change.</p>
</li>
</ol>
<p>If your provider doesn't support scheduled removal natively, create a ticket or calendar reminder at the time of the override. The point is to make cleanup a commitment, not an afterthought.</p>
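<p>As a rough illustration of what that self-managed fallback could look like, here's a minimal Kotlin sketch. The names (<code>FlagOverride</code>, <code>OverrideStore</code>) are invented for the example and aren't any provider's API; the point is that the expiration is part of the override itself, so cleanup never depends on human memory.</p>
<pre><code class="language-kotlin">import java.time.Duration
import java.time.Instant

// Illustrative sketch only — FlagOverride and OverrideStore are
// made-up names, not a real provider's API.
data class FlagOverride(
    val flagKey: String,
    val userId: String,
    val expiresAt: Instant,
)

class OverrideStore {
    private val overrides = mutableListOf&lt;FlagOverride&gt;()

    // Every override is created with a TTL — there's deliberately
    // no way to add a permanent one. That's the whole policy.
    fun addTemporaryOverride(flagKey: String, userId: String, ttl: Duration) {
        overrides += FlagOverride(flagKey, userId, Instant.now().plus(ttl))
    }

    // Checked on each flag evaluation: expired overrides are dropped,
    // so the user automatically re-enters the rollout.
    fun isOverridden(flagKey: String, userId: String, now: Instant = Instant.now()): Boolean {
        overrides.removeAll { !it.expiresAt.isAfter(now) }
        return overrides.any { it.flagKey == flagKey &amp;&amp; it.userId == userId }
    }
}
</code></pre>
<p>The design choice that matters: "temporary" is enforced by the API shape, not by follow-up discipline.</p>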
]]></content:encoded></item><item><title><![CDATA[Demo Why Not What]]></title><description><![CDATA[Intro
This year at Square we started shipping aggressively. Powered by a completed migration, new feature roadmaps, and AI adoption, we shifted into a build, demo, iterate, ship cycle. My team was sho]]></description><link>https://blog.mmckenna.me/demo-why-not-what</link><guid isPermaLink="true">https://blog.mmckenna.me/demo-why-not-what</guid><category><![CDATA[demo]]></category><category><![CDATA[engineering]]></category><category><![CDATA[Mobile Development]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Mon, 23 Mar 2026 13:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/606f0345d741af6659cf8f11/0658a6cf-0e01-412a-b49b-327b74ff085c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intro</h2>
<p>This year at Square we started shipping aggressively. Powered by a completed migration, new feature roadmaps, and AI adoption, we shifted into a build, demo, iterate, ship cycle. My team was showing our work to other engineers all the way up to the C-suite. To help us stand out I put together a short framework of how to make demos great, especially while shipping fast and showing progressive improvement.</p>
<p>This is that framework.</p>
<h2>Why Not What</h2>
<p>The "what" in a demo is obvious: it's on the screen. The audience can see the buttons and the flow of the UI. What they can't see is <strong>why it matters</strong>. Stories ground the features we build by showing an example of their usefulness. Without a story, the audience is left to guess the value.</p>
<h2>Fine vs. Great</h2>
<p><strong>Fine Demo: What was built</strong></p>
<blockquote>
<p>"We added a modifier button to the order screen."</p>
</blockquote>
<p><strong>Great Demo: The problem it solves</strong></p>
<blockquote>
<p>"Baristas needed a faster way to customize drinks during rush hour without slowing the line."</p>
</blockquote>
<hr />
<p><strong>Fine Demo: Walks through features</strong></p>
<blockquote>
<p>"First you tap here, then select the size, then add modifiers from this menu..."</p>
</blockquote>
<p><strong>Great Demo: Tells a customer story</strong></p>
<blockquote>
<p>"A regular walks in and orders their usual. The barista pulls up their history and the order is ready in two taps."</p>
</blockquote>
<hr />
<p><strong>Fine Demo: Audience evaluates the work</strong></p>
<blockquote>
<p>"We implemented item search using the new API and added caching."</p>
</blockquote>
<p><strong>Great Demo: Audience imagines the user whose day got easier</strong></p>
<blockquote>
<p>"Now when the line is out the door, the barista isn't fumbling through menus, they're making drinks."</p>
</blockquote>
<h2>Lead With a Story</h2>
<p>Before showing anything, answer these four questions from the user's perspective:</p>
<ol>
<li><p><strong>Who are you?</strong> Give the audience a person to root for.</p>
</li>
<li><p><strong>What are you trying to do?</strong> Set up the job to be done.</p>
</li>
<li><p><strong>What was painful before?</strong> Make the problem known.</p>
</li>
<li><p><strong>What's different now?</strong> Show the contrast.</p>
</li>
</ol>
<h2>Demo Recipe</h2>
<p>Follow this structure to build a compelling narrative!</p>
<ol>
<li><p><strong>Open with a one-sentence persona + goal</strong></p>
<blockquote>
<p>"A seller at a busy counter needs to get someone checked out fast."</p>
</blockquote>
</li>
<li><p><strong>Name the pain in one sentence</strong></p>
<blockquote>
<p>"Before, this meant digging through three menus to find the right option, which backed up the line."</p>
</blockquote>
</li>
<li><p><strong>Show the happy path first</strong> One clean flow. No detours, no edge cases. Just the thing that proves the value.</p>
</li>
<li><p><strong>Call out the moment of value</strong> Don't make the audience guess. Say it:</p>
<blockquote>
<p>"This is the part that used to take 30 seconds. Now it's two taps."</p>
</blockquote>
</li>
<li><p><strong>Close with the outcome</strong> Time saved. Fewer errors. Clearer decisions. Happier customers. Name the win.</p>
</li>
<li><p><strong>Save build details for the end</strong> Technical deep-dives are great for people who want them. Offer them as a follow-up, not the main event.</p>
</li>
</ol>
<h2>Make It Stick</h2>
<h3>Critical</h3>
<ul>
<li><p><strong>Visibility Matters</strong> - Introduce yourself and your team at the start, call out contributors by name when showing their work, and share credit generously. Demos are a team effort!</p>
</li>
<li><p><strong>Keep it short</strong> - A tight demo is 1–3 minutes, not 10.</p>
</li>
<li><p><strong>Trust your instincts</strong> - If it feels too short, it's probably right.</p>
</li>
<li><p><strong>Rehearse once</strong> - Even one dry run catches awkward transitions and filler words.</p>
</li>
<li><p><strong>One takeaway</strong> - What's the single sentence the audience should walk away with? Know it before you start.</p>
</li>
</ul>
<h3>Polish</h3>
<ul>
<li><p><strong>Clean your environment</strong> - No "Test User 69." Realistic data makes it believable.</p>
</li>
<li><p><strong>Consider a backup</strong> - Live is better, but a 30-second video or a few screenshots can save the moment.</p>
</li>
<li><p><strong>Handle questions at the end</strong> - "I'll cover that at the end" keeps the flow intact. Answering mid-demo often derails the story.</p>
</li>
</ul>
<h3>Credibility</h3>
<ul>
<li><strong>Own the gaps</strong> - Call out what is missing and what's coming next.</li>
</ul>
<h2>Common Traps</h2>
<p><strong>Don't apologize for what's not done yet</strong></p>
<ul>
<li><p>❌ "Sorry, this part isn't quite finished but…"</p>
</li>
<li><p>✅ "This is in progress - here's what's coming next."</p>
</li>
</ul>
<p><strong>Don't detour into bugs or edge cases</strong></p>
<ul>
<li><p>❌ "Now if you do this weird thing, it breaks but…"</p>
</li>
<li><p>✅ Save edge cases for Q&amp;A unless they're critical to the story.</p>
</li>
</ul>
<p><strong>Don't narrate clicks</strong></p>
<ul>
<li><p>❌ "And then I click here, and then I click here…"</p>
</li>
<li><p>✅ "I'll add the modifier…" <em>(while clicking)</em></p>
</li>
</ul>
<h2>Handling Q&amp;A</h2>
<ul>
<li><p><strong>Repeat the question</strong> - Ensures everyone heard it and gives you time to think.</p>
</li>
<li><p><strong>Bridge back to the story</strong> - "That's a great question about [XYZ]. Remember our barista…"</p>
</li>
<li><p><strong>Defer deep dives</strong> - "That's a great technical question, let's sync after."</p>
</li>
<li><p><strong>Make space for multiple voices</strong> - Take the lead and go through the queue of people asking questions. Take one or two from an individual before moving on.</p>
</li>
</ul>
<h2>Know Your Audience</h2>
<ul>
<li><p><strong>Leadership</strong> - Focus on outcomes and metrics. "This saves $2 or 2 minutes per transaction."</p>
</li>
<li><p><strong>Engineers</strong> - Show the happy path, then offer technical deep-dive. "Want to see how we handle error handling?"</p>
</li>
<li><p><strong>Customers</strong> - Emphasize ease and reliability. "You'll never lose a sale again!"</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Introducing Dejavu: Recomposition Testing for Jetpack Compose]]></title><description><![CDATA[Where This Idea Came From
At Square, most of the app is built on Workflow. There is an internal testing framework that lets you assert exactly how many render passes are triggered for a given interact]]></description><link>https://blog.mmckenna.me/introducing-dejavu</link><guid isPermaLink="true">https://blog.mmckenna.me/introducing-dejavu</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[Jetpack Compose]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Mon, 09 Mar 2026 19:26:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/606f0345d741af6659cf8f11/97c7bef5-60ea-4b3e-be87-d14f5a09ae28.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Where This Idea Came From</h2>
<p>At Square, most of the app is built on <a href="https://github.com/square/workflow-kotlin">Workflow</a>. There is an internal testing framework that lets you assert exactly how many render passes are triggered for a given interaction which is then enforced by CI.</p>
<p>I led the team that optimized the Select Payment Method screen in the Square Point of Sale app. We traced through the render tree, cut unnecessary render passes, and <strong>reduced overall latency by 30%!</strong> Each render was doing extra work, so consolidating effort and lowering the render count directly contributed to better performance.</p>
<p>The bigger win was what happened <em>after</em>. We wrote tests that asserted the new and improved render counts we'd achieved. If a future change caused extra renders to show up, CI would fail and our performance gains couldn't silently regress.</p>
<h2>The Problem with Compose</h2>
<p>Workflows have renders. Compose has recompositions.</p>
<p>There are only a few ways to get insight into how Composables are performing.</p>
<ul>
<li><p><a href="https://developer.android.com/develop/ui/compose/tooling/debug#recomposition-counts">Layout Inspector</a> requires being in an IDE¹, manually checking a running app, and it doesn't run in CI.</p>
</li>
<li><p>"<code>SideEffect</code> <a href="https://developer.android.com/develop/ui/compose/side-effects#sideeffect-publish">guarantees that the effect executes after every successful recomposition</a>" and can be used to count them. But they litter production code with debugging infrastructure.</p>
</li>
</ul>
<p>Neither gives you a testable contract enforceable on every PR.</p>
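<p>For context, the <code>SideEffect</code> counting pattern looks roughly like this (a sketch; <code>Ref</code> is just a tiny mutable holder and the log tag is invented for the example):</p>
<pre><code class="language-kotlin">import android.util.Log
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.SideEffect
import androidx.compose.runtime.remember

// Plain mutable holder — intentionally NOT snapshot state, so
// incrementing it can't itself trigger a recomposition.
class Ref(var value: Int)

@Composable
fun CounterValue(value: Int) {
    val recompositions = remember { Ref(0) }
    // SideEffect runs after every successful recomposition,
    // so the counter increments exactly once per recomposition.
    SideEffect {
        recompositions.value++
        Log.d("Recompose", "CounterValue: ${recompositions.value}")
    }
    Text("$value")
}
</code></pre>
<p>It works, but every composable you want to measure needs this scaffolding shipped in production code, and nothing ever asserts on the numbers.</p>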
<h2>Introducing Dejavu</h2>
<p>Dejavu turns recomposition behavior into test assertions. No production code changes beyond <code>Modifier.testTag()</code> (which you're probably using already).</p>
<h3>Setup</h3>
<pre><code class="language-kotlin">// Create a Recomposition Tracking Rule
@get:Rule
val composeTestRule = createRecompositionTrackingRule()

@Test
fun incrementCounter_onlyValueRecomposes() {
  // Perform an action
  composeTestRule.onNodeWithTag("inc_button")
    .performClick()

  // Assert that Composables change like you expect
  composeTestRule.onNodeWithTag("counter_value")
    .assertRecompositions(exactly = 1)

  // Or assert that they remain stable
  composeTestRule.onNodeWithTag("counter_title")
    .assertStable() // asserts recompositions = 0
}
</code></pre>
<p>Same pattern as any Compose UI test.</p>
<ul>
<li><p>Find a node by tag</p>
</li>
<li><p>Perform an action</p>
</li>
<li><p>Assert your expected recomposition count!</p>
</li>
</ul>
<p>When a test fails, you get diagnostics that tell you <em>why</em>:</p>
<pre><code class="language-plaintext">dejavu.UnexpectedRecompositionsError: Recomposition assertion failed for testTag='product_header'
  Composable: demo.app.ui.ProductHeader (ProductList.kt:29)
  Expected: exactly 0 recomposition(s)
  Actual: 1 recomposition(s)

  All tracked composables:
    ProductListScreen = 1
    ProductHeader    = 1  &lt;-- FAILED
    ProductItem      = 1

  Recomposition timeline:
    #1 at +0ms — param slots changed: [1] | parent: ProductListScreen

  Possible cause:
    1 state change(s) of type Int
    Parameter/parent change detected (dirty bits set)
</code></pre>
<p>See the <a href="https://dejavu.mmckenna.me/error-messages/">Error Messages Guide</a> for more information.</p>
<h3>What Makes Dejavu Different</h3>
<p>Dejavu hooks into <a href="https://developer.android.com/reference/kotlin/androidx/compose/runtime/CompositionTracer">CompositionTracer</a> so there's no compiler plugin, bytecode manipulation, or Gradle plugin.</p>
<p><a href="https://dejavu.mmckenna.me/causality-analysis/">Causality analysis</a> is a best effort attempt to explain <em>why</em> we had a recomposition. It tracks <code>Snapshot</code> state changes and maps dirty bits back to parameter slots.</p>
<p>All Composables are tracked by default. A <code>testTag</code> is only required for the assertion API with <code>onNodeWithTag("x")</code>.</p>
<p>The tracer itself tracks every composable that gets traced via <code>CompositionTracer</code>, regardless of whether it has a <code>testTag</code>. The tag mapping just bridges between <code>testTag</code> and function name so assertions can find the right counter.</p>
<h2>An AI Agent's Blind Spot</h2>
<p>AI agents are writing Compose code. They refactor screens, hoist state, extract components, and they're <em>mostly</em> good at it. There's currently no way for an agent to deterministically know whether it negatively (or positively) affected recompositions.</p>
<p>We humans are relying on the model to get it right as the ever-growing avalanche of PR reviews comes barreling towards us. UI tests could pass and look fine in the CI recordings while users start leaving negative reviews about jank.</p>
<p>Dejavu gives you, your CI, and your Agents the ability to run UI tests and validate that there are no unexpected changes.</p>
<p>The error messages work for both audiences. A human or an agent can read the failure and know what to fix. And if an agent regresses recompositions, CI fails and the PR doesn't merge.</p>
<p>You can also stream events in real time with <code>Dejavu.enable(app, logToLogcat = true)</code>. An agent running <code>adb logcat -s Dejavu</code> gets a live feed of per-instance composition state while iterating on a running app.</p>
<h2>Get Started</h2>
<pre><code class="language-kotlin">androidTestImplementation("me.mmckenna.dejavu:dejavu:0.1.2")
</code></pre>
<blockquote>
<p><a href="https://dejavu.mmckenna.me">Full documentation</a> · <a href="https://dejavu.mmckenna.me/getting-started/">Getting Started</a> · <a href="https://dejavu.mmckenna.me/examples/">Examples</a> · <a href="https://dejavu.mmckenna.me/api-reference/">API Reference</a> · <a href="https://github.com/himattm/dejavu">GitHub</a></p>
</blockquote>
<p>If you've ever stared at Layout Inspector wondering why a composable recomposed, or wished you could just <em>assert</em> on it in a test, give this a try and please let me know how it goes!</p>
<h2>What's next?</h2>
<p>Lots of real testing! I added a pretty expansive test suite to measure correctness, but Compose is so expressive and adaptable I'm sure there are things I missed. If you find bugs I'd love to know what they are.</p>
<hr />
<h2>Footnotes</h2>
<ol>
<li>At Square we were beginning to have weekly discussions about the value of an IDE in today's AI-enabled world. What's your take?</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Cook Together, Clean Together, Build Together
]]></title><description><![CDATA[Ask anyone who worked at Block/Square what they loved most and I guarantee you the first thing they'll say is "it's the people".
Square's mission of economic empowerment and growing by helping others g]]></description><link>https://blog.mmckenna.me/cook-together-clean-together-build-together</link><guid isPermaLink="true">https://blog.mmckenna.me/cook-together-clean-together-build-together</guid><category><![CDATA[AI]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Tue, 03 Mar 2026 17:56:48 GMT</pubDate><content:encoded><![CDATA[<p>Ask anyone who worked at Block/Square what they loved most and I guarantee you the first thing they'll say is "it's the people".</p>
<p>Square's mission of economic empowerment and growing by helping others grow brought together builders who lead with empathy, care about their craft, and deeply care about the person they are building for. Together we made it a great place to work and a great product.</p>
<p>The best teams I've worked with have always turned coworkers into real friendships. The kind of friendships that promote vulnerability and compassion. Where you can admit what you don't know and learn from each other's experiences. Teams like this reshape how you think about problems and challenge your assumptions.</p>
<p>Personal growth drives every high performer I know. They seek teams that ship products and challenge them to become better at their craft. Your fastest growth will be when a teammate approaches a problem in a way you never would have. Or when, with kindness, they show you a blind spot you didn't know you had. AI agents can make you more productive, but they can't push you to rethink your assumptions or stretch into unfamiliar territory the way another person can. They don't challenge your perspective and instead they reflect it back to you.</p>
<p>I'm starting to think about it like cooking dinner with a partner. When one person cooks it's the other person's job to do the dishes, except nobody ever fights over who gets to clean. Washing dishes isn't a skill people desire to develop, it's a necessity, a chore. Cooking is where the skill, creativity, and development lives.</p>
<p>My wife and I love to cook together when we have the time. We both cook and we both clean. The learning, creativity, and growth are shared! She's a much better chef than me, so I get to ask questions, learn from her, see <em>and</em> do. Afterwards we clean. The load is shared and the whole thing feels less like work because of it.</p>
<p>Working with AI can start to feel like it does the cooking while you're left rinsing plates. The work gets done, but you miss out on growth. You miss out on growing others and being mentored yourself.</p>
<p>A team of humans guiding AI agents will build a better product than a single person with a team of agents. I'll never be friends with an AI Agent. It will never introduce me to new lived experiences or challenge my assumptions and biases in solving problems. AI agents are a powerful tool and they can absolutely help make the software. But good teams and good companies are built by people who care.</p>
<p>There's an African proverb: "If you want to go fast, go alone. If you want to go far, go together." Right now, the industry obsesses over the idea that AI lets a single person do the work of ten. And while I doubt that, let's assume it can. I'd never want to be a one-person team with an army of agents. Whatever you're building would be better built by a team of humans guiding the tools together.</p>
<p>Team health matters. Trust matters. The feeling that someone actually has your back matters. These compound over time, and no amount of AI tooling replaces them. So use AI and use it aggressively, but don't use it as a reason to shrink your team down to just you, because in the end, it's the people.</p>
]]></content:encoded></item><item><title><![CDATA[Claude Status Lines Are the New Terminal Prompt]]></title><description><![CDATA[If you've spent time tweaking your shell prompt or configuring Starship, you know the feeling: there's something nice about having the right information visible when you need it.
Claude Code supports custom status lines, so I put one together for my ...]]></description><link>https://blog.mmckenna.me/claude-status-lines-are-the-new-terminal-prompt</link><guid isPermaLink="true">https://blog.mmckenna.me/claude-status-lines-are-the-new-terminal-prompt</guid><category><![CDATA[claude.ai]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[Android]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[iOS]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Mon, 05 Jan 2026 21:50:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767649581090/8d6f20ab-88b8-4115-8a92-7b10c5a8ec9f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you've spent time tweaking your shell prompt or configuring Starship, you know the feeling: there's something nice about having the right information visible when you need it.</p>
<p><a target="_blank" href="https://code.claude.com/docs/en/statusline">Claude Code supports custom status lines</a>, so I put one together for my workflow as an Android dev, with some iOS support too!</p>
<p>Here's what I ended up with:</p>
<pre><code class="lang-plaintext">⌸ my-app · Opus 4.5 · [████░░░░▒▒] 45% · +127 -34 · $2.15 · feature/auth* · mcp:1
⬢ emulator-5560 · ⬡ emulator-5562
</code></pre>
<p>I'll walk through each piece and why I find it useful.</p>
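<p>For context on how these pieces get their data: Claude Code runs the command you configure as your status line, pipes a JSON payload describing the session to its stdin, and displays the first line the command prints. Here's a minimal sketch in Python; the field names (<code>model.display_name</code>, <code>workspace.current_dir</code>, <code>cost.total_cost_usd</code>) are my reading of the statusline docs, so double-check them against your Claude Code version:</p>
<pre><code class="lang-python">import os

# Claude Code pipes session JSON to the configured command's stdin and shows
# the first line the command prints. Field names here are assumptions taken
# from the statusline docs; verify them against your Claude Code version.
def render(payload):
    model = payload.get("model", {}).get("display_name", "?")
    cwd = payload.get("workspace", {}).get("current_dir", "")
    cost = payload.get("cost", {}).get("total_cost_usd", 0.0)
    return f"{os.path.basename(cwd)} · {model} · ${cost:.2f}"

# In the real script: print(render(json.load(sys.stdin)))
</code></pre>
<p>Everything that follows builds on that same contract: JSON in, one line out.</p>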
<h2 id="heading-the-icon">The Icon</h2>
<p>The icon is purely decorative. Giving each project its own icon just makes switching between repos feel a little more intentional. It's just fun.</p>
<h2 id="heading-directory">Directory</h2>
<p>I often have multiple Claude instances running across different projects. Seeing which directory Claude is working in helps me stay oriented. The status line shows where Claude was started, and it abbreviates the path when it navigates into subdirectories so it doesn't take up too much space.</p>
<h2 id="heading-model">Model</h2>
<p>Having the model name visible means I don't have to think about which one I'm using. Small thing, but it removes a bit of mental overhead.</p>
<h2 id="heading-context">Context</h2>
<p>The <code>[████░░░░▒▒]</code> bar shows how much of the context window I've used. Before I had this, I’d get surprised by compaction. Now I can see when I'm getting close to the limit and wrap up more intentionally or preemptively compact. It's helped me be more proactive about starting fresh sessions when needed.</p>
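<p>Drawing a bar like that is just bucketing a percentage into fixed-width cells. A quick sketch; the 10-cell width and my reading of <code>▒</code> as the slice of the window reserved before auto-compaction are assumptions, not necessarily how the published script works:</p>
<pre><code class="lang-python"># █ = used context, ░ = free, ▒ = tail of the window treated as a
# pre-compaction buffer. Cell count and buffer threshold are assumptions.
def context_bar(used_pct, buffer_start_pct=80.0, cells=10):
    filled = int(used_pct / 100 * cells)
    buffer_cells = cells - int(buffer_start_pct / 100 * cells)
    free = max(0, cells - buffer_cells - filled)
    bar = "█" * filled + "░" * free + "▒" * min(buffer_cells, cells - filled)
    return f"[{bar}] {used_pct:.0f}%"
</code></pre>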
<h2 id="heading-lines-changed">Lines Changed</h2>
<p>Seeing <code>+127 -34</code> keeps me mindful of my teammates. Someone is going to review this code, and I want to make that experience as smooth as I can.</p>
<p>When I notice the numbers getting larger, it's a good signal to pause and commit what I have. Smaller changes are easier to review and easier to talk through. This little reminder has helped me stay more thoughtful about the code I'm generating before handing it off.</p>
<h2 id="heading-cost">Cost</h2>
<p>I find it helpful to see what a session is costing. It's not about watching every dollar, it's more about staying aware. Some problems need deep exploration, others should be quick. Having the number visible helps me calibrate.</p>
<h2 id="heading-branch">Branch</h2>
<p>Seeing the current branch helps me avoid mistakes. Nothing fancy, just useful to have in view.</p>
<h2 id="heading-mcp">MCP</h2>
<p>If you're using MCP servers, the <code>mcp</code> count shows how many are connected. Sometimes they can fail to start, so this is helpful for ruling that out when debugging.</p>
<h2 id="heading-devices">Devices</h2>
<p>This is the part that's most specific to mobile dev.</p>
<p>The second line shows my connected emulators and simulators. I can see what's available, which device is currently targeted, and, through configuration, the version of an app installed on each one. This makes it easy to ask Claude to target a specific device, and I can verify that builds installed correctly without switching windows.</p>
<p>If you're working on Android or iOS, having visibility into your devices is really helpful.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Symbol</strong></td><td><strong>Meaning</strong></td></tr>
</thead>
<tbody>
<tr>
<td><code>⬢</code></td><td>Android device (targeted via ANDROID_SERIAL)</td></tr>
<tr>
<td><code>⬡</code></td><td>Android device (not targeted)</td></tr>
<tr>
<td><code></code></td><td>iOS simulator (Apple logo)</td></tr>
</tbody>
</table>
</div><hr />
<p>Try it for yourself at <a target="_blank" href="https://github.com/himattm/claude-mobiledev-statusline">claude-mobiledev-statusline</a>! It’s configurable via a JSON file.</p>
<p>If something's missing or broken, let me know on <a target="_blank" href="https://bsky.app/profile/mmckenna.me">Bluesky</a> or open an issue. PRs are welcome! 💚</p>
]]></content:encoded></item><item><title><![CDATA[Agents Keep Fighting Over My CPU]]></title><description><![CDATA[I prefer running two or three AI agents locally rather than offloading to remote agents. The feedback loop is faster, I can see what's happening, use the tools I like, and context switching between tasks is easier when everything is on my machine.
My...]]></description><link>https://blog.mmckenna.me/agents-keep-fighting-over-my-cpu</link><guid isPermaLink="true">https://blog.mmckenna.me/agents-keep-fighting-over-my-cpu</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Thu, 18 Dec 2025 16:39:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/AlRhopd1riM/upload/fab5191a87a879308e321b88df2d015d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I prefer running two or three AI agents locally rather than offloading to remote agents. The feedback loop is faster, I can see what's happening, use the tools I like, and context switching between tasks is easier when everything is on my machine.</p>
<p>My typical setup is one agent on a larger task, a feature or refactor, and one or two others on smaller things like a UI tweak or bug investigation.</p>
<p>As an Android dev, there’s an obvious downside. My laptop takes a beating when running multiple Gradle tasks.</p>
<p>When both agents decide to run Gradle at the same time, my machine grinds to a halt. One instance of Gradle is already resource intensive and running multiple builds at the same time makes them compete for resources. A build that normally takes 3 minutes stretches to 15. The fans spin up. Everything freezes. And if they're both trying to deploy to the same emulator, you get task clashing on top of it.</p>
<p>The agents don't know about each other. They can't coordinate.</p>
<h2 id="heading-the-workarounds">The Workarounds</h2>
<p>There are a few options here, and none of them are great.</p>
<p>You could run multiple emulators so each agent has its own target. Now you're burning even more resources on a machine that's already struggling, and it doesn't solve the problem of multiple expensive tasks running at once.</p>
<p>So you could manually sequence the tasks yourself. Wait for Agent A to finish its build before letting Agent B run. But now you're babysitting the agents instead of doing your own work. The whole point of running multiple agents is to get more done in parallel, not to become a human task scheduler.</p>
<p>Either way, you're giving up something: time, resources, or attention.</p>
<h2 id="heading-what-i-tried-first">What I Tried First</h2>
<p>I built a CLI wrapper. The idea was simple: wrap the command and the calling context, and they go into a <a target="_blank" href="https://en.wikipedia.org/wiki/FIFO_\(computing_and_electronics\)">First In First Out</a> (FIFO) queue. One runs at a time, the rest wait.</p>
<pre><code class="lang-bash">queue ./gradlew build
</code></pre>
<p>Technically it worked, but there was a problem I didn't anticipate.</p>
<p>AI coding tools have shell timeouts.</p>
<p><a target="_blank" href="https://docs.anthropic.com/en/docs/claude-code">Claude Code</a> gives you about 2 minutes by default. <a target="_blank" href="https://www.cursor.com/">Cursor</a> hard-codes 30 seconds. If your command is waiting in queue when the timeout hits, it gets killed.</p>
<p>Then the agents try to “be smart” and run the command directly, without calling into the queue mechanism, defeating the whole purpose.</p>
<p>I tried extending the timeouts with environment variables. That helped for Claude, but Cursor's limit isn't configurable. Even with longer timeouts, the problem compounds: we really only want a timeout to cover execution time, not time spent waiting in the queue.</p>
<h2 id="heading-why-model-context-protocol-works-better">Why Model Context Protocol Works Better</h2>
<p><a target="_blank" href="https://modelcontextprotocol.io/">Model Context Protocol (MCP)</a> <a target="_blank" href="https://modelcontextprotocol.io/specification/2025-06-18/server/tools">Tool</a> calls don't go through the shell. The agent connects directly to an MCP server, and that connection <strong>stays alive until the Tool returns</strong>. There's no external timeout to worry about.</p>
<p>With a CLI, the agent spawns a shell process, the shell runs your command, and if the shell process takes too long, the agent kills it. With MCP, the agent calls a Tool and waits for the response. No shell, no timeout!</p>
<p>So I rewrote the queue as an MCP server. Same concept, FIFO queue, one build at a time, but the timeout problem goes away.</p>
<h2 id="heading-how-it-works">How It Works</h2>
<p>Agent A calls <code>run_task</code> with a Gradle command. The MCP server queues it, runs it, returns the result. If Agent B calls the same Tool while A is running, B waits in the queue until A finishes. Both agents block on their Tool calls, but neither times out.</p>
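<p>Conceptually, the server is a mutex around your build system. This is a hypothetical sketch of the idea, not the actual agent-task-queue code (which persists a real FIFO queue in SQLite):</p>
<pre><code class="lang-python">import subprocess
import threading

# One process-wide lock: while Agent A's task holds it, Agent B's Tool call
# blocks here until it's B's turn. Hypothetical sketch of the idea only.
_task_lock = threading.Lock()

def run_task(command):
    """Body of a hypothetical run_task Tool: wait your turn, run, report."""
    with _task_lock:
        result = subprocess.run(command, capture_output=True, text=True)
    return {"exit_code": result.returncode, "stdout": result.stdout}
</code></pre>
<p>Because the MCP connection stays open while <code>run_task</code> blocks, waiting in line costs nothing but time.</p>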
<p>Here's what that looks like in practice:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Time</td><td>Agent A</td><td>Agent B</td></tr>
</thead>
<tbody>
<tr>
<td>0:00</td><td>Started build</td><td></td></tr>
<tr>
<td>0:02</td><td>Building...</td><td>Entered queue, waiting</td></tr>
<tr>
<td>3:12</td><td>Completed (192.6s)</td><td>Started build</td></tr>
<tr>
<td>3:45</td><td></td><td>Completed (32.6s)</td></tr>
</tbody>
</table>
</div><p>Agent B's build only took 32 seconds because it didn't have to compete with Agent A. Gradle's daemon was warm, caches were populated, the machine was free.</p>
<p>Total time: 3:45. Without the queue, both builds fighting each other would've taken 10+ minutes, and my laptop would've been unusable.</p>
<p>The implementation is about 600 lines of Python. <a target="_blank" href="https://sqlite.org/">SQLite</a> with <a target="_blank" href="https://sqlite.org/wal.html">Write-Ahead Logging (WAL)</a> mode for the queue state. Process groups to clean up orphaned builds if an agent crashes. Output goes to log files to avoid eating up context window tokens.</p>
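<p>Those two details carry a lot of weight. WAL lets the server read queue state while a write is in flight, and claiming a task inside a transaction means two workers can never grab the same one. A toy version of that claim step, with a hypothetical schema rather than the real one:</p>
<pre><code class="lang-python">import sqlite3

# Toy persistent FIFO: WAL for concurrent readers, and a transaction around
# select-then-update so each task can only be claimed once. Hypothetical
# schema, not the real agent-task-queue tables.
def open_queue(path):
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "command TEXT NOT NULL, "
        "status TEXT NOT NULL DEFAULT 'queued')"
    )
    return conn

def enqueue(conn, command):
    with conn:
        conn.execute("INSERT INTO tasks (command) VALUES (?)", (command,))

def claim_next(conn):
    with conn:  # one transaction: nobody else claims this row mid-flight
        row = conn.execute(
            "SELECT id, command FROM tasks "
            "WHERE status = 'queued' ORDER BY id LIMIT 1"
        ).fetchone()
        if row:
            conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?",
                         (row[0],))
        return row
</code></pre>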
<p>If you're using Claude Code, you'll want to add instructions to your <a target="_blank" href="http://CLAUDE.md">CLAUDE.md</a> telling it to prefer the MCP Tool over the built-in Bash for build commands. Otherwise it'll just run Gradle directly and skip the queue. The <a target="_blank" href="https://github.com/block/agent-task-queue#note-for-claude-code-users">README</a> has the snippet that I use.</p>
<h2 id="heading-try-it-out">Try It Out!</h2>
<p>If you're running multiple AI agents on the same machine and they're triggering expensive operations (builds, tests, docker commands) this might help. The agents serialize automatically instead of fighting over resources.</p>
<p>It's open source under Apache 2.0 and you can install it with <a target="_blank" href="https://docs.astral.sh/uv/guides/tools/">uvx</a>:</p>
<pre><code class="lang-bash">uvx agent-task-queue@latest
</code></pre>
<p>Works with most AI coding tools that support MCP. Check out the repo at <a target="_blank" href="http://github.com/block/agent-task-queue">github.com/block/agent-task-queue</a> for setup instructions.</p>
<hr />
<p>If you try it out, I'd love to hear how it goes. Open an <a target="_blank" href="https://github.com/block/agent-task-queue/issues">issue on GitHub</a> or reach out on <a target="_blank" href="https://bsky.app/profile/mmckenna.me">Bluesky</a>!</p>
]]></content:encoded></item><item><title><![CDATA[A Framework for Engineering Variance with AI Agents]]></title><description><![CDATA[The Piston and the Cup Holder
In mechanical engineering, "tolerance" is the permissible limit of variation in a physical dimension. It is the acknowledgement that the world isn’t perfect, and designs need to account for that.
A plastic cup holder in...]]></description><link>https://blog.mmckenna.me/the-piston-and-the-cup-holder</link><guid isPermaLink="true">https://blog.mmckenna.me/the-piston-and-the-cup-holder</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Thu, 04 Dec 2025 13:15:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/zR03DGYYo9I/upload/87bc3068fd7e190c86de9b9ee1a05c1e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-the-piston-and-the-cup-holderhttpsyoutubelojqoql7zf4"><a target="_blank" href="https://youtu.be/LOjQoQL7zF4">The Piston and the Cup Holder</a></h1>
<p>In mechanical engineering, "<a target="_blank" href="https://www.modusadvanced.com/resources/blog/tolerance-stack-up-nightmares-choosing-the-right-tolerance-in-product-design">tolerance</a>" is the permissible limit of variation in a physical dimension. It is the acknowledgement that the world isn’t perfect, and designs need to account for that.</p>
<p>A plastic cup holder in a car has high tolerance. If the mold is off by a millimeter, or if the plastic warps slightly in the summer heat, it doesn’t matter. Your drink will still fit. The user experience remains intact.</p>
<p>A piston ring inside the engine block has extremely low tolerance. If it is off by a fraction of a millimeter, the seal fails, the compression drops, the engine seizes. The margin for error is effectively zero.</p>
<p>We can understand this intuitively in the physical world. You wouldn't use cheap plastic for a piston, and you wouldn't precision machine a cup holder out of hardened steel.</p>
<p>In software, we tend to treat our code bases more or less as uniform constructs. We apply the same processes, the same speed expectations, and increasingly, the same AI tools to every part of the system. We are being asked to "vibe code" our way through mission-critical code, and it’s dangerous.</p>
<p>If we want to use AI to accelerate delivery we need to stop asking "Can AI write this?" and start asking "What is the tolerance of this part of our system?"</p>
<p>I want to propose a way to categorize this into three specific software engineering tolerances.</p>
<hr />
<h2 id="heading-1-logical-tolerance">1. Logical Tolerance</h2>
<p>This is the permissible limit of deviation from truth. How "correct" does the output need to be?</p>
<ul>
<li><p><strong>High Tolerance:</strong> An animation, a prototype script, or some internal tooling. If the animation jiggles a bit or the script needs a retry, the failure state is annoyance.</p>
</li>
<li><p><strong>Medium Tolerance:</strong> Search rankings, recommendation feeds, or "eventual consistency" counters (like a view count). If a user has 42 likes but the screen says 41 for a few seconds, or if a search result is ranked #2 instead of #1, the product may be “imperfect”, but it is still viable.</p>
</li>
<li><p><strong>Low Tolerance:</strong> Idempotency keys, math, encryption standards, and payment execution. The failure state here isn't annoyance; it’s liability.</p>
</li>
</ul>
<p>Generative AI is a probabilistic tool. It is a <a target="_blank" href="https://www.ibm.com/think/topics/generative-model">variance generator</a>. It excels in High Tolerance zones because "mostly right" is often good enough. In Low Tolerance zones, like the payment path my team manages, correctness is binary. Code that is 99% correct will 100% fail for some uses.</p>
<hr />
<h2 id="heading-2-volatility-tolerance">2. Volatility Tolerance</h2>
<p>This is the permissible limit of change a system can absorb. How stable is the system throughout change?</p>
<ul>
<li><p><strong>High Tolerance:</strong> A standalone microservice, a one-off migration script. These can change daily with minimal blast radius.</p>
</li>
<li><p><strong>Medium Tolerance:</strong> Feature-level business logic, like a User Profile screen or a Settings page. These evolve regularly based on product requirements, but they rely on stable foundations and can't break every week.</p>
</li>
<li><p><strong>Low Tolerance:</strong> Core platform libraries, API schemas, persistent storage, and foundational data models. These evolve slowly as their impact grows exponentially.</p>
</li>
</ul>
<p>AI promises infinite velocity, to push code as fast as it consumes electricity. Not every part of a system can survive infinite velocity. In an area with low <strong>Volatility Tolerance</strong>, the constraint isn't how fast you can <em>write</em> code, but how safely you can <em>integrate</em> it over time.</p>
<hr />
<h2 id="heading-3-cognitive-tolerance">3. Cognitive Tolerance</h2>
<p>This is the limit of complexity, and the rate of change, a human can verify and internalize in a reasonable time frame.</p>
<ul>
<li><p><strong>High Tolerance:</strong> Boilerplate, standard patterns, and simple unit tests. The code is obvious. If AI writes it, it can be scanned in seconds and stamped with "<a target="_blank" href="https://en.wiktionary.org/wiki/LGTM">LGTM</a>."</p>
</li>
<li><p><strong>Medium Tolerance:</strong> Standard data mapping (<a target="_blank" href="https://en.wikipedia.org/wiki/Data_transfer_object">DTO</a> to Domain), form validation, or <a target="_blank" href="https://developer.android.com/topic/libraries/architecture/viewmodel">ViewModel</a> state management. It requires actual reading and context, but the logic is generally linear and self-contained.</p>
</li>
<li><p><strong>Low Tolerance:</strong> Distributed system logic, concurrency handling, and security protocols.</p>
</li>
</ul>
<p>This isn't just about the verification tax: the time it takes to reason about a complex block of code. It is about the <strong>erosion of confidence</strong>.</p>
<p>Engineers rely on deep mental models to predict how a system will behave. These mental models are built on assumptions and historical understanding. If we are encouraged and AI allows us to change too much code too quickly, our mental models can’t keep up. Even if the new code is "better", if it invalidates our core assumptions faster than we can relearn them, we lose our ability to reason about the system.</p>
<p>In Low Tolerance zones, we need the code to match our mental model. If we lose that alignment, we are effectively shipping <a target="_blank" href="https://understandlegacycode.com/blog/what-is-legacy-code-is-it-code-without-tests/"><strong>instant legacy code</strong></a>, systems that become brittle immediately because we are too afraid to change what we don't fully understand.</p>
<hr />
<h2 id="heading-the-core-thesis">The Core Thesis</h2>
<p>AI can be a powerful accelerator, but it is introducing a new way of building software where we need to be very aware of these tolerances.</p>
<p>To use it effectively, we have to map our code base against these three axes.</p>
<p>You cannot readily apply high-variance tools to Low Tolerance zones (the Pistons) without risk of seizing the engine. Conversely, you shouldn't spend your engineers' time hand-crafting the High Tolerance cup holders.</p>
<p>As the industry pushes for more and more AI adoption, these terms allow us to move the conversation from "Can we use AI?" to "What is the cost?" to "How do areas of our code base thrive with AI?"</p>
<p>We can use this framework to explain that while AI might accelerate code generation in some critical paths, it introduces risks to <strong>Logical Tolerance</strong> that require a disproportionate investment in verification. It gives us the nuance to say, <em>"If we accelerate here, we are trading off</em> <strong><em>Cognitive Tolerance</em></strong>, which means our future maintenance costs will go up."</p>
<p>The goal of this series is to establish <strong>Logical Tolerance</strong>, <strong>Volatility Tolerance</strong>, and <strong>Cognitive Tolerance</strong> as a shared vocabulary for these trade-offs such that they can be used in key engineering decisions.</p>
<hr />
<p>This was <strong>Part One of The Tolerance Trilogy</strong>, thanks for reading! Up next, we’ll dive into why AI written code specifically struggles within large software systems and how we can go about using it successfully.</p>
]]></content:encoded></item><item><title><![CDATA[Git Bisect for Mobile is Dead]]></title><description><![CDATA[I love git bisect
If you don’t know it, git bisect is basically a binary search for bugs. You have an issue that did not exist before, so you grab two commit hashes, one good and one bad, and build, repro, build, repro until you find the exact commit...]]></description><link>https://blog.mmckenna.me/git-bisect-for-mobile-is-dead</link><guid isPermaLink="true">https://blog.mmckenna.me/git-bisect-for-mobile-is-dead</guid><category><![CDATA[AI]]></category><category><![CDATA[Git]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[Android]]></category><category><![CDATA[iOS]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Wed, 29 Oct 2025 11:00:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ufg0Go7rl2k/upload/0d4aacb0d0f3c67b4bd6942fa5665601.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-i-love-git-bisect">I love <code>git bisect</code></h2>
<p>If you don’t know it, <a target="_blank" href="https://git-scm.com/docs/git-bisect"><code>git bisect</code></a> is basically a binary search for bugs. You have an issue that did not exist before, so you grab two commit hashes, one good and one bad, and build, repro, build, repro until you find the exact commit that introduced the bug.</p>
<h2 id="heading-git-bisect-hates-mobile"><code>git bisect</code> Hates Mobile</h2>
<p>Constant building and shifting of your dependency graph is a HUGE pain for mobile devs. Our builds can take <strong>multiple minutes</strong>, especially if we have to go to a previous release and we don’t have recent build caches. Git bisect requires a lot of builds. This can take a TON of time.</p>
<p>I was investigating an issue between two releases (<code>release_8</code> and <code>release_9</code>) which had 1300 commits between them. This is about 11 bisect steps or:</p>
<ul>
<li><p>22 minutes of build time with 2 minute builds</p>
</li>
<li><p>55 minutes with 5 minute builds</p>
</li>
</ul>
<p>The reality is going to be some mix of these build times depending on which way the bisect goes. Regardless, this is a large amount of time to simply wait.</p>
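<p>The step counts above are just a ceiling log, if you want to sanity-check the math for your own repo:</p>
<pre><code class="lang-python">import math

# Bisect is binary search, so n commits take about ceil(log2(n)) steps.
def bisect_cost(commits, minutes_per_build):
    steps = math.ceil(math.log2(commits))
    return steps, steps * minutes_per_build

# 1300 commits: 11 steps, so 22 minutes at 2 min/build, 55 at 5 min/build.
</code></pre>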
<h2 id="heading-ai-to-the-rescue">AI to the Rescue?</h2>
<p>For this issue, we couldn’t reproduce it locally and didn’t have clear steps. Doing 10+ builds would’ve been a waste of time.</p>
<p>So instead, I grabbed a <strong>big diff</strong>.<br />We knew the bug appeared in the recent release but not the previous one:</p>
<pre><code class="lang-plaintext">git diff release_8 release_9
</code></pre>
<p>This command produced <strong>413,032</strong> lines changed.</p>
<p>Instead of bisecting I asked an AI Agent to look at this diff with relevant information from our issue. <strong>This only took a few minutes!</strong></p>
<p>The AI analyzed the diff, grouped related changes, and surfaced a few areas that might be tied to the behavior we were seeing:</p>
<ul>
<li><p>Code paths that touched relevant data</p>
</li>
<li><p>New conditions around feature flags</p>
</li>
<li><p>etc.</p>
</li>
</ul>
<p>No builds. No waiting. Just instant context.</p>
<h2 id="heading-why-this-works-so-well">Why This Works So Well</h2>
<p>Bisecting exists because it’s impossible to reason about that much change at once. It helps us narrow the scope so we can test smaller chunks and confirm behavior step by step. That’s how we make the problem manageable.</p>
<p>This is exactly what large language models are good at. They can take in the full context of a change and reason about the entire picture at once.</p>
<p>Instead of checking each commit, I tell the agent to get the diff between good and bad commits and ask, “Given these changes and this bug, what looks connected?” and it points me towards areas of interest.</p>
<p>I’m still debugging, just with more context than I can hold in my head, and without having to wait for the compiler.</p>
<h2 id="heading-try-it-yourself">Try It Yourself</h2>
<p>Next time you hit a tricky regression, grab an AI agent and look at a diff between the good and bad commits. Let it scan the changes and tell you what stands out.</p>
<p><code>git bisect</code> still matters. When you can reproduce the issue cleanly and builds are quick, it’s the right tool. When the bug is vague or builds take forever, this approach can save a lot of time.</p>
]]></content:encoded></item><item><title><![CDATA[Microdosing AI for Mobile Dev]]></title><description><![CDATA[I’ve been experimenting with small, practical ways AI can fit into my daily mobile development. Not to write code for me, but to accelerate the “in-between” steps of my workflow. These little uses aren’t super flashy. They just smooth out some pain p...]]></description><link>https://blog.mmckenna.me/microdosing-ai-for-mobile-dev</link><guid isPermaLink="true">https://blog.mmckenna.me/microdosing-ai-for-mobile-dev</guid><category><![CDATA[Android]]></category><category><![CDATA[Android Studio]]></category><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[gemini]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Mon, 27 Oct 2025 11:00:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4IxPVkFGJGI/upload/2f2270fcd21ae12b8c6603c6c8bdd485.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’ve been experimenting with small, practical ways AI can fit into my daily mobile development. Not to write code for me, but to accelerate the “in-between” steps of my workflow. These little uses aren’t super flashy. They just smooth out some pain points.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✨</div>
<div data-node-type="callout-text">Here’s some examples using <a target="_self" href="https://developer.android.com/gemini-in-android?gclsrc=aw.ds&amp;gad_source=1&amp;gad_campaignid=22525788956&amp;gbraid=0AAAAAC-IOZl4l9s5BWyqCTTGWNgwTnbvu&amp;gclid=CjwKCAjwjffHBhBuEiwAKMb8pNSXifdA-U1SBZOBoiyVvfJtfniUQBukCvb427rvhRGqeasv6LuAkxoC22oQAvD_BwE">Gemini in Android Studio</a> with the <a target="_self" href="https://github.com/android/compose-samples/tree/main/Jetchat">Jetchat</a> project, but any AI Agent you have access to should work!</div>
</div>

<hr />
<h2 id="heading-reverse-lookup-a-screen">Reverse Lookup a Screen</h2>
<p>You’re debugging a UI, see some text, and have no idea where it’s coming from. You start in <code>strings.xml</code>, follow the ID through the ViewModel, check the Composable, maybe even <a target="_blank" href="https://en.wikipedia.org/wiki/Grep">grep</a> the whole project. It’s the least interesting scavenger hunt in Android development, but reverse looking up the UI that’s being rendered is an extremely common task.</p>
<p>Now I just drop a screenshot into an AI Agent and ask where that text lives. It’ll usually tell me the string resource name, where it’s referenced, and which file renders it. It’s the kind of task that’s perfect to hand off. Something that used to take a few minutes and some effort is now a non-issue.</p>
<p><strong>Example Prompt</strong></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">Given this screenshot of my app, show me where this is defined and where I can make edits to this UI.</div>
</div>

<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tiP1tnTSPyQ">https://youtu.be/tiP1tnTSPyQ</a></div>
<p> </p>
<hr />
<h2 id="heading-first-pass-review">First Pass Review</h2>
<p>I still rely on teammates for real code review, but AI’s great for a quick first pass. Before I open a PR, I’ll paste the diff or ask an agent to look at my commit history. Then I ask for feedback on readability, potential issues, and edge cases.</p>
<h3 id="heading-time-saved">Time Saved!</h3>
<p>I work in a <a target="_blank" href="https://www.reddit.com/r/androiddev/comments/1ksptx7/comment/mtonvgx/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button">large project</a> and we generally can’t sync all of our modules at once. This first pass of review has caught cases where I missed updating an override after adding a new function to an interface, whether it’s a fake in tests or an implementation in another module. Typically CI/CD would catch this for me, but that takes much longer to run.</p>
<p><strong>Example Prompt</strong></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">Here’s a PR diff. Review this code change in a kind but critical way. Highlight anything that looks error prone, consider edge cases, evaluate general performance. Favor readability. Summarize into actionable bullet points with must fixes and suggestions.</div>
</div>

<hr />
<h2 id="heading-unused-resource-cleanup">Unused Resource Cleanup</h2>
<p>All projects collect junk over time. You’ll find string resources that aren’t referenced, drawables from old redesigns, or XML layouts replaced by Compose. They all sit there bloating the project and slowing builds.</p>
<p>AI Agents are pretty good at walking the reference graph across modules and telling you what’s safe to delete.</p>
<p>This is especially helpful for large projects where not all modules are synced or Android Studio’s Unused Resources tool simply falls over.</p>
<p><a target="_blank" href="https://www.reddit.com/r/androiddev/comments/1ksptx7/comment/mtonvgx/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761504713263/a7700346-9128-48fe-9838-77cf370ff960.png" alt="One reddit user says &quot;That shit simply does not work in big projects&quot; in a thread about using the Android Studio unused resources tool." class="image--center mx-auto" /></a></p>
<p><strong>Example Prompt</strong></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">Given this resource, does it have any uses? If not, can you remove it?</div>
</div>

<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/L-Je5tjJqig">https://youtu.be/L-Je5tjJqig</a></div>
<p> </p>
<hr />
<h2 id="heading-supercharging-find-in-files">Supercharging Find in Files</h2>
<p>We all love the <strong>Find in Files</strong> feature (<code>Cmd+Shift+F</code>), but plain text search often surfaces way too much noise. There’s a little "regex" checkbox that unlocks a lot of potential, although writing regex is <a target="_blank" href="https://regexle.com/">usually</a> not fun. This is a task I’m happy to hand off.</p>
<p>Describe the code to find in plain English and let the AI generate the complex pattern for the search box.</p>
<h3 id="heading-example-1-hunting-hardcoded-strings"><strong>Example 1: Hunting Hardcoded Strings</strong></h3>
<p>You're trying to stamp out hardcoded strings in your Jetpack Compose code. A simple search for <code>Text("</code> is a noisy mess that misses half the cases and incorrectly flags others. Now, I just ask for a smart regex that finds <code>Text("Hello")</code> and <code>Text(text = "World")</code> but correctly <em>ignores</em> <code>Text(stringResource(R.string.hello))</code>.</p>
<p><strong>Example Prompt</strong></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">Give me a regex for IntelliJ's search to find all Jetpack Compose <code>Text()</code> calls where the <code>text</code> parameter is a hardcoded string literal, not a <code>stringResource()</code> call.</div>
</div>

<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/rP1B_o9beH8">https://youtu.be/rP1B_o9beH8</a></div>
<p> </p>
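<p>For a sense of what comes back, here’s a sketch of the kind of pattern an agent might produce for that prompt. The exact regex will vary, so treat this one as illustrative and test it against your own code first:</p>

```kotlin
// Illustrative regex for the prompt above: matches Text("...") and
// Text(text = "..."), but not Text(stringResource(R.string.hello)).
val hardcodedText = Regex("Text\\(\\s*(?:text\\s*=\\s*)?\"[^\"]*\"")

fun main() {
    val lines = listOf(
        "Text(\"Hello\")",
        "Text(text = \"World\")",
        "Text(stringResource(R.string.hello))",
    )
    for (line in lines) {
        println("${hardcodedText.containsMatchIn(line)}  $line")
    }
}
```

<p>Paste just the pattern (without the Kotlin escaping) into the IntelliJ search box with the regex checkbox ticked.</p>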
<h3 id="heading-example-2-auditing-theme-colors">Example 2: Auditing Theme Colors</h3>
<p>You're migrating to Material 3 and need to find every layout that’s still using a hardcoded color (like <a target="_blank" href="https://www.colorhexa.com/a4c639"><code>#A4C639</code></a>) instead of a theme attribute (<code>?attr/colorPrimary</code>). Instead of manually scanning hundreds of XML files, I can get a single regex to find every violation.</p>
<p><strong>Example Prompt</strong></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤖</div>
<div data-node-type="callout-text">I'm searching my <code>.xml</code> layout files. Write a regex to find any XML attribute that ends in <code>Color</code> (like <code>android:textColor</code>) and is set to a hardcoded hex value that starts with <code>#</code>.</div>
</div>
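<p>Again, the exact pattern the AI returns will vary; this is one plausible answer, sketched in Kotlin so it’s easy to sanity-check before dropping it into the search box:</p>

```kotlin
// Illustrative regex: any attribute ending in Color set to a raw hex value.
val hardcodedColor = Regex("[A-Za-z:]*Color=\"#[0-9A-Fa-f]{3,8}\"")

fun main() {
    println(hardcodedColor.containsMatchIn("android:textColor=\"#A4C639\""))            // flagged
    println(hardcodedColor.containsMatchIn("android:textColor=\"?attr/colorPrimary\"")) // ignored
}
```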

<hr />
<p>If you’ve found other ways AI boosts your mobile workflow, I’d love to hear about them!</p>
<p>Thanks for reading 💚</p>
]]></content:encoded></item><item><title><![CDATA[The Case of the Missing Handler]]></title><description><![CDATA[Our story begins with a clean line of Kotlin:
typealias Handler = (result: Result) -> Unit

It looks innocent. Give a function type a name and tidy up the signatures, great!
Then a bug hits. The Handler is gone in the debugger and in stack traces onl...]]></description><link>https://blog.mmckenna.me/the-case-of-the-missing-handler</link><guid isPermaLink="true">https://blog.mmckenna.me/the-case-of-the-missing-handler</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Thu, 28 Aug 2025 04:00:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/d9ILr-dbEdg/upload/fc2aabdc203783ebe7cb47ac69d472b9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Our story begins with a clean line of Kotlin:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">typealias</span> Handler = (result: Result) -&gt; <span class="hljs-built_in">Unit</span>
</code></pre>
<p>It looks innocent. Give a function type a name and tidy up the signatures, great!</p>
<p>Then a bug hits. The Handler is gone in the debugger, and in stack traces only <code>(Result) -&gt; Unit</code> remains. Our alias has disappeared at the scene of the crime.</p>
<p>That is the trick: a <code>typealias</code> does not create a new type. It is only a name for the function type.</p>
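<p>A tiny sketch makes the transparency concrete (using <code>String</code> in place of the post’s <code>Result</code> so it runs standalone):</p>

```kotlin
typealias Handler = (String) -> Unit // String stands in for the post's Result

// The alias and the raw function type are one type to the compiler,
// so values flow between them with no conversion and no new identity.
val raw: (String) -> Unit = { println(it) }
val aliased: Handler = raw
val backAgain: (String) -> Unit = aliased
```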
<h2 id="heading-enter-the-functional-interface">Enter the functional interface</h2>
<pre><code class="lang-kotlin"><span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> Handler {</span>
  <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">handle</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span>
}
</code></pre>
<p>Now the Handler exists. It has identity, it can carry documentation, and the method name <code>handle</code> shows up in stack traces and search results. Usage is still light:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> handler: Handler = Handler { result -&gt;
  println(result)
}
</code></pre>
<h2 id="heading-evidence">Evidence</h2>
<h3 id="heading-overloading">Overloading</h3>
<p>Aliases collapse to the same function type, which prevents overloading:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">typealias</span> Handler = (Result) -&gt; <span class="hljs-built_in">Unit</span>
<span class="hljs-keyword">typealias</span> Completion = (Result) -&gt; <span class="hljs-built_in">Unit</span>

<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">process</span><span class="hljs-params">(h: <span class="hljs-type">Handler</span>)</span></span> { }
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">process</span><span class="hljs-params">(c: <span class="hljs-type">Completion</span>)</span></span> { } <span class="hljs-comment">// Won't compile, same JVM signature.</span>
</code></pre>
<p>Functional interfaces are distinct types, so overloads compile:</p>
<pre><code class="lang-kotlin"><span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> Handler { </span>
  <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">handle</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span>
}
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> Completion {</span>
  <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">complete</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span>
}

<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">process</span><span class="hljs-params">(h: <span class="hljs-type">Handler</span>)</span></span> { }
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">process</span><span class="hljs-params">(c: <span class="hljs-type">Completion</span>)</span></span> { }
</code></pre>
<h3 id="heading-multiple-handlers-with-the-same-alias">Multiple handlers with the same alias</h3>
<p>Using a single alias for several callbacks in the same type invites mistakes. The compiler cannot help if you swap them. It also won’t tell you which was called while debugging or looking at a stacktrace.</p>
<pre><code class="lang-kotlin"><span class="hljs-comment">// One alias for two different callbacks</span>
<span class="hljs-keyword">typealias</span> Handler = (Result) -&gt; <span class="hljs-built_in">Unit</span>

<span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Screen</span></span>(
  <span class="hljs-keyword">val</span> onClick: Handler,
  <span class="hljs-keyword">val</span> onDismiss: Handler
)

<span class="hljs-keyword">val</span> click: Handler = { println(<span class="hljs-string">"clicked: <span class="hljs-variable">$it</span>"</span>) }
<span class="hljs-keyword">val</span> dismiss: Handler = { println(<span class="hljs-string">"dismissed: <span class="hljs-variable">$it</span>"</span>) }

<span class="hljs-comment">// Compiles, but the handlers are reversed</span>
<span class="hljs-keyword">val</span> screen = Screen(onClick = dismiss, onDismiss = click)
</code></pre>
<p>With functional interfaces you can give each callback a distinct type, so it is impossible to construct the <code>Screen</code> incorrectly and stacktraces name the method you expect.</p>
<pre><code class="lang-kotlin"><span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> ClickHandler { <span class="hljs-keyword">fun</span> <span class="hljs-title">onClick</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span> }
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> DismissHandler { <span class="hljs-keyword">fun</span> <span class="hljs-title">onDismiss</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span> }

<span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Screen</span></span>(
  <span class="hljs-keyword">val</span> onClick: ClickHandler,
  <span class="hljs-keyword">val</span> onDismiss: DismissHandler
)

<span class="hljs-comment">// Will not compile if you swap types</span>
<span class="hljs-comment">// val screen = Screen(onClick = DismissHandler { ... }, onDismiss = ClickHandler { ... })</span>

<span class="hljs-keyword">val</span> screen = Screen(
  onClick = ClickHandler { println(<span class="hljs-string">"clicked: <span class="hljs-variable">$it</span>"</span>) },
  onDismiss = DismissHandler { println(<span class="hljs-string">"dismissed: <span class="hljs-variable">$it</span>"</span>) }
)
</code></pre>
<h3 id="heading-kdoc-and-discoverability">KDoc and Discoverability</h3>
<p>You can put KDoc on a functional interface and it surfaces in the IDE. You can also add small helpers as extensions:</p>
<pre><code class="lang-kotlin"><span class="hljs-comment">/** Called once per operation with the final result. */</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span> Handler {</span>
  <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">handle</span><span class="hljs-params">(result: <span class="hljs-type">Result</span>)</span></span>
}

<span class="hljs-function"><span class="hljs-keyword">fun</span> Handler.<span class="hljs-title">logged</span><span class="hljs-params">()</span></span>: Handler = Handler { result -&gt;
  Log.d(<span class="hljs-string">"Handler"</span>, <span class="hljs-string">"Handled: <span class="hljs-variable">$result</span>"</span>)
  <span class="hljs-keyword">this</span>.handle(result)
}
</code></pre>
<h3 id="heading-stacktrace-comparison">Stacktrace Comparison</h3>
<p>With a <code>typealias</code>, the alias name is not present. You will usually see <code>$lambda$</code> or <code>Function1</code>:</p>
<pre><code class="lang-kotlin">java.lang.IllegalStateException: boom <span class="hljs-keyword">in</span> <span class="hljs-keyword">typealias</span>
    at FileKt.main$lambda$<span class="hljs-number">0</span>(File.kt:<span class="hljs-number">27</span>) <span class="hljs-comment">// What was called here??</span>
    at FileKt.main(File.kt:<span class="hljs-number">30</span>)
    at FileKt.main(File.kt)
</code></pre>
<p>With a functional interface, the interface and its method are visible, which makes searching and triage simpler:</p>
<pre><code class="lang-kotlin">java.lang.IllegalStateException: boom <span class="hljs-keyword">in</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-keyword">interface</span></span>
    at LoggingHandler.handle(File.kt:<span class="hljs-number">18</span>) <span class="hljs-comment">// Clear name of the caller and override!</span>
    at FileKt.main(File.kt:<span class="hljs-number">38</span>)
    at FileKt.main(File.kt)
</code></pre>
<p>The same thing happens in the debugger’s call stack, leaving you stranded as you attempt to navigate through your breakpoints.</p>
<h2 id="heading-testing-with-fakes">Testing with fakes</h2>
<p>A named type makes simple fakes straightforward:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> calls = mutableListOf&lt;Result&gt;()
<span class="hljs-keyword">val</span> handler: Handler = Handler { result -&gt; calls += result }

<span class="hljs-comment">// exercise code that calls handler.handle(...)</span>

check(calls.isNotEmpty())
</code></pre>
<h2 id="heading-verdict">Verdict</h2>
<p>A <code>typealias</code> makes code look cleaner, but it does not provide type identity. If you want names that survive code navigation, debugging, and evolution, prefer a functional interface.</p>
<h2 id="heading-try-it-yourself">Try it yourself!</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">▶</div>
<div data-node-type="callout-text"><a target="_self" href="https://pl.kotl.in/we5i-kag_">Here’s a Kotlin Playground</a> where I set up a <code>typealias</code> and <code>fun interface</code> to capture the stack trace to see the difference in the output!</div>
</div>

<h2 id="heading-when-to-use-a-typealias">When to use a <code>typealias</code>?</h2>
<p>Use one when dealing with types that aren’t lambdas, especially complicated or nested generics. Aliases are great for turning complex types into readable, reference-able, and memorable names!</p>
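<p>For example, a sketch with an invented type, where an alias earns its keep:</p>

```kotlin
// Naming the nested generic once beats retyping it in every signature.
typealias UserSessions = MutableMap<String, List<Pair<Long, String>>>

fun newSessions(): UserSessions = mutableMapOf()

fun main() {
    val sessions = newSessions()
    sessions["matt"] = listOf(1L to "login")
    println(sessions.size)
}
```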
]]></content:encoded></item><item><title><![CDATA[Stop Calling AI a “Junior Engineer”]]></title><description><![CDATA[There’s a common shorthand in tech circles: “Treat your AI like a junior engineer.” It’s meant to set expectations that you need to give it clear tasks, review its work, don’t let it push to prod, etc, etc.
But the analogy doesn’t sit right with me. ...]]></description><link>https://blog.mmckenna.me/stop-calling-ai-a-junior-engineer</link><guid isPermaLink="true">https://blog.mmckenna.me/stop-calling-ai-a-junior-engineer</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[mentorship]]></category><category><![CDATA[engineering]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Tue, 29 Jul 2025 17:59:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/BVr3XaBiWLU/upload/c7122a0e40884f9c6fdaaf4ac06c204c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There’s a common shorthand in tech circles: “Treat your AI like a junior engineer.” It’s meant to set expectations that you need to give it clear tasks, review its work, don’t let it push to prod, etc, etc.</p>
<p>But the analogy doesn’t sit right with me. The phrase carries weight, and that weight falls unfairly on real junior engineers.</p>
<h2 id="heading-a-quick-note-on-the-term-junior">A quick note on the term “junior”</h2>
<p>I want to acknowledge upfront that “junior” is not a perfect term. It’s vague, sometimes patronizing, and can be tied more to time served than to actual capability. In this context, I’m using it intentionally because it’s the phrase that gets used in these AI comparisons and trainings.</p>
<h2 id="heading-junior-engineers-are-people-not-metaphors">Junior engineers are people, not metaphors</h2>
<p>A junior engineer is a human being. They’re capable of learning from experience, synthesizing context across domains, showing initiative, and growing rapidly. They have motivations, feelings, and ambitions. They can build deep understanding, ask critical questions when something doesn’t make sense, apply judgment, and come up with novel solutions in ways that no language model can.</p>
<p>LLMs don’t grow easily¹. They don’t mature into someone you trust with architectural decisions or tough tradeoffs. They don’t start to anticipate edge cases, advocate for users, or question a spec because something feels off. They don’t take responsibility. They don’t improve with mentorship. They just respond based on patterns.</p>
<p>So when we call an AI a “junior engineer that needs hand holding,” we’re not just making a lazy analogy. We’re erasing the path that real people take to develop into leaders. We risk under investing in the very people who have that potential, because we’ve convinced ourselves a tool can stand in for them.</p>
<p>We need the junior engineers to grow, gain experience, build confidence, and the ability to make difficult decisions. And to make the comparison even more unfair…</p>
<h2 id="heading-llms-dont-have-the-same-capacity-as-junior-engineers">LLMs don’t have the same capacity as junior engineers</h2>
<p>LLMs don’t learn the way people do. They don’t develop understanding. They don’t carry memory across tasks². They can’t reflect or introspect or ask for clarification. They can produce code, sure, even elegant and useful code, but they do so without comprehension. Having to constantly “reteach” the same concepts is exhausting. A junior engineer <em>might</em> need reminders, but then they grow. They internalize. They become the teacher.</p>
<p>LLMs don’t know what the code is for. They don’t understand how it fits into a product, or how that product fits into someone’s life. They have no sense of the impact it might have on the people who use it.</p>
<p>Describing LLMs as junior engineers misleads people about what these tools are and are not capable of. It sets the wrong expectations and erases the fundamental differences between real cognitive development and probabilistic pattern matching.</p>
<h2 id="heading-why-i-care">Why I care</h2>
<p>Words shape how we work. If we start thinking of AIs as “almost-human,” we will misuse them and undervalue the actual humans we hire to be on our teams.</p>
<p>LLMs are useful. But they’re tools, not teammates.</p>
<hr />
<h3 id="heading-footnotes">Footnotes</h3>
<ol>
<li><p>Sure, new models are coming more and more frequently, but they take massive amounts of energy and new training material.</p>
</li>
<li><p>Unless you explicitly engineer that behavior, and even then the context window is way too small to fully synthesize understanding across tasks.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Ready Layer One: Intro to the Model Context Protocol]]></title><description><![CDATA[Discover how the Model Context Protocol (MCP) connects AI to the real world. We'll explore the Model Context Protocol using the Kotlin SDK letting an Agent control Android devices with ADB! We’ll live code a new ADB tool and demo how AI Agents and th...]]></description><link>https://blog.mmckenna.me/ready-layer-one-intro-to-the-model-context-protocol</link><guid isPermaLink="true">https://blog.mmckenna.me/ready-layer-one-intro-to-the-model-context-protocol</guid><category><![CDATA[Android]]></category><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[mcp]]></category><category><![CDATA[mcp server]]></category><category><![CDATA[Model Context Protocol]]></category><category><![CDATA[adb]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Wed, 23 Jul 2025 15:17:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752686314230/7f8445af-d0c3-477a-9986-0b993a7a28ff.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Discover how the <a target="_blank" href="https://modelcontextprotocol.io/introduction">Model Context Protocol</a> (MCP) connects AI to the real world. We'll explore the Model Context Protocol using the Kotlin SDK letting an Agent control Android devices with <a target="_blank" href="https://developer.android.com/tools/adb">ADB</a>! We’ll live code a new ADB tool and demo how AI Agents and the MCP can automate tedious device tasks, like setting up device permissions, streamlining your Android workflow. You’ll leave this talk understanding the Model Context Protocol, its <a target="_blank" href="https://github.com/modelcontextprotocol/kotlin-sdk">Kotlin SDK</a>, and be ready to build your own AI-driven integrations!</p>
<h2 id="heading-video">Video</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://player.vimeo.com/video/1100417178">https://player.vimeo.com/video/1100417178</a></div>
<p> </p>
<p><a target="_blank" href="https://www.droidcon.com/2025/07/23/ready-layer-one-intro-to-the-model-context-protocol/">Source</a></p>
<h2 id="heading-slides">Slides</h2>
<iframe src="https://www.slideshare.net/slideshow/embed_code/key/540NrWtLrByGzZ?hostedIn=slideshare&amp;page=upload" width="476" height="400"></iframe>

<h2 id="heading-mentioned-links">Mentioned Links</h2>
<ul>
<li><p><a target="_blank" href="https://bsky.app/profile/mmckenna.me/post/3ls4werkbsc2k">Bluesky AndroidDev Starter Pack</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/modelcontextprotocol/kotlin-sdk">MCP Kotlin SDK</a></p>
</li>
<li><p><a target="_blank" href="https://block.github.io/goose/">Goose</a></p>
</li>
<li><p><a target="_blank" href="http://engineering.block.xyz/blog/blocks-playbook-for-designing-mcp-servers">Block's Playbook for Designing MCP Servers</a></p>
</li>
<li><p><a target="_blank" href="https://bsky.app/profile/botteaap.bsky.social/post/3lsezmlhco22j">Hugo Visser’s Drum MCP</a></p>
</li>
<li><p><a target="_blank" href="http://github.com/kaeawc/android-mcp-sdk">Android MCP SDK</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[GitHub Merge Strategies: A Visual Explanation]]></title><description><![CDATA[Discussion started up at work this week about the different merge strategies available in GitHub. I looked around online to see if I could find a good article to send to share a quick snapshot of the differences represented visually and didn’t find a...]]></description><link>https://blog.mmckenna.me/github-merge-strategies-a-visual-explanation</link><guid isPermaLink="true">https://blog.mmckenna.me/github-merge-strategies-a-visual-explanation</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Sat, 01 Mar 2025 12:00:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/hpQAUR9jkaM/upload/1385a73af16e58f7d4b5d769fe08e6f9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://androidweekly.net/issues/issue-664"><img src="https://androidweekly.net/issues/issue-664/badge" alt="Badge" class="image--center mx-auto" /></a></p>
<p>Discussion started up at work this week about the different merge strategies available in GitHub. I looked around online for a good article to share, one with a quick visual snapshot of the differences, and didn’t find any I liked, so here’s my take!</p>
<h2 id="heading-merging-your-pull-request">Merging your Pull Request</h2>
<p>You wrote all your code and are ready to merge! Let’s go! At the bottom of your Pull Request (PR) you’ll find a button with a drop down arrow that shows three options with descriptions, but what do they actually do? Sometimes it’s helpful to actually see how these things play out.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740762906175/1b89b3c1-4c48-421a-9dee-1bf515ebf43d.png" alt="GitHub merge dialog showing three options.  Create a merge commit: All commits from this branch will be added to the base branch via a merge commit.  Squash and merge: The 10 commits from this branch will be combined into one commit in the base branch.  Rebase and merge: The 10 commits from this branch will be rebased and added to the base branch." class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-before-the-merge"><strong>Before the Merge</strong></h2>
<p>(aka you wrote code on <code>feature-branch</code> and are ready to merge back to <code>main</code>)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740765759866/2188e3a0-a3d5-46cb-93d0-089d8e55686f.png" alt="A diagram showing the main branch with commits D to E and a feature branch with commits A to B to C" class="image--center mx-auto" /></p>
<ul>
<li><p><code>main</code> branch: <code>D → E</code></p>
</li>
<li><p><code>feature-branch</code> branched from <code>D</code> and adds commits <code>A → B → C</code></p>
</li>
</ul>
<hr />
<h2 id="heading-1-merge-commit"><strong>1. Merge Commit</strong></h2>
<ul>
<li><p><strong>Preserves full commit history</strong></p>
</li>
<li><p><strong>Shows when branches were merged</strong></p>
</li>
<li><p><strong>Creates an extra merge commit (</strong><code>M</code>)</p>
</li>
</ul>
<p><strong>After merge:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740765766109/c112165e-71ff-4a04-b152-4466d41bb7bf.png" alt="A diagram showing the main branch with commits D to E to M and a feature branch with commits A to B to C to M showing that M is a new commit on the main branch merging the changes." class="image--center mx-auto" /></p>
<ul>
<li><p>The feature branch is merged with all its history intact.</p>
</li>
<li><p>A new merge commit (<code>M</code>) is created.</p>
<ul>
<li>These usually read like <code>Merge pull request #---- from feature-branch</code> in your git history.</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-2-squash-merge"><strong>2. Squash Merge</strong></h2>
<ul>
<li><p><strong>Clean history with a single commit</strong></p>
</li>
<li><p><strong>Loses individual commits from the feature branch</strong></p>
</li>
</ul>
<p><strong>After squash merge:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740765780748/881fb9ee-e3be-4477-a5b7-fcec53e77479.png" alt="A diagram showing the main branch with commits D to E to S and a feature branch with commits A to B to C showing that S is a new commit on the main branch merging the changes that were combined from A, B, and C." class="image--center mx-auto" /></p>
<ul>
<li>The entire <code>A → B → C</code> branch is squashed into a single commit <code>S</code>, removing individual commits.</li>
</ul>
<hr />
<h2 id="heading-3-rebase-merge"><strong>3. Rebase Merge</strong></h2>
<ul>
<li><p><strong>Retains individual commits</strong></p>
</li>
<li><p><strong>Creates a linear history</strong></p>
</li>
<li><p><strong>Rewrites commit history (be careful in shared branches)</strong></p>
</li>
</ul>
<p><strong>After rebase:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740765793172/415a053f-7c52-4140-84e6-5f8e362f5e58.png" alt="A diagram showing the main branch with commits D to E to A prime to B prime to C prime. This represents that A, B, and C are rebased on top of the Head of main, but are fundamentally new commits since they have new parent commits." class="image--center mx-auto" /></p>
<ul>
<li><p>The commits are replayed on top of <code>main</code>, making history linear.</p>
</li>
<li><p>New commit hashes (<code>A'</code>, <code>B'</code>, <code>C'</code>) are created because their parent commits changed.</p>
</li>
</ul>
<hr />
<h2 id="heading-final-comparison"><strong>Final Comparison</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Merge Type</td><td>Keeps Individual Commits?</td><td>Keeps Merge Info?</td><td>Linear History?</td><td>Creates Extra Commit?</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Merge Commit</strong></td><td>✅ Yes</td><td>✅ Yes</td><td>❌ No</td><td>✅ Yes (<code>M</code>)</td></tr>
<tr>
<td><strong>Squash Merge</strong></td><td>❌ No (One Commit)</td><td>❌ No</td><td>✅ Yes</td><td>✅ Yes (<code>S</code>)</td></tr>
<tr>
<td><strong>Rebase Merge</strong></td><td>✅ Yes</td><td>❌ No</td><td>✅ Yes</td><td>❌ No</td></tr>
</tbody>
</table>
</div><hr />
<p>Which one do you prefer? Let me know and thanks for reading!</p>
]]></content:encoded></item><item><title><![CDATA[Just Enough Optimization]]></title><description><![CDATA[One of the most effective ways to improve software performance isn't complex algorithms or fancy caching techniques. Often, it's simply removing unnecessary work. If a piece of code doesn't need to run, it won't consume any resources. This principle ...]]></description><link>https://blog.mmckenna.me/just-enough-optimization</link><guid isPermaLink="true">https://blog.mmckenna.me/just-enough-optimization</guid><category><![CDATA[General Programming]]></category><category><![CDATA[General Advice]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Tue, 04 Feb 2025 18:37:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/FlPc9_VocJ4/upload/dcf11b7195d1988f003af25fee7bd59a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the most effective ways to improve software performance isn't complex algorithms or fancy caching techniques. Often, it's simply removing unnecessary work. If a piece of code doesn't need to run, it won't consume any resources. This principle extends beyond code optimization. We can get in the habit of optimizing for scale or fixing hypothetical bugs when we actually want to iterate quickly. Thinking about implementing complex solutions or distributed systems when our current user base is tiny probably isn’t needed. This is a fun exercise, but it can get in the way of quick iteration.</p>
<p>Imagine buying a fleet of delivery trucks for a new small bakery, committing a large pool of resources to solving a problem that doesn’t exist yet. This is optimizing for a level of business the bakery doesn’t have, potentially wasting those resources.</p>
<p>In software engineering, this might mean delaying a release to implement a feature that <em>might</em> be needed in the future. This is optimizing for user needs that may never materialize. In personal productivity, it could mean procrastinating because we're too busy perfecting our project management system. Have you ever said you wanted to write more and instead rewrote your website?</p>
<p>The key is to focus on the problems you <em>do</em> have and create solutions for the outcomes you want. By focusing on the "whats" instead of the "what ifs," you'll deliver real value, make meaningful progress, and ultimately be more productive. The hypothetical bugs or slow code may indeed end up being real, but in the meantime your users have something that helps them, and they can provide feedback to help you prioritize what's next.</p>
<p>Maybe your feature could be faster, but maybe it’s also fast enough.</p>
<hr />
<details><summary>Example: Jetpack Compose Beta</summary><div data-type="detailsContent">With the release of the <a target="_self" href="https://android-developers.googleblog.com/2021/02/announcing-jetpack-compose-beta.html">beta of Jetpack Compose</a>, the Compose team wanted to start getting early feedback, knowing that performance could be improved later and calling out that they will “<em>work on stabilizing these APIs up to our 1.0 release with particular focus on app performance and accessibility.</em>”</div></details>]]></content:encoded></item><item><title><![CDATA[Hardly Easy]]></title><description><![CDATA[Words like "easy" and "hard" can be surprisingly exclusive. Telling someone a task is easy can make them feel ashamed or inadequate if they struggle, while simply labeling something hard fails to provide the context engineers need to understand the t...]]></description><link>https://blog.mmckenna.me/hardly-easy</link><guid isPermaLink="true">https://blog.mmckenna.me/hardly-easy</guid><category><![CDATA[General Programming]]></category><category><![CDATA[General Advice]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Wed, 29 Jan 2025 12:00:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/-1_RZL8BGBM/upload/1b338cedb7ed1480610aa8adc33ed7b9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Words like "easy" and "hard" can be surprisingly exclusive. Telling someone a task is easy can make them feel ashamed or inadequate if they struggle, while simply labeling something hard fails to provide the context engineers need to understand the task's needs at a glance.</p>
<p>Instead of easy, try conveying that sentiment with more detail. Saying that something is straightforward or small in scope, even if it requires effort, will be clearer and help engineers feel empowered. As an example, the phrase “Implementing this feature will be small in scope” is clearer than “Implementing this feature is easy”, giving some context that we might not have dependencies on other teams. Replacing “This bug is easy to fix” with “This bug has a straightforward fix” implies that the task has a clear start and end.</p>
<p>When it comes to describing something challenging, be specific and descriptive! Rather than saying a task is "hard," explain <em>why</em>. Is it challenging because it requires critical thinking? Is it complex because it involves many different factors? Or perhaps it's demanding and requires a lot of effort or mental load.</p>
<p>This opens up a lot of new possibilities! We wouldn't say something is easy and hard, but we can say something is straightforward and demanding!</p>
<p>By choosing our words thoughtfully and providing more descriptive context, we empower people to take ownership and provide room for growth. Engineers will still need help, but being more descriptive and switching out a few words makes them feel in control rather than incompetent. By avoiding these negative feelings, they are more likely to ask for help when they do run into trouble, which leads to a more inclusive and faster-moving team.</p>
]]></content:encoded></item><item><title><![CDATA[A Curious Case of Mistaken Identity: How Lambdas Break Data Class Hashing]]></title><description><![CDATA[Introduction: The Scene of the Crime
It was a dark and stormy night. My hands were flying across the keys when suddenly the codebase began to exhibit strange behavior. Hashes, which once returned the same values for identical objects, suddenly became...]]></description><link>https://blog.mmckenna.me/a-curious-case-of-mistaken-identity</link><guid isPermaLink="true">https://blog.mmckenna.me/a-curious-case-of-mistaken-identity</guid><category><![CDATA[Android]]></category><category><![CDATA[Kotlin]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Thu, 14 Nov 2024 21:44:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/QAOtKq8ehcw/upload/fd2f8aaad27ff129ac1d6ee8b4db4042.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Introduction: The Scene of the Crime</strong></p>
<p>It was a dark and stormy night. My hands were flying across the keys when suddenly the codebase began to exhibit strange behavior. Hashes, which once returned the same values for identical objects, suddenly became unpredictable. Collections shuffled unexpectedly, deduplication failed, and instances thought to be identical went unrecognized. Today we dive into this mystery, revealing the culprit.</p>
<p><strong>Act 1: The Setup</strong></p>
<p>We introduce the protagonist of our story, <code>DetectiveDataClass</code>.</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DetectiveDataClass</span></span>(
    <span class="hljs-keyword">val</span> name: String,
    <span class="hljs-keyword">val</span> age: <span class="hljs-built_in">Int</span>,
    <span class="hljs-keyword">val</span> alias: String,
    <span class="hljs-keyword">val</span> onDetectiveAlert: () -&gt; <span class="hljs-built_in">Unit</span>
)
</code></pre>
<p>Here, <code>DetectiveDataClass</code> includes a lambda, <code>onDetectiveAlert</code>, meant to serve as a callback when an alert is triggered. All seems calm until…</p>
<p><strong>Act 2: A Case of Mistaken Identity</strong></p>
<p>The team quickly notices something off. When two detectives are instantiated with identical properties, they’re expected to be the same, right? Not quite, it seems.</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> detective1 = DetectiveDataClass(
    name = <span class="hljs-string">"Sherlock"</span>,
    age = <span class="hljs-number">40</span>,
    alias = <span class="hljs-string">"Holmes"</span>,
    onDetectiveAlert = { println(<span class="hljs-string">"Elementary!"</span>) }
)

<span class="hljs-keyword">val</span> detective2 = DetectiveDataClass(
    name = <span class="hljs-string">"Sherlock"</span>,
    age = <span class="hljs-number">40</span>,
    alias = <span class="hljs-string">"Holmes"</span>,
    onDetectiveAlert = { println(<span class="hljs-string">"Elementary!"</span>) }
)

println(detective1 == detective2)
<span class="hljs-comment">// Expected true, but is actually false</span>

println(detective1.hashCode() == detective2.hashCode())
<span class="hljs-comment">// Expected true, but is actually false</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">▶</div>
<div data-node-type="callout-text"><a target="_self" href="https://pl.kotl.in/QC9c27yuH">Try it yourself in the Kotlin Playground!</a></div>
</div>

<p>Surprise! Although the detectives have identical names, ages, aliases, and even identical alert messages, Kotlin reports them as different. The code’s consistency is broken, and it’s clear that <code>hashCode</code> and <code>equals</code> are not behaving as expected.</p>
<p><strong>Act 3: Tracking the Evidence</strong></p>
<p>Why does this happen? It turns out that each lambda (even if it looks identical) is unique. When a lambda is created, it holds a distinct memory reference, meaning <code>onDetectiveAlert</code> in <code>detective1</code> and <code>detective2</code> are fundamentally different objects in memory. The identity of the lambda breaks the hashing logic, and here’s proof:</p>
<pre><code class="lang-kotlin"><span class="hljs-comment">// Unique ID for the first lambda</span>
println(System.identityHashCode(detective1.onDetectiveAlert))

<span class="hljs-comment">// Unique ID for the second lambda</span>
println(System.identityHashCode(detective2.onDetectiveAlert))
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">▶</div>
<div data-node-type="callout-text"><a target="_self" href="https://pl.kotl.in/5dXoZ-4_G">See the output in Kotlin Playground</a></div>
</div>

<p><code>System.identityHashCode</code> returns a hash tied to the object's identity, typically derived from its location in memory. That means it is <em>not</em> a content-based hashcode like the ones data classes generate. Even if two objects contain the same information, because they are not literally the same object in memory, they will have different identity hashcodes.</p>
<p>So each lambda instance has a unique identity hashcode, highlighting the critical difference that affects equality and hashcode results. To collections and hash-based logic, these objects are not the same!</p>
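<p>To see this contrast without any lambdas involved, here is a quick sketch (the <code>Badge</code> class is invented for this comparison) of a content-based hash versus an identity hash:</p>

```kotlin
// A plain data class: its hashCode is generated from its contents
data class Badge(val id: Int)

fun main() {
    val a = Badge(7)
    val b = Badge(7)
    // Content-based hash: equal for structurally equal objects
    println(a.hashCode() == b.hashCode()) // true
    // Identity hash: tied to each specific object, so it differs
    // for distinct instances (barring a rare collision)
    println(System.identityHashCode(a) == System.identityHashCode(b))
}
```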
<p><strong>Act 4: The Clues Come Together</strong></p>
<p>To solve the mystery, let’s redefine our suspects with a new plan. We’ll exclude the <code>onDetectiveAlert</code> lambda from our equality and hashcode calculations by overriding them:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DetectiveDataClass</span></span>(
    <span class="hljs-keyword">val</span> name: String,
    <span class="hljs-keyword">val</span> age: <span class="hljs-built_in">Int</span>,
    <span class="hljs-keyword">val</span> alias: String,
    <span class="hljs-keyword">val</span> onDetectiveAlert: () -&gt; <span class="hljs-built_in">Unit</span>
) {
    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">equals</span><span class="hljs-params">(other: <span class="hljs-type">Any</span>?)</span></span>: <span class="hljs-built_in">Boolean</span> {
        <span class="hljs-keyword">if</span> (<span class="hljs-keyword">this</span> === other) <span class="hljs-keyword">return</span> <span class="hljs-literal">true</span>
        <span class="hljs-keyword">if</span> (other !<span class="hljs-keyword">is</span> DetectiveDataClass) <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>

        <span class="hljs-keyword">return</span> name == other.name &amp;&amp;
               age == other.age &amp;&amp;
               alias == other.alias
    }

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">hashCode</span><span class="hljs-params">()</span></span>: <span class="hljs-built_in">Int</span> {
        <span class="hljs-comment">// Chain the 31 multiplier so each field shifts the running result,</span>
        <span class="hljs-comment">// mirroring the hash a data class would generate for these fields</span>
        <span class="hljs-keyword">var</span> result = name.hashCode()
        result = <span class="hljs-number">31</span> * result + age
        result = <span class="hljs-number">31</span> * result + alias.hashCode()
        <span class="hljs-keyword">return</span> result
    }
}
</code></pre>
<p>Now, let’s run the check again:</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">val</span> detective1 = DetectiveDataClass(<span class="hljs-string">"Sherlock"</span>, <span class="hljs-number">40</span>, <span class="hljs-string">"Holmes"</span>) {
  println(<span class="hljs-string">"Elementary!"</span>)
}
<span class="hljs-keyword">val</span> detective2 = DetectiveDataClass(<span class="hljs-string">"Sherlock"</span>, <span class="hljs-number">40</span>, <span class="hljs-string">"Holmes"</span>) {
  println(<span class="hljs-string">"Elementary!"</span>)
}

println(detective1 == detective2) <span class="hljs-comment">// Now true</span>
println(detective1.hashCode() == detective2.hashCode()) <span class="hljs-comment">// Now true</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">▶</div>
<div data-node-type="callout-text"><a target="_self" href="https://pl.kotl.in/6uTpIDzu6">Try it yourself here in Kotlin Playground.</a></div>
</div>

<p>With this solution, the detective instances match as expected.</p>
<p><strong>Act 5: The Mystery is Solved</strong></p>
<p>By excluding the lambda from <code>equals</code> and <code>hashCode</code>, we’ve eliminated the source of instability. This allows objects to be considered identical based on core attributes rather than transient callbacks.</p>
<p>This approach increases maintenance cost, since we give up the <code>equals</code> and <code>hashCode</code> implementations data classes generate for free. Any new fields added to <code>DetectiveDataClass</code> will also need to be added to our overridden <code>equals</code> and <code>hashCode</code> functions.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">❗</div>
<div data-node-type="callout-text">Note: This is an example for this article and probably not the best practice in real projects.</div>
</div>

<p>Instead, you can consider:</p>
<ul>
<li><p><a target="_blank" href="https://pl.kotl.in/VN1HQrLZa"><strong>Using a method reference</strong> so that both instances share the same function reference.</a></p>
<ul>
<li><pre><code class="lang-kotlin">  <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">alertFunction</span><span class="hljs-params">()</span></span> {
      println(<span class="hljs-string">"Elementary!"</span>)
  }

  <span class="hljs-keyword">val</span> detective1 = DetectiveDataClass(
      name = <span class="hljs-string">"Sherlock"</span>,
      age = <span class="hljs-number">40</span>,
      alias = <span class="hljs-string">"Holmes"</span>,
      onDetectiveAlert = ::alertFunction
  )

  <span class="hljs-keyword">val</span> detective2 = DetectiveDataClass(
      name = <span class="hljs-string">"Sherlock"</span>,
      age = <span class="hljs-number">40</span>,
      alias = <span class="hljs-string">"Holmes"</span>,
      onDetectiveAlert = ::alertFunction
  )
</code></pre>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://pl.kotl.in/EoLUKFE1a"><strong>Using interfaces</strong> for lambdas or callbacks if stable references are needed.</a></p>
<ul>
<li><pre><code class="lang-kotlin">  <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">DetectiveAlert</span> </span>{
      <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onAlert</span><span class="hljs-params">()</span></span>
  }

  <span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">DetectiveDataClass</span></span>(
      <span class="hljs-keyword">val</span> name: String,
      <span class="hljs-keyword">val</span> age: <span class="hljs-built_in">Int</span>,
      <span class="hljs-keyword">val</span> alias: String,
      <span class="hljs-keyword">val</span> onDetectiveAlert: DetectiveAlert
  )

  <span class="hljs-keyword">object</span> ElementaryAlert : DetectiveAlert {
      <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onAlert</span><span class="hljs-params">()</span></span> {
          println(<span class="hljs-string">"Elementary!"</span>)
      }
  }
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Store lambdas separately</strong> from data classes when object equality matters.</p>
</li>
<li><p><strong>Override</strong> equals <strong>and</strong> hashCode <strong>carefully</strong> to exclude fields that could vary unexpectedly, or that you know don’t matter to your equality.</p>
</li>
</ul>
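<p>As one sketch of the “store lambdas separately” option above (the <code>Detective</code> class and <code>alerts</code> map are hypothetical names for this example), equality-relevant state can live in the data class while callbacks live beside it:</p>

```kotlin
// Equality-relevant state lives in the data class alone
data class Detective(val name: String, val age: Int, val alias: String)

// Callbacks are tracked separately, keyed by the (now stable) data class
val alerts = mutableMapOf<Detective, () -> Unit>()

fun main() {
    val d1 = Detective("Sherlock", 40, "Holmes")
    val d2 = Detective("Sherlock", 40, "Holmes")
    alerts[d1] = { println("Elementary!") }

    println(d1 == d2)           // true: no lambda in the equality contract
    println(alerts[d2] != null) // true: d2 finds d1's callback via equal hash
}
```

<p>This keeps the generated <code>equals</code> and <code>hashCode</code> intact while still letting each detective trigger its alert.</p>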
<p><strong>Epilogue</strong></p>
<p>With the mystery unraveled, our detectives can rest assured, knowing their identities are now stable and consistent. The tale of the unstable hash is a warning to all: in the world of Kotlin, lambdas may be useful but can be fickle co-conspirators when data class hashing is involved.</p>
]]></content:encoded></item><item><title><![CDATA[Hue-manize Your Android Apps: Develop for Color Blindness]]></title><description><![CDATA[Have you ever wondered how your Android app appears to someone with color blindness or low vision? We will share firsthand how inaccessible apps impact daily life.
https://player.vimeo.com/video/1017418233?autopause=0&autoplay=0&color=00adef&portrait...]]></description><link>https://blog.mmckenna.me/hue-manize-your-android-apps-develop-for-color-blindness</link><guid isPermaLink="true">https://blog.mmckenna.me/hue-manize-your-android-apps-develop-for-color-blindness</guid><category><![CDATA[Android]]></category><category><![CDATA[UI]]></category><category><![CDATA[conference sessions]]></category><category><![CDATA[droidcon]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Thu, 17 Oct 2024 04:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732566685725/275f0266-32d2-48fa-b7b8-45cada58e16a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever wondered how your Android app appears to someone with color blindness or low vision? We will share firsthand how inaccessible apps impact daily life.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://player.vimeo.com/video/1017418233?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0">https://player.vimeo.com/video/1017418233?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0</a></div>
]]></content:encoded></item><item><title><![CDATA[Designing for Disconnection: The Mental Model of Offline Apps]]></title><description><![CDATA[In today's interconnected world we developers rely on our users to have strong networks for the best app experience. Empty states and error states perforate our designs and view systems.
https://player.vimeo.com/video/869512170?autopause=0&autoplay=0...]]></description><link>https://blog.mmckenna.me/designing-for-disconnection-the-mental-model-of-offline-apps</link><guid isPermaLink="true">https://blog.mmckenna.me/designing-for-disconnection-the-mental-model-of-offline-apps</guid><category><![CDATA[Android]]></category><category><![CDATA[Kotlin]]></category><category><![CDATA[conference sessions]]></category><category><![CDATA[droidcon]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Fri, 06 Oct 2023 04:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732566827359/a98f4728-a809-44d0-bbd5-495e0c0e1caf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's interconnected world we developers rely on our users to have strong networks for the best app experience. Empty states and error states perforate our designs and view systems.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://player.vimeo.com/video/869512170?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0">https://player.vimeo.com/video/869512170?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0</a></div>
]]></content:encoded></item><item><title><![CDATA[Behind The Screen: The Humans Who Use Our Code]]></title><description><![CDATA[Android has been around for over 10 years, and has now become the most used operating system in the entire world. It is the go to operating system for all kinds of user facing devices. From car displays and airplane entertainment systems, to fitness ...]]></description><link>https://blog.mmckenna.me/behind-the-screen-the-humans-who-use-our-code</link><guid isPermaLink="true">https://blog.mmckenna.me/behind-the-screen-the-humans-who-use-our-code</guid><category><![CDATA[droidcon]]></category><category><![CDATA[compose]]></category><category><![CDATA[Kotlin]]></category><category><![CDATA[Android]]></category><category><![CDATA[conference sessions]]></category><dc:creator><![CDATA[Matt McKenna]]></dc:creator><pubDate>Tue, 15 Nov 2022 05:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1732566903110/98b72cf9-f207-4877-85fa-69566071984f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Android has been around for over 10 years, and has now become the most used operating system in the entire world. It is the go to operating system for all kinds of user facing devices. From car displays and airplane entertainment systems, to fitness gear and points of sale, users are having countless interactions with Android.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://player.vimeo.com/video/770920346?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0">https://player.vimeo.com/video/770920346?autopause=0&amp;autoplay=0&amp;color=00adef&amp;portrait=0&amp;byline=0&amp;title=0</a></div>
]]></content:encoded></item></channel></rss>