<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>RAG on Thoughts and code</title>
    <link>https://claydon.co/tags/rag/</link>
    <description>Recent content in RAG on Thoughts and code</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 09 Apr 2026 04:23:12 +0000</lastBuildDate><atom:link href="https://claydon.co/tags/rag/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Semantic Code Search in Go: Indexing Functions and Learning Where It Fails</title>
      <link>https://claydon.co/code/its-just-vectors/part6-rag-baseline/</link>
      <pubDate>Thu, 09 Apr 2026 04:23:12 +0000</pubDate>
      <guid>https://claydon.co/code/its-just-vectors/part6-rag-baseline/</guid>
      <description>&lt;p&gt;Semantic search over source code is often introduced as “grep, but smarter.” That framing is both directionally correct and operationally useless. The actual problem is whether your retrieval layer can consistently surface the right code unit under ambiguity, noise, and naming collisions. This post builds a minimal retrieval system over Go functions and then lets it fail in predictable ways.&lt;/p&gt;
&lt;p&gt;The point is to have something concrete to measure against before adding complexity.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
