
Coding For Agents: Why Contextual Readiness is the New Technical Debt

11 min read · By Matt B

Leading teams create in-repo guidance for their AI agents to slash onboarding time, banish hallucinations, and multiply delivery speed. Is your team Agent-ready?

From Human-Readability to Agent-Parsability

At my first programming job (many moons ago) I was given a rigid set of coding standards to adhere to: camelCase my variables, learn Hungarian notation, minimise memory overhead. I was also told that the most important job of any programmer was to make their code maintainable. Years later, someone completely new to the code should be able to read it, comprehend it, and change it with confidence.

For decades, this was the gold standard of software engineering: human readability - ensuring the next human brain to approach the project could navigate the hierarchy without a map. We relied on "resident sages" and ancient documentation to fill the gaps.

Over twenty-five years of delivering bespoke software, I've watched this principle hold steady through every paradigm shift - from monoliths to microservices to serverless applications in the cloud. The constant was always: write it so the next person can understand it.

But in 2026, the primary consumer of your codebase is no longer just a developer; it's also the suite of AI agents built into their IDE. Whether it's Cursor, Copilot, or an autonomous agent like Claude Code or Devin, these tools are rapidly becoming the first point of interaction with your logic.

The problem? Most legacy codebases weren't designed for this new type of "brain." If your repo is an indecipherable black box to an LLM, you are accumulating a dangerous new liability: Contextual Technical Debt.

Why Contextual Debt is Different

Traditional technical debt slows down evolution - you spend your time patching old shortcuts and shoring up weak points.

Contextual debt slows down intelligence. If your agent doesn't know where to look for a utility function or misinterprets your state management because it lacks the right reference, it will hallucinate. Every hallucination is a tax on your delivery speed and an added cost to your project.

On an inherited Rails project early last year, I set to work with Cursor, prompting the generation of a few service objects to encapsulate the application's meaty business logic. I had jumped the proverbial gun: with no architecture guidelines or naming conventions yet documented in the repo, my agent couldn't identify the patterns already in place, and insisted on creating files in /lib instead of the /app/services structure I had created. I needed to remove the contextual debt and start again, or I would spend half my time correcting my agent's work.

To solve this, we must move beyond the README and toward the Agent-Parsable Project Map.

Instructional Metadata: From Wikis to In-Repo Knowledge

In times past, external Wikis (Confluence, Notion, and so forth) would serve as our teams' documented source of truth. Has your customer's business process changed? Document it. Was a dependency replaced with a whole new in-house module? Write down how to integrate. Did your lead developer spend three days bringing a Rails 4 application into the light? Commit all that new learning to the wiki.

In an AI-first workflow, however, if the information isn't in the repository, it may as well not exist at all. That's why leading architectural teams are now adopting In-Repo Knowledge Bases and transitioning from wiki-based documentation to project-level metadata - and that doesn't just mean a well-written README.md.

.cursorrules - The Original Guardrail

In 2024, Cursor announced a method for their users to define custom instructions to guide the LLM. A single .cursorrules file in the project root would be read and added to the agent's context window for every request, instructing it on how to behave, what tools to use, and how to structure responses. These files could define coding standards - "Always use functional components," "Use Tailwind CSS instead of Bootstrap," "Never use Zod for simple types." Developers were now writing coding guidelines for their assistant.
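To make the pattern concrete, here is a minimal sketch of what such a file might contain. The rules themselves are illustrative, not taken from a real project:

```
# .cursorrules (illustrative sketch)
You are working on a Rails 7 application.

- Put business logic in service objects under /app/services; keep controllers thin.
- Always write RSpec tests; never generate Minitest files.
- Use Tailwind CSS utility classes; do not add Bootstrap.
- Follow the existing two-space indentation and frozen_string_literal conventions.
```

Because the whole file is injected into the context window on every request, brevity matters: each rule should earn its tokens.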

These guardrails were a strong starting point, and have since evolved into nested structures of individual files for granular control. However, the pattern fell short of teaching the agent how an application actually worked.

Here's the fundamental constraint: while a human developer grows with your project - gaining experience, comprehension, and ever-deepening product knowledge - an AI agent has no memory at all. It doesn't recall your previous sessions, it doesn't learn your codebase, and it doesn't remember the files it looked at last week. Every time you send a prompt, you must tell it everything it needs to know. Every. Single. Time.

CLAUDE.md - Blueprinting Your Project for Your Agent

As context windows have grown, developers can now pass longer and more detailed instructions with each prompt. Anthropic formalised this in early 2025 by presenting CLAUDE.md as a working pattern for developers to pass their knowledge on to their agents consistently. The goal was to reduce "architecture drift" and ensure the AI adhered to a cohesive, designed structure.

Where .cursorrules was tied to a specific environment, CLAUDE.md was designed as an open standard that any agent could use. It acts as an architecture guide: describing the mental model of the project, where the core business logic lives, how data flows through the application, which legacy folders should be ignored, the tech stack, directory structure, key components, and more.

By carefully creating a detailed map of the project, developers could ensure that their agents worked in a structured manner - reading the plan, digesting the blueprints, brushing up on coding standards - before moving on to the work prompt with richly detailed context. With this background, agents started to feel like they remembered. They gained an improved ability to implement changes that were better structured, closer to the original project design, and nearer to being "complete."
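A condensed CLAUDE.md for a Rails project might look like the sketch below. The paths, names, and conventions here are hypothetical, stand-ins for whatever your real architecture dictates:

```
# CLAUDE.md (illustrative sketch)

## Architecture
Rails 7 monolith. Business logic lives in /app/services; controllers stay thin
and delegate immediately to a service object.

## Data flow
Request -> Controller -> Service object -> ActiveRecord models -> Serializer.

## Conventions
- Service objects are named VerbNounService and expose a single `call` method.
- Background work goes through /app/jobs; never enqueue from a model callback.

## Ignore
- /lib/legacy - deprecated code awaiting removal; do not reference or extend.
```

The "Ignore" section is often the highest-value part: telling an agent what *not* to read saves context tokens and prevents it from imitating patterns you're trying to retire.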

The emergence of instructional metadata files like .cursorrules and CLAUDE.md marked a turning point. These weren't just configuration files; they were executive summaries for AI agents.

How to Optimise Your Codebase for LLM Attention

While LLM context windows continue to grow, they are still prone to "lost in the middle" effects. When an agent scans your repo, it's looking for cues. If your project structure is a maze of nested directories with generic names, the agent's attention fractures and spreads thin.

To get the best from our agents, we need to make deliberate adjustments to the way we architect and build. Here are the three patterns that matter most, with practical examples.

1. Semantic File Naming

Generic names like /utils and /helpers tell an agent nothing about intent. A folder called /utils could contain anything from date formatting to payment processing. When an agent encounters this, it has to open and read every file to understand what's available - burning context window tokens and increasing the chance of picking the wrong tool.

In one of our Rails projects, we restructured a sprawling /lib/utils directory into purpose-built services:

Before:

  • /lib/utils/helpers.rb
  • /lib/utils/formatters.rb
  • /lib/utils/validators.rb

After:

  • /app/services/payment_processing/paypal_service.rb
  • /app/formatters/bulk_invoice_pdf_formatter.rb
  • /app/validators/shipment_date_range_validator.rb

With these changes, our agent went from needing three follow-up prompts to find the right service to nailing it first time.
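To show what lands inside one of those purpose-named files, here's a sketch of a service object under the new structure. The class, its constructor, and the `charge` method are all hypothetical, illustrating the naming convention rather than a real PayPal integration:

```ruby
# app/services/payment_processing/paypal_service.rb (hypothetical sketch)
module PaymentProcessing
  # Wraps PayPal-specific payment logic so an agent (or human) can find it
  # by name, instead of digging through a generic /lib/utils directory.
  class PaypalService
    def initialize(order)
      @order = order
    end

    # Returns a simple result hash; a real implementation would call the
    # PayPal API here and handle failures.
    def charge
      { status: :charged, amount: @order.fetch(:total) }
    end
  end
end
```

The namespace itself carries meaning: an agent searching for "payment" matches the directory name before it reads a single line of code.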

2. Explicit Dependency Mapping

In bespoke software, the hardest thing for any new reader to grasp isn't the code itself - it's the business process behind it. A controller might trigger a background job that updates a delivery window, notifies an external partner, and sends a customer alert, and none of that chain is visible from the controller file alone. Without a map, an agent will miss side effects, suggest changes that break downstream processes, or duplicate functionality that already exists.

A simple process definition committed to the repo - even just a commented block in a markdown file - gives the agent the full picture before it ever touches a line of code. For example, in one of our logistics projects we documented a core process like so:

# PROCESS: UpdateShipmentStatus
# TRIGGER: Driver scans a package barcode

1. LOG:     Save location and timestamp to the database.
2. ANALYSE: Trigger ArrivalEngine to update the delivery window.
3. SYNC:    Push status update to external partners (Shopify/Amazon).
4. NOTIFY:  Alert customer via SMS or email.

With this simple guidance in the repo, an agent tasked with modifying the status update flow understands the full chain of consequences before getting started. Without it, the agent sees only the code in front of it and has no way to anticipate that a change to step one will ripple through to step four.

3. Sharded Instructional Metadata

A single monolithic CLAUDE.md works for small projects, but in a large enterprise application it runs into the same "lost in the middle" problem it was designed to solve. The answer is to shard your metadata: place focused markdown files within subdirectories so the LLM only deals with relevant context for the part of the codebase it's working in.

For example, an /app/services/README.md that explains the service object pattern your team uses, naming conventions, and common pitfalls is far more useful to an agent editing a service than a 500-line root-level document that covers the entire application.
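Such a sharded file can be short. A sketch of what one might contain (conventions invented for illustration):

```
# /app/services/README.md (illustrative sketch)

## Service objects in this layer
- One public method: `call`. Everything else is private.
- Name services VerbNounService, e.g. SyncShipmentService.
- Raise a domain error on failure; never return nil.

## Common pitfalls
- Don't add new code to /lib - legacy code there is being retired.
- Services must not render views or touch the session.
```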

We now maintain a root CLAUDE.md for high-level architecture, plus targeted README.md files in /app/services, /app/jobs, /app/api, and so on - each explaining the conventions and patterns specific to that layer. The result is that our agents stay focused and accurate, instead of drowning in 500 lines of context they don't need for the task at hand.

To be truly effective, an agent shouldn't have to guess - it should read the room. By embedding instructional metadata directly into the repo, we ensure the agent accurately predicts function behaviour without constant hand-holding. Clean code in 2026 is code that doesn't require a prompt-engineer to explain it to the agent every time a new task begins.

Instant Onboarding - The Financial Case

For engineering managers, Contextual Readiness shouldn't be just a technical preference. It needs to become a habit, for one major reason: it's a massive financial win.

Consider the onboarding cost of a standard enterprise repo. A new hire can spend weeks just getting the lay of the land. They shadow your lead devs, pair with peers, read outdated documentation, and struggle with dependencies before finally being able to tackle a support ticket.

In our experience across dozens of client projects, onboarding a mid-level developer to full productivity on a legacy Rails application can take 4–8 weeks. That's 20–40 billable days of senior developer time spent on walkthroughs and pairing.

In an AI-ready codebase with full contextual readiness, that timeline compresses dramatically. Here's what it looks like in practice:

Your new hire opens the repo in their IDE. The AI agent, guided by your carefully curated in-repo metadata, explains the architecture. The developer asks, "How do I add a new API endpoint following our coding patterns?" The agent signposts the exact files, explains the existing code, and generates a first draft - all while adhering to the team's specific standards. The developer then applies the crucial human touch: architectural validation, strategic oversight, and the kind of intuition that only comes from experience.

The AI has become the patient pair-programming partner that never tires of answering questions, can't be disrupted, and is ready to work immediately. If an AI agent can understand your architecture in seconds, your next hire can start producing real work from day one.

The Strategic Imperative

As an architectural strategist, my message is clear: your codebase is now an environment for agents. If you continue to build for humans alone, you will be handicapping your team's most powerful new tool.

By investing in contextual readiness today, you aren't just tidying up your files. You're building a streamlined, organised workspace where your developers can utilise modern agentic tooling to deliver value at a pace that was previously impossible.

Many years ago, I was told the most important job of a programmer was to make their code as maintainable as possible for the next developer to come along.

That guidance hasn't changed - but the developer that comes next has. Today, you're writing for humans and agents alike.

The teams that recognise this earliest will be the ones that deliver fastest - don't let contextual debt hold yours back.


In the follow-up tutorial, Building a CLAUDE.md for a Legacy Rails App: A Field Report, we put this strategy into practice - walking through the audit, structure, and measured results from bringing a 15-year-old codebase up to scratch with instructional metadata.

See the original post on Matt's website: tiltedsky.net