AI-Assisted Development

Your developers already use AI. The question is whether they use it well.

GitHub Copilot, Claude, ChatGPT, Cursor: every team has someone generating code with these tools. Some produce working software faster than before. Others generate plausible garbage at unprecedented speed. The difference isn’t the tool. It’s the discipline around it.

We use AI tools daily in production work. Not for demos. Not for conference talks. For shipping software that real users depend on. We’ve migrated Kubernetes namespaces with Copilot, extracted business rules from 28,000 lines of VBA with Claude, and built features in hours that used to take weeks.

Here’s what we’ve learned: AI makes good practices more valuable, not less. TDD matters more when code arrives fast and unverified. CI/CD matters more when changes flow faster. Architecture matters more when anyone can generate 500 lines in a minute.

The bottleneck moved from typing to thinking. AI didn’t eliminate the need for developers. It eliminated the excuse for not having disciplined ones.

AI Makes Discipline More Valuable

Every AI vendor tells you their tool generates working code. Sometimes it does. Often it generates code that looks right, passes a glance review, and fails in production two weeks later.

The developers who thrive with AI are the ones who already had good habits: write the test first, integrate continuously, deploy small changes, verify assumptions. AI accelerates all of that.

AI without discipline is just faster chaos.

What We Actually Do

We work alongside your team, using AI tools in real production code. Not a separate "AI initiative." Not a workshop with toy examples.

  • Pair with developers using Copilot and Claude in their actual codebase
  • Show how TDD works with AI (write the test, let AI suggest the implementation, verify)
  • Build AI into existing CI/CD workflows
  • Extract knowledge from legacy code using AI as an archaeological tool
  • Establish verification habits that catch AI mistakes before they reach production
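The TDD-with-AI loop from the list above can be sketched in a few lines. The function name and discount rule here are hypothetical, chosen only to show the shape of the workflow: the human writes the failing test, the AI proposes the implementation, and the human verifies.

```python
# Step 1: human writes the test first. It pins down the requirement
# before any implementation exists. (Rule and names are illustrative.)
def test_bulk_discount():
    assert bulk_discount(quantity=10, unit_price=5.0) == 45.0  # 10% off at 10+
    assert bulk_discount(quantity=3, unit_price=5.0) == 15.0   # no discount below 10

# Step 2: AI suggests an implementation; the human reads it critically.
def bulk_discount(quantity: int, unit_price: float) -> float:
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

# Step 3: verify. Run the test; don't trust a glance review.
test_bulk_discount()
```

The order matters: because the test existed before the implementation, a plausible-but-wrong AI suggestion fails immediately instead of two weeks later in production.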

Why Tests Matter More Now

AI writes both the tests and the code at the same time. That's the actual workflow. You describe what the system should do, and the AI produces implementation and tests together.

The tests become guardrails. When someone changes the code next week, or the AI regenerates a section, or a new requirement arrives, those tests catch drift. They lock down what was agreed to work. Without them, every AI-assisted change is a gamble on whether something else broke silently.

Without tests, AI-generated code drifts. With tests, it stays honest.

Legacy Modernization with AI

This is where AI genuinely shines. Nobody wants to read 28,000 lines of VBA from 2003. AI doesn't mind.

We use AI to extract business rules from legacy code, translate them into human language, create GitHub Issues that domain experts can validate, and then reimplement them in modern stacks with executable specifications.

What used to take months of painful manual archaeology now takes weeks. The AI reads the old code. Humans verify the extracted knowledge. The result is a modern system that actually produces the same outcomes.

Who This Is For

  • Teams already using AI tools but getting inconsistent results
  • CTOs who want AI adoption grounded in engineering discipline
  • Organizations modernizing legacy systems
  • Companies that tried "just add Copilot" and found it wasn't enough
  • Leaders who want evidence of impact, not vendor promises

Who This Is Not For

  • Anyone expecting AI to replace developers
  • Organizations looking for a "prompt engineering" course
  • Companies that want to reduce headcount by adding AI
  • Teams unwilling to invest in test coverage and CI/CD
  • Buyers looking for magic

Our Daily Practice

This isn't theory. It's our Tuesday. A few examples from real work:

A newsletter service needed to move between Kubernetes namespaces. We described the problem to Copilot in VS Code, and it walked us through merging two SQLite databases, updating ingress rules, and catching an environment variable we'd have missed. Forty minutes, zero errors, no rollback needed.

A German insurance software vendor had 28,000 lines of VBA implementing payroll rules nobody fully understood anymore. We pointed Claude at the code and had it extract business rules into plain-language GitHub Issues. Domain experts reviewed those issues and confirmed or corrected them. Months of manual reading compressed into weeks.

When writing new code, AI produces tests and implementation together. The tests become the contract. When someone modifies the code later, or when a new requirement changes the shape of the system, those tests catch what drifted. They're not an afterthought. They're the reason you can trust the output.

For legacy migrations, we use AI to generate parameterized characterization tests from existing behavior. These tests lock down what the old system actually does before anyone writes a single line of replacement code.

Every tool stays subordinate to the practice. AI proposes. Tests decide.

What We Don't Believe

The AI hype cycle produces claims faster than AI produces code. Some reality checks from daily practice:

  • "AI agents will do the work." They won't. AI agents are useful for narrow, well-defined tasks inside verified pipelines. For anything requiring judgment, context, or accountability, you need humans.
  • "10x productivity with AI." For typing, maybe. For thinking through the right design, understanding user needs, and making architectural tradeoffs? No tool changes that.
  • "AI replaces junior developers." AI replaces the mechanical parts of junior work. It makes mentoring more important, not less. Someone still needs to verify what the AI produced.
  • "Just add Copilot to your team." Without test discipline and CI/CD, adding AI to your team is like giving a faster car to someone who doesn't know the road.

What Changes for You

  • Developers who use AI tools with confidence and discipline
  • Test coverage that catches what AI gets wrong
  • Legacy knowledge extracted and verified, not lost
  • Faster delivery without sacrificing reliability
  • Teams that evaluate AI output critically, not reverently

Engagement Structure

This work integrates into our Delivery Integrity Intervention. AI adoption isn't a standalone project. It's part of building engineering discipline.

We embed with your team, work in your codebase, and establish practices that persist after we leave. Retainer-based, transparent, with measurable progress tracked via Navigator.

Orientation Meeting

Bring your questions about AI adoption, your frustrations with current tools, or your legacy modernization challenge. We'll tell you honestly what AI can and can't do for your specific situation.

Schedule online meeting

From Practice

Our experience with AI in production work: