GitHub Copilot Good Practices

How to get speed without sacrificing quality

Copilot Is Fast. Your Feedback Loop Must Be Faster.

GitHub Copilot can write a lot of code quickly. That is not the hard part.

The hard part is keeping your standards intact while the output volume goes up: tests, security, clarity, and small changes that you can actually review.

This page collects simple practices that keep Copilot useful instead of dangerous.

Disclaimer: Copilot features, models, and agent workflows evolve rapidly. Treat these practices as current guidance, not a permanent contract.



The Practices That Actually Work

1) Give Copilot a contract, not a vibe

Copilot performs best when you specify what “done” means.

Use a short prompt contract:

Goal:
- 

Constraints:
- Keep changes small and focused
- No new dependencies unless necessary
- Do not change public APIs

Acceptance checks:
- Add or update tests
- Explain how to run the tests
- Call out edge cases and failure modes

Context:
- Relevant files:
- Current behavior:
- Expected behavior:

If you can’t describe the acceptance checks, Copilot can’t guess them for you.
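Filled in, a contract might read like this (the bug, file, and function are invented for illustration):

Goal:
- apply_discount returns the original price instead of raising when the rate is 0

Constraints:
- Keep changes small and focused
- Do not change the function signature

Acceptance checks:
- A new test covers rate == 0
- All existing tests still pass

Context:
- Relevant files: src/orders/discount.py
- Current behavior: ZeroDivisionError when rate is 0
- Expected behavior: returns the original price unchanged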

2) Feed it reality: code, errors, and boundaries

Copilot guesses when you give it a vague problem statement. Give it concrete inputs:

  • The failing test output, error stack trace, or log snippet
  • The specific file and function you want changed
  • The constraints that matter (performance, thread safety, backward compatibility)

Then ask it to propose the smallest safe change.
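A concrete version of such a prompt might look like this (the file, function, and traceback are invented for illustration):

```text
Propose the smallest safe change for this failure.

File: src/billing/invoice.py, function due_date
Constraints: must stay compatible with Python 3.9; no new dependencies

Failing test output:
  FAILED tests/test_invoice.py::test_due_date_end_of_month
  ValueError: day is out of range for month
```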

3) Work in small diffs on purpose

Large AI-generated changes hide problems. Keep diffs reviewable.

  • Ask for a single change at a time
  • Request a brief step-by-step plan before edits
  • Prefer one file per change when possible
  • Make it stop after the first working increment

Small diffs are a force multiplier because you can review and test them quickly.

4) Treat tests as the steering wheel

Copilot will happily invent code that compiles and still fails in production.

Push it into a test-first workflow:

  • Ask it to write a failing test that captures the bug
  • Run the test and confirm it fails for the right reason
  • Only then accept the fix

If the code is hard to test, that’s a signal. Use Copilot to help you carve a seam, not to add more complexity.
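The loop above can be sketched in plain Python. The function and the bug here are hypothetical, not from any real codebase:

```python
# Hypothetical bug: parse_price() crashes on thousands separators.
# Step 1: write a failing test that captures the bug before any fix.

def parse_price(text: str) -> float:
    """Original, buggy version: float() rejects '1,299.00'."""
    return float(text)

def test_parse_price_handles_thousands_separator():
    assert parse_price("1,299.00") == 1299.0

# Step 2: run the test and confirm it fails for the right reason
# (a ValueError from float(), not a typo in the test itself).

# Step 3: only then accept the fix and re-run the test.
def parse_price(text: str) -> float:  # fixed version replaces the buggy one
    return float(text.replace(",", ""))
```

The point of step 2 is that a test which fails for the wrong reason proves nothing; confirm the failure mode before letting Copilot "fix" it.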

5) Make Copilot do review work, not just write code

Good prompts for review:

  • “Explain what this diff changes in one paragraph.”
  • “List the top three risks and how to mitigate them.”
  • “What edge cases will break this?”
  • “Suggest tests that would have caught a regression here.”

You still own the judgment. Copilot just reduces the cost of asking the questions.

6) Default to secure behavior

Copilot cannot know what is secret in your environment.

  • Never paste tokens, private keys, or customer data
  • Prefer placeholders when sharing config
  • Ask for patterns that load secrets from environment variables or secret stores
  • Review dependencies carefully before adding them

If you wouldn’t paste it in a public issue, don’t paste it into a prompt.
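The "load from environment" pattern in the third bullet is easy to ask for explicitly. A minimal sketch in Python (`API_TOKEN` is a placeholder name, not a real variable in any particular system):

```python
import os

def get_api_token() -> str:
    # Read the secret from the environment instead of hardcoding it.
    # API_TOKEN is a placeholder; use whatever your deployment defines.
    token = os.environ.get("API_TOKEN")
    if not token:
        # Fail loudly rather than running with an empty credential.
        raise RuntimeError("API_TOKEN is not set; refusing to start")
    return token
```

Asking Copilot for this pattern by name ("load the token from an environment variable and fail if it is missing") keeps the secret itself out of the prompt entirely.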


Copilot in VS Code: Know What You Installed

If you are using Copilot in Visual Studio Code, you are usually using two separate extensions:

  • GitHub Copilot: inline suggestions as you type (ghost text) and next edit suggestions (NES)
  • GitHub Copilot Chat: chat, edits across multiple files, and agent/plan workflows

The practical implication: inline suggestions and chat are related, but they are not the same feature set and they do not always share the same configuration.

Keep VS Code and Copilot Chat current

Copilot Chat releases in lockstep with VS Code. Newer Copilot Chat builds are only compatible with the latest VS Code release.

If chat suddenly disappears or stops working, upgrading VS Code and the extensions is often the fix.

References:

  • GitHub Copilot Chat extension page: https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat
  • GitHub Copilot troubleshooting: https://docs.github.com/en/copilot/troubleshooting-github-copilot/troubleshooting-issues-with-github-copilot-chat

Pick the right surface for the job

VS Code exposes multiple Copilot “surfaces” that map to different kinds of work:

  • Inline suggestions for writing new code quickly (ghost text and NES)
  • Ask mode for understanding, exploring, and reviewing without edits
  • Inline chat for a focused, in-place refactor or small rewrite
  • Edit mode for controlled multi-file edits in a defined working set
  • Agent mode for multi-step tasks where the assistant plans, runs tools, and iterates
  • Plan mode to produce a plan first, then hand off to implementation

References:

  • VS Code AI core concepts: https://code.visualstudio.com/docs/copilot/core-concepts
  • GitHub Copilot features: https://docs.github.com/en/copilot/about-github-copilot/github-copilot-features

Make behavior consistent with custom instructions

If you want Copilot Chat and agents to follow project rules reliably, put them in instruction files instead of repeating them in every prompt.

  • .github/copilot-instructions.md applies automatically to all chat requests in the workspace
  • AGENTS.md is another always-on option that is explicitly intended for working with multiple agents
  • *.instructions.md lets you scope rules to subsets of the repo via globs

Note: custom instructions do not apply to inline suggestions as you type, so treat them as a chat-and-agents control knob.
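For example, a scoped instruction file uses an `applyTo` glob in its front matter to limit where the rules apply; the path and rules below are illustrative:

```markdown
---
applyTo: "src/api/**/*.ts"
---

- Validate all request input before use; never trust client data.
- Do not log request bodies; they may contain customer data.
```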

References:

  • VS Code custom instructions: https://code.visualstudio.com/docs/copilot/customization/custom-instructions

Review edits like you would review a junior developer’s PR

When chat or agents apply edits, VS Code tracks pending edits and gives you Keep/Undo controls per change, plus a “reject all” escape hatch.

Two useful habits:

  • Stage changes only after you have reviewed them (staging implicitly accepts pending edits)
  • Require explicit approval for sensitive files (for example, .env and workspace settings)

References:

  • Review AI-generated code edits: https://code.visualstudio.com/docs/copilot/chat/review-code-edits

Know the telemetry knobs

The VS Code Copilot extensions collect usage data and respect VS Code’s telemetry.telemetryLevel setting.

For Copilot Free specifically, the VS Code docs note that telemetry is enabled by default; it can be disabled via the VS Code telemetry setting, and you can also adjust Copilot's data settings on GitHub.
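In VS Code's settings.json (which accepts comments), the relevant knob looks like this; check your organization's policy before changing it:

```jsonc
{
  // Applies to VS Code and to extensions that honor it, including Copilot.
  // Accepted values: "all", "error", "crash", "off"
  "telemetry.telemetryLevel": "off"
}
```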

References:

  • GitHub Copilot Chat extension telemetry note: https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat
  • Set up GitHub Copilot in VS Code: https://code.visualstudio.com/docs/copilot/setup

Multiple Agents, One Repo: How to Avoid a Self-Inflicted Merge Disaster

Copilot is not a single assistant anymore. In VS Code you can run multiple independent chat sessions (and multiple agent sessions) in parallel, and each session has its own context window.

That is great for throughput, and terrible if you let the sessions step on each other.

References:

  • Manage chat sessions: https://code.visualstudio.com/docs/copilot/chat/chat-sessions

1) Split by outcome, not by “area of code”

Give each agent a task with:

  • One concrete goal
  • Clear constraints
  • A defined acceptance check (tests, build, lint)

If two tasks touch the same files, you do not have parallel work. You have conflict.

2) Separate context and working copies

Keep each agent in its own chat session. Start a new session when the topic changes.

If you run multiple instances of VS Code (or multiple agents) against the same repository, use isolated branches and ideally separate working directories (for example via git worktree) so they cannot silently overwrite each other’s changes.
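A minimal git worktree setup for two parallel agents might look like this; the branch and directory names are illustrative, and the scratch repo only exists so the sketch is self-contained:

```shell
#!/bin/sh
set -e

# Create a scratch repo to demonstrate; in practice you would run
# the worktree commands inside your existing repository.
base=$(mktemp -d)
git init -q "$base/main"
cd "$base/main"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per agent, each on its own branch, so parallel
# sessions cannot silently overwrite each other's files.
git worktree add "$base/agent-fix-auth" -b agent/fix-auth
git worktree add "$base/agent-add-tests" -b agent/add-tests

git worktree list
```

Each agent then gets pointed at its own directory; merging happens later, through your normal review process, not inside the sessions.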

3) Treat instruction files as your “shared brain”

Put non-negotiable repo rules into:

  • .github/copilot-instructions.md for shared project rules
  • AGENTS.md if you explicitly want a single always-on rule set for multiple agents

This keeps parallel sessions aligned even when their local chat histories diverge.

4) Enforce a human review gate

Multiple agents can draft changes. Only your normal review process should merge them.

At minimum:

  • Review diffs
  • Run tests
  • Confirm secrets and config files were not modified unintentionally

If the assistant output surprises you, the Chat Debug view can show what context and instructions were actually sent.

Reference:

  • Chat Debug view: https://code.visualstudio.com/docs/copilot/chat/chat-debug-view

Articles: AI Collaboration With Standards

These articles explore AI-assisted development as a collaboration problem: feedback loops, discipline, and the habits that keep quality high.

Copilot as a Thinking Partner

  • When AI Becomes Your Thinking Partner
    A real migration story where conversation quality mattered more than code generation speed.
  • The Gray Beard and the Machine
    What changes when Copilot arrives, and why judgment still matters.

Quality, Collaboration, and Tests

Legacy Work Without the Fear