OpenAI shipped roughly one million lines of production code with three engineers, where every line was generated by Codex, and where the first commit included an AGENTS.md file that Codex had written for itself [1]. The file is the contract every coding agent in the building reads. It is also the reason the workflow stays coherent.
What AGENTS.md is
AGENTS.md is a plain Markdown file at the root of a repository that gives AI coding agents the context they need to work in that repository. The agents.md site describes it as "a simple, open format for guiding coding agents" and frames the file as "a README for agents" [2].
Why the file matters for automation work
When the file is missing, every agent session starts blind. The agent has to re-discover the build commands, the package manager, the test runner, the naming conventions, and the constraints the team keeps correcting. Multi-agent teams have it worse: the same rules sit in three places (CLAUDE.md, .cursorrules, Copilot config), drift the moment one developer updates only one file, and force the agent to ask the same setup questions every session.
AGENTS.md addresses three operator pain points directly:
- Setup-question fatigue. The build, test, and dev commands, named once at the repo root, become the agent's first lookup at session start. Fewer "how do I run the tests" round-trips, more first-pass-correct output.
- Rule drift across tools. One file at the root is what every supporting agent reads, so a single edit propagates to all of them at once.
- Repeated corrections. The constraints you find yourself typing into chat for the second time belong here. The agent reads them on every session and treats them as standing rules.
The file ships in git, lives next to the README, and updates with the same pull-request workflow as the rest of the codebase. That is what makes it operationally useful: the rules evolve with the project, alongside the code, where every change is reviewable.
How AGENTS.md came to be
AGENTS.md emerged in 2025 out of a coordinated effort across the AI coding tool ecosystem to settle on one open file format. The agents.md project was formed by OpenAI Codex, Amp, Jules from Google, Cursor, and Factory, with the explicit goal of one open, tool-agnostic standard adopted across vendors [2]. Stewardship now sits with the Agentic AI Foundation under the Linux Foundation [2], which lifts the convention out of any single vendor's control.
One file in a family of agent-config files
AGENTS.md is the cross-platform anchor in a small family of agent-config files. Each one serves a different layer:
- AGENTS.md: the cross-platform repo contract, read by 25+ agents.
- CLAUDE.md: the Claude Code-specific overlay, loaded every Claude Code session. (Companion explainer in this series.)
- SKILL.md: portable agent skills, loaded on demand when the skill is invoked. See our SKILL.md Files: The Agent Skills Directory for the deeper explainer.
- memory.md: Claude Code's auto-memory file, written by Claude itself based on your corrections.
The four stack: AGENTS.md sets the cross-tool baseline, CLAUDE.md adds Claude-specific overlays, SKILL.md packages opt-in capabilities, memory.md captures the agent's own learnings. For the side-by-side decision matrix, see our SKILL.md vs AGENTS.md vs CLAUDE.md: When to Use Each. For the visual map of the agent-config landscape, see our Agent Skills Landscape infographic.
Adoption (May 2026)
The format is open and tool-agnostic. As of May 2026, the agents.md site reports 60,000+ open-source projects with an AGENTS.md file at their root, measured by a live GitHub code search query [2]. The supporting agent ecosystem is broad: the agents.md site lists 25+ tools that read the file natively, including OpenAI Codex, Cursor, JetBrains Junie, UiPath Autopilot and Coded Agents, Phoenix, Aider, Amp, opencode, Kilo Code, VS Code, RooCode, Windsurf from Cognition, GitHub Copilot, Warp, Augment Code, Gemini CLI from Google, Semgrep, Jules from Google, goose from Block, Factory, Devin from Cognition, and Zed [2]. See our AI Coding Assistants directory for the editorial scoring on each.
Why AGENTS.md exists when CLAUDE.md and .cursorrules already do
Platform-specific instruction files predate AGENTS.md and remain in active use. CLAUDE.md is the Claude Code instruction file. The Cursor IDE reads .cursor/rules/*.mdc. GitHub Copilot reads its own configuration. Each binds to one tool.
A team that uses two or more of these tools faces an obvious problem: the same repository needs the same rules in three places, and they drift the moment one developer updates only one file.
AGENTS.md addresses that problem by being readable across the ecosystem. The 25+ supporting agents listed at agents.md treat the same Markdown file as their primary source of repo context [2]. CLAUDE.md and .cursorrules become Claude-specific or Cursor-specific overlays on top of the shared contract. They stack.
Claude Code reads CLAUDE.md by default and does not read AGENTS.md natively. Matt Pocock at AI Hero documents the standard workaround: a symlink (`ln -s AGENTS.md CLAUDE.md`) lets a single source of truth feed both tools [4].
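The symlink move takes one command. A minimal sketch, run in a scratch directory (in a real repo you would run the `ln` line at the repo root and commit the symlink):

```shell
# Scratch directory stands in for a repo root.
cd "$(mktemp -d)"
printf '## Setup commands\n- Run tests: `pnpm test`\n' > AGENTS.md
# Claude Code now reads the same contract through the CLAUDE.md name.
ln -sf AGENTS.md CLAUDE.md
cmp -s AGENTS.md CLAUDE.md && echo "single source of truth"
```

Git stores the symlink itself, so collaborators who clone the repo get the same single-source setup.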
When two files conflict, discovery rules decide who wins. OpenAI's Codex documentation describes the precedence chain explicitly: it reads global guidance first (~/.codex/AGENTS.md or its override file), walks down the project tree from the root toward the working directory, and applies nested files in order so the file closest to the working directory has the final say. Files at deeper levels appear later in the combined prompt and override earlier guidance [3]. An explicit chat instruction overrides everything.
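The layering is easiest to see with two files in a nested tree. This is an illustration of the precedence chain described in [3], not Codex's actual loader: the file closer to the working directory is appended later, so it wins on conflict.

```shell
# Hypothetical repo with a root AGENTS.md and a nested one.
repo="$(mktemp -d)"
mkdir -p "$repo/packages/api"
echo "Use pnpm for all installs."    > "$repo/AGENTS.md"
echo "This package tests with Jest." > "$repo/packages/api/AGENTS.md"
# Combined guidance for a session working inside packages/api:
# root file first, nested file last.
cat "$repo/AGENTS.md" "$repo/packages/api/AGENTS.md"
```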
| Criteria | AGENTS.md | CLAUDE.md | .cursor/rules | SKILL.md |
|---|---|---|---|---|
| Scope | Cross-platform | Claude Code only | Cursor only | Cross-platform (Claude Code, Codex CLI, opencode) |
| Read by | 25+ supporting agents (per agents.md) | Claude Code | Cursor | Claude Code, Codex CLI, opencode |
| Loaded at | Session start, every request | Session start, every request | On match per file glob | When the skill is invoked |
| Filename and location | Root + nested per package | Root + nested + ~/.claude/ | .cursor/rules/*.mdc | .claude/skills/<name>/SKILL.md |
| Override mechanism | AGENTS.override.md (Codex) | None native | Glob-scoped rules | Skill activation |
| Stewardship | Agentic AI Foundation / Linux Foundation | Anthropic | Cursor | Convention |
Anatomy of a good AGENTS.md
A useful AGENTS.md does five things:
- States the build and test commands the agent will run every session.
- Names the package manager, language version, and any non-default tooling.
- Lists the constraints the agent must respect (the things you keep correcting).
- Sets the style and quality bar.
- Optionally shows examples of good output and common failure modes.
The agents.md site offers a worked example covering setup commands, testing instructions, and PR title format [2]. It looks like this:
```markdown
# AGENTS.md

## Setup commands
- Install deps: `pnpm install`
- Start dev server: `pnpm dev`
- Run tests: `pnpm test`

## Code style
- TypeScript strict mode
- Single quotes, no semicolons
- Use functional patterns where possible

## Testing instructions
- Find the CI plan in `.github/workflows`
- Run `pnpm turbo run test --filter <project_name>`
- Add or update tests for the code you change

## PR instructions
- Title format: [<project_name>] <Title>
- Always run `pnpm lint` and `pnpm test` before committing
```

AI Hero cites Kyle from Humanlayer on the instruction budget of frontier thinking LLMs. The model follows roughly 150 to 200 instructions with reasonable consistency, and smaller models attend to fewer [4]. Every byte in your AGENTS.md loads on every request, so the file behaves like a shared cache that competes with the actual task. Keep it lean.
Two adjacent traps are worth naming. AI Hero calls them out:
- Auto-generated AGENTS.md. Initialization scripts produce a comprehensive file when what the agent needed was a useful one. The result reads as universal guidance that applies only to a few scenarios [4].
- Documenting file paths. AI Hero recommends documenting capabilities over structure. File paths drift, and a stale "authentication logic lives in `src/auth/handlers.ts`" line will confidently send the agent to the wrong place once that file moves [4].
OpenAI's Codex implementation enforces a hard limit on the same principle. The default project_doc_max_bytes is 32 KiB across the combined chain. Once the chain reaches the cap, Codex stops adding files [3].
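You can check how close a repo sits to that cap with ordinary shell tools. A sketch against a hypothetical two-file chain; run the `find` line from your real repo root to measure your own chain:

```shell
cd "$(mktemp -d)"   # scratch repo standing in for a real one
mkdir -p packages/api
echo "root guidance"   > AGENTS.md
echo "nested guidance" > packages/api/AGENTS.md
# Total bytes across every AGENTS.md vs the 32 KiB default cap [3].
total=$(find . -name AGENTS.md -exec cat {} + | wc -c | tr -d ' ')
if [ "$total" -gt $((32 * 1024)) ]; then
  echo "chain is $total bytes - over the 32 KiB default cap"
else
  echo "chain is $total bytes - under the cap"
fi
```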
How to write one
1. Start with the build and test commands the agent will run every session. These are the commands the agent has at hand from turn one.
2. Add the constraints. The things you keep correcting belong in writing, with reasons attached.
3. Document the conventions you would explain to a new hire on day one. Naming, branching, language version, package manager.
4. Add examples for the tasks where format matters. Commit messages, PR titles, generated tests, response shape.
5. Run the same task across two different agents (Codex and Cursor, for example) and use the diff to find platform-specific gaps. What one agent infers cleanly, the other often misses.
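The first three steps can be bootstrapped with a heredoc. Every value below is a placeholder to replace with your project's real commands and rules; the sketch writes into a scratch directory:

```shell
cd "$(mktemp -d)"   # scratch dir; run at your repo root in practice
cat > AGENTS.md <<'EOF'
# AGENTS.md

## Setup commands
- Install deps: `pnpm install`
- Run tests: `pnpm test`

## Constraints
- Never commit directly to main; open a PR.
- Do not edit generated files under `dist/`.

## Conventions
- Node 20, pnpm, TypeScript strict mode.
EOF
head -1 AGENTS.md
```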
AGENTS.md and governance
A repo with one model and one agent usually has one place where the rules live. A repo with three agents and a CI pipeline that calls a fourth needs a place where the rules apply to all of them. AGENTS.md is the natural anchor for that role.
Three consequences flow from this:
- Approval rules (what the agent must ask before doing) live in AGENTS.md so every harness honors them.
- Style and constraint rules live in AGENTS.md so the same code review verdict comes back regardless of which model generated the change.
- The repo becomes legible to agents the same way a strong README makes a project legible to a new engineer.
OpenAI's Harness engineering case study makes this concrete. A team of three to seven engineers shipped roughly one million lines of code across approximately 1,500 pull requests in five months, with all code generated by Codex. The first commit included an AGENTS.md file generated by Codex itself, and the team treated repository knowledge as the system of record. The principle they keep returning to: "agent legibility is the goal" [1].
The bridge to the next article is short. AGENTS.md is what the harness reads. The harness is the layer that enforces what AGENTS.md says.
And every shape of harness reads it. Interactive coding harnesses (Cursor, Codex CLI, Aider) and autonomous agentic harnesses (OpenClaw, Hermes, Cowork, Dispatch) all treat AGENTS.md as the shared repo contract. The file's value comes from being the same artefact every harness consumes.
Common mistakes
- Writing it as a vision document. Goals and motivation belong in a PRD or README. AGENTS.md is for commands and constraints the agent applies in this session.
- Treating it as private. AGENTS.md ships in the repo and is checked into git by default. Keep secrets in a secrets manager.
- Auto-generating it on init. Per AI Hero, generated files prioritize comprehensiveness over restraint and grow into a "ball of mud" over time [4].
- Documenting file structure. File paths change. Capabilities are stable. Document the second.
- Leaving conflicts unresolved. Where you keep both AGENTS.md and CLAUDE.md, write CLAUDE.md as a thin Claude-only overlay or symlink it to AGENTS.md. Two files with different rules teach the agent that rules are negotiable.
AGENTS.md is checked into git by default. It loads on every session, by every agent. Putting a credential into AGENTS.md is the same as putting one into the repo: assume it is public.
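A cheap guard is a pre-commit grep for credential-shaped strings. The pattern below is a heuristic of my own, not a real secret scanner, and it writes a sample file into a scratch directory; in practice you would run the `grep` against your real AGENTS.md and pair it with a dedicated scanning tool:

```shell
cd "$(mktemp -d)"   # scratch dir; check your real AGENTS.md in practice
printf '## Setup commands\n- Install deps: `pnpm install`\n' > AGENTS.md
# Crude credential pattern: key/secret/token/password followed by : or =.
if grep -Eiq '(api[_-]?key|secret|token|password)[[:space:]]*[:=]' AGENTS.md; then
  echo "possible credential in AGENTS.md" >&2
  exit 1
fi
echo "no obvious secrets"
```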
AGENTS.md as the operating contract
AGENTS.md is a small file that pays back on every session. It removes the setup-question round-trips, prevents rule drift across the tools your team uses, and turns repeated chat corrections into standing rules the agent reads on every turn. The file is checked into git, reviewable in pull requests, and shaped by the same workflow as the code it governs.
For operators running multiple coding agents, the practical move is small. Start with the build and test commands, add the constraints you keep correcting, and use AGENTS.md as the cross-platform contract that every harness reads. CLAUDE.md and the rest of the agent-config family stack on top.
OpenAI's case study is the proof point at scale: a team of three to seven engineers shipped roughly one million lines of code across approximately 1,500 pull requests, with all code generated by Codex and the first commit including an AGENTS.md file Codex had written for itself [1]. The file was the contract that kept the workflow coherent. The same mechanism scales down to a one-person repo with a single coding agent and an AGENTS.md checked in next to the README.
One file. Every agent. Keep it current.

