The plumbing was the point.

Automation Switch went live on April 6, 2026. Twenty-eight days later, AI assistants had cited the site 1,400 times across 17 distinct query topics, with an 8x growth curve over the final 21 days of that window. A single article on skill.md files accounts for 786 of those citations. The rest of the corpus surfaces too.

This is the data, what it tells me about AI-mediated discovery, and why the site was deliberately built this way.

1,400 AI assistant citations

Across 13 pages in the 28 days from launch, with an 8x growth curve over the final 21 days.

Source: Bing Webmaster Tools AI Performance, May 2026

The numbers

The 30-day Bing Webmaster Tools AI Performance report shows what AI assistants do with our content. Total citations: 1,400. Average cited pages per day: 3. Peak day: 191 citations across 8 unique pages.

Citation growth follows a steep curve.

Citation growth across the 30-day window

                          Week 1 (Apr 6–12)   Week 2 (Apr 13–19)   Week 3 (Apr 20–26)
Avg citations per day     ~15                 ~54                  ~120
Avg cited pages per day   1–3                 2–4                  7–8

Daily counts swing. The direction holds: more citations on more pages, week after week.

The grounding queries that drove the citations

The Bing report shows the user questions AI assistants answered using Automation Switch content. Top 17 grounding queries from the 30-day window:

Top 17 grounding queries (30-day window)

Grounding query                                              Citations
Bolt software company AI prototype generator evaluation             92
skill files                                                         54
skill.md files                                                      32
best automation tools small business                                30
skill.md file                                                       24
what is skill.md                                                    23
skill md file                                                       18
Smithery Agent Automation review                                    17
what are ai skill files                                             12
skill file                                                          12
Bolt software company prototyping wireframing review                11
what is a skill.md file                                              9
skills .md                                                           8
Notion vs Airtable complex database relationships rollups            8
Qodo AI code review automated bug detection                          7
PulseMCP evaluation MCP Server Directory                             6
skill md files                                                       4

Three patterns stand out.

Definitional queries land hard. Ten of the 17 queries are phrasings of "skill.md" or "skill files", and every one of them cites the Automation Switch skill.md directory article. AI assistants prefer authoritative single-source answers for definitional queries, and the schema on that article (Article plus structured FAQ) makes citation easy.

Tool-evaluation queries land for emerging tools. Bolt, Qodo, Smithery, and PulseMCP were all green-field topics at Automation Switch launch. Writing early earned the position.

Comparison queries land for specific intent. "Notion vs Airtable complex database relationships rollups" is a long-tail decision-stage query, and Automation Switch surfaces because the directory entry covers that decision shape.

What we shipped that earned this

The site was built deliberately for AI-readability: schema is part of the page structure, not an afterthought. The work happened during site redevelopment, before any retrofit pressure existed:

  • Article pages carry full Article and BlogPosting schema with author attribution, date, headline, image, mainEntityOfPage, and structured FAQ where relevant.
  • Directory pages (the AI Coding Assistants directory, the MCP Servers directory, the Skills Hub) carry CollectionPage and ItemList schema with each entry as a structured Thing.
  • Content hub pages (the pillar hubs for Workflow Automation, AI Workflows, Tool Comparisons) carry CollectionPage schema with breadcrumbs and parent navigation.
  • Reviews (the AI coding assistant reviews, the MCP server reviews) carry Review and AggregateRating schema with the editorial scoring exposed to bots.
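As a sketch, the Article-plus-FAQ shape described above renders as JSON-LD along these lines; the headline, dates, names, and example.com URLs are illustrative, not copied from the live site:

```json
[
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is a skill.md file?",
    "datePublished": "2026-04-06",
    "author": { "@type": "Person", "name": "Author Name" },
    "image": "https://example.com/og/skill-md.png",
    "mainEntityOfPage": "https://example.com/ai-workflows/skill-md-files"
  },
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
      {
        "@type": "Question",
        "name": "What is a skill.md file?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "A markdown file that declares a reusable skill for an AI agent."
        }
      }
    ]
  }
]
```

The FAQ ships as a sibling FAQPage object rather than nested inside the Article, which keeps both types independently parseable.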

Beyond schema, the site ships:

  • llms.txt at root, listing the canonical pillars and what each one is about.
  • MCP capability declaration at /api/mcp, exposing tools that agents can call directly.
  • Stable URL structure with category prefixes (/ai-workflows/, /tool-comparisons/, /mcp/, etc.).
  • Author attribution on every article tied to schema.org Person.
  • Structured FAQ blocks in JSON-LD where applicable.
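A minimal llms.txt in the shape described above, following the llms.txt convention of an H1 name, a blockquote summary, and H2 sections of links; example.com stands in for the real domain and the descriptions are illustrative:

```
# Automation Switch
> Reviews, directories, and guides for workflow automation and AI-native tooling.

## Pillars
- [AI Workflows](https://example.com/ai-workflows/): guides to agent-driven automation
- [Tool Comparisons](https://example.com/tool-comparisons/): head-to-head tool evaluations
- [MCP Servers](https://example.com/mcp/): directory and reviews of MCP servers
```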

This is the floor of agent-readability. Each piece is standard. All of it was deliberate.

Industry analysis from 2026 suggests pages with structured data are cited disproportionately in AI Overviews (Stackmatix structured data analysis), which is why the schema layer is the foundation of this work.

An unprompted reach

The strongest validation of this work happened inside our own stack. A Claude agent was running research for an article adjacent to content Automation Switch had already published. Mid-task, it reached for the Automation Switch MCP server unprompted. The agent picked AS as a source on its own.

Why it picked us matters more than the fact it picked us. The path of least resistance for a research agent in that situation is to summarise the same author’s published articles back into the new draft, hidden under a fresh URL. That pattern is cannibalisation: the same writer reusing the same source material, repackaged. The MCP gave the agent a structured path to query our corpus through tools. The cannibalisation risk turned into a citation surface.

TIP
The validation moment

Agents will route through structured surfaces when those surfaces exist. The infrastructure was earning its keep before the data even surfaced.

This was the validation moment. Watching one of our own agents route through a structured surface on live work, unprompted, made the case stronger than any citation report could.

Where the principles came from

The pattern came from internal developer platform work. I have spent years on IDP projects, and the work that shaped this site came from one of them. We had introduced schema into the repository for every artefact, and we had introduced metadata that prepared the IDP artefacts for a knowledge graph and retrieval-augmented generation downstream.

The lesson I learned there: to make a system queryable by agents as well as humans, make it legible at the structure level. The structure of the data itself has to carry the meaning. The page level and the rendered HTML follow from it.

When I built Automation Switch, those same principles carried across. Sanity (the CMS) holds the structured content. Each content type is a structured object with named fields. The Next.js frontend renders that structure into HTML with embedded JSON-LD that mirrors the source. Sitemaps generate from the same structure. The MCP exposure at /api/mcp queries the same structure through tools.

One source. Multiple surfaces. Each surface carries the schema cleanly because the source is structured to begin with.
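A minimal sketch of that pipeline in TypeScript, assuming hypothetical field names rather than the actual Sanity schema; the point is that the JSON-LD mirrors the structured source field for field:

```typescript
// Sketch of "one source, multiple surfaces": a structured CMS document
// rendered into JSON-LD that mirrors its fields. All field names, titles,
// and example.com URLs are illustrative, not the real Sanity schema.

interface Article {
  title: string;
  author: string;
  publishedAt: string; // ISO date
  slug: string;
}

// Build a schema.org Article object directly from the structured source.
function toJsonLd(article: Article, baseUrl: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: article.title,
    datePublished: article.publishedAt,
    author: { "@type": "Person", name: article.author },
    mainEntityOfPage: `${baseUrl}/${article.slug}`,
  };
}

const jsonLd = toJsonLd(
  {
    title: "What is a skill.md file?",
    author: "Author Name",
    publishedAt: "2026-04-06",
    slug: "skill-md-files",
  },
  "https://example.com/ai-workflows"
);
console.log(JSON.stringify(jsonLd, null, 2));
```

The sitemap generator and the MCP tools would consume the same `Article` objects, so every surface stays consistent with the one source.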

The goal was a site that could scale and that was agent-ready from day one, both at once. For a site focused on automation, an AI-native build is the consistent choice.

The plumbing thesis

Worth stating plainly: the floodgates open later, not now. AI-mediated discovery is real and growing, with AI Overviews reaching over 1.5 billion monthly users in early 2025 and traditional search reliance projected to drop 25 percent by 2026. But traditional search still dominates: most users type into Google, most decisions happen on traditional SERPs, and search engines still drive the bulk of today's traffic. The 1,400 citations represent where the web is heading, not where it is.

The infrastructure work Automation Switch did is plumbing. It positions the site so that when AI-mediated discovery becomes dominant, the scaffolding is already built. Revenue from this work today is incidental. The work is for tomorrow.

This is explore-versus-exploit logic applied to traffic acquisition. The cost of building agent-readable infrastructure is low. The cost of retrofitting it later is high. The cost of agents skipping you when they take over is highest of all.

Automation Switch made the bet early because building the plumbing now costs less than building it under pressure later. When the taps open, the pipes are already in place.

Concentration is loosening

56 percent of citations (786 of 1,400) come from one article. That looks like concentration risk on the surface. The trajectory tells a different story: cited pages per day grew from one to eight over the 30-day window, which means the rest of the corpus surfaces too. The concentration ratio falls as more content earns its share.

Filtered to the last 7 days, the same dashboard shows 728 total citations and an average of 7 cited pages per day. The concentration ratio is falling.

The single-article dominance is informative. The skill.md directory piece is a directory-shape article: structured, scannable, comprehensive on a narrow topic. AI assistants prefer that shape for citation. The pattern repeats. Automation Switch has more directory-shape pieces in the editorial pipeline (MCP marketplaces, agent frameworks, AI design tools) and each one is a candidate for the same effect.

What this means for other publishers

The methodology generalises. Any publisher willing to ship the same infrastructure can plausibly earn a comparable citation share within a similar window. The pieces are standard:

  • Schema on every content type, structurally consistent with the underlying CMS.
  • llms.txt at root.
  • Stable URLs with category prefixes.
  • Author attribution tied to schema.org Person.
  • Optional with high payoff: an MCP server exposing your corpus as queryable tools.
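That optional MCP piece can be sketched without any SDK; a real server would use the official MCP SDK and speak the protocol over /api/mcp, but the core idea is just a named tool that searches a structured corpus. Everything here (the Doc shape, the sample entries) is hypothetical:

```typescript
// SDK-free sketch of exposing a corpus as a queryable tool. A real MCP
// server would wrap this in the protocol; this only illustrates the
// "structured path to query the corpus" idea with made-up entries.

interface Doc {
  slug: string;
  title: string;
  body: string;
}

const corpus: Doc[] = [
  {
    slug: "skill-md-files",
    title: "skill.md files, explained",
    body: "A skill.md file declares a reusable agent skill.",
  },
  {
    slug: "mcp-servers",
    title: "MCP server directory",
    body: "MCP servers expose tools that agents can call.",
  },
];

// The one tool this sketch exposes: keyword search over the corpus.
function searchCorpus(query: string): Doc[] {
  const q = query.toLowerCase();
  return corpus.filter(
    (d) => d.title.toLowerCase().includes(q) || d.body.toLowerCase().includes(q)
  );
}

console.log(searchCorpus("skill").map((d) => d.slug)); // → [ 'skill-md-files' ]
```

An agent calling a tool like this gets structured results it can cite, rather than scraping rendered HTML.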

The cost of doing this for an existing publisher is real and bounded. For a publisher on a modern structured CMS (Sanity, Contentful, Webflow, Ghost, or headless WordPress), the schema work takes one to two weeks, the llms.txt is an hour, and the MCP exposure is a two-week build for a small team. Before scoping this work, audit what you already ship: see How to Audit a Workflow Before You Automate It.

The harder question is whether AI-mediated discovery will matter enough to justify the work. The answer depends on what you read about where search is heading. My read is that the floodgates open within 12 to 24 months. Yours might differ.

The honest framing

Your content gets used either way. AI assistants read the web. The question is whether they cite you, summarise you, or replace you. Publishers are reporting traffic losses of up to 40 percent when AI Overviews displace the click.

Citation means traffic and authority flow back to your site. Summarisation means your content answers the query, and the user stays on the AI surface. Replacement means an AI-generated alternative takes the slot you used to hold.

Schema and MCP make citation possible. They are levers, and the result depends on the surrounding work too. Sites that skip this work default to some combination of summarisation and replacement.

Automation Switch chose citation. The data so far suggests the choice is paying off.

What Automation Switch is tracking next

The 1,400 citations are the first 30 days of a multi-year compounding curve. Signals Automation Switch reviews weekly:

  • Total citations and unique cited pages. Both keep climbing as more content earns its share.
  • Concentration ratio. Currently 56 percent from one article (786 of 1,400). Healthy diversification means this drops to 30 to 40 percent over the next 90 days.
  • New URL appearance. Every article that ships should appear in the citation report within 2 to 4 weeks. Articles missing this window flag a structural problem worth investigating immediately.
  • Click-through from citation sources. AI citation traffic in Google Analytics, currently arriving from claude.ai and gemini.google.com. Citations matter. Clicks matter more.
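The concentration-ratio signal is plain arithmetic, the top page's citations over the total. With the launch-window numbers from this article (786 citations on the skill.md piece against 1,400 total):

```typescript
// Concentration ratio: share of total citations held by the single
// most-cited page. Figures come from the article; the helper is illustrative.
function concentrationRatio(topPageCitations: number, totalCitations: number): number {
  return topPageCitations / totalCitations;
}

const ratio = concentrationRatio(786, 1400);
console.log(`${(ratio * 100).toFixed(0)}%`); // prints "56%"
```

Watching this number fall while total citations rise is what healthy diversification looks like.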

The plumbing is the point. The data tells Automation Switch when the taps open.

For what a good audit covers in general, see What a Good Automation Audit Should Actually Include.

The plumbing was the point. The data tells Automation Switch the bet is working.