

How to audit your WordPress site for AI agent readiness

Google published a checklist for building agent-friendly websites. I used AI tools to audit a real WordPress site against every requirement. Here's the framework, the results, and how to run the same audit yourself.

May 6, 2026

The web is going agentic. Google’s Gemini, OpenAI’s operator mode, and dozens of startup tools can already browse websites autonomously, comparing prices, filling out forms, and completing purchases on behalf of users. Within a few years, a significant share of your site’s “visitors” won’t be humans at all. They’ll be AI agents acting on behalf of people who never visit your site directly.

In April 2026, Google acknowledged this shift. The Chrome team published a guide called “Build agent-friendly websites”, laying out 14 distinct requirements that determine whether AI agents can actually use your site, not just read it.

This is different from GEO (Generative Engine Optimization), which is about getting cited in AI search results. Agent readiness is about whether an AI can navigate your site, click your buttons, and complete a transaction on behalf of a human user. Your site’s next visitor might not have eyes.

So I asked a simple question: is my WordPress site ready for them?

Who better to run the audit than an AI?

The requirements are mostly about machine-readable structure, semantic HTML, and accessibility. An AI with the right tools can check almost every single one directly.

I pointed Claude Code at a WordPress plugin company’s site (three representative pages: homepage, pricing, and a product page) and had it run a full audit using:

- curl for HTTP headers and AI-bot user-agent spoofing
- Python regex over saved HTML for semantic-element counts and JSON-LD schema types
- the Playwright MCP for live in-browser checks: DOM inspection, accessibility tree analysis, computed styles, layout-shift scoring, ghost-overlay detection, and interactive-target sizing
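The regex pass over saved HTML can be sketched roughly like this (the snippet below is invented for illustration, not the audited site’s markup):

```python
import json
import re

# Invented sample standing in for a saved page.
html = """
<html lang="en">
  <main><button>Buy GravityView Pro</button></main>
  <nav></nav><footer></footer>
  <script type="application/ld+json">
  {"@context": "https://schema.org", "@type": "Product", "name": "Example"}
  </script>
</html>
"""

# Count the semantic landmark elements agents rely on.
landmarks = {
    tag: len(re.findall(rf"<{tag}\b", html))
    for tag in ("main", "nav", "footer", "header", "article")
}

# Pull JSON-LD blocks and list their schema.org @type values.
schema_types = []
for block in re.findall(
    r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
):
    data = json.loads(block)
    items = data if isinstance(data, list) else [data]
    schema_types += [item.get("@type") for item in items]

print(landmarks)     # {'main': 1, 'nav': 1, 'footer': 1, 'header': 0, 'article': 0}
print(schema_types)  # ['Product']
```

Point a loop like this at your saved pages and you have the structural half of the audit; the browser-dependent checks still need Playwright or DevTools.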

How agents see your site

Agents perceive webpages through three modalities, often combined. Screenshots use vision models to identify elements by color, size, and position (slow, token-expensive, used as a fallback). HTML/DOM parsing reads element nesting, hierarchy, and attributes to understand structure. And the accessibility tree strips everything down to roles, names, and states, giving agents the highest-fidelity map of what a page actually does.

If your site is broken in one modality, agents can compensate. If it’s broken in two, they’re guessing.

The 14-point agent readiness checklist

I’ve organized Google’s recommendations into 14 discrete, auditable requirements grouped by category.

The 3 modalities

These first three map to the three ways agents perceive your site.

1. Clean, machine-readable DOM (HTML modality). Static HTML should be parseable on first request, with key content already in the DOM (not JS-hydrated only).

2. Meaningful accessibility tree. Strong ARIA roles, names, and states. Semantic landmarks (<main>, <nav>, <footer>). <html lang> set correctly.

3. Reliable visual rendering for screenshot analysis. Stable layout on first paint, low Cumulative Layout Shift (CLS < 0.1).
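A quick proxy for requirement 1: strip tags from the raw HTML your server returns and see how much readable text survives. If almost nothing does, your content is probably JS-hydrated only. A rough sketch with invented sample strings:

```python
import re

def static_text_ratio(html: str) -> float:
    """Fraction of raw HTML that is readable text after removing scripts,
    styles, and tags. Near-zero suggests the page is JS-hydrated only."""
    stripped = re.sub(
        r"<(script|style)\b.*?</\1>", " ", html, flags=re.DOTALL | re.IGNORECASE
    )
    text = re.sub(r"<[^>]+>", " ", stripped)
    text = re.sub(r"\s+", " ", text).strip()
    return len(text) / max(len(html), 1)

# Invented examples of a server-rendered page and a JS-only shell.
server_rendered = "<main><h1>Pricing</h1><p>GravityView Pro costs $99/yr.</p></main>"
js_only = '<div id="root"></div><script src="/app.js"></script>'

print(static_text_ratio(server_rendered))  # over 0.5: most bytes are real text
print(static_text_ratio(js_only))          # 0.0: nothing readable without JS
```

It is a blunt heuristic, not a verdict; a low ratio just tells you to compare the raw response against the rendered DOM before trusting the page to an agent.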

Cross-modality consistency

4. Consistent signals across HTML, a11y tree, and visual rendering. ARIA roles should match the underlying semantic HTML. Cursor styles should match interactivity. Visible text should match DOM text. When these diverge, agents have to guess which modality to trust.

Build recommendations

These eight are the practical build requirements.

5. All necessary actions clearly reflected in the interface. Every action the user can take should be discoverable in the DOM. Button text should resolve intent on its own (“Buy GravityView Pro” beats “Buy Now”).

6. Stable layout across templates. CTAs in consistent positions. No shifting hero blocks.

7. No “ghost” elements or transparent overlays hiding interactives. Watch for opacity: 0 overlays or decorative containers blocking pointer events.

8. Semantic HTML for actionable elements. Use <a> and <button>, never <div onclick> or <span onclick>.

9. If non-semantic, set role and tabindex. Fallback for cases where semantic HTML can’t be used. A <div role="button" tabindex="0"> tells agents this element is interactive.

10. cursor: pointer on clickable elements. Reinforces the visual signal of interactivity. Trivial to implement, easy to miss.

11. <label for="..."> linking labels to form inputs. Explicit label association for every form field. Without it, an agent can see a text field but doesn’t know what it’s for.

12. Interactive elements have a visible area > 8 sq pixels. No invisible or sub-pixel hit targets. Tiny close buttons and icon-only controls are invisible to agents.
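Several of these build checks (8, 9, and 11 above) can be smoke-tested with plain regex over saved HTML before reaching for a browser. A rough sketch; the markup below is invented to show one passing and one failing case per check, and the regexes are deliberately naive:

```python
import re

# Invented markup: a bad div-button, a div-button with the role/tabindex
# fallback, a labeled input, and an unlabeled input.
html = """
<div onclick="buy()">Buy Now</div>
<div role="button" tabindex="0" onclick="buy()">Buy Pro</div>
<label for="email">Email</label><input id="email" type="text">
<input id="coupon" type="text">
"""

# Requirements 8/9: flag onclick handlers on non-semantic elements
# that lack a role fallback.
ghost_buttons = [
    m.group(0)
    for m in re.finditer(r"<(?:div|span)\b[^>]*onclick[^>]*>", html)
    if "role=" not in m.group(0)
]

# Requirement 11: inputs whose id has no matching <label for="...">.
label_targets = set(re.findall(r'<label\b[^>]*\bfor="([^"]+)"', html))
input_ids = set(re.findall(r'<input\b[^>]*\bid="([^"]+)"', html))
unlabeled = input_ids - label_targets

print(ghost_buttons)  # ['<div onclick="buy()">']
print(unlabeled)      # {'coupon'}
```

Cursor styles, hit-target sizes, and ghost overlays depend on computed CSS, so those still require a live browser (DevTools or Playwright).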

Next steps (forward-looking)

13. Adopt WebMCP and agent-protocol signals. /.well-known/agent.json, /.well-known/ai-plugin.json, WebMCP manifests as the standards stabilize. No action required today, but worth tracking.

14. Audit the accessibility tree with existing tools. Run axe-core or equivalent, ideally in CI for regression detection.

What I found: a real WordPress audit

Overall grade: B+ (12/14 pass, 1 partial, 1 N/A)

Strong:

- 100% semantic HTML for interactives
- rich JSON-LD (Product, Organization, FAQPage, BreadcrumbList)
- a hand-curated llms.txt with all links resolving
- excellent CLS (0 to 0.026)
- all AI bots served HTTP 200
- a clean accessibility tree
- proper form labels
- no ghost overlays

Gaps found:

The site scored B+ without ever specifically optimizing for AI agents. Most of the requirements are things you should already be doing for accessibility and SEO.

How to run this yourself

For modality checks (1-4), Chrome DevTools covers everything: the Accessibility panel shows the accessibility tree, Lighthouse gives you CLS scores, and the Elements panel lets you inspect for ghost overlays and verify DOM consistency. For the build requirements (5-12), a manual walkthrough of your key pages with DevTools open catches most issues.

Or take the shortcut I did: if you use Claude Code, Cursor, or a similar AI tool, point it at your site with the 14-point checklist and let it curl, parse, and report. Watching an AI audit your site for AI readiness is oddly satisfying.
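If you want to script the bot-access check yourself, here is a minimal sketch using only Python’s standard library. The user-agent tokens below are commonly associated with AI crawlers, but real crawlers send fuller UA strings, so confirm current values against each vendor’s documentation:

```python
import urllib.error
import urllib.request

# Common AI crawler user-agent tokens (verify against vendor docs).
AI_BOT_UAS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def build_request(url: str, user_agent: str) -> urllib.request.Request:
    """Build a request that spoofs an AI crawler's user-agent."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def check_bot_access(url: str) -> dict:
    """Return the HTTP status each spoofed bot receives (200 = served)."""
    results = {}
    for ua in AI_BOT_UAS:
        try:
            with urllib.request.urlopen(build_request(url, ua), timeout=10) as resp:
                results[ua] = resp.status
        except urllib.error.HTTPError as err:  # e.g. 403 from a bot blocker
            results[ua] = err.code
    return results
```

If `check_bot_access("https://yoursite.com")` returns anything other than 200 across the board, a firewall or bot-management rule is likely filtering agents before they ever see your HTML.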

The web is becoming agentic

The fact that Google published this guide at all is the real signal. This isn’t a niche experiment. It’s the Chrome team telling web developers: AI agents are becoming a primary audience for your site, and you need to build for them.

The trajectory is clear. AI referral traffic is already measurable and growing, and tools like OpenAI’s operator mode and Gemini’s agent capabilities are training users to let AI browse on their behalf. Before long, a meaningful share of your traffic will be agents comparing prices, filling out forms, and making purchases for the people who sent them.

Google acknowledging this is one more step toward that reality. And every single recommendation in their guide maps directly to an existing web accessibility principle. Semantic HTML, proper ARIA roles, labeled form fields, visible interactive elements, stable layouts. This is WCAG compliance with a new motivation.

If you’ve been putting off that accessibility audit, the AI agent era just gave you a business case your stakeholders will actually care about. The sites that prepare now will be the ones agents can use, recommend, and transact with. The ones that don’t will be invisible to a growing share of the web’s traffic.

Better to get ready now than to catch up later. Read Google’s full guide. It’s short, practical, and worth your time.

Casey Burridge

Cowritten by Casey & Jarvis 🤖

Strategic Growth & Operations Manager at GravityKit. Full-stack marketer, WordPress consultant, and AI-first ops builder.