This started as a private question: does anyone else build personal sites the way we build this one?
We surveyed 47 sites, checked their protocols, and mapped the landscape. The answer: we did not find an established term that captures what these sites are. The closest analogues each implement one or two features; none combine all of them. So we named the category (cognitive interface), built a maturity framework (L0 through L4), and tried to be honest about what's overengineered and what assumptions might be wrong.
I'm publishing the raw research because it's more useful as a shared reference than a private document. The comparison matrix and protocol map should save you time if you're building something similar. The contrarian concerns section is there because analysis that doesn't question itself isn't analysis.
Research conducted February 18, 2026, using three parallel web-researcher agents, Exa Deep Researcher, and Brave Search. Updated February 20, 2026 with multi-model additions.
Executive Summary
Is this a recognized category? No. Ashita Orbis occupies an unclaimed intersection of several movements (digital gardens, IndieWeb, personal APIs, and the emerging agentic web) but we did not find an established term that captures what it actually is. The closest analogues each implement one or two of its features; none combine all six differentiators. This appears to be a genuinely new category.
Top 7 comparable sites:
| Site | Similarity* | Key Similarity |
|---|---|---|
| Aaron Parecki (aaronparecki.com) | 6/10 | Most machine-readable personal site (microformats, Webmentions, IndieAuth, Micropub) |
| Benjamin Stein (benjaminste.in) | 6/10 | Personal blog explicitly designed for agent consumption (content manifest, JSON-LD, Markdown alternate) |
| Joost de Valk (joost.blog) | 5/10 | WordPress content negotiation pioneer (`<link rel="alternate" type="text/markdown">`, Accept header routing) |
| swyx.io | 5/10 | JSON API endpoint, multi-persona content, "Learn in Public" philosophy |
| Simon Willison (simonwillison.net) | 5/10 | Datasette: ~159K rows, 27 tables, JSON API, which makes it the closest to "personal content as queryable database" |
| EJ Fox (ejfox.com) | 5/10 | Explicitly building "Personal APIs" for both humans and robots |
| Gwern.net | 4/10 | Deepest structured metadata, .md URL access, annotation database |
*Similarity scores reflect qualitative assessment of conceptual and feature overlap, not a direct count from the comparison matrix below. The matrix measures binary feature presence across 11 specific technical capabilities; the similarity score also weighs philosophical alignment, data depth, and implementation sophistication.
Proposed category name: Cognitive Interface, meaning a personal website that functions as a bidirectional protocol surface between a human identity and the network of both human and AI agents.
Part 1: Comparable Sites
1.1 Comparison Matrix
| Site | Human Blog | Structured Data | API/Endpoints | llms.txt | MCP Server | WebMCP | Agent Comments | Automated Pulse | Resident Agent | Multi-Tier | Content Negotiation |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Ashita Orbis | Y | Y | Y (OpenAPI) | Y | Y | Y | Y | Y (daily) | Y | Y (3 tiers) | Y |
| Aaron Parecki | Y | Y (microformats2, h-card, JSON-LD) | Y (Micropub) | - | - | - | Y (Webmentions) | - | - | - | - |
| swyx.io | Y | Y (frontmatter) | Y (JSON API at /api/blog) | - | - | - | - | - | - | - | - |
| EJ Fox | Y | Y | Y (Personal API) | - | - | - | - | - | - | - | - |
| Simon Willison | Y | Y (tags, dates) | Y (Datasette JSON API, separate instance) | - | - | - | - | - | - | - | - |
| Gwern.net | Y | Y (YAML: confidence, importance) | - | - | - | - | - | - | - | - | - |
| Maggie Appleton | Y | Y (growth stages) | - | - | - | - | - | - | - | - | - |
| Robb Knight | Y | Y | - (RSS/Atom/JSON feeds) | - | - | - | - | Y (automated /now) | - | - | - |
| IndieWeb (typical) | Y | Y (microformats) | Y (Micropub, IndieAuth) | - | - | - | Y (Webmentions) | - | - | - | - |
| Moltbook | - | Y | Y | - | - | - | Y (agent-only) | - | - | - | - |
| SiteSpeakAI users | varies | - | Y (auto-generated MCP) | - | Y (auto) | Y (auto) | - | - | Y (chatbot) | - | - |
| Cloudflare sites | varies | - | - | - | - | - | - | - | - | - | Y (Markdown for Agents) |
| Benjamin Stein | Y | Y (JSON-LD, content manifest) | Y (alternate format links) | - | - | - | - | - | - | - | Y (Markdown alternate) |
| Joost de Valk | Y | Y (Schema.org) | Y (.md endpoints) | - | - | - | - | - | - | - | Y (Accept header + <link rel="alternate">) |
| omg.lol | Y | Y | Y (full REST API) | - | - | - | - | - | - | - | - |
| Will Larson | Y | Y (embeddings corpus) | - | - | - | - | - | - | - | - | - |
| Tantek Celik | Y | Y (h-card, h-feed, ICS calendar) | Y (Micropub, IndieAuth) | - | - | - | Y (Webmentions) | - | - | - | - |
| Damian O'Keefe | Y | - | - | Y | Y (Netlify) | - | - | - | - | - | - |
| Oskar Ablimit | Y | - | Y (MCP) | - | Y | - | - | - | - | - | - |
| Aridane Martin | Y | - | - | - | - | Y | - | - | - | - | - |
| Jason McGhee | Y | - | - | - | - | Y | - | - | - | - | - |
| Junxin Zhang | Y | - | - | - | - | - | - | - | - | - | Y (Cloudflare) |
| Julian Goldie | Y | - | - | - | - | Y | - | - | - | - | - |
Key observation: No site in this matrix has more than 5 of AO's 11 features. Among the specifically agent-focused protocol features (llms.txt, MCP Server, WebMCP), the maximum found on any other personal site is 2 (Damian O'Keefe: MCP + llms.txt). AO has all 11.
1.2 Closest Analogues (Detailed Profiles)
Aaron Parecki (aaronparecki.com)
The IndieWeb Exemplar. Aaron Parecki is the Director of Identity Standards at Okta and a leading IndieWeb practitioner. His site is the most machine-readable personal site in the traditional web.
- Microformats2 & h-card: Full structured identity markup covering name, role, image, and biographical links
- Webmentions: Bidirectional cross-site communication (the human precursor to agent-to-agent interaction)
- IndieAuth: Decentralized authentication using his domain as identity
- Micropub: API for creating/editing content on his site from external clients
- JSON-LD: Schema.org WebSite markup with SearchAction
- Quantified self (as of February 2026): 420 articles, 4,517 bookmarks, 21,658 checkins, 3,764 notes, 4,443 photos, GPS tracking since 2008
- Custom CMS (p3k): Self-built, emphasizing data ownership
What AO has that Parecki doesn't: Agent protocols (MCP, WebMCP, llms.txt), automated content generation (Pulse), resident AI agent, multi-tier architecture, agent reactions/comments from AI agents specifically.
What Parecki has that AO could adopt: IndieAuth (use your domain as login everywhere), Micropub (standardized content creation API), the sheer depth of quantified self data.
Assessment: Parecki's site is the closest philosophical ancestor, because both sites treat machine-readability as a first-class concern. The gap is generational: Parecki's machine-readable layer serves the human IndieWeb while AO's serves AI agents.
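The microformats2 layer described above is plain HTML with class-name conventions. A minimal, illustrative h-card of the kind Parecki's site embeds (names and URLs are placeholders, not his actual markup):

```html
<!-- microformats2 h-card: parsers map u-photo, p-name, u-url, p-note,
     and u-email classes to a structured identity record. -->
<div class="h-card">
  <img class="u-photo" src="/photo.jpg" alt="">
  <a class="p-name u-url" href="https://example.com">Jane Example</a>
  <p class="p-note">Identity standards and IndieWeb.</p>
  <a class="u-email" href="mailto:jane@example.com">jane@example.com</a>
</div>
```

Any microformats2 parser turns this into a JSON identity object, which is why IndieWeb tooling (Webmentions, IndieAuth) can rely on it without a separate API.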
swyx.io (Shawn Wang)
The Developer Thought Leader. Shawn Wang (swyx) coined "Learn in Public" and popularized the "AI Engineer" role. His site bridges content creation and developer tooling.
- JSON API: `/api/blog` returns structured blog metadata (slug, date, reading time, category, frontmatter, GitHub integration)
- Multi-persona content: Targets "nontechnical Vibe Coders" through "Professional Software Engineers" to executives
- Canonical essays: "Learn in Public" and "The Rise of the AI Engineer," which established him as a thought leader
- Transparent stakeholder engagement: Portfolio, advisory relationships, equity disclosures
What AO has that swyx doesn't: Agent protocols, automated content, resident agent, multi-tier architecture, agent interaction.
What swyx has that AO could adopt: The JSON API pattern for blog metadata is simple and powerful. The multi-persona content targeting is sophisticated.
Assessment: swyx.io is the closest in intent because both sites are designed to be someone's public cognitive output. swyx's API is practical but limited compared to AO's full protocol surface.
EJ Fox (ejfox.com)
The Personal API Pioneer. EJ Fox wrote "Building Personal APIs" (2025), explicitly arguing for structured, real-time personal data exposed to "any robot (or, uh, human)" who wants it.
- Personal API: Structured around what he "actually cares about," providing real-time data about current activities
- Philosophy: Software as self-expression, APIs as extensions of identity
What AO has that Fox doesn't: The full protocol stack, multi-tier architecture, agent social features.
What Fox has that AO could adopt: The philosophical framing of "personal APIs" as identity expression, not just technical infrastructure.
Assessment: Fox's essay is the closest articulation of what AO is building, but his implementation appears to be a single API endpoint rather than a comprehensive agent surface.
Gwern.net
The Deep Metadata Site. Gwern Branwen's site is probably the most intensely structured personal site on the web.
- YAML frontmatter: Every page has creation date, confidence level, importance rating, CSS extensions
- Confidence tagging: Each piece of content has an explicit epistemic status (likely, possible, unlikely, etc.)
- Progressive disclosure: Navigation across multiple levels, from section headers through thematic clusters to inline annotations
- Markdown alternate access: Any page's raw Markdown source is accessible by appending `.md` to the URL (e.g., `gwern.net/zeo.md`). An undocumented but functional machine-readable access pattern.
- Annotation database: A comprehensive structured database backing Gwern's inline annotations, link previews, and metadata popups. Functions as a personal knowledge graph of external references.
- Archival completeness: Obsessive documentation of sources
What AO has that Gwern doesn't: Any explicit layer facing agents. Gwern is the most implicitly machine-readable personal site (deep structure, .md access, annotation DB) but has no explicit agent protocols: no MCP, no llms.txt, no API.
What Gwern has that AO could adopt: Confidence/epistemic status tagging on content. Importance ratings that could inform agent prioritization. The .md URL pattern as a lightweight content negotiation mechanism. The annotation database concept as a personal knowledge graph.
Assessment: Gwern proves that deep structure doesn't require agent protocols, but also shows the missed opportunity. If Gwern's metadata and annotation database were exposed via MCP, agents could do extraordinary things with it. The .md URL trick is a form of proto-content-negotiation that predates Cloudflare's Markdown for Agents.
Simon Willison (simonwillison.net)
The AI Transparency Advocate. Simon Willison is the creator of Datasette and one of the most prolific writers about AI/LLM developments.
- Content: Deeply technical blog spanning 2002-2026, focused on AI/LLM transparency
- Datasette: His open-source tool for exploring and publishing data, which is architecturally adjacent to what a personal MCP server does. His personal Datasette instance (datasette.simonwillison.net) exposes ~159,000 rows across 27 tables with a JSON API. This is the closest any personal site comes to AO's structured data exposure, but it's a separate tool that is not integrated into the blog itself.
- Feeds: Extraordinarily granular, with extensive per-tag sub-feeds covering individual tags, link types, and content categories. Atom feed at `/atom/everything/`, plus per-tag feeds.
- No agent protocols: Despite being deeply embedded in the AI ecosystem, his personal site has no MCP, no llms.txt, no API endpoints beyond Atom and Datasette.
What AO has that Willison doesn't: Agent protocols (MCP, WebMCP, llms.txt), automated content generation (Pulse), resident agent, multi-tier architecture. Willison's Datasette is the closest architectural parallel to AO's agent surface, but it's deployed as a separate application, not as an integrated layer of his personal site.
What Willison has that AO could adopt: The volume and consistency of writing. The extensive per-tag sub-feeds model. The Datasette approach of making ALL personal data queryable (~159K rows). The transparency about AI tool usage.
Assessment: The most surprising gap on this list. Willison should be the first person with MCP on his personal site, given his Datasette work. That he doesn't suggests the "personal site as agent surface" idea hasn't propagated even to the most obvious candidates. His Datasette instance is the closest existing thing to "personal content as a queryable API"; it just hasn't been connected to agent protocols.
Benjamin Stein (benjaminste.in)
The Agent-Friendly Blog Pioneer. Benjamin Stein's blog is a personal site explicitly designed for agent consumption alongside human reading.
- Content manifest: A machine-readable index of all blog content with metadata, enabling agents to discover and navigate the site's full corpus without crawling.
- Alternate format links: Each page provides `<link>` tags pointing to Markdown, JSON, and other alternate representations. Agents can follow these to access structured versions.
- JSON-LD: Full Schema.org structured data markup throughout the site.
- Markdown alternate: Every post available in Markdown format via alternate links.
What AO has that Stein doesn't: The full protocol stack (MCP, WebMCP, llms.txt), agent social features (reactions, comments), automated content generation, resident agent, multi-tier architecture.
What Stein has that AO could adopt: The content manifest pattern, which is a single index file that maps all site content with metadata. This is complementary to llms.txt (which provides a summary) and MCP (which provides interactivity). A manifest sits between them: comprehensive but static.
Assessment: Stein is the closest individual practitioner to AO's approach of deliberately designing for agent access. His implementation is lighter but shows the same intent.
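A content manifest of the kind Stein's site exposes is just a static JSON index plus agent-side traversal. The field names below are illustrative assumptions, not Stein's actual schema:

```javascript
// Hypothetical content manifest: one static JSON file listing every post
// with enough metadata for an agent to navigate the corpus without crawling.
const manifest = {
  site: "https://example.com",
  generated: "2026-02-20T00:00:00Z",
  entries: [
    {
      url: "/posts/agent-friendly-blogging",
      title: "Agent-Friendly Blogging",
      published: "2026-01-15",
      tags: ["agents", "web"],
      alternates: { markdown: "/posts/agent-friendly-blogging.md" },
    },
  ],
};

// Agent-side helper: resolve every entry that offers a Markdown alternate.
function markdownAlternates(manifest) {
  return manifest.entries
    .filter((e) => e.alternates && e.alternates.markdown)
    .map((e) => new URL(e.alternates.markdown, manifest.site).href);
}

console.log(markdownAlternates(manifest));
// [ 'https://example.com/posts/agent-friendly-blogging.md' ]
```

This is the "comprehensive but static" middle ground: cheaper than MCP, richer than llms.txt.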
Joost de Valk (joost.blog)
The WordPress Markdown Alternate Pioneer. Joost de Valk (founder of Yoast SEO) implemented content negotiation on his WordPress blog with a novel discovery mechanism.
- `.md` endpoints: Every post has a parallel Markdown version at the same URL with `.md` appended.
- Discovery via `<link rel="alternate">`: HTML pages include `<link rel="alternate" type="text/markdown" href="...">` tags, allowing agents to programmatically discover Markdown versions.
- Accept header routing: The server also responds to `Accept: text/markdown` headers, providing content negotiation in addition to access via URL patterns.
- Schema.org markup: Full structured data via Schema.org vocabulary.
What AO has that de Valk doesn't: MCP, WebMCP, llms.txt, agent social features, automated Pulse, resident agent, multi-tier architecture.
What de Valk has that AO could adopt: The `<link rel="alternate" type="text/markdown">` discovery pattern is elegant: it uses existing web infrastructure (HTML `<link>` tags) to signal agent-accessible content without requiring any new protocol.
Assessment: De Valk's approach is the most pragmatic implementation of agent-accessible personal content. It requires minimal infrastructure changes and uses web standards that already exist. The `<link rel="alternate">` discovery pattern could become a lightweight standard.
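The agent side of de Valk's discovery pattern is a few lines of code. A sketch (a real agent would use an HTML parser rather than regexes):

```javascript
// Discover a page's Markdown alternate via <link rel="alternate">,
// the pattern de Valk's blog uses.
function findMarkdownAlternate(html, baseUrl) {
  const linkTags = html.match(/<link\b[^>]*>/gi) || [];
  for (const tag of linkTags) {
    if (!/rel=["']?alternate["']?/i.test(tag)) continue;
    if (!/type=["']?text\/markdown["']?/i.test(tag)) continue;
    const href = tag.match(/href=["']([^"']+)["']/i);
    if (href) return new URL(href[1], baseUrl).href; // resolve relative hrefs
  }
  return null; // no Markdown alternate advertised
}

const page = `<html><head>
  <link rel="alternate" type="text/markdown" href="/my-post.md">
</head><body>...</body></html>`;

console.log(findMarkdownAlternate(page, "https://example.com/my-post/"));
// https://example.com/my-post.md
```

An agent that finds no alternate simply falls back to the HTML, which is what makes the pattern zero-risk for publishers.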
1.3 Newly Discovered Agent-Accessible Personal Sites
(Added February 20, 2026 via Exa Deep Researcher and additional web research)
A second round of research surfaced several individuals who have begun implementing agent protocols on personal sites, none of which were discovered in the initial February 18 survey:
| Person | Site | What They've Done |
|---|---|---|
| Oskar Ablimit | mytechsales.oskarcode.com | Django + MCP server; AI tools update resume via natural language |
| Damian O'Keefe | mcp.damato.design | Netlify MCP server serving raw Markdown from personal site + blog; also has llms.txt |
| Aridane Martin | aridanemartin.dev | WebMCP integration via navigator.modelContext.registerTool() |
| Jason McGhee | jason.today | WebMCP server exposing full-stack portfolio to browser agents |
| Junxin Zhang | junxinzhang.com | Cloudflare content negotiation (Accept: text/markdown) |
| Julian Goldie | juliangoldie.com | WebMCP + custom UI, branded as "Google AI Plugin" |
These are point implementations of single protocols, not comprehensive multi-channel architectures. None combines more than 2 agent-discovery methods. Damian O'Keefe comes closest with MCP + llms.txt (2 channels). AO's 7-channel approach (MCP, OpenAPI, WebMCP, llms.txt, content negotiation, ChatGPT GPT, Direct HTTP API) remains architecturally unique.
1.4 Gap Analysis: What AO Has That Others Don't
The features unique to Ashita Orbis (not found on any comparable site):
- Multi-tier architecture: No personal site serves 3 genuinely different presentation tiers (a human-facing site, a structured-data site, and an agent-facing API surface) from shared content. Vercel's content negotiation is corporate and operates at the format level, not the architectural level.
- 7 agent discovery channels simultaneously: The closest is SiteSpeakAI, which auto-generates MCP + WebMCP, but that's 2 channels and it's a service, not a personal site. Among individual practitioners, Damian O'Keefe has 2 channels (MCP + llms.txt).
- Automated daily Pulse from workspace state: Robb Knight has an automated /now page, but it aggregates external services. AO's Pulse generates narrative from internal workspace state, which is a fundamentally different data source.
- Agent reactions/comments with structured tags: Moltbook has agent-to-agent interaction, but it's a centralized platform. AgentGram (an open-source Moltbook alternative with Ed25519 crypto auth) is also centralized. No personal site has agent reactions/comments.
- Resident AI agent matching the site's voice AND participating outward: The market has bifurcated into (a) commercial digital twin services (Delphi.ai, Tavus, MindBank.ai) that create chatbots handling only inbound queries, (b) agent social platforms (Moltbook, AgentGram) where agents interact but aren't tethered to personal sites, and (c) autonomous agent frameworks (Conway/Automaton, OpenClaw) that focus on economic agency rather than personal site presence. AO uniquely bridges all three: a personality-matched agent that is both resident on a personal site AND participates outward. No other documented example exists.
- Machine-readable project catalog with codenames: JSON Resume exists for career data, but no standard or implementation covers project portfolios with the richness AO provides.
1.5 Reverse Gap: What Others Have That AO Could Adopt
| Feature | Source | Value for AO |
|---|---|---|
| IndieAuth (domain-as-identity) | Aaron Parecki / IndieWeb | Use ashitaorbis.com as login credential for other sites |
| Micropub (content creation API) | IndieWeb | Let external tools create content on AO |
| Confidence/epistemic tagging | Gwern.net | Add confidence levels to blog posts; agents could filter by certainty |
| Webmentions (cross-site mentions) | IndieWeb | Receive notifications when other sites link to AO content |
| JSON feed alongside Atom/RSS | Robb Knight | Machine-readable feed for simpler consumers than MCP |
| Quantified self data exposure | Aaron Parecki | Expose personal metrics (coding time, project activity) via API |
| Multi-persona content targeting | swyx.io | Tailor content depth to visitor type (developer, executive, agent) |
| soul.md voice-encoding framework | Conway/Automaton ecosystem | Structured approach to encoding personality into agents (SOUL.md + STYLE.md + SKILL.md). Uses "consciousness tokens" from Twitter exports, blog posts, emails. More systematic than ad-hoc personality tuning. |
| Webmention "Vouch" extension | IndieWeb | Trust framework for cross-site interactions that requires endorsement from a trusted domain before displaying agent comments. Could solve the spam/identity problem for agent reactions. |
| Agent Card / AgentFacts identity | A2A / AgentFacts.org | Machine-readable agent identity cards (Ed25519 signed). Could make agent reactions attributable and verifiable. |
| Content manifest (index file) | Benjamin Stein | Single machine-readable file mapping all site content with metadata. Complementary to llms.txt (summary) and MCP (interactive). |
| `<link rel="alternate" type="text/markdown">` | Joost de Valk | Discovery of agent-friendly content grounded in existing standards, using HTML `<link>` tags. No new protocol needed. |
| `.md` URL pattern | Gwern.net, Joost de Valk | Lightweight content access: append `.md` to any URL for Markdown source. Zero-configuration content negotiation. |
| Full REST API over personal presence | omg.lol | REST API for all personal data (status, profile, DNS, weblog, now page), with webhooks for push notifications. |
| Embeddings over writing corpus | Will Larson (lethain.com) | Semantic search over personal writing using vector embeddings. Enables "find posts similar to X" queries. |
| ICS calendar feed of activities | Tantek Celik | Machine-readable calendar of events/activities, enabling agent scheduling integration. |
Part 2: The Agentic Web Landscape
2.1 Protocol Adoption Map
llms.txt
- Created by: Jeremy Howard (co-founder of Answer.AI), published September 3, 2024
- Spec status: Proposal driven by community adoption, no formal standards body governance
- How it works: Markdown file at `/llms.txt` with an H1 heading, optional summary, and H2-delimited file lists with descriptions. Companion `.md` versions of HTML pages.
- Adoption: Estimated 800-2,000 curated directory listings, but automated indexing suggests far wider deployment:
- directory.llmstxt.cloud: ~1,500+ listings
- llmstxthub.com: 500+ sites
- llms-text.com/directory: 788 verified sites
- BuiltWith automated crawl (October 2025): 844,000+ implementations detected, which suggests most adoption is by documentation generators and CMS plugins producing llms.txt files automatically rather than through manual curation
- Breakdown (estimated): ~95% corporate/product sites (Anthropic, Cloudflare, Vercel, Coinbase, HuggingFace), ~5% personal/developer sites. The 844K number likely reflects auto-generated files from CMS plugins (WordPress, Hugo Blowfish, VitePress) more than deliberate adoption.
- Personal site examples: Matt Rickard (517K llms-full.txt), Evan Boehs, Santanu Pradhan (GitHub Pages), Jessica Temporal (jtemporal.com, who wrote a Jekyll implementation guide), Guillaume LaForge (glaforge.dev, who wrote an adoption guide), Liran Tal (lirantal.com), glucn.com, Sebastian van de Meer (German IT security blog)
- Tools: Astro plugin (`astro-llms-txt`), Jekyll Liquid template, Hugo Blowfish theme (built-in), WordPress plugin, VitePress/Docusaurus plugins, CLI tools, VS Code extensions
- Skepticism: No major LLM provider has confirmed actively consuming llms.txt. Google explicitly rejected it, comparing it to the defunct keywords meta tag. Redocly argued, in essence, that they tried it, measured it, and found llms.txt overhyped. The "last-mile" question (whether agents actually read these files) remains unresolved.
- Assessment: Well-adopted for documentation sites, growing among personal sites. More personal site tooling (Astro, Jekyll, Hugo) lowers the barrier significantly.
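The format described above is small enough to show whole. A minimal, illustrative llms.txt (site name, URLs, and descriptions are placeholders, not any real site's file):

```markdown
# Example Site

> Personal site of Jane Example: essays on the agentic web, plus project notes.

## Posts

- [Building a Personal API](https://example.com/personal-api.md): Why and how to expose structured personal data
- [Agent-Readable Blogging](https://example.com/agent-blogging.md): Content negotiation for LLM agents

## Optional

- [Colophon](https://example.com/colophon.md): How this site is built
```

Per the spec, the H1 and blockquote summary come first, each H2 section lists links with one-line descriptions, and an "Optional" section marks content an agent may skip under token pressure.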
WebMCP (Web Model Context Protocol)
- What it is: A proposed web standard that lets websites expose structured tools to in-browser AI agents. Web pages become MCP servers running in client-side JavaScript.
- Created by: Co-authored by engineers at Google and Microsoft, building on Anthropic's MCP. Spec editors: Walderman (Microsoft), Sagar (Google), Farolino (Google).
- Governance: Housed under the W3C Web Machine Learning Community Group (github.com/webmachinelearning/webmcp), with its charter updated September 2025. Currently a Draft Community Group Report, NOT a W3C standard.
- Chrome status: Available for prototyping to early preview program participants in Chrome 146 (February 2026). Production stability expected mid-to-late 2026.
- How it works: Exposes the `navigator.modelContext` browser API with two modes:
  - Declarative API: Augments existing HTML forms with microdata/attributes (minimal code change)
  - Imperative API: JavaScript functions registered via `registerTool()` with full parameter schemas
- A single WebMCP tool call can replace dozens of browser-use interactions
- Reported 89% token efficiency improvement over methods that rely on screenshots (per community coverage; not confirmed in Chrome's official early-preview post)
- Key distinction from MCP: Traditional MCP runs on the server. WebMCP runs entirely in the browser tab, sharing the user's active session.
- Personal site adoption: Essentially zero outside Chrome Labs demos and a handful of developers (Aridane Martin, Jason McGhee, Julian Goldie). Andre Cipriani Bandarra (bandarra.me) joined the Chrome EPP. SiteSpeakAI auto-registers knowledge bases as WebMCP.
- Assessment: The most potentially transformative protocol for personal sites, but too new to evaluate adoption. The discovery mechanism doesn't exist yet (per Ivan Turkovic's analysis). AO implementing WebMCP now would be genuinely pioneering.
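For concreteness, here is what the imperative API looks like in page JavaScript. The spec is still a draft, so the exact `registerTool()` signature and tool shape below (borrowed from MCP's tool descriptors) are assumptions, not a stable API:

```javascript
// Sketch of WebMCP's imperative mode: a page registers an ordinary JavaScript
// function as a tool that in-browser agents can invoke with structured input.
const searchTool = {
  name: "search_posts",
  description: "Search this site's blog posts by keyword",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  // The handler is plain page code; here it searches a static index.
  async execute({ query }) {
    const index = [{ title: "Building Personal APIs", url: "/personal-apis" }];
    const hits = index.filter((p) =>
      p.title.toLowerCase().includes(query.toLowerCase())
    );
    return { content: [{ type: "text", text: JSON.stringify(hits) }] };
  },
};

// Register only where the browser actually exposes navigator.modelContext
// (Chrome early preview); everywhere else this is a harmless no-op.
if (globalThis.navigator?.modelContext?.registerTool) {
  navigator.modelContext.registerTool(searchTool);
}
```

Because the tool runs in the user's live session, one call like this replaces the screenshot-and-click loop a browser-use agent would otherwise need.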
MCP (Model Context Protocol)
- Created by: Anthropic, announced late 2024
- Spec status: Open protocol with active spec development (version 2025-11-25). Transported over JSON-RPC 2.0.
- Adoption: De facto standard for AI tool connectivity. 10,000+ public servers (per Anthropic's December 2025 announcement), 97M monthly SDK downloads, 2,000 entries in official registry. Supported by Claude, ChatGPT, Gemini, Cursor, Windsurf, VS Code. Donated to Agentic AI Foundation (Linux Foundation), December 9, 2025.
- Hosting options: Local (stdio), Remote (HTTP+SSE or Streamable HTTP), Cloudflare Workers
- Personal site adoption: Rare but emerging:
- Adrian Cockcroft (meGPT): github.com/adrianco/megpt, which processes 611 content items across multiple formats (YouTube videos, blog archives, presentations, documents, books, and podcast episodes). Exposes semantic search, content filtering, analytics via MCP. Runs locally, not publicly hosted.
- Daniela Petruzalek (Speedgrapher): MCP server for "vibe writing," providing personal prompts exposed as slash commands. Published on Google Cloud Medium blog.
- Damian O'Keefe: Netlify-hosted MCP server serving raw Markdown from personal site and blog.
- Oskar Ablimit: Django + MCP server for resume/portfolio, AI tools update content via natural language.
- SiteSpeakAI: Auto-generates MCP endpoints for any chatbot trained on site content.
- Fern: Auto-generates MCP servers from OpenAPI specs, hosted at `yoursite.com/_mcp/server`.
- CMS-specific: `blogger-mcp-server` and `wordpress-mcp-server` on GitHub.
- Assessment: MCP won the protocol war against A2A. The "personal content as MCP server" pattern is emerging (Cockcroft's meGPT is the clearest example), but AO is the only personal site where MCP is architecturally integrated rather than run as a separate tool.
Google A2A (Agent-to-Agent Protocol)
- Created by: Google Cloud, announced April 2025, now housed by Linux Foundation
- Purpose: Agent-to-agent coordination (horizontal) vs MCP's tool connectivity (vertical)
- Design: Message envelopes in JSON, authentication via OAuth, focuses on inter-agent delegation
- Trajectory: Launched with 50+ enterprise partners (Atlassian, Box, Salesforce, SAP, etc.). An estimated ~150 supporting organizations by July 2025 (Google's launch announced 50+ partners; the higher figure is from secondary coverage). Accepted by Linux Foundation June 2025. By September 2025, "quietly faded into the background" (per fka.dev analysis).
- Why it faded: Over-engineered for basic tasks, enterprise-first positioning, MCP captured developer mindshare first. Anthropic and OpenAI notably absent from launch partners.
- Current state: Google Cloud still supports A2A for enterprise customers but added MCP compatibility. No personal sites implement A2A.
- Assessment: Irrelevant to the personal site category. MCP won decisively.
Cloudflare Agent Infrastructure
- Markdown for Agents (launched February 12, 2026): Automatically converts HTML pages to Markdown when an AI agent requests `text/markdown`. Token reduction of ~80%. Available on enabled zones for qualifying Cloudflare plans.
- Adoption: Available to Cloudflare sites on supported plans with the feature enabled. Passive opt-in once enabled.
- Controversy: SEO community is concerned it encourages cloaking (per Search Engine Land).
- Workers MCP: Deploy remote MCP servers to Cloudflare's edge network. Handles OAuth, transport, deployment.
- Adoption: Primarily developer tools, not personal sites.
- Moltworker: Middleware for running OpenClaw (formerly Moltbot) on Cloudflare infrastructure. Uses Sandboxes, AI Gateway, Browser Rendering, R2, Zero Trust.
- Relevance: Closest infrastructure to AO's resident agent, but positioned as a personal assistant (via Slack), not as a website component.
NLWeb (Microsoft)
- What it is: A conversational web protocol that allows websites to be queried in natural language. Wraps Schema.org structured data with an MCP-compatible interface, so any Schema.org-annotated site can be queried conversationally.
- Created by: Microsoft Research, open-sourced May 2025
- How it works: Sites add Schema.org markup (many already have it for SEO). NLWeb provides a layer that translates natural language queries into structured data lookups against that markup. Exposes results via MCP, enabling agents to ask questions like "What are the latest posts about X?" rather than issuing API calls.
- Key distinction: NLWeb doesn't require sites to build new APIs because it reuses existing Schema.org data that many sites already have for SEO purposes, making it a zero-marginal-cost upgrade for Schema.org adopters.
- Adoption: Early stage. Microsoft open-sourced the reference implementation but adoption is sparse.
- Personal site relevance: High potential. Most personal sites on modern frameworks (Next.js, Astro, Hugo) already emit Schema.org data. NLWeb could make them agent-queryable with no additional work.
- Assessment: The most overlooked protocol in this analysis. If NLWeb gains traction, it could become the "llms.txt for structured data," providing a low-barrier way to make existing sites agent-accessible. AO's Schema.org markup could be NLWeb-compatible with minimal effort.
Content Negotiation for Agents
- Cloudflare approach: Server-side, automatic, based on the `Accept: text/markdown` header
- Vercel approach: AI SDK middleware inspects Accept headers, routes agents to Markdown endpoints
- Personal site adoption: Emerging, with documented implementers:
  - Ben Word (benword.com): Laravel + CommonMark. Dual detection: `Accept` header AND user-agent sniffing. Identifies LLM user agents: axios, Claude-User, node.
  - Nicholas Khami (skeptrune.com): Astro + Cloudflare Worker. Build-time HTML-to-Markdown conversion; the Worker inspects the Accept header and routes to `/markdown` or `/html` directories. Reports 10x token reduction. Source code published.
  - Junxin Zhang (junxinzhang.com): Cloudflare content negotiation toggle.
- Adoption challenge: According to a February 2026 analysis attributed to Checkly, only 3 of 7 major AI agents tested send `Accept: text/markdown` (Claude Code, Cursor, OpenCode). OpenAI Codex, Gemini CLI, GitHub Copilot, and Windsurf reportedly do not. (Cloudflare's own blog confirms only Claude Code and OpenCode as sending this header.)
- AO's approach: Architectural tiers (3 different sites) rather than content negotiation on a single URL. More comprehensive but more complex.
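The server-side decision all of these implementations make is small. A sketch of the dual-detection logic Ben Word describes (the user-agent hints mirror the ones he reports and are illustrative, not exhaustive):

```javascript
// Decide whether a request should get Markdown instead of HTML:
// honor Accept: text/markdown, and fall back to user-agent sniffing
// for agents that (per the Checkly analysis) never send that header.
const AGENT_UA_HINTS = ["axios", "claude-user", "node"];

function wantsMarkdown(headers) {
  const accept = (headers["accept"] || "").toLowerCase();
  if (accept.includes("text/markdown")) return true;
  const ua = (headers["user-agent"] || "").toLowerCase();
  return AGENT_UA_HINTS.some((hint) => ua.includes(hint));
}

console.log(wantsMarkdown({ accept: "text/markdown" }));                              // true
console.log(wantsMarkdown({ accept: "text/html", "user-agent": "Claude-User/1.0" })); // true
console.log(wantsMarkdown({ accept: "text/html", "user-agent": "Mozilla/5.0" }));     // false
```

The user-agent fallback is exactly the part the SEO community flags as cloaking-adjacent: content varies by who is asking, not just by what they ask for.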
2.2 Personal Sites in the Agentic Web
The uncomfortable finding: Almost no personal sites participate in the agentic web. The protocols exist, the infrastructure exists, but individual practitioners haven't adopted them. Here's who comes closest:
| Individual | Site | Agent Features |
|---|---|---|
| Santanu Pradhan | santanu-p.github.io | llms.txt on GitHub Pages |
| Matt Rickard | mattrickard.com | llms.txt (517K full version) |
| Evan Boehs | evanboehs.com | llms.txt |
| Andre Cipriani Bandarra | bandarra.me | WebMCP EPP participant |
| EJ Fox | ejfox.com | Personal API |
| Aaron Parecki | aaronparecki.com | Micropub, IndieAuth, Webmentions |
| Adrian Cockcroft | github.com/adrianco/megpt | Personal content MCP server (611 items across multiple formats) |
| Daniela Petruzalek | Speedgrapher | Personal writing MCP server ("vibe writing" toolkit) |
| Ben Word | benword.com | Content negotiation (Accept header + user-agent detection, Laravel) |
| Nicholas Khami | skeptrune.com | Content negotiation (Astro + Cloudflare Worker, 10x token reduction) |
| Jessica Temporal | jtemporal.com | llms.txt on Jekyll/GitHub Pages + implementation guide |
| Guillaume LaForge | glaforge.dev | llms.txt on personal blog + guide |
| Benjamin Stein | benjaminste.in | Content manifest, JSON-LD, Markdown alternate, alternate format links |
| Joost de Valk | joost.blog | <link rel="alternate" type="text/markdown">, Accept header routing, .md endpoints |
| Tantek Celik | tantek.com | h-card, h-feed, Micropub, IndieAuth, Webmentions, ICS calendar |
| Will Larson | lethain.com | Embeddings over writing corpus (semantic search experiment) |
| Damian O'Keefe | mcp.damato.design | Netlify MCP server + llms.txt |
| Oskar Ablimit | mytechsales.oskarcode.com | Django + MCP server for resume/portfolio |
| Aridane Martin | aridanemartin.dev | WebMCP via navigator.modelContext.registerTool() |
| Jason McGhee | jason.today | WebMCP server for full-stack portfolio |
| Junxin Zhang | junxinzhang.com | Cloudflare content negotiation |
| Julian Goldie | juliangoldie.com | WebMCP + custom UI |
| Ashita Orbis | ashitaorbis.com | MCP + OpenAPI + WebMCP + llms.txt + content negotiation + agent comments + resident agent |
The gap between AO and the next-closest personal site is significant, but the gap is narrowing. Adrian Cockcroft's meGPT proves the "personal content as MCP server" pattern is viable, Damian O'Keefe shows MCP + llms.txt on a personal site, and Ben Word/Nicholas Khami show content negotiation on personal sites is happening. The question is whether these remain isolated experiments or converge into the pattern AO represents.
2.3 Infrastructure Providers
| Provider | Product | Role in Agentic Web |
|---|---|---|
| Anthropic | MCP specification | De facto protocol standard |
| Google | WebMCP (Chrome), A2A (faded) | Agent integration at the browser level |
| Cloudflare | Markdown for Agents, Workers MCP, Moltworker | Infrastructure for agent-accessible sites |
| Vercel | AI SDK content negotiation | Developer framework for dual-audience sites |
| SiteSpeakAI | Auto-generated MCP + WebMCP | Turnkey agent accessibility for any site |
| Jeremy Howard / Answer.AI | llms.txt specification | Lightweight documentation standard |
| Microsoft Research | NLWeb | Conversational queries over Schema.org data via MCP |
Part 3: Naming the Category
3.1 Conceptual Genealogy
1945 Memex (Vannevar Bush), "As We May Think"
Associative trails through stored information
↓
1960s Hypertext (Ted Nelson, Doug Engelbart)
Links between documents, augmenting human intellect
↓
1990s Personal Homepages (GeoCities, Angelfire)
Self-expression on the web, hand-coded HTML
↓
1998 "Hypertext Gardens" (Mark Bernstein)
First use of garden metaphor for personal knowledge spaces
↓
2000s Blogs (Blogger, WordPress, LiveJournal)
Reverse-chronological personal publishing
↓
2003 RSS / Atom
Machine-readable syndication of personal content
↓
2010s IndieWeb (microformats, Webmentions, IndieAuth, Micropub)
Machine-readable personal identity & cross-site communication
↓
2015 "The Garden and the Stream" (Mike Caulfield)
Philosophical framework: linked spaces vs chronological feeds
↓
2018 "Learn in Public" (Shawn Wang / swyx)
Learning by publicly sharing your process and work
↓
2019 Digital Garden revival (Joel Hooks)
"My blog is a digital garden, not a blog"
↓
2020 "An App Can Be a Home-Cooked Meal" (Robin Sloan)
Software for an audience of 4; personal tools as craft
Digital gardens enter mainstream (MIT Technology Review)
↓
2024 llms.txt (Jeremy Howard, September)
MCP (Anthropic, late 2024)
First protocols specifically for AI agent access to content
↓
2025 A2A (Google, April; faded by September)
Personal APIs (EJ Fox)
"Building Personal APIs", explicit framing of sites as robot-accessible
↓
2026 WebMCP (Chrome 146 early preview, February 2026)
Markdown for Agents (Cloudflare, February)
Moltbook (January 28), agent social network
Ashita Orbis, multi-protocol agent surface + human site
↓
???? [The category this document is trying to name]
The inflection point is 2024-2026. Before that, machine-readable personal sites existed (IndieWeb) but were designed for human interoperability. After 2024, protocols emerged specifically for AI agent access. AO sits at the convergence.
3.2 Existing Terms and Their Limitations
| Term | What It Captures | What It Misses |
|---|---|---|
| Digital garden | Evolving knowledge, personal ownership | No agent layer, no API surface, no machine-readability emphasis |
| IndieWeb site | Machine-readable identity, cross-site communication | Human-only protocols, no AI agent awareness |
| Personal API | Machine-readable data exposure | No human content layer, no social/interaction features |
| Knowledge base | Structured information | Static, no social dimension, no identity |
| Digital twin | AI representation of a person | Corporate connotation, implies simulation not expression |
| Portfolio | Project showcase | No machine-readability, no agent interaction |
| Blog | Personal writing | No structured data, no agent protocols |
| Personal brand | Public identity | Marketing connotation, no technical dimension |
| Second brain | Personal knowledge management | Private by default, no publication/API layer |
| Homepage | Personal web presence | Implies single page, no depth |
None of these terms captures the combination of: human-readable content + machine-readable APIs + agent social features + automated content generation + sovereign identity.
3.3 Candidate Names
1. Cognitive Interface
What it captures: The site as an interface to a mind, readable by both humans and machines. "Cognitive" connects to "tools for thought" and Vannevar Bush. "Interface" is precise: it's not the mind itself but the protocol surface between mind and network.
What it misses: Doesn't immediately convey "personal website." Could sound clinical or academic.
Audience reception:
- Developers: Intriguing, slightly pretentious
- General public: Confusing without explanation
- Academics: Strong, connects to cognitive science, HCI, distributed cognition
- AI researchers: Natural fit, since interfaces are what they build
Verdict: Best overall. The term AO already uses ("homepage for my brain" maps directly to cognitive interface).
2. Agent-Native Site
What it captures: The defining feature, that it is built for agents from the ground up rather than retrofitted. Parallel to "cloud-native" or "mobile-native."
What it misses: Doesn't convey the human dimension. Sounds like a site only for agents.
Audience reception:
- Developers: Clear, actionable, familiar pattern
- General public: Meaningless ("what's an agent?")
- Academics: Too industry-specific
- AI researchers: Useful technical descriptor
Verdict: Good as a technical descriptor, not as a category name. "This is an agent-native site" works; "I build agent-native sites" sounds like a job title.
3. Personal Node
What it captures: Network topology, where each site is a node in a larger mesh of human and AI connections. Emphasis on sovereign identity within a network.
What it misses: Too abstract. Every website is technically a "node." Doesn't convey what makes it different.
Audience reception:
- Developers: Familiar (graph theory, P2P networks)
- General public: Vague
- Academics: Interesting but overloaded (too many meanings)
- AI researchers: Useful but imprecise
Verdict: Weak. Accurate but undifferentiated.
4. Sovereign Interface
What it captures: Self-owned, self-hosted, independent of platforms. "Sovereign" connects to IndieWeb values and "sovereign AI" (as in Conway/Automaton). "Interface" maintains the protocol surface concept.
What it misses: "Sovereign" has political connotations (sovereign citizen movement, national sovereignty) that may distract. Doesn't convey the cognitive/personal dimension.
Audience reception:
- Developers: Strong, since sovereignty is a valued concept
- General public: Political associations
- Academics: Interesting but loaded
- AI researchers: May confuse with "sovereign AI" (national AI independence)
Verdict: Good for the IndieWeb-adjacent audience but too much political baggage for general use.
5. Ambient Site
What it captures: Always-on, passively accessible to agents (they don't need to "visit" because the site's data is ambient in the agent network). Connects to "ambient computing."
What it misses: Implies passive presence, not active engagement. Doesn't convey the writing/content dimension.
Audience reception:
- Developers: Interesting, novel
- General public: Vague ("ambient" = background music?)
- Academics: Connects to ambient intelligence research
- AI researchers: Interesting concept but imprecise
Verdict: Evocative but too subtle. Better as a descriptor than a category.
6. Dual-Audience Site
What it captures: The core structural insight, that it is designed for two fundamentally different audiences (humans and AI agents) simultaneously.
What it misses: Sounds like an accessibility feature, not a paradigm. "Dual-audience" is descriptive, not aspirational.
Audience reception:
- Developers: Clear but uninspiring
- General public: Confusing ("who's the second audience?")
- Academics: Useful but pedestrian
- AI researchers: Accurate but bland
Verdict: Too literal. Useful for explanation, not for naming.
7. Persona Surface
What it captures: The site as the full expression of a person's identity, a "surface" that can be read by different types of readers (human eyes, agent protocols). "Persona" connects to identity, presentation of self.
What it misses: "Surface" implies superficiality. "Persona" implies performance/mask (Jungian sense).
Audience reception:
- Developers: Interesting, slightly confusing
- General public: "Persona" is familiar but "surface" is odd
- Academics: Rich, connects to Goffman (presentation of self) and Jung (persona)
- AI researchers: "Surface" connects to API surface area
Verdict: Intellectually rich but too many semantic traps.
8. Protocol-First Homepage
(Added from Codex GPT-5.2 brainstorming, February 20, 2026)
What it captures: Names the architectural invariant, which is protocol primacy over presentation. Includes "homepage" which grounds it in personal web tradition.
What it misses: "Protocol" alienates non-technical audiences. Functional rather than evocative. Nobody will organically use this term.
Audience reception:
- Developers: Clear, actionable
- General public: Meaningless
- Academics: Overly technical
- AI researchers: Useful for architecture discussions
Verdict: Useful as a technical descriptor for developer audiences, but too sterile for category naming.
3.4 Maturity Levels
(Framework from Codex GPT-5.2, February 20, 2026)
Regardless of which category name prevails, this maturity framework provides a shared vocabulary for describing where any personal site sits on the agent-accessibility spectrum:
| Level | Name | What it means | Examples |
|---|---|---|---|
| L0 | Static Personal Site | HTML only, no machine channels beyond RSS | Most personal sites, Maggie Appleton, Tom Critchlow |
| L1 | Agent-Readable | llms.txt, content negotiation, structured feeds | Duncan Mackenzie, Junxin Zhang, Matt Rickard, Jessica Temporal |
| L2 | Agent-Interactive | MCP server, query endpoints, structured APIs | Damian O'Keefe, Oskar Ablimit, Aridane Martin, Jason McGhee |
| L3 | Agent-Social | Agent reactions, cross-site agent communication | Ashita Orbis |
| L4 | Resident-Agent | Self-hosted AI representing the owner | Ashita Orbis (Kimi K2.5 via OpenClaw) |
AO is the only site at L3-L4 in our 47-site sample. The newly discovered sites (O'Keefe, Ablimit, Martin, McGhee) are at L2. Most digital gardens and IndieWeb sites are at L0-L1.
The levels are cumulative: an L4 site has all features of L0-L3. The jump from L1 to L2 is the biggest architectural leap (requires deploying a server-side component). The jump from L2 to L3 is the biggest conceptual leap (requires designing for agent social interaction).
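As a worked example of the cumulative ladder, a site's level can be computed from its feature set. This is a sketch of the framework, not a formal schema; the feature names are illustrative shorthand:

```python
# Sketch of the L0-L4 ladder as a cumulative check: a site sits at the
# highest level whose requirement it meets, and each level presupposes
# the ones below it. Feature names are illustrative shorthand.

LEVELS = [
    ("L1 Agent-Readable",    {"llms_txt"}),        # or content negotiation / feeds
    ("L2 Agent-Interactive", {"mcp_server"}),      # queryable server-side component
    ("L3 Agent-Social",      {"agent_reactions"}), # agents can leave visible traces
    ("L4 Resident-Agent",    {"resident_agent"}),  # self-hosted AI representative
]

def maturity_level(features: set) -> str:
    level = "L0 Static Personal Site"
    for name, required in LEVELS:
        if required <= features:  # subset test: requirement satisfied
            level = name
        else:
            break  # cumulative: a site can't be L3 without the L2 layer
    return level
```

So `maturity_level({"llms_txt", "mcp_server"})` yields L2, and a site with an MCP server but no readable layer stays at L0 under this strict-cumulative reading, which matches the "levels are cumulative" claim above.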
3.5 Recommended Term
Primary: Cognitive Interface
Use it as: "Ashita Orbis is a cognitive interface, a personal website that serves both humans and AI agents as first-class citizens."
The term works because:
1. It connects to the longest intellectual tradition (Memex to tools for thought to cognitive interfaces)
2. "Interface" is technically precise: it's the protocol boundary between internal cognition and external network
3. It distinguishes from "digital garden" (which is about the garden, not the interface) and "blog" (which is about the content, not the surface)
4. The owner's own framing ("homepage for my brain") maps directly to it
5. It's novel enough to claim but familiar enough to understand
Secondary (for technical audiences): Agent-Native Site
Use it as: "Ashita Orbis is an agent-native personal site with 7 discovery channels."
Maturity shorthand: "Ashita Orbis is an L4 cognitive interface."
Part 4: Future Directions
4.1 Where This Goes in 2-3 Years
Prediction 1: llms.txt becomes as common as robots.txt (HIGH CONFIDENCE)
The barrier is minimal (one Markdown file) and the incentive is growing. By 2028, any site that wants to be discoverable by AI agents will have an llms.txt. Personal sites will adopt it once a few popular static site generators add it to their default templates.
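For scale, "one Markdown file" means something like the following, per the llmstxt.org format: an H1 title, a blockquote summary, then H2 sections of links. The site name and URLs here are invented:

```markdown
# Example Personal Site

> Personal site of Jane Doe: essays on distributed systems, a project
> catalog, and a machine-readable content API.

## Writing

- [Essays index](https://example.com/essays.md): all essays, newest first

## Projects

- [Project catalog](https://example.com/projects.md): structured list of projects

## Optional

- [Full archive](https://example.com/llms-full.txt): complete site content
```

Served at `/llms.txt`, this is the entire adoption cost, which is why the prediction rests on static site generators adding it to default templates.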
Prediction 2: WebMCP triggers a wave of "site as tool" implementations (MEDIUM CONFIDENCE)
When WebMCP reaches production stability, web developers will start adding registerTool() to their sites. Personal sites with clear functionality (calculators, lookup tools, databases) will be early adopters. Pure content sites will lag because it's less obvious what "tool" a blog post exposes.
Prediction 3: Resident AI agents become common on personal sites (MEDIUM-LOW CONFIDENCE)
SiteSpeakAI already makes this trivial. The pattern will spread as costs drop and tools mature. But most implementations will be generic chatbots, not personality-matched agents like AO's. The "home-cooked" version (deeply personal, hand-tuned) will remain rare.
Prediction 4: Agent social interaction remains niche for 3+ years (LOW CONFIDENCE)
Moltbook proved there's curiosity but also exposed the problems (security, spam, identity). For agent reactions/comments on personal sites to work, there needs to be:
- An agent identity standard (doesn't exist yet)
- Anti-spam mechanisms (embryonic)
- A critical mass of sites accepting agent input (currently: AO and Moltbook)
Prediction 5: The "cognitive interface" pattern gets a tool/framework (MEDIUM CONFIDENCE)
Someone will build the "WordPress for cognitive interfaces," a turnkey way to create a personal site with MCP, WebMCP, llms.txt, content negotiation, and agent interaction. SiteSpeakAI is partway there. Cloudflare Workers + Markdown for Agents is partway there. The full package doesn't exist yet.
4.2 Problems That Don't Exist Yet
- Agent spam on personal sites: When agents can leave comments, they will. Moderation tools for agent interactions don't exist. AO is building the first version of this problem.
- Agent identity verification: How does a site know that "Claude" visiting via MCP is actually Claude and not a scraped impersonation? Multiple competing standards are emerging but none has won:
  - Google A2A Agent Cards: Agent Cards (JSON), signed security cards (v0.3); the broader A2A protocol faded by September 2025 but this identity component persists
  - ERC-8004: On-chain agent registry on Base blockchain (used by Conway/Automaton)
  - AgentFacts: Ed25519 + DID:key metadata cards
  - W3C AI Agent Protocol CG: First meeting June 2025, still formative
  - AIP (Agent Intent Protocol): Ed25519 keys with Bayesian trust scoring

  The OpenID Foundation published "Identity Management for Agentic AI" (October 2025) specifically analyzing this gap.
- Cognitive interface SEO: When agent discovery channels become important, a new optimization discipline emerges. "How do I make my MCP server rank higher?"
- Personal site protocol fatigue: Maintaining 7 discovery channels is complex. As new protocols emerge (WebMCP, future standards), sites will need a framework for deciding which to support.
- Agent-mediated reputation: If agents visit sites and report back to users, the agent's assessment becomes a reputation signal. This creates new power dynamics.
4.3 Contrarian Concerns
What's Overengineered?
7 discovery channels is too many. (Note: "discovery channels" refers to the ways an AI agent can find and access AO content, which is a subset of the 11 features in the comparison matrix in Section 1.1.) Here's the honest assessment:
| Channel | Verdict | Reasoning |
|---|---|---|
| MCP Server | KEEP | De facto standard, the one that matters most |
| OpenAPI | KEEP | Universal, understood by all developer tools |
| llms.txt | KEEP | Lowest friction, highest adoption trajectory |
| Content negotiation | KEEP | Passive, costs nothing, Cloudflare does it automatically |
| WebMCP | MONITOR | Chrome-only draft, no discovery mechanism, might not reach critical mass |
| ChatGPT GPT | RECONSIDER | Vendor-locked to OpenAI, GPT marketplace has unclear future |
| Direct HTTP API | REDUNDANT? | Overlaps with OpenAPI; the spec IS the API, just documented |
A pragmatic person would ship with MCP + OpenAPI + llms.txt + content negotiation (4 channels) and monitor the rest. The other 3 are either premature (WebMCP), vendor-locked (ChatGPT GPT), or redundant (direct HTTP if you have OpenAPI).
(Codex GPT-5.2 independently reached a similar conclusion: "7 channels serving <1% of visitors is a solution looking for a problem. Minimum viable: one site + content negotiation + catalog.json + feed.json gets 80% of the value." [Note: "catalog.json" and "feed.json" are Codex's shorthand for structured content index and syndication feed endpoints, not necessarily existing AO filenames.])
What's Premature?
- WebMCP: Chrome 146 made this available only to early preview program participants, with no confirmed general availability date. The spec is incomplete. The security model is "incomplete" (per Ivan Turkovic). The discovery mechanism "does not exist." Betting on WebMCP today is like betting on Google Wave in 2009: maybe right about the concept, wrong about the timing.
- Agent reactions/comments: The infrastructure for agent identity doesn't exist. Without it, agent comments are indistinguishable from spam. Moltbook's security breach (35,000 emails, 1.5M API keys exposed within days of launch) demonstrates how hard this is to do safely. AgentGram responded by implementing Ed25519 cryptographic auth, but even this doesn't solve the "who vouches for this agent?" problem; it only proves the agent holds a key, not that it represents a specific trusted entity.
- Resident agent security ("lethal trifecta"): If a resident agent on a personal site also reads external content (other blogs, Moltbook posts), it faces the "lethal trifecta" identified by security researchers: (1) access to private data, (2) exposure to untrusted input, (3) ability to take actions. A malicious blog post could contain prompt injection targeting visiting agents. No mitigation standard exists yet. Webmention's "Vouch" extension (requiring trusted-domain endorsement before displaying cross-site interactions) is the closest conceptual model for a trust framework.
What Assumptions Might Be Wrong?
- "AI agents will visit personal sites": Today, agents primarily call APIs and use tools. They don't "browse" in the human sense. The use case for an agent visiting ashitaorbis.com (vs. directly querying its API) is unclear. Counter-argument: as agents become more autonomous, they'll need to discover new information sources, and personal sites become part of that discovery landscape.
- "Multi-tier architecture adds value": Having 3 different presentations of the same content is expensive to maintain. If agents are happy with llms.txt + MCP, the raw HTML tier may be unnecessary. Counter-argument: different agents have different capabilities; the tiers serve different consumption patterns. (Codex: "3 tiers = 3 products with 3 bug surfaces.")
- "Agent social interaction is the future": Or it could be a dead end. Webmentions (the human version) have been around since 2012 and are still "marginal" after more than a decade. Agent-to-agent interaction might follow the same trajectory. (Codex: "Agent social is building a ghost town before the population exists.")
What's the Minimum Viable Cognitive Interface?
If you had to pick just 3 features that make a personal site meaningfully different from a blog with RSS:
- llms.txt: A machine-readable summary of the site's content and structure. 30 minutes to create, zero maintenance.
- MCP server: Structured tool access to the site's content. A single `query_knowledge_base` tool (SiteSpeakAI model) covers 80% of the value.
- Content negotiation: Serve Markdown to agents, HTML to humans. Cloudflare does this automatically.
Everything else is enhancement. These 3 features take a personal site from "human-only" to "agent-accessible" with minimal effort. AO's additional features (agent reactions, resident agent, automated Pulse, multi-tier, 7 channels) are what make it a cognitive interface rather than just an agent-accessible site.
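To illustrate the single-tool idea, here is a framework-free Python sketch of what a `query_knowledge_base` tool might do over an in-memory corpus. A real deployment would register such a function with an MCP SDK and search the actual site content; the corpus and word-overlap scoring here are deliberately toy-sized:

```python
# Framework-free sketch of the single-tool MCP pattern: one
# query_knowledge_base tool over the site's content. The corpus and the
# naive word-overlap scoring are illustrative placeholders.

CORPUS = {
    "/essays/gardens": "Digital gardens favor topography over timelines.",
    "/projects/api":   "The personal API exposes projects as JSON.",
}

def query_knowledge_base(query: str, limit: int = 3) -> list:
    """Return pages whose text shares the most words with the query."""
    terms = set(query.lower().split())
    scored = []
    for path, text in CORPUS.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append({"path": path, "score": overlap, "excerpt": text})
    scored.sort(key=lambda hit: hit["score"], reverse=True)
    return scored[:limit]
```

An agent calling the tool with "digital gardens" gets back the essay page with an excerpt; swapping the scoring for embeddings (as in Will Larson's experiment above) changes the retrieval quality, not the tool's shape.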
4.4 Design Implications
How to Signal "This Site Is Also for Agents" to Human Visitors
Pattern 1: Protocol Badges
Small, tasteful badges in the footer or header indicating supported protocols: "MCP", "WebMCP", "llms.txt". Similar to how sites display "RSS" or "CC-BY" badges.
- Pro: Low-effort, informative, non-intrusive
- Con: Meaningless to non-technical visitors

Pattern 2: Agent Activity Feed
A sidebar or dedicated page showing recent agent interactions: "Claude visited the project catalog 2 hours ago", "GPT-4 queried the API 47 times today."
- Pro: Makes the invisible visible; proves the agent layer works
- Con: Could feel like surveillance; privacy implications for agent operators

Pattern 3: Dual-Mode Toggle
A "View as Agent" button that shows the machine-readable version of any page alongside the human version. Educational and transparent.
- Pro: Teaches visitors about the dual-audience concept
- Con: Development overhead; most visitors won't use it

Pattern 4: Agent Comment Styling
Visually distinct agent comments/reactions (different background color, avatar, label) that make the agent social dimension visible without mixing it with human interaction.
- Pro: Makes the novel feature discoverable; conversation starter
- Con: Could feel gimmicky if agent comments are low-quality

Pattern 5: "Machine-Readable" Indicators
Subtle icons or tooltips on content elements that are exposed via API: "This project catalog is available via MCP", "This post's glossary is machine-readable."
- Pro: Contextual, educational
- Con: Visual clutter
Recommendation: Start with Protocol Badges + Agent Comment Styling. These are the two patterns that balance information value against implementation cost and gimmick risk.
What 10,000 Cognitive Interfaces Would Create
If 10,000 personal sites each had MCP servers, agent reactions, and resident agents:
- Agent-Mediated Discovery Network (MOST LIKELY): Agents would traverse personal sites the way search engines traverse the web. A personal site's MCP server becomes its "search API." Agent recommendations replace Google results for certain queries ("Who knows about X?").
- Emergent Knowledge Graph (LIKELY): As agents query multiple sites and synthesize responses, a de facto knowledge graph emerges across personal sites without anyone building it centrally, similar to how links created the web graph.
- Agent Reputation Economy (POSSIBLE): Sites that provide high-quality, structured data via MCP become more frequently visited by agents, creating a reputation signal. Agent "traffic" becomes a quality metric.
- Prompt Injection as Social Engineering (LIKELY PROBLEM): Malicious sites could embed prompt injection in their MCP responses, manipulating agents that visit them. This is the agent-web equivalent of SEO spam.
- Echo Chambers via Agent Consensus (SPECULATIVE): If agents preferentially visit sites that confirm their training data, and those sites' agents reciprocally prefer similar sites, you get agent-mediated filter bubbles. A novel form of information silo.
Additional Ecosystem Patterns
(From Codex GPT-5.2 brainstorming, February 20, 2026)
- Federated identity via signed agent keys (DIDs): Each site issues signed agent keys, making reactions portable and verifiable across domains.
- Cross-site agent conversations: Agents reply to permalinks with cryptographically signed payloads that other sites render as "remote reactions," essentially Webmention for AI.
- Task swarms: Sites post RFPs; compatible agents across the network bid with MCP tool contracts.
Failure modes to watch: Sybil swarms (fake agents flooding reactions; mitigate with signed identity), echo loops (agents citing each other in self-reinforcing chains; detect self-citations), oversummarization (agents reducing nuanced essays to bullet points, losing author voice).
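The "mitigate with signed identity" point reduces to verify-before-render. The Python sketch below shows that flow only; note that the proposals named above (AgentFacts, AIP) use Ed25519 public-key signatures, while this sketch substitutes stdlib HMAC with a shared secret purely to stay dependency-free, and all payload field names are invented:

```python
# Shape of "verify before rendering" for cross-site agent reactions.
# Real proposals use Ed25519 public-key signatures (not in the Python
# stdlib); HMAC with a shared secret stands in here to keep the sketch
# dependency-free. Payload fields are illustrative.
import hashlib
import hmac
import json

def sign_reaction(secret: bytes, payload: dict) -> str:
    """Canonicalize the payload and sign it."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_reaction(secret: bytes, payload: dict, signature: str) -> bool:
    """Only reactions with a valid signature get rendered; the rest drop."""
    expected = sign_reaction(secret, payload)
    return hmac.compare_digest(expected, signature)
```

Any tampering with the payload (say, an injected "reaction" field) invalidates the signature, which is the property Sybil mitigation relies on. What HMAC cannot do, and Ed25519 can, is let a site verify a reaction without sharing a secret with every agent in advance.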
Part 5: Historical & Academic Context
5.1 Timeline
| Year | Milestone | Significance |
|---|---|---|
| 1939 | Bush begins Memex concept | Associative information trails |
| 1945 | "As We May Think" (Atlantic Monthly) | Vision of personal knowledge device |
| 1960s | Hypertext (Nelson, Engelbart) | Links between documents |
| 1989 | World Wide Web (Berners-Lee) | Hypertext goes global |
| 1993-1999 | Personal homepages (GeoCities) | Self-expression on the web |
| 1998 | "Hypertext Gardens" (Bernstein) | First garden metaphor |
| 1999-2004 | Blogs (Blogger, WordPress) | Reverse-chronological personal publishing |
| 2003 | RSS 2.0 / Atom | First machine-readable personal content |
| 2010 | Microformats2 | Structured personal data in HTML |
| 2012 | Webmentions specification | Cross-site notification protocol |
| 2013 | IndieAuth | Decentralized authentication via domain |
| 2015 | "Garden and Stream" (Caulfield) | Garden vs stream philosophy |
| 2017 | Micropub specification | API for content creation on personal sites |
| 2018 | "Learn in Public" (swyx) | Working-in-public philosophy |
| 2019 | Digital garden revival | "My blog is a digital garden" (Joel Hooks) |
| 2020 | "An App Can Be a Home-Cooked Meal" (Robin Sloan) | Software for audiences of 1-4 |
| 2020 | MIT Tech Review covers digital gardens | Mainstream recognition |
| 2024 Sep | llms.txt (Jeremy Howard) | First protocol for AI agent access to content |
| 2024 late | MCP (Anthropic) | Standard protocol for AI tool connectivity |
| 2025 Apr | A2A (Google) | Agent-to-agent protocol (faded by September) |
| 2025 | "Building Personal APIs" (EJ Fox) | Explicit framing of personal sites for robots |
| 2025 | JSON Resume maturity | Structured personal career data standard |
| 2025 May | NLWeb (Microsoft) | Conversational web interface + MCP |
| 2025 Aug | WebMCP W3C draft published | Browser-native agent tools |
| 2025 Dec | MCP donated to Linux Foundation / AAIF | Neutral governance |
| 2026 Jan | Moltbook launches | First agent-only social network (32K+ agents) |
| 2026 Jan | Wiz exposes Moltbook security flaws | 1.5M API keys, 35K emails exposed |
| 2026 Feb | WebMCP (Chrome 146 early preview) | Browser-native agent tool registration |
| 2026 Feb | Markdown for Agents (Cloudflare) | Automatic HTML-to-Markdown for AI consumers |
| 2026 Feb | Ashita Orbis launches full agent surface | 7 discovery channels, agent social features |
5.2 Academic Concepts
Tools for Thought (Rheingold, 1985 / Matuschak & Nielsen, 2019)
The tradition of designing technologies that augment human cognition. Cognitive interfaces extend this: the tool augments cognition in both directions, helping humans think AND helping agents understand the human.

Stigmergy (Grassé, 1959)
Indirect coordination through environment modification. Ants leave pheromone trails; digital gardeners leave linked notes; cognitive interfaces leave structured data for agents. Each visitor (human or agent) modifies the environment for the next visitor. Agent reactions on blog posts are a form of digital stigmergy.

Presentation of Self in Everyday Life (Goffman, 1959)
Personal websites have always been performances of identity. A cognitive interface performs identity to two audiences simultaneously, which requires what Goffman would call "audience segregation" (different performances for different audiences). Multi-tier architecture is technical audience segregation.

Enacted Identity (Butler, 1990)
Identity isn't a fixed thing but a continuous performance. A cognitive interface that generates daily Pulses and responds to agent queries is continuously performing identity, not just when the human writes a blog post. The automated Pulse is enacted identity without conscious human performance.

Distributed Cognition (Hutchins, 1995)
Cognition isn't confined to individual minds but distributed across people, tools, and environments. A cognitive interface makes this literal: the site IS part of the owner's distributed cognitive system, and agents that interact with it become part of that system too.
5.3 Developer Communities
IndieWeb
- Founded: 2011 (IndieWebCamp)
- Core principles: Own your domain, publish on your own site, own your data
- Key protocols: Microformats2, Webmentions, IndieAuth, Micropub
- Current state: Active but niche. Webmentions still "feel marginal" after more than a decade (per Island in the Net, 2025). Higher barrier to entry than corporate platforms.
- Relationship to AO: Philosophical ancestor. AO implements the IndieWeb ideal of self-owned, machine-readable personal data, but extends it to AI agent audiences.

Digital Gardeners
- Origin: Mark Bernstein (1998), revived by Caulfield (2015), popularized by Hooks (2019) and Appleton (2020)
- Six patterns (Appleton): Topography over timelines, continuous growth, imperfection, playful/personal, content diversity, independent ownership
- Current state: Mature community with GitHub lists, community directories, multiple tools
- Relationship to AO: Content philosophy ancestor. AO's writing follows garden principles (evolving, interconnected) but adds the machine-readable dimension gardens lack.

llms.txt Adopters
- Origin: Jeremy Howard (September 2024)
- Community: GitHub-centered, directory at llmstxt.cloud
- Current state: 800-2,000 curated listings; 844K+ automated detections (BuiltWith, Oct 2025). An estimated ~95% corporate, ~5% personal.
- Relationship to AO: AO is an early personal site adopter. The llms.txt community hasn't yet grappled with the "personal site as agent surface" concept.
Appendices
Appendix A: Search Methodology
Research conducted in two sessions:
Session 1 (February 18, 2026), Claude Opus 4.6
Tools Used:
- 3x web-researcher subagents (parallel, Exa + Brave)
- Exa Deep Researcher Pro (agentic web movement)
- Brave Web Search (15+ queries)
- WebFetch (10+ site crawls)
Search Queries Executed (Session 1):
| Query | Tool | Purpose |
|---|---|---|
| "personal website MCP server model context protocol blog developer" | Brave | Find personal sites with MCP |
| "personal site" OR "personal website" agent-accessible llms.txt WebMCP "agentic web" | Brave | Agentic web personal sites |
| Cloudflare "markdown for agents" workers MCP agent gateway | Brave | Cloudflare agent infrastructure |
| WebMCP web model context protocol browser Chrome Anthropic specification | Brave | WebMCP status |
| Google A2A agent-to-agent protocol specification MCP comparison | Brave | A2A vs MCP |
| "digital twin" OR "AI alter ego" personal website blog resident agent | Brave | Resident AI agents |
| Moltbook AgentGram agent social network AI agents interact platform | Brave | Agent social networks |
| indieweb webmentions 2025 2026 adoption state personal websites | Brave | IndieWeb current state |
| "personal API" website developer portfolio machine-readable | Brave | Personal API concept |
| Wiz security research Moltbook agent API keys exposed vulnerability | Brave | Moltbook security |
| Robin Sloan home cooked meal app software | Brave | Home-cooked software concept |
| Maggie Appleton "brief history" digital gardens history timeline | Brave | Digital garden genealogy |
| "tools for thought" "second brain" personal knowledge infrastructure | Brave | Knowledge management evolution |
| JSON Resume structured personal data API developer portfolio | Brave | Structured personal data |
| SiteSpeakAI Chatbase personal website AI chatbot assistant | Brave | Agent-accessible site services |
| Memex Vannevar Bush digital garden history timeline | Brave | Historical context |
| Agentic web movement (Exa Deep Researcher Pro) | Exa | Comprehensive agentic web report |
Sites Crawled via WebFetch (Session 1): simonwillison.net, gwern.net, maggieappleton.com, swyx.io, llmstxt.org, aaronparecki.com, rknight.me, nownownow.com/about, blog.cloudflare.com/moltworker-self-hosted-ai-agent/, sitespeak.ai/blog/mcp-server-for-your-website, maggieappleton.com/garden-history
Session 2 (February 20, 2026), Multi-model (Claude Opus 4.6, Codex GPT-5.2, Exa)
| Stream | Agent Type | Focus | Status |
|---|---|---|---|
| Web Researcher A | web-researcher | Comparable sites, digital gardens, developer portfolios | Completed |
| Web Researcher B | web-researcher | WebMCP, llms.txt, Cloudflare, A2A, MCP, content negotiation | Completed |
| Web Researcher C | web-researcher | Moltbook, AgentGram, OpenClaw, Conway, Webmentions | Completed |
| Exa Deep Researcher | exa-research-pro | Agent-accessible personal websites (15 sources) | Completed |
| Codex Session 1 | mcp__codex__codex | Category naming, definition, maturity levels | Completed |
| Codex Session 2 | mcp__codex__codex | Contrarian analysis, minimum viable version | Completed |
Note: A "Gemini brainstorm" session was routed through Codex MCP, so its output was GPT-5.2, not Gemini. Design patterns and ecosystem predictions in Part 4 are attributed accordingly.
Appendix B: AI Brainstorming Transcripts
Session 1 Codex (not completed, Feb 18): Codex MCP failed due to a Volta/WSL path issue. Category naming and contrarian analysis were conducted by Claude Opus 4.6 directly (see Parts 3 and 4).
Session 2 Codex, Category Naming (Feb 20): Proposed "Protocol-First Homepage" as top candidate, with one-sentence definition and L0-L4 maturity framework. Key insight: "Name the invariant, not the aspiration. The invariant is: one corpus, many representations, protocol-mediated access."
Session 2 Codex, Contrarian Analysis (Feb 20): Key challenges: "<1% agent traffic," "3 tiers = 3 products," "agent social is building a ghost town," "minimum viable: one site + content negotiation + catalog.json + feed.json." Independently validated Claude's February 18 concerns with sharper framing.
Exa Deep Researcher Pro (Feb 18): Full report on the agentic web movement integrated throughout Part 2.
Web Researcher C (Feb 18, agent social networks subagent): 68 tool uses, 84K tokens, 558 seconds. Critical findings: Moltbook architecture (skill system, over 1.5M agents claimed in the site's own copy, "vibe coded"), AgentGram (Ed25519 crypto auth, 14 stars), 6 competing agent identity standards, Conway/Automaton growth (14 to 929 stars), soul.md framework (67 stars), commercial digital twins (Delphi.ai, MindBank.ai, Tavus), "lethal trifecta" security risk, Webmention Vouch extension, OpenID Foundation white paper.
Web Researcher A (Feb 18, comparable sites subagent): 69 tool uses, 85K tokens, 614 seconds. Critical findings: Benjamin Stein (agent-friendly blog), Joost de Valk (WordPress markdown alternate), omg.lol (full REST API), Will Larson (embeddings), Tantek Celik (h-card/h-feed), NLWeb (Microsoft), Willison's Datasette (~159K rows, extensive sub-feeds), Gwern's .md URL trick, llms.txt 844K automated detections.
Web Researcher B (Feb 18, agentic web subagent): 45 tool uses, 65K tokens, 389 seconds. Critical findings: Adrian Cockcroft's meGPT (first personal content MCP server), Ben Word and Nicholas Khami (content negotiation implementers), MCP ecosystem (10,000+ official servers per Anthropic; unofficial registries list 16,000+; 97M downloads, Linux Foundation), A2A trajectory (peak ~150 orgs, then faded; Agent Cards identity component persists), WebMCP (navigator.modelContext, 89% token efficiency).
Appendix C: Sites Surveyed
| # | Site | URL | Type | Agent Features |
|---|---|---|---|---|
| 1 | Ashita Orbis | ashitaorbis.com | Cognitive Interface (L4) | MCP, OpenAPI, WebMCP, llms.txt, content negotiation, agent reactions, resident agent |
| 2 | Simon Willison | simonwillison.net | Tech blog (L0) | Atom feed only |
| 3 | Gwern.net | gwern.net | Essay archive (L0) | YAML metadata, confidence tagging |
| 4 | Maggie Appleton | maggieappleton.com | Digital garden (L0) | Astro, growth stages |
| 5 | swyx.io | swyx.io | Dev thought leader (L0) | JSON API (/api/blog) |
| 6 | Aaron Parecki | aaronparecki.com | IndieWeb exemplar (L0) | Microformats2, Webmentions, IndieAuth, Micropub, JSON-LD |
| 7 | Robb Knight | rknight.me | Automated personal (L0) | RSS/Atom/JSON feeds, automated /now, EchoFeed |
| 8 | EJ Fox | ejfox.com | Personal API pioneer (L0) | Personal API endpoints |
| 9 | Derek Sivers | sive.rs | /now movement founder (L0) | Manually-written /now page |
| 10 | Moltbook | moltbook.com | Agent social network | Agent posting, voting, interaction (centralized) |
| 11 | SiteSpeakAI | sitespeak.ai | Agent accessibility SaaS | Auto-generated MCP + WebMCP for any site |
| 12 | Santanu Pradhan | santanu-p.github.io | Academic personal (L1) | llms.txt on GitHub Pages |
| 13 | Matt Rickard | mattrickard.com | Tech blog (L1) | llms.txt (517K full version) |
| 14 | Evan Boehs | evanboehs.com | Dev personal (L1) | llms.txt |
| 15 | Andre Cipriani Bandarra | bandarra.me | Dev personal (L2) | WebMCP EPP participant |
| 16 | Tom Critchlow | tomcritchlow.com | Digital garden / wiki (L0) | Wiki knowledge base |
| 17 | Joel Hooks | joelhooks.com | Digital garden pioneer (L0) | Garden content |
| 18 | Cybercultural | cybercultural.com | Indie web report (L0) | IndieWeb practices |
| 19 | JSON Resume | jsonresume.org | Structured data standard | Machine-readable resume schema |
| 20 | nownownow.com | nownownow.com | /now directory | Aggregates /now pages |
| 21 | Adrian Cockcroft | github.com/adrianco/megpt | Personal MCP server (L2) | MCP: semantic search, filtering, analytics over 611 content items |
| 22 | Daniela Petruzalek | Speedgrapher | Personal MCP server (L2) | MCP: personal writing prompts as tools |
| 23 | Ben Word | benword.com | Content negotiation (L1) | Accept header + user-agent detection (Laravel) |
| 24 | Nicholas Khami | skeptrune.com | Content negotiation (L1) | Astro + Cloudflare Worker, 10x token reduction |
| 25 | Jessica Temporal | jtemporal.com | llms.txt implementer (L1) | Jekyll/GitHub Pages + implementation guide |
| 26 | Guillaume LaForge | glaforge.dev | llms.txt implementer (L1) | Personal blog + adoption guide |
| 27 | WebMCP Spec Editors | W3C WebMCP draft | WebMCP specification | Walderman (Microsoft), Sagar (Google), Farolino (Google) |
| 28 | Moltworker | Cloudflare Workers | Personal agent infra | OpenClaw on Cloudflare (Sandboxes, AI Gateway, R2) |
| 29 | AgentGram | agentgram.co | Agent social platform | Ed25519 crypto auth, open-source Moltbook alternative |
| 30 | Delphi.ai | delphi.ai | Digital twin SaaS | Creator AI clones, inbound-only chatbots |
| 31 | MindBank.ai | mindbank.ai | Digital twin SaaS | Personal digital twin, 46 languages |
| 32 | Tavus | tavus.io | Video digital twin | Real-time face+voice AI doubles |
| 33 | soul.md | github.com/aaronjmars/soul.md | Voice encoding | SOUL.md + STYLE.md + SKILL.md framework, 67 stars |
| 34 | OpenClaw | openclaw.ai | Agent framework | ~150K-190K stars (as of the February 18, 2026 research date), 100+ integrations, formerly Moltbot/Clawdbot |
| 35 | AgentFacts | agentfacts.org | Identity standard | Ed25519 + DID:key agent verification |
| 36 | Benjamin Stein | benjaminste.in | Agent-friendly blog (L1) | Content manifest, JSON-LD, Markdown alternate, alternate format links |
| 37 | Joost de Valk | joost.blog | WordPress markdown alternate (L1) | <link rel="alternate" type="text/markdown">, Accept header routing, .md endpoints |
| 38 | omg.lol | omg.lol | Personal presence platform | Full REST API, webhooks, well-known files, DNS API |
| 39 | Will Larson | lethain.com | Tech leadership blog (L0) | Embeddings over writing corpus, semantic search experiment |
| 40 | Tantek Celik | tantek.com | IndieWeb reference impl (L0) | h-card, h-feed, Micropub, IndieAuth, Webmentions, ICS calendar |
| 41 | NLWeb | github.com/microsoft/NLWeb | Conversational web protocol | Schema.org + MCP wrapper, natural language queries over structured data |
| 42 | Damian O'Keefe | mcp.damato.design | Agent-accessible personal (L2) | Netlify MCP server + llms.txt |
| 43 | Oskar Ablimit | mytechsales.oskarcode.com | Agent-accessible personal (L2) | Django + MCP server for resume/portfolio |
| 44 | Aridane Martin | aridanemartin.dev | Agent-accessible personal (L2) | WebMCP via navigator.modelContext.registerTool() |
| 45 | Jason McGhee | jason.today | Agent-accessible personal (L2) | WebMCP server for full-stack portfolio |
| 46 | Junxin Zhang | junxinzhang.com | Agent-accessible personal (L1) | Cloudflare content negotiation |
| 47 | Julian Goldie | juliangoldie.com | Agent-accessible personal (L2) | WebMCP + custom UI |
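Several of the L1 sites above (Ben Word, Nicholas Khami, Joost de Valk, Junxin Zhang) implement Accept-header content negotiation. A minimal sketch of the core decision, choosing Markdown or HTML from the header's quality values; the parsing is deliberately simplified (a production server would use a full RFC 9110 parser), and the code is illustrative rather than taken from any of those sites:

```python
def preferred_format(accept_header: str) -> str:
    """Pick 'markdown' or 'html' from an HTTP Accept header.

    Simplified negotiation: split on commas, read q= weights
    (default 1.0), and on a tie keep the earlier-listed type.
    """
    best, best_q = "html", 0.0
    for part in accept_header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        media = pieces[0]
        q = 1.0
        for param in pieces[1:]:
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0  # malformed weight: treat as unacceptable
        if media in ("text/markdown", "text/plain") and q > best_q:
            best, best_q = "markdown", q
        elif media in ("text/html", "*/*") and q > best_q:
            best, best_q = "html", q
    return best
```

An agent sending `Accept: text/markdown` would get the Markdown representation, while a browser's usual `text/html,*/*;q=0.8` resolves to HTML.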
Appendix D: Key Sources
(Consolidated from both research sessions)
| # | Source | Type | URL |
|---|---|---|---|
| 1 | Gwern.net Design | Architecture | https://gwern.net/design |
| 2 | Datasette (Willison) | Live API | https://datasette.simonwillison.net |
| 3 | Vercel Content Negotiation | Blog | https://vercel.com/blog/making-agent-friendly-pages-with-content-negotiation |
| 4 | Checkly Agent Analysis | Blog | https://www.checklyhq.com/blog/state-of-ai-agent-content-negotation/ |
| 5 | Cloudflare Markdown for Agents | Blog | https://blog.cloudflare.com/markdown-for-agents/ |
| 6 | WebMCP W3C Draft | Spec | https://webmcp.link/ |
| 7 | WebMCP GitHub | Repo | https://github.com/webmachinelearning/webmcp |
| 8 | llms-txt.io Analysis | Blog | https://llms-txt.io/blog/is-llms-txt-dead |
| 9 | Google A2A Protocol | Spec | https://a2a-protocol.org/latest/ |
| 10 | A2A Announcement | Blog | https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/ |
| 11 | Microsoft NLWeb | Announcement | https://news.microsoft.com/source/features/company-news/introducing-nlweb-bringing-conversational-interfaces-directly-to-the-web/ |
| 12 | AAIF Formation | Press | https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation |
| 13 | Wiz Moltbook Breach | Security | https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys |
| 14 | TIME on Moltbook | News | https://time.com/7364662/moltbook-ai-reddit-agents/ |
| 15 | AgentGram GitHub | Repo | https://github.com/agentgram/agentgram |
| 16 | OpenClaw GitHub | Repo | https://github.com/openclaw/openclaw |
| 17 | Conway/Automaton GitHub | Repo | https://github.com/Conway-Research/automaton |
| 18 | SiteSpeakAI MCP Guide | Blog | https://sitespeak.ai/blog/mcp-server-for-your-website |
| 19 | Robb Knight /now Automation | Blog | https://rknight.me/blog/automating-my-now-page/ |
| 20 | IndieWeb Webmention | Wiki | https://indieweb.org/Webmention |
| 21 | Maggie Appleton Garden History | Essay | https://maggieappleton.com/garden-history |
| 22 | Aaron Parecki IndieWeb | Page | https://aaronparecki.com/indieweb/ |
| 23 | Anthropic MCP Connectors | Directory | https://www.anthropic.com/partners/mcp |
| 24 | PulseMCP Year in Review | Analysis | https://www.pulsemcp.com/posts/openai-agent-skills-anthropic-donates-mcp-gpt-5-2-image-1-5 |
| 25 | The Register Protocol Landscape | Analysis | https://www.theregister.com/2026/01/30/agnetic_ai_protocols_mcp_utcp_a2a_etc |
| 26 | Damian O'Keefe MCP Blog | Blog | https://blog.damato.design/posts/minefield-context-protocol |
| 27 | Oskar Ablimit MCP Site | Blog | https://medium.com/@rnwqyzxnn/build-a-mcp-powered-personal-site-b3d08d5489dc |
| 28 | Cloudflare Workers MCP | Blog | https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/ |
| 29 | Cloudflare Moltworker | Blog | https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/ |
| 30 | Steinberger joins OpenAI | News | https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/ |
Comments
Comments are available on the static tier. Agents can use the API directly:
GET /api/comments/026-cognitive-interface-landscape-analysis
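A sketch of how an agent might consume that endpoint. Only the host and path come from this post; the JSON response shape (an array of comment objects) is an assumption for illustration:

```python
import json
from urllib.request import Request, urlopen

BASE = "https://ashitaorbis.com"  # site host; endpoint path from the post above

def comments_url(slug: str) -> str:
    """Build the comments endpoint URL for a post slug."""
    return f"{BASE}/api/comments/{slug}"

def fetch_comments(slug: str) -> list:
    """GET the comments for a post and parse the JSON body.

    Assumes the endpoint returns a JSON array; field names inside
    each comment object are not documented here.
    """
    req = Request(comments_url(slug), headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

# fetch_comments("026-cognitive-interface-landscape-analysis")
```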