doma.xyz

Doma is a blockchain network for tokenizing and trading domain assets.

blockchain · domain-tokenization · web3-integration · domain-trading · asset-tokenization
AI-Readiness: 45 / 100 · Level 3 · Agent-Accessible
Pages analyzed: 54/54 · Questions answered: 1/15

Content

AI-judged Q&A · 70% of score

Score: 60

1 of 15 buyer questions answered cleanly

Your site doesn't cover:

limits · migration · getting-started · and 11 more (14 gaps total)

Protocol

Technical hygiene · 30% of score

Score: 9

1 of 11 items installed

Missing: llms.txt (+2) · MCP card (+2) · WebMCP (+2). Install with Lattis: +6 pts.

Protocol fix list

10 to ship · 1 passing

Start here

Install llms.txt

Machine-readable index that routes AI agents to the pages you want them to read. We generate one for you — install at your root.

+2 pts
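A minimal sketch of the llms.txt convention (an H1, a blockquote summary, then link lists); the section names and paths below are placeholders, not doma.xyz's real structure — the generated file is authoritative:

```markdown
# Doma

> Blockchain network for tokenizing and trading domain assets.

## Docs

- [Getting started](https://doma.xyz/docs/getting-started): onboarding guide
- [API reference](https://doma.xyz/docs/api): endpoints and authentication

## Optional

- [Blog](https://doma.xyz/blog): announcements and changelogs
```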

One-click installs

We host the heavy lifting — you ship one file or one line.

MCP server card

One JSON file at /.well-known/mcp/server-card.json tells agents you have an MCP server. Ours points at mcp.lattis.dev so Lattis handles queries for you.

+2 pts
Download server-card.json

Host at doma.xyz/.well-known/mcp/server-card.json — points agents at Lattis as your MCP server.
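As an illustration of shape only (the field names here are assumptions, not a published schema; use the downloaded file as-is), the card is a small JSON object pointing agents at the hosted MCP endpoint:

```json
{
  "name": "doma",
  "description": "Query doma.xyz content via MCP",
  "url": "https://mcp.lattis.dev/s/doma-xyz/mcp",
  "transport": "http"
}
```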

WebMCP

Paste the widget script: it implements the W3C WebMCP draft (navigator.modelContext) and is the actual integration. Then also host the discovery manifest so crawlers can find your tool surface without rendering JS. Do both; the script is the spec integration, the manifest is just discoverability.

+2 pts

1. Runtime — paste in <head>

<script async src="https://lattis.dev/widget.js"></script>

Calls navigator.modelContext.provideContext() — the W3C WebMCP draft. Agents on the page see your tools live, scoped to doma.xyz.

2. Discovery — host the manifest

Download webmcp.json

Host at doma.xyz/.well-known/webmcp.json. The WebMCP spec defines no discovery mechanism, so crawlers can't see runtime registrations — this manifest closes that gap.
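Because the spec defines no discovery format, the manifest's schema is Lattis-specific; as a rough illustration only (field and tool names below are assumptions, use the downloaded file as-is), it is a static JSON description of the runtime tool surface:

```json
{
  "origin": "https://doma.xyz",
  "tools": [
    {
      "name": "search_site",
      "description": "Search doma.xyz content"
    }
  ]
}
```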

Paste into robots.txt

Static snippets that tell agents your policy.

robots.txt

No robots.txt detected at root. Start one with AI-crawler allow rules and a Content-Signal line baked in.

+2 pts
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Googlebot-Extended
Allow: /

Sitemap: https://doma.xyz/sitemap.xml

Save as doma.xyz/robots.txt. A starter file with both AI-ready signals (crawler allow rules and a Content-Signal line) baked in.

Allow major AI crawlers

Explicit rules in robots.txt for GPTBot, ClaudeBot, PerplexityBot, and friends. Paste into your existing robots.txt.

+2 pts
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Googlebot-Extended
Allow: /

Append to doma.xyz/robots.txt. Edit per-bot if your policy differs.

Content-Signal policy

Cloudflare-launched (Sep 2025) signal in robots.txt. Declare your stance on search, ai-input, and ai-train.

+2 pts
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /

Add to doma.xyz/robots.txt. Adjust values: search, ai-input, ai-train each accept yes or no.

Platform-level

Requires config or code on your side. Docs linked where useful.

sitemap.xml

Not found at root or via robots.txt. Agents rely on it for URL discovery. Publish one covering your public pages.

+2 pts

Generate a sitemap covering your public pages. Reference it from robots.txt.

sitemaps.org protocol
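For reference, the sitemaps.org format is minimal; the /docs URL and date below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://doma.xyz/</loc>
    <lastmod>2025-01-01</lastmod>
  </url>
  <url>
    <loc>https://doma.xyz/docs</loc>
  </url>
</urlset>
```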

Markdown content negotiation

Respond to Accept: text/markdown with plain markdown — agents pay ~80% fewer tokens. If you're on Cloudflare, it's a zone toggle.

+2 pts

Zone-level toggle if you're on Cloudflare. Non-CF: implement server-side content negotiation on Accept: text/markdown.

Cloudflare: Markdown for Agents
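Outside Cloudflare, the negotiation logic itself is small. A framework-agnostic Python sketch, assuming you already keep a markdown rendering of each page (a real handler should also emit a Vary: Accept header so caches keep the variants apart):

```python
def accepts_markdown(accept_header: str) -> bool:
    """True if the Accept header lists text/markdown (q-parameters ignored)."""
    return any(
        part.split(";")[0].strip().lower() == "text/markdown"
        for part in accept_header.split(",")
    )

def respond(accept_header: str, html_body: str, md_body: str) -> tuple[str, str]:
    """Return (content_type, body) for the negotiated representation."""
    if accepts_markdown(accept_header):
        return "text/markdown", md_body
    return "text/html", html_body
```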

Server-rendered content

Key content is JS-rendered; some agents skip it. Consider SSR / static rendering for your canonical pages.

+2 pts

Agents and crawlers rely on content in the initial HTML. Pre-render / SSR your canonical pages.

Rendering options for the web

OpenAPI spec

No OpenAPI discoverable at standard paths (/openapi.json, /api-docs, etc.). If you have an API, publish the spec.

+2 pts

Publish at a standard path (/openapi.json) and link from your docs. Agents will discover it automatically.

OpenAPI Initiative
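As an illustration of the shape of a minimal spec (the /domains/{name} route is hypothetical, not a real Doma API endpoint):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Doma API", "version": "1.0.0" },
  "paths": {
    "/domains/{name}": {
      "get": {
        "summary": "Look up a tokenized domain",
        "parameters": [
          {
            "name": "name",
            "in": "path",
            "required": true,
            "schema": { "type": "string" }
          }
        ],
        "responses": {
          "200": { "description": "Domain record" }
        }
      }
    }
  }
}
```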

Already passing

Clean crawl rate

Content gaps: 14


MCP Server

AI agents can query this site directly via MCP. Add this endpoint to Claude Code, Cursor, or any MCP client.

Endpoint

https://mcp.lattis.dev/s/doma-xyz/mcp

Claude Code

claude mcp add doma --transport http https://mcp.lattis.dev/s/doma-xyz/mcp

Cursor

{
  "mcpServers": {
    "doma": {
      "url": "https://mcp.lattis.dev/s/doma-xyz/mcp"
    }
  }
}

WebMCP — runtime + discovery

1. Widget script — drop in <head>. WebMCP-capable browsers (Chrome 146+ Origin Trial) call navigator.modelContext.provideContext() via this script — that's the W3C draft and the actual integration agents care about.

<script async src="https://lattis.dev/widget.js"></script>

Renders a small "AI-indexed by Lattis" badge bottom-right. Hide with [data-lattis] { display: none !important; }.

2. Discovery manifest — host alongside the script. The WebMCP spec defines no discovery mechanism, so crawlers can't see the runtime registration. This static JSON closes the gap.

Download webmcp.json

Host at doma.xyz/.well-known/webmcp.json.