
Domain Name Wire


dnw.com

Domain Name Wire is a news site that covers the domain name industry.

Tags: media · domain-news · domain-registrars · domain-sales · domain-services
AI-Readiness: 9 / 100
Level 1 · Invisible
Pages analyzed: 1/1
Questions answered: 0/15

Content

AI-judged Q&A · 70% of score

Score: 5

0 of 15 buyer questions answered cleanly

Your site doesn't cover:

operations · getting-started · limits

Protocol

Technical hygiene · 30% of score

Score: 18

2 of 11 items installed

Missing:

llms.txt (+1 pt) · MCP card (+1 pt) · WebMCP (+1 pt)
Install with Lattis · +3 pts

Protocol fix list

9 to ship · 2 passing

Start here

install llms.txt

Machine-readable index that routes AI agents to the pages you want them to read. We generate one for you — install at your root.

+1 pt
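For reference, llms.txt is a plain-markdown file served at your root (/llms.txt): an H1 title, a blockquote summary, then H2 sections of links. A minimal sketch of the shape — the section name and article URLs below are illustrative, and the file Lattis generates is the one to install:

```markdown
# Domain Name Wire

> News site covering the domain name industry: sales, registrars, services, policy.

## Coverage

- [Domain sales](https://dnw.com/domain-sales/): reports on notable domain sales
- [Registrars](https://dnw.com/registrars/): registrar industry news
```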

One-click installs

We host the heavy lifting — you ship one file or one line.

MCP server card

One JSON file at /.well-known/mcp/server-card.json tells agents you have an MCP server. Ours points at mcp.lattis.dev so Lattis handles queries for you.

+1 pt
Download server-card.json

Host at dnw.com/.well-known/mcp/server-card.json — points agents at Lattis as your MCP server.
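The card itself is small. A sketch of roughly what the downloaded file contains — the field names here are illustrative of the emerging server-card shape, and the generated download is authoritative:

```json
{
  "name": "dnw",
  "description": "Query Domain Name Wire content via MCP",
  "endpoint": "https://mcp.lattis.dev/s/dnw-com/mcp",
  "transport": "http"
}
```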

WebMCP

Paste the widget script — it implements the W3C WebMCP draft (navigator.modelContext) and is the integration agents actually use. Then host the discovery manifest so crawlers can find your tool surface without rendering JavaScript. Do both: the script is the spec-level integration; the manifest exists only for discoverability.

+1 pt

1. Runtime — paste in <head>

<script async src="https://lattis.dev/widget.js"></script>

Calls navigator.modelContext.provideContext(), the entry point defined by the W3C WebMCP draft. Agents on the page see your tools live, scoped to dnw.com.

2. Discovery — host the manifest

Download webmcp.json

Host at dnw.com/.well-known/webmcp.json. The WebMCP spec defines no discovery mechanism, so crawlers can't see runtime registrations — this manifest closes that gap.
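Because the spec defines no manifest format, the file is a small static JSON describing your tool surface. An illustrative sketch — the field names and tool name are hypothetical, and the generated download is the file to host:

```json
{
  "origin": "https://dnw.com",
  "mcp": "https://mcp.lattis.dev/s/dnw-com/mcp",
  "tools": [
    {
      "name": "search_site",
      "description": "Search Domain Name Wire articles"
    }
  ]
}
```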

Paste into robots.txt

Static snippets that tell agents your policy.

robots.txt

No robots.txt detected at root. Start one with AI-crawler allow rules and a Content-Signal line baked in.

+1 pt
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Googlebot-Extended
Allow: /

Sitemap: https://dnw.com/sitemap.xml

Save as dnw.com/robots.txt — a starter file with the two AI-ready signals baked in.

Allow major AI crawlers

Explicit rules in robots.txt for GPTBot, ClaudeBot, PerplexityBot, and friends. Paste into your existing robots.txt.

+1 pt
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Googlebot-Extended
Allow: /

Append to dnw.com/robots.txt. Edit per-bot if your policy differs.

Content-Signal policy

A robots.txt signal launched by Cloudflare (Sep 2025). Declare your stance on search, ai-input, and ai-train.

+1 pt
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /

Add to dnw.com/robots.txt. Adjust values: search, ai-input, ai-train each accept yes or no.

Platform-level

Requires config or code on your side. Docs linked where useful.

sitemap.xml

Not found at root or via robots.txt. Agents rely on it for URL discovery. Publish one covering your public pages.

+1 pt

Generate a sitemap covering your public pages. Reference it from robots.txt.

sitemaps.org protocol
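A minimal valid sitemap per the sitemaps.org protocol — the URLs listed are illustrative placeholders for your real public pages:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://dnw.com/</loc>
  </url>
  <url>
    <loc>https://dnw.com/about/</loc>
  </url>
</urlset>
```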

Markdown content negotiation

Respond to Accept: text/markdown with plain markdown — agents pay ~80% fewer tokens. If you're on Cloudflare, it's a zone toggle.

+1 pt

Zone-level toggle if you're on Cloudflare. Non-CF: implement server-side content negotiation on Accept: text/markdown.

Cloudflare: Markdown for Agents
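If you're rolling your own negotiation, the core is parsing the Accept header's q-values and serving markdown when the client ranks text/markdown above text/html. A minimal sketch under those assumptions — the function name is ours, and framework wiring is omitted:

```typescript
// Returns true when the Accept header ranks text/markdown above text/html.
// Minimal q-value parsing; production servers should prefer their
// framework's built-in content-negotiation helpers.
function prefersMarkdown(accept: string): boolean {
  const entries = accept.split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const qParam = params.map((p) => p.trim()).find((p) => p.startsWith("q="));
    const q = qParam ? parseFloat(qParam.slice(2)) : 1.0;
    return { type: type.trim().toLowerCase(), q };
  });
  // Quality factor for a media type, counting */* as a wildcard match.
  const qFor = (wanted: string): number =>
    Math.max(
      0,
      ...entries
        .filter((e) => e.type === wanted || e.type === "*/*")
        .map((e) => e.q)
    );
  return qFor("text/markdown") > qFor("text/html");
}
```

Serve markdown when it returns true, fall back to HTML otherwise, and send `Vary: Accept` so caches keep the two representations apart.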

OpenAPI spec

No OpenAPI discoverable at standard paths (/openapi.json, /api-docs, etc.). If you have an API, publish the spec.

+1 pt

Publish at a standard path (/openapi.json) and link from your docs. Agents will discover it automatically.

OpenAPI Initiative
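A minimal OpenAPI 3.1 skeleton to publish at /openapi.json — the title is an illustrative placeholder, and the empty paths object is where your real endpoints go:

```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "Domain Name Wire API",
    "version": "1.0.0"
  },
  "paths": {}
}
```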

Already passing

Clean crawl rate · Server-rendered content

Content gaps

15 gaps

MCP Server

AI agents can query this site directly via MCP. Add this endpoint to Claude Code, Cursor, or any MCP client.

Endpoint

https://mcp.lattis.dev/s/dnw-com/mcp

Claude Code

claude mcp add dnw --transport http https://mcp.lattis.dev/s/dnw-com/mcp

Cursor

{
  "mcpServers": {
    "dnw": {
      "url": "https://mcp.lattis.dev/s/dnw-com/mcp"
    }
  }
}

WebMCP — runtime + discovery

1. Widget script — drop in <head>. In WebMCP-capable browsers (Chrome 146+ Origin Trial), this script calls navigator.modelContext.provideContext(); it implements the W3C draft and is the integration agents actually use.

<script async src="https://lattis.dev/widget.js"></script>

Renders a small "AI-indexed by Lattis" badge bottom-right. Hide with [data-lattis] { display: none !important; }.

2. Discovery manifest — host alongside the script. The WebMCP spec defines no discovery mechanism, so crawlers can't see the runtime registration. This static JSON closes the gap.

Download webmcp.json

Host at dnw.com/.well-known/webmcp.json.