The Shift From Search Engines to Answer Engines
Something unusual started happening at BlackBox Vision early this year. Qualified leads began showing up through a channel we hadn't planned for: LLMs.
Twenty leads in three months. Every time we asked how they found us, the answer was the same — "ChatGPT told me about you," or Gemini, or Claude. Some of them even had our direct contact details handed to them by the model.
This didn't happen by accident. Last year, we noticed a paradigm shift forming. People were moving from searching on Google to asking AI models for recommendations. We decided to get ahead of it.
What is Answer Engine Optimization?
Many are now calling this AEO — Answer Engine Optimization. For us, it's really SEO with a technical and strategic twist. The core idea is simple: if LLMs are becoming the new front door to your business, your site needs to be readable and meaningful to them.
Traditional SEO optimizes for crawlers and ranking algorithms. AEO goes further — it optimizes for comprehension. You're not just trying to rank; you're trying to be understood, cited, and recommended.
The Technical Playbook
When we redesigned the BlackBox Vision site, aesthetics weren't the priority. Semantic clarity was. Here's what we actually implemented — no theory, just what's running in production today.
Strict HTML Semantics Across 80+ Pages
Every single page — 40 in English, 40 in Spanish — uses proper semantic elements: `<header>`, `<nav>`, `<main>`, `<section>`, `<article>`, `<aside>`, `<footer>`. No `<div>` soup. We also added 543+ ARIA attributes across the site: `role="group"` on interactive regions, `aria-label` on every button and navigation element, `aria-expanded` states on toggles, and `aria-current="page"` on active nav links.
Every page includes a `<a href="#main" class="sr-only">Skip to main content</a>` link. Over 554 images carry descriptive alt text. This isn't just accessibility compliance — it's machine-readable context that LLMs use to understand what's on the page.
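A stripped-down sketch of the kind of page skeleton this describes — illustrative only, not our actual markup, with placeholder labels and links:

```html
<!-- Illustrative skeleton; labels and URLs are placeholders -->
<body>
  <a href="#main" class="sr-only">Skip to main content</a>
  <header>
    <nav aria-label="Main navigation">
      <a href="/services/" aria-current="page">Services</a>
      <button aria-expanded="false" aria-label="Open menu">Menu</button>
    </nav>
  </header>
  <main id="main">
    <article>
      <h1>MVP Builders</h1>
      <section aria-label="What we deliver"><!-- ... --></section>
    </article>
    <aside aria-label="Related case studies"><!-- ... --></aside>
  </main>
  <footer><!-- ... --></footer>
</body>
```

Each landmark element tells a parser what a region *is*, not just how it looks, which is exactly the signal a `<div>`-only layout throws away.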
Layered Structured Data (JSON-LD)
We didn't just sprinkle one schema type. We implemented multiple layered schemas across the site:
- Organization schema on the homepage: includes founder information with LinkedIn URLs, an aggregate rating (5.0/5 from client reviews), individual review objects with author names, our physical office location with GPS coordinates, countries served, a full services catalog via `OfferCatalog`, contact points, and links to six social profiles via `sameAs`.
- BreadcrumbList schema on every inner page: proper hierarchy from Home → Section → Page. On case study pages, this pairs with an Article schema that includes headline, description, author, publisher, and language declaration.
- FAQPage schema on service and about pages: structured Q&A pairs that LLMs can extract and cite directly.
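To make the Organization layer concrete, here is a trimmed JSON-LD sketch using real schema.org properties; the names, counts, and URLs are placeholder values, not our actual data:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "BlackBox Vision",
  "url": "https://example.com/",
  "founder": {
    "@type": "Person",
    "name": "Founder Name",
    "sameAs": "https://www.linkedin.com/in/founder-handle/"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "5.0",
    "reviewCount": "20"
  },
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Services",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": { "@type": "Service", "name": "MVP Development" }
      }
    ]
  },
  "sameAs": [
    "https://twitter.com/example",
    "https://github.com/example"
  ]
}
```

This block lives in a `<script type="application/ld+json">` tag in the page head, so crawlers can read it without touching the rendered DOM.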
This layered approach means models don't just know we exist — they know what we do, who runs it, what clients say about us, and how to navigate our content.
Complete Meta Tags and Open Graph
Every page carries a full meta suite: description, Open Graph (og:title, og:description, og:type, og:url, og:image with explicit 1200x630 dimensions and MIME type), Twitter Cards with summary_large_image, content language declaration, and per-page OG images. Not a shared default image — each page has its own.
These aren't just for social cards. When a model encounters structured metadata, it uses it to build a summary of the page before parsing the body. Good metadata is like handing the model an executive summary.
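A representative head fragment for one page, with placeholder content values, might look like this:

```html
<!-- Placeholder values; in production each page gets its own og:image -->
<meta name="description" content="Custom MVP development for startups.">
<meta property="og:title" content="MVP Builders | BlackBox Vision">
<meta property="og:description" content="Ship a production-ready MVP in weeks.">
<meta property="og:type" content="website">
<meta property="og:url" content="https://example.com/services/mvp-builders/">
<meta property="og:image" content="https://example.com/og/mvp-builders.png">
<meta property="og:image:width" content="1200">
<meta property="og:image:height" content="630">
<meta property="og:image:type" content="image/png">
<meta name="twitter:card" content="summary_large_image">
```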
Bilingual Architecture with hreflang
We serve 37 page pairs in English and Spanish using a subdirectory approach (/es/services/mvp-builders/). Every page pair is connected via proper rel="alternate" hreflang attributes in the HTML, and the sitemap includes full hreflang entries with x-default fallbacks.
This matters for AEO because models serving users in different languages need explicit signals about which version to reference. Without hreflang, a Spanish-speaking user might get recommended the English version — or worse, neither.
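The wiring is three `<link>` tags per page pair (placeholder domain shown); every alternate, including the page itself, gets an entry, plus an `x-default` fallback:

```html
<link rel="alternate" hreflang="en" href="https://example.com/services/mvp-builders/">
<link rel="alternate" hreflang="es" href="https://example.com/es/services/mvp-builders/">
<link rel="alternate" hreflang="x-default" href="https://example.com/services/mvp-builders/">
```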
LLM-Specific Discovery Files
This is where we went beyond traditional SEO. We generate two files specifically for AI consumption:
- `/llms.txt` — A markdown-formatted overview of the company, services, and page index. Think of it as a structured README for AI crawlers.
- `/llms-full.txt` — A comprehensive context file with our founding story, detailed service descriptions with engagement timelines, case studies with measurable results, technology stack, industry verticals, and founder profiles.
These files are generated automatically at build time. When a model crawls our domain, it finds a clean, machine-readable summary of everything we do — no HTML parsing required.
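A minimal sketch of what build-time generation can look like. This is not our actual build script: the helper name, service data, and URLs are all hypothetical.

```python
# Hypothetical sketch of generating llms.txt at build time.
# Company data and URLs below are placeholders, not real site data.

def build_llms_txt(company, summary, services):
    """Render a markdown overview that AI crawlers can read without parsing HTML."""
    lines = [f"# {company}", "", f"> {summary}", "", "## Services"]
    for name, url, description in services:
        lines.append(f"- [{name}]({url}): {description}")
    return "\n".join(lines) + "\n"

llms_txt = build_llms_txt(
    "BlackBox Vision",
    "Software agency building MVPs and scaling engineering teams.",
    [
        ("MVP Builders", "https://example.com/services/mvp-builders/",
         "Rapid MVP development for startups"),
    ],
)
# In a real pipeline this string would be written to dist/llms.txt
```

The point is that the file is derived from the same content source as the pages themselves, so it never drifts out of date.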
Clean URL Structure and Internal Linking
Our URL hierarchy is RESTful and descriptive: /services/mvp-builders/, /case-studies/reduc/, /about-us/. No query parameters, no cryptic slugs. Every service page links to related case studies. Every case study links back to the relevant service. Navigation uses descriptive anchor text — never "click here."
Combined with BreadcrumbList schema, this creates a navigation graph that models can traverse to understand relationships between our offerings.
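For a case study page, the BreadcrumbList JSON-LD encoding that hierarchy could look like this (placeholder domain):

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Case Studies", "item": "https://example.com/case-studies/" },
    { "@type": "ListItem", "position": 3, "name": "Reduc", "item": "https://example.com/case-studies/reduc/" }
  ]
}
```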
Permissive Robots and Structured Sitemap
Our robots.txt is three lines: allow everything, reference the sitemap. No disallow directives. Our XML sitemap includes priority tiers (1.0 for homepage, 0.8 for main sections, 0.7 for sub-services, 0.6 for individual case studies) and weekly/monthly change frequencies. Every entry includes hreflang alternates.
We explicitly welcome AI crawlers. If you want to be recommended, you can't block the models from reading you.
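One sitemap entry with hreflang alternates, sketched with a placeholder domain (the `xhtml:link` form is the standard way to express alternates inside a sitemap):

```xml
<url>
  <loc>https://example.com/services/mvp-builders/</loc>
  <xhtml:link rel="alternate" hreflang="en"
              href="https://example.com/services/mvp-builders/"/>
  <xhtml:link rel="alternate" hreflang="es"
              href="https://example.com/es/services/mvp-builders/"/>
  <xhtml:link rel="alternate" hreflang="x-default"
              href="https://example.com/services/mvp-builders/"/>
  <changefreq>weekly</changefreq>
  <priority>0.8</priority>
</url>
```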
Why This Matters Now
Google is still important. But the discovery funnel is branching. When a founder asks ChatGPT "Who can build my MVP?", you want to be in that answer. And unlike paid ads, you can't buy your way into an LLM recommendation. You earn it through clarity, structure, and substance.
The signals that make your site recommendable to AI are largely the same ones that make it excellent for humans — clear value proposition, well-organized information, and technical execution that doesn't get in the way of comprehension.
What You Can Do Today
If you want your site to start appearing in LLM responses, here's a prioritized checklist based on what actually worked for us:
- Add `llms.txt` and `llms-full.txt` — Give AI models a structured, markdown-formatted summary of your business. This is the single highest-signal thing you can do for AEO today, and almost nobody does it yet.
- Implement layered structured data — Start with `Organization` schema (include founders, reviews, services, contact), then add `BreadcrumbList` on inner pages and `FAQPage` where natural.
- Audit your HTML semantics — Replace `<div>` wrappers with `<header>`, `<nav>`, `<main>`, `<section>`, `<footer>`. Add ARIA attributes and descriptive alt text to every image.
- Set up bilingual hreflang — If you serve multiple languages, connect page pairs with `rel="alternate" hreflang` in both HTML and sitemap. Include `x-default`.
- Clean your URL structure — RESTful, descriptive, hierarchical. `/services/mvp-builders/` beats `/s?id=42`.
- Open your `robots.txt` — Allow all crawlers. Reference your sitemap. Don't block AI agents.
- Interlink with intent — Every page should connect to related content via descriptive anchor text. Pair this with BreadcrumbList schema for a complete navigation graph.
The potential of getting ahead of how machines read your business is enormous. And the window to build this advantage is still wide open.