
Help AI bots understand your content with the LLM Only React Component

Written by James Berry. Last updated November 27, 2025

Your pricing slider looks perfect to humans. ChatGPT sees a blank rectangle.


When building websites, we optimize for humans. We add fancy animations, interactive sliders, carousels, and accordions.

But AI crawlers cannot interact with any of it.

Google might crawl your dynamic content, but it is not guaranteed. ChatGPT, Claude, and Perplexity? They absolutely cannot.

Why AI Crawlers Cannot Understand Your Content

When you visit a website, your browser does a lot of work. It downloads the HTML, runs JavaScript, waits for API responses, and renders everything on screen. You can click buttons, drag sliders, and expand accordions to reveal more content.

AI crawlers do not work this way. Tools like ChatGPT and Perplexity send a simple HTTP request to your website and read whatever HTML comes back. They do not run JavaScript. They do not wait for content to load. They do not click anything.

This means any content that requires interaction or JavaScript to appear is completely invisible to AI crawlers. Your beautiful pricing calculator, your interactive feature comparison, your expandable FAQ section. None of it exists as far as AI is concerned.

Introducing LLM Only

LLM Only is a fully open source React component that solves this problem. It lets you show content only to AI bots while keeping it hidden from human visitors.

You wrap your content in the <LLMOnly> component and it only renders when an AI crawler visits your page. Regular users never see it.

The component works best in server-side rendering environments like Next.js, where the HTML is generated on the server before being sent to the browser.

Optimizing for AI Search Visibility

Think about your pricing page. You might have an interactive slider or calculator that lets users drag a handle to see different price points. The information is hidden behind these interactions, and AI bots cannot click or drag to reveal it.

When ChatGPT cannot find your pricing on your own website, it pulls the data from somewhere else. Maybe a competitor site, an outdated blog post, or content that mentions your product with negative sentiment.

The same problem affects feature carousels, dynamic product specs, and collapsible FAQ sections. Human visitors see a polished experience. AI bots see empty space.

How Does LLM Only Help?

You keep your interactive UI for humans. Then you add a plain-text version alongside it specifically for bots.

{/* Interactive slider for human visitors */}
<PricingCreditsSlider config={pricingConfig} />

{/* Plain-text equivalent, rendered only for AI crawlers */}
<LLMOnly userAgent={userAgent}>
  {pricingConfig
    .map((price) => `${price.credits} credits - ${price.price}\nFeatures: ${price.features.join(', ')}`)
    .join('\n')}
</LLMOnly>

The slider works as normal for your visitors. But when ChatGPT or Claude crawls your page, they see the plain text version with all your pricing information clearly laid out.

How Does it Work?

Every time someone visits your website, their browser sends a "user agent" string that identifies what software is making the request. Regular browsers send something like "Mozilla/5.0 Chrome/120". AI crawlers send their own identifiers like "GPTBot" or "ClaudeBot".

The <LLMOnly> component checks this user agent string against a list of 30+ known AI bot signatures. This includes GPTBot (ChatGPT), ClaudeBot (Anthropic), PerplexityBot, and many others.

If the request comes from an AI crawler, the content inside <LLMOnly> renders normally. If it comes from a regular browser, the content is hidden completely.
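To make the mechanism concrete, here is a minimal sketch of that kind of check in plain TypeScript. This is illustrative only: the real llm-only package ships its own list of 30+ signatures, and the `looksLikeAIBot` function and the three entries below are hypothetical stand-ins, not the library's actual API.

```typescript
// Hypothetical sketch of a user-agent check. The real llm-only package
// maintains a much longer signature list; these three are examples only.
const AI_BOT_SIGNATURES = ['GPTBot', 'ClaudeBot', 'PerplexityBot'];

function looksLikeAIBot(userAgent: string | null): boolean {
  // No user-agent header at all: treat the request as a regular visitor.
  if (!userAgent) return false;

  // Case-insensitive substring match against each known signature.
  const ua = userAgent.toLowerCase();
  return AI_BOT_SIGNATURES.some((sig) => ua.includes(sig.toLowerCase()));
}

// looksLikeAIBot('GPTBot/1.0')              -> true
// looksLikeAIBot('Mozilla/5.0 Chrome/120')  -> false
```

A substring match (rather than exact equality) matters here because real crawler user agents usually embed the bot name inside a longer string, such as version numbers or contact URLs.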

Installing LLM Only

Install the llm-only package from npm.

npm install llm-only

Then use it in your Next.js app. You need to get the user agent from the request headers and pass it to the component.

import { headers } from 'next/headers';
import { LLMOnly } from 'llm-only';

// A Next.js App Router server component: read the user agent from the
// incoming request headers and pass it to LLMOnly.
export default async function Page() {
  const headersList = await headers();
  const userAgent = headersList.get('user-agent');

  return (
    <LLMOnly userAgent={userAgent}>
      This text is only visible to AI bots.
    </LLMOnly>
  );
}

Helper Function

If you need conditional logic instead of wrapping content in a component, you can use the isLLM() function. This returns true if the user agent belongs to an AI crawler.

import { isLLM } from 'llm-only';
 
import { isLLM } from 'llm-only';

// FullStructuredContent and InteractiveUI are placeholders for your own components.
function PricingSection({ userAgent }: { userAgent: string }) {
  if (isLLM(userAgent)) {
    return <FullStructuredContent />;
  }
  return <InteractiveUI />;
}

Testing Locally

You can test that your LLM Only content is working by pretending to be an AI crawler. Use cURL with a spoofed user agent.

curl -H "User-Agent: GPTBot/1.0" https://yoursite.com

You will see your hidden content appear in the response. Send the same request with a normal browser user agent and it vanishes.

Is This Hidden Text Cloaking?

No. Google's spam policies define cloaking as presenting different content to search engines than to users with the intent to deceive.

LLM Only does not deceive anyone. It supplements your existing content. You are providing the same information in a format that AI can actually parse. The human version and the bot version say the same thing. One is interactive, the other is plain text.
