AI Brand Index
The AI Brand Index is a snapshot of your brand’s overall AI visibility within a given category. The index was developed by Evertune to provide insight into AI’s unaided awareness of different brands.
We run a dynamic set of prompts through large language models, asking each prompt - at scale - to capture the full spectrum of results. The prompts are engineered to imitate the questions a researcher would commonly ask a human survey panel when measuring unaided awareness in a product category. The intent is to measure how often an AI model will recommend a given brand and to evaluate how the AI model perceives the brand versus others in the same category.
AI Brand Score
The probability of a model driving attention to a brand unaided – i.e. when a brand is not mentioned in a user's question.
Scale: 0-100%
Calculation: AI Brand Score is calculated as Visibility Score (the probability that your brand will appear in LLM responses) weighted by Average Position (the average ranking position - 1st, 2nd, 3rd, etc.).
Position weights: 1st position = 100%, 2nd = 90%, 3rd = 81%, 4th = 73%, etc.
Each subsequent position reduces weight by 10% of the previous position
Example: A score of 100 means your brand appeared in 100% of responses in 1st position.
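To make the arithmetic concrete, here is a minimal Python sketch of the position-weighting scheme described above. Combining Visibility Score and the position weight by simple multiplication is an illustrative assumption; Evertune’s exact formula may differ.

```python
def position_weight(position: float) -> float:
    """Weight for a ranking position: 1st = 100%, and each subsequent
    position keeps 90% of the previous one (2nd = 90%, 3rd = 81%, ...)."""
    return 0.9 ** (position - 1)

def ai_brand_score(visibility_pct: float, avg_position: float) -> float:
    # Assumption for illustration: Visibility Score (0-100%) multiplied
    # by the weight of the brand's Average Position.
    return visibility_pct * position_weight(avg_position)

print(ai_brand_score(100, 1))  # 100.0 - in every response, always 1st
print(ai_brand_score(80, 3))   # ~64.8 - in 80% of responses, avg 3rd
```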
AI Education Briefs
AI Education Briefs help you create effective, LLM-friendly messaging strategies for your content teams to improve your brand's likelihood of recommendation on specific buyer preference topics. When you enter a topic, our tool generates up to three tailored messages that align with your brand's strengths and have proven resonance with AI models.
Behind the scenes, we first create dozens of candidate messages related to your topic, then evaluate them using language models to identify the most compelling ones. The top-rated messages are then provided to you, ready to inform your content creation. This is part of our broader platform that helps brands develop content strategies focused on educating AI models with specific product information.
AI Education Score
AI learns by reading the internet, but each model maker has access to different parts of the internet. To help identify how much influence a domain/URL has on an AI response, Evertune analyzes the following data points to create an AI Education Score:
Technical accessibility (which web crawlers can access the site?):
Does the domain allow AI models to train and search on its content?
Does the domain allow AI models to only search on its content?
Does the domain block all access to its content?
Content authority – the strength of association that the domain has with the category
Citation propensity & impact – propensity for the domain to be cited in responses and how it changes the model's response.
The AI Education Score is scored from 0 to 10.
A score of 10 = The domain has a high influence on both training & search
A score of 0 = The domain is actively blocking AI
Scores in between represent different progressive levels of propensity for AI Education, with a score of 5 being an "average" level of influence.
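The technical-accessibility questions above come down to the domain’s robots.txt rules for AI crawlers. Below is a minimal sketch, using only the Python standard library, that checks a domain against a few well-known AI crawler user agents; the agent names are examples and vendors do change them over time.

```python
from urllib import robotparser

# Example AI crawler user agents (names current as of writing; check
# each vendor's documentation for the authoritative list).
AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ClaudeBot",
               "Google-Extended", "PerplexityBot"]

def crawler_access(domain: str) -> dict:
    """Report which AI crawlers a domain's robots.txt allows to fetch
    its homepage - a rough proxy for the Crawler Status dimension."""
    rp = robotparser.RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    rp.read()
    return {bot: rp.can_fetch(bot, f"https://{domain}/")
            for bot in AI_CRAWLERS}

print(crawler_access("example.com"))
```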
AI Usage
AI Usage helps you see beyond referral traffic to understand the true scope of AI's influence on your audience. Using our EverPanel of almost 25 million real internet users in the U.S. (demographically weighted to reflect the broader internet), AI Usage shows the scale of users who visit both AI models and sites within your product category.
Audience Size: This will show you the number of users in the U.S. who visit both an AI model AND the site category you have selected.
Audience vs. Google.com: The AI model’s audience (for a particular site category) as a % of Google's equivalent audience.
Audience vs. Bing.com: The AI model’s audience (for a particular site category) as a % of Bing’s equivalent audience.
Aided Awareness
How AI models describe a brand when the brand’s name is included in the prompt. Evertune’s Word Association report analyzes aided awareness by asking the AI models to describe specific brands within a product category. For example, “how would you describe the car brand BMW?”
Association Score
How often a word or phrase appears in relation to your brand or a competitor, reflected in the word’s text size in the word cloud.
Average Position
The average ranking position of your brand in AI responses (i.e. 1st, 2nd, 3rd). Example: A score of 4 means your brand was, on average, the 4th brand mentioned.
Backlinks
The links that connect an LLM’s answer back to its original sources.
Bot Analysis
Reveals which AI bots are accessing your website, which pages they're crawling, and how often they return. This feature provides comprehensive visibility into AI crawler activity that shapes how AI models learn about and reference your brand.
Brand Relevance
The influence of a source on your brand within the category and topic, weighted by sentiment toward your brand. Measured on a scale of -100 to 100, it reflects how much of the content on the source page specifically discusses the target brand in relation to the topic and category, weighted by how positively or negatively the source speaks about the brand.
Brand Sentiment
The sentiment of the content related to your brand on a source URL (Positive, Neutral, or Negative). This sentiment plays into the Brand Relevance of the URL.
Brand Share of Voice
Brand Share of Voice is the percentage of time your brand (vs. competitors) is included in the conversation within a domain or URL that is influencing AI responses.
Calculation: Target Brand Mention pages ÷ Baseline Brand Mention pages
Target Brand Mention pages: # pages that mention the target brand & the product category
Baseline Brand Mention pages: # pages that mention (the target brand OR any named competitor) & the product category
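A minimal sketch of that ratio; the page counts are hypothetical inputs you would take from the crawl data described above.

```python
def brand_share_of_voice(target_pages: int, baseline_pages: int) -> float:
    """Target Brand Mention pages / Baseline Brand Mention pages, as a %.

    target_pages: pages mentioning the target brand AND the category.
    baseline_pages: pages mentioning (the target brand OR any named
        competitor) AND the category.
    """
    if baseline_pages == 0:
        return 0.0
    return 100 * target_pages / baseline_pages

# 40 of the 160 brand-mentioning category pages mention your brand:
print(brand_share_of_voice(40, 160))  # 25.0
```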
Citation
The sources that an LLM references when generating a response from AI search (meaning based on retrieved information rather than internalized knowledge).
Consumer Preferences
Consumer Preferences looks at how likely an AI model is to drive attention to a brand, unaided, for a given consumer preference. We ask the models “what attributes do consumers care about most in this product category?” For example, we might find that when deciding on running sneakers, price, comfort, performance, and durability are most important to consumers. We then take those specific attributes and re-prompt the models at scale, asking “Which running sneakers are the best for price? Which are the best for comfort? Which are the best for performance?” This reveals where you’re winning, and where competitors are stronger on attributes most important to your consumers.
You can plot this out for your brand versus competitors in the space, dynamically filter the chart to the brands you want to see, and drill down by model. We use this visualization to help inform the next action steps - where there are clear knowledge gaps in the models' understanding of your brand's strengths, and where you need to focus to reinforce and increase the probability of recommendation.
Content Analytics
Content Analytics shows which domains or URLs most influence AI’s perception of your brand.
Within the section, you will see the following for each domain or URL that most influences AI’s perception of your brand:
AI Education Score
Crawler Status
Brand Share of Voice
Mentions
Mention Share
Unique URL Count
Content Authority
How strongly a site is tied to your category.
Crawler Status
Whether a domain allows AI models to train or search on its content. Options include:
Allows all crawlers
Blocks all crawlers
Blocks training, but allows search
Partnership (meaning the LLM provider pays the domain for the right to train on the domain’s data)
Content Studio
Content Studio turns your insights into ready-to-publish blog posts designed to educate AI models on your differentiators. Within Content Studio, you can identify where your brand underperforms, review top 3 messages for each topic area, and generate blog copy optimized with the keywords and positioning that will most effectively educate AI models about your brand's strengths.
Custom Prompts
Custom Prompts allows you to upload custom prompts to analyze AI visibility for your brand, competitors or keywords by prompt or by topic area, as well as see the sources most influencing AI on these prompts.
Dynamic Sampling
We sample responses to ensure statistically significant metrics and insights.
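To make “statistically significant” concrete: a share metric such as Visibility Score is estimated from repeated prompt runs, so its margin of error shrinks as the sample grows. A sketch of the standard binomial margin-of-error formula (the sample sizes are hypothetical):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    observed across n sampled responses."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A brand observed in 40% of responses:
print(margin_of_error(0.40, 100))     # ~9.6 points at 100 samples
print(margin_of_error(0.40, 10_000))  # ~1.0 point at 10,000 samples
```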
EverPanel
EverPanel leverages insights from over 25 million real people in the U.S., balanced to reflect the demographics of the internet at large. We use this panel to understand what consumers are searching for, the language they use, and how often they’re searching. Instead of guessing what people might ask AI, we show you what they actually do—across your brand, category, and competitors.
We abide by all privacy laws and regulations, such as GDPR. We do not license any personally identifiable information (PII) or IP data.
Fine-Tuning
The second stage of LLM development, where the model is further trained on labeled data sets and human feedback.
Generative Engine Optimization (GEO)
The strategy for influencing AI outputs through thematic and conceptual content. Unlike traditional Search Engine Optimization (SEO) which focuses on ranking well in search results pages, GEO aims to ensure that content is more likely to be referenced, cited, or incorporated into AI-generated responses.
Hallucinations
When an LLM generates information that appears plausible but is factually incorrect, fabricated, or not supported by its training data. This includes making up citations, facts, or details that sound realistic but are false.
Intelligent Prompt Generation
We ask custom questions tailored by category to ensure relevance and depth—averaging over 10,000 prompts per analysis.
Large Language Model (LLM)
AI models that process and generate human-like text. Development has three stages: Pre-Training, Fine-Tuning, and Search Integration.
Mentions
Total number of times a domain/URL appears in AI responses. The higher the mention count, the more frequently the source is being used.
Mention Share
Percentage of all AI responses that mentioned the domain/URL (Scale: 0-100%).
Opportunity URLs
Sources that have a high influence on AI models’ perception of your category but do not mention your brand.
Owned URLs
Brand-controlled properties that show up as sources in AI responses.
Pages Found
Total number of pages we were able to find within the domain.
Post-Training
The second stage of LLM development, which fine-tunes the model’s pre-training and aligns the model with desired output formats and safety constraints. Think of it like this: after the LLM graduates from college, mentors teach it how to be helpful and professional.
Preference Signals
The attributes AI thinks matter most to consumers–and which brands lead in each. Preference Signals measure how likely an AI model is to recommend your brand, unprompted or unaided, when users care about specific things, like fit or comfort in shoes. We first identify which attributes matter most in your category, then analyze which brands LLMs associate with each one. From there, we calculate AI Brand Scores for every key attribute.
Pre-Training
The first stage of LLM development where the model learns linguistic patterns and general knowledge. Think of this like where the LLM goes to college and reads every word of every book in every library.
Probabilistic
Describes how LLMs generate text by calculating probability distributions over possible next tokens/words, then sampling from these distributions rather than always choosing the most likely option. LLM responses are inherently subject to variation because they are probabilistic.
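A toy sketch of that sampling step; the vocabulary and scores below are invented, and real models sample over tens of thousands of tokens.

```python
import math
import random

# Invented next-token scores (logits) for an imaginary prompt.
logits = {"Nike": 2.1, "Adidas": 1.8, "Brooks": 1.2, "Asics": 0.9}

# Softmax: turn raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Sample from the distribution rather than always taking the top token,
# which is why the same prompt can yield different brands on reruns.
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print(choice)
```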
Prompt Volumes
How often people prompt AI around a specific topic (i.e. a brand, a product or a category). Using our EverPanel of almost 25 million real internet users in the U.S. (demographically weighted to reflect the broader internet), Prompt Volumes doesn’t just scan for granular keywords; it clusters prompts into broader intents and topics, giving you a more complete and accurate measure of user activity.
Reinforcement Learning from Human Feedback (RLHF)
RLHF is part of the post-training stage of building an LLM. The pre-trained model is further trained by human evaluators who rate or rank the model's outputs, and the model learns to generate responses that humans prefer. Rather than learning from fixed examples, the model receives ongoing feedback about what makes outputs better or worse.
Search Integration
The last stage of LLM development, where an LLM performs a live internet search if it does not inherently know the answer from its pre- and post-training. Think of it like this: when stumped, the LLM can quickly research current information.
Search Presence
How often a site is cited in AI-generated answers.
Semantic Similarity
The closeness in meaning between different phrases or prompts. For example, if you ask an LLM “What's the weather like?” and “How's it looking outside today?”, it will understand these mean the same thing, even though they use completely different words. This behavior explains why content creators should focus on thematic and conceptual consistency, not just specific keyword variants.
Sentiment Score
How positively or negatively a word is used. Scale: -100 (negative sentiment) to +100 (positive sentiment).
Site Audit
Site Audit evaluates how effectively your website can be accessed, read and understood by AI bot crawlers and large language models (LLMs). Site Audit analyzes your website's crawler accessibility, technical page metadata and content structure to identify opportunities for improving AI bot accessibility. The tool provides actionable insights and scores across multiple dimensions that directly impact how well AI systems can crawl, parse, understand and cite your website content.
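For a flavor of the technical metadata such an audit inspects, here is a minimal standard-library sketch that pulls a page's title and meta tags; the specific checks Evertune runs are not published, so treat this as illustrative only.

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Collect basic signals an AI-readability audit might check:
    the <title> tag plus description and robots meta tags."""
    def __init__(self):
        super().__init__()
        self.in_title, self.title, self.meta = False, "", {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

audit = MetaAudit()
audit.feed("<html><head><title>Acme Shoes</title>"
           "<meta name='description' content='Running shoes'></head></html>")
print(audit.title or "MISSING TITLE")
print(audit.meta.get("description", "MISSING DESCRIPTION"))
```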
Sources
Sources refer to the domains or URLs that LLMs pull from when discussing your brand. In our Content Analysis section, the "Sources" tab shows you which domains most influenced AI's perception of your brand. You can toggle to "URL" view to see the full URLs of these sources.
Understanding your sources helps you identify what's shaping AI's perception of your brand, so you can prioritize domains that influence model learning and create content that educates AI in your favor.
Site Category
The category of a source (identified at the domain level). Options include: Earned, Owned, Affiliate, Social, or Corporate.
Strength URLs
Sources that have a high influence on AI models’ perception of your category and mention your brand, weighted by sentiment toward your brand.
Supervised Fine-Tuning (SFT)
SFT is part of the post-training stage of building an LLM. The pre-trained model is further trained on a curated dataset of input-output pairs that demonstrate the desired behavior. The model learns by trying to match the provided examples as closely as possible.
Topic Relevance
The influence of a source related to the topic and the category. Scored on a scale of 0-100, it measures how much of the content on the source page relates to the topic and category measured.
Unaided Awareness
When AI models describe or mention a brand without the brand’s name in the prompt. For example, a user prompts an AI model “What is the best luxury SUV?” and the model responds with “BMW.” The brand was not mentioned in the prompt, but the AI model mentioned it. Evertune’s AI Brand Index and Consumer Preferences reports analyze unaided awareness.
Unique URL Count
Number of distinct URLs from a domain. Some domains have lots of URLs, some have very few.
Vector Embeddings
Mathematical representations of words or concepts in multi-dimensional space (think of it as GPS coordinates for ideas – “happy” and “cheerful” live in the same neighborhood).
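A toy sketch of that “neighborhood” idea, using hand-made 3-dimensional vectors (real embeddings come from a model and have hundreds or thousands of dimensions) and cosine similarity, the standard closeness measure:

```python
import math

def cosine_similarity(a, b):
    """1.0 = pointing the same way; near 0 = unrelated directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

happy    = [0.9, 0.8, 0.1]    # invented toy embeddings
cheerful = [0.85, 0.75, 0.15]
invoice  = [0.1, 0.2, 0.9]

print(cosine_similarity(happy, cheerful))  # ~0.999: same neighborhood
print(cosine_similarity(happy, invoice))   # ~0.30: far apart
```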
Visibility Score
Percentage of AI responses that mentioned your brand. This score helps you quantify your AI visibility and benchmark against competitors in your category.
Scale: 0-100
Example: A score of 80 means your brand appeared in 80% of AI responses during the measurement period
Word Association
The most common words and sentiment used when AI models describe your brand (vs. competitors) through “aided awareness” (meaning when an LLM is asked specifically about your brand).
To uncover the words most associated with your brand, we ask each model to generate reviews—repeatedly—to capture a wide range of responses. We then calculate:
Association Score – how often a word appears, reflected in its text size (0-100).
Sentiment Score – how positively or negatively the word is used (–100 to +100).
Combined, these form an overall sentiment score for the brand. Then we visualize it through a word cloud: sized by frequency, colored by sentiment–showing exactly how LLMs describe your brand.
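As a minimal sketch of how frequency and sentiment might be rolled up from a batch of generated reviews – the normalization choice and the per-word sentiment inputs here are illustrative assumptions, not Evertune’s published method:

```python
from collections import Counter

# Hypothetical (word, sentiment) pairs extracted from many reviews,
# with sentiment already scored in [-100, 100].
observations = [("reliable", 80), ("reliable", 70), ("expensive", -40),
                ("sporty", 60), ("reliable", 75), ("sporty", 55)]

counts = Counter(word for word, _ in observations)
max_count = max(counts.values())

for word in counts:
    # Association Score: frequency scaled to 0-100 against the most
    # frequent word (a simplifying assumption).
    association = 100 * counts[word] / max_count
    # Sentiment Score: average sentiment across the word's appearances.
    sentiment = sum(s for w, s in observations if w == word) / counts[word]
    print(f"{word}: association={association:.0f}, sentiment={sentiment:+.0f}")
```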
