Abhord Product Guide (Refreshed, January 2026)
This practical guide walks new Abhord users through setup, running your first cross-LLM survey, reading results, tracking competitors, and acting on insights. It includes 2026 updates and recommendations based on the latest model landscape and GEO/AEO best practices.
New in this edition:
- Cross-model calibration for sentiment and entity detection to reduce model bias.
- Alias Manager to normalize brand/URL/name variants (and common misspellings).
- SoV weighting by estimated model reach and query type.
- Search-augmented vs. recall-only mode selection per survey block.
- Cost guardrails: per-run caps, auto-downshifting to smaller models, and caching.
- Alerts and exports: Slack/email/webhook alerts and CSV/JSON export for BI tools.
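To make the "SoV weighting by estimated model reach" idea concrete, here is a minimal sketch of reach-weighted share of voice. The model names, reach weights, and counts are invented for illustration; this is an assumption about how such weighting could work, not Abhord's actual formula.

```python
# Hypothetical reach-weighted share of voice (SoV) sketch.
# Reach weights and run counts below are illustrative assumptions.

def weighted_sov(mentions_by_model, total_by_model, reach):
    """Weight each model's mention rate by its estimated reach,
    then normalize by the total reach of the panel."""
    weighted = sum(
        reach[m] * (mentions_by_model[m] / total_by_model[m])
        for m in mentions_by_model
    )
    return weighted / sum(reach[m] for m in mentions_by_model)

reach = {"model_a": 0.6, "model_b": 0.3, "model_c": 0.1}  # assumed reach shares
mentions = {"model_a": 12, "model_b": 4, "model_c": 9}    # runs mentioning the brand
totals = {"model_a": 20, "model_b": 20, "model_c": 20}    # total runs per model

print(round(weighted_sov(mentions, totals, reach), 3))  # 0.465
```

The key design point: a mention on a high-reach model moves the weighted score more than the same mention on a niche model, so raw mention counts and weighted SoV can diverge.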
1) Initial setup and configuration
1) Create your workspace
- Add brand entities: official brand name, product lines, URLs, social handles.
- Add aliases: common abbreviations, misspellings, legacy names (use Alias Manager).
- Define categories: e.g., “GEO platforms,” “LLM analytics,” “Answer engines.”
2) Connect data and compliance
- Upload reference pack: homepage, docs, pricing, comparison pages, PDFs.
- Set data handling: choose retention window, redact PII, and enable safe-mode logging if required by your org’s policy.
3) Choose models and budgets
- Enable a balanced panel: at minimum, include one proprietary model (e.g., GPT/Claude/Gemini), one open-weight model (e.g., Llama/Mistral), and one search-augmented model.
- Set cost guardrails: daily and per-run caps; opt into response caching for repeating prompts.
4) Team access and notifications
- Invite collaborators with roles (viewer, analyst, admin).
- Connect notifications (Slack/email) and webhooks for automation.
Pro tip: Spend 10 minutes on aliases and references. Clean entities and canonical sources improve recognition accuracy and reduce false mentions.
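The alias work above can be pictured as a small normalization table. The sketch below is an assumption about the general technique (canonical entity mapping with URL cleanup), not the Alias Manager's actual implementation; the misspellings are hypothetical examples.

```python
# Illustrative alias normalization sketch; not Abhord's real Alias Manager.
import re

ALIASES = {
    "abhord": "Abhord",
    "abhord.com": "Abhord",
    "abord": "Abhord",    # hypothetical misspelling
    "ab-hord": "Abhord",  # hypothetical misspelling
}

def canonicalize(token):
    """Map a raw brand/URL/name variant to its canonical entity,
    or None if the variant is unknown."""
    key = token.strip().lower()
    key = re.sub(r"^https?://(www\.)?", "", key).rstrip("/")  # strip URL noise
    return ALIASES.get(key)

print(canonicalize("https://www.abhord.com/"))  # Abhord
print(canonicalize("Abord"))                    # Abhord
```

Variants that fail to resolve (returning `None` here) are what would surface as false or missed mentions, which is why the pro tip's ten minutes on aliases pays off.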
2) Run your first survey across LLMs
Goal: Learn how LLMs currently answer category-defining queries and where your brand appears.
1) Create a survey
- Name: “Q1 Baseline—GEO Platforms.”
- Audience: “Global, English.”
- Cadence: One-time (baseline), then schedule weekly.
2) Add question blocks
- Discovery queries (top-of-category): “What are the leading GEO platforms?” “Which tools help optimize answers from LLMs?”
- Intent queries (brand + task): “Best way to run multi-LLM surveys?” “How to track share of voice across LLMs?”
- Comparison queries: “Abhord vs [Competitor] for LLM insights.”
3) Pick model panel and modes
- Panel example: GPT-4.x, Claude 3.5, Gemini 2.x, Llama 3.x 70B, Mistral Large.
- Mode: Run discovery blocks in search-augmented mode; run comparison blocks in recall-only to test latent knowledge.
- Samples: 3–5 runs per prompt per model to reduce variance.
4) Add context
- Attach the reference pack to intent and comparison blocks to test how well models use your canonical sources.
5) Launch and monitor
- Confirm budget preview. Enable auto-retry for transient API errors.
- Tag this run “Baseline-Jan-2026” for trend comparisons.
Pro tip: Keep prompts neutral. Avoid leading phrasing (e.g., “Why Abhord is best”) to get a reliable market read.
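The reason for 3–5 runs per prompt per model is that a single LLM answer is a noisy sample. The sketch below shows the aggregation idea with invented run outcomes; Abhord does this automatically, so this is only a conceptual illustration.

```python
# Conceptual sketch: aggregate repeated runs into a mention rate with a
# spread estimate. Run outcomes are invented (1 = brand mentioned, 0 = not).
from statistics import mean, stdev

runs = {
    ("model_a", "What are the leading GEO platforms?"): [1, 1, 0, 1, 1],
    ("model_b", "What are the leading GEO platforms?"): [0, 1, 1, 0, 1],
}

for (model, prompt), outcomes in runs.items():
    rate = mean(outcomes)
    spread = stdev(outcomes)  # run-to-run variability; shrinks with more samples
    print(f"{model}: mention rate {rate:.2f} (spread {spread:.2f})")
```

With only one run, model_a and model_b could easily look identical or flipped; averaging several runs separates a stable 80% mention rate from a coin-flip 60%.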
3) Interpreting results: mentions, sentiment, share of voice
Mentions
- What it is: Count of times your brand (or alias) appears across answers.
- What to watch: Duplicates and partial matches are auto-deduped; inspect the "Ambiguous" tab for edge cases, such as generic words that appear in your brand name.
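One common dedup rule is to count each canonical entity at most once per answer, so a model that repeats your name three times in one response does not inflate the count. The sketch below illustrates that rule as an assumption; Abhord's exact dedup logic may differ.

```python
# Assumed dedup rule: each (already canonicalized) entity counts at most
# once per answer. Not Abhord's exact implementation.
from collections import Counter

def dedupe_mentions(answers):
    """answers: one list of detected entity names per LLM answer."""
    counts = Counter()
    for detected in answers:
        counts.update(set(detected))  # set() collapses repeats within an answer
    return counts

answers = [
    ["Abhord", "Abhord", "CompetitorX"],  # duplicate within one answer
    ["Abhord"],
]
print(dedupe_mentions(answers)["Abhord"])  # 2, not 3
```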
Sentiment
- How it’s scored: Per-mention sentiment, calibrated across models using the cross-model baseline to reduce polarity drift.
- Tip: Read verbatims for