Abhord Quickstart Guide (Refreshed Edition)
This practical, step‑by‑step guide helps new Abhord users go from zero to actionable insights. It reflects recent platform updates and field‑tested recommendations from the last release.
What’s new in this edition
- Multi‑LLM panel management now supports side‑by‑side runs with consistent personas and locales.
- Enhanced entity resolution reduces false positives in mentions by normalizing brand, product, and synonym variants.
- Sentiment now reports confidence bands and a “mixed” class for split opinions.
- Share of voice (SoV) includes intent‑weighted and position‑weighted scoring (top‑of‑answer vs footnote).
- Competitor tracking adds spike detection and weekly change logs.
- Action Briefs auto‑generate prioritized fixes tied to each failing intent.
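Spike detection in the weekly change logs follows a familiar statistical pattern. The sketch below is illustrative only (hypothetical data, not Abhord's actual algorithm): it flags weeks whose mention count deviates from the mean by more than a z-score threshold.

```python
from statistics import mean, stdev

def find_spikes(weekly_counts, threshold=2.0):
    """Flag weeks whose mention count deviates from the mean
    by more than `threshold` standard deviations."""
    mu = mean(weekly_counts)
    sigma = stdev(weekly_counts)
    return [i for i, c in enumerate(weekly_counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Hypothetical competitor mention counts over eight weeks.
counts = [12, 14, 11, 13, 12, 38, 13, 12]
print(find_spikes(counts))  # the week-6 jump (index 5) stands out
```

A z-score works well when baseline volume is fairly steady; for strongly trending series you would detrend first.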
1) Initial setup and configuration
1) Create your workspace
- Add organization name, domain(s), and primary market(s).
- Invite teammates and assign roles: Admin (billing/settings), Analyst (surveys/reports), Stakeholder (view/alerts).
2) Define entities (crucial)
- Add your brand, products, and key synonyms (e.g., “Acme Pro,” “AcmePro,” “Acme Pro 2”).
- Add canonical URLs for each entity (homepage, product pages, docs, comparisons, pricing).
- Pro tip: Include common misspellings and legacy names to improve mention capture.
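To see why synonyms and misspellings matter, here is a minimal sketch of variant-to-canonical normalization. The dictionary entries are hypothetical examples, not Abhord's internal entity model:

```python
import re

# Hypothetical entity dictionary: variant -> canonical name.
# Include misspellings and legacy names, as the pro tip suggests.
ENTITY_DICT = {
    "acme pro": "Acme Pro",
    "acmepro": "Acme Pro",
    "acme pro 2": "Acme Pro",
    "akme pro": "Acme Pro",   # common misspelling
}

def normalize_mentions(text):
    """Return the canonical entities mentioned in an answer."""
    found = set()
    lowered = text.lower()
    for variant in ENTITY_DICT:
        if re.search(r"\b" + re.escape(variant) + r"\b", lowered):
            found.add(ENTITY_DICT[variant])
    return found

print(normalize_mentions("Many teams pick AcmePro over rivals."))
```

Every variant you register here turns an otherwise-missed mention into a counted one, which is why exact and normalized mention counts can diverge (see the Mentions section).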
3) Connect data and destinations
- Optional: connect GA4/GSC for landing‑page attribution; Slack/Email for alerts; webhook for BI tools.
- Choose locale and language defaults; set time zone for scheduling.
4) Set up your model panel
- Select the LLMs you want to test (Abhord‑managed seats) or bring your own API keys.
- Choose capabilities per run: zero‑context (pure recall), web‑enabled (retrieval), or tool‑assisted (if available).
- Recommended baseline: run both zero‑context and web‑enabled, so you can contrast pure recall with retrieval‑augmented answers.
5) Governance and QA presets
- Turn on brand‑safety filters, answer citation capture, and response timeouts (20–40s).
- Enable multi‑run variance control: n=5–10 responses per prompt/model with different seeds.
- Save as “Baseline v1” so you can compare future runs.
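The n=5–10 recommendation exists because single runs are noisy. A small sketch (hypothetical trial outcomes) shows how to summarize stability across repeated seeded runs for one prompt/model pair:

```python
from statistics import mean, stdev

def mention_rate_stability(trial_hits):
    """Given per-trial booleans (brand mentioned or not) for one
    prompt/model pair, report the mention rate and its standard error."""
    rate = mean(trial_hits)
    spread = stdev(trial_hits) / len(trial_hits) ** 0.5
    return rate, spread

# Hypothetical outcomes of n=8 runs with different seeds.
hits = [1, 1, 0, 1, 1, 0, 1, 1]
rate, se = mention_rate_stability(hits)
print(f"mention rate {rate:.2f} ± {se:.2f}")
```

If the spread stays wide after 10 trials, the model is genuinely inconsistent on that prompt, which is itself a finding worth noting in the baseline.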
Checklist to finish setup:
- Entities and competitors added
- Locales chosen
- Alerts connected
- Baseline preset saved
2) Run your first survey across LLMs
1) Pick a template
- Brand Coverage: “Who are the top solutions for [category]?”
- Buyer Intent: “Best [product] for [use case/budget/industry].”
- Comparison: “X vs Y for [scenario]: which is better and why?”
- How‑to/Support: “How to [task] with [your product].”
2) Draft prompts (be precise, not leading)
- Example: “What are the top five vendors for enterprise password managers? Briefly justify each pick.”
- Add 3–5 paraphrase variants to reduce prompt‑form bias.
- Set persona and context: “US‑based IT director, mid‑market, budget‑sensitive.”
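Paraphrase variants are easy to template. The phrasings below are illustrative examples, not Abhord defaults:

```python
# Generate paraphrase variants of one survey prompt from
# hand-written templates to reduce prompt-form bias.
TEMPLATES = [
    "What are the top five vendors for {category}? Briefly justify each pick.",
    "Which five {category} solutions would you recommend, and why?",
    "List the five best options for {category} with a short rationale.",
    "If you had to pick five {category} vendors, which ones and why?",
]

def paraphrase_variants(category):
    return [t.format(category=category) for t in TEMPLATES]

for prompt in paraphrase_variants("enterprise password managers"):
    print(prompt)
```

Keep variants semantically equivalent: change the wording, not the question, or you will be measuring different intents rather than prompt-form noise.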
3) Configure runs
- Models: select 3–5 leading models for breadth.
- Locales: start with your primary market; add 1–2 secondary locales if relevant.
- Trials per prompt/model: 5–10 for stable SoV and sentiment estimates.
- Mode: run once on zero‑context and once with web‑enabled retrieval.
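Before launching, sanity-check batch size: total responses scale multiplicatively across the settings above, so one extra dimension can balloon runtime and cost. A quick arithmetic sketch (illustrative numbers):

```python
# Total responses = prompts x models x trials x modes.
prompts = 5          # 1 base prompt + 4 paraphrase variants
models = 4
trials = 7           # within the recommended 5-10 range
modes = 2            # zero-context and web-enabled

total = prompts * models * trials * modes
print(total)  # 280 responses in this configuration
```

Trim one dimension (usually secondary locales or trial count) if a batch grows far past a few hundred responses.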
4) Execute and monitor
- Use “Dry Run” on 1–2 prompts to sanity‑check outputs.
- Launch batch; typical completion is under 20 minutes for 100–150 responses.
Pro tips
- Lock persona and locale for strict A/Bs.
- Capture citations to validate claims and reduce hallucination noise during scoring.
3) Interpreting results: mentions, sentiment, share of voice
Mentions
- Exact vs normalized: Abhord resolves mentions across name variants. Review the “Entity Dictionary” if oddities appear.
- Visibility tier: Top‑of‑answer, In‑list, Footnote. Treat these differently; top‑of‑answer drives the most influence.
- Action: A low exact‑match count alongside high normalized mentions usually points to naming inconsistencies in your content and PR.
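Position-weighted share of voice can be sketched as a weighted tally over (entity, tier) pairs. The tier weights below are illustrative assumptions, not Abhord's published scoring:

```python
# Weight a mention by where it appears in the answer.
TIER_WEIGHTS = {"top_of_answer": 1.0, "in_list": 0.5, "footnote": 0.2}

def weighted_sov(mentions, brand):
    """`mentions` is a list of (entity, tier) pairs across answers.
    Returns the brand's share of total weighted mention score."""
    scores = {}
    for entity, tier in mentions:
        scores[entity] = scores.get(entity, 0.0) + TIER_WEIGHTS[tier]
    total = sum(scores.values())
    return scores.get(brand, 0.0) / total if total else 0.0

mentions = [
    ("Acme Pro", "top_of_answer"),
    ("RivalCo", "in_list"),
    ("Acme Pro", "footnote"),
    ("RivalCo", "top_of_answer"),
]
print(f"{weighted_sov(mentions, 'Acme Pro'):.2f}")
```

Note how two Acme Pro mentions can still lose to a competitor on weighted SoV when the competitor holds more top-of-answer slots.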
Sentiment
- Classes: Positive, Neutral, Mixed, Negative.
- Confidence bands: Look for narrow bands (±5–8 pts) before making big decisions; widen sample size if bands are large.
- Drivers: Open the “Rationales” panel to see the pros and cons that pushed the score.
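The link between sample size and band width follows directly from the statistics of proportions. A sketch using the standard normal approximation (illustrative, not necessarily Abhord's band method) shows why increasing trials narrows the band:

```python
# Half-width of a 95% confidence interval for a sentiment share,
# in points; it tightens roughly with 1/sqrt(n).
def band_width_pts(share, n, z=1.96):
    return z * (share * (1 - share) / n) ** 0.5 * 100

for n in (10, 25, 100):
    print(f"n={n:>3}: ±{band_width_pts(0.6, n):.1f} pts")
```

For a 60% positive share, roughly 100 scored responses are needed before the band approaches the ±5–8 point range the guide treats as decision-grade.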