Questions & Answers
Understanding Your Results
Confidence % is the Rapporteur's estimate of how well-supported each claim is, based on the discussion. It reflects model agreement — not an absolute measure of truth.
- 100% — all participating models agreed on this point without needing discussion.
- 70–99% — most models agreed, or initial disagreement was resolved during discussion.
- Below 70% — significant disagreement remained after all discussion rounds.
A high confidence score means the AI models converged — it does not guarantee the claim is factually correct. Always verify important claims independently.
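The bands above amount to a small lookup. This sketch is illustrative only; the function name and labels are not part of Consensable:

```python
def interpret_confidence(score: int) -> str:
    """Map a confidence score (0-100) to the bands described above.

    Hypothetical helper for illustration, not Consensable's API.
    """
    if score == 100:
        return "unanimous"          # agreed without discussion
    if score >= 70:
        return "broad agreement"    # agreed, or dispute was resolved
    return "disputed"               # disagreement remained
```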
Each Key Claim carries a source label showing how consensus was reached:
- consensus — all models agreed from the very first round, with no discussion needed.
- discussion_resolved — models initially disagreed, but after one or more discussion rounds they converged on the same position.
- disputed — models still disagreed after all discussion rounds completed. The positions shown under the claim are each model's final stance.
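The three labels follow directly from when (or whether) the models converged. A sketch, with an assumed input shape that is not Consensable's real data model:

```python
def source_label(agreed_in_round):
    """Return the Key Claim source label for one claim.

    agreed_in_round is the discussion round in which the models
    converged (0 means the first round, before any discussion),
    or None if they never converged. Hypothetical helper.
    """
    if agreed_in_round == 0:
        return "consensus"
    if agreed_in_round is not None:
        return "discussion_resolved"
    return "disputed"
```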
How the Discussion Works
Consensable runs a four-step process:
- Step 1 — Query: Your question is sent to all selected models simultaneously. Each answers independently.
- Step 2 — Analysis: The Rapporteur reads all responses and identifies points of consensus and disagreement.
- Step 3 — Discussion: For each disputed claim, the models engage in structured rounds. Each model sees the others' positions and can update or defend its own. Disputes run in parallel.
- Step 4 — Synthesis: The Rapporteur produces a final synthesised answer, verdict, confidence scores, and key claims.
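The four steps above can be sketched as one pipeline. Every name here (the function, the callables passed in, the claim shape) is an assumption for illustration, not Consensable's actual API:

```python
def run_query(question, models, analyse, synthesise, rounds=2):
    """Sketch of the Query -> Analysis -> Discussion -> Synthesis flow.

    models maps a model name to a callable that answers a prompt;
    analyse and synthesise stand in for the Rapporteur.
    """
    # Step 1 - Query: every selected model answers independently.
    answers = {name: ask(question) for name, ask in models.items()}
    # Step 2 - Analysis: split claims into agreed and disputed.
    agreed, disputed = analyse(answers)
    # Step 3 - Discussion: each disputed claim gets structured rounds
    # in which every model sees the others' positions and may update.
    for claim in disputed:
        positions = dict(claim["positions"])
        for _ in range(rounds):
            for name, ask in models.items():
                positions[name] = ask(f"{claim['text']}\n{positions}")
        claim["positions"] = positions
    # Step 4 - Synthesis: final answer, verdict, confidence, claims.
    return synthesise(agreed, disputed)
```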
Max Rounds, Max Tokens, and Max Minutes are safety caps that keep a discussion from running too long or costing too much:
- Max Rounds — the maximum number of discussion rounds per disputed claim. Each round, every model gets one response.
- Max Tokens — the total token budget across all model calls. The discussion stops early if this limit is hit.
- Max Minutes — a wall-clock time limit. Useful for keeping responses fast on time-sensitive topics.
Synthesis always runs to completion regardless of caps — only the discussion phase is cut short.
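Taken together, the three caps act as early exits around the discussion loop. A sketch under assumed names; respond and the parameters are not Consensable's implementation:

```python
import time

def discuss(positions, respond, max_rounds=3,
            max_tokens=20_000, max_minutes=2.0):
    """Run discussion rounds for one disputed claim under the caps.

    positions maps model -> current stance; respond(model, positions)
    returns (new_stance, tokens_used). Illustrative sketch only.
    """
    start = time.monotonic()
    tokens = 0
    for _ in range(max_rounds):                    # Max Rounds cap
        for model in list(positions):
            stance, used = respond(model, positions)
            positions[model] = stance
            tokens += used
            if tokens >= max_tokens:               # Max Tokens cap
                return positions
        if (time.monotonic() - start) / 60 >= max_minutes:
            return positions                       # Max Minutes cap
    return positions
```

Whatever positions remain when a cap fires are what synthesis receives, which is why a capped discussion still yields a complete answer.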
The synthesising role has a different name depending on the mode:
- Rapporteur — the role name used in Synthesis and Discuss modes.
- Adjudicator — the role name used in Debate mode.
For questions about recent events, current news, or real-time data, the models' training data may be out of date. Live Web Context fetches fresh information from the web and injects it into all model prompts:
- Auto — a cheap classifier model first decides whether the question needs web search. If yes, Brave Search is used; if not, the web search is skipped.
- Brave — always searches via Brave Search, returning the top 6 results.
- Perplexity — uses Perplexity's AI-powered search to produce a pre-summarised web context.
- None — no web search; models rely entirely on their training data.
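The four modes reduce to a simple dispatch. The callables below stand in for the classifier and the two search backends; all names are assumptions for illustration:

```python
def fetch_web_context(question, mode, needs_search, brave, perplexity):
    """Return fresh web context for the model prompts, or None.

    needs_search is the cheap classifier; brave and perplexity stand
    in for the two search backends. Illustrative names only.
    """
    if mode == "none":
        return None
    if mode == "auto" and not needs_search(question):
        return None                       # classifier skipped the search
    if mode == "perplexity":
        return perplexity(question)       # pre-summarised context
    return brave(question, count=6)       # top 6 Brave results
```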
Cost and Billing
Costs are based on the number of tokens (units of text) processed by each AI model. Each model has a different per-token price. The cost shown is the total across all model calls during your query, including the discussion and synthesis steps.
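Per-token pricing works out as a simple sum over calls. The prices in the usage note are invented for illustration, not real provider rates:

```python
def total_cost(calls, prices):
    """Sum the cost of every model call in a query.

    calls: (model, input_tokens, output_tokens) per call;
    prices: model -> (usd_per_input_token, usd_per_output_token).
    All figures used with this are made up for illustration.
    """
    total = 0.0
    for model, tokens_in, tokens_out in calls:
        price_in, price_out = prices[model]
        total += tokens_in * price_in + tokens_out * price_out
    return total
```

For example, with an assumed price of $1 per million input tokens and $2 per million output tokens, one call of 1,000 input and 500 output tokens costs $0.002.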
The initial cost shown (marked ~) is a real-time estimate based on token counts. The actual cost is confirmed a few seconds later by the model provider and updates automatically.