loworbit

▲ loworbit.ai / tools / panel

mock evaluation panel

panel.loworbit.ai

a mock evaluation panel that gives you multiple (realistic and useful) perspectives on any sales document.

data: not declared per-tool yet — see /privacy for the loworbit-wide picture.

— how it works —

┌─ project: panel ───────────────────────────────── rev 0.1 / 2026-04-17 ─┐
│ drawing: evaluation pipeline                                            │
│ scale: nominal · panelists run concurrently within skim + notes         │
└─────────────────────────────────────────────────────────────────────────┘

        ┌─ A ──────┐    ┌─ B ──────┐    ┌─ C ──────┐    ┌─ D ──────┐
        │ skim     │    │ notes    │    │ conv.    │    │ summary  │
 doc ─┐ │  [1]     │    │  [2]     │    │  [3]     │    │  [4]     │
      ├→│          │ ─→ │          │ ─→ │          │ ─→ │          │ ─→ [5]
crit ─┘ │ panelists│    │ margins  │    │ rounds   │    │ + score  │
        └──────────┘    └──────────┘    └────┬─────┘    └────┬─────┘
                                             │               │
                                           [op]            [op]
                                             ↓               ↓
                                         pause/stop      review

─ legend ──────────────────────────────────────────────────────────────────

doc    document under evaluation
crit   evaluation criteria (optional, enables scoring)
[op]   operator gate — review, edit, advance, scrap

[1]  skim             n panelists each form an initial reading, persona-shaped
[2]  notes            panelists annotate source spans with perspective-specific concerns
[3]  conv.            panelists discuss across n rounds, operator gates continue/stop
[4]  summary          synthesis across panelists; composite score if criteria were provided
[5]  evaluation report verdict + strengths + weaknesses + suggestions + optional score
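the five stages above can be sketched as one plain pipeline. this is a minimal sketch, not panel's actual implementation — every name here (Panelist, run_panel, the gate callback) is an assumption, and the real tool runs panelists concurrently where this version loops:

```python
from dataclasses import dataclass

@dataclass
class Panelist:
    name: str
    persona: str

    def skim(self, doc: str) -> str:                         # [1] initial, persona-shaped reading
        return f"{self.name} ({self.persona}): first pass over {len(doc)} chars"

    def annotate(self, doc: str) -> list[str]:               # [2] perspective-specific margin notes
        return [f"{self.persona} concern near '{doc[:20]}'"]

    def respond(self, transcript: list[str]) -> str:         # [3] one conversation turn
        return f"{self.name}: reply to {len(transcript)} prior messages"

def run_panel(doc, panelists, criteria=None, max_rounds=2, gate=lambda rnd: True):
    skims = [p.skim(doc) for p in panelists]                 # [1] concurrent in the real tool
    notes = [n for p in panelists for n in p.annotate(doc)]  # [2] likewise concurrent
    transcript = list(skims)
    for rnd in range(max_rounds):                            # [3] rounds, gated by the operator
        if not gate(rnd):                                    # [op] continue/stop decision
            break
        transcript += [p.respond(transcript) for p in panelists]
    report = {                                               # [4]+[5] synthesis into a report
        "verdict": "directional only — simulated review",
        "notes": notes,
        "transcript": transcript,
    }
    if criteria is not None:                                 # score only when criteria exist
        report["score"] = {c: 0.5 for c in criteria}         # placeholder composite
    return report
```

the only structural decision worth noting: scoring is keyed off whether criteria were passed in at all, which is how the summary stage can stay identical whether or not the operator asked for a score.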

─ notes ───────────────────────────────────────────────────────────────────

. panelists run concurrently in skim and notes phases.
. conversation rounds gated by operator continue/stop.
. summary score only renders when evaluation criteria were provided
  upfront.
. this is a simulation, not an actual evaluation. treat as directional.
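"concurrently" in the skim and notes phases could look something like the fan-out below — a sketch assuming each panelist call is an independent i/o-bound request (the function names are illustrative, not panel's api):

```python
import asyncio

async def ask_panelist(name: str, phase: str, doc: str) -> str:
    # stand-in for a model call; each panelist request is independent i/o
    await asyncio.sleep(0)  # yield point where a real request would await
    return f"{name}/{phase}: read {len(doc)} chars"

async def run_phase(phase: str, names: list[str], doc: str) -> list[str]:
    # fan out one request per panelist, gather results in panelist order
    return await asyncio.gather(*(ask_panelist(n, phase, doc) for n in names))

async def skim_then_notes(names: list[str], doc: str):
    # panelists run concurrently *within* a phase; phases stay sequential,
    # since notes build on each panelist's own skim
    skims = await run_phase("skim", names, doc)
    notes = await run_phase("notes", names, doc)
    return skims, notes
```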

an ai evaluation room for customer-facing documents. you upload a proposal (or any sales doc, really), optionally select evaluation criteria, and four panelists (each with a distinct persona) skim, discuss, score, and summarize. margin notes surface what they would actually say to you in a real review.

built after watching the way proposal teams over-polish documents without ever running them past a reader.