Ask AuthorAI:
Document Intelligence
Enable any Authorium user to have a grounded, citation-backed conversation with a document they are currently viewing — accurate answers with clickable references to the exact source location, without leaving their workflow.
Feature Overview
Problem Statement
Government agency staff spend significant time manually reading through dense documents to find specific clauses, understand impacts, or produce summaries. Current workarounds — Ctrl+F, manual skimming, asking a colleague — are slow, error-prone, and leave no audit trail. Generic AI tools hallucinate and cannot be trusted for government work where accuracy and citation to source text are non-negotiable.
Users Who Benefit
| Persona | Context | Primary Need |
|---|---|---|
| DOF Legislative Analyst | Bill Info — reviewing bill versions | Quickly identify key changes, fiscal impacts, affected entities |
| Procurement Officer | Contract/RFP documents | Find specific clauses, obligations, compliance requirements |
| Contract Manager | Contract development & management | Summarize terms, identify risks, track obligations |
| IT Security Reviewer | Contract compliance review | Locate specific security and compliance requirements |
| Executive | Approvals across document types | Fast, accurate summaries before signing off |
Business Context
Real-World Use Case — DOF Analyst
An analyst at the California Department of Finance is reviewing SB-134 (Regional Center Contracts) — a 40-page bill. They need to understand which entities are affected, the key employee retention requirements, and implementation deadlines.
Without Ask AuthorAI
Analyst opens bill → reads through manually → uses Ctrl+F for keywords → still misses context → produces summary from memory → risk of error or omission → no citation trail for internal review or supervisor sign-off.
With Ask AuthorAI
Analyst opens bill → clicks "Ask AuthorAI" → panel opens scoped to that exact bill version → types a question → receives structured answer with numbered citations → clicks citation → bill text scrolls to exact paragraph → copies answer with source references into analysis. Done in 2 minutes, not 20.
Value Drivers
- Time Savings: Eliminates line-by-line reading for fact-finding; analysts get targeted answers in seconds.
- Accuracy & Trust: Every factual claim carries a verifiable citation to the source text, sharply reducing hallucination risk.
- Platform Differentiation: No other public sector administrative platform offers grounded, citation-backed document chat.
- Cross-Persona Utility: Same underlying capability serves DOF analysts, procurement officers, contract managers, and executives.
- Foundation for V2: Single-document chat establishes the architectural and UX foundation for collection-level intelligence (cross-document analysis) in future releases.
MVP Scope
✅ In Scope — V1
- Ask AuthorAI on Bill Info pages (all versions)
- Ask AuthorAI on Authorium Documents (current version)
- Single-document scope only
- Suggested questions on panel open
- Numbered inline citations with clickable deep-links
- "Ask AuthorAI" button via progressive disclosure
- Feature flag per organization
- Session-scoped conversation (clears on close)
🚫 Out of Scope — V1
- Collection/stage-level cross-document chat (V2)
- PDF, Word, Forms ingestion (V2)
- User-uploaded attachments mid-conversation (V2)
- Bill amendment relationship mapping (V3)
- Global chat from all pages
- Chat history persistence across sessions
- Admin configuration UI
Job Stories
Accessing Ask AuthorAI
Asking Questions
Citations & Deep-Linking
Technical Architecture
Component Breakdown
Rails Context Broker
- On chat open: fetch document content from DB (not the rendered page) and serialize into a structured context payload.
- Authorium Docs: fetch section content via the existing export/serialization path (used for PDF export). Each section has a stable DOM ID (`workflow_section_{id}`).
- Bills: bill text renders inside an iframe via `srcdoc` — it must be fetched from the DB, not scraped. Paragraph-level GUID anchor IDs must be extracted and passed alongside the text.
- Payload shape: `{ document_id, document_type, version_context, content_chunks: [{ anchor_id, text }], suggested_questions_type }`
- On each user query: append to conversation history, call the AI Service, and stream the response back to the FE.
- Enforce authorization: user must have read access to the document.
- Log session events to SigNoz: panel open, query sent, response received, errors.
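The broker itself lives in Rails, but the serialization step above can be sketched in Python for illustration. The parser class and function names here are hypothetical, and real bill markup may nest text more deeply than this flat `<p id="…">` assumption:

```python
from html.parser import HTMLParser


class AnchorChunkParser(HTMLParser):
    """Collects (anchor_id, text) pairs from paragraph tags that carry an id."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._current_id = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._current_id = dict(attrs).get("id")  # GUID anchor, if present
            self._buffer = []

    def handle_data(self, data):
        if self._current_id is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "p" and self._current_id is not None:
            text = "".join(self._buffer).strip()
            if text:
                self.chunks.append({"anchor_id": self._current_id, "text": text})
            self._current_id = None


def build_context_payload(document_id, document_type, version_context, bill_html):
    """Serialize raw bill HTML (fetched from the DB, not the rendered page)
    into the payload shape the AI Service expects."""
    parser = AnchorChunkParser()
    parser.feed(bill_html)
    return {
        "document_id": document_id,
        "document_type": document_type,
        "version_context": version_context,
        "content_chunks": parser.chunks,
        "suggested_questions_type": document_type,
    }
```

Paragraphs without an anchor ID are skipped here; whether they should instead be merged into the preceding chunk is an open implementation choice.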
AI Service (Python — extend existing AuthorAI service)
New `POST /chat` endpoint with a citation-aware prompt template. The prompt must:
- Answer only from provided document content — no external knowledge.
- Require numbered inline citations for every factual claim.
- Return a structured citations array with `anchor_id`, section name, and display text.
- Treat unresolved merge fields (e.g. `{{agency_name}}`) as placeholders — do not hallucinate values.
- Explicitly refuse to speculate when the document does not address the question.
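One way to enforce the grounding contract server-side is to flag any response whose citations do not resolve to an `anchor_id` that was actually in the context payload. This is an illustrative sketch under that assumption, not the existing service code:

```python
def validate_citations(response, content_chunks):
    """Check a structured /chat response against the context it was given.

    Returns a list of problem strings; an empty list means the response is
    fully grounded (at least one citation, and every citation resolves).
    """
    known_anchors = {chunk["anchor_id"] for chunk in content_chunks}
    problems = []
    citations = response.get("citations", [])
    if not citations:
        # Edge case called out in the design: an answer with no citations
        # should surface as an error state, not be shown as grounded.
        problems.append("response contains no citations")
    for i, citation in enumerate(citations, start=1):
        anchor = citation.get("anchor_id")
        if anchor not in known_anchors:
            problems.append(f"citation [{i}] references unknown anchor {anchor!r}")
    return problems
```

A non-empty result could be logged to SigNoz and either trigger a retry or fall through to the "no citations" error state in the panel.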
Citation Deep-Link Service
- Authorium Docs: each section has a stable `workflow_section_{id}` DOM ID. Citation scroll is a standard DOM operation on the parent page.
- Bills: each paragraph has a GUID-format anchor ID. Because bill text renders inside an iframe (`srcdoc`), citation scroll must target the iframe's internal document — not the parent page DOM. This is a discrete FE task.
Frontend Chat Panel (Rails/Hotwire)
- Slide-in right panel as a Stimulus controller, consistent with the existing `ck-ai-button` pattern in Authorium Docs.
- Streaming AI responses rendered via Turbo Streams.
- Empty state: AI icon + suggested questions (tappable chips).
- SOURCES section: citation cards below each response; clicking scrolls + highlights the anchor.
- Feedback: thumbs up/down on each response for quality monitoring.
Feature Flag & Access Control
- Feature flag per org: `ask_authorai_enabled` (default: off).
- No settings UI in V1 — flag set by Authorium admin/engineering.
- Button only renders when flag is enabled and on Bill Info (Bill Text tab) or Authorium Document view.
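The render rule above (flag enabled AND page is one of the two V1 entry points) reduces to a single predicate. A minimal sketch, with illustrative names rather than the actual Rails helper:

```python
# Hypothetical page identifiers for the two V1 entry points.
V1_ENTRY_POINTS = {"bill_text_tab", "authorium_document_view"}


def show_ask_authorai_button(org_flags, page):
    """Render the button only when the org's flag is on AND the page
    is a V1 entry point; defaults to hidden if the flag is unset."""
    return org_flags.get("ask_authorai_enabled", False) and page in V1_ENTRY_POINTS
```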
Observability & Analytics
- SigNoz: chat session initiated, query received, AI Service call latency, response streamed, errors/timeouts.
- AI Service: inference latency, token count (input/output) per query.
Pendo events to instrument:
| Event | Trigger | Key Properties |
|---|---|---|
| `ask_authorai_opened` | User clicks Ask AuthorAI button | `document_type`, `org_id` |
| `suggested_question_clicked` | User taps a suggested question | `question_text`, `document_type` |
| `query_submitted` | User submits a free-form query | `document_type` (no query text) |
| `citation_clicked` | User clicks a source card | `citation_index`, `document_type` |
| `feedback_submitted` | User clicks thumbs up or down | `sentiment`, `document_type` |
| `panel_closed` | User closes chat panel | `queries_in_session` |
Design
Key Principles
- Auditability first: Every factual claim has a numbered citation. No uncited assertions.
- Context clarity: User always knows what document scope they are in (panel header).
- Non-disruptive: Slide-in panel does not replace the document view — both are visible simultaneously.
- Progressive disclosure: Button only appears where it is relevant; never clutters unrelated pages.
- Familiar patterns: Chat UI follows standard patterns to minimize learning curve.
Entry Points
| Page | Show? | Notes |
|---|---|---|
| Bill Info → Bill Text tab | ✅ Yes | Primary V1 entry point |
| Authorium Document view | ✅ Yes | Primary V1 entry point |
| Bill Info → Summary, Details, other tabs | ❌ No | No document text context |
| Project list, Dashboard, Workflow config | ❌ No | No document context |
| Stage view (collection) | Consider V2 | Multi-document scope needed first |
Open Questions
For Engineering
Does the existing AuthorAI service accept arbitrary HTML as context input, or is it currently wired only to the Authorium Doc structure?

What is the current max context window — is the full 200k-token Sonnet context available, or is a shorter limit set for cost reasons?
Bill HTML is rendered via iframe `srcdoc` — Rails must fetch bill text from the DB. Confirm the correct DB path for fetching raw bill HTML by version. The context payload for bills should include structured chunks with anchor IDs alongside text.
How is bill version stored in the DB? Is it a version_id FK, a bill_version record, or a timestamp-based approach? Need to confirm the correct identifier to pass as version_context in the API call.
Should conversation history be stored server-side (DB) or client-side for V1? Recommendation: client-side only, clears on panel close — no persistence in V1.
For Design
What is the "Ask AuthorAI" button placement on an Authorium Document page? Bill Info placement is clear from Figma — need equivalent for Authorium Docs.
Three error states required: (1) AI Service unavailable, (2) Document content could not be fetched, (3) Response generated with no citations (edge case — LLM answered without grounding).
For Product
Who defines the suggested questions per document type? Should these be hardcoded per type (bills, Authorium Docs) or dynamically generated by the AI on panel open?
Is DOF the confirmed V1 beta customer? Any others (CDSS, CalPERS)?
What success metrics should we track? Suggested: session engagement rate (% of bill views that open Ask AuthorAI), citation click-through rate, thumbs up/down ratio, and time-to-first-query. Any qualitative targets from DOF (e.g., "analysts spend 50% less time on initial bill review")?
Future Enhancement Ideas
Not included in this release.
Collection-Level Intelligence (Document Stage Chat)
Expand Ask AuthorAI to chat across all documents in a document stage — cross-document synthesis, conflict identification, trend analysis. Requires RAG infrastructure (vector DB, ingestion pipeline, hybrid search).
PDF & Uploaded File Support
Extend Ask AuthorAI to uploaded files (PDF, Word) within Authorium stages.
User Attachments Mid-Conversation
Allow users to upload a supplemental document mid-conversation (e.g., a budget spreadsheet or prior analysis) to include as additional context.
Document Relationship Mapping
Use AI to automatically detect amendatory language and create relationship links between documents. High value for legislative teams.
Conversation History & Sharing
Persist chat sessions across logins; allow users to share a conversation thread with colleagues.
Ask AuthorAI in Approvals
Surface Ask AuthorAI within the approval workflow so approvers can ask questions about the document before acting.