# Framework Guides
DeepCitation works with any LLM provider or framework. These guides show the exact integration pattern for the most common setups, so you spend 10 minutes wiring, not 30 minutes figuring it out.
## Available Guides
| Guide | Best for | Example |
|---|---|---|
| LangChain | Backend RAG pipelines — legal, medical, financial AI | langchain-rag-chat (demo) |
| Next.js App Router | Full-stack apps with React Server Components + streaming | nextjs-ai-sdk (demo) |
| Vercel AI SDK | useChat / streamText apps on Vercel infrastructure | nextjs-ai-sdk (shared with Next.js, demo) |
| Express.js | Node.js REST APIs with upload, chat, and verification routes | basic-verification |
| Mastra | Mastra RAG pipelines with TypeScript-native chunking + verification | mastra-rag-chat (demo) |
| AG-UI | AG-UI protocol agents with SSE streaming + verification | agui-chat (demo) |
| Python / FastAPI | Python backends using the REST API directly | — |
## How DeepCitation Fits Any Framework
DeepCitation is framework-agnostic. It adds two server-side steps around your existing LLM call:
[your docs] → prepareAttachments() → [enhanced prompt] → [your LLM] → verifyAttachment() → [verified output]
- **Before the LLM call:** `prepareAttachments()` uploads source files and returns `deepTextPages` (raw page text), which `wrapCitationPrompt()` renders deterministically when you build the prompt
- **After the LLM call:** `verifyAttachment()` checks citations in the LLM's response against the source, returning visual proof
The React components (`CitationComponent`, `CitationDrawer`) are client-only and optional — they render the verification results. You can use a plain-text or Slack renderer instead.