Research
Building intelligence for organizational memory
We develop specialized AI models trained to understand how organizations create, lose, and recover knowledge. Our research focuses on memory extraction, relationship detection, and temporal reasoning across unstructured organizational data.
Memory is sensitive. It contains decisions, relationships, and institutional context that should never leave an organization's control. That belief shapes everything we build. Rabbit's architecture, training pipeline, and inference stack are developed entirely in-house. We do not depend on external model providers for any part of the intelligence layer. When an organization deploys Rabbit, they deploy technology we built, on infrastructure they control.
All training data is ethically sourced. We use synthetic data generation to create diverse, representative training sets without exposing any real organizational data.
Training Log
Model evolution
Every training run is documented. We believe in transparent research.
Initial release v1.0
First multi-signal model. Trained on 55,750 filtered examples across 8 specialized signals: intent classification, entity extraction, triage, query expansion, answer generation, summarization, sentiment analysis, and importance scoring.
Conversational quality v1.1
Major quality improvement in answer generation. Added conversational formatting with citations, multi-turn conversation support, and graceful uncertainty handling. Introduced reasoning phrases that make answers feel like a knowledgeable colleague.
Relationship intelligence v1.2
Full 12-signal model with memory relationship detection (7 link types) and contradiction detection. Deployed to production on Google Cloud infrastructure.
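As a rough illustration of how detected memory links might be represented, here is a minimal Python sketch. The link type names, field names, and threshold are hypothetical stand-ins; the actual seven link types are not enumerated on this page.

```python
from dataclasses import dataclass
from enum import Enum

class LinkType(Enum):
    # Illustrative link types only -- not the published set of seven.
    REFERENCES = "references"
    ELABORATES = "elaborates"
    SUPERSEDES = "supersedes"
    CONTRADICTS = "contradicts"

@dataclass
class MemoryLink:
    """A directed connection between two stored memories."""
    source_id: str
    target_id: str
    link_type: LinkType
    confidence: float  # model confidence in [0, 1]

def is_contradiction(link: MemoryLink, threshold: float = 0.8) -> bool:
    """Flag high-confidence contradictions for surfacing or review."""
    return link.link_type is LinkType.CONTRADICTS and link.confidence >= threshold
```

A deployment would presumably route flagged contradictions to a review or resolution step rather than acting on them automatically.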
Quality and faithfulness v1.4
Targeted quality improvements with 8,000 faithful extraction examples, 5,000 formatted answers, and 4,000 clean JSON examples. This release fixes hallucinations in entity extraction and enforces consistent output formatting across all signals.
Knowledge compilation v1.5
Introducing compile and lint capabilities with 18,000 new targeted examples. Focused on summary quality, reasoning depth, and retrieval accuracy. Incorporating user feedback from production deployment.
Benchmarks
Evaluation on real organizational data
All benchmarks evaluated against meeting transcripts, email threads, Slack conversations, and project documents.
| Signal | Description | Accuracy | Latency |
|---|---|---|---|
| Intent Classification | Route queries to the right strategy | 97% | 270ms |
| Memory Triage | Auto-classify and summarize | 94% | 2.1s |
| Entity Extraction | People, orgs, decisions, actions | 92% | 1.8s |
| Sentiment Analysis | Emotional tone detection | 91% | 350ms |
| Relationship Linking | Cross-memory connections | 89% | 1.5s |
| Query Expansion | Enrich vague queries | 87% | 700ms |
| Answer Generation | Cited conversational answers | 85% | 4.2s |
Data Ethics
How we source and handle training data
We take data ethics seriously. No real user data is used for training.
No real user data
Trained entirely on synthetic data and ethically sourced public datasets. No customer data, no scraped content.
Synthetic generation
Seed-and-expand methodology: hand-crafted examples per signal expanded to thousands using controlled generation, then quality-filtered.
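The seed-and-expand idea can be sketched in a few lines of Python. Everything here is illustrative: the templates, field names, and filter rule are stand-ins for the controlled generation and quality filtering the real pipeline uses.

```python
import random

def expand_seed(seed: dict, n: int = 5) -> list[dict]:
    """Expand one hand-crafted seed into n variants.

    A real pipeline would use a generator model for controlled expansion;
    this sketch just varies a prompt template around the seed.
    """
    templates = [
        "Summarize: {text}",
        "Give a brief summary of: {text}",
        "In one sentence, what does this say? {text}",
    ]
    return [
        {"input": random.choice(templates).format(text=seed["text"]),
         "output": seed["summary"]}
        for _ in range(n)
    ]

def quality_filter(examples: list[dict], min_len: int = 10) -> list[dict]:
    """Toy filter: drop degenerate inputs and empty targets."""
    return [ex for ex in examples if len(ex["input"]) >= min_len and ex["output"]]

seeds = [{"text": "The team chose Postgres over MySQL.", "summary": "Decision: Postgres."}]
dataset = quality_filter([ex for s in seeds for ex in expand_seed(s)])
```

Scaling the same loop over many seeds per signal, with a stronger generator and stricter filters, is how a few hand-crafted examples become thousands of training rows.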
Continuous evaluation
Every version evaluated against held-out test sets with human review. All benchmarks and training parameters published transparently.
Infrastructure
What we use
The tools and services powering Rabbit's training, evaluation, and deployment.
Google Cloud Platform
Production inference and model serving.
RunPod
GPU compute for training. A100 instances with Unsloth.
Hugging Face
Model hosting and distribution.
FastEmbed
Local embedding model. Zero external calls.
Open Evaluation
What we measure and why
Every capability is evaluated against specific quality criteria across model versions.
Faithfulness
Does the model only state facts present in the source? v1.4 scores 92%. v1.5 targets 97%+.
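One common automated proxy for faithfulness, shown here only as an illustration and not necessarily the metric used for these scores, is the fraction of answer tokens that also appear in the source:

```python
def faithfulness_proxy(answer: str, source: str) -> float:
    """Crude token-overlap proxy: share of answer tokens grounded in the source.

    Real faithfulness evaluation typically works at the claim level with
    human review; this is only a cheap first-pass signal.
    """
    src = set(source.lower().split())
    tokens = answer.lower().split()
    if not tokens:
        return 1.0  # an empty answer asserts nothing
    return sum(t in src for t in tokens) / len(tokens)
```

A score near 1.0 means the answer stays close to the source vocabulary; low scores flag candidate hallucinations for closer inspection.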
Citation accuracy
Does every claim point to the correct source? We measure source attribution precision across answer tasks.
Format compliance
Does the model produce clean, structured JSON? v1.4 achieves 95%+ clean output with post-processing.
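Post-processing for format compliance is often a small extraction-and-validation step. A minimal sketch, not Rabbit's actual implementation, might pull the first JSON object out of raw model output:

```python
import json
import re
from typing import Optional

def extract_json(raw: str) -> Optional[dict]:
    """Pull a JSON object out of possibly noisy model output.

    Returns the parsed dict, or None when no parseable object is found,
    so callers can count clean-output rate or retry.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

Counting how often this returns a dict without repair gives a simple clean-output rate of the kind reported above.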
Graceful uncertainty
When the answer is not in context, does the model say so? Trained with explicit don't-know signal data.
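A don't-know training example might look something like the following. The field names and wording are hypothetical; the point is that the target answer declines rather than guessing when the context does not contain the fact.

```python
# Hypothetical shape of a "don't know" training example; all fields illustrative.
dont_know_example = {
    "context": "Meeting notes covering the Q3 budget review.",
    "question": "Who approved the Q4 hiring plan?",
    # Target output declines instead of fabricating an answer.
    "answer": "The available context doesn't mention the Q4 hiring plan.",
}
```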
Human preference
Side-by-side human evaluation against baselines. v2.0 will introduce direct preference optimization (DPO) trained on production preference pairs.
Follow our research
Get updates on new model releases and benchmark results.
Join the waitlist