An internal AI assistant that finally read all the case files
A 200-person professional services firm was drowning in document search. We built a grounded AI assistant over their case archive that the team actually trusts.
-71%
Search time per query
183/200
Active weekly users
97.4%
Citation accuracy
Where they were stuck.
Senior professionals were spending hours each week hunting for precedent documents, prior client work, and policy memos across SharePoint, an archived case management system, and a Confluence wiki. The information was technically findable, but in practice nothing turned up in under ten minutes, and partners were redoing work that had already been done.
What we built.
- 01: We ingested 18 years of case files, memos, and templates from three sources, using a custom chunker that respected document structure, with a separate extraction path for tables and exhibits.
- 02: We deployed a hybrid retrieval system (vector plus keyword) with a reranker, exposed through a clean web UI in the firm's tenant. Every answer cites its sources by document name and page; if no relevant context is found, the assistant refuses rather than guessing.
- 03: We built an evaluation harness of 240 real questions sourced from the team. It runs on every change to retrieval, prompts, or models, and gates production deploys.
- 04: We ran four hands-on workshops to onboard partners and associates, with playbooks for the most common query patterns (precedent search, client conflict checks, internal memo retrieval).
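The structure-aware chunking in step 01 can be sketched in a few lines. This is a simplified illustration, not the production chunker: it assumes markdown-style `#` headings stand in for the firm's document markers, and it omits the separate table/exhibit path.

```python
import re

def chunk_by_structure(text, max_chars=800):
    """Split a document on headings first, then pack paragraphs,
    so chunks never straddle a section boundary (illustrative sketch)."""
    # Split immediately before each markdown-style heading (zero-width match).
    sections = re.split(r"(?m)^(?=#+ )", text)
    chunks = []
    for section in sections:
        if not section.strip():
            continue
        if len(section) <= max_chars:
            chunks.append(section.strip())
            continue
        # Oversized section: pack paragraphs greedily up to max_chars.
        buf = ""
        for para in section.split("\n\n"):
            if buf and len(buf) + len(para) + 2 > max_chars:
                chunks.append(buf.strip())
                buf = ""
            buf += para + "\n\n"
        if buf.strip():
            chunks.append(buf.strip())
    return chunks
```

The key design choice is that section boundaries are hard limits: a chunk may be smaller than `max_chars`, but it never mixes content from two sections, which keeps retrieved passages self-contained.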
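The hybrid retrieval and refusal behavior in step 02 reduces to a blend-and-threshold pattern. The sketch below is a toy stand-in under loud assumptions: `keyword_score` replaces BM25 with simple term overlap, and `vector_score` replaces an embedding model with character-bigram cosine similarity; only the blending and the refusal threshold reflect the described design.

```python
import math
from collections import Counter

def keyword_score(query, doc):
    """Toy keyword overlap, standing in for BM25."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / (1 + math.sqrt(len(doc.split())))

def vector_score(query, doc):
    """Toy 'embedding' similarity: character-bigram cosine,
    a placeholder for a real embedding model."""
    def grams(s):
        s = s.lower()
        return Counter(s[i:i + 2] for i in range(len(s) - 1))
    a, b = grams(query), grams(doc)
    num = sum(a[g] * b[g] for g in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer(query, corpus, alpha=0.5, refuse_below=0.2):
    """Blend both scores, take the best document, and refuse
    when nothing clears the threshold instead of guessing."""
    scored = sorted(
        ((alpha * vector_score(query, d) + (1 - alpha) * keyword_score(query, d), name)
         for name, d in corpus.items()),
        reverse=True,
    )
    best_score, best_name = scored[0]
    if best_score < refuse_below:
        return None  # refuse rather than guess
    return best_name
```

The refusal threshold is what makes citations trustworthy: a query with no relevant context returns `None` instead of the least-bad document.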
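The deploy gate in step 03 is, at its core, a single accuracy check run in CI. This minimal sketch assumes the harness reduces to (question, expected source) pairs and that `assistant` is any callable mapping a question to a cited source name; the real harness covers more failure modes than exact-match citations.

```python
def run_gate(question_set, assistant, threshold=0.95):
    """Run every (question, expected_source) pair through the assistant
    and block the deploy if citation accuracy falls below the threshold."""
    correct = sum(1 for question, expected in question_set
                  if assistant(question) == expected)
    accuracy = correct / len(question_set)
    return accuracy >= threshold, accuracy
```

Because the gate runs on every change to retrieval, prompts, or models, a regression in citation accuracy stops a deploy before users ever see it.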
What happened next.
The assistant has 183 weekly active users out of 200 across the firm. Search time per query is down 71% on the questions in the evaluation set. Citation accuracy is 97.4% on the harness, and the team's trust in the system is high enough that it has become the default for case file questions, replacing direct SharePoint search for most workflows.