Efficiency gains in legal workflows
Measuring where AI actually saves time in legal work, and where it does not.
Workflow measurement framework and case study template
We prioritize evaluation-first work. Before AI is used in high-stakes legal settings, it should be measured with clear criteria, documented limitations, and reproducible methods.
We study how AI changes legal work in practice: where it improves efficiency, and where new bottlenecks or verification steps appear.
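The core of the measurement framework can be sketched as a simple accounting identity: AI only saves time on net if the AI-assisted drafting time plus the added human verification time is less than the baseline. A minimal sketch, with hypothetical task names and timings that are illustrative only, not from any real study:

```python
# Hypothetical timing data (minutes per matter); task names and numbers
# are illustrative, not measurements from any real study.
def net_minutes_saved(baseline, ai_draft, verification):
    """Net time saved once human verification of AI output is counted."""
    return baseline - (ai_draft + verification)

tasks = {
    "contract_review": (90, 25, 40),   # baseline, AI-assisted, verification
    "memo_first_draft": (120, 30, 20),
}

for task, (base, ai, verify) in tasks.items():
    saved = net_minutes_saved(base, ai, verify)
    print(f"{task}: {saved:+d} min net")
```

The point of separating the verification term is that it is where apparent savings most often disappear: a task can be drafted faster yet checked slower, yielding little or no net gain.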
Understanding adoption differences across firm sizes, practice areas, and operating models.
Survey instrument and adoption playbook outline
We evaluate public-interest use cases and the conditions under which AI can safely broaden access to legal help.
Evaluating where AI can broaden access to legal help, and how to reduce harm in public-facing use.
Use-case taxonomy and evaluation checklist for public-facing tools
We develop evaluation methods and safeguards for legal AI, including traceability, fairness, and privacy in real workflows.
Developing evaluation methods for how well AI systems cite authorities and trace claims to sources.
Evaluation protocol and benchmark design notes
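One baseline metric such a protocol could report is the fraction of claims whose quoted support actually appears in the cited source. The sketch below uses exact substring matching as a deliberately simple stand-in; a real protocol would need citation parsing and fuzzy matching. All names and texts are hypothetical:

```python
# Minimal traceability check: does each quoted span attributed to a
# source actually appear in that source's text? Exact substring match is
# a deliberately crude baseline; data below is entirely hypothetical.
def traceable_fraction(claims, sources):
    """claims: list of (source_id, quoted_span); sources: id -> full text."""
    if not claims:
        return 0.0
    hits = sum(1 for sid, span in claims if span in sources.get(sid, ""))
    return hits / len(claims)

sources = {"case_A": "The court held that the contract was void for vagueness."}
claims = [("case_A", "void for vagueness"),     # supported by case_A
          ("case_A", "enforceable as written")] # not found in case_A
print(traceable_fraction(claims, sources))  # 0.5
```

Even this crude version distinguishes a claim grounded in the cited authority from one the source does not contain, which is the failure mode citation evaluation is meant to catch.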
Developing practical methods to measure disparate impact and fairness risks in common workflows.
Measurement framework and audit checklist
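One widely used screening statistic for disparate impact is the selection-rate ratio between groups, flagged when it falls below 0.8 (the four-fifths rule used in US employment-discrimination practice). A minimal sketch with hypothetical counts:

```python
# Four-fifths rule screen: compare favorable-outcome rates between a
# reference group (A) and a comparison group (B). Counts are hypothetical.
def impact_ratio(favored_a, total_a, favored_b, total_b):
    """Ratio of group B's favorable-outcome rate to group A's."""
    rate_a = favored_a / total_a
    rate_b = favored_b / total_b
    return rate_b / rate_a

ratio = impact_ratio(favored_a=80, total_a=100, favored_b=56, total_b=100)
print(f"impact ratio = {ratio:.2f}")  # prints: impact ratio = 0.70
print("flags under four-fifths rule" if ratio < 0.8 else "passes screen")
```

The ratio is a screen, not a verdict: a value below 0.8 warrants investigation of the workflow, not an automatic conclusion of unfairness.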
Practical guidance for data handling and risk controls when adopting AI in legal work.
Data handling guide and privacy risk control checklist
No releases published yet.
Technical reports and benchmark results will appear here.