Litigation prep assistant

What it does: Gives the client a structured interface to capture case facts, events, claims, and evidence, then generates demand letters and draft summonses. It accelerates intake and case preparation for litigation lawyers and shifts more of the work to the client for potential cost savings, letting the lawyer focus on higher-value tasks.

Why I built it: To illustrate how AI has enabled us (lawyers) to build more powerful tools ourselves, without needing to be expert developers. Narrow use cases are often too complex for generic AI tools, and software vendors are less incentivized to build for small markets. By building our own tools, we can better serve our clients and explore new ways of working.

Judge-O-Matic 3000

What it does: Lets you submit a small dispute to a panel-style AI workflow and see how different model “judges” vote.

Why I built it: To illustrate that AI is not a single entity but an easily scalable, effectively unlimited resource. It also addresses error correction in AI decisions: where a single AI instance may not be considered 'reliable' enough to make actual decisions, multiple instances, drawing on different models and intentionally varied perspectives, can vote, and their aggregated verdict improves accuracy.
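The panel aggregation idea can be sketched in a few lines. This is a hypothetical illustration, not the app's actual code: `panel_verdict` and the vote labels are assumptions, and each vote would in practice come from a separate AI call (different model, or same model with a different assigned perspective).

```python
from collections import Counter

def panel_verdict(judge_votes):
    """Aggregate independent judge votes into a panel decision.

    Each element of judge_votes is one AI judge's ruling, e.g.
    'claimant' or 'respondent'. Majority wins; a tie resolves to
    whichever label Counter encounters first.
    """
    tally = Counter(judge_votes)
    decision, count = tally.most_common(1)[0]
    return {
        "decision": decision,
        "votes_for": count,
        "votes_total": len(judge_votes),
        "unanimous": count == len(judge_votes),
    }

# Example: a three-judge panel where two judges side with the claimant
result = panel_verdict(["claimant", "respondent", "claimant"])
```

Adding judges (or judges built on different models) only extends the input list, which is the scalability point the demo makes.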

Invoice reviewer

What it does: Reviews invoices for legal services against pre-defined billing policies and user-selected red flag focus areas. The application extracts structured invoice data (schema-validated), performs deterministic compliance checks, adds scoped AI-assisted observations, and produces a clear recommendation: Pay, Query, or Reject. It can also generate a draft email to the law firm requesting clarification or correction. Rather than relying on a single AI agent, the workflow uses four independent, narrowly defined AI calls, increasing reliability, auditability, and robustness.

Why I built it: To demonstrate that effective legal AI is often about constraining scope rather than automating entire workflows. By limiting AI to specific extraction, pattern detection, and drafting tasks — and letting deterministic policy rules drive the assessment — the system becomes more transparent and controllable. The use of strict JSON schemas further illustrates how legal policies can be designed in a machine-readable, automation-ready format from the outset.
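The deterministic, rule-driven step can be sketched as follows. This is a simplified illustration under assumed field names (`hourly_rate`, `non_billable_tasks`, etc.), not the app's actual schema; the AI calls for extraction and drafting sit before and after this function and are omitted here.

```python
def assess_invoice(invoice, policy):
    """Apply deterministic billing-policy checks to extracted invoice data.

    Returns one of the three recommendations: Pay, Query, or Reject.
    Hard violations (non-billable work) force Reject; softer issues
    (rate caps, approval thresholds) only prompt a Query.
    """
    flags = []
    if invoice["hourly_rate"] > policy["max_hourly_rate"]:
        flags.append("rate_exceeds_cap")
    if invoice["total"] > policy["requires_approval_above"]:
        flags.append("needs_prior_approval")
    for line in invoice["lines"]:
        if line["task"] in policy["non_billable_tasks"]:
            flags.append("non_billable:" + line["task"])

    if not flags:
        return {"recommendation": "Pay", "flags": []}
    hard = [f for f in flags if f.startswith("non_billable")]
    return {"recommendation": "Reject" if hard else "Query", "flags": flags}
```

Because the rules are ordinary code rather than a prompt, every recommendation can be traced to a specific policy line, which is the auditability point made above.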

RAG demo

What it does: This demonstration app shows a simple implementation of a Retrieval-Augmented Generation (RAG) system for legal documents. It allows users to input a legal question, retrieves relevant chunks from a preloaded document (EU Directive), and generates an answer using an LLM. The app illustrates how RAG can help manage information overload by selectively retrieving and processing only the most relevant information for a given query.

Why I built it: I believe that we (lawyers) need to understand the underlying mechanics of AI tools to properly understand their capabilities and limitations. The concept of a "context window" is fundamental to how LLMs process information, and it is very often the first stumbling block in understanding how LLMs work. We must bridge the knowledge gap around these concepts to use and build with AI effectively. I made this demo to give lawyers a simple, interactive way to grasp the RAG architecture and the importance of retrieval, which is usually hidden in the day-to-day use of AI-powered applications.
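The retrieve-then-generate flow of the demo can be sketched as below. This is a minimal, dependency-free illustration, not the app's actual implementation: it ranks chunks by naive word overlap, whereas real RAG systems typically use embedding similarity, and the final LLM call is omitted.

```python
def retrieve(question, chunks, k=2):
    """Rank document chunks by word overlap with the question.

    Naive keyword retrieval for illustration; production systems
    would embed question and chunks and compare vectors instead.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]  # only the top-k chunks enter the context window

def build_prompt(question, chunks):
    """Assemble the prompt sent to the LLM from the retrieved chunks."""
    context = "\n\n".join(retrieve(question, chunks))
    return "Answer using only this context:\n" + context + "\n\nQuestion: " + question
```

The key teaching point is visible in `retrieve`: only a small, relevant slice of the document ever reaches the model, which is how RAG works around the context-window limit.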