AI IN ACTION: TRANSFORMING LEGAL INVESTIGATIONS

At a recent Nuix event in Brussels, a distinct shift in conversation occurred. The dialogue moved past the theoretical “if” of Artificial Intelligence in law to the urgent “how” of operational reality.  

The event brought together experts including Penelope Papandropoulos, Head of the Data Analysis and Technology Unit at the European Commission’s Directorate-General for Competition (DG COMP), and Stephen Stewart, Field CTO at Nuix. From the boardroom strategies dictating technology budgets to the courtroom standards requiring absolute accuracy, the consensus was clear: we are no longer just processing data; we are engineering intelligence.

For senior legal professionals, the takeaways from these sessions offer a pathway for navigating the complex intersection of regulation, technology, and operational efficiency. 

The Challenge: Volume vs. Velocity 

The legal sector faces a paradox. Regulatory bodies and private firms alike are under immense pressure to conclude investigations faster, yet the volume and variety of data continue to explode.

Penelope Papandropoulos framed this challenge perfectly during her fireside chat. She noted the impetus for the discussion was simple but daunting: “Everybody's facing increasing data volume, data sources and formats, and everybody is under pressure to go a lot faster with everything. So how do we go about using AI? How do we go about using it sensibly and responsibly?” 

The traditional linear review models are breaking under the strain of modern digital evidence. Whether it is an antitrust case involving millions of documents or a merger requiring a rapid market assessment, the old ways are too slow and too costly. 

Stephen Stewart reinforced this operational bottleneck during his session. “The reality is everything we do is about reducing that total amount of data that a human has to look at,” Stewart explained. He pointed out that even if you filter a dataset, reducing it by 40%, you aren't increasing human reading speed. You are simply left with a slightly smaller, yet still insurmountable, mountain of information. 

Rethinking the “Gold Standard” of Human Review 

One of the most provocative themes to emerge from the Brussels event was the questioning of human infallibility. For decades, manual human review has been the benchmark for legal defensibility. However, both speakers suggested that relying solely on human eyes might actually introduce more risk than a well-calibrated AI model.

When it comes to identifying relevant evidence in vast amounts of data, “Is AI as good as humans?” Papandropoulos asked. “There's a lot of requests to test and show that AI gets to the same evidence as a human or evidence a human would not have found. But what should the benchmark be to assess the effectiveness of AI relative to humans?”

Stewart expanded on this by highlighting the inherent physiological limitations of human reviewers. “The inconsistency of human behavior, and the ability to continuously drive that across a huge number of things,” poses a significant risk, Stewart noted. “Large language models don't get tired.” 

In high-stakes litigation or regulatory audits, fatigue leads to missed evidence. AI offers a level of consistency that humans simply cannot maintain over thousands of hours of review. The goal, therefore, is not to replace the lawyer, but to ensure the lawyer is only reviewing documents that actually matter. In all cases, human checks and decisions should remain the final authority.

Operationalizing Generative AI: Beyond the Hype 

While the legal industry is awash in buzzwords, the sessions focused on practical application. How do you actually put Generative AI (GenAI) to work without exposing your firm or agency to hallucinations or data leaks? 

The Workflow Approach 

Stewart emphasized that GenAI should be treated not as a magic button but as a component of a broader workflow. He described areas where Nuix is innovating, “like embedding it [AI] into review, but also how to push it forward into things like early case assessment.”

He shared a compelling use case regarding contract analysis. An internal legal team needed to determine if they had the rights to use partner logos across thousands of contracts. A manual review took a month. By operationalizing GenAI to ask specific questions of the contracts – extracting limitations of liability and specific clauses – the automated process delivered superior results in two hours for a fraction of the cost. 

“It's about using [GenAI] as that last step before you're going to ask a person to get to work,” Stewart advised. 

The "Right Tool for the Right Job" Strategy 

Papandropoulos highlighted a nuanced approach to model selection. Not every task requires a massive, cloud-based Large Language Model (LLM). For sensitive investigations involving business secrets, efficiency and security must coexist. 

She discussed experimenting with smaller models that can run locally and noted that while smaller models might struggle with broad summarization, they can be highly effective for preliminary classification – filtering out noise like newsletters so that the expensive, high-power models (and humans) focus only on potential evidence. 

Navigating Security and Cloud Anxiety 

For senior legal leadership, the cloud remains a source of anxiety, particularly in jurisdictions with strict data sovereignty laws like the EU. Papandropoulos noted that data security and sovereignty are important issues for authorities, including competition authorities.

“Moving to the cloud... generates a number of questions given the type of documents concerned,” she noted.

The solution lies in flexibility and hybrid approaches. Stewart demonstrated how modern platforms allow firms to choose their AI backend – whether that is an air-gapped local model for highly sensitive criminal data or a private instance of a commercial LLM for corporate contract review. This flexibility allows legal teams to balance the power of AI with the non-negotiable requirements of client confidentiality. 

The Future: Collaborative Intelligence 

Perhaps the most forward-thinking concept discussed was the potential for shared AI models. Papandropoulos mused on the inefficiencies of every national authority training their own models from scratch for every case. 

“Why start from zero every time for every case?” she asked. She promoted the idea of collaboration across competition agencies for collective learning: techniques for effectively assessing information with AI can be shared without the need to share any confidential data, broadening the scope for exchange of information and collaboration.

This vision of collaborative intelligence represents a mature next step for the legal industry – moving from isolated silos of data to a networked defense against regulatory breaches and criminal activity. 

Strategic Takeaways for Legal Leaders 

The sessions in Brussels clarified that we are moving past the experimentation phase. For General Counsels, Partners, and Heads of eDiscovery, the mandate is now operationalization. 

Audit Your Benchmarks: Stop assuming human review is perfect. Begin benchmarking your current review accuracy against AI-assisted workflows to find the true gaps in defensibility. 

Automate the “Boring”: Use GenAI to handle structured data extraction (like contract clauses) and summarization. Save your high-billable human hours for strategic legal analysis. 

Demand Model Flexibility: Ensure your technology stack allows you to swap between local, private, and public models depending on the sensitivity of the case data. 

Invest in AI Literacy: As Papandropoulos warned, “Organizations are slow, technology is fast.” Closing the skills gap in your team is as critical as buying the software. 

The overarching message from Stewart and Papandropoulos was one of cautious optimism. The technology to solve the data volume crisis exists today. The challenge now is having the courage to rewrite yesterday's workflows.