Nuix Notes: Monthly updates from the CEO - Jul24

Written by: Jonathan Rubinsztein


July 2024
Slow Down to Speed Up: The Discipline of Responsible AI

Last month I had the opportunity to have dinner with fellow executives from across a broad cross-section of organizations. Unsurprisingly, the conversation shifted towards AI. One of the executives shared how they had recently updated their board on their organization's AI strategy, and how this had quickly turned into a frenzied discussion about the need to go faster, to speed up the adoption of AI, and to embrace the use of large and ever-growing data sets. What was less clear was the 'how' and, more importantly, for what set of challenges.

And this, I have come to believe, is exactly the problem. As we hit the top of the AI hype cycle I've seen organizations—and in particular litigators and litigation support teams—rush into over-reliance on GenAI without proper guardrails or thought put into the right use cases. Regrettably, it's causing wrong outcomes, undermining confidence, and amplifying fears that tech companies are developing AI irresponsibly.

For those who know me well, “slowing down” isn’t part of my usual vocabulary or management style. However, in the case of AI, slowing down in order to speed up is something that I believe is the right approach: it can minimize risk, maximize value, and, importantly, build TRUST. In fact, I would argue that slowing down to get things right early in the process actually enables us to move faster and go farther.

As a quick example of how wrong it can go, Steven A. Schwartz, a US lawyer with more than 30 years’ experience, misused ChatGPT with terrible consequences. In an effort to speed up his preparation for a personal injury case against a major airline, he used ChatGPT for research assistance without fact-checking the results. As it turned out, at least six of the cases cited in the brief didn’t exist, complete with false names and docket numbers, and even bogus internal citations and quotes. Just imagine the chaos if some of our most trusted institutions start hitting the ‘easy button’ like this without the right thoughtfulness or protective measures in place.

Lack of transparency by big tech companies around their training sets, models, and methods is also something that keeps me up at night – especially when many organizations are rushing to deploy. In a recent interview, OpenAI’s own CTO could not specify the sources of the data used to train OpenAI’s Sora, beyond saying it was publicly available.

Fortunately, I am not alone in my concern. Earlier this year, following a multi-month investigation, the Italian DPA told OpenAI it is suspected of violating European Union privacy law, and that its chatbot was being fed large amounts of data, including sensitive personal data.

As a global player in big data analytics and enterprise intelligence software, we regard building and maintaining customer and societal trust as paramount. For us, there is no easy button. To quote Stephen M.R. Covey: “Trust always affects two measurable outcomes: speed and cost. When trust goes down—in a relationship, on a team, in a company, in an industry, with a customer—speed decreases with it. Everything takes longer.”

I recognize that slowing down to get AI right can feel foreign, counter-intuitive, and sometimes frustrating – both for our innovative and passionate staff, as well as our eager customers. But in the long run, discipline around this strategy will enable us to go faster, farther, and most importantly with more trustworthiness, reliability, and defensibility.

At Nuix, we have been fortunate to be on this AI journey for several years and have embedded Cognitive AI into our powerful, patented Nuix solutions. We’ve been diligently developing our models to shape an AI-empowered world with humans at the centre—we refer to this as our own brand of Responsible AI (RAI).

Nuix RAI is not a stand-alone strategy but a key pillar in our overarching Corporate Governance strategy – with direct oversight from me, our Executive Leadership Team, and our General Counsel.

Our RAI Principles, Policy & Practices are thoughtfully designed by a multi-disciplinary team to optimize our products’ explainability, purpose-driven specificity, human-centric control, defensibility, and safety. This includes diverse AI staffing to ensure fair representation, balanced perspectives, and bias mitigation at the point of model creation.

Additionally, we embrace a holistic approach to our AI development, recognizing that to solve the world’s biggest challenges, for and with our customers and partners, we must utilize a sophisticated blend of technologies, techniques, innovative thinking, and thoughtful application. This includes using both Cognitive AI and Generative AI where and when it can be done responsibly, leveraging the benefits of each to reduce risk while optimizing speed and accuracy.

And so I’ll close this month’s post here. If you find the idea of slowing down to speed up interesting, stay tuned for an upcoming blog with my take on AI models.

And here’s a teaser for you…Small is the new BIG.

Best, Jonathan

Jonathan Rubinsztein
Chief Executive Officer