Blogs

Nuix Notes: Monthly updates from the CEO - Mar24

Written by: Jonathan Rubinsztein, Chief Executive Officer

MARCH 2024
AI: Marching Us Forward or Holding Us Back?


In last month’s post, I had the opportunity to share more about the new Nuix and our incredible journey of transformation as we execute on our strategy and vision of using our tech as a Force for Good.

This month, in the spirit of International Women’s Day, I’d like to focus on the human side of the equation, as I believe diverse teams are just as critical to our success as our technology. As we build tools for the future, we must be deliberate about removing the biases and barriers that have held us back in the past.

Whilst we work to build the best tech to empower more people across the world, what is critically important is how we build it and who is involved in the process. This is especially true when it comes to the development of our AI-powered solutions.

Let me elaborate: 

Artificial Intelligence (AI) is revolutionizing virtually everything in our digital world, from customer service, transportation, and banking to healthcare, energy production, and national defense. But what makes AI tick? Data. Data is the fuel that drives AI, and this is where humans play a critical role: selecting, collecting, curating, and preparing the data used to train and optimize our AI models. The better the data, the better the models, and the better the outcomes. The tricky part is that humans are imperfect creatures, prone to subjectivity, unpredictable emotional states, and ingrained biases. The consequences of training AI with imbalanced or biased data can be devastating, and even deadly.

Take, for example, facial recognition. According to numerous studies of this technology, including recent research by the University of Calgary, some facial recognition systems achieve a 99% accuracy rate in recognizing white male faces while experiencing error rates as high as 35% for the faces of Black women. This outcome correlates directly with the nature of the data used to train the models and the demographics of the humans who trained them. The real-world impact is chilling: law enforcement has arrested and detained innocent people based on these systems.
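To make the disparity those studies measure concrete, here is a minimal illustrative sketch (not Nuix code; the function and sample data are hypothetical) of the basic audit involved: computing a classifier’s error rate separately for each demographic group and comparing them.

```python
# Illustrative only: a per-group error-rate audit, the kind of
# measurement behind the facial recognition findings cited above.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its error rate."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results mirroring the disparity described above.
results = [
    ("white_men", "match", "match"),        # correct
    ("white_men", "match", "match"),        # correct
    ("black_women", "no_match", "match"),   # error
    ("black_women", "match", "match"),      # correct
]
rates = error_rates_by_group(results)
```

A large gap between the groups’ rates, even when overall accuracy looks high, is exactly the signal that the training data was imbalanced.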

In another example, cited by New York University’s AI Now Institute, a resume-scanning tool, trained on previous examples of ‘successful applicants’, downgraded applicants whose resumes included the term “women’s” or who attended women’s colleges. In the same article, Facebook and Google (two global AI providers) were reported to have 15% and 10% female representation on their AI teams, respectively.

This bleeds into the health space too. AI technology now supports medical practitioners in making health recommendations to patients. However, as published in the Journal of the American Medical Association, when clinicians evaluating patients for acute respiratory failure were shown systematically biased AI model predictions, their accuracy significantly decreased by 11.3%, and model explanations did not mitigate the negative effects of those predictions. This becomes a matter of life and death.

The recent advent of generative AI, such as OpenAI’s ChatGPT or Google Gemini, has exacerbated the issue due to its heavy reliance on ever-growing large language models (LLMs). These models are built using trillions of tokens (words and word parts) and parameters (variables the model learns during training). Whilst these LLMs can often deliver impressive results, they are by nature a ‘black box’, providing no explainability or insight into their operations. And because they are trained on untamed web data, which is rife with opinion, innuendo, racism, bias, and other unsavory content, the opportunities for robust AI governance, safety, reliability, and accountability are undermined. There is presently no concept of human diversity or human control in this new space, which is deeply troubling, especially considering the generative AI hype cycle we are currently in!
 

Now, turning the attention back to Nuix, as I promised in my opening:

Nuix has embedded a powerful Natural Language Processing (NLP) engine into our enterprise software and solutions. This easy-to-use cognitive AI technology enables our customers to understand their data at scale and with hyper granularity, finding patterns, correlations, and risks, and getting to actionable intelligence faster than ever before. It’s super cool tech. But, in my opinion, what’s even better is that our AI is designed to empower subject matter experts with no engineering or data science skills to build, tune, and validate models through a simple no-code interface. Here, we’ve gone the extra mile to abstract away the underlying complexities and democratize the process, with deeper explainability and broader usability than any other solution on the market.

At Nuix we have purposefully built our NLP models using a diverse team of dedicated individuals that includes a healthy mix of women (one of whom leads the model team) and men. And because our system is designed for use by non-technical people, we have the unique ability to staff these positions with talented professionals from a wide range of backgrounds, including a schoolteacher, a philosophy major, a filmmaker, and a full-time parent returning to the workforce.

So yes, while I do believe AI can be a powerful tech force for good as we work to find the TRUTH in the digital world and solve some of the world’s biggest challenges, we need to ensure the right governance and teams are in place so we create fairer and more equitable solutions that benefit everyone.

And as always, if you have any further thoughts or insights on this topic, I’d love to hear from you – so please reach out – my door/inbox is always open.

 

Jonathan Rubinsztein
Chief Executive Officer