
Fact Checking Artificial Intelligence: Efficiency vs. Accuracy


Artificial intelligence, or more specifically deep learning, originally entered the digital forensics and incident response (DFIR) space to help reduce the amount of time investigators spent analyzing cases. It started out with simple skin tone and body shape detection in graphics; this evolved into the ability to place a percentage value on how much skin appeared in a picture or video.

This technology is especially helpful for investigations into sexual assault, child abuse and other graphically disturbing cases. It saves analysts from manually looking at thousands, or in some cases millions, of graphics and video content, speeding review while at the same time (hopefully) preserving their general well-being in the process.
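To make the "percentage of skin" idea concrete, here is a minimal sketch of the naive, pre-deep-learning approach: threshold each pixel with a rule-of-thumb RGB skin-tone heuristic and report the flagged fraction. This is an illustrative toy, not Nuix's implementation; modern tools use trained models, but the triage idea, scoring content so analysts review the riskiest items first, is the same.

```python
import numpy as np

def skin_fraction(rgb):
    """Estimate the fraction of skin-toned pixels in an H x W x 3 RGB image
    using a classic rule-of-thumb RGB threshold (illustrative only)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    mask = (
        (r > 95) & (g > 40) & (b > 20)   # skin tones are warm and bright
        & (spread > 15)                  # and not grayscale
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )
    return mask.mean()

# A solid skin-toned patch scores near 1.0; a solid blue patch scores 0.0.
skin = np.full((4, 4, 3), (200, 120, 90), dtype=np.uint8)
blue = np.full((4, 4, 3), (10, 10, 200), dtype=np.uint8)
```

A real pipeline would run a scorer like this (or a trained classifier) over every image in a case and sort the review queue by score.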

As deep learning technology has evolved, and more models have been developed, we're faced with an important question: which is more important, efficiency or accuracy? In a perfect world, we would want both.

Multiple Models

If you take a moment to look around, you’ll notice some big companies are making huge, regular investments into developing artificial intelligence models. These models are often freely available for technology companies like Nuix to incorporate into their own products.

I think it’s important to note here that these models, while freely available, are not the latest and greatest technology available. Still, the fact that so many options are available is impressive in its own right.

By default, Nuix uses Google’s Inception V3 model for image classification. This model balances accuracy (greater than 78.1%) with incredible speed. That’s great for cases where time is a critical factor. Other options, such as Yahoo’s Open NSFW model and VGG16, work more slowly, relatively speaking, but operate at over 90% accuracy. The VGG16 model can also be retrained on new data, increasing its accuracy over time.

There are models in development that reach 98% accuracy while maintaining the speed of the Inception V3 model, but they have yet to reach our market.

If you’re looking to change machine learning models from the Nuix default, they are available at download.nuix.com. This is one of our strengths, letting you change your model based on need or preference.

Exploring the Models Further

Graphics analysis is an example of artificial narrow intelligence (ANI), which I explored in my last article. ANI is programmed to perform a specific task, freeing humans from repetitive and time-consuming work. Anyone who has performed graphics analysis knows it certainly qualifies as both.

The models we’re talking about are convolutional neural networks (CNNs). A CNN detects patterns by analyzing small areas of an image in a layered, procedural approach: early layers pick out edges and ranges of color, and later layers combine those into the higher-level features that help the machine classify the image for the analyst.
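The core operation those layers repeat is convolution: sliding a small kernel over the image and taking dot products. The sketch below shows a single hand-written vertical-edge kernel (Sobel-style) in plain NumPy, assumed here purely for illustration; in a real CNN, many such kernels are learned from data rather than written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a 2D image and take dot products --
    the operation a CNN layer repeats with many learned kernels."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel. On an image that is dark on the left half and
# bright on the right, the response peaks at the boundary between them.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
image = np.zeros((5, 6))
image[:, 3:] = 1.0  # right half bright
edges = conv2d(image, sobel_x)
```

Uniform regions produce zero response; only the dark-to-bright transition lights up, which is exactly the "edge detection" behavior described above.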

Explaining how this works is difficult. Thankfully, there are some great explanations online. One is the brilliant Anh H. Reynolds’ article on convolutional neural networks, which she was gracious enough to give me permission to share in this blog. AI education site DeepLizard also published an explainer video that’s worth watching to learn more. If you have a need-to-know mindset about how things work, both are worth the time investment.

Making the Choice

As I did the research for this article, I came to an important conclusion. I can’t definitively say which model or approach is right for your investigative needs. Analysts should take the time to assess the different models and be comfortable with the mix of accuracy and speed they offer. During testimony, a decent attorney may ask what kind of testing and comparison you conducted to choose your machine learning model. It hasn’t happened often, but I have had attorneys surprise me with the homework they do.

I prefer to use real cases and test different models against them, with a couple of caveats. First, you should run your tests against cases you’ve already worked through your trusted methods and technologies – just be prepared to find things you might not have found the first time around. After all, that's the benefit of using the technology.
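A testing pass like that can be as simple as scoring each candidate model against a case whose correct labels you already established with your trusted workflow, recording both accuracy and throughput so the efficiency-versus-accuracy trade-off is documented. The harness below is a hedged sketch: the model names and the `classify(item) -> label` interface are hypothetical stand-ins for whatever wrappers your tooling exposes.

```python
import time

def benchmark(models, items, truth):
    """Score each candidate classifier for accuracy and throughput against
    a set of items with known-correct (ground truth) labels."""
    results = {}
    for name, classify in models.items():
        start = time.perf_counter()
        predictions = [classify(item) for item in items]
        elapsed = time.perf_counter() - start
        correct = sum(p == t for p, t in zip(predictions, truth))
        results[name] = {
            "accuracy": correct / len(items),
            "items_per_sec": len(items) / elapsed if elapsed else float("inf"),
        }
    return results

# Toy stand-in "models": one flags everything, one flags even-numbered items.
items = list(range(10))
truth = [i % 2 == 0 for i in items]
report = benchmark(
    {"always_flag": lambda i: True, "even_flag": lambda i: i % 2 == 0},
    items,
    truth,
)
```

A report like this, saved alongside your case notes, is exactly the kind of documented comparison an attorney might ask about on the stand.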

Also, testing is unbillable work – it's just wrong to bill a client for work done while testing a new machine learning model. That doesn’t make the work any less valuable; the time you spend testing your models and documenting the results will have an incredible impact at every stage of your investigation.