
Human or Machine? The Voight-Kampff Test for Discovering Web Application Vulnerabilities by Vanessa Sauter

Apr 17, 2020 4:21:41 PM By Mark Henke

Vanessa Sauter, who works at penetration testing company Cobalt.io, shows how we can apply the Voight-Kampff test to detect web application vulnerabilities.

This topic is important because there is not much research on how organizations can appropriately penetration test or ethically hack their own applications. But even more so, discerning between human and machine is a way of discerning value in a results-driven market: specialized pen testers are competing against open-source tool developers.

Hacker Report

The 2019 Hacker Report shows us that 90% of vulnerabilities are still found in web applications.


We will take these results to the next level and show what vulnerabilities humans can find versus what vulnerabilities machines can find.

But first, let’s establish some basic terms and talk about what they mean.

A Quick Explainer

Before going deeper, we need two terms: black-box penetration testing and dynamic scanning.

Black-box penetration testing, or pen testing, means testing with no access to the source code and no internal knowledge. You come in relatively blind, relying on your general knowledge and the public access points of a software system.

In dynamic scanning, you analyze an application at runtime. Here you have internal accounts and other tooling, so you approach the application with internal knowledge.

There are also proxies, where hackers can intercept HTTP requests between an application and its users.
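To make the proxy idea concrete, here is a minimal sketch of an intercepting proxy, assuming the open-source mitmproxy tool and its addon API; the tampered header name is made up for illustration:

```python
# A minimal intercepting-proxy sketch using mitmproxy's addon API.
# Run with: mitmproxy -s intercept.py
from mitmproxy import http


def request(flow: http.HTTPFlow) -> None:
    # Called for every client request flowing through the proxy.
    print(flow.request.method, flow.request.pretty_url)
    # Tamper with the request before it reaches the server
    # (hypothetical header, just to show modification is possible).
    flow.request.headers["X-Intercepted"] = "1"
```

With a foothold like this, a hacker can observe, replay, and modify every request an application makes.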

Findings

We can find out quite a lot with all these tools at our disposal. For starters, the number one cause of web application vulnerabilities is misconfigured application settings, anything from insecure business logic to lax cookie settings.
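As an illustration of the cookie-settings case, here is a hedged sketch of the flags scanners commonly check for, using Flask; the route and cookie values are hypothetical:

```python
from flask import Flask, make_response

app = Flask(__name__)


@app.route("/login")
def login():
    resp = make_response("logged in")
    # Each flag below closes off a common misconfiguration finding.
    resp.set_cookie(
        "session",
        "opaque-token",
        secure=True,     # send only over HTTPS
        httponly=True,   # hide from JavaScript, limiting XSS impact
        samesite="Lax",  # limit cross-site sends, reducing CSRF exposure
    )
    return resp
```

Checks like these are mechanical, which hints at where machines do well.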


We also found that machines are better at detecting the vast majority of vulnerabilities. For example, they can find cross-site scripting issues much faster than any human.

Much of the time, the machine wins because it is being “told” what to look for, especially with DAST/OAST (dynamic and out-of-band application security testing) tooling.
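For example, the core of a reflected-XSS check can be sketched in a few lines; the target URL, parameter name, and payload list below are hypothetical stand-ins for what a real scanner enumerates at scale:

```python
import requests

# Hypothetical target and parameter; a real DAST tool iterates over
# many endpoints, parameters, and payload variants.
TARGET = "https://app.example.test/search"
PAYLOADS = ["<script>alert(1)</script>", '"><img src=x onerror=alert(1)>']


def probe_reflected_xss(url: str, param: str) -> None:
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=5)
        # A payload echoed back unencoded suggests a reflection point.
        if payload in resp.text:
            print(f"possible reflected XSS in {param!r} with payload {payload}")


probe_reflected_xss(TARGET, "q")
```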


There’s an argument that humans are better at knowing where to look. Therefore, they should instruct the machine to find every instance of whatever the human is seeking.

There are a variety of tools that can do this work for us, but we have to weigh the cost of configuring these tools against the time they save by continually finding vulnerabilities.

Where the Human Wins

The human wins in a few situations. These include business logic bypasses, which are human-centric, and concurrency issues like race conditions.

For example, Uber had a bug that allowed a user to change an input in the HTTP request from “false” to “true,” letting customers bypass built-in limits when hiring a car.
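In the abstract, that class of bug looks like the sketch below; the endpoint and field names are hypothetical, but the pattern of flipping a client-supplied flag is the same:

```python
import requests

# Hypothetical endpoint and fields, chosen only to illustrate the bug class.
url = "https://api.example.test/rides"
ride_request = {"pickup": "A", "dropoff": "B", "limit_exempt": False}

# An intercepting proxy lets a tester flip the flag before the server sees it.
ride_request["limit_exempt"] = True
resp = requests.post(url, json=ride_request, timeout=5)
# A success response would mean the server trusted a client-supplied value.
print(resp.status_code)
```

No scanner flags this on its own, because nothing about the request is malformed; only a human knows the business rule being broken.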

Big vs. Small

Overall, we see that machines win when categorizing small, granular things. Humans win when thinking strategically and creatively, or “thinking big”. Humans are good at thinking about the overall workflows and user journeys of an application.

Humans also shine when thinking about race conditions. For example, at Shopify, hackers figured out how to bypass a partner email confirmation so that they could take over any store. They did this by quickly modifying a resource in the window between when a file is opened and when it is processed.
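A hedged sketch of how a tester probes such a window, assuming a hypothetical single-use confirmation endpoint, is simply to race many identical requests against it:

```python
import threading

import requests

# Hypothetical single-use confirmation endpoint; the names are illustrative.
URL = "https://app.example.test/partners/confirm-email"
PAYLOAD = {"token": "single-use-token"}


def fire() -> None:
    # Each thread races to redeem the same token inside the check/use window.
    resp = requests.post(URL, json=PAYLOAD, timeout=5)
    print(resp.status_code)


threads = [threading.Thread(target=fire) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If more than one request succeeds, the endpoint has a race condition.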

One may think that scanners would be good at detecting these sorts of conditions, but multi-step workflows turn out to be a hindrance for them.

As another example of humans thinking big in order to exploit bugs, we can look at chained exploits, where one vulnerability is used to find another. Using Shopify as an example again, someone was able to use server-side request forgery (SSRF) to get root access to all instances of the application, letting them introduce further vulnerabilities on the server.
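The first link in such a chain often starts small. A minimal SSRF probe might look like the sketch below; the fetch endpoint is hypothetical, while the internal addresses are classic targets:

```python
import requests

# Hypothetical fetch-by-URL endpoint; SSRF arises when the server will
# retrieve any URL a user supplies, including internal-only addresses.
FETCH_ENDPOINT = "https://app.example.test/fetch"

candidates = [
    "http://169.254.169.254/latest/meta-data/",  # cloud metadata service
    "http://localhost:8080/admin",               # internal admin interface
]

for internal_url in candidates:
    resp = requests.get(FETCH_ENDPOINT, params={"url": internal_url}, timeout=5)
    if resp.ok:
        print(f"server fetched {internal_url}; possible SSRF")
```

Spotting that a fetched internal response can be parlayed into root access is the creative, human part of the chain.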


It's Not a Competition

There are certain things machines are good at and others that humans are good at. This is not a competition: we humans should work with machines to find vulnerabilities together. Human testers assisted by machines can enter a new frontier, leaning on tools while applying their own creativity.

Takeaways

The key takeaways for all this are:

  • Tool price is not correlated with tool quality.
  • Creativity, not narrow thinking, is the key to getting better at finding vulnerabilities.
  • Humans should work with machines instead of against them.

Remember these points and you’ll excel at discovering application vulnerabilities.

This post was written by Mark Henke. Mark has spent over 10 years architecting systems that talk to other systems, doing DevOps before it was cool, and matching software to its business function. Every developer is a leader of something on their team, and he wants to help them see that.

Photo by Eric Krull