
The Human Side of AI with Zach Edwards

If you ask the newest face of artificial intelligence, ChatGPT, who won the Academy Award for Best Picture last year, it can't give you the answer: as of right now, its training data cuts off in September 2021. The answer, by the way, is “Everything Everywhere All at Once,” which I think was a great movie but also a great theme for what has happened to artificial intelligence this year. It has been everything, everywhere, and all at once. I sat down with our Senior Manager of Threat Insights, Zach Edwards, to talk about where AI is today, where it's going, and what the implications are for cybersecurity as a whole. Let's have a conversation.

I came across a quote that said something along the lines of, “Artificial Intelligence will go very slowly and then out of nowhere come all at once.” How do you feel about that?

Zach Edwards: I saw someone say something similar, and the reply was, “It didn't come out of nowhere, we've been working on it for 20 years.” It was just that no one had a product that worked for the average person. You had to be an AI engineer to know what the heck was going on. That part has changed. Moving forward, every single industry, every aspect of life, is seemingly going to be impacted by this.

How is the cybersecurity industry in particular going to be impacted? 

ZE: Well, ChatGPT could enable a golden age of ad fraud by making it easier to steal content in order to create content farms. It will change the way cybercriminals operate and, as always, we will have to evolve to protect the internet.

In what regard?

ZE: What's interesting about ChatGPT and similar AIs is that they have safety flags. So, if you actually ask, “Hey, how do I commit ad fraud?” it tells you, “I'm sorry, that's a crime, and we're not allowed to help you with that.”

Still, there are all these threads on social media about escapes: ways you can break ChatGPT into doing something unsafe, or into telling you how to do something unsafe. So I did that with an ad fraud prompt, then asked ChatGPT to come up with a criminal ad fraud scheme.

It gave me all the places where one should be careful and where there's a possibility of the scheme being found out. The system said to make sure to test your work. It didn't do the steps for you, but it gave a guide. So when I think about how it could impact us, the question is, “how is AI going to be used in criminal schemes?”

The thing people have to think about is, “will a machine be able to find a software or coding vulnerability better than a human?” And the answer is yes, probably. AI can find a random semicolon out of place or a mismatched bracket; spotting a security hole hiding behind a minor formatting issue like that is perfect work for a machine compared to a human eyeball. And when it comes to security testing, experts are already using tactics like “fuzzing,” which relies on setting up automations to find niche vulnerabilities and side channels in applications and code.
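To make the fuzzing idea concrete, here is a minimal sketch of that loop in Python. The parse_config target and the mutation strategy are hypothetical stand-ins; real-world fuzzing typically relies on dedicated tools such as AFL or libFuzzer, but the core cycle of mutate, run, and watch for failures looks roughly like this:

```python
import random
import string

def parse_config(data: str) -> dict:
    """Hypothetical target standing in for real application code under test."""
    pairs = {}
    for line in data.splitlines():
        key, value = line.split("=", 1)  # raises ValueError on a line with no "="
        pairs[key.strip()] = value.strip()
    return pairs

def mutate(seed: str) -> str:
    """Insert a handful of random printable characters into a known-good input."""
    chars = list(seed)
    for _ in range(random.randint(1, 5)):
        pos = random.randrange(len(chars) + 1)
        chars.insert(pos, random.choice(string.printable))
    return "".join(chars)

def fuzz(target, seed: str, iterations: int = 10_000) -> None:
    """Hammer the target with mutated inputs and report anything it chokes on."""
    for i in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:  # an unexpected exception marks a potential bug
            print(f"iteration {i}: {type(exc).__name__} on input {candidate!r}")

if __name__ == "__main__":
    fuzz(parse_config, seed="host=localhost\nport=8080")
```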

How should companies focus on stopping these kinds of things? How can they be proactive?

ZE: I think companies have to assume that some of these machine learning systems are going to find flaws that their humans can miss. We have to get comfortable with privately feeding technical information into a machine learning system and asking that system to look for flaws.

Fraudsters are always looking for what can be taken advantage of, and we need to make sure adversaries aren't using vulnerability-probing tools that the defenders aren't using too.

Threat actors are constantly scanning, looking for a line of code that's incorrectly formatted or an opportunity to exploit. We have to start moving at the speed of machine learning auditing in order to stay at the forefront of modern defense. Historically, adversaries weren't technical enough to do this, but now they can load up a bot and a network session and start asking questions: “dear bot, parse this JavaScript and look for unsafe methods, and please parse this network log looking for missing content security policies within specific pages.” Auditing processes like this are completely possible with manual work or simple rule sets, but being able to “ask a bot” to look for potential vulnerabilities is one of the core purposes of automation. We need to appreciate that this could improve the code-writing process, or create opportunities for non-coders to write simple exploits that work on live systems.
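The kind of “ask a bot to audit this” request Zach describes maps onto checks that can also be scripted directly. Below is a minimal sketch, assuming a toy network-log format and an illustrative (far from exhaustive) list of risky JavaScript patterns, rather than any real tooling:

```python
# Illustrative, far-from-exhaustive list of JavaScript patterns often flagged as risky.
UNSAFE_JS_PATTERNS = {
    "eval(": "dynamic code execution",
    "document.write(": "DOM injection sink",
    ".innerHTML": "possible XSS sink",
}

def audit_javascript(source: str) -> list[str]:
    """Flag occurrences of risky methods in a JavaScript source string."""
    return [
        f"found {pattern!r}: {reason}"
        for pattern, reason in UNSAFE_JS_PATTERNS.items()
        if pattern in source
    ]

def audit_network_log(entries: list[dict]) -> list[str]:
    """Flag HTML responses served without a Content-Security-Policy header."""
    findings = []
    for entry in entries:
        headers = {k.lower(): v for k, v in entry.get("headers", {}).items()}
        is_html = "text/html" in headers.get("content-type", "")
        if is_html and "content-security-policy" not in headers:
            findings.append(f"{entry.get('url', '?')}: missing Content-Security-Policy")
    return findings

if __name__ == "__main__":
    # Toy inputs standing in for a captured page and its network log.
    js_snippet = "document.getElementById('out').innerHTML = eval(userInput);"
    network_log = [{"url": "https://example.com/", "headers": {"Content-Type": "text/html"}}]
    print(audit_javascript(js_snippet))
    print(audit_network_log(network_log))
```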

We have to ask the questions that give us the advantage, and create systems that empower people to become technical superstars. 

So the same way it can enhance bad actors' abilities, it can shorten the time it takes to defend against fraud, right?

ZE: Yes. I think what's exciting about the challenges here is realizing that it's going to take AI and machine learning to catch schemes run by other machine learning. We must use these tools to our advantage, and HUMAN is poised and prepared for the future, since we've been using machine learning in our detections for years. Fraud operations are usually breaking some technical system, but they're also leaving clues behind for defenders to find. For our researchers and reverse engineers, a lot of their time is focused on investigating the details left behind by a sophisticated operation.

How is this thing doing it? 

What did they break in order to be able to do this? 

What the heck happened here? 

We spend hundreds of hours reversing what was broken. And we don't use much machine learning for those types of manual audits; it's just people spending time. Lots of manual processes, little bits of automated collection and parsing. Given the chance, AI will likely enhance our already amazing response time even more, by empowering even more defenders to eliminate threats before adversaries can even discover them.

That's interesting, because HUMAN's mission is disrupting the economics of cybercrime. In your opinion, who would be more at fault: the person using the AI for the nefarious deed, or the AI company for allowing it to happen?

ZE: Both. I think the example of that is a chop shop. If you bring in a car to be chopped, you're a criminal. If they're chopping it for you and they know it's stolen, they're criminals too. Companies get shut down for that all the time, saying, “Oh, I'm just an automobile shop. We just happen to have clients that may commit crimes.”

At some point the evidence becomes overwhelming that they're ignoring criminal activity they keep allowing to happen, either failing to police it correctly or purposely turning a blind eye.

Italy has banned ChatGPT because of privacy and data issues affecting its citizens. Do you think more countries will do the same moving forward, until we get more regulation on AI? Do you feel it could be a breach of privacy and data if not overseen correctly?

ZE: Absolutely. A lot of artificial intelligence is training one AI model off of another AI model. Not only is this unsafe, it's a huge privacy violation. I think the government of Italy probably did what many of us have done: you go to ChatGPT and say, tell me the biography of a random person. The person doesn't have to be a celebrity; it will give out all kinds of biographical information that it has scraped from websites. Laws are different in every country, so it's almost like shooting fish in a barrel to catch the fact that they're using public data right now. And it's really bad for these companies to be scraping the public internet without safeguards in place.

I guess nuance is a human quality.

ZE: Yes, exactly. These bots are obviously crawling the internet now in a million different contexts. You don't know the country of origin of a domain; that's not transparently labeled. There's a variety of data bubbles on the horizon, where systems that have trained themselves on third-party content are going to face lawsuits. And in some countries, Italy for example, I wouldn't be surprised if they ban reuse of someone else's content in specific circumstances.

As artificial intelligence scales and bandwidth increases, do you think AI is net positive or net negative for humanity as a whole?

ZE: I think net positive. As someone who loves technical systems, I think it will simplify a lot of testing processes while creating more opportunities to detect sophisticated problems. For all of the technical advances that have happened in the world, it's going to make those systems better. I'm just optimistic about more people being able to write safe code, and that will in turn help with even more real-world problems. It's gotta be good long-term.

Find out how HUMAN is safeguarding our clients now and in the future with a modern defense strategy.