
Fighting Fakes

One of the most enduring and frustrating problems of our internet-facing lives is fake accounts. These fakes, often built to behave as human-like as possible, only faster, can cause a surprisingly wide variety of problems:

  • On retail or ticketing websites, they can snap up tickets or limited-edition merch before you or I can even hit “add to cart”.
  • On streaming websites, they can make an artist, show, or album seem much more popular than it really is by faking streams.
  • On internet forums, they can create the appearance of broad consensus through sheer volume of comments, or they can pile on against a human user who voices a different opinion.
  • And on survey and poll tools, they can put not just their thumb on the scale, but their whole hand, resulting in metrics that don’t reflect the reality of human voting.

Every industry is affected by bots and fake accounts, and each has taken its own approach to combating them. It’s usually some flavor of CAPTCHA, in which you, as the user, have to decipher a blurry photo of letters or click on all of the stoplights in a grid of photos.

The trouble is that those CAPTCHAs, while well intended, don’t really get the job done. Many CAPTCHA mechanisms have been defeated outright with machine learning, and the ones that haven’t can be outsourced to human-run “CAPTCHA farms”: imagine a school computer lab full of people who sit at their desks solving CAPTCHA challenges all day. Frustratingly, it’s cheap, too; some CAPTCHA farms will put human intelligence behind the problem for only pennies per solve.

Zoom out a bit, though, and the very idea of CAPTCHAs is fundamentally backward. CAPTCHAs challenge every user, human or robot, to demonstrate their humanity before proceeding. The result is an annoying user experience for humans and little to no impact on the bot population of the platform.

What’s needed instead is to challenge only the users who show signs of automation in the first place. Let humans through unimpeded, and have the would-be users whose devices or behaviors look automated prove they’re real before moving forward.

That’s where BotGuard for Applications comes in. BotGuard isn’t yet another CAPTCHA tool: it uses machine learning, more than 2,500 detection algorithms, and hacker intelligence to make a bot-or-not decision in milliseconds. With that decision, an account-creation flow can let the user move forward, block the user outright, or serve a “prove you’re real” challenge.
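To make that flow concrete, here’s a minimal sketch of what a risk-based signup gate could look like. This is purely illustrative: the `scoreRequest` function, the `Verdict` type, and the status codes are hypothetical stand-ins, not BotGuard’s actual API.

```typescript
// Hypothetical sketch of a risk-based signup gate (not BotGuard's real API).

type Verdict = "human" | "suspicious" | "bot";

interface RiskSignal {
  verdict: Verdict;
  score: number; // 0 = clearly human, 1 = clearly automated
}

// Stand-in for a milliseconds-scale detection call; a real integration
// would consult the detection service here.
async function scoreRequest(req: Request): Promise<RiskSignal> {
  return { verdict: "human", score: 0.02 };
}

async function handleSignup(req: Request): Promise<Response> {
  const risk = await scoreRequest(req);

  switch (risk.verdict) {
    case "human":
      // Clearly human: no CAPTCHA, no friction.
      return createAccount(req);
    case "bot":
      // Clearly automated: block outright.
      return new Response("Forbidden", { status: 403 });
    case "suspicious":
      // Ambiguous signals: only now ask the user to prove they're real.
      return serveChallenge(req);
  }
}

// Stubs so the sketch is self-contained.
async function createAccount(req: Request): Promise<Response> {
  return new Response("Account created", { status: 201 });
}

async function serveChallenge(req: Request): Promise<Response> {
  return new Response("Please complete this challenge", { status: 401 });
}
```

The design point is that friction is reserved for the ambiguous middle: the vast majority of requests take the first branch and never see a challenge at all.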

This approach keeps the web more human for those of us who use it every day. Don’t challenge the humans to prove themselves; challenge the bots. That’s how you flush out the bots and protect the integrity of the internet.

Knowing who’s real is critical to the integrity of the internet. There are more than five billion (with a b) humans online around the world. We’ve dealt with multiple scenarios in which a single bot operation generated billions of fake interactions in a single week. The impact on what we know and trust is real.

Bots have become increasingly sophisticated and are one of this decade’s most important cyber threats. They are used in over three quarters of online security and fraud incidents, not just to manipulate popularity and sentiment, but also to steal sensitive data, break into our accounts, buy up limited goods and resources, commit financial fraud, and much more. If you can look like a million humans online, there’s a lot you can do.