HUMAN Blog

The Fraud/Friction Tightrope: CAPTCHA

The balancing act between preventing fraud and reducing friction comes to a head in the very tools we use to manage and mitigate fraud. These are the elements that risk introducing friction in the name of preventing fraud. And while every tool used in this context is certainly well-intentioned, it remains an open question whether they’re effective at the job they’re meant to do. Namely, if these tools are, in fact, slowing down users, are they at least bringing fraud down to a negligible level?

TL;DR - not really. The bots that plague the digital experience today are sophisticated enough to evade the tools and features deployed to stop them, which suggests that the only thing those tools and features accomplish is to increase user friction.

Below, we explore one of the most common tools used for fraud prevention: CAPTCHA. We dig into new HUMAN research on just how much friction users experience when trying to complete CAPTCHA challenges, and we review the business impact of relying on a defeatable technology to prevent fraud.

CAPTCHA capture

CAPTCHA fields and tools, also occasionally referred to as cognitive challenges, became one of the front-line weapons in the effort to prevent automated attacks from reaching the sensitive information at the heart of the tech stack. But how do the humans who spend time solving these challenges feel about that time? How frustrated are they with the process?

HUMAN recently completed a research study asking 1,000 consumers about their impressions of, and frustrations with, various styles of CAPTCHA. The results suggest that while the tools aren’t often a showstopper for a human trying to do something online, they can introduce a level of annoyance that may push a consumer to look elsewhere:

  • Only half of respondents reported solving cognitive challenges on the first try. In an internet where seconds may make the difference between getting the new gaming console and missing out yet again, one wayward keystroke can sour a consumer significantly.
  • Forty percent of respondents quit their login or transaction attempt because of CAPTCHA frustrations. Simply put, that’s an enormous proportion of current and potential customers who walked away because of a tool that doesn’t protect the business from automated cyberattacks in the first place. More than half of those who acknowledged abandoning a login or transaction did so on a banking/insurance or retail site, industries in which competitors abound.
  • Forty-four percent of respondents described the “enter the characters from this blurry box” style of CAPTCHA as moderately to extremely frustrating. I remember when these text boxes showed actual scans of words that OCR couldn’t seem to parse, and it felt as though solving these CAPTCHAs was doing good work in helping digitize written works that might not otherwise be preserved. But that’s not how those boxes look anymore, and I suspect OCR has improved enough to render the “actual words” style of blurry-box cognitive challenge obsolete.
  • More than half of those surveyed found the “click all the stoplights” style of CAPTCHA moderately to extremely frustrating. Of the cognitive challenge mechanisms we included in our survey, this one (perhaps unsurprisingly) met with the most frustration. Everybody who lives a mostly digital life has felt the hesitation of “wait, this tiny corner of the stoplight is in another box, do I need to click that one too?” These challenges serve only to slow humans down and keep them from completing what they came to do.
  • Overall, nearly two-thirds of respondents are fed up with CAPTCHA. A solid sixty-four percent of those surveyed said they felt moderately to extremely frustrated with the experience of using CAPTCHA to prove their humanity.

These statistics may not be especially surprising, but they underline one major takeaway: CAPTCHA and cognitive challenges introduce friction into the online user experience. What’s more, they don’t do enough to stop automation by bots. Search for any combination of “captcha” and “solver” and you’ll find numerous services claiming to solve thousands of CAPTCHA challenges for mere pennies. Some even employ human solvers in the background, doing the work a cybercriminal needs done to get on with their attacks.

If a tool doesn’t effectively protect against the mechanism of attack and frustrates the real users attempting to use the site, that’s a failed tool.

Frustrations can become painful

One major realization I’ve had in the past few years is that too many organizations treat bot mitigation and management as a checkbox item in their cybersecurity planning. A little user frustration is accepted as the price of protection, as if friction were somehow a symptom of security. But when cognitive challenges are dispensed with easily (both by and for the benefit of bots), that frustration can become the source of significant pain to the business.

ESG’s recent research into bot management trends asked cybersecurity decision-makers how long they believed it would take to recover market share and customer trust following a bot attack:
  • Six months: respondents to ESG’s research said it would take an average of six months to regain market share after an attack. Consider the data point above about how many respondents abandoned their login or transaction attempt due to cognitive challenge frustrations. There are alternatives, and a successful bot attack paired with a difficult user experience can lead to dramatic losses.
  • Nine months: the same ESG research found that it would take an average of three full quarters to rebuild customer trust lost following an attack. Businesses can’t afford to coast for the better part of a year, waiting for skeptical customers to come back around.
Another data point worth highlighting from the ESG research: the vast majority (86%) of respondents—who, again, are cybersecurity decision-makers—believe that bots can bypass simple defenses. Like CAPTCHAs.


Stop the bots. Not the humans.

It’s 2022. User friction as a substitute for security has long since played out, and there are ways to mitigate threats without slowing down users. BotGuard for Applications doesn’t ask every session and every login attempt to demonstrate humanity before continuing. Instead, it scans for signs of automation and only then takes action, up to and including preventing the request from proceeding at all. And BotGuard completes that scan and decision faster than the time it takes to blink.
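To make the detection-first pattern concrete, here is a minimal sketch written as Express middleware in TypeScript. Everything in it is an illustrative assumption: the scoreRequest function, the BOT_THRESHOLD cutoff, and the two header checks are hypothetical stand-ins, not BotGuard’s actual API, which evaluates far richer signals server-side.

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical cutoff above which a request is treated as automated.
const BOT_THRESHOLD = 0.9;

// Illustrative stand-in for a real detection service: score a request
// from a couple of coarse signals. A production detector weighs hundreds
// of behavioral and technical signals, not just two headers.
function scoreRequest(req: Request): number {
  let score = 0;
  const ua = req.get("user-agent") ?? "";
  if (/headless|selenium|puppeteer/i.test(ua)) score += 0.7; // obvious automation fingerprints
  if (!req.get("accept-language")) score += 0.3;             // real browsers send this header
  return Math.min(score, 1);
}

// The key design point: every request is scored invisibly, and only
// suspected automation is acted on. Humans never see a challenge.
function botMitigation(req: Request, res: Response, next: NextFunction): void {
  if (scoreRequest(req) >= BOT_THRESHOLD) {
    res.status(403).send("Automated traffic blocked");
    return;
  }
  next(); // human traffic proceeds with zero added friction
}

const app = express();
app.use(botMitigation);
app.get("/login", (_req, res) => {
  res.send("Login page");
});
app.listen(3000);
```

The design choice worth noticing is that the cost of detection is paid silently on every request, rather than being pushed onto the user as a visible challenge.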

Test-drive BotGuard for Applications for 30 days with a single line of code and see for yourself just how seamless effective cybersecurity can be.