In the world of cybercrime, ransomware and DDoS attacks had the highest profile by far during the past year. There was an entire day devoted to a ransomware “summit” at the recent RSA conference in San Francisco.
But when it comes to money being lost (and made), bot fraud is king – by a lot.
Most estimates of losses in the US from ransomware during 2016 were in the $1 billion range. By contrast, a study published in January 2016 by White Ops and the Association of National Advertisers (ANA) titled “Bot Baseline: Fraud in Digital Advertising,” estimated global losses in 2016 would be $7.2 billion.
A more recent report by Marketing Science Consulting Group estimated 2016 losses in the US alone at $31 billion.
Either figure makes ransomware losses look like "chump change" by comparison, in the words of Augustine Fou, cybersecurity and ad fraud researcher at Marketing Science and author of the report "State of Digital Ad Fraud."
Fou, a self-described “bot hunter,” said that kind of cybercrime is so vast because it is so easy, profitable and safe.
“It is extremely lucrative, it is scalable, and perpetrators don’t have to risk their lives to do it. They can commit ad fraud from the comfort of their Aeron chairs,” he said.
Indeed, Bruce Schneier, CTO of Resilient Systems, wrote in a recent blog post that the growth of "click fraud" – bots designed to trick advertisers into thinking that real people have viewed and clicked on their ads – has the potential to cause "the whole advertising model of the Internet (to) crumble."
There is no great mystery in the industry about how bot fraud works. The online advertising model is based on advertisers paying for the number of people who view their ad on a website and/or click on the ad.
Companies pay based on CPC (cost per click) or CPM (cost per thousand impressions), an impression being a single view of the page carrying the ad.
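The two billing models can be illustrated with a quick calculation – the rates and campaign figures below are hypothetical, chosen only to show how each model is computed:

```python
# Hypothetical campaign numbers to illustrate CPC vs. CPM billing.
impressions = 500_000  # times the page carrying the ad was viewed
clicks = 2_000         # times the ad itself was clicked
cpc = 0.50             # dollars per click
cpm = 2.00             # dollars per thousand impressions

cost_under_cpc = clicks * cpc                # advertiser pays only for clicks
cost_under_cpm = (impressions / 1000) * cpm  # advertiser pays per thousand views

print(f"CPC billing: ${cost_under_cpc:,.2f}")  # $1,000.00
print(f"CPM billing: ${cost_under_cpm:,.2f}")  # $1,000.00
```

Under either model, a bot that fakes the view or the click fakes the billable event itself, which is what makes the fraud so direct.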
All of which, in the beginning, was considered a big improvement for advertisers over newspapers, where there was no way to tell whether readers actually looked at, or responded to, an ad unless they brought a coupon to a retailer.
The internet model ensured that advertisers paid only for people who actually looked at, or responded to (clicked) an ad.
Or, it did until the arrival of bots, which use thousands to millions of “zombie” computers or connected devices in a botnet to create fake traffic to websites and fraudulent “clicks” on ads.
“The going rate for sophisticated bot traffic is about 1 cent per visit,” said Michael Tiffany, CEO of White Ops. “If a botnet operator can make 100,000 unique computers visit a particular website, that’s worth $1,000 if he can make those visits look real.”
And, as has been widely reported, bot makers have gotten very good at making them behave like real human visitors.
Joe St. Sauver, scientist at Farsight Security, said bot makers, using compromised devices, spread the “traffic” among multiple IP addresses, “so that some clicks come from Oregon, others come from Ohio, others from Oklahoma etc.
“That software may also include routines designed to mimic natural pauses, while pages are ‘being read,’ or subsequent clicks – perhaps drilling down on optional features, looking for local dealers or other things that look like what a normal human visitor would do,” he said.
He added that in some cases, they don’t even have to be that sophisticated. “At a tenth of a cent per visit, you won’t get traffic that looks realistic, so it won’t fool ad buyers who use sophisticated analytics, but it will be good enough to make your site look popular,” he said.
And that is why bot fraud is so popular. “Can you imagine making a penny every time you make any machine you’ve infected load a webpage? Nothing beats those economics,” he said.
Fou agreed. “Fake website owners buy traffic to generate ad impressions – they buy traffic for $1 CPM and sell ad impressions for $10 CPM – they pocket $9 of pure profit.”
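The arbitrage Fou describes is simple enough to put in numbers. Using his rates – traffic bought at $1 CPM, impressions sold at $10 CPM – and assuming, for illustration, one ad impression per bot visit:

```python
# Hypothetical fake-site arbitrage at the rates Fou cites.
visits = 1_000_000    # bot visits the fake site buys
ads_per_visit = 1     # assumed: one ad impression served per visit
buy_cpm = 1.00        # dollars paid per 1,000 bot visits
sell_cpm = 10.00      # dollars charged per 1,000 ad impressions

cost = (visits / 1000) * buy_cpm
revenue = (visits * ads_per_visit / 1000) * sell_cpm
profit = revenue - cost

print(f"profit: ${profit:,.2f}")  # $9,000.00 on a $1,000 outlay
```

Every extra ad slot crammed onto the page multiplies the revenue side while the traffic cost stays fixed, which is why fake sites are typically plastered with ads.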
All of which raises the obvious question: Given the staggering losses to advertisers, why aren’t there more aggressive, and successful, efforts to curb it?
There are, in some cases.
Mike Lynch, chief strategy officer at inAuth, said the use of a tool called “velocity detection” can spot devices taking multiple, unusual actions. But he said if the tool uses IP addresses or cookies, bots can easily defeat it, since they change IP addresses and disallow cookies.
“So device intelligence and a method called device fingerprinting is a critical defense,” he said. “The more reliable the device fingerprint, the better the ability to detect velocity, which could be the result of a bot.”
Lynch said other techniques to defeat bots include:
- Static – detecting a particular known malware
- Behavioral – detecting a high number of attempts, a high number of failures, unusual traffic patterns, unusual speed of access and access attempts
- Honeypots – created to lure attackers to what seems like a legitimate part of a site, to gather information about and block the attacker
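The velocity idea Lynch describes can be sketched in a few lines. This is a minimal illustration, not inAuth's implementation: it flags a device fingerprint that performs too many actions inside a sliding time window, and the threshold, window, and fingerprint value are all placeholders.

```python
from collections import defaultdict, deque
import time

# Sketch of velocity detection: flag any device fingerprint that takes
# more than MAX_ACTIONS actions within WINDOW_SECONDS. Keying on a
# fingerprint rather than an IP address or cookie reflects Lynch's point
# that bots rotate IPs and disallow cookies.
WINDOW_SECONDS = 60
MAX_ACTIONS = 30

_recent = defaultdict(deque)  # fingerprint -> timestamps of recent actions

def record_action(fingerprint, now=None):
    """Record one action; return True if the velocity looks bot-like."""
    now = time.time() if now is None else now
    events = _recent[fingerprint]
    events.append(now)
    # Discard events that have aged out of the sliding window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_ACTIONS
```

As Lynch notes, the detection is only as good as the fingerprint: building one that a bot cannot cheaply reset is the hard part, and this sketch simply assumes that identifier exists.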
St. Sauver said one way to make fake traffic or clicks meaningless would be for "online retailers to move to a revenue-share model that paid only if a purchase was made, and didn't get reversed due to use of a stolen credit card, etc."
But he acknowledged that such a model would have its own complications. “Let’s say you visit a sports car site after seeing an ad on site A. The next day, you see another ad on site B,” he said. “A week later, you go to a dealership and buy the car. How does that purchase get connected to site A and site B?”
Fou said advertisers could cut their fraud losses simply by being careful. "Don't buy from the swamp of long-tail ad exchanges where fake sites and ad fraud thrive," he said. "There are only a finite number of humans that go to large mainstream sites. Buy from good mainstream sites and focus on low quantity, high quality – forget about the low-cost junk."
But he said most advertisers, while they don't knowingly pay for fake traffic, "use media buying agencies that buy large quantities of ad impressions, and don't check where the impressions came from."
He added that another reason for rampant ad fraud is that there is virtually no legal barrier for this kind of crime.
“Just look around on LinkedIn or Fiverr,” he said. “Traffic sellers operate in broad daylight because there is no law against this, and no one asks where they got the traffic and how or why they say ‘real traffic.’ There is no law against ad fraud, so there is no risk for the bad guys.”
That, he said, means it is up to advertisers to confront the fraud. “If they don’t insist on change, no one will,” he said.
And even if they do, St. Sauver said online advertising is unlikely to regain its stability, since there is another, even bigger, threat.
“The biggest threat to online advertising is likely not ad fraud bots,” he said. “It’s consumer adoption of ad blockers.”