Approach #1 – User agent
The simplest approach to monitoring bot clicks is to flag clicks generated by self-identifying bots. Some bots scour the web to collect information, and these bots often purposely communicate to other parties that they are bots through their user agent, a string by which a web browser (or other client) tells a website information about itself. The common convention is for a bot to include the term “bot” in its user agent. Other terms that may be useful to monitor include “crawler” and “spider”.
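A minimal sketch of this check might look like the following. The token list and function name are illustrative; real traffic will warrant a longer, regularly updated list of markers.

```python
import re

# Case-insensitive pattern covering the common self-identifying markers
# mentioned above. The exact token list is an assumption; extend it as
# you observe new bot user agents in your logs.
BOT_PATTERN = re.compile(r"bot|crawler|spider", re.IGNORECASE)

def is_declared_bot(user_agent: str) -> bool:
    """Return True if the user agent string self-identifies as a bot."""
    return bool(BOT_PATTERN.search(user_agent))
```

For example, `is_declared_bot("Mozilla/5.0 (compatible; Googlebot/2.1)")` returns `True`, while a typical desktop browser user agent returns `False`. Note this only catches bots that choose to identify themselves; it is a filter for well-behaved crawlers, not an anti-fraud defense on its own.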
Approach #2 – Click-through rate
One of the most effective approaches to monitoring bot clicks is using click-through rate. Humans typically click on fewer than 5% of the display ad impressions they observe. If you notice an IP address clicking on ads an inhumanly large percentage of the time, that’s a strong indicator that the IP has been compromised by a bot. However, humans may open a page with a few ads, click one ad, and then end their browsing session; a single click against a handful of impressions yields a very high click-through rate with no fraud involved. Because of this, in order to confidently use click-through rate for bot detection, some minimum threshold of clicks per time period must first be observed.
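The rule above can be sketched as a small function. The specific thresholds here (10 minimum clicks, 5% CTR) are illustrative assumptions, not production-tuned values; the 5% figure simply echoes the typical human behavior described above.

```python
def is_ctr_suspicious(clicks: int, impressions: int,
                      min_clicks: int = 10,
                      ctr_threshold: float = 0.05) -> bool:
    """Flag an IP whose click-through rate looks inhumanly high.

    min_clicks guards against the low-volume case described above,
    where one click on a few impressions produces a high but
    legitimate CTR.
    """
    if clicks < min_clicks or impressions == 0:
        return False  # not enough evidence to judge this IP yet
    return clicks / impressions > ctr_threshold
```

So `is_ctr_suspicious(clicks=50, impressions=100)` flags the IP, while `is_ctr_suspicious(clicks=1, impressions=3)` does not, despite the 33% CTR, because the click volume is below the evidence threshold.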
Approach #3 – Frequency
Another commonly used bot monitoring approach is counting actions, such as clicks, per time period. This is useful for covering cases where bots mimic human click-through rates but instead rely on volume to create sizable fraud. Humans typically click on fewer than 10 ads in a given minute. If you notice a cookie clicking many times per minute, that’s a strong indicator that the cookie has been compromised by a bot. Keep in mind that a single IP may comprise a large number of devices, so attempts to count frequency by IP would need to be done conservatively.
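One way to implement this is a sliding-window counter keyed by cookie. This is a sketch under assumed thresholds (a 60-second window and a cap of 10 clicks, matching the rough human limit noted above); the class and parameter names are my own.

```python
from collections import defaultdict, deque

class FrequencyMonitor:
    """Track per-cookie click timestamps in a sliding time window."""

    def __init__(self, window_seconds: float = 60.0, max_clicks: int = 10):
        # Illustrative thresholds: more than 10 clicks in any rolling
        # 60-second window is treated as bot-like.
        self.window = window_seconds
        self.max_clicks = max_clicks
        self._clicks = defaultdict(deque)  # cookie_id -> click timestamps

    def record_click(self, cookie_id: str, timestamp: float) -> bool:
        """Record a click; return True if this cookie now looks bot-like."""
        q = self._clicks[cookie_id]
        q.append(timestamp)
        # Evict clicks that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_clicks
```

Keying by cookie rather than IP sidesteps the shared-IP problem mentioned above; if you must key by IP, the `max_clicks` ceiling would need to be raised substantially to avoid flagging NAT gateways and corporate networks.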
In part three of the series, we’ll be looking at how only about half of display ads are ever viewed.