Good Bots In. Bad Bots Out.

More than half of Internet traffic today comes from bots. These non-human visitors crawl the web constantly, their numbers are increasing, and they are getting smarter and more human-like by the minute. Imperva has been tracking these trends for more than five years, in an ongoing statistical study of the bot traffic landscape.
Of course, some bots are welcome visitors to your website. Search engine bots, commercial crawlers, feed fetchers and monitoring bots all fall in the classification of good bots. These are the underlying tools of our digital economy that enable effective search engine optimization, digital marketing, website health reporting and even mobile content.
More than half of these bots, however, are nefarious and decidedly unwelcome. Impersonators. Spammers. Scrapers. Hacker tools. The growing web security challenge is telling the difference quickly when these bots attempt access, without burdening or interrupting the experience of your human visitors. In this post we'll examine how Imperva distinguishes good bots from bad to help our customers win the bad bot battle.

Challenging Non-Human Visitors

We all have a love/hate relationship with the captcha, but it remains among the most stringent tests to sort human visitors from bots. The bots are getting smarter, but they still aren’t human. When designing a good web experience, however, no one would choose to present a captcha to every visitor.
Let's say you're hosting a dinner party during the Zombie Apocalypse. You'd want your human guests to have a wonderful experience, so you'd need a way to identify and lock out any zombies without subjecting every guest to an off-putting test, such as asking whether they'd prefer bouillabaisse or brains.
Likewise, to ensure an inviting and uninterrupted browsing experience, the more innovative approach is to identify and deny access to nefarious bots transparently, before the human visitors even realize the bots were at the door.

Imperva’s Three-Step Bot Identification Process

Imperva's bot identification uses layered challenges to separate humans from bots. This happens dynamically and transparently through a three-step process, so human visitors aren't even aware their access is being challenged (Figure 1).
Figure 1: Imperva’s transparent bot identification process provides users with an uninterrupted browsing experience.

Step 1: Classify

Imperva has built a signature database of millions of known bot variants. Each incoming bot is compared against this database and classified as malicious, legitimate, or unknown. Malicious bots can be blocked immediately. If the bot cannot be found in the database, it's profiled in the next step.
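Conceptually, this classification step is a signature lookup. The sketch below is purely illustrative (the dictionary, its entries, and the `classify` function are hypothetical, not Imperva's actual database or API) but shows the three-way outcome the post describes:

```python
# Hypothetical sketch of signature-based bot classification.
# SIGNATURE_DB and its entries are made-up examples, not real signatures.
SIGNATURE_DB = {
    "Googlebot/2.1": "legitimate",   # a welcome search engine crawler
    "EvilScraper/1.0": "malicious",  # a known bad actor
}

def classify(signature: str) -> str:
    """Return 'legitimate', 'malicious', or 'unknown' for a bot signature."""
    return SIGNATURE_DB.get(signature, "unknown")
```

An "unknown" result is not a verdict; it simply routes the bot to the profiling step.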

Step 2: Profile

Imperva subjects unknown bots to various levels of inspection including headers, IP addresses, and client fingerprinting among other techniques. During the profiling step, the bot is classified as legitimate, malicious or suspicious. Malicious bots can be blocked while suspicious bots go on to the next step—challenges.
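A toy version of this profiling step might score a request on a few simple signals. The signals, thresholds, and blocklist below are invented for illustration; a production system would weigh far more inputs, including client fingerprints:

```python
# Illustrative profiling sketch; all signals and thresholds are assumptions.
KNOWN_BAD_IPS = {"203.0.113.9"}  # example address from a documentation range

def profile(headers: dict, ip: str) -> str:
    """Score an unknown client as 'legitimate', 'malicious', or 'suspicious'."""
    if ip in KNOWN_BAD_IPS:
        return "malicious"  # a blocklisted IP is decisive on its own
    suspicion = 0
    # Real browsers send these headers; their absence raises suspicion.
    if "User-Agent" not in headers:
        suspicion += 2
    if "Accept-Language" not in headers:
        suspicion += 1
    return "legitimate" if suspicion == 0 else "suspicious"
```

Only the "suspicious" bucket moves on to the active challenges in step 3.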

Step 3: Challenge

In this step, bots are presented with a set of challenges including holding a cookie and parsing JavaScript. We’ve found that over half and up to 80% of malicious bots cannot pass the cookie or JavaScript challenges. Captcha can also be used as the definitive human test, but only as a last resort.
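The escalation logic of this final step can be sketched as follows. The `Client` type and its fields are hypothetical stand-ins for the results of the cookie and JavaScript challenges:

```python
from dataclasses import dataclass

@dataclass
class Client:
    """Hypothetical record of a client's challenge results."""
    holds_cookie: bool
    runs_javascript: bool
    still_suspicious: bool = False

def challenge(c: Client) -> str:
    """Escalating challenges: block, allow, or fall back to a CAPTCHA."""
    if not c.holds_cookie or not c.runs_javascript:
        return "blocked"   # most malicious bots fail at this stage
    if c.still_suspicious:
        return "captcha"   # definitive human test, used only as a last resort
    return "allowed"
```

Because real browsers hold cookies and execute JavaScript automatically, human visitors clear these hurdles without ever seeing them.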
This entire process takes just milliseconds and ensures a high level of accuracy. Each step applies a progressively more demanding test, making it easier to discern bots from humans. The majority of questionable traffic is stopped automatically at one of the steps without affecting the overall user experience, so visitors can enjoy their sessions uninterrupted.
The result is a highly secure approach to transparently identifying and classifying bot traffic so non-human bad actors are stopped before they can reach your virtual doorstep.
Learn more about Imperva Incapsula bot mitigation or request a demo to see it for yourself.