How Incapsula Client Classification Challenges Bots

We’ve talked previously about how web bots are software applications that run automated scripts on the internet. For more detailed information about bot activity, our Bot Traffic Report 2016 is a good place to start.

In their most basic form, bots are neutral and serve a wide range of purposes, the most prominent of which is combing the internet for data to compile and analyze. Many bots run automated tasks that improve efficiency and save time. Some bots, however, are organized by cybercriminals for illegal purposes. In this post, I’ll detail how Incapsula challenges bots to determine whether they are beneficial or malicious.

Daily trends show malicious bot activity is on the rise. Recently, for example, cyberforgers used a barrage of web bots to impersonate 6,000 news and content sites and steal advertising revenue. A study from Newcastle University in England revealed that it takes only six seconds for bots to hack a credit card account. And malicious botnets like Mirai and Nitol have disrupted websites at never-before-seen levels.

Sorting Bots

Bots fall under three broad categories:

  • Good bots – such as crawlers, chatbots, transactional bots, informational bots and entertainment bots (like gaming and event bots).
  • Bad bots – like hackers, spammers, scrapers and impersonators.
  • Suspicious bots – anything that initially raises a flag of concern.

Malicious bots can perform a variety of tasks that compromise website security or site performance, including these four:

  • Scraping Site Content – by extracting large quantities of data from websites, these bots can slow sites down. Many “headless browser” bots masquerade as human visitors to fly under the radar and bypass website security.
  • Probing for Vulnerabilities – these bots visit websites and test their defenses, looking for weaknesses in application code to exploit.
  • Launching DDoS Attacks – malicious bots are often combined by the thousands to form a botnet, giving criminals the ability to direct and launch large-scale attacks on demand.
  • Distributing Spam – spambots create fake email accounts and solicit data for fraudulent purposes.

As I mentioned earlier, a bot is a single software application intended to perform automated tasks. Malicious bots are created and aggregated by a botmaster, who controls them and communicates with them through command and control channels. A botnet is a large collection of these bots used both for malicious activity and to infect and recruit more bots.

Here’s how we tackle bots.

How Client Classification Works

Incapsula uses an intelligent client classification engine to mitigate malicious bots while allowing beneficial bots through. The concept behind the classification is simple: apply sequential layers of analysis, each with the sole purpose of identifying whether a visitor is a human or a bot.
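To make the layered idea concrete, here is a minimal sketch in Python. It illustrates the general technique only, not Incapsula’s actual engine; the layer logic, signature lists and verdict names are invented for the example, and a real engine would use far richer signals.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Client:
    user_agent: str
    supports_cookies: Optional[bool] = None  # unknown until challenged
    supports_js: Optional[bool] = None       # unknown until challenged

# A layer returns "human", "good-bot" or "bad-bot", or None if undecided.
Layer = Callable[[Client], Optional[str]]

def signature_layer(client: Client) -> Optional[str]:
    # Compare the client against known good and bad bot signatures.
    known_good = {"Googlebot", "Bingbot"}  # example entries only
    known_bad = {"sqlmap", "masscan"}      # example entries only
    if any(sig in client.user_agent for sig in known_good):
        return "good-bot"
    if any(sig in client.user_agent for sig in known_bad):
        return "bad-bot"
    return None  # undecided; fall through to the next layer

def fingerprint_layer(client: Client) -> Optional[str]:
    # Classify on client technology: cookie and JavaScript support.
    if client.supports_cookies is False or client.supports_js is False:
        return "bad-bot"  # simple bots typically fail these checks
    if client.supports_cookies and client.supports_js:
        return "human"
    return None  # still unknown; a challenge would resolve it

def classify(client: Client, layers: list[Layer]) -> str:
    # Apply each layer in order; the first definite verdict wins.
    for layer in layers:
        verdict = layer(client)
        if verdict is not None:
            return verdict
    return "suspicious"  # no layer decided; escalate, e.g. to a CAPTCHA

print(classify(Client("Mozilla/5.0", True, True),
               [signature_layer, fingerprint_layer]))  # prints: human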

In the client technology fingerprinting step, the classification engine looks for attributes of a suspicious client such as:

  • Does the client support JS? If so, what engine does it use?
  • Does the client support cookies?

Depending on what the classification algorithm detects, it can respond to a suspicious client with a challenge such as one of the following (a runnable sketch of the cookie challenge appears after this list):

  • Cookie challenge: We respond to an HTTP request with a cookie. Web browsers typically store and resend this cookie; most bots do not support cookies and therefore will not respond.
  • JS cookie challenge: After receiving an HTTP request, we respond with a cookie set through JavaScript, instructing the browser to perform an action. Web browsers typically execute the JavaScript instructions; most bots lack a JS engine and therefore will not respond.
  • CAPTCHA: We send a CAPTCHA challenge, expecting a human response.
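Here is a minimal, self-contained sketch of the cookie challenge’s store-and-resend mechanic, using only Python’s standard library. The cookie name and responses are placeholders; a production system would sign the token and combine this with the other classification layers.

from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

CHALLENGE_COOKIE = "visitor_token"  # placeholder name

class CookieChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        if CHALLENGE_COOKIE in cookies:
            # The client stored and resent the cookie: browser-like behavior.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Cookie challenge passed.\n")
        else:
            # First contact: set the challenge cookie and ask the client to
            # retry. A bot with no cookie support never gets past this point.
            self.send_response(302)
            self.send_header("Set-Cookie", CHALLENGE_COOKIE + "=issued; Path=/")
            self.send_header("Location", self.path)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CookieChallengeHandler).serve_forever()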

A hacker’s bot, by contrast, will ignore these challenges and simply aim to generate as many requests as possible.

How You Can Manage Bots

In addition to the automatic blocking of malicious bots, you can also manage suspicious bots on your site using either the user interface or the API. To classify and manage suspected bots, you can define specific application delivery rules for your sites, such as:

  • Redirect – You can redirect suspicious bots, such as scrapers during peak traffic, to a different URL that will handle those requests. In a Delivery Redirect rule created through the API, for example, add the filter parameter (see the sketch after this list):

-d filter="get-page-ip > X"

  • Block – You can apply an IncapRule to block requests from that client or IP.
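As a sketch of what the Redirect rule call might look like as a complete request, here is a Python version of the curl fragment above. The endpoint URL, credential fields, action name and redirect target are placeholders, not the real Incapsula API; consult the official API documentation for the actual endpoint and parameters.

import requests

API_URL = "https://api.example.com/sites/rules/add"  # placeholder endpoint
payload = {
    "api_id": "YOUR_API_ID",    # placeholder credential
    "api_key": "YOUR_API_KEY",  # placeholder credential
    "rule_action": "REDIRECT",  # placeholder field: redirect matching requests
    "redirect_url": "https://example.com/bot-landing",  # placeholder target
    "filter": "get-page-ip > X",  # the filter parameter from the example above
}

response = requests.post(API_URL, data=payload)
print(response.status_code, response.text)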

By applying these rules, you can reduce bandwidth and resource utilization on your site.

The Incapsula SOC

The client classification technology was written by our security research team and the SOC. The fully automated system identifies, classifies and blocks malicious bots with no manual intervention. Because of this, we have the lowest false positive rate in the industry.

We can update our entire global infrastructure daily, weekly or as needed. We have this flexibility because our technology was written from the ground up, allowing us to add features and upgrade at any time.

Incapsula client classification automatically poses multilevel challenges to malicious bots. More importantly, the security team updates the database daily with new bot signatures, keeping on top of new bot types and keeping your site safe.