Site scanning/probing is the initial phase of any attack on Web applications. During this phase, the attacker gathers information about the structure of the Web application (pages, parameters, etc.) and the supporting infrastructure (operating system, databases, etc.). Target Web sites are scanned for known vulnerabilities in infrastructure software (such as IIS) as well as unknown vulnerabilities in the custom code developed for the specific target application.
Site scanning/probing is the main technique attackers use to gather as much information as possible about a Web application and the supporting infrastructure. A standard site scan is composed of several steps. First, the attacker detects the operating system installed on the server. This can be done using automated tools such as nmap, by identifying the Web server type in the HTTP response (for example, IIS runs on Windows-based sites), or by guessing according to file extensions (Windows-based sites usually use “.htm” and “.jpg” files, whereas UNIX sites use “.html” and “.jpeg” files).
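The extension-based guess described above can be sketched in a few lines. This is only an illustration of the heuristic, not a real fingerprinting tool; the extension-to-OS mapping simply encodes the examples from the text.

```python
# Illustrative sketch: guess the server OS from file extensions observed
# in a site's URLs. The mapping is a heuristic assumption, not definitive.
from urllib.parse import urlparse
from collections import Counter

EXTENSION_HINTS = {
    ".htm": "Windows", ".jpg": "Windows",
    ".html": "UNIX", ".jpeg": "UNIX",
}

def guess_os(urls):
    """Return the OS hinted at by the majority of known extensions, or None."""
    votes = Counter()
    for url in urls:
        path = urlparse(url).path.lower()
        for ext, os_name in EXTENSION_HINTS.items():
            if path.endswith(ext):
                votes[os_name] += 1
    return votes.most_common(1)[0][0] if votes else None
```

In practice an attacker would combine this weak signal with the other methods (nmap, response headers) before drawing conclusions.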
Identifying which Web server runs on the target machine is very useful to the attacker. Knowing the specific type of Web server (and by extension its default configuration), an attacker may try to exploit known vulnerabilities, access sample files, and try default user accounts. There are three common ways of detecting the Web server: by using automated tools such as Nikto, by identifying the Web server type in the HTTP response (typically the Server header), or by guessing according to file suffixes (ASP pages normally indicate an IIS server whereas PHP pages normally indicate an Apache server).
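The header-based method amounts to reading the Server banner and mapping it to a server family. A minimal sketch of that mapping step (the helper name and the banner strings handled are our own illustration):

```python
# Sketch of the banner-grabbing method described above: map an HTTP
# Server header value to a coarse server family. In a real probe the
# header would be read from a live HTTP response.
def parse_server_banner(server_header):
    """Return 'IIS', 'Apache', or 'unknown' for a Server header value."""
    banner = server_header.lower()
    if "microsoft-iis" in banner:
        return "IIS"
    if "apache" in banner:
        return "Apache"
    return "unknown"
```

Note that administrators can change or suppress this header, which is why attackers cross-check it against file suffixes and tool-based fingerprinting.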
Additional important information about the infrastructure of the target server can be gathered using probing techniques such as path revealing, directory traversal, and remote execution (which may allow mapping the entire site and its source code). The attacker can complete the infrastructure knowledge base by identifying database server types, content infrastructure types (WebSphere, BroadVision, and Vignette), and so on.
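As an example of what a defender looks for, directory traversal probes typically embed "../" sequences (often URL-encoded) to escape the Web root. A minimal check, offered as an illustration rather than any vendor's actual detection logic:

```python
# Illustrative traversal check (an assumption, not a product's logic):
# flag request paths containing "..", even when URL-encoded
# once or twice, and normalize backslashes used on Windows servers.
from urllib.parse import unquote

def has_traversal(path):
    decoded = unquote(unquote(path))  # handle single and double encoding
    return ".." in decoded.replace("\\", "/").split("/")
```

Real attacks use many more encoding tricks (Unicode, overlong UTF-8), which is part of why signature-only products catch only known variants.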
After the attacker analyzes the infrastructure, the entire application can be scanned. Application scanning provides a map of the entire site, including all pages, the parameters used by dynamic pages, the cookies used by the site, and the transaction flow. This information leads the attacker to an understanding of the application’s authentication, authorization, logic, and transactional mechanisms. This body of information provides the basis of a strategy to attack the target site.
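The site map described above can be represented as a mapping from each page to the parameters it accepts. A hedged sketch, assuming the URLs have already been collected by a crawler:

```python
# Sketch of the application-scanning output: a site map of pages and the
# parameters each dynamic page accepts, built from observed URLs.
# (In a real scan the URL list would come from crawling the site.)
from urllib.parse import urlparse, parse_qs

def build_site_map(urls):
    """Return {page_path: set_of_parameter_names} for the observed URLs."""
    site_map = {}
    for url in urls:
        parsed = urlparse(url)
        params = site_map.setdefault(parsed.path, set())
        params.update(parse_qs(parsed.query, keep_blank_values=True).keys())
    return site_map
```

The same data structure is what a learning-based protection product builds on the defensive side, which is why it can recognize requests that fall outside the learned map.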
| SOLUTION | BLOCKS SITE PROBING? |
| --- | --- |
| Imperva SecureSphere | YES (known and unknown attacks) |
| Firewalls | Some/Partial (known attacks only) |
| Intrusion Detection Systems | Partial detection only, known attacks only |
| Intrusion Prevention Systems | Partial (known attacks only) |
During site probing the attacker performs several operations:
- Generating errors using nonexistent URLs
This type of activity can only be detected by products that learn which URLs are allowed by each specific application. Intrusion Detection and Prevention Systems that are not Web application oriented do not implement this capability.
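Detection by learning amounts to keeping the set of URLs the application actually serves and flagging anything outside it. A minimal sketch, with an assumed (hypothetical) learned profile:

```python
# Sketch of learned-URL detection (our illustration, not a vendor
# implementation): requests for URLs outside the learned set are
# flagged as probing. The profile contents are assumed for the example.
LEARNED_URLS = {"/index.html", "/login.asp", "/account.asp"}

def is_probe(url_path):
    """True if the requested path is not part of the learned application."""
    return url_path not in LEARNED_URLS
```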
- Providing long parameter values
In order to detect long parameter values the product must know the length constraints on each parameter. This requires learning the parameters’ constraints. Intrusion Detection and Prevention Systems that are not Web application oriented do not implement this capability.
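A learned length constraint reduces to a per-parameter limit checked on every request. A sketch with assumed limits (the parameter names and maximums are illustrative):

```python
# Sketch of a learned length-constraint check; the per-parameter limits
# below are assumed values for illustration only.
MAX_LENGTHS = {"user": 32, "password": 64}

def violates_length(param, value):
    """True if a learned limit exists for this parameter and is exceeded."""
    limit = MAX_LENGTHS.get(param)
    return limit is not None and len(value) > limit
```

Oversized values are a classic precursor to buffer-overflow and injection attempts, which is why probing tools send them deliberately.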
- Accessing unauthorized parts of the application
In order to detect unauthorized access (e.g. to /iisadmin/ and /iissamples/) the product must know which parts of the application are authorized and which are not. Only products that include learning capabilities can gain that knowledge.
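For the specific administrative paths named above, the check can be sketched as a prefix match against a deny list (the list contents here are the examples from the text, not a complete policy):

```python
# Illustrative check for the administrative paths mentioned above
# (/iisadmin/, /iissamples/); a real product would learn the authorized
# portion of the application rather than rely on a static list.
DENIED_PREFIXES = ("/iisadmin/", "/iissamples/")

def is_unauthorized(path):
    """True if the request targets a known administrative/sample area."""
    return path.lower().startswith(DENIED_PREFIXES)
```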
- Adding and removing parameters
To detect this behavior the product must understand which parameters are used with each specific URL and which are required. Intrusion Detection and Prevention Systems that are not Web application oriented do not implement this capability.
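The per-URL parameter check above can be sketched as a comparison between a request's parameter names and a learned profile of required and optional parameters. The profile contents are assumed for illustration:

```python
# Sketch comparing a request's parameters with a learned per-URL
# parameter profile; the profile below is assumed for the example.
PROFILE = {
    "/login.asp": {"required": {"user", "password"}, "optional": {"remember"}},
}

def parameter_anomalies(url, params):
    """Return a set of anomaly labels for the given request parameters."""
    prof = PROFILE.get(url)
    if prof is None:
        return {"unknown_url"}
    allowed = prof["required"] | prof["optional"]
    issues = set()
    if prof["required"] - params:      # a required parameter was removed
        issues.add("missing_required")
    if params - allowed:               # an unexpected parameter was added
        issues.add("unexpected_parameter")
    return issues
```

Adding and removing parameters is how probing tools hunt for hidden functionality and error responses, so either anomaly is a strong probing signal.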