Configuring Your Off-Premises Security Service: 6 Pitfalls to Avoid

Off-premises (off-prem) security services are a popular choice for many website owners today. Their features include ease of implementation, a comfortable price point and a managed services model—all of which are preferable in our SaaS era.

Making the switch isn’t always seamless, however. A number of misconfigurations could easily erase the benefits these solutions provide. Here are six of the most common pitfalls to avoid while getting the most from your off-prem security service.

1. Partial DNS changes

Altering your DNS settings is a common way to onboard an off-prem service; it ensures that all HTTP(S) traffic is routed through your security provider. The process typically involves changing the following DNS records:

  • The A record, which translates a root domain to an IP address
  • CNAME records, which specify a domain name as an alias of another domain

A common issue is that only one of these records gets modified during onboarding, which can have negative consequences for your application.

For example, not changing the A record lets anyone identify your origin IP simply by resolving your root domain (e.g., mydomain.com). With it, perpetrators can circumvent your off-prem security and launch an attack directly against your origin server.
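As a quick sanity check after onboarding, you can verify that your root domain and www subdomain no longer resolve directly to your origin server. This is a minimal sketch, assuming a placeholder origin address and placeholder domain names; substitute your own values.

```python
import socket

ORIGIN_IP = "203.0.113.10"  # placeholder origin address (documentation range)

def resolves_to_origin(hostname: str, origin_ip: str) -> bool:
    """Return True if the hostname still resolves straight to the origin."""
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 80)}
    except socket.gaierror:
        return False  # hostname does not resolve at all
    return origin_ip in addresses

for host in ("mydomain.com", "www.mydomain.com"):
    if resolves_to_origin(host, ORIGIN_IP):
        print(f"WARNING: {host} still resolves to the origin ({ORIGIN_IP})")
    else:
        print(f"OK: {host} resolves elsewhere (ideally, to your security provider)")
```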

[Figure: CDN with partial DNS changes]

Furthermore, a partial DNS change can lead to significantly uneven site performance or prevent legitimate traffic from reaching your site altogether.

2. Disregarding parts of your network

When onboarding, it’s important to consider all aspects of your network, including those outside of your security perimeter.

The most common mistake is disregarding your DNS services while prepping your perimeter against DDoS attacks. Even with your entire perimeter secured, a DNS-targeted attack could still prevent users from reaching your domain.

The flipside is assets inside your security perimeter that aren’t optimized for off-prem security. For example, both your origin IP address and your subdomain names could be used to circumvent your service. If discovered, either can be used to launch direct-to-IP attacks.

It’s considered best practice to change your IP address after onboarding an off-prem security service, as it’s likely to be registered somewhere on the web. Additionally, generic subdomain names for your peripheral services (e.g., ftp.mydomain.com for FTP access) should be avoided, as they’re easily guessable.
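A similar check can flag easily guessed subdomains that still point at your origin. The sketch below is illustrative only; the subdomain list and origin address are assumptions, not an exhaustive audit.

```python
import socket

ORIGIN_IP = "203.0.113.10"                       # placeholder origin address
GENERIC_LABELS = ["ftp", "mail", "cpanel", "webmail", "staging", "origin"]

def lookup(hostname: str) -> set[str]:
    """Resolve a hostname to its set of IP addresses (empty if unresolvable)."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(hostname, 80)}
    except socket.gaierror:
        return set()

for label in GENERIC_LABELS:
    host = f"{label}.mydomain.com"
    if ORIGIN_IP in lookup(host):
        print(f"Exposed: {host} points directly at the origin ({ORIGIN_IP})")
```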

Learn more about common ways bad actors exploit misconfigurations to circumvent your off-prem security using direct-to-IP attacks.

3. Not testing your solution in Alert Only mode

It’s common practice for security solutions—including web application firewalls—to inspect incoming requests and block those containing specific strings used by hackers. However, your web application or website might contain URLs that unintentionally trigger blocking rules simply because they also contain the same strings.

For example:

The URL mydomain.com/blogpost.asp?name=rob%27s+blog contains the string %27, which decodes to an apostrophe (i.e., “rob’s blog”). Because an apostrophe is a common first character in SQL injection payloads, a security rule may mistakenly block this URL.
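Here is a minimal sketch of why such a request trips a naive rule. The pattern check is illustrative only, not an actual WAF signature.

```python
from urllib.parse import urlparse, parse_qs

url = "http://mydomain.com/blogpost.asp?name=rob%27s+blog"
params = parse_qs(urlparse(url).query)   # parse_qs decodes %27 into an apostrophe
print(params)                            # {'name': ["rob's blog"]}

# Illustrative only: treat any apostrophe in a query parameter as a
# possible SQL injection opener.
flagged = any("'" in value for values in params.values() for value in values)
print("Blocked by naive rule:", flagged)  # True -- a false positive on benign input
```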

To avoid triggering false positives and blocking visitors, we advise you to onboard and use your security service in ‘Alert Only’ mode for several days. This will help you learn how it responds to your website. You’ll then have an opportunity to review your event log, see what resources triggered alerts and whitelist them, if needed.
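As a rough illustration of that review step, the sketch below assumes your service can export alert-only events to a CSV file with rule, url and action columns (the file name and column names are assumptions, not a specific vendor format), then surfaces the resources that trigger alerts most often.

```python
import csv
from collections import Counter

alerts = Counter()
with open("alert_only_events.csv", newline="") as f:   # assumed export file
    for event in csv.DictReader(f):
        if event.get("action") == "alert":
            alerts[(event["rule"], event["url"])] += 1

# Resources that repeatedly trigger the same rule are candidates for review
# (and for whitelisting, if the traffic turns out to be legitimate).
for (rule, url), count in alerts.most_common(10):
    print(f"{count:5d}  rule={rule}  url={url}")
```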

In addition, it’s important to notify your security provider of any APIs you’re running, as unique policies are typically needed to ensure their continued functionality.

4. Not testing custom security rules

Many organizations use custom security rules, which help tailor policies to mitigate specific threats and attack scenarios.

Speaking from bitter experience, we’ve found that such rules are rarely subjected to sufficient testing and, as a result, are likely to cause false positives.

Similar to testing your security solution, our recommendation is to run all custom rules in ‘Alert Only’ mode to see if they perform as expected. Any rules that generate false positives can then be adjusted, or the resources they mistakenly flag whitelisted.

5. Misconfiguring whitelist and blacklist rules

Whitelist and blacklist rules are effective ways to create broad policies within your security service. For example, whitelisting your office IPs gives them full access to your website, while blacklisting an IP used in repeated attacks prevents it from reaching your site again.

While somewhat similar to the custom security rules outlined above, these policies deserve a special mention because they are applied on a much larger scale. If not configured properly, they can interfere with your site’s functionality, blocking legitimate users and bots while allowing in malicious actors.

Therefore, it’s best practice to adhere to the following guidelines when setting up whitelists and blacklists:

Whitelisting API traffic – APIs need to be whitelisted to ensure the continued functioning of all software and applications on your site.

Avoiding blanket blacklisting – When setting up blacklist rules, remember that they can inadvertently encompass legitimate traffic trying to reach your site. For example, blacklisting all traffic from the United States will also block Googlebot crawlers, which operate from U.S. IP addresses.

Avoiding whitelists of entire IP ranges – Blanket whitelisting can give malicious actors a platform from which to hit your site. For example, Amazon Web Services hosts many legitimate online applications; if its IP ranges are whitelisted, any one of them can be used as a base to target your site.
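To illustrate how these scopes interact, here is a minimal sketch of the evaluation logic described above. The IP ranges, country lookup and bot list are placeholders; your provider evaluates these policies for you, and this only shows why narrow whitelists and targeted exemptions are safer than blanket rules.

```python
from ipaddress import ip_address, ip_network

OFFICE_WHITELIST = [ip_network("198.51.100.0/29")]   # narrow, specific office range
COUNTRY_BLACKLIST = {"US"}                           # blanket country block (note the Googlebot problem)
KNOWN_GOOD_BOTS = {"66.249.66.1"}                    # placeholder for a verified crawler IP

def decide(client_ip: str, client_country: str) -> str:
    """Return the action a simple whitelist/blacklist policy would take."""
    addr = ip_address(client_ip)
    if any(addr in net for net in OFFICE_WHITELIST):
        return "allow (office whitelist)"
    if client_ip in KNOWN_GOOD_BOTS:
        return "allow (known good bot exempted)"
    if client_country in COUNTRY_BLACKLIST:
        return "block (country blacklist)"
    return "inspect normally"

print(decide("198.51.100.3", "US"))   # allow (office whitelist)
print(decide("66.249.66.1", "US"))    # allow (known good bot exempted)
print(decide("203.0.113.50", "US"))   # block (country blacklist)
```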

6. Setting incorrect threshold configurations

In the context of DDoS protection, a threshold configuration determines how much traffic your web application can accept before mitigation kicks in. It should be based on the highest number of requests your site handles at a given time.

While the process sounds simple enough, a misconfigured threshold can either fail to recognize that a DDoS attack is taking place or incorrectly signal that one is underway.

Misconfiguration is typically a result of incorrectly metering traffic flows and then setting the bar either too high or too low.

Your threshold setting should be based on these two questions:

  • What is the absolute maximum amount of traffic I can expect?
  • What is the absolute maximum amount of traffic I can handle?

To answer the first question, review your site’s past traffic flow to determine the peak requests per second (RPS) it has served (outside of an attack scenario). Then, add 20% or so to that figure to account for external factors that could result in additional site visits, such as holidays or marketing campaigns.

For the second question, determine how many RPS each part of your infrastructure (e.g., DNS, WAF, server) can handle. Your threshold should never be set higher than the capacity of the weakest part of your network.

For example, if your WAF can handle 100,000 RPS and your server can accept 150,000, don’t set your threshold higher than 100,000 so as not to overload your WAF.
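Putting the two questions together, here is a minimal sketch of the threshold arithmetic, using illustrative traffic figures and component capacities rather than real measurements.

```python
peak_rps_observed = 80_000                     # highest non-attack RPS seen historically
expected_max = int(peak_rps_observed * 1.2)    # add ~20% for holidays, campaigns, etc.

component_capacity_rps = {
    "dns": 200_000,
    "waf": 100_000,
    "origin_server": 150_000,
}
weakest_link = min(component_capacity_rps.values())

# The threshold follows expected peak traffic, but never exceeds the weakest component.
threshold = min(expected_max, weakest_link)
print(f"Expected max: {expected_max} RPS, weakest link: {weakest_link} RPS")
print(f"Suggested DDoS mitigation threshold: {threshold} RPS")
```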

Getting the most from your off-prem security service

Off-prem solutions are a convenient, efficient and cost-effective option for website protection. However, just because these solutions are managed doesn’t mean they’re hands-off.

When deploying one, take into account that you most likely know more about your website than anyone else. To get the most from an off-prem solution, work with your service provider by providing information on custom rules, APIs and anything else that may impact functionality.

Imperva Incapsula representatives can answer any questions you may have about how off-prem security works, about migrating to the cloud or about configuring your service. Contact us for additional information.