Today’s software challenges require companies to offer a wider feature set while shortening their products’ time to market. Small startups and large enterprises alike invest more time and resources in formalizing their deployment cycles to deliver more in less time. There is continuous pressure from the business to deliver more content, which will hopefully yield more profit, often at the cost of quality and security. Business owners who are measured by ROI will almost always prefer to invest their resources in more features rather than in enhanced security.
As today’s IT environments become more dynamic and continue to evolve, they face ever-increasing security challenges. In a typical environment, new applications are deployed regularly and existing applications change around the clock, often without the security teams being notified. In-house applications and third-party services form a tangle of dependencies across on-premises, private, and public clouds that can easily become the CISO’s worst nightmare. Even if you manage to get all the pieces working perfectly in sync, how can you ensure that future architecture changes, software updates, vulnerability patches, or security policy updates will not break your production environment at the most critical time and lead to a catastrophe?
Ensuring the stability and reliability of your production environment requires a holistic approach, one that covers the entire gamut of architecture changes, software code changes, and security policy and provisioning changes. Each piece of new code or change in configuration must pass through a number of intermediate testing environments before hitting the precious production server. Some companies enforce this control through development, system integration testing (SIT), user acceptance testing (UAT), and staging (pre-production) environments, where successful deployment in one environment is a prerequisite for deployment to the next.
But how can you do this at scale, ensuring that all your web assets are always protected?
The secret is the DevOps approach. A truly DevOps environment means that your full deployment environment can be automated from scratch, spinning up and tearing down the entire deployment on the fly, top to bottom, as a no-touch install. Whether you deploy a new server, deploy a new application, or move an existing service from one server to another, the security policies and provisioning layer linked to that service must be taken into account as well.
Choosing a perimeter defense approach built on a “DevOps-friendly” security product assures that whenever you deploy a new service, change security policies, or apply the latest security vulnerability patch, you can create and provision its security layer automatically via RESTful APIs. Applying a DevOps approach to protecting your web assets not only saves time and resources, but also assures that your organization can scale without compromising on security. A company that launches new applications on a regular basis should be able to automate not only the launch process, but also the security layer that protects those applications. Protecting new application data from cyber-attacks, dynamically learning your applications’ “normal” behavior, and correlating it with threat intelligence crowd-sourced from around the world should happen automatically and seamlessly as soon as you push the launch button and fire your applications into production.
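As an illustration, a deployment pipeline step might provision an application’s security policy through a management server’s REST API right after the application itself is deployed. The sketch below is hypothetical: the endpoint path, payload fields, and `build_policy_request` helper are illustrative only, not a real product API; consult your security product’s API reference for the actual calls.

```python
import json
from urllib.parse import quote

# Hypothetical management-server API root; replace with your security
# product's actual endpoint (these paths are illustrative).
API_ROOT = "https://waf-mgmt.example.com/api/v1"

def build_policy_request(app_name, policy):
    """Build the URL and JSON body for creating a web-application
    security policy. Returns (url, body) so the caller can send them
    with any HTTP client (requests, or curl in a CI step)."""
    url = f"{API_ROOT}/applications/{quote(app_name)}/policies"
    body = json.dumps(policy)
    return url, body

# A pipeline would run this step immediately after deploying the app,
# so it is never exposed to production traffic without its security layer.
policy = {
    "name": "baseline-web-policy",
    "signatures": ["sql-injection", "xss"],  # pattern-based signatures
    "dynamicProfiling": True,                # learn the app's "normal" behavior
    "virtualPatching": True,                 # apply virtual patches for known vulns
}
url, body = build_policy_request("billing-app", policy)
```

In practice the same script (or CI job) that tears the application down would issue the matching DELETE call, keeping the security layer in lockstep with the deployment itself.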
The Imperva customer base includes many companies that protect thousands of applications in the cloud and on-premises and manage their entire perimeter defense using DevOps tools. Whether you want to add a new pattern-based signature policy, set up dynamic profile learning, apply virtual patching, or provision your security policy automatically, DevOps tooling lets you achieve these goals.