Why the Search for Best-Of-Breed Tooling is Causing Issues for Security Teams | Imperva

Why the Search for Best-Of-Breed Tooling is Causing Issues for Security Teams

The growing need to consolidate vendor portfolios

The adoption of best-of-breed security solutions has led to unforeseen problems for SOCs. Onboarding a new solution increases complexity; it requires configuration, integration with existing tools, policy fine-tuning, and the creation of meaningful alerts that can be acted upon quickly. Since most tools can only see one part of an attack, it’s difficult to create alerts that tell the whole story. Analysts are inundated with alerts from each product, most of which require additional research to rule out false positives. Aggregating everything to a SIEM provides a central repository but adds manual overhead. Where does this leave organizations looking to maximize their solutions while keeping vendor count and expenses low?

A recent IBM study revealed that the average enterprise deploys 45 cybersecurity-related tools on its networks. The widespread use of too many tools may contribute to an inability to detect and defend against active attacks: enterprises that deploy over 50 tools ranked themselves 8% lower in their ability to detect threats and 7% lower in their defensive capabilities than organizations employing fewer toolsets. We’ll take a closer look at the disadvantages of deploying too many security tools and offer recommendations to address them.

A limited view of an attack

Recent research shows that more tooling is not necessarily better. A study conducted by Ponemon discovered that only 22% of security tools are vital to an organization’s primary security objectives, and that approximately half of a company’s available security tools are simply clutter. Another survey found that almost 80% of senior IT and IT security leaders lack confidence in their organization’s ability to stop data breaches using its current security tooling.

Why is there so little confidence in all of these tools?

One challenge is the lack of coordination between products from different vendors. Specialized solutions have limited network visibility; for a tool to be both effective and accurate, it needs full visibility over the network it protects. In most organizations, it’s common to see security tools like XDR, UEBA, DLP, and WAF. Each serves a specific purpose, and while each does its job exceptionally well, they lack integration with the other solutions deployed in the same environment. A WAF may see incoming network traffic, but it lacks insight into what’s happening on the endpoint. As more companies move to the cloud, on-premises DLP tools have no insight into data stored in SaaS applications.

In the event of an attack, multiple alerts could be generated from different tools with little indication that these alerts are connected. While many tools offer APIs to further enrich their solutions, these APIs are not built on common standards, often requiring custom code to integrate products fully. Alerts from each of these tools only tell part of the story; analysts are left to piece together information from multiple sources to understand an attack.
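To illustrate the kind of custom glue code this integration work requires, the sketch below normalizes alerts from two hypothetical vendor APIs into one common shape. The payload fields and the normalized schema are invented for the example; they do not reflect any real product’s API.

```python
from dataclasses import dataclass

# Hypothetical alert payloads: each vendor's API returns its own schema,
# so integration code has to map them into one common shape by hand.
waf_alert = {"rule": "SQL Injection", "src_ip": "203.0.113.7"}
edr_alert = {"detection": {"name": "Credential Dumping"}, "host": "web-01"}

@dataclass
class NormalizedAlert:
    source: str    # which tool produced the alert
    category: str  # normalized detection name
    entity: str    # IP or hostname the alert concerns

def from_waf(raw):
    # Vendor A keys: "rule", "src_ip"
    return NormalizedAlert("waf", raw["rule"], raw["src_ip"])

def from_edr(raw):
    # Vendor B nests the detection name and keys the host differently
    return NormalizedAlert("edr", raw["detection"]["name"], raw["host"])

alerts = [from_waf(waf_alert), from_edr(edr_alert)]
```

Every additional vendor means another one-off mapping function like these, which is exactly the maintenance burden the paragraph above describes.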

Even after tools have been calibrated, analysts must manage a staggering number of alerts. A survey of security professionals revealed that 44% spend more than four hours daily on alerts. As stated earlier, many of these alerts don’t offer full insight into the attack; they provide just enough information to start an investigation but lack the detail needed to act on that alert alone. Triaging thousands of vague alerts wastes time and resources, and senior analysts are pulled in to assist junior analysts in investigating incidents.

It’s no surprise that analysts are suffering from alert fatigue. With SOCs receiving thousands of alerts daily, how can they be prioritized, investigated, and mitigated promptly? It’s challenging to know which incidents to focus on, especially if additional research needs to be completed to understand the entire narrative. Lower-priority alerts are glossed over or outright ignored in favor of more critical ones. According to Forrester, 67% of IT teams have admitted to ignoring lower-priority alerts. Too many alerts and too few resources leave companies at risk.

SIEM management slows SOCs down

SIEMs have been historically revolutionary for SOCs. They aggregate logs from various sources within an organization and can display this data in easy-to-digest dashboards. Instead of logging into each product’s console to triage incidents, data can be imported into the SIEM, making it a central repository. Alert rules can then be created from this data, which gives analysts a one-stop shop when it comes to triaging attacks.

While SIEM tools alleviate the need to manage alerts in multiple interfaces, data management is a costly, complex process. APIs and syslog connectors need to be configured and maintained. Alert rules need to be created and maintained to optimize SIEM usage. Companies risk blind spots if applications are not onboarded to the SIEM. Storing petabytes of data on-premises or in the cloud is pricey and requires organizations to adhere to specific compliance standards. A study conducted by the Ponemon Institute revealed that 75% of SIEM costs go towards installation, maintenance, and staffing.
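As a rough illustration of the connector and rule maintenance involved, the sketch below parses a syslog-style line into key/value fields and applies a hand-written alert rule. The log format, field names, and rule are assumptions for the example, not any specific SIEM’s syntax.

```python
import re

# Illustrative firewall log line in a key=value syslog style (invented format).
LINE = '<134>Nov 14 22:13:20 fw01 action=deny src=203.0.113.7 dport=3389'

# One regex per log format is typical connector glue: extract key=value pairs.
KV = re.compile(r'(\w+)=(\S+)')

def parse(line):
    """Turn a key=value log line into a dict of fields."""
    return dict(KV.findall(line))

def rdp_deny_rule(event):
    """A hand-maintained alert rule: flag denied inbound RDP attempts."""
    return event.get("action") == "deny" and event.get("dport") == "3389"

event = parse(LINE)
alert = rdp_deny_rule(event)
```

Every new log source needs its own parser, and every new threat needs another rule like `rdp_deny_rule`, which is where much of the maintenance cost lands.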

The hidden costs of best-of-breed tooling

Purchasing and deploying another tool is an arduous process. Analysts are pulled away from day-to-day responsibilities to do comparative vendor analysis, create use cases, conduct POVs, and deploy the new solution. Other teams, such as IT, Legal, and Procurement, also need to be brought in at some point. Rolling out a new tool can take several months to a year depending on the size of the company, the complexity of the tool, and how readily the organization can adopt another solution. Often, a POV does not account for all the custom configurations an organization has, which leads to unforeseen issues when rolling out in production. Analysts have to work closely with both SMEs from their organization and SEs from the vendor to set everything up properly. Deploying a new tool to production is rarely easy; it could take months, if not years, for an organization to recognize that the pros of a solution outweigh the cons.

Disparate tools also lead to a decrease in workforce productivity and satisfaction. A multivendor environment introduces additional complications into an already complex ecosystem. Each solution requires separate and ongoing training, upgrades, and policy fine-tuning to minimize false positives. Decisions need to be made about where actionable data will go, what integrations are necessary, and how everything should be architected. If alerts are being fed to a SOAR or SIEM, playbooks and correlation rules need to be created. Outside of tool configuration and training, analysts are also required to troubleshoot any issues brought on by the new solution. Another tool often means another agent on the endpoint, and no matter how lightweight a company claims its agent is, endpoint performance suffers under too many agents. Even with an agentless deployment, new tooling can impact network performance. When an end user is impacted by poor performance, uncovering the root cause can feel like finding a needle in a haystack.

Deploying many best-of-breed solutions in one environment comes with another unexpected cost: data storage and analytics. Correlating data from tools that don’t integrate well together is costly. Oftentimes, data needs to be transferred to a central location for it to be processed. Using a big data tool to analyze everything provides the SOC with more meaningful information but can lead to hidden cloud storage and transfer costs. Most cloud vendors charge between five and twenty cents per GB transferred from the cloud to an on-premises location. Companies regularly moving terabytes of data out of the cloud can expect steep data egress fees.
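Using the five-to-twenty-cents-per-GB range above, a back-of-the-envelope estimate shows how quickly egress fees add up; the 50 TB monthly transfer volume is an illustrative assumption, not a figure from the studies cited.

```python
# Rough egress-cost estimate using the $0.05-$0.20 per GB range cited above.
def egress_cost(gb_transferred, rate_per_gb):
    return gb_transferred * rate_per_gb

monthly_gb = 50 * 1024  # assumed example: 50 TB moved out of the cloud per month

low = egress_cost(monthly_gb, 0.05)    # 2560.0  -> $2,560/month at the low end
high = egress_cost(monthly_gb, 0.20)   # 10240.0 -> $10,240/month at the high end
```

At these rates, a team moving tens of terabytes out of the cloud each month for centralized analysis is paying thousands of dollars in transfer fees alone, before storage or compute.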

Reevaluating security tooling strategies

If more solutions aren’t cutting it, then how can organizations adequately and efficiently protect themselves? Reevaluating the security strategy is a good starting point. This includes determining what the priorities are, understanding what needs to be protected, knowing where data resides, and evaluating the strengths and weaknesses of the current security program. Organizations should strive to create a strategy that is cost-effective, straightforward, and gives leadership confidence in the SOC’s ability to respond to an attack.

After reevaluating the organization’s security strategy, focus on consolidating the vendor portfolio. Cutting the number of vendors reduces complexity, lowers costs, and maximizes the tooling that most closely aligns with the updated security strategy. Retain vendors whose solutions provide protection across the environment, integrate cohesively, provide a rich source of threat intelligence, utilize machine learning to automate manual processes, and deliver meaningful, actionable alerts to the SOC.

Focusing on vendors whose solutions utilize automation and machine learning increases efficiency while minimizing cost. Solutions with machine learning capabilities relieve analysts of manual tasks, allowing them to focus on higher-level work. Solutions with high data-processing capabilities can sort through thousands of events to determine the overall narrative. Instead of moving data from the cloud to on-premises for analysis, a tool that runs analytics and machine learning where the data is already being processed cuts down on cloud egress costs. This cost-effective strategy also provides more meaningful information to a SIEM: actionable insights are delivered to the SIEM without requiring correlation rules to be run, saving time and processing power.

Tools that decrease the time to detect new attack vectors are highly valuable. Products that utilize machine learning to discover new threats and vulnerabilities and can respond accordingly are instrumental in staying one step ahead of bad actors. Machine learning helps take the burden off analysts to create and test new correlation rules when new threats are detected, allowing them to take a proactive approach to security.

Prioritizing tools that integrate well together offers benefits organizations can realize immediately. These tools provide a richer narrative for the SOC, give analysts a unified management system, and minimize alerts. Cohesion among tools can also provide a level of alert prioritization, decreasing the time an analyst spends deciding which alert to focus on.

Organizations should also strive to consolidate at different layers. Consolidation means fewer tools to purchase, deploy, manage, and troubleshoot. Investing in tools that can protect different attack surfaces, such as endpoints, web applications, and networks, also minimizes costs, reduces complexity, and provides the SOC with actionable information. Alerts from these tools are prioritized accordingly and give the SOC the entire attack narrative. Analysts get a one-stop shop for alerts; they understand which alerts are critical and don’t need to do supplementary research to understand what’s happening.

Imperva Attack Analytics detects application attacks by applying machine learning and domain expertise across the application security stack to reveal patterns in the noise. Artificial intelligence analyzes millions of events to reveal commonalities and patterns invisible to the naked eye and dramatically reduces alert fatigue. This solution sorts and groups security events into clusters of narratives, assigning each a severity level and supplying additional reputation intelligence so your teams can investigate incidents quickly. Analysts using Attack Analytics get easily understandable, prioritized incident reports they can act on without further research. Utilizing attack data from across the globe, Attack Analytics identifies the latest attack trends and updates incidents accordingly. This allows analysts to pivot away from SIEM alert management and focus on responding to threats. Prioritizing consolidation of different layers, ease of use, scalability, and centralized configuration, Imperva includes Attack Analytics in each of our solutions, including DDoS, API Security, Cloud WAF, Runtime Protection, and Advanced Bot Protection. This means an easy-to-deploy solution that provides meaningful narratives across each Imperva tool. Learn more.
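As a rough sketch of the clustering idea described above (not Imperva’s actual implementation), the example below groups raw security events into per-source incident narratives and assigns each a severity based on cluster size. The events and the severity threshold are invented for illustration.

```python
from collections import defaultdict

# Invented sample events: three probes from one source, one stray failure.
events = [
    {"src": "198.51.100.9", "signal": "path traversal"},
    {"src": "198.51.100.9", "signal": "SQL injection"},
    {"src": "198.51.100.9", "signal": "scanner user-agent"},
    {"src": "203.0.113.5",  "signal": "login failure"},
]

def cluster(events):
    """Group raw events by attacking source into incident 'narratives'."""
    narratives = defaultdict(list)
    for e in events:
        narratives[e["src"]].append(e["signal"])
    return dict(narratives)

def severity(signals):
    """Toy severity: three or more correlated signals is critical (assumed threshold)."""
    return "critical" if len(signals) >= 3 else "low"

incidents = {src: {"signals": sigs, "severity": severity(sigs)}
             for src, sigs in cluster(events).items()}
# Four raw alerts collapse into two prioritized incidents, one per source.
```

Even this toy version shows the payoff the article describes: analysts triage two ranked incidents instead of four disconnected alerts.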