A Quick-Start Introduction to Database Security: An Operational Approach

The recent SingHealth data breach exposed around 1.5 million patients’ records. In its aftermath, the Cyber Security Agency of Singapore published a set of security measures aimed at improving the protection of Personally Identifiable Information (PII).

The recommended security measures covered several facets of IT security domains:

  • Data governance and data management lifecycle
  • Identity access management around user rights review
  • Tightened controls around least-privilege access and execution rights
  • Up-to-date software patching practices
  • Encryption of key sensitive data
  • Monitoring of all data access and timely detection of “suspicious database queries”

Adherence to these best practices often requires a carefully thought-out, long-term security strategy and operational risk management plan. This involves human process orchestration, multiple security controls and, most importantly, support from the organization’s senior management.

There is no silver-bullet process or single solution that addresses all of the above points. Adopting these best practices usually involves several detective, corrective, deterrent and preventive controls, such as two-factor authentication, strict enforcement of access control lists, regular user rights reviews, DLP implementation, data encryption, privileged access management, centralized log monitoring and regular data access reviews.

In this article, we’ll focus on database security, which can be daunting for some security teams, mainly due to a lack of technical familiarity and to concerns about database system performance. This topic is relevant to any industry vertical facing operational risks around database servers.

(Also, the emerging discipline of Infonomics gives business and IT leaders a way to understand and value their information, and to create security policies that account for the relative risk around breaches while still making fiscal sense. Download the Gartner report.)

We’ll briefly survey the challenges and the possible approaches we could adopt to better deal with the operational risks surrounding data security. Some of the opinions are collections of challenges gathered from clients I have worked with across Southeast Asia (ASEAN) over the last few years.

Data is often stored in data warehouses, database servers (RDBMS) and file servers. For simplicity’s sake, we refer to these storage mediums as “databases” in this article.

Haven’t security folks been implementing these best-practices and guidelines all along?

Over the years, in order to implement technical controls that satisfy the above guidelines and more, organizations have spent significant resources beefing up their next-gen firewalls/IPS, unified threat management systems, identity access management software, application access management software, data leakage prevention (DLP) technologies, SIEM, and so on.

If we take a step back and look at these security investments, the end-game objective is crystal clear: protect the organization’s “crown jewel”, its critical data. Data is deemed the crown jewel because it is a critical asset the business requires to function properly and remain profitable.

Despite this straightforward objective, the irony is that databases are often neglected and inadequately protected, and data access activities are not thoroughly reviewed. This is not due to a lack of due diligence, nor to negligence or laziness. Rather, the underlying reasons are usually manpower and skill-set constraints.

Below are some of the common hurdles I have heard from security teams across Southeast Asia while working with them on database security:

  • How do we determine which SQL query is considered “suspicious”? There are a few thousand lines of SQL queries in our weekly CSV log report and they all look the same to me.
  • We have a user who directly accessed data on a database server, which shouldn’t happen since his user ID doesn’t even exist on that server. I spent 3 days going in circles with the investigation before the DBA told me a database link is established between the primary node and that secondary node, which explained the behavior we saw. We wasted 3 days and thought to ourselves, “If only my Oracle DB knowledge were better, this would have been faster.”
  • My database administrators do not allow us to enable native auditing on all our database servers due to performance concerns over CPU, disk I/O and storage capacity. They only allow us to natively log all login/logout attempts and activities with SQL exceptions, so we do not have 100% visibility into the SQL transactions executed on our database servers. We also have no answer to the question: “What if a suspicious query runs and does not trigger any SQL exception? How do we manage this inherent risk?”
  • We enabled native auditing and send all database logs to a SIEM, which parses them into readable metadata. However, we still run into operational issues when reviewing some SQL statements. For example, a privileged user executed “select * from psx64;”, which looks suspicious. Is psx64 a table, a view or a synonym? Is there sensitive application data inside this object? Our database/apps colleagues are not available today to run through this with me. We could not make a timely assessment of the query and gave up after some time. (See the sketch after this list for one quick way to answer the psx64 question.)
  • There are easily a few thousand lines of SQL queries we need to review in the log report file for a database server. We just cannot make sense of the queries at all. We can’t cope, and our team members are mentally burnt out from doing audit review for just a single database server.
  • There are so many database service IDs used by our different applications. We do not know what the normal data access behavior for each of the IDs is. We simply have no way to review for suspicious activities if the service IDs are abused maliciously.
  • We have 5000 employees and the IT security team has only 10 staff. We have difficulties learning, profiling and documenting the “normal behavior” for every single one of our employees. It is impossible to answer the question, “Can we be alerted if someone accesses more data today than he/she did over the last 12 months?”
  • We understand network security very well, but database security is a big, scary unknown to us. We do not dare implement risk controls on the database servers, fearing they may negatively impact operations. We chose to ignore the potential data risk simply because we cannot cope with the extra workload either.
  • We operate the following data stores: CIFS file servers, MS SQL, Oracle, MySQL, DB/2 on Windows, DB/2 on z/OS, DB/2 on AS/400, Cloudera, SAP HANA. Recently, some of our less-critical data was migrated to AWS RDS. Each of these data platforms speaks a different language, and we have operational difficulties maintaining even a simple security policy, say, “track all database configuration changes”. We ended up protecting only the platforms we are more familiar with.
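
The psx64 question in the list above is often answerable in seconds with a data dictionary query. Below is a minimal Python sketch, assuming the python-oracledb driver; the credentials and DSN are placeholders, and the object name is taken from the example above.

```python
# Sketch: answer "is psx64 a table, a view or a synonym?" by querying Oracle's
# data dictionary. Assumes the python-oracledb driver; credentials and DSN are
# placeholders.
import oracledb

def describe_object(conn, object_name):
    """Return (owner, object_type) rows for every object with this name."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT owner, object_type FROM all_objects "
            "WHERE object_name = UPPER(:name)",
            name=object_name,
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = oracledb.connect(user="auditor", password="...", dsn="db-host/ORCLPDB1")
    for owner, object_type in describe_object(conn, "psx64"):
        print(f"{owner}.PSX64 is a {object_type}")
    # If it turns out to be a SYNONYM, all_synonyms shows what it points at:
    #   SELECT table_owner, table_name FROM all_synonyms
    #   WHERE synonym_name = 'PSX64'
```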

What can we do to overcome these database security hurdles?

The bright side is that many organizations have matured, or are maturing, in their security monitoring and in knowing what they realistically want to achieve.

This section discusses some practical steps that organizations are adopting or have adopted. While there will always be residual data risks arising from compromised, careless or malicious users or from misconfiguration oversights, implementing the concepts below beats having no risk controls in place at all.

Step 1: Gain 100% visibility on data access

We cannot protect what we do not see. We need to capture 100% of database access activity for forensics, audit review and non-repudiation purposes.

Note that enabling the native auditing feature on most database systems degrades database performance.

To overcome this, consider adopting an independent database audit and protection solution that does not require native auditing to be enabled and still provides 100% data access visibility. More on this is discussed in Step 3 below.

Step 2: Prioritize your security monitoring focus

While database servers are our important crown jewels, bear in mind that they are just one of the many classes of asset in the organization’s asset inventory list which security teams need to review and protect during a typical business-as-usual day.

There is always a hard limit on how much a human analyst can process when reading raw audit data or metadata, which often does not present the data context behind a SQL query.

Therefore, we need to prioritize our risk management effort.

Regardless of asset type, security monitoring and audit review processes generally focus on answering three basic questions:

  1. Exactly WHO is accessing my assets?
  2. Is the access OK?
  3. How do I respond QUICKLY if it’s not OK?

To deal with these three basic questions, one of the CIOs I worked with shared an interesting analogy:

“Ants love to eat sweet stuff like cakes. In my organization, my data is the cake and there are many internal/external ants who’d love to eat my cake. I have set up perimeter defenses and other pesticide controls which kill off the malicious ants when they are spotted near the cake.

“However, every now and then, the ants mutate, changing color and structure, and my defenses have to play catch-up to identify them when that happens. I thought to myself, why don’t I put a cover over my cake? This cake cover should understand what my cake looks like and how it is usually consumed.

“I should not have to worry about the mutation of these ants anymore. Now I just have to monitor the surface of this cake cover to spot ants breaching it, or any previously unknown ants crawling out from under the cover.”

The question then becomes, “How do we build this intelligent cake cover that is data-aware and application-aware?”

I find the following operational approaches useful when dealing with this question. It might not be the most comprehensive set of approaches, but it serves as a good starting point.

Before we even begin talking about solutions and tools, the below approaches should be inculcated in the organization’s security review processes. Without these fundamental beliefs, even the best tools on this planet will not be able to help us in any way.

  1. Focus on top critical database assets based on the business impact analysis (BIA) – No one can cover all the risks affecting every single asset, not even the biggest organizations with abundant resources and tools. Focus your already-limited time and effort on the assets that would have the biggest negative impact on the business if compromised. Impact can be categorized as operational, financial, reputational, etc. If you have not done a BIA, I highly encourage you to do so. Without one, you will never be able to determine the right assets to focus on.
  2. Focus on access to sensitive data – Classify and maintain a data dictionary of all the sensitive data in your databases. This habit also needs to be inculcated in the procedure runbook for every new application’s development. Given the short amount of daily working time, a key part of the data access review should focus on “Who is touching my sensitive data and what are they doing with it?” It would not make sense to spend excessive time reviewing access to data with relatively low business impact. With this in mind, craft governance policies and controls that revolve around sensitive data access control and review. This makes the whole security audit process much more manageable and focused.
  3. Focus on what your privileged users are doing – Privileged users have the most powerful rights to the data. Focus on reviewing what they are doing. Are they using unauthorized tools to query the data? Are they touching any of our sensitive data directly without going through the application or jump host? If yes, what did they do to the data?
  4. Avoid generating only log reports with 60k lines of raw logs; rely on chart-based reports as the first approach – No human analyst can crunch through a report with 60k lines of raw logs in time to perform a proper database audit log review. Always generate high-level, chart-driven reports that give a situational-awareness overview of key databases. These reports should be designed to let a security analyst review the data access activities of any database within 10 minutes, and should minimally describe the source database IDs, shared user accounts (if any), the IPs and applications connecting to the database, and a breakdown of operation types per user. A chart-driven report enables an analyst to easily spot abnormal events, such as “Hey, why is there a Microsoft Office 2010 source app connecting to my payroll database?” Only then does the analyst open the 60k-line raw log report to find the exact entries involving “Microsoft Office 2010” and the payroll database. More investigation and data pivoting can take place from there, making the raw log report far more useful than aimlessly crunching through it line by line. This approach helps analysts keep their sanity and stay effective when reviewing database access activities. (A minimal sketch of this summarize-first workflow follows this list.)
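
As a concrete illustration of the chart-first approach in point 4, here is a minimal Python sketch that condenses a raw audit log export into the per-user and per-application summaries such a report is built from. The file name and column names (user_id, source_app, source_ip, operation, object_name) are illustrative assumptions about the export format.

```python
# Sketch: condense a raw database audit log into the high-level summaries a
# chart-driven report is built from, instead of reading 60k raw lines directly.
# The file name and column names are illustrative assumptions about the export.
import pandas as pd

logs = pd.read_csv("weekly_db_audit.csv")  # hypothetical raw log export

# Breakdown of operation types per user: the core of the report.
ops_per_user = logs.groupby(["user_id", "operation"]).size().unstack(fill_value=0)
print(ops_per_user)

# Which source applications connect to this database, and how often?
print(logs.groupby("source_app").size().sort_values(ascending=False))

# Pivot into the raw rows only once something looks abnormal, e.g. an
# unexpected Office client touching the payroll database.
suspicious = logs[logs["source_app"].str.contains("Office", na=False)]
print(suspicious[["user_id", "source_ip", "operation", "object_name"]])
```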

Step 3: Invest in an independent database audit and protection tool that supports the concepts in Step 2

By now, it is apparent that it is technically impossible to rely solely on human effort to focus on sensitive data access and privileged users’ activities on the database. Solutions and tools should be considered to accelerate and automate this journey as much as possible.

Independent database audit and protection solutions typically collect audit data from database servers without requiring native auditing to be enabled. This conserves database server performance and also reduces resistance from database administrators when it comes to database security monitoring.

These solutions collect database activities using a couple of methods that should have minimal impact on database server performance:

  • Sniffing the SPAN/TAP port of a switch, or a network aggregation TAP, for database traffic (see the sketch after this list)
  • Deployment of lightweight agents to collect either local access database activities OR all database activities
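
For the first collection method, the sketch below shows, in heavily simplified form, what passively observing database traffic from a mirrored port looks like. It is a sketch only, assuming the scapy library, root privileges and a deployment-specific interface name; a real solution would also decode the wire protocols (e.g. TDS for MS SQL, TNS for Oracle) to recover the actual SQL text.

```python
# Sketch: passively observe database traffic from a SPAN/TAP port, the way an
# independent audit appliance does, without touching the database server itself.
# Requires root; the interface name and port list are deployment-specific assumptions.
from scapy.all import IP, TCP, sniff

DB_PORTS = {1433, 1521, 3306, 50000}  # MS SQL, Oracle, MySQL, DB2 defaults

def log_flow(pkt):
    """Print connection metadata for packets involving a known database port."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and \
            (pkt[TCP].dport in DB_PORTS or pkt[TCP].sport in DB_PORTS):
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport} "
              f"({len(pkt)} bytes)")

# The BPF filter keeps irrelevant traffic from ever reaching Python.
sniff(iface="eth1",
      filter="tcp port 1433 or tcp port 1521 or tcp port 3306 or tcp port 50000",
      prn=log_flow, store=False)
```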

Ideally, the adopted tool should support the following key operational concepts:

  • Translate low-level SQL into a human-understandable language – Instead of having to understand the 1001 SQL commands across different database flavors, the tool should be able to group related SQL commands into human-readable command groups such as “Backup Operations”, “Database Code Changes Operations”, “Data Object Management”, “Privileges Manipulation”, “Users & Privileges Authorization”, “Object Creation”, etc. This removes the requirement for the security administrator to be a database expert before he/she can protect databases. Instead of having to enumerate the exact backup commands and procedures for each database flavor, you can simply create a security audit policy that says, “Alert me whenever any of my DB2/MS SQL/Oracle databases have Backup Operations from an unauthorized source application.” (A minimal sketch of this grouping idea follows this list.)
  • Support the “Define Once and Comply Many Times” approach for audit and compliance manageability – Compliance regulations and auditors generally do not care what flavors of databases are being run in our environments. They are more concerned with the process of how we systematically deal with the risks across all our data assets and the security controls in place to mitigate/reduce those risks. The manpower, time and money incurred to satisfy these audit requirements are also known as the cost of compliance. The audit and security tool should support the capability of defining a required security audit policy only once and applying it uniformly across the entire heterogeneous database environment. This is based on the governance, risk and compliance best practice of “Define Once and Comply Many Times” which aims to drive down the cost of compliance. With this approach, you do not have to worry about the relevance of your current security audit policies when you spin up a new database server or introduce a new database flavor in the future or migrate to a public cloud-based database two years from now.
  • Ability to automatically build a baseline of database service ID behaviors – Applications typically access databases via a database service ID account. Service IDs are one class of resource that needs regular review, as they hold access to sensitive data. Application data access behavior typically does not deviate much, since applications are programmed to access data in a certain fashion. It is therefore technically possible to “profile” the data access behavior of database service IDs and document it as an access matrix. This behavior profile captures the authorized characteristics of the service ID: the source IP addresses and machine names it may originate from, the source applications, the permitted data manipulation language queries (select, insert, update, delete) against a specified set of tables, the databases accessed, etc. When a service ID’s behavior deviates from this norm, the solution can alert the administrator, instead of leaving the administrator to hunt for anomalies amid the access logs of hundreds of service IDs. This helps identify service ID account abuse arising from lateral movement exploits. Most security teams have no idea how every service ID in the environment behaves, and it is unfeasible to build a profile for each one manually. The good news is that some database audit and protection solutions in the market today can build these profiles automatically, reducing manpower effort in the process. (A simplified sketch of such profiling also follows this list.)
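
To make the command-grouping concept concrete, here is a minimal Python sketch of verb-to-group classification. The mapping and the app whitelist are illustrative simplifications, not any product’s actual taxonomy.

```python
# Sketch: normalize raw SQL statements from different database flavors into the
# human-readable command groups described above. The verb-to-group mapping is a
# simplified illustration, not a product's actual taxonomy.
COMMAND_GROUPS = {
    "BACKUP": "Backup Operations",
    "DUMP": "Backup Operations",
    "GRANT": "Privileges Manipulation",
    "REVOKE": "Privileges Manipulation",
    "CREATE": "Object Creation",
    "ALTER": "Data Object Management",
    "DROP": "Data Object Management",
}

def classify(sql: str) -> str:
    verb = sql.strip().split(None, 1)[0].upper() if sql.strip() else ""
    return COMMAND_GROUPS.get(verb, "Data Access / Other")

# Policy check: alert on Backup Operations from an unauthorized source app.
AUTHORIZED_BACKUP_APPS = {"rman", "sqlcmd"}  # illustrative whitelist

def check_policy(sql: str, source_app: str) -> None:
    if classify(sql) == "Backup Operations" and source_app not in AUTHORIZED_BACKUP_APPS:
        print(f"ALERT: backup operation from unauthorized app {source_app!r}: {sql}")

check_policy("BACKUP DATABASE payroll TO DISK = 'x.bak'", "ssms.exe")
```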
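
And for the service ID baseline concept, a minimal sketch of automatically learning a service ID’s access matrix and flagging deviations. The event field names are illustrative assumptions about the audit record format.

```python
# Sketch: auto-build a behavior baseline (access matrix) for each database
# service ID, then flag deviations. Field names are illustrative assumptions.
from collections import defaultdict

FIELDS = ("source_ip", "source_app", "operation", "table")

def build_baseline(events):
    """Learn the set of observed values per field for each service ID."""
    baseline = defaultdict(lambda: {f: set() for f in FIELDS})
    for e in events:
        for f in FIELDS:
            baseline[e["service_id"]][f].add(e[f])
    return baseline

def deviations(event, baseline):
    """Return a description of every field that falls outside the profile."""
    profile = baseline.get(event["service_id"])
    if profile is None:
        return ["unknown service ID"]
    return [f"new {f}: {event[f]}" for f in FIELDS if event[f] not in profile[f]]

# Learning window: historical events; then evaluate live traffic against it.
history = [{"service_id": "svc_crm", "source_ip": "10.1.2.3",
            "source_app": "crm-api", "operation": "SELECT", "table": "CUSTOMERS"}]
baseline = build_baseline(history)
live = {"service_id": "svc_crm", "source_ip": "10.9.9.9",
        "source_app": "sqlplus", "operation": "SELECT", "table": "CUSTOMERS"}
print(deviations(live, baseline))  # -> ['new source_ip: ...', 'new source_app: ...']
```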

Step 4: Explore machine learning and analytics to deal with the chaos

Security audit data review is a tedious process even with the right auditing tools and processes in place. Human effort is always involved in the audit data review process.

Service ID account activities are easier to review because their behavior is deterministic (as explained in Step 3 above). Human account IDs are the hardest to review, because doing so requires understanding each employee’s dynamic behavior and data access patterns. Those patterns may change with workload, mood and job requirements as employees move from one department to another. Unlike the behavior profile of a DB service ID account, human behavior is not something that can easily be documented in an access matrix.

As such, there will always be gaps or suspicious access activity missed no matter how diligently we review the data access activities on a regular basis. We are still limited by our human processing ability and capacity after all.

As machine learning technology has matured in recent years, it can help plug some of the gaps left behind by human review effort.

Machine learning detection technology should aid us in tackling the inherently fuzzy question of “suspicious data access activities” (a rough sketch follows the list below):

  • Are there critical data access violations I have missed during my security review process?
  • Is there sensitive data I have not identified in Step 2 and am therefore not monitoring?
  • Are any of my human users accessing more data than they should be?
  • Are any of the above taking place outside of an employee’s normal working hours?
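
As a rough illustration of the last two questions, the sketch below trains an anomaly detector on a year of per-user daily access features and scores today’s activity against it. It uses scikit-learn’s IsolationForest on synthetic data, purely as a stand-in for the built-in models such products ship with.

```python
# Sketch: flag users whose daily data-access volume deviates from their own
# 12-month history. Uses scikit-learn's IsolationForest on synthetic data as a
# stand-in for a product's built-in models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per user-day: [rows_read, distinct_tables, after_hours_queries]
history = np.column_stack([
    rng.normal(2_000, 300, 365),   # typical daily rows read
    rng.normal(5, 1, 365),         # typical distinct tables touched
    rng.normal(0.2, 0.1, 365),     # rare after-hours activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

today = np.array([[250_000, 40, 6]])  # sudden bulk read, off-hours
if model.predict(today)[0] == -1:
    print("ALERT: today's access pattern is anomalous vs. the last 12 months")
```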

The success of a machine learning security detection technology relies on:

  1. The quality of the ingested data (garbage in, garbage out concept)
  2. A proper understanding of the data context (what does each data point mean in different scenarios?)
  3. The data models specifically built to solve certain use cases

The market is generally moving towards solutions that already have built-in data models. This reduces the need to hire specialty services to build data models whenever a new security use case is required.

Some of the supervised and unsupervised machine learning solutions in the market today provide these data models out of the box:

  • Lateral movement
  • Endpoint takeover
  • Service ID account abuse (humans using service IDs instead of applications)
  • Suspicious direct access to production data (bypassing the authorized source applications)
  • Excessive data access
  • Sensitive data objects prediction
  • Profiling and prediction of human user IDs and service IDs

To recap, we are really just trying to answer the same three seemingly simple questions:

  1. Exactly WHO is accessing my assets?
  2. Is the access OK?
  3. How do I respond QUICKLY if it’s not OK?

Database security is a big domain of knowledge that truly requires operational working knowledge of applications, service IDs, ACLs, database systems and the overarching governance, risk and compliance concepts around them. It is not a topic that can be comprehensively covered in a single article; rather, this article aims to serve as a useful starting point for anyone seriously considering database security.