• Overview
  • Specifications
  • Reduce the risk of non-compliance and sensitive data theft


    The Imperva data security portfolio is purpose-built to provide security and compliance capabilities that address a broad range of use cases across databases, files, user activity, Big Data, and cloud-based systems. The Imperva Camouflage Data Masking solution reduces your risk profile by replacing sensitive data with realistic fictional data. The fictional data maintains referential integrity and is statistically accurate, enabling testing, analysis, and business processes to operate normally. The primary use of this masking is for data in non-production systems, including test and development systems, data warehouses, and analytical data stores. Another set of candidates for data masking is business enablers that require data to leave the country or company control, such as offshore teams or outsourced systems. The Imperva Camouflage Data Masking solution not only protects data from theft; it also helps ensure compliance with regulations and international policies governing data privacy and transport.
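    Referential integrity under masking is typically preserved by making the substitution deterministic: the same input always maps to the same fictional output, so joins across tables and databases still line up. A minimal sketch of that general technique in Python (illustrative only, not Imperva's implementation; the name list and key are placeholder assumptions):

```python
import hashlib
import random

# Placeholder pool of realistic fictional values (assumption for illustration).
FIRST_NAMES = ["Alice", "Brian", "Chloe", "Devon", "Elena", "Farid"]

def mask_name(value: str, secret: str = "masking-key") -> str:
    """Replace a real name with a realistic fictional one.

    Seeding the choice with a keyed hash of the input makes the mapping
    deterministic: the same source value masks to the same fictional name
    wherever it appears, which keeps foreign-key joins intact.
    """
    digest = hashlib.sha256((secret + value).encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    return rng.choice(FIRST_NAMES)

# Same input, same masked output -- every time.
assert mask_name("John") == mask_name("John")
```

    Because the mapping is keyed by a secret, the masked values cannot be trivially reversed, while statistical properties can be tuned by how the fictional pool is built.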

    • Discover and document sensitive data and data relationships across the enterprise
    • Reduce the volume of sensitive data in non-production systems
    • Facilitate data transport for outsourcing or compliance with international privacy regulations
    • Enable use of production data in development and testing without putting sensitive data at risk
    • Track changes and generate compliance reports at each data refresh
    • Prevent sensitive data loss from non-production systems

    Key Capabilities

    • Discover: Retrieve and analyze sensitive data

      The goal of the Discover phase is to identify data that needs to be masked in order to provide sufficient protection without compromising data utility. This stage involves documenting requirements and understanding the implications of masking, both of which are necessary for creating configurations during the Policy stage of the Data Masking Best Practice. Automated discovery of sensitive data is a key factor in minimizing deployment times and ensuring long-term success.
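      Automated discovery tools of this kind commonly sample column values and match them against patterns for known sensitive data types. A hedged sketch of that general approach (the pattern set, labels, and threshold are illustrative assumptions, not Imperva's actual rules):

```python
import re

# Illustrative patterns for common sensitive types (assumptions, not
# a product's real rule set).
PATTERNS = {
    "email":       re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "ssn":         re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{13,19}$"),
}

def classify_column(sample_values, threshold=0.8):
    """Return the sensitive type whose pattern matches the most samples,
    or None if no pattern clears the match-ratio threshold."""
    best, best_ratio = None, 0.0
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in sample_values if pattern.match(v))
        ratio = hits / len(sample_values) if sample_values else 0.0
        if ratio >= threshold and ratio > best_ratio:
            best, best_ratio = label, ratio
    return best

print(classify_column(["123-45-6789", "987-65-4321"]))  # ssn
```

      A real discovery engine would also weigh column names, data dictionaries, and cross-table relationships, but the sampling-plus-patterns core is the same.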

    • Assess and Classify: Establish context for sensitive data

      The Assess and Classify phase is intended to establish criteria that will aid in determining how to mask the data. This includes codifying the contextual information determined during the Discover phase: the sensitivity of various data, its intended use(s), the transformation requirements, and any inter-database dependencies.

    • Set Policy: Create data masking configurations

      The goal of the Policy phase is to create data masking configurations based upon customer-specific functional masking requirements defined in prior phases. This includes planning and defining requirements for integrating data masking configurations into the overall data refresh process for non-production environments. This phase also provides an opportunity to develop data masking schedules and establish appropriate change management processes. Data masking software that is easy to use, flexible, and scalable is critical for accommodating varying and often complex requirements.

    • Deploy: Integrate data masking in the existing processes

      The Deploy phase is intended to transition data masking into the refresh process for non-production environments, taking the overall business process(es) into account. This phase entails executing the configurations constructed during the Policy phase. Report automation and pre- and post-run script options support a wide range of ancillary processes and requirements.

    • Manage and Report: Adapt to changing requirements and provide visibility

      The Manage and Report phase is where the “fit and value” of the solution will become clear. This phase includes change management, job maintenance, configuration updates and compliance reports about data relationships, masking techniques, and masked database structures.

  • Specifications
    Supported Databases
    • Oracle®
    • SQL Server®
    • DB2®
    • Sybase ASE®
    • Teradata®
    • MySQL®
    • HSQL
    • Netezza
    • PostgreSQL
    • IMS via export to VSAM (KSDS, ESDS) or QSAM data files
    • And more
    Supported Mainframe
    • DB2®
    • VSAM
    • IMS
    Supported Flat File Formats
    • Hadoop HDFS
    • XML
    • CSV
    Data Transformers
    • Combo
    • Credit Card Generator
    • Data Load
    • Date
    • Date Generator
    • Encryption
    • Generic Luhn Generator
    • IPV4 Address Generator
    • National ID Generator
    • Noise
    • Random Number
    • Replace
    • Scramble
    • Script - Column
    • Script - Row
    • Script - Table
    • Sequential Number
    • Shuffle
    • Table Delete
    • Update
    • Update Rows
    • Custom/User-defined
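    As an illustration of one transformer family above, a Luhn-style generator emits digit strings that satisfy the Luhn checksum, so masked card or account numbers still pass format validation in downstream systems. A minimal sketch of the standard Luhn algorithm (illustrative of the transformer's assumed behavior, not Imperva's code):

```python
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit for a string of digits."""
    total = 0
    # Walk right-to-left; double every second digit, counting from the
    # position just left of where the check digit will be appended.
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def generate_luhn_number(length: int = 16) -> str:
    """Generate a random digit string of the given length that
    passes the Luhn checksum."""
    body = "".join(random.choice("0123456789") for _ in range(length - 1))
    return body + luhn_check_digit(body)
```

    A credit-card generator builds on the same check but also constrains the leading digits to valid issuer prefixes so the output looks like a real card number without being one.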