by Stephen Lawton, editor, November 17, 2014 (The following is an excerpt from SC Magazine.)

Additional layers of identity, credential and access management could identify and stop a potential breach. Some enterprises, such as the NSA, are finding that migrating to the cloud aids in the protection of Big Data.

While the definition of “Big Data” tends to depend on one’s perspective of just how big is big, one point is clear: Big Data has become big business and, as such, a prime target for cybercrooks, political activists and nation-states that want to get their hands on massive databases containing petabytes or more of detailed information on individuals, companies and government agencies.

Few organizations, if any, generate more Big Data than the National Security Agency. Similarly, few organizations are as large a target of attacks as the NSA. While recent breaches of confidential government information have been well documented – NSA contractor Edward Snowden and Pfc. Chelsea (formerly Bradley) Manning immediately come to mind – the agency continues to redefine its defenses.

Sally Holcomb, CISO of the NSA, says the agency is now using a private cloud to apply multiple levels of analytics to Big Data in order to further protect it. Beyond installing several additional layers of physical security, using a combination of tokens, locked gates around servers and additional identity, credential and access management (ICAM) tools, the move to a private cloud also lets the NSA build in layers of attribute-based access management that were not available with its previous databases, Holcomb says. In addition to the traditional rules that identified who could access the data, such as the nationality of the user, privileges based on job title, security clearance and the like, the data can now be protected by policies that can only be determined through a deeper analysis of both the data and the user, she says.

For example, data can be tagged to require specific training and experience of the potential user in order for them to understand how to use it. The potential user’s training, education, job description, current assignments and various other ascribed characteristics create a matrix of attributes that must match the data’s in order for the user to gain access. Even if the person trying to view the information holds all of the requisite security clearances, training and expertise, should that user have access? These additional layers of ICAM could identify and stop a potential breach, whether it stems from stolen credentials and impersonation or from an insider with a valid security clearance who has no legitimate reason to access the specific data they are trying to view, print or copy, Holcomb says.
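
What Holcomb describes is, in essence, attribute-based access control: access is granted only when every attribute the data demands is matched by the requesting user. As a minimal sketch only – the attribute names, clearance levels and policy below are hypothetical, not the NSA’s actual schema or rules – such a check might look like this:

```python
# Hypothetical sketch of attribute-based access control (ABAC).
# Attribute names and the policy are illustrative only.

# Attributes tagged onto a piece of data by its steward.
DATA_POLICY = {
    "clearance": "TS",                    # minimum clearance required
    "required_training": {"SIGINT-101"},  # courses the user must have completed
    "need_to_know": {"PROJECT-X"},        # assignments that justify access
}

# Attributes asserted about the requesting user by the identity system.
user = {
    "clearance": "TS",
    "training": {"SIGINT-101", "OPSEC-200"},
    "assignments": {"PROJECT-Y"},  # note: not assigned to PROJECT-X
}

CLEARANCE_RANK = {"C": 1, "S": 2, "TS": 3}

def access_allowed(user: dict, policy: dict) -> bool:
    """Grant access only if every data attribute is matched by the user."""
    if CLEARANCE_RANK[user["clearance"]] < CLEARANCE_RANK[policy["clearance"]]:
        return False
    if not policy["required_training"] <= user["training"]:
        return False
    # Even a fully cleared, fully trained user is refused without a
    # current assignment that establishes a reason to see this data.
    return bool(policy["need_to_know"] & user["assignments"])

print(access_allowed(user, DATA_POLICY))  # False: cleared and trained, but no need to know
```

The final check is the one that would catch the insider Holcomb describes: valid clearance and training, but no assignment that justifies touching the data.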

Additionally, the NSA’s improved audit capabilities – known as data provenance – show in greater detail when and what changes are made to documents. The data header created with the original document now includes a much more detailed record of what happens to a document once it is moved to the cloud, she says, including who “touched” the document, what those users did and what alterations were made to the original.
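
Conceptually, data provenance amounts to an append-only change log carried with the document from its creation onward. A minimal sketch, with entirely hypothetical field names, might record each touch like this:

```python
# Hypothetical sketch of a data-provenance record: an append-only log of
# who touched a document, when, and what was done. Field names are illustrative.
from datetime import datetime, timezone

document = {
    "id": "doc-0001",
    "created_by": "analyst_a",
    "created_at": "2014-06-02T14:05:00+00:00",
    "provenance": [],  # grows over the document's lifetime; never rewritten
}

def record_touch(doc: dict, user: str, action: str, detail: str) -> None:
    """Append one provenance entry; existing entries are never modified."""
    doc["provenance"].append({
        "user": user,
        "action": action,   # e.g. "viewed", "edited", "copied", "printed"
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_touch(document, "analyst_b", "edited", "redacted section 3")
record_touch(document, "analyst_c", "printed", "hard copy requested")

for entry in document["provenance"]:
    print(entry["timestamp"], entry["user"], entry["action"], entry["detail"])
```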

Since the Snowden breach came to light, she says, a number of security vendors have approached the NSA with products they said would “solve the problem” that led to the infamous exposure. While the various offerings might address some of the issues identified by the breach, she says, moving the data to the cloud provides the most effective new security controls, including limiting physical access to data and storing it in protected locations.

But the NSA isn’t unique. Paul Hill, senior consultant at SystemExperts, a Sudbury, Mass.-based network security consulting firm, says the same issues challenge security executives across all industries. “Big Data can create big problems for a CISO,” he says. “As the hype around Big Data has grown, a number of companies have taken the approach of ‘let’s store everything’ and later worry about what subset of the data they will use for analysis.”

But not all data requires the same level of security – be it traditional, paper-based documents or Big Data – nor should all data be stored forever. “Data should be classified and labeled so that an organization can understand its data retention and destruction requirements, as well as the handling requirements,” he says. “Large data sets of unstructured data can have a lot of unexpected information in them. This may include personally identifiable information (PII), financial details, or even data that falls under the Payment Card Industry Data Security Standard (PCI DSS).”
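
As one hedged illustration of Hill’s point, a classification label attached to a record could carry its retention and handling rules explicitly. Everything below – the labels, retention periods and naive detection patterns – is hypothetical, not a standard:

```python
# Hypothetical sketch: classify and label data so retention, destruction
# and handling requirements travel with it. Labels, retention periods and
# detection patterns are illustrative only.
import re

LABELS = {
    "public": {"retention_days": 365,  "encrypt_at_rest": False},
    "pii":    {"retention_days": 1095, "encrypt_at_rest": True},
    "pci":    {"retention_days": 365,  "encrypt_at_rest": True},
}

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # naive US SSN check
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # naive card-number check

def classify(record: str) -> str:
    """Assign the most restrictive label that any pattern in the record triggers."""
    if CARD_PATTERN.search(record):
        return "pci"
    if SSN_PATTERN.search(record):
        return "pii"
    return "public"

record = "Customer 4412, SSN 123-45-6789, renewed subscription."
label = classify(record)
print(label, LABELS[label])  # pii {'retention_days': 1095, 'encrypt_at_rest': True}
```

The point is not the (deliberately crude) pattern matching but the labeling: once a record carries a label, its retention and handling rules are unambiguous.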

Hill offers some specific tips on how to secure Big Data; a sketch of how the per-data-type risk evaluation might be encoded follows the list:

  • Identify the types of information being gathered and the regulatory encumbrances associated with the data
  • Do not ignore incidental information that is gathered unintentionally
  • Evaluate the risk of each type of data being considered for storage:
    • What are the potential benefits?
    • What are the regulatory or contractual obligations inherent in storing the data?
    • How long should it be retained?
    • What are the implications if the data were disclosed during a breach?
    • Does storing the data make it a target for eDiscovery? And what will the costs be to comply?
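
As a hedged illustration of Hill’s checklist, the sketch below encodes his risk questions so that no data type is approved for storage until each has an answer; the data type, question names and answers are all hypothetical:

```python
# Hypothetical sketch: Hill's per-data-type risk evaluation as a checklist.
# Question keys mirror the bullets above; the example answers are invented.

QUESTIONS = [
    "potential_benefit",       # what are the potential benefits?
    "regulatory_obligations",  # regulatory/contractual obligations of storing it
    "retention_period_days",   # how long should it be retained?
    "breach_impact",           # implications if disclosed during a breach
    "ediscovery_target",       # does storing it invite eDiscovery, and at what cost?
]

def evaluate(data_type: str, answers: dict) -> None:
    """Flag any unanswered risk question before the data type is approved for storage."""
    missing = [q for q in QUESTIONS if q not in answers]
    if missing:
        print(f"{data_type}: NOT approved; unanswered: {', '.join(missing)}")
    else:
        print(f"{data_type}: risk review complete")

evaluate("clickstream_logs", {
    "potential_benefit": "funnel analysis",
    "regulatory_obligations": "none identified",
    "retention_period_days": 180,
    "breach_impact": "low; no PII expected",
    # "ediscovery_target" deliberately left unanswered
})
```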