November 12, 2019 By Vu Le 3 min read

It seems that major headlines every week focus on data breaches or cyberattacks against well-known, reputable businesses or government agencies. Cyberattacks are becoming more prolific and sophisticated, so it’s no longer a question of if an attack will affect your organization, but when. Certain cyberattacks, such as ransomware, can cripple an organization, if not shut it down completely, which is why all organizations need to focus on cyber-resiliency.

Cyber-resiliency is the ability to continue operation in the event of a cyberattack. While there are multiple aspects of cyber-resiliency, in this post I want to focus on storage resiliency, which should be designed around three key assumptions:

  1. Compromise is inevitable.
  2. Critical data must be copied and stored beyond the reach of compromise.
  3. Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs.

Let’s break down each of these aspects and look at what organizations can do to bolster their cyber-resiliency.

Compromise is inevitable

While it’s nearly impossible in today’s world to completely avoid data breaches or other cyberattacks, there are certain practices that enhance security and help protect against attacks:

  • Discover and patch systems
  • Automatically fix vulnerabilities
  • Adopt a zero-trust policy

However, when an attack does come, you need a plan to respond and recover rapidly.

Critical data must be copied and stored beyond the reach of compromise

Organizations need to understand what data is required for their operations to continue running, such as customer account information and transactions. Protected copies of this mission-critical data shouldn’t be accessible or modifiable from production systems, which can be compromised.

There are several important points of consideration in protecting data:

Limit privileged users: Oftentimes, threats come from internal actors or from an external agent that has compromised a superuser account, giving the attacker total control and the ability to corrupt or destroy production and backup data. You can help prevent this by limiting privileged accounts and authorizing access only on an as-needed basis.

Generate immutable copies: It’s critical to have protected copies of your data that can’t be manipulated. There are multiple storage options for ensuring the immutability of your most critical data, such as Write Once Read Many (WORM) media like tape, cloud object storage with retention locks or specialized storage devices. A snapshot that can simply be mounted to a host, by contrast, is still corruptible.
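To make this concrete, here is a minimal sketch of writing a retention-locked backup copy to an S3-compatible object store using the boto3 SDK. The bucket name, endpoint and retention period are illustrative assumptions, not specifics from this post; your object storage provider’s immutability (object lock or WORM) features and retention rules may differ.

    # Minimal sketch: write an immutable backup copy to an S3-compatible
    # object store that supports object lock (WORM-style retention).
    # Bucket name, endpoint and retention window are illustrative only.
    from datetime import datetime, timedelta, timezone

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

    # The bucket must be created with object lock enabled; on most
    # S3-compatible stores this cannot be turned on for an existing bucket.
    s3.create_bucket(Bucket="critical-backups", ObjectLockEnabledForBucket=True)

    # Upload a backup copy in COMPLIANCE mode: nobody, including privileged
    # users, can delete or overwrite it until the retention date passes.
    retain_until = datetime.now(timezone.utc) + timedelta(days=30)
    with open("customer-db-backup.dump", "rb") as backup:
        s3.put_object(
            Bucket="critical-backups",
            Key="daily/customer-db-backup.dump",
            Body=backup,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=retain_until,
        )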

Maintain isolation: You also need to maintain a logical and physical separation between protected copies of the data and host systems. For example, put a network airgap between a host and its protected copies.

Consider performance: Different methods of data protection come with different performance characteristics, such as copy duration (how long will the backup take, and how will it affect production performance?), recovery point objective (RPO: how current is my protected data?) and recovery time objective (RTO: how fast can I restore my data?). Organizations need to understand the tradeoffs between their budgets and their business objectives.

Organizations must have the tools to automate, test and learn to recover when a breach or attack occurs

Build automation: Restoration and recovery normally involve multiple complex steps and coordination across multiple systems. The last thing you want to worry about in a high-pressure, time-critical situation is the possibility of user error. Automating recovery procedures provides a consistent approach in any situation.
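As an illustration, here is a minimal sketch of a scripted recovery runbook in Python. The step names and commands are hypothetical placeholders, not an actual IBM tool or product interface; the point is that the steps run in a fixed order, stop on the first failure and log everything, so an operator runs one command instead of performing each step by hand.

    # Minimal sketch: an automated recovery runbook that executes a fixed
    # sequence of steps and stops on the first failure. The commands below
    # are hypothetical placeholders for your environment's real procedures.
    import logging
    import subprocess
    import sys

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    # Ordered recovery steps as (name, shell command) pairs. Placeholders only.
    RECOVERY_STEPS = [
        ("isolate production host", "isolate-host --name prod-db-01"),
        ("mount protected copy", "mount-protected-copy --vault cyber-vault"),
        ("validate copy integrity", "verify-backup --checksum"),
        ("restore database", "restore-db --target prod-db-01"),
        ("run application smoke test", "smoke-test --app billing"),
    ]

    def run_recovery() -> None:
        for name, command in RECOVERY_STEPS:
            logging.info("Starting step: %s", name)
            result = subprocess.run(command.split(), capture_output=True, text=True)
            if result.returncode != 0:
                logging.error("Step failed: %s\n%s", name, result.stderr)
                sys.exit(1)  # stop immediately so operators can investigate
            logging.info("Completed step: %s", name)
        logging.info("Recovery completed successfully.")

    if __name__ == "__main__":
        run_recovery()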

Make it easy to use: Recovery methods should be straightforward enough for operators to handle without calling 10 different engineers, especially in a high-pressure situation. Tools such as push-button web interfaces that launch an automated disaster recovery process make recovery more accessible.

Practice makes perfect: Testing the recovery process regularly is important, not only to validate the process but also to build familiarity for the people executing it. This can be achieved using recovery systems that won’t affect production.

It’s not just important to focus on cybersecurity and the prevention of cyberattacks; it’s equally important to be able to recover and continue operations when an attack occurs.

IBM Systems Lab Services has a team of consultants ready to help organizations address the risks and impacts of cyberattacks. We can help you plan ahead, detect issues and recover quickly should a breach occur. If you have storage or cyber-resiliency questions, please contact us.
