Every industry stores massive amounts of archival data, and the rate at which new data is created keeps accelerating. Although every organization has unique data access requirements, a large share of archive data quickly moves to a rarely (if ever) accessed state.

But users and their applications still demand access to that data. The challenge has been how to integrate data access with long-term storage and data resiliency. That challenge has given rise to the deep-but-accessible archive.

Figure 1: IBM Storage Deep Archive
Figure 2: The evolution of data storage

Modern data centers have focused on solid-state drive (SSD) and hard disk drive (HDD) implementations with the lowest possible operational impact. Object storage has been the crucial implementation for supporting billions of data assets on HDD at the lowest possible cost. But even the cost reductions of object storage have not been enough to keep up with the growth of data, much of which is infrequently accessed yet must be retained.

Making tape adoption accessible

For organizations and storage administrators that have never deployed tape, it can seem difficult to use. But once tape infrastructure is in place, it is easier to manage per petabyte of data than active storage (SSD or HDD).

A significant part of the difficulty in using tape has been the requirement for independent software vendors (ISVs) to manage the data and provide the interface to tape. Administrators have needed specialized skill sets, and applications have had no direct way to access the data.

One of the reasons ISVs have been required is that tape is a linear access device. Data streams faster from tape than from HDD, but random access carries high latency. IBM Storage Deep Archive solves this with the S3 Glacier Flexible Retrieval storage class.

Figure 3: IBM Storage Deep Archive use case

IBM Storage Deep Archive provides a standard S3 interface, with all data placed on tape, and requires no special tape skills. Applications and administrators simply need to support the S3 APIs. The S3 Glacier Flexible Retrieval storage class handles the latency of tape on behalf of users and their applications.
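As a rough sketch of what this looks like in practice, the workflow follows the standard S3 Glacier pattern: write objects with a Glacier storage class, then request a restore before reading them back. The endpoint URL, bucket name, and credentials below are hypothetical placeholders for illustration, not documented IBM values:

```python
# Illustrative S3 Glacier-style archive/retrieve workflow using boto3.
# Endpoint, bucket, and credentials are hypothetical; consult the
# IBM Storage Deep Archive documentation for the actual values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://deep-archive.example.com",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Archive: an ordinary S3 PUT, tagged with a Glacier storage class so the
# object is placed on tape rather than disk.
with open("results.tar", "rb") as f:
    s3.put_object(
        Bucket="archive-bucket",
        Key="projects/2024/results.tar",
        Body=f,
        StorageClass="GLACIER",
    )

# Retrieve: because tape access is not immediate, reads are a two-step
# operation -- request a restore, then GET once the object is staged.
s3.restore_object(
    Bucket="archive-bucket",
    Key="projects/2024/results.tar",
    RestoreRequest={"Days": 1},
)

# Poll head_object until the restore completes, then read normally.
head = s3.head_object(Bucket="archive-bucket", Key="projects/2024/results.tar")
if head.get("Restore", "").startswith('ongoing-request="false"'):
    obj = s3.get_object(Bucket="archive-bucket", Key="projects/2024/results.tar")
    data = obj["Body"].read()
```

Because the request/restore/read pattern is standard S3 Glacier behavior, applications already written against that API should need no tape-specific changes.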

Figure 4: IBM Storage Deep Archive benefits

IBM Storage Deep Archive reduces the impact of retaining deep archival data. To address the difficulty of tape deployment, IBM® delivers IBM Storage Deep Archive in an install-and-go configuration: the IBM Diamondback tape library can be installed in less than 30 minutes, and once its services are brought online, the system is ready to use.

IBM Storage Deep Archive on Diamondback provides an S3 Glacier-compliant interface for up to 27 petabytes (PB) of uncompressed archive data, with up to 16.1 TB per hour of uncompressed data transfer.
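For a back-of-the-envelope sense of scale: at 16.1 TB per hour, filling the full 27 PB (27,000 TB) capacity would take roughly 27,000 ÷ 16.1 ≈ 1,680 hours, or about 70 days of continuous transfer.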

If your organization is considering improving how it places data across storage tiers, or if it has a cloud-first initiative, IBM Storage Deep Archive can help. CFOs will appreciate the reduced cost of information technology; IT teams will benefit from reduced complexity when storing data in a hybrid infrastructure; and information security and governance teams will value the security and control of on-premises data storage.

Learn more about IBM Storage Deep Archive