A data architecture describes how data is managed, from collection through to transformation, distribution and consumption. It sets the blueprint for data and the way that it flows through data storage systems. It is foundational to data processing operations and artificial intelligence (AI) applications.
The design of a data architecture should be driven by business requirements and data needs, which data architects and data engineers use to define the respective data model and underlying data structures that support it. These designs typically facilitate a business strategy or business need, such as a reporting or data science initiative.
As new data sources appear from emerging technologies, such as the Internet of Things (IoT), a good data architecture helps ensure that data is manageable and useful, supporting data lifecycle management. More specifically, it can avoid redundant data storage, improve data quality through cleansing and deduplication and enable new applications such as generative AI.
Modern data architectures also provide mechanisms to integrate data across domains, such as between departments or geographies. They break down data silos without the huge complexity that comes with storing everything in one place.
Modern data architectures often use cloud platforms to manage and process data. While the cloud can be more costly, its compute scalability enables important data processing tasks to be completed rapidly. Its storage scalability also helps cope with rising data volumes and ensures that all relevant data is available for training AI applications, improving their quality.
The data architecture documentation includes 3 types of data models: conceptual, logical and physical.
A data architecture can draw from popular enterprise architecture frameworks, including TOGAF, DAMA-DMBOK 2 and the Zachman Framework for Enterprise Architecture.
This enterprise architecture methodology was developed in 1995 by The Open Group, of which IBM is a Platinum Member.
There are 4 pillars to the architecture: business architecture, data architecture, application architecture and technical architecture.
TOGAF provides a complete framework for designing and implementing an enterprise’s IT architecture, including its data architecture.
DAMA International, originally founded as the Data Management Association International, is a not-for-profit organization dedicated to advancing data and information management. Its Data Management Body of Knowledge, DAMA-DMBOK 2, covers data architecture, governance and ethics, data modeling and design, storage, security and integration.
Originally developed by John Zachman at IBM in 1987, this framework uses a matrix of 6 layers, from contextual to detailed, mapped against 6 questions: what, how, where, who, when and why. It provides a formal way to organize and analyze data but does not include methods for doing so.
A data architecture demonstrates a high-level perspective of how different data management systems work together. These are inclusive of various data platforms and data storage repositories, such as data lakes, data warehouses, data marts, databases and more.
Together, these can create data architectures, such as data fabrics and data meshes, which are growing in popularity. These architectures place more focus on data as products, creating more standardization around metadata and more democratization of data across organizations via application programming interfaces (APIs).
The next section delves deeper into each of these storage components and data architecture types:
A data warehouse aggregates data from different relational data sources across an enterprise into a single, central, consistent repository. After extraction, the data flows through an extract, transform and load (ETL) data pipeline, undergoing various data transformations to meet the predefined data model. Once loaded into the data warehouse, the data is ready to support various business intelligence (BI) and data science applications.
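The extract, transform, load flow can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the source data, table names and use of an in-memory SQLite database as a stand-in warehouse are all assumptions made for the example.

```python
import sqlite3

# Two "source systems" standing in for operational databases (illustrative data).
crm_rows = [("C1", "Ada", "EMEA"), ("C2", "Grace", "AMER")]
order_rows = [("O1", "C1", "149.99"), ("O2", "C2", "20.00")]

def extract():
    """Pull raw rows from each source."""
    return crm_rows, order_rows

def transform(customers, orders):
    """Conform raw rows to the warehouse's predefined model:
    cast amounts to numbers and attach each order's customer region."""
    region_by_id = {cid: region for cid, _, region in customers}
    return [(oid, cid, float(amount), region_by_id[cid])
            for oid, cid, amount in orders]

def load(fact_rows):
    """Load the conformed rows into a central repository
    (an in-memory SQLite database stands in for the warehouse)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fact_orders "
                 "(order_id TEXT, customer_id TEXT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?, ?)", fact_rows)
    return conn

warehouse = load(transform(*extract()))
total = warehouse.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0]
```

Once loaded, BI tools query the single `fact_orders` table instead of each source system separately.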
A data mart is a focused version of a data warehouse that contains a smaller subset of data important to and needed by a single team or a select group of stakeholders, such as the HR department. Because they contain a smaller subset of data, data marts enable a department or business line to discover more focused insights more quickly than possible when working with the broader data warehouse data set.
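One common way to carve a mart out of a warehouse is a database view scoped to one department. The sketch below assumes an HR-focused mart over a tiny illustrative employee table; names and data are made up for the example.

```python
import sqlite3

# A tiny stand-in warehouse table (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id TEXT, name TEXT, department TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?, ?)", [
    ("E1", "Ada", "HR", 70000.0),
    ("E2", "Grace", "Engineering", 90000.0),
    ("E3", "Lin", "HR", 65000.0),
])

# The data mart: a view exposing only the subset the HR team needs.
conn.execute("CREATE VIEW hr_mart AS "
             "SELECT id, name, salary FROM employees WHERE department = 'HR'")
hr_rows = conn.execute("SELECT id, name FROM hr_mart ORDER BY id").fetchall()
```

The HR team queries `hr_mart` directly, without scanning or even seeing the rest of the warehouse.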
Data marts originally emerged in response to the difficulties organizations had setting up data warehouses in the 1990s. Integrating data from across the organization at that time required numerous manual coding efforts and was impractically time-consuming. The more limited scope of data marts made them simpler and faster to implement than centralized data warehouses.
While data warehouses store processed data, a data lake houses raw data, typically petabytes of it. A data lake can store both structured and unstructured data, which sets it apart from other data repositories. This storage flexibility is useful for data analysts, data scientists, data engineers and developers, enabling them to access data for data discovery exercises and machine learning (ML) projects.
Data lakes were originally created as a response to the data warehouse’s failure to handle the growing volume, velocity and variety of big data. While data lakes are slower than data warehouses, they are also cheaper as there is little to no data preparation before ingestion. Today, they continue to evolve as part of data migration efforts to the cloud.
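The "little to no preparation" point is the key structural difference: files land in the lake as-is. A minimal sketch, using a temporary directory as a stand-in for the lake's object storage; the folder layout and file contents are illustrative assumptions.

```python
import json
import pathlib
import tempfile

# A temporary directory stands in for the lake's object storage.
lake = pathlib.Path(tempfile.mkdtemp()) / "lake"

# Structured data lands as-is, with no upfront transformation...
structured = lake / "raw" / "clickstream" / "2024-06-01.json"
structured.parent.mkdir(parents=True)
structured.write_text(json.dumps([{"user": "C1", "page": "/home"}]))

# ...and unstructured data (emails, logs, documents) lives alongside it.
unstructured = lake / "raw" / "support-emails" / "ticket-831.txt"
unstructured.parent.mkdir(parents=True)
unstructured.write_text("Subject: login issue ...")

raw_files = sorted(p.name for p in lake.rglob("*") if p.is_file())
```

Schema and meaning are imposed later, at read time, when a specific project decides how to interpret the raw files.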
Data lakes support a wide range of use cases because the business goals for the data do not need to be defined at the time of data collection. However, 2 primary use cases include data science exploration and data backup and recovery efforts.
Data scientists can use data lakes for proofs-of-concept. Machine learning applications benefit from the ability to store structured and unstructured data in the same place, which is not possible using a relational database system.
Data lakes can also be used to test and develop big data analytics projects. When the application has been developed and the useful data has been identified, the data can be exported into a data warehouse for operational use, and automation can be used to make the application scale.
Data lakes can also be used for data backup and recovery, due to their ability to scale at a low cost. For the same reasons, data lakes are good for storing “just in case” data for which business needs have not yet been defined. Storing the data now means it is available later as new initiatives emerge.
A data lakehouse is a data platform that merges aspects of data warehouses and data lakes into one data management solution.
A lakehouse combines low-cost storage with a high-performance query engine and intelligent metadata governance. This enables organizations to store large amounts of structured and unstructured data and easily use that data for AI, ML and analytics efforts.
A database is the basic digital repository for storing, managing and securing data. Different types of databases store data in different ways. For example, relational databases (also called "SQL databases") store data in defined tables with rows and columns. Nonrelational databases (also called "NoSQL databases") can store it as various data structures, including key-value pairs or graphs.
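The contrast between the two storage models can be shown side by side. This sketch uses SQLite for the relational side and a plain dictionary as a stand-in for a key-value store; the product data is illustrative.

```python
import sqlite3

# Relational: the schema is defined up front as a table of rows and columns.
rel = sqlite3.connect(":memory:")
rel.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT, price REAL)")
rel.execute("INSERT INTO products VALUES ('A1', 'Widget', 9.99)")
row = rel.execute("SELECT name, price FROM products WHERE sku = 'A1'").fetchone()

# Nonrelational (key-value style): each key maps to an arbitrary value, with no
# fixed schema. A plain dict stands in for a key-value store here.
kv = {}
kv["product:A1"] = {"name": "Widget", "price": 9.99, "tags": ["hardware"]}
value = kv["product:A1"]
```

Note the key-value record can nest a list under `tags` with no schema change, while the relational table would need its columns altered to hold new fields.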
A data fabric is an architecture that focuses on the automation of data integration, data engineering and governance in a data value chain between data providers and data consumers.
A data fabric is based on the notion of “active metadata” that uses data catalogs, knowledge graphs, semantics, data mining and machine learning technology to discover patterns in various types of metadata (for example, system logs, social and more). Then, it applies this insight to automate and orchestrate the data value chain.
For example, a data fabric can enable a data consumer to find a data product and then have that data product provisioned to them automatically. The increased data access between data products and data consumers leads to a reduction in data siloes and provides a more complete picture of the organization’s data.
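The discover-then-provision flow can be reduced to a toy model. Everything here is a hypothetical sketch: the catalog entries, tags, URIs and the `provision` step are invented stand-ins for what a real fabric would automate with active metadata.

```python
# A hypothetical, minimal metadata catalog describing data products.
catalog = [
    {"name": "customer_360", "domain": "sales",
     "tags": ["pii", "curated"], "uri": "s3://lake/customer_360"},
    {"name": "web_logs", "domain": "marketing",
     "tags": ["raw"], "uri": "s3://lake/web_logs"},
]

def find_products(tag):
    """Discover data products by their metadata, not their physical location."""
    return [entry for entry in catalog if tag in entry["tags"]]

def provision(entry, consumer):
    """Stand-in for automated provisioning: return an access grant the
    consumer can use, without a manual integration project."""
    return {"consumer": consumer, "uri": entry["uri"], "granted": True}

grant = provision(find_products("curated")[0], "analytics-team")
```

The point of the sketch is the sequence: the consumer searches metadata, the fabric resolves the match to a location and grants access automatically.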
Data fabrics are an emerging technology with enormous potential. They can be used to enhance customer profiling, fraud detection and preventive maintenance. According to Gartner, data fabrics reduce integration design time by 30%, deployment time by 30% and maintenance by 70%.
A data mesh is a decentralized data architecture that organizes data by business domain.
To use a data mesh, an organization needs to stop thinking of data as a by-product of a process and start treating it as a product in its own right. Data producers act as data product owners. As subject matter experts, data producers can use their understanding of the data’s primary consumers to design APIs for them. These APIs can also be accessed from other parts of the organization, providing broader access to managed data.
More traditional storage systems, such as data lakes and data warehouses, can be used as multiple decentralized data repositories to realize a data mesh. A data mesh can also work with a data fabric, with the data fabric’s automation enabling new data products to be created more quickly or enforcing global governance.
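The "data as a product" idea can be sketched as a small class: the domain team keeps its storage private and publishes only an interface. The class shape, domain names and records below are hypothetical, made up for illustration.

```python
# Hypothetical sketch: each domain team owns a "data product" that exposes its
# data through a small interface, instead of dumping tables into a shared store.
class DataProduct:
    def __init__(self, domain, owner, records):
        self.domain = domain      # business domain that owns the product
        self.owner = owner        # the data producer acting as product owner
        self._records = records   # internal storage stays private to the domain

    def read(self, **filters):
        """The published interface other teams would call via an API."""
        return [r for r in self._records
                if all(r.get(k) == v for k, v in filters.items())]

# The sales domain publishes its product; any other domain consumes it the
# same way, through read(), never by reaching into _records directly.
sales = DataProduct("sales", "sales-team", [
    {"order": "O1", "region": "EMEA"},
    {"order": "O2", "region": "AMER"},
])
emea_orders = sales.read(region="EMEA")
```

Because consumers depend only on `read()`, the sales team can reorganize its internal storage without breaking anyone downstream.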
A well-constructed data architecture can offer businesses several key benefits, including:
There might be overlapping data fields across different sources, resulting in the risk of inconsistency, data inaccuracies and missed opportunities for data integration. A good data architecture can standardize how data is stored and potentially reduce duplication, enabling better quality and holistic analyses.
Well-designed data architectures can solve some of the challenges of poorly managed data lakes, also known as “data swamps”. A data swamp lacks appropriate data quality and data governance standards, so the data it holds is difficult to turn into useful insights.
Data architectures can help enforce data governance and data security standards, enabling the appropriate oversight of data pipelines. By improving data quality and governance, data architectures can help ensure that data is stored in a way that makes it useful now and in the future.
Data is often siloed as a result of technical limitations on data storage and organizational barriers within the enterprise. Today’s data architectures aim to facilitate data integration across domains so that different geographies and business functions have access to each other’s data. That leads to a better and more consistent understanding of common metrics, such as expenses, revenue and their associated drivers. It also enables a more holistic view of customers, products and geographies to inform data-driven decision-making.
A modern data architecture can address how data is managed over time. Data typically becomes less useful as it ages and is accessed less frequently. Over time, data can be migrated to cheaper, slower storage types so it remains available for reports and audits, but without the expense of high-performance storage.
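A tiering policy like the one described can be expressed as a simple rule over data age. The tier names and age thresholds below are illustrative assumptions, not a standard.

```python
# A minimal sketch of a lifecycle tiering policy: data ages out of
# high-performance storage into cheaper tiers while staying available
# for reports and audits. Thresholds and tier names are illustrative.
def storage_tier(age_days):
    if age_days <= 30:
        return "hot"    # high-performance storage for frequently accessed data
    if age_days <= 365:
        return "warm"   # cheaper, slower storage
    return "cold"       # low-cost archival storage kept for audits

tiers = {age: storage_tier(age) for age in (7, 90, 800)}
```

A scheduled job would apply such a rule to migrate objects between storage classes as they age.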
As organizations build their roadmaps for tomorrow’s applications, including AI, blockchain and Internet of Things (IoT) workloads, they need a modern data architecture that can support the data requirements.
The top characteristics of a modern data architecture are: