
What is data quality?

Data quality measures how well a dataset meets criteria for accuracy, completeness, validity, consistency, uniqueness, timeliness and fitness for purpose. It is critical to all data governance initiatives within an organization.

Data quality standards ensure that companies are making data-driven decisions to meet their business goals. If data issues such as duplicate records, missing values and outliers aren't properly addressed, businesses increase their risk of negative business outcomes. According to a Gartner report, poor data quality costs organizations an average of USD 12.9 million each year [1]. As a result, data quality tools have emerged to mitigate the negative impact of poor data quality.
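Issues like these can be screened programmatically. The following is a minimal sketch, assuming a small list of hypothetical customer transactions (all field names and values are illustrative, not from any specific product):

```python
from statistics import median

# Hypothetical customer transactions; names and values are illustrative.
records = [
    {"id": 1, "email": "a@example.com", "amount": 120.0},
    {"id": 2, "email": None,            "amount": 110.0},  # missing value
    {"id": 3, "email": "c@example.com", "amount": 130.0},
    {"id": 3, "email": "c@example.com", "amount": 130.0},  # duplicate row
    {"id": 4, "email": "d@example.com", "amount": 9000.0}, # suspected outlier
]

# Duplicate data: rows whose full contents repeat.
seen, duplicates = set(), []
for r in records:
    key = tuple(sorted(r.items()))
    if key in seen:
        duplicates.append(r)
    seen.add(key)

# Missing values: any field that is None.
missing = [r for r in records if any(v is None for v in r.values())]

# Outliers: robust z-score based on the median absolute deviation (MAD),
# which is less distorted by the outlier itself than mean/stdev on small samples.
amounts = [r["amount"] for r in records]
med = median(amounts)
mad = median(abs(a - med) for a in amounts)
outliers = [r for r in records
            if mad and 0.6745 * abs(r["amount"] - med) / mad > 3.5]

print(len(duplicates), len(missing), len(outliers))  # 1 1 1
```

Real data quality tools apply far richer rules, but each check reduces to the same pattern: define what "good" looks like, then count the records that fall short.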

When data quality meets the standard for its intended use, data consumers can trust the data. This trust enables them to improve decision‑making, leading to new business strategies or optimization of existing ones. However, when a standard isn’t met, data quality tools provide value by helping businesses to diagnose underlying data issues. A root cause analysis enables teams to remedy data quality issues quickly and effectively.

Data quality isn’t only a priority for day‑to‑day business operations. As companies integrate artificial intelligence (AI) and automation technologies into their workflows, high‑quality data will be crucial for the effective adoption of these tools. As the old saying goes, “garbage in, garbage out,” and this principle holds true for machine learning algorithms as well. If an algorithm learns to predict or classify from bad data, it will yield inaccurate results.

Data quality versus data integrity versus data profiling

Data quality, data integrity and data profiling are closely interrelated. Data quality is the broadest category of criteria that organizations use to evaluate their data for accuracy, completeness, validity, consistency, uniqueness, timeliness and fitness for purpose.

Data integrity focuses on a subset of these attributes, specifically accuracy, consistency and completeness. It also approaches these attributes from the lens of data security, implementing safeguards to protect against data corruption by malicious actors.

Data profiling, by contrast, focuses on the process of reviewing and cleansing data to maintain data quality standards within an organization. This practice can also encompass the technology that supports these processes.
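In practice, profiling means summarizing each column before any cleansing begins. Here is a minimal illustration of that idea, using a hypothetical `profile_column` helper rather than any specific tool's API:

```python
# A minimal column-profiling sketch; the function name and output fields
# are illustrative, not drawn from any particular profiling product.
def profile_column(values):
    """Summarize one column: volume, completeness, uniqueness and range."""
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "missing_pct": round(100 * (len(values) - len(non_null)) / len(values), 1),
        "distinct": len(set(non_null)),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }

ages = [34, 29, None, 41, 29, 57]
print(profile_column(ages))
# {'count': 6, 'missing_pct': 16.7, 'distinct': 4, 'min': 29, 'max': 57}
```

A profile like this is usually the first artifact a data quality review produces: it tells teams where cleansing effort should go before any rules are written.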


Dimensions of data quality

Data quality is evaluated based on various dimensions, which can differ based on the source of information. These dimensions are used to categorize data quality metrics:

  • Completeness: This metric represents the amount of data that is usable or complete. If there is a high percentage of missing values, it can lead to a biased or misleading analysis if the data is not representative of a typical data sample.
  • Uniqueness: This measure accounts for the amount of duplicate data in a dataset. For example, when reviewing customer data, you should expect that each customer has a distinctive customer ID.
  • Validity: This dimension measures how much data matches the required format for any business rules. Formatting usually includes metadata, such as valid data types, ranges, patterns and more.
  • Timeliness: This dimension refers to the readiness of the data within an expected time frame. For example, customers expect to receive an order number immediately after they have made a purchase, and that data needs to be generated in real time.
  • Accuracy: This dimension refers to the correctness of the data values based on the agreed upon “source of truth”. Since there can be multiple sources that report on the same metric, it’s important to designate a primary data source. Other data sources can then be used to confirm the accuracy of the primary one. For example, tools can check to see that each data source is trending in the same direction to bolster confidence in data accuracy.
  • Consistency: This dimension evaluates data records from two different datasets. As mentioned earlier, multiple sources can be identified to report on a single metric. Using different sources to check for consistent data trends and behavior allows organizations to trust any actionable insights from their analyses. This logic can also be applied around relationships between data. For example, the number of employees in a department should not exceed the total number of employees in a company.
  • Fitness for purpose: Finally, fitness for purpose helps to ensure that the data asset meets a business need. This dimension can be difficult to evaluate, particularly with new, emerging datasets.

These metrics help teams conduct data quality assessments across their organizations to evaluate how informative and useful data is for a specific purpose.
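Several of these dimensions can be expressed as simple scores. The sketch below, using hypothetical customer records and an intentionally simplified email rule standing in for a real business rule, scores completeness, uniqueness and validity:

```python
import re

# Hypothetical customer records; IDs, emails and the email rule are illustrative.
customers = [
    {"customer_id": "C001", "email": "ana@example.com"},
    {"customer_id": "C002", "email": "ben@example"},      # invalid format
    {"customer_id": "C002", "email": "cli@example.com"},  # duplicate ID
    {"customer_id": "C004", "email": None},               # missing value
]

total = len(customers)

# Completeness: share of records with no missing fields.
complete = sum(1 for c in customers if all(v is not None for v in c.values()))

# Uniqueness: share of customer IDs that are distinct.
ids = [c["customer_id"] for c in customers]
unique_ids = len(set(ids))

# Validity: share of non-null emails matching a simple pattern (the "business rule").
pattern = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
emails = [c["email"] for c in customers if c["email"] is not None]
valid = sum(1 for e in emails if pattern.match(e))

print(f"completeness={complete/total:.0%} "
      f"uniqueness={unique_ids/len(ids):.0%} "
      f"validity={valid/len(emails):.0%}")
# completeness=75% uniqueness=75% validity=67%
```

Scores like these make the dimensions comparable over time: a dashboard that tracks completeness or validity week over week surfaces quality regressions before they reach downstream analytics.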

Why is data quality important?

Over the last decade, developments within hybrid cloud, artificial intelligence, the Internet of Things (IoT) and edge computing have led to the exponential growth of big data. As a result, the practice of master data management (MDM) has become more complex, requiring more data stewards and rigorous safeguards to ensure good data quality.

Businesses rely on data quality management to support their data analytics initiatives, such as business intelligence dashboards. Without this oversight, there can be devastating consequences, even ethical ones, depending on the industry (for example, healthcare). Data quality solutions exist to help companies maximize the use of their data and they have driven key benefits, such as:

  • Better business decisions: High-quality data allows organizations to identify key performance indicators (KPIs) to measure the performance of various programs, which allows teams to improve or grow them more effectively. Organizations that prioritize data quality will undoubtedly have an advantage over their competitors.
  • Improved business processes: Good data also means that teams can identify where there are breakdowns in operational workflows. This is especially true for the supply chain industry, which relies on real-time data to determine appropriate inventory levels and to track goods after shipment.
  • Increased customer satisfaction: High data quality provides organizations, particularly marketing and sales teams, with deep insight into their target buyers. They are able to integrate different data across the sales and marketing funnel, which enables them to sell their products more effectively. For example, the combination of demographic data and web behavior can inform how organizations create their messaging, invest their marketing budget or staff their sales teams to service existing or potential clients.