What’s the cost of neglected Data Quality? Five situations that wreck projects

Poor data quality costs businesses, on average, between $10 million and $15 million annually ($12.9 million according to the latest Gartner estimate, from 2021), and these figures will only grow as businesses and information systems become more complex.

In the United States alone, data errors have a negative effect on the economy of $3.1 trillion a year, according to a 2016 IBM estimate, with serious consequences such as lost productivity, unavailability of IT systems and higher maintenance costs. And in many cases (60% of respondents), organizations cannot even put a number on the financial impact of the phenomenon because, as a Gartner survey found, they do not measure its consequences on their own balance sheet.

Poor data quality creates a cascade of negative externalities: for example, it is the leading cause of failure in advanced analytics projects, as a survey by the Digital Innovation Observatories of the Politecnico di Milano points out.

Irion itself was born (in 2004) to address the data quality needs of the banking and insurance industry, but it has since become much more: an end-to-end platform capable of optimizing every data management process, combining various functionalities to create endless data-driven solutions tailored to the needs of any business.

Nearly 20 years of challenges, over 360 Data Apps created and put into production: what are the most common obstacles to a successful data quality project? Let’s look at five fairly common situations and how to avoid them.

Starting with uncertain or unmeasurable goals

Ambiguity, lack of clarity, and the absence of dedicated, calculable KPIs and KQIs, i.e., a lack of care in defining in advance the key metrics against which future performance will be assessed. This is a cross-industry trend, found in many organizations in every sector. Every business goal should be formulated from a S.M.A.R.T. perspective: specific, measurable, attainable, relevant and tied to a well-defined time frame.
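
To make “measurable” concrete, here is a minimal sketch (not Irion functionality; the threshold, field names and sample data are hypothetical) of how a KQI such as “at least 98% of customer records carry a non-empty e-mail address” can be expressed as an automatable check:

```python
import pandas as pd

# Hypothetical KQI: completeness of the e-mail field must be >= 98%
KQI_THRESHOLD = 0.98

# Illustrative customer records only
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "email": ["a@example.com", None, "b@example.com", "", "c@example.com"],
})

# Treat missing values and empty strings as incomplete
valid_email = customers["email"].fillna("").str.strip().ne("")
completeness = valid_email.mean()

print(f"E-mail completeness: {completeness:.1%}")
print("KQI met" if completeness >= KQI_THRESHOLD else "KQI violated")
```

Formulated this way, the goal is specific, measurable and easy to monitor over a defined time frame.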

Starting too late or with insufficient resources

The figures mentioned above should make the urgency of prevention clear, if only for monetary reasons. Just as with our health, intervening on datasets that are already dirty or compromised is more costly: it is better to untangle problems while they are still manageable. Data do not heal themselves, and if the source data is incomplete, incorrect, or poorly structured, correcting it and improving its quality can take a great deal of time and effort. In some cases, the organization may even give in to the temptation to “settle” for low data quality in order to avoid dealing with change management.

In addition, one must consider the increase in entropy brought about by new paradigms, architectures, infrastructures and models, including (but not limited to) those related to machine learning and AI.

Neglecting sponsorship

Cited hundreds of times, on every continent and in every situation imaginable, as the main cause of project failure, in its various shades: “lack of alignment” between IT and Business, or lack of active support from top managers (CxO level).

In the past, winning a decision maker’s approval and support meant building the business case for data quality mostly on external, compelling needs, such as regulatory deadlines and the related reporting. Today, it is clear how widespread and diverse the benefits of actively overseeing data quality are, and how deeply they permeate the entire enterprise.

[Figure: The DAMA Wheel – elaboration on the DAMA-DMBOK2 Data Management Framework. © DAMA International (2017)]

Neglecting governance

As the DAMA Wheel above reflects, data quality is one of the ten spokes in the larger wheel of Data Governance, along with – among others – metadata, architecture, design, and data integration and interoperability. Without a proper approach to data governance (appropriate, actively overseen policies and processes), quality cannot be ensured over time, nor can accountability, ownership and responsibility.

Having “the iceberg syndrome”

This is the risk, as TDAN points out, of focusing only on the “tip” of an underlying, less visible set of critical issues. In fact, the most common checks (completeness, duplication, integrity, etc.) detect only a small fraction of the possible problems, compared to the large “iceberg” of data usage risks that should be managed before internal or external customers (data consumers) detect them.
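
As a rough illustration (pandas assumed; the table and column names are hypothetical), the “tip of the iceberg” checks mentioned above are straightforward to automate, which is precisely why they cover only a fraction of the risks data consumers actually encounter:

```python
import pandas as pd

# Hypothetical orders and customers tables (illustrative only)
orders = pd.DataFrame({
    "order_id":    [100, 101, 101, 103],
    "customer_id": [1, 2, 2, 99],           # 99 has no match in the customer table
    "amount":      [250.0, None, 120.0, 80.0],
})
customers = pd.DataFrame({"customer_id": [1, 2, 3]})

# Completeness: share of non-null values per column
completeness = orders.notna().mean()

# Duplication: rows repeating the same business key
duplicates = orders.duplicated(subset="order_id").sum()

# Referential integrity: orders pointing to unknown customers
orphans = (~orders["customer_id"].isin(customers["customer_id"])).sum()

print(completeness, duplicates, orphans, sep="\n")
# Note: all three checks can pass while the data remain unfit for use
# (wrong amounts, stale records, misapplied business rules, ...).
```

Passing these basic controls says little about the submerged part of the iceberg, which is where data consumers usually get hurt.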

A paradox is also worth noting: on the one hand, “Fortune 1000” companies spend an average of $5 billion a year to secure reliable data; on the other, as several Gartner surveys point out, only 42% of top managers trust their data.

Learn more about the Irion EDM© platform
