From our vantage point, it’s clear that while some pricing teams are extracting tremendous value and insight from their data, other groups are struggling to use their data to do much of anything beyond the basics. Observing these dynamics, it would be easy to assume that the leading teams must have access to data that is somehow “better” or “cleaner” than the data the laggards are working with.
But is that really the case? Does the difference between the data leaders and the data laggards really boil down to just having cleaner data?
Nope. Not really.
You see, the leaders usually have even more data quality issues to deal with than the laggards. After all, their businesses tend to be bigger and more diverse, with many more products, customers, and transactions to manage, across a wider range of geographies, sectors, and so on. So it’s not as though their data is inherently any simpler or cleaner. If anything, the leaders’ data is an even bigger mess to begin with.
It’s also not as though the leaders are using some super-secret, ultra-powerful data hygiene processes and protocols that just aren’t available to the laggards. When it comes to data hygiene, everyone uses the same standard set of techniques and routines as everyone else: fixing formatting inconsistencies, duplicates, outliers, null values, corruptions, irrelevancies, mismatches, and the like.
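To make those standard routines concrete, here’s a minimal sketch of what a typical cleanup pass might look like in Python with pandas. It assumes a hypothetical transaction extract with illustrative column names (invoice_date, net_price, quantity, currency); the specific fields and thresholds are assumptions, not a prescription:

```python
import pandas as pd

# Hypothetical transaction extract; column names are illustrative.
df = pd.read_csv("transactions.csv")

# Fix formatting: normalize text fields and parse dates consistently.
df["currency"] = df["currency"].str.strip().str.upper()
df["invoice_date"] = pd.to_datetime(df["invoice_date"], errors="coerce")

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Handle null values: drop rows missing the fields the analysis can't live without.
df = df.dropna(subset=["invoice_date", "net_price", "quantity"])

# Drop irrelevancies and obvious corruptions (e.g., zero or negative quantities).
df = df[df["quantity"] > 0]

# Flag extreme price outliers for review rather than silently deleting them.
lo, hi = df["net_price"].quantile([0.01, 0.99])
df["price_outlier"] = ~df["net_price"].between(lo, hi)
```

Nothing exotic there, and that’s the point: the leaders aren’t winning on cleanup technique.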
What really makes the difference between the leaders and laggards is their strategic approach to dealing with the whole issue of “bad data.” They think very differently about using data; they focus on different things; they have different expectations; and they put different structures and processes in place.
In the “Working With ‘Bad’ Pricing Data” webinar, we explore and explain this strategic approach to data handling in depth, including a number of proven tips and helpful suggestions gleaned along the way. There’s much more to it than we can cover here, but some of the primary components of this more strategic approach include:
- Embracing a more pragmatic perspective on data quality versus utility. While your data will never be as good as you want it to be, it can be as good as you need it to be to inform decisions and drive improvements. Precision is great, but relative accuracy and directional accuracy are almost as valuable and far more achievable. Remember that when the status quo is guessing, “perfect” isn’t necessary to drive meaningful improvement.
- Focusing on the data necessary to execute your most valuable initiatives. It’s a mistake to focus on data “in general” or on data that has relatively little value. Instead, pick a key initiative with significant and obvious value; it’s far more compelling for all concerned to be working toward something important. Everyone can then focus their attention on the subset of data that’s needed to make that valuable initiative successful.
- Being proactive about setting others’ expectations around errors and iteration. Addressing objections and concerns upfront can mitigate them. So acknowledge to others that the data won’t be perfect and that there will be errors and omissions. Explain how the data doesn’t actually need to be perfect to have great value and utility. And make it clear to everyone that it’s an iterative process, and that reported errors will be used to improve the data over time.
- Leveraging segmentation to boost accuracy, relevance, and credibility. Very often, the issue is not “bad data” but “bad comparisons.” Comparing apples to oranges can’t help but produce errors and false signals that hurt the utility and credibility of your data deliverables. By leveraging data segmentation, however, you can ensure that you’re always using an accurate comparative basis, as sketched in the example after this list.
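To illustrate that last point, here’s a hedged sketch, again in Python with pandas and using hypothetical segment columns (product_family, customer_size_band, region), of how per-segment benchmarks keep price comparisons apples-to-apples instead of relying on a single global figure:

```python
import pandas as pd

# Hypothetical cleaned transaction data; segment columns are illustrative.
df = pd.read_csv("transactions_clean.csv")

# A single, global benchmark mixes apples and oranges:
global_median = df["net_price"].median()

# Segment-level benchmarks compare each transaction against its true peers,
# e.g., the same product family sold to similar-sized customers in the same region.
segments = ["product_family", "customer_size_band", "region"]
df["segment_median_price"] = df.groupby(segments)["net_price"].transform("median")

# Variances now reflect genuine pricing differences rather than bad comparisons.
df["price_vs_peers"] = df["net_price"] / df["segment_median_price"] - 1
```

The right segments will vary from one business to the next, but the underlying idea is the same: let the comparative basis carry the burden of accuracy, so the underlying data doesn’t have to be perfect.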
Of course, having reasonably clean data is necessary and always a good thing. But again, it’s not what sets the data leaders apart from the data laggards. What makes the critical difference is a strategic approach to data handling that goes well beyond any Sisyphean quest for “clean” pricing data.