When working to become a more data-driven pricing function, data quality is an obvious concern and potential stumbling block. After all, the saying “garbage in, garbage out” is more than just a clever turn of phrase and shopworn cliché; it’s a pretty accurate reflection of real-world dynamics.
But in most commercial operations, there are so many different players and groups, so many disparate systems and toolsets, and so much volatility and dynamism, that there’s just no way we’re ever going to get to a place where we can rest assured that we’re always working with a perfectly accurate and complete dataset.
So we’ve got to figure out how to do the best we can with the data we’ve got…no matter how bad, dirty, or incomplete that data may be.
Fortunately, we can do a lot more with bad data than most people assume…
In the Working With “Bad” Pricing Data webinar, we explore seven key components of a strategic approach leading teams are using to extract significant value from their “far-less-than-perfect” datasets.
From adopting the right mindset and narrowing your data focus to leveraging segmentation models and dedicated repositories, what we cover in this session goes well beyond the standard data hygiene routines and practices that most everyone employs. And along the way, we also highlight the minefields and traps that can really set you back if you aren’t watching out for them.
One of the most common traps we discuss in the webinar session has to do with letting others adopt or establish the wrong comparative perspective on your data-driven deliverables.
Here’s how it will typically play out…
Your team works really hard to scrub and rationalize the data to produce some data-driven recommendations, which are then delivered to the sales team.
Working with the deliverables, the sales team then discovers that there are errors and omissions in the recommendations.
Pointing to these errors and omissions, the sales team then claims that your recommendations “aren’t ready for prime time” and can’t be used.
Your team then goes back to the drawing board, trying to achieve a much higher level of data accuracy and precision.
And that’s the trap.
By allowing your data deliverables to be compared to a mythical and Utopian vision of data perfection…where there are no errors or omissions…you can find yourself trapped in a never-ending quest for a level of data quality and accuracy that is simply unattainable in most commercial enterprises!
You see, the proper comparison is not to something mythical or aspirational; the proper comparison is to the current reality on the ground — i.e. the status quo. And in most cases, the status quo amounts to nothing more than guessing about what to do, or simply doing whatever was done the last time around.
Compared to Utopian perfection, your data-driven deliverables may indeed appear to be “not ready for prime time.” But compared to the status quo? Compared to guessing or just relying on gut feel? Compared to just doing a rinse-and-repeat of the last transaction? Even data-driven deliverables with a relatively high degree of errors and omissions are likely to be a huge improvement over that!
Ultimately, the point we made in the webinar session was that you have to be proactive about setting proper expectations and establishing the right comparative perspectives. After all, left to their own devices, others will often expect that anything having to do with “data” will necessarily be extremely accurate and precise—even when extreme accuracy and precision is not required, or even possible for that matter.
Of course, this advice applies to you and your team as well.
To even begin to get past the “bad data” roadblocks, your team must recognize that while your data will never be as good as you want it to be, it can definitely be as good as you need it to be to inform decisions and drive improvements over the status quo. Beyond that, your team must also recognize that “relative accuracy” and “directional accuracy” are much more achievable than absolute precision, while still being extremely useful and valuable.
Like the oxygen masks on an airplane, your team needs to embrace these perspectives before they can effectively reframe the “bad data” debate for others.