Data-Driven Management Depends on Identifying The Right Data

July 18, 2017

Following is an excerpt from our ebook called "Make Your Work Matter: 7 Thought Leaders on Why Work Isn't Working For You and How You Can Change It." You can download the free ebook here.


Knowledge workers today are overrun with data. The challenge for managers isn’t so much to make sure our decisions are driven by data—that’s a given.

The numbers we pass around and pay attention to fundamentally inform our view of how work is going, how much bandwidth we have, what’s most important to tackle next, and more. In this intangible, digital world we inhabit, data is the lens through which we see our work.


See our post "Too Many Metrics? 5 KPIs Marketers Don't Need" to find out which metrics you may not need to be tracking.


The real challenge is in ensuring our decisions are based on the right data, applied in the right way—a far more subjective proposition.

Given that there’s no universal handbook that will work for every industry or every company, how does a manager know which metrics to elevate and which to ignore?

I can’t tell you that. But I can give you four guideposts that will help you zero in on the measurements that will matter most in your particular circumstances.

1. Think of Yourself as a Translation Layer

Managers usually have to rely on two completely different data sets—one for managing their team effectively and another for reporting up to their bosses. And the two don’t always correlate directly, especially for knowledge work as opposed to engineering projects or manufacturing.

For me, at the director level in the software industry, those worker-level metrics can be difficult to tie to a number the boss cares about. Why? Because the identification of what we want to do up front is all about technical work, but the evaluation afterward is all about revenue.

It’s like speaking two different languages.

To help navigate the two halves of my work world, it helps to think of myself as a translation layer.

The effort I spend translating my team’s work into reportable metrics helps reveal the numbers I should be paying attention to—and it ensures that the projects coming into my team are aligned with the strategic goals my bosses want.

Depending on your company and industry, the complexity of this conversion task varies widely. In manufacturing, it’s relatively straightforward; the number of widgets created per hour directly corresponds to revenue.

In the realm of knowledge work, you might feel like you’re translating Spanish to Portuguese—not easy, but not overwhelming. Or you could be in the unenviable position of translating English to Chinese.

If you’re in the Spanish-to-Portuguese zone, your task may be as simple as putting the right boundaries on projects to make them measurable—perhaps for a SKU or a shrink-wrapped item.

On the other end of the translation spectrum, you often need to tie multiple pieces of work together. For example, right now we’re working on three major projects across four different teams to achieve one measurable revenue goal.

It’s okay to put in several layers of translation (think Rosetta Stone—you know, the original trilingual Rosetta Stone discovered in 1799) to tie the work to the revenue. It’s also okay to continually try new things.

Be agile in your measurement, not just your work processes. Try new metrics with input from your team and from management, keeping what helps and quickly discarding what doesn’t.

Once you’re aware of what your baseline difficulty level is in translating metrics for management, pay attention to whether it’s getting harder or easier over time.

The more challenging it is to convert the numbers, the more likely it is that your team's work is out of alignment with key corporate objectives. As the task gets less challenging, that may be a clue that you're getting back on the right track.

2. Learn to Identify Solid vs. Spurious Data

Numbers can be seductive, and the bigger they are, the greater their allure. However, not all data is created equal.

In a competitive software company with thousands of active users, there’s never a shortage of user requests, potential upgrades, and competitor innovations to consider—with varying degrees of potential financial impact.

The same is true for almost any industry, from high-end skincare to toy manufacturing. With so much feedback coming in from all directions, how do we wade through it all to decide which data should drive our decisions around what to tackle next?

It’s not uncommon to hear numbers tossed around, warning that if we do this or don’t do that, we could lose, oh, tens of millions of dollars. But the guesses, conjectures, and variables involved in these kinds of future estimations make many of them specious at best.

Rather than focusing your efforts on hypothetical “we could lose $20 million” scenarios, try addressing the more concrete “we did lose $1.5 million” problems first.

Here’s an example from my world. We recently identified a problem that resulted in a three percent loss of revenue in the previous year.

Of the 10 biggest companies that failed to renew their contracts, half of them left simply because the client’s internal champion, sponsor, or executive—the individual who lobbied for our product, knew it best, and brought colleagues on board—ended up leaving the company.

When she went, so did our contract. 

This is what actionable intelligence looks like. It’s not something that could happen. It did happen last year, and it’s likely to repeat.
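To make the distinction concrete, here's a minimal sketch in Python. The account names, contract values, and churn reasons are invented for illustration, not the author's actual figures; the point is simply to separate realized, attributable losses from hypothetical ones:

```python
# Hypothetical churned contracts: (account, annual value, churn reason).
# All figures are invented; they only illustrate grouping realized
# losses by a verifiable cause you can actually act on.
churned = [
    ("acme", 400_000, "champion_left"),
    ("globex", 350_000, "champion_left"),
    ("initech", 300_000, "price"),
    ("umbrella", 250_000, "champion_left"),
    ("hooli", 200_000, "switched_vendor"),
]

# Sum the realized loss by cause -- this is the actionable data,
# as opposed to a projected "we could lose $20 million" estimate.
loss_by_reason = {}
for account, value, reason in churned:
    loss_by_reason[reason] = loss_by_reason.get(reason, 0) + value

for reason, loss in sorted(loss_by_reason.items(), key=lambda kv: -kv[1]):
    print(f"{reason}: ${loss:,}")
```

A ranking like this makes it obvious which verified cause (here, the internal champion leaving) accounts for the largest share of last year's loss, and therefore which fix to prioritize.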

Once you have real data that is incontrovertible, the next decision revolves around action: should you act on the data?

In this case, the answer was yes. We determined a set of features to develop, test, and roll out in order to prevent that issue from recurring year after year—keeping that relatively minor $1.5 million yearly loss from turning into $20 million over time.

When deciding which data trail to follow, don’t allow yourself to be enticed by the biggest, most alarming numbers. Start instead with the data that’s most reliable, verifiable, and actionable.

3. Understand That You Get What You Inspect

Metrics aren’t just a way of measuring behavior and outcomes. They also strongly influence behavior.

If you’re not getting the results you want, there may be a problem in what you’re measuring, how you’re measuring it, or how you’re communicating around those metrics.

When you’re developing software, individual projects are often framed in terms of “stories.” If you impart the message to your team, even subconsciously, that your view of their success depends on the number of stories they complete, guess what will happen?

They’ll take what used to be one story and break it down into 27 stories.

No matter the industry, if a team is evaluated based on the number of tasks completed, it’s human nature to break projects down into their smallest component tasks to make the metrics play better—even though no more work is being done.

Instead of rewarding this kind of busy work, make an effort to consciously and verbally define what success looks like in terms that keep the higher-level goal in mind.

For example, if we’re facing a large initiative, I start by breaking it into smaller pieces and making commitments on those pieces:

  • What can we get done?
  • When can we get it done?
  • What response do we want, and from how many people?

Then we break that down into our two-week sprints, detailing what needs to be achieved in that timeframe. This way, we have an actionable set of work that we can declare as a victory, and it can be tied all the way up to the higher goal, whether that was revenue or awareness or something else.

My team isn’t measured on how many stories they crank out, but on whether they meet these specific commitments.
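A rough sketch of that scoring, again in Python with invented commitment names, shows why slicing one story into 27 doesn't move the needle — the score counts commitments met, not tasks completed:

```python
# Hypothetical sprint record: each commitment and whether it was met.
# The commitment names are illustrative, not from the article.
sprint = {
    "ship beta signup flow": True,
    "publish launch blog post": True,
    "reach 500 trial activations": False,
}

stories_completed = 27  # deliberately ignored by the score below

met = sum(sprint.values())
commitment_rate = met / len(sprint)
print(f"{met}/{len(sprint)} commitments met ({commitment_rate:.0%})")
```

Because the denominator is the number of commitments, not the number of stories, the team gains nothing by subdividing work — only by delivering what was promised.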

Whether you’re producing marketing materials or building bicycles, ask yourself: "What is the outcome I want? Greater speed, quality, efficiency, number of tasks completed per hour?"

Then examine whether you’re really measuring your team toward that result—or accidentally encouraging a different outcome. 

4. Regularly Evolve What You’re Monitoring

Being a data-driven manager doesn’t mean that you put extremely strict measurements on every single thing all the time. Not only is this impossible, but the very attempt will drive you and your team crazy.

If you remember to tie every metric you’re monitoring to a specific outcome you want, it will naturally keep you from tracking too many unnecessary numbers, because there are only so many outcomes you can reasonably expect from your team.

There are only so many things a human being can care about at a given time.

Looking at metrics from an outcome-based perspective will also force you to regularly evolve what you’re monitoring, which is a great thing to do, as long as you inform the team of every change and make sure you’re not changing so often that your team feels disoriented.

Trial and error is an integral part of the process. You have to just try something, and then examine whether you got the outcome you wanted. If you didn’t, try to zero in on the component that broke down, and change the way you’re measuring it.

Especially when something goes wrong, stop and ask yourself a few questions:

  • How were we measuring that thing that went wrong?
  • Was the problem in the research, the mechanics, or the ideation phase?
  • Did we get the concept in front of the right audience at the right time?
  • If we skipped any steps, how or why did they get skipped?

Then it’s as easy as telling your team: “It’s okay, we weren’t measuring it right before, but we’re going to change that now so we get the outcome we want.”

If you’re willing to take responsibility for these kinds of issues as a manager, you relieve your team of the feeling that they personally failed, when it was really nothing more than a measurement problem.

What is The Right Data?

As you’re managing the metrics your team cares about along with the numbers you report up the chain, it’s natural to experience information overload.

My advice is to focus first on incontrovertible data that can be effectively translated into key strategic objectives, and stop measuring anything that doesn’t directly relate to the outcomes you want.

If you’re not getting the expected results, pause and reevaluate what you’re measuring, as well as why and how you’re measuring it. When you’re vigilant, willing to try new things, and focused on outcomes, the right data has a way of revealing itself.


Learn more about why getting the right data is so important in our post "3 Ways The Wrong Data Can Lead to Failure."
