One Mission That Matters
Summary:
Metrics matter—but only when they serve the mission. In this provocative session from Prodacity 2025, Alistair Croll, coauthor of Lean Analytics, challenges organizations to rethink how they measure success.
Drawing from real-world failures of misused metrics—from military body counts to corporate KPIs—Croll explains why bureaucracy, bad incentives, and Goodhart’s Law often make metrics worse, not better. He introduces the concept of One Mission That Matters, urging leaders to focus on real impact over vanity metrics, process efficiency, or easily gamed numbers.
🔹 Key Topics Covered:
- Why metrics often fail—and how they can backfire catastrophically
- The dangers of proxy metrics (like body counts, bounty incentives, and compliance KPIs)
- Goodhart’s Law: Why "when a measure becomes a target, it ceases to be a good measure"
- How bureaucracy distorts mission-driven work (and how to fix it)
- One Mission That Matters: The new approach to designing effective, resilient metrics
🕒 Key Highlights & Timestamps:
[00:03] - Introduction: The evolution of Lean Analytics and why metrics matter
[01:33] - What makes a good metric? Explainable, comparable, a ratio, and behavior-changing
[04:03] - One Metric That Matters (OMTM)—and why it’s critical for focus
[06:03] - How bad metrics create unintended consequences (Cobra Effect, ghost soldiers, and more)
[08:54] - Gaming the system: Why people exploit weak KPIs
[12:19] - Loopholes in the system: How innovators hack regulations to their advantage
[15:10] - The bureaucratic trap: How government incentives create inefficiency
[19:45] - Why traditional performance metrics fail in public sector organizations
[21:12] - The Nigerian Prince Scam Lesson: Why good metrics must be inconvenient
[22:09] - The shift from One Metric That Matters to One Mission That Matters
[24:35] - 4 new rules for better metrics: Inconvenient, resilient, outward-facing, and mission-focused
[26:09] - Closing thoughts: How to fix measurement for real impact
🔗 Stay Connected:
Learn more about Prodacity: https://www.rise8.us/prodacity
Follow us on X: https://x.com/Rise8_Inc
Connect with us on LinkedIn: https://www.linkedin.com/company/rise8/
👍 Like, Subscribe, and Share:
If you’re ready to ditch bad metrics and focus on real mission-driven impact, give this video a thumbs up, subscribe to our channel, and share it with your network. Let’s measure what really matters.
#Prodacity2025 #AlistairCroll #LeanAnalytics #Metrics #GoodhartsLaw #DataDriven #GovernmentInnovation #OneMissionThatMatters
Transcript:
Alistair Croll (00:03):
Thank you for joining me today. I want to talk about a book I wrote a while ago, and a little update to that book. Now, I know the pandemic feels like it was just last March, but this was before the pandemic, so I recognize it's an increasingly long time ago. Twelve years ago I wrote a book called Lean Analytics, and it was a testament to the startup world.
(00:47):
There we go. So I wrote this book, Lean Analytics, and it went much further than we expected it to. We wrote it as part of the whole Lean series; you've probably heard a few of the Lean speakers here today. Eric Ries's Lean Startup was the book that launched a thousand ships, and in our book we talked about how to measure metrics to make your organization grow faster. We didn't realize it at the time, but there was a hunger for these kinds of metrics. So we would quite literally go and get people drunk so they would tell us what their churn was, and then we would use that to get the next person to tell us. Because back in 2010, people weren't really openly discussing their metrics; it was something you kept close. Now people are much more comfortable sharing, but back then nobody knew what good churn looked like.
(01:33):
And as it turns out, this book did pretty well. Its lessons are still taught in classes, and we still get mail about it; you know a book is out of date when it becomes a textbook. So this book has been out for a long time, but some of its principles are still valid, and one of the timeless things from it is the question of what makes a good metric. If you've read the book, great; if not, here it is in a nutshell. First of all, a good metric must be explainable. If you're spending all your time explaining what the metric is, you never get to talk about what the metric means. The rule of thumb: a golf score is good. You're supposed to get in the hole in four; six is bad, three is great. Or take a bowling score.
(02:23):
Bowling is incomprehensible to me; I don't understand strikes and spares and all that. It's a metric that's far too complicated. So somewhere in there is explainability. The next question is: is the metric comparable? If I tell you I'm going a hundred miles an hour without context, that's not very useful. If I tell you I'm going a hundred miles an hour down the highway and everyone else is going ten miles an hour, I'm in a high-speed chase. If I tell you I'm going one mile an hour and I was going a hundred miles an hour five seconds ago, that chase is over and I've just pancaked, right? So comparability is really important: to competitors, to previous timeframes. The third thing is that metrics work really well when they're a ratio or a rate, like miles per hour or return on investment, because generally you want a metric because you're trying to optimize something.
(03:13):
And so your metric must reflect the tension between the two trade-offs you're constantly making. The last thing that makes a good metric is that it must be behavior-changing. If you can't say how your behavior will change based on the metric, it's not a good metric. It might be data you need to collect, like sales per quarter for tax purposes, but a real metric tells you: if my sales go up by more than 10% today, or if my data comes in from the battlefield in this way, I will react in a particular way. Those are the four core ideas of a good metric. The other thing from this book that has gone really far and wide is the idea of One Metric That Matters. This was somewhat controversial at the time, because we said: look, you have to pick a metric and focus on it.
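Those four criteria can be jotted down as a quick self-audit checklist. Here's a minimal, hypothetical sketch of that idea; none of these names come from the book, they're purely illustrative:

```python
# Hypothetical checklist for Croll's four criteria of a good metric.
# The class and field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class MetricCheck:
    explainable: bool        # can you state it in one sentence?
    comparable: bool         # vs. last period, vs. competitors?
    is_ratio_or_rate: bool   # e.g. churn %, miles per hour, ROI
    behavior_changing: bool  # do you know what you'll do if it moves?

    def is_good(self) -> bool:
        # A candidate metric must pass all four checks.
        return all((self.explainable, self.comparable,
                    self.is_ratio_or_rate, self.behavior_changing))

# "Monthly churn rate" plausibly passes all four checks...
churn = MetricCheck(True, True, True, True)
# ...while "total registered users" (a vanity count) is neither a
# rate nor something you'd change behavior over.
vanity = MetricCheck(True, True, False, False)
```

The point of writing it down this way is that the checks are conjunctive: a metric that fails even one of the four is a candidate for replacement, not for a dashboard.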
(04:03):
If you're solving an equation, you can only solve it for one variable, and it's very, very easy to get distracted. Once you've solved that one metric, go solve the next one, and the next one. Metrics are kind of like squeeze toys: once you optimize one, the next one bulges out and becomes obvious. But you have to have focus. Startups aren't the only organizations that need to understand this kind of data and have this kind of focus. We spent a lot of time talking to some of the world's biggest companies, and they have what we call Many Metrics That Might Matter syndrome: they have all these metrics they think will matter, but they're not able to pick one. So, a little audience interaction after a sleepy lunch: anybody else here deal with the many metrics that might matter? Yeah, it's pretty prevalent, and it's very hard to convince your boss that what you're doing is actually analytics.
(04:51):
Metrics theater in high-inertia organizations often backfires, and the reason it backfires is Goodhart's Law. Things are hard to measure, so we measure proxies for those things, and Goodhart's Law says that when a measure becomes a target, it ceases to be a good measure. The classic example is dead cobras. The British in colonial India wanted to get rid of deadly cobras, so they put a bounty on cobra skins, only to have the local population go, "I'm going to make me some money by breeding cobras." And when the fraud was uncovered and the British cracked down, those people released all their cobras, and the population of cobras in India went up. They weren't measuring the reduction in the population of cobras; they were measuring the number of dead cobras, because that was an easy proxy metric. This has happened elsewhere. When the French wanted to get rid of rats in Hanoi in 1902, they paid bounty hunters one cent per rat tail.
(06:03):
So everybody cut the tails off the rats and let them breed; they even smuggled in rats from the countryside. And it has happened in the US too. The US Army had its own run-in with pests at Fort Benning in 2007-2008: they offered to pay $40 a tail for feral hogs. They didn't learn from French Indochina. And guess what? The hunters put out bait to attract the pigs, which meant the pigs were better fed. They tended to hunt the big pigs, leaving the female hogs and the smaller hogs to breed, and the population went up. In 2002, the British wanted to bring down the opium crop in Afghanistan, so they said, we're going to pay $700 an acre for destroyed poppy fields, which led Afghan farmers to grow poppy fields everywhere they could so they could collect the $700, because that was way easier than dealing with shipping opium.
(06:55):
And in some cases they would harvest the opium and then call the British and say, please come and burn my field, so they just got a $700 bonus on top of their opium. Tragically, this happened elsewhere too: one of the conditions for US withdrawal from Afghanistan was the number of Afghan soldiers, and officials were compensated for that number. Afghan officials created what they called ghost soldiers and collected their paychecks, which meant that when the US withdrew in 2021, one of the contributors to the chaos was that there weren't nearly as many Afghan soldiers as we'd been told, because people were trying to collect this bounty. And you may have heard of this one: because the US couldn't use territory liberated as a measure of success in Vietnam, McNamara came up with body count. If body count is your measure of success, there's a tendency to count every body as an enemy soldier. So they reported 10,899 enemies killed, but only 748 recovered weapons.
(07:56):
Humans, as you can probably tell from these examples, will game any system once they understand how it works. Hacktoberfest is a month-long October celebration to promote open-source software, and when the organizers announced that the first 75,000 participants to make four pull requests would get a Hacktoberfest t-shirt, it caused a huge number of frivolous pull requests, because people wanted t-shirts. Some insurers offer discounts for people who walk a certain number of steps each day, so you can actually go get a device that will fake your steps for you. When the US Postal Service got a contract to deliver for Amazon, the contract relied on the number of packages delivered on time. So at 7:15, Postal Service carriers in Atlanta were told to pull their trucks over, scan all the packages on board as delivered, and then finish their deliveries.
(08:54):
Back in the 1980s, airlines wanted good on-time ratings, so they just listed their flights as a little longer, and then, guess what, they were all on time. These kinds of loopholes thrive in the gap between the metric and the intent, and plenty of them come from exploiting Goodhart's Law. I've spent the last eight years looking at how challengers and underdogs subvert systems, and a lot of the time it's through these loopholes. I've written a book about it called Just Evil Enough. I was hoping to have some copies here for you, but they're not here because of customs. It always comes back to someone looking at the system, seeing it for what it is, and then getting it to behave in a way that was not intended. In 2005, San Francisco had these parking spots, and a design firm worked with a coffee shop and said, we're going to set up a little park in the parking spot, and every four hours we'll pick it up, take a photo showing the parking space was empty, and put it back.
(09:50):
And they created these things called parklets, which were then built across the country as part of a civic-tech movement to reclaim open space, and city councils had to come up with rules to deal with them. Meanwhile, there's a guy in England, Adam Tranter, who said, I would like a parklet; I have a parking spot in front of my house. So he put a park out there, and the local town said, you can't do that, because our laws say that parking spots are only for vehicles. So he said, fine, I'll build a park on a vehicle. This is Adam Tranter's parklet. But he couldn't choose just any vehicle, because a regular vehicle has to be taken in to be smog-checked once a year. So he went and got this vehicle called a Piaggio Ape. It's a little van built on a Vespa scooter, and it's a historically significant vehicle and therefore exempt from smog checks. Here's another great example of loopholes: in 2021, the US Congress passed incredibly stringent requirements for sesame allergen labeling. It turns out sesame can be a dangerous allergen for a very small part of the population. Many manufacturers found it simpler to add sesame to their products than to comply with the cross-contamination guidelines in their preparation facilities. It was so expensive to comply that they just put sesame in.
(11:18):
Developers of ultra-expensive towers want the tallest building possible, because people will pay good money for height, and there are rules that limit how tall a building can be. So in New York City, they want to build buildings higher than the highest permitted structure. The good news is that's pretty easy to do, because the city's height regulations don't count mechanical rooms. And so more than a fifth of Central Park Tower, which at 1,550 feet is one of the tallest residential skyscrapers in the world, is devoted just to machinery, and so is a quarter of the 88 floors at 432 Park. These are all great examples of hacking loopholes. But perhaps nothing is a better example than Elizabeth Swaney. Swaney was a woman who really wanted to compete in the Olympics. She was a skier, and she decided to enter an event that was brand new in 2018: the women's freestyle halfpipe.
(12:19):
You've probably seen it, right? It's this U-shaped thing; you ski up the walls and do aerials and all kinds of tricks. It's really cool, but it was brand new in 2018. Now, the Olympics has three rules for competing. Number one, you have to come in the top 30 in a World Cup event. Number two, you have to be one of the four people your country sends; there are only four contenders per country. Number three, you have to be one of the top 24 in the world who go to the competition. So Elizabeth says: number one, I'm going to go compete in World Cup competitions with fewer than 30 participants, and as long as I don't fall, I've checked the first criterion. Number two, the US is dominating this sport, but I'm Hungarian as well.
(13:04):
I'll enter through Hungary, which had no contenders. And number three, there are only about 35 people in the world who ski this sport at a competitive level and have actually competed in one of these events, so she easily made the top 24. So I give you the epic 2018 ski run of Elizabeth Swaney. All she wanted to do was ski in the Olympics. Look at that. My heart goes out to Elizabeth. I'll give you one more great example; I love this one. The Australian Open was on recently, and the organizers wanted to stream it on YouTube but avoid licensing issues with logos and players, because there are broadcasts where you're not allowed to use the players' likenesses. So they just used software. It looks like a Nintendo sports game: these are actual athletes in the background, and the commentators are doing the real commentary, but the players are replaced with animated stick figures. And when the BBC asked them about it, they said, no, no, we're just doing it to attract a younger audience. Because if they had said, we're doing it to bypass copyright laws, they might have gotten in trouble.
(14:07):
Even when things aren't corrupt, and even when nobody is looking for a hack, Goodhart's Law means you get what you measure. The classic example is a nail factory. If you drive that factory on the number of nails produced and let your team optimize for that metric, then because the metric is now a target, they will make a very, very large number of very small pins. On the other hand, if you say, I'm going to measure by total mass of nails made, they'll take all the iron, make one big nail, and go, here you go, we're done. Neither of those is what you wanted. The organization will optimize for the metric, not for the outcome. It's why, if you rate doctors on successful operations, they will only operate on healthy patients. It's why, if you reward teachers for publications in journals, they will cover topics that are salacious and new rather than going back and checking past science. And if you watch YouTube, where the number of subscribers is what gets you monetization, everybody says "like and subscribe" at the end.
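The nail-factory thought experiment is easy to put numbers on. Here is a toy sketch, with all figures invented, showing how the "count" proxy explodes while the mission, usable nails, collapses:

```python
# Toy illustration of the nail factory under Goodhart's Law.
# All quantities are invented for this sketch.

def nails_from_iron(iron_kg: float, grams_per_nail: float) -> float:
    """Nails produced from a fixed iron budget at a given nail size."""
    return iron_kg * 1000 / grams_per_nail

IRON_BUDGET_KG = 100.0
USEFUL_NAIL_G = 10.0   # assumed weight of a nail someone can actually use
TINY_PIN_G = 0.1       # what the team makes once "count" becomes the target

gamed = nails_from_iron(IRON_BUDGET_KG, TINY_PIN_G)
honest = nails_from_iron(IRON_BUDGET_KG, USEFUL_NAIL_G)
# The gamed metric looks about 100x better, while the mission
# (nails anyone can use) is strictly worse off.
```

The "total mass" proxy fails the opposite way: one 100 kg nail scores the same as ten thousand good ones. Either way, the metric, not the outcome, is what gets optimized.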
(15:10):
This matters a lot for government. If you haven't guessed by now, my point in saying all this is that Goodhart's Law is the undoing of the One Metric That Matters, because when the metric becomes a target, it doesn't just become useless; it tends to backfire. But bureaucracy is a system, and to understand how these metrics go awry in government, we need to look at how government inevitably evolves. Imagine you're a brand-new government and you want to create a service or deliver some kind of outcome. You look at the tools you have, you have a pretty clear start point and a pretty clear goal and some steps along the way, and so you draw a straight line. Someone wants a permit or a loan or a visa, whatever the thing is you're trying to deliver. But this is a fair, free, democratic society.
(15:58):
And the straight path might not be fair. Maybe your initial approach of having people come in and talk to someone who says yes or no doesn't treat people fairly, or isn't accessible. So you create a paper form so that everybody is asked the same seven questions, and it feels fair, and now we have some consistency. And then, because you're a group of states that all have slightly different rules, maybe there's an additional step to ensure that citizens from every state are treated equally, according to your constitution, across every jurisdiction. And then it turns out those forms are just being stored in a closet somewhere, and we don't like that, so we'd better pass some privacy legislation and make sure we have people's consent to use the data. Every one of these is a very valid step; it absolutely makes sense, right? Then some people start cheating the system.
(16:42):
You need to find a way to verify who they are, so you mail them a code they can type in to prove you actually have their mailing address, because two-factor authentication works using the postal service. Then it turns out the process is really unfair to certain people, so we put in something to make sure everybody's included and people with disabilities can still access the information. Then people start getting kickbacks from their friends in government, so we'd better put in an RFP process to make sure vendors are properly vetted, which excludes the smaller vendors, which means a longer implementation cycle: more money, more stakeholders, more requirements, more complexity. Now the union's pretty upset because people are overworked, so they say, hey, we'd like some data on the hours we're working and our pay. Then the citizens get furious because this is getting slow and awful, so they want reporting for accountability, with KPIs. And now everyone starts managing and measuring to those KPIs, because if they don't they get yelled at or fired, and they're all gaming the system for their own part of it. So we built this diagram, and every point along the way was very valid.
(17:48):
The simple goal we had is now a complex process, and workers stick to completing their little part of it. They're focused on: this is my job, I live here, and my job now is to preserve and optimize this little piece. This is a problem, because the public sector rewards doing the thing right, not doing the right thing. Your job is to make that little line go from here to here according to some metric; you get in trouble if you don't, and you're rewarded if you do, and nobody has a vision of the overarching mission. All you need to do is take a process, break it into steps, and make the metric for each step into a target, and Goodhart's Law will do the rest. We have a metric for each step in the process, and the bureaucracy has gamed it, using metrics as justification.
(18:47):
Not because public servants are lazy, but because we punish them for deviating from the process, historically for very good reasons. So "define my process metric" quickly becomes "optimize that metric at the expense of the overall outcome." This is a problem, because when new tools appear, like biometrics or cloud computing, they could solve the actual problem, and we could easily use them. A mobile app could do authentication; you don't need the post anymore. Cloud computing could scale things. The UI could be customized to tell each user what features or benefits they could get. These are all things that could happen, but unfortunately there's a wall of structural incentives, politics, inertia, and a bias for predictable, repeatable outcomes that gets in the way. We can't have nice things, because nice things break the existing process, and the structure of the public service preserves the process even when it hurts the outcome.
(19:45):
So how do we fix this? Let me tell you a little story about why spam is really badly written. You probably haven't seen it lately, because these days it goes into your spam folder, and looking in your spam folder is like opening up the basement in a cabin in the woods: it just feels creepy. But there's at least one message in there from Nigerian royalty, right? I always look at those folders and think to myself, if spammers actually learned how to write properly, we might consider their offers. But it turns out that badly written spam is entirely intentional. Badly written spam, with sketchy headlines and mentions of Nigerian royalty or whatever the scam is, has terrible open rates. Do you know who opens it? People who haven't heard of the Nigerian royalty scam. And who is the target customer for a scammer? Someone who hasn't heard of the Nigerian royalty scam. Cormac Herley from Microsoft studied this about a decade ago, and as he put it: by sending an email that repels all but the most gullible, the scammer gets the most promising marks to self-select, and tilts the true-to-false-positive ratio in their favor.
(21:12):
The spammer is not focused on a metric called "maximize open rates," because while that metric seems good for that part of the process, it's actually bad for the mission of cheating idiots out of their money. Spammers are accountable to their mission, defrauding the uninformed, not to their process, maximizing open rates. And so, 13 years later, here at Prodacity, I would like to revise the One Metric That Matters and say we need One Mission That Matters. Metrics must be a consequence of the mission, not the goal of it. Your goal is not to move the metrics; the metrics must follow from the mission. With that in mind, I'm going to give you four more things that make a good metric. I started out by talking about how a metric should be explainable, comparable, a ratio or a rate, and behavior-changing.
(22:09):
First, I think the metric should be inconvenient. That sounds weird to say, but if a metric is easy to get, it's probably not reflective of the complex outcome you're seeking. So ask yourself: is this the easiest metric I could get? Because then it's probably the wrong one. It must be based on what's right to measure, not what's easy to measure. Is it the actual outcome we're hoping to achieve, and not a proxy for it? Second, it must be resilient. Resilient means someone is red-teaming the metric: have a team whose job it is to think like the people in colonial India who bred cobras, and see how the metric can be subverted. Third, it must be outward-facing: defined by those it serves, not those it measures. This one change alone, defining metrics by the people they serve, would transform everything about the public service. And finally, it must be mission-focused. It must capture the mission, not the task or process currently being used to accomplish that mission. So that's my new set of criteria for One Mission That Matters. Hopefully it gives you something to think about. If you want to get in touch with me, I'm acroll@gmail.com. Thank you all very much.