How to Measure Anything
Summary:
What if everything you thought was immeasurable could actually be measured? In this insightful talk from Prodacity 2025, Doug Hubbard, author of How to Measure Anything, breaks down the misconceptions around uncertainty, risk, and measurement in decision-making.
Doug reveals why traditional scoring methods, intuition-based decisions, and risk matrices often make things worse—and how Monte Carlo simulations, probabilistic modeling, and better data strategies can transform decision-making across industries, from defense to technology.
🔹 Key Topics Covered:
- Why everything can be measured (and why people think it can’t)
- The failures of traditional risk matrices & scoring methods
- How Monte Carlo simulations & probability modeling improve forecasts
- The three illusions that make people believe things are immeasurable
- The measurement inversion: Why organizations measure the wrong things
- How leaders can make better, data-driven decisions
🕒 Key Highlights & Timestamps:
[00:04] - Introduction: Why measurement matters in every industry
[00:54] - The myth of immeasurability: How every problem can be quantified
[01:48] - Why intuition often leads to worse decisions
[02:42] - Comparing human judgment vs. algorithmic forecasting
[05:39] - The real definition of measurement (it’s not about exact numbers)
[07:52] - The importance of quantifying uncertainty and risk
[09:26] - How people are statistically overconfident in their forecasts
[12:19] - Case study: Measuring trust, innovation, and cybersecurity
[14:06] - The measurement inversion: Why organizations focus on the wrong data
[17:07] - Debunking the myth of “statistically significant sample sizes”
[19:32] - Why small data samples can yield powerful insights
[22:59] - How to improve portfolio decision-making with better measurement
[24:03] - The final challenge: Monetizing the “immeasurable” (including human life)
🔗 Stay Connected:
Learn more about Prodacity: https://www.rise8.us/prodacity
Follow us on X: https://x.com/Rise8_Inc
Connect with us on LinkedIn: https://www.linkedin.com/company/rise8/
👍 Like, Subscribe, and Share: If you found this session valuable, hit the like button, subscribe to our channel, and share it with your network. Let’s embrace better measurement for smarter decisions.
Transcript:
Doug Hubbard (00:04):
Hi. Thanks Bryon. So as you mentioned, I've been doing this for over 35 years. I've been dealing with what at first seemed like really difficult quantitative or measurement problems of various sorts in a variety of industries. Those are the books so far. The fifth one's not up there yet: How to Measure Anything in Project Management. These are just a short list of some of the things we've applied this to, but they all had something in common. They all thought they were dealing with a uniquely difficult or impossible measurement or quantification problem. None of them were impossible, by the way. So when I called my first book How to Measure Anything, I meant it literally. So if you have any ideas about things that you don't think could possibly be measured, or are just infeasible to measure at all, let's talk about that. Talk to me after the Q&A here.
(00:54):
Alright, so let me tell you a few things about the research we've done that led me up to this point. How many of you use some sort of a scoring method in your organization to prioritize projects in a portfolio? Show of hands. Alright, how many of you use a risk matrix? A red, yellow, green risk matrix? How many of you rely on intuition and judgment entirely? Alright, this has all been researched. We know a lot about the relative performance of each of these items, and it's not good news. In fact, there's even been research on why we might think those things work when in fact they don't. They actually add error to your unaided intuition. Your unaided intuition is a baseline that we have to improve on. It's not bad. It is a measurable performance itself. We can say something about how good it is, but some methods actually make it worse.
(01:48):
Perhaps some of the most popular qualitative methods actually add error of their own, the research shows. So let me dive right into what does work. Relatively naive statistical models improve on a wide variety of expert judgments, and this has been true for disease prognosis, which small businesses are more likely to fail, which married couples are more likely to stay married in five years, and so on. One researcher, Paul Meehl, had looked at over 150 studies, closer to 200 by the time he died, where he was comparing expert judgment for estimates and forecasts to naive algorithms on a wide variety of topics, all the ones that I've mentioned and many more. And he concluded that he could only find six out of all of those studies where the humans did just as well or slightly better than the algorithm. Another study, this is from Philip Tetlock.
(02:42):
He did the Good Judgment Project. Is anyone familiar with that one? The Good Judgment Project. He wrote a book called Superforecasting. You might be familiar with that. That's a more popular book. He did a clinical trial, a study that went on 20 years. He collected over 82,000 individual forecasts from 284 experts over a 20-year period. And these were experts in military affairs, technology trends, economics, politics. And he concluded this: it is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones. So it sounds like the jury's in on this. We can keep studying this and we keep finding the same things. Humans aren't as good as the algorithms that humans write. Here's another thing that we find out: some subjective estimation methods measurably outperform other subjective estimation methods. So we're definitely not saying get rid of subjective estimation methods.
(03:43):
We're saying if you're going to do that, use the best performing ones. One is building Monte Carlo simulations. Who's familiar with that? Who wants to know a real short way of explaining it, even if you're familiar with it? It's how to do the math when you don't have exact numbers. Has anybody ever seen a business case, like a net present value or an ROI calculation, in a spreadsheet that has a bunch of exact numbers in it? Anybody seen that sort of thing? A big spreadsheet? There's a bunch of benefits, a bunch of costs. It might go out a few years making a forecast. Sure. How many of those numbers did they know exactly in reality? Zero. None. But they used exact numbers, right? Of course they did. But you don't have to do that. We can quantify our uncertainty about things that we don't know exactly, and it turns out that just doing that, even subjectively, makes your forecasts better.
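To make that concrete, here's a minimal Monte Carlo sketch in Python of the kind of thing Doug is describing: instead of plugging exact numbers into an ROI spreadsheet, each input gets a 90% confidence interval and the model is run thousands of times. The ranges, figures, and variable names are invented for illustration, not taken from the talk.

```python
import random

# Minimal Monte Carlo sketch (illustrative only): express each input as a
# 90% confidence interval, treated here as a normal distribution, and simulate.

def normal_from_90ci(low, high):
    """Draw from a normal distribution whose 90% CI is (low, high)."""
    mean = (low + high) / 2
    std = (high - low) / 3.29  # 90% of a normal lies within +/-1.645 std
    return random.gauss(mean, std)

trials = 10_000
losses = 0
outcomes = []
for _ in range(trials):
    benefit = normal_from_90ci(150_000, 400_000)   # uncertain annual benefit
    cost = normal_from_90ci(100_000, 250_000)      # uncertain annual cost
    net = benefit - cost
    outcomes.append(net)
    if net < 0:
        losses += 1

outcomes.sort()
print(f"Median net benefit: {outcomes[trials // 2]:,.0f}")
print(f"Probability of a loss: {losses / trials:.1%}")
```

The result is a distribution of outcomes and a probability of loss rather than a single point estimate.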
(04:33):
I cite several studies showing that quantifying uncertainty improves forecasts: one on a group of NASA projects, also a large oil and gas industry study, and even some fundamental psychology research where they've done clinical trials on this. So I will propose to you that there are only three reasons why anybody ever thought something was immeasurable. And they're all three illusions. There literally is nothing that's immeasurable. And if somebody believes something is immeasurable, it's for one or more of these three reasons. I call them concept, object, method. If you want a mnemonic, think of ".com": concept, object, method. Concept has to do with the definition of measurement itself. It might not mean what you think it means. Object of measurement is figuring out what the thing is that you're measuring, defining it. And finally, the methods of measurement. There are really quite a lot of misconceptions about how statistical inference even works. So let's talk about the first one of these. What do you think measurement means? Anybody? By the way, no one wants to volunteer after I said all that.
(05:39):
Well, it's not a point value. It hasn't meant that in the empirical sciences for about a hundred years now, since the 1920s. The real de facto definition of measurement is a quantitatively expressed reduction in uncertainty based on observation. It doesn't say elimination. It means you have less uncertainty than you had before. So you have a current state of uncertainty, which you might describe as some sort of a probability distribution of what you think is a plausible series of outcomes. You put probabilities on various quantities, you make some observations, do some trivial math. Now you have less uncertainty than you had before, and that constitutes a measurement. It constitutes a measurement in any peer-reviewed scientific journal, and it's the most practical definition of the term for decision-making. It's about making better bets. How can I make observations that inform my bets?
(06:35):
I can reduce my uncertainty with better observations. So why do we want to do that? Why do we need to even quantify our uncertainty to begin with? Well, we can quantify risk. How many of you do risk analysis? How many of you are putting probabilities on events? I would argue that if you're not actually talking in terms of probabilities like an actuary would, you're not really doing risk analysis. You're doing something. It's not a bad idea to list things that you might think are risky, but until you actually start speaking the language of probabilities, you're not measuring risk. How many people are putting a score on risk, have a risk score of some sort? That's a pseudo measurement. I'm talking about probabilities of various magnitudes of losses. Alright, so why do we need to quantify risk? Why do we need to quantify uncertainty? One reason is to quantify risk; another is to compute the value of information. This is really important for a couple of reasons I'm going to talk about. You can compute the value of information. This has been known in game theory and decision theory since World War II. We know how to do this, and when you start computing the value of information, it takes you down completely different avenues regarding what you should be measuring and how.
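As a rough illustration of what computing the value of information can look like (my sketch, not the speaker's model), here is the expected value of perfect information for a simple go/no-go decision with an uncertain payoff. All of the numbers are invented.

```python
import random

# Sketch of the expected value of perfect information (EVPI) for a go/no-go
# decision: EVPI is the expected opportunity loss of the current best choice.

def simulate_payoff():
    # Uncertain net payoff of the project: 90% CI of -2M to +5M, as a normal.
    mean, std = 1_500_000, (5_000_000 - (-2_000_000)) / 3.29
    return random.gauss(mean, std)

trials = 100_000
payoffs = [simulate_payoff() for _ in range(trials)]
expected_payoff = sum(payoffs) / trials

# Best decision with current uncertainty: approve if the expected payoff > 0.
approve = expected_payoff > 0

# Opportunity loss: what we give up whenever the chosen action turns out wrong.
if approve:
    losses = [-p for p in payoffs if p < 0]   # approved, but it lost money
else:
    losses = [p for p in payoffs if p > 0]    # rejected, but it would have paid

evpi = sum(losses) / trials
print(f"Expected payoff if approved: {expected_payoff:,.0f}")
print(f"EVPI (most you should pay for perfect information): {evpi:,.0f}")
```

The EVPI puts an upper bound on what any further measurement of that variable is worth, which is how a computation like this ends up reordering what you measure.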
(07:52):
So how do we start out with a current state of uncertainty? You have a current state of uncertainty right now for anything. Think about the next big project you're working on. What's your 90% confidence interval for how long it's going to take? What's the probability that it's going to get done on a given date? You can state those probabilities even subjectively. It's a skill you can learn to get good at. One of the individuals I cite quite a lot in my research is Daniel Kahneman. He won a Nobel Prize in economics in 2002. I was able to interview him for my second book, The Failure of Risk Management. He died last year, just after he wrote the book Noise. If you've seen that one, that was a pretty important one. But I cite him in many areas. I actually don't cite him in the one area he won the Nobel Prize for.
(08:42):
I cite him in a lot of other research, and some of the research that I cite him for has to do with how well people subjectively assess probabilities. And perhaps this is no surprise to you, but the vast majority of people are statistically overconfident when they assess probabilities. If you look at all the times that somebody says something's 90% likely, they're right less than 75%, even 60%, of the time. Alright? In fact, of all the times they said they were a hundred percent confident in a forecast they're making, and we've tracked this, we have a lot of data points on this, they're wrong about 12% of the time. Now you can compute something called a statistically allowable error. People can be close to 90% right, or close to 75% right, when they state a probability, because they're going to be right sometimes and wrong sometimes. But guess what?
(09:26):
There's no statistically allowable error when you say you're a hundred percent confident. You have to be right every time. If you ever said you were a hundred percent confident and were wrong, you were overconfident. You don't need math for that part. The neat thing is, as I said, this is a skill you can measure and get measurably better at. It's something you can learn. It takes about half a day. About 80 to 85% of our participants in our calibration training end up about as good as a bookie at putting odds on things. They're statistically indistinguishable from a bookie. In other words, if you look at all the times they said something was 70% likely to be true, it happened about 70% of the time. All the times they said it was 95% likely to be true, it was right 95% of the time. Of all the times they give a 90% confidence interval on an unknown quantity, the true value falls within the range about 90% of the time. Again, it's a skill you can learn. We've done some really interesting research on this lately with getting AIs to do that, by the way.
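A calibration check like the one Doug describes needs very little machinery. The sketch below (with invented forecasts) simply buckets stated probabilities and compares each bucket's stated confidence to the observed hit rate.

```python
from collections import defaultdict

# Calibration check sketch: group a forecaster's stated probabilities into
# buckets and compare stated confidence with the observed frequency of being
# right. The sample forecasts below are invented for illustration.

forecasts = [
    # (stated probability of being right, actually right?)
    (0.70, True), (0.70, False), (0.70, True), (0.70, True),
    (0.90, True), (0.90, True), (0.90, False), (0.90, True),
    (1.00, True), (1.00, True), (1.00, False),   # a "100% sure" miss
]

buckets = defaultdict(list)
for stated, correct in forecasts:
    buckets[stated].append(correct)

for stated in sorted(buckets):
    hits = buckets[stated]
    observed = sum(hits) / len(hits)
    print(f"Stated {stated:.0%}: right {observed:.0%} of the time "
          f"over {len(hits)} forecasts")
```

A well-calibrated forecaster's observed hit rate tracks the stated probability in every bucket; an overconfident one falls short in the high-confidence buckets.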
(10:24):
So that's the concept of measurement. That's a major obstacle to measurement for many people. They misunderstand what measurement means because they think they need an exact number. Somebody will say something like, Doug, we'd like to measure that, but we can't because there's no way to put an exact number on it. That misunderstands what the definition of measurement is in the empirical sciences and for practical use in decision-making. Secondly, the object of measurement: defining the thing that you're trying to measure. What does that mean? Shout out some really difficult measurements, things that you think are impossible to measure. There's no way Doug can measure that. There's a good chance, by the way, I should say, that I've heard it before many times, but go ahead. Ideas? Piano tuners in New York? Piano tuners. You're using an example from the book. Okay, anyone else?
(11:19):
Long-term love. Andrew Oswald, the economist I wrote about in the first book, did that exact measurement. Any other ideas? Another one, a really good one, not an easy one like that. No? Give me some hard ones. Something you might run into in your organization. Trust? Trust. Oh, a good one. Excellent. Alright, let's deal with each of those. The first thing you have to figure out is what do you mean when you say it? What do you really mean? And in fact, what do you see when you see more of it? Have you seen variations in trust? Have you seen more trust over here than over there? What did you see that was different? Any ideas, right? Yes. Part of it might be preferences and attitudes, people's response to someone else's actions. What can you do to get their preferences and attitudes? You can ask them.
(12:19):
That's called a stated preference. You can also look at what they reveal through their actions, how they spend their time and money. Those are stated and revealed preferences. Alright, so all of these things are things we've been asked to measure before. I've also been asked to measure things like the value of AI, team collaboration, innovation, certainly trust, cybersecurity. We've measured all of these things. The value of improving the environment or water resilience in the developing world. These are all things we've measured. First you have to figure out what people mean when they say these words. They're not sure what they really mean when they say these words, by the way. And finally, why do you care? So this is kind of getting back to Martin's point here. What's the why behind some of these things? When you put the "why do you care" around it, you're starting to frame it as a decision-making problem.
(13:14):
Now, here's an important reason why you want to start measuring things. One is there's this phenomenon we ended up calling the measurement inversion. You can compute, and we have computed, the value of information for every uncertain variable in a big decision problem. We build a model, a Monte Carlo simulation, that might have a couple of dozen or a couple of hundred variables in it. So it could be prioritizing R&D projects in aerospace and defense, et cetera, or the project we did for the Marine Corps forecasting fuel for the battlefield. So we built these algorithms for them, and we compute information values for each of these uncertain variables. What we tend to find out is that the high information-value variables are not what they would have measured otherwise. They spend more time measuring things that are statistically unlikely to improve decisions. Does that sound familiar?
(14:06):
It's not just that people are measuring the wrong things, by the way; they're measuring almost exactly the wrong things in almost exactly the wrong order. What do you think they spend more time measuring in IT, costs or benefits? Which one's more uncertain, costs or benefits? Within costs, where do they spend more time measuring, initial development costs or long-term maintenance and training? Which one's more uncertain? There you go. Those are measurement inversions. And by the way, the fact that those are uncertain, this is kind of non-obvious, but that doesn't mean they can't be measured. We use probabilities because we're uncertain, not in spite of it, by the way. And finally, the methods of measurement. Let's talk about that for a little bit. There are some profound misconceptions about how statistical inference works that need to be overcome. Here's a simple little test: who works for a big organization?
(14:57):
I know some people work for big organizations here, maybe over a hundred thousand or so. Suppose I wanted to figure out how much time they spend per week, or per day, on some activity. So I randomly sample five people out of that entire population, and it's random; everybody in the entire population had an equal chance of being selected. Here are the results. The smallest one was 15 minutes. The largest one was 40 minutes, let's say per week, or per day in this case. What's the probability that the median of the entire population falls within the smallest and largest of that random sample of five? 93.75%. Does that surprise anybody? Is that a statistically significant sample size? Well, I'll get to that in just a minute. Have you heard anybody say any of these things, or said them yourself? We don't have enough data to measure that. Have you heard that?
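That 93.75% figure, sometimes called the Rule of Five, follows from the fact that each random sample has a 50% chance of landing above the population median. A quick check in Python, with an arbitrary made-up population, looks like this:

```python
import random

# Rule of Five check: the chance that a population's median falls between the
# min and max of a random sample of five is 1 - 2*(1/2)**5 = 93.75%,
# regardless of the shape of the distribution.

analytic = 1 - 2 * (0.5 ** 5)
print(f"Analytic probability: {analytic:.2%}")   # 93.75%

# Simulation with an arbitrary skewed population of "minutes per week".
population = [random.lognormvariate(3, 0.6) for _ in range(100_000)]
median = sorted(population)[len(population) // 2]

hits = 0
trials = 20_000
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1
print(f"Simulated probability: {hits / trials:.2%}")
```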
(15:56):
The claim that we don't have enough data is a specific mathematical claim that almost nobody ever has the math for. Does that mean they actually computed the uncertainty reduction you can get from a given sample size, and the economic value of that uncertainty reduction, to determine whether or not it was justified? Of course not. They're winging it every time you've ever heard that. Who's heard the phrase "statistically significant sample size"? Any statistician will tell you this, and I'm asking you to go check it yourself, but they will tell you there is no such thing. It's a misconception. There's something called statistical significance, that's true, and there's a thing called a sample size, but there's no universal number where, if you're one short of it, you can't make an inference. That's not how the math works. Technically, every single observation you make budges the needle a little bit. That's how the math actually works. Have you heard this one: something's too complex to model quantitatively, or just too complex to model? Anybody heard that? You're modeling it anyway. You might be using your intuition, your experience, your judgment. That's still a model. Those models' performance has been measured, and we know a lot about their relative performance now.
(17:07):
So the incorrect conclusion to draw from any of those is that we're better off relying on our intuition. That is not the case. It does not follow; that is a non sequitur. Well, here's another thing we like to point out. Do you think you've ever run into kind of an assumption that if you have a lot of uncertainty about something, you need a lot of data to measure it? Does that seem like a familiar behavior? Mathematically speaking, just the opposite is true. Let's talk about this a little bit. So I've got a cost of information function here. Sorry about the extra figure there that popped in. And I've got a value of information function there. So the cost of information and the value of information functions move in opposite directions. As you increase your certainty by reducing uncertainty, your cost of gathering that information goes up and up. In fact, it could skyrocket; it might be infeasible to ever reach perfect certainty even if you spent a nearly infinite amount. On the other hand, the value of uncertainty reduction levels off, alright? And it goes that way. So the biggest bang for the buck tends to be relatively early in a measurement.
(18:22):
Here's the way the math actually works: if you know almost nothing, almost anything will tell you something. That's what the math actually says, not word for word, but that's in the formulas if you know how to read them. So again, we use probabilities because we're uncertain, not in spite of it. I've heard people say things like, we'd like to do a Monte Carlo simulation or a probabilistic analysis, but we don't have enough data for that. They're thinking about it entirely wrong. It is the lack of data that means you need to do that sort of analysis. You want to quantify your current uncertainty. Now here's another thing I like to quote Daniel Kahneman on. They did some early research on people's intuitions about sampling, and what they concluded is that people have some profound misconceptions about how sampling works. People are routinely surprised at what kind of inferences you can make from the first few observations. When you have lots of uncertainty, that's exactly when the first few observations reduce your uncertainty the most.
(19:32):
So here are a few things I've learned about measuring things that seemed immeasurable, and I'd like to spend a little bit of time just trying to address some of those really difficult measurement problems you might've been thinking of. First off, you have more data than you think, and you need less than you think. People are giving up way too fast on this. Once you hear somebody call something an intangible or immeasurable, it's like they've given up on it; it's just stamped, categorized that way: I'm not going to think about it again. No, I'm proposing that you keep thinking about it. Start with the assumption that it is measurable. Start with the assumption that you have the data. If you're more resourceful about what data might be informative, can you make indirect inferences from the data that you have? And do you really need as much as you thought? Especially if you have lots of uncertainty. Remember, you get the biggest bang for the buck early, when you have lots of uncertainty. If I had no idea what percentage of the population was following some security protocol correctly, if I said, I have no idea, it's zero to a hundred percent, how many people would I have to randomly sample to reduce that range by half?
(20:42):
Well, one's too few, but nine is the answer. Yes, it has a standard deviation of about 0.29 of the range. So 29% is the standard deviation of that zero to a hundred percent range, and you can reduce that by about half by sampling nine people. How about in the situation where you already have information that says somewhere between 40 and 44% of the population is doing it correctly? Now how many people do you have to sample to reduce that range by half? Thousands. It actually gets harder the more certainty you already have. Alright, this is an odd one. This is so easy to deal with, and I keep running into these situations where people sort of assume that they're the first people on the planet to ever try to measure this thing. It's been measured before. Just assume that. And if it's not true, you might win a Nobel Prize.
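Backing up to the nine-samples-to-halve-the-range claim: one way to sanity-check it is a Bayesian reading with a uniform prior over the unknown compliance rate. That framing is my assumption, not necessarily the exact math Doug uses. With a uniform prior, the posterior after k successes in n yes/no samples is Beta(k+1, n-k+1), and its standard deviation after nine samples comes out around half of the prior's 0.29.

```python
import math
import random

# Sketch: start with a uniform prior over 0-100% compliance (std ~0.29),
# observe nine random yes/no samples, and see how wide the posterior is.
# With a uniform prior, the posterior is Beta(k+1, n-k+1).

def beta_std(a, b):
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

prior_std = beta_std(1, 1)            # uniform prior, about 0.289
print(f"Prior standard deviation: {prior_std:.3f}")

n = 9
true_rate = random.random()           # unknown true compliance rate
k = sum(random.random() < true_rate for _ in range(n))

posterior_std = beta_std(k + 1, n - k + 1)
print(f"Observed {k}/{n} following the protocol")
print(f"Posterior standard deviation: {posterior_std:.3f}  (roughly half the prior, or less)")
```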
(21:36):
It might happen that you really are dealing with something that's never been measured before. I haven't found it yet, and I've measured all sorts of stuff. Everything I've ever tried to measure, somebody's written something about before. And even if that particular publication doesn't directly address the measurement that I'm talking about, they invented a method that applies. So that gives you more and more ideas about how to do these things. As the measurement inversion showed, you probably need different data than you think anyway. This is really profound. This is a really important point. Everybody's measuring the wrong stuff. I don't know how it doesn't affect the GDP, really. I've seen it in every company, in every industry we've worked in, for over 35 years, almost 40 years now. You probably need different data than you think. And finally, the best investment in most portfolios is a better measurement of investments. Think about this. What is your single biggest risk? Anybody? I think I know the answer. Regardless of the organization you're in, I think your single biggest risk is how you're measuring risk. If that's flawed, that affects the rest of your management of risk, does it not? Yes. If you have a big portfolio of projects, who's got a big portfolio of projects?
(22:59):
Well, if you're challenged with how to prioritize projects, I know what your first project ought to be: how to prioritize projects, right? That's your highest priority project. So the best investment in most portfolios is a better measurement of investments. And this makes a lot of sense. Take a tenth of a percent of a portfolio, and think how much more performance you can get out of the rest of the portfolio if you actually spent that tenth of a percent just trying to figure out which investments to approve and prioritize. Some of you have some pretty big portfolios, I know, so that's a pretty hefty investment itself. People routinely underestimate how much they really should be spending on that kind of analysis. They don't see it as an end goal in itself, even though it's the meta project, the project that affects other projects. So it seems like if you want to get the biggest bang for the buck, and these are typically not large investments, spending more time, even hiring a couple of people who might be quant analysts and giving them the right strategy to pursue, that all matters.
(24:03):
Alright, so I think that'll wrap up my session here. I just want to make sure I spend some time dealing with the hardest measurement problems you've ever come across, and going further, monetizing them. Everything can be monetized. That's where I usually get a little bit of pushback. Doug, what about a human life? I'm not telling you, by the way, that you should be monetizing a human life. How many of you make investments that have anything to do with safety and health? Alright, I'm telling you, you have been monetizing it all this time. However long you've been doing it, you've been monetizing it, and you've revealed how you monetize it through the investments that you make. But you've been monetizing it extremely inconsistently, to the detriment of your objective of improving health and safety. And there's all sorts of reasons behind that. But I think I'll wrap it up right here unless I hear something else, or we'll follow up in Q&A. Thanks a lot for your time.