Navigating Uncertainty
Summary:
How do you test business ideas and navigate uncertainty in high-stakes environments? In this actionable talk from Prodacity 2025, David Bland, author of Testing Business Ideas, shares his proven frameworks for reducing risk, validating assumptions, and making smarter decisions.
David blends design thinking, lean startup, and business model innovation to help teams de-risk innovation and avoid costly mistakes. Whether in government, defense, or enterprise innovation, his insights provide a clear path to learning faster, failing smarter, and scaling only what works.
🔹 Key Topics Covered:
- Why most organizations fail at testing business ideas
- The Mission Model Canvas: A strategy tool for alignment & decision-making
- How to identify desirability, feasibility, and viability risks
- The three-step process for extracting, mapping, and testing assumptions
- Why scaling too early kills good ideas
- How to design experiments that maximize learning with minimal investment
🕒 Key Highlights & Timestamps:
[00:03] - Introduction: How to make better decisions in uncertainty
[01:09] - Why Testing Business Ideas became a go-to tool for governments
[02:06] - Applying lean startup principles to mission-driven organizations
[03:49] - How to avoid sunk cost fallacy & bad investment decisions
[05:39] - The Mission Model Canvas: A one-page tool for strategy alignment
[07:51] - The danger of scaling too early (and why you can’t pivot if you’re broke)
[09:28] - Measuring impact over revenue in government & nonprofit settings
[12:26] - Why most organizations focus too much on feasibility risk
[14:04] - Mission outcomes vs. traditional KPIs: Measuring the right things
[17:34] - The importance of challenging assumptions (before building anything)
[19:07] - Assumptions Mapping: How to prioritize risk in your projects
[22:13] - Why "MVP" is often misunderstood—and what to do instead
[24:35] - How to sequence low-cost experiments to validate ideas
[25:24] - Final takeaways: A practical 3-step process for de-risking innovation
🔗 Stay Connected:
Learn more about Prodacity: https://www.rise8.us/prodacity
Follow us on X: https://x.com/Rise8_Inc
Connect with us on LinkedIn: https://www.linkedin.com/company/rise8/
👍 Like, Subscribe, and Share:
If this session helped you rethink how to test business ideas & reduce risk, give this video a thumbs up, subscribe to our channel, and share it with your team. Let’s build smarter, faster, and with less wasted effort.
Transcript:
David Bland (00:03):
So today, maybe a timely topic: I'm going to be talking a little bit about uncertainty and how we can navigate it. Now, my background, as you heard in the introduction, is kind of all over the place. Mostly what I do is take design thinking, lean startup, and business model innovation, and blend them together to help teams address risk. And I've done this with many, many different companies. I started small and then started working with bigger companies over the years. Again, my initial background is in startups, and I felt like maybe bigger companies and organizations could learn from the mistakes I made at startups. I went one for three in startups; usually that's still above average, since eight or nine out of 10 startups don't make it. And I work with a lot of different agencies. And what I noticed is, when I published this book, Testing Business Ideas, we sold probably about a hundred thousand copies of it.
(01:09):
It's in 20 different languages. And what I noticed is I had governments reaching out to me going, hey, we're using your book; can you come give a talk, or can you do a workshop, or can you help us? And I thought, wow, I didn't actually write the book for you all, but it was very interesting to see how the ideas resonated with people dealing with a lot of uncertainty. So what I'm going to do today is take a step back and say, okay, what have I learned over the last five years working with more mission-driven organizations, and can I share my learnings with all of you? That's why I'm here. So thank you all for having me. A lot of my background is in for-profit, but when I've been pulled into nonprofits, into government, into mission-driven organizations, I realized, wow, a lot of these tools can still work.
(02:06):
But you do have to make some minor adjustments to them. And I'm not very dogmatic about how I approach things. You can probably tell by my vibe, I'm very much like, hey, what are you trying to do? And then, oh, here are some things that could help you achieve that outcome. But what I've learned is this is really, really difficult to do. And I think one of the reasons it's so difficult is because you're switching context when you're dealing with uncertainty. So I'll try to sketch this out for you, and you can see if it resonates with you. First off, it's like, okay, we have this opportunity or this mission, we have this idea, but we have to make a decision. We have to decide what we're going to do with it. A lot of my work is helping people make investment decisions.
(03:00):
So I don't know how many of you have been working on a project and you're like, I don't know why I'm still working on this; this is going nowhere. It's sunk cost fallacy. You keep throwing more money at it, more people. My favorite is more people: oh, if we just throw 200 more people at it, it's going to make progress. So it's really difficult to make decisions, especially when you're dealing with extreme uncertainty. The way I break this out is you have the mission design side and you have the testing side. So on the mission design side, and you've heard some great speakers today, I'm a huge fan of this whole lineup, it's amazing, it's: how do we ideate? What are potential things we can solve for, jobs to be done for our citizens, for our warfighters? And then how do we have a prototype?
(03:49):
But in this case, I'm not thinking prototype in the sense of a multi-year development effort. I'm talking almost like a strategy prototype. So I'm going to share a tool today that I use a lot called the mission model canvas. There are also other tools you can use; I quite often use the value proposition canvas. There are many, many different tools, but they're there to align strategy, to have this common understanding of potential options that we could pursue. Then we do some assessment of those, and then we decide to go test. I think this is one of the reasons we struggle: because of the context we start with. Okay, what could this potentially be? What are the options? But then how do we go test those? And that's a very different world when you're down in the weeds testing them.
(04:45):
So on the testing side, normally I'm saying, well, what are the assumptions? What are your hypotheses? And then how do you prioritize those? I use a little two-by-two that I'll cover today called assumptions mapping, and we prioritize our risk. Then we say, okay, what experiments could we run that go after the riskiest assumptions in our mission? We're prioritizing so we can say, okay, there are some things we could do, and yes, we'd be very busy, but we might not be paying down the risk of what we're trying to do. And from there, it's: what are we going to learn? Obviously you want to put that learning into action. I do a lot of work with companies in Silicon Valley, and there's a quote I love, which is, you win by learning faster than everyone else. But I don't know how true that is, because you do have to put that learning into action.
(05:39):
You're not going to win just by learning. You do have to use it in some way to move forward, and then make a decision based on what you've learned. So I think this is one of the reasons it's so difficult to do this work when you're navigating uncertainty, especially in missions: you're going from one side to the other and back and forth. Your week might be, hey, we're down in the weeds testing things, and then you might have to back up and say, did that impact our strategy? Did that change our approach in some way? Because if you overcorrect on one side versus the other, it usually ends up in a bigger failure, not a smaller failure. If you play in mission design land for too long, it all looks great on PowerPoint, it looks great on the whiteboards in those conference rooms, but you don't know how much of it is reality. You have to get out of the building and actually go test it. If you spend too much time in testing and you don't back up to look at your mission strategy, then potentially you're locally optimizing for something that seems really risky, but you're missing the bigger picture. And so when I think about navigating uncertainty and how we pay down risk, it's a lot of this back and forth until we get something that we can scale.
(07:01):
But I don't know about you all, I've worked on some products that we scaled too quickly, and it's really hard to course correct when you're out of money or out of resources, where we scaled for demand and unfortunately the demand wasn't there, or the needs weren't there. And I have this saying: you can't pivot if you're broke. We always love this term pivot; I already heard it in previous talks and in Q&A, right? But you can't pivot if you don't have any resources. It does take resources to pivot. The first startup I joined was in financial services. We thought we were business-to-consumer; we actually ended up being a business-to-business startup, and that's when we took off and we were acquired for 16 million. But we didn't call it a pivot. We just didn't want to go back and live with our parents, because we were running out of money.
(07:51):
So this idea of pivoting, it does take resources, it does take thought. One of the tools I use for this is the mission model canvas. I did not personally create this tool. My co-author collaborated with Steve Blank and Pete Newell and made a customization of a tool I already used, which is in my book. And I find the little tweaks matter, so I just want to explain them to you and how to make it actionable. I view this as your one-page strategy document. I think we talk a lot about strategy, but then we talk past one another. We talk about how we stay aligned. Well, one way to do that is through visual thinking and making sure we're all in agreement. So, some of the minor tweaks on this one that I like. We're talking about beneficiaries; this is usually where I begin. Who are the beneficiaries of this project, of this mission?
(08:40):
What are their jobs to be done? What are their needs? What are we trying to do? What are the value propositions to them? It's not just about the product or the service or whatever you're building; it's about the reward or the outcome of it. Okay, how do we deploy it? How do we get it to them? How do we get buy-in and support? I know this is a topic that's come up quite a bit today. How do we get buy-in and support for this mission? You have to build a coalition for that. And then, what are your impact factors? This is the biggest difference I've noticed going from for-profit to the more mission-driven and nonprofit organizations I work with: we usually measure revenue, but in this case, we're measuring impact. What is the kind of impact we want to have? So you'll notice, first off, it's not a left-to-right tool.
(09:28):
I kind of bounce around. I start on the right and I'm like, your beneficiaries, what's your value prop to them? I feel like that's a really smart way to start. Otherwise you might start in can-we-build-it land, and then frantically look for somebody who can adopt the thing that you built. Backstage, it's very straightforward. We talk about activities, so think about the key verbs you would need to do. The resources, so think nouns: what nouns do we have, physical and digital? Who do we need to partner with? And then, what's our mission budget and cost? The biggest mistake I see with this when I walk leaders through it is that they treat it almost like a checklist. But when you think about it, it's more about the relationships between things and getting alignment. So for example, if you're partnering with somebody, they're bringing an activity or a resource, or potentially they're a deployment channel for you to succeed. Your value prop is about deploying it to your beneficiaries.
(10:34):
You want to see the impact for them. You want to weigh your mission cost against your impact; you want to be able to say, for this cost, we can have this impact. So it's more about the relationships between the things. Now, I'm going to play with a fun case study today. I'm really fascinated by this idea of local resilience hubs. I'm based in California, we've had a lot of disasters with wildfires, and how do we become more resilient? How do we set up hubs, these shared spaces, where we can make communities more resilient? So I'm just going to play with this as an example, because I think you'll learn more from an example today. Start with the beneficiaries: well, we're going after vulnerable populations. What are our value props? Well, a proactive, inclusive safety net; how do we mitigate the immediate impacts of disasters? We deploy it through a pilot program. We get buy-in and support from local government.
(11:35):
We want to measure impact, and this is where it gets tough. How do we measure the impact? Well, reduced response times, uptime of the energy microgrids, economic growth. There could be many more than this, but I'm just looking for a few key ones to get started here. And then backstage, what do we have to do? Well, we're going to have to do community engagement, capacity building, and integration. We're going to need resources for this too: facilities, human capital, supportive legislation. We partner with local governments, energy technology providers, community-based organizations. And then we have our costs; obviously there are going to be some costs to incur, so infrastructure, operational costs, regulatory permitting. Now granted, there's a lot more than this, but I'm not using this as a project management tool. It's more of a strategy tool.
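To make the canvas fields and the resilience hub example concrete, here is a minimal sketch of how the one-page canvas could be captured as structured data. The class and field names below are illustrative shorthand, not an official schema from the talk or the book; the values are the hub example David walks through.

```python
from dataclasses import dataclass

@dataclass
class MissionModelCanvas:
    """One-page strategy snapshot; fields mirror the canvas boxes from the talk."""
    beneficiaries: list[str]
    value_propositions: list[str]
    deployment: list[str]           # how we get it to beneficiaries
    buy_in_and_support: list[str]   # the coalition behind the mission
    impact_factors: list[str]       # measured instead of revenue
    activities: list[str]           # backstage: key verbs
    resources: list[str]            # backstage: key nouns, physical and digital
    partners: list[str]
    mission_budget_cost: list[str]

# The resilience hub example, captured for alignment rather than execution.
resilience_hubs = MissionModelCanvas(
    beneficiaries=["Vulnerable populations"],
    value_propositions=["Proactive, inclusive safety net",
                        "Mitigate the immediate impacts of disasters"],
    deployment=["Pilot program"],
    buy_in_and_support=["Local government"],
    impact_factors=["Reduced response times", "Microgrid uptime", "Economic growth"],
    activities=["Community engagement", "Capacity building", "Integration"],
    resources=["Facilities", "Human capital", "Supportive legislation"],
    partners=["Local governments", "Energy technology providers",
              "Community-based organizations"],
    mission_budget_cost=["Infrastructure", "Operational costs", "Regulatory permitting"],
)
```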
(12:26):
So I want a one-page alignment of: hey, do we all align on what we're trying to do, and can we agree on that? Now, this is great, but there are a lot of assumptions baked in here, and I think that's where we have to be careful with these tools: we use them and then we're like, oh, this is all fact, let's just go execute our 10-year plan on this thing we sketched out in half an hour. My world is really helping people deal with assumptions and uncertainty, and I have a layer of thinking on this I want to share with you. First off, I think this is a great tool to use, and you may even customize some of the fields, and that's fine. But the top of that canvas is a lot of desirability risk.
(13:08):
You have a lot of risk around the beneficiaries. Do they really have these jobs to be done? Do they even understand your value proposition to them? You also have a lot of risk on the cost and impact side of things; in the bottom two boxes, there's a lot of viability risk. Can we have this impact for this cost? And then where we spend probably too much of our time is backstage with feasibility risk, which is, hey, can we do this? Do we have the activities, resources, and partners? When I'm pulled into agencies and organizations, that's where all their time and effort is going. But you can still fail in a big way if you are not addressing your desirability risk and your viability risk. So I'm a big fan of design thinking, but the way I use design thinking is slightly different.
(13:55):
I use it to frame risk. How do we navigate uncertainty with design thinking? And I think it's a great overlay on a tool that sometimes intimidates people. So I basically have a three-step process I use here, and I just want to share it with you: extracting risk, mapping it, and then running experiments and testing. So extracting, what does that look like? Well, we have this overarching question of, what would have to be true for this to succeed? We have some very deep domain experts in the crowd; we have people who are among the best in the world at what they do. We're also very persuasive because we're the experts; we want to say, yes, I have the answers. That's usually how you get promoted into leadership inside organizations. But part of this, as Barry mentioned, is unlearning. We have to think, okay, what if I'm wrong, or what would have to be true if I broke this down into smaller pieces?
(14:57):
So what I do, and this is something you all can take away and start doing next week, is start asking: what are the assumptions we're making? Can we at least get them on paper? Can we start writing them down so that we can address them? So, some desirability assumptions here; I like using orange for desirability. We believe local communities are actively seeking emergency preparedness. If they're not seeking this, it's going to be tough to succeed. Also, we believe they'll perceive these hubs as valuable. So maybe the hub isn't the way to go; maybe that's not the way we help them. But in the back of your head, you have to think, I can't be married to this solution right away. I have to be thinking through, okay, if they have this need and it's real, what are other possible solutions?
(15:45):
And open yourself to that. We also have viability risk: we believe the hubs can actually lead to measurable improvement in emergency response times. We also believe the hubs will result in cost savings by mitigating the impact of disasters. So again, I'm looking at the bottom two boxes here with viability: what would have to be true? What are the beliefs, the assumptions we have? And then the same thing with feasibility backstage, although this tends to be easier with the teams I work with because you live and breathe it all the time. We believe community-based organizations will partner with us, and we believe that existing facilities can be repurposed into hubs. And I'm just using hubs as an example; feel free to steal this idea. It's just something I'm interested in because I'm based in California, obviously. But the idea is: we're not treating our strategy as a series of facts.
(16:46):
We are being upfront: yes, we're aligned on this strategy, but there are assumptions that would have to be true for it to work, for it to succeed. And what I've found, especially when you're starting a new project or there's a lot of uncertainty, is that people are hesitant to write these down, or you don't feel like you have space in your day to write them down. I see teams with roadmaps for years, backlogs of work, and they're delivering on it. The execution isn't the problem. But did we actually question whether this is something we should be building right now? And I think that's where the conversation needs to shift a little bit.
(17:34):
I was a big fan of the opening keynote where we said, don't be cynical. So you'll notice I'm writing these as "we believe," a belief stated in the positive that would have to hold for us to succeed. I try not to get into "well, we don't believe." My favorite, and I won't name the agency, but I went in and said, okay, let's start writing these down, and they said, well, we believe we can't do this. And I was like, okay, what kind of evidence do you have? And they said, we have plenty of evidence we can't do this, so we should just give up. I do believe in being a little more optimistic: well, let's believe we could do this. What would have to be true for us to be able to do this?
(18:22):
So I'm a strong believer in that, and I think that's the overall vibe you're going to get from the three days here. But I try to write these in the positive. Now granted, when we test them, we'll have acceptance criteria; we'll go through all that scientific process. But I try to keep it positive and not say, well, we don't believe any of this and we should not make any progress whatsoever. Because I've been pitched a lot of crazy ideas over the years, and for some of them I thought, wow, there's no way this could succeed, and then I'll see it be successful years later. So I really try to reserve judgment as best I can on the opportunities I'm working on and just say, well, instead of me judging you, can we break it down into smaller chunks and can we go test?
(19:07):
Right? So that's step one: extract. And then we map. I'll give you a high-level example of how I map. Basically, I have a two-by-two that I call assumptions mapping. I first learned this from Jeff Gothelf and Josh Seiden, who wrote Lean UX, back when I worked with them, and I've customized it over the years, over and over again, for the teams I work with. It's used by Google, it's used by the NSA, it's used by a bunch of different organizations. All it is is a basic two-by-two, but it's deceptively hard; it's a structured conversation about risk. So I'll just give you a high-level view of how this works. On one axis: hey, is this important or is this unimportant? I know it's not binary, but it's, how important is this to the success of everything?
(19:51):
We're doomed if this is wrong, or, hey, if this is wrong, we can probably recover. And then probably the hardest part is the other axis: do we have any evidence or not? So it's interesting when you get a bunch of smart people in a room and you have all this stuff laid out, and you have to have a conversation about, okay, do we all agree that this is really important? Because not everything we wrote down just now is the most important thing. And you may have a lot of evidence, or you may have no evidence, depending on where you're at with the project and the mission. So I tend to ask questions: how important is this? How much evidence do we have? And I also rely on the book a little bit here, since there are different levels of evidence, and I'll cover that a little bit with you today.
(20:35):
But it's this idea of, wow, you know what, if people aren't actively seeking this, it's going to be really tough; we need to go find evidence of this. And then we say, okay, the next one, is it more or less important? Do we have more or less evidence? And it's like, well, we have a little more evidence on this, not much more, but it's not as important, because we could have another solution that maybe isn't a hub. And I just walk through this process. I usually start with desirability first with the teams, then we go through viability, and then we go through feasibility. I have to say, you'll learn a lot about your teams doing this. I was once in a workshop where I was pitched Airbnb for power tools, and I thought, I don't know about that idea, but I reserved judgment.
(21:19):
And the team wrote down all their risk, and down here they put things like felonies and dismemberment and all this horrible stuff that could happen. And I was like, what is with this team? And they said, well, we're just going to lawyer up and we're not going to address any of that, so it goes down here. I'm not saying that's your team, but you will learn a lot about your team. Let's say your team doesn't put anything below that line; everything's important. Well, your team might have a hard time prioritizing. Or let's say your top right is all blue stickies, all feasibility, and everything else is all the way to the left. I might start asking, well, how much recent evidence do we have that people have this problem and that it's viable? Are we just focusing on building right now? So it's really interesting, when you facilitate this, what comes out with your team.
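As a rough sketch of that structured conversation, assuming each "we believe" statement has been scored on the two axes (importance and existing evidence), the prioritization could be expressed like this. The helper function, the boolean scoring, and the example values are illustrative only, not a prescribed method from the talk.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    risk_type: str       # "desirability", "viability", or "feasibility"
    important: bool      # are we doomed if this is wrong?
    has_evidence: bool   # do we already have evidence it's true?

def riskiest_first(assumptions: list[Assumption]) -> list[Assumption]:
    """Top-right of the two-by-two: important assumptions with no evidence yet."""
    return [a for a in assumptions if a.important and not a.has_evidence]

hub_assumptions = [
    Assumption("Local communities are actively seeking emergency preparedness",
               "desirability", important=True, has_evidence=False),
    Assumption("Communities will perceive the hubs as valuable",
               "desirability", important=False, has_evidence=True),
    Assumption("Hubs lead to measurable improvement in emergency response times",
               "viability", important=True, has_evidence=False),
    Assumption("Existing facilities can be repurposed into hubs",
               "feasibility", important=True, has_evidence=True),
]

for a in riskiest_first(hub_assumptions):
    print(f"Test first ({a.risk_type}): {a.statement}")
```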
(22:13):
And then finally, we test. So I do believe in prioritizing before we test. I've changed my approach over the years. When I first started doing this around 2012 or so, I would just get people excited about testing, and I realized, oh, we're not always testing the thing that's actually paying down our risk. That's why I really gravitated to that two-by-two. So with testing, the good news is there are all kinds of tests you can run. There are 44 in the book, but there are many, many more than that; I just had to stop writing at some point. The way I view testing, and this is very aligned with Steve Blank's work with Hacking for Defense and I-Corps, is from an evidence point of view. I start with discovery, and there are all these experiments you could run for discovery, but basically from an evidence point of view, you're trying to go from none to light. You're looking for any directional evidence that those assumptions could be true, so it's more exploratory. You can frame it slightly different ways, but you're not jumping to build right away. Usually building is the most expensive way to learn. Now, I know the cost of building has come down, especially over the last three to five years, but it's still the most expensive way to learn, in my opinion. So I like to do things where we don't have to build first. What can we do to test without building and learn about our riskiest assumptions?
(23:41):
With validation experiments, you're a little further down the road, and from an evidence point of view, you're going from light evidence to strong evidence. So now there's a value exchange here. I know we throw around the term MVP, minimum viable product; I actually heard it earlier in a couple of talks already. I tried to write a book without using it. I wasn't successful; I was almost successful. But the reason I'm backing away from it is, instead of just arguing over what it means, can we get to specific types? Are we doing something concierge, where it's manual? Are we doing a single-feature test? Are we doing a mashup, where we take multiple things and put them together to test? I feel like that's a much more productive conversation than being in a conference room arguing over what MVP means, because what I've learned over the years is that MVP means everything and nothing.
(24:35):
And I've seen people say, I worked four years on my MVP, and I'm like, oh, really? That's okay; maybe we could do something smaller. So I take these experiments from discovery and validation, and here's a sample sequence you could use. Again, I think this is where your superpower from this session is going to come from: how you stitch things together. We could do something like a day in the life; it's ethnographic research, where we go out and observe. From that, we could start doing stakeholder interviews: what did we observe, and can we use it to inform the script for our interviews? We could go off and storyboard and co-create possible solutions with our stakeholders, with our beneficiaries. From there, we could back our way into an explainer video that explains it a little better, that we can use with a broader audience, but it's based on what we've learned before.
(25:24):
We could actually write the spec or the data sheet of the thing we could build, but haven't yet, and test that as well. And then finally, maybe back our way into an MVP, but I don't start with the MVP. Okay? So for today, I tried to boil this down to a three-step process: extract, map, and test. And if you'd love to learn more about that, I do have a write-up on this; if you just scan this QR code, it'll take you to a write-up of what I just spoke about. As long as you keep your processes simple and have a learning mindset, I believe that's half the battle with exploring uncertainty. Thanks for having me.