Mission Improvement Kata

Summary:

How do you accelerate digital transformation in large enterprises and government agencies? In this foundational talk from Prodacity 2025, Bryon Kroger, Founder & CEO of Rise8, delivers a strategic roadmap for continuous delivery, mission-driven software development, and organizational change.

Drawing from his experience launching Kessel Run and modernizing digital services across the DoD, Kroger outlines why continuous delivery must come first, how to align teams around mission outcomes, and the biggest pitfalls to avoid in digital transformation.

🔹 Key Topics Covered:

  • The Improvement Kata: A simple framework for continuous improvement
  • Why continuous delivery must come first in digital transformation
  • How to move from requirement spreadsheets to outcome-driven development
  • Avoiding the alignment trap: Why most organizations fail at transformation
  • The role of product managers, designers, and engineers in mission-driven software
  • Why the first teams you build determine success or failure

🕒 Key Highlights & Timestamps:
[00:03] - Introduction: Connecting strategy, execution, and continuous improvement
[01:45] - Why high-performing software teams drive mission outcomes
[03:33] - The problem with requirement-driven procurement
[05:01] - Moving beyond simple web apps to complex systems
[06:31] - How to fund problems, not solutions in government IT
[09:57] - Mission impact mapping: Prioritizing the right outcomes
[12:25] - The twofold responsibility of innovation & subject matter experts
[14:52] - The real meaning of a software factory (and what went wrong)
[16:30] - Why continuous delivery is the first step in enterprise transformation
[17:22] - Avoiding the alignment trap: The biggest risk in transformation efforts
[19:59] - Why starting small is critical for innovation success
[21:40] - The role of product management in mission-driven software
[23:10] - Culture change through behavior change: How transformation really happens
[25:31] - Final takeaways: Accountability, scaling, and learning faster

🔗 Stay Connected:
Learn more about Prodacity: https://www.rise8.us/prodacity
Follow us on X: https://x.com/Rise8_Inc
Connect with us on LinkedIn: https://www.linkedin.com/company/rise8/

πŸ‘ Like, Subscribe, and Share:
If you’re serious about transforming enterprise software delivery and driving real mission impact, give this video a thumbs up, subscribe to our channel, and share it with your network. The future of digital transformation starts here.


Transcript:

Bryon Kroger (00:03):

I am going to weave in some things we learned on day one, foreshadow some of day two and day three as well, and really provide a scaffolding for this all to fit together. So as I do that, I really want you to think about how it fits together and how it might be deployed inside of your organization. Alright, so this is the Improvement Kata. Some of you might be familiar with it. I love it because it's a really simple framework. It starts with understanding the direction, understand your current condition, set a target condition, and then conduct experiments to get there. It doesn't get much simpler than that. Now, I'd be remiss if I didn't remind us of what Martin said yesterday, and that's that tools don't make a strategy. People do. The same is true of deploying continuous improvement. Your tools won't deploy continuous improvement for you.
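
To make the kata concrete, here is a minimal sketch of the loop in code; every name and number below is invented for illustration, not from the talk:

```python
# A minimal sketch of the Improvement Kata loop; all names and numbers are
# hypothetical, purely for illustration.
def improvement_kata(direction, current, target, experiments):
    """Run experiments one at a time until the current condition hits the target."""
    print(f"Direction: {direction}")
    for run_experiment in experiments:
        if current["deploys_per_month"] >= target["deploys_per_month"]:
            break  # target condition reached; set the next target
        current = run_experiment(current)  # each experiment may move the metric
        print(f"After {run_experiment.__name__}: {current}")
    return current

# Hypothetical experiments against a hypothetical metric.
def automate_tests(c):
    return {"deploys_per_month": c["deploys_per_month"] * 2}

def add_pipeline(c):
    return {"deploys_per_month": c["deploys_per_month"] * 4}

improvement_kata(
    direction="elite software delivery performance",
    current={"deploys_per_month": 1},   # understand the current condition
    target={"deploys_per_month": 20},   # set a target condition
    experiments=[automate_tests, add_pipeline, automate_tests],
)
```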

(00:48):

Your people have to do that, but I do think it's a helpful tool. So let's start out by talking about the direction. Now I think that this is something that often gets overlooked, and that's the impacts that we talked about yesterday. That's what senior leaders are thinking about: their mission impact. What are they trying to achieve? What are the results we want to generate for the mission? And then acquisitions typically talks about all of the resources, activities, and outputs that are required. The outputs usually come in the form of a long requirement spreadsheet, and nobody is thinking about what people are actually going to do with those things and whether they actually produce the mission impact that we're trying to achieve. Those are outcomes. So when we say outcomes for the rest of the day, and hopefully for the rest of our time working in this space together, that's what I want us to anchor on: what do people actually do with the software we give them?

(01:45):

We're going to talk about that at a high level first. Understanding the direction or challenge is really important. What we know from the DORA research is that organizations with high software delivery performance are two times more likely to achieve their organizational outcomes. And so that brings us to not only thinking about what outcomes we want to achieve, but how we achieve really high software delivery performance. That's how these two things fit together. So you've got speed and stability, those are the DORA top four we usually hear about, and then reliability. And you should think about those in terms of both acquisitions and operations, where operations could mean military operations doing satellite command and control, or it could be delivering veteran care inside of a VA clinic. There's an intersection here: acquisitions should be thinking about and caring about software delivery performance. I need to make sure the people who are conducting the mission have really good software delivery throughput so that they can deliver mission outcomes and impact.
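
As a rough illustration of what measuring software delivery performance can look like, here is a sketch of the DORA four keys computed from a hypothetical deployment log; the record fields are assumptions for illustration, not a real API:

```python
# A rough sketch of the DORA four keys from a hypothetical deployment log.
deployments = [
    {"lead_time_hours": 20, "failed": False},
    {"lead_time_hours": 36, "failed": True, "restore_hours": 2},
    {"lead_time_hours": 16, "failed": False},
]
window_days = 7

deploy_frequency = len(deployments) / window_days                              # speed
lead_time = sum(d["lead_time_hours"] for d in deployments) / len(deployments)  # speed
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)                         # stability
time_to_restore = sum(d["restore_hours"] for d in failures) / len(failures)    # stability

print(f"{deploy_frequency:.2f} deploys/day, {lead_time:.0f}h lead time, "
      f"{change_failure_rate:.0%} change failure rate, {time_to_restore:.0f}h to restore")
```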

(02:48):

And then operations should be the one actually owning those outcomes. And today, that's not what we see. Usually the organization doing the procurement has been handed that list of requirements, and they're now in charge of the requirements. So much so that when the operations community comes back and says, hey, that's not actually what we want, we think we want this instead, or, that didn't work, let's try something else, they're like, no, no, I've got my spreadsheet, I don't answer to you. It's like, but I created the spreadsheet. And they're like, well, you created it five years ago through a committee, but you're not the committee, and you have to go through the five-year process again if you'd like to change that. So that's wild, right? But the problem is, when you have a five-year planning process, that's what you're optimizing for. And so instead we need to get to the point where we're looking at our current mission condition at all times.

(03:33):

This is a continuous process of understanding our value stream. I'm not going to go into too much detail on that today, because value stream mapping is a whole talk segment that Karen's going to deliver. But the idea here is that you map your mission thread and understand where the bottlenecks are. And you don't want to do too much of that at once, because once you address one bottleneck, you create five new ones; it completely changes the value stream. So you don't want to deliver five years' worth of product at once. That will break everything. You deliver one thing, measure what happens, then go after the next thing. Those are your target conditions. But underneath that is another important thing that we forget in the technology landscape, which is that there's an existing heritage system out there, and you've got to understand it and deliver into the seams of it.
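
A toy sketch of that one-bottleneck-at-a-time idea, with made-up stage names and times:

```python
# Toy value stream: find the single biggest bottleneck, address it, then re-map.
# Stage names and times are invented for illustration.
value_stream = {
    "collect tasking": 2,    # hours of wait plus work per stage
    "plan mission": 48,
    "approve plan": 120,     # the current constraint
    "execute": 6,
    "assess": 24,
}

bottleneck = max(value_stream, key=value_stream.get)
print(f"Address one bottleneck at a time: '{bottleneck}' "
      f"({value_stream[bottleneck]} hours)")
# After delivering a fix for this stage, re-map the stream: removing one
# constraint changes the whole picture and can surface new bottlenecks.
```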

(04:18):

And that's where domain-driven design comes in, and Joe will be talking about that later today. But eventually you start producing apps. This is an example of a space command and control app that was delivered, called Spaceboard. And who in the software factory community has heard, oh, software factories, they produce those cute web apps, but they can't build complex systems of systems? That's a bit of a self-fulfilling prophecy, because we never let them. But if I put that aside, I'll say there's also a problem in the innovation community, which is that we never go beyond this. We talk to some end users, we talk to them about their pain points, we build them a solution like this, and then we go do 10 more of those. That won't produce a cohesive system, but neither will five-year planning. I talked about Gall's Law on stage yesterday.

(05:01):

You can't build complex systems from scratch. You have to start over with working simple systems. So what do you do? Well, you map out that value stream. I don't want you to have to understand everything on here, but this is the space tasking cycle, of which Spaceboard is one small part. Map out that value stream, understand where some of the constraints or bottlenecks are in each of those sub-threads, and start building software around that. And when you build those working simple systems, you realize they have to start connecting. And that's when you get more complex systems built from working simple systems. So now I've talked about it at a high level. We want to achieve mission outcomes, but how do we actually decide which ones to go after? We know our current condition; how do we set target conditions? There's a lot of goaling frameworks out there.

(05:47):

So the one you use is not important; I'm just going to talk about this one. But you could use OKRs, you could just use a spreadsheet of priorities and call it good, put it in whatever you want. You could use Jira Align if you want; I know I made a joke about that yesterday. But the idea here is that we want to take this traditional requirements planning process and turn it on its head. So rather than having a process where we talk to the customer and then six years later deliver a Frankenstein that doesn't meet any of their needs, we want to have everybody from executives all the way to product teams centered around the customer and looking at the same sheet of metrics. And one thing that's really important as you do that is that you need to think about problem and opportunity statements.

(06:31):

So rather than funding ideas and solutions, you're funding problems and teams, and they're all oriented around this by way of some type of accountability. We like to talk about growth boards. I'm not actually going to get into that today, but the idea here is that everybody's looking at the same metrics. We talked a lot about metrics yesterday, and I'll get into that. But a few things have to be unlearned. One is starting with the answer; that is our default pattern in the enterprise: start with the answer. We're moving to problem and opportunity statements. We have the customer on the periphery again, so much so that when our customer comes to us and says, hey, that didn't work for me, I'd like to change it, we say, no, I've got this requirement spreadsheet, that's my customer. We've got to put them at the center and work around them in an evidence-based way, and then we become learning-driven.

(07:21):

Barry talked a lot about that, the precision and accuracy you heard about from David Bland. And then getting everybody accountable through clarity is really important. And I think one thing that was left out, not left out per se, but when Martin talked about how 95% of your people don't understand your strategy, there's an emphasis on communication. But there's also an emphasis, and he said it, but I'll just say it much more bluntly, on the fact that you actually have to have a good strategy. Most strategies are hard to understand because they suck, and even when they're good, they have to be able to fit in people's heads. In the software world, we have this concept of a context that can fit in your head, and you really don't want to architect beyond the context that can be held in one person's head. Similarly, you don't want to do that from a strategy and mission perspective.

(08:09):

Alright, so I'm just going to use OGSM as an example. We talked about objectives and strategy a lot yesterday. Goals would be time-bound, numerical articulations of those objectives, and then there are the metrics that Alistair talked about. When you put all of those together, you can start building a mission command tree for your strategy. Now, I know some people hate cascades and think you should never cascade strategy, but I tell you, when you work in an organization like the Department of Defense, where you're starting with the national defense strategy and then trying to say, what should I do as a product developer way down here at the bottom of, literally, the largest organization on the planet, you need things in between to get people aligned. And so here, the strategy at each layer becomes the objective at the next layer. So in a mission command hierarchy, I would give you the strategy.

(09:02):

And again, remember, strategy is the why of movement: why this, not that. There are decisions we make about the direction we're going, because there are a lot of ways we could get to our objective. And so that strategy becomes the objective for the next layer down. You can see you get out here where now you have four product teams that all know why they're doing the thing they're doing, their objective, what they're going to go build, and how it relates back to the overall strategy. And summing that up, Martin said it brilliantly: a great strategy is a coherent set of choices about what we're going to do to achieve our vision. But then you have to bring in the mission rationale, because we're not a profit-led set of organizations. So Jason brought up confounding constraints, big ones that we see a lot in defense and the larger federal government: time and people.
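
One way to picture that cascade is a small tree where each layer's strategy becomes the next layer's objective; the contents below are invented for illustration:

```python
# Sketch of a mission command tree: each layer's strategy becomes the
# next layer's objective. All strings here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Layer:
    objective: str          # the "why" handed down from the layer above
    strategy: str           # the "why this, not that" chosen at this layer
    children: list = field(default_factory=list)

    def add_child(self, strategy: str) -> "Layer":
        # The child's objective is exactly this layer's strategy.
        child = Layer(objective=self.strategy, strategy=strategy)
        self.children.append(child)
        return child

top = Layer(objective="national defense strategy goal",
            strategy="own the space tasking cycle end to end")
team = top.add_child(strategy="cut plan-approval time from days to hours")

# Alignment by construction: every team can trace its objective upward.
assert team.objective == top.strategy
print(f"Team objective: {team.objective}")
```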

(09:57):

So he said, how might we do the same or do more with the same amount of time, or do more with the same amount of money? As you're making those decisions, he recommended using impact mapping. By the way, this is not aligned to the definitions we just used of impact and outcomes. If I were going to redo this, I would call the impact the mission impact, that's the goal, and I would call the things that actors do with the things that we give them a set of outcomes. One thing I want to highlight that's really important about mission impact mapping is that typically we don't need all nine of these. So we look at this and we make a bet. We say, I think I can do those three things to get to the goal. I don't need to do nine things, I only need to do three.
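
A minimal sketch of that bet: model the candidate deliverables, then fund only the few you believe reach the goal. All names and scores below are hypothetical:

```python
# An illustrative impact-map bet: nine candidate deliverables, fund the three
# we believe most move the goal. Names and scores are invented.
candidates = {
    "auto-generate tasking board": 0.8,   # believed contribution to the goal
    "conflict detection alerts": 0.7,
    "legacy system sync": 0.6,
    "audit dashboard": 0.4,
    "chat integration": 0.3,
    "offline mode": 0.3,
    "manual report exports": 0.2,
    "bulk editing": 0.2,
    "custom theming": 0.1,
}

bets = sorted(candidates, key=candidates.get, reverse=True)[:3]
print("Fund these three first, measure, then revisit:", bets)
```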

(10:40):

That's how we get efficient. But what happens in the requirements process is you get handed those four that I have highlighted there with none of the context. So Martin said another brilliant thing yesterday. He said a lot of these artifacts aren't that important; it's the conversations and alignment you gain along the way to produce them. And so when we hand this to developers, they have no connection to what outcome they're trying to achieve, for which actors, to achieve what impact. They're just told: do this thing. And what we know is that in this uncertain world, most of those things are going to be wrong. We'd be lucky if we could bat .500 here. And so there's a twofold problem that I see in the innovation community, and that's that, on the one hand, people in the innovation community don't want to take these things from subject matter experts.

(11:29):

If the people in the business build me this, I'm super excited. This just saved me something like six months of discovery and research from a whole bunch of people that have a ton of context. But also, they might not be right. They're giving me a good place to start; they're saving me time. And so on the one hand, the innovation community should have the responsibility of taking these things and using them as their starting point. Too often I see them throw it out, and they're like, we're doing discovery and framing for the next six weeks, and then it turns into 12 months because it's a really complex domain. And then before long we're looking at an old waterfall requirements process. But on the flip side, when we do these four things and it turns out that two of them didn't work, batting about .500, then we need those SMEs to say, hey, I appreciate you taking our starting point and testing those ideas.

(12:25):

I think we saved a lot of time here; let's pivot and focus on the next two. So it's a twofold responsibility on both sides. For those of us trying to be innovative and push boundaries, we have to be willing to start with what the experts are telling us, and they have to be willing to listen to what we learn along the way. I'm not going to recap this part, but it's worth mentioning again: how does this all fit together? This is where Alistair's message about what makes a good metric becomes really, really important. And if you think, oh my gosh, the things we're talking about can't be measured, I would point you back to Doug's talk, where he said you can literally measure everything, and in fact, somebody else probably already has. And especially when you know nothing, almost anything will tell you something.

(13:10):

It's probably my favorite quote of yesterday. So definitely set out trying to measure things, even if you don't have baselines, even if you think it's really hard to measure. So now we get to experiments, the important part. Barry talked about NUMMI, and I just want to bring up that when we started the software factory, or I should say reignited the use of the term software factory in the DOD, our advisors at Defense Digital Service said, hey, you should use a term that would be really comfortable to the people in the Pentagon who don't understand software but really understand factories and widgets. So let's call it a software factory, because we were inspired by NUMMI. What we had in mind when we said software factory was not an assembly line, which has become almost a synonym for platform in a lot of government spaces now.

(13:57):

It also was not a dingy, dark place that sounds awful to work in. Kessel Run was definitely not that. We defined it, and this is literally the old definition I pulled from the first time we pitched it. We said a software factory is the combination of people, process, and technology that provides the ideal conditions for continuously delivering valuable software users love, with minimal waste. That was what we were thinking about, very much what Barry showed, not just a platform. And then we want to conduct experiments to get there. Now, hat tip to Adam, actually: even though we called it a software factory, the place we worked, you've probably heard it called KREL, or some people said K-rel, if you didn't know what that stood for, was the Kessel Run Experimentation Lab. Very intentional. So just to reemphasize, it was never about software platforms; it was about continuous delivery.

(14:52):

It was about building a platform for learning quickly and experimenting. And you run into this problem as soon as you set out on all of this, which is: how the heck do I test all these hypotheses? Like, great, Bryon, I'll accept that we don't have all the answers. I'm going to make a hypothesis, I'm going to take a bet. How do I test it? I've got this crazy enterprise. The first thing that you have to do, your first strategy, your first why this, not that, is continuous delivery first. Continuous delivery is a prerequisite for establishing the experiment feedback loops you need to learn anything at all. And this is blasphemy in a lot of the alignment-first SAFe crowds, because they say we need to focus on alignment. I disagree, and there are a few reasons why. This is the definition of continuous delivery from the book Continuous Delivery by Jez Humble.

(15:41):

Continuous delivery is the ability to get changes of any kind, we'll say, into production safely, quickly, and in a sustainable way. And this relates to Josh Kruk's comment about the risk sawtooth. So I threw this in here last night; I was glad that he brought that up. This gives you the ability to rapidly validate all of your hypotheses instead of building up risk, which grows exponentially over time. And as you do that, you should be thinking about elite-level software delivery performance. So if you want to be an elite learner, you need to be elite at continuous delivery. Settling for, oh, I'd be happy if we could deploy once a month, that's actually a really slow learning cycle: not knowing for a month whether you built the right thing. In the DOD, I mean, look at your monthly burn rate. That's your cost of being wrong, your monthly burn rate.
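
The arithmetic behind that point, with invented numbers: if the burn rate is your cost of being wrong, shortening the deploy cycle shrinks the cost of each wrong bet:

```python
# Back-of-the-envelope cost of being wrong, with hypothetical numbers: at one
# deploy a month, each wrong bet costs roughly a month of burn before you can
# find out and correct it.
monthly_burn = 2_000_000              # hypothetical program burn rate, $/month
deploys_per_month = {"monthly": 1, "roughly daily": 20}

for cadence, n in deploys_per_month.items():
    cost_per_wrong_bet = monthly_burn / n
    print(f"{cadence} deploys: ~${cost_per_wrong_bet:,.0f} per wrong bet")
```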

(16:30):

And so I would say getting that down to daily is achievable. We've seen it done. I've seen it done in the VA, I've seen it done at the Department of State, and all across the DOD. So it is possible, and that allows you to avoid the alignment trap. So another shout-out to Lean Enterprise; this is from the book. There's a really great study you can read about in there. But essentially what happens, and this is well studied, is that when organizations start out with that SAFe approach, or any similar approach that emphasizes alignment first, they spend so much time, energy, and money on alignment activities and developing all of these hypotheses, or lists of requirements, things to go build, that by the time they actually go to try to test them, they don't have the means to test them. They haven't built the well-oiled IT, the path to production, to test them.

(17:22):

So they get trapped, like literally trapped. Once you get in that upper-left quadrant, you can't get out. And so you have to establish well-oiled IT first, then focus on alignment, and that's how you get to IT-enabled growth. So at the core, you're going to need a platform; DJ's going to talk to you about that later. Applications and data continuously delivered; Edward's going to talk about a lot of the practices and how to do that well. And then you need a really well-oiled path to production. We have a whole segment on lean GRC and how to approach things like an ATO in government. But then we can use continuous delivery to discover and capitalize on value and get to that mission impact. And that's where value stream mapping, domain-driven design, and goaling really come into play. So your first strategy is continuous delivery.

(18:10):

Once you've established that, that's when I think you can start going a little bit deeper on target conditions for the mission: establish continuous delivery first, then bring in these principles and practices. And that's going to unlock a few other things that are talked about in the book. One is being able to quickly cycle through small errors to find big discoveries. There's a great case study on this from Maersk if you're interested. It's a great enterprise case study, high dollar value, and that's shown on the right there. They ended up measuring it in cost of delay; this is black swan farming using cost of delay. There's a paper out there on it that's really fantastic. And I would say cost of delay in government is hard to measure, but not impossible. It's often not measured in dollars, but it's much more consequential, because usually the cost of delay is lives and essential citizen services.
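
As a sketch of the prioritization idea behind that paper, here is CD3 (cost of delay divided by duration) with invented values; as the talk notes, in government the cost may be lives or citizen services rather than dollars:

```python
# Sketch of CD3 (cost of delay divided by duration), the scheduling heuristic
# from the Black Swan Farming work; all values here are invented.
features = [
    {"name": "A", "cost_of_delay_per_week": 200_000, "duration_weeks": 2},
    {"name": "B", "cost_of_delay_per_week": 50_000,  "duration_weeks": 1},
    {"name": "C", "cost_of_delay_per_week": 500_000, "duration_weeks": 8},
]

for f in features:
    f["cd3"] = f["cost_of_delay_per_week"] / f["duration_weeks"]

# Doing the highest-CD3 item first minimizes total cost of delay in the queue.
for f in sorted(features, key=lambda f: f["cd3"], reverse=True):
    print(f"{f['name']}: CD3 = {f['cd3']:,.0f} per week of duration")
```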

(19:03):

So how do you start and scale, then? One really important thing that I want to emphasize, and that hasn't really been talked about yet, is that we say start small from a strategy and goals perspective, but you should really start small from a team perspective too. During one of the firesides yesterday, we talked about the diffusion of innovation curve, and I think I recapped it afterwards. I said something to the effect of: the diffusion of innovation also applies to the diffusion of ideas. It's the same curve, the same uptake of ideas. And if you're not familiar with that curve and Crossing the Chasm: essentially you have your innovators, your early adopters, and then your early and late majority. And one thing Geoffrey Moore says in Crossing the Chasm is that you can never market to two groups at once. And that's because the reason they're in these little segments is that they have different expectations for adoption, and you can't meet both sets of expectations at once.

(19:59):

So if we're talking about product, your early people are like, you could show them a wireframe and they're ready to go. They're like, sign me up, I want your product. And then you've got those early adopters that are like, oh, I'll follow those people and seed it. But then the other people are like, hey, until you get X, Y, and Z integrations and you can substitute for this other app that I'm using, I'm not adopting. And it's similar for ideas. The evidence people need to see to adopt your idea changes in each one of those segments under the bell curve, and you shouldn't talk to them until you can meet their expectations. And so when you're thinking about that from an organizational change perspective, and about who you're going to have go build your first apps, the people you need to bring are a very small group.

(20:47):

They're those innovators, maybe early adopters. There just aren't that many of them. So start small. Not only is there the two-pizza team rule that everybody always talks about; even more important than that, I think, is just realistically the number of people in your organization who are fit to seed these first few teams. So start small, and they should be balanced. We recommend, at a minimum, always having product management, and I say product, not project or program, for all the government folks in the room: product managers, designers, and engineers. When you put those three together, you tackle the areas David Bland showed you on the mission model canvas: desirability, feasibility, and viability. I think these are really important. And one thing I do want to say related to that is that a lot of times in mission domains, product managers are responsible for mission impact, right?

(21:40):

Designers are talking to users. But what I see when people come into a domain they're unfamiliar with as a product manager is that they often default to being another user empathizer: oh, I don't understand all this mission, but I understand what that person just told me. And you end up with two designers on your team when you really need a product manager with deep domain expertise, so they can make bets about how the product can have impact at the mission level while the designer is figuring out how to interact at the human level. Because we're not just delivering consumer software; we're delivering software that's supposed to produce a space tasking cycle for the space operations center. It's not just supposed to make that user happy. We have to make that user happy and produce desirable software so that they can use it, and use it effectively, so that they can improve the mission.

(22:30):

And that part can't get lost; it has to be owned by a product manager, and they have to have domain expertise. So I just really wanted to foot-stomp that. And then the last thing I'll leave you with, and Barry said this quite a bit: changing culture really happens one-to-one, through changing behaviors. We'll have a whole segment at the beginning of day three about ways you can approach this and do it at scale. But hopefully this helped put into context all the things we learned yesterday and how they're going to fit into what we're going to talk about over the next two days. I will say that there's one aspect that I'm not going to talk about today that's really important, and that's accountability. I mentioned growth boards; if that's something you're interested in or want to learn more about, reach out to us. We have a lot of resources we can share. But that's the next critical piece: once you start scaling, the accountability function becomes really important.
