The Art and Science of Continuous ATO
Summary:
On the stage at Prodacity, Rob Monroe delves into the nuances of continuous delivery for enhanced production outcomes. This session is a must-watch for leaders and change agents in GovTech, DevOps, and anyone passionate about digital transformation.
Transcript:
Rob Monroe (00:13):
What's up, Prodacity? Oh, come on. You guys can do way better than that. I know I'm the one that's standing between you and lunch. Just give me a little bit more energy, and I promise we'll get through this together. So what's going on, Prodacity? There we go. That's more like it. That's more like it. Hello everyone and welcome to "The Art and Science of cATO." This will be my take on what leaders and change agents can do to establish continuous improvement and continuous delivery for better outcomes in production. Or not...can we get the next slide? Great, thanks. Okay.
(01:03):
"That'll never work here." "This is the way that it's always been done." I'm sure most of you, if not all of you, have all probably heard some of these statements before throughout your careers.
(01:17):
"I'm going to make sure this doesn't get approved, and I'm going to shut you down." It seems pretty hostile, right? But I can guarantee you that I've experienced this type of response in just about every change initiative I've been involved with since my time at Boeing, MasterCard and now with the Department of Veteran Affairs. Look, change is really freaking hard. I get that. And we all know it. When you encounter these responses abruptly, they mean that we're wondering if our hard work is actually going to be derailed. I want to share a quick backstory of a relatable situation, and what ultimately led me down my personal career path here to getting into federal government.
(02:02):
In 2018, I was a senior product manager at the Boeing Company, and our team had just launched an internal product that enabled self-service provisioning and configuration of DevSecOps capabilities in minutes, at the click of a button. Who wouldn't want this? It seems pretty obvious this is a change we should try to adopt, because why is it that the hardest thing we do when we show up to work is just doing our jobs? That's why we were motivated to make this change. Fast forward to early 2019: I had the pleasure of meeting Bryon Kroger after he left the Air Force, and of pairing up with him for an initiative we labeled "Path-to-Prod." I remember the day pretty well. I was in our Bellevue office, scrambling to get some last-minute things done for a three-day offsite with about 30 to 40 people who were all responsible or accountable for the processes and policies defining how we delivered software at the Boeing Company. Bryon helped provide an inspiring set of remarks, as he always does, and I made sure to outline the challenges our teams would be solving for during the offsite, as well as the current conditions we had actually researched beforehand.
(03:24):
Now, of course, the very first step in any offsite is introducing a room full of strangers to each other. You could feel the positive energy rising as each person introduced themselves. The level of excitement increased. Everyone was excited about the opportunity to contribute to something as novel as establishing continuous software delivery in an organization that was a hundred years old, something we hadn't really done before and didn't have advocates for. And then, I kid you not, one gentleman in the group actually said, "I'm actually from a security and compliance organization, and I'm attending this offsite today so that I can try to stop you from doing continuous delivery."
(04:17):
To keep a long story short, luckily we had pre-negotiated some rules of engagement with these attendees' leadership teams prior to the offsite. Those rules helped keep everyone engaged in defining what would actually keep us safe while establishing and testing some hypotheses for continuous delivery, as well as the decisions we would make as a team when we left. As a result, we were able to co-create a vision and an experiment for how we would make it easier for product teams to safely achieve continuous delivery, by ensuring they had the right guidance from the beginning and throughout their entire journey. That ultimately led us to implementing the product portfolio you see before you.
(05:06):
Our objective and target conditions were simple: enable every single product team with a continuous path to production that could ship a pre-vetted HelloWorld app into a staging environment on day one. Have that same team ship their HelloWorld app to production within two weeks, with the focus on demonstrating software development lifecycle practices for continuous delivery, as well as the day-two operations necessary just to keep that app running. And finally, within the next two days, push one value-add change all the way through to production. And then repeat. We were building new behavior, and we designed a way to change behavior in order to change mindsets. To this day, my favorite user quote of all time from any engineer working on anything related to path-to-prod, or in this case cATO, is just three simple words: "It just works." And yes, oftentimes engineers do smile and tear up whenever they actually enjoy using things inside of our organizations. It's a fact.
(06:16):
My name is Rob Monroe. I'm a senior product manager at Rise8, and if it wasn't already obvious, I'm pretty obsessed with developer experiences and with helping organizations achieve continuous delivery of mission-critical software in a way that I believe is safer, leaner, and far more sustainable. All so that we can achieve better outcomes for our end users, as well as growth and longevity for our business.
(06:43):
Since May of 2022, I've dedicated my time to partnering with Lighthouse, a Veteran-centered API program within the Department of Veterans Affairs, where our teams established the first ongoing authorization and cATO in federal government history. Now, unfortunately, not all of the teammates who were involved in this, and as Bryon just pointed out, there are a lot of them, could be here in person today. But I would like to surprise some individuals in the audience: if you are a teammate, on the platform team, an assessor, or a stakeholder from this organization, please stand for a round of applause, because you absolutely deserve recognition for the hard work and dedication behind this achievement. Thank you all.
(07:40):
Now, during my time at the VA, I frequently get asked, "What the heck is a cATO anyway, and where did this term even come from?" And if you're my wife, it sounds more like begging me to talk about anything other than cATO. So I thought I'd quickly break down some of these terms just so we're all on the same page, in case you're like me and didn't come from the federal government or DOD. The authority to operate, or ATO, is essentially the outcome of performing what's called the NIST Risk Management Framework (NIST RMF), which guides federal government agencies on how to effectively manage security and privacy risks.
(08:24):
It is the official authorization, or as I like to refer to it in some cases, "permission," to launch a system, or a change to a system, into a production environment, and to accept any risks to organizational operations, assets, individuals, other organizations, and in some cases, national-level security. Ongoing authorization means that the security controls and risks are assessed at a frequency that sufficiently supports risk-based decisions and adequately protects the information in our systems. And now, of course, cATO. This is actually a branded term that manifested from a Kessel Run standard operating procedure describing how the Air Force was able to do ongoing authorization.
(09:24):
So wait a second. If the Air Force and now the VA have been able to achieve cATO, then what's holding the rest of us back from doing the same thing? In order to grapple with this, we first need to take a step back, like these organizations did, and understand how we got to where we are today. Most, if not all of us, probably remember a time when software was developed and delivered over the span of years, and that used to be the norm. Unfortunately, it still is today. We got really comfortable with building lots of assumptions into our finished products, segmenting roles, responsibilities, and even skills into explicit stages, relying heavily on documentation as if it were our preferred method for shared context, and then testing our software after implementing features.
(10:17):
Now, in fact, organizations within the federal government actually exacerbated this problem, because our approaches to RMF are very much waterfall-based as well. We front-loaded several of our RMF steps, brought in third-party assessors at the very end of our efforts, and THEN a senior organizational official would authorize the system and the risks we were taking with us into production. We would then deploy, release, say our thank-yous, and repeat this heroic effort in one to three years when our ATO expired.
(10:55):
So if we're following NIST RMF to authorize our systems, and we're experiencing a lot of pain in doing so, then perhaps RMF is the problem and the reason we're not achieving ongoing authorization and cATO. It actually turns out, and I'll spare you the couple of hours it might take to read the entire document, that NIST expects organizations to have significant flexibility in how each of the RMF steps is carried out, as long as we are meeting our applicable requirements for managing security and privacy risks. In fact, NIST encourages organizations to maximize the use of automation wherever possible to increase the effectiveness, efficiency, and execution of RMF.
(11:43):
NIST also states that the best RMF implementations are ones that are indistinguishable from our software development lifecycle processes. So when I read through this document, two things came to mind. First, we should be aligning our RMF implementations and strategies around our Agile and DevSecOps principles and practices, not the other way around, given that we are leveraging modern practices to deliver software. Second, do our RMF-supporting teams actually know what we consider routine SDLC processes in today's modern world? In my experience, quite often that is not the case, and some education is needed, and that's okay. Finally, the latest version of NIST RMF also comes with a list of tips on how to streamline implementation. My personal top five, which we will touch on today, are: maximizing the use of cloud-based systems, services, and applications; establishing common control providers; maximizing the reuse of artifacts; automation; and gradually adding in continuous monitoring strategies.
(12:55):
So when you stop and think about it, continuous delivery in support of demonstrating continuous RMF is really another exercise in maturing risk reduction. The goal of continuous delivery, in my mind, and I borrow this from Jez Humble because I think it rings true, is to have all deployments be so routine that our teams can do them at any time, with no real impact on our customers, users, or the organization.
(13:31):
Today I want to break down and share what I've seen work well at establishing and continuously improving RMF, in three areas: starting with the science for people and process, then the technology that can enable those to be better, and finishing up with the art. So what should this look like, and who should be involved? You want to start by recruiting your dream team. Looking within our own organizations, where can we identify and redirect the right people to tackle the specific challenges we're about to face? Having passionate change agents within your ranks, people who also possess local context for how your business operates, is a non-negotiable starting point. After that, I can tell you from my own experience of being enabled this way that bringing a fully balanced team to this problem, equipped with lean product management, user-centered design, and modern engineering practices, is just as valuable for enabling excellent service design for internal solutions and the operations or processes we're about to introduce. Because believe it or not, your employees and contractors are users. Their goal is to effectively and safely deliver value to an end user. They want to do the right thing. We have to make it easier for them, and if we don't help them do their jobs effectively and efficiently, they will make their own choice to go a different direction.
(15:07):
Now, it's okay if these team members aren't already equipped with these modern practices. In that case, I would recommend investing in finding someone within your organization who is, and pairing them up; that is possible. Or you can find an external partner who can enable your people to think in different ways and introduce these new skill sets. In my opinion, if we're going to succeed in changing the way we deliver software, we can't afford not to understand how to research, prioritize, and validate the problems we actually need to solve. What's actually blocking us? What opportunities are most viable for dramatically impacting the business? And how do we effectively solve these problems based upon the local context of our business?
(15:55):
As part of your dream team, you'll also want to identify a set of stakeholders and where each stands, as either an advocate or a detractor of your vision and mission. I like to leverage something like a stakeholder influence map to align our teams on expectations before we start any work, and to validate whether each stakeholder needs direct or indirect involvement. Where and what do they have influence or authority over? What type of coaching might we need to invest in to help with perceptions, credibility, or trust? And finally, what motivations or incentives do they operate with? Something else I like to suggest, which doesn't always gain traction, is starting up a book club. Bringing together the stakeholders and teams who are about to go on this transformational journey, with reading material purposely chosen for the mission ahead, can open up a safe avenue for dialogue about the challenges and changes we're bound to face. Whatever your stakeholder list looks like, and regardless of where they fall on the map, I'm a huge proponent of everyone going through a regular cadence of walking our software factory.
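If it helps to make the influence map concrete, here is a minimal sketch in Python of how a team might capture it as structured data; the field names and example entries are hypothetical, not from any tool we actually used:

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """One row of a stakeholder influence map (hypothetical fields)."""
    name: str
    stance: str            # "advocate" or "detractor"
    involvement: str       # "direct" or "indirect"
    influence_over: list[str] = field(default_factory=list)  # policies, budgets, etc.
    coaching_needs: str = ""   # perceptions, credibility, or trust gaps to invest in
    incentives: str = ""       # what motivates them day to day

stakeholders = [
    Stakeholder("CISO", "detractor", "direct",
                ["security policy", "assessment staffing"],
                coaching_needs="build trust in automated evidence",
                incentives="fewer audit findings"),
    Stakeholder("Platform lead", "advocate", "direct", ["pipeline roadmap"]),
]

# Surface the riskiest combination first: detractors with direct involvement.
for s in stakeholders:
    if s.stance == "detractor" and s.involvement == "direct":
        print(f"Prioritize coaching for {s.name}: {s.coaching_needs}")
```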
(17:13):
Trust me when I say, you do not want to leave understanding of your change initiative up to interpretation through traditional briefing materials like PowerPoints or Word docs. Having someone actually invest time directly with a team that is using your cATO and path-to-prod will be a far more valuable, and more enjoyable, way for them to clearly understand what's actually happening.
(17:43):
Okay, so we've started to assemble our team. Now what? Our goal is to instill greater trust that our adjustments to people and process will enable greater success when using our cATO to ship code. At the VA, we started by embedding security control assessors with high technical proficiency directly within product teams, at a ratio of one assessor to four app teams. We did this because, traditionally, teams would have to coordinate with an assessor organization months in advance, then bring assessors in for a solid week during which, having never understood the real context of the system from the beginning, they would be ramped up through documentation, screenshots, and a little bit of shoulder surfing. Only then would they finally assess all the applicable controls that had been selected at the beginning.
(18:38):
What we wanted to do was enable an environment where assessors and product teams could achieve ongoing context flow and go deep on technical implementation details with engineers, so that everyone was more effective and comfortable at digesting things like the problem we're actually trying to solve, the users who will be affected by the system, what we intend to build, how we intend to build it, and of course, the actual data and technologies we'll be using. This ensured that everyone was well prepared to accurately and effectively categorize the risk of our systems, and to select and verify appropriate controls based upon the system's unique context as it changed along the way. Control implementation details with up-to-date documentation is probably something we would never have imagined becoming a reality, but what we came up with could now become part of the iterative and incremental development process itself, with our NIST RMF and SDLC tool suites now complementing each other. At the VA, we were heavily committed to improving accessibility and transparency, so we leaned heavily into solutions like SD Elements as a means to collaborate and communicate changes in risk more effectively with all parties involved. This meant we could profile system risk with a given survey and a diagrammed threat model at the beginning and throughout its changes, release over release. We then had an easier way to translate this into actionable tasks for our backlogs, where we could prioritize them and sync them up with the rest of our product backlog priorities.
(20:23):
This is an actual example of the expectations we establish for inline evidence on those backlog items between engineers and assessors. At a minimum, I would implore you to expect: the technical decisions made on the implementation of the task; links to the actual software implementation or other maintained artifacts; clear identification of the engineer who implemented the solution; and clear identification of the security assessor who verified the task. Outside of the normal day-to-day conversations that can easily happen over persistent chat tools or in-person water cooler talks, I would highly recommend that assessors and development teams meet on a weekly cadence, where they can learn what's coming up next on product roadmaps and product requirements documentation, confirm whether any of these changes introduce new risks or a valid reason to reassess controls that were previously assessed, and help product managers prioritize applicable controls among other backlog items.
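As a rough sketch of how those minimum expectations could be checked automatically, assuming hypothetical field names rather than our actual tracker schema:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """Minimum inline evidence expected on a security-relevant backlog item."""
    task_id: str
    decision_summary: str      # technical decisions made on the implementation
    artifact_links: list[str]  # links to code, config, or other maintained artifacts
    implemented_by: str        # engineer who implemented the solution
    verified_by: str           # security assessor who verified the task

def meets_minimum(record: EvidenceRecord) -> list[str]:
    """Return a list of gaps; an empty list means the record is complete."""
    gaps = []
    if not record.decision_summary.strip():
        gaps.append("missing technical decision summary")
    if not record.artifact_links:
        gaps.append("no links to implementation artifacts")
    if not record.implemented_by:
        gaps.append("implementing engineer not identified")
    if not record.verified_by:
        gaps.append("verifying assessor not identified")
    return gaps

record = EvidenceRecord("LH-1234", "Enforced TLS 1.2+ at the ingress gateway",
                        ["https://git.example/repo/pull/42"], "j.engineer", "")
print(meets_minimum(record))  # -> ['verifying assessor not identified']
```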
(21:29):
This is also an opportunity to review product health and security vulnerability metrics, support the team with retrospective conversations on how to improve performance, and help answer any additional questions a teammate might have about this journey. As we near the point of authorizing our system for production, assessors can now generate the security assessment report using data from our security tool suite, rather than reaching out to external parties who manage governance requirements and compliance systems that some of us don't even have access to, even though we're the actual caretakers of the system being developed. At a minimum, this report should entail control implementation details, scanning results and the confirmed actions, an assigned risk rating, and an outline of any deficient requirements we took with us into production that still need to be addressed. With milestones and timelines, of course.
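A minimal sketch of that report assembly, with data shapes and the risk-rating rule invented purely for illustration, might look like this:

```python
def build_assessment_report(controls, scan_findings, open_items):
    """Assemble an assessment report from the team's own tool data.

    controls: {control_id: implementation_detail}
    scan_findings: [{"id": ..., "severity": ..., "status": "confirmed"|"resolved"}]
    open_items: [{"requirement": ..., "milestone": ..., "due": ...}]
    """
    confirmed = [f for f in scan_findings if f["status"] == "confirmed"]
    # A naive rating purely for illustration: drive it off confirmed highs.
    rating = "high" if any(f["severity"] == "high" for f in confirmed) else "moderate"
    return {
        "control_implementations": controls,
        "confirmed_findings": confirmed,
        "risk_rating": rating,
        "plan_of_action": open_items,  # deficiencies with milestones and timelines
    }

report = build_assessment_report(
    {"AC-2": "Account lifecycle automated via platform IdP"},
    [{"id": "CVE-2023-0001", "severity": "high", "status": "confirmed"}],
    [{"requirement": "rotate legacy service keys", "milestone": "Q3", "due": "2023-09-30"}],
)
print(report["risk_rating"])  # -> high
```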
(22:26):
Our approach to continuous monitoring does leverage things like 24/7 monitoring of security vulnerabilities in our production containers, and the typical continuous monitoring solutions that a CSOC or NSOC organization would operate for you, watching the health and performance of your platforms, infrastructure, and the applications running on them. But I would look at this as an opportunity for even more continuous monitoring, because embedded assessors can also provide significant value by reviewing scan results when our developers mark findings as false positives, or when they suppress findings with the intention of addressing them in future sprints. These are opportunities to provide immediate pushback and feedback when there are disagreements or insufficient details. A highly technical assessor can coach on actual cybersecurity mitigation strategies that the team can then employ. The team is learning, and the assessors are at the same time learning what our system is attempting to do. They can also perform spot-check audits on any portion of your cATO process, because they're now immersed in everything that we do and why we do it. Or they can even have some fun running different forms of penetration testing exercises. After all, they are highly technical, and they do love to work with software. Why not?
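As one concrete form that assessor review could take, here's a small sketch that flags suppressed or false-positive findings lacking a justification or sitting past their promised fix date; the finding format is made up for the example:

```python
from datetime import date

findings = [
    {"id": "F-101", "status": "false_positive", "justification": ""},
    {"id": "F-102", "status": "suppressed", "justification": "fix scheduled",
     "address_by": date(2023, 1, 15)},
    {"id": "F-103", "status": "open"},
]

def needs_assessor_review(finding, today=date.today()):
    """Flag suppressions that deserve immediate pushback or follow-up."""
    if finding["status"] == "false_positive" and not finding.get("justification"):
        return "false positive with no justification"
    if finding["status"] == "suppressed" and finding.get("address_by", today) < today:
        return "suppression past its promised fix date"
    return None

for f in findings:
    reason = needs_assessor_review(f)
    if reason:
        print(f"{f['id']}: {reason}")
```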
(23:49):
How should we approach getting initial approval for ongoing authorization and cATO? At some point or another, you're going to need that approval, so I wanted to offer some thoughts on where to start, because this aligns with how we actually did it at the Department of Veterans Affairs.
(24:05):
Start with an initial zero-based authorization for the entire system as a whole: your infrastructure, your platform, your centralized pipelines, and what the applications are going to look like on this platform, all wrapped in the overall approach and processes you're defining for your cATO. Now, I mentioned this earlier: if you're using a GRC tool such as eMASS and it isn't supporting your SDLC desires for agility and speed, take it as an opportunity, as we did, to get a signed waiver. That waiver let us move the place where we manage the context of how we meet controls, our body of evidence, out of eMASS, and localize it to our processes and our people, making it easier for them to work with. We'll talk about that in a second. So if that is a blocker for you, I would implore you to get a waiver to leverage something like SD Elements, as we did, or something similar.
(25:02):
You can then leverage a memorandum signed by your AO that outlines the deviations from your organization's standard RMF practices and policies. We're not trying to rewrite them; we're simply stating where we have deviated, where we have done something different, and why we believe it to be more valuable. Then you'll want to implement what's known as a renewal frequency for your ongoing authorization. Some of us might be wondering, "Why would I even need to do that? We have ongoing auth; I thought this was supposed to be easier. Why would we revisit this conversation again?" Well, as we saw at the beginning of the presentation, teams often view ATOs as non-value-added work that they stop thinking about once it's achieved. What we really want is a culture and mindset shift, where our teams treat security, privacy, and operations as first-class citizens, because just like their products, these things are never really done. Lastly, I would also implore you to implement what we call quarterly risk reviews with your stakeholders. This is your opportunity to demonstrate what is and is not working for cATO, identify which improvements could be prioritized to further mature it, and share what you're learning about the journey along the way. Start with manual briefs to validate, in low-fidelity ways, what helps you have richer conversations about actual risks, shifting away from the typical "when things are due" and "when things are out of compliance" checklists. Then pivot to automated reports and dashboards to make that a more efficient, higher-frequency conversation going forward.
(26:48):
So we've focused on the people and the process, and we've got a pretty good starting point. How can the technology actually enable these pieces even further? Whether you choose to build, buy, or rent your platform is completely up to you and your local context or constraints. But make no mistake: cloud infrastructure and platforms are a prerequisite for cATO outcomes within your enterprise if you want to achieve economies of scale. We heard a great deal about that from Bryon earlier. A platform will greatly reduce the overhead of maintaining the layers of your tech stack that sit below the value line for mission-critical apps, thus enabling greater acceleration and scalability for ongoing authorization and cATO.
(27:39):
By establishing your infrastructure and platform layers as fully assessed common control providers, and maximizing reuse of control implementation artifacts for your product teams to consume and reference in their own packages, you can establish what we refer to as a controls inheritance model that reduces the effort for your product teams. In this case, we're bringing all the context of the controls addressed by these different layers into a single, consolidated location so that, as changes are made and implementation details shift, teams are aware of the impact of those decisions and can account for it in their own control baselines. As an example, during our initial approval for ongoing authorization within the VA Lighthouse program, we saw that nearly 70% of NIST controls could be inherited by applications from organizational constructs, the infrastructure and platform, and even the SECREL pipeline. Roughly 27% of that inheritance was directly attributable to the platform alone. Bryon covered this in his talk "Why GOVTECH Platforms Don't Have to Suck," and he did a really great job, so I won't go into great detail here. But I want to reemphasize one of my earlier statements: when we enable teams with modern practices, and deliver this platform and its capabilities and services in a way that is viable and desirable for your engineers and platform consumers, we will drive better overall user adoption and metrics for both your platform and your cATO, which will further saturate your organizational change strategy.
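To illustrate the inheritance model arithmetic, here's a toy sketch; the control-to-layer mapping is invented for the example and does not reflect our real baseline:

```python
# Map each control in the baseline to the layer that satisfies it.
# Control IDs and assignments are illustrative only.
control_providers = {
    "AC-2": "platform", "AU-2": "platform", "SC-13": "platform",
    "SC-7": "infrastructure",
    "CM-6": "pipeline", "RA-5": "pipeline", "SI-2": "pipeline",
    "AT-2": "organization", "IR-4": "organization",
    "PL-8": "application",
}

def inheritance_coverage(providers: dict[str, str]) -> dict[str, float]:
    """Percent of the control baseline each provider layer covers."""
    total = len(providers)
    counts: dict[str, int] = {}
    for layer in providers.values():
        counts[layer] = counts.get(layer, 0) + 1
    return {layer: 100 * n / total for layer, n in counts.items()}

coverage = inheritance_coverage(control_providers)
inherited = 100 - coverage.get("application", 0)  # everything app teams don't own
print(f"Inherited by app teams: {inherited:.0f}%")  # -> 90% in this toy example
for layer, pct in sorted(coverage.items(), key=lambda kv: -kv[1]):
    print(f"{layer}: {pct:.0f}%")
```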
(29:29):
Earlier I quoted NIST stating that the best RMF implementations are indistinguishable from your SDLC process. For this reason, we made four big changes to make the right thing the easy thing. First, we pivoted security vulnerability scans away from annual exercises performed manually by third-party groups several layers removed from the actual context of these systems, and adopted what we call the secure release (SECREL) pipeline, centralizing that service offering and enabling security scans to occur on every single code commit. We also enabled 24/7 runtime monitoring with a solution called Aqua, which helped us make sure we were catching vulnerabilities that got past our development cycles or showed up as zero-day issues in containers running in production. We also adopted SD Elements, as I mentioned before, and looked at how we could apply its feature set to achieving some of those NIST RMF activities. In our case, that meant things like system categorization and assessing the privacy risks of systems as they came onto the platform, and really shifting away from our practices of leveraging Microsoft documents, PDFs, and even emails to get the same job done. And, of course, it avoided the issue of manually handing these things off to other people, who would then manually upload them into GRC solutions like eMASS.
(31:11):
Last but not least, we implemented policy as code, using specific gating criteria that leverage data from all of our solutions. This created an opportunity for risk-based decisions and enforcement, which improved reciprocity for our security, privacy, and product teams. Because our path-to-prod became more visible, everyone could support unblocking flow, and by communicating blockers directly next to where engineers write code, we made it easier for them to do the right thing. In short, a security and release pipeline will help you achieve an actionable, measurable, and auditable expectation of behavior that everyone can actually understand.
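Here's a minimal sketch of that kind of gating criterion; the thresholds and input shapes are invented for illustration, not taken from our actual pipeline:

```python
def release_gate(scan_results, controls_verified, waivers):
    """Policy as code: decide whether a build may proceed to production.

    scan_results: [{"id": ..., "severity": "critical"|"high"|...}]
    controls_verified: fraction of applicable controls verified (0.0 to 1.0)
    waivers: set of finding IDs with an approved, documented risk acceptance
    """
    blockers = []
    for finding in scan_results:
        if finding["severity"] in ("critical", "high") and finding["id"] not in waivers:
            blockers.append(f"unwaived {finding['severity']} finding {finding['id']}")
    if controls_verified < 0.95:  # illustrative threshold
        blockers.append(f"only {controls_verified:.0%} of controls verified")
    return (len(blockers) == 0), blockers

ok, blockers = release_gate(
    [{"id": "CVE-2024-0001", "severity": "high"}],
    controls_verified=0.97,
    waivers=set(),
)
# Surface blockers next to where engineers work, e.g., as a failed pipeline step.
print("PASS" if ok else "BLOCKED", blockers)
```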
(31:57):
So maybe I've convinced everyone in this room, and that would be great. That would make my life easier. But I can understand that there may be some skepticism in the room after conversations like these. So there's typically a question along the lines of, "Do we really need to adjust our people, process, and technology in order to achieve ongoing auth and cATO?" The short answer is yes, you do. Here's my rationale. Remember that a waterfall SDLC process, coupled with fragmented RMF departments and external handoffs using disparate tools, will always just be waterfall RMF. Nothing has changed.
(32:37):
Now, we could just focus on cleaning up our own house, improve the way we develop, test, and deliver software, and say we're doing "Agile." But if we're still bogged down with manual processes governed by external teams several layers removed from our actual system context or SDLC process, we still won't be agile when it comes to business agility and delivering end-user outcomes. We could address other areas of risk beyond just privacy and security by including additional roles and competencies within our cross-functional teams. But if we're still waiting on external teams to tell us what we need to solve, what controls are actually applicable for our system, or how and when those controls will be assessed, and even waiting to be told about incidents or outages that need our attention, we're still not making continuous risk-based decisions at a rate that adequately protects our systems or their information.
(33:48):
If you're taking opportunities and requirements from an outside source, such as standard requirements documentation passed down through your contract vehicles, or perhaps from a senior manager or executive playing the role of a product owner and telling you what to do, you're missing out on a very critical piece of context that these teams need: why we're even doing this in the first place, and what value it actually delivers to our users and the business. Being left out of those conversations and the opportunity to learn and discover these things makes it really difficult for teams to have real autonomy and be empowered to make decisions. You're just introducing other forms of risk that will quickly deflate all the value achieved by those previous optimizations. But a fully balanced team, enabled with modern practices, empowered to make risk-based decisions for their system and held accountable for the outcomes, with ongoing context flow with security and privacy experts at their disposal, and using new processes and technology that support both continuous learning and continuous delivery... that is how we're going to achieve a true sense of ongoing authorization and continuous RMF.
(35:14):
As an example, the benefits you'll see after these behavior changes will look something like this: a higher sense of urgency around, and greater ease of, addressing security vulnerabilities; a reduction in both the volume and the aging of risks we accept going into production; and all of this leads to a faster time to market. Let's summarize some myths we've debunked about cATO today. The RMF is purposely designed to be technology-agnostic, so that any methodology can be applied to any type of information system without modification. cATO requires RMF excellence: you actually need more RMF, and it's going to take more work, not less. You still have to document things, and if you say it's in Git, it had better be in Git. Any authorizing official can grant an ongoing authorization and cATO; you just have to be willing to demonstrate it to them. So what's the art that actually makes this all work? I'm a strong believer that leveraging analogies and metaphors, with good storytelling, can make your visions and radical changes a lot easier to digest and relate to within your large organizations. During my time at Boeing, our teams had a lot of fun relating path-to-prod to I-5 and the Autobahn.
(36:41):
Because of course, who doesn't want a navigation assistant that helps determine your optimal route to production and suggests important stops to make along the way? Perhaps you want to customize your car, or take the scenic route to production, in this case, a customized set of pipeline capabilities. Or you could just hop on the Autobahn with cruise control at high speed and let our CI pipeline generator do all that work for you. Heck, we even had highway repair workers who regularly maintained those nasty potholes that would inevitably pop up in your pipelines, and rest assured, we took care of things so you could resume normal travel. Okay, I'll admit, we went a little overboard with this during the initiative, but I think you get my point.
(37:32):
The actual art of achieving continuous authority to operate, and improving it from day one, is empathy. If you're a leader of a cATO transformation, remember that the teams supporting your vision are battling years of "this is how it's always been done." You're going to need to be patient. That means listening more than speaking, and being a helpful coach to your teams as they battle that bureaucracy.
(38:02):
Celebrate the wins, and expect to play the long game. Over-communicate. You want to make sure everyone is rowing in the same direction, together, and don't leave things up to interpretation. Incentivize collaboration over power; no one person should have all the answers. Foster a culture of psychological safety. And trust me, this is going to shock your entire system, your entire way of thinking and working inside your organizations. Expect that there will be failures, and focus on what you can learn from them and improve. And again, for all of you change agents out there, take it from me: if you're working towards achieving willing cooperation from others who don't see your viewpoint, be patient with those who don't share your aspirations for change. Learn and demonstrate how you care about their concerns and their fears, and show what you're doing to help them through that. Leverage all the brains around you. Bring them in; don't leave them out. Remember, you're going to have difficult conversations, and you won't always receive the responses you're expecting.
(39:13):
Now, here's how I would try to handle those really tough conversations, and these are my final remarks. RMF is our common denominator; start there. We saw that NIST RMF actually expects flexibility and automation, and that it gives us a better path forward for making continuous RMF a reality. Discuss real concerns, and leverage data whenever possible in those conversations, without generalizing the problems. Compare outcomes, not intentions. And lastly, invite folks to participate in your experiments and create a better process together. I guarantee you'll both learn something new.
(39:59):
I want to say thank you all so much for your time today. I really appreciate your involvement and your energy. You can connect with me on LinkedIn if you'd like to continue the conversation, and also don't forget to follow Rise8 so that you can get a copy of our first cATO playbook that we plan to release later this year, and of course, enjoy the rest of Prodacity. Thank you.