Fireside Reflection with Clinton Herget, Luke Strebel, and Asare Nkansah
Summary:
What does secure software delivery look like in the federal space? In this engaging session from Prodacity 2025, Clinton Herget (Snyk), Luke Strebel (VA), and Asare Nkansah (VA) dive into the challenges and opportunities of DevSecOps, platform engineering, and continuous ATO in government software development.
From security as a bottleneck to balancing compliance and developer experience, this discussion explores how federal agencies can embrace modern software practices while ensuring mission-critical security and reliability.
🔹 Key Topics Covered:
- Why platform engineering is key to DevSecOps success
- The biggest security challenges in government software delivery
- How federal DevSecOps teams can reduce friction & cognitive load
- The role of continuous ATO in streamlining compliance
- How to balance developer speed with security best practices
- Why AI-powered tools can help secure and accelerate DevOps pipelines
🕒 Key Highlights & Timestamps:
[00:03] - Introduction: What DevSecOps really means in the federal space
[01:35] - The role of platform engineering in secure software delivery
[04:50] - Why traditional security models slow down developers & increase risk
[06:19] - Developer experience vs. security compliance: How to find balance
[08:14] - The power of automation in security & risk management
[10:47] - Choosing the right security tools for federal DevOps environments
[13:32] - Why shifting left isn’t enough—developers need better context
[15:59] - How federal teams are implementing Continuous ATO (cATO)
[18:44] - The challenge of AI-generated code in secure environments
[22:55] - What’s next? The future of DevSecOps in federal technology
🔗 Stay Connected:
Learn more about Prodacity: https://www.rise8.us/prodacity
Follow us on X: https://x.com/Rise8_Inc
Connect with us on LinkedIn: https://www.linkedin.com/company/rise8/
👍 Like, Subscribe, and Share:
If you’re working on DevSecOps, federal software security, or continuous compliance, give this video a thumbs up, subscribe to our channel, and share it with your network. Secure software delivery doesn’t have to slow you down.
#Prodacity2025 #DevSecOps #ClintonHerget #LukeStrebel #AsareNkansah #ContinuousATO #PlatformEngineering #SoftwareSecurity #DigitalTransformation
Transcript:
Clinton Herget (00:03):
Thanks, everybody. We have heard a lot of really interesting strategic thinking today. We talked about setting the right goals. We talked about measuring what matters. In particular, we talked about identifying risk in a lot of the decision making. So what we wanted to do now is bring that down to a slightly more tactical level and tell a real-world story about a high-velocity software development project at the VA, and really weave through a lot of these things we've been hearing about when it comes to this kind of high-velocity software development, particularly in the federal space, as we are now, what, 10 or 12 years into what I'll call the DevSecOps revolution, and some of the challenges that we continue to face. So without further ado, I would love to have my two guests here introduce themselves: Luke, and then Asare.
Luke Strebel (00:46):
Sweet. Yeah. Hey everyone. My name's Luke. I'm the product manager for the Secure Release Pipeline at the Department of Veterans Affairs. We make sure that veterans, clinicians, and doctors are connected to the data they need to make that veteran's care the best possible experience, and we do that by enabling a top-notch developer experience.
Asare Nkansah (01:04):
Well said. My name is Asare Nkansah. As a platform engineer, I help make developers' lives a lot less complicated. A lot of developers don't want to learn how to use Kubernetes, or figure out how Argo CD works, or get into the weeds of Istio. So we try to make sure that developers are focused on the end mission, being able to provide for the end user. As a platform engineer, I help make sure they have the tools, the self-service capabilities, and the golden path that they need in order to do what they need to do.
Clinton Herget (01:35):
And I think that's really important. Asare, when you talk about the rise of platform engineering that we've certainly seen in the private sector over the past few years, almost as a reaction to maybe some of the chaos of the early days of DevOps. Originally it was like let developers do what they want, pick their own tech stack, break things and move fast. And it turns out that's a really great way to introduce a lot of risk and vulnerability into your software. And so at least the way I see it, platform engineering almost comes as a response to that to say, we need to give developers a paved road to drive on so that we're not necessarily limiting the amount of choice that they can make, but nudging them toward the choices that we know are going to best benefit the mission. Is that kind of how you see your responsibility?
Asare Nkansah (02:16):
Yeah, I think that's well said. I think it's tricky because different platforms serve different purposes, right? AWS has a platform with lots of options, lots of different ways you can configure it, and that's for a specific group of people. At least for what we want to do, we don't want developers to be faced with a lot of that complexity upfront, or even, from the security point of view, a lot of the complexity that comes with: what do I need to implement? What do I not need to implement? What does that look like? So yes, we try to provide at least enough configuration that they can do the things they need to do, while limiting the amount of overhead that comes with that.
Clinton Herget (02:51):
Yeah, absolutely. And of course, we're all thinking about that in terms of the risk that those developers are bringing on board to the organization, which is not something that's in the natural critical path of building software. As a developer, I can very easily build vulnerable software that succeeds in meeting all of my KPIs. We've talked a lot about Goodhart's law today, and it turns out that when you measure developers on the number of user stories they're successfully delivering, you're going to see all those user stories delivered into production as quickly as possible. Nowhere on that list is "and also it can't be vulnerable," or "and also you're going to be docked if we happen to have a successful breach or attack from the outside," because that's actually very hard to do, right? Adequately tracing a potential vulnerability back to a developer. So Luke, I'm kind of interested in your experience. How do you see the relationship between the experience of the developer in building software, particularly in a federal context, and the kind of risk they're bringing on board, especially when there's a security team whose entire job, as traditionally conceived, is to slow them down, to put up gates, to make it harder to deliver those use cases because of the security impact?
Luke Strebel (04:05):
Yeah, definitely. I think one of the interesting experiences I've had joining this program is finding that a lot of our developers may not be coming from the same background. They're not always full-stack engineers. Maybe they're not super familiar with Kubernetes, like Asare mentioned a minute ago. And so it's not that they don't want to deploy secure products. Of course they do, and they want to do it fast, and they want to make veterans' lives awesome. But when they see this behemoth ahead of them, this extremely hard thing, how do I get to prod? I don't even know. I'd rather just stay in my circle here. I think some of the interesting stories that are starting to be told are similar to what Mr. Fanelli said about the Navy: we found some ways to build more public versions of these dashboards to give us a North Star. But just like you mentioned a minute ago, there's danger in those metrics.
(04:50):
How do we make sure that the metrics aren't being misused or gamed? I think the strongest value prop we've experienced is integrating these cybersecurity professionals, these application security assessors, into the development teams at a ratio that lets them actually understand the tech stacks they're helping with. So instead of a developer having to have all of this breadth of knowledge, we can give you the support you need on these really tough milestones, these bottlenecks that often pop up on the path to prod, and target them before they become a problem. And that way, as a developer, you start getting excited about this deployment process, and maybe the security part becomes part of your normal routine rather than something you really have to go attack and prepare for.
Clinton Herget (05:34):
Yeah, it becomes a sort of downhill motion, right? As opposed to constantly having to push uphill. I do a lot of research on developer experience, the question of what makes you productive as an engineer. And it turns out it's not all that far-fetched. You basically need three things. You need a flow state, which basically means don't interrupt the people doing the hard work. You need limited cognitive load: all of the answers to the questions that you'll have as part of an engineer's workflow should be readily available to you, because if you have to go somewhere else to answer them, that is all lost time. And the third thing, generally, is you want feedback loops to be as short as possible. If I need feedback from someone else and I'm waiting for it, then unfortunately that makes me less productive as an engineer. Now, when we think about traditional cybersecurity, what does it do?
(06:19):
Well, it gets all three of those things wrong. It pulls engineers out of their flow state. It says, you have to focus on this, not that. It increases their cognitive load. It says, fix this, but I'm not going to tell you why, and I'm certainly not going to tell you what you have to do. I'm just going to slap you on the wrist and say you did something wrong. And then it ensures that those feedback loops are as long as possible, because I have to go out to my security team and say, what do I do about this? Right? I can't self-service it the way that I'm now used to self-servicing my packaging through containers, or my infrastructure through cloud APIs or infrastructure as code. So it violates almost the spiritual compact that modern engineers have, which is: we say, hey, move fast, you can self-service everything, the world is your oyster. And then traditional security says, no, no, no, forget all that, you have to live in our jail forever. So with all that, Asare, how do you see the current state of that collaboration in your role, at least between developers and security? Has that gotten any better? Is a lot of this thinking now making its way into some of these public sector projects?
Asare Nkansah (07:21):
Yeah, I think so. I think it's an interesting idea, because in order to have something that you are able to do continually, you have to have systems, you have to have automation in place. It can't just be: once we get the results of the thing, then we'll go back and figure out how to respond to it. So you're trying to figure out, as the idea is being innovated on: okay, what does this mean from a security point of view? What does it look like to do this systematically, so it can be repeated over and over again? I think that's the interesting part about cATO, or just the cATO concept: really having a process in place that is continually being run, instead of a big bang of, all right, now that we've created our thing, here we go trying to figure out all the security implications of this thing that we made, how do we fix it, and how do we make sure it fits the form we want it to fit? If we can do that upfront and do it continuously, over and over again, I think the results and the throughput are better.
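In practice, that kind of "systems and automation in place" usually lives in the CI pipeline, so security evidence accrues on every change rather than in a one-time assessment. Here is a minimal sketch, assuming GitHub Actions and the Snyk CLI; the workflow is illustrative, not the VA's actual pipeline:

```yaml
# Hypothetical CI job: the scan runs on every push, so security
# results accumulate continuously instead of in a big-bang review.
name: continuous-security
on: [push]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Snyk CLI
        run: npm install -g snyk
      - name: Scan dependencies on every change
        run: snyk test --severity-threshold=high
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```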
Clinton Herget (08:14):
Well, and what's the role that security tools play in that equation, right? You've got a wide landscape; there are a number of vendors here selling software that ideally helps to reduce that risk. What does that relationship look like, and what do you look for in a tool to support the kind of platform developer experience that you're both trying to build?
Asare Nkansah (08:32):
Yeah, I think it's a good question. Someone wise told me a while ago: before you start looking for the tool, try to understand the actual problem that you're trying to solve. I think it's easy to look at some of these tools as the shiny light that's going to fix all of my problems, and once I get that tool, it's all going to be good. But it's really the opposite: what problem are we trying to solve? What outcome are we trying to achieve? And then, how do these tools fit together to achieve that outcome? So to answer your question, I think the role these tools play is in creating a seamless process, not just disparate functionality that serves many different purposes. I actually liked what he was saying in the last talk about really understanding the bigger picture. What are we actually trying to accomplish? Not just, let's increase the deployments. That's a nice thing, and a good thing in a lot of cases, but the end goal is not just to deploy more. Our end goal is to make sure the end user has what they need, so that we're providing value to them and they can do their job, or whatever they're doing, more effectively. So yeah, hopefully that made some sense.
Clinton Herget (09:40):
Turns out you can deploy software millions of times a day if all you're doing is changing comments, right? That's true. A hundred percent test pass rate every time. That's a neat trick if you're being measured on how often a day you're deploying software. But I do think it's an interesting point. We heard from David earlier talking about the importance of validating some of those assumptions. I like what you said about getting to the root cause, because when you talk about application security, I actually think there's maybe a dual mandate in play. On the one hand, yes, we want to enable engineers to make better decisions, to better understand the risk they're introducing into software, so that at the cheapest and easiest possible point, which is when they're writing that line of code the first time, they have a better chance to understand the implications. But on the other hand, security is, of course, a compliance checkbox. So typically the person who is buying that tool has a different outcome in mind than the end user. And that introduces maybe a little bit of tension there. I don't know. What are your thoughts on that, Luke?
Luke Strebel (10:34):
Yeah, I think there's something interesting, even to Asare's point earlier: the tools that we find are the most effective are the ones that actually give the end user, the person who has to make that change, the information they need to make that change. So there's a huge difference between a tool that suggests an inline fix right there for the developer in their IDE, where I can easily merge it in and the vulnerability is mitigated right there, awesome, versus something that gives a pretty vague description of approximately what's happening and doesn't really give an indication of the risk of that vulnerability, and therefore the priority starts slipping. And it just becomes this circle of: well, somebody has to draw a line somewhere, and who do you trust to make that decision? And if you don't have a cybersecurity professional on your team, which most teams don't, most teams are going to have the developers they need to build the product they need.
(11:28):
So where are you going to offload that risk? One of the interesting responsibilities, especially for the Secure Release Pipeline, is taking on contracts with vendors who have proven they can think outside of the immediate domain that we're in, proven commercially and in federal as well, who have tackled the FedRAMP challenge and gone after all of these other categorizations that can help us with the risk that we're accepting, because that's what we're doing. We're offloading a lot of the risk to these vendor tools to say, hey, we need you to help us find where there are problems, get our brains into the right locations, and de-risk them one by one. And if you don't have the right amount of feedback for the developer, or maybe there's too much noise and a lot of false positives, you just start throwing suppressions at these vulnerabilities, and maybe you just hope that no one reads them too deeply. So there's definitely an interesting relationship here between the outcome of security that we're chasing, the feedback loop between the developer and their assessor, who's going to go look at the suppression they added to make sure that code is secure, and the end user itself, who sometimes doesn't even care about all that work happening in the background, unless of course the system goes down.
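For anyone unfamiliar with how suppressions look in practice: many scanners accept an ignore file that the assessor can later review. As one hedged illustration, a Snyk `.snyk` policy entry looks roughly like this, with a placeholder vulnerability ID, reason, and expiry date:

```yaml
# .snyk (illustrative): each suppression carries a reason and an expiry,
# which is exactly what the assessor goes back and reads.
version: v1.5.0
ignore:
  SNYK-JS-EXAMPLE-0000001:
    - '*':
        reason: "No user-controlled input reaches the vulnerable function"
        expires: 2025-12-31T00:00:00.000Z
```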
Clinton Herget (12:38):
Exactly. Well, and it reminds me of what Doug said earlier, which is that often your biggest source of risk is how you assess risk. I'm reminded of the traditional experience; I think anyone who's built software for the government is used to getting that 1,100-page PDF full of vulnerabilities: here's everything you did wrong. But how many things on that list are actually risks in the real world? For example, how much of that software is actually getting deployed? How much of it is internet-facing? Is there ingress? Is the networking configured to allow an attacker to potentially make that entry? What's the blast radius, right? I mean, how are you even conducting a risk assessment when all you've got is an undifferentiated list of vulnerabilities? So I think when we talk about DevSecOps, and again, this term is not new, it's been going around the industry for 15 years now, yet we're still talking about how to implement it and how to become mature at it. And I think that's because we took the concept of DevOps.
(13:32):
Let's let developers and engineers run fast. Let's let them make a lot of their own decisions. We stuck traditional security in the middle of it, but we didn't think about how those two concepts are fundamentally not compatible with each other. There's not necessarily a way to self-service security, which is why I think we see the rise of platform engineering, integrating a lot of these tools in a way that can potentially be self-serviced by developers. So I guess my question, Asare, is: how do you see the current state of that integration? How far have we come in true DevSecOps, particularly in some of these large federal programs, and how far do we still have to go?
Asare Nkansah (14:09):
Yeah, that's a great question. I think what's interesting about it is that the software industry, even the open source community, is thinking a lot about some of these things. You think about tools like Gatekeeper and Kyverno that add policy as code, so that instead of placing constraints on your tenants by telling them, this is the kind of code you write, et cetera, you let them do what they need to do and provide boundary lines around them: if you cross this far, that's too far, and that's when we'll go ahead and change it back. So I think it's interesting; there's a lot of focus on this subject. I still do think there's a ways to go, especially in figuring out the cognitive load piece of all of this, because of the way that things are advancing. The goal is to keep the complexity away from the developers and away from the people who are really trying to deliver some of these outcomes. Especially with microservices, you just have a lot more things to think about, and the number of tools that all of these organizations are using is really overwhelming. So I think there definitely is still a ways to go here, but we're making progress, one piece at a time.
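To make the boundary-lines idea concrete, here is a minimal sketch of a Kyverno ClusterPolicy of the kind Asare describes. The specific rule, rejecting unpinned `:latest` image tags, is a common illustrative example, not necessarily one of the VA's actual policies:

```yaml
# Illustrative guardrail: tenants deploy however they like,
# but anything that crosses this line gets rejected.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-pinned-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pin your image tags; ':latest' is outside the guardrails."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```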
Clinton Herget (15:13):
And I think you touched on a really interesting aspect of this, which is the complexity piece. Software never gets simpler over time. It may appear simpler, but that's only because an additional layer of complexity has come along to aggregate some of the old complexity, and then something else will eventually come along on top of that. You talked about Kubernetes, which, as an orchestration tier, I think really moved the industry forward a generation in terms of how easy it is to do things. There are engineers who spend their entire careers now writing YAML files, which, for those of you who don't know, is a very simple sort of key-value syntax. They never write code at all. Does that mean their job is simple? Well, no. As anybody who's ever had to troubleshoot a Kubernetes YAML file knows, it can be infinitely complex, because it hinges on exactly where you put your colon in a line of white space.
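That whitespace sensitivity is easy to demonstrate. In the hypothetical fragment below, the two documents differ only in the indentation of `env`, and they mean entirely different things:

```yaml
# Correct: 'env' belongs to the container entry.
containers:
  - name: app
    image: registry.example/app:1.0
    env:
      - name: LOG_LEVEL
        value: debug
---
# Two spaces to the left, and 'env' becomes a sibling of 'containers',
# a field a Pod spec does not recognize.
containers:
  - name: app
    image: registry.example/app:1.0
env:
  - name: LOG_LEVEL
    value: debug
```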
(15:59):
Do you think, though, that there is... I'm going a little bit off script here, but this is what I'm interested in getting at. Do you think there's an understanding of that complexity on the part of the security assessors, on the federal side in particular? Because it seems to me we all talk about how FedRAMP is five to ten years behind, the regulators will never catch up, and this is holding us back. I don't necessarily think that captures the full nuance of where we're at. But what does that idea of complexity do in a situation where, ultimately, at the end of the day, you don't want to be breached? In a way, as a risk assessor, your job is much simpler, and you don't have to look at all that complexity. But me as an engineer, I'm going, well, which of the 17 layers of my stack do you want me to integrate that control into? And then who's going to be responsible for all of the troubleshooting, all of the slowdowns that inevitably result? I don't know. What do you think about that, Luke?
Luke Strebel (16:51):
Yeah, I mean, I think we could probably have a drinking game for "opinionation" in platform engineering right now. Where do you draw this line, and where do you try to reduce this complexity? Take a greenfield app, someone who just came to your platform. They don't have any of this history of things they have to learn and unlearn; they just come to you with the expectation that you're going to help them get to prod. And even in that case, it's super hard to figure out where we really want to spend the resources upfront to help you move quickly. Is it going to be a lot of effort spent on golden templating, the 80% solution, because we think that's probably good enough for most of these use cases? That's also, by the way, where we're going to stack our assessors and all of the cybersecurity professionals. They're going to really, really be experts at this 80%, and we're going to try to stay in that boundary.
(17:41):
And maybe there's a use case where our platform right now supports basic web app development and things like that, but there's an awesome value prop for our platform to support video calls, and we'd have to scale up resources, and maybe that's not a traditional tech stack that we might deploy. So one of the ways that we're trying to approach opinionation is: what is the end value we're trying to enable? Not just, is it secure now, or is it going to make it down to prod? But is the thing we're trying to give our developers, this technology or this end outcome, worth the cost of us not being opinionated? Is it worth the cost of us taking on more risk? Because ultimately, if we fail fast and we get the right people in the right roles, we can still burn down that risk and provide that end value. So I think that's the mix here. Is there an 80% solution that gets us a lot of these good use cases? And then, when something is outside the boundary, prove to us that it's going to make the veteran's life better, and how do we help you make that happen?
Clinton Herget (18:44):
Sort of the idea of batteries included, but accessible, right? I can take out my screwdriver and make changes if I need to, but I recognize I'm taking on additional risk, that's not how it was designed to work, and ultimately I'm relying on my own expertise to be able to do so. I think we need to get past the binary thinking of: either we provide everything an engineer might need and prevent them from making any decisions that might potentially be insecure, whatever that risk-based definition is, or we put all of the responsibility on them. We've been talking about shifting left as an industry for at least as long as I've been writing software, since the late nineties. But ultimately, we don't talk about what we're shifting and why. Are we shifting the context that allows better decision making, or just the responsibility? So are you shifting, or are you throwing? Are you simply saying, this security stuff is really hard to do, so now you have to do that in addition to your day job? Or are you saying, here is a set of opinions that allows, as you say, Luke, 80% of those risk decisions to be made for you, and if you want to move outside of that for the 20%, you're welcome to, but that's when you cross the line into incurring potentially additional risk.
Asare Nkansah (19:52):
Yeah, to add on to that: I think what makes this tricky is that there are a lot of things to think about. There are lots of ways you could draw that line or figure it out, but part of it is just doing it, right? Just going ahead and doing something, getting out there and figuring out how we can do something and do it continually, so we can learn from what we're doing and make adjustments from there. I feel like we can have analysis paralysis a lot of the time, and we can overanalyze some of these things. Obviously we're in govtech, so that's the name of the game, but we're trying to break out of that, trying to figure out what that looks like. And I think that's where something like a continuous ATO comes in: something that can help us try, and try again, from a security and software development point of view.
Clinton Herget (20:33):
Yeah. Well, and of course the complexity of software is never decreasing. New things are always happening. Look, it's 2025, so I think we would get some demerits if we didn't talk about AI for at least a second at a tech conference. And it is not a joke that most developers are now using some level of machine-generated code in their work. What does that look like to you guys from this perspective, as we talk about risk management? Do you see a space for AI-generated code in some of these projects? How are you considering it as part of these platforms you're building?
Luke Strebel (21:00):
Yeah, I think there's definitely going to be a place for it, and I don't know if I have 60 seconds' worth of a TL;DR here, but I think the biggest thing is that, from a security perspective, as much as AI will help us, it will also cause problems, and there's going to be so much noise. So how do we find ways to eliminate that noise and put the human in the right spot? Doug, I think we were talking in the back about how maybe AI can put confidence bounds on risk and things like that, and maybe give it some different frameworks to approach not just writing code, but also understanding threats that are happening in the wider ecosystem, to help that human be the best version of themselves. I think there's a lot of really good potential there.
Asare Nkansah (21:41):
Yeah, I think that's well said. The only thing I'd add is that one of the biggest challenges I feel like developers are running into right now is, again, like I've been saying, the cognitive load challenge, right? There's so much information that is sometimes needed in order to make a change and help the end user accomplish what they want to accomplish. I see AI helping to bridge that gap in quite a few different ways: allowing the relevant information, not just all of the information they could have, but the relevant information, to be brought to them in a way that makes sense, that is actionable, and that can help them figure out the more complicated decisions from there. And also abstracting away some of the minutiae, the minute tasks that still need to be done, to reduce some of the simpler things and allow them to focus on the complex things needed to provide value to their end user.
Clinton Herget (22:38):
Definitely. Alright, well, I think we've got to wrap up, but I want to ask you just one quick last question. Looking forward to a new year here, there are a lot of changes in the federal space; I think that's fair to say. Where's your head at, and what's maybe one thing you're keeping your eye on as we get into this next year in terms of public sector tech?
Luke Strebel (22:55):
Okay. Yeah. I think the number one thing for me is that we can check the box that people know what cATO is, but now we need to make it approachable, easy, fast, reliable, and scalable. That's where we want to focus at the VA: take this value prop that we have with this beta team over in the corner out to the entire ecosystem, and really improve the lives of veterans with it.
Asare Nkansah (23:16):
Yeah, I've been very impressed by the Department of Veterans Affairs. I think it's been very cool seeing how forward-thinking a lot of these executives are, a lot of these people who are helping to push the department forward. Specifically, I really liked the talk earlier about being able to unlearn. I feel like there are a lot of people who are looking to unlearn and figure out, with fresh eyes, what is the best path to solve the problems that our end users are having. And I know we can talk about a lot of the innovation with the technology and all that good stuff, but I think having a good understanding of the strategy and the principles that we're coming into this with, and letting that inform how we use the tech, how we build the platforms, how we allow the developers to work, that makes all the difference. So where I'm seeing a lot of change in the space, at least, is with the mindset.
Clinton Herget (24:06):
Very well said. Well, thank you all so much for joining me today, Luke and Asare. If anybody wants to talk more about DevSecOps, or has questions about how Snyk can help support your federal mission, please come up and see us at the booth. Thank you all for having us, and thank you for your service.