October is security awareness month! As such, I wanted to revisit a blog post I wrote last year detailing how organizational change management and organizational psychology principles relate to the cultural changes needed for effective DevOps and DevSecOps implementations.
I have the good fortune to work in the xLabs program here in OCTO. We focus on innovation one to three years ahead of the roadmap, in close partnership with our business groups, with whom we bring the products we build to market.
What fascinates me most about DevSecOps and organizations’ journeys towards it is the deep behavioral and epistemological changes that must take place for this change in thinking to succeed. Likewise, DevSecOps will look different in every organization in which it is practiced due to the unique technological, market, industry, and cultural constraints that an organization must navigate. While certain types of tools, practices, and technologies may signal DevSecOps maturity, the key is all in the behavior and knowledge of people and groups of people within the organization.
If you want to understand how to federate and radiate security knowledge in a digestible, scalable way, keep reading: this post is for you! Of course, we can’t expect everyone to be a security expert. That’s neither realistic nor productive. We need developers to be developers, and so forth. What is realistic and productive is giving everyone a little knowledge of basic security concepts, along with clear, well-paved roads to the experts for concerns that require deep security expertise. Teaching people when and how to use those roads, and what questions to prepare before engaging the experts, is far more practical.
What is organizational change management?
Organizational change refers to the actions by which a company or business alters a major component of its organization, such as its culture, the underlying technologies or infrastructure it uses to operate, or its internal processes (source). Big changes, say, across organizations of 10,000 or more people, are the result of many little changes, specifically changes in individual behavior. Several of the tactics I will go through have to do with making information accessible to people so that the path to change is not a big deal. Most of the time, it is not change that people fear, but loss. Perceived losses include things like loss of control and predictability, or loss of hard-earned expertise in a system that may change, and with it, loss of influence.
Thus, the key to successful change is connecting with people in a way that helps them realize that they have things to gain from participating in change and that participating in this change is easy. Taking time to understand what losses people fear can help you tailor your change management motion to increase the likelihood that it matches exciting gains against these losses. For example, you can partner with experts in the older system to help them become experts in the new system and help you build an onboarding program. Regarding loss of control and predictability, you can think through how to use your communication strategy to mitigate these blockers. How and at what intervals do people want to be communicated with so that they have a chance to understand their options and feel informed?
Humans have an enormous capacity for change when the conditions are right. Parenthood (99% chance of sleep deprivation, 100% chance of diapers), buying a house, buying a car, going to college — these are all big changes that happen all the time. Same with the transition to DevOps and DevSecOps: it is hard, but it is possible, and organizations have undergone these transformations. Why and how?
The change management equation and facilitating organizational change
These massive changes happen when certain variables are aligned that make the change worth it. The change management equation, developed by David Gleicher in the ’60s and iterated on over the decades since, sums up the main components we need to navigate when rolling out change.
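For reference, the formula, often credited to Gleicher and popularized in the version by Kathleen Dannemiller, is commonly written as:

D × V × F > R

where D is dissatisfaction with the current state, V is a compelling vision of the future, F is knowledge of the first concrete steps toward that vision, and R is the resistance to (or perceived cost of) change. Because the left side is a product, if any one of D, V, or F is effectively zero, the change will not overcome resistance.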
Remember that organizational change is the result of changes in individual behavior. In other words, change needs to be digestible so that individuals can connect with it and understand how to engage with it. Chances are that you won’t be rolling out any changes in your professional life quite as monumental as parenthood; even so, it is useful to know that massive, lasting changes are possible when the variables are tuned correctly.
With those variables under our belt, let’s switch to some tactics. These are taken from the book Switch: How to Change Things When Change is Hard by Chip and Dan Heath. I love this book for its simple language and fantastic examples.
The first tactic is looking for, or creating, bright spots. These are examples of things that are going well, or at least better than average, in your current environment. A high-performing team with, say, better uptime than its peers is a great candidate. Pilot programs can also be used to create bright spots that demonstrate success and surface lessons before the grand rollout.
To tie back to our equation for a second, I tend to associate this tactic with dissatisfaction with the current state and a compelling vision of the future — it’s proof that the vision is possible and that you don’t have to settle for the current state.
Next are scripting the critical moves and shrinking the change. These tactics work together to make the first steps to participating in the change (F) achievable. Another way of putting it is that scripting the critical moves shrinks the change. Think about 1–3 small but important things a person can do to participate in a change, and voila — you’ve mastered these tactics. You will probably need to dig deep to find these small but important things, but it’s time well spent.
How to source bright spots and script critical moves to shrink change
Let’s walk through a quick example of how we can use these tactics to help a development team be more security-conscious in how it architects, codes, tests, and deploys its applications. Again, the goal is not to make anyone an expert. In fact, you could think of the goal as an exercise in optimizing security knowledge: arming the team with the most critical pieces of security knowledge, in digestible media, for the biggest impact. This example assumes that the tools in use are of reasonable quality and that the teams using them trust them, understand them, and know how to get the most out of them.
We will also assume that the system at hand is sufficiently stable, both technically and politically, that informed, balanced tradeoffs in pursuit of consistent security progress are feasible. Diving into those topics is out of scope for this post, but we all know that security tools, software systems, and the human politics around them are imperfect and chaotic. This is a reminder to product development leaders: the extent to which you equip your teams with the proper security knowledge, tools, and psychological safety to ask for tradeoffs is the extent to which you will have a secure product.
An example of a bright spot in a security context could be a team that is meeting its security policies for all the tools it is required to run against its application at build, deployment, and runtime. Perhaps this team is also starting to ask what it can do to further its posture with other tools. I would start by asking the team how it got to this ideal state and unpack the anatomy of this bright spot.
In terms of communication, chances are that this team figured out a system for continuously chipping away at its security debt by working closely with its leadership, consistently communicating its progress, and proactively asking a receptive audience for help when it needed it. We all know that it is seldom feasible to halt all business operations to work on security issues. We must strive for balance and informed tradeoffs, which take collaboration and communication.
These teams can share their knowledge through a simple lunch-and-learn that walks through a wiki page of lessons learned. And as a leader, if you see these bright-spot teams and they haven’t taken time to codify and share their knowledge, encourage them to do so. Not only will you transform this local knowledge into shared knowledge, but you will also create a career opportunity for the team and its members to gain visibility and establish thought leadership.
To make consistent progress, there must be consistent review and assessment of incoming flaws. A critical move to script, shrinking the change in mindset toward security thinking, could be something as simple as a five-minute team review of what your security scans picked up in overnight builds. A short daily review can go a long way toward helping a team understand recurring flaws and other patterns in its application that may have security implications. These recurrences could stem from the languages and frameworks in use, in which case the team has an opportunity to educate itself on known mitigations or prevention measures for things like memory management in C/C++. After all, preventing attack surface is the cheapest way to be secure. Just rewrite your application in Rust! 😀
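As a sketch of what such a five-minute review could draw on, suppose your scanners can emit SARIF (the OASIS interchange format many static-analysis tools support); a short script can then group overnight findings by rule so recurring patterns jump out. The rule IDs and sample data below are invented for illustration:

```python
import json
from collections import Counter

def summarize_sarif(sarif_text):
    """Group scan findings by rule ID so a team can spot the
    recurring flaws worth a short daily discussion."""
    report = json.loads(sarif_text)
    counts = Counter()
    for run in report.get("runs", []):
        for result in run.get("results", []):
            counts[result.get("ruleId", "unknown")] += 1
    # Most frequent rules first: these are the recurring patterns.
    return counts.most_common()

# Hypothetical overnight scan output, trimmed to the fields we use.
sample = json.dumps({
    "runs": [{"results": [
        {"ruleId": "cpp/unsafe-strcpy", "level": "error"},
        {"ruleId": "cpp/unsafe-strcpy", "level": "error"},
        {"ruleId": "js/sql-injection", "level": "warning"},
    ]}]
})

for rule, count in summarize_sarif(sample):
    print(f"{count:3d}  {rule}")
```

A summary like this is deliberately shallow: its job is not triage, but giving the team a shared, five-minute view of what kinds of flaws keep showing up.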
Joking aside, consistent, bite-sized reviews mean that developers can address flaws in a timely manner, before they lose context on the area of code they were working in. Ideally, security tools return results in minutes, if not seconds; in any case, timely, consistent review and proactive management of security concerns in the shortest feedback loops possible make for cheaper, higher-quality remediation because context stays intact. If we can’t prevent attack surface, the least we can do is make it as affordable as possible to remediate.
It is also worth noting that properties of a system that are not specific to security can still have security implications. Software built to be testable and observable has an enormous security advantage over systems that are not. Thus, it behooves us to think about security from more than just a literal, security-focused lens and broaden our thinking to designing the simplest systems we can.
I love security. The blend of technical and cultural challenges inherent in this grand non-functional requirement never ceases to fascinate me. If you are curious about what xLabs is up to in the security space (Project Keswick, Project Trinidad, Project Narrows), we are looking for design partners – please reach out!