How do you envision the security architectures of the future? Given a cyberphysical infrastructure domain of sizable proportions, for example, a transportation system, how would you meet the challenge of security and resilience?
In my previous companion blog, “Envisioning Tomorrow’s Secure Infrastructure,” I discussed the visioning event VMware helped to organize at MIT on August 10-11, 2022. The event, sponsored by the NSF Engineering Research Visioning Alliance (ERVA), brought together a sizable group of domain experts to discuss “Engineering R&D Solutions for Unhackable Infrastructure.”
In this blog, I highlight in more detail the research directions ERVA participants discussed in the area of autonomous security. As a discussion leader for the event, I believe these outcomes to be among the most important of the visioning event as a whole.
Why autonomous security?
As cyberphysical systems advance, more sophisticated tools are needed to address the challenge of security and resilience. Autonomous security refers to security technologies that are self-guided in various respects: for example, they may be self-configuring, self-managing, self-tuning, or self-learning.
Key reasons for developing autonomous security approaches include, first of all, infrastructure scale and complexity. Tomorrow’s cyberphysical infrastructures will be larger and more complex as more physical domains become cyber-enabled, as cyber systems become more sophisticated, and as cyberphysical infrastructures become more interconnected. Humans will no longer be able to oversee threat detection and response in the manual way we often do today.
A second key point is that of threat complexity. Tomorrow’s adversaries will find ways to be more sophisticated in the threats they introduce, including multiphase adversarial actions that fly under the radar and AI techniques that circumvent detection mechanisms. Autonomous security can use various monitoring and data analysis tools to observe adversarial behaviors within a noisy sea of normal user activity.
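To make the “noisy sea” idea concrete, here is a minimal sketch in Python of a self-tuning detector that learns a rolling baseline of activity and flags sharp deviations. Everything here (the class, the window size, the z-score threshold) is an illustrative assumption rather than a production design.

```python
import statistics
from collections import deque

class BaselineAnomalyDetector:
    """Toy self-tuning detector: learns a rolling baseline of per-interval
    event counts and flags intervals that drift sharply from it. Real
    autonomous security would use far richer features and models; this
    only illustrates finding a signal within the noise."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent normal activity
        self.threshold = threshold           # z-score alarm level

    def observe(self, event_count: int) -> bool:
        """Return True if this interval looks anomalous."""
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            if abs(event_count - mean) / stdev > self.threshold:
                return True  # deviates sharply; do not fold into baseline
        self.history.append(event_count)
        return False

# Mostly quiet activity, then a burst a human operator might miss.
detector = BaselineAnomalyDetector()
for count in [12, 9, 11, 10, 13, 8, 10, 11, 9, 12, 10, 95]:
    if detector.observe(count):
        print(f"alert: interval with {count} events deviates from baseline")
```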
Finally, autonomous security can be used to inform human decision-making. It may automate functions that involve tedious and complex data analysis in order to maintain an up-to-date picture of the state of cyberphysical infrastructure and of malicious activity. While autonomous security can be a substitute for human activity, it is better considered a complement, since humans will continue to be the designers, operators, administrators, and decision-makers.
Considering human synergies
Attendees of the visioning event discussed the need for automated security risk analyzers. Given an infrastructure design, could an autonomous security tool analyze the security risks and vulnerabilities without human intervention? Formal verification in cyberphysical systems today relies on human insights to define critical safety and security invariants in system-specific ways. Future autonomous analyzers could formulate such invariants for humans and examine security at a multiplicity of levels, from low-level physical interfaces to key integration points between system components to human interfaces.
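As a toy flavor of what such an analyzer might check automatically, the sketch below encodes a hypothetical component-flow graph for a small transportation-style design and verifies a single isolation invariant: no data path from an untrusted component to a safety-critical one. The component names and the invariant itself are invented for illustration; the research vision is tools that formulate invariants like this on their own.

```python
# Hypothetical allowed data flows between components of a small design.
flows = {
    "public_kiosk":      ["ticketing_api"],
    "ticketing_api":     ["billing_db"],
    "ops_console":       ["signal_controller"],
    "billing_db":        [],
    "signal_controller": [],
}

UNTRUSTED = {"public_kiosk"}
SAFETY_CRITICAL = {"signal_controller"}

def isolation_violations(flows, untrusted, critical):
    """Invariant: no path from an untrusted component to a safety-critical
    one. Returns any violating (source, target) pairs found by traversal."""
    violations = []
    for src in untrusted:
        stack, seen = list(flows.get(src, [])), set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if node in critical:
                violations.append((src, node))
            stack.extend(flows.get(node, []))
    return violations

print(isolation_violations(flows, UNTRUSTED, SAFETY_CRITICAL))  # [] -> holds
```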
A much-discussed vision of the future was the role of virtual security assistants. Imagine an autonomous security assistant that interacts with humans using natural language and advises or informs in valuable ways. For a naive user, the assistant may provide guidance on difficult-to-understand security questions (“Do you want to update system.dll with a third-party library?”) or actions that violate sound security practices. For security professionals, an autonomous assistant may provide the information needed for reviewing low-level system state, examining network data, or performing forensic analysis.
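As a trivially simplified sketch of the naive-user case, imagine the assistant pattern-matching a proposed action against known risky operations and answering in plain language. The lookup table below is a hypothetical stand-in; a real assistant would reason over natural language and live system state rather than string matching.

```python
# Hypothetical catalog of risky operations and plain-language explanations.
RISKY_PATTERNS = {
    "system.dll": "replacing a core system library with third-party code",
}

def advise(proposed_action: str) -> str:
    """Toy assistant: explain why a proposed action may be unsafe."""
    for needle, why in RISKY_PATTERNS.items():
        if needle in proposed_action:
            return (f"Caution: this means {why}. Proceed only if the "
                    "library comes from a source you have verified.")
    return "No known risk pattern matched; review and decide."

print(advise("update system.dll with a third-party library"))
```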
Another important use of autonomous security is in translating human intentions to cyberphysical infrastructure actions. While humans are good at identifying what should be done to configure, monitor, or remediate a system, compute agents are better at implementing it at scale. Consider, for example, scaled infrastructures with thousands of components that need to be configured according to a new security policy. As infrastructures become more complex, the need for this kind of translation grows.
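A minimal sketch of this translation, under simple assumptions: the operator states a declarative policy (the intent), an agent computes what each component must change, and the changes are applied across the fleet. The policy keys and the in-place update standing in for a real remediation API are hypothetical.

```python
# Hypothetical declarative intent: what the operator wants, not how.
policy = {"min_tls": "1.3", "password_auth": False, "log_level": "audit"}

def plan_changes(component_config: dict, policy: dict) -> dict:
    """Return the settings a component must change to satisfy the intent."""
    return {key: want for key, want in policy.items()
            if component_config.get(key) != want}

# A fleet of thousands of components with a stale configuration.
fleet = {f"sensor-{i:04d}": {"min_tls": "1.2", "password_auth": True,
                             "log_level": "info"} for i in range(2000)}

for name, config in fleet.items():
    delta = plan_changes(config, policy)
    if delta:
        config.update(delta)  # stand-in for a real remediation API call

print(all(not plan_changes(c, policy) for c in fleet.values()))  # True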
Challenges in AI
A grand challenge called out by event attendees is AI-enabled automated response. We live in a rapidly expanding era of machine learning applications: ML is widely applied to everything from movie recommendations to driving directions to targeted advertising to DNA analysis. But a key omission from this picture is automated response on a level that rivals human intelligence. For example, how might autonomous security respond to adversarial actions in real time, before it is too late to prevent a breach?
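One way to picture such a response is a graduated detect-decide-act loop, sketched below. The risk scores are assumed to arrive from some upstream ML model; the response tiers and thresholds are invented for illustration.

```python
from enum import Enum

class Response(Enum):
    MONITOR = 1   # keep watching
    THROTTLE = 2  # slow the suspect session down
    ISOLATE = 3   # cut the component off before a breach completes

def decide(risk_score: float) -> Response:
    """Graduated response policy over an externally supplied risk score."""
    if risk_score < 0.5:
        return Response.MONITOR
    if risk_score < 0.9:
        return Response.THROTTLE
    return Response.ISOLATE

# A rising risk trajectory triggers containment before the final step.
for step, score in enumerate([0.2, 0.4, 0.7, 0.95]):
    action = decide(score)
    print(f"t={step}: risk={score:.2f} -> {action.name}")
    if action is Response.ISOLATE:
        break  # hypothetical hook: quarantine the component here
```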
To achieve this, autonomous security needs significantly more advanced awareness of the surrounding context. What makes humans distinctively intelligent is our understanding of the context surrounding a specific system and our ability to deal with the unexpected. Our judgments comprehend the bigger picture of the task at hand, the effectiveness of a tool beyond the details of its function, and what actions should be taken when input is unexpected or the operating environment changes. Autonomous security needs a richer grasp of context to recognize adversarial behaviors, which are often crafted to lie outside system specifications.
Future autonomous security should also integrate data from multiple sources, an approach often referred to as multimodal data analysis. While today’s machine learning techniques have been very effective in analyzing voluminous data sets, there is still a long way to go in extending them to multiple data sources (e.g., audio, video, and sensor data), especially when delivered in real time. Such techniques would be helpful in monitoring the entirety of a cyberphysical infrastructure, where adversarial behavior may not be obvious within a single component but becomes clear when multiple modes of observation are integrated.
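A minimal sketch of why fusion matters, assuming per-modality anomaly scores already exist: no single modality crosses the alarm threshold, but a simple noisy-OR combination of three moderately suspicious signals does. Real multimodal analysis would learn a joint model rather than apply this hand-written rule.

```python
import math

def fused_score(modality_scores: dict) -> float:
    """Noisy-OR fusion: treat each modality's anomaly score as independent
    evidence. Several moderately suspicious signals combine into strong
    joint evidence even when none alarms on its own."""
    return 1.0 - math.prod(1.0 - s for s in modality_scores.values())

THRESHOLD = 0.8
scores = {"video": 0.6, "audio": 0.5, "vibration_sensor": 0.55}
print(any(s >= THRESHOLD for s in scores.values()))  # False: no single alarm
print(round(fused_score(scores), 2) >= THRESHOLD)    # True: fused score ~0.91
```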

Anticipating tomorrow’s adversaries
Of course, the techniques described above will presumably be available to tomorrow’s adversaries as well. As such, research on cyber-physical-human systems needs to consider adversarial approaches and what kind of hacking can be done with autonomous security technology and tools. What kind of malware, bot technologies, AI-driven vulnerability analysis tools, or attack-hiding techniques are possible? The notion of “unhackable” security implies anticipating the adversary who may be using these techniques.