#258
January 12, 2026

EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen

Guest: Royal Hansen

Topics: Artificial Intelligence, CISO

Duration: 29:29

Subscribe at Spotify, Apple Podcasts, or YouTube.

Topics covered:

  • The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security—where we pre-calculate every risk like building a bridge—and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today’s cloud and AI environments?
  • Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter?
  • Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice?
  • Primitives for Agents: You mentioned the need for new "biological primitives" for these agents—things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI?
  • The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"?
  • "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?

Do you have something cool to share? Some questions? Let us know.

Transcript

The central thesis of the discussion is that the traditional "engineering-led" mindset—which seeks to build perfectly robust, deterministic systems—is no longer viable in the face of modern complexity, cloud computing, and AI. Hansen argues that the industry must adopt a biological model. In this framework, security is not a static wall but a living, adaptive system that accepts a non-zero rate of failure, prioritizes resilience over robustness, and uses AI "agents" to perform ecological functions such as cleaning up "dead" data and flagging anomalies.

Pillar I: From Deterministic Design to Biological Ecosystems

Hansen posits that the industry has transitioned through three distinct eras:

Early Engineering: Small-scale, end-to-end technical sophistication.

Compliance and Checklist: A response to growing complexity that attempted (and often failed) to regulate security through static audits.

The Biological Era: An acknowledgment that systems are too dynamic for any single designer to have a deterministic view of every use case.

The "Vulture" Metaphor

In a biological ecosystem, vultures serve as essential cleaning engines, removing decaying matter to prevent the spread of disease. Hansen suggests applying this to SecOps through "Vulture Systems." Instead of human teams manually cleaning up access permissions, automated agents should act as ecological scavengers, constantly identifying and "eating" unused data, orphaned privileges, and stale group memberships. This moves the burden from a centralized designer to a distributed, functional ecosystem.
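A minimal sketch of what such a "vulture" agent might look like. The grant schema, names, and the 90-day staleness threshold are illustrative assumptions, not details from the episode:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative threshold, not from the episode

def scavenge(grants, now):
    """Return the grants a 'vulture' agent would flag for retirement: unused past the threshold."""
    return [g for g in grants if now - g["last_used"] > STALE_AFTER]

# Hypothetical inventory of access grants with last-use timestamps.
grants = [
    {"principal": "svc-reports", "resource": "dataset:sales", "last_used": datetime(2025, 1, 2)},
    {"principal": "alice",       "resource": "bucket:logs",   "last_used": datetime(2025, 12, 20)},
]

stale = scavenge(grants, now=datetime(2026, 1, 12))
```

In a real deployment the scavenger would run continuously against an IAM inventory rather than a static list, but the ecological point is the same: cleanup is a standing function of the system, not an occasional human project.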

The Adaptive Immune System

Hansen draws a parallel between security anomaly detection and the human T and B cell maturation process. Just as the immune system learns to distinguish "self" from "non-self" by matching against the body's own proteins during development, modern AI systems should be trained to understand "normalcy" within a specific corporate environment. By focusing on identifying the "self," AI can react only to genuine anomalies, reducing the noise and false positives that plague traditional deterministic firewalls.
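The self/non-self idea can be sketched in a few lines. Real systems would learn a statistical model of normal behavior; this toy version, with hypothetical (principal, action) events, just memorizes the baseline to show the shape of the approach:

```python
# "Self" is whatever the baseline window observed: (principal, action) pairs.
baseline_events = [("alice", "read"), ("alice", "write"), ("svc-ci", "deploy")]
self_profile = set(baseline_events)

def is_anomaly(event):
    """Alert only on 'non-self': behavior never seen during baseline training."""
    return event not in self_profile

live_events = [("alice", "read"), ("mallory", "export")]
alerts = [e for e in live_events if is_anomaly(e)]
```

Because the detector is anchored on "self" rather than on signatures of known attacks, it stays quiet on routine activity and fires only on behavior the environment has never exhibited.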

Pillar II: Robustness vs. Resilience

A significant portion of the debate centers on the distinction between being robust and being resilient.

Robustness: Comparable to a castle or a palace built of the strongest materials. It is designed to never break; however, when it does break, the failure is catastrophic.

Resilience: Comparable to the human immune system. It acknowledges that infections will occur but focuses on the ability to recover, adapt, and maintain degraded functionality during an attack.

Hansen argues that security leaders must move away from the "perfectionist" mindset. True resilience requires options and capabilities. This involves "break glass" protocols and modular architectures (like Kubernetes and VPCs) that allow a system to continue functioning in a degraded state even if a primary component fails.
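The "functioning in a degraded state" pattern can be illustrated with a small fallback sketch. The identity-store scenario and function names are hypothetical:

```python
def primary_lookup(user_id):
    # Simulate the primary identity store being unreachable.
    raise TimeoutError("primary identity store unreachable")

def cached_lookup(user_id):
    # Degraded mode: serve from a stale cache with reduced privileges.
    return {"user": user_id, "roles": ["read-only"], "degraded": True}

def resolve_user(user_id):
    """Resilient resolution: keep serving, in a degraded state, when the primary fails."""
    try:
        return primary_lookup(user_id)
    except TimeoutError:
        return cached_lookup(user_id)

session = resolve_user("alice")
```

The design choice is that the fallback is deliberately weaker (read-only roles) rather than equivalent: degraded service is an explicit, pre-designed option, which is what distinguishes resilience from a robust system that simply stops.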

Pillar III: The Role of AI Agents and "Agentic" Protocols

The conversation shifts to the technical implementation of these biological models via AI. Hansen suggests that foundational models are merely the starting point; the real future lies in the orchestration of agents using protocols like MCP (Model Context Protocol).

Semantic Controls

One of the most profound insights is the blurring of the line between the data plane and the control plane in LLMs. While this creates risks, Hansen views it as an asset. It allows for the creation of controls written in the semantics of the risk domain (e.g., healthcare, finance, HR) rather than just software languages. This eliminates the "translation layer" where attackers often find vulnerabilities.
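A toy sketch of a "semantic" control, with a hypothetical healthcare rule: the policy is expressed in the vocabulary of the risk domain (patients, care teams) rather than in infrastructure terms (rows, tables, ACL strings), so there is no translation layer to get wrong:

```python
# Domain-semantic rule: a clinician may read a record only if they are on
# that patient's care team. No ACL strings or table-level grants involved.
def may_read_record(clinician, patient):
    return clinician in patient["care_team"]

patient = {"id": "p-001", "care_team": {"dr-lee", "nurse-kim"}}

allowed = may_read_record("dr-lee", patient)
denied = may_read_record("dr-web", patient)
```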

The Necessity of "Death" (Apoptosis)

Hansen identifies a missing component in current technical systems: Apoptosis, or programmed cell death. In biology, the death of diseased or unnecessary cells is vital for the health of the organism. In cybersecurity, legacy systems are often "diseased tissue" that are never allowed to die, creating a massive attack surface. A healthy ecosystem must have a mechanism for the retirement and "death" of legacy code and services.
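One way to build apoptosis into a fleet is to give every service a sunset date at birth and let an agent retire whatever has outlived it. A minimal sketch, with a hypothetical registry and dates:

```python
from datetime import date

# Hypothetical service registry; every entry carries a sunset date from birth.
services = [
    {"name": "legacy-auth-v1", "sunset": date(2025, 6, 1)},
    {"name": "billing-v3",     "sunset": date(2027, 1, 1)},
]

def apoptose(registry, today):
    """Split the fleet into services past their sunset (to be retired) and the living."""
    dead = [s for s in registry if s["sunset"] <= today]
    alive = [s for s in registry if s["sunset"] > today]
    return dead, alive

dead, alive = apoptose(services, today=date(2026, 1, 12))
```

The point of the pattern is that death is the default: keeping a service alive past its sunset requires a deliberate renewal, inverting the usual situation where retiring legacy code requires a deliberate (and perpetually deferred) project.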

Pillar IV: Overcoming the Auditor’s Paradox

Acknowledging that a "probabilistic" biological model is inherently "triggering" to compliance-minded leaders and auditors, Hansen offers a three-pronged strategy for organizational change:

The Narrative: Building a compelling story that explains why the deterministic model is failing.

Liberating Complexity: Acknowledging that current systems have surpassed human "checklist" capacity. This creates a "release valve" for the pressure on security teams.

Agentic Auditing: "Putting shoes on the cobbler’s children." Auditors should be given their own AI agents to perform continuous, non-deterministic monitoring that approaches "eventual consistency" in security posture.

Pillar V: Measuring Success via OODA Loops

Finally, Hansen suggests a shift in how organizations define success. Instead of measuring how many "airplanes" (systems) were built perfectly, they should measure the speed of their OODA Loops (Observe, Orient, Decide, Act).

Using tools like Project Big Sleep (AI-driven vulnerability research) and Project Mender (AI-driven automated patching), the goal is to celebrate the cycle of finding and fixing problems. Success is defined by the rate of mutation and adaptation, not the absence of error. As in the credit card industry—where a 0% fraud rate indicates a failure to capture the market—a 0% error rate in software suggests a failure to evolve.
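Measuring OODA-loop speed reduces to tracking the elapsed time from finding a problem to fixing it. A minimal sketch with invented timestamps (8-hour and 24-hour cycles):

```python
from datetime import datetime
from statistics import mean

# Hypothetical find-and-fix cycles; the timestamps are illustrative.
cycles = [
    {"found": datetime(2026, 1, 1, 9, 0),  "fixed": datetime(2026, 1, 1, 17, 0)},
    {"found": datetime(2026, 1, 3, 10, 0), "fixed": datetime(2026, 1, 4, 10, 0)},
]

def mean_loop_hours(cycles):
    """The success metric: how fast the find-and-fix loop closes, not whether bugs exist."""
    return mean((c["fixed"] - c["found"]).total_seconds() / 3600 for c in cycles)

speed = mean_loop_hours(cycles)  # 16.0 hours for the sample above
```

A team optimizing this number is rewarded for surfacing and closing problems quickly, which is exactly the inversion Hansen describes: the metric celebrates the cycle, not the absence of findings.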

Topic Timeline

Introduction and the "God-Like Designer" Myth: The hosts and Royal Hansen discuss the transition from deterministic engineering to biological models.

Biological Analogies in Security: Introduction of the "Vulture" concept for cleaning unused privileges and data.

The Immune System as a Blueprint: Detailed discussion on T/B cells, "self" identification, and how AI improves anomaly detection.

The Resilience Mandate: Distinguishing between robustness (the castle) and resilience (the immune system); the importance of "break glass" options.

Agentic Orchestration and MCP: Moving beyond foundational models to a world of interacting agents and semantic risk controls.

The Concept of Technical Apoptosis: Why legacy systems must be allowed to "die" to keep the broader ecosystem healthy.

Addressing Compliance and Audits: Strategies for convincing traditional leadership to accept probabilistic, non-deterministic security models.

The OODA Loop and Speed of Fix: Shifting organizational goals from perfection to the speed of the find-and-fix cycle.

The "Correct" Rate of Error: Comparing genetic mutation and fraud rates to software vulnerabilities; why 100% correctness is an evolutionary dead end.

Closing Advice: The importance of "play" and experimentation with AI tools (e.g., NotebookLM) to build intuition for these new systems.
