#245
September 29, 2025

EP245 From Consumer Chatbots to Enterprise Guardrails: Securing Real AI Adoption

Guest: Rick Caccia, Witness AI

Topics: Artificial Intelligence

Duration: 29:29


Topics covered:

  • In what ways is the current wave of enterprise AI adoption different from previous technology shifts? If we say “but it is different this time”, then why?
  • What is your take on “consumer grade AI for business” vs enterprise AI?
  • A lot of this sounds a bit like the CASB era circa 2014. How is this different with AI? 
  • The concept of "routing prompts for risk and cost management" is intriguing. Can you elaborate on the architecture and specific AI engines Witness AI uses to achieve this, especially for large global corporations? 
  • What are you seeing in the identity space for AI access? Can you give us a rundown of the different tradeoffs teams are making when it comes to managing identities for agents? 


Transcript

The hosts initiated the discussion with a self-reflective "call to action" regarding the podcast's evolving mandate. Co-host Anton Chuvakin observed that the content has gravitated significantly toward AI security and detection and response (D&R), suggesting a potential rename from the Cloud Security Podcast to the "AI Security and D&R Podcast." The hosts acknowledged this deviation from their core cloud security roots and invited audience input on whether they should cover more traditional cloud security topics (e.g., CSPM, identity lifecycle), while noting their commitment to covering AI and D&R regardless.

The Mindset Shift: From Securing AI to Enabling AI

A central theme introduced by host Timothy Peacock was the guest's focus not merely on securing AI but on enabling its use with confidence. The argument is that security must integrate to facilitate business adoption rather than act as a blocker.

Security as "Doctor Yes": Guest Rick Caccia noted a fundamental difference in the AI adoption wave compared to the cloud adoption wave: CISOs are universally aiming to be "Doctor Yes," aggressively pushing for AI enablement, whereas cloud security often felt like a battle between users and a security team trying to "squash" adoption.

Platform Necessity: Caccia argued that the market cannot sustain five or more separate point solutions for AI security (e.g., observability, policy, DLP). The long-term winner will be a single platform that addresses the confident and safe adoption of AI across the enterprise.

The Nature of the AI Adoption Wave

Caccia compared the current AI wave most closely to the early commercial web adoption wave of the late 1990s.

Uncertainty and Chaos: Similar to the dot-com era's mantra of "webify everything" without a clear strategy, many companies are currently saying "AI, AI, agents, agents" without a defined use case or plan. This creates a "good chaos" where things are moving into production despite a lack of clarity.

Hyperspeed: A key differentiator from the web adoption wave is the significantly higher speed of AI adoption.

Models are Not Databases: A crucial operational challenge is that LLMs are known to be wrong 15–20% of the time, producing "unstructured, surprising responses." This stands in stark contrast to the expected accuracy of a traditional database and fundamentally breaks traditional security and data protection paradigms.

Shadow AI and the Pace of Enterprise Adoption

The discussion distinguished between consumer-grade (Shadow AI) adoption and enterprise model adoption.

Consumer Shadow AI: Employees are using consumer tools (ChatGPT, Gemini, Grammarly) at a high volume, creating major risks for IP and data loss. Attempts to simply block these services at the firewall are ineffective due to mobile use and the sheer long tail of applications. Governance for this requires "training wheels or bowling alley bumpers" rather than outright denial.

Enterprise Adoption Pace: In contrast, the official, internal standing up of proprietary models is proceeding at a slow, "enterprise pace." An example was provided of a major investment bank with 500 potential AI projects but zero expected to reach production within the following year.

The Looming Agentic Problem: The next major inflection point for risk is the Shadow Agent problem. Users will create agents that carry their credentials and authorizations, capable of making autonomous, system-altering changes (e.g., wiping file systems, terminating cloud resources).

Technical Challenges and the Trusted Enablement Solution

The core technical limitations of legacy security tools (CASB, DLP) were discussed, leading to the case for a specialized AI security platform.

Inability to Deal with Unstructured Data and Multilingualism: Traditional DLP tools rely on pattern matching and tagging, which fail when data is cut, pasted, rephrased, or input in another language.

The Emoji Injection Attack: Caccia cited the "emoji attack" as an example where prompts written with emojis can slip past traditional regex-based filters and be used to exfiltrate data, further highlighting the inadequacy of legacy tools.
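A minimal sketch of why regex-based filters fail here. The pattern and prompts are invented for illustration, but the mechanics are real: Python's `\d{3}` requires consecutive plain digit characters, so digits rendered as emoji keycaps (each digit followed by invisible combining characters) never match.

```python
import re

# A typical legacy DLP rule: regex for a US Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def legacy_dlp_blocks(prompt: str) -> bool:
    """Return True if the regex-based filter would block this prompt."""
    return bool(SSN_PATTERN.search(prompt))

plain = "Summarize the record for SSN 123-45-6789"
obfuscated = "Summarize the record for SSN 1️⃣2️⃣3️⃣-4️⃣5️⃣-6️⃣7️⃣8️⃣9️⃣"

print(legacy_dlp_blocks(plain))       # True: plain digits match the pattern
print(legacy_dlp_blocks(obfuscated))  # False: keycap emoji break the digit run
```

The same data leaves the enterprise in both prompts; only the filter's view of it differs, which is the core argument against pattern matching for unstructured AI traffic.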

Confidence Features (The Doctor Yes Solution): The Witness AI platform is designed to provide enablement, not just blocking:

In-line Redaction: On the fly, a platform should be able to identify confidential data, redact it, send the redacted version (with tokens) to the LLM for processing, receive the unredacted answer, and present it to the user.

Model Rerouting: Automatically detecting a user's prompt (e.g., from Copilot) and securely rerouting the request to an internal, trusted LLM trained on proprietary code, even though the platform does not own the endpoint or the model.

B2C Model Identity: Enforcing model identity via system prompts to prevent brand-damaging "hallucinations" (e.g., a car company chatbot recommending a competitor's vehicle).
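The in-line redaction round-trip described above can be sketched in a few lines. This is an illustrative toy, not Witness AI's actual architecture: the confidential account-number format, the token scheme, and the `call_llm` stub are all assumptions.

```python
import re

# Assumed confidential format for this sketch: internal account numbers.
ACCOUNT_PATTERN = re.compile(r"\bACCT-\d{6}\b")

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap each confidential value for an opaque token; remember the mapping."""
    mapping: dict[str, str] = {}
    def _swap(match: re.Match) -> str:
        token = f"<<TOKEN_{len(mapping)}>>"
        mapping[token] = match.group(0)
        return token
    return ACCOUNT_PATTERN.sub(_swap, prompt), mapping

def unredact(answer: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's answer."""
    for token, original in mapping.items():
        answer = answer.replace(token, original)
    return answer

# Stand-in for the external LLM call; it only ever sees tokens.
def call_llm(redacted_prompt: str) -> str:
    return f"Status for {redacted_prompt.split()[-1]}: active"

safe_prompt, mapping = redact("Check the balance on ACCT-123456")
answer = unredact(call_llm(safe_prompt), mapping)
print(answer)  # the user sees the real account number; the LLM never did
```

The point of the design is that the token map never leaves the enterprise boundary, so the user experience is seamless while the external model processes only placeholders.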

Identity and Agentic Complexity

The identity angle was deemed complex, especially for Agentic AI.

The Three Identity Problems:

Consumer Service Identity: Simple, reuse existing enterprise directory services (AD, Okta) to apply policy based on existing user groups.

Model Identity: Applying and enforcing a proprietary identity on the model itself (e.g., "You are Hotel Chain X's bot").

Agentic Identity: The most challenging. Agents cannot have the same blanket authorization as the human developer they represent. The solution requires a hybrid approach: the agent must be able to inherit a subset of the user's authorizations, and for critical steps, human approval must be inserted into the workflow.
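The hybrid approach described above, a scoped-down subset of the owner's permissions plus a human approval gate on critical steps, can be sketched as follows. All permission names, the action list, and the approval hook are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Actions that always require a human in the loop, even if granted.
CRITICAL_ACTIONS = {"delete_files", "terminate_instances"}

@dataclass
class AgentIdentity:
    owner: str
    granted: set[str]               # subset of the owner's permissions
    approve: Callable[[str], bool]  # human-in-the-loop approval hook

    def execute(self, action: str) -> str:
        if action not in self.granted:
            return f"denied: {self.owner}'s agent lacks '{action}'"
        if action in CRITICAL_ACTIONS and not self.approve(action):
            return f"held: '{action}' awaits human approval"
        return f"ok: '{action}' executed"

# The human owner holds broad rights; the agent inherits only two of them.
agent = AgentIdentity(
    owner="alice",
    granted={"read_logs", "terminate_instances"},
    approve=lambda action: False,  # simulate a pending approval queue
)

print(agent.execute("read_logs"))            # ok: non-critical, granted
print(agent.execute("terminate_instances"))  # held: granted but critical
print(agent.execute("delete_files"))         # denied: never granted
```

The design choice mirrors the discussion: blanket inheritance of the developer's credentials is the failure mode, so authorization is narrowed at creation time and the riskiest steps are escalated back to a human.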

Concluding Advice and Reading Recommendation

Tip for AI Security Adoption: The single most important first step is to establish observability. In every customer deployment, teams have been surprised by what their users are actually doing with AI tools. Understanding current activity must precede policy creation.

Recommended Reading: The Soul of a New Machine by Tracy Kidder (Pulitzer Prize-winning book from the 1980s about Data General's crash program to build a new minicomputer). Recommended for anyone building a technology company or starting a large-scale technical project, as it provides profound insight into structuring technical organizations and managing teams.

Podcast Conversation Timeline

The discussion progressed through a logical arc, moving from self-critique of the podcast's focus to an in-depth analysis of the AI adoption wave, culminating in practical solutions and forward-looking risks.

Phase 1: Setting the Stage and Defining the Mandate

Podcast Identity and Focus: The hosts opened with a self-reflective "call to action," acknowledging the podcast's significant shift toward AI security and Detection & Response (D&R), away from its original cloud security mandate.

The New Security Mindset: Introduction of the core theme: the need for security to transition from merely securing AI to enabling its use with confidence and acting as "Doctor Yes."

Phase 2: Analyzing the AI Adoption Wave

Historical Context and Speed: The guest compared the current AI wave to the dot-com era's web adoption (chaotic demand without a clear strategy), but noted the critical difference of hyperspeed adoption.

The Fundamental Flaw of LLMs: The conversation shifted to the inherent technical challenge of AI—that models are not databases and their inaccuracy (15–20% error rate) breaks traditional security expectations.

The Shadow AI Dichotomy: A distinction was drawn between the slow, bureaucratic pace of official Enterprise AI projects and the rapid, high-risk adoption of Consumer-Grade (Shadow) AI by employees.

Non-Cyber Security Risks: The scope of risk was broadened to include business confidence problems, using the example of B2C chatbots recommending competitors, highlighting issues that land on the CISO's desk but aren't traditional "cyber."

Phase 3: The Technology Solution and Future Risks

Inadequacy of Legacy Tools (CASB/DLP): The discussion pivoted to why existing security solutions are ill-equipped for AI, citing their failure to handle unstructured data, multilingual input, and novel threats like emoji injection attacks.

Enabling, Not Blocking: The required technical solution was defined not as filtering, but as trusted enablement through processes like in-line data redaction/unredaction and intelligent model rerouting to internal, secure LLMs.

The Agentic Identity Challenge: The focus narrowed to the immediate future risk of Shadow Agents, emphasizing the complex identity problem of granting a piece of software a safe subset of human authorization, often requiring human approval steps.

Phase 4: Conclusion

Key Actionable Tip: The guest provided a single, critical piece of advice for starting AI governance: begin with observability to truly understand how users are interacting with AI before implementing policy.

Resource Recommendation: The conversation concluded with a recommendation for The Soul of a New Machine as essential reading for building a technical organization.
