'Inside Security' with Hardik Thakkar

'Inside Security'
Blog
Leen Security
August 28, 2025

In previous editions of Inside Security, we've examined the human factors that plague security teams and the infrastructure challenges of cloud-first organizations. But there's an emerging crisis that demands immediate attention: the reckless deployment of generative AI capabilities without corresponding security frameworks, creating unprecedented attack surfaces while simultaneously demanding new forms of multi-disciplinary expertise from security engineers.

This edition features insights from Hardik Thakkar (Sr. Security Prototyping Solutions Architect at AWS) who partners with global financial institutions to turn ideas into secure, scalable prototypes. With deep expertise in cloud security, architecture, and regulatory compliance, Hardik helps bridge the gap between vision and implementation by rapidly designing and building secure cloud-native solutions aligned with AWS best practices. His perspective reveals a sobering reality: while organizations race to implement AI features to remain competitive, they're systematically creating vulnerabilities that existing security frameworks cannot adequately address.

Defining the Gen AI Security Challenge

Gen AI security represents a fundamental expansion beyond traditional security paradigms. While conventional threats around network vulnerabilities, code exploits, and data leakage remain relevant, AI opens up an entirely new category of risks that operates at the intersection of human psychology and machine behavior.

"Gen AI security extends beyond traditional security and requires an evolving framework that keeps in mind traditional security perspectives while tapping into what's emerging from a Gen AI standpoint. It's not going to be static and standard where the traditional security lens just applies as it has been for decades."

The core challenge lies in prompt injections, context overflow attacks, and agent misbehavior – threats that exist because AI systems are designed to interpret and respond to natural language instructions in ways that can be manipulated. Unlike traditional security vulnerabilities that typically require technical exploitation, AI vulnerabilities can be triggered through carefully crafted conversational inputs.

The "AI for AI's Sake" Risk Multiplier

The market pressure to implement AI capabilities has created what we term the "wrapper problem" - organizations hastily adding AI functionality to existing products without understanding the security implications. This approach creates several critical vulnerabilities:

  1. Guardrail Degradation: When organizations modify foundation models through retrieval-augmented generation (RAG) or fine-tuning, the built-in safety mechanisms become less effective. The original model's protections were designed for specific use cases and may not transfer to customized implementations.
  2. Indirect Data Exposure: Traditional PII detection systems fail against sophisticated prompt engineering. Attackers can extract sensitive information by requesting data in non-standard formats that don't trigger regex-based detection patterns.
  3. Attack Surface Expansion: Web search capabilities, API integrations, and external data sources multiply potential entry points for malicious actors while making it harder to predict system behavior.
"The fundamental issue is that organizations are treating AI integration as just a software feature, which of course it is, but it is also a security transformation that requires comprehensive threat modeling and risk assessment. They are applying traditional software deployment practices to systems that can fundamentally alter their behavior based on user interactions. This creates unpredictable attack surfaces that conventional security controls weren't designed to handle."

The Scale-Dependent Security Blind Spots

Different organizational scales exhibit predictable blind spots in AI security implementation:

  1. Startups: The primary oversight centers on data protection. Early-stage companies prioritize speed-to-market over security controls, often viewing security incidents as acceptable publicity. This approach ignores the potential for catastrophic data breaches that can destroy nascent businesses before they establish market presence.
  2. Mid-Market: Compliance requirements become the critical blind spot. These companies often lack the resources for comprehensive compliance frameworks while facing increasing regulatory scrutiny. They risk significant penalties by deploying AI systems without proper governance structures.
  3. Large Enterprises: Governance complexity represents the greatest challenge. These organizations struggle with lifecycle management across multiple AI initiatives, often running outdated models while newer, more secure versions remain stuck in approval processes. The result is a security debt that compounds over time.

The Context Switching Crisis Intensifies

AI security demands that security professionals become proficient across multiple disciplines simultaneously. Unlike traditional security roles that could specialize in network, application, or infrastructure security, AI security requires understanding:

  • Software engineering principles for AI system architecture
  • Data engineering for training and inference pipelines
  • Behavioral psychology for social engineering and prompt injection attacks
  • Regulatory compliance across multiple jurisdictions
  • Real-time threat analysis for rapidly evolving attack vectors

This multidisciplinary requirement creates what Hardik describes as an essential but challenging evolution:

"Security engineers specifically need to understand that engineering in general is multidisciplinary thinking. It involves data, network, software engineering concepts, and when you add security to it, you're thinking about what risks can hamper the business. The traditional security playbook of detect, analyze, contain, eradicate, and recover remains relevant, but the methods for executing each phase require fundamental re-imagination for AI systems."

Security in an Agent-Driven World

Looking toward the next five years, the security landscape will likely shift from protecting static systems to governing dynamic, autonomous environments. As AI agents become capable of independent code review, feature development, and system maintenance, security teams will face unprecedented challenges:

  • Inter-Agent Security Protocols: New authentication and communication standards will be required for AI systems that interact across organizational boundaries. Traditional identity management approaches cannot scale to environments where autonomous agents make real-time decisions without human oversight.
  • Dynamic Threat Landscapes: Security threats will evolve continuously as AI systems adapt their behavior based on new data and interactions. Static security controls will become ineffective against systems that fundamentally change their operation patterns.
  • Shared Responsibility Evolution: The traditional boundaries between internal and external security will blur as AI agents interact with 3rd-party systems and services. Organizations will need frameworks for managing security responsibilities across autonomous system networks.

Practical Recommendations for AI Security Implementation

Based on the challenges identified across organizational scales, we asked Hardik to propose a few actionable approaches that teams (varying in size and scale) could undertake:

For All Organizations:

  • Implement decoupled AI features with independent kill switches
  • Maintain human-in-the-loop validation for all AI-generated outputs
  • Establish continuous monitoring for AI system behavior and cost anomalies
  • Develop incident response procedures specific to AI-related security events

For Startups:

  • Focus on data protection as the primary AI security priority
  • Implement staged rollouts (alpha, beta, production) for all AI features
  • Establish clear boundaries between AI capabilities and core business functions
  • Invest in prompt injection testing before external deployment

For Mid-Market Companies:

  • Develop compliance frameworks that account for AI-specific regulatory requirements
  • Balance speed of implementation with risk assessment procedures
  • Establish partnerships with AI security vendors for specialized expertise
  • Create cross-functional teams that include security, legal, and compliance perspectives

For Large Enterprises:

  • Implement governance frameworks for AI model lifecycle management
  • Develop standardized security assessments for AI implementations
  • Establish centers of excellence for AI security across business units
  • Invest in security engineering capabilities that can match the pace of AI development

The Engineering Imperative for All Security Professionals

The most critical insight from this analysis is that traditional security roles must evolve toward engineering-heavy positions. At Leen, we have always believed that security needs to be more engineering driven, and the emergence of AI systems validates this perspective. Security professionals without engineering backgrounds need organizational support to develop these capabilities through:

  • Capture-the-flag exercises focused on AI vulnerability scenarios
  • Internal workshops where security teams build and then attack AI systems
  • Dedicated project time for security professionals to develop AI applications
  • Cross-training programs with development teams building AI features

This evolution cannot be left to individual initiative – it requires organizational commitment to transforming security teams' capabilities.

Conclusion: Security as AI-Native Practice

The gen AI revolution demands that security teams fundamentally reimagine their role within organizations. The traditional approach of retrofitting security controls onto existing systems (aka wrappers) will not scale to environments where AI agents make autonomous decisions that can fundamentally alter system behavior.

Success in this environment requires security teams that can operate at the intersection of engineering, psychology, and risk management. Organizations that continue to treat AI security as a traditional compliance exercise will find themselves increasingly vulnerable to threats that exploit the unique characteristics of generative AI systems.

The opportunity exists for security teams to become essential partners in AI implementation rather than impediments to innovation. This requires embracing the engineering aspects of security while developing new frameworks for governing autonomous systems. The organizations that make this transition successfully will be the ones that can safely harness AI's transformative potential while protecting their critical assets.

The context switching crisis is real, but it's not insurmountable. It demands that security professionals become comfortable with continuous learning and adaptation while organizations provide the support structures necessary for this evolution. For security leaders willing to embrace this challenge, the opportunity to define the future of AI security has never been greater.

. . .

Thanks to Hardik for sharing his insights on generative AI security. The rapid evolution of this space demands continued dialogue between security practitioners navigating these emerging challenges.

Scale your security
integrations faster with Leen