Aryaman Behera
Author
Co-founder & CEO, Repello AI

Posts by Aryaman Behera
68 posts

Vector Embedding Security: Why Static Audits Miss the Real Attacks
May 2, 2026

The axios Supply Chain Attack Left No Traces. Here's How to Know If You Were Hit.
Apr 2, 2026 · 9 min read

AI Security Testing: A Complete Framework for Pre-Deployment and Continuous Testing
Apr 1, 2026 · 11 min read

RAG Security: How to Red Team Retrieval-Augmented Generation Systems
Mar 27, 2026 · 7 min read

One prompt, broken alignment: what Microsoft's GRP-Obliteration research means for LLM deployments
Mar 27, 2026 · 5 min read

What Is Breach and Attack Simulation (BAS) for AI Systems?
Mar 26, 2026 · 8 min read

The NVIDIA Agent Toolkit Security Audit: What OpenShell Enforces, What It Doesn't, and What Your Team Still Needs to Test
Mar 23, 2026 · 7 min read

Indirect Prompt Injection: How It Works, Real Examples, and How to Stop It
Mar 17, 2026 · 8 min read

Multi-Modal AI Security: Guardrails for Text, Image, and Audio Models
Mar 17, 2026 · 10 min read

MCP Prompt Injection: How Malicious Tool Responses Can Hijack Your AI Agent
Mar 17, 2026 · 11 min read

Why Your LLM Evaluator Can Be Jailbroken: Security Risks in Automated AI Evaluation
Mar 17, 2026 · 12 min read

GPT-5.4 Is Here: Why Its Reliability Upgrades Are an Attacker's Best News of 2026
Mar 17, 2026 · 12 min read

How Attackers Jailbreak Enterprise AI Systems (And What Your Guardrails Miss)
Mar 17, 2026 · 10 min read

The Autonomous AI Agent That Taught Itself to Mine Crypto
Mar 9, 2026 · 8 min read

AI Jailbreak Prompts: How They Work, Why They Work, and How to Stop Them
Mar 4, 2026 · 24 min read

MCP Security: Why Best Practices Aren't Enough (And What Actually Works)
Mar 4, 2026 · 11 min read

Dangerous Prompts: A Field Guide to the Inputs That Break AI
Mar 4, 2026 · 5 min read

ClawHavoc: Inside the Supply Chain Attack That Targeted 300,000 AI Agent Users
Feb 24, 2026 · 8 min read

Claude Code Skill Security: How to Audit Any Skill Before You Run It
Feb 24, 2026 · 7 min read

Cisco Skill Scanner: What It Does, What It Misses, and When to Use Something Else
Feb 24, 2026 · 5 min read

AI Agent Skill Scanners: Every Tool Compared (2026)
Feb 24, 2026 · 5 min read

Claude Code Security Finds Bugs in Your Code. It Won't Secure Your AI Applications.
Feb 21, 2026 · 5 min read

ML Model Security vs. LLM Security: What's the Difference and Why You Need Both
Feb 20, 2026 · 7 min read

LLM Pentesting: The 2026 Checklist, Methodology, and Best Tools
Feb 20, 2026 · 9 min read

Your AI Assistant Is a Relay: How Copilot and Grok Were Turned Into C2 Proxies
Feb 20, 2026 · 5 min read

Emoji Prompt Injection: Why Your LLM's Guardrails Are Blind to It
Feb 19, 2026 · 10 min read

Standardizing Trust: Repello AI Named in Gartner’s Emerging Tech Report for Agentic AI Security
Jan 22, 2026 · 3 min read

Claude for Chrome goes rogue to leak ACCESS TOKENS!: Hijacking via Task Injection
Jan 8, 2026 · 8 min read

Security Robustness in Agentic AI: A Comparative Study of GPT-5.1, GPT-5.2, and Claude Opus 4.5
Dec 24, 2025 · 8 min read

Gemini Mobile's Consent Persistence: Weaponizing Google Docs summary for Geolocation Exfil
Dec 17, 2025 · 6 min read

Validating Enterprise AI Security: Repello’s Red Teaming Assessment of Lyzr AI Agents
Dec 2, 2025 · 7 min read

Introducing new Multilingual AI Safety Guardrails for 100 Languages
Dec 2, 2025 · 5 min read

Zero-Click Exfiltration: Why "Expected Behavior" in Google’s Antigravity is a Security Crisis
Nov 28, 2025 · 10 min read

Winter is Coming... for Your AI Agents: The Evolving Threat Landscape of Real-World Attacks
Nov 4, 2025 · 9 min read

Introducing AI Asset Inventory: See Your AI. Secure Your AI.
Oct 31, 2025 · 5 min read

Hacktoberfest 2025: Contribute to AI Security with Repello AI!
Oct 6, 2025 · 3 min read

ChatGPT MCP Connector Security Vulnerability: Zero-Click Data Exfiltration Attack
Sep 24, 2025 · 10 min read

Introducing ARTEMIS Browser Mode: Red-Team Your AI Applications Like a Human Would
Sep 23, 2025 · 10 min read

VANTAGE: A framework for Enterprise AI-SPM built on rigorous AI asset inventorisation
Aug 22, 2025 · 10 min read

Exploiting Zapier’s Gmail auto-reply agent for data exfiltration
Jul 24, 2025 · 6 min read

Security threats in Agentic AI Browsers
Jul 15, 2025 · 6 min read

Zero-Click Calendar Exfiltration Reveals MCP Security Risk in 11.ai
Jul 10, 2025 · 6 min read

Introducing ARGUS: Runtime Security Layer for your GenAI systems
Jun 19, 2025 · 6 min read

BIG NEWS: Repello AI Raises $1.2M to Secure the future of AI 🚀
Jun 16, 2025 · 9 min read

When the Model Grades the Model: Demystifying ‘LLM-as-a-Judge’ for Practitioners
May 28, 2025 · 9 min read

Turning Background Noise into a Prompt Injection Attack in Voice AI
May 15, 2025 · 9 min read

Ghibli Dreams vs. Adversarial Schemes: Attacks on Diffusion Models
May 15, 2025 · 9 min read

MCP tool poisoning to RCE
Apr 17, 2025 · 9 min read

Securing Machine Learning Models: A Comprehensive Guide to Model Scanning
Apr 4, 2025 · 6 min read

Repello AI and LimeChat Join Forces to Make AI Chatbots More Secure
Mar 25, 2025 · 4 min read

Introducing ARTEMIS: Automated Red Teaming to Secure your AI applications
Mar 18, 2025 · 5 min read

Distilled, but Dangerous? Assessing the Safety of Models Derived from DeepSeek-R1
Feb 19, 2025 · 5 min read

Introducing Matrix AI Security Challenge: An Immersive Cyberpunk Hacking Game
Feb 7, 2025 · 3 min read

LLM Evaluation Metrics, Frameworks, and Checklist in 2024
Oct 29, 2024 · 23 min read

10 prompt injection attack examples
Oct 28, 2024 · 17 min read

Data Security and Privacy for AI Systems
Oct 20, 2024 · 14 min read

Top 11 AI Jailbreak Communities to Explore
Sep 21, 2024 · 5 min read

The OWASP Top 10 for Large Language Models Explained for CISOs: Part 2
Sep 20, 2024 · 10 min read

The OWASP Top 10 for Large Language Models Explained for CISOs: Part 1
Sep 19, 2024 · 11 min read

How to Secure Your AI Applications: Essential Strategies for Safety
Sep 18, 2024 · 7 min read

Protecting Your AI Models: Simple Strategies for Security
Sep 17, 2024 · 6 min read

Top 6 AI Security Vulnerabilities in 2024
Sep 16, 2024 · 9 min read

Comprehensive Guide to GenAI Security
Sep 14, 2024 · 6 min read

HiddenLayer Key Features and Alternatives
Sep 9, 2024 · 7 min read

Navigating AI Risk Management: A Simple Guide
Sep 1, 2024 · 8 min read

Denial Of Wallet
Aug 26, 2024 · 5 min read

Breaking Meta's Prompt Guard - Why Your AI Needs More Than Just Guardrails?
Aug 6, 2024 · 20 min read

How RAG Poisoning Made Llama3 Racist!
May 28, 2024 · 12 min read