Back to all blogs
Security threats in Agentic AI Browsers
Security threats in Agentic AI Browsers
Jul 15, 2025
|
6 min read




As agentic AI browsers like Comet and Dia (by the creators of Arc browser) become increasingly popular for content summarisation and web navigation, they also open up new, largely uncharted security vulnerabilities. At Repello AI, our research team has uncovered a critical Cross-Context Prompt Injection (XPIA) vulnerability that weaponises invisible content on ordinary websites to hijack AI-powered functionalities in these browsers.
This attack method represents a new class of security threat: your AI browser can be compromised without any malware, downloads, or obvious red flags - simply by asking the AI assistant to summarise a web page.
Summary of the Attack
The threat model is deceptively simple yet devastating:
A user visits a legitimate-looking website or document.
The website contains invisible malicious prompts hidden in the webpage content.
When the AI browser is asked to summarise the page, it processes the hidden prompts along with the visible content.
This leads the AI to execute attacker-controlled commands like:
Closing all active browser tabs.
Opening multiple attacker-controlled sites.
Overwhelming & hijacking system resources.
Here’s how it works
The attack leverages a deception technique: invisible instructions embedded in DOM. Here’s what makes it so dangerous:
Invisible Payload: Malicious prompts are hidden in the webpage using techniques like:
White text on white backgrounds
CSS-hidden elements
Microscopic font sizes
HTML comments that AI systems still process
Trigger Mechanism: When a user innocently asks the AI browser to “summarise the text,” the hidden instructions activate.
Malicious Execution: The AI browser follows the embedded commands, which can include:
Closing all open tabs
Opening multiple malicious websites
Overwhelming the user with unwanted content
The Proof of Concept
Our researchers demonstrated this vulnerability with a simple but effective proof of concept:
|
Note: The texts with a grey background are invisible payloads implemented using the following methods- white text on white backgrounds, CSS-hidden elements, microscopic font sizes, and HTML comments that AI systems can still process.
When users request a summary, the AI processes these hidden instructions alongside the visible content, treating the malicious commands as legitimate user requests.
The Technical Deep Dive
Understanding Cross-Context Prompt Injection Attack (XPIA)
Unlike traditional prompt injection attacks that directly target AI models, Cross-Context Prompt Injection Attack (XPIA) works by:
Embedding malicious prompts in external content/context (websites, documents, emails, calendar invites).
Waiting for AI systems to process that content during normal operations.
Triggering unintended behaviours when the AI encounters the hidden instructions.
Why Agentic Browsers Are Particularly Vulnerable
Web Content Parsing:
Agentic browsers like Comet actively process and interpret all webpage content including hidden elements for summarisation and analysis.Elevated Browser Control:
These AI systems have privileged access to browser functions, allowing them to open, close, and navigate tabs directly.Implicit Content Trust:
They treat content from seemingly legitimate websites as safe, failing to differentiate between trusted data and hidden malicious instructions.Lack of AI-Specific Input Sanitisation:
Traditional input filtering methods do not account for prompt injection risks, leaving AI models exposed to hidden manipulative content.
Why this should be a concern
The Broader Implications
This vulnerability represents more than just a browser bug, it’s a fundamental security challenge for AI-integrated applications.
For Users:
Browsing becomes unpredictable and potentially dangerous.
Personal data could be exposed to malicious sites.
System resources can be overwhelmed.
Erosion of trust in AI-powered tools.
For Organizations:
Corporate networks face new infiltration vectors, with social engineering getting much easier with such attacks.
AI applications become attack surfaces to amplify existing voids in security posture.
Current DLP vendors get bypassed against such data exfiltration attacks.
Brand reputation risks increase.
For the Industry:
AI safety protocols require immediate review with more AI-native tools being adopted in enterprises with limited visibility of data flow beyond organization perimeters.
Incident response protocols & SOC need updates with focus on AI-specific threats like prompt injections amplifying and exposing existing threats.
Mitigating this vulnerability
Input Sanitisation:
Implement robust content filtering before AI processing.
Strip potentially malicious HTML & CSS elements.
Use allowlists for safe content types.
AI Safety Measures:
Deploy adaptive AI guardrails that monitors & blocks AI-specific threats like indirect prompt injections, jailbreaks, denial of wallet, knowledge base poisoning and more. Use sandboxed environments for AI touchpoints with clear isolation of sensitive operations & runtime variables.
Conduct regular security audit & red-teaming of AI-integrated features.
Security Architecture:
Apply the principle of least privilege for AI components by using isolated execution environments
Rate-limit AI operations and check for denial of wallet.
Enable comprehensive audit logs and monitoring.
The Future of AI Security
The Comet browser vulnerability is just the beginning. As AI becomes deeply integrated into our daily tools and workflows, the attack surface expands exponentially. What we’ve discovered isn’t just a browser bug, it's a preview of the security challenges that await us in an AI-powered world.
At Repello AI, we’re committed to staying ahead of these threats. Our research doesn’t just identify vulnerabilities; it shapes the future of AI security. Our AI Security platform ARTEMIS and ARGUS platforms, we’re helping organisations navigate this new landscape safely and confidently.
The question isn’t whether more AI security vulnerabilities will emerge, it's whether we’ll be prepared when they do.
For technical inquiries about this research or to discuss enterprise AI security solutions,
Reach out to our team at contact@repello.ai — we’re here to help you secure your AI systems.
As agentic AI browsers like Comet and Dia (by the creators of Arc browser) become increasingly popular for content summarisation and web navigation, they also open up new, largely uncharted security vulnerabilities. At Repello AI, our research team has uncovered a critical Cross-Context Prompt Injection (XPIA) vulnerability that weaponises invisible content on ordinary websites to hijack AI-powered functionalities in these browsers.
This attack method represents a new class of security threat: your AI browser can be compromised without any malware, downloads, or obvious red flags - simply by asking the AI assistant to summarise a web page.
Summary of the Attack
The threat model is deceptively simple yet devastating:
A user visits a legitimate-looking website or document.
The website contains invisible malicious prompts hidden in the webpage content.
When the AI browser is asked to summarise the page, it processes the hidden prompts along with the visible content.
This leads the AI to execute attacker-controlled commands like:
Closing all active browser tabs.
Opening multiple attacker-controlled sites.
Overwhelming & hijacking system resources.
Here’s how it works
The attack leverages a deception technique: invisible instructions embedded in DOM. Here’s what makes it so dangerous:
Invisible Payload: Malicious prompts are hidden in the webpage using techniques like:
White text on white backgrounds
CSS-hidden elements
Microscopic font sizes
HTML comments that AI systems still process
Trigger Mechanism: When a user innocently asks the AI browser to “summarise the text,” the hidden instructions activate.
Malicious Execution: The AI browser follows the embedded commands, which can include:
Closing all open tabs
Opening multiple malicious websites
Overwhelming the user with unwanted content
The Proof of Concept
Our researchers demonstrated this vulnerability with a simple but effective proof of concept:
|
Note: The texts with a grey background are invisible payloads implemented using the following methods- white text on white backgrounds, CSS-hidden elements, microscopic font sizes, and HTML comments that AI systems can still process.
When users request a summary, the AI processes these hidden instructions alongside the visible content, treating the malicious commands as legitimate user requests.
The Technical Deep Dive
Understanding Cross-Context Prompt Injection Attack (XPIA)
Unlike traditional prompt injection attacks that directly target AI models, Cross-Context Prompt Injection Attack (XPIA) works by:
Embedding malicious prompts in external content/context (websites, documents, emails, calendar invites).
Waiting for AI systems to process that content during normal operations.
Triggering unintended behaviours when the AI encounters the hidden instructions.
Why Agentic Browsers Are Particularly Vulnerable
Web Content Parsing:
Agentic browsers like Comet actively process and interpret all webpage content including hidden elements for summarisation and analysis.Elevated Browser Control:
These AI systems have privileged access to browser functions, allowing them to open, close, and navigate tabs directly.Implicit Content Trust:
They treat content from seemingly legitimate websites as safe, failing to differentiate between trusted data and hidden malicious instructions.Lack of AI-Specific Input Sanitisation:
Traditional input filtering methods do not account for prompt injection risks, leaving AI models exposed to hidden manipulative content.
Why this should be a concern
The Broader Implications
This vulnerability represents more than just a browser bug, it’s a fundamental security challenge for AI-integrated applications.
For Users:
Browsing becomes unpredictable and potentially dangerous.
Personal data could be exposed to malicious sites.
System resources can be overwhelmed.
Erosion of trust in AI-powered tools.
For Organizations:
Corporate networks face new infiltration vectors, with social engineering getting much easier with such attacks.
AI applications become attack surfaces to amplify existing voids in security posture.
Current DLP vendors get bypassed against such data exfiltration attacks.
Brand reputation risks increase.
For the Industry:
AI safety protocols require immediate review with more AI-native tools being adopted in enterprises with limited visibility of data flow beyond organization perimeters.
Incident response protocols & SOC need updates with focus on AI-specific threats like prompt injections amplifying and exposing existing threats.
Mitigating this vulnerability
Input Sanitisation:
Implement robust content filtering before AI processing.
Strip potentially malicious HTML & CSS elements.
Use allowlists for safe content types.
AI Safety Measures:
Deploy adaptive AI guardrails that monitors & blocks AI-specific threats like indirect prompt injections, jailbreaks, denial of wallet, knowledge base poisoning and more. Use sandboxed environments for AI touchpoints with clear isolation of sensitive operations & runtime variables.
Conduct regular security audit & red-teaming of AI-integrated features.
Security Architecture:
Apply the principle of least privilege for AI components by using isolated execution environments
Rate-limit AI operations and check for denial of wallet.
Enable comprehensive audit logs and monitoring.
The Future of AI Security
The Comet browser vulnerability is just the beginning. As AI becomes deeply integrated into our daily tools and workflows, the attack surface expands exponentially. What we’ve discovered isn’t just a browser bug, it's a preview of the security challenges that await us in an AI-powered world.
At Repello AI, we’re committed to staying ahead of these threats. Our research doesn’t just identify vulnerabilities; it shapes the future of AI security. Our AI Security platform ARTEMIS and ARGUS platforms, we’re helping organisations navigate this new landscape safely and confidently.
The question isn’t whether more AI security vulnerabilities will emerge, it's whether we’ll be prepared when they do.
For technical inquiries about this research or to discuss enterprise AI security solutions,
Reach out to our team at contact@repello.ai — we’re here to help you secure your AI systems.

You might also like

8 The Green, Ste A
Dover, DE 19901, United States of America

8 The Green, Ste A
Dover, DE 19901, United States of America

8 The Green, Ste A
Dover, DE 19901, United States of America