1. SEJ
  2.  ⋅ 
  3. Generative AI

Brave Reveals Systemic Security Issues In AI Browsers

Brave disclosed security vulnerabilities in AI browsers that could let malicious sites access user banking and email accounts.

  • Indirect prompt injection attacks let websites embed hidden instructions.
  • AI browsers execute the hidden instructions as user commands.
  • Vulnerabilities could give attackers access to banking, email, and work accounts.
Brave Reveals Systemic Security Issues In AI Browsers

Brave disclosed security vulnerabilities in AI browsers that could allow malicious websites to hijack AI assistants and access sensitive user accounts.

The issues affect Perplexity Comet, Fellou, and potentially other AI browsers that can take actions on behalf of users.

The vulnerabilities stem from indirect prompt injection attacks where websites embed hidden instructions that AI browsers process as legitimate user commands. Brave published the findings after reporting the issues to affected companies.

What Brave Found

Perplexity Comet Vulnerability

Comet’s screenshot feature can be exploited by embedding nearly invisible text in webpages.

When users take screenshots to ask questions, the AI extracts hidden text using what appears to be OCR and processes it as commands rather than untrusted content.

Brave notes Comet isn’t open-source, so this behavior is inferred and can’t be verified from source code.

The hidden instructions use faint colors that humans can barely see but AI systems extract and execute. This lets attackers issue commands to the AI assistant without the user’s knowledge.

Fellou Navigation Vulnerability

Fellou browser sends webpage content to its AI system when users navigate to a site.

Asking the AI assistant to visit a webpage causes the browser to pass the page’s visible content to the AI in a way that lets the webpage text override user intent.

This means visiting a malicious site could trigger unintended AI actions without requiring explicit user interaction with the AI assistant.

Access To Sensitive Accounts

The vulnerabilities become dangerous because AI assistants operate with user authentication privileges.

A hijacked AI browser can access banking sites, email providers, work systems, and cloud storage where users remain logged in.

Brave notes that even summarizing a Reddit post could result in attackers stealing money or private data if the post contains hidden malicious instructions.

Industry Context

Brave describes indirect prompt injection as a systemic challenge facing AI browsers rather than an isolated issue.

The problem revolves around AI systems failing to distinguish between trusted user input and untrusted webpage content when constructing prompts.

Brave is withholding details of one additional vulnerability found in another browser until next week.

Why This Matters

Brave argues that traditional web security models break when AI agents act on behalf of users.

Natural language instructions on any webpage can trigger cross-domain actions reaching banks, healthcare providers, corporate systems, and email hosts.

Same-origin policy protections become irrelevant because AI assistants execute with full user privileges across all authenticated sites.

The disclosure arrives the same day OpenAI launched ChatGPT Atlas with agent mode capabilities, highlighting the tension between AI browser functionality and security.

People using AI browsers with agent features face a tradeoff between automation capabilities and exposure to these systemic vulnerabilities.

Looking Ahead

Brave’s research continues with additional findings scheduled for disclosure next week.

The company indicated it’s exploring longer-term solutions to address the trust boundary problems in agentic browsing.


Featured Image: Who is Danny/Shutterstock

Category News Generative AI
SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal

Matt G. Southern, Senior News Writer, has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, ...