Comet, Perplexity’s new AI-powered web browser, recently suffered from a significant security vulnerability, according to a blog post last week from Brave, a competing web browser company. The vulnerability has since been fixed, but it points to the challenges of incorporating large language models into web browsers.
Unlike traditional web browsers, Comet has an AI assistant built in. This assistant can scan the page you’re looking at, summarize its contents or perform tasks for you. The problem is that Comet’s AI assistant is built on the same technology as other AI chatbots, like ChatGPT.
AI chatbots can’t think and reason the same way humans can, and if they read a piece of content meant to manipulate its output, it may end up following through. This is known as prompt engineering.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
A representative for Brave didn’t immediately respond to a request for comment.
AI companies try to mitigate the manipulation of AI chatbots, but that can be tricky, as bad actors always look at novel ways to break through protections.
“This vulnerability is fixed,” said Jesse Dwyer, Perplexity’s head of communications in a statement. “We have a pretty robust bounty program, and we worked directly with Brave to identify and repair it.”
Test used hidden text on Reddit
In its testing, Brave set up a Reddit page with invisible text on the screen and asked Comet to summarize the on-screen content. As the AI processed the page’s content, it couldn’t distinguish between the malicious prompts and began feeding Brave’s testers sensitive information.
In this case, the hidden text enabled Comet’s AI assistant to navigate to a user’s Perplexity account, extract the associated email address, and navigate to a Gmail account. The AI agent was essentially acting as an actual user, meaning that traditional security methods weren’t working.
Brave warns that this type of prompt injection can go further, accessing bank accounts, corporate systems, private emails and other services.
Brave’s senior mobile security engineer, Artem Chaikin, and VP of privacy and security, Shivan Kaul Sahib, laid out a list of possible fixes. First, AI web browsers should always treat page content as untrusted. AI models should check to make sure they’re following user intent. The model should always double-check with the user to ensure interactions are correct, and agentic browsing mode should only turn on when the user wants it to.
Brave’s blog post is the first in a series regarding challenges facing AI web browsers. Brave also has an AI assistant, Leo, embedded in its browser.
AI is increasingly embedded in all parts of technology, from Google searches to toothbrushes. While having an AI assistant is handy, these new technologies have different security vulnerabilities.
In the past, hackers needed to be expert coders to break into systems. When dealing with AI, however, it’s possible to use squirrely natural language to get past built-in protections.
Also, since many companies rely on major AI models, such as ones from OpenAI, Google and Meta, any vulnerabilities in those systems could extend to companies using those same models. AI companies haven’t been open about these types of security vulnerabilities as doing so might tip off hackers, giving them new avenues to exploit.
Great Job Imad Khan & the Team @ CNET Source link for sharing this story.