Does what’s said between you and your AI chat stay between you and your AI chat? Nope.
According to a report by Forbes, Elon Musk's AI assistant Grok published more than 370,000 chats on the Grok website. Those URLs, which users did not necessarily intend for public consumption, were then indexed by search engines and entered the public sphere.
It wasn't just chats. Forbes reported that uploaded files, including photos, spreadsheets and text documents, were also published.
Representatives for xAI, which makes Grok, didn’t immediately respond to a request for comment.
The publishing of Grok conversations is the latest in a series of troubling reports that should spur chatbot users to be overly cautious about what they share with AI assistants. Don’t just gloss over the Terms and Conditions, and be mindful of the privacy settings.
Earlier this month, 404 Media reported on a researcher who discovered that more than 130,000 chats with AI assistants, including Claude and ChatGPT, were readable on Archive.org.
When a Grok chat is finished, the user can hit a share button to create a unique URL, allowing the conversation to be shared with others. According to Forbes, “hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.” These URLs were also made available to search engines, allowing anyone to read them.
There is no disclaimer that these chat URLs will be published to the open internet. But the Terms of Service on the Grok website read: “You grant, an irrevocable, perpetual, transferable, sublicensable, royalty-free, and worldwide right to xAI to use, copy, store, modify, distribute, reproduce, publish, display in public forums, list information regarding, make derivative works of, and aggregate your User Content and derivative works thereof for any purpose…”
Protect your privacy
E.M. Lewis-Jong, director at the Mozilla Foundation, advises chatbot users to keep a simple directive in mind: Don’t share anything you want to keep private, such as personal ID data or other sensitive information.
“The concerning issue is that these AI systems are not designed to transparently inform users how much data is being collected or under which conditions their data might be exposed,” Lewis-Jong says. “This risk is higher when you consider that children as young as 13 years old can use chatbots like ChatGPT.”
Lewis-Jong adds that AI assistants such as Grok and ChatGPT should be clearer about the risks users are taking when they use these tools.
“AI companies should make sure users understand that their data could end up on public platforms,” Lewis-Jong says. “AI companies are telling people that the AI might make mistakes — this is just another health warning that should also be implemented when it comes to warning users about the use of their data.”
According to data from SEO and thought leadership marketing company First Page Sage, Grok has 0.6% of market share, far behind leaders ChatGPT (60.4%), Microsoft Copilot (14.1%) and Google Gemini (13.5%).