Mark Zuckerberg has published his AI manifesto, making a case for a type of “personal superintelligence” that people can use to achieve their individual goals.
In a new blog post, the Meta CEO said he wanted to build a personalized AI that helps you “achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.” However, the company’s new aims come with a caveat: this powerful AI may soon be too powerful to be left open to the world.
“We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns,” Zuckerberg wrote. “We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.”
Among those risks: that AI could become “a force focused on replacing large swaths of society,” he wrote.
Zuckerberg has traditionally positioned Meta as a proponent of open-source AI, especially compared with rivals like OpenAI and Google. While many argue the company’s Llama models don’t meet the strict definition of “open source,” Meta has leaned further toward open-sourcing its frontier models than most of its Big Tech peers.
In a blog post last year, Zuckerberg made an impassioned case for open source, heralding Meta as taking the “next steps towards open source AI becoming the industry standard.”
“I believe that open source is necessary for a positive AI future,” Zuckerberg wrote last year. “Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.”
The CEO has left himself some wiggle room, saying on a podcast last year that if AI capabilities changed significantly, it might not be safe to open-source the technology.
Closed models give companies more control over monetizing their products. In contrast to competitors like OpenAI, Meta makes most of its money from selling internet advertising rather than from selling access to its AI models.
Closed vs. open-source AI
AI safety experts have long debated whether open or closed models are the more responsible path for advanced AI development. Some argue that open-sourcing AI models democratizes access, accelerates innovation, and allows broader scrutiny that improves safety and reliability. Others counter that openly releasing powerful models could increase the risk of misuse by bad actors, including for misinformation, cyberattacks, or biological threats.
There’s a commercial argument against open source as well, which is why most leading AI labs keep their models private. Open-sourcing powerful AI models can erode a company’s competitive edge by allowing rivals to copy, fine-tune, or commoditize its core technology.
Meta is in a different position here than some of its rivals, as Zuckerberg said last year that Meta’s business isn’t reliant on selling access to AI models. “Releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research like it does for closed providers,” he said.
Representatives for Meta did not immediately respond to a request for comment from Fortune, made outside normal working hours.