In a state court in Los Angeles this week, 12 jurors are hearing opening arguments in a case that has the potential to change social media—maybe even the internet—as we know it.
The trial, which began today, is a bellwether: Similar individual cases have been filed all around the country, and a massive federal case with more than 2,000 plaintiffs is expected to proceed this summer. In each case, the plaintiffs accuse social-media companies of releasing defective products. The argument is that these products were built with dangerously habit-forming features—including the endless-scroll feed, algorithmic recommendations, and push notifications—that have led to an array of serious health problems. Plaintiffs also accuse the companies of failing to warn users about the risks of using their products and of deliberately concealing their dangers.
The L.A. case is the first to make it to trial. It is scheduled to last about six weeks, and it focuses heavily on Meta—in particular, Instagram. (The defendants originally included TikTok, Snap, and YouTube. TikTok and Snap settled with the plaintiff last month rather than go to trial. YouTube remains part of the case, though it is less central to the complaint. The company has said that allegations against it are “simply untrue.”) The lawsuit asks an existential question about Meta’s business: Can the fundamental design and most basic features of Instagram directly cause mental-health problems in kids and teenagers? The jury will be asked to answer that question, and Meta is taking a big risk by allowing it to do so (though the company can appeal the verdict if it loses).
The plaintiff in this case is a 19-year-old California woman identified only by her initials, K.G.M., because the events she’s suing over took place when she was a minor. Her suit states that she began using social media at the age of 10 and alleges that Instagram directly degraded her mental health. According to her complaint, the app “targeted” her with “harmful and depressive content,” which led her to develop a negative body image and to commit acts of self-harm. She also says that she was a victim of “bullying and sextortion” in the app as a minor and that Instagram did not do “anything” until her friends and family spent two weeks repeatedly reporting the problem. Her older sister, a plaintiff in a separate case, suffered a life-threatening eating disorder that the family believes was also triggered by her use of Instagram and other social-media sites.
The basic allegations do not make Meta look very good. The company may be taking its chances in court now simply because it has to eventually. If it were to win this case, that might slow the momentum of the many others to come. The company may also relish an opportunity to set the record straight, as it were. For years now, Meta has been compared to Big Tobacco and accused of deliberately destroying children’s minds. Matters got worse in 2021, when internal documents leaked by the whistleblower Frances Haugen showed that some employees were worried about Instagram’s effects on young girls. In response to the backlash, which has been ongoing ever since, the company has half-acquiesced to public pressure and made piecemeal efforts at image rehabilitation. It has explained itself in dry blog posts, created more ornate parental controls, and launched awkward ad campaigns emphasizing its commitment to safety and screen-life balance. (In its latest ad, Tom Brady describes his teen son’s ability to connect with friends online as “very much a value-add.”)
Now the company will see if it can possibly sway a group of ordinary Americans with its version of the facts. “This will be their first chance to tell their story to a jury and get a sense of how well those arguments are playing,” Eric Goldman, a professor at Santa Clara University School of Law, told me. Meta’s day in court has come.
K.G.M., like many of the other plaintiffs filing personal-injury suits against social-media companies, is represented by the Seattle-based Social Media Victims Law Center. In the spring of 2023, the organization filed a complaint on behalf of a number of plaintiffs, opening with this animating statement: “American children are suffering an unprecedented mental health crisis fueled by Defendants’ addictive and dangerous social media products.”
The complaint goes on to accuse social-media companies of deliberately “borrowing” tactics from the slot-machine and cigarette industries in an effort to make their products addictive, and argues that social-media apps have “rewired” kids so that they prefer digital “likes” to genuine friendship, “mindless scrolling” to offline play. “While presented as ‘social,’ Defendants’ products have in myriad ways promoted disconnection, disassociation, and a legion of mental and physical harms,” the complaint summarizes. In K.G.M.’s case, the listed harms include “dangerous dependency” on social media as well as “anxiety, depression, self-harm, and body dysmorphia.”
Her case is the first of potentially thousands. Numerous school districts, state attorneys general, tribal nations, and individuals have also filed suit against social-media companies. But this case is worth watching because it will hit on all of the big topics. To assess whether social media is generally harmful to kids and teens, lawyers will have to argue about the nitty-gritty of a complicated and conflicted scientific field. To get at the question of whether Meta hid specific knowledge of harm, they’ll debate the meaning of the documents Haugen leaked as well as others produced during discovery.
The jury will likely hear arguments about whether social-media addiction is real, what the murky concept of “the algorithm” actually means, and whether the richest companies in history really have allowed bad things to happen to children for the benefit of their bottom line. Reached for comment, a Meta spokesperson pointed me to an informational website the company has created about the lawsuit and highlighted a previous statement, which reads in part: “Plaintiffs’ lawyers have selectively cited Meta’s internal documents to construct a misleading narrative, suggesting our platforms have harmed teens and that Meta has prioritized growth over their well-being. These claims don’t reflect reality.”
Goldman, who often writes about internet law, said that he thinks Meta will have its work cut out for it with the jury. After 10 years of critical media coverage and political bickering about how to rein the tech companies in, “I assume that the jury is going to walk into the courtroom heavily skeptical of Facebook, Instagram, YouTube, and social media generally,” he said.
Meta’s lawyers can make a good scientific case on some of the broader questions. Researchers have spent years looking for smoking-gun evidence that social-media use directly causes mental-health problems in young people at scale, and have mostly turned up weak and inconsistent correlations, with no way to prove long-term causation. Major scientific bodies such as the National Academies of Sciences, Engineering, and Medicine have started to recognize that the story is more complicated than a blanket claim that social media is dangerous in all forms and for all kids.
However, this case is about one kid. Even if social-media addiction is not “real” in the sense of being listed in the DSM-5, and even if it has not created a mental-health epidemic all on its own, certain people, perhaps many, could still be susceptible to what some clinicians prefer to call problematic internet use. The jury will have to decide whether that can cause further problems such as the ones K.G.M. has described (and whether it’s Meta’s fault if it does). Legally, the burden will be on her lawyers to convince the jury of that.
This is a sticky situation. Corbin Barthold, the internet-policy counsel at the think tank TechFreedom, told me that “having lawyers get up and give speech contests in front of a jury” is one of the worst ways he can imagine of settling the scientific disputes about social media and its effects on mental health. (Actually, he called it “crazy.”) And it is somewhat surprising that we’ve ended up here. Social-media companies are usually protected by a portion of the 1996 Communications Decency Act known as Section 230, which guarantees that online platforms are not considered legally responsible for what their users post or see. The law has been the subject of repeated controversy and legal challenge ever since it was written. Some people now argue that it is totally outdated, having been written at a time when the web was essentially a bunch of static pages, nothing like the complicated landscape we spend so much time in today.
Meta tried and failed to have the case dismissed on Section 230 grounds. Judge Carolyn Kuhl let it proceed because it will not consider specific posts or comments; instead, it will focus on design features such as the recommendation algorithm and the never-ending feed. Free-speech civil-society groups on the right and the left were irked by Kuhl’s decision. However, Kuhl is not the only judge who has recently allowed such arguments to go ahead. A similar product-liability claim was the basis of a lawsuit against Google and Character.AI, filed in 2024 by the mother of a 14-year-old boy who killed himself after forming an intense relationship with a chatbot. That case was settled out of court, but it signaled what the University at Buffalo School of Law professor Mark Bartholomew described to me in an email as “a growing willingness” among the courts “to take old product liability doctrines for physical goods and apply them to software.”
This trial is just one personal-injury suit, though possibly the first of many. “It’s a brick in a potential wall,” James Grimmelmann, a professor of digital and information law at Cornell Law School, told me. “If they think they’re going to keep on losing other cases, they’re going to have to make changes.” It’s not yet obvious what changes the company would have to make. No more content recommendations? No more feed? And it’s not just Meta whose future would be in question; it would be that of any internet-based service with reason to believe that anyone under the age of 18 could be using it and getting “addicted” to it.
The possibly enormous stakes reflect how pitched the debate about social media has become. Pete Etchells, a professor of psychology and science communication at Bath Spa University, in England, told me that he finds the situation “really frustrating.” One side denies that anything is wrong; the other side compares social media to cigarettes, even though that makes little sense. “We’re not talking about a biological substance that you can consume that has a demonstrable chemical effect,” Etchells said.
Etchells wrote a book titled Unlocked: The Real Science of Screentime, published in 2024, which argued, in part, that a moral panic about social media and smartphones has made it more difficult to learn how to use them in beneficial ways and to pick apart what, specifically, might be wrong with them. At the same time, the public justifiably wants something done about the unaccountable tech companies, he said, and bridles when those companies seem to cherry-pick scientific studies that fit their narrative, brandishing them as an ironclad defense against further scrutiny.
Even if science is on those companies’ side in a general sense, that doesn’t necessarily mean that the facts are on their side when you talk about one girl, one series of particular events. And now, after years of hearings and reports and rebuttals and failed legislation and bad ideas and ad spots, it’s all up to that jury. They have the task of looking at this one story, hearing both sides, and making a decision.