For years, many people treated social media addiction as a personal weakness or a parenting problem. In 2026, American courts are asking a much sharper question: if a platform is intentionally designed to keep children scrolling, can that design itself become a public harm? That is now the central issue in New Mexico’s case against Meta. On March 24, 2026, a jury ordered Meta to pay $375 million in civil penalties after finding that the company harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms. Then, on May 4, 2026, a second phase of the trial began in Santa Fe to decide whether Meta’s platforms amount to a “public nuisance.” (apnews.com)
New Mexico argues that the problem is not only harmful content but harmful design. The state's attorneys are targeting recommendation algorithms that reward constant engagement, along with features such as infinite scroll, push notifications, and visible "like" counts. They are asking the court to impose a $3.7 billion abatement plan, stronger age verification, default privacy protections, parent-linked child accounts, and a court-supervised monitor to verify whether Meta actually makes its apps safer. Meta rejects that theory: the company says many of the proposed changes are unnecessary or unworkable, warns that they could restrict free expression and parental choice, and has even suggested it might stop offering its services in New Mexico rather than adopt state-specific rules. (apnews.com)
What makes this lawsuit so important is that it no longer looks isolated. On April 10, 2026, Massachusetts' highest court allowed the state attorney general's youth-addiction case against Meta to move forward, ruling that the complaint targets Meta's own conduct rather than user-created content, which is often shielded by Section 230. In March 2026, a Los Angeles jury also found Meta and Google negligent in a bellwether case over harm to a young user, and thousands of similar lawsuits are now pending. Meta says it has expanded Teen Accounts, parental controls, and AI-based age detection, with at least 54 million active Teen Accounts reported in 2025. The law has not delivered a final answer yet, but the debate has clearly shifted: courts are beginning to ask whether "attention engineering" is not merely persuasive design, but a danger imposed on society. (marketscreener.com)