For years, many parents felt that social media companies were too powerful to challenge. In March 2026, that began to change. On March 24, a jury in New Mexico found that Meta knowingly harmed children's mental health, concealed what it knew about child sexual exploitation on its platforms, and violated the state's consumer-protection law, and awarded $375 million in damages. Meta said it disagrees with the verdict and plans to appeal. (apnews.com)
The pressure intensified a day later, when a jury in Los Angeles found both Meta and YouTube liable in another case involving harm to children. These verdicts matter because U.S. tech companies have long relied on Section 230, a law that generally shields them from liability for content posted by users. In the New Mexico case, the state argued that Section 230 did not apply because Meta was responsible for how its platforms were designed and how their algorithms promoted harmful material. (apnews.com)
Governments are moving faster as well. Australia passed a law in 2024 blocking social media accounts for children under 16. Indonesia began rolling out a similar under-16 ban on March 28, 2026. France approved a bill in January 2026 banning social media for children under 15, and Austria announced on March 27, 2026 that it plans to ban social media for children under 14. In Europe, the European Commission has also begun expert discussions on possible age limits and stronger online-safety rules for minors. (apnews.com)
The big question is what happens next. Supporters say stronger rules can protect children from addictive design, sexual exploitation, bullying, and dangerous contact online. Critics raise concerns about privacy, free speech, and whether age verification will actually work in practice. But one thing is clear: after the March 2026 verdicts, the debate is no longer only about technology. It is also about law, responsibility, and how much risk society is willing to accept for children online. (apnews.com)