Meta will introduce a new system on Instagram that notifies parents when teenagers repeatedly search for suicide or self-harm content. Alerts trigger after multiple such searches within a short period. Meta links the feature to its Teen Account supervision tools and says it strengthens protections for young users online.
Previously, Instagram blocked dangerous search terms and redirected teens to external support services. Meta now adds direct parental notifications to provide families with more oversight. Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week. The company plans to expand the system globally over the coming months.
Molly Rose Foundation Warns of Potential Harm
The Molly Rose Foundation has criticized the alert system. Chief executive Andy Burrows says automatic notifications could have unintended consequences. He warns they may trigger panic rather than provide constructive help.
The foundation was created by the family of Molly Russell, who died by suicide in 2017 at age 14 after viewing self-harm and suicide content online, including on Instagram. Burrows says parents naturally want to know when their child is struggling. He argues, however, that sudden alerts could leave families shocked and unprepared for sensitive conversations.
Meta says it will include expert guidance with every alert, intending these resources to help parents navigate difficult discussions. Ian Russell, who chairs the foundation, questions whether that support will be enough. A parent who receives an alert while at work could panic, he says, and written guidance alone may not prevent that immediate distress.
Experts Call for Preventive Action
Charities argue the alert system exposes deeper platform flaws. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alerts but says stronger prevention is needed. He says young people continue to encounter harmful material online.
Flynn notes that parents contact his organization daily, worried about their children's exposure to harmful content online. Families want platforms to prevent dangerous content from appearing in the first place, not just to alert them afterward.
Leanda Barrington-Leach, executive director of 5Rights Foundation, urges Meta to redesign systems with child safety by default. Burrows cites research showing Instagram still recommends harmful content about depression, self-harm, and suicide to vulnerable teens.
He insists platforms must address systemic risks instead of shifting responsibility onto parents. Meta disputes the foundation's September report, saying it misrepresents the company's teen safety and parental support efforts.
Global Pressure on Social Media Companies
Instagram designed Teen Account alerts to detect sudden changes in search behavior. Meta says the system builds on existing safety tools. The platform already hides self-harm and suicide content and blocks related searches.
Parents will receive notifications by email, text, WhatsApp, or within the app; Meta chooses the channel based on the contact information provided. The company acknowledges the system may occasionally generate alerts without serious cause, but says it prefers to err on the side of caution when protecting young users.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says the alerts will naturally alarm parents. He emphasizes that practical guidance must follow each notification, and that companies cannot leave families alone with their fear. Hinduja believes Meta understands this responsibility.
Instagram also plans to extend alerts to interactions with its AI chatbot. The company notes many teens increasingly turn to artificial intelligence tools for support. Governments worldwide continue pressuring social media firms to improve child safety.
Australia has banned social media for children under 16. Spain, France, and the UK are considering similar measures. Regulators closely monitor how tech companies engage young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court defending the company against claims it targeted underage users.
