As Meta launches Teen Accounts globally, a new report calls its safety tools a flop

Meta announced today (Sept. 25) that it would be expanding its youth safety feature, Teen Accounts, to Facebook, Messenger, and Instagram users around the world — a move that will place hundreds of millions of teens under the company’s default safety restrictions. 

The tech giant has spent the last year overhauling Teen Accounts, including placing limitations on communication and account discovery, filtering explicit content, and shutting down the option to go Live for users under the age of 16. 

Meta has labeled Teen Accounts a “significant step to help keep teens safe” and a tool that brings parents “more peace of mind.” But some child safety experts say the feature is an even emptier promise than previously thought.

A new report, also released today, accuses Meta’s Teen Accounts and related safety features of “abjectly failing” to keep users safe. The report, titled “Teen Accounts, Broken Promises,” found that many features core to the Teen Account ecosystem, including Sensitive Content Controls, tools meant to prevent inappropriate contact, and screen-time features, did not work as advertised. The analysis was conducted by Cybersecurity for Democracy, a research project based at New York University and Northeastern University, together with Meta whistleblower Arturo Béjar. The report was published in partnership with child advocacy groups in the U.S. and UK, including Fairplay, the Molly Rose Foundation, and ParentsSOS.

“We hope this report serves as a wake-up call to parents who may think recent high-profile safety announcements from Meta mean that children are safe on Instagram,” the report reads. “Our testing reveals that the claims are untrue and the purported safety features are substantially illusory.”

Meta safety tools don’t stand up to real-world pressure, expert says 

Researchers based their tests on 47 of the 53 safety features Meta lists and makes visible to users. Thirty of the tested tools, or 64 percent, received a red rating, indicating the feature was discontinued or entirely ineffective. Nine tools were found to reduce harm but came with limitations (yellow). Only eight of the 47 tested features were found to be working effectively to prevent harm (green), according to researchers.

For example, early tests showed adult accounts could still message teen users despite Meta’s measures to prevent unwanted contact, and teens could message adults who didn’t follow them. Similarly, DMs containing explicit bullying slipped past messaging restrictions. Teen Accounts were still recommended sexual and violent content, as well as content featuring self-harm. And researchers found there were no effective ways to report sexual messages or content.

The research relied on realistic user scenario testing to simulate how predators, parents, and teens themselves actually use platforms, explained Cybersecurity for Democracy co-director Laura Edelson. “For many of the risk scenarios that we are talking about, the teen is seeking out the risky content. That is a normal thing that any parent of a teen knows is, frankly, developmentally appropriate. This is why we parents parent, why we set up guardrails,” said Edelson. But Meta’s approach to addressing this behavioral tendency is ineffective and misinformed, she told Mashable in a press briefing. 

“If a teen needs to experience extortion in order to report, the damage is already done,” added Béjar. He compared Meta’s role to that of a car manufacturer, tasked with building a vehicle equipped with robust safety measures, like airbags and brakes, that do what they’re supposed to do. Parents and their teens are the drivers, but “the car is not safe enough to get in.”

“What Meta tells the public is often very different from what their own internal research shows,” alleged Josh Golin, executive director of Fairplay, the children’s advocacy nonprofit that co-published the report. “[Meta] has a history of misrepresenting the truth.”

In a statement to the press, Meta wrote:

“This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls.

The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”

Maurine Molak of David’s Legacy Foundation and ParentsSOS and Ian Russell of the Molly Rose Foundation also signed on to the report; both lost children to suicide following extensive cyberbullying. Parents around the world have expressed alarm at the growing role of technology, including AI chatbots, in teen mental health.

Advocates debate the role of federal regulators

In April, Meta announced it was shifting its youth safety focus to bolstering Teen Accounts, following a year of federal scrutiny over its role in the youth mental health crisis. “We’re going to be increasingly using Teen Accounts as an umbrella, moving all of our [youth safety] settings into it,” Tara Hopkins, global director of public policy at Instagram, told Mashable at the time.

Many tech companies have leaned on parent and teen education as they simultaneously launch platform features, offering training and information hubs for parents to sift through. Experts have criticized this approach as placing an undue burden on parents rather than on the tech companies themselves. Hopkins previously explained to Mashable that Meta’s automatic tools, including AI age verification, are designed to take that pressure off parents and caregivers. But “parents aren’t asking for a pass, they are just asking for the product to be made safer,” Molak said.

Child safety nonprofits like Common Sense Media have long criticized the company’s slow-to-launch safety measures, calling Teen Accounts a “splashy announcement” meant to cast the company in a better light before Congress. After the rollout of Teen Accounts, other studies by safety watchdogs found that teens were still exposed to sexual content, and Meta later removed over 600,000 accounts linked to predatory behavior. Most recently, Meta made interim changes to Teen Accounts that limit teens’ access to the company’s AI avatars, following reports that the avatars could engage in “romantic or sensual” conversations with teen users.

While child safety advocates agree on the pressing need for better safety measures online, many disagree on the extent of federal oversight. Some of the report’s authors, for example, are calling for the passage of the Kids Online Safety Act (KOSA), legislation that has become a divisive symbol of free speech and content moderation. The report also recommends that the Federal Trade Commission and state attorneys general invoke the Children’s Online Privacy Protection Act and Section 5 of the FTC Act to pressure the company into action. UK-based participants urge leaders to strengthen the 2023 Online Safety Act.

Just two weeks ago, Meta whistleblower Cayce Savage called for outside regulators to step in and evaluate Meta during testimony before the Senate Judiciary Committee.

“More research into social media user safety tools is urgently needed. Our findings show that many protections are ineffective, easy to circumvent, or have been quietly abandoned,” the report authors write. “User safety tools can be so much better than they are, and Meta’s users deserve a better, safer product than Meta is currently delivering to them.”

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.


