Meta defends teen safety settings amid criticisms


Meta, the parent company of Facebook, Instagram and WhatsApp, has defended new tools designed to make its apps safer for teenage users.

Last year, Meta launched Instagram teen accounts with added protections and parental controls.

The company is now rolling out these accounts on Facebook and Messenger.

A recent study found that Instagram’s tools, which are designed to protect teenagers from harmful content, are failing to stop them from seeing suicide and self-harm posts.

The investigation was carried out by the US research centre Cybersecurity for Democracy, along with experts including whistleblower Arturo Béjar, on behalf of child safety groups including the Molly Rose Foundation.

Meta’s Global Head of Safety, Antigone Davis, visited Ireland today and was asked by RTÉ News about the report’s findings.

“We didn’t agree with the methodology of the report, but putting that aside, what we do know and our own research shows, and other research has shown, that teens are seeing less of all of that content through teen accounts, so these safeguards do work,” Ms Davis said.

Recommender systems are the algorithms that push content into users’ social media feeds.

Campaign groups said these systems can be toxic and often result in children seeing inappropriate content relating to issues such as self-harm, eating disorders and toxic masculinity.

“That’s not what those systems do,” Ms Davis said.

“What they do is take things people show an interest in, and then we try to offer up a personalised experience.”

“That’s important for safety because if you think about what you want a teen engaging with, we want to make sure they are seeing age-appropriate content, that’s part of the personalisation and part of what those algorithms do.”

“For things like suicide and self-injury content, we are not going to try to recommend that, our rules don’t allow it.”

“It can come up in people’s feeds, and we have some tools for protection.”

“In addition to a sensitive content control to filter out that content, we have something in Explore where you can touch on content you don’t want to see to shape your algorithm.”

“In addition, we have something called a digital reset – you can reset your algorithm, so you aren’t seeing that content.”

Can we not just switch off the recommender algorithms for teens?

“I think if you had a system without algorithms, you’d end up with a less safe experience, so if you don’t put in algorithms and try to personalise, the types of things that come into your feed will be unfiltered,” Ms Davis said.

Online safety group Common Sense Media recently said that Meta’s artificial intelligence tool Meta AI “poses unacceptable risks to teen safety”.

“Safety systems regularly fail when teens are in crisis, missing clear signs of self-harm, suicide risk, and other dangerous situations that require immediate intervention,” the report concluded.

In response, Ms Davis said Meta has put very strict measures in place when it comes to AI.

“We have put in place risk-reduction measures so that when teens are trying to engage with the AI to prompt it to potentially share suicide information, we just surface up resources and they get a refusal to that prompt,” she said.

Social media companies have been criticised for reducing the number of human content moderators and relying too much on AI to detect and remove harmful content.

“We still have human content moderators, we still do human review, we do also use AI,” Ms Davis said.

“AI can be highly accurate at removing things like age-inappropriate material.”

“You will always need human reviewers, and we still have that.”

With billions of users posting content, is it like playing ‘whack-a-mole’ to keep harmful content off Meta’s platforms?

“It is hard, but I don’t think it’s ‘whack-a-mole’,” Ms Davis said.

“We’ve gotten very good at removing most of the content that people try to post that violates our policies.”

“There are potentially areas where something may come through, and as soon as we are made aware of it, we’re going to remove that from our platform.”

“But it is an adversarial space, and for particular content you will see people in an adversarial way, and this [is] where AI can be very helpful.”

For anyone affected by this story, please see supports on our helpline page.