Instagram plans warnings over self-harm searches


DCM Editorial Summary: This story has been independently rewritten and summarised for DCM readers to highlight key developments relevant to the region. Original reporting by The Irish Times.


Instagram will soon start warning parents if their teen is repeatedly searching for content related to self-harm and suicide in a short period of time, giving parents an early alert that they may need support.

The warnings, which will roll out in the coming weeks, will be sent to parents who supervise a teen's Instagram account. Searches that will prompt the alerts include phrases promoting suicide or self-harm or suggesting a teen wants to harm themselves, along with explicit terms such as “suicide” or “self-harm”.

Instagram will also point parents to expert resources to help them navigate sensitive conversations.

Instagram owner Meta said it would start informing teen users and their parents about the alerts next week, with the US, UK, Australia and Canada among the first to receive the feature the following week. Other regions, including Ireland, will follow later this year.

Instagram already blocks content that promotes or glorifies suicide or self-harm and directs searches for this content to support organisations and resources.

“The vast majority of teens do not try to search for suicide and self-harm content on Instagram, and when they do, our policy is to block these searches, instead directing them to resources and helplines that can offer support,” Meta said in a post.

“These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen.”

Instagram consulted experts from its suicide and self-harm advisory group to set the threshold for searches that trigger the warning.

“We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution,” Meta said.

“While that means we may sometimes notify parents when there may not be real cause for concern, we feel – and experts agree – that this is the right starting point, and we’ll continue to monitor and listen to feedback to make sure we’re in the right place.”

Meta is also planning similar alerts for teens’ conversations with its AI chatbot later in the year.
