Social media firms criticise X for snubbing Oireachtas Committee on online safety

DCM Editorial Summary: This story has been independently rewritten and summarised for DCM readers to highlight key developments relevant to the region. Original reporting by The Journal.



THE CHAIR OF the Oireachtas Media Committee has said it is “incredibly worrying” and “incredibly disappointing” that X refused to appear before it today to discuss online regulation and safety.

The social media platform, owned by Elon Musk, has faced calls to answer questions on the Grok AI-generated non-consensual images controversy.

Last month, Grok, which is available as a standalone app but is most prominently used on the social media site X, began generating non-consensual sexual imagery at the behest of users. Some of this imagery included the digital “undressing” of children.

It prompted government figures to condemn the software and pledge to do more to protect users, particularly children.

The European Commission is now investigating X over reports that child sexual abuse material and non-consensual intimate images of adults were generated through the Grok tool and disseminated on the platform.

The company had given written assurances to Irish regulators and the minister of state with responsibility for AI, Niamh Smyth, that such functionality had been switched off “globally” on 20 January.

“It is extremely disappointing that a company with its European headquarters literally down the road here has refused to attend an Oireachtas meeting based on issues by which they were largely involved and have been involved,” Alan Kelly told the committee today.

Kelly said Taoiseach Micheál Martin also wrote to the company about appearing before the committee, but it still refused to do so.

“X, as a company, still refused to come before essentially the people of Ireland, the Oireachtas and this committee which has jurisdiction over this area,” Kelly said.

“It is deeply worrying, dissatisfying, and, I think, pretty disrespectful that an ecommerce company of the scale of X aren’t here today to discuss these issues.”

Representatives from Meta, Google and TikTok appeared before the committee this afternoon. When asked by Senator Malcolm Byrne, each of them agreed that X should also have attended the committee. 

“I think it is important that companies take the opportunity to appear before the committee,” Meta’s director of public policy in Ireland Dualta Ó Broin said. 

Google’s child safety public policy manager Chloe Setter said: “Yes, why not?”

TikTok’s head of public policy Susan Moss said: “Yes, Deputy, we believe in transparency and accountability as a business. That is why we are showing our due respect to the committee here today.”

Grok

The companies also agreed that nudification apps are wrong. Google was further questioned about why the Grok app was permitted to remain on its app store.

Google’s public policy manager Ryan Meade said the material Grok was producing was “definitely not in line” with the company’s policies and that, when it was brought to Google’s attention, it contacted the developer to ensure safeguards were put in place.

He said Grok’s developer has made representations that it has changed its policies and put new measures in place “to ensure the type of material that was being produced is no longer being produced”.

TikTok’s head of public policy Susan Moss and minor safety public policy lead Richard Collard arriving at Leinster House. Leah Farrell / RollingNews.ie


Meade said it was not a closed issue, but that they were following the process that they would with any developer. He said they understand changes have been made to prevent the app from creating similar images and that they would continue to host it on their app store “only if it complies with our policies”.   


The representatives were asked who was responsible for content once it is posted on platforms. 

Meta’s Dualta Ó Broin said the responsibility in the first instance lies with the user who posts the content, but added that “it’s the responsibility of the companies to ensure that users” have an age-appropriate experience.

Google’s Chloe Setter also said platforms have a responsibility. “If we want to have customers that include young people, then we have to make sure we put in place those age appropriate experiences and build for young people.”

Tiktok’s Susan Moss said a platform’s responsibility is “to ensure that we are minimising harm to as close to zero as possible”.

“Our responsibility really lies in getting that content down as quickly as possible. But its mere existence does not necessarily mean failure. What it means is that we have to do better in terms of enforcement.”

Ban for under-16s

The representatives were also asked if they would support a ban on social media for children aged 16 and under. 

Google’s Chloe Setter said the platform did not believe an outright ban is the answer.  

Meta’s Dualta Ó Broin said the company supports “a digital majority age, but we also support that under that age, parents should be empowered to decide whether their teens can or cannot access.”

The company’s safety policy director for Europe David Stiles said it is one of the reasons they designed teen accounts on Meta’s platforms, which restrict the content teenagers have access to.

“I think all young people have a right to participate online in a healthy way,” he said, whether it be connecting with friends or learning new skills.

“By banning them from that, they then have no access to that online. They’re not developing online development skills and social media.”

He said they remove inappropriate and harmful content from their platform, adding that 95% of those aged 13 to 15 are using teen accounts. 

Google’s public policy manager Ryan Meade and child safety public policy manager Chloe Setter arriving at Leinster House today. Leah Farrell / RollingNews.ie


TikTok’s Susan Moss said they “don’t agree with blunt instruments” but would consider supporting the “introduction of a measure that is done at a European level”.

The platform’s minor safety public policy lead Richard Collard said there are “inherent benefits” to being online for teenagers, but acknowledged that there are also risks.

“We need to differentiate between 13 to 15-year-olds who are on the platform, 16 to 17-year-olds and adults, and make sure that they have different experiences so they can avoid risks.”

Senator Alison Comyn told the committee that she was aware of a case where an 11-year-old was on TikTok and seeing self-harm content.

Collard said they try to ensure that children are not on the platform by asking new users to enter their date of birth when they create an account, and also by using AI to detect the age of users through their interests. Accounts flagged as possibly underage are then passed on to human moderators.

He said TikTok does not permit content that glorifies self-harm. But he acknowledged “there is content that might not be harmful to see in isolation”.

“Content that references sadness, for example… might not be against our terms of service but we recognise that when seen with other combinations of content, it could cause harm,” he said.

“And that’s why I believe that a child shouldn’t be there in the first place, under 16,” Comyn said. 

Children ‘loopholing everything’

Senator Evanne Ní Chuilinn criticised the emphasis put forward by the social media organisations on parental responsibility and control.

She compared them to tobacco companies, saying if those companies had offered “parental guides” for their products it would be “an absolute farce”.

Ní Chuilinn said phones are needed for school, adding she is “concerned about the narrative of parents having control” given the “insane amount of technology” phones contain.

She said there are 10 and 11-year-olds “completely loopholing everything” and “seeing stuff that they should not be seeing”.

Google’s Chloe Setter said a ban could lead to a “false sense of security”.

“We may well see kids going to other less safe platforms which have not had the decades-worth of investment and resource and child expertise built into them,” she said. 

Speaking before the committee, Tánaiste Simon Harris described a ban on under-16s being on social media as a “North Star”.

“I think we’re approaching a public health emergency,” he said. 

“Our children are not guinea pigs and yet their brains, their health, their wellbeing, their mental health, their anxiety levels, are effectively being experimented upon by technology companies, and that causes a significant concern.”

He said he had had an engagement with a number of his European counterparts over the weekend to see “if we can move at a European level” and “that’s why we want to make this a priority for our EU presidency”.

Taoiseach Micheál Martin said the government would “examine the practicalities” of a ban, but that there is a “parallel track” possible consisting of age verification and “stronger tools to protect children to be deployed by companies themselves”.

He also said young people need to be equipped to deal with social media “because as you grow in life and move out of school and into the world, you do need to have capacities to deal with online media”.

Age verification pilot

It comes as Media Minister Patrick O’Donovan brought a number of memos to Cabinet today, including a plan to make online safety a priority during Ireland’s European Union Presidency later this year.

In line with recommendations from the AI Advisory Council, Ireland’s EU Presidency will, in addition to prioritising online safety, be used to advocate for adding the AI generation of intimate images to the list of prohibited practices under the EU’s AI Act.

O’Donovan also intends to ask Coimisiún na Meán to lead an information and awareness campaign to support public understanding and reporting of this type of content.

The minister also updated colleagues on his plans in the coming weeks to begin the rollout of a pilot age verification through the national digital wallet in a bid to keep children safe online.

The need for further criminal justice legislative measures in this area is currently being considered by the minister for justice and the attorney general and online safety will be a core pillar underpinning the new National Digital and AI Strategy 2030, which will be brought to the government shortly by Enterprise Minister Peter Burke. 

With reporting from Jane Moore
