
Tech firms have been warned to strengthen protections for young people online after MPs rejected a proposal to ban under-16s from social media this week.
Communications regulator Ofcom and the Information Commissioner’s Office (ICO) have written to major platforms, asking for details of how they plan to improve safety measures for children.
Companies including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube have been given until the end of April to explain how they enforce age restrictions and reduce harmful algorithmic content.
The demand comes just days after MPs voted down a Conservative amendment that would have introduced a default blanket ban on social media accounts for under-16s.
The proposal was defeated in the House of Commons by 307 votes to 173, with ministers instead opting to run a consultation before deciding whether to legislate.
Ofcom said the move means responsibility now falls squarely on platforms, rather than individuals, to show they can protect young users.
Chief executive Dame Melanie Dawes said: “These online services are household names, but they’re failing to put children’s safety at the heart of their products”.
“There is a gap between what tech companies promise in private, and what they’re doing publicly to keep children safe on their platforms.”
She added that without stronger safeguards, children were being “routinely exposed to risks they didn’t choose, on services they can’t realistically avoid”.
“That must now change quickly, or Ofcom will act,” Dawes said.
Stronger age checks urged
Ofcom’s intervention follows research showing that existing age limits have been widely ignored.
The regulator found 72 per cent of children aged eight to 12 are using platforms that officially require users to be at least 13 years old.
The ICO has also written to platforms including TikTok, Snapchat, Facebook, Instagram, YouTube and X asking them to explain how their age verification systems protect children.
ICO chief executive Paul Arnold said: “With ever-growing public concern, the status quo is not working and industry must do more to protect children”.
“Our message to platforms is simple: act today to keep children safe online. There’s now modern technology at your fingertips, so there is no excuse not to have effective age assurance measures in place.”
Ofcom also said it would publish a report in May detailing how the platforms responded to the regulator’s demands, adding that it would be prepared to take enforcement action if the answers were unsatisfactory.
That could include tighter regulatory requirements under the Online Safety Act, which came into force last year.
Chris Sherwood, chief executive of the NSPCC, said social media companies had “looked the other way while harmful and addictive content floods children’s feeds”.
“That’s why Ofcom’s demand for far greater transparency about the risks children face online, and how tech companies plan to protect them, is absolutely essential”, he said.
The intervention comes as pressure builds internationally for stricter controls on children’s social media use.
Australia introduced a nationwide ban on social media accounts for under-16s in December, becoming the first country to adopt such a rule.
In the UK, ministers have stopped short of backing an outright ban but say stronger safeguards are needed.
AI and online safety minister Kanishka Narayan told City AM earlier this week the government was consulting on potential measures before deciding whether to legislate.
“We’re running a very short, sharp consultation over three months to engage the entire country, including young people,” he said.
“The intent there is to act robustly, but to act robustly in a way that actually sticks over time.”
However, a YouTube spokesperson said the company had spent more than a decade building products specifically designed for younger users.
“We are surprised to see Ofcom move away from a risk-based approach, particularly given that we routinely update them and other regulators on our industry-leading work on youth safety,” the spokesperson said.
Meta, the Facebook and Instagram owner, said it already uses AI to detect the age of users and automatically places teenagers into accounts with stricter protections.
Roblox said it had introduced over 140 safety measures in the past year alone, including mandatory age checks for certain chats.
Regulators, however, say they will be watching closely over the coming weeks as firms respond to the new deadline.