Regulators in the UK have asked major social media platforms to strengthen their child protection measures after lawmakers rejected a proposal to ban access to social networks for people under 16, according to CNBC. The watchdogs Ofcom and the Information Commissioner's Office (ICO) have sent letters to platforms such as YouTube, TikTok, Facebook, Instagram and Snapchat, requesting concrete measures to improve the safety of minors online.
• Age verification and protection against manipulation
British authorities are asking platforms to take a range of steps to protect young users, including: stricter age verification; preventing unknown adults from contacting minors; reducing dangerous or inappropriate content shown to teenagers; and limiting the testing of certain products or technologies, including artificial intelligence, on minors. Platforms have until April 30 to explain what steps they are taking to prevent children from accessing their services. Ofcom Chief Executive Melanie Dawes has criticised the way tech companies have handled the issue. "Without adequate protections, such as effective age verification, children have been consistently exposed to risks on services that they cannot realistically avoid," she said, according to CNBC.
• New technologies for age verification
The ICO has called on platforms to adopt more robust age verification methods, including: age estimation based on facial analysis; digital identity services; verification via a one-off photo; and authentication using official documents. Currently, many platforms rely solely on users' self-declared age, a method considered easy to circumvent. ICO Director Paul Arnold warned that this practice exposes children under 13 to the risk of having their personal data unlawfully collected.
• International pressure for tougher rules
Several countries are considering similar restrictions. In December, Australia became the first country to impose a blanket ban on social media for users under 16, and in Europe, Spain, France and Denmark are weighing comparable measures. In Australia, Meta blocked more than 500,000 accounts suspected of belonging to under-16s on Instagram, Facebook and Threads shortly after the ban came into effect. The company warned, however, that a blanket ban could push teenagers to circumvent the rules and access social media without any protections at all.
• Platform lawsuits and investigations
In parallel, a major lawsuit against Meta and Alphabet went to trial in January, brought by a young woman and her mother who accuse Instagram and YouTube of using design elements that could be addictive. Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri have already testified, and a court decision is expected in March. The case could set an important legal precedent on platforms' liability for underage users. At the same time, the European Commission is investigating Elon Musk's X platform after sexually explicit material generated by its artificial intelligence chatbot, Grok, surfaced on the service. The ICO has also fined Reddit £14 million for unlawfully processing children's personal data.
The technology companies say they have already put some protection mechanisms in place. Meta says it uses artificial intelligence to estimate users' ages and offers dedicated teen accounts with additional restrictions. TikTok, for its part, says it uses technology to detect accounts belonging to children under 13, including checks based on facial recognition, bank cards or official documents. British authorities, however, consider the current measures insufficient, and pressure for stricter regulation of social networks' treatment of minors continues to grow globally.