Europe Wants to Keep Children Off Social Media: What’s It Really About?
Greece has become the latest European country to propose a legal minimum age for social media, with Prime Minister Kyriakos Mitsotakis saying children under 15 should be barred from accessing the platforms from January 1, 2027. Austria has announced plans for a ban on under-14s, while France has already tried to impose a “digital majority” for under-15s. Australia has gone furthest in practice, presenting itself as the international model after platforms restricted access to 4.7 million underage accounts.
The policy language across these countries is remarkably similar. Governments say they are protecting children from addictive products, sleep disruption and psychological harm. The operational reality is more complicated. Australia’s experience suggests that these measures do not produce a clean ban so much as a large-scale age-verification and enforcement system, one that still leaves obvious routes around the rules while expanding the amount of monitoring and identity checking built into ordinary online life.
Greece Joins the Queue to Introduce Verification for Social Media
Greece says it will ban social media for under-15s from the start of 2027 and will push the European Union to adopt a common digital age of majority at 15, supported by mandatory age verification and a unified enforcement framework. Reuters reported that the Greek government has linked the proposal to rising concern about addiction, anxiety and sleep problems among children, and cited polling showing that public support for the plan is strong. In political terms, the appeal is obvious. Social media is widely distrusted, children are a sympathetic constituency, and governments can present themselves as intervening against powerful and unpopular technology companies.
The difficulty lies in turning the headlines into reality. A ban of this kind can only function if platforms are required to distinguish reliably between those who are old enough and those who are not. Once that becomes the practical focus, the issue is no longer just whether children should be on Instagram or TikTok, but who is verifying age, what evidence is being collected, how long that information is retained, and how much new infrastructure is being built to control access to digital spaces. Greece may be selling the policy as child protection, but the route to implementation runs directly through verification systems that are likely to extend far beyond one narrow class of users.
Austria Also Plans to Ban Children from Social Media Apps
Austria is following a similar path. Al Jazeera reported that the Austrian government plans to ban children under 14 from using social media, with junior minister Alexander Proll arguing that the platforms are addictive and harmful to young people. The justification is familiar and, at a broad level, difficult to dispute. These platforms are designed to hold attention, reward compulsion and keep users returning. The weaker point is the assumption that a legal age threshold can be translated smoothly into a workable, proportionate system of enforcement.
As in Greece, the real challenge is not announcing the rule but administering it. If the state intends to stop under-14s from opening accounts, then platforms need stronger forms of age assurance. Stronger age assurance usually means more data collection, more intrusive account checks, greater reliance on third-party verification providers, or the deployment of biometric or behavioural systems to estimate age. Those mechanisms may be politically easier to introduce when framed around child protection, but they still represent an expansion of digital oversight. Austria’s proposal therefore deserves to be read not only as a social policy but also as part of a broader shift toward a more tightly gated and more heavily verified internet.
Australia Leads the Way, But Is the Ban Actually Working?
Australia is now the most important test case because it has moved beyond proposals and into enforcement. Under the government’s social media minimum age framework, age-restricted platforms must take “reasonable steps” to prevent Australians under 16 from holding accounts or face fines. The official government guidance lists major platforms including TikTok, X, Facebook, Instagram, Snapchat and YouTube among the covered services. In January, Australia’s eSafety Commissioner said platforms had restricted access to roughly 4.7 million under-16 accounts in the first half of December 2025 alone, a number that has since been used as proof that the law is working.
That figure is politically useful, but it does not prove that under-16s have been meaningfully excluded from social media. Reports published last week revealed that enforcement remains uneven, meaning many children are still able to access the platforms, and that a government investigation found widespread non-compliance by major companies. The law has therefore not produced the simple result implied by the rhetoric of a ban. Instead, it has produced a more heavily policed system in which millions of accounts are flagged or removed, while underage access continues and the platforms gather more signals in order to decide who should be allowed in.
Australia has not demonstrated that a social media ban for minors can be implemented cleanly. It has demonstrated that such a ban becomes an ongoing exercise in verification, suppression and surveillance.
France Tried to Restrict Social Media for Children Before, But Failed
France offers a different kind of warning. It passed a law in 2023 requiring parental consent for under-15s to create social media accounts, a measure often presented as making France one of the first European states to establish a digital age threshold. In practice, Euronews reported that the law never properly took effect because it ran into legal and regulatory difficulties under the EU’s Digital Services Act. France therefore managed to achieve the symbolic politics of a ban without fully delivering the operational reality.
That experience is instructive for Greece and Austria because it shows that these policies face two kinds of friction. The first is legal, especially in Europe, where national rules must coexist with EU-wide regulatory frameworks. The second is technical and social. Children are adept at evading restrictions, platforms do not enforce rules consistently, and reliable age verification is still difficult to implement without collecting more personal information than many people would consider reasonable. A policy that begins as a promise to shield children from addictive apps can therefore end up stalled in law, weakened in practice, or translated into a much more expansive identification system than its supporters first described.
The Overall Trend Points Towards More Verification In Future
Countries including Britain, Spain, Slovenia, Denmark, Malaysia, Canada and parts of the United States are also considering or debating restrictions of this kind. That reflects a broad and understandable loss of confidence in social media companies, whose products are increasingly regarded as addictive and harmful, especially for children. It also reflects a political environment in which governments are under pressure to be seen responding to a problem that is now widely acknowledged.
Still, the spread of these laws points to a second and less openly discussed development. Every serious effort to keep minors off social media depends on stronger proof-of-age systems. Stronger proof-of-age systems lead almost inevitably to broader identity checks, more platform data collection, and more routine demands that users prove themselves before entering digital spaces. In that sense, the trend is not only about restricting children’s access. It is also about normalising a model of internet governance based on verification and controlled entry. Even when the policy objective is defensible, the infrastructure it requires deserves scrutiny on its own terms.
Final Thought
Europe’s turn toward social media age bans reflects a genuine problem and a popular political instinct. The platforms are addictive, they are damaging to children, and governments are no longer willing to leave the issue entirely to Silicon Valley. What Australia has shown, however, is that these bans do not function as simple prohibitions. They operate as systems of age assurance, account suppression and ongoing compliance, while underage users still find ways through. Greece and Austria may soon follow the same path. If they do, the headline will be child protection, but the deeper legacy may be a more heavily monitored internet in which access depends increasingly on verification and the recording of user identity.