Australia’s online safety watchdog has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s using their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting poor practices including permitting banned users to make repeated attempts at age verification and weak safeguards against new account creation. In its first compliance assessment since the ban came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Exposed in First Major Review
Australia’s eSafety Commissioner has detailed a concerning pattern of non-compliance among the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report finds that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, highlighting that some platforms have permitted children who originally declared themselves under 16 to later claim to be older, effectively circumventing the law’s intent.
The findings mark a significant escalation in the regulatory response, with the eSafety Commissioner transitioning from monitoring to active enforcement. The regulator has emphasised that simply showing some children still maintain accounts is not enough; rather, platforms must provide substantive evidence that they have put in place comprehensive systems and processes designed to prevent under-16s from creating accounts in the first place. This shift signals the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that fail to meet their statutory obligations.
- Permitting previously banned users to re-verify their age and regain account access (illustrated in the sketch after this list)
- Allowing repeated attempts at the same age assurance method with no consequences
- Weak safeguards to prevent under-16s from opening new accounts
- Limited complaint mechanisms for families and the wider community
- Lack of transparent data on compliance actions and account removals
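To make the first two failures concrete, here is a minimal, hypothetical sketch contrasting a naive age gate, which takes every declared birthdate at face value, with one that records a first under-16 declaration so later re-declarations cannot override it. The names used (NaiveAgeGate, PersistentAgeGate, user_key) are illustrative assumptions and are not drawn from any platform’s actual code or from the regulator’s report.

```python
# Hypothetical sketch of the loophole eSafety describes: a sign-up flow that
# lets a user rejected as under 16 simply re-submit an older birthdate.
from datetime import date

MIN_AGE = 16  # Australia's minimum age under the ban


def age_on(birthdate: date, today: date) -> int:
    """Whole years elapsed between birthdate and today."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years


class NaiveAgeGate:
    """Flawed flow: every declared birthdate is taken at face value,
    so a user rejected as under 16 can simply retry with an older date."""

    def check(self, declared_birthdate: date) -> bool:
        return age_on(declared_birthdate, date.today()) >= MIN_AGE


class PersistentAgeGate:
    """Stricter flow: the first under-16 declaration is recorded against a
    stable key (account, device, etc.), and later declarations cannot
    override it."""

    def __init__(self) -> None:
        self._flagged_under_16: set[str] = set()

    def check(self, user_key: str, declared_birthdate: date) -> bool:
        if user_key in self._flagged_under_16:
            return False  # an earlier under-16 declaration sticks
        if age_on(declared_birthdate, date.today()) < MIN_AGE:
            self._flagged_under_16.add(user_key)
            return False
        return True


if __name__ == "__main__":
    naive, strict = NaiveAgeGate(), PersistentAgeGate()
    child, adult = date(2012, 5, 1), date(2000, 5, 1)
    print(naive.check(child))                 # False: rejected once...
    print(naive.check(adult))                 # True: the retry loophole
    print(strict.check("device-123", child))  # False: rejected and flagged
    print(strict.check("device-123", adult))  # False: the flag persists
```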
The Extent of the Problem
The substantial scale of social media activity amongst Australian young people highlights the compliance challenge confronting both the government and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures point to extensive early non-compliance. The eSafety Commissioner’s findings indicate that the technical and procedural obstacles to enforcing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities wrestling with the fundamental question of whether current age verification technologies are adequate to the task.
Beyond the technical obstacles lies a broader concern about companies’ willingness to prioritise compliance over user growth. Social media companies have long resisted strict identity verification requirements, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be demonstrating adequate commitment to implementing the systems the law mandates. The move to active enforcement represents a critical juncture: either platforms will significantly strengthen their compliance systems, or they will face substantial penalties that could transform their operations in Australia and potentially shape regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban took effect, Australian authorities reported that 4.7 million accounts had been restricted or deleted. Whilst this statistic initially appeared to demonstrate successful compliance, closer investigation reveals a more nuanced picture. The sheer volume of account takedowns implies that many under-16s had been able to create accounts in the first place, revealing that preventive controls were insufficient. Furthermore, the data casts doubt on whether removed accounts reflect genuine compliance or merely users voluntarily deleting their profiles in light of the new restrictions.
The limited transparency regarding these figures has frustrated independent observers seeking to assess the ban’s true effectiveness. Platforms have revealed little data about their enforcement methodologies, success rates, or the demographics of removed accounts. This lack of clarity makes it hard for regulators and the public to evaluate whether the ban is operating as intended or whether teenagers are simply finding alternative ways to use social media. The Commissioner’s insistence on thorough documentation of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to disclose comprehensive data.
Sector Reaction and Opposition
The social media giants have responded to the regulator’s enforcement action with a combination of compliance assurances and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, emphasised its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination remains a major challenge across the industry. The company has called for a different approach, proposing that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This stance reflects wider industry concerns that the existing regulatory framework places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, stating that it has suspended 450,000 accounts since the ban took effect and that it continues to suspend additional accounts each day. However, industry observers question whether such figures reflect genuine compliance or merely reactive account management. The core conflict between platforms’ commercial models, which have traditionally depended on maximising user engagement and growth, and the statutory obligation to systematically remove an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, citing privacy concerns and technical limitations, creating an impasse between regulators and platforms over who carries responsibility for implementation.
- Meta argues age verification should happen at the app store level rather than on individual platforms
- Snap claims to have suspended 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy issues and technical challenges as barriers to effective age verification
- Platforms assert they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 social media ban enters its implementation stage, fundamental questions remain about whether the legislation will achieve its stated objectives or merely drive young users towards less regulated platforms. The regulator’s first compliance assessment reveals that despite months of implementation, significant loopholes persist: children continue to find ways to bypass age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply migrate to alternative services, encrypted messaging applications, or virtual private networks that mask their location.
The ban’s international ramifications add another layer of complexity to assessments of its effectiveness. Countries such as the United Kingdom and Canada, along with several European nations, are watching Australia’s approach closely as they weigh similar legislation for their own populations. If the ban proves ineffective at reducing children’s online activity or fails to protect them from harmful material, it could undermine the case for similar measures elsewhere. Conversely, if enforcement becomes rigorous enough to genuinely restrict underage participation, it may encourage other nations to pursue similar approaches. The outcome will likely influence global regulatory trends for the foreseeable future, ensuring that Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health campaigners and child safety organisations have endorsed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to adolescent social media use, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational content, and engaging with online communities built around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families dispute.
The ban’s practical implications extend beyond individual users to affect content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban inadvertently favours large technology companies with the resources to build age verification infrastructure, potentially reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the straightforward goal of child protection.
What Happens Next for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a significant shift from hands-off observation to direct intervention, marking a pivotal moment in the rollout of the youth access ban. The regulator will now gather evidence to determine whether services have failed to take the “reasonable steps” required to restrict child participation, a legal standard that demands more than simply documenting that children remain on these platforms. This approach requires concrete evidence that companies have implemented proper safeguards and procedures designed to keep out minors. The enforcement team has indicated it will pursue investigations methodically, building cases that could lead to significant fines for non-compliance. This shift from oversight to intervention reflects mounting dissatisfaction with the platforms’ existing measures and signals that voluntary cooperation alone will no longer suffice.
The enforcement phase raises important questions about the adequacy of available penalties and the practical mechanisms for ensuring platform accountability. Australia’s legislation offers enforcement tools, but their effectiveness depends on the eSafety Commissioner’s willingness to initiate regulatory action and the platforms’ capacity to respond meaningfully. Overseas authorities, particularly regulators in Britain and Europe, will closely track Australia’s implementation tactics and their consequences. A successful enforcement campaign could provide a model for other jurisdictions contemplating similar bans, whilst failure might undermine the entire regulatory framework. The coming months will determine whether Australia’s pioneering legislation translates into substantive protection for young people or remains largely symbolic in its impact.
