Australia Bans Social Media for Children
The Australian government is implementing a landmark, world-first law that prohibits individuals under the age of 16 from holding accounts on major social media platforms.
The Online Safety Amendment (Social Media Minimum Age) Act 2024 was passed by the Australian Parliament on November 29, 2024, and its key provisions came into effect on December 10, 2025.
This legislative action follows growing national and global concern over the detrimental mental health effects, exposure to harmful content, and addictive algorithms associated with youth social media use. The government has placed the sole onus of compliance on the technology companies.
What is the New Australian Law?
The law introduces a mandatory minimum age of 16 for accounts on designated “age-restricted social media platforms” within Australia. The list of platforms currently includes giants like Facebook, Instagram, TikTok, YouTube, Snapchat, X (formerly Twitter), Threads, Reddit, Kick, and Twitch.
The core capability is the legal requirement for platforms to take “reasonable steps” to prevent Australian residents under 16 from creating new accounts or accessing existing ones. The legislation imposes significant financial penalties—up to approximately AU$49.5 million—on companies that fail to comply, an unprecedented level of accountability for the protection of minors online.
How Meta and TikTok Are Responding
Despite publicly opposing the law, Meta and TikTok were among the first to demonstrate concrete steps toward compliance, highlighting the immediate and undeniable business necessity of avoiding substantial regulatory penalties.
- Meta’s Proactive Removal: Meta began deactivating accounts of suspected under-16 Australian users in the weeks leading up to the December 10, 2025 deadline, offering them the option to freeze their accounts until they turn 16 or to download their data. This action signaled a swift effort to mitigate financial and legal exposure.
- TikTok’s Age-Gating Approach: TikTok, facing immense scrutiny due to its popularity with younger audiences, announced that it would deactivate Australian accounts belonging to users aged 13 to 15 (the platform already has a global minimum age of 13). TikTok stated it would rely on its multi-layered age verification methods, which include facial age estimation services and ID document checks, to enforce the ban.
Compliance Strategy
The platform-led implementation reveals that the immediate strategic focus is on adopting and scaling age-assurance technologies. The consensus among big tech firms is that the “reasonable steps” required by the law necessitate a combination of methods (a simplified decision flow is sketched after this list):
- AI/ML Inference: Using machine learning models to infer a user’s age based on behavioral patterns, content interaction, and metadata.
- Biometric Age Estimation: Employing third-party tools to estimate age from a video selfie or photograph (often with a focus on privacy-preserving methods).
- Document Verification: Allowing users who are mistakenly flagged as underage to prove they are 16 or older by uploading a government ID or other official document.
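To make the layered approach concrete, here is a minimal sketch of how such a decision flow might combine the three methods. Everything here is an illustrative assumption: the thresholds, field names, and the `is_permitted` helper are hypothetical, not drawn from the Act or from any platform’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical thresholds. Real platforms would tune these against
# regulator guidance; none of these values come from the Act itself.
MINIMUM_AGE = 16
INFERENCE_CONFIDENCE_FLOOR = 0.85

@dataclass
class AgeSignals:
    inferred_age: float          # ML estimate from behavioral/metadata signals
    inference_confidence: float  # model confidence, 0.0 to 1.0
    biometric_age: Optional[float] = None        # facial age estimation, if run
    document_verified_age: Optional[int] = None  # from an ID upload, if provided

def is_permitted(signals: AgeSignals) -> bool:
    """Decide whether an account may remain active, escalating from
    cheap passive signals to stronger evidence, mirroring the layered
    'reasonable steps' approach described above."""
    # Strongest evidence first: a verified document settles the question.
    if signals.document_verified_age is not None:
        return signals.document_verified_age >= MINIMUM_AGE
    # Next, a biometric estimate, with a buffer for estimation error.
    if signals.biometric_age is not None:
        return signals.biometric_age >= MINIMUM_AGE + 2
    # Otherwise fall back to the ML inference, but only when confident.
    if signals.inference_confidence >= INFERENCE_CONFIDENCE_FLOOR:
        return signals.inferred_age >= MINIMUM_AGE
    # Low confidence: deny and route the user to a verification flow.
    return False
```

The ordering reflects a common design choice in layered verification: cheap, passive signals screen the bulk of users, while stronger (and more privacy-invasive) checks are reserved for contested cases.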
This scramble for effective age-gating technology shows that the ban has forced a rapid, industry-wide re-prioritization of identity and age verification, creating a sudden, lucrative, and complex new market for age-assurance service providers.
Challenges
The efficacy of the new laws hinges on the accuracy of age-verification technology.
Unsurprisingly, teens are already finding workarounds using fake credentials, VPNs, and older siblings’ devices, raising concerns about the law’s ultimate effectiveness. Proponents counter that age-assurance technology will improve as providers face the prospect of substantial fines.
Furthermore, mandatory biometric or document-based age verification introduces data privacy and security risks for all users, including adults.
Critics, including some human rights advocates, argue that the blanket ban curtails freedom of expression and may isolate vulnerable youth who rely on online communities for support. Deactivating accounts also removes access to the parental controls and safety features built into platforms’ official youth accounts.
Bottom Line
Parents, advocacy groups, and governments have been talking about children’s safety on social media for years, but no government has taken a stance this strong. The Australian ban sets an undeniable global template for proactive digital child safety regulation.
Its primary market impact is the pressure it exerts on multinational social media companies to standardize and enhance age verification systems globally. This move directly challenges the traditional “move fast and break things” development model by forcing compliance costs and the loss of a key user demographic.
While this new law delivers a crucial message of corporate accountability through substantial fines, its success rests on the unproven effectiveness and privacy implications of the age-verification technology it mandates.
