Social media ban
Australia is about to enforce the world’s first mandatory social media age-restriction law, banning users under 16 from creating or continuing to use major platforms such as Facebook, Instagram, TikTok, YouTube, Snapchat, X, Reddit, and Threads. The law, officially called the Online Safety Amendment (Social Media Minimum Age) Act 2024, takes effect on December 10, 2025, giving platforms 12 months to build compliant age-verification systems. If the social media platforms fail to comply, they could face fines of up to A$49.5 million.
What does the new social media ban actually do in Australia?

This regulatory shift is an amendment to Australia’s existing Online Safety Act. The law does not prescribe a static list of covered platforms; instead, it empowers the eSafety Commissioner to designate services that fall within its scope based on their functionality and risk profile. As of 2025, the platforms explicitly expected to comply include Facebook, Instagram, Threads, Snapchat, TikTok, X (formerly Twitter), YouTube, Reddit, and Kick.
The core obligations under the Act are as follows:
Proactive Prevention: Social media providers are legally required to take “reasonable steps” to prevent under-age users from accessing their services. This moves beyond the previous industry standard of self-reported age, which is easily circumvented.
Significant Penalties for Non-Compliance: The legislation establishes a substantial deterrent, with maximum penalties for corporations set at A$49.5 million.
Transition Period: The law provides a transition period, coming into force on December 10, 2025, to allow companies the necessary time to develop, test, and deploy robust age-assurance technologies.
Why did Australia introduce this social media ban?
The Australian government’s push for this social media law is driven by growing, evidence-based concerns regarding the potential impact of social media on teenagers’ well-being. The key motivations for the Australian government include:
- Mitigating Mental Health Risks: A primary driver is the protection of young people from harms, including exposure to cyberbullying, pro-anorexia content, and other material that can contribute to anxiety, depression, and negative self-image.
- Closing the Age-Verification Gap: The government has identified the current self-declaration model as fundamentally flawed and is placing the onus on technology companies to develop more reliable methods for verifying user age.
- Establishing Regulatory Accountability: By imposing direct legal obligations and severe financial consequences, the government aims to force a structural shift in how platforms manage under-age access, moving from reactive tools to proactive prevention.
While the law has been praised by child safety advocates as a world-leading measure, it has also faced criticism from some industry groups and digital rights organizations. Critics have characterized the legislative process as “rushed,” citing insufficient technical consultation and a lack of clarity on what constitutes “reasonable steps,” potentially leading to inconsistent implementation.
What is social media giant Meta doing about it?
Meta has publicly stated that it will comply with the law, even as it continues to question parts of it. The company has acknowledged that compliance will be difficult and has raised concerns about both the law itself and its implementation approach.
Meta’s process concerns:
Meta has publicly stated that while it will adhere to the new regulations, it believes the government enacted the law with undue haste. The company has argued that the legislation did not sufficiently consider the suite of existing, complex age-appropriate parental controls and tools already available on its platforms. Furthermore, Meta has pointed to a lack of conclusive, direct evidence linking social media usage to specific harms, suggesting that less invasive age-verification methods, such as leveraging app store data, could achieve the law’s objectives with a reduced burden on user privacy.
Technical and operational challenges:
Meta has acknowledged that identifying and removing under-16 accounts presents a “significant” operational challenge. To meet the legal standard of “reasonable steps,” the company is deploying advanced age-assurance mechanisms. For accounts where the user’s age is uncertain or contested, Meta may require verification through third-party services. This could involve submitting government-issued identification or using AI-powered video selfie analysis, such as the technology provided by Yoti, to estimate age.
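To make this tiered approach concrete, the sketch below shows how an age-assurance decision pipeline of this general shape might be structured: trust a signal only when it is confident, and escalate to stronger verification otherwise. Everything here is a hypothetical illustration (the `AgeSignal` type, confidence scores, and thresholds are all invented for clarity); it is not Meta’s or Yoti’s actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AgeDecision(Enum):
    ALLOWED = auto()             # confidently 16 or older
    BLOCKED = auto()             # confidently under 16
    NEEDS_VERIFICATION = auto()  # uncertain: escalate to stronger checks

@dataclass
class AgeSignal:
    source: str          # e.g. "self_declared", "video_selfie", "gov_id"
    estimated_age: float
    confidence: float    # 0.0-1.0, hypothetical score from the estimator

# Hypothetical policy values: below this confidence, escalate.
CONFIDENCE_THRESHOLD = 0.9
MINIMUM_AGE = 16

def assess_age(signals: list[AgeSignal]) -> AgeDecision:
    """Combine available age signals into a single decision.

    A real system would weight signal sources very differently
    (a government ID outranks a self-declared birthday); here we
    simply trust the highest-confidence signal for illustration.
    """
    if not signals:
        return AgeDecision.NEEDS_VERIFICATION
    best = max(signals, key=lambda s: s.confidence)
    if best.confidence < CONFIDENCE_THRESHOLD:
        return AgeDecision.NEEDS_VERIFICATION
    if best.estimated_age >= MINIMUM_AGE:
        return AgeDecision.ALLOWED
    return AgeDecision.BLOCKED

# Example: a self-declared age alone is low-confidence, so the account
# is escalated to third-party verification (ID upload or video selfie).
print(assess_age([AgeSignal("self_declared", 17.0, 0.3)]))
# -> AgeDecision.NEEDS_VERIFICATION
```

The key design point this illustrates is “escalate when uncertain”: weak signals like a self-declared birthday never block or clear an account on their own, which matches the move away from self-declaration described above.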
Data management and user notifications on the platform:
In accordance with the law, Meta has begun a phased compliance plan in Australia. Beginning in early December 2025, the platform will block new account registrations from users identifying as under 16. For existing accounts believed to belong to minors, Meta will issue a series of notifications via in-app messages, email, and SMS, providing a 14-day grace period before account access is revoked. Account activity during the 14-day grace period continues to be monitored and flagged by the platform.
Affected users are presented with several options, illustrated in the sketch after this list:
- Data Download: Users can download a comprehensive copy of their personal data, including photographs, messages, and posts.
- Account Deletion: Users can elect to permanently delete their account and associated data.
- Data Archival for Reactivation: Users can choose to have their data stored by Meta, with the option to reactivate their account once they turn 16.
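Taken together, the grace-period process and these options resemble a small account-lifecycle state machine. The sketch below is purely illustrative: the names (`AccountState`, `resolve_account`) are hypothetical, and only the details stated above (the 14-day window, the three notification channels, and the archive/delete choices) come from the source; it is not Meta’s actual implementation.

```python
from datetime import date, timedelta
from enum import Enum, auto

GRACE_PERIOD = timedelta(days=14)  # per the phased plan described above

class AccountState(Enum):
    ACTIVE = auto()
    NOTIFIED = auto()      # flagged as under 16; grace period running
    ARCHIVED = auto()      # data stored for reactivation at age 16
    DELETED = auto()       # account and data permanently removed
    DEACTIVATED = auto()   # grace period lapsed with no user choice

def notify_flagged_user(user_id: str) -> None:
    """Placeholder for the in-app / email / SMS notification fan-out."""
    for channel in ("in_app", "email", "sms"):
        print(f"notify {user_id} via {channel}")

def resolve_account(state: AccountState, flagged_on: date,
                    today: date, user_choice: str | None) -> AccountState:
    """Advance a flagged account through the grace-period lifecycle.

    Downloading a copy of personal data is available at any point
    before deletion and does not change the account state.
    """
    if state != AccountState.NOTIFIED:
        return state
    if user_choice == "delete":
        return AccountState.DELETED
    if user_choice == "archive":
        return AccountState.ARCHIVED      # reactivation possible at 16
    if today - flagged_on >= GRACE_PERIOD:
        return AccountState.DEACTIVATED   # access revoked after 14 days
    return AccountState.NOTIFIED          # still within the grace period

# Example: a user flagged on 1 December makes no choice; access is
# revoked once the 14-day window lapses.
notify_flagged_user("u123")
state = resolve_account(AccountState.NOTIFIED, date(2025, 12, 1),
                        date(2025, 12, 16), user_choice=None)
print(state)  # AccountState.DEACTIVATED
```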
Meta has clarified that compliance is an ongoing process, with the bulk of known under-16 accounts to be removed by the December 10, 2025 deadline, followed by continuous age-assurance efforts.
Additional complexities for Meta to implement the Australian social media ban
The age-restriction law is not the only point of friction between Meta and Australian authorities. Several parallel issues highlight the complex relationship between the tech giant and national regulators:
Generative AI Training and privacy:
To implement the law effectively, Meta will rely heavily on training its generative AI systems, which requires access to large data sets. This creates additional complexity for Meta when it comes to evaluating and understanding human culture, languages, and intentions, while also raising privacy questions about how that training data is sourced.
Fact-checking and validation of misinformation:
Recently, Meta announced it would not renew its third-party fact-checking program in Australia, raising questions about how the platform will ensure effective fact-checking and validation of the content posted on it.
Global impact of Australian social media law:
The Australian government’s legislative approach is being closely monitored by governments worldwide as a potential model for digital child protection. The implementation and effectiveness of this law will likely influence regulatory discussions in the United States, the European Union, and other jurisdictions. In fact, Denmark recently passed a law that will ban children aged 15 and under from using social media.
