Australia's under-16 social media ban: what the Act actually requires
From 10 December 2025, the Online Safety Amendment (Social Media Minimum Age) Act 2024 requires designated platforms to take "reasonable steps" to keep Australians under 16 off their service. Penalties run to $49.5 million per contravention. This is a plain-English read of what the Act requires and where the implementation is going to hurt - written for the operator who has to ship something by December.
Who's in scope
The Act applies to "age-restricted social media platforms" - services whose sole or significant purpose is enabling online social interaction between users. The Minister has discretion to add or exempt services by rule.
In scope:
- Facebook, Instagram (Meta)
- TikTok
- Snapchat
- X
- YouTube - added in July 2025 after the original "educational" carve-out was reversed
Out of scope:
- YouTube Kids
- WhatsApp and other messaging products
- Online gaming
- Education tools (e.g. Google Classroom)
- Health and support services (e.g. Kids Helpline)
The line the Act draws is between "post and follow" social products and everything else.
What the obligation actually is
The Act doesn't mandate a specific verification technology. It requires "reasonable steps" to prevent under-16s from creating accounts, to deactivate existing under-16 accounts, and to offer verification paths that aren't just "hand over your government ID".
The eSafety Commissioner is the regulator. Final guidance on what counts as "reasonable steps" is still being drafted - the Commissioner has consulted over 160 organisations - but platforms are expected to have something in place by 10 December regardless. "We were waiting for clarity" isn't a defence the Commissioner has signalled sympathy for.
Why the verification problem is the real story
The government's age assurance technology trial tested 53 systems. None were rated reliable on their own. Facial age estimation managed roughly 85% accuracy within an 18-month band, with documented higher error rates for women and for users with darker skin tones - meaning a fair number of 14-year-olds get through and a fair number of 17-year-olds get locked out.
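To make that concrete, here's a back-of-envelope calculation. The 85% figure is the trial's headline number; the cohort sizes are invented for illustration, and treating the outside-band rate as the error rate right at the 16 boundary is a simplification:

```python
# Back-of-envelope: what a ~85%-accurate estimator does at population scale.
# Cohort sizes are illustrative, not census data.

accuracy = 0.85  # trial's rough headline: correct within an 18-month band
error_rate = 1 - accuracy

fourteen_year_olds = 300_000   # hypothetical cohort attempting sign-up
seventeen_year_olds = 300_000  # hypothetical cohort of legitimate users

under_16s_through = fourteen_year_olds * error_rate   # wrongly admitted
over_16s_blocked = seventeen_year_olds * error_rate   # wrongly locked out

print(f"~{under_16s_through:,.0f} under-16s slip through")
print(f"~{over_16s_blocked:,.0f} legitimate users falsely blocked")
# And the documented demographic skew means those errors aren't
# evenly distributed - some groups wear more of the false blocks.
```

Even at illustrative numbers, the error volume is the story: tens of thousands of wrong outcomes per cohort, in each direction.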
The available techniques, roughly from most invasive to least:
- ID document checks (driver licence, passport)
- Credit-card-on-file checks
- Facial age estimation
- Behavioural signals (typing patterns, posting cadence)
- Parental attestation
None of these scale to "block every under-16 in Australia" without one of two trade-offs: either a high false-block rate on legitimate adults, or a verification flow that collects far more identity data from the entire user base than the Privacy Act 1988 is comfortable with. The OAIC has already flagged that concern in its submissions on the Bill.
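Given that no single method passed the trial, the plausible shape for "reasonable steps" is a waterfall: start with the least invasive signal, escalate only on uncertainty, and keep ID as one option among several. A minimal sketch - every threshold, name, and ordering here is a hypothetical, not anything eSafety has endorsed:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()     # confident the user is 16+
    BLOCK = auto()     # confident the user is under 16
    ESCALATE = auto()  # uncertain: offer the next method, don't guess

@dataclass
class AssuranceResult:
    estimated_age: float
    confidence: float  # 0.0-1.0, as reported by the method

def evaluate(result: AssuranceResult,
             min_age: float = 16.0,
             buffer_years: float = 1.5,   # mirrors the trial's 18-month band
             min_confidence: float = 0.9) -> Decision:
    """Decide on one method's output; uncertain cases fall through to
    the next method in the waterfall instead of forcing ID."""
    if result.confidence < min_confidence:
        return Decision.ESCALATE
    if result.estimated_age >= min_age + buffer_years:
        return Decision.ALLOW
    if result.estimated_age <= min_age - buffer_years:
        return Decision.BLOCK
    return Decision.ESCALATE  # inside the error band

# Least invasive first; ID sits last and is never the only path,
# since the Act rules out ID-only flows.
METHOD_ORDER = ["behavioural_signals", "facial_age_estimation",
                "parental_attestation", "id_document_check"]

# e.g. evaluate(AssuranceResult(estimated_age=19.2, confidence=0.95))
# -> Decision.ALLOW; a 15.8 estimate at any confidence -> ESCALATE.
```

The buffer is where the trade-off lives: widen it and more legitimate adults get escalated into invasive checks; narrow it and more under-16s slip through on a guess.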
This is the same shape as the UK's Online Safety Act age-gating: VPN downloads spiked the week the rules took effect, and the kids most likely to circumvent are the ones the policy is meant to protect.
Penalties and who carries them
Civil penalty up to $49.5 million per contravention, applied to the platform. No penalties on users, parents, or under-16s who circumvent restrictions - liability sits entirely with the operator.
Australia chose corporate liability deliberately. The cost of getting this wrong - wrongly blocking adults, mishandling identity data, age-estimation systems that fail on certain demographics - is borne by platforms, but the privacy fallout is borne by every Australian user who has to prove their age.
The YouTube fight
YouTube was originally exempted as an "educational platform". That exemption was pulled in July 2025 after government research identified YouTube as the most-cited source of harmful content exposure for 10–15 year-olds. Google has signalled a legal challenge. The current position: under-16s can still watch YouTube logged-out, but can't sign in, comment, upload, or subscribe.
Whether YouTube actually meets the statutory definition - sole or significant purpose of enabling online social interaction - is a real legal question, and a court is the most likely place it gets settled.
What changes on 10 December for an Australian platform operator
If you run a service that falls inside the definition:
- New under-16 sign-ups must be blocked at registration - both this gate and the deactivation path are sketched after this list
- Existing under-16 accounts must be deactivated, with data handling per the Privacy Act 1988
- "Parental consent" is not a workaround the Act recognises - there's no consent path for under-16s
- Your age-assurance approach has to satisfy the Commissioner's "reasonable steps" test, and you can't make ID the only option
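Taken together, the first two items imply two code paths: a gate at registration and a deactivation job for existing accounts. A sketch of the shape - the account interface and outcome labels are hypothetical, not a real SDK:

```python
import datetime

MIN_AGE = 16
COMMENCEMENT = datetime.date(2025, 12, 10)

def handle_signup(assurance_outcome: str) -> str:
    """Registration gate. 'assurance_outcome' is the final decision from
    whatever layered age-assurance flow the platform runs (hypothetical
    labels)."""
    if assurance_outcome == "under_16":
        return "reject"             # no parental-consent path exists in the Act
    if assurance_outcome == "uncertain":
        return "offer_alternative"  # must not collapse to an ID-only demand
    return "create_account"

def deactivate_under_16(account) -> None:
    """Deactivation job for existing accounts assessed as under-16.
    Data handling has to follow the Privacy Act 1988; 'account' is a
    hypothetical interface."""
    account.deactivate(reason="smma_under_16", effective=COMMENCEMENT)
    # Verification data collected along the way needs a retention
    # decision too - keep only what the Privacy Act permits.
    account.schedule_data_review()
```

However the real flow looks, its shape *is* your "reasonable steps" position - worth writing down as a document before it exists only as code.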
If you run an adjacent service (messaging, gaming, ed-tech) and you're not sure which side of the line you're on: get advice. The Minister's rule-making power means scope can move.
What's still unresolved
- Final eSafety Commissioner guidance on "reasonable steps"
- The full age-assurance trial report (10 volumes, partially released)
- Google's expected challenge on YouTube's classification
- Whether the Privacy Act amendments arriving in 2026 will conflict with the data-collection footprint this Act creates
Coda
The policy intent has broad public support. The implementation is the hard part, and the cost of getting it wrong lands on platform operators and on the privacy of every adult who now has to prove their age.
If you're operating a platform in scope, the work this quarter is: scope-check against the Act, pick an age-assurance approach that isn't ID-only, and write down your "reasonable steps" position before the Commissioner asks for it. See also our regulation coverage of the Privacy Act amendments and CPS 230.