Australia has enacted a world-first ban on social media for users aged under 16, causing millions of children and teenagers to lose access to their accounts.
Facebook, Instagram, Threads, X, YouTube, Snapchat, Reddit, Kick, Twitch and TikTok are expected to have taken steps from Wednesday to remove accounts held by users under 16 years of age in Australia, and prevent those teens from registering new accounts.
Platforms that do not comply risk fines of up to $49.5m.
There have been some teething problems with the ban’s implementation. Guardian Australia has received several reports of under-16s passing the facial age assurance tests, but the government has flagged that it does not expect the ban to be perfect from day one.
All listed platforms apart from X had confirmed by Tuesday they would comply with the ban. The eSafety commissioner, Julie Inman Grant, said her office had recently had a conversation with X about how it would comply, but the company had not communicated its policy to users.
Bluesky, an X alternative, announced on Tuesday it would also ban under-16s, despite eSafety assessing the platform as “low risk” due to its small user base of 50,000 in Australia.
Parents of children affected by the ban shared a spectrum of views on the policy. One parent told the Guardian their 15-year-old daughter was “very distressed” because “all her 14 to 15-year-old friends have been age verified as 18 by Snapchat”. Since she had been identified as under 16, they feared “her friends will keep using Snapchat to talk and organise social events and she will be left out”.
Others said the ban “can’t come quickly enough”. One parent said their daughter was “completely addicted” to social media and the ban “provides us with a support framework to keep her off these platforms”.
“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear, national standard.”
Polling has consistently shown that two-thirds of voters support raising the minimum age for social media to 16. The opposition, including its leader, Sussan Ley, has recently voiced alarm about the ban, despite having waved the legislation through parliament while the former Liberal leader Peter Dutton championed it.
The ban has garnered worldwide attention, with several nations indicating they will adopt a ban of their own, including Malaysia, Denmark and Norway. The European Parliament passed a resolution calling for similar restrictions, while a spokesperson for the British government told Reuters it was “closely monitoring Australia’s approach to age restrictions”.
It’s a very simple fix with a few law changes.
The act of promoting or curating user-submitted data makes the company strictly liable for any damages done by the content.
The deliberate spreading of harmful false information makes the hosting company liable for damages.
This would bankrupt Facebook, Twitter, etc within 6 months.
I assume you don't mean simply providing the platform for the content to be hosted; in that case, I agree this would definitely help.
This one is damn near impossible to enforce, for the sole reason of the word "deliberate". The issue is that I would not support such a law without that part.
I left out the hosting part for just that reason. The company has to actively do something to gain the liability. Right now the big social media companies are deliberately prioritizing harmful information to maximize engagement and generate money.
As for enforcement, hosters have had to develop protocols for removing illegal content since the very beginning. It's still out there and can be found, but laws and, mostly, due diligence from hosters make it harder to find. It's the reason Lemmy is not full of illegal pics etc. The hosters are actively removing it and banning accounts that publish it.
Those protocols could be modified to cover obvious misinformation bots etc. Think about the number of studies showing that just a few accounts are the source of the majority of harmful misinformation on social media.
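Roughly what I mean, as a toy sketch in Python (every account name, threshold and the confirmed-reports feed here is invented for illustration, not any platform's real API):

```python
from collections import Counter

# Hypothetical feed of reports that moderators have already confirmed
# as misinformation; each entry names the account that posted the item.
confirmed_misinfo_reports = [
    "acct_42", "acct_42", "acct_42", "acct_17",
    "acct_42", "acct_99", "acct_17", "acct_42",
]

SHARE_THRESHOLD = 0.25  # made-up cutoff: flag accounts behind >=25% of confirmed items
MIN_REPORTS = 3         # made-up floor so tiny samples don't trigger flags

def flag_superspreaders(reports):
    """Return accounts responsible for an outsized share of confirmed reports."""
    counts = Counter(reports)
    total = len(reports)
    return {
        account
        for account, n in counts.items()
        if n >= MIN_REPORTS and n / total >= SHARE_THRESHOLD
    }

print(flag_superspreaders(confirmed_misinfo_reports))
# {'acct_42'} -- one account posted 5 of the 8 confirmed items
```

The point is just that once moderators confirm reports, the concentration of sources is trivial to measure, so the "few accounts cause most of the harm" pattern is easy to act on.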
Of course any reporting system needs to be protected from abuse. The DMCA takedown abusers are a great example of why this is needed.
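One way a reporting pipeline could resist that kind of abuse is to weight reports by the reporter's track record, so serial false-flaggers lose influence. Another toy sketch, with invented names and numbers:

```python
# Toy sketch of abuse-resistant report scoring. Reporters earn a
# reputation from how often their past reports were upheld, so reports
# from habitual false-flaggers barely move the needle. All data invented.

reporter_history = {
    # reporter_id: (reports upheld, reports rejected)
    "careful_user_a": (40, 2),
    "careful_user_b": (25, 3),
    "takedown_troll": (1, 120),
}

def reputation(upheld: int, rejected: int) -> float:
    # Laplace-smoothed hit rate, so unknown reporters start near 0.5.
    return (upheld + 1) / (upheld + rejected + 2)

def report_weight(reporter_id: str) -> float:
    upheld, rejected = reporter_history.get(reporter_id, (0, 0))
    return reputation(upheld, rejected)

# A post only enters the human-review queue once the summed weight of
# its reports crosses a threshold, not on any single report.
REVIEW_THRESHOLD = 1.5  # made-up number for illustration

def needs_review(reporter_ids: list[str]) -> bool:
    return sum(report_weight(r) for r in reporter_ids) >= REVIEW_THRESHOLD

print(needs_review(["takedown_troll"] * 50))               # False: ~0.016 each
print(needs_review(["careful_user_a", "careful_user_b"]))  # True: ~0.93 + ~0.87
```

Human review is still the scarce resource, but at least the queue is no longer controlled by whoever files the most complaints.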
It would also be easily abused. Someone would have to look at each report and check it, which already puts a bottleneck in the system, and the social media site would have to take the post down while checking, just in case, which gives anyone a way to effectively remove posts.