this post was submitted on 09 Dec 2025
237 points (99.6% liked)

World News


Australia has enacted a world-first ban on social media for users aged under 16, causing millions of children and teenagers to lose access to their accounts.

Facebook, Instagram, Threads, X, YouTube, Snapchat, Reddit, Kick, Twitch and TikTok are expected to have taken steps from Wednesday to remove accounts held by users under 16 years of age in Australia, and prevent those teens from registering new accounts.

Platforms that do not comply risk fines of up to $49.5m.

There have been some teething problems with the ban’s implementation. Guardian Australia has received several reports of those under 16 passing the facial age assurance tests, but the government has flagged that it does not expect the ban to be perfect from day one.

All listed platforms apart from X had confirmed by Tuesday they would comply with the ban. The eSafety commissioner, Julie Inman Grant, said she had recently had a conversation with X about how it would comply, but the company had not communicated its policy to users.

Bluesky, an X alternative, announced on Tuesday it would also ban under-16s, despite eSafety assessing the platform as “low risk” due to its small user base of 50,000 in Australia.

Parents of children affected by the ban shared a spectrum of views on the policy. One parent told the Guardian their 15-year-old daughter was “very distressed” because “all her 14 to 15-year-old friends have been age verified as 18 by Snapchat”. Since she had been identified as under 16, they feared “her friends will keep using Snapchat to talk and organise social events and she will be left out”.

Others said the ban “can’t come quickly enough”. One parent said their daughter was “completely addicted” to social media and the ban “provides us with a support framework to keep her off these platforms”.

“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear, ­national standard.”

Polling has consistently shown that two-thirds of voters support raising the minimum age for social media to 16. The opposition, including leader Sussan Ley, has recently voiced alarm about the ban, despite waving the legislation through parliament and despite the former Liberal leader Peter Dutton championing it.

The ban has garnered worldwide attention, with several nations indicating they will adopt a ban of their own, including Malaysia, Denmark and Norway. The European Union passed a resolution to adopt similar restrictions, while a spokesperson for the British government told Reuters it was “closely monitoring Australia’s approach to age restrictions”.

[–] porcoesphino@mander.xyz 11 points 11 hours ago* (last edited 11 hours ago) (2 children)

I think that's easier said than done. There are a lot of negatives associated with social media, and some are easier to put restrictions on (say, violent content), but I don't think we really have a good grasp of all the ways use is associated with depression, for example. And wouldn't some of this still fall back to age-restricted areas, kind of like with movies?

But yeah, it would be nice to see more pushback on the tech companies instead of the consumers.

[–] The_v@lemmy.world 7 points 9 hours ago (1 children)

It's a very simple fix with a few law changes:

  1. The act of promoting or curating user submitted data makes the company strictly liable for any damages done by the content.

  2. The deliberate spreading of harmful false information makes the hosting company liable for damages.

This would bankrupt Facebook, Twitter, etc within 6 months.

[–] Attacker94@lemmy.world 3 points 8 hours ago (2 children)

The act of promoting or curating user submitted data makes the company strictly liable for any damages done by the content.

I assume you don't mean simply providing the platform for the content to be hosted; in that case, I agree this would definitely help.

The deliberate spreading of harmful false information makes the hosting company liable for damages.

This one is damn near impossible to enforce, for the sole reason of the word "deliberate". The issue is that I would not support such a law without that part.

[–] T156@lemmy.world 1 points 4 hours ago

This one is damn near impossible to enforce, for the sole reason of the word "deliberate". The issue is that I would not support such a law without that part.

It would also be easily abused. Someone would have to look at and check every report, which would already put a bottleneck in the system, and the social media site would have to take content down just in case while it checked, which gives anyone a way to effectively remove posts.

[–] The_v@lemmy.world 1 points 4 hours ago

I left out the hosting part for just that reason. The company has to actively do something to gain the liability. Right now the big social media companies are deliberately prioritizing harmful information to maximize engagement and generate money.

As for enforcement, hosters have had to develop protocols for the removal of illegal content since the very beginning. It's still out there and can be found, but laws, and mostly due diligence from hosters, make it more difficult to find. It's the reason Lemmy is not full of illegal pics etc. The hosters are actively removing it and banning the accounts that publish it.

Those protocols could be modified to cover obvious misinformation bots etc. Think about the number of studies that have shown that just a few accounts are the source of the majority of harmful misinformation on social media.

Of course any reporting system needs to be protected from abuse. The DMCA takedown abusers are a great example of why this is needed.

[–] Arcane2077@sh.itjust.works 1 points 11 hours ago (1 children)

Oh definitely not easy, my point is that it’s even harder now

[–] porcoesphino@mander.xyz 1 points 11 hours ago (1 children)

Why do you say it's harder now?

[–] HK65@sopuli.xyz 6 points 9 hours ago

You can't use the "think of the children" line.