no age can go on the internet.
I don't think anyone had ever suggested anything like that.
it’s way too obvious that this law ain’t gonna achieve its stated aim
Absolutely. See my much longer comment elsewhere in the thread for all the real problems with this bill. We don't need conspiracy theories. Hanlon's razor very much applies here. It's incompetence, not malice.
However, I think we can look at the worst part of this Bill—the nature of its passage through Parliament—for a clue as to its underlying purpose. It passed in just a week, right before Christmas last year, but didn't actually come into effect until yesterday. The goal was good PR. I suspect not rattling cages with the big social media companies was part of it too. They wanted to look like they were doing something to protect kids, and hopefully win the election off the back of it (not that they needed much help with that, with how incompetent the LNP were), but they didn't want to put up the fight that would be necessary to force the social media companies into actually making their algorithms less harmful...to children and adults. It's lazy, it's cowardly, it won't work. But it's not a secret ploy to spy on you.
with Labor getting over half the lower house seats from about a third of the votes
Yikes. This is some really dangerous misinformation. Labor received 55% of the two-party-preferred vote. That's because we use an actually democratic system, not the FPTP farce that America and the UK have. You cannot compare first preferences under IRV to votes under FPTP.
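To make the distinction concrete, here's a toy instant-runoff count with made-up ballots (not real election data). It shows how a candidate with about a third of first preferences can still end up with a clear majority once preferences flow, which is why first-preference shares can't be read like FPTP vote shares:

```python
from collections import Counter

# Hypothetical ballots: each is a full ranking of three candidates.
ballots = (
    [["Labor", "Green", "Liberal"]] * 34    # 34% rank Labor first
    + [["Green", "Labor", "Liberal"]] * 21  # Greens preference Labor second
    + [["Liberal", "Labor", "Green"]] * 45  # 45% rank Liberal first
)

def instant_runoff(ballots):
    """Repeatedly eliminate the last-placed candidate until someone has a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked still-standing candidate.
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top, votes
        remaining.discard(min(tally, key=tally.get))

winner, votes = instant_runoff(ballots)
print(winner, votes)  # → Labor 55: Green preferences flow, Labor wins 55/100
```

Labor starts on 34 first preferences but finishes on 55 after the Green elimination, so "a third of the votes" and "55% after preferences" describe the same election.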
Step one is stuff like this, require id to verify your age
Right, but the law doesn't do that. In fact it was specifically forbidden from doing that. Here's the full text of the Bill. Section 63DB specifically says:
(1) A provider of an age-restricted social media platform must not:
(a) collect government-issued identification material; ...
(2) Subsection (1) does not apply if:
(a) the provider provides alternative means...for an individual to assure the provider that the individual is not an age-restricted user
In plain language: you can only accept ID to verify age if you also have some other method of verifying age instead.
So far, it looks like most sites are relying on data they already have: the age of your account, the type of content you post, etc. I have not heard of a single adult being hit with a request to verify their age anywhere other than Discord, and even on Discord, it's only when trying to view NSFW-tagged channels. (Which is an 18+ thing, completely unrelated to this law, which sets 16+ for all social media. And Discord has officially been classified as a chat app rather than social media, so the law doesn't apply to it anyway.)
It also says, in 63F:
(1) If an entity:
(a) holds personal information about an individual that was collected for the purpose of, or for purposes including the purpose of, taking reasonable steps to prevent age-restricted users having accounts with an age-restricted social media platform; and
(b) uses or discloses the information otherwise than:
(i) for the purpose of determining whether or not the individual is an age-restricted user; or ...
(iii) with the consent of the individual, which must be in accordance with subsection (2);
the use or disclosure of the information is taken to be:
(c) an interference with the privacy of the individual for the purposes of the Privacy Act 1988; ...
(2) For the purposes of subparagraph (1)(b)(iii):
(a) the consent must be:
[(i–v) voluntary, informed, current, specific, and unambiguous]; and
(b) the individual must be able to withdraw the consent in a manner that is easily accessible to the individual.
(3) If an entity holds personal information about an individual that was collected for the purpose of, or for purposes including the purpose of, taking reasonable steps to prevent age-restricted users having accounts with an age-restricted social media platform, then:
(a) the entity must destroy the information after using or disclosing it for the purposes for which it was collected
In other words: whatever information a platform collects to do the age verification, it must destroy once the verification is done, unless it already holds that information, with the user's consent, for some other purpose.
It would not have been hard to just not include that part of the law. Some privacy advocates would have spoken up about it, but the general public would have probably brushed it off. No, they included that because this isn't about information harvesting. It's a misguided but genuine attempt to protect kids. And, if you're looking for a more cynical spin on it, it's to win some good PR with people for being able to say they're protecting kids, while also not doing anything that would substantially hurt big tech's bottom line...like regulating the algorithms themselves.
But again, you mentioned the US government. What does that have to do with this? This is a law passed in Australia, by the Australian government. An entirely different country, and one with an actually functioning government and legislature.
First of all, what does the US government have to do with this?
Second, I made quite a detailed comment. Which bits do you disagree with and why?
Most people are viewing this as a good way to prevent misinformation brainwashing
If only the government had actually made that the law. Crack down on the harmful algorithms that commercial social media use, instead of this shit.
My server just took a vote and changed our one NSFW channel to no longer be marked NSFW. All we used that channel for was posting slightly sex-themed memes anyway.
I personally see zero downsides to this
I would encourage you to read my comment in another thread. There's the beginning of a good idea in this legislation, but nearly everything about how it's actually done is awful.
No kids getting fined or arrested for using VPNs or buying accounts off others
It's actually explicitly not going to do that. The social media companies are the only ones with any legal burden here. That's the intent, and you don't need to go into cooker nonsense to justify it. It's no different from how a harm reductionist approach to drugs involves targeting dealers, not people buying for personal use.
What do you guys think about this? I think its a big positive.
It's not. But not for the reason you say.
I get why they do it. Its to make people upload identification documents
This is just some conspiracy theory nonsense. The law specifically says that photo ID cannot be the only way users can verify themselves. And it also says that any uploaded documents must not be used for any other purpose. No, the reason behind the law is exactly what they say it is: to protect kids. They're just really bad at their job and don't understand the ways this law will not accomplish that goal.
I'll repost some of my comments from elsewhere:
The ultimate goal is a good one: keep kids safe from dangerous social media algorithms. But as for the method used to arrive at it...the Government did the wrong thing at pretty much every opportunity it possibly could.
Step 1: the government should have considered regulating the actual algorithms. We know that Facebook has commissioned internal studies which told them certain features of their algorithm were harmful, and they decided to keep those features because they increased stickiness a little bit. Regulate the use of harmful algorithms and you fix this not just for children, but for everyone.
Step 2: if we've decided age verification must be done, it should be done in a way that preserves as much privacy and exposes people to as little risk as possible. The best method would be laws around parental controls. Require operating systems to support robust parental controls, including an API that web browsers and applications can access. Require social media sites and apps to use that API. Require parents to set up their children's devices correctly.
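To sketch what that API might look like (every name here is hypothetical; no real OS exposes exactly this), the key property is that the app only ever learns a yes/no answer derived from a parent-asserted setting, never the child's identity documents:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParentalControls:
    """State the OS would hold after a parent sets up a child's device."""
    is_child_account: bool
    birth_year: Optional[int]  # parent-asserted; never a legal ID

def os_get_parental_controls() -> ParentalControls:
    # In a real system this would be a platform API call; here we
    # just return a fixed example for a child's device.
    return ParentalControls(is_child_account=True, birth_year=2012)

def may_create_account(min_age: int, current_year: int = 2025) -> bool:
    """What a social media app would call before showing a sign-up screen."""
    pc = os_get_parental_controls()
    if not pc.is_child_account:
        return True   # adult devices are never age-gated
    if pc.birth_year is None:
        return False  # child account with no age set: deny by default
    return current_year - pc.birth_year >= min_age

print(may_create_account(min_age=16))  # → False for the 2012 example above
```

The point of the design is that no site ever collects anything: the check happens on-device, using a setting the parent controls.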
Step 3: if we really, really do insist on doing it by requiring each and every site do its own age verification, require it be done in privacy-preserving ways. There are secure methods called "zero-knowledge proofs" that could be used, if the government supported them. Or they could mandate that age verification is done using blinded digital signatures. This way, at least when you upload your photo or ID to get your age verified, the site doesn't actually get to know who you are, and the people doing the age verification don't get to know which sites you're accessing.
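Here's a toy illustration of the blinded-signature idea (this is not anything the Bill mandates, and the parameters are deliberately tiny; real deployments would use RSA-2048+ or a modern anonymous-credential scheme):

```python
import hashlib
import secrets
from math import gcd

# Toy RSA key for the age-verification authority. Small, well-known
# primes keep the sketch readable; they are NOT secure.
p, q = 104729, 1299709
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # authority's private exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The user blinds a token asserting only "holder is 16+".
m = h(b"over-16")
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. The authority, having checked the user's ID once, signs the
#    *blinded* value, so it never sees the token or which site wants it.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds, leaving a valid signature on the bare token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site can verify offline, learning only "16+", not who the
#    user is or where else they have been.
assert pow(sig, e, n) == m
```

(A real scheme would also need one-time tokens so that reusing the same signature across sites can't be linked, but the core property is here: the checker of IDs and the site you visit each learn only half the picture.)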
Step 4: make it apply to actually-harmful uses of social media, not a blanket ban on literally everything. Pixelfed is not harmful in the way Instagram is. It just isn't. It doesn't have the same insidious algorithms. Likewise Mastodon compared to Xitter. And why does Roblox, the site that has been the subject of multiple reports into how it facilitates child abuse get a pass, while Aussie.Zone has to do some ridiculous stuff to verify people's age? Not to mention Discord, which is clearly social media, and 4chan, which is...4chan.
Step 5: consider the positive things social media can do. Especially for neurodiverse and LGBTQ+ kids, finding supportive communities can be a literal life-saver, and social media is great at that.
Step 3.5: look at the UK. Their age restriction has been an absolute failure. People are using footage from video games to pass as old enough. Others have had their documents leaked because of insecure age verification processes and companies keeping data they absolutely should not be holding on to.
And perhaps most importantly:
Step 0: Transparent democratic processes
Don't put up legislation and pass it within 1 week. Don't restrict public submissions to a mere 24 hours. Don't spend just 4 hours pretending to consider those public submissions that did manage to squeeze into your tight timeframe. There is literally no excuse for a Government to ever act that fast (with a possible exception for quick responses to sudden, unexpected, acute crises, which this definitely is not). Good legislation takes time. Good democratic processes require listening to and considering a broad range of opinions. Even if everything the legislation delivered had actually ended up perfect, this would still be an absolutely shameful piece of legislation for the opaque, anti-democratic way in which it was passed into law.
And that's not to mention the fact that in some ways, not having an account is making things more dangerous. Like how porn bans in other countries have basically just amounted to PornHub bans, with people able to ignore it by going to shadier sites with far worse content on them and less content moderation. And I've seen a number of parents point to YouTube in particular, saying that when their kids had an account, they were able to see the kids' watch history, and could tell the YouTube algorithm to stop recommending specific channels or types of content. Without an account, you can't do that.
And, naturally, we're already seeing cases of kids passing the age checks despite being under-age. 11-year-olds who get told they look 18. A 13-year-old whose parent said he could pass for 10, who, just by scrunching his face up a bit, got the facial recognition to say he's 30+. Shock, horror: facial recognition is not a reliable determiner of age. It never should have been allowed.
in most countries that would be solved by good regulations
Quite likely both, actually. Good regulations help reduce the chance of it happening, but if it does happen, damage is done. Regulations might mean they receive a fine, but that doesn't make the victim of their negligence whole. Medical bills aren't all there is to it. There's the cost of pain and suffering. Probably time off work. (And having good leave policies doesn't necessarily help, because now that's leave she's used for this that she can't use if she later needs to for another reason.) Cost of repair/cleaning the car. Lawsuits would still happen.
And anyway, I'm not defending American anti-regulation bs. I'm defending people's right to sue companies that wronged them. In the absence of good regulations protecting consumers, suing a company that did the wrong thing isn't "absolute chaos". There is no "absolute chaos of lawsuit nonsense". That is corporate propagandistic bullshit.
The fact is, right now we know that Facebook has at times made a deliberate, conscious choice to leave in aspects of their algorithm that were causing harm. Their own studies have shown this. Making that practice illegal—knowingly causing harm with your algorithm—would be a good place to start with regulation.