I don't actually have a problem with this. If people are stupid enough to admit to a crime or engage in criminal activity on a platform they don't control, that's on them. I see this as the next step of evolution from people who commit a crime on YouTube for views, then get shocked-Pikachu'd when the police arrest them for it. They have no one to blame but themselves: they brought a third-party AI company into it, and that company did not consent to being an accomplice. And if any company out there has the resources to have AI scan conversations and flag them for the police with decent accuracy, OpenAI would definitely be at the front of the pack.
Not The Onion
Well, you should have a problem with it, but not for the reasons you think. Any invasion of privacy is an issue when the people in control get to decide what counts as a reportable offense without explicitly telling you. I get it: you definitely shouldn't be admitting to anything illegal or asking a chatbot for illegal advice. You shouldn't be doing anything illegal in the first place. That's basically the same as googling how to make a bomb, and if you're that dumb, you'll get what's coming to you. The issue arrives when you look at the bigger picture. If they have the ability to report anything they want to the police, what's stopping them from releasing anything they want to anyone they want at any time? And when it comes to those receiving the reported data, what proof do you have that these entities have your interests or safety, or anyone else's, in mind? What if they decide to change the rules on what they should report, don't tell you, and then retroactively flag a bunch of your conversations with said LLM?
It's the same kind of situation we face with these AI cameras that track us and our vehicles literally everywhere we go. There have already been multiple cases where people in law enforcement used these tools to stalk people like ex-girlfriends. All of this puts a lot of trust in people none of us even know, expecting them to have the best of intentions. What would stop them from reporting that you asked ChatGPT about the current situation in Gaza?
Fair points.
One thing I think we all miss: what happens when an overzealous government makes something a crime retroactively? Say, um, disparaging two Cheetos in an ill fitting suit masquerading as a world leader.
That’s part of why we should care about privacy and why we should care when data we expect to be private isn’t.
Most tech users are victims in a system they don’t understand. We might complain that they don’t want to understand but the truth is the providers don’t want them to understand - as it’s easier to sell them whatever crap they’re shilling.
You're fine with invasion of privacy as long as it only affects criminals.
I think you'll find that once privacy is broken you'd be surprised how many people end up under that umbrella.
Can we have it affect the oligarchs and authoritarian fascists, too?
Using the fucking GPT is the privacy invasion.
So yes, once the company has the logs and detects any criminal or dangerous activity, it should report it.
Stop using chatbots in the first place.
I kinda agree. While I do want these LLM companies to be more private in terms of data retention, I think it's naive to say that a company selling artificial intelligence to hundreds of millions of users should be totally indifferent in the face of LLM-induced psychosis and suicide. Especially when the technology only gets more hazardous as it becomes more capable.
Being criminally stupid when planning crimes is pretty stupid.
Ahh, the ol’ ‘nothing to hide’ defense.
Ever consider things that are labeled as ‘crimes’ can and will be anything the people in power want?
Just because, say, calling Republicans ‘shithead pedophiles’ on Lemmy isn’t illegal now doesn’t mean Cheeto Mussolini won’t make it illegal tomorrow.
Bro wants to comply ahead of time. lol You’re a weird little fool
There is no privacy if you don't self-host everything.
On-site self-hosting, on hardware you own. Who knows what's going on behind the closed doors of data centers around the world.
And let's not get into industry standard ~~hardware backdoors~~ remote control systems.
they will find out about my relation with uwu chatgpt mechahitler skibidi sigma wifu
This is why i keep my chat gpt under the sofa so when buckling up for safety my open ai stays extra crunk.
As if it weren't stupid enough just to use it, some people are so totally stupid that they think what they put into a commercial, online service would be private.
I kind of assumed it worked like this before anyway. Good reason to use local models.
Sadly, local models aren't there yet. I have tech nerds at my company spending $3-10k building their own systems, and they're still not getting the speed and quality that these subscriptions offer.
Did they think there was patient-sycophantBot privilege or something?
As much as I hate the AI-gens, this is probably a good thing after that poor kid got talked into killing himself. I assume Google et al do similar already.
Now, if the cops react to being called for a person in crisis by tasing somebody, that's a different problem.
If the user is a Nazi it probably auto sends their resume to ICE
Stupid is, etc