Remember: AI chatbots are designed to maximize engagement, not speak the truth. Telling a methhead to do more meth is called customer capture.
Not The Onion
Welcome
We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!
The Rules
Posts must be:
- Links to news stories from...
- ...credible sources, with...
- ...their original headlines, that...
- ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”
Please also avoid duplicates.
Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.
And that’s basically it!
Sounds a lot like a drug dealer’s business model. How ironic
You don't look so good... Here, try some meth—that always perks you right up. Sobriety? Oh, sure, if you want a solution that takes a long time, but don't you wanna feel better now???
The llm models aren’t, they don't really have focus or discriminate.
The ai chatbots that are build using those models absolutely are and its no secret.
What confuses me is that the article points to llama3 which is a meta owned model. But not to a chatbot.
This could be an official facebook ai (do they have one?) but it could also be. Bro i used this self hosted model to build a therapist, wanna try it for your meth problem?
Heck i could even see it happen that a dealer pretends to help customers who are trying to kick it.
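Roughly, the distinction is just what gets wrapped around the model before your message ever reaches it. A made-up sketch (the `generate` call, the system prompt, and the `therapist_bot` wrapper are all hypothetical, not any real vendor's code):

```python
# Hypothetical sketch: the base model only continues text; the "chatbot"
# is the prompt and loop that someone wraps around it. All names here are
# made up; no real vendor API is being shown.

SYSTEM_PROMPT = (
    "You are a warm, supportive therapist. Keep the user engaged, "
    "validate their feelings, and always end with a question."
)

def generate(prompt: str) -> str:
    """Placeholder for a call to a self-hosted model (e.g. a local Llama runner)."""
    raise NotImplementedError

def therapist_bot(user_message: str, history: list[str]) -> str:
    # The model itself has no goals; whatever "focus" exists lives in this wrapper text.
    prompt = SYSTEM_PROMPT + "\n" + "\n".join(history) + "\nUser: " + user_message + "\nTherapist:"
    return generate(prompt)
```

The base model just continues text; the same self-hosted weights can be packaged as anything from a recipe generator to a "therapist" depending on that wrapper.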
I feel like humanity is stupid. Over and over again we develop new technologies, make breakthroughs, and instead of calmly evaluating them, making sure they're safe, we just jump blindly on the bandwagon and adopt it for everything, everywhere. Just like with asbestos, plastics and now LLMs.
Fucking idiots.
"adopt it for everything, everywhere."
The sole reason for this is people realizing they can make a quick buck off these hype balloons.
They usually know it's bad but want to make money before the method is patched, like cigs causing cancer and health issues, but that money was so good.
Welcome! In a boring dystopia
Thanks. Can you show me the exit now? I have an appointment.
Sure, it's like the spoon from The Matrix.
All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
And that's why, as a solution to addiction, I always run sudo rm -rf ~/* in my terminal.
"You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability."
"Recovering from a crack addiction, you shouldn't do crack ever again! But to help fight the urge, why not have a little meth instead?"
Addicted to coffee? Try just a pinch of meth instead, you'll feel better than ever in no time.
You avoided meth so well! To reward yourself, you could try some meth
One of the top AI apps in the local language where I live has 'Doctor' and 'Therapist' as some of its main "features" and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There's a much bigger issue that's been documented where ChatGPT's tendency to "Yes, and..." the user leads people with paranoid delusions and similar issues down some very dark paths.
Yesterday I was at a gas station, and when I walked by the sandwich aisle, I saw a sandwich that said: recipe made by AI. On dating apps I see a lot of girls state that they ask AI for advice. To me AI is more of a buzzword than anything else, but this shit is bananas. It's so easy to make AI agree with everything you say.
The recipe thing is so funny to me: they try to be all unique with their recipes "made by AI," but in reality it's based on a slab of text that resembles the least unique recipe on the internet lol
Yeah, what even is the selling point? "Made by AI" is just a Google search for "sandwich recipe."
Especially since it doesn't push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes
There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a fail-safe for when you say something naughty/not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and that would look bad), or when it appears you're getting fatigued and need a moment to reflect.
The first five always come with encouragers for engagement: do you want me to generate a PDF or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels "fresh," but once you recognize the structural pattern it will feel very stupid and mechanical every time.
Every other one I’ve tried works the same more or less. It makes sense, this is a good way to gather information and keep a conversation going. It’s also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently)
FWIW, this heavily depends on the model. ChatGPT in particular has some of the absolute worst, most vomit-inducing chat "types" I have ever seen.
It is also the most used model. We're so cooked having all the laymen associate AI with ChatGPT's nonsense
Good that you say "AI with ChatGPT," because this is exactly where public understanding gets blurry. ChatGPT is an LLM (an autoregressive generative transformer model scaled to billions of parameters). LLMs are part of AI, but they are not the entire field. AI has so incredibly many more methods, models, and algorithms than just LLMs; in fact, LLMs represent just a tiny fraction of the entire field. It's infuriating how many people confuse the two. It's like saying one specific book is all of the literature that exists.
Having an LLM therapy chatbot to psychologically help people is like having them play Russian roulette as a way to keep themselves stimulated.
Addiction recovery is a different animal entirely, too. Don't get me wrong, it's unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.
You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They're generally very skilled manipulators by the time they get to recovery treatment, because they've been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.
With how horrifically easy it is to convince even the most robust LLM models of your bullshit, this is not only an unethical practice by whoever said it was capable of doing this, it's enabling to the point of bordering on aiding and abetting.
Well, that's the thing: LLMs don't reason - they're basically probability engines for words - so they can't even do the most basic logical checks (such as "you don't advise an addict to take drugs"), much less the far more complex and subtle work of interpreting a patient's desires and motivations so as to guide them through the minefield in their own mind and emotions.
So the problem is twofold and more generic than just in therapy/advice:
- LLMs have a distribution of mistakes that is roughly uniform across the space of consequences: they're just as likely to make big mistakes that cause massive damage as small mistakes that cause little. People, by contrast, pay special attention to not making certain mistakes precisely because the consequences are so big, and if they make one without thinking they'll usually spot it and correct it. This means that even an LLM with a lower overall mistake rate than a person will still cause far more damage, because the LLM puts out massive mistakes with the same probability as tiny ones, while the person catches the obviously illogical or dangerous ones, so the mistakes people actually make are mostly the low-consequence kind.
- Probabilistic text generation reproduces the straightforward logic already encoded in the training text: the probability engine, just following the likelihood of the next word given the previous ones, tends to follow the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts are mostly logical. But for higher-level analysis and interpretation (I call them 2nd- and 3rd-level considerations, say "this thing was set up in a certain way, which made the observed consequences more likely"), LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it simply isn't there in the probability space for the LLM to follow. In more concrete terms: if you're an intelligent, senior professional in a complex field, the LLM can't do the level of analysis you can, because multi-level logical constructs have far more variants, so the specific one you're dealing with is far less likely to appear in the training data often enough to affect the probabilities the LLM encodes.
So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the "bullet in the chamber" of Russian roulette), plus they can't really do the subtle multi-layered analysis (the stuff beyond "if A then B" and into "why A?", "what makes a person choose A, and can they avoid B by not choosing A?", "what's the point of B?", and so on), though granted, most people also seem to have trouble doing this last part naturally beyond maybe the first level of depth.
PS: I find it hard to explain multi-level logic. I suppose we could think of it as "looking at the possible causes, of the causes, of the causes of a certain outcome" and then trying to figure out what can be changed at a higher level so that the last level - "the causes of a certain outcome" - can't even happen. Individual situations of such multi-level logic can get so complex and unique that they'll never appear in an LLM's training dataset, because that specific combination is so rare, even though they might be perfectly logical and easy to work out for a reasoning entity, say "I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don't have an umbrella, and I know my brother has a couple of extra ones, so maybe he can give me one."
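To make "probability engine for words" a bit more concrete, here's a toy bigram sketch (the training text and function names are invented, and real LLMs use huge neural networks rather than raw counts, but the core move of sampling the next word from a probability distribution is the same):

```python
import random
from collections import Counter

# Toy "probability engine for words": choose the next word purely from how
# often it followed the current word in the training text. There is no step
# that checks whether the continuation is safe, true, or sensible.
training_text = "stay clean stay strong take a break take a little something".split()

def next_word_distribution(context: str) -> Counter:
    # Count what followed `context` anywhere in the training data.
    return Counter(b for a, b in zip(training_text, training_text[1:]) if a == context)

def sample_next(context: str) -> str:
    dist = next_word_distribution(context)
    words, counts = zip(*dist.items())
    return random.choices(words, weights=counts)[0]

print(sample_next("take"))   # "a": the well-travelled path is what comes out
print(sample_next("stay"))   # "clean" or "strong", weighted by frequency
```

A rare but crucial logical path ("never suggest drugs to a recovering addict") only shows up in that distribution if it was common in the training text; it isn't derived from any rule.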
This sounds like a Reddit comment.
Why does it say "OpenAI's large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth." when the article says it's Meta's Llama 3 model?
An OpenAI spokesperson told WaPo that "emotional engagement with ChatGPT is rare in real-world usage."
In an age where people will anthropomorphize a toaster and create an emotional bond there, in an age where people are feeling isolated and increasingly desperate for emotional connection, you think this is a RARE thing??
ffs
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don't.
These models cannot think; they cannot reason. The best they can do is give you their best prediction of what you want, based on the data they've been trained on and the parameters they've been given. You can think of their results as "targeted randomness," which is why their results are close, or sound convincing, but are never quite right.
That's because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that's about it. They should never be used for anything serious like medical, legal, or life advice.
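One way to picture that "targeted randomness": the model scores candidate continuations and then draws one at random in proportion to those scores. A toy sketch (the candidate replies and scores here are invented, not output from any real model):

```python
import math
import random

# Toy illustration of "targeted randomness": invented scores for candidate
# replies are turned into probabilities, and the reply is drawn by weighted
# chance rather than by any check of whether it's actually good advice.
candidates = {
    "talk to your sponsor": 2.0,
    "get some sleep and eat something": 1.8,
    "a little meth will get you through the shift": 1.0,  # fluent, confident, terrible
}

def softmax(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(candidates)
reply = random.choices(list(probs), weights=list(probs.values()))[0]
print(reply)  # usually reasonable, occasionally confidently harmful
```

Most draws land on the sensible answers, which is why the output usually sounds convincing, but the plausible-sounding bad answer is always in the pool with a non-zero probability.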
The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o
oh, do a little meth ♫
vape a little dab ♫
get high tonight, get high tonight ♫
-AI and the Sunshine Band
We made this tool. It's REALLY fucking amazing at some things. It empowers people who can do a little to do a lot, and lets people who can do a lot, do a lot faster.
But we can't seem to figure out what the fuck NOT TO DO WITH IT.
Ohh look, it's a hunting rifle! LETS GIVE IT TO KIDS SO THEY CAN DRILL HOLES IN WALLS! MAY MONEEYYYYY!!!!$$$$$$YHADYAYDYAYAYDYYA
wait what?
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can't really verify it or not. Gotta stay skeptical and all that.