Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?'-type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
Logo design credit goes to: tubbadu
Using AI therapy providers really isn't recommended. There's no accountability built into AI therapy chatbots, and their efficacy, when placed under professional review, has been poor. These models may seem like they're dispensing hard truths, because humans are often primed to distrust more optimistic or gentle takes, assuming them to be empty flattery and thus false. Runaway negativity feels true, but it can lead you to embrace unhealthy attitudes toward yourself and others.

AI runs with the assumptions you go in with, in part because these models are designed from an engagement-first perspective: they will do whatever keeps you on the hook, whether or not it is actually good for you. You might think you are getting quality care, but unless you are a trained professional, you aren't actually equipped to know whether the help you're getting is any good, only that it feels validating to you. And if it errs, there are no consequences for the provider, unlike human professionals, who have a code of ethics and licensing boards that can investigate bad practices.
Once the AI discovers which responses you report back as correct, it will continue to use that tactic. Essentially, it is tricking you into being your own unqualified therapist.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
https://www.scientificamerican.com/article/why-ai-therapy-can-be-so-dangerous/
Not only that, but some of these products are actually marketed for therapy purposes.
It isn't until you dig deep into the TOS that they admit the product is for "entertainment purposes" and spell out the details of how they hoard your data.