this post was submitted on 28 Jun 2025
28 points (100.0% liked)

Hacker News

1890 readers

Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

founded 9 months ago
top 12 comments
[–] Tim_Bisley@piefed.social 11 points 4 days ago (5 children)

AI models are annoyingly affirming, even for the most benign questions. I can ask, "What shape is a stop sign?" and it will reply with something like "Way to think on your toes, and you are so right for asking about that!"

[–] Swedneck@discuss.tchncs.de 7 points 4 days ago (2 children)

give me a model that responds "it's an octagon dipshit, sesame street taught you this"

[–] grrgyle@slrpnk.net 4 points 4 days ago

Give me a model that eats other models and then dies.

[–] piefood@feddit.online 2 points 3 days ago

They can do that. I have an AI system that I've been working on, and I told it to be grumpy and to question me when I'm wrong. It gives me some sassy, angry answers. I'm guessing they set up the prompts to be overly nice.
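For anyone curious, the "told it to be grumpy" part is usually just a system prompt prepended to the conversation. A minimal sketch in the common chat-message format (the prompt wording and the `build_request` helper are illustrative assumptions, not the commenter's actual setup):

```python
# Minimal sketch: steering a chat model's tone with a system prompt.
# The exact prompt text and helper function are illustrative, not from
# any specific product; most chat APIs accept messages shaped like this.

def build_request(user_question: str) -> list[dict]:
    """Build a message list with a system prompt that sets a blunt tone."""
    system_prompt = (
        "Be blunt and a little grumpy. Never open with praise or "
        "platitudes. If the user is wrong, say so and explain why."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_request("What shape is a stop sign?")
print(messages[0]["role"])  # the system message comes first
```

Hosted services do the same thing in reverse: their hidden system prompt is what nudges the model toward relentless agreeableness.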

[–] minyakcurry@monyet.cc 2 points 4 days ago (1 children)

I know of people who (proudly) post screenshots of GPT calling them insightful, as if the matrix multiplier didn't already tell everyone that.

[–] slaneesh_is_right@lemmy.org 1 points 2 days ago

I saw a huge uptick of women on dating apps being proud to ask ChatGPT for advice. Not just a crazy thing to be proud of, but also a good sign that they are probably narcissists who just like the affirmation.

[–] owl@infosec.pub 1 points 4 days ago

I feel like it started doing this not too long ago. The first couple of times it worked on me, and I was kinda proud I had asked a clever question. But eventually I noticed it does it no matter what I ask, and I felt so foolish.

[–] shalafi@lemmy.world 1 points 4 days ago

Gotta hit LLMs with utterly unbiased questions, and that's hard for most. I get pretty solid results, but still, gotta look into the reply, not take it at face value. And the further you pursue a certain tack, the less valuable the output.

[–] JayGray91@piefed.social 1 points 4 days ago

I want them to stop yapping fake platitudes and just give me the answer straight with no conversational fluff.

This has probably been said a lot, but I think it's partly that the techbros want to make HAL, Jarvis, or any other fictional AI with personality.

[–] Saleh@feddit.org 8 points 4 days ago* (last edited 4 days ago) (1 children)

I wonder if this is related to how ChatGPT and other models provided as a service have been filtered.

E.g. them being "forced" to be nice and more agreeable.

If that turns out to be the case, I'd wager that it is impossible to filter for every possible scenario and outcome, as we have seen with people hacking past these filters through clever prompting in ever more sophisticated ways.

I find it particularly worrying that people without any prior signs of mental health issues got sucked into severe delusions, and the article suggests that the "AI" being marketed as reliable and impartial is key to it. This means the companies behind them will not address this fundamental misconception, as their business model is built on it.

I don't see how these cases could be prevented without extreme regulatory intervention.

[–] wizardbeard@lemmy.dbzer0.com 2 points 4 days ago (1 children)

I don't see how these cases could be prevented even with regulation. It would take a massive change in how these things work on a fundamental level.

[–] Saleh@feddit.org 3 points 4 days ago

Regulations that make the companies in question fully liable for any damages incurred in using the product, for instance. Or regulations that prohibit selling these to the general public, or that require human supervision at all times of use.

That's what I mean by

extreme regulatory intervention

As a consequence it would lead to fundamental changes in the products themselves.