this post was submitted on 23 Nov 2025
34 points (81.5% liked)

In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

[–] panda_abyss@lemmy.ca 5 points 3 days ago

The hard-coding here is basically fine-tuning.

They generate a set of example cases and then pair each prompt with a good and a bad response. Then they update the model weights until it does well on those cases.
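In toy form, that kind of preference-based weight update looks something like this. This is just an illustrative sketch with a linear scorer and a Bradley-Terry-style loss, not anything like OpenAI's actual training code; all the names and numbers here are made up:

```python
import math

def score(weights, features):
    # Toy "model": a linear score over a response's feature vector.
    return sum(w * f for w, f in zip(weights, features))

def finetune(weights, pairs, lr=0.1, epochs=100):
    # pairs: list of (good_features, bad_features) for the same prompt.
    # Training pushes the good response's score above the bad one's.
    for _ in range(epochs):
        for good, bad in pairs:
            margin = score(weights, good) - score(weights, bad)
            # Loss is -log(sigmoid(margin)); its gradient w.r.t. the
            # margin is -1 / (1 + e^margin), which shrinks toward zero
            # once the good response clearly outranks the bad one.
            grad_coef = -1.0 / (1.0 + math.exp(margin))
            for i in range(len(weights)):
                weights[i] -= lr * grad_coef * (good[i] - bad[i])
    return weights

# Two hand-made example "cases": feature vectors for good vs. bad replies.
pairs = [([1.0, 0.0], [0.0, 1.0]),
         ([0.8, 0.1], [0.2, 0.9])]
w = finetune([0.0, 0.0], pairs)
assert score(w, [1.0, 0.0]) > score(w, [0.0, 1.0])  # good now outranks bad
```

The key point the comment makes is visible here: the update only ever touches the pairs in the dataset, so nothing guarantees the learned weights behave sensibly on inputs that look different from those examples.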

So they only do this with cases they’ve seen, and they can’t really say how well it does with cases they haven’t.

Having these cases in the fine-tuning dataset will juice the evaluation results, but hopefully the model also learns to actually identify these issues correctly.

The other thing is that a lot of the raw training data in these systems is labeled by low-paid workers in developing countries, who may not have a good appreciation of mental-health issues.