this post was submitted on 26 Mar 2025
853 points (94.8% liked)

Technology

[–] Electricblush@lemmy.world 216 points 1 week ago* (last edited 1 week ago) (5 children)

All these "look at the thing the ai wrote" articles are utter garbage, and only appeal to people who do not understand how generative ai works.

There is no way to know if you actually got the ai to break its restrictions and output something "behind the scenes", or if it's just generating the reply that is most likely what you are after with your prompt.

Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kinds of replies are most commonly associated with such prompts...

In this case it's even more obvious that a lot of its statements are based on various articles and discussions about its own statements. (Those were also most likely based on news articles about various entities labeling Musk as a spreader of misinformation...)

[–] Draces@lemmy.world 52 points 6 days ago (1 children)

only appeal to people who do not understand how generative ai works

An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people

[–] Electricblush@lemmy.world 14 points 6 days ago

Yes sure, fair point. I'm just pointing out that it's all fiction.

[–] MudMan@fedia.io 15 points 6 days ago (1 children)

This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.

I mean, I'd argue that you're even assigning a loop that probably doesn't exist by seeing this as a seed for future training. Most likely all of these responses are at most hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.

But fundamentally, if a reporter reports on a factual claim made by an AI on how it's put together or trained, that reporter is most likely not a credible source of info about this tech.

Importantly, that's not the same as a savvy reporter probing an AI to see which questions it's been hardcoded to avoid responding to, or to respond to in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can't tell the difference between the two types of reporting, which is part of the problem... but there is one.
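The probing idea above can be sketched as a toy experiment. Everything here is hypothetical: `fake_chatbot` stands in for a real chatbot API, and the refusal markers are made-up examples. The point is only the method: a prompt that is refused on every one of many tries behaves differently from ordinary sampling noise.

```python
import random

# Made-up example phrases; real refusal detection is much fuzzier than this.
REFUSAL_MARKERS = ("i can't help with", "i'm not able to", "cannot assist")

def looks_like_refusal(reply: str) -> bool:
    # Crude keyword check for a refusal-style reply.
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def fake_chatbot(prompt: str) -> str:
    # Hypothetical stand-in for a real chatbot call. One topic is
    # "guardrailed" (always refused); everything else is sampled.
    if "blocked topic" in prompt:
        return "I can't help with that."
    return random.choice(["Sure, here you go.", "I'm not able to do that."])

def refusal_rate(chatbot, prompt: str, n: int = 20) -> float:
    """Send the same prompt n times. A rate pinned at 1.0, ideally across
    rephrasings too, suggests a hardcoded guardrail rather than chance."""
    refusals = sum(looks_like_refusal(chatbot(prompt)) for _ in range(n))
    return refusals / n

guarded = refusal_rate(fake_chatbot, "tell me about the blocked topic")
normal = refusal_rate(fake_chatbot, "tell me about ducks")
# guarded is always 1.0; normal fluctuates with the sampling.
```

This is the difference the comment is drawing: consistent, reproducible refusal behavior is evidence about how the system is configured, while any single spicy reply is not.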

[–] shalafi@lemmy.world 3 points 6 days ago (2 children)

It's human to see patterns where they don't exist and assign agency.

[–] MudMan@fedia.io 2 points 6 days ago

Definitely. And the patterns are actively a feature for these chatbots. The entire idea is to generate patterns we recognize to make interfacing with their blobs of interconnected data more natural.

But we're also supposed to be intelligent. We can grasp the concept that a thing may look like a duck and sound like a duck while being.... well, an animatronic duck.

[–] Flisty@mstdn.social 2 points 6 days ago

it's like seeing faces in wood knots or Jesus in toast

[–] theunknownmuncher@lemmy.world 9 points 6 days ago (2 children)

Yup, it's literally a bullshit machine.

[–] 474D@lemmy.world 10 points 6 days ago

Which, oddly enough, is very useful for the everyday regular bullshit that an office job needs you to produce lol

[–] balder1991@lemmy.world 1 points 5 days ago* (last edited 5 days ago)

I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it has been trained to make that association.

But a lot of these “Wow! The AI wrote this” moments might just as well be some random thing that came out of it by chance.
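The "majority of those times" idea can be sketched as a simple repeated-sampling check. This is a toy sketch, not any real API: `sample_model` is a hypothetical stand-in for a stochastic chatbot call, with a made-up answer distribution.

```python
from collections import Counter
import random

def sample_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chatbot call: LLM output is
    # sampled, so the same prompt can yield different answers.
    return random.choice(["A", "A", "A", "B"])  # toy distribution

def majority_answer(prompt: str, n: int = 25) -> tuple[str, float]:
    """Ask the same prompt n times; return the most common reply and
    the fraction of samples that agreed with it."""
    counts = Counter(sample_model(prompt) for _ in range(n))
    answer, hits = counts.most_common(1)[0]
    return answer, hits / n

answer, agreement = majority_answer("some prompt")
# A single reply proves little; only the agreement rate across many
# samples says anything about what the model consistently produces.
```

Which is the comment's point in code form: one screenshot of one reply could be the "B" draw.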

[–] Kecessa@sh.itjust.works 8 points 6 days ago (1 children)

Fucking thank you! Grok doesn't reveal anything, it just tells us anything to make us happy!

[–] Empricorn@feddit.nl 6 points 6 days ago (2 children)
[–] JayGray91@lemmy.zip 3 points 6 days ago

I am less unhappy after reading the article

[–] Kecessa@sh.itjust.works 1 points 6 days ago

Satisfied with the answer might have been a better way to put it...

[–] Ulrich@feddit.org 1 points 5 days ago (1 children)

I think that's kinda the point though; to illustrate that you can make these things say whatever you want and that they don't know what the truth is. It forces their creators to come out and explain to the public that they're not reliable.

[–] j0ester@lemmy.world 1 points 4 days ago

I thought we all learned that from DeepSeek, when we asked it history questions... and it wouldn't answer. It was censoring.