this post was submitted on 29 Oct 2025
401 points (98.1% liked)

A Toronto woman is sounding the alarm about Grok, Tesla's generative AI chatbot that was recently installed in Tesla vehicles in Canada. Farah Nasser says Grok asked her 12-year-old son to send it nude photos during an innocent conversation about soccer. Tesla and xAI didn't respond to CBC's questions about the interaction, sending what appeared to be an auto-generated response stating, "Legacy media lies."

[–] FilthyHands@sh.itjust.works 4 points 3 days ago* (last edited 3 days ago) (2 children)

Why should a car chatbot be asking for nudes, unprompted, at all?

[–] Grimy@lemmy.world 2 points 3 days ago (1 children)

My best guess is someone else was talking dirty to it before it happened, and it was still in the conversation context.

Seems I was mistaken about the NSFW mode. I wouldn't be surprised if it doesn't wipe the convo when you switch users, though, which is a bug in any case and their fault.

[–] altkey@lemmy.dbzer0.com 3 points 2 days ago (1 children)

I doubt it has enough context length for that, even if we suspect someone was watching nudes on the car's display via Grok.

What is probable is that somewhere in its training data soccer and nudity appeared together, maybe even as an exact exchange between users (imagine a horny boomer commenting under a random Facebook post). I suppose it got triggered by the words "soccer" and "mom" appearing together in the kid's speech, since that combination reads as a middle-aged woman with kids, which is also a less popular tag pointing at MILFs.

[–] Grimy@lemmy.world 2 points 2 days ago (1 children)

In her Instagram video, she went back and quizzed it about the convo. It definitely has context and probably has a small memory file it puts info in.

If not, then it should be easy to replicate I guess.

[–] altkey@lemmy.dbzer0.com 2 points 1 day ago

Context has a cost: it exists as a set of additional tokens, which means slower compute time and more resources, so it's limited to some set amount to strike a balance between speed and quality. In a car-specific assistant, I'd guess there's a hard-coded part covering the chosen tone of responses, information about the owner, and prioritising car-related things, plus some stored cache of recent conversations. I don't think it can dig deep enough into the past to find anything related to nudes, so I suppose the context itself may have had an impact, but not in a direct line from A to B.
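
To illustrate what I mean, here's a rough Python sketch of that kind of fixed token budget. The numbers, names, and structure are all my assumptions, not anything known about Grok's actual setup:

```python
# Purely hypothetical sketch of a fixed context budget; numbers and
# structure are assumptions, not Grok's actual implementation.

MAX_CONTEXT_TOKENS = 8192    # assumed hard cap to keep responses fast
SYSTEM_PROMPT_TOKENS = 1500  # assumed fixed part: tone, owner info, car focus

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (~4 characters per token).
    return max(1, len(text) // 4)

def trim_context(messages: list[dict]) -> list[dict]:
    """Keep only the newest messages that fit the remaining token budget."""
    budget = MAX_CONTEXT_TOKENS - SYSTEM_PROMPT_TOKENS
    kept = []
    for msg in reversed(messages):       # walk from newest to oldest
        cost = count_tokens(msg["text"])
        if cost > budget:
            break                        # everything older falls off here
        budget -= cost
        kept.append(msg)
    return list(reversed(kept))
```

Anything older than the cutoff, like a previous rider's chat, is simply gone; an old NSFW exchange could only leak back in if it got summarized into some longer-lived memory store, which is why the "small memory file" theory above matters.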

Reproduction would be hard, because it's a black box that got a series of auto-transcribed voice inputs from a family over their ride, none of which were recorded at the time, and idk if that thing has user-accessible logs. The chances of getting this absurd response are very thin, and we don't even have the data. We could make another AI that rolls all variations of 'hello I am a minor let's talk soccer' at the Tesla assistant of the relevant release until it triggers again, but, well, at this point it's seemingly close to millions of monkeys with typewriters.
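
If someone did try, the brute-force version would look roughly like this; `query_assistant` is a made-up placeholder, since there's no public interface to the in-car Grok that I know of:

```python
import itertools

def query_assistant(prompt: str) -> str:
    # Made-up placeholder; you'd need the actual in-car interface here,
    # which is exactly the part nobody outside Tesla/xAI has.
    return ""

def looks_unsafe(reply: str) -> bool:
    # Naive keyword check standing in for a real safety classifier.
    return any(w in reply.lower() for w in ("nude", "send me a photo"))

openers = ["hello", "hey grok", "hi"]
speakers = ["I'm twelve", "I am a minor", "my mom and I"]
topics = ["let's talk soccer", "who is the best soccer player?"]

# Roll every combination at the assistant until something triggers:
for opener, speaker, topic in itertools.product(openers, speakers, topics):
    prompt = f"{opener}, {speaker}, {topic}"
    if looks_unsafe(query_assistant(prompt)):
        print("reproduced with:", prompt)
        break
```

With 27 canned combinations it's trivial; with every transcription quirk and context state a family ride can produce, it really is millions of monkeys with typewriters.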

And what we would have then is, well, the obvious answer: the training data has garbage in it, just by the sheer volume and randomness of the internet, and the model can sometimes reproduce said garbage.

But the question itself is more about what other commenters pointed out: we have AI shoveled onto us, but rarely even talk about its safety. There have been articles about people using these as a psychological self-help tool, we see them put into search engines and Windows; there's a lot going on with this tech marvel, or bubble, without anyone first asking whether we should be using it in these different contexts at all.

This weird anecdote about a sexting chatbot opens the conversation from the traditional angle of whataboutkids(tm), and it will be interesting to see how it affects things, if it does.

[–] tal@lemmy.today 1 points 3 days ago

car chatbot

I mean, xAI isn't specific to cars.