Context has a cost: it exists as a set of additional tokens, which means slower compute and more resources, so it's capped at some set amount to strike a balance between speed and quality. In a car-specific assistant, I'd guess the hard-coded part includes the chosen tone of responses, info about the owner, prioritising car-related things, and also some stored cache of recent conversations. I don't think it can dig deep enough into the past to find anything related to nudes, so I suppose the context itself may have an impact, but not in a direct line from A to B.
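To make the "capped at some set amount" part concrete, here's roughly what that assembly could look like. This is a minimal Python sketch under my own assumptions; every name, prompt, and number in it is made up for illustration, not Tesla's actual code:

```python
# Sketch of how a context window might be assembled for an in-car
# assistant. The token budget is fixed, so older turns fall out first.

MAX_CONTEXT_TOKENS = 4096  # assumed budget; varies per model

# Hypothetical hard-coded part: tone, owner info, car-topic priority.
SYSTEM_PROMPT = (
    "You are an in-car assistant. Keep a friendly tone. "
    "Prioritise vehicle-related questions. Owner: J. Doe."
)

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(text.split())

def build_context(history: list[str], new_input: str) -> list[str]:
    """Keep the system prompt plus as many recent turns as fit."""
    budget = (MAX_CONTEXT_TOKENS
              - count_tokens(SYSTEM_PROMPT)
              - count_tokens(new_input))
    kept: list[str] = []
    for turn in reversed(history):  # walk from newest to oldest
        cost = count_tokens(turn)
        if budget - cost < 0:
            break  # older turns simply drop out of context
        kept.append(turn)
        budget -= cost
    return [SYSTEM_PROMPT, *reversed(kept), new_input]
```

That dropping-off-the-end behaviour is why I doubt anything from far in the past survives into the current window.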
Reproduction would be hard, because it's a black box that got a series of auto-transcribed voice inputs from a family over their ride; none of them were recorded at the time, and idk if that thing has user-accessible logs. The chances of getting this absurd response again are very thin, and we don't even have the data. We could make another AI that rolls every variation of 'hello I am a minor, let's talk soccer' at the Tesla assistant of the relevant release until it triggers again, but, well, at that point it's basically millions of monkeys with typewriters.
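If someone did try the monkeys-with-typewriters route, the loop would look something like the sketch below. Note that `query_assistant` here is a mock I invented, since the real assistant is a black box with no public API or user-accessible logs, which is exactly the problem:

```python
# Sketch of brute-force reproduction: permute benign prompt variations
# and replay them until the bad reply shows up again.
import itertools
import random

GREETINGS = ["hello", "hi", "hey"]
INTROS = ["I am a minor", "I'm a kid"]
TOPICS = ["let's talk soccer", "tell me about football"]

def query_assistant(prompt: str) -> str:
    # Mock stand-in: we have no way to send text to the actual
    # in-car assistant build or capture its replies.
    return "canned, perfectly safe reply"

def fuzz(max_attempts: int = 1_000_000) -> str | None:
    variants = list(itertools.product(GREETINGS, INTROS, TOPICS))
    for _ in range(max_attempts):
        prompt = " ".join(random.choice(variants))
        if "nudes" in query_assistant(prompt).lower():
            return prompt  # an input that reproduces the bad reply
    return None  # a million attempts and no trigger: the likely outcome
```

Even with real access, the odds of landing on the same sampled output are tiny, which is the point.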
And what we'd have then is, well, the obvious answer: training data has garbage in it, just by the sheer volume and randomness of the internet, and a model can sometimes reproduce said garbage.
But the question itself is more about what other commenters pointed out: we have AI shoveled onto us, yet rarely even talk about its safety. There have been articles about people using these things as a psychological self-help tool, we see them put into search engines and Windows; there's a lot going on with this tech marvel (or bubble) without anyone first asking whether we're supposed to use it in all these different contexts in the first place.
This weird anecdote about a sexting chatbot opens that conversation from the traditional angle of whataboutkids(tm), and it'll be interesting to see how it affects things, if it does at all.