I can't drag myself out of bed to the shower until I'm awake enough to do non-shower things anyway...
FishFace
I'm sure plenty of people do, maybe you're even one of them, but turning this into blanket statements is weird and annoying. How many people think, because of the culture embodied in this sign, that they must shower every day, and never even check?
Well I'm certainly not taking a cold shower in winter! I don't have a thermostatic shower tap so don't know what temperature it is, but I guess I have it at about body temperature... Not sure if that is as hot as you mean.
Why does this only affect Brazil? Was that a separate "emergency"?
Plenty of people don't start to smell bad 24h after a shower, with or without deodorant. In winter, I'd guess it's actually most people.
I try to shower as infrequently as I can without stinking because it fucks up my skin every single time.
Ok, but a packet of peanuts costs a couple of bucks, so I think it sounds like a lot!
Idk what the commission has to do with it given the NIST investigation (and the others) which all point in the same direction.
Weird that you described this as "the official explanation" rather than, you know, "the explanation"
Or maybe it was expensive but recouped huge amounts due to enabling travel between the two countries, which facilitates all kinds of economic activity
There is an opportunity (or there would be, if these companies were in sane jurisdictions) to try and apply some standards, because only a handful of companies are capable of hosting these bots.
However, there are limitations because of their inherent nature. Namely, they are relatively cheap, so you can host far more conversations than could ever be manually monitored, and they are relatively unpredictable, so even the best-written safety rails will have problems (both false positives and false negatives).
Put together, that means you can't have AI chatbots which don't sometimes both: spout shit they really shouldn't, such as encouraging suicide or reinforcing negative thoughts; and erroneously block people because the system meant to prevent that triggered falsely. And the less of one you try to have, the more of the other.
That implies, to me, that AI chatbots need to be monitored for harm so that those systems can be tuned - or if need be so that the whole idea can be abandoned. But that also means that the benefits of the system need to be analysed, because it's no good going "ChatGPT is implicated in 100 suicides - it must be turned off" if we have no data on how many suicides it may have helped prevent. As a stochastic process that mimics conversation, there will surely be cases of both.
That seems to be an unresolved lawsuit, not knowledge.
If we are to look at the influence ChatGPT has on suicide we should also be trying to evaluate how many people it allowed to voice their problems in a respectful, anonymous space with some safeguards and how many of those were potentially saved from suicide.
It's a situation where it's easy to look at a victim of suicide who talked about it on ChatGPT and say that spurred them on. It's incredibly hard to look at someone who talked about suicide with ChatGPT, didn't kill themselves and say whether it helped them or not.
I'm sorry about how rapidly you stink, have you experimented with different deodorants or consulted a doctor?