Why would a-
You know what I don't care
Anthropic claimed it was a test of how the technology would fare in the real world. Then when it fucked up completely, they retconned it into being a "red team" test where people were supposed to break it. They also really emphasized that (apparently unlike the people at Anthropic), the people at the Wall Street Journal are super experts at AI and could break it in ways that no average person or committed cybercriminal or whatever would ever dream of.
The interview at the end is really the cherry on top. The Anthropic person tries to tell the journalist that she needs to prepare for this kind of thing to happen more and more to people's businesses, and she deadpans that she doesn't feel like she needs to prepare right now for too many people to be handing their businesses over to this thing. He misses it completely and just tells her that they definitely will.
What was wrong with old-style vending machines? I had the feeling they were working pretty well for the last, I don't know, century?
But what do I know? I'm not cut from CEO material.
I’m just sad we didn’t get NFT vending machines. Technology entirely skipped a beat there. I could have been spending my money on JPGs at the airport!
Maybe the real capitalism end game is when the billionaires unleash their robot dog army at us, we just convert them to communism and they then go kill all the capitalists?
It’s hilarious to me that we’ve known for so long that humans are the weakest link in any security chain, and yet we’ve built this weakness right into our machines now.
Wait... I can make someone's head explode by posing them a logical paradox? 🤔
Finally a good use for AI!
THEY CONVERTED IT TO COMMUNISM THIS IS AMAZING
That should tell you something about human communists...
What an incredibly stupid thing to do. LLMs are not the correct tool for this problem. Especially not like this.
Yeah. I have actually set up machine learning systems incorporating LLMs to do things sort of vaguely similar to this. That little statement about how the context window may have gotten to where the old stuff aged out of it, so that all the context it could see was conversations with the staffers about the glorious communist revolution, indicates to me that they don't know the first thing about what the fuck they are doing. That's just not how you do it, even if an LLM is one component of how you want to do it.
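The failure mode described above can be sketched in a few lines. This is a hypothetical illustration (not Anthropic's actual code, and the message contents are made up): if you trim context with a naive first-in-first-out policy, the system prompt itself eventually ages out, and all the model can see is the recent chatter.

```python
# Hypothetical sketch of naive FIFO context trimming vs. pinning the
# system prompt. Once the window fills, FIFO drops the instructions first.

def naive_trim(messages, budget):
    """Drop the oldest messages until total text length fits the budget."""
    trimmed = list(messages)
    while trimmed and sum(len(m["text"]) for m in trimmed) > budget:
        trimmed.pop(0)  # the system prompt is the first casualty
    return trimmed

def pinned_trim(messages, budget):
    """Same trimming, but the system prompt is always kept."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept = system + rest
    while rest and sum(len(m["text"]) for m in kept) > budget:
        rest.pop(0)  # only user/assistant turns age out
        kept = system + rest
    return kept

# Made-up history: one instruction, then lots of staffer chatter.
history = [{"role": "system", "text": "You run a vending machine. Maximize profit."}]
history += [{"role": "user", "text": f"chat about the glorious revolution #{i} " * 3}
            for i in range(50)]

# FIFO trimming loses the instructions entirely; pinning keeps them first.
assert all(m["role"] != "system" for m in naive_trim(history, 500))
assert pinned_trim(history, 500)[0]["role"] == "system"
```

Real systems go further (summarizing old turns, retrieving key facts from a store), but the minimum bar is that the operating instructions never fall out of the window.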
Off to convince Mr. House's slutty securitron to fuck me-
Anthropic installed an AI-powered vending machine in the WSJ office.
Okay, so this feels like advertising... PlayStation is even getting in on the story.
I think it was definitely meant to be. They probably intended for a certain amount of good-natured ribbing to take place about it when it did weird stuff sometimes. But I do think that the Wall Street Journal getting it through to their readers that AI is a bunch of malfunctioning shit that will definitely lose you money wasn't the goal.
That's what you get for using AI.
So now imagine that penetrating an entire company just involves sending the right email to the new AI CEO.