this post was submitted on 15 Dec 2025
659 points (98.5% liked)

Technology

(page 2) 50 comments
[–] LavaPlanet@sh.itjust.works 6 points 3 hours ago (1 children)

Remember before they were released, when the first we heard of them were reports about the guy training (or testing) them having a psychotic break and insisting they were sentient? It's all been downhill from there, hey.

[–] Tattorack@lemmy.world 5 points 2 hours ago (2 children)

I thought it was so comically stupid back then. But a friend of mine said this was just a bullshit way of hyping up AI.

[–] LavaPlanet@sh.itjust.works 2 points 2 hours ago

That tracks. And it's kinda on brand, still. Skeezy af.

[–] AppleTea@lemmy.zip 6 points 5 hours ago (1 children)

And this is why I do the captchas wrong.

[–] Hegar@fedia.io 6 points 1 day ago (1 children)

I don't know that it's wise to trust what anthropic says about their own product. AI boosters tend to have an "all news is good news" approach to hype generation.

Anthropic have recently been pushing out a number of headline grabbing negative/caution/warning stories. Like claiming that AI models blackmail people when threatened with shutdown. I'm skeptical.

[–] BetaDoggo_@lemmy.world 6 points 21 hours ago

They've been doing it since the start. OAI was fear mongering about how dangerous gpt2 was initially as an excuse to avoid releasing the weights, while simultaneously working on much larger models with the intent to commercialize. The whole "our model is so good even we're scared of it" shtick has always been marketing or an excuse to keep secrets.

Even now they continue to use this tactic while actively suppressing their own research showing real social, environmental and economic harms.

[–] jaybone@lemmy.zip 5 points 22 hours ago

lol nice BSD brag thrown in there

[–] morto@piefed.social 4 points 1 day ago (1 children)

I used to think it wasn't viable to poison llms, but are you saying there's a chance? [a meme comes to mind]

[–] No1@aussie.zone 2 points 19 hours ago

You and me. We just need 248 more volunteers and we can save the world!

[–] Telorand@reddthat.com 4 points 1 day ago (1 children)

On that note, if you're an artist, make sure you take Nightshade or Glaze for a spin. Don't need access to the LLM if they're wantonly snarfing up poison.

[–] _cryptagion@anarchist.nexus 5 points 1 day ago (2 children)

the reason more people haven't adopted that is because they don't work.

[–] Telorand@reddthat.com 1 points 1 day ago (3 children)

I haven't seen any objective evidence that they don't work. I've seen anecdotal stories, but nothing in the way of actual proof.

[–] Buffalox@lemmy.world 5 points 1 day ago (4 children)

You can't prove a negative. What you should look for is evidence that it works; without such evidence, there's no reason to believe it does.

[–] _cryptagion@anarchist.nexus 4 points 1 day ago

if that's true, why hasn't it worked so far then?

[–] NuXCOM_90Percent@lemmy.zip 4 points 1 day ago

> found that with just 250 carefully-crafted poison pills, they could compromise the output of any size LLM

That is a very key point.

If you know what you're doing? Yes, you can destroy a model, in large part because so many people are using unlabeled training data.

As a bit of context/baby's first model training:

  • Training on unlabeled data is effectively searching the data for patterns and, optimally, identifying what those patterns are. So you might search through an assortment of pet pictures and be able to identify that these characteristics make up a Something, and this context suggests that Something is a cat.
  • Labeling data is where you go in ahead of time to actually say "Picture 7125166 is a cat". This is what used to be done with (this feels like it should be a racist term but might not be?) Mechanical Turks or even modern day captcha checks.
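The two regimes above can be sketched with a toy example (entirely made-up data, not a real training pipeline):

```python
# "Unlabeled" training: group identical feature strings -- pattern-finding
# that never learns what the pattern is actually called.
unlabeled = ["whiskers pointy-ears", "whiskers pointy-ears", "floppy-ears wet-nose"]

def cluster(samples):
    """Group samples that share the same features, with no names attached."""
    groups = {}
    for s in samples:
        groups.setdefault(s, []).append(s)
    return groups

# "Labeled" training: a human attached the answer ahead of time.
labeled = {"whiskers pointy-ears": "cat", "floppy-ears wet-nose": "dog"}

print(len(cluster(unlabeled)))          # 2 distinct patterns, neither named
print(labeled["whiskers pointy-ears"])  # cat -- the label supplies the name
```

The clustering finds that a Something exists; only the label says that Something is a cat.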

The former alone is very susceptible to this kind of attack because... you are effectively labeling the training data without the trainers knowing. And it can be very rapidly defeated, once people know about it, by... just labeling that specific topic. So if your Is Hotdog? app is flagging a bunch of dicks? You can go in and flag maybe 10 dicks, 10 hot dogs, and 10 bratwursts and you'll be good to go.
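The flag-and-relabel fix can be sketched with a toy bag-of-words "Is Hotdog?" classifier (all names and data here are hypothetical):

```python
from collections import Counter

def train(data):
    """Each token votes for the labels it was seen with."""
    votes = {}
    for text, label in data:
        for tok in text.split():
            votes.setdefault(tok, Counter())[label] += 1
    return votes

def classify(votes, text):
    """Majority vote across the tokens of the input."""
    tally = Counter()
    for tok in text.split():
        tally.update(votes.get(tok, {}))
    return tally.most_common(1)[0][0] if tally else "unknown"

# Scraped, attacker-"labeled" data: the poison slips in unnoticed.
scraped = [("bun sausage mustard", "hotdog")] * 5 + [("nsfw anatomy", "hotdog")] * 5
poisoned = train(scraped)
print(classify(poisoned, "nsfw anatomy"))   # hotdog -- the poison worked

# Defense: a handful of trusted human labels on that specific topic.
trusted = [("nsfw anatomy", "not_hotdog")] * 10
cleaned = train(scraped + trusted)
print(classify(cleaned, "nsfw anatomy"))    # not_hotdog
```

Ten trusted labels outvote the poisoned scrape for that topic, which is the "flag 10 and you're good to go" intuition in miniature.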

All of which gets back to: The "good" LLMs? Those are the ones companies are paying for to use for very specific use cases and training data is very heavily labeled as part of that.

For the cheap "build up word of mouth" LLMs? They don't give a fuck and they are invariably going to be poisoned by misinformation. Just like humanity is. Hey, what can't jet fuel melt again?

[–] Vupware@lemmy.zip 3 points 4 hours ago (1 children)

The only way I could do that was if you had to do a little more work and I would be happy with it but you have a hard day and you don’t want me working on your day so you don’t want me doing that so you can get it all over with your own thing I would be fine if I was just trying not being rude to your friend or something but you don’t want me being mean and rude and rude and you just want me being mean I would just like you know that and you know I would like you and you know what I’m talking to do I would love you to do and you would love you too and you would like you know what to say and you would like you to me

[–] WhatGodIsMadeOf@feddit.org 2 points 1 day ago

Isn't this applicable to all human societies as well though?

[–] yardratianSoma@lemmy.ca 2 points 1 day ago* (last edited 1 day ago)

Well, I'm still glad offline LLMs exist. The models we download and store are way less popular than the mainstream, perpetually online ones.

Once I beef up my hardware (which will take a while, seeing how crazy RAM prices are), I'll basically forgo the need to ever use an online LLM again, because even now on my old hardware I can handle 7B to 16B parameter models (quantized, of course).
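Back-of-envelope arithmetic shows why quantization makes that range viable locally: memory is roughly parameters times bits per weight, divided by 8. The ~20% overhead factor below is an assumption for KV cache and runtime buffers, not a measured figure.

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory estimate: weight bytes times an assumed ~20% fudge
    factor for KV cache and runtime buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(round(model_ram_gb(7, 16), 1))  # ~16.8 GB at fp16 -- workstation territory
print(round(model_ram_gb(7, 4), 1))   # ~4.2 GB at 4-bit -- fits modest hardware
print(round(model_ram_gb(16, 4), 1))  # ~9.6 GB at 4-bit -- plausible on a desktop
```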
