this post was submitted on 15 Dec 2025
534 points (98.4% liked)

Technology

[–] ceenote@lemmy.world 162 points 18 hours ago (1 children)

So, like with Godwin's law, the probability of an LLM being poisoned as it harvests enough data to become useful approaches 1.

[–] Gullible@sh.itjust.works 89 points 18 hours ago (3 children)

I mean, if they didn't piss in the pool, they'd have a lower chance of encountering piss. Godwin's law is more benign and incidental. This is someone maliciously handing out extra Hitlers in a game of Secret Hitler and then feeling shocked at the breakdown of the game.

[–] saltesc@lemmy.world 29 points 18 hours ago* (last edited 18 hours ago) (1 children)

Yeah, but they don't have the money to introduce quality governance into this, so the brain trust of Reddit it is. Which explains why LLMs have gotten all weirdly socially combative too; as if two neckbeards having at it, Google skill vs Google skill, were a rich source of A+++ knowledge and social behaviour.

[–] yes_this_time@lemmy.world 11 points 17 hours ago (2 children)

If I'm creating a corpus for an LLM to consume, I feel like I would probably create some data source quality score and drop anything that makes my model worse.
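
For illustration, here's a minimal sketch of what such a quality-score filter could look like. The heuristics, names, and the 0.6 threshold are made up for this example; real training pipelines use far more elaborate classifiers.

```python
# Toy sketch of per-document quality filtering before training.
# The score heuristics and the 0.6 threshold are illustrative assumptions,
# not any actual lab's pipeline.

def quality_score(doc: str) -> float:
    """Crude score in [0, 1]: reward longer documents, penalize ones
    dominated by repeated lines (a common spam/poison smell)."""
    lines = [line.strip() for line in doc.splitlines() if line.strip()]
    if not lines:
        return 0.0
    length_part = min(len(doc) / 2000, 1.0)    # prefer reasonably long docs
    uniqueness = len(set(lines)) / len(lines)  # penalize copy-pasted repetition
    return 0.5 * length_part + 0.5 * uniqueness

def filter_corpus(docs: list[str], threshold: float = 0.6) -> list[str]:
    """Keep only documents that score at or above the threshold."""
    return [doc for doc in docs if quality_score(doc) >= threshold]

if __name__ == "__main__":
    corpus = [
        "buy pills\n" * 50,  # repetitive spam -> low uniqueness -> dropped
        "\n".join(f"Paragraph {i}: something substantive and distinct." for i in range(50)),
    ]
    print(f"kept {len(filter_corpus(corpus))} of {len(corpus)} documents")
```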

[–] wizardbeard@lemmy.dbzer0.com 11 points 16 hours ago (1 children)

Then you have to create a framework for evaluating whether adding each source has a "positive" or "negative" effect. Good luck with that. They can't even map inputs in the training data back to their actual source correctly or consistently.

It's absolutely possible, but pretty much anything that adds more overhead per each individual input in the training data is going to be too costly for any of them to try and pursue.

O(n) isn't bad, but when your n is as absurdly big as the training corpora these things use, it adds up. And there's no telling whether it would actually only be an O(n) cost.

[–] yes_this_time@lemmy.world 6 points 15 hours ago

Yeah, after reading into it a bit, it seems like most of the work is up front: pre-filtering and classifying before anything hits the model. To your point, the model training part is the expensive bit...

I think broadly, though, the idea that they just throw the kitchen sink into these models without any consideration of source quality isn't true.

[–] hoppolito@mander.xyz 5 points 16 hours ago (2 children)

As far as I know that's generally what's done, but it's a surprisingly hard problem to solve 'completely', for two reasons:

  1. The more obvious one - how do you define quality? With the amount of data LLMs take in as input, and that needs checking on output, you're going to have to automate these quality checks, and one way or another it comes back around to some system having to define this score and judge against it.

    There are many different benchmarks out there nowadays, but it's still virtually impossible to just have 'a' quality score for such a complex task.

  2. Perhaps the less obvious one - you generally don't want to 'overfit' your model to whatever quality scoring system you set up. If you get too close to it, your model typically won't be generally useful anymore; it will just keep outputting things that exactly satisfy the scoring criterion and nothing else.

    If it reached a theoretically perfect score, it would just end up being a replication of the quality scorer itself.

[–] WhiteOakBayou@lemmy.world 8 points 16 hours ago

Like the LLM that was finding cancers: people were initially impressed, but then they figured out it had just correlated a doctor's name on the scan with a high likelihood of cancer. Once that confounding data point was removed, it no longer performed impressively. Point #2 is very Goodhart's-law adjacent.

[–] Arancello@aussie.zone 6 points 16 hours ago (1 children)

I understood that reference to handing out Secret Hitlers. I first played that game on a hike called 'Three Capes' in Tasmania. Laughed 'til my cheeks hurt.

[–] supersquirrel@sopuli.xyz 82 points 18 hours ago* (last edited 18 hours ago) (8 children)

I made this point recently in a much more verbose form, but I want to restate it briefly here: if you combine the vulnerability this article is talking about with the fact that large AI companies are most certainly stealing all the data they can and ignoring our demands not to, the conclusion is clear. We have the opportunity to decisively poison future LLMs created by companies that refuse to follow the law, or common decency, with regard to privacy and ownership of the things we create with our own hands.

Whether we're talking about social media, personal websites... whatever: if what you're creating is connected to the internet, AI companies will steal it. So take advantage of that and mix in a little poison as a thank-you for stealing your labor :)

[–] korendian@lemmy.zip 53 points 18 hours ago (3 children)

Not sure if the article covers it, but hypothetically, if one wanted to poison an LLM, how would one go about doing so?

[–] expatriado@lemmy.world 84 points 18 hours ago (5 children)

it is as simple as adding a cup of sugar to the gasoline tank of your car, the extra calories will increase horsepower by 15%

[–] Beacon@fedia.io 44 points 18 hours ago (1 children)

I can personally verify that that's true. I put sugar in my gas tank and I was amazed how much better my car ran!

[–] setsubyou@lemmy.world 40 points 18 hours ago

Since sugar is bad for you, I used organic maple syrup instead and it works just as well

[–] Scrollone@feddit.it 12 points 17 hours ago (1 children)

Also, flour is the best way to put out a fire in your kitchen.

[–] SaneMartigan@aussie.zone 6 points 11 hours ago

Flour is, bang for buck, some of the cheapest calories out there. With its explosive potential it's a great fuel source.

[–] demizerone@lemmy.world 11 points 8 hours ago

I give sugar to my car on its birthday for being a good car.

[–] crank0271@lemmy.world 8 points 12 hours ago (2 children)

This is the right answer here

[–] _cryptagion@anarchist.nexus 6 points 17 hours ago (1 children)

You're more likely to confuse a real person with this than an LLM.

[–] PrivateNoob@sopuli.xyz 32 points 18 hours ago* (last edited 18 hours ago) (11 children)

There are poisoning scripts for images, where some random pixels get totally nonsensical/erratic colors that we won't really notice at all, but which can leave the LLM in shambles.

However, I don't know how to poison text well without significantly ruining the original article for human readers.

Ngl, poisoning art should be widely advertised to independent artists imo.
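
For the curious, a toy sketch of the naive version of that idea: flip a small fraction of pixels to random colors. To be clear, real tools like Nightshade and Glaze compute carefully optimized perturbations rather than random noise, so this is only an illustration of the "humans barely notice" concept; the function name and parameters are invented for the example.

```python
# Minimal sketch of the naive idea described above: recolor a small fraction
# of pixels at random. Real poisoning tools use optimized adversarial
# perturbations, not random noise; this only illustrates the concept.
import numpy as np
from PIL import Image

def noisy_copy(path_in: str, path_out: str, fraction: float = 0.001, seed: int = 0) -> None:
    rng = np.random.default_rng(seed)
    img = np.array(Image.open(path_in).convert("RGB"))
    h, w, _ = img.shape
    n = int(h * w * fraction)                        # number of pixels to alter
    ys = rng.integers(0, h, size=n)
    xs = rng.integers(0, w, size=n)
    img[ys, xs] = rng.integers(0, 256, size=(n, 3))  # assign random RGB colors
    Image.fromarray(img).save(path_out)

# noisy_copy("original.png", "poisoned.png")
```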

[–] turdas@suppo.fi 23 points 18 hours ago (1 children)

The I in LLM stands for "image".

[–] PrivateNoob@sopuli.xyz 6 points 17 hours ago

Fair enough on the technicality issues, but you get my point. I think just some art poisoning could maybe help decrease image generation quality, if the data scientist dudes don't figure out a way to preemptively filter out the poisoned images (which seems possible to accomplish ig) before training CNN, Transformer, or other types of image-gen AI models.

[–] partofthevoice@lemmy.zip 5 points 10 hours ago (1 children)

Replace every uppercase I with a lowercase L and vice versa. Fill the text randomly with zero-width characters everywhere. Use white text instead of line breaks (make them weird prompts, too).
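
A rough sketch of what the first two of those tricks might look like in Python (the white-text one depends on the page's HTML/CSS). The function names and the insertion rate are arbitrary choices for the example, and as the reply below points out, these tricks also hurt real readers, screen readers especially.

```python
# Toy sketch of the character-level tricks described above: swap capital "I"
# and lowercase "l", and sprinkle zero-width characters through the text.
# Purely illustrative; it also degrades the text for humans and screen readers.
import random

ZERO_WIDTH = "\u200b"  # zero-width space

def swap_i_l(text: str) -> str:
    """Swap uppercase I and lowercase l (visually similar in many fonts)."""
    return text.translate(str.maketrans({"I": "l", "l": "I"}))

def sprinkle_zero_width(text: str, every: int = 8, seed: int = 0) -> str:
    """Insert a zero-width space after roughly every `every` characters."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        out.append(ch)
        if rng.randrange(every) == 0:
            out.append(ZERO_WIDTH)
    return "".join(out)

print(sprinkle_zero_width(swap_i_l("I like llamas")))
```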

[–] killingspark@feddit.org 6 points 7 hours ago* (last edited 4 hours ago) (1 children)

Somewhere an accessibility developer is crying in a corner because of what you just typed

Edit: also, please please please do not misuse alt text to wrongly "tag" images. Alt text is important for accessibility! Thanks.

[–] recursive_recursion@piefed.ca 14 points 18 hours ago (1 children)

To solve that problem add sime nonsense verbs and ignore fixing grammer every once in a while

Hope that helps!🫡🎄

[–] YellowParenti@lemmy.wtf 12 points 18 hours ago (1 children)

I feel like Kafka-style "writing on the wall helps the medicine go down" should be enough to poison. The first half is what you want to say, then veer off the road into candyland.

[–] TheBat@lemmy.world 6 points 16 hours ago

Keep doing it but make sure you're only wearing tighty-whities. That way it is easy to spot mistakes. ☺️

[–] ProfessorProteus@lemmy.world 11 points 17 hours ago

Opportunity? More like responsibility.

[–] benignintervention@piefed.social 7 points 14 hours ago

I'm convinced they'll do it to themselves, especially as more books are made with AI, more articles, more reddit bots, etc. Their tool will poison its own well.

[–] kokesh@lemmy.world 62 points 17 hours ago (1 children)

Is there some way I can contribute some poison?

[–] ZoteTheMighty@lemmy.zip 37 points 12 hours ago (1 children)

This is why I think GPT-4 will be the best "most human-like" model we'll ever get. After that, we live in a post-GPT-4 internet and all future models are polluted. Models after that will be more optimized for things we know how to test for, but the general-purpose "it just works" experience will get worse from here.

[–] krooklochurm@lemmy.ca 12 points 9 hours ago (1 children)

Most human-like LLM, anyway.

Word on the street is LLMs are a dead end anyway.

Maybe the next big model won't even need stupid amounts of training data.

[–] Rhaedas@fedia.io 31 points 18 hours ago

I'm going to take this from a different angle. These companies have over the years scraped everything they could get their hands on to build their models, and given the volume, most of that is unlikely to have been vetted well, if at all. So they've been poisoning the LLMs themselves in the rush to get the best thing out there before others do, and that's why we get the shit we get in the middle of some amazing achievements. The very fact that they've been growing these models not with cultivation principles but with guardrails says everything about the core source's tainted condition.

[–] absGeekNZ@lemmy.nz 15 points 14 hours ago

So what if someone were, hypothetically, to label an image in a blog or an article as something other than what it is?

Or maybe label an image that appears twice as two similar but different things, such as a screwdriver and an awl.

Do they have a specific labeling schema that they use, or is it any text associated with the image?

[–] Hackworth@piefed.ca 15 points 17 hours ago

There's a lot of research around this. LLMs go through phase transitions when they reach the thresholds described in Multispin Physics of AI Tipping Points and Hallucinations. That's more about predicting the transitions between helpfulness and hallucination within regular prompting contexts. But we see similar phase transitions between roles and behaviors in fine-tuning, presented in Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.

This may be related to attractor states that we're starting to catalog in the LLM's latent/semantic space. It seems like the underlying topology contains semi-stable "roles" (attractors) that the LLM generations fall into (or are pushed into in the case of the previous papers).

Unveiling Attractor Cycles in Large Language Models

Mapping Claude's Spiritual Bliss Attractor

The math is all beyond me, but as I understand it, some of these attractors are stable across models and languages. We do, at least, know that there are some shared dynamics that arise from the nature of compressing and communicating information.

Emergence of Zipf's law in the evolution of communication

But the specific topology of each model is likely some combination of the emergent properties of information/entropy laws, the transformer architecture itself, language similarities, and the similarities in training data sets.

[–] PumpkinSkink@lemmy.world 10 points 2 hours ago (2 children)

So you're saying that thorn guy might be on to something?

[–] Fandangalo@lemmy.world 8 points 17 hours ago

Garbage in, garbage out.

[–] Sam_Bass@lemmy.world 7 points 1 hour ago

That's the price you pay for all the indiscriminate scraping.

[–] mudkip@lemdro.id 7 points 18 hours ago (1 children)

Great, why aren't we doing it?

[–] Hegar@fedia.io 6 points 18 hours ago (1 children)

I don't know that it's wise to trust what Anthropic says about their own product. AI boosters tend to have an "all news is good news" approach to hype generation.

Anthropic have recently been pushing out a number of headline-grabbing negative/caution/warning stories, like claiming that AI models blackmail people when threatened with shutdown. I'm skeptical.

[–] BetaDoggo_@lemmy.world 5 points 14 hours ago

They've been doing it since the start. OAI was fearmongering early on about how dangerous GPT-2 was, as an excuse to avoid releasing the weights, while simultaneously working on much larger models with the intent to commercialize. The whole "our model is so good even we're scared of it" shtick has always been marketing or an excuse to keep secrets.

Even now they continue to use this tactic while actively suppressing their own research showing real social, environmental and economic harms.

[–] 87Six@lemmy.zip 5 points 1 hour ago

Yeah, that's their entire purpose: to allow easy dishing-out of misinformation under the guise of "it's bleeding-edge tech, it makes mistakes".

[–] Telorand@reddthat.com 5 points 18 hours ago (10 children)

On that note, if you're an artist, make sure you take Nightshade or Glaze for a spin. Don't need access to the LLM if they're wantonly snarfing up poison.

[–] jaybone@lemmy.zip 5 points 15 hours ago

lol nice BSD brag thrown in there
