this post was submitted on 25 Jun 2025
152 points (97.5% liked)

Technology

[–] FaceDeer@fedia.io 25 points 12 hours ago (1 children)

Did you read the actual order? The detailed conclusions begin on page 9. What specific bits did he get wrong?

[–] ViatorOmnium@piefed.social 5 points 10 hours ago (1 children)

I'm on page 12 and I've already seen a false equivalence between human learning and AI training.

[–] FaceDeer@fedia.io 8 points 9 hours ago (2 children)

Is it this?

First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16).

That's the judge addressing an argument that the Authors made. If anyone made a "false equivalence" here, it's the plaintiffs; the judge is simply saying "okay, let's assume their claim is true," as is usual for a preliminary judgment like this.

[–] MeaanBeaan@lemmy.world 1 points 1 hour ago

Wait, the authors argued that? Why? That's literally the opposite of the thing they needed to argue.

[–] ag10n@lemmy.world -4 points 9 hours ago (2 children)

On page 6 the judge writes that the LLM "memorized" the content and could "recite" it.

Neither is true in the training or use of LLMs.

[–] FaceDeer@fedia.io 9 points 8 hours ago

The judge writes that the Authors told him that LLMs memorized the content and could recite it. He then said "for purposes of argument I'll assume that's true," and even despite that he went ahead and ruled that LLM training does not violate copyright.

It was perhaps a bit daring of Anthropic not to contest what the Authors claimed in that case, but as it turns out the result is an even stronger ruling. The judge gave the Authors every benefit of the doubt and still found that they had no case when it came to training.

[–] Artisian@lemmy.world 2 points 8 hours ago

Depends on the content and the method. There are tons of ways to encrypt data, and under the relevant law the result may still count as a copy. And there are certainly weaker NN models from whose parameters we can extract a lot of the training data, even if it's not easy, and even if we can't find a prompt that gets the model to regurgitate it.
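To make the memorization point concrete: any over-parameterized model (more weights than training examples) can fit its training targets exactly, which is the sense in which weights can "contain" the data. A minimal sketch with a linear model and random data (purely illustrative, not any particular LLM):

```python
import numpy as np

# Hypothetical toy setup: 16 parameters but only 5 training examples,
# i.e., the model is over-parameterized relative to its data.
rng = np.random.default_rng(0)
n_examples, n_params = 5, 16
X = rng.normal(size=(n_examples, n_params))  # "feature" vectors
y = rng.normal(size=n_examples)              # training targets to memorize

# Minimum-norm least-squares solution (what gradient descent converges
# to for a linear model). Because the system is underdetermined, the
# fit is exact: the weights reproduce every training target.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

recovered = X @ w
print(np.allclose(recovered, y))  # the weights "recite" the training data
```

Whether anything analogous survives at LLM scale is exactly what extraction attacks try to measure; the sketch only shows that "the data lives in the weights" is not a category error.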