this post was submitted on 26 Apr 2025
287 points (99.3% liked)

top 24 comments
[–] Boddhisatva@lemmy.world 64 points 17 hours ago (6 children)

How many lawyers need to screw themselves over by using LLMs to write legal briefs before the others realize that doing so just might be a bad idea?

I mean, come on, people. There is no such thing as actual artificial "intelligence." There are programs, like LLMs, that try to mimic intelligence, but they are not actually intelligent. These models are trained on data from all over the internet with no vetting for accuracy. When the thing searches for legal cases to cite, it is just as likely to cite a fictional case from some story as an actual one.

[–] JcbAzPx@lemmy.world 39 points 16 hours ago (2 children)

It's not like it's looking anything up, either. It's just putting words together that sound right to us. It could hallucinate a citation that never existed even as a fictional case, let alone a real one.

[–] takeda@lemm.ee 20 points 16 hours ago (2 children)

Absolutely this. An LLM is basically trained to be good at fooling us into thinking it is intelligent, and it is very good at it.

It doesn't demonstrate how good the model is at what it does; it demonstrates how easy we are to fool.

My company provides Copilot for software engineering, and I use it in my IDE.

The problem is that it produces code that looks accurate but often isn't, so I frequently end up disabling it. I think it might help in areas where I don't know what I'm doing, since it can get me some working code, but that's a double-edged sword: if I don't know what I'm doing, I won't be able to catch the issues.

I've also noticed that even when what it produces is correct, I can frequently write a simpler, shorter version that fits my use case. Its output looks a lot like the code students put on GitHub when they post their homework assignments, and I'd guess that's what it was trained on.

[–] capt_wolf@lemmy.world 16 points 15 hours ago

And you pinpointed exactly the issue right there...

People who don't know what they're doing asking something that can't reason to do something that neither of them understands. It's like the dumbest realization of the singularity we could possibly achieve.

[–] Boddhisatva@lemmy.world 2 points 12 hours ago

An LLM is basically trained to be good at fooling us into thinking it is intelligent, and it is very good at it.

That's a fascinating concept. An LLM is really just a specific kind of machine learning. Machine learning can be amazing: it can be used to create algorithms that detect cancer, predict protein functions, or develop new chemical structures. An LLM is just an algorithm generated through machine learning that deceives people into thinking it's intelligent. That seems like a very accurate description to me.

[–] ImplyingImplications@lemmy.ca 10 points 11 hours ago

It could hallucinate a citation that never existed even as a fictional case

That's what happened in this case reviewed by Legal Eagle.

The lawyer provided a brief that cited cases the judge could not find. The judge requested paper copies of the cases, and that's when the lawyer handed over some dubious documents. The judge then called the lawyer into court to ask why he had submitted fraudulent cases and why he shouldn't have his law licence revoked. The lawyer fessed up that he had asked ChatGPT to write the brief and hadn't checked the citations. When the judge asked for the cases, the lawyer went back to ChatGPT for them, and it generated the cases... but they were clearly not real. So much so that the defendants' names changed throughout each case, the judges who supposedly ruled on them were from the wrong districts, and they were all about a page long, when real case rulings tend to run dozens of pages.
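
Notably, none of those red flags needed deep legal expertise to catch. Even a crude automated pre-check could flag citations that don't resolve before a human ever reads the brief. A minimal sketch, where the regex is deliberately rough and `KNOWN_CITATIONS` is a hypothetical stand-in for a real reporter database or court-records API:

```python
import re

# Hypothetical verified-citation set; a real tool would query an actual
# reporter database or court-records API instead of a hardcoded set.
KNOWN_CITATIONS = {
    "347 U.S. 483",  # illustrative entries only
    "410 U.S. 113",
}

# Rough pattern for "volume Reporter page" citations, e.g. "347 U.S. 483".
CITATION_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.\d?d|S\. Ct\.)\s+(\d{1,4})\b")

def flag_unverified(brief_text: str) -> list[str]:
    """Return every citation in the brief that is absent from the verified set."""
    found = {" ".join(m.groups()) for m in CITATION_RE.finditer(brief_text)}
    return sorted(c for c in found if c not in KNOWN_CITATIONS)

brief = "We rely on Brown v. Board, 347 U.S. 483, and Smith v. Jones, 999 F.3d 123."
print(flag_unverified(brief))  # -> ['999 F.3d 123']
```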

[–] dhork@lemmy.world 12 points 16 hours ago (1 children)

At this point, everyone should understand that every single thing a public AI "writes" needs to be vetted by a human, particularly in the legal field. Lawyers who don't understand this need to no longer be lawyers.

(On the other hand, I bet all the good law firms are maintaining their own private AI: they feed it the relevant case histories directly and specifically instruct it to provide citations to published works and not make shit up on its own. Then they validate it all anyway, because their professional reputation depends on it.)
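
That "feed it the cases, demand citations" pattern is usually called retrieval-augmented generation. A minimal sketch of the idea; `CASE_LIBRARY`, `search_case_law`, and the prompt wording are all hypothetical stand-ins, not any real firm's pipeline:

```python
# Sketch of a retrieval-augmented setup: the model only ever sees vetted
# source documents and is told to cite them. All names here are made up.

CASE_LIBRARY = {
    "Smith v. Jones (2019)": "Held that a non-compete clause is unenforceable when...",
    "Doe v. Acme Corp (2021)": "Held that restrictive covenants require...",
}

def search_case_law(query: str, k: int = 2) -> dict[str, str]:
    """Stand-in retriever; a real firm would use a proper legal search index."""
    return dict(list(CASE_LIBRARY.items())[:k])

def build_prompt(question: str) -> str:
    sources = search_case_law(question)
    context = "\n".join(f"[{name}] {text}" for name, text in sources.items())
    return (
        "Answer using ONLY the cases below. Cite every claim with the "
        "bracketed case name. If the cases do not support an answer, say so "
        "instead of guessing.\n\n"
        f"Cases:\n{context}\n\nQuestion: {question}"
    )

# This prompt goes to whatever model the firm runs; every citation in the
# output still gets checked against CASE_LIBRARY by a human afterwards.
print(build_prompt("Is the non-compete clause enforceable?"))
```

The point of the design is that the model can only cite what the retriever handed it, which makes the human validation step tractable.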

[–] ToastedRavioli@midwest.social 7 points 15 hours ago* (last edited 15 hours ago)

I think it would be quite reasonable for any lawyer who files something that includes references to case law that doesn’t exist to simply be disbarred.

The courts are backed up enough without having to deal with this bullshit. And it shows a clear lack of concern for properly representing their clients.

[–] Cethin@lemmy.zip 9 points 13 hours ago

It's just as likely to make something up as to cite a case, real or fictional. It'll use very confident language to gaslight you into thinking it's real though, as it does with everything else.

Humans are stupid. The issue with LLMs is that they're all just confidence men. They speak with authority, so people believe them without question, even though they don't actually know anything.

[–] Witchfire@lemmy.world 6 points 15 hours ago (1 children)

It's one thing to use it as a fancy spell check; it's another to have it generate AI slop and then present that as a legal argument without reading it.

[–] sp3ctr4l@lemmy.dbzer0.com 3 points 12 hours ago* (last edited 12 hours ago)

LLMs are basically extremely complex text autocomplete systems.

Most smartphones these days have such systems learn from you personally (and, of course, use all of your vocab data to build a profile of you and sell it to marketers, law enforcement, whoever is buying)... but LLMs learn from a little bit of everything, all of the time.
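
To make the autocomplete comparison concrete, here's a toy version: a bigram model that counts which word follows which, then keeps emitting a likely next word. The ten-word "corpus" is made up; real LLMs do a vastly more sophisticated version of this same next-word step:

```python
import random
from collections import Counter, defaultdict

# Toy autocomplete: count which word follows which, then repeatedly emit
# a plausible next word. An LLM is a far fancier version of this step.
corpus = "the court held that the court may dismiss the case".split()

bigrams: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word: str, length: int = 5) -> str:
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        # Sample proportionally to how often each continuation was seen.
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the court held that the court"
```

Nothing in there knows what a court is; it only knows what tends to come next. Scale that up by billions of parameters and you get fluent text with the same fundamental property.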

[–] bold_atlas@lemmy.world 5 points 9 hours ago* (last edited 9 hours ago)

The fact that so many lawyers are pulling this shit should have people terrified about how many AI-generated documents are making it into the record without being noticed.

It's probably only a matter of time before one of these non-existent cases results in a decision that causes serious harm.

[–] seaside@reddthat.com 35 points 17 hours ago (1 children)

This kind of AI use is a plague. I'm a fourth-year student at one of Romania's top medical universities, and it's insane how many of my peers can no longer write proper essays, conduct research, or carry out studies independently. Critical thinking, attention span, and management skills have all taken a huge hit. My girlfriend attends a highly ranked private high school (annual tuition in the five figures, in euros), and the same issues are present there as well. Depressing times.

[–] Kyle_The_G@lemmy.world 23 points 16 hours ago (2 children)

AI is completely unreliable in the sciences, to the point of being almost dangerous. The more niche your question gets, the more likely it is to give you a completely incorrect answer. I'd rather it admit that it doesn't know.

[–] brucethemoose@lemmy.world 6 points 14 hours ago* (last edited 14 hours ago)

Chatbots are text-completion models, improv machines basically, so they don't really have that ability. You could look at logprobs, I guess (i.e., is it spreading its guesses pretty evenly across a bunch of words?), but that's unreliable (see the sketch at the end of this comment). Even adding an "I don't know" token wouldn't work, because that isn't really trainable from text datasets: they don't know when they don't know; it's all just modeling which next word is most likely.

Some non-autoregressive architectures would be better, but unfortunately the "cutting edge" models people actually interact with, like ChatGPT, are developed way more conservatively than you'd think. Like, they've left tons of innovations unpicked.
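
For what the logprob check looks like in practice, here's a minimal sketch using Hugging Face transformers with GPT-2 (chosen only because it's small; the model and the example prompts are illustrative). The entropy of the next-token distribution is the "is it guessing pretty evenly?" number, and as said above, it's an unreliable confidence signal:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Entropy of the next-token distribution: high means probability is spread
# over many tokens ("guessing evenly"), low means the model is confident.
# Confident and correct are not the same thing, which is the whole problem.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def next_token_entropy(prompt: str) -> float:
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the very next token
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

print(next_token_entropy("The capital of France is"))   # relatively low
print(next_token_entropy("The plaintiff in 998 F.3d"))  # likely higher
```

Low entropy just means the model is sure about the next token; it says nothing about whether the confident continuation is true, which is exactly the problem the next comment digs into.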

[–] floofloof@lemmy.ca 3 points 9 hours ago* (last edited 9 hours ago)

Is there even any suitable "confidence" measure within the LLM that it could use to know when it needs to emit an "I don't know" response? I wonder whether there's even any consistent and measurable difference between times when it seems to know what it's talking about and times when it is talking BS. That might be something that exists in our own cognition but has no counterpart in the workings of an LLM. So it may not even be feasible to engineer it to say "I don't know" when it doesn't know. It can't just straightforwardly look at how many sources it has for an answer and how good they were, because LLMs have typically worked in a more holistic way: each item of training data nudges the behaviour of the whole system, but it doesn't leave behind any sign that says "I did this," or any particular piece of knowledge or behaviour that can be ascribed to that training item.

[–] takeda@lemm.ee 13 points 16 hours ago

And the Trump admin used an LLM to generate tariff policy, and also to decide who should lose their visa and get deported. And I'm sure there's more.

The whole AI craze is showing that billionaires really got fooled about what an LLM is, and also that the requirement for becoming a billionaire isn't being smart, but being born into an already wealthy family and being a psychopath.

[–] sp3ctr4l@lemmy.dbzer0.com 12 points 12 hours ago (1 children)

Mike Lindell has a lawyer?

That's the real news to me.

Anyway, I'm sure Mr. Lindell, noted cybersecurity expert and crack addict, will figure this out in due time.

[–] YerbaYerba@lemm.ee 5 points 10 hours ago (1 children)

"L, L, M and Partners" is the name of his new lawfirm.

[–] Sam_Bass@lemmy.world 7 points 9 hours ago

You've got to be completely brainless and utterly lazy to let AI build you anything

[–] VaalaVasaVarde@sopuli.xyz 5 points 16 hours ago

In this case I'll allow it. Why waste more time on the pillow guy? Just deliver some AI slop and go fishing.

And if I lose my bar license, then it was God's plan all along.

[–] PattyMcB@lemmy.world 2 points 16 hours ago

Some of the lawyers I've dealt with can't write correctly even without using AI