this post was submitted on 30 Dec 2025
651 points (98.8% liked)

Technology

[–] U7826391786239@lemmy.zip 154 points 17 hours ago* (last edited 17 hours ago) (7 children)

I don't think it's emphasized enough that AI isn't just making up bogus citations to nonexistent books and articles; increasingly, actual articles and other sources are completely AI-generated too. So a reference to a source might be "real," but the source itself is complete AI slop bullshit.

https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

the actual danger of it all should be apparent, especially in any field related to health science research

and of course these fake papers are then used to further train AI, causing factually wrong information to spread even more

[–] BreadstickNinja@lemmy.world 63 points 16 hours ago (1 children)

It's a shit ouroboros, Randy!

[–] tym@lemmy.world 39 points 15 hours ago (3 children)

The movie Idiocracy was a prophecy that we were too arrogant to take seriously.

now go away, I'm baitin

[–] IronBird@lemmy.world 18 points 13 hours ago (1 children)

we would be lucky to have a president as down to earth as camacho

[–] Cethin@lemmy.zip 14 points 12 hours ago

Yep. I don't care if a president is smart. I care if they listen to the experts. I don't want one who thinks they know everything, because no one can.

[–] CheeseNoodle@lemmy.world 10 points 15 hours ago (1 children)

When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.

[–] tym@lemmy.world 18 points 14 hours ago (3 children)

Movie was set in 2505... We're speed-running it. We should get our first pro-wrestler president in our lifetime.

[–] PalmTreeIsBestTree@lemmy.world 14 points 14 hours ago

Trump technically is one. We are already there.

[–] brsrklf@jlai.lu 113 points 14 hours ago (6 children)

Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

Arthur C. Clarke was not wrong but he didn't go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

[–] clay_pidgin@sh.itjust.works 36 points 13 hours ago (2 children)

I find those prompts bizarre. If you could just tell it not to make things up, surely that would be added to the built-in instructions?

[–] mushroommunk@lemmy.today 36 points 12 hours ago (6 children)

I don't think most people know there's built in instructions. I think to them it's legitimately a magic box.
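For anyone wondering, the "built in instructions" are typically just a hidden system message the provider prepends to every conversation. A minimal sketch in Python, assuming the widely used OpenAI-style chat message schema (the hidden prompt text here is purely illustrative, not any vendor's actual instructions):

```python
# Sketch of how chat-style LLM APIs structure a request. The "system"
# message is the built-in instruction most users never see; whatever a
# user types just becomes one more message in the list.
def build_request(user_prompt: str) -> list[dict]:
    hidden_system_prompt = (  # set by the provider, invisible in the chat UI
        "You are a helpful assistant. Be accurate and cite sources."
    )
    return [
        {"role": "system", "content": hidden_system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_request("List books about X. Don't hallucinate.")
```

A user adding "don't hallucinate" only appends to the user message; it can nudge tone and phrasing, but it can't change how the model was trained.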

[–] Tyrq@lemmy.dbzer0.com 6 points 12 hours ago* (last edited 12 hours ago)

Almost as if misinformation is the product either way you slice it

[–] InternetCitizen2@lemmy.world 18 points 12 hours ago* (last edited 12 hours ago)

Grok, enhance this image

(•_•)
( •_•)>⌐■-■
(⌐■_■)

[–] Wlm@lemmy.zip 8 points 9 hours ago (1 children)

Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.

[–] NikkiDimes@lemmy.world 12 points 9 hours ago (4 children)

That's more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.

[–] nulluser@lemmy.world 103 points 16 hours ago (1 children)

Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

No, no, apparently not everyone, or this wouldn't be a problem.

[–] FlashMobOfOne@lemmy.world 25 points 13 hours ago

In hindsight, I'm really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

[–] SleeplessCityLights@programming.dev 60 points 8 hours ago (8 children)

I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how "smart" an LLM chatbot is. They have probably been using one at work for a year thinking it's accurate.

[–] hardcoreufo@lemmy.world 21 points 7 hours ago (6 children)

Idk how anyone searches the internet anymore. Search engines all turn up junk, so I ask an AI. Maybe one out of 20 times it turns up what I'm asking for better than a search engine. The rest of the time it runs me in circles that don't work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

[–] MrScottyTay@sh.itjust.works 8 points 5 hours ago

It's fucking awful, isn't it? Some day soon, when I can be arsed, I'll have to give one of the paid search engines a go.

I'm currently on Qwant, but I've already noticed a degradation in its results since I started using it at the start of the year.

[–] ironhydroxide@sh.itjust.works 5 points 5 hours ago

Agreed. And the search engines returning AI generated pages masquerading as websites with real information is precisely why I spun up a searXNG instance. It actually helps a lot.

[–] b_tr3e@feddit.org 50 points 10 hours ago* (last edited 10 hours ago) (5 children)

No AI needed for that. These bloody librarians wouldn't let us have the Necronomicon either. Selfish bastards...

[–] Naevermix@lemmy.world 13 points 9 hours ago (1 children)

I swear, librarians are the only thing standing between humanity and true greatness!

[–] b_tr3e@feddit.org 8 points 9 hours ago (3 children)

There's only the One High and Mighty who can bring true greatness to humanity! Praise Cthulhu!

[–] smh@slrpnk.net 12 points 7 hours ago (2 children)
[–] b_tr3e@feddit.org 8 points 5 hours ago

Limited preview - some pages are unavailable.

Very funny... Yäääh! Shabb nigurath.... wrdlbrmbfd,

[–] glitchdx@lemmy.world 6 points 4 hours ago (1 children)

Some pages are omitted. Yeah. There's like four pages of 300. I'm disappointed beyond measure and my day is ruined.

[–] RalfWausE@feddit.org 10 points 9 hours ago (1 children)

This one is on you. MY copy of the Necronomicon firmly sits in my library in the west wing...

[–] pHr34kY@lemmy.world 46 points 16 hours ago* (last edited 16 hours ago) (3 children)

There's an old Monty Python sketch from 1967 that comes to mind when people ask a librarian for a book that doesn't exist.

They predicted the future.

[–] palordrolap@fedia.io 15 points 16 hours ago

Are you sure that's not pre-Python? Maybe one of David Frost's shows like At Last the 1948 Show or The Frost Report.

Marty Feldman (the customer) wasn't one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)

[–] 5too@lemmy.world 6 points 16 hours ago (1 children)

Thanks for this, I hadn't seen this one!

[–] MountingSuspicion@reddthat.com 41 points 14 hours ago (1 children)

I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that "the following information has no relation to reality" or some other thing. The other person kept insisting it was not needed. I'm not saying it would stop all of these events, but it couldn't hurt.
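For what it's worth, such a banner would be trivial to build; a minimal sketch, assuming a hypothetical wrapper around whatever text the model returns:

```python
# Persistent disclaimer prepended to every chatbot response.
# The banner text and wrapper are illustrative, not any real app's code.
WARNING_BANNER = (
    "⚠️ AI-generated text. May contain fabricated facts, citations, "
    "and sources. Verify before relying on it."
)

def wrap_response(model_output: str) -> str:
    """Prefix a model response with the persistent warning banner."""
    return f"{WARNING_BANNER}\n\n{model_output}"

print(wrap_response("The book 'X' was published in 1987."))
```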

[–] glitchdx@lemmy.world 28 points 12 hours ago (2 children)

https://www.explainxkcd.com/wiki/index.php/2501:_Average_Familiarity

People who understand the technology forget that normies don't understand the technology.

[–] eli@lemmy.world 9 points 11 hours ago (1 children)

TIL there is a whole-ass MediaWiki for explaining xkcd comics.

[–] TubularTittyFrog@lemmy.world 9 points 11 hours ago* (last edited 11 hours ago) (1 children)

and normies think you're an asshole if you try to explain the technology to them, and cling to their ignorance of it because it's more 'fun' to believe in magic

[–] panda_abyss@lemmy.ca 18 points 14 hours ago* (last edited 14 hours ago) (1 children)

I plugged my local AI into offline wikipedia expecting a source of truth to make it way way better.

It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to support its own world view from pre training instead of reality.

So it’s not really much better.

Hallucinations become a bigger problem the more info they have (that you now have to double check)
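What you describe is essentially retrieval-augmented generation (RAG): fetch relevant passages and paste them into the prompt. A toy sketch of the retrieval step, with keyword overlap standing in for a real embedding search (all names here are illustrative):

```python
# Toy retrieval step of a RAG pipeline: rank passages by word overlap
# with the question and inject the best ones into the prompt. Note the
# model can still ignore the context and "cite" its training data —
# retrieval narrows the search, it doesn't force truthfulness.
def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(retrieve(question, passages))
    return f"Answer ONLY from this context:\n{context}\n\nQ: {question}"
```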

[–] FlashMobOfOne@lemmy.world 6 points 13 hours ago (2 children)

At my work, we don't allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it's good info, and then add it in ourselves.
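That placeholder workflow is easy to mechanize; a sketch, assuming a hypothetical `[CITATION: topic]` marker format (not any particular tool's convention):

```python
import re

# Find citation placeholders the model was instructed to emit instead of
# real citations, so a human can look up a verified source for each one.
PLACEHOLDER = re.compile(r"\[CITATION:\s*([^\]]+)\]")

def pending_citations(draft: str) -> list[str]:
    """Return the topics that still need a human-verified source."""
    return PLACEHOLDER.findall(draft)

draft = (
    "Aspirin reduces fever [CITATION: aspirin antipyretic effect] and "
    "was first synthesized in 1897 [CITATION: aspirin history]."
)
```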

[–] SkybreakerEngineer@lemmy.world 21 points 13 hours ago (1 children)

That's still looking for sources that fit a predetermined conclusion, not real research

[–] SethTaylor@lemmy.world 16 points 8 hours ago

I guess Thomas Fullman was right: "When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle". That's from Automating the Mind. One of his best.

[–] vacuumflower@lemmy.sdf.org 8 points 16 hours ago (1 children)

This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. "This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books". Easy.

Then they'll talk with their friends about how libraries are all possessed by a conspiracy, similar to how equally intelligent people talk about a Jewish plot to take over the world, flat earth, and such.

[–] porcoesphino@mander.xyz 7 points 16 hours ago

It's a fun problem trying to apply this to the whole internet. I'm slowly adding sites with obvious generated blogs to Kagi, but it's getting worse.

[–] Imgonnatrythis@sh.itjust.works 8 points 10 hours ago

They really should stop hiding them. We all deserve to have access to these secret books that were made up by AI since we all contributed to the training data used to write these secret books.
