this post was submitted on 30 May 2025
29 points (76.4% liked)

Ask Lemmy: a Fediverse community for open-ended, thought-provoking questions

As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

[–] oakey66@lemmy.world 22 points 1 week ago (5 children)

No. It hallucinates all the time.

[–] leftzero@lemmynsfw.com 5 points 1 week ago

Yes, but search engines will serve you LLM-generated slop instead of search results, and sites like Stack Overflow will die due to lack of visitors, so the internet will become a reddit-like, useless, LLM-ridden hellscape completely devoid of any human users, and we'll have to go back to our grandparents' old dusty paper encyclopedias.

Eventually, in a decade or two, once the bubble has burst and google, meta, and all those bastards have starved each other to death, we might be able to start rebuilding a new internet, probably reinventing usenet over ad-hoc decentralised wifi networks, but we won't get far, we'll die in the global warming wars before we get it to any significant size.

At least some bastards will have made billions out of the scam, though, so there's that, I suppose. 🤷‍♂️

[–] FeelzGoodMan420@eviltoast.org 15 points 1 week ago (1 children)

Probably, however I will not be doing that, because LLMs are dogshit and hallucinate bullshit half the time. I wouldn't trust a single fucking thing that an LLM provides.

[–] chaosCruiser@futurology.today 6 points 1 week ago (1 children)

Fair enough, and that’s actually really good. You’re going to be one of the few who actually go through the trouble of making an account on a forum, asking a single question, and never visiting the place again after getting the answer. People like you are the reason why the internet has an answer to just about anything.

[–] FeelzGoodMan420@eviltoast.org 5 points 1 week ago

Haha. Yes, I'll be a tech Boomer, stuck in my old ways. Although answers on forums are often straight-up misinformation, so really there's no perfect way to get answers. You just have to cross-check as many sources as possible.

[–] psx_crab@lemmy.zip 14 points 1 week ago (3 children)

And where do LLMs get their answers? Forums and socmed. And if an LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer, they start asking forums and socmed.

So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing it first.

[–] leftzero@lemmynsfw.com 9 points 1 week ago* (last edited 1 week ago) (1 children)

And if an LLM doesn't have the actual answer, it blabbers like a redditor, and if someone can't get an accurate answer, they start asking forums and socmed.

LLMs are completely incapable of giving a correct answer, except by random chance.

They're extremely good at giving what looks like a correct answer, and convincing their users that it's correct, though.

When LLMs are the only option, people won't go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we'll have no way of checking them for correctness.

People will get hurt, of course. And die. (But we won't hear about it, because the LLMs won't talk about it.) And civilization will enter a truly dark age of mindless ignorance.

But that doesn't matter, because the company will have already got their money, and the line will go up.

[–] oyo@lemm.ee 4 points 1 week ago (1 children)

The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results consisting of entire domains of AI trash. It's very difficult to even locate the original sources or forums anymore.

[–] FistingEnthusiast@lemmynsfw.com 11 points 1 week ago (5 children)

No, because I ignore whatever AI slop comes up when I search for something

I have never found it to be anything other than useless. I will actively search for a qualified answer to my questions, rather than being lazy and relying on the first thing that pops up

[–] leave_it_blank@lemmy.world 7 points 1 week ago (2 children)

To be fair, given the current state of search engines, LLMs might not be the worst idea.

I'm looking for the 7800X3D, not 3D shooters, not the 1234x3d, no, not the Pentium 4, not the 4700rtx. It takes more and more effort to search for something, and the first pages show every piece of crap I'm not interested in.

[–] palordrolap@fedia.io 2 points 1 week ago

To be fair, given the current state of search engines, LLMs might not be the worst idea.

The current state of search engines is at least partially because the search engine owners have been trying to shove AI down the users' throats already. Saying "go full LLM" is like saying "hmm, it's hot in this pan, maybe it's better to be in the fire underneath".

The other, perhaps more important, part of search engine corruption is from trying to shove advertising down users' throats. LLM in search will be twisted into doing the same thing, so that won't save it either.

The other, other part is the fact that an increasing percentage of the Internet is made up of walled gardens and web apps that are all but impossible to index, and LLMs can't help there. Pigboys that run the hard-to-index sites selling the content out from under the users notwithstanding.

Finally, as has been pointed out elsewhere, an LLM can only give an answer based on what was correct yesterday. Or last week. Or a decade ago. Even forums have this problem. Take the fact that unchangeable, "irreplaceable" answers on sites like StackOverflow reflect the state of things when the answers were written, not how things are now, years later.

[–] sorghum@sh.itjust.works 7 points 1 week ago* (last edited 1 week ago) (2 children)

What I'm worried about are traditional indexers being intentionally nerfed, discontinued, or left unmaintained at best. I've often wondered what it would take to self-host a personal indexer. I remember a time when search giant AltaVista had a full-text index of the then-known internet on their DEC Alpha server(s).

AltaVista was great!

Now I'm definitely showing my age...
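
For the curious, here's a minimal sketch of what a self-hosted personal indexer could look like, assuming a Python build whose bundled SQLite includes the FTS5 extension (most modern builds do). The database filename, table name, and example URL are purely illustrative:

```python
# Minimal personal full-text indexer sketch built on SQLite's FTS5 extension.
import sqlite3
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude HTML-to-text: keep page text, skip script/style contents."""
    def __init__(self):
        super().__init__()
        self.parts, self.skip = [], False
    def handle_starttag(self, tag, attrs):
        self.skip = tag in ("script", "style")
    def handle_endtag(self, tag):
        self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

db = sqlite3.connect("index.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(url, body)")

def index_page(url: str) -> None:
    """Fetch a page, strip it to text, and add it to the local index."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    extractor = TextExtractor()
    extractor.feed(html)
    db.execute("INSERT INTO pages VALUES (?, ?)", (url, " ".join(extractor.parts)))
    db.commit()

def search(query: str) -> list:
    """Full-text search over everything indexed so far, best matches first."""
    rows = db.execute(
        "SELECT url FROM pages WHERE pages MATCH ? ORDER BY rank", (query,)
    )
    return [url for (url,) in rows]

# index_page("https://example.com/some-article")  # hypothetical URL
# print(search("7800X3D"))
```

The hard part isn't this bit, of course; it's crawling at scale, deduplication, and the dynamically loaded, walled-garden pages mentioned elsewhere in this thread.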

[–] SW42@lemmy.world 2 points 1 week ago

The problem lies with the way the “modern” internet works, loading everything dynamically. Static pages to index are becoming rarer. Also, a lot of information is being “lost” in proprietary systems like Discord. Those also can’t be indexed (easily).

[–] LambdaRX@sh.itjust.works 3 points 1 week ago (2 children)

I think you'll be in a loud minority; people don't like additional work.

Probably

But I don't see it as work

"Work" is unfucking a situation that I created by being lazy in the first place rather than doing something properly

I'm probably showing my age though...

[–] ArgumentativeMonotheist@lemmy.world 2 points 1 week ago (1 children)

Even "let me Google that for you" was popular only some years ago. Yes, people are lazy, unthinking hedonists most of the time. In the absence of some sort of strict moral basis, society degenerates because only the tiniest minority will even think about things to try to establish some personal rules.

[–] sorghum@sh.itjust.works 4 points 1 week ago

I still use https://lmgtfy.com/ as a public shame for anyone that can't be arsed to put in a bit of effort to find something.

[–] Kolanaki@pawb.social 10 points 1 week ago (1 children)

Maybe in the sense that the Internet may become so inundated with AI garbage that the only way to get factual information is by actually reading a book or finding a real person to ask, face to face.

[–] SpicyColdFartChamber@lemm.ee 4 points 1 week ago* (last edited 1 week ago) (3 children)

You know how low-background steel from before nuclear weapons testing is prized? I wonder if that's going to happen with data from before 2022 as well now. Lol.

[–] quediuspayu@lemmy.world 8 points 1 week ago (3 children)

LLMs are awesome in their knowledge until you start to hear their answers on stuff you already know, and it makes you wonder if anything was correct.

What they call hallucinations here was called fabulation in other areas: inventing tales or stories.

I'm curious what the shortest acceptable answer for these things is, and whether something close to "I don't know" is even an option.

[–] prex@aussie.zone 2 points 1 week ago

Sounds similar to Betteridge's law of headlines.
I'm sure there are tricks like adding "fact check your response", but I suspect there is something intrinsic to these models that makes it a super difficult problem.

[–] chaosCruiser@futurology.today 2 points 1 week ago* (last edited 1 week ago) (2 children)

I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.

  • This thing is broken. How do I fix it?
  • Don’t know. 🤷
  • Seriously? I need an answer! Any ideas?
  • Nope. You’re screwed. Best of luck to you. Figure it out. I believe in you. ❤️
[–] FaceDeer@fedia.io 2 points 1 week ago (4 children)

LLMs are awesome in their knowledge until you start to hear their answers on stuff you already know, and it makes you wonder if anything was correct.

This applies equally well to human-generated answers to stuff.

[–] quediuspayu@lemmy.world 3 points 1 week ago

True, the difference is that with humans it's usually more public, so it's easier for someone to call bullshit. With LLMs, the bullshit is served with the intimacy of embarrassing porn, so it's less likely to come with any warnings.

[–] Rhynoplaz@lemmy.world 7 points 1 week ago (1 children)

There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel, only to look like a buffoon when we realized the AI was completely wrong.

Now I look right past the AI answer and read the sources it's pulling from. Then I don't have to worry about anything misinterpreting the answer.

[–] Quazatron@lemmy.world 10 points 1 week ago (1 children)

True, but soon the sources will be AI generated too, in a big GIGO loop.
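
The GIGO loop is easy to demonstrate on a toy scale. The sketch below (purely illustrative, not anyone's actual training pipeline) refits a Gaussian to samples of its own output each "generation", with no fresh real data mixed in:

```python
# Toy GIGO loop: each generation is trained only on the previous one's output.
import random
import statistics

mu, sigma = 0.0, 1.0  # the "real" data distribution we start from
for generation in range(1, 101):
    samples = [random.gauss(mu, sigma) for _ in range(25)]  # model output
    mu = statistics.fmean(samples)     # refit on our own samples...
    sigma = statistics.stdev(samples)  # ...instead of on real data
    if generation % 20 == 0:
        print(f"gen {generation:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
# Typical run: sigma shrinks toward zero while mu drifts away from the truth,
# i.e. the model becomes confidently narrow about an increasingly wrong answer.
```

Real model collapse is messier than this, but the direction is the same: without new human-generated sources, errors compound and variety disappears.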

[–] chaosCruiser@futurology.today 3 points 1 week ago (1 children)

That’s exactly what I’m worried about happening. What if one day there are hardly any sources left?

[–] Quazatron@lemmy.world 5 points 1 week ago (3 children)

At this rate, that day is not too distant, I'm afraid.

I was expecting either Huxley or Orwell to be right, not both.

[–] kalkulat@lemmy.world 5 points 1 week ago* (last edited 1 week ago) (4 children)

Trouble is that 'quick answers' mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.

When you need the details to be verified by trustworthy sources, it's still do-it-yourself time. If you -don't- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.

A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer ... 'none' ... answered as if it had no doubt. It was -so- wrong it hadn't even tried. I pointed it to the right answer ('an infinite number') and to the proof. It then verified that.

A couple of days ago, I asked it the same question ... and it was completely wrong again. It hadn't learned a thing. After some conversation, it told me it couldn't learn. I'd already figured that out.

[–] Tar_alcaran@sh.itjust.works 4 points 1 week ago

Trouble is that 'quick answers' mean the LLM took no time to do a thorough search.

LLMs don't "search". They essentially provide weighted parrot-answers based on what they've seen elsewhere.

If you tell an LLM that the sky is red, they will tell you the sky is red. If you tell them your eyes are the colour of the sky, they will repeat that your eyes are red. LLMs aren't capable of checking if something is true.

They're just really fast parrots with a big vocabulary. And every time they squawk, it burns a tree.
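
The "weighted parrot" framing maps pretty directly onto how generation works. Here's a toy sketch (a bigram model, nothing like a real transformer) that picks each next word purely by how often it followed the previous word in its training text, with no notion of truth:

```python
# Toy "weighted parrot": a bigram model that continues text purely from
# observed word-following frequencies. It has no concept of truth at all.
import random
from collections import Counter, defaultdict

training_text = (
    "the sky is red . the sky is red . "
    "your eyes are the colour of the sky . the sky is red ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    out = prompt.split()
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Weighted random choice: repeat what was seen, more often if seen more.
        nxt, = random.choices(list(options), weights=list(options.values()))
        out.append(nxt)
    return " ".join(out)

print(continue_text("your eyes are"))
# Prints something like: "your eyes are the sky is red . the sky is"
# Fluent-looking and statistically plausible, with no idea whether it's true.
```

A real LLM conditions on far more context and has vastly more parameters, but the core move is the same: sample the next token from learned frequencies, then repeat.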

[–] Seasoned_Greetings@lemm.ee 4 points 6 days ago* (last edited 6 days ago) (2 children)

My 70-year-old boss and his 50-year-old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.

They obviously missed the "AI Generated" tag on the Google search and couldn't figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn't exist.

These are average people, and they didn't realize that they were even using AI, much less how unreliable it can be.

I think there's going to be a place for forums to discuss niche problems for as long as AI just means advanced LLMs and not actual intelligence.

[–] FaceDeer@fedia.io 3 points 1 week ago (5 children)

People will use whatever method of finding answers that works best for them.

Stuck, you contact tech support, wait weeks for a reply, and the cycle continues

Why didn't you post a question on a public forum in that scenario? Or, in the future, why wouldn't the AI search agent itself post a question? If questions need to be asked then there's nothing stopping them from still being asked.

[–] Dragonstaff@leminal.space 3 points 1 week ago (3 children)

If you cut a forum's population by 90% it will die.

This is one of the biggest problems with AI. If it becomes the easiest way to get good answers for most things, it will starve the channels that can answer the things it can't (including everything new).

[–] Oberyn@lemmy.world 3 points 1 week ago

If the tech matures enough, potentially!

Not wrong about LLMs being (currently?) bad with tech support, but so are search engines lol

[–] haui_lemmy@lemmy.giftedmc.com 2 points 1 week ago (21 children)

LLMs are the big-block V8 of search engines. They can do things very fast and consume tons of resources with subterranean efficiency. On top of that, they are privacy-invasive, easy to use for manipulation, and they speed up the problem of less mature users being spoon-fed. General-purpose LLMs need to be outlawed immediately.
