this post was submitted on 05 Aug 2025
399 points (99.3% liked)

Technology

[–] AbouBenAdhem@lemmy.world 92 points 3 days ago* (last edited 3 days ago) (2 children)

The typical pattern for leaders is to get "second opinions" from advisors who tell them whatever they want to hear, so... maybe asking the equivalent of a magic 8 ball is a marginal improvement?

[–] RobotZap10000@feddit.nl 59 points 3 days ago

I would rather have the politicians consult a plain old magic 8 ball than one controlled by Scam Altman.

[–] boonhet@sopuli.xyz 7 points 1 day ago

Most LLMs are literally "tell you whatever you want to hear" machines, unfortunately. I've gotten high praise from ChatGPT for all my ideas until I go "but hang on, wouldn't this factor stop it from being feasible?" and then it agrees with me that my original idea was a bit shit lmao

[–] HubertManne@piefed.social 54 points 3 days ago (4 children)

I really don't get it. These things are brand new. How can anyone get so invested in them so quickly? I don't take advice from people I barely know, much less ones that can be so easily and quickly reprogrammed.

[–] kamenlady@lemmy.world 28 points 3 days ago* (last edited 3 days ago) (1 children)

This is the unintentional uncanny valley for me in AI.

I (was forced to) use ChatGPT for work. It can talk about anything, sounds very confident, and always seems to come up with something to help you solve your problems.

You talk with it about some niche content and suddenly have an ardent fan of said niche content responding. It seems to know every little bit of info about that niche and surprises you with funny but apt quotes from your favorite show in the middle of conversations about something else.

This is just from a tiny bit of interaction, while at work.

I can imagine people being completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive, and that remembers all your conversations. It literally remembers every word.

For many people it may be the first time in their lives that they experience a positive response to their thoughts. Not only that, they've also found someone eager to talk with them about it.

[–] HubertManne@piefed.social 32 points 3 days ago

Everyone's initial use of chatbots should be on the thing they're most knowledgeable about, so they can get an idea of how wrong it can be and how it can be useful. But you have to treat its output like some eager, wet-behind-the-ears intern just did the work for you.

[–] greybeard@feddit.online 7 points 2 days ago (3 children)

One thing I struggle with in AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it's getting everything slightly wrong, and I just don't know enough to notice.

I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that's how the job works.

[–] roofuskit@lemmy.world 34 points 3 days ago (2 children)

It's weird for a head of state to consult their mentally challenged imaginary friend?

[–] Medic8teMe@lemmy.ca 18 points 3 days ago* (last edited 3 days ago) (1 children)

William Lyon Mackenzie King, Canada's longest-serving Prime Minister, used to commune with spirits via psychic mediums, including the spirits of his dead dogs. It was only revealed after his death, but it was a big part of his life.

I agree it's weird.

[–] MNByChoice@midwest.social 10 points 3 days ago (1 children)

Didn't Nancy Reagan, wife of former US President Ronald Reagan, do this as well? (Ronald was apparently not mentally fit for his last few years either.)

[–] mr_account@lemmy.world 14 points 3 days ago

Nor was he mentally fit for the first years

[–] surewhynotlem@lemmy.world 8 points 3 days ago (1 children)

Bad news friend. The number of atheist heads of state is surprisingly low.

[–] Decq@lemmy.world 32 points 2 days ago (7 children)

Let's be honest, though: the majority of politicians are so terrible at their jobs that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.

[–] breecher@sh.itjust.works 20 points 2 days ago (6 children)

Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.

[–] Decq@lemmy.world 7 points 2 days ago

I'm just making an objective observation. I don't condone it. I'd rather we just had competent politicians. But it seems only people who can't function elsewhere are drawn to the position.

[–] caveman8000@lemmy.world 20 points 2 days ago

Meanwhile the American president uses no intelligence at all, artificial or otherwise.

[–] tal@lemmy.today 15 points 3 days ago (1 children)

“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.

I mean, sending queries to a search engine or to an LLM is about the same in terms of exposing one's queries.

If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.

[–] j4yt33@feddit.org 4 points 2 days ago

It's a woman

[–] yumyumsmuncher@feddit.uk 13 points 2 days ago (3 children)

Politicians and CEOs should be replaced with LLMs

[–] Warl0k3@lemmy.world 11 points 2 days ago* (last edited 2 days ago)

It can't make things any worse...

[–] Humana@lemmy.world 5 points 1 day ago

Speed running us towards the Dune timeline, nice

[–] UnfortunateShort@lemmy.world 7 points 3 days ago (1 children)

It surely can't hurt, if it's to sanity check your highly paid advisors...

[–] jonne@infosec.pub 12 points 3 days ago (2 children)

Except those prompts are retained by OpenAI, and you don't know who's got access to them. They've had chats leak before.

[–] alvyn@discuss.tchncs.de 7 points 2 days ago (18 children)

I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books, and websites, all without the authors’ or users’ consent, and worse, even against their explicit refusal to be scraped or used for training. Another level is the lack of security: ChatGPT chats available to everyone. Deepfakes everywhere; just see the latest Taylor Swift one. Sorry, but fuck you with all of this. There is a lack of basic security and privacy, and all of the dangers are being ignored. All those fucking AI firms want is easy, cheap, and quick money. All that hype for nothing means you can’t even rely on the output.

[–] Beacon@fedia.io 7 points 3 days ago (5 children)

Some of y'all are crazy reactionary. There's absolutely nothing wrong with asking an ai chatbot for an additional opinion. The ai shouldn't be making the decisions, and the ai shouldn't be the only way you look for opinions, but there's nothing wrong with ai being ONE OF the opinions you consider

[–] FerretyFever0@fedia.io 19 points 3 days ago (4 children)

But it doesn't know anything. At all. Does Sweden not have a fuck ton of people that are trained to gather intelligence?

[–] Beacon@fedia.io 10 points 3 days ago

It doesn't matter if it knows anything or not. The purpose is to acquire other ideas that you and the people in your cabinet didn't think of. Or ideas they didn't want to say, because no one wants to tell the boss that their idea is bad. It's a GOOD thing when a politician seeks out multiple different viewpoints to consider. It doesn't matter if one of the viewpoints being considered was created by "a fancy auto-complete" as some haters like to say

[–] roofuskit@lemmy.world 13 points 3 days ago* (last edited 3 days ago) (26 children)

AI chatbots don't have their own opinions. All they do is regurgitate other people's opinions, and you have no idea of the motivations behind how those opinions are weighted.

[–] lime@feddit.nu 8 points 3 days ago (8 children)

there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.

[–] Perspectivist@feddit.uk 4 points 3 days ago (3 children)

Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.

[–] lime@feddit.nu 47 points 3 days ago (3 children)

here's my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.

whether it's ai or not is a later issue.

[–] audaxdreik@pawb.social 8 points 3 days ago* (last edited 3 days ago) (1 children)

Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.

(Succinct response taken from Reddit post discussing the topic)

"Horseshoe Theory is slapping "theory" on a strawman to simplify WHY there's crossover from two otherwise conflicting groups. It's pseudo-intellectualizing it to make it seem smart."

This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don't offer a counter-argument in your response, so I can only assume it's along the lines of "technology is inevitable; would you have said the same about the Internet?", which is also a fallacious argument. But go ahead, give me something better if I assume wrong.

I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.

Furthermore, your insistence continues to push acceptance of AI on those who clearly don't want it, contributing to the anger we feel at having it forced upon us.

[–] RememberTheApollo_@lemmy.world 7 points 2 days ago* (last edited 2 days ago) (1 children)

If someone says they got a second opinion from a physician known for being wrong half the time would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust to make national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.

[–] Perspectivist@feddit.uk 4 points 2 days ago

I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.
