this post was submitted on 09 Aug 2025
84 points (97.7% liked)

Hacker News

top 12 comments
[–] N0t_5ure@lemmy.world 47 points 4 days ago (3 children)

I like how confident it is. Now imagine that this is a topic you know nothing about and are relying on it to get information.

[–] victorz@lemmy.world 27 points 4 days ago

I really wish people understood how it works, so that they wouldn't rely on it for literally anything.

[–] burgerchurgarr@lemmus.org 8 points 4 days ago (1 children)

I tried putting together a research plan using an LLM. Nothing crazy; I just wanted it to help me structure my thoughts and write LaTeX for me. Horrible experience.

I gave it a reference paper, said "copy that methodology exactly", and then spelled out exactly which steps I wanted included.

It kept making bold claims, suggesting irrelevant methods, and proposing just plain wrong approaches. If I had no idea about the topic I might have believed it, because the thing is so confident. But if you know what you're doing, it's obvious these are bullshit machines.

[–] TrickDacy@lemmy.world 1 points 4 days ago

It only seems confident if you treat it like a person. If you realize it's a flawed machine, the language it uses shouldn't matter. The problem is that people treat it like a person, i.e. they assume its confident-sounding responses mean anything.

[–] Marshezezz@lemmy.blahaj.zone 28 points 4 days ago (1 children)

Just think of all the electricity wasted on this shit

[–] peetabix@sh.itjust.works 4 points 4 days ago

[–] ch00f@lemmy.world 11 points 4 days ago

I want to know where the threshold is between "this is a trivial thing and not what GPT is for" and "I don't understand how it got this answer, but it's really smart."

"It's basically like being able to ask questions of God himself." --Sam Altman (Probably)

[–] Deebster@infosec.pub 4 points 4 days ago (1 children)

Given that genAI was identified as unable to do maths and told to write a small Python program instead, why hasn't this other well-known failing been special-cased? The AI sees text as tokens, but surely it could convert tokens to a stream of single-character tokens (i.e. letters) and work with that?
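A minimal sketch of the point being made, assuming the tiktoken library and its cl100k_base encoding (the exact token split shown in the comment is an illustration and varies by model):

```python
# Sketch: LLMs see token IDs, not letters, which is why letter counting
# fails while a tiny program is trivial. Assumes tiktoken is installed
# (pip install tiktoken); cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "blueberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)  # e.g. ['blue', 'berry'] -- the model never sees single letters

# The proposed special case: convert to single-character tokens and work there.
letters = list(word)
print(letters.count("b"))  # 2 -- trivial once you operate on characters
```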

[–] jrs100000@lemmy.world 3 points 4 days ago

'Cause it's a useless skill unless you're making crossword puzzles or verifying that an LLM is using tokens.

[–] bulwark@lemmy.world 1 points 4 days ago

What, you guys don't like blubeberries?