this post was submitted on 09 Jun 2025
794 points (92.0% liked)

Technology
(page 2) 50 comments
[–] krigo666@lemmy.world 5 points 4 days ago* (last edited 4 days ago)

Next, pit ChatGPT against 1K ZX Chess in a ZX81.

[–] NotMyOldRedditName@lemmy.world 5 points 4 days ago (2 children)

Okay, but could ChatGPT be used to vibe code a chess program that beats the Atari 2600?

[–] FourWaveforms@lemm.ee 5 points 3 days ago

If you don't play chess, the Atari is probably going to beat you as well.

LLMs are only good at things to the extent that they have been well trained in the relevant areas. That means not just pretraining to predict text sequences, but reinforcement learning after that, where a human or some other agent says "this answer is better than that one" enough times in enough of the right contexts. It mimics the way humans learn, which is through repeated and diverse exposure.

If they set up a system to train it against some chess program, or (much simpler) gave it a chess engine as a tool call, it would do much better. Tool calling already exists and would be by far the easiest way.

It could also be instructed to write a chess engine and then run it, at which point it would be on par with the Atari, but it wouldn't compete well with a serious engine.
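For what it's worth, the tool-call idea is simple to sketch. This is a hypothetical illustration, not anyone's actual setup: the engine is a stub (a real version would wire in Stockfish via python-chess), and the `best_move`/`handle_tool_call` names are made up for the example. The point is just that the LLM's "move" becomes whatever the tool returns, instead of a text-statistics guess.

```python
# Hypothetical sketch of exposing a chess engine to an LLM as a tool.
# The engine here is a stub; a real setup would call Stockfish, e.g.
# through python-chess's UCI interface.

def engine_best_move(fen: str) -> str:
    """Stub engine: a real implementation would query Stockfish."""
    # Pretend the engine knows one book move from the starting position
    # and gives up everywhere else.
    return "e2e4" if fen.startswith("rnbqkbnr") else "resign"

# Tool registry in the shape most LLM APIs expect:
# a name, a description, and a JSON-schema-style parameter list.
TOOLS = {
    "best_move": {
        "description": "Return the engine's best move for a FEN position.",
        "parameters": {"fen": "string"},
        "fn": engine_best_move,
    }
}

def handle_tool_call(name: str, args: dict) -> str:
    """Dispatch a tool call emitted by the model to the matching function."""
    tool = TOOLS[name]
    return tool["fn"](**args)

# A model told about `best_move` would emit a call like this instead of
# generating a move token-by-token:
start_fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(handle_tool_call("best_move", {"fen": start_fen}))  # e2e4
```

With that plumbing, the model's chess strength is the engine's strength; the LLM only has to decide *when* to call the tool.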

[–] muntedcrocodile@lemm.ee 4 points 4 days ago* (last edited 4 days ago) (1 children)

This isn't the strength of gpt-o4; the model has been optimised for tool use as an agent. That's why it's so good at image generation relative to other models: it uses tools to construct an image piece by piece, similar to a human. Poor system prompting probably also played a part. An LLM is not a universal thinking machine; it's a universal process machine. An LLM understands a process and uses tools to accomplish it, hence its strength at writing code (especially as an agent).

It's similar to how a monkey is infinitely better at remembering a sequence of numbers than a human ever could be, but is totally incapable of even comprehending writing numbers down.

[–] cheese_greater@lemmy.world 3 points 4 days ago (2 children)

Do you have a source for that re:monkeys memorizing numerical sequences? What do you mean by that?
