this post was submitted on 04 May 2025

[–] Buffalox@lemmy.world 1 points 4 hours ago* (last edited 4 hours ago) (1 children)

Just because you can't make a mathematical proof doesn't mean you don't understand the very simple truth of the statement.

> What might an operational definition look like?

I think if I could describe that, I might actually have solved the problem of strong AI.
You are asking unreasonable questions.

[–] General_Effort@lemmy.world 1 points 4 hours ago (1 children)

> Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.

If I can't prove it, I don't know how I can claim to understand it.

It's axiomatic that equality is symmetric. It's also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
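For what it's worth, both claims really can be written out formally; here is a minimal sketch in Lean 4 (using only `Nat` and `Eq.symm` from the core library, no extra dependencies assumed):

```lean
-- 1 + 1 = 2 holds by computation on the Peano-style definition of Nat.
example : 1 + 1 = 2 := rfl

-- Symmetry of equality is derived, not assumed: it follows from
-- reflexivity plus substitution (packaged as Eq.symm in the core library).
example (a b : Nat) (h : a = b) : b = a := h.symm
```

Interestingly, symmetry of equality is a theorem rather than an axiom in systems like this, which is exactly the gap between "memorized as obvious" and "proved".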

What makes the difference between a human learning these things and an AI being trained for them?

> I think if I could describe that, I might actually have solved the problem of strong AI.

Then how will you know the difference between strong AI and not-strong AI?

[–] Buffalox@lemmy.world 1 points 4 hours ago

> Then how will you know the difference between strong AI and not-strong AI?

I've already stated that this is a problem:

From a previous answer to you:

> Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure when we finally have a self-aware, conscious AI, it will be debated violently.

Because I don't think we have a reliable methodology.

"I think, therefore I am" only works for the conscious mind itself.
I can't prove that other people are conscious, although I'm 100% confident they are.
In exactly the same way, we can't prove when we have a conscious AI.

But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don't accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.