Tartas1995

joined 2 years ago
[–] Tartas1995@discuss.tchncs.de -1 points 1 week ago

I didn't say you made it up.

And the alignment problem doesn't imply agency.

Complaining that it draws focus away from AI ethics is a fundamentally misguided view. That is like saying workers' rights draw focus away from human rights. You can have both, you should have both, and there is some overlap.

[–] Tartas1995@discuss.tchncs.de 4 points 1 week ago (2 children)

Sam Altman? You went for Sam Altman as an "AI safety researcher that is really a hype man"? Is there a worse example? The CEO of the most well-known AI company undermines actual AI safety research while trying to be perceived as "safety aware" for good PR. Shocking...

Eliezer Yudkowsky seems to claim that misalignment is a continuous problem in continuously learning AI models, and that misalignment could turn into a huge issue without checks and balances that are continuously improved. And he makes the case that developers have to be aware of, and counteract, unintended harm that their AI can cause (you know, the thing that you called AI ethics).

[–] Tartas1995@discuss.tchncs.de 5 points 1 week ago (4 children)

It is so funny to me that you equate "AI Safety" with "fear mongering for a god AI".

  1. First you say they are hype men.
  2. Then you highlight why AI safety is important by linking a blog post about the dangers of poorly thought-out AI systems.
  3. All while calling it fear mongering about a god AI.

What are they now?

If you read AI safety trolley problems and think they are warning you about an AI god, you misunderstood the purpose of the discussion.