That still only covers a tiny fraction of what is reported. Objectivity in the real world is an illusion.
Prunebutt
The joke is that not everything (in fact, almost nothing) that gets reported can be viewed through a lens of "objective truth". Your examples wouldn't be able to tell me about a statement someone made, or whether something happened... anywhere.
I'm still waiting for the bill that states what happened in Nice last week. /j
You can also just download any binary file you find online and run it. Or use any install.sh script you happen to find anywhere.
Package managers are simply a convenient offer: they manage packages together with their dynamically linked libraries and keep them up to date (which matters for security). But it's still just an offer.
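For what it's worth, the "keep them up to date" part is something you can check yourself. A minimal sketch, assuming pip as the package manager and that it is on your PATH (it exposes exactly this via `pip list --outdated --format=json`):

```python
# Sketch: ask a package manager (here pip) which installed packages
# have newer releases available -- the "keep them up to date" part of the offer.
import json
import subprocess

def outdated_packages():
    # Runs `pip list --outdated --format=json` and parses its JSON output.
    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for pkg in outdated_packages():
        # "name", "version" and "latest_version" are fields in pip's JSON listing.
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

Nothing forces you to run it, of course; that's the point about it being an offer.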
The alignment problem is already the wrong narrative, as it implies agency where there is none. All that talk about the "alignment problem" draws focus away from AI ethics (not a term I made up).
Read the article.
Then you highlight why AI Safety is important by linking a blog post about the dangers of poorly thought-out AI systems.
Have you read the article? It clearly states the difference between AI safety and AI ethics and argues why the former are quacks and the latter is ignored.
If you read AI Safety trolley problems and think they are warning you about an AI god, you misunderstood the purpose of the discussion.
Have you encountered what Sam Altman or Eliezer Yudkowsky claim about AI safety? It's literally "AI might make humanity go extinct" shit.
The fear mongering Sam Altman is doing is a sales tactic. That's the hypeman part.
I know of him and I enjoy his videos.
This post is especially ironic, since AI and its "safety researchers" make climate change worse by ridiculously increasing energy demands.
So-called "AI safety" researchers are nothing but hypemen for AI companies.
What about that bloke who started all this stuff about alpha and beta wolves?
Yeah, but the real world usually revolves around more complicated questions than "do you die from lava?"