[–] nargis@lemmy.dbzer0.com 46 points 1 day ago* (last edited 1 day ago) (4 children)

eliminates mention of “AI safety”

AI datasets tend to have a white bias. White people are over-represented in photographs, for instance. If one trains an AI for something like facial recognition on such a dataset (mostly white faces), it will be less likely to identify non-white people as human. Combine this with self-driving cars and you have a recipe for disaster: because the model is worse at detecting non-white people, it is less likely to stop before running them over in an accident. This is both stupid and evil. You cannot always account for unconscious bias in your datasets.
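You often can't remove that bias, but you can at least measure it. Here's a minimal sketch (made-up data, hypothetical group labels, not any real system) of a disaggregated evaluation: compare the detector's miss rate per demographic group instead of reporting one overall accuracy number.

```python
# Minimal sketch of a disaggregated evaluation.
# The records below are invented for illustration; "group_a"/"group_b"
# and the detector outputs are assumptions, not real data.
from collections import defaultdict

# Each record: (group label, ground truth "person present", detector said "person")
test_results = [
    ("group_a", True, True),
    ("group_a", True, True),
    ("group_a", True, False),
    ("group_b", True, True),
    ("group_b", True, False),
    ("group_b", True, False),
]

misses = defaultdict(int)
totals = defaultdict(int)
for group, person_present, detected in test_results:
    if person_present:
        totals[group] += 1
        if not detected:
            misses[group] += 1

for group in sorted(totals):
    rate = misses[group] / totals[group]
    print(f"{group}: miss rate {rate:.0%} ({misses[group]}/{totals[group]})")

# A large gap between the groups is exactly the kind of unconscious
# dataset bias described above: the model fails more often on the
# under-represented group, and an aggregate accuracy score hides it.
```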

“reducing ideological bias, to enable human flourishing and economic competitiveness.”

They will fill it with capitalist Red Scare propaganda.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.

Interesting.

“The AI future is not going to be won by hand-wringing about safety,” Vance told attendees from around the world.

That has been tried before. A chatbot named Tay was released into the wilds of Twitter in 2016 without much 'hand-wringing about safety'. It turned into a neo-Nazi, which, I suppose, is just what Edolf Musk wants.

The researcher who warned that the change in focus could make AI more unfair and unsafe also alleges that many AI researchers have cozied up to Republicans and their backers in an effort to still have a seat at the table when it comes to discussing AI safety. “I hope they start realizing that these people and their corporate backers are face-eating leopards who only care about power,” the researcher says.

[–] MuskyMelon@lemmy.world 7 points 1 day ago (2 children)

capitalist Red Scare propaganda

I've always found it interesting that the US is preoccupied with fighting communist propaganda but not fascist propaganda.

[–] ivanafterall@lemmy.world 4 points 11 hours ago* (last edited 11 hours ago)

It wasn't always that way.

tl;dr: a 1940s U.S. War Department film called "Don't Be a Sucker" that "dramatizes the destructive effects of racial and religious prejudice" and quite blatantly warns of the dangers of fascism.

[–] kkj@lemmy.dbzer0.com 2 points 14 hours ago (1 children)

Communism threatens capital. Fascism mostly does not.

[–] MuskyMelon@lemmy.world 3 points 13 hours ago

So it's never been about democracy after all.

[–] captainlezbian@lemmy.world 6 points 21 hours ago

Yeah, but the current administration wants Tay to be the press secretary.

[–] rottingleaf@lemmy.world 3 points 3 hours ago

They will fill it with capitalist Red Scare propaganda.

I feel as if "capitalist" vs "Red" has long stopped being a relevant conflict in the real world.

[–] Miaou@jlai.lu -1 points 1 day ago

Autonomous vehicles don't use facial recognition datasets to detect people.