this post was submitted on 28 Oct 2025
81 points (95.5% liked)

Technology

XLE@piefed.social 31 points 2 days ago

The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to "health and curing diseases" and "AI resilience" to counteract some of the risks presented by the deployment of AI.

Paying yourself to promote your own product. Promising to fix vague "risks" that make the product sound more powerful than it is, with "fixes" that won't be measurable.

In other words, Sam is cutting a $25 billion check to himself.

etherphon@piefed.world 5 points 2 days ago

So they're already aware of the risks; AI companies are being run with the same big oil/big tobacco playbook, lol. You can have all the fancy new technology, but if the money is still coming from the same group of rich inbred douchebags, it doesn't matter, because it will turn to shit.

XLE@piefed.social 7 points 2 days ago

AI companies are definitely aware of the real risks. It's the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I suspect they'll put that money towards.

Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who's expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool, something like the rough sketch below. All this talk about guardrails seems almost as fanciful as the imaginary risks.
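
For illustration, here is a minimal sketch of the kind of interception layer that comment has in mind. Everything in it is an assumption made for the example: the keyword patterns, the crisis message, and the `intercept`/`handle_turn` names are invented, and a production system would use a trained classifier vetted by clinicians rather than a regex list. It only shows that a hard cutoff which runs before the model ever replies is not exotic engineering.

```python
import re

# Hypothetical interception layer: patterns and messages here are
# illustrative assumptions, not any real product's safety system.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bwant to die\b",
]

CRISIS_RESPONSE = (
    "It sounds like you're going through something very hard. "
    "Please contact a crisis line or someone you trust right now."
)

def intercept(message: str) -> str | None:
    """Return a fixed crisis response if the message matches any pattern,
    signalling the caller to halt the normal model reply."""
    lowered = message.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESPONSE
    return None

def handle_turn(message: str, model_reply) -> str:
    """Run the interceptor before the model; on a hit, cut off generation."""
    crisis = intercept(message)
    if crisis is not None:
        return crisis  # hard cutoff: the model is never called
    return model_reply(message)

if __name__ == "__main__":
    # Hypothetical usage: the lambda stands in for a real model call.
    print(handle_turn("i want to die", lambda m: "normal model output"))
```

The point of the design is that the check runs before any model call, so a hit short-circuits generation entirely instead of trusting the model to moderate itself.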