this post was submitted on 03 Aug 2025
15 points (94.1% liked)

Technology

73581 readers
3200 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

When the Biden administration created a safety institute at the standards agency and then used it to run “x-risk evals, I think we kind of lost our way there,” he said. (“X-risk” is a shortened term for “existential risk” that’s associated with the idea that AI poses major threats to humanity.)

“To me, I think we need to go back to basics at NIST, and back to basics around what NIST exists for, and that is to promulgate best-in-class standards and do critical metrology or measurement science around AI models,” Kratsios said.

Kratsios’s comments about the body once known as the AI Safety Institute came a day after the White House released its anticipated AI Action Plan — which made dozens of recommendations to do things like deregulate and rid AI of “ideological bias” — as well as three executive orders that set parts of that plan into motion. The Thursday panel, moderated by CTA’s CEO and vice chair Gary Shapiro, was focused on those actions.

The discussion also followed the Trump administration’s move last month to rename the NIST-located safety institute to the Center for AI Standards and Innovation, cutting “safety” from the name. That component was initially announced by the Biden administration in November 2023 at the UK AI Safety Summit and, over the next year, focused on working with industry, establishing testing agreements with companies, and conducting evaluations.

I get that he's most likely just "following orders" from Thiel, and probably isn't coming up with any of this policy himself, but I still hate this guy so much. I have to give Thiel credit, though: once again he's proven he knows how to craft a good public scapegoat for when things inevitably go horribly wrong.

[–] vacuumflower@lemmy.sdf.org 1 points 4 hours ago (1 children)

From the perspective of a Russian you don't know what you are talking about.

[–] Alphane_Moon@lemmy.world 1 points 3 hours ago (1 children)

I would argue that's part of the (unfortunate) effectiveness of libertarianism as an oligarch polemic.

[–] vacuumflower@lemmy.sdf.org 1 points 25 minutes ago

I've recently refreshed my memory of the Khmer Rouge, and I've gotten a very nasty feeling that, under the right (wrong) combination of circumstances, my own ideological ideas could eventually lead to something like that. Despite my being libertarian.

But one thing is very notable about them: despite their ideology being frankly very fascist in addition to communist (fascist in a deep sense: the anti-intellectualism, the reliance on emotion, the rejection of modernity and complexity, the mystique of soil and violence, the almost deified organization, the use of 12-14 year olds as the main armed force, all that), many of their methods, like the "struggle sessions" and the "quick and radical" solutions, relied, one could say, on wide participation and popular approval.

So. An oligarch is a businessman with enough power to bend the law, allowing him, together with other oligarchs, to capture a sphere of the economy.

Oligarchy is not nice, and it always leads, eventually, to authoritarianism: first the oligarchs install their tools at the top of the state, then those tools become the primary bearers of power and the oligarchs become their wallets, and then eventually the oligarchs are robbed and the relatives and clansmen of the tools own everything.

However, it has nothing to do with libertarianism, because libertarianism is founded on freedom of association (oligarchy usually involves suppressing unions, customer associations, and cooperatives, and suppressing competition; this is also about the freedom to make a deal), on non-aggression (with an oligopoly's means of enforcing itself understood as aggression, and the same for IP and patents), and on natural law, the latter being a rigid idea of ownership in which what you create is fully yours, what you didn't create is not yours at all, and the intermediate (real) cases are all compromises between the two. That notoriously makes owning territory dubious, which, ahem, is not very good for oligarchy.

That holds if there's a working system for enforcing such a libertarian order; if there's none, then it's not libertarianism.

And why did I mention the Khmer Rouge: I don't think blaming everything on oligarchs and the like is useful. Most of the people supporting any existing order are not bosses. If a society has oligarchy, then its wide masses are, in general, in favor of the morality of oligarchy (whoever managed to capture a portion of an industry deserves to milk it forever, whoever managed to capture the institution regulating it deserves the spoils, and so on), just as the wide masses of Khmer peasants were more or less in agreement with that party's ideas, until, of course, it became fully empowered.

It's a failure of education, and I don't think libertarianism is a component in that failure. After all, the Cato Institute is one of the organizations that haven't ideologically drifted and just do what they are openly intended to do: provide the libertarian perspective on events. Not drifting into lies in an attempt to secure support is something I'd consider commendable. Maybe carriers of other ideologies should look at how that was achieved and build their own similar institutions. Then, at some point, problems might start being resolved by people who know what they are doing.