this post was submitted on 25 Jul 2025
524 points (98.3% liked)

[–] tlekiteki@lemmy.dbzer0.com 74 points 1 day ago (5 children)

Misleading title. Applies only to AI bought by the Feds.

[–] floofloof@lemmy.ca 107 points 1 day ago* (last edited 1 day ago) (4 children)

Watch all the AI companies scramble to comply in a quest for government contracts. This will affect everyone who uses American LLMs and generative AI.

It should also open an opportunity for international competition from less censored models.

[–] bigfondue@lemmy.world 48 points 1 day ago* (last edited 1 day ago)

And this is one of the best arguments against depending on LLMs. People are outsourcing their thinking to linear algebra machines owned by the wealthy. LLMs are a tool of social control.

[–] tonytins@pawb.social 14 points 1 day ago

Considering how regularly they bleed cash, I can see them jumping on the government contract bandwagon quickly.

[–] panda_abyss@lemmy.ca 9 points 1 day ago* (last edited 1 day ago)

They all got $200M last week

[–] leftytighty@slrpnk.net 6 points 1 day ago

To be fair to the executive order (ugh), many of the examples cited are due to well-intentioned system prompts that encourage the LLM to actively be diverse.

The female pope example or whatever (read this earlier) is a case of that.

Generally speaking, the LLMs have a left bias because they're trained on information (unlike conservatives), but they aren't necessarily asking the models to be censored.
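
For illustration, a well-intentioned prompt of that kind might look something like the sketch below; the wording is invented for this example, not any vendor's actual system prompt:

```python
# Hypothetical example of a well-intentioned system prompt; the wording
# is invented for illustration, not any vendor's actual prompt.
SYSTEM_PROMPT = (
    "When generating images of people, depict a diverse range of "
    "genders and ethnicities unless the user specifies otherwise."
)

# The instruction above is silently prepended to every request, so it
# can override historical accuracy for prompts like this one:
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Draw a realistic portrait of a medieval pope."},
]
```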

[–] panda_abyss@lemmy.ca 16 points 1 day ago

But for anything the US feds contracted them for, like building data centres, they have to comply or face penalties and pay all the costs back.

10 days ago, a week before this was announced, they awarded $200M contracts each to Anthropic, OpenAI, Google, and xAI.

This doesn't doom the public versions, but the companies now have a pretty strong incentive to save money by making them comply with the US government's new definition of truth.

[–] forrgott@lemmy.sdf.org 10 points 1 day ago (2 children)

Well, in practice, no.

Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result will be these requirements being adopted by the overwhelming majority of generative AI available on the market.

So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I believe choosing to think otherwise to be dangerously naive.

[–] MrMcGasion@lemmy.world 7 points 1 day ago

Based on the attempts we've seen at censoring AI output so far, there doesn't seem to be a way to actually do this without building a new model on pre-censored training data.

Sure, they can tune models, but even "MechaHitler" Grok was still giving some "woke" answers on occasion. I don't see how this doesn't either destroy AI's "usefulness" (not that there's any usefulness there to begin with) or cost so much to implement that investors pull out, since none of the AI companies are profitable, and throwing billions more at sifting through and filtering the training data pushes profitability even further away (if censoring all the training data is even possible at all).
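
To make the scale of that concrete: even the crudest approach, keyword-filtering the corpus before training, means a full pass over every document. A toy sketch, with the blocklist and file layout purely invented:

```python
# Toy sketch of pre-filtering a training corpus before training a model.
# The blocklist and directory layout are invented for illustration; a
# real pipeline would need far more than keyword matching, applied
# across terabytes of text.
from pathlib import Path

BLOCKLIST = {"example_banned_term", "another_banned_term"}  # hypothetical

def is_allowed(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def filter_corpus(src_dir: str, dst_dir: str) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for doc in Path(src_dir).glob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore")
        if is_allowed(text):  # every single document must be read
            (out / doc.name).write_text(text, encoding="utf-8")
```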

[–] Jozav@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

No. You would use a base model (e.g. GPT-4o) to get a reliable language model, to which you would add a set of rules that the chat bot follows. Every company has its own rules; it's already widely used to add data like company-specific manuals and support documents. Not rocket science at all.
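
A minimal sketch of that pattern, assuming the official OpenAI Python client and an `OPENAI_API_KEY` in the environment; the company name, rules, and document text are invented examples:

```python
# Minimal sketch: a base model plus a company-specific rules prompt and
# support document, sent as the system message. The rules and manual
# text are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

COMPANY_RULES = (
    "You are AcmeCorp's support bot. Only answer questions about "
    "AcmeCorp products, and never discuss competitors."  # hypothetical
)
SUPPORT_DOC = "AcmeWidget 2.0 manual: to reset, hold the button for 5s."  # hypothetical

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": COMPANY_RULES + "\n\n" + SUPPORT_DOC},
        {"role": "user", "content": "How do I reset my AcmeWidget?"},
    ],
)
print(response.choices[0].message.content)
```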

[–] forrgott@lemmy.sdf.org 0 points 1 day ago (1 children)

There are so many examples of this method failing that I don't even know where to start. Most visible, of course, was how that approach failed to stop Grok from "being woke" for, like, a year or more.

Frankly, you sound like you're talking straight out of your ass.

[–] Jozav@lemmy.world 1 points 1 day ago (1 children)

Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

BTW, there are many theories about Grok's unethical behavior, but this one is new to me. The reasons I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.

[–] jumping_redditor@sh.itjust.works -1 points 1 day ago (1 children)

Why should any LLM care about "ethics"?

[–] MouldyCat@feddit.uk 1 points 1 day ago

Well, obviously it won't; that's why you need ethical output restrictions.

[–] mic_check_one_two@lemmy.dbzer0.com 9 points 1 day ago (1 children)

Because Executive Orders aren't laws. They're just directives for the executive branch of the federal government, which the POTUS is in charge of. An EO can't affect private entities like AI businesses, because that would require an actual act of Congress.

Notably, this could determine what kinds of contracts the executive branch is able to make. For instance, maybe the government wants to contract out an LLM instead of building its own. This EO could affect which companies are able to bid on that contract, by adding these same restrictions to any LLM they provide. But on its own, the EO is just that: an order to the executive branch of the federal government.

[–] callouscomic@lemmy.zip 4 points 1 day ago

Then that contracted AI gets used for customer service at a public-facing federal agency.

[–] dontmindmehere@lemmy.world 2 points 1 day ago (4 children)

Honestly, this order seems empty. Does the government even have a need for general LLMs? Why would they need an AI to answer simple questions?

As much as I dislike Trump, this shouldn't impact any AI available to the general public.

[–] Feyd@programming.dev 14 points 1 day ago

> Does the government even have a need for general LLMs?

Will this stop them from spending our hard-earned tax money on it?

[–] wewbull@feddit.uk 7 points 1 day ago

They don't, but they think they do.

[–] jerakor@startrek.website 3 points 1 day ago (1 children)

Would you rather our current administration make their decisions using the lowest-bidder LLM, or their own brains?

[–] a_person@piefed.social 1 points 1 day ago* (last edited 1 day ago)

The LLM probably makes better decisions lol

[–] WhyJiffie@sh.itjust.works 2 points 1 day ago

> Why would they need an AI to answer simple questions?

to shift blame and responsibility, to create a more modern deity, ...