this post was submitted on 16 Dec 2025
477 points (99.2% liked)

Games

[–] 13igTyme@piefed.social 83 points 2 days ago (4 children)

Nothing wrong with using AI to organize or supplement workflow. That's literally the best use for it.

[–] iAmTheTot@sh.itjust.works 120 points 2 days ago (4 children)

Except for the ethical question of how the AI was trained, or the environmental aspect of using it.

[–] Hackworth@piefed.ca 46 points 2 days ago (2 children)

There are AIs that are ethically trained. There are AIs that run on local hardware. We'll eventually need AI ratings to distinguish use types, I suppose.
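
(A minimal sketch of the local-hardware option, for the curious: llama-cpp-python running an open-weight model entirely offline. The model filename is a placeholder for whatever GGUF weights you've downloaded.)

```python
# Minimal local-inference sketch using llama-cpp-python: nothing
# leaves the machine. "model.gguf" is a placeholder for any
# downloaded open-weight model in GGUF format.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)
out = llm("Q: Why might someone run an LLM locally?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```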

[–] utopiah@lemmy.world 21 points 2 days ago (4 children)

There are AIs that are ethically trained

Can you please share examples and criteria?

[–] dogslayeggs@lemmy.world 20 points 2 days ago (3 children)

Sure. My company has a database of all technical papers written by employees in the last 30-ish years. Nearly all of these contain proprietary information from other companies (we deal with tons of other companies and have access to their data), so we can't build a public LLM or use one. So we created an internal-only LLM trained only on our data.
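
(For readers wondering what such a setup looks like: in practice this usually means fine-tuning an open-weight base model on the private corpus rather than training from scratch. A hedged sketch with Hugging Face transformers; the model name, paths, and hyperparameters are illustrative placeholders, not what this company actually runs.)

```python
# Illustrative sketch only: continued pretraining / fine-tuning of an
# open-weight base model on an internal document corpus. Model name
# and paths are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"   # any open-weight base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token        # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# One plain-text file per internal paper; data never leaves the company.
ds = load_dataset("text", data_files={"train": "internal_papers/*.txt"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=2048),
            batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="internal-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```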

[–] Fmstrat@lemmy.world 4 points 1 day ago

I'd bet my lunch this internal LLM is a fine-tuned open-weight model, which has lots of public data baked into it. Not complaining about what your company has done, as I think that makes sense; just providing a counterpoint.

[–] tb_@lemmy.world 3 points 1 day ago

Completely from scratch?

[–] utopiah@lemmy.world 3 points 1 day ago

Are you solely using your own data, or are you fine-tuning an existing LLM, or rather using RAG?

I'm not an expert, but AFAIK training an LLM from scratch requires, by definition, a vast amount of text, so I'm skeptical that ANY company publishes enough papers to do so. I understand if you can't share more about the process. Maybe me saying "AI" was too broad.
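
(For anyone unfamiliar with the distinction being asked about: with RAG, the model's weights stay untouched, and private documents are only retrieved at query time and pasted into the prompt. A minimal sketch, assuming the sentence-transformers library and placeholder documents.)

```python
# Minimal RAG sketch (contrast with fine-tuning): the model's weights
# stay untouched; internal documents are only retrieved at query time
# and pasted into the prompt. Documents are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Paper 1047: fatigue testing of the v2 bracket ...",
    "Paper 2291: thermal limits of the supplier X coating ...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity (vectors are unit-normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "What are the thermal limits of the coating?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
# `prompt` would then be sent to any LLM, local or hosted.
```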

[–] Fmstrat@lemmy.world 6 points 1 day ago (1 children)

Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data which is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data, and other undesired content before training begins.

https://www.swiss-ai.org/apertus

Fully open source; even the training data is provided for download. That being said, this is the only one I know of.
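
(As a rough illustration of what "respecting machine-readable opt-out requests" involves, here is the simplest such signal, robots.txt, checked with Python's standard library. The crawler name is hypothetical, and Apertus's real pipeline does much more, e.g. retroactive removal and PII filtering.)

```python
# Sketch of one machine-readable opt-out check a crawler can perform
# before adding a page to a training corpus. The user-agent name is
# a hypothetical placeholder.
from urllib import robotparser
from urllib.parse import urlparse

def allowed_for_training(url: str, agent: str = "ExampleAIBot") -> bool:
    parts = urlparse(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(agent, url)

if allowed_for_training("https://example.com/article"):
    ...  # fetch the page and add it to the corpus
```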

[–] utopiah@lemmy.world 1 points 1 day ago

Thanks, a friend indeed recommended it a few days ago, but unfortunately AFAICT they don't provide the CO2eq in their model card, nor an equivalent analogy that non-technical users could understand.

[–] oplkill@lemmy.world 6 points 2 days ago (1 children)

It can use public-domain-licensed data

[–] utopiah@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

Right, and to be clear, I'm not saying it's not possible (in fact I have some models in mind, but I'd rather let others share first). This isn't a trick question; it's a genuine request, in the hope of being able to rely on such tools.

[–] Hackworth@piefed.ca 1 points 1 day ago (1 children)

Adobe's image generator (Firefly) is trained only on images from Adobe Stock.

[–] utopiah@lemmy.world 1 points 1 day ago (1 children)

Does it only use that, or does it also use an LLM?

[–] Hackworth@piefed.ca 1 points 1 day ago* (last edited 1 day ago) (1 children)

The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren't involved in either process; rather, the models learn image-text relationships from metadata tags. I believe there are some ChatGPT integrations with Reader and Acrobat, but that's unrelated to Firefly.

[–] utopiah@lemmy.world 1 points 14 hours ago (1 children)

Surprising; I'd have expected it to rely at some point on something like CLIP in order to be prompted.

[–] Hackworth@piefed.ca 1 points 48 minutes ago* (last edited 9 minutes ago)

As I understand it, CLIP (and other text encoders in diffusion models) aren't trained like LLMs, exactly. They're trained on image/text pairs, which ya get from the metadata creators upload with their photos to Adobe Stock. OpenAI trained CLIP with alt text on scraped images, but I assume Adobe would want to train their own text encoder on the more extensive tags on the stock images it's already using.
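
(A sketch of that image/text pair training: CLIP-style encoders are trained with a symmetric contrastive loss that pulls matching image and caption embeddings together and pushes mismatched ones apart. PyTorch, with the embeddings assumed to come from any image and text encoder pair.)

```python
# CLIP-style symmetric contrastive loss. `image_emb` and `text_emb`
# are batches of embeddings from an image encoder and a text encoder;
# row i of each corresponds to the same image/caption pair.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Cosine-similarity logits for every image/text pairing in the batch
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs sit on the diagonal
    targets = torch.arange(len(image_emb), device=image_emb.device)
    # Cross-entropy in both directions: image-to-text and text-to-image
    loss_i = F.cross_entropy(logits, targets)
    loss_t = F.cross_entropy(logits.t(), targets)
    return (loss_i + loss_t) / 2
```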

All that said, Adobe hasn't published their entire architecture. And there were some reports during the training of Firefly 1 back in '22 that they weren't filtering AI-generated images out of the training set. At the time, those made up ~5% of the full stock library. Currently, AI images make up about half of Adobe Stock, though filtering them out seems to work well. We don't know if they were included in later versions of Firefly. There's an incentive for them to filter them out, since AI trained on AI tends to lose its tails (the ability to handle edge cases well), and that would be pretty devastating for something like generative fill.

I figure we want to encourage companies to do better, whatever that looks like. For a monopolistic giant like Adobe, they seem to have at least done better. And at some point, they have to rely on the artists uploading stock photos to be honest. Not just about AI, but about release forms, photo shoot working conditions, local laws being followed while shooting, etc. They do have some incentive to be honest, since Adobe pays them, but I don't doubt there are issues there too.

[–] 13igTyme@piefed.social 13 points 2 days ago

There's more to AI than LLM.

[–] Bronzebeard@lemmy.zip 8 points 2 days ago

No one [intelligent] is using an LLM for workflow organization. Despite what the media will try to convince you, not every AI is an LLM, or even an LLM trained on all the copyrighted shit you can find on the internet.

[–] UnderpantsWeevil@lemmy.world 29 points 2 days ago* (last edited 2 days ago) (2 children)

We've had tools to manage workflows for decades. You don't need Copilot injected into every corner of your interface to achieve this. I suspect the bigger challenge for Larian is working in a development suite that can't be accused of having "AI Assist" hiding somewhere in the internals.

[–] Hackworth@piefed.ca 20 points 2 days ago (2 children)

Yup! Certifying a workflow as AI-free would be a monumental task now. First, you'd have to designate exactly what kinds of AI you mean, which is a harder task than I think people realize. Then, you'd have to identify every instance of that kind of AI in every tool you might use, and just looking at Adobe, there's a lot. Then you, what, forbid your team from using them? Sure, but how do you monitor that? Ya can't uninstall generative fill from Photoshop. Anyway, that's why anything with a complicated design process marked "AI-free" is going to be the equivalent of greenwashing, at least for a while. But they should be able to keep obvious slop out of the final product just through regular testing.

[–] P1nkman@lemmy.world 6 points 2 days ago (4 children)

It's simple: go back to binary.

Keep going. Handmade analog mediums only.

[–] Hackworth@piefed.ca 3 points 2 days ago

Coincidentally, this paper published yesterday indicates that LLMs are worse at coding the closer you get to low-level languages like assembly or binary. Or, more precisely, ya stop seeing improvements pretty early on in scaling up the models. If I'm reading it right, which I'm probably not.

[–] mcforest@feddit.org 3 points 2 days ago

Just stop using computers at all to program computer games.

[–] plateee@piefed.social -1 points 2 days ago

Or just have a hard cut-off for software released after 2022.

It's the only way I search for recipes anymore - a date filter from 1/1/1990 - 1/1/2022.

[–] Bronzebeard@lemmy.zip 2 points 2 days ago

Yeah, do you use any Microsoft products at all (like 98% of corporate software development does)? Everything from Teams to Word to Visual Studio has Copilot sitting there. It would just take one employee asking it a question to render a no-AI pledge a lie.

[–] rtxn@lemmy.world 14 points 2 days ago* (last edited 2 days ago) (1 children)

You know it doesn't have to be all or nothing, right?

In the early design phase, for example, quick placeholder objects are invaluable for composing a scene. Say you want a dozen different effigies built from wood and straw -- you let the clanker churn them out. If you like them, an environment artist can replace them with bespoke models, as detailed and as optimized as the scene needs. If you don't like them, you can just chuck them in the trash, and you won't have wasted the work of an artist, who can instead work on artwork that will actually appear in the released product.

Larian haven't done anything to make me question their credibility in this matter.

[–] UnderpantsWeevil@lemmy.world 5 points 2 days ago (2 children)

You know it doesn’t have to be all or nothing, right?

Part of the "magic" of AI is how much of the design process gets hijacked by inference. At some scale you simply don't have control of your own product anymore. What is normally a process of building an asset up in layers becomes flattened blobs you need to meticulously deconstruct and reconstruct if you want them to not look like total shit.

That's a big part of the reason why "AI slop" looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It's like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.

If you don’t like them, you can just chuck them in the trash and you won’t have wasted the work of an artist

If you engineer your art department to start with verbal prompts rather than sketches and rough drawings, you're handcuffing yourself to the heuristics of your AI dataset. It doesn't matter that you can throw away what you don't like. It matters that you're preemptively limiting yourself to what you'll eventually approve.

[–] Prove_your_argument@piefed.social 7 points 2 days ago (1 children)

That’s a big part of the reason why “AI slop” looks so bad. Inference is fundamentally not how people create complex and delicate art pieces. It’s like constructing a house by starting with the paint job and ending with the framing lumber, then asking an architect to fix where you fucked up.

This is just the whole robot sandwich thing to me.

A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.

Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted? To me this is like that. I use generative AI as a search engine and just like with altavista or google, it’s up to my own evaluation of the results and my own acumen with the prompt to get me where I want to be. Even then, I still need to pay attention and make sure what I have is relevant and useful.

I think artists could use gen AI to make more good art than ever, but just like a photographer… a thousand shots only results in a very small number of truly amazing outcomes.

Gen AI can’t think for itself or for anybody, and if you let it do the thinking and end up with slop well… garbage in, garbage out.

At the end of the day right now two people can use the same tools and ask for the same things and get wildly different outputs. It doesn’t have to be garbage unless you let it be though.

I will say, gen AI seems to be the only way to combat the insane BEC (business email compromise) attacks we have today. I can't babysit every single user's every email, but it sure as hell can bring me a shortlist of things to look at. Something might still get through, but before I had a tool, a ton of shit got through, and we almost paid tens of thousands of dollars on a single bogus but convincing-looking invoice. It went as far as a fucking bank-account penny test (they verified two ACH deposits). Four different people gave their approvals, head of accounting included... before a junior person asked us if we saw anything fishy. This is just one example of why gen AI can have real practical use cases.
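
(A hypothetical sketch of that shortlist workflow, using the OpenAI client purely for concreteness; the model name and prompt are placeholders, and any local or hosted model would do. The model only flags; a human still decides.)

```python
# Hypothetical sketch: an LLM pre-screens inbound payment requests
# and a human reviews only the flagged ones. Model and prompt are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def flag_if_suspicious(email_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You screen emails for business-email-compromise "
                        "red flags (new bank details, urgency, look-alike "
                        "domains). Reply FLAG or PASS with one reason."},
            {"role": "user", "content": email_text},
        ],
    )
    return resp.choices[0].message.content

# The output only builds the shortlist; a human makes the final call.
```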

[–] UnderpantsWeevil@lemmy.world -4 points 2 days ago

This is just the whole robot sandwich thing to me.

If home kitchens were being replaced by pre-filled Automats, I'd be equally repulsed.

A tool is a tool. Fools may not use them well, but someone who understands how to properly use a tool can get great things out of it.

The most expert craftsman won't get a round peg to fit into a square hole without doing some damage. At some point, you need to understand what the tool is useful for. And the danger of LLMs boils down to the seemingly industrial-scale willingness to sacrifice quality for expediency, and to defend that choice in the name of business profit.

Doesn’t anybody remember how internet search was in the early days? How you had to craft very specific searches to get something you actually wanted?

Internet search was as much constrained by what was online as what you entered in the prompt. You might ask for a horse and get a hundred different Palominos when you wanted a Clydesdale, not realizing the need to be specific. But you're never going to find a picture of a Vermont Morgan horse if nobody bothered to snap a photo and host it where a crawler could find it.

Taken to the next level with LLMs, you're never going to infer a Vermont Morgan if it isn't in the training data. You're never going to even think to look for one if the LLM hasn't bothered to index it properly. And because these AI engines are constantly eating their own tails, what you get is a basket of horses inferred between a Palomino and a Clydesdale, sucked back into the training data, then inferred between a Palomino and a Palomino-Clydesdale, sucked back into the training data, and, and, and...
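
(That tail-eating effect is easy to caricature in a toy simulation: fit a simple model to data, sample the next "generation" from the model with slightly truncated sampling, as real generators effectively do via temperature or top-p, and refit. The tails vanish within a few generations. Pure NumPy, not a model of any real training pipeline.)

```python
# Toy model-collapse demo (a caricature, not any real pipeline):
# each "generation" is trained only on samples from the previous
# model, with mildly truncated sampling, so the tails disappear fast.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(100_000)            # gen 0: "human" data

for gen in range(1, 6):
    mu, sigma = data.mean(), data.std()        # fit a Gaussian "model"
    samples = rng.normal(mu, sigma, 150_000)   # sample the next generation,
    keep = np.abs(samples - mu) < 2.5 * sigma  # undersampling the tails
    data = samples[keep][:100_000]
    print(f"gen {gen}: std={data.std():.3f}, "
          f"P(|x|>3)={np.mean(np.abs(data) > 3):.5f}")
```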

I think artists could use gen AI to make more good art than ever

I don't think using an increasingly elaborate and sophisticated crutch will teach you to sprint faster than Usain Bolt. Removing steps from the artistic process and relying on glorified clipart catalogs will not improve your output. It will speed up your output and meet some minimum viable standard for release. But the goal of that process is to remove human involvement, not improve it.

I will say, gen AI seems to be the only way to combat the insane BEC attacks we have today.

Which is great. Love to use algorithmic defenses to combat algorithmic attacks.

But that's a completely different problem than using inference to generate art assets.

[–] False@lemmy.world 4 points 2 days ago

How do you think a human decides what to sketch? They talk about the requirements.

[–] finitebanjo@lemmy.world 9 points 2 days ago

The only good use for LLMs and generative AI is spreading misinformation.

[–] Darkcoffee@sh.itjust.works 1 points 2 days ago

I was saying that as well.

I get the knee-jerk reaction, because everything has been so horrible everywhere lately with AI, but they're actually one of the few companies using it right.