this post was submitted on 30 Oct 2025
723 points (97.1% liked)

Political Memes

[–] ricecake@sh.itjust.works 2 points 1 day ago

It depends on which type of AI upscaling is being used.
Some are basically a neural net that understands how pixelation interacts with light, shadow, and color gradients, and those can work really well. They leave the original pixels intact, fill the gaps with a best guess using traditional interpolation, and then correct those guesses using feedback from the neural net.
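A crude sketch of that first approach, with the neural-net correction step left out (a real upscaler would run a learned model over the result to nudge only the guessed pixels). The averaging scheme here is just plain bilinear-style interpolation, picked for illustration:

```python
import numpy as np

def upscale_2x(img):
    """2x upscale that keeps every original pixel and fills gaps by averaging.

    `img` is a 2D grayscale array. The originals land on the even grid
    and are never modified; only the in-between pixels are guessed.
    """
    h, w = img.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    out[::2, ::2] = img  # original pixels, untouched
    # fill horizontal gaps by averaging the left/right originals
    out[::2, 1:-1:2] = (out[::2, 0:-2:2] + out[::2, 2::2]) / 2
    out[::2, -1] = out[::2, -2]  # right edge: copy nearest
    # fill the remaining rows by averaging the rows above/below
    out[1:-1:2, :] = (out[0:-2:2, :] + out[2::2, :]) / 2
    out[-1, :] = out[-2, :]  # bottom edge: copy nearest
    return out

img = np.array([[0., 2.],
                [4., 6.]])
up = upscale_2x(img)
# the original four pixels are still there, on the even grid
assert np.array_equal(up[::2, ::2], img)
```

The point is that the guessed pixels are fully determined by the originals around them; the neural net only refines those guesses, it doesn't invent content.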
Others are much closer to "generate me an image that looks exactly like this one but has three times the resolution". Those use a lot more information about how people look (in the photos they were trained on) than just how light and structure interact.

The former is closer to how your brain works. Shadow and makeup can be separated because you (at the squishy level, not consciously) know shadows don't behave that way, and the light reflection hints at depth, and so on.
The latter is more concerned with minimizing "error", which might mean changing the original image data if that brings the total error down, or making up things that aren't there because they're plausible.
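A toy demo of why that's possible: the low-res image doesn't uniquely determine the high-res one, so a generator that only has to be "consistent with the input" is free to pick whichever plausible detail it likes. The block-averaging downsampler below is a stand-in for whatever actually produced the low-res photo:

```python
import numpy as np

def downsample_2x(img):
    # average each 2x2 block, a rough stand-in for losing resolution
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

low = np.full((2, 2), 5.0)  # the low-res image we're "upscaling"

# two very different high-res candidates...
flat   = np.full((4, 4), 5.0)
stripy = np.tile([[2.0, 8.0],
                  [8.0, 2.0]], (2, 2))

# ...both downsample to exactly the same low-res image
assert np.array_equal(downsample_2x(flat), low)
assert np.array_equal(downsample_2x(stripy), low)
```

Both candidates are equally "correct" answers to the upscaling problem, so whatever texture the model outputs is its guess, not recovered information.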

Inferring detail tends to look nicer, because it's using information that's already there to fill the gaps. Generating detail is just smearing in shit that fits and tweaking it until it passes a threshold of acceptability.
The first is more likely to be built into a phone camera to offset a smaller lens. The second is showing up a lot more in tools that "make your pictures look better" by tweaking them to look like photos people like.