[–] alexsantee@infosec.pub 74 points 1 day ago (3 children)

It's a shame to see the journalist trusting an AI chatbot to verify the trustworthiness of the image instead of asking a specialist. I feel like they should even have an in-house AI-detection specialist, since we're moving toward having more generative AI material everywhere.

[–] Tuuktuuk@piefed.ee 3 points 1 day ago (1 children)

If the part of the image that reveals it was made by an AI is obvious enough, why contact a specialist? Of course, reporters should absolutely be trained to spot such things with their bare eyes, without something telling them specifically where to look. But still, once the reporter can already see what's ridiculously wrong in the image, it would be a waste of the specialist's time to call them in to look at it.

[–] MagicShel@lemmy.zip 6 points 1 day ago (2 children)
[–] azertyfun@sh.itjust.works 4 points 1 day ago

My guess is it's the same thing as "critics say [x]". The journalist has an obvious opinion but isn't allowed by their editor-in-chief to put it in, so to maintain the illusion of NeutTraLITy™©® they find a strawman to hold that opinion for them.

I guess now they don't even need to find a tweet with 3 likes to present a convenient quote from "critics" or "the public" or "internet commenters" or "sources"; they can just ask ChatGPT to generate it for them. Either way, any newsroom where that kind of shit flies is not doing serious journalism.

[–] Tuuktuuk@piefed.ee 1 point 1 day ago* (last edited 1 day ago)

It is implied in the article that the chatbot was able to point out details about the image that the reporter either could not immediately recognize without some kind of outside help or did not bother looking for.

So the chatbot's contribution was helping the reporter notice something in the photo within a few seconds that would otherwise have taken several minutes to spot without technological aid.

[–] ohulancutash@feddit.uk 1 point 23 hours ago

Did they, though? The article mentions that a journalist ran it through a chatbot. It also mentions that the image was verified by a reporter on the ground.

It's like criticising a weather report because the reporter looked outside to see if it was raining, when they had also consulted the forecasting simulations.