this post was submitted on 07 Dec 2025
552 points (98.4% liked)

Technology

[–] MagicShel@lemmy.zip 276 points 1 day ago* (last edited 1 day ago) (8 children)

A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

What the actual fuck? You couldn't spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

So they did. Why are we talking about ChatGPT then? You could just leave that part out. It's useless. Obviously a fake photo has been manipulated. Why bother asking?

[–] Deestan@lemmy.world 86 points 1 day ago (17 children)

I tried the image of this real actual road collapse: https://www.tv2.no/nyheter/innenriks/60-mennesker-isolert-etter-veiras/12875776

I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.

[inline media: screenshot of ChatGPT's response]

[–] Atropos@lemmy.world 54 points 1 day ago

God damn I hate this tool.

Thanks for posting this, great example

[–] IcyToes@sh.itjust.works 65 points 1 day ago

They needed time for their journalists to get there. They're too busy on the beaches counting migrant boat crossings.

[–] BanMe@lemmy.world 53 points 1 day ago (1 children)

I'm guessing the reporter wanted to remind people that tools exist for this; however, the reporter isn't tech-savvy enough to realize ChatGPT isn't one of them.

[–] 9bananas@feddit.org 25 points 1 day ago* (last edited 1 day ago) (1 children)

afaik, there actually aren't any reliable tools for this.

the highest accuracy rate I've seen reported for "AI detectors" is somewhere around 60%; barely better than a random guess...

edit: that figure was for text/LLM detectors, to be fair.

kinda doubt image detectors are much better though... happy to hear otherwise, if there are better ones!

[–] rockerface@lemmy.cafe 16 points 1 day ago (1 children)

The problem is any AI detector can be used to train AI to fool it, if it's publicly available

[–] 9bananas@feddit.org 10 points 1 day ago* (last edited 1 day ago) (2 children)

exactly!

using a "detector" is how a lot of (not all) generative models are trained:

have one model that's the "student" (the generator) and one that's the "teacher" (the detector/discriminator), and pit them against one another until the student fools the teacher nearly 100% of the time. this is what's usually called adversarial training; it's the idea behind GANs.
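
for the curious, here's a rough sketch of that student/teacher loop in PyTorch. the toy data, model sizes, and hyperparameters are made up for illustration; nothing here comes from a real detector:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "student": turns random noise into a sample; "teacher": says real (1) or fake (0)
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # student output from random noise

    # teacher step: learn to label real samples 1 and generated samples 0
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # student step: adjust the generator so the teacher calls its output real
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(torch.randn(64, 8))), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

the "teacher" here is just a tiny classifier; point the same loop at a published detector and you're training a generator to beat that detector instead.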

one can do very funny things with this tech!

for anyone that wants to see this process in action, here's a great example:

Benn Jordan: Breaking The Creepy AI in Police Cameras

[–] SLVRDRGN@lemmy.world 3 points 17 hours ago

Someone commented a reply which I thought worthy of highlighting:

"I need privacy, not because my actions are questionable, but because your judgement and intentions are."

[–] Wren@lemmy.today 16 points 1 day ago

My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be they did use a specialist or specialized software and the editor was like "Say it was ChatGPT, otherwise people get confused, and we get more views. No one's going to fact check whether or not someone used ChatGPT."

That's just my wild, somewhat informed speculation.

[–] Railcar8095@lemmy.world 11 points 1 day ago (1 children)

Devil's advocate: the AI might be an agent that detects tampering, with an NLP frontend.

Not all AI is LLMs.

[–] MagicShel@lemmy.zip 33 points 1 day ago* (last edited 1 day ago) (1 children)

A "chatbot" is not a specialized AI.

(I feel like maybe I need to put this boilerplate in every comment about AI, but I'd hate that.) I'm not against AI or even chatbots. They have their uses. This is not using them appropriately.

[–] Railcar8095@lemmy.world 8 points 1 day ago* (last edited 1 day ago) (1 children)

A chatbot can be the user facing side of a specialized agent.

That's actually how the original chatbots worked. Siri didn't know how to get the weather; it was able to classify the question as a weather question, parse the time and location, and decide which APIs to call in those cases.
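
Roughly, the pattern looks like this. The intents and handlers below are hypothetical, purely to illustrate the routing, not any real assistant's code:

```python
import re

def classify_intent(text: str) -> str:
    # A real assistant would use a trained classifier; keyword rules stand in here.
    if re.search(r"\b(weather|rain|temperature)\b", text, re.I):
        return "weather"
    if re.search(r"\b(fake|manipulated|photoshopped)\b", text, re.I):
        return "image_forensics"
    return "small_talk"

def handle(text: str) -> str:
    intent = classify_intent(text)
    if intent == "weather":
        # Parse time/location and call a weather API in a real system.
        return "Routing to the weather service..."
    if intent == "image_forensics":
        # Hand off to a specialized detector rather than the chat model itself.
        return "Routing to the image-forensics tool..."
    return "Handling as general chat."

print(handle("Will it rain in Manchester tomorrow?"))
print(handle("Is this bridge photo manipulated?"))
```

The chat layer only decides where the request goes; the actual answer comes from the specialized backend.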

[–] MagicShel@lemmy.zip 22 points 1 day ago* (last edited 1 day ago) (3 children)

Okay, I get you're playing devil's advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I'm wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn't be.

My second point still stands. If you sent someone to look at the thing and it's fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.

[–] brbposting@sh.itjust.works 2 points 1 day ago

If the article were written 10 years ago I would’ve just assumed they had used something like:

https://fotoforensics.com/
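
FotoForensics is best known for error-level analysis (ELA). A rough sketch of that general idea with Pillow, not their actual implementation, and with a placeholder file name:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    # Re-save the image as JPEG and amplify the differences. Regions that were
    # edited and re-saved separately often recompress differently from the rest.
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # The raw differences are faint; stretch them so they're visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda p: min(255, p * 255 // max_diff))

# error_level_analysis("bridge_photo.jpg").save("bridge_photo_ela.png")
```

The output still needs a human to interpret it; it highlights suspicious regions, it doesn't give a verdict.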

[–] squaresinger@lemmy.world 1 points 1 day ago

ChatGPT is a frontend for specialized modules.

If you e.g. ask it to do maths, it will not do it via LLM but run it through a maths module.

I don't know for a fact whether it has a photo analysis module, but I'd be surprised if it didn't.

[–] Railcar8095@lemmy.world -1 points 1 day ago (1 children)

It's not like BBC is a single person with no skill other than a driving license and at least one functional eye.

Hell, they don't even need to go; just call the local services.

To me, it's more likely that they have a specialized tool correctly detecting tampering with the photo than an LLM.

But if you say it's unlikely that you're wrong, then I must be wrong, I guess.

[–] MagicShel@lemmy.zip 7 points 1 day ago (2 children)

what is the message to the audience? That ChatGPT can investigate just as well as BBC.

What about this part?

Either it's irresponsible to use ChatGPT to analyze the photo or it's irresponsible to present to the reader that chatbots can do the job. Particularly when they've done the investigation the proper way.

Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that's just as valid as BBC reporting.

[–] Railcar8095@lemmy.world 1 points 1 day ago (1 children)

About that part, I would say the article doesn't mention ChatGPT, only AI.

[–] MagicShel@lemmy.zip 5 points 1 day ago* (last edited 1 day ago) (1 children)

"AI chatbot". Which means ChatGPT to 99% of people, almost certainly including a journalist who doesn't live under a rock. They're just avoiding naming it.

[–] Railcar8095@lemmy.world -2 points 1 day ago (1 children)

Yes. It's ChatGPT. You got them good. You passed the test, Neo. Now get the pills.

[–] riskable@programming.dev -2 points 1 day ago* (last edited 1 day ago) (1 children)

I don't think it's irresponsible to suggest to readers that they can use an AI chatbot to examine any given image to see if it was AI-generated. Even the lowest-performing multimodal chatbots (e.g. Grok and ChatGPT) can do that pretty effectively.

Also: Why stop at one? Try a whole bunch! Especially if you're a reporter working for the BBC!

It's not like they give a flat answer, "yes: definitely fake" or "no: definitely real." They will analyze the image and give you some information about it, such as tell-tale signs that an image could have been faked.

But why speculate? Try it right fucking now: Ask ChatGPT or Gemini (the current king at such things BTW... For the next month at least hahaha) if any given image is fake. It only takes a minute or two to test it out with a bunch of images!

Then come back and tell us that's irresponsible with some screenshots demonstrating why.
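
If you'd rather script it than use the web UI, a minimal sketch with the OpenAI Python SDK looks like this. The model name and image URL are placeholders, and the reply is just the model's read on the image, not forensic proof:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever multimodal model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this photo show signs of AI generation or manipulation? Explain."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/bridge_photo.jpg"}},  # placeholder image
        ],
    }],
)
print(response.choices[0].message.content)
```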

[–] MagicShel@lemmy.zip 4 points 1 day ago* (last edited 1 day ago)

I don't need to do that. And what's more, it wouldn't be any kind of proof because I can bias the results just by how I phrase the query. I've been using AI for 6 years and use it on a near-daily basis. I'm very familiar with what it can do and what it can't.

Between bias and randomness, you will have images that are evaluated as both fake and real at different times to different people. What use is that?

[–] Tuuktuuk@piefed.ee 9 points 1 day ago* (last edited 1 day ago)

Here's hoping that the reporter then looked at the image and noticed, "oh, true! That's an obvious spot there!"

[–] HugeNerd@lemmy.ca 1 points 1 day ago

But the stories of Russians under my bed stealing my washing machine's CPU are totally real.