this post was submitted on 07 Dec 2025
380 points (98.5% liked)

Technology

[–] MagicShel@lemmy.zip 211 points 12 hours ago* (last edited 12 hours ago) (12 children)

A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

What the actual fuck? You couldn't spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?

A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged

So they did. Why are we talking about ChatGPT then? You could just leave that part out. It's useless. Obviously a fake photo has been manipulated. Why bother asking?

[–] IcyToes@sh.itjust.works 52 points 12 hours ago

They needed time for their journalists to get there. They're too busy on the beaches counting migrant boat crossings.

[–] BanMe@lemmy.world 45 points 11 hours ago (1 children)

I am guessing the reporter wanted to remind people tools exist for this; however, the reporter isn't tech-savvy enough to realize ChatGPT isn't one of them.

[–] 9bananas@feddit.org 16 points 9 hours ago* (last edited 9 hours ago) (1 children)

afaik, there actually aren't any reliable tools for this.

the highest accuracy rate I've seen reported for "AI detectors" is somewhere around 60%, barely better than a random guess...

edit: that figure was for text/LLM detectors, to be fair.

kinda doubt image detectors are much better though... happy to hear otherwise, if there are better ones!

[–] rockerface@lemmy.cafe 11 points 8 hours ago (1 children)

The problem is any AI detector can be used to train AI to fool it, if it's publicly available

[–] 9bananas@feddit.org 6 points 8 hours ago* (last edited 8 hours ago) (1 children)

exactly!

using a "detector" is how (not all, but a lot of) AIs (LLMs, GenAI) are trained:

have one AI that's a "student", and one that's a "teacher" and pit them against one another until the student fools the teacher nearly 100% of the time. this is what's usually called "training" an AI.

one can do very funny things with this tech!
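for a concrete picture, here's a minimal sketch of that adversarial loop in PyTorch. toy 1-D data, made-up layer sizes, purely illustrative; not any real detector or generator:

```python
# Minimal sketch of adversarial ("student vs. teacher") training:
# the generator learns to produce fakes, the discriminator learns to
# spot them, and each update makes the other's job harder.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))     # "student"
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # "teacher"

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in for "real" data
    fake = generator(torch.randn(64, 16))   # generated samples

    # teacher update: learn to label real as 1 and fake as 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # student update: learn to make the teacher call fakes real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```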

for anyone that wants to see this process in action, here's a great example:

Benn Jordan: Breaking The Creepy AI in Police Cameras

[–] Deestan@lemmy.world 35 points 8 hours ago (6 children)

I tried the image of this real actual road collapse: https://www.tv2.no/nyheter/innenriks/60-mennesker-isolert-etter-veiras/12875776

I told ChatGPT it was fake and asked it to explain why. It assured me I was a special boy asking valid questions and helpfully made up some claims.


[–] Atropos@lemmy.world 14 points 7 hours ago

God damn I hate this tool.

Thanks for posting this, great example

[–] Wren@lemmy.today 12 points 7 hours ago

My best guess is SEO. Journalism that mentions ChatGPT gets more hits. It might be that they did use a specialist or specialized software and the editor was like "Say it was ChatGPT, otherwise people get confused, and we get more views. No one's going to fact-check whether or not someone used ChatGPT."

That's just my wild, somewhat informed speculation.

[–] Railcar8095@lemmy.world 10 points 12 hours ago (1 children)

Devil's advocate: the AI might be an agent that detects tampering, with an NLP frontend.

Not all AI is LLMs.

[–] MagicShel@lemmy.zip 30 points 12 hours ago* (last edited 12 hours ago) (1 children)

A "chatbot" is not a specialized AI.

(I feel like maybe I need to put this boilerplate in every comment about AI, but I'd hate that.) I'm not against AI or even chatbots. They have their uses. This is not using them appropriately.

[–] Railcar8095@lemmy.world 8 points 12 hours ago* (last edited 12 hours ago) (1 children)

A chatbot can be the user-facing side of a specialized agent.

That's actually how the original chatbots worked. Siri didn't know how to get the weather; it classified the question as a weather question, parsed the time and location, and decided which APIs to call in those cases.
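In code, that frontend pattern looks roughly like this. A toy sketch only: the keyword rules and the `get_weather` stub are made up for illustration, not Siri's or anyone's actual pipeline:

```python
# Toy sketch of a chatbot frontend: classify the intent, parse the slots,
# then dispatch to a specialised backend. The backend here is a stub.
import re
from datetime import date

def get_weather(location: str, when: date) -> str:
    # stand-in for a call to a real weather API
    return f"(weather for {location} on {when} would be fetched here)"

def handle(utterance: str) -> str:
    text = utterance.lower()
    # crude intent classification: is this a weather question?
    if any(word in text for word in ("weather", "rain", "forecast")):
        # crude slot parsing: grab a location after "in ..."
        m = re.search(r"\bin ([a-z ]+?)(\?|$| tomorrow)", text)
        location = m.group(1).strip() if m else "your area"
        return get_weather(location, date.today())
    return "Sorry, this sketch only handles weather questions."

print(handle("What's the weather in Manchester tomorrow?"))
```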

[–] MagicShel@lemmy.zip 19 points 12 hours ago* (last edited 12 hours ago) (14 children)

Okay, I get you're playing devil's advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT? Even in the unlikely event I'm wrong, what is the message to the audience? That ChatGPT can investigate just as well as the BBC. Which may well be the case, but it oughtn't be.

My second point still stands. If you sent someone to look at the thing and it's fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.

[–] brbposting@sh.itjust.works 1 points 7 hours ago

If the article had been written 10 years ago I would've just assumed they used something like:

https://fotoforensics.com/
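(For what it's worth, FotoForensics is built around error level analysis: re-save the JPEG at a known quality and look at how unevenly different regions re-compress, since edited areas often stand out. Here's a rough sketch with Pillow; the filenames are just placeholders and this is not the site's actual pipeline:)

```python
# Rough sketch of error level analysis (ELA): re-save the JPEG and
# amplify the per-pixel differences so unevenly-compressed regions pop out.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")

    diff = ImageChops.difference(original, resaved)
    # the raw differences are tiny, so scale them up to be visible
    max_diff = max(px for band in diff.getextrema() for px in band) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```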

[–] Tuuktuuk@piefed.ee 9 points 10 hours ago* (last edited 7 hours ago)

Here's hoping the reporter then looked at the image and thought, "oh, true! That's an obvious spot there!"

[–] alexsantee@infosec.pub 52 points 12 hours ago (2 children)

It's a shame to see the journalist trusting an AI chatbot to assess the authenticity of the image instead of asking a specialist. I feel like they should have an AI-detection specialist in-house, since we're moving toward having more generative AI material everywhere.

[–] Tuuktuuk@piefed.ee 3 points 10 hours ago (1 children)

If the part of the image that reveals it was AI-made is obvious enough, why contact a specialist? Of course, reporters should absolutely be trained to spot such things with their own eyes, without something telling them specifically where to look. But still, once the reporter can already see what's ridiculously wrong in the image, it would be a waste of the specialist's time to call them in to look at it.

[–] MagicShel@lemmy.zip 5 points 9 hours ago (2 children)

What does that chatbot add?

[–] azertyfun@sh.itjust.works 3 points 8 hours ago

My guess is the same thing as "critics say [x]". The journalist has an obvious opinion but isn't allowed by their editor to put it in, so to maintain the illusion of NeutTraLITy™©® they find a strawman to hold that opinion for them.

I guess now they don't even need to find a tweet with 3 likes to present a convenient quote from "critics" or "the public" or "internet commenters" or "sources"; they can just ask ChatGPT to generate it for them. Either way, any newsroom where that kind of shit flies is not doing serious journalism.

[–] Tuuktuuk@piefed.ee 1 points 7 hours ago* (last edited 7 hours ago)

It is implied in the article that the chatbot was able to point out details about the image that the reporter either could not immediately recognize without some kind of outside help or did not bother looking for.

So what the chatbot added was getting the reporter to notice something in the photo in a few seconds that would have taken several minutes to spot without the aid of technology.

[–] mavu@discuss.tchncs.de 28 points 6 hours ago (2 children)

A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.

WTF?

Doesn't the fucking BBC have at least 1 or 2 experts for spotting fakes? RAN THROUGH AN AI CHATBOT?? SERIOUSLY??

[–] Pyr_Pressure@lemmy.ca 17 points 6 hours ago

They have vibe journalists now

[–] Blackmist@feddit.uk 10 points 6 hours ago

They do, they have like a daily article debunking shit.

[–] ExLisper@lemmy.curiana.net 22 points 6 hours ago (3 children)

WTF? Why did nothing like this ever happen back in the Photoshop days? Are people just dumber now?

[–] Vitaly@feddit.uk 4 points 2 hours ago* (last edited 2 hours ago)

The thing is, you actually needed some skill to do it in Photoshop, but now every dumb fuck who knows how to read can do shit like this.

[–] SocialMediaRefugee@lemmy.world 4 points 2 hours ago

These are more realistic and far far easier to make.

[–] Rhoeri@lemmy.world 2 points 1 hour ago

It doesn't require skill anymore. AI has given children the ability to pretend they have a skill, and to use it to fool people for fun.

[–] rami@ani.social 14 points 7 hours ago (1 children)

I'm surprised no one else has mentioned that it only took them an hour and a half to get an inspection done, signed off on, and the lines reopened. That seems pretty impressive for something as important as a rail bridge.

[–] leftzero@lemmy.dbzer0.com 18 points 7 hours ago (1 children)

I mean, that's the time it takes to get an inspector out of bed, on the road, and to the site, and for them to go "yup, bridge's still there" and call back...

[–] ronigami@lemmy.world 1 points 6 hours ago

For a “once in decades” event you would normally expect that people aren’t really on call to respond in a few minutes.

[–] Honytawk@lemmy.zip 10 points 12 hours ago

I mean, even if it isn't true, better to be sure than to have a train derail and kill a bunch of people.

[–] abbiistabbii@lemmy.blahaj.zone 8 points 6 hours ago

For anyone outside the UK, the bridge in the picture is carrying the West Coast Mainline (WCML).

The UK basically has two major routes between Edinburgh and Glasgow (where most people live in Scotland) and London: the East Coast Mainline and the West Coast Mainline. They also connect several major cities and regions.

The person who posted this basically claimed that a bridge on one of the UK's busiest intercity rail routes had started to collapse, which is not something you say lightly. It's like saying all of New York's airports had shut down because of three coincidental sinkholes.

[–] Blackmist@feddit.uk 8 points 6 hours ago

Wait until this shit starts an actual war.

[–] SocialMediaRefugee@lemmy.world 8 points 2 hours ago (2 children)

It is time to start holding social media sites liable for the AI deceptions posted on them. FB is absolutely rife with them.

[–] Rhoeri@lemmy.world 2 points 1 hour ago

Sites AND the people that post them. The age of consequence-less action needs to end.

[–] ImmersiveMatthew@sh.itjust.works 1 points 24 minutes ago

I think just the people need to be held accountable. While I am no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify that a person is real without them having to share their personal information, but that breaks Meta's (and others') free business model, so here we are.

[–] SocialMediaRefugee@lemmy.world 5 points 2 hours ago

People who post this stuff without identifying it as fake should be held liable.
