this post was submitted on 27 Oct 2025
439 points (99.3% liked)

Programmer Humor

[–] TheReturnOfPEB@reddthat.com 176 points 2 days ago (5 children)

couldn't ai, then also, break code faster than we could fix it ?

[–] NuXCOM_90Percent@lemmy.zip 44 points 2 days ago* (last edited 2 days ago) (1 children)

I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.

Intrusion Countermeasures Electronics. The idea that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it but it is a very system/mental intensive process and the stronger the ICE, the stronger the tools need to be.

In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor... if your models are good (big ol' "if" that).

Which then gets into the idea of Black ICE that is actively antagonistic towards those who are detected as attempting to bypass it. In the books it would fry brains. In the modern day it isn't overly dissimilar from how so many VPN controlled IPs are just outright blocked from services and there is always the risk of getting banned because your wifi coffee maker is part of a botnet.

But it is also not hard to imagine a world where a counter-DDOS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.

[–] Kyrgizion@lemmy.world 8 points 2 days ago (1 children)

> In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor… if your models are good (big ol’ “if” that).

Turns out Harlan Ellison was a goddamn prophet when he wrote I Have No Mouth And I Must Scream.

[–] bleistift2@sopuli.xyz 10 points 2 days ago* (last edited 2 days ago) (1 children)

I have no clue how you think these two are related in any way, except for the word β€œAI” occurring in both.

[–] PattyMcB@lemmy.world 19 points 2 days ago (1 children)

AI WRITES broken code. Exploiting it is even easier.

[–] MajorasTerribleFate@lemmy.zip 8 points 2 days ago (2 children)

How do you exploit that which is too broken to run?

[–] marcos@lemmy.world 5 points 2 days ago

AI should start breaking code much sooner than it can start fixing it.

Maybe breaking isn't even hard, because the AI can be wrong 90% of the time and still be successful.

[–] 30p87@feddit.org 94 points 2 days ago* (last edited 2 days ago) (5 children)

Genius strategy:

  • Replace Juniors
  • Old nerds knowing stuff die out
  • Now nobody knows anything about programming and security
  • Everything's now a battle between LLMs
[–] jaybone@lemmy.zip 19 points 2 days ago (1 children)

I’ve already had to reverse engineer shitty old spaghetti code written by people who didn’t know what they were doing, so I could fix obscure bugs.

I can wait until I have to do the same thing for AI-generated code.

[–] MelodiousFunk@slrpnk.net 12 points 2 days ago

If it's good enough for COBOL...

[–] OctopusNemeses@lemmy.world 5 points 2 days ago

This is a generalized problem. It's not only programming. The world faces a critical collapse of expertise if we defer to AI.

[–] bleistift2@sopuli.xyz 68 points 2 days ago (2 children)
[–] Susaga@sh.itjust.works 21 points 2 days ago (1 children)

I wonder why they don't work there anymore...

[–] i_stole_ur_taco@lemmy.ca 14 points 2 days ago (1 children)

Replaced by AI, ironically.

[–] itkovian@lemmy.world 62 points 2 days ago

Execs and managers showing Dunning-Kruger in full effect.

[–] DupaCycki@lemmy.world 45 points 2 days ago (1 children)

At this point, they're just rage baiting and saying random shit to squeeze that bubble before it bursts.

[–] Routhinator@startrek.website 43 points 2 days ago

AI is opening so many security HOLES. It's not solving shit. AI browsers and MCP connectors are wild west security nightmares. And that's before you even trust any code these things write.

[–] HazardousBanjo@lemmy.world 34 points 2 days ago

As usual, the biggest advocates for AI are the ones who understand its limitations the least.

[–] violentfart@lemmy.world 32 points 2 days ago
[–] Mikina@programming.dev 31 points 2 days ago* (last edited 2 days ago) (2 children)

I have worked as a pentester and eventually a Red Team lead before leaving for gamedev, and oh god this is so horrifying to read.

The state of the industry was already extremely depressing, which is why I left. Even without all of this AI craze, the fact that I was able to get from a junior to Red Team Lead, in a corporation with hundreds of employees, in a span of 4 years is already fucked up, solely because Red Teaming was starting to be a buzzword, and I had passion for the field and for Shadowrun while also being good at presentations that customers liked.

When I got into the team, the "inhouse custom malware" was a web server with a script that polls it for commands to run with cmd.exe. It had a pretty involved custom obfuscation, but it took me like two engagements, and the guy responsible for it leaving, before I even (during my own research) found out that WinAPI is a thing, and that you actually should run stuff from memory and why. And I was just a junior at the time, and this "revelation" eventually got me an unofficial RT Lead position, with 2 MDs per month for learning and internal development; the rest had to be on engagements.

And even then, we were able to do kind of OK in engagements, because the customers didn't know and also didn't care. I was always able to come up with "lessons learned", and we always found some glaring sec policy issues, even with limited tools, but the thing is: they still did not care. We reported something, and two years later they still had the same bruteforcable Kerberos tickets. It already felt like the industry is just a scam done for appearances, and if it's now just AIs talking to AIs then, well, I don't think much would change.

But it sucks. I love offensive security. It was a really interesting few years of my career, but it was so sad to do, if you wanted to do it well :(

[–] onlinepersona@programming.dev 30 points 2 days ago (6 children)

I tried using AI in my Rust project and gave up on letting it write code. It does quite alright in Python, but Rust is still too niche for it. Imagine trying to write Zig or Haskell; it would make a terrible mess of it.

Security is an afterthought in 99.99% of code. AI barely has anything to learn from.

[–] krooklochurm@lemmy.ca 33 points 2 days ago (2 children)

If you're using Hannah Montana Linux you can just open a terminal and type "write me ____ in the language ____" and the Hannai Montanai will produce perfectly working code every time.

[–] jaybone@lemmy.zip 15 points 2 days ago (1 children)
[–] krooklochurm@lemmy.ca 20 points 2 days ago

Hannah Montana Linux is serious business. I would never joke about Hannah Montana Linux.

[–] buddascrayon@lemmy.world 6 points 2 days ago

> It does quite alright in python

That's cause python is the most forgiving language you could write in. You could drop entire pages of garbage into a script and it would figure out a way to run properly.

[–] IzzyScissor@lemmy.world 30 points 1 day ago (4 children)

SchrΓΆdinger's AI: It's so smart it can build perfect security, but it's too dumb to figure out how to break it.

[–] deadbeef79000@lemmy.nz 28 points 2 days ago (1 children)

Ha ha ha ha ha!

Oh wait, you're serious. Let me laugh even harder.

HA HA HA HA HA!

[–] melfie@lemy.lol 20 points 2 days ago
[–] rozodru@pie.andmc.ca 19 points 2 days ago

Not with any of the current models, none of them are concerned with security or scaling.

[–] death_to_carrots@feddit.org 16 points 2 days ago (3 children)

It takes a good person with ~~a gun~~ AI to stop a bad person with ~~a gun~~ AI.

[–] tidderuuf@lemmy.world 16 points 2 days ago

Ah yes, I'm sure AI just patched that software so that other AI could use that patched software and make things so much more secure. What a brilliant idea from an Ex-CISA head.

[–] Randelung@lemmy.world 15 points 1 day ago

ahahahaha

Oh, you're serious. Let me laugh even harder.

AHAHAHAHA

[–] biotin7@sopuli.xyz 15 points 1 day ago (1 children)

Because then Security would be non-existent.

[–] VonReposti@feddit.dk 12 points 1 day ago

The S in AI stands for security.

One of the most idiotic takes I've read in a long time

[–] Darkcoffee@sh.itjust.works 13 points 2 days ago

Is that why she's Ex-CISA? 🀣

[–] jaybone@lemmy.zip 13 points 2 days ago

Fix what code? The code it broke or wrote like shit in the first place?

[–] Blackmist@feddit.uk 12 points 2 days ago

Ron Howard narrator: Actually, they would need more.

[–] kn0wmad1c@programming.dev 12 points 2 days ago

Clearly she's never seen AI code.

[–] skuzz@discuss.tchncs.de 10 points 23 hours ago

All these brainwashed AI-obsessed people should be required to watch I, Robot on loop for a month or two.

[–] PattyMcB@lemmy.world 9 points 2 days ago (1 children)
[–] CAWright@infosec.pub 7 points 2 days ago (2 children)

Except that most risks are from bad leadership decisions. Exhibit A: patches exist for so many vulnerabilities that remain unpatched because of bad business decisions.

I think in a theoretical sense, she is correct. However, in practice things are much different.

[–] Kyrgizion@lemmy.world 10 points 2 days ago (1 children)

My old job had so many unpatched servers, mostly Linux ones, because of the general idea that "Linux is safe anyway", and because Windows updates would often break critical infrastructure, so those were staggered and phased.

But we've seen plenty of infected Linux packages since, so it's almost a given there are huge open holes in that security somewhere.

[–] Bennyboybumberchums@lemmy.world 6 points 2 days ago (1 children)

I just asked an AI what the minimum wage was in 2003 in the UK, and it told me it was £4.50 and that on a 40-hour work week, that came out to 18k a year... But sure, trust it to write and fix code...
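For reference, the actual arithmetic at the figures the AI quoted (assuming £4.50/hour, 40 hours/week, 52 weeks/year) lands nowhere near 18k:

```python
# Annual pay from the AI's own quoted figures
hourly_rate = 4.50       # £/hour, as the AI stated
hours_per_week = 40
weeks_per_year = 52

annual = hourly_rate * hours_per_week * weeks_per_year
print(f"£{annual:,.2f}")  # prints £9,360.00, roughly half the AI's "18k a year"
```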

[–] MashedTech@lemmy.world 6 points 2 days ago

Who is paying her?

[–] Kyrgizion@lemmy.world 6 points 2 days ago (1 children)

If an AI can be used for automatic scalable defense, it can also be used offensively. It'll just be another digital arms race between blackhats and everyone else.
