this post was submitted on 27 Oct 2025
439 points (99.3% liked)

Programmer Humor

[–] TheReturnOfPEB@reddthat.com 176 points 2 days ago (5 children)

Couldn't AI then also break code faster than we could fix it?

[–] NuXCOM_90Percent@lemmy.zip 44 points 2 days ago* (last edited 2 days ago) (1 children)

I mean, at a high level it is very much the concept of ICE from Gibson et al back in the day.

Intrusion Countermeasures Electronics: the idea that you have code that is constantly changing and updating based upon external stimuli. A particularly talented hacker, or AI, can potentially bypass it, but it is a very system- and mentally-intensive process, and the stronger the ICE, the stronger the tools need to be.

In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor... if your models are good (big ol' "if" that).

Which then gets into the idea of Black ICE, which is actively antagonistic towards anyone detected trying to bypass it. In the books it would fry brains. In the modern day it isn't overly dissimilar from how so many VPN-controlled IPs are just outright blocked from services, and there is always the risk of getting banned because your Wi-Fi coffee maker is part of a botnet (toy sketch below).

But it is also not hard to imagine a world where a counter-DDOS or hack is run. Or a message is sent to the guy in the basement of the datacenter to go unplug that rack and provide the contact information of whoever was using it.
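Not Gibson-grade ICE, obviously, but a minimal sketch of the "defense that mutates on contact" idea described above. `AdaptiveICE`, the strike threshold, and the tarpit delay are all made up for illustration, not any real system:

```python
import secrets
import time
from collections import defaultdict


class AdaptiveICE:
    """Toy countermeasure: every detected probe rotates the 'lock',
    and repeat offenders escalate from a tarpit to an outright ban
    (the boring real-world stand-in for Black ICE)."""

    def __init__(self, probe_threshold: int = 3):
        self.secret = secrets.token_hex(16)   # whatever the attacker is trying to guess
        self.strikes = defaultdict(int)       # failed probes seen per source
        self.banned = set()
        self.probe_threshold = probe_threshold

    def _rotate(self):
        # "constantly changing based upon external stimuli":
        # a failed attempt invalidates whatever the attacker just learned
        self.secret = secrets.token_hex(16)

    def attempt(self, source_ip: str, guess: str) -> str:
        if source_ip in self.banned:
            return "connection refused"       # Black ICE, minus the fried brain
        if guess == self.secret:
            return "access granted"
        self.strikes[source_ip] += 1
        self._rotate()
        if self.strikes[source_ip] >= self.probe_threshold:
            self.banned.add(source_ip)
            return "banned -- go ask the guy in the basement to unplug your rack"
        time.sleep(0.1 * self.strikes[source_ip])   # escalating tarpit
        return "access denied"


ice = AdaptiveICE()
for _ in range(4):
    print(ice.attempt("203.0.113.7", "hunter2"))
```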

[–] Kyrgizion@lemmy.world 8 points 2 days ago (1 children)

> In the context of AI on both sides? Higher quality models backed by big ass expensive rigs on one side should work for anything short of a state level actor… if your models are good (big ol’ “if” that).

Turns out Harlan Ellison was a goddamn prophet when he wrote I Have No Mouth, and I Must Scream.

[–] bleistift2@sopuli.xyz 10 points 2 days ago* (last edited 2 days ago) (1 children)

I have no clue how you think these two are related in any way, except for the word “AI” occurring in both.

[–] Warl0k3@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

Tbf, every day that goes by is starting to feel more and more like we're all being tortured by a psychotic omnipotent AI... with a really boring sense of humor.

[–] PattyMcB@lemmy.world 19 points 2 days ago (1 children)

AI WRITES broken code. Exploiting it is even easier.

[–] MajorasTerribleFate@lemmy.zip 8 points 2 days ago (1 children)

How do you exploit that which is too broken to run?

[–] anomnom@sh.itjust.works 3 points 2 days ago (1 children)

They say it's healthy to self-exploit several times per month.

[–] marcos@lemmy.world 5 points 2 days ago

AI should start breaking code much sooner than it can start fixing it.

Maybe breaking isn't even far off, because the AI can be wrong 90% of the time and still be successful.
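Back-of-the-envelope on why being wrong 90% of the time is enough for the breaking side (the 10% success rate is just the number from the comment, not a measurement):

```python
# An attacker only needs one attempt to land; a defender has to stop all of them.
p_success = 0.10          # assume each AI-generated exploit attempt works 10% of the time
attempts = 50
p_at_least_one = 1 - (1 - p_success) ** attempts
print(f"{p_at_least_one:.4f}")   # ~0.9948 -- wrong 90% of the time, still nearly certain to get in
```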

[–] ronigami@lemmy.world 2 points 1 day ago

It’s like the “bla bla bla, blablabla… therefore God exists”

Except for CEOs it’s “blablablabla, therefore we can fire all our workers”

Same shit, different day.

[–] notarobot@lemmy.zip 1 points 2 days ago

A few years back someone made a virus that connected to an LLM server and kept finding ways to infect computers in a simulated network. I think it was kind of successful. Not viable as a real virus, though, but an interesting idea nonetheless.
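The commenter doesn't name the project, so this is only a guess at the shape of that loop; `query_llm`, `try_exploit`, and the host list are placeholders standing in for the LLM server and the simulated network, not any real API:

```python
import random


def query_llm(prompt: str) -> str:
    """Placeholder for the call to the LLM server the comment mentions;
    here it just returns a canned 'technique' at random."""
    return random.choice(["weak_ssh_password", "unpatched_service", "default_creds"])


def try_exploit(host: str, technique: str) -> bool:
    """Placeholder target: the described experiment ran against a simulated
    network, so nothing here touches a real machine."""
    return random.random() < 0.2   # most attempts fail, which is fine (see above)


def spread(hosts: list) -> set:
    infected = {hosts[0]}                      # patient zero
    for host in hosts[1:]:
        for _ in range(5):                     # keep asking until something sticks
            technique = query_llm(f"how would you get into {host}?")
            if try_exploit(host, technique):
                infected.add(host)
                break
    return infected


print(spread([f"10.0.0.{i}" for i in range(1, 6)]))
```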