this post was submitted on 07 Dec 2025
784 points (97.8% liked)

Just want to clarify: this is not my Substack; I'm just sharing it because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[–] LiveLM@lemmy.zip 31 points 21 hours ago (2 children)

Aren't these LLM detectors super inaccurate?

[–] dsilverz@calckey.world 20 points 20 hours ago (1 children)

@LiveLM@lemmy.zip @rimu@piefed.social

This!

Also, the irony: these are AI tools used by anti-AI people, who use AI to try to (roughly) determine whether content is AI-generated by reading the output of an AI. Even worse: as far as I know, they're paid tools (at least every tool I've seen in this regard required a subscription), so anti-AI people pay for an AI in order to (supposedly) detect AI slop. Truly "AI-rony", pun intended.

[–] rimu@piefed.social -2 points 20 hours ago (1 children)

https://gptzero.me/ is free; give it a try. Generate some slop in ChatGPT and copy and paste it in.

[–] dsilverz@calckey.world 4 points 20 hours ago

@rimu@piefed.social @technology@lemmy.world

Thanks, I didn't know about that one. It seems interesting (but limited, according to their "Pricing" page; every time a tool has a "pricing" menu item, betcha it'll either be anything but gratis or extremely limited in its "free tier"). I created an account and I'll soon try it with some of the occult poetry I used to write. I'm ND, so I'm fully aware of how my texts often sound like AI slop.