this post was submitted on 24 Jul 2025
226 points (96.3% liked)
Showerthoughts
People always talk about it in relation to programmers, but what about us non-programmers who have been able to code things only because of ChatGPT?
I have some Python, sysadmin, and computer security knowledge. I actually obtained the Security+ cert a few years ago.
I don't work in tech anymore, and ChatGPT has helped me so much, basically coding stuff for me so I could do random work tasks that I was either unqualified to do or didn't have the time to do.
That's perfectly fine though. And I say that as a professional dev. The problem is when people assume you can actually build an entire software/service architecture of any complexity just through vibe coding.
Currently LLMs are great for helping me pick out the curtains or even to help me assemble some furniture, but I would NEVER let them build the entire house, if that makes sense.
Welcome to CEO handling 101. It's an art, a very soft skill, and not for the faint of heart. I worked for a mid sized (50 employee) company once where I'd "speak truth to power" in our weekly meeting, get shot down rather enthusiastically by the CEO during the meeting, then after I and the rest of R&D left his office, he'd go out to production and have them start implementing all the concepts of my pitch - as his own ideas, naturally.
Sure, I get it. Once my business is in a more profitable place I’ll bring someone on to fix up the code, but for now it’s more than enough.
AKA: technical debt. I actually approve of this approach when you're testing the market and don't have any paying customers. Where it gets ugly is when customers start placing trust in your product, trust that might be costly if your code fails, and management doesn't budget the resources to actually fix up the code. I was very glad to leave the place that was doing this...
The tools I’m using are for internal use, and there is a backup in place. So, if something does go down, my contractors can pull the files up in OneDrive in a view-only format.
It's an interesting tool.
It can shave hours off an experienced programmer's work if they use it in the right scenarios. You can use it in places where you need to do something that's mundane but fiddly. It's suboptimal for crapping out a large project, but it's super effective at generating a single function or module to do a task. It might even come up with a better idea than you would for some things. The key is that if it does something that's not quite right, or not the best idea, you need to be able to read the code and recognize that it's going a little off the rails.
If you're a spreadsheet junkie, it's capable of writing really, really complicated rules without getting lost in the minutiae.
For non-developers who don't know anything, it's a dicier proposition. After a couple thousand lines of code you might start running into interesting problems. Once it's in problem-solving mode, and you're just feeding the errors back and asking it to fix them, you can get bogged down pretty quickly.
For DevOps it's the diggity bomb. Practically everything in that profession is either a one-off quick emergency script or a well thought out plan of templates.
"Here are my five Amazon accounts: give me a shell script that goes into every account in every availability zone, enumerates every security group, and gives me a tool to add, remove, or replace a given IP (with a description and port) based on the existence of other IPs, descriptions, or ports." Or: "Write me an Ansible playbook that installs Zabbix monitoring with these templates."
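To make that first request concrete, here's a rough sketch of the decision logic such a tool would need. Everything here is hypothetical: `plan_rule_edit` and the rule-dict shape are my own invention, and a real version would wrap this logic in boto3 calls (`describe_security_groups`, `authorize_security_group_ingress`), looping over one session per account profile and one client per region.

```python
# Hypothetical core of the multi-account security-group tool described
# above. Names and data shapes are illustrative, not AWS APIs.

def plan_rule_edit(existing_rules, new_ip, description, port):
    """Decide what to do with a proposed ingress rule.

    existing_rules: list of dicts like
        {"ip": "1.2.3.4", "description": "office-vpn", "port": 443}
    Returns a tuple of (action, matching_rule_or_None), where action is
    "skip", "replace", or "add".
    """
    for rule in existing_rules:
        if rule["ip"] == new_ip and rule["port"] == port:
            return ("skip", rule)      # identical rule already present
        if rule["description"] == description and rule["port"] == port:
            return ("replace", rule)   # same role, stale IP: swap it out
    return ("add", None)               # nothing matches: append a new rule
```

The point is that an LLM handles this kind of fiddly-but-mundane branching well; the human's job is to review that the skip/replace/add precedence actually matches the intent.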
At least 2/3 of the time I spend with AI coding is getting it to compile without errors - that's more than a little off the rails. But it's still much more helpful to finally get to a working example you can look at than to beat your own head against the Stack Exchange archives hoping for inspiration. Let it try for you.
This is what I’m talking about. So many people talk about it in black-and-white terms.
I was able to “code” a front end that my contractors can log into to view the files they are authorized to see.
It helped me write so many different things that all work together to solve my problem.
It may or may not be the most efficient code, but in terms of overall business operation, it’s extremely efficient.
Tip, when you're done having it do your project, restart the chat, tell it that it's a security engineer and ask it to check for any vulnerabilities or anything that should be done to protect the site against malicious activities. Ask it if there's anything with your hosting or site that should be addressed.
Most of the training data out there is about how to get a task done and the best way to do it; there's a lot less on completing a project with security in mind. There is, however, a lot of data on how to secure already-written code, so it can do it, but it generally won't unless you ask.
That's a great tip: having it review the security of code that an earlier context generated.
I plan on having it write unit tests, or at least try to...
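For anyone curious what that looks like in practice, here's the kind of unit test an LLM will typically produce on request. The helper `can_view()` is a hypothetical stand-in for whatever access-control function the real front end exposes; both names are mine, not from any project in this thread.

```python
# Sketch of an LLM-generated unit test for a hypothetical
# access-control helper like the contractor front end described above.
import unittest


def can_view(acl, user, filename):
    """Return True if `user` is authorized to view `filename`.

    acl: dict mapping username -> set of filenames that user may see.
    """
    return filename in acl.get(user, set())


class CanViewTest(unittest.TestCase):
    def test_authorized_contractor_sees_file(self):
        acl = {"alice": {"invoice.pdf"}}
        self.assertTrue(can_view(acl, "alice", "invoice.pdf"))

    def test_unknown_user_sees_nothing(self):
        self.assertFalse(can_view({}, "mallory", "invoice.pdf"))
```

Even if the generated tests are shallow, they force the code into a testable shape, which makes the later hand-review much easier.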
Thanks! I’m going to do that. It’s a great idea.
I think LLMs are fine for shorter scripts. As a professional programmer, they've helped me write simple throwaway scripts, but those circumstances are rare.
My stance is that if you think LLMs help you get your job done, then use LLMs. They’re just another tool in your arsenal.
I don’t trust using LLMs for large, long-running software projects, though.
I have been building various things with AI coding tools for a month or so now. I rate the various engines on how far I can take them before they get hopelessly lost, unable to correct their own errors. For the best tools this seems to come after about 50 to 70 iterations of asking for small feature additions or error corrections, weaker tools (like Copilot) hit these infinite loops of fixing their errors with other errors much faster.
It's a good limit, because after 2-3 hours of AI interactive development, I can then spend 4-6 hours going through the resulting code - cleaning it up and understanding how it works. I suspect if AI were taking me farther, like 100-150 iterations, it would probably take me more like 15-20 hours to unravel the various things it comes up with - kind of a point of diminishing returns.
Bottom line: think of your project in terms of microservices. AI is pretty good at microservices. As long as the individual services are each robust in their delivery of the required functions, you're in good shape.
If it ever becomes "mystery meat," it's time to recode by hand.
It’s a neat tool, but be careful what you do with it. I wouldn’t make anything web-connected or otherwise requiring security considerations, for example.
AI coding is actually a very powerful tool, almost like a light saber. Do you notice how many amputations and artificial limbs there are in that galaxy far far away?
That’s pretty funny.
I'm working on a physics project, and my simulator suits my purposes and produces reliable results. And I learned a teeny bit about coding building it.
What's the largest program, in line count (wc -l will be close enough, or open the file in Notepad++ and scroll to the end), that you've created this way?
If you count only 100% vibed code, it's probably a 20-line script.
Usually, I tweak the code to fit my needs, so it's not 100% vibes at that point. This way, I have built a bunch of scripts, each about 200 lines long, though that arbitrary limit is just my personal preference. I could put them all together into a single horribly unreadable file, which would be maybe 1000 lines per project. However, the vast majority of them were modified by me, so they don't count.
If you ask for something longer than 20 lines, there's a very high probability that it still won't work by the 15th round of corrections. Either GPT just can't handle things that complicated, or maybe my needs are so obscure and bizarre that the training data just didn't cover those cases.
Try Claude by Anthropic. I noticed Copilot and Google getting hung up much faster than Claude.
Also, I find that if you encourage a good architecture, like a formalized system of variables with Atomic / Mutexed access and getter/setter functions, that seems to give a project more legs than letting the AI work out fiddly access protection schemes one by one.
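As one illustration of that "formalized system of variables" idea, here's a minimal sketch in Python (the class and names are my own, not from any project mentioned here): one object owns the shared state, a single lock guards every access, and the rest of the program only touches it through getters and setters.

```python
# Minimal sketch of centralized, lock-guarded shared state with
# getter/setter access. Illustrative names only.
import threading


class SharedState:
    """All shared variables live here; every access holds the lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._values = {}

    def set(self, key, value):
        with self._lock:        # every write goes through the lock
            self._values[key] = value

    def get(self, key, default=None):
        with self._lock:        # every read goes through the lock too
            return self._values.get(key, default)
```

Giving the AI one obvious pattern like this to extend seems to work better than letting it invent an ad-hoc protection scheme for each new variable as features accumulate.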
I got one up around 500 lines before it started falling apart when trying to add new features. That was a mix of Rust and HTML, total source file size was around 14kB, with what I might call a "normal amount" of comments in the code.
Using a tool as a tool to solve a problem?! Blasphemy, heretic, you shall bring about the end times!
But to be serious: AI is here, it exists, and you can't put things back in the box. Bitching about AI at this point is like bitching that the sun is bright and hot.
People need to just get the fuck over it and move on. Focus on regulating and updating laws to the new status quo. Not just bitching like some child having a tantrum because people are using a fucking tool.
I'm not bitching about the existence of code agents in general; I'm bitching about the general attitude that "code agents will replace programmers," because no the fuck they are not.
They can produce one-off apps and scripts well enough that non-programmers can solve their own problems (great!), but they lack the sophistication and context to build long-lasting, maintainable, and scalable applications, which is what you're hiring developers for in the first place.
I had a period of about 10 years where I bounced from company to company fixing non-programmers' code so that it could actually be used in commercial products that brought in revenue.
I’m with you on this. The only legit concern I hear is its environmental impact.
But things will become more efficient over time, and it has led to increased interest in nuclear energy, so I think it’s a problem that will take care of itself.
Commuting my fat ass to a climate-controlled office, out to lunch, back home; parking spaces, highway lane miles, fuel, periodic vehicle replacements... that all has environmental impacts too. If I can do my job in half the time, that's a big win for the environment.
I have been doing this stuff for over 40 years, the tools get faster and the ecosystems get more complex.
What would be really nice is a return to simplicity, using the fast tools to make simple stuff fast-squared, but nobody seems to want that.