I think that it's an astute observation. AI wouldn't need to be hyped by those running AI companies if the value was self-evident. Personally I've yet to see any use beyond an advanced version of Clippy.
Showerthoughts
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame, so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.
What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.
I use it to romanize Farsi song lyrics. I cannot read the script, but ChatGPT can. The downside is that you have to do it a few lines at a time, or else it starts hallucinating about halfway through. There is no other tool that reliably does this; the one I used before, from the University of Tehran, seems to have stopped working.
Did the same yesterday with some Russian songs and was told by my Russian date that it was an excellent result.
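For anyone curious what that few-lines-at-a-time workaround might look like in practice, here is a minimal sketch using the official openai Python package. Everything specific here is an assumption on my part: the model name, chunk size, and prompt wording are just examples, not whatever the commenter above actually used.

```python
# Hypothetical sketch: transliterate lyrics a few lines at a time so the model
# is less likely to start hallucinating partway through a long text.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def romanize(lyrics: str, lines_per_chunk: int = 4) -> str:
    """Romanize Persian lyrics chunk by chunk and stitch the results together."""
    lines = [line for line in lyrics.splitlines() if line.strip()]
    romanized = []
    for i in range(0, len(lines), lines_per_chunk):
        chunk = "\n".join(lines[i:i + lines_per_chunk])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name, not a recommendation
            messages=[
                {"role": "system",
                 "content": "Transliterate the following Persian lyrics into "
                            "the Latin alphabet. Output only the romanization, "
                            "one line per input line."},
                {"role": "user", "content": chunk},
            ],
        )
        romanized.append(resp.choices[0].message.content.strip())
    return "\n".join(romanized)
```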
My top reasons I have no interest in ai:
- if it was great, it wouldn’t be pushed on us (like 3D TVs were)
- there is no accountability, so how can it be trusted without human verification? And if a human has to verify everything, the AI wasn't needed in the first place
- environmental impact
- privacy/security degradation
If AI truly was the next frontier, we wouldn’t be staring at the start of another depression (or a bad recession). There would be a revolution of innovations and most people’s lives would improve.
The idea that technological improvements would improve everyone's life is based on the premise that capitalists wouldn't keep the productivity gains for themselves.
AI does offer some efficiency improvements. But the workers won't get that money.
Note that it was improving the average worker's life proportionately until the capitalists broke the system in the early 70s.
I'm guessing that the trend wasn't actually consistent prior to WWII, and that it was just a blip from about 1945 to 1970.
Long ago, I'd make a Google search for something, and be able to see the answer in the previews of my search results, so I'd never have to actually click on the links.
Then, websites adapted by burying answers further down the page so you couldn't see them in the previews and you'd have to give them traffic.
Now, AI just fucking summarizes every result into an answer that has a ~70% chance of being correct, no one gets traffic anymore, and the results are less reliable than ever.
Make it stop!
Best I can offer is https://github.com/searxng/searxng
I run it at home and have configured it as the default search engine in all my browsers.
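For context on what running it at home can look like, here is a minimal sketch of querying a self-hosted SearXNG instance from Python. It assumes the instance is reachable at http://localhost:8080 and has the JSON output format enabled in its settings.yml; the address, function name, and query are just examples.

```python
# Hypothetical sketch: hit a local SearXNG instance's search endpoint and print
# the top results. Requires `requests` and a SearXNG instance with the JSON
# output format enabled in settings.yml.
import requests

BASE_URL = "http://localhost:8080"  # example address of a home instance

def search(query: str, limit: int = 5) -> list[dict]:
    """Return up to `limit` result dicts (title, url, content) from SearXNG."""
    resp = requests.get(
        f"{BASE_URL}/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])[:limit]

if __name__ == "__main__":
    for hit in search("lemmy showerthoughts"):
        print(f"{hit.get('title')} - {hit.get('url')}")
```

Making it the browser default is then just a matter of adding something like http://localhost:8080/search?q=%s as a custom search engine.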
AI has become a self-enfeeblement tool.
I am aware that most people are not analytically minded, and I know most people don't lust for knowledge. I also know that people generally don't want their wrong ideas corrected by a person, because it provokes negative feelings of self-worth, but they're happy being told self-satisfying lies by AI.
To me it is the ultimate gamble with one's own thought autonomy, and an abandonment of truth in favor of false comfort.
So, like church? lol
No wonder there's so much worrying overlap between religion and AI.
Had the exact same thought. If it was revolutionary and innovative we would be praising it and actual tech people would love it.
Guess who actually loves it? Authoritarians and corporations. Yay.
AI got tons of money from investors, and they will eventually want ROI… that's why they are trying to force it down our throats.
I've been wondering about a similar thing recently: if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became "mainstream"? Sure, Machine Learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically every other innovative technology, the nerds tended to have it years before everyone else, so why was it so different with AI?
Because AI is a solution to a problem individuals don't have. Over the last 20 years we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it's not being funded by average people either.
Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this was meant to help society, the AI products wouldn't be assisting people with killing themselves so that they can collect data on suicide.
And additionally, I've never seen an actual tech-savvy nerd who supports its implementation, especially in these draconian ways.
LLMs are a really cool toy, I would lose my shit over them if they weren't a catalyst for the whole of western society having an oopsie economic crash moment.
This is some amazing insight. 100% correct. This is an investment scam, likely an investment bubble that will pop if too many realize the truth.
AI at this stage is basically just an overrefined search engine, but companies are selling it like it's JARVIS from Iron Man.
As someone (forgot which blog I read it on, sorry) recently observed: if AI made software development so much easier, we'd be drowning in great new apps by now.
It's advertising. It's shoved in your face so you use Copilot instead of Google.
I set up a Brother all-in-one printer for my mother-in-law, and it wanted to install software that loads at startup and constantly pops up with their toner sales and marketing.
I learned a long time ago to never install manufacturer printer drivers. Or, at least, never install them from the provided Setup.exe.
They've always installed a bunch of bloatware (HP has always been the worst but other brands are just as bad).
If you look in the setup folder, there are usually raw drivers you can install from Device Manager. If the driver package is just a single .exe file, you can usually unpack it with 7-Zip and get at its inner contents.
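As a concrete illustration of that unpack-and-browse step, here is a hypothetical sketch that shells out to the 7-Zip command-line tool and then lists the .inf files Device Manager needs. The paths are made up, and it assumes 7-Zip is installed with "7z" on the PATH.

```python
# Hypothetical sketch: extract a single-file driver setup .exe with 7-Zip, then
# list the .inf files that Device Manager's "Have Disk..." dialog can use.
# Assumes 7-Zip is installed and the "7z" executable is on the PATH.
import subprocess
from pathlib import Path

installer = Path(r"C:\Downloads\printer_setup.exe")      # example path
out_dir = Path(r"C:\Downloads\printer_setup_extracted")  # example path

# "7z x" extracts with full directory structure; many setup .exe files are
# self-extracting archives that 7-Zip can open directly.
subprocess.run(["7z", "x", str(installer), f"-o{out_dir}"], check=True)

# The .inf files are what you point Device Manager at when adding the printer.
for inf in sorted(out_dir.rglob("*.inf")):
    print(inf)
```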
If that fails, the system-included HP LaserJet 4200 PCL driver is about as close to a universal print driver as you can find lol.
Most things are nothing more than smoke and mirrors to get your money. Tech especially. Welcome to end stage capitalism.
I was reading a book the other day, a science fiction book from 2002 (Kiln People), and the main character is a detective. At one point, he asks his house AI to call the law enforcement lieutenant at 2 am. His AI warns him that the lieutenant will likely be sleeping and won't enjoy being woken. The main character insists, and the AI says OK, but that it will have to negotiate with the lieutenant's house AI about the urgency of the matter.
Imagine that. Someone calls you at 2 am, and instead of you being woken by the ringing or not answering because the phone was on mute, the AI actually does something useful and tries to determine if the matter is important enough to wake you.
Some of the older lemmings here will remember what it was like when every company wanted to make a website, but they didn’t really have anything to put in there. People were curious to look at websites, because you hadn’t seen that many yet, so visiting them was kinda fun and interesting at first. After about a year, the novelty had worn off completely, and seeing YetAnotherCompanyName.com on TV or a road side billboard was beginning to get boring.
Did it ever get as infuriating as the current AI hype, though? I recall my grandma complaining about TV news: "They always tell me to read more online," she said. I guess it can get just as annoying if you manage to successfully ignore the web for a few decades.
I was an adult during that time, and I don't recall it being anywhere near as annoying. Well, except the TV and radio adverts spelling the address out at you, like "...or visit our website at double-you double-you double-you dot Company dot com. Again, that's double-you double-you double-you dot C-O-M-P-A-N-Y dot com."
YMMV, but it didn't get annoying until apps entered the picture and the only way to deal with certain companies was through their app. That, or if they did offer comparable capabilities on their website but kept a persistent banner pushing you toward their app.
I think back then, they had a product that was ahead of its time, and it just needed time for us to adapt to it.*
Now, they have a solution in search of a problem, and they don't know what the good use cases are, so they're just slapping it on everything, randomly and aggressively.
* I hate the way we did, though, and hope AI destroys the current corporate internet.
The 0.001% have stolen $76,000,000,000,000 from US workers alone in the last 40 years.
AI is just a way for them to burn money rather than allow the poors to have a raise.
A couple of years ago I read a news article written by a woman who had just left her Silicon Valley career. She was one of the people at the forefront of implementing AI, and it terrified her; she saw how bad it was and the long-lasting implications for society, so she bailed out due to conscientious objections.
Most obviously, OpenAI is still burning money like crazy, and now they're starting to offer porn AI like everyone else. 🤷‍♂️ Sometimes the current AI is useful, but as long as hallucinations and plain wrong answers are still a thing, I don't see it eliminating all jobs.
It's unfortunate that they're destroying the text and video parts of the internet along the way. Text was mostly broken before, but now images and videos are also untrustworthy and will be used for spam and misinformation.
Like my parent's Amazon Echo with "Ask me what famous person was born this day."
Like, if you know that, just put it up on the screen. But the assistant doesn't work for you. Amazon just wants your voice to train their software.
To be fair, the internet was fucking everywhere once the dotcom bubble kicked off. Everyone had a website, and even my mum was like "Don't bother your dad, he's on the internet" like it was this massive thing.
Those trying to sell it are trying to figure out where it's most useful. In one way, I think it's an amazing technology, and I also wonder how it can be best used. However, I can't stand it being pushed on me, and I wish I could easily say no. Acrobat Reader is particularly unbearable with it. Trying to describe a drawing?? Ughhh. Waste of space and energy like nothing else.
TL;DR
4 layers of stupidification. The (possibly willfully) ignorant user, the source bias, the bias/agenda of the media owner, then shitty AI.
AI should be a backup to human skill, not a replacement for it. It isn't good enough, and who knows when, or if, it ever will be at a reasonable cost. The problem with the current state of AI is that it's being sold as a replacement for many human jobs and for knowledge itself. 30-40 years ago we had to contend with basic human bias and nationalism filtering facts and news before they reached the end user. Then we got the mega-media companies owned by the ultra-wealthy, who consolidated everything and injected yet more bias; with the internet and social media, at least you were provided with multiple sources. Now we have AI being pushed as a source that can be programmed to use biased and/or objectively wrong sources, and people don't even bother checking another source. AI should be used to find unique solutions in medical research, materials design, etc. Not to decide whether or not microwaving your phone is a good idea.
The more you use AI the more data you are providing it.
- They want data in the hope they can train their data-centre-hosted LLMs to be accurate enough to take many jobs.
- If they achieve this and make every company and country dependent on their LLM, they rinse, and the former middle class is as fucked as the working class have been since the de-industrialisation of the 1980s. You're either made redundant, or your job can now be done by a long line of redundant people.
It is a race to the bottom.
At least, this is one possible outcome. There is a decent chance their data-centre-hosted LLMs just won't be accurate enough for mass deployment.
Canva just announced the next generation of Affinity. Instead of giving us Linux support, and even though Affinity is "free" now, they crammed in a bunch of AI to upsell you on a subscription.

