Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below are allowed; this includes bots using AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
I like my project manager, they find me work, ask how I'm doing and talk straight.
It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.
COs are corporate politicians, media-trained to say only things that are completely unrevealing and lacking in substance.
This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.
I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you're having on a personal project or what toy to buy for your cat's birthday.
Right, that sweet spot between too little stimulation, where your brain just wants to sleep or run away, and enough stimulation that you can't just zone out (or sleep).
Optimizing AI performance by “scaling” is lazy and wasteful.
Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.
Thing is, same as with GHz, you have to push it as far as you can until the gains get too small. You do that, then you move on to the next optimization. Likewise, AI has moved on and is now optimizing test-time compute, token quality, and other areas.
They're throwing billions upon billions into a technology with extremely limited use cases that is, at best, a novelty. My god, even drones fared better in the long run.
I mean it's pretty clear they're desperate to cut human workers out of the picture so they don't have to pay employees that need things like emotional support, food, and sleep.
They want a workslave that never demands better conditions, that's it. That's the play. Period.
If this is their way of making AI, brute-forcing the technology without innovation, then maintaining the infrastructure will probably cost these companies more than just hiring people. These AI companies are already not making much money relative to what they cost to run, and unless they charge companies millions of dollars just to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kinda pointless if they aren't willing to prioritize efficiency.
It's basically the same argument they have with people. They don't wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don't want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It's the never-ending cycle of cutting corners only to eventually make less money than you would have if you'd done things the right way.
Absolutely. It's maddening that I've had to go from "maybe we should make society better somewhat" in my twenties to "if we're gonna do capitalism, can we do it how it actually works instead of doing it stupid?" in my forties.
The oligarchs running these companies have suffered a psychotic break. What exactly the cause is, I don't know, but the game they're playing is a lot less about profits now. They care about control and power over people.
I theorize it has to do with desperation over what they see as an inevitable collapse of the United States, and they are hedging their bets on holding onto the reins of power for as long as possible until they can fuck off to their respective bunkers while the rest of humanity eats itself.
Then, when things settle, they can peek their heads out of their hidey-holes and start their new Utopian civilization or whatever.
Whatever's going on, profits are not the focus right now. They are grasping at ways to control the masses...and failing pretty miserably I might add...though something tells me that scarcely matters to them.
And the tragedy of the whole situation is that they can't win because if every worker is replaced by an algorithm or a robot then who's going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories until you can't expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire and we start from scratch.
It's ironic how conservative the spending actually is.
Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?
No.
Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using them. They just keep releasing very similar, mostly bog standard transformers models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.
Deepseek is what happens when a company is smart but resource constrained. An order of magnitude more efficient, and even their architecture was very conservative.
The actual survey result:
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think this current technology will produce AGI when scaled. I think most people agree, including the investors pouring billions into this. They aren't betting this will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.
This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.
It's becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what's been built is a glorified homework cheater and a better search engine.
what's been built is a glorified homework cheater and an ~~better~~ unreliable search engine.
I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.
They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It's often implied (e.g. you'll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).
With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.
The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.
AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55 year old human has produced since birth.
Complete waste.
Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and not even come close to what they say it is. These days we're in the "bells and whistles" phase where they add unnecessary bullshit to make it seem new like adding 5 cameras to a phone or adding touchscreens to cars. Things that make something seem fancy by slapping buzzwords and features nobody needs without needing to actually change anything but bump up the price.
I remember listening to a podcast about scientific explanations. The guy hosting it is very knowledgeable about this subject, does his research, and talks to experts when the subject involves something he isn't an expert in himself.
There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you're doing, and you need a theory of how it works, before you make new assumptions and test those assumptions). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.
So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).
Even the current wave of LLMs is simply a product of Google's paper showing that we could parallelize language models, leading to the creation of "large language models". That was Google doing science. But you can't control when some new breakthrough is discovered, and LLMs are subject to this constraint.
In fact, the only practice we know of that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon them and have insights you didn't even think about, and so on.
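For the curious, the parallelization point from that paper (the 2017 "Attention Is All You Need" transformer paper) is roughly this; here's a minimal NumPy sketch, not code from the paper, with illustrative names and shapes: self-attention compares every position with every other position in a few matrix multiplies, whereas a recurrent model has to step through the sequence one token at a time.

```python
import numpy as np

def self_attention(x, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a whole sequence at once.

    x: (seq_len, d_model) input embeddings
    W_q, W_k, W_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ W_q, x @ W_k, x @ W_v           # all positions projected in parallel
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (seq_len, seq_len) in one matmul
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ v                            # weighted sum, still one matmul

def rnn_step_by_step(x, W_h, W_x):
    """Toy recurrent pass: each step depends on the previous hidden state,
    so the sequence cannot be processed in parallel."""
    h = np.zeros(W_h.shape[0])
    for x_t in x:                                 # inherently sequential loop
        h = np.tanh(W_h @ h + W_x @ x_t)
    return h
```

The attention pass is just matrix multiplies over the whole sequence, which is exactly what GPUs are good at; the recurrent loop can't be parallelized across time steps, which is what kept earlier language models from scaling the same way.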
Me and my 5,000 closest friends don't like that the website and their 1,300 partners all need my data.
I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).
The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it's been downhill since.
There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.
The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they have governments in their pockets. Governments are dependent on cloud / Microsoft software, literally every country on this planet, except maybe China, North Korea, and Russia. They can raise prices 10 times over the next 10 years and not give a fuck, spend 1 trillion on AI and say "we're nearly there" over and over again, and literally nobody can stop them right now.
Imo our current versions of AI are too generalized. We add so much information to make them good at everything that it all mixes together into a single grey hallucinating slop, and the AI ends up being good at nothing.
We need to find ways to specialize AI, and give each one a more consistent and concrete personality, to move forward.
Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge that's a jack of all trades and master of none.
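Something in this spirit already exists under names like mixture-of-experts and model routing. Here's a minimal, purely illustrative sketch of the routing idea; the specialist names and the keyword-based router below are made up for illustration, and real systems use a learned router rather than keyword matching:

```python
# Hypothetical sketch of routing prompts to specialised models instead of one
# generalist. The specialist names and keyword router are invented for
# illustration; real systems (e.g. mixture-of-experts) learn the routing.
from typing import Callable, Dict

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "code":    lambda prompt: f"[code-model answer to: {prompt}]",
    "medical": lambda prompt: f"[medical-model answer to: {prompt}]",
    "general": lambda prompt: f"[general-model answer to: {prompt}]",
}

def route(prompt: str) -> str:
    """Crude keyword router: pick the specialist whose domain the prompt mentions."""
    lowered = prompt.lower()
    if any(word in lowered for word in ("python", "bug", "function", "compile")):
        return "code"
    if any(word in lowered for word in ("symptom", "dosage", "diagnosis")):
        return "medical"
    return "general"

def answer(prompt: str) -> str:
    """Dispatch the prompt to its specialist and return that model's reply."""
    return SPECIALISTS[route(prompt)](prompt)

print(answer("Why does my Python function crash?"))  # handled by the "code" specialist
```

In practice a learned gating network replaces the keyword check, but the structure is the same: many narrow models behind one dispatcher instead of a single generalist.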
It's because customers don't want it or care for it; it's only the corporations themselves that are obsessed with it.
Pump and dump. That’s how the rich get richer.
Current big tech is going to keep pushing limits and have social media influencers/YouTubers do the marketing, with their consumers picking up the R&D bill. Emotionally I want to say stop innovating, but really: cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or a base-model MacBook Air. Those are easy 7-10 year computers. No need to rush and get the latest and greatest. I'm talking about computing in general. Case in point, gaming: more people are waking up and realizing they don't need every new GPU, studios are burnt out, IPs are dying because there's no lingering core base left to keep the franchise afloat, and consumers can't keep opening their wallets. Hence studios like Square Enix starting to support all platforms instead of the late-stage-capitalism move of launching their own launcher with a store. It's over.
Meanwhile a huge chunk of the software industry is now heavily using this "dead end" technology 👀
I work in a pretty massive tech company (think, the type that frequently acquires other smaller ones and absorbs them)
Everyone I know here is using it. A lot.
However, my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.
So yeah turns out if you teach your employees how to use a tool, they start using it.
I'd say LLMs have made me about 3x as efficient or so at my job.
Your labor before they had LLMs helped pay for the LLMs. If you're 3x more efficient and not also getting 3x more time off for the labor you previously put in so your bosses could afford the LLMs, you got ripped off, my dude.
If you're working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.
Real "tread on me harder, daddy!" vibes all throughout this thread. Meanwhile your CEO is buying another yacht.
I am indeed getting more time off for PD
We delivered a project 2 weeks ahead of schedule, so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid, on the clock.
Some companies are indeed pretty cool about it.
I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.
At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there's actual financial incentive for me to perform way better.
You also are ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time, but spend all that freed up time instead focusing on quality, polishing the product up, documentation, double checking my work, testing, etc.
Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn't miss anything.
And instead of mad rushing 8 hours straight, it's just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I'm literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.
I will say that I am genuinely glad to hear your business is giving you breaks instead of breaking your backs.
It's not that LLMs aren't useful as they are. The problem is that they won't stay as they are today, because they are too expensive. There are two ways for this to go (or an eventual combination of both):
- Investors believe LLMs are going to get better, and they keep pouring money into "AI" companies, allowing them to operate at a loss for longer. That's tied to the promise of an actual "intelligence" emerging out of a statistical model.
- Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that's called enshittification.