this post was submitted on 12 Mar 2025
560 points (97.9% liked)

Technology

The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.

This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.

There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.

In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.

Archive link: https://archive.ph/spnT6

[–] metaStatic@kbin.earth 44 points 2 days ago (16 children)

I've heard it put very well that AI is either having a Napster moment, in which case we will not recognise the world 10 years from now, or it's having an iPhone moment, and it will get marginally better at best but is essentially in its final form.

I personally think it's more like 3D movies, and in 20 years when it comes back around we'll look at this crap like it was red-and-blue glasses.

[–] DaGeek247@fedia.io 25 points 2 days ago (15 children)

I think it's the iPhone stage. We've had predictive text in some form or another for a long time now, but that's just LLMs. Can't speak for the image/video generators, but I expect those will become another tool in the box that gets better but does the same thing.

I just can't see a whole lot of improvement in these products making any change to how we use them already.

[–] pennomi@lemmy.world 9 points 2 days ago (12 children)

Transformer-based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice to be squeezed from them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs through the correct flow, you can get much stronger results from much weaker models.

How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
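Purely as illustration, the kind of self-checking flow described above can be sketched as a generate-then-verify loop: produce an answer, run a second verification pass over it, and retry with the critique folded in. This is a toy sketch, not the Atom of Thoughts method itself; `ask_model` and `stub_model` are hypothetical stand-ins for real LLM calls (the stub is deterministic so the control flow actually runs).

```python
def solve_with_check(question, ask_model, max_attempts=3):
    """Generate-then-verify loop: answer, check with a second prompt,
    retry with the critique folded in if verification fails."""
    critique = ""
    for _ in range(max_attempts):
        answer = ask_model(f"Q: {question}\n{critique}\nA:")
        verdict = ask_model(f"Q: {question}\nProposed answer: {answer}\n"
                            "Reply PASS if correct, else explain the error.")
        if verdict.strip().startswith("PASS"):
            return answer
        critique = f"Previous attempt failed: {verdict}"
    return None  # no answer survived verification


# Deterministic stub standing in for a real model: answers wrong on the
# first try, then correctly once a critique appears in the prompt.
def stub_model(prompt):
    if "Proposed answer:" in prompt:           # verification call
        return "PASS" if "4" in prompt else "FAIL: arithmetic error"
    return "4" if "failed" in prompt else "5"  # generation call

print(solve_with_check("What is 2 + 2?", stub_model))  # → 4
```

Whether a verifier built from the same kind of model can reliably catch its own class of errors is exactly the open question in the comment; the loop only shows the plumbing, not a guarantee.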

[–] Robaque@feddit.it 3 points 1 day ago

... a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.

Turing Completeness maybe?
