One arm hair in the hand is better than two in the bush
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below are allowed; this includes the use of AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
Honestly, I’m kind of impressed it’s able to analyze seemingly random phrases like that. It means it’s thinking and not just regurgitating facts. Someday such a phrase could actually come into use, and the AI wouldn’t need to wait for it to become mainstream.
It's not thinking. It's just spicy autocomplete; having ingested most of the web, it "knows" that what follows a question about the meaning of a phrase is usually the definition and etymology of that phrase; there aren't many examples online of anyone asking for the definition of a phrase and being told "that doesn't exist, it's not a real thing." So it does some frequency analysis (actually it's probably more correct to say that it is frequency analysis) and decides what the most likely words to come after your question are, based on everything it's been trained on.
But it doesn't actually know or think anything. It just keeps giving you the next expected word until it meets its parameters.
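The "next expected word" idea above can be sketched with a toy frequency model. This is just an illustration of the intuition, not how real LLMs work (they use neural networks over subword tokens, not raw bigram counts), and the corpus and function names here are made up for the example:

```python
from collections import Counter, defaultdict

# Toy "spicy autocomplete": count which word follows which in a tiny
# corpus, then always emit the most frequent next word. Real LLMs learn
# probabilities with neural nets, but the generate-the-likely-next-token
# loop has the same shape.
corpus = (
    "the meaning of the phrase is unclear "
    "the meaning of the idiom is old"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break  # no training data for this word; stop
        # pick the single most frequent continuation
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
# → the meaning of the meaning of the
```

Notice it never "knows" anything: it happily loops through "the meaning of the meaning of..." forever, because that's what the counts say comes next. Nothing in the loop checks whether the output is true.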
I mean are you asking it if there is a history of an idiom existing or just what the idiom could mean?
I for one will not be putting any gibberish into Google's AI for any reason. I don't find it fun. I find it annoying and have deliberately taken steps to avoid it completely. I don't understand these articles that want to throw shade at AI LLMs by suggesting their readers go use the LLMs, which only helps the companies that own them.
Like. Yes. We have established that LLMs will give misinformation and create slop because all their data sets are tainted. Do we need to continue to further this nonsense?