[–] Beacon@fedia.io 7 points 3 days ago (4 children)

Some of y'all are crazy reactionary. There's absolutely nothing wrong with asking an AI chatbot for an additional opinion. The AI shouldn't be making the decisions, and the AI shouldn't be the only place you look for opinions, but there's nothing wrong with AI being ONE OF the opinions you consider.

[–] FerretyFever0@fedia.io 19 points 3 days ago (3 children)

But it doesn't know anything. At all. Does Sweden not have a fuck ton of people who are trained to gather intelligence?

[–] Beacon@fedia.io 10 points 3 days ago

It doesn't matter whether it knows anything or not. The purpose is to gather ideas that you and the people in your cabinet didn't think of. Or ideas they didn't want to voice, because no one wants to tell the boss that their idea is bad. It's a GOOD thing when a politician seeks out multiple different viewpoints to consider. It doesn't matter if one of the viewpoints under consideration was produced by "a fancy auto-complete," as some haters like to say.

[–] Perspectivist@feddit.uk 0 points 3 days ago* (last edited 3 days ago)

It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.

It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.

[–] roofuskit@lemmy.world 13 points 3 days ago* (last edited 3 days ago) (2 children)

AI chatbots don't have their own opinions. All they do is regurgitate other people's opinions, and you have no idea what motivations determined how those opinions were weighted.

[–] Zexks@lemmy.world 0 points 3 days ago (1 children)

How is that any different from you? Objectively prove to everyone here that none of your opinions have ever been influenced by anything you've ever seen, read, or heard.

[–] vacuumflower@lemmy.sdf.org 1 points 1 day ago

Your own opinions are the result of a much bigger amount of much more relevant data in any case.

An AI model is a set of coefficients averaging a dataset by a "one size fits all" measure. Those coefficients are found through an expensive process, using criteria (again, "one size fits all") set by the company making it. From them, the machine generates (really, looks up) the most probable text. It's like a music box. A beautiful toy.
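To make the "music box" point concrete, here's a toy sketch in Python (the table and its numbers are purely hypothetical, not a real model): the "model" is nothing but a frozen table of coefficients, and generation is repeatedly picking the most probable next word.

    # Toy "music box": a frozen next-word probability table stands in for the
    # learned coefficients; generation is just repeated lookup of the likeliest word.
    toy_model = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 0.9, "up": 0.1},
    }

    def generate(word, steps=3):
        output = [word]
        for _ in range(steps):
            choices = toy_model.get(word)
            if choices is None:  # nothing stored for this word - the box just stops
                break
            word = max(choices, key=choices.get)  # always the most probable next word
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # -> "the cat sat down" - the same tune every time

Crank the handle and the same tune comes out. A real LLM has vastly more coefficients and context, but the generation step is the same kind of lookup-and-pick.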

So you have different motivations and abstract ideas in different situations, and you also have something like a shared codebook with other people making decisions: your instincts and associations. Reading what they say or seeing what they do, you build a mirror model in your head. It might be worse than the real thing, but it's something very hard for text analysis to approach.

That model doesn't: it has the same average line for all situations, and it also can't determine (at the level described) that it doesn't know something. To determine that you don't know something, you need an abstract model, not a language model.

I dunno what their current state is; everything I've read and kinda understood seemed to be about optimizing computation for language models and structuring their applications to imitate a syllogism system.

I think that, with current approaches, building a system that translates language into some abstract model (tokenization isn't even close to that; in any case, you'd need some topology with areas that can be easily merged or split, rather than token points with distances) and abstract entities back into language would be very computationally expensive.

[–] Beacon@fedia.io -1 points 3 days ago (3 children)

As I said in other comments, none of that matters. The point is to gather ideas that you otherwise might not have come across. And as I also said already, AI-generated opinion pieces should never be the only opinions you hear.

[–] roofuskit@lemmy.world 7 points 3 days ago (3 children)

What use is an opinion that can be neither explained nor defended by the one giving it? How is that useful to a person making decisions for millions of people?

[–] eager_eagle@lemmy.world 2 points 2 days ago (1 children)

LLMs can defend whatever you tell them to defend. What are you on about?

[–] roofuskit@lemmy.world -1 points 2 days ago (1 children)

No, it cannot. It does not understand anything, so it cannot actually defend its points. It can produce something that looks like a defense, but it doesn't understand what it is telling you. It can spit text back at you until the cows come home, but none of it can ever be trusted or relied on.

[–] eager_eagle@lemmy.world 1 points 2 days ago

it sounds like you've never used an LLM, mate.

You don't need to get philosophical about the definition of understanding to realize they give you arguments as valid as anyone else's.

[–] FrameXX@discuss.tchncs.de 1 points 2 days ago* (last edited 2 days ago) (1 children)

Just throw out the LLM ideas you don't find reasonable and only use the ideas that you yourself find reasonable. You don't instantly turn into a zombie when you use an LLM. You can still use your head.

[–] vacuumflower@lemmy.sdf.org 1 points 2 days ago (1 children)

It doesn't work like that, which is why con artists and propaganda often function by "offering advice."

[–] FrameXX@discuss.tchncs.de 1 points 1 day ago (1 children)

You protect yourself against "propaganda" by educating yourself and not falling for emotions. Not reading LLM responses won't save you from propaganda.

[–] vacuumflower@lemmy.sdf.org 1 points 1 day ago

This is like saying you protect yourself against malware by using an antivirus rather than by being careful about what you download and run. I hope you can see the analogy.

There's no such thing as "not falling for emotions" for humans.

[–] Beacon@fedia.io -4 points 3 days ago (1 children)

It can do those things. And you need to re-read what's already been said on this comments page, because how it's useful has been explained several times already.

[–] roofuskit@lemmy.world 8 points 3 days ago (3 children)

It really can't. It does not understand things.

[–] Perspectivist@feddit.uk 2 points 3 days ago

How is "not understanding things" preventing an LLM from bringing up a point you hadn't thought of before?

[–] Beacon@fedia.io 0 points 3 days ago

It really can. I've used AI, and I've personally seen it do this tons of times. It can make cohesive arguments backed up by reasonable rationales. It doesn't need to "understand" anything to do this. Do you need me to write some prompts and copy and paste the AI's responses to prove it to you?

[–] Beacon@fedia.io -2 points 3 days ago (2 children)

EDIT - y'all are so silly; you're just gonna downvote this incontrovertible proof of it doing exactly what you said it couldn't do, and pretend it doesn't exist.


Since it's so easy to do, I just went ahead and did it for you. Here are my prompts and the AI's replies. It will write about any opinion you want it to, and give cogent reasoning and arguments for that opinion.

Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is bad. Support this with clear arguments explaining your position

Why Removing Car Lanes for Bike Lanes Can Be Counterproductive

Reallocating car lanes to create bike lanes may seem progressive, but it can have unintended consequences that undermine its goals. In densely populated urban areas, vehicular traffic remains essential for commerce, emergency services, and commuting—especially for individuals who cannot rely on biking due to age, disability, or long travel distances. When car lanes are removed, congestion intensifies, leading to increased emissions as drivers idle longer and search for alternative routes.

Public transportation often depends on road infrastructure too. Buses trapped in traffic delays become less reliable, disproportionately affecting lower-income riders who depend on them. Additionally, essential services like ambulances and delivery trucks face new bottlenecks, risking delays that can have serious implications.

The assumption that cyclists will replace drivers ignores demographic and logistical realities. Bike lanes benefit a specific subset of urban users—but they shouldn’t come at the cost of broader mobility. Instead of eliminating car lanes, cities should consider expanding bike networks where space allows or investing in multimodal infrastructure that improves mobility for all users.

Urban planning should strive for balance, not substitution. Redesigning streets must enhance accessibility across the board, not create new friction between commuters who are simply trying to get where they need to go.


Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is good. Support this with clear arguments explaining your position

Opinion: Why Converting Car Lanes to Bike Lanes Benefits Everyone

Urban spaces must evolve to meet the needs of modern communities—and converting car lanes into bike lanes is one of the smartest, most equitable moves cities can make. First, bike lanes promote healthier lifestyles. By making cycling safer and more accessible, people are more likely to trade short car trips for active commuting, which supports physical health and lowers emissions.

Second, cities thrive on efficiency. Cars take up tremendous space and often carry just one passenger. Bikes, by contrast, use a fraction of that space and cause significantly less wear on infrastructure. Fewer car lanes mean reduced traffic volume over time as behavior adjusts, alleviating congestion—not worsening it.

Third, these changes make streets safer. Studies consistently show that well-designed bike infrastructure leads to fewer traffic collisions and fatalities. Cyclists separated from speeding vehicles feel safer, and drivers operate with greater awareness.

Finally, this shift reflects a cultural transformation: from car dominance to sustainable mobility. Prioritizing bike infrastructure signals that our cities value cleaner air, quieter streets, and more livable neighborhoods.

Reshaping roads isn’t just about bikes—it’s about building cities for people, not just machines. Let’s pedal forward together.

[–] patatahooligan@lemmy.world 1 points 2 days ago

Your examples, where an LLM defends a position you chose for it while producing obviously conflicting arguments, actually prove what the others have been telling you. This is meaningless slop. It clearly has no connection to any position the LLM might have appeared to hold on the subject. If it did, you would not be able to make it defend the opposite side without objections.

[–] roofuskit@lemmy.world 1 points 2 days ago (1 children)

I'm going to say this as nicely as possible: you've been taken in. You may think it is good at this because it appears to be and it relays information with confidence. Sometimes it even gets things entirely correct. But it is not good at these things and will often get things wrong. Important details are inaccurate or even entirely made up. It does not vet sources, and it could never do so, because it does not understand anything it is relaying to you. It is a tool designed to give you a more or less plausible approximation of what you requested, but it can never truly understand what it's talking about, so it is entirely unreliable.

You're being downvoted because you bought the snake oil and are now trying to promote it to other people.

[–] Beacon@fedia.io -1 points 2 days ago

Dude, you're talking nonsense. You said AI can't defend its arguments. I provided incontrovertible evidence that it can. I'm going to say this as nicely as possible: you want to think that LLMs are unable to do anything useful so that you can hold onto a false belief about ourselves. And that is why half of you downvoted my comment.

[–] vacuumflower@lemmy.sdf.org 1 points 2 days ago

You can't trust yourself to be impartial. That's why the scientific method and various approaches to leading a discussion (even dialectics, God forbid) were invented.

[–] frog_brawler@lemmy.world -1 points 3 days ago (2 children)

I certainly don’t want my attorney wasting time considering the opinions and motivations of the person that’s suing me.

[–] Beacon@fedia.io 4 points 3 days ago

Well, you're gonna have a shitty attorney then.

[–] lime@feddit.nu 8 points 3 days ago (1 children)

there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.

[–] Beacon@fedia.io 0 points 2 days ago (1 children)

As I wrote in another comment, you can run an AI model locally on your own computer, no internet involved.
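For example, here's a minimal sketch using the open-source llama-cpp-python library (the model path is just a placeholder for whatever GGUF model file you've downloaded beforehand):

    # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
    # The model path below is a placeholder - point it at any downloaded GGUF file.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/example-model.gguf")  # loaded from disk, no network

    response = llm(
        "Give three arguments for and three against this policy proposal.",
        max_tokens=256,
    )
    print(response["choices"][0]["text"])  # everything ran on the local machine

Once the model file is on disk, you can pull the network cable and it still works.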

[–] lime@feddit.nu 5 points 2 days ago (1 children)

of course you can. why would a career politician who has very visibly been interested only in politics since his teens know how to do that?

[–] Beacon@fedia.io 1 points 2 days ago (1 children)

Hah, true. But the point still stands that using an LLM wasn't itself a bad thing.

[–] lime@feddit.nu 7 points 2 days ago (1 children)

i'd say it's still bad because this is the leader of a government consulting with a stochastic parrot instead of experts.

[–] Beacon@fedia.io 1 points 2 days ago (1 children)

Presumably it wasn't instead of but in addition to, and therefore totally fine.

[–] lime@feddit.nu 4 points 2 days ago (1 children)

it's still not fine, as per my first point. it's leaking information to foreign interests.

[–] Beacon@fedia.io -1 points 2 days ago (1 children)

Right, but we already addressed that above. If it's done with a local AI on a PC that doesn't use a net connection, and it's used in addition to rather than instead of consulting with people, then it's totally fine.

[–] lime@feddit.nu 3 points 2 days ago

yeah but then we're no longer discussing the topic the thread is about.

[–] Perspectivist@feddit.uk 1 points 3 days ago (1 children)

There’s a certain irony in people reacting in an extremely predictable way - spewing hate and criticism the moment someone mentions AI - while seemingly not realizing that they’re reflexively responding to a prompt without any real thought, just like an LLM.

A tool isn’t bad just because it doesn’t do what you thought it would do. You just take that into account and adjust how you use it. Hammer isn't a scam just because it can't drive in screws.

[–] Beacon@fedia.io 0 points 3 days ago

Beautifully put