CheeseNoodle

joined 2 years ago
[–] CheeseNoodle@lemmy.world 3 points 2 hours ago

Ask deepseek then.

[–] CheeseNoodle@lemmy.world 13 points 20 hours ago (1 children)

It's 100% a huge issue with Amazon. At this point pretty much everyone knows how all the different sellers get lumped in together, so you order from seller A and receive a product from seller B because Amazon considers them the same thing. That means you can't even be sure whether the seller you thought you purchased from is the one that scammed you, and it means even buying from a company's official Amazon listing isn't protection.

[–] CheeseNoodle@lemmy.world 5 points 23 hours ago

Saw a picture of him without sunglasses recently, and the man has the bloodshot eyes of a ketamine addict; the sunglasses are there to hide that.

[–] CheeseNoodle@lemmy.world 2 points 1 day ago (1 children)

Crows are really communal though, so they might be a better bet for human-like intelligence in the long run. Magpies too, not because they'd help, but because they're both intelligent and total jerks, so even the eventual crow people wouldn't get to be happy, thus maximizing the chaos.

[–] CheeseNoodle@lemmy.world 3 points 4 days ago

Does that still apply when the CEO is also Emperor of the United States?

[–] CheeseNoodle@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

So I'd go with no at the moment, because I can easily get an LLM to contradict itself repeatedly in incredibly obvious ways.

I had a long-ass post written, but I think it comes down to this: we don't know what consciousness or self-awareness even are, and we just kind of collectively agree upon them when we think we see them, sort of like how morality is pretty much a mutable group consensus.

The only way I think we could be truly sure would be to stick it in a simulated environment and watch how it reacts over a few thousand simulated years, to figure out whether it's one of the following:

  • Chinese room: The potential AI in question just keeps dying, because despite seeming intelligent when prompted with training data, it has no ability to function when it's not spoon-fed the required information in advance. (I think current LLMs are here, given my initial statement in this post.)
  • Animal: It survives but never really advances beyond figuring out the behaviours required for survival. It's certainly conscious at this point, but it works more like a dog: it can follow commands and carry out tasks, but it has no true understanding of the meaning behind them.
  • Person: It starts seeking out information in ways not immediately necessary for its survival and basically does what we did with the whole tool thing and speculative reasoning skills. If it invents an equivalent to writing, then we can be pretty damn certain it's at human level, and not more like corvids (tools) or ants (agriculture).

Now personally I think that test is likely impractical, so we're probably going to default to calling it conscious when it can convince the majority of people that it's conscious for a sustained period... So I guess it has free will when it can start, or at least spark, a large grassroots civil rights movement?

[–] CheeseNoodle@lemmy.world 1 points 1 week ago (2 children)

I'd say it ends when you can't predict, with 100% accuracy 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there's an additional random number generator I don't have access to.
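A toy sampler sketches the point (the vocabulary, weights, and function name here are made up for illustration; real LLM serving stacks add temperature sampling, batching effects, and so on): with the seed pinned, the "model" is fully deterministic, and the apparent randomness comes only from an unseeded generator.

```python
import random

def toy_generate(prompt: str, seed=None) -> str:
    """Pretend next-token sampler. seed=None draws from OS entropy,
    mimicking the hidden RNG in hosted chat interfaces."""
    tokens = ["yes", "no", "maybe"]   # hypothetical vocabulary
    weights = [0.5, 0.3, 0.2]         # hypothetical model probabilities
    rng = random.Random(seed)         # fixed seed => reproducible draws
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same prompt + same seed: identical output on every run.
assert toy_generate("is it raining?", seed=42) == toy_generate("is it raining?", seed=42)
```

With `seed=None` the output can differ between calls even though nothing about the "model" changed, which is the ChatGPT situation described above.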

[–] CheeseNoodle@lemmy.world 6 points 1 week ago

R.I.P. Razer, robbed of a championship victory despite dropping its opponent into the pit (and, iirc, taking zero damage itself), because the judges decided to ignore a clear violation of the rules on overall size.

[–] CheeseNoodle@lemmy.world 5 points 1 week ago (1 children)

Possible dementia? I watched one of his speeches live recently, and he randomly jumped backwards, almost to the beginning, and repeated himself without batting an eye, at least twice. The creepy part was the reporters just ignoring it and continuing to ask questions as if nothing had happened.

[–] CheeseNoodle@lemmy.world 1 points 2 weeks ago (3 children)

It got worse than that. The ticketing company really wanted to get the money from him, so when he got hold of a copy of the records and pointed out that one ticket was for a completely different car, they modified the records on their end to change the make of the car so it would match his. Iirc he only got out of it because he had paper copies.

[–] CheeseNoodle@lemmy.world 1 points 1 month ago

So I read some interesting stuff on this recently. (Ignoring that brain size isn't as important as brain complexity for intelligence,) a lot of creatures with big brains, including our ancestors and elephants, had/have most of the extra mass in regions related to memory. The theory goes that simply remembering where everything is and picking the most likely solution (e.g. the nearest watering hole where you saw water at this time last year) is generally more effective than traits like creativity and imagination... right up until you hit a break point where you start making tools and seriously modifying your environment. As we developed agriculture we had less need to remember every little thing, so while we didn't get less intelligent, we did end up with worse memories, possibly gaining an even greater degree of creativity in return as those parts of the brain became more valuable in the new, self-created environment.
