you're not gonna get one.
a reflection of who is in charge of it
not even that. it's an inherently more regressive version of whatever data that person feeds it.
there are two arguments for deploying this shit outside of very narrow laboratory uses (where everyone was already using other statistical models):

A. this is one last grasp at fukuyama's 'end of history', one last desperate scream of the liberal order: they want to be regressive shitheads and build the abdication machine as their grand industrial-philosophical project, so they can do whatever horrible shit they want and claim they're still compassionate and only doing it because the computer said so.

B. this is a project by literal monarchists, people who wish to kill democracy; to murder truth and collaboration and replace them with blind tribalistic loyalty to a fuhrer/king. the rhetoric coming from a lot of the funders of these things supports this.
this technology is existentially evil, and will be the end of our society either way. it must be stopped. the people who work on it must be stopped. the people who fund it must be hanged.
AI would be fine. we do not have artificial intelligence. full stop. none of the technologies being talked about even approach intelligence. it's literally just autocorrect. do you know how the autocorrect on your phone's software keyboard works? then you know how a large language model works. it's the same kind of formula, just scaled up and recursed a bunch. I could have endless debates about what 'intelligence' is, and I don't know that there's a single position I would commit to very hard, but I know, dead certain, that it is not this. turing and minsky agreed when they first threw this garbage away in 1951: too many hazards, too few benefits, and insane, unreasonable costs.
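to make the autocorrect comparison concrete, here's a minimal sketch of next-word prediction via bigram counting, roughly what a keyboard's suggestion bar does. the function names are invented for illustration, and a real LLM uses a neural net over far longer contexts rather than a count table, but predict-the-next-token is the shared objective being pointed at:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which (toy keyboard-style model)."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts, prev_word):
    """Return the statistically most likely next word, or None if unseen."""
    if prev_word not in counts:
        return None
    return counts[prev_word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(suggest(model, "the"))  # -> 'cat', the most frequent follower
```

no understanding anywhere in there, just frequency lookup; scaling changes the machinery, not the nature of the task.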
but there's more to it than that. large (whatever) models are inherently politically conservative. they are made of the past. they do not struggle, they do not innovate, and they do not integrate new concepts, because they don't integrate any concepts; they just pattern match. you cannot have social progress when decisions are made by large (whatever) models. you cannot have new insights. you cannot have better policies. you cannot improve. you can only cleave closer and closer to the past, and reinforce it by feeding it its own decisions.
It could perhaps be argued, in a society that had once been perfect and was doing pretty well, that this is tolerable in some sectors, as long as someone keeps an eye on it. right now we're a smouldering sacrifice zone of a society. that means any training data would be toxic horror or toxic horror THAT IS ON FIRE. this is bad. these systems are bad. anyone who advocates for these systems outside extremely niche uses that probably all belong in a lab is a bad person.
and I think, if that isn't your enemy, your priorities are deeply fucked, to the point you belong in a padded room or a pine box.
no but see it's not violence at all, because it's against minorities. so it's fine. whereas the mildest fraud against your bank is literally worse than the holocaust.
no but see it's not propaganda if it validates my worldview.
tbf those are just as prevalent.
kill everyone in valhalla and then
isn't that kind of the whole vibe? that's just, like, every day there right?
crazy! like this is what rules were always for, or something! but then that's insane; if that were true, the wealthy, the powerful, and their servants would never be punished for violating them. so i guess it couldn't be that.
there are ways other than capitalism to support people who do stuff, but i'm guessing most of these people wouldn't support you in those ways either, and it's just a hierarchy issue.
no but see it's about hierarchy. the factory farm is bigger than me, and so I must serve it and obey on its terms. the local farm is just a person. I bet I could take them in a fight. therefore they should serve me and all interactions should be on my terms.
no, it makes sense. devaluing human life, and spreading the idea that sometimes the weak just die, with nothing anyone can do about it, is very much something they want to do. plus, burying your children is one hell of a sunk cost.
that's not to say she's cognitively functional, but that's why her masters won't put their foot down.
A lot of systems we have already made are super fucked up. this is true. a lot of them were designed to fuck shit up, and be generally evil. we do that sometimes.
these systems only serve to magnify them. see, there's been a massive marketing push to call these things "artificial intelligence". they're not. they tell you it's all too complex to explain, but type something on your phone. no, really, do it. like a sentence or two. anything.
you just used the small, easily comprehensible version of a large (thing) model. the problem is that as you try to scale up the complexity, the data and compute it takes grow exponentially, because it's the same kind of algorithm your software keyboard uses to autocorrect, but with a bunch of recursion in it and much larger samples to reference every time someone hits a key.
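a back-of-the-envelope sketch of why that scaling hurts, assuming a raw lookup-table predictor (the 50,000-word vocabulary is an assumed number, and real systems compress this with neural nets, but the combinatorics are what the compute bills are fighting):

```python
# toy arithmetic: how many distinct contexts a lookup-table next-word
# predictor would need to cover as its context window grows.
# numbers are assumptions for illustration, not measurements of any real system.
vocab_size = 50_000  # assumed vocabulary size

for context_len in range(1, 6):
    possible_contexts = vocab_size ** context_len
    print(f"{context_len}-word context: {possible_contexts:.2e} possible contexts")
```

one word of context is fifty thousand rows; five words is over three hundred septillion.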
there are some philosophical implications to this!
see, there is no neutral. there is no such thing as a view from nowhere. which means these systems aren't neutral either. they need to be trained on something. you don't just enter axioms; that would be actual AI, and this, again, isn't that. these are tools for making statistical correlations.
there's no way to do this that is 'neutral' or 'objective'. so what data do you think these tools get fed? let's say you're a bank. let's say you're wells fargo, and you want to make a large home-loan-assessment model. so you feed it all the data from your institution going back to the day your company was founded, back in stagecoach and horse times.
so you have names of applicants, and house statistics, and geographic location, and all sorts of variables to correlate and weigh in deciding who gets a home loan.
which is great if your last name is, for example: hapsburg. less good if your last name is, for example: freeman. and you can try to find ways to compensate, if you want to, keeping in mind that the people who made this system may actively want to stop you. but it's possible. the catch is that these systems are very, very good at finding secret little correlations. they're fucking amazing at it. it's kind of their shit. it is the thing they're actually good at. so you'll find weird new incomprehensibly cryptic markers for how to be a racist piece of shit, all of which will stay inside the black box and be used to entrench historical financial bigotry.
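a sketch of that dynamic on made-up data (synthetic numbers, hypothetical feature names like `zip_code`; nothing here is from any real lender's system). the protected attribute is deliberately withheld from the model, and it rediscovers the bias through a correlated proxy anyway:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # protected attribute (hidden from the model)
zip_code = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)  # noisy proxy for group
income = rng.normal(50, 10, n)                # legitimate-looking feature

# historical approvals were biased against group 1, not driven by income alone
approved = ((income > 45) & (group == 0)).astype(int)

X = np.column_stack([zip_code, income])       # note: `group` itself is excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)
print("coefficients [zip_code, income]:", model.coef_[0])
# the zip_code coefficient comes out large and negative: the model has
# reconstructed the protected attribute through its proxy.
```

in a real pipeline the proxy wouldn't be one neatly labeled column; it'd be smeared across hundreds of features, which is exactly what makes the markers cryptic.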
death is the great equalizer, but this system can be backed up indefinitely. it will not die unless somebody kills it, which could be really hard. people can learn to be less shit, at least in theory: we can have experiences off the job that wake us up to ways we used to suck. this system can't, though. people can be audited, but aside from rebuilding the whole damn thing, you can't really do maintenance on these things. the webs of connections are too complicated, maybe on purpose; we can't know what changing an already-trained large (whatever) model will do.
so these systems are literally incapable of being better than us. they are designed to be worse. they are designed to justify our worst impulses. they are designed to excuse the most vile shit we always wanted to do. they are forged from the jungian shadow of our society, forged from the sins, and only the sins, of our ancestors, forged with the intent of severing our connection to material reality and forcing all people to surrender: to lay down arms once raised in support of the great titan truth that has always stood between regressive agendas and their thousand-year reich, so they can finally kill it and replace it with blind tribalistic loyalty to their fuhrer.
so please stop shilling for this neon-genesis-evangelion-ass-fuckery.