They said the keff value doesn't exceed 1.27. Is that within the normal range for this? Anything over 1 is supercritical, so shouldn't any value greater than 1 be concerning?
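For reference, the textbook relations (just the standard definitions and the arithmetic, not a claim about whether 1.27 is normal for this particular design):

```latex
% k_eff compares successive neutron generations:
% k_eff < 1 subcritical, k_eff = 1 critical, k_eff > 1 supercritical
k_{\mathrm{eff}} = \frac{\text{neutrons in generation } n+1}{\text{neutrons in generation } n}

% Reactivity expresses the margin from criticality:
\rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}, \qquad
k_{\mathrm{eff}} = 1.27 \;\Rightarrow\; \rho \approx 0.213
```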
Okay, but the entire idea was to allow the electors to go against the will of the people if the people are a bunch of idiots and elect a despot wannabe. And when a despot wannabe actually got elected, the electors didn't go against the idiot electorate.
The electoral college experiment should be abandoned. It clearly didn't serve the function it was intended to serve when it was implemented over 200 years ago.
I really miss my family and friends. And also pulled pork.
Shouldn't Pokémon be catch, tag, and release, then?
Make it do the shit I don't want to do, then we'll talk.
When I say "the output of my ML," I mean I give the prediction and a confidence score. For instance, if a process has a high probability of being late based on the inputs, I'll say it'll be late, along with the confidence. That's completely different from feeding the figures into a GPT and repeating whatever the LLM says.

And when I say "ML," I mean a model I trained on specific data to do one very specific thing. There's no prompting and no chat-like output. It's not a language model.
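Roughly what that looks like in code (a minimal sketch; the feature names, data, and model choice are invented for illustration, not the commenter's actual pipeline):

```python
# Minimal sketch of a task-specific ML model that outputs a prediction
# plus a confidence score. All features and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [queue_length, staff_on_shift, backlog_days]
X_train = np.array([
    [12, 3, 0.5],
    [40, 2, 3.0],
    [8,  4, 0.1],
    [55, 2, 5.0],
    [20, 3, 1.0],
    [60, 1, 6.5],
])
# 1 = the process ended up late, 0 = on time
y_train = np.array([0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new process and report prediction plus confidence
new_process = np.array([[48, 2, 4.2]])
proba_late = model.predict_proba(new_process)[0, 1]
prediction = "late" if proba_late >= 0.5 else "on time"
print(f"Prediction: {prediction} (confidence: {proba_late:.0%})")
```

The model can only ever emit a class and a probability, which is the whole point: there's no prompt and no free-form text anywhere in the loop.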
Find the radiator at the highest point / farthest from the boiler, get a container, open the bleed valve you have pictured there, and close it when there doesn't seem to be any more air coming out. Use the container to catch any water. Make sure to top up the water in your heating system afterwards; these systems recirculate the same water, so the fill valves are normally kept shut.
Edit: just saw that you have a shared system, so ignore that last bit. But when in doubt, call the building super.
Also, the fact that your GF gave you "one afternoon" to fix it kinda sounds like a toxic personality trait. If there's anything else worrisome about her behavior, she might benefit from therapy. Having been in a relationship with someone with dark triad traits, it was shit, and I kinda wish I'd been able to get her the help she needed.
What's worse is that management conflates the two all the time, and whenever I give the outputs of my own ML algorithm, they think it's LLM output. Then they ask me to just ask ChatGPT to do any damn thing that I would usually do myself or feed into my ML to predict.
I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends, and it's not nearly as power hungry.
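As a toy illustration of that kind of trend-based prediction (all numbers invented purely for the example):

```python
# Toy example of "predictions based on past trends": fit a straight
# trend line to historical monthly values and extrapolate one month.
import numpy as np

months = np.arange(12)  # months 0..11
demand = 100 + 5 * months + np.random.default_rng(0).normal(0, 3, 12)

# Least-squares fit of a degree-1 polynomial (a straight trend line)
slope, intercept = np.polyfit(months, demand, deg=1)

next_month = 12
forecast = slope * next_month + intercept
print(f"Trend: {slope:.1f}/month, forecast for month {next_month}: {forecast:.0f}")
```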
Follow them home and shit in front of their house