Trump is trying to ride Mamdani’s co
My mind completed that a totally different way.
Actually, it probably would have been better for it to get that far, so that a medical professional could have actually assessed her earlier...
Making sure they are really dead could be done a couple of ways...
Since embalming happens before a wake, I guess someone has "made sure" they are really dead before the wake even begins...
Hurray, everyone is off the platform. Honestly, that would be for the best...
Video encoding is generally not a likely workload in an HPC environment. Also, I'm not sure whether those results are really FreeBSD versus everyone else, or clang versus everyone else; I would have liked to see clang results on the Linux side. It's possible that the BSD core libraries did better, but they probably weren't doing that much work, and odds are the compiler made all the difference, and HPC is notorious for offering users every compiler they can get their hands on.
The kernel specifically makes a difference in some of those tests (forking favoring Linux strongly, semaphores favoring BSD strongly). The vector math and particularly the AVX-512 results would be most applicable to HPC users, and the Linux results are astoundingly better. This might be due to some linear algebra library that only bothered to support Linux, with the test suite using it when it was available. Alternatively, it could be that the BSDs either lacked CPU frequency management or defaulted to a different strategy that got in the way of vector math performance.
Keep in mind that AVX-512 would be a key factor in HPC (in fact the key factor for the Top500 specifically), and there the BSDs lag hugely. Also, the memory copy for whatever reason favors Linux, and Stream is another common HPC benchmark.
It's unclear how much of the benefit, where it showed up, was compiler versus OS. E.g., you can run clang on Linux, and HPC shops frequently have multiple compilers available; the sketch below shows one way to isolate the compiler's contribution.
This is before keeping in mind that a lot of the HPC ecosystem only bothers with Linux. For the best linear algebra library, the best interconnect, the best MPI, your chances are much better under Linux just by popularity.
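As a minimal sketch of how one might separate compiler effects from kernel effects: build the same Stream-style triad kernel with both clang and gcc on each OS, holding the vectorization flags constant. Everything below (file name, array size, flags) is my own assumption for illustration, not anything from the benchmark article, and it is nowhere near a rigorous benchmark.

    /* triad.c: a Stream-style triad (a[i] = b[i] + scalar * c[i]).
     * Build the identical source with different compilers to isolate the compiler, e.g.:
     *   clang -O3 -march=native triad.c -o triad-clang
     *   gcc   -O3 -march=native triad.c -o triad-gcc
     * (-march=native enables AVX-512 code generation on CPUs that support it.)
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 26)   /* 64M doubles per array, ~512 MB each */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < N; i++)   /* the loop the compiler should vectorize */
            a[i] = b[i] + 3.0 * c[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* three arrays of doubles are touched per iteration: read b, read c, write a;
         * printing a[N/2] also discourages the compiler from dropping the loop */
        printf("check=%g  %.2f GB/s\n", a[N / 2], 3.0 * N * sizeof(double) / secs / 1e9);

        free(a); free(b); free(c);
        return 0;
    }

A real comparison would also repeat the loop many times and pin the CPU frequency governor, which touches on the frequency-management guess above.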
Yeah, but in relatively small volumes and mostly as a 'gimmick'.
The Cell processors were 'neat' but enough of a PITA to largely not be worth it, combined with an overall package that wasn't really intended to be managed headless in a datacenter and sub-par networking that sufficed for internet gaming, but not as a cluster interconnect.
IBM did have higher-end Cell processors, at predictably IBM-level pricing, in more appropriate packaging and with proper management, but it was pretty much a commercial flop since, again, the Cell processor just wasn't worth the trouble to program for.
Yeah, that graph scale is absurd for comparison... I get it, they want to highlight the 'trend', but the scale of the US graph is nothing but a negligible slice of the bottom of the China graph; it's just impossible to intelligently compare the 'trends' in that manner...
I'm also skeptical of a claim of 0.0% for anyone. It looks to me that, by the criteria of the graph, China has managed to effectively tie the US on this sort of metric, and the US has roughly held it flat for the last 30 years.
As others point out, this particular metric may not be a good one, and depending on how you slice the other metrics, either China or the US technically comes out ahead, but broadly they have a more comparable standard of living than they used to.
FreeBSD is unlikely to squeeze more performance out of these. It's particularly disadvantaged because the high-speed networking vendors favored in many of these systems ignore FreeBSD (Windows is at best an afterthought); only Linux is thoroughly supported.
Broadly speaking, FreeBSD was left behind in part because of copyleft and in part by doing too good a job of packaging.
In the 90s, if a company made a go of a commercial operating system sourced from a community, it either went FreeBSD, effectively forked it, and kept its variant closed source without contributing upstream, or went Linux and was generally forced to upstream changes by copyleft.
Part of it may be due to the fact that a Linux installation is not from a single upstream, but assembled from various disparate projects by a 'distribution'. There's no canonical set of kernel+GUI+compilers+utilities for Linux, whereas FreeBSD is a much more prescriptive project. I think that's gotten a bit looser over time, but back in the 90s FreeBSD was a one-stop-shop, batteries-included project that maintained everything the OS needed under a single authority. Linux needed distributions, and that created room for entities like RedHat and SUSE to make their mark.
So ultimately, when those traditionally commercial Unix shops started seeing x86 hardware with a commercially supported Unix-alike, they could pull the trigger. FreeBSD was a tougher pitch since it hadn't attracted something like a RedHat/SUSE that also opted into the open source model of business engagement.
Looking at the performance of these applications on these systems, it's hard to imagine an OS doing better. Moving data is generally as close to zero-copy as a use case can get, and these systems tend to run essentially a single application at a time, so CPU and I/O scheduling hardly matter. The community used to sweat 'jitter', but at this point those background tasks are such a rounding error in overall system performance that they aren't worth even thinking about anymore.
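For a concrete picture of what 'close to zero-copy' means at the syscall level: on Linux, a file-to-socket transfer can go through sendfile(2), so the data never bounces through a userspace buffer. A minimal sketch, assuming Linux's sendfile signature (FreeBSD's sendfile takes different arguments); the function name and the already-connected sock_fd are hypothetical:

    /* Zero-copy file-to-socket transfer via Linux's sendfile(2): the kernel
     * moves page-cache data straight to the socket, with no userspace buffer.
     * Sketch only; 'sock_fd' is assumed to be an already-connected socket. */
    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int send_file_zero_copy(int sock_fd, const char *path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) { close(fd); return -1; }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(sock_fd, fd, &offset, st.st_size - offset);
            if (sent <= 0) { close(fd); return -1; }  /* real code would retry on EINTR */
        }
        close(fd);
        return 0;
    }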
Unlikely.
Businesses generally aren't that stoked about anything other than laptops or servers.
To the extent they have desktop grade equipment, it's either:
On servers, the Steam Machine isn't that attractive since it's not designed to either be slapped in a closet and ignored or slotted into a datacenter.
Putting all this aside, businesses love simplicity in their procurement. They aren't big on adding a vendor for a specific niche when they can use an existing vendor, even if in theory they could shave a few dollars in cost. The logistical burden of adding the Steam Machine would likely offset any imagined savings, especially if they had to own re-imaging and licensing themselves, when today they are accustomed to product keys embedded in the firmware and vendor preloads.
Maybe you could worry a bit more about the consumer market, where you have people micro-managing costs who are more willing to invest their own time, but even then the market for non-laptop home systems whose owners don't think they need nVidia but still want something better than an integrated GPU is so small that it shouldn't be a worry either.
Consoles are sold at a loss, and they recover it with games because the platform is closed.
Sometimes, but evidently not currently. Sources seem to indicate that only Microsoft says they are selling at a loss, which seems odd since their bill of materials looks like it should be pretty comparable to the PS5's...
I'll agree with the guess of around $800, but like you say, with the supply pressure on RAM and storage as well as the tariff situation all over the place, it's hard to say.
Problem is that AI didn't present as a "genre" and you get AI slop across the gamut.
I started a video because the title seemed like something I was interested in and the thumbnail seemed fine. Then within the first few seconds it was obviously lazy AI slop.
Short of limiting yourself to known acceptable channels, you can't really stave off the AI slop. Some categories get hit less often, but it's all over the place.