in addition to what others have said, i’d say a lot of civil infrastructure—hospitals, clinics, government facilities, etc.—is locked in either because of bad politics or weird vendor lock-in. my dad ran his own dental clinic, and he had to run a Windows server because his software vendor required it; that vendor did everything from appointment reminders, to the web portal, to billing, to showing which of your teeth were missing, to integrating with scanners and other equipment. it was shit software that looked like Windows 3.1 well into the 2020s, but it did the job and the 24hr support was reliable. just an anecdote, but as a software engineer i was fascinated by it.
chrash0
definitely. Qualcomm provides the SoC and drivers for what comes on that package, but you’ll want to add a battery controller, power controls, and other embedded systems onto the motherboard to make it act like a real system. the boot process is also, in my experience, way different from a normal x86 platform’s. the difference between ARM and x86 isn’t just the instruction set. plus, at this level nothing is ever plug and play.
as for how Valve was able to ship an ARM device: they stuck to the normal kinds of IO a mobile device with an SD8gen3 would have, and they already had a great OS for fast iteration that they keep tight control over.
i’m excited for this X Elite line, but i can see how it’s not in Qualcomm’s best interest to spend their engineering labor on porting to desktop Linux, not with Microsoft and Dell etc already having bids on that time. as long as Qualcomm is upstreaming and not actively blocking open source development, i don’t understand the kind of resentment i see for them. because they work with Google? i see them becoming more open as they become more prolific outside of embedded systems and Android. i see it as an exposure problem.
he’s been salty about this for years now, frustrated at companies throwing more training data and compute at LLMs hoping for another emergent breakthrough like GPT-3. i believe he’s the one who really tried to push the Llama models toward multimodality
there’s clearly some Stallman-level hyperbole in here that makes this rant (which i mostly agree with) hard to take seriously.
one thing that stands out to me is that the author speaks with authority on the way the web and HTTP are “supposed to be”, and as someone who has been through a crash course in application networking systems over the past couple of years, i wonder: why does no one agree on what the web is “supposed to be”? people will recite RFCs at each other like Bible verses, and will just as readily point out contradictions between RFCs, or RFCs that are simply not adhered to (we once spent a long time on cookies that different browsers treated differently, where Chromium wasn’t following the RFC). this is kind of my problem with the application layer as it exists today: it feels like feature bloat from parties that, over generations, have tried to assert their vision for it; Cloudflare in this case, like Chromium and others before it.
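one concrete example of the cookie mess (mine, not necessarily the one we hit): RFC 6265 doesn’t define a `SameSite` attribute at all — it only exists in the 6265bis draft — yet since Chrome 80 Chromium treats cookies without `SameSite` as `SameSite=Lax` and rejects `SameSite=None` without `Secure`, behavior no published RFC requires:

```
Set-Cookie: session=abc123
Set-Cookie: tracker=xyz789; SameSite=None
```

under a strict RFC 6265 reading, both are valid unrestricted cookies; Chromium restricts the first to same-site requests by default and drops the second entirely for lacking `Secure`.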
maybe i’m the crazy one, but it feels like there’s room for disruption in this layer.
i don’t understand. don’t they operate in one of the largest Linux platforms around, Android? if you mean they don’t support your desktop wifi chipset or publish modules for their SoCs, then i guess that’s fair to say. but i think a deeper integration with Linux can only be a good thing. i guess my perspective on Qualcomm is colored by the fact that i worked with them briefly on an embedded project, have seen their docs, and have booted their dev kits into a full Ubuntu environment.
sucks
(but also maybe yay for Linux on ARM?)
i guess the point that’s being missed is that when i say “hard” i mean practically impossible
my point is that it’s hard to program someone’s subjective point of view, however carefully it’s written up in legalese, into a detection system, especially when those same detection systems can be used to great effect to train systems to bypass them. any such detection system would likely be an “AI” in the same way the ones they ban are, and would be similarly prone to mistakes and to reflecting the values of the company (read: Jack Dorsey) rather than enforcing any objective ethical boundary.
but what are the criteria? just because you think you have a handle on it doesn’t mean everyone else does, or even shares your conclusion. and there’s no metric here i can measure in order to, for example, block it from my platform.
what about the neural networks that power the DSP modules in all modern cell phone cameras? does a neural network filter that generates a 3D mesh, or rather imposes a 3D projection (e.g. putting dog ears on yourself, or Memojis), count? what if i record a real video and have Gemini/Veo/whatever edit the white balance? i don’t think it’s as cut and dried as most people think
three, maybe four things:
flake.nix

some things are resistant to documentation and have a lot of stateful components (Home Assistant is my biggest problem child from an infra perspective), but mainly being in that graph mindset of “how would i find a path here if i forgot where this was” helps a lot
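for the curious, a minimal sketch of what a flake.nix like that might look like (the host name and module path here are hypothetical, not my actual config):

```nix
{
  description = "machine configs, pinned via the flake lock";

  # pin nixpkgs so every machine builds from the same revision
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    # hypothetical host; the real config lives in ./configuration.nix
    nixosConfigurations.homelab = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```

the point being: everything a machine runs hangs off this one graph root, so “where did i configure that” always has a findable path.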