cm0002

joined 2 weeks ago

FEX 2511 is out today. FEX is an open-source emulator, akin to Apple's Rosetta, that allows running x86/x86-64 applications on ARM64; in FEX's case, on ARM64 Linux devices, similar to other open-source projects like Box64.


Hey, it's me, the guy who posted here a couple weeks ago asking for the bare minimum concepts new Linux users should understand. I really appreciate the responses I got last time, and now I'm back with my first draft! It's not 100% complete, but I'd love some feedback from the Linux community. Let me know if there's anything I missed, or anything you think should be covered that I haven't talked about yet.

This will eventually be published as a permanent article on the site it is currently hosted on, as well as a video essay in the style of my other videos. I want it to be a resource people can share with others making the switch, and I'd like it to stay relatively future-proof for a good while at least. Please let me know if there's anything I should tweak, cover in a little more depth, add, remove, etc. I'd love the input!

Author @bpt11@reddthat.com


This paper comes up with a really clever architectural solution to LLM hallucinations, especially for complex, technical topics. The core idea is that all our knowledge, from textbooks to wikis, is "radically compressed": it gives you the conclusions but hides all the step-by-step reasoning that justifies them. The paper calls this vast, unrecorded network of derivations the "intellectual dark matter" of knowledge. LLMs being trained on this compressed, conclusion-oriented data is one reason why they fail so often: when you ask them to explain something deeply, they just confidently hallucinate plausible-sounding "dark matter".

The solution the paper demonstrates is to use a massive pipeline to "decompress" all of the steps and make the answer verifiable. It starts with a "Socrates agent" that uses a curriculum of about 200 university courses to automatically generate around 3 million first-principles questions. Then comes the clever part, which is basically a CI/CD pipeline for knowledge. To stop hallucinations, they run every single question through multiple different LLMs. If these models don't independently arrive at the exact same verifiable endpoint, like a final number or formula, the entire question-and-answer pair is thrown in the trash. This rigorous cross-model consensus filters out the junk and leaves them with a clean and verified dataset of Long Chains-of-Thought (LCoTs).
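The cross-model consensus step can be sketched roughly like this. This is a toy illustration of the filtering idea only; the function names, the normalization scheme, and the vote threshold are all my own illustrative assumptions, not the paper's actual pipeline code:

```python
from collections import Counter

def normalize(answer: str) -> str:
    """Canonicalize an endpoint so trivially different forms still match."""
    return answer.strip().lower().replace(" ", "")

def consensus_filter(question: str, solvers, min_agree: int = 3):
    """Run several independent solvers on the same question; keep the
    answer only if at least `min_agree` of them converge on the same
    verifiable endpoint, otherwise discard the Q&A pair entirely."""
    endpoints = Counter(normalize(solve(question)) for solve in solvers)
    answer, votes = endpoints.most_common(1)[0]
    return answer if votes >= min_agree else None

# Toy solvers standing in for independent LLMs:
solvers = [lambda q: "42", lambda q: " 42", lambda q: "42", lambda q: "41"]
print(consensus_filter("ultimate question", solvers))  # → 42
```

With three of the four toy solvers agreeing on the endpoint "42", the pair survives; had they disagreed below the threshold, it would be thrown in the trash exactly as the paper describes.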

The first benefit of having such a clean knowledge base is a "Brainstorm Search Engine" that performs "inverse knowledge search". Instead of just searching for a definition, you input a concept and the engine retrieves all the diverse, verified derivational chains that lead to that concept. This allows you to explore a concept's origins and see all the non-trivial, cross-disciplinary connections that are normally hidden. The second and biggest benefit is the "Plato" synthesizer, which is how they solve hallucinations. Instead of just generating an article from scratch, it first queries the Brainstorm engine to retrieve all the relevant, pre-verified LCoT "reasoning scaffolds". Its only job is then to narrate and synthesize those verified chains into a coherent article.
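The "inverse knowledge search" idea above is essentially a reverse index from a concept to every verified derivational chain that arrives at it. A minimal sketch, assuming each LCoT is just an ordered list of concepts ending at its conclusion (the data layout here is my guess at the general approach, not the paper's implementation):

```python
from collections import defaultdict

# Each verified LCoT: an ordered chain of concepts ending at a conclusion.
chains = [
    ["Newton's second law", "harmonic oscillator", "resonance"],
    ["Maxwell's equations", "wave equation", "resonance"],
    ["resonance", "laser cavity"],
]

# Inverse index: concept -> every chain that mentions it.
inverse = defaultdict(list)
for chain in chains:
    for concept in chain:
        inverse[concept].append(chain)

def leads_to(concept):
    """All derivational routes that terminate at `concept`."""
    return [c for c in inverse[concept] if c[-1] == concept]

# Two cross-disciplinary derivations (mechanics and electromagnetism)
# both arrive at "resonance":
print(leads_to("resonance"))
```

Querying a concept this way surfaces every verified route into it, which is what lets the engine expose cross-disciplinary connections that an ordinary forward search for a definition would never show.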

The results are pretty impressive. The articles generated this way have significantly higher knowledge-point density and, most importantly, substantially lower factual error rates, reducing hallucinations by about 50% compared to a baseline LLM. They used this framework to automatically generate "SciencePedia," an encyclopedia with an initial 200,000 entries, solving the "cold start" problem that plagues human-curated wikis. The whole "verify-then-synthesize" architecture feels like it could pave the way for AI systems that are able to produce verifiable results and are therefore trustworthy.


Among the notable improvements, the driver introduces a new environment variable, CUDA_DISABLE_PERF_BOOST, allowing users to disable CUDA’s default behavior of automatically boosting GPU clock speeds to higher power states during compute workloads.
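As a sketch of how such a variable would typically be used: it has to be present in the process environment before any CUDA context is created. Note the value "1" is my assumption about what enables the behavior; check the driver release notes for the accepted values.

```python
import os

# Assumption: "1" disables the automatic clock boost. Environment
# variables like this are read at CUDA initialization, so set it
# before importing or initializing any CUDA library in this process.
os.environ["CUDA_DISABLE_PERF_BOOST"] = "1"

# ... import / initialize your CUDA-backed library after this point ...
```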

https://www.nvidia.com/en-us/drivers/details/257493/

[–] cm0002@infosec.pub 6 points 1 day ago (2 children)

Weird, I didn't get one. Oh well, edited with a bypass link.

[–] cm0002@infosec.pub 23 points 3 days ago* (last edited 3 days ago)

Lol, what bots? This is all manual, hence why some posts get through with the dumb proxy URL that I typically edit out manually.

Why am I cross-posting .ml content?

I cross-post from .ml to the nearest relevant non-.ml comm to reduce the influence of .ml comms (and, indirectly, the instance as a whole). The goal is to make it an easier decision for other instance admins to defederate: one key reason I've identified that admins don't want to defederate is that .ml still has some very large comms and some niche comms.

Megathread on the issue

Some highlights from the link:

"Don't worry guys, the Uyghur Genocide was REALLY just birth control!" ~dessalines, .ml admin, dev https://lemmy.world/post/30580167

"See! nobody died IN Tiananmen Square, just AROUND it, so it doesn't count!!" ~ Davel, .ml admin https://lemmy.world/post/30673342

.ml admin Nutomic's continued transphobia https://lemmy.world/post/29222558 The original transphobic comment from Nutomic: https://lemmy.world/post/18236068

"NK is actually good and anything counter to that is Western propaganda!" ~dessalines, .ml admin, dev https://lemmy.world/post/31595035

General negative sentiment toward other instances that haven't "seen the way" yet ~davel, .ml admin https://lemmy.world/post/27426510

"If you don't support Russia then you just don't understand geopolitics" ~dessalines, .ml admin, dev https://lemmy.world/post/27352415

And so, so much documentation of clear, heavy-handed censorship and bias, also in the link. So much that I can't put it all here, because this comment would get really long.

I believe the behavior of its admins (the main admins are Lemmy devs) harms the overall growth of the Lemmy-verse, and maybe even the Threadiverse (since Lemmy kinda kicked off the Threadiverse), because of its association with the Lemmy devs and their insistence on using .ml as their personal political platform to spread harmful propaganda.

On the outside, bringing up Lemmy frequently leads to comments like "Lemmy? Isn't that the place with a bunch of tankies?" or "Tried Lemmy, but found it full of pro-Russia crap, so I left". The best way forward that I see is to either widely defederate from .ml like the rest of the Triad, or pressure them to put in place an admin team that is as fair and unbiased as possible.
