Septimaeus

joined 2 years ago
[–] Septimaeus@infosec.pub 11 points 21 hours ago* (last edited 8 hours ago) (1 children)

It’s a user-friendly wrapper for existing fake quantum. It’s not a “physics shortcut” and it doesn’t “tackle quantum problems.”

Also no quantum problems have ever been “reserved for AI.” Some quantum solutions borrow optimization techniques from machine learning, but classical machine learning algorithms aren’t designed to leverage (or even consider) quantum effects.

I’m putting this out there because there’s a tendency to lump all the buzzwords, like AI and quantum, into one big category of powerful-technologies-I-don’t-understand, which results in hyperbolic projections and magical thinking that thwart progress.

[–] Septimaeus@infosec.pub 3 points 5 days ago* (last edited 3 days ago)

In case anyone’s curious: it is likely a cell wrapper misprint/typo.

4300 mWh AA lithium-ion cells are a standard extended-life chemistry. 2866 mAh is the same rating expressed as charge capacity at the cell’s nominal 1.5 V.

Edit: 2866 mAh × 1.5 V ≈ 4,300 mWh
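The conversion is just division by nominal voltage; a quick sketch (the helper name `mwh_to_mah` is mine, not from any battery spec):

```python
def mwh_to_mah(energy_mwh: float, nominal_voltage: float = 1.5) -> float:
    """Convert an energy rating (mWh) to charge capacity (mAh).

    Charge (mAh) = energy (mWh) / voltage (V). AA lithium-ion cells
    with buck converters hold a nominal 1.5 V output.
    """
    return energy_mwh / nominal_voltage

print(round(mwh_to_mah(4300)))  # 2867, within rounding of the 2866 mAh rating
```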

[–] Septimaeus@infosec.pub 7 points 6 days ago* (last edited 3 days ago)

Edit: I wasn’t actually disagreeing with the comment above. You should downvote me too.

Board of directors

Correct. The board defines the company, not the CEO.

CEOs are usually puppets. Whatever role they play, you can bet they were hired specifically to play it, and were incentivized to stick to the script.

Their job (legally, their fiduciary obligation) is to maximize shareholder value, to take the credit or blame, and fuck off.

The board (typically the key stakeholders) is only too pleased when the public focuses on the CEO, even if it’s for their shitty opinions, behavior, or obnoxious salaries.

Because the worst thing that could happen to them would be for the public eye to actually follow the money, and it’s easy to see why.

If the rabble truly fathomed just how many of those “golden parachutes” stakeholders stockpile with every disgraced CEO, however ceremoniously disavowed…

Accountability would shift to more permanent targets, yes, but more importantly, it would quickly become common knowledge that, all this time, there were in fact more than enough golden parachutes to go around.

[–] Septimaeus@infosec.pub 2 points 1 week ago (3 children)

This joke only works in Spanish.

[–] Septimaeus@infosec.pub 13 points 1 week ago (2 children)

Wait, is this Happy Tree Friends?

[–] Septimaeus@infosec.pub 1 points 1 week ago

Implications or assignment? They didn’t specify notation.

[–] Septimaeus@infosec.pub 1 points 1 week ago* (last edited 1 week ago)

Because what we call intelligence (the human kind) is usually just an emergent property of the wielding of various combinations of first- or second-hand experience by “consciousness,” which itself is…

What we like to call the tip of a huge fucking iceberg of constant lifelong internal dialogues, overlapping and integrating experiences all the way back to the memories (engrams, assemblies, neurons that wired together to represent something), even the ones so old or deep we can’t summon them any longer but that are often still measurable, still there, integrating like Lego bricks with other assemblies.

Humans continuously, reflexively, recursively tell and re-tell our own stories to ourselves all day, and even at night, just to make sense of the connections we made today, how to use them tomorrow, how they relate to connections we made a lifetime ago, and how they fit into the larger story of us. That “context integration window” absolutely DWARFS even the deepest language model, even though our own organic “neural net” is low-power, lacks back-propagation, etc. etc., and it is all done using language.

So yes, language is not the same as intelligence (though at some point some would ask, “who can tell the difference?”). HOWEVER… the semantic taxonomies, symbolic cognition, and various other mental tools that language enables are absolutely, verifiably required for this gargantuan context integration to take place.

[–] Septimaeus@infosec.pub 2 points 2 weeks ago

New York or Disney World

Got me

[–] Septimaeus@infosec.pub 6 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

For example, the tools for the really tedious stuff: large-codebase refactoring for style keeping, naming-convention adherence, all kinds of code smells, whatever. Lots of those tools have gotten ML upgrades and are a lot smarter and more powerful than what I remember from a decade ago (IntelliSense, JetBrains helper functions, various opinionated linter toolchains, and so forth).

While I’ve only experimented a little with some of the more explicitly generative LLM-based coding assistant plugins, I’ve been impressed (and a little spooked) at how good they often were at guessing what I was doing well before I finished doing it.

I haven’t used the prompt-based LLMs at all, because I’m just not used to it, but I’ve watched nearby devs use them for stuff like manipulating a bunch of files in a repeated pattern, breaking up a spaghetti method into reusable functions, or giving a descriptive overview of some gnarly undocumented legacy code. They seem pretty damn useful.

I’ll integrate the prompt-based tools once I can host them locally.

[–] Septimaeus@infosec.pub 33 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

I’ll admit, some tools and automation are hugely improved with new ML smarts, but nothing feels dumber than hunting for problems to fit the boss’s pet solution.

[–] Septimaeus@infosec.pub 21 points 2 weeks ago

It seems like the US patent system today is rarely anything but a solution to its own problem. In most cases a patent is little more than an expensive troll ward or a way to demonstrate due diligence to investors. What’s taken its place is time to market. If that’s true, the patent system should either be replaced with something that serves its intended purpose or that office should stop accepting applications.
