
There exists a peculiar phenomenon in the intellectual landscape of our time — a man who hides behind the armor of credentials while spouting nonsense with the conviction of Moses descending from the mountain. Jordan Peterson, that professor emeritus of psychology at the University of Toronto, has mastered the art of rhetorical sleight-of-hand, dazzling the credulous with bombastic verbiage while the discerning observer witnesses nothing but a carnival barker hawking pseudointellectual snake oil.

Let us not mince words here. The man is, to put it in terms that would likely send him scrambling for his thesaurus, full of shit.

This self-appointed messiah to disaffected young men began his meteoric rise by lying — yes, lying — about Bill C-16, a modest piece of Canadian legislation that simply added gender identity and expression to existing federal anti-discrimination and hate-speech protections. Peterson, with the theatrics of a third-rate Shakespearean actor, declared he would rather starve himself in prison than comply with imaginary pronoun police that existed only in the fever dreams of his increasingly baroque paranoia. Legal experts overwhelmingly dismissed his interpretation as nonsense, yet his followers, desperate for a champion against the phantom menace of “postmodern neo-Marxism,” lapped it up like kittens at a saucer of milk.

The Carnivore Carnival: Peterson’s Dietary Delusions

Perhaps nowhere is Peterson’s intellectual charlatanism more nakedly exposed than in his evangelical promotion of the so-called “carnivore diet” — an absurd nutritional regimen that would make even the most committed Paleolithic revivalist blush with embarrassment. “I eat beef and salt and water. That’s it. And I never cheat. Ever,” he proclaimed on Joe Rogan’s podcast, with all the zealotry of a man experiencing a religious conversion rather than a nutritional change.

According to the Gospel of Peterson, this miraculous meat-only diet cured his depression, anxiety, gastric reflux, snoring, gum disease, and psoriasis. One half-expects him to claim it also restored his virginity and taught his pet lobster to recite Solzhenitsyn.

Any qualified nutritionist — those inconvenient experts with actual knowledge — would tell you this dietary approach lacks scientific support, defies basic nutritional science, and potentially endangers those foolish enough to follow it. But why let evidence intrude upon a good story? Peterson, ever the clinical psychologist, naturally feels qualified to dispense nutritional advice with the certainty of someone who has never encountered the concept of epistemic humility.

The man speaks with prophetic certainty while peddling advice that wouldn’t pass muster in a high school health class.

The Fascist Whisperer: Dog Whistles and Authoritarian Tendencies

Peterson’s flirtation with far-right talking points reveals the hollowness at the core of his supposed classical liberalism. His incessant railing against “postmodernism” and “cultural Marxism” — the latter term having deeply problematic roots in literal Nazi propaganda — provides just enough plausible deniability while sending clear signals to the darkest corners of the internet. His work has been enthusiastically embraced by the alt-right not because they’ve misunderstood him, but because they hear exactly what he’s saying.

The man who claims to stand for individual rights has called for the creation of a website identifying “postmodern neo-Marxist” professors and courses so students can avoid them — a blacklist by any other name would smell as foul. Such calls for punitive measures against ideological opponents reveal the authoritarian instincts lurking beneath the veneer of intellectual freedom.

 

According to a recent Gallup poll, 70% of Americans now believe the American Dream is no longer attainable for the average person. This stark figure represents a seismic shift in national consciousness.

The myth of American exceptionalism is crumbling under the weight of reality.

Economic Collapse

The numbers tell a story of economic devastation wrought by unfettered capitalism. Since 1978, CEO compensation has grown 940%, while typical worker compensation has risen just 12%.
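
To see what those two growth rates imply for the headline CEO-to-worker pay ratio, a quick back-of-the-envelope calculation helps. The sketch below assumes, purely for illustration, a starting ratio of roughly 30-to-1 in 1978; only the 940% and 12% figures come from the paragraph above.

```python
# Back-of-the-envelope: how the two cited growth rates compound into
# today's CEO-to-worker pay ratio. The 940% and 12% figures are from
# the text; the 30-to-1 starting ratio is an illustrative assumption.

ceo_growth = 9.40        # +940% since 1978, i.e. pay is now 10.4x its 1978 level
worker_growth = 0.12     # +12% since 1978, i.e. pay is now 1.12x its 1978 level
starting_ratio = 30      # assumed CEO-to-worker pay ratio in 1978

ending_ratio = starting_ratio * (1 + ceo_growth) / (1 + worker_growth)
print(f"Implied CEO-to-worker pay ratio today: roughly {ending_ratio:.0f} to 1")
```

Under those assumptions the gap widens to roughly 280-to-1, in the same neighborhood as the ratios labor economists typically report.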

Three men now own more wealth than the bottom 50% of Americans combined.

Nearly 40% of Americans cannot afford a $400 emergency expense, according to the Federal Reserve, while Wall Street posts record profits.

“What we’re witnessing isn’t just inequality — it’s a systematic transfer of wealth from the working class to the ultra-wealthy,” says economist Dr. Thomas Piketty, author of “Capital in the Twenty-First Century.”

A minimum-wage worker needs the equivalent of 2.8 full-time jobs to afford a modest rental in most major cities, according to the National Low Income Housing Coalition.
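
For readers curious where figures like that come from, the standard “housing wage” arithmetic rests on the 30%-of-income affordability threshold. The sketch below walks through it with placeholder numbers; the rent and wage used here are illustrative assumptions, not values from the NLIHC report.

```python
# Standard "housing wage" arithmetic: housing counts as affordable when
# rent consumes no more than 30% of gross income. The rent and wage
# below are illustrative placeholders, not NLIHC's published inputs.

monthly_rent = 1200.00       # example rent for a modest two-bedroom unit
hourly_wage = 7.25           # federal minimum wage
hours_per_week = 40
weeks_per_year = 52

annual_income_per_job = hourly_wage * hours_per_week * weeks_per_year
affordable_rent_per_job = annual_income_per_job * 0.30 / 12

jobs_needed = monthly_rent / affordable_rent_per_job
print(f"Full-time jobs needed at ${hourly_wage:.2f}/hour: {jobs_needed:.1f}")
```

Plug in a big-city rent and the multiple climbs quickly, which is precisely the point the NLIHC report makes.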

Wages, adjusted for inflation, have remained essentially stagnant since the 1970s, while productivity has increased by over 250%.

Healthcare Crisis

Americans pay roughly 2.5 times as much for healthcare as citizens of other developed nations while receiving worse outcomes.

Medical debt is the leading cause of bankruptcy in America, with 66.5% of all bankruptcies tied to medical issues, according to the American Journal of Public Health.

“The American healthcare system isn’t broken — it’s functioning exactly as designed: to extract maximum profit while providing minimum care,” notes Dr. Marcia Angell, former editor-in-chief of the New England Journal of Medicine.

Over 500,000 families go bankrupt each year due to medical bills — a phenomenon that doesn’t exist in any other developed nation.

[–] theHRguy@lemmy.world 0 points 1 month ago

The Luddites were right to be upset, because the rapid introduction of automated textile machinery directly threatened their livelihoods and the economic stability of their communities. Skilled workers who had long relied on their craft were suddenly replaced by cheaper, less skilled labor operating new machines, leading to mass unemployment, falling wages, and widespread poverty. The new factory system also undermined established labor practices, eroded job security, and forced workers into harsher conditions for lower pay, all while the government and factory owners prioritized profit over workers’ well-being. Their protests were not against technology itself, but against the way it was used to exploit labor and destabilize traditional ways of life without offering protections or fair compensation to those displaced.

 

Prelude to a Machine-Governed World

One cannot help but marvel at the spectacular intellectual fraud being perpetrated upon the global public — a deception so grand in scope and ambition that it makes religious dogma seem quaint by comparison. We are being sold, with remarkable efficiency, the notion that artificial intelligence represents humanity’s crowning achievement rather than what it increasingly appears to be: the final abdication of human agency to algorithmic governance by corporate proxy.

The evidence of this great surrender manifests most visibly in what can only be described as the AI sovereignty wars — a geopolitical reshuffling that would be comical were it not so catastrophically consequential. At the vanguard stands the United States and China, locked in what observers politely term “strategic competition” but what history will likely record as mutual technological determinism of the most reckless variety.

“We stand at a moment of transformation,” intoned President Trump at the White House unveiling of the Stargate Project, the $500 billion private AI-infrastructure venture his administration has championed, “where American ingenuity will once again demonstrate supremacy over authoritarian models.” The irony that this declaration of technological liberation came packaged with unprecedented surveillance capabilities was apparently lost on those applauding.

Let us not delude ourselves about what this escalation represents: not a race toward human flourishing but a contest to determine which flavor of algorithmic control — corporate-capitalist or state-authoritarian — will dominate the coming century. The distinctions between these models grow increasingly academic as their practical implementations converge toward remarkably similar ends.

The European Regulatory Mirage

Meanwhile, across the Atlantic, the European bureaucracy performs its familiar dance of regulatory theater — drafting documents of magnificent verbosity that accomplish precisely nothing. The EU’s Code of Practice for generative AI stands as perhaps the most spectacular example of this performative governance: a masterclass in how to appear concerned while remaining steadfastly ineffectual.

According to the European Digital Rights organization, fully 71% of the AI systems deployed within EU borders operate without meaningful human oversight, despite regulatory frameworks explicitly requiring such supervision. Rules without enforcement are merely suggestions, and suggestions are what powerful entities traditionally ignore with impunity.

This regulatory charade would be merely disappointing were it not so perfectly designed to create the worst possible outcome: sufficient regulation to stifle meaningful innovation from smaller entities while leaving dominant corporate actors essentially untouched behind minimal compliance facades. One searches in vain for evidence that European regulators have encountered a technology they couldn’t render simultaneously overregulated and underprotected.

“The gap between regulatory ambition and enforcement capacity has never been wider,” notes Dr. Helena Maršíková of the Digital Ethics Institute in Prague. “We have created paper tigers that tech companies have already learned to navigate around before the ink has dried.”

Civil society groups across Europe have responded with predictable outrage, organizing demonstrations that political leaders acknowledge with sympathetic nods before returning to business as usual. The pattern has become depressingly familiar: public concern, followed by regulatory promises, culminating in implementation that bears only passing resemblance to the original intent.

What makes this cycle particularly pernicious in the AI context is that each iteration further normalizes algorithmic intrusion while simultaneously lowering expectations for meaningful constraints. The Overton window shifts not through sudden movements but through the gradual acclimatization to what previously would have been considered unacceptable overreach.

The Great Replacement: Human Labor in the Crosshairs

If the geopolitical dimensions of the AI sovereignty wars weren’t sufficiently alarming, the economic disruption promises to be equally profound. The techno-optimist fairytale — that automation creates more jobs than it displaces — faces its ultimate test against technologies explicitly designed to replace human cognition across increasingly sophisticated domains.

Statistical models from the McKinsey Global Institute suggest that over 10 million jobs across professional sectors could face displacement within the next three years — a figure that may prove conservative as generative AI capabilities continue to improve. Perhaps most concerning is that, unlike previous technological transitions, the jobs most immediately threatened include those requiring advanced education and specialized training.
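
Displacement projections of this kind are usually built by multiplying occupation-level employment counts by an assumed share of automatable tasks and summing the results. The toy sketch below shows that aggregation with invented numbers; it is not McKinsey’s model or data, just the general shape of such an estimate.

```python
# Toy illustration of how occupation-level displacement estimates are
# typically aggregated: employment counts times an assumed share of
# automatable tasks. All numbers below are invented for illustration.

occupations = {
    # occupation: (workers employed, assumed automatable task share)
    "customer support": (2_900_000, 0.45),
    "paralegals":       (  350_000, 0.40),
    "copywriters":      (  130_000, 0.55),
    "data entry":       (1_600_000, 0.70),
}

exposed = sum(workers * share for workers, share in occupations.values())
print(f"Jobs exposed under these assumptions: {exposed:,.0f}")
```

The headline number is only as good as those exposure shares, which is why such projections vary so widely from study to study.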

The notion that we will smoothly transition to some nebulous “knowledge economy” where humans add value through uniquely human qualities becomes increasingly implausible when those supposedly unique qualities — creativity, contextual understanding, ethical judgment — are precisely what AI systems are being engineered to simulate.

Reddit threads devoted to “AI anxiety” have grown by 840% over the past year, with users increasingly expressing what mental health professionals term “purpose dislocation” — the growing fear that one’s contributions have been rendered superfluous by algorithmic alternatives.

“We’re seeing patients expressing profound existential concerns about their future relevance,” explains Dr. Jonathan Keller, a psychologist specializing in technology-related anxiety disorders. “These aren’t Luddites or technophobes — they’re often highly educated professionals watching their expertise being rapidly commoditized.”

The psychological consequences of this transition remain insufficiently examined, perhaps because they raise uncomfortable questions about the social contract underlying modern capitalism. If work provides not just economic sustenance but identity and purpose, what happens when that work becomes algorithmically obsolete for a substantial percentage of the population?

References to a “Wall-E future” — where humans are reduced to passive consumers while automated systems manage society — have migrated from science fiction circles to mainstream discourse with disturbing speed. The comparison is imperfect but illuminating: not that humans will become physically incapacitated, but that their agency may be systematically diminished through computational convenience.

Algorithmic Governance: Democracy’s Silent Subversion

Perhaps nowhere is the surrender to algorithmic authority more concerning than in government itself. The Trump administration’s Office of Management and Budget memoranda directing federal agencies to implement AI systems across government services represent a watershed moment in the relationship between democratic governance and automated decision-making.

The OMB directive calls for “leveraging artificial intelligence to improve efficiency and customer experience across government services” — benign-sounding language that obscures the profound shift in how citizens interact with the state. What goes unmentioned is how these systems fundamentally alter accountability structures, creating layers of algorithmic intermediation between policy and implementation.

The OECD has warned repeatedly about the risks of “accountability gaps” in algorithmic governance, noting that “when decisions previously made by elected officials or civil servants are delegated to automated systems, traditional mechanisms of democratic accountability may no longer function effectively.”

Despite these warnings, the implementation proceeds with remarkable speed and minimal public debate. Government by algorithm arrives not through constitutional amendment or legislative overhaul but through administrative procurement decisions and technical implementations largely invisible to the public.

A particularly troubling 2024 audit of AI implementation across federal agencies found that 68% of deployed systems lacked comprehensive explainability features — meaning they operated as functional black boxes even to those nominally responsible for their oversight. When governance becomes algorithmically mediated, explanation shifts from democratic right to technical inconvenience.

“We’re witnessing the greatest transformation in how government functions since the administrative state emerged in the early 20th century,” argues Professor Elaine Kamarck of the Brookings Institution. “Yet unlike that transition, which was accompanied by robust public debate and institutional adaptation, this one is occurring largely beyond public scrutiny.”

The implications for democratic legitimacy are profound and largely unexplored. Citizens who already feel alienated from governmental processes will likely experience further distancing when their interactions are mediated through algorithmic interfaces optimized for efficiency rather than democratic engagement.
