this post was submitted on 06 Jul 2025
62 points (67.4% liked)

Technology

72669 readers
3168 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
all 41 comments
sorted by: hot top controversial new old
[–] hendrik@palaver.p3x.de 69 points 5 days ago* (last edited 5 days ago) (1 children)

LLMs reproducing stereotypes is a well researched topic. They do that due to what they are. Stereotypes and bias in (in the training data), bias and stereotypes out. That's what they're meant to do. And all AI companies have entire departments to tune that, measure the biases and then fine-tune it to whatever they deem fit.

I mean the issue aren't women or anything, it's using AI for hiring in the first place. You do that if you want whatever stereotypes Anthropic and OpenAI gave to you.

[–] kambusha@sh.itjust.works 17 points 5 days ago (1 children)

Just pattern recognition in the end, and extrapolating from that sample size.

[–] hendrik@palaver.p3x.de 7 points 5 days ago

Issue is they probably want to pattern-recognize something like merit / ability / competence here. And ignore other factors. Which is just hard to do.

[–] technocrit@lemmy.dbzer0.com 67 points 5 days ago* (last edited 4 days ago) (1 children)

I dunno why people even care about this bullshit pseudo-science. The study is dumb AF. The dude didn't even use real resumes. He had an LLM generate TEN fake resumes and then the "result" is still within any reasonable margin of error. Reading this article is like watching a clown show.

It's all phony smoke and mirrors. Clickbait. The usual "AI" grift.

[–] kozy138@slrpnk.net 10 points 4 days ago

I feel as though generating these "fake" resumes is one of the top uses for LLMs. Millions of people are probably using LLMs to write their own resumes, so generating random ones seems on par with reality.

[–] MysticKetchup@lemmy.world 28 points 4 days ago (2 children)

Seems like a normal, sane and totally not-biased source

collapsed inline media

[–] AbidanYre@lemmy.world 8 points 4 days ago

What the fuck did I just read?

[–] ohwhatfollyisman@lemmy.world 16 points 5 days ago

and their companies are biased against humans in hiring.

[–] ter_maxima@jlai.lu 8 points 4 days ago (1 children)

I don't care what bias they do and don't have ; if you use an LLM to select résumés, you don't deserve to hire me. I make my résumé illegible for LLMs on purpose.

( But don't follow my advice. I don't actually need a job so I can pull this kinda nonsense and be selective, most people probably can't )

[–] patrick@lemmy.bestiver.se 1 points 4 days ago (2 children)

How do you make it illegible for LLMs?

You write a creative series of deeply offensive curse words in small white on white print.

[–] ter_maxima@jlai.lu 1 points 1 day ago

Add a whole bunch of white on white nonsense ! You can also insert letters in the middle of words with a font size of 0, although that fucks up a human copy-pasting too, so probably not recommended.

The simplest way is to make your CV an image, and include no OCR data (or nonsense OCR data) in the PDF

[–] burgerpocalyse@lemmy.world 7 points 5 days ago

these systems cannot run a lemonade stand without shitting their balls

[–] MuskyMelon@lemmy.world 6 points 4 days ago

Even before LLMs, resumes were processed through keyword filters already. You have to optimize your resume for keyword readers, which should work for LLMs as well.

I use the ARCI model to describe my roles.

[–] OutlierBlue@lemmy.ca 4 points 4 days ago (1 children)

So we can use Trump's own anti-DEI bullshit to kill off LLMs now?

[–] thann@lemmy.dbzer0.com 1 points 4 days ago

Well, ya see, trump isnt racist against computers

They're as biased as the data they were trained on. If that data leaned toward male applicants, then yeah, it makes complete sense.

[–] mienshao@lemmy.world 3 points 4 days ago

Would be cool if the Technology community found literally any other topic to discuss beyond AI. I’m really over it, and I don’t care.

[–] LovableSidekick@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

Only half kidding now... the way morality and ethics get extrapolated now by the perfection police, this must mean anti-AI = misogynist.

[–] berno@lemmy.world 1 points 4 days ago

Bias was baked in via RLHF and also existed in the datasets used for training. Reddit cancer grows