Posted to c/science_memes @ Mander.xyz on 26 Mar 2025

[–] DrBob@lemmy.ca 0 points 4 days ago (10 children)

When I was in grad school, I mentioned to the department chair that I frequently saw a mis-citation of an important paper in the field. He laughed and said he was responsible for it: he made an error in the 1980s, and people copied his citation from the bibliography. He said it was a good way to identify people who cited papers without reading them.

[–] zephorah@lemm.ee 0 points 4 days ago (1 children)

Another basic demonstration of why oversight by a human brain is necessary.

A system rooted in pattern recognition somehow can't recognize the basic two-column format of published and printed research papers.

[–] thedeadwalking4242@lemmy.world 0 points 4 days ago (3 children)

To be fair, the human brain is a pattern recognition system; it's just that the AI developed thus far is shit.

[–] lengau@midwest.social 0 points 4 days ago (2 children)

Give it a few billion years.

[–] thedeadwalking4242@lemmy.world 0 points 4 days ago (7 children)

As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't exactly figure out how our brains work, we might be able to create something better.

[–] dustyData@lemmy.world 0 points 4 days ago (9 children)

The human brain is not a computer. That was a fun simile to make in the 80s, when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, and not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do, in many different ways.

[–] Akrenion@slrpnk.net 0 points 4 days ago (2 children)

Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

[–] Cethin@lemmy.zip 0 points 4 days ago (1 children)

I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.

[–] Akrenion@slrpnk.net 0 points 4 days ago (2 children)

We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on, and even replicated. However, there is probably a line past which we create something conscious just for the sake of a few months' worth of calculations.

There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the Matrix for those brainoids. We are not at the scale of whole-brain reproduction, but there is a reason the cerebral organoid wiki page has an ethics section that links to further concerns in the neuro world.

[–] Tlaloc_Temporal@lemmy.ca 0 points 4 days ago

I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.

It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI isn't close to doing yet, but that could change quickly.

There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.

I'd give it a 50-50 chance for singularity this century, if development isn't stopped for some reason.

[–] Cethin@lemmy.zip 0 points 4 days ago (2 children)

The issue is that LLM systems are pattern recognition without any logic or awareness. It's pure pattern recognition, so it can easily find patterns that aren't desired.

[–] LibertyLizard@slrpnk.net 0 points 4 days ago (1 children)

Wait, how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?

[–] KickMeElmo@sopuli.xyz 0 points 4 days ago (1 children)

AI consumed the original paper, interpreted the words from the two adjacent columns as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
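
For anyone wondering what that looks like mechanically, here's a minimal sketch (the column text is made up, and real OCR pipelines are far more complex) of how naive line-by-line extraction of a two-column page mashes unrelated words together:

```python
# Hypothetical two-column page content, loosely modeled on the kind of
# layout blamed for "vegetative electron microscopy".
left_column = [
    "studies of the",
    "vegetative",           # left column continues its own sentence
    "cells showed that",
]
right_column = [
    "examined under the",
    "electron microscopy",  # right column continues a different sentence
    "images revealed",
]

# Column-aware extraction: read the left column fully, then the right.
correct = " ".join(left_column) + " " + " ".join(right_column)

# Naive extraction: read each printed line straight across both columns.
naive = " ".join(f"{left} {right}" for left, right in zip(left_column, right_column))

print(naive)
# "vegetative electron microscopy" now appears in the text as if it were
# a single term, ready to be scraped into a training set or copy-pasted.
```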

[–] TheTechnician27@lemmy.world 0 points 4 days ago (4 children)

Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That's repugnant.

[–] JohnDClay@sh.itjust.works 0 points 4 days ago (2 children)

I don't think it's even a hot take

[–] SpaceNoodle@lemmy.world 0 points 4 days ago (1 children)

It's a hot take, but it's also objectively the correct opinion

[–] OpenStars@piefed.social 0 points 4 days ago

Unfortunately, that's what should be the case, although so many times it is not. :-(

[–] 1stTime4MeInMCU@mander.xyz 0 points 4 days ago (2 children)

Yeah, this is a hot take: I think it's totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is their second language, and they would benefit from a pass through AI enhancement, or from a translation tool if they're more comfortable writing in their native language. However, I am not in favor of submitting it without reviewing every single word, or of using it to synthesize new concepts or farm citations. That's not research, because anybody can do it.

[–] Pregnenolone@lemmy.world 0 points 4 days ago (1 children)

I have an actual hot take: the ability to communicate productive science shouldn’t be limited by the ability to write.

[–] pupbiru@aussie.zone 0 points 4 days ago (1 children)

if your contribution is a paper that you don't even proofread to ensure it makes any sense at all, then your contribution isn't “productive science”; it's a waste of everyone's time

[–] moakley@lemmy.world 0 points 4 days ago (2 children)
[–] pupbiru@aussie.zone 0 points 4 days ago

well at least you know my comment wasn’t written by AI 😞

[–] jjagaimo@sh.itjust.works 0 points 4 days ago* (last edited 4 days ago) (1 children)

There are people in academia now who publish bullshit, incomprehensible papers that may well be wrong, just to justify continued funding and not rock the boat. It keeps them employed and paid. I believe this person discussed this.

[–] TheTechnician27@lemmy.world 0 points 4 days ago* (last edited 4 days ago)

I knew who this was going to be before I even clicked, and I highly suggest you ignore her. She speaks well outside of fields she has any knowledge about (she's a physicist but routinely extrapolates that to other fields in ways that aren't substantiated) and is constantly spreading FUD about academia because it drives clicks. She essentially hyper-amplifies real problems present in academia in a way that basically tells the public not to trust science.

[–] kibiz0r@midwest.social 0 points 4 days ago

The most disappointing timeline.

[–] lvxferre@mander.xyz 0 points 4 days ago (1 children)

I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.

[–] TheTechnician27@lemmy.world 0 points 4 days ago

My lab doesn't have a retro encabulator for that yet, unfortunately. 😮‍💨

[–] SomeAmateur@sh.itjust.works 0 points 4 days ago* (last edited 4 days ago) (2 children)

tRusT tHe sCiEncE!!1

The Science:

/s ...kinda. AI is going to make so many things very hard to trust at first glance and it will cause chaos in all kinds of important fields.

[–] MTK@lemmy.world 0 points 4 days ago

You are not wrong that AI is a whole new level of misinformation. But “trust the science” never meant “trust any published paper”; it is about trusting scientific consensus. And yeah, if there is a scientific consensus based on multiple papers and peer reviews, it is almost certainly going to be more trustworthy than your opinion, online search, or intuition.

“Trust the science” is still true, even in the face of AI; you just need to differentiate between trusting individual scientists and trusting the scientific consensus.

[–] iAvicenna@lemmy.world 0 points 4 days ago* (last edited 4 days ago)

Even without AI, science already had tons of problems, confirmation bias being one of the most innocent nowadays. Now with AI, it is ascending to something else entirely. Hopefully some people come up with AI-based solutions for filtering through the AI garbage.
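
One very low-tech filter already exists in this spirit: screening manuscripts for known nonsense fingerprints, roughly what the Problematic Paper Screener does for “tortured phrases”. A minimal sketch, with an illustrative phrase list and sample text rather than a real screening database:

```python
# A toy fingerprint screener: flag text containing phrases known to betray
# copy-pasted OCR/LLM artifacts. The list below is illustrative only.
FINGERPRINTS = [
    "vegetative electron microscopy",
    "bosom peril",                 # a reported tortured phrase for "breast cancer"
    "counterfeit consciousness",   # a reported tortured phrase for "artificial intelligence"
]

def screen(text: str) -> list[str]:
    """Return every fingerprint phrase found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in FINGERPRINTS if phrase in lowered]

sample = "Samples were characterized by vegetative electron microscopy."
print(screen(sample))  # ['vegetative electron microscopy']
```

It only catches fingerprints that are already known, but that is roughly how the papers in question were found in the first place.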

[–] Slovene@feddit.nl 0 points 4 days ago (2 children)

I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?

[–] ZkhqrD5o@lemmy.world 0 points 4 days ago (4 children)

Guys, can we please call it an LLM and not a vague advertising term that changes its meaning on a whim?

[–] Simyon@lemmy.world 0 points 2 days ago (1 children)

Wouldn't it be OCR in this case? At least for the scanning?

[–] ZkhqrD5o@lemmy.world 0 points 2 days ago (1 children)

Yes, but the LLM does the writing. Someone probably carelessly copy pasta'd some text from OCR.

[–] Simyon@lemmy.world 0 points 2 days ago

Fair enough, though another possibility I see is that the automated training process for LLMs used OCR on those papers (or an already-existing text version on the internet used bad OCR), and the papers with the mashed word were written partially or fully by an LLM.

Either way, the blanket term "AI" sucks and it's honestly getting kind of annoying. Same with how much LLMs are used.
