When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.
Another basic demonstration on why oversight by a human brain is necessary.
A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
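A minimal sketch of that failure mode, using made-up column text: a layout-aware extractor reads each column top to bottom, while a naive extractor sweeps each visual row left to right, stitching a word like "vegetative" from one column onto "electron microscopy" from the other. (The column contents here are invented for illustration.)

```python
# Two columns of a hypothetical page, one string per visual row.
left_column = [
    "in a vegetative",
    "state for days",
]
right_column = [
    "electron microscopy",
    "of the samples",
]

# A layout-aware reader finishes the left column before the right one.
column_aware = " ".join(left_column + right_column)

# A naive reader concatenates the two columns row by row,
# fusing unrelated phrases across the column gap.
naive = " ".join(f"{l} {r}" for l, r in zip(left_column, right_column))

print(naive)
# "in a vegetative electron microscopy state for days of the samples"
```

Once text like this lands in a training set or gets pasted into a draft, "vegetative electron microscopy" looks like a real technical term to anything doing pure pattern matching.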
To be fair, the human brain is a pattern recognition system. It's just that the AI developed thus far is shit.
Give it a few billion years.
As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't exactly figure out how our brains work, we might be able to create something better.
The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do, in many different ways.
Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.
I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.
We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious just for the sake of a few months' worth of calculations.
There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the matrix for those brainoids. We are not on the scale of whole brain reproduction but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.
I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.
It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI aren't close to doing yet, but that could change quickly.
There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.
I'd give it a 50-50 chance for singularity this century, if development isn't stopped for some reason.
The issue is that LLM systems are pattern recognition without any logic or awareness. It's pure pattern recognition, so it can easily find patterns that aren't desired.
Wait how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?
AI consumed the original paper, interpreted it as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That's repugnant.
I don't think it's even a hot take
It's a hot take, but it's also objectively the correct opinion
Unfortunately, the former is what should be the case, although so many times it is not :-(.
Yeah, this is a hot take: I think it’s totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is a second language, and would benefit from a pass through AI enhancement, or as a translation tool if they’re more comfortable writing in their native language. However, I am not in favor of submitting it without review of every single word, or using it to synthesize new concepts / farm citations. That’s not research because anybody can do it.
I have an actual hot take: the ability to communicate productive science shouldn’t be limited by the ability to write.
if your contribution is a paper that you don't even proofread to ensure it makes any sense at all, then your contribution isn't "productive science"; it's a waste of everyone's time
There are people in academia now who just publish incomprehensible bullshit papers that may be wrong, just to justify continued funding and not rock the boat. It keeps them employed and paid. I believe this person discussed this
I knew who this was going to be before I even clicked, and I highly suggest you ignore her. She speaks well outside of fields she has any knowledge about (she's a physicist but routinely extrapolates that to other fields in ways that aren't substantiated) and is constantly spreading FUD about academia because it drives clicks. She essentially hyper-amplifies real problems present in academia in a way that basically tells the public not to trust science.
The most disappointing timeline.
I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.
My lab doesn't have a retro encabulator for that yet, unfortunately. 😮💨
tRusT tHe sCiEncE!!1
The Science:
/s ...kinda. AI is going to make so many things very hard to trust at first glance and it will cause chaos in all kinds of important fields.
You are not wrong that AI is a whole new level of misinformation. But "trust the science" never meant "trust any published paper"; it is about trusting scientific consensus. And yeah, if there is a scientific consensus based on multiple papers and peer reviews, it is almost certainly going to be more trustworthy than your opinion, online search, or intuition.
"Trust the science" is still true, even in the face of AI; you just need to differentiate between trusting scientists and trusting scientific consensus.
Without AI, science already had its share of problems, confirmation bias being one of the most innocent nowadays. Now with AI, it is ascending to something else entirely. Hopefully some people come up with AI-based solutions for filtering through the AI garbage.
I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?
Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?
Wouldn't it be OCR in this case? At least the scanning?
Yes, but the LLM does the writing. Someone probably carelessly copy pasta'd some text from OCR.
Fair enough, though another possibility I see is that the automated training process for LLMs used OCR on those papers (or an already existing text version on the internet used bad OCR), and the papers with the mashed-together word were written partially or fully by an LLM.
Either way, the blanket term "AI" sucks and it's honestly getting kind of annoying. Same with how much LLMs are used.