AI was in drug discovery long before LLMs were a thing. It has revolutionized our ability to identify candidate molecules and proteins that can save lives.
Downvoters: look up AlphaFold! It's really quite incredible.
I agree, and anyone who is interested may enjoy reading this awesome blog post from Mohammed AlQuraishi. It gets pretty deep into the architecture of how AlphaFold works, but it does a good job of conveying just how impressive AlphaFold is, even for curious laypeople (provided they're willing to skim the parts they don't understand).
AlphaFold is super cool, but I see the same problem with the rhetoric around it as I do with a lot of AI: overconfidence in the model's capabilities and a lack of understanding of which problems it's actually useful for. For example, Google published AlphaFold's predicted structures for every protein in the human proteome, including proteins for which we have no experimental structural data.
That's super cool and useful, but it wouldn't be accurate to say that AlphaFold has "solved" the entire human proteome, as was often reported when this was released. A huge portion of the human proteome is referred to as "the dark proteome" because of how little we understand about those proteins, which means AlphaFold's predictions there are far less reliable, since there isn't much training data to draw on.
To give a general example: most of the high-resolution, experimentally determined structures in the Protein Data Bank (which AlphaFold was trained on) were solved using X-ray crystallography, which isn't great at resolving long, unstructured loop regions or other highly flexible areas. That doesn't mean AlphaFold or other computational tools (like RoseTTAFold) are useless in these areas, but you have to critically consider the tools you use in the context of the problem you're actually trying to solve.
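One practical upshot of that caveat: AlphaFold reports a per-residue confidence score (pLDDT, 0-100) and stores it in the B-factor column of its PDB output, so you can flag shaky regions before trusting the coordinates. Here's a minimal sketch of reading it with no extra tooling; the two ATOM records are made-up example data, not real AlphaFold output.

```python
# Toy example: pull per-residue pLDDT out of an AlphaFold-style PDB file.
# AlphaFold writes pLDDT (0-100) into the B-factor field (columns 61-66),
# so low-confidence (often disordered/flexible) regions are easy to spot.
EXAMPLE_PDB = """\
ATOM      1  CA  MET A   1      11.104  13.207   2.100  1.00 92.50           C
ATOM      2  CA  GLY A   2      12.560  14.901   3.222  1.00 41.30           C
"""

def plddt_by_residue(pdb_text, low_cutoff=70.0):
    """Return {residue_number: pLDDT} plus the set of low-confidence residues."""
    scores, low = {}, set()
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") and " CA " in line:  # one CA atom per residue
            resnum = int(line[22:26])       # residue sequence number (cols 23-26)
            plddt = float(line[60:66])      # B-factor field = pLDDT (cols 61-66)
            scores[resnum] = plddt
            if plddt < low_cutoff:
                low.add(resnum)
    return scores, low

scores, low = plddt_by_residue(EXAMPLE_PDB)
print(scores)   # {1: 92.5, 2: 41.3}
print(low)      # {2}  <- treat this residue's coordinates with suspicion
```

Real model files would go through a proper parser (e.g. Biopython), but the point stands: the confidence signal is right there in the file, and ignoring it is how people end up over-trusting dark-proteome predictions.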
One might think "okay, but the scientists working in this space surely know this stuff", but I have anecdotally seen a split in the community that feels analogous to what we see in AI discourse more generally: there seems to be a sharp divide between the pro- and anti- camps, where people in the "anti-" camp may not necessarily be opposed to AlphaFold and other machine learning tools, but are often completely oblivious to how they work and how they, as experimental scientists, can interact with them. This is exacerbated by the pro- side being pretty hard to parse as an outsider. I guess I'd describe myself as being in the pro- camp, because I've tinkered with this stuff and understand enough about how it works under the hood to be blown away by how cool it is. However, I often feel like I need to take the position of "machine learning buzzkill" to temper the excessive hype I see coming from others on the pro- side of the debate. There's a nebulous sense of distrust towards these tools, and I understand why (especially when non-ML researchers are also increasingly sick of seeing AI shoved everywhere in their regular lives).
The root problem here isn't necessarily the machine learning, though. Personally, I see this as a sort of separate subfield, and the communication difficulties that arise (such as overhype making it hard for outsiders to parse what's worth caring about) can be attributed to the "publish or perish" pressure that's prevalent across basically all fields of research.
Bad take.
Technology on the production line doesn't exempt them from quality control or clinical trials. And as several others have noted, AI is already being used for drug discovery, and automation has long been the goal for repetitive menial tasks like pipetting.
Don't be an anti-vaxxer just because they don't use homegrown, organic needles.
Yes, thank you for saying this.
I also want to add that it gets more complex than that, in ways the general population isn't aware of.
During manufacturing, every batch also needs to be manually inspected in a lab via random sampling, and significant deviations sometimes result in the entire batch being discarded and made again from scratch. Or at least that's what should happen under proper regulatory supervision.
Unfortunately, the majority of overseas manufacturers are regulated only by the FDA (yes, really), which was stretched thin even before the big rounds of layoffs. There have been multiple incidents where these inspections weren't done properly due to staff shortages and logistics, so patients end up with things like pieces of glass in their medication, or the wrong dosage per bottle, and the manufacturer says fuck it, who's gonna look, and ships it instead of wasting money redoing the batch.
Those are just two of the major scandals of the now-defunct company Ranbaxy Laboratories, where regulatory oversight was routinely skirted in the name of profit.
So while AI is already directly involved in mixing, conditioning, and packaging, there are other significant issues with the oversight of these companies, which will no doubt keep expanding the use of AI into more sensitive areas, like quality control.
For anyone interested in the Ranbaxy scandal and its tragic and unsettling ending, I highly recommend the approachable and eye-opening book "Bottle of Lies: The Inside Story of the Generic Drug Boom" (2019) by Katherine Eban.
Using computers and simulations is good, actually.
Vast parameter spaces cannot be robustly explored by experimental means. Medicine has been using ML for a while now, and for good reason. AI and ML are much more than LLMs or whatever the current poster child is.
Drug discovery is one of the few fields where the AI juice is worth the squeeze. Filtering through billions of different molecule geometries and comparing them all to each other is a task that AI does well and humans do extremely slowly.
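For a sense of what that filtering actually looks like: a common approach in virtual screening is to reduce each molecule to a fingerprint (a set or bit-vector of substructure features) and rank candidates by Tanimoto similarity to a known active compound. The fingerprints below are made-up toy data; real ones come from cheminformatics toolkits like RDKit.

```python
# Toy sketch of fingerprint-based screening with Tanimoto similarity.
# Each "fingerprint" here is just a set of feature IDs (invented data).

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity of two feature sets: |A&B| / |A|B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

query = {1, 4, 7, 9}                 # fingerprint of a known active (toy data)
library = {
    "mol_A": {1, 4, 7, 9, 12},       # shares most features with the query
    "mol_B": {2, 3, 5},              # shares nothing
    "mol_C": {1, 4, 8},              # partial overlap
}

# Rank the library by similarity to the query, best match first.
ranked = sorted(library, key=lambda m: tanimoto(query, library[m]), reverse=True)
print(ranked)  # ['mol_A', 'mol_C', 'mol_B']
```

The cheap set arithmetic is the whole trick: it scales to billions of candidates in a way that physically synthesizing and assaying each one never could, which is exactly the "juice worth the squeeze" being described.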
The issue is the overloading of the word “AI”.
“Machine Learning using Neural Networks” is a technique that can come up with decent but rough solutions to problems where it’s hard to come up with any solution.
“Large Language Models” is the application of “Machine Learning using Neural Networks” to natural language processing, and it is incredibly good at that.
The problem comes when people apply models trained for natural language processing onto other random problems just because you can formulate anything as a natural language problem.
That's a fair distinction. That being said, at their core LLMs are just big functions. You could cover a dartboard in subfields of physics, toss a dart randomly, and I'd bet money you hit a field that finds a use for, say, Bessel functions. I'm not informed enough on the specifics of LLMs to say either way, but there's definitely precedent for "we found this really powerful function and it turns out it accurately predicts 10 shitloads of unrelated systems."
To double down on my devil's advocacy: the projects I have personally seen or been consulted on that fit the form "use LLM to solve non-NLP problem" are 99% propelled by "funding for AI buzzwords flows freely" and "understanding of the limitations of different kinds of AI is rare".
I think the lesson in that is that these things are tools that experts who are actually focused on checking the outputs can use to great benefit.
The average schmuck trying to write vibe code way beyond their understanding is the one fooling themselves about their utility.
When a dork at work gives me a proposal that is clearly hot LLM garbage they hardly read? I make sure they know they are still responsible for producing shitty work that needs to be re-done.
Yeah. That's the most frustrating part of it to me. Statistical learning is really limited in the applications it's good at, but it excels wildly where it excels. It's a specialized scalpel, but the people who are its public face call it "AI" and market it as a cure-all for the pesky problem of having to pay workers.
Isn't the production of medicine a minuscule part of its cost? My understanding is that ads and profit are the two biggest costs.
What you say is partly correct.
Most of the cost of a medicine is research, so when newly patented medicines come to market, they are usually very expensive because they have to recoup that investment and generate profit.
Then, when the patents expire, generic drugs become available, which are very cheap because they are cheap to produce.
Research isn't really the expensive part; or rather, most research is paid for by governments. The expensive part is clinical trials, which are paid for by the companies that later sell the drugs. The vast majority of clinical trials fail at various stages, but they still have to be paid for. So any drug that makes the cut has to pay for the dozens of other very promising compounds that turned out to be toxic, ineffective in humans, or just not worth it (and also for things like buying and shutting down competitors, or discontinuing tests on drugs that might work but would hurt the company's other business).
That being said, there's something very wrong with drug prices in the US.
7 of the 10 largest pharma companies spent more on marketing than R&D during the height of the COVID-19 pandemic. In aggregate, the 10 companies spent 37% more on marketing than on R&D.
It's not like they're not testing the results. Production chemistry with chiral molecules is hard, with a lot of trial, error, and magic. You never really know what variable is going to produce a better batch or screw something up.
Step 1: Put AI into production just to lower costs and increase efficiency
Step 2: People get angry about AI in the process
Step 3: Release a drug that doesn't have "AI" in the process and has a higher selling price, even though it's actually the same drug
Step 4: People buy it en masse
Step 5: Earn more money thanks to irrational hatred of AI
Step 6: Convince the Democrats that M4A would only cover AI drugs. But there are also private insurers
Step 7: People buy private insurance at higher prices and with misleading terms, just to get medications without "AI"
Step 8: Pharmaceutical companies, insurance companies, and funeral homes make more money, while ordinary people become increasingly poorer.
Why does the reply shown have absolutely nothing to do with the headline claim?
It's ok. TACO is going to lower drug prices 1200%.