this post was submitted on 10 Jul 2025
318 points (94.7% liked)

Technology


A robot trained on videos of surgeries performed a lengthy phase of a gallbladder removal without human help. The robot operated for the first time on a lifelike patient, and during the operation, responded to and learned from voice commands from the team—like a novice surgeon working with a mentor.

The robot performed unflappably across trials, with the expertise of a skilled human surgeon, even during unexpected scenarios typical of real-life medical emergencies.

[–] finitebanjo@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (13 children)

See, the part that I don't like is that this is a learning algorithm trained on videos of surgeries.

That's such a fucking stupid idea. That's literally so much worse than using recordings of surgeons operating robot arms as your primary source of data and making fine-tuned adjustments based on visual data in addition to other electromagnetic readings.
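For reference, here is a minimal behavior-cloning sketch of the distinction being argued: learning from video alone, where the actions have to be inferred, versus learning from teleoperated demonstrations that log the surgeon's actual commands and kinematic readings. The PyTorch setup, feature sizes, and sensor channels are illustrative assumptions, not details from the actual system.

```python
# Minimal behavior-cloning sketch illustrating the data-source difference.
# All shapes, feature sizes, and sensor channels are made up for illustration;
# nothing here comes from the surgical-robot paper itself.
import torch
import torch.nn as nn

FRAME_FEATS = 512      # hypothetical visual embedding per video frame
KINEMATIC_FEATS = 14   # hypothetical joint angles / forces from a teleoperated arm
ACTION_DIM = 7         # hypothetical end-effector command

def make_policy(obs_dim: int) -> nn.Module:
    """A toy policy head mapping observations to actions."""
    return nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, ACTION_DIM))

# Case 1: video-only demonstrations. Actions are not recorded, so they must be
# estimated from frame-to-frame changes (pseudo-labels), which adds noise.
video_policy = make_policy(FRAME_FEATS)

# Case 2: teleoperation demonstrations. The robot logs the surgeon's actual
# commands alongside vision and kinematic readings, giving clean supervision.
teleop_policy = make_policy(FRAME_FEATS + KINEMATIC_FEATS)

def train_step(policy, obs, actions, opt):
    loss = nn.functional.mse_loss(policy(obs), actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-in data so the sketch runs end to end.
frames = torch.randn(32, FRAME_FEATS)
kinematics = torch.randn(32, KINEMATIC_FEATS)
true_actions = torch.randn(32, ACTION_DIM)
noisy_pseudo_actions = true_actions + 0.3 * torch.randn_like(true_actions)

opt_v = torch.optim.Adam(video_policy.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(teleop_policy.parameters(), lr=1e-3)

print("video-only loss:", train_step(video_policy, frames, noisy_pseudo_actions, opt_v))
print("teleop loss:    ", train_step(teleop_policy, torch.cat([frames, kinematics], dim=1), true_actions, opt_t))
```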

[–] echodot@feddit.uk 7 points 1 day ago (3 children)

Yeah, but the training set of videos is probably vastly larger, and the thing about AI is that if the training set is too small the models don't really work at all. Once you get above a certain dataset size they start to become competent.

After all, I assume the people doing this research have already considered that. I doubt they're reading your comment right now, slapping their foreheads, and going, "Damn, this random guy on the internet is right, he's so much more intelligent than us scientists."

[–] finitebanjo@lemmy.world 0 points 1 day ago (2 children)

There's no evidence they will ever reach quality output with infinite data, either. In that case, data quality matters more than quantity.

[–] echodot@feddit.uk 0 points 18 hours ago* (last edited 18 hours ago) (1 children)

No, we don't know. We are not AI researchers, after all. Nonetheless, I'm more inclined to defer to experts than to you. No offence (I mean, there is some offence, because this is a stupid conversation), but you have no qualifications.

[–] finitebanjo@lemmy.world 1 points 18 hours ago* (last edited 18 hours ago)

It's less of an unknown and more of an "it has never demonstrated any such capability."

Btw, both OpenAI and DeepMind wrote papers showing that their then-current models would never approach human error rates even with infinite training. Those papers correctly predicted the performance of ChatGPT-4.
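As an aside on the shape of that claim: DeepMind's Chinchilla scaling analysis fits loss curves with an irreducible constant term, and a curve of that shape is what the "never reaches it even with infinite data" argument rests on. A rough illustration with made-up placeholder constants (not fitted values from any paper):

```python
# Illustrative scaling-law curve with an irreducible loss floor, in the style of
# L(D) = L_inf + (D_c / D) ** alpha. The constants are made-up placeholders, not
# fitted values from the OpenAI or DeepMind papers.
L_INF = 1.7     # hypothetical irreducible loss the model can never go below
D_C = 5.4e13    # hypothetical data constant
ALPHA = 0.095   # hypothetical exponent

def loss(tokens: float) -> float:
    return L_INF + (D_C / tokens) ** ALPHA

for d in (1e9, 1e11, 1e13, 1e15, 1e18):
    print(f"{d:.0e} tokens -> predicted loss {loss(d):.3f}")

# As the token count grows, the second term shrinks toward zero, so the loss
# approaches L_INF. If that floor sits above human-level error, more data alone
# never closes the gap.
```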
