this post was submitted on 16 May 2025
619 points (98.9% liked)

Technology


More than half of Americans reported receiving at least one scam call per day in 2024. To combat the rise of sophisticated conversational scams that deceive victims over the course of a phone call, we introduced Scam Detection late last year to U.S.-based English-speaking Phone by Google public beta users on Pixel phones.

We use AI models processed on-device to analyze conversations in real time and warn users of potential scams. If a caller, for example, tries to get you to provide payment via gift cards to complete a delivery, Scam Detection will alert you through audio and haptic notifications and display a warning on your phone that the call may be a scam.
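The post doesn't describe the models themselves, but as a rough, purely illustrative sketch of the idea, here is what a minimal on-device, real-time check over a call transcript might look like. The `ScamDetector` class and the patterns are invented for this example and are not Google's implementation, which uses on-device ML models rather than hand-written rules:

```python
import re
from dataclasses import dataclass, field

# Phrases that commonly signal payment scams (illustrative only).
SCAM_PATTERNS = [
    r"\b(pay|payment|paid)\b.*\bgift cards?\b",
    r"\bwire (me|the money|the funds)\b",
    r"\b(irs|social security)\b.*\b(arrest|warrant|suspend)\b",
]

@dataclass
class ScamDetector:
    """Accumulates transcribed snippets from a call and flags scam-like language."""
    window: list = field(default_factory=list)
    max_snippets: int = 20  # keep only recent context, like a sliding window

    def feed(self, snippet: str) -> bool:
        """Add one transcribed utterance; return True if the call now looks like a scam."""
        self.window.append(snippet.lower())
        self.window = self.window[-self.max_snippets:]
        context = " ".join(self.window)
        return any(re.search(p, context) for p in SCAM_PATTERNS)

# Example: the warning fires once the gift-card payment request appears.
detector = ScamDetector()
for utterance in [
    "hello, this is about your package delivery",
    "to complete the delivery you must provide payment via gift cards",
]:
    if detector.feed(utterance):
        print("Warning: this call may be a scam")
```

A real system would run an on-device classifier over streaming speech transcription instead of regular expressions, but the sliding-window, analyze-as-you-go structure is the same basic idea.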

(page 2) 50 comments
[–] Showroom7561@lemmy.ca 7 points 1 week ago (3 children)

We use AI models processed on-device

If it's opt-in, and the processing is done on-device, then I have no reason to be outraged.

But the skeptic in me asks, "What's in it for Google?"

[–] stardreamer@lemmy.blahaj.zone 6 points 1 week ago (1 children)

This is common for companies that like to hire PhDs.

PhDs like to work on interesting and challenging projects.

With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g., Intel Optane and transactional memory).

Designing a real-time scam analysis tool under resource constraints is interesting enough to be greenlit but makes no money.

Once released, they'll move on to the next big challenge, and when nobody is there to maintain their work, it will be silently dropped by Google.

I'm willing to bet more than 70% of the Google graveyard comes from projects like these.

[–] btaf45@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (1 children)

With nobody to rein them in, they do all kinds of cool stuff

And they never ever ask themselves, "Is this ethically the right thing to do?" And so they create things that do far more harm to society than good, for selfish reasons, just because it's a "fun" project. And I'm sure management figures they will profit one way or another the more they control everything through the AIs they shove on people.

[–] stardreamer@lemmy.blahaj.zone 2 points 1 week ago* (last edited 1 week ago)

I may be biased (PhD student here), but I don't fault them for being that way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) differs from person to person. Quite frankly, it's not part of their training, it's never been emphasized as part of their training, and it's subjective, shaped by cultural experience.

What is considered an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google's use, then it's perfectly ethical. That said, this does not prevent someone else from adding data-collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that, but if that is done, there's nothing else to blame them for. The moral responsibility lies with the one who pulled the trigger.

Should the original designer have anticipated this issue and therefore never taken the first step? Maybe. But that depends on a lot of circumstances we don't know, so it's hard to say anything meaningful.

As for the "more harm than good" analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If that reasoning holds, an extreme example would be justifying harm to any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better quantitative test is whether harm is introduced to ANY group of people; as long as that's the case, the whole thing is unethical.

[–] plyth@feddit.org 5 points 1 week ago (1 children)

As always, Google is doing things for free to get training data.

Everything is moving in an authoritarian direction, which requires control of the opposition. Google will have the infrastructure to identify people with opposing mindsets. There won't be a rebellion if the rebel leaders can be locked up in time.

[–] btaf45@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Everything is moving in an authoritarian direction, which requires control of the opposition.

Now that we don't have a real president anymore, Google has started including White House announcements in Google News. I know every one of these is written by an idiot who is not qualified for their job, so I turned them off. But why the fuck didn't they do this back when we actually had a real president and these announcements were legitimate, newsworthy information? The announcements of a real president were never considered newsworthy, but the babblings of a traitorapist with the mind of a 13-year-old are? Fuck you, Google!

[–] BetaDoggo_@lemmy.world 5 points 1 week ago

They get a new feature to boast about

[–] Gammelfisch@lemmy.world 6 points 1 week ago

More AI testing...

[–] Lootboblin@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

And? Google’s been listening to you for years, and not only in English.

[–] Sixtyforce@sh.itjust.works 6 points 1 week ago

My decision to install GrapheneOS keeps getting validated! But I never want to give Google money ever again.

[–] hoss@lemmynsfw.com 6 points 1 week ago

I have GrapheneOS. I still get a ton of spam phone calls a day.

I'm apparently not enough of a software developer to figure out how to use the SpamBlocker app. Anybody have any suggestions?

[–] capuccino@lemmy.world 5 points 1 week ago
[–] MunkysUnkEnz0@lemmy.world 5 points 1 week ago (3 children)

Spam protection is turned on automatically, and you’ll be notified when this happens. You can turn it off anytime in your settings:

  1. Open Google Messages. At the top right, tap your profile picture or initials.
  2. Tap Messages settings, then Spam protection. You'll only find "Spam protection" if it's available on your device.
  3. Turn "Enable spam protection" on or off.

I'm not seeing it in my Messages settings. Anyone else?

They said it's rolling out in beta. Spam Protection is already in the Messages app; Scam Protection is coming soon. But to listen to telephone audio, they would need to add it to the native dialer/phone app. Google currently has a dialer app named "Phone" with a spam filter feature.

I assume that's what is coming: AI in the dialer/phone app.

[–] jaykrown@lemm.ee 5 points 6 days ago

This isn't really anything new. https://signal.org/

[–] not_IO@lemmy.blahaj.zone 2 points 1 week ago

Is this really official? It seems like a fake site.

[–] whereisk@lemmy.world 2 points 1 week ago

Most people here: yes, I bought an advertiser’s device, hooked up in a million ways to that advertiser’s services, from a company well known for monitoring every aspect of the life of every person it can, but how dare they monitor this part?
