this post was submitted on 28 Oct 2025
419 points (99.1% liked)


cross-posted from: https://lemmy.zip/post/51866711

Signal was just one of many services brought down by the AWS outage.

top 50 comments
[–] blakemiller@lemmy.world 220 points 3 days ago (8 children)

Her real comment was that there are only 3 major cloud providers they can consider: AWS, GCP, and Azure. They chose AWS and AWS only. So there are a few options for them going forward: 1) keep doing what they’re doing and hope a single cloud provider can improve reliability, 2) move to a multi-cloud architecture, since the odds of more than one major provider going down simultaneously are much lower, or 3) build their own datacenters or use colos, which have a learning curve but are still viable alternatives. Those that are serious about software own their own hardware, after all.

Each choice has its strengths and drawbacks. The economics are tough with any choice. Comes down to priorities, ability to differentiate, and value in differentiation :)

[–] axx@slrpnk.net 53 points 3 days ago

I'm sorry, what, a balanced and informed answer? Surely you must be joking!

[–] blah3166@piefed.social 27 points 3 days ago

Meredith mentioned in a reply to her posts that they do leverage multi-cloud and were able to fall back onto GCP (Google Cloud Platform), which enabled Signal to recover quicker than just waiting on AWS. I'd link to source but on phone, it's somewhere in this thread: https://mastodon.world/@Mer__edith/115445701583902092

load more comments (6 replies)
[–] magguzu@midwest.social 99 points 3 days ago* (last edited 3 days ago) (4 children)

So much talking out of ass in these comments.

Federation/decentralization is great. It's why we're here on Lemmy.

It also means you expect everyone involved, people you've never met or vetted, to be competent and able to shell out the cash and time to commit to a certain level of uptime. That's unacceptable for a high-SLA product like Signal. Hell, midwest.social, the Lemmy instance I'm on, is very often quite slow. I and others put up with it because we know it's run by one person on one server that he's presumably paying for himself. But that doesn't reflect Lemmy as a whole.

AWS isn't just a bunch of servers. They have dedicated services for database clusters, cache stores, data warehousing, load balancing, container clusters, Kubernetes clusters, CDN, and web application firewall, to name just a few. Every region has multiple datacenters, Northern Virginia's being by far the largest. By default most people use one region, but multi-region, while a huge and expensive lift, is something they already have tools to assist with. Also, and maybe most importantly, AWS, Azure, and GCP run their own backbones between their datacenters rather than relying on the shared one that you, me, and most other smaller DCs are using.

I'm a DevOps Engineer but I'm no big tech fan. I run my own hobby server too. Amazon is an evil company. But the claim that "multi cloud is easy, smaller CSPs are just as good" is naive at best.

Ideally some legislation comes in and forces these companies to simplify the process of adopting multi-cloud, because right now you have to build it all yourself, and it's still very imperfect once you start factoring in things like databases and DNS, which is exactly what they lean on for vendor lock-in.
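To make the "build it all yourself" point concrete, here is a minimal sketch of the kind of client-side failover glue a multi-cloud setup ends up owning; the endpoint URLs are hypothetical, and the stateful pieces like databases and DNS are exactly what this does not cover.

```python
# Minimal sketch of DIY cross-provider failover. Assumes two deployments of the
# same API behind hypothetical endpoints, ordered by preference.
import urllib.request
import urllib.error

ENDPOINTS = [
    "https://api.aws-primary.example.com",   # hypothetical primary
    "https://api.gcp-fallback.example.com",  # hypothetical fallback
]

def fetch_with_failover(path: str, timeout: float = 2.0) -> bytes:
    """Try each provider in order; fall through on connection errors or timeouts."""
    last_error = None
    for base in ENDPOINTS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc  # remember why this provider failed, try the next one
    raise RuntimeError(f"all providers failed: {last_error}")

# Usage: fetch_with_failover("/v1/health")
```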

[–] shalafi@lemmy.world 19 points 3 days ago

Can't find a screenshot, but when you're logged in and click for the screen to show all AWS products, holy shit. AWS is far more than most people think.

[–] douglasg14b@lemmy.world 18 points 3 days ago

Not to mention that the vast majority of federated services have extremely unsustainable performance characteristics that make them effectively impossible to scale beyond hobby projects.

[–] Dragonstaff@leminal.space 5 points 2 days ago (5 children)

AWS needs to be broken up way more than Ma Bell ever did. We need to have open protocols developed so that there can be actual competition.

load more comments (5 replies)
[–] rumba@lemmy.zip 4 points 3 days ago* (last edited 3 days ago)

DevOps here too, I've been starting to slide my smaller redundant services into k8s. I had to really defend my position not to use ECS.

No, we're using kubeadm because I don't want to give a damn whether it's running in the office, or Google, or Amazon, or my house. It's WAY harder and more expensive than setting up an EKS and an EC/Aurora cluster, but I can bypass vendor lock-in. Setting up my own clusters and replicas is a never-ending source of work.
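For illustration, a small sketch of what that vendor neutrality buys, assuming the official `kubernetes` Python client: the same code talks to a kubeadm cluster in the office, EKS, or GKE, because it only depends on whatever kubeconfig context is active.

```python
# Provider-agnostic cluster check via the official `kubernetes` Python client.
from kubernetes import client, config

def ready_nodes() -> list[str]:
    config.load_kube_config()  # picks up ~/.kube/config, whatever cluster it points at
    nodes = client.CoreV1Api().list_node().items
    return [
        n.metadata.name
        for n in nodes
        if any(c.type == "Ready" and c.status == "True" for c in n.status.conditions)
    ]

if __name__ == "__main__":
    print("Ready nodes:", ready_nodes())
```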

[–] qwerty@discuss.tchncs.de 30 points 3 days ago (6 children)

Session is a decentralized alternative to Signal. It doesn't require a phone number, and all traffic is routed through a Tor-like onion network. Relays are run by the community, and relay operators are rewarded with a crypto token for their trouble. To prevent bad actors from attacking the network, you have to stake some of those tokens before you can run a relay, and if your node misbehaves that stake gets slashed.

[–] tengkuizdihar@programming.dev 69 points 3 days ago (5 children)

shame their entire node system relies on cryptobros tech.

Tor doesn't need a currency to back it up. I2P doesn't need a currency to back it up. Why the hell does Lokinet?

[–] qwerty@discuss.tchncs.de 20 points 3 days ago (13 children)

Tor relays only relay traffic, they don't store anything (other than HSDirs, but that's minuscule). Session relays have to store all the messages, pictures, and files until the user comes online and retrieves them. Obviously all that data would be too much to store on every single node, so instead it is spread across only 5-7 nodes at a time. If all of those nodes were to go offline at the same time, messages would be lost, so there has to be some mechanism that discourages taking nodes offline without giving the network a notice period.

Without the staking mechanism, an attacker could spin up a bunch of nodes and then take them all down relatively cheaply, leaving users' messages undelivered. It also incentivizes honest operators to ensure their node's reliability and rewards them for it, which, even if you run your node purely for altruistic reasons, is always a nice bonus, so I don't really see any downside to it, especially since the end user doesn't need to interact with it at all.
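A toy model of that stake-and-slash incentive, purely illustrative: the token amounts and the penalty rule are invented, not Session's actual parameters.

```python
# Toy stake-and-slash model: pulling a node offline without notice burns stake.
from dataclasses import dataclass

@dataclass
class Node:
    operator: str
    stake: float          # tokens locked to register the node
    online: bool = True

def slash_if_unannounced_exit(node: Node, gave_notice: bool, penalty: float = 0.2) -> float:
    """Going offline without notice burns a fraction of the stake (made-up rule)."""
    if node.online or gave_notice:
        return 0.0
    burned = node.stake * penalty
    node.stake -= burned
    return burned

node = Node("alice", stake=1000.0)
node.online = False                                          # operator yanks the node
print(slash_if_unannounced_exit(node, gave_notice=False))    # 200.0 tokens burned
```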

[–] hanke@feddit.nu 6 points 3 days ago (1 children)

Where does the reward come from?

Who pays the node maintainers for keeping stable nodes online?

[–] qwerty@discuss.tchncs.de 5 points 3 days ago (4 children)

Inflation: the rewards are new tokens generated by the network, the same way new bitcoin is generated by miners roughly every 10 minutes, just without the proof-of-work mining part. It's called proof of stake; Ethereum uses it as well.
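A rough sketch of reward-by-inflation under proof of stake: each epoch the network mints a fixed amount and splits it in proportion to stake. The numbers are invented; the real Session/Oxen reward schedule will differ.

```python
# Split newly minted tokens across operators in proportion to their stake.
def distribute_epoch_rewards(stakes: dict[str, float], minted: float) -> dict[str, float]:
    total = sum(stakes.values())
    return {op: minted * stake / total for op, stake in stakes.items()}

print(distribute_epoch_rewards({"alice": 1000, "bob": 3000}, minted=100))
# {'alice': 25.0, 'bob': 75.0}
```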

load more comments (4 replies)
load more comments (12 replies)
load more comments (4 replies)
[–] e8d79@discuss.tchncs.de 29 points 3 days ago

I would not recommend it. Session is a Signal fork that deliberately removes forward secrecy from the protocol and uses weaker keys. The removal of forward secrecy means that if your private key is ever exposed, all your past messages could be decrypted.

[–] arcterus@piefed.blahaj.zone 22 points 3 days ago (2 children)

The main issue with Session is they removed PFS when they redesigned everything. Also, it's admittedly been years since I tried it, but I remember the app being noticeably buggy.

load more comments (2 replies)
[–] hash@slrpnk.net 5 points 3 days ago

I found it workable when I tried it recently, but wound up going with SimpleX. I like the multi-identity system, and you can proxy it through Tor. Found the app customization more fleshed out too.

[–] balance8873@lemmy.myserv.one 5 points 3 days ago

This is a bad tool, but even if it weren't, the no-phone-number thing is an anti-feature for most of the population.

load more comments (1 replies)
[–] axum@lemmy.blahaj.zone 20 points 3 days ago* (last edited 3 days ago)

SimpleX literally solves the messaging problem. You can bounce through their default relay nodes or run your own to use exclusively or add to the mix. It's all very transparent to end users.

At most, the AWS outage would only have affected chats relayed through servers hosted on AWS.

SimpleX also doesn't require a fukkin phone number.

[–] net00@lemmy.today 13 points 3 days ago (4 children)

Didn't only one AWS region go down? Maybe before even thinking about anything else they should focus on redundancy within AWS.

[–] shalafi@lemmy.world 15 points 3 days ago* (last edited 3 days ago) (2 children)

us-east-1 went down. Problem is that IAM services all run through that DC. Any code relying on an IAM role would not be able to authenticate. Think of it as a username in a Windows domain. IAM encompasses all that you are allowed to view, change, launch, etc.

I hardly touched AWS at my last job, but listening to my teammates and seeing their code led me to believe IAM is used everywhere.
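For a sense of what that dependency looks like in code, here is a hedged sketch using boto3: a service that must assume an IAM role via STS before it can touch anything else. The role ARN is hypothetical, and pointing STS at a regional endpoint is only a partial mitigation, since the IAM control plane itself lives in us-east-1.

```python
# Sketch: assume an IAM role via STS, then build a client from the temporary
# credentials. If STS/IAM is unreachable, everything downstream is dead on arrival.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

def s3_client_via_role(role_arn: str, region: str = "us-west-2"):
    sts = boto3.client("sts", region_name=region)  # use a regional STS endpoint
    try:
        creds = sts.assume_role(
            RoleArn=role_arn, RoleSessionName="outage-demo"
        )["Credentials"]
    except (ClientError, EndpointConnectionError) as exc:
        # Roughly what failed fleet-wide during the us-east-1 incident:
        # no credentials, so no downstream AWS calls.
        raise RuntimeError(f"could not assume role: {exc}")
    return boto3.client(
        "s3",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```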

load more comments (2 replies)
[–] magguzu@midwest.social 7 points 3 days ago

This is the actual realistic change a lot of people are missing. Multi-cloud is hard and imperfect and brings its own new potential issues. But AWS does give you tools to adopt multi-region. It's just very expensive.

Unfortunately DNS transcends regions, though, so that can't really be escaped.

[–] Evotech@lemmy.world 6 points 3 days ago (3 children)

Apparently even if you are fully redundant, there are a lot of core services in us-east-1 that you rely on.

load more comments (3 replies)
[–] lando55@lemmy.zip 5 points 3 days ago (1 children)

This has been my biggest pet peeve in the wake of the AWS outage. If you'd built for high availability and continuity, then this event would at most have been a minor blip in your services.

load more comments (1 replies)
[–] goatinspace@feddit.org 13 points 3 days ago (1 children)
[–] victorz@lemmy.world 4 points 3 days ago

Gifs you can hear ❤️

[–] majster@lemmy.zip 13 points 3 days ago (1 children)

They are serving 1-on-1 chats and group chats. That practically partitions itself. There are many server lease options all over the world. My assumption is that they use some AWS services and now can't migrate off. But you need an on-call team anyway, so you aren't buying that much convenience.
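A minimal sketch of the "partitions itself" idea: pin each chat to a server deterministically from its ID. The server names are hypothetical, and a real design would use consistent hashing so adding a server doesn't remap every chat.

```python
# Deterministically map a chat ID to one of several leased servers.
import hashlib

SERVERS = ["chat-eu-1.example.net", "chat-us-1.example.net", "chat-ap-1.example.net"]

def server_for_chat(chat_id: str) -> str:
    digest = hashlib.sha256(chat_id.encode()).digest()
    return SERVERS[int.from_bytes(digest[:8], "big") % len(SERVERS)]

print(server_for_chat("group:friends-2025"))
```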

[–] boonhet@sopuli.xyz 18 points 3 days ago (5 children)

There are many server lease options all over the world

It increases complexity a lot to go with a bunch of separate server leases. There's a reason global companies use hyperscalers instead of getting VPSes in 30 or 40 different countries.

I hate the centralization as much as everyone else, but for some things it's just not feasible to go on-prem. I do know of an exception: I used to work at a company with a pretty large and widely spread-out customer base (big corps on multiple continents) that had its own k8s cluster in a super secure colocation space. But our backend was always slow to some degree (in multiple cases I optimized multi-second API endpoints down to 10-200 ms), we used asynchronous processing for the truly slow things instead of letting the user wait for a multi-minute API request, and it just wasn't the sort of application that needs to be super fast anyway, so the extra milliseconds of latency didn't matter that much, whether it was 50 or 500.

But with a chat app, users want it to be fast. They expect their messages to be sent as soon as they hit the send button. It might take longer to actually reach the other people in the conversation, but it needs to be fast enough that if the user hits send and then immediately closes the app, it's sent already. Otherwise it's bad UX.
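A small sketch of that UX pattern, assuming nothing about Signal's actual client: acknowledge "sent" as soon as the message is queued locally, and let a background worker deliver it with retries.

```python
# Optimistic send: instant local ack, background delivery with retries.
import queue, threading, time

outbox: "queue.Queue[str]" = queue.Queue()

def send(message: str) -> None:
    outbox.put(message)  # durable in a real client (e.g. SQLite); UI can show "sent" now

def delivery_worker(push_to_server) -> None:
    while True:
        msg = outbox.get()
        for attempt in range(5):          # retry with exponential backoff on network failure
            try:
                push_to_server(msg)
                break
            except OSError:
                time.sleep(2 ** attempt)

threading.Thread(target=delivery_worker, args=(print,), daemon=True).start()
send("hello")                             # returns immediately
time.sleep(0.1)
```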

load more comments (5 replies)
[–] ICastFist@programming.dev 7 points 2 days ago

Tangent: Jami is P2P, so the only risk of going offline is if everyone in a group goes offline. It does lack several quality-of-life features, though.

[–] sugar_in_your_tea@sh.itjust.works 5 points 3 days ago (9 children)

Why is it that only the larger cloud providers are acceptable? What's wrong with one of the smaller providers like Linode/Akamai? There are a lot of crappy options, but also plenty of decent ones. If you build your infrastructure over a few different providers, you'll pay more upfront in engineering time, but you'll get a lot more flexibility.

For something like Signal, it should be pretty easy to build this type of redundancy since data storage is minimal and sending messages probably doesn't need to use that data storage.

load more comments (9 replies)