New design sets a high standard for post-quantum readiness.
Great. Now we just have to get Signal off AWS and we'll be good.
Signal puts a lot of effort into a threat model that assumes a hostile host (i.e. AWS). That's the whole point of end-to-end encryption: even if the host is compromised, the attackers do not get any information. They even go as far as padding out the lengths of encrypted messages so everyone looks like they are sending identical blocks of data.
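Roughly, the idea looks like this (a toy sketch, not Signal's actual code; the 160-byte bucket matches what Signal is reported to use, but treat the constant as illustrative):

```python
BUCKET = 160  # pad every plaintext up to the next multiple of this size

def pad(plaintext: bytes) -> bytes:
    """Append an 0x80 marker, then zeros, up to the bucket boundary."""
    padded_len = ((len(plaintext) + 1 + BUCKET - 1) // BUCKET) * BUCKET
    return plaintext + b"\x80" + b"\x00" * (padded_len - len(plaintext) - 1)

def unpad(padded: bytes) -> bytes:
    """Strip everything from the last 0x80 marker onward."""
    return padded[: padded.rindex(b"\x80")]

assert unpad(pad(b"hi")) == b"hi"
assert len(pad(b"hi")) == len(pad(b"a" * 159))  # same bucket, same wire length
```

After encryption, an observer only sees which bucket a message fell into, not its true length.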
I’m assuming that they were referring more to the outage that occurred today, which pulled a ton of internet services, including Signal, offline temporarily.
You can have all the encryption in the world, but if the centralized point that allows you to access the service is down, then you’re fucked.
no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that everyone notices - it causes internet-wide issues
Monero, Nostr, Lemmy, and Mastodon did not go down. Why? Because they are decentralized
Monero isn’t like the other three, it’s P2P with no single points of failure.
I haven’t looked too closely at Nostr, but I’m assuming it’s typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it’s a protocol, so I suppose posts could be stored locally and switching relays is easy). If your instance goes down, you’re just as screwed as you would be with a centralized service, because each Lemmy or Mastodon instance is effectively a centralized service that happens to share data with the others. If your instance doesn’t go down but a major one does, your experience will be significantly degraded.
The only way to really solve this problem is with P2P services, like Monero, or to have sufficient diversity in your infrastructure that a single major failure doesn’t kill the service. P2P is easy for something like a currency, but much more difficult for social media where you expect some amount of moderation, and redundancy is expensive and also complex.
Nostr is a weird being. You are correct that it is not peer-to-peer like Monero is. However, it’s not quite federated in the same way that ActivityPub is.
When using Nostr clients, you actually publish the same data to, like, six different relays at the same time. The protocol has the built-in assumption that some of those relays are going to be down at any given time, so by publishing to six at once you get data redundancy.
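In toy form, the redundancy logic looks something like this (the relay list and failure rate are made up, and the real protocol speaks JSON over websockets, but the principle holds):

```python
import random

RELAYS = [f"wss://relay{i}.example" for i in range(6)]  # hypothetical relay set

def try_publish(relay: str, event: dict) -> bool:
    """Stand-in for a real websocket publish; randomly fails to model downtime."""
    return random.random() > 0.3  # assume ~30% of relays are down at any moment

def publish_everywhere(event: dict) -> list[str]:
    """Send the same event to every configured relay; any one success is enough."""
    return [r for r in RELAYS if try_publish(r, event)]

accepted = publish_everywhere({"kind": 1, "content": "hello nostr"})
print(f"stored on {len(accepted)}/{len(RELAYS)} relays")
```

With six relays at 30% independent downtime, the chance that all of them are down at once is 0.3^6, well under 0.1%.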
Come on, mate… Lemmy as a whole didn’t go down, but instances of Lemmy absolutely did go down. As they regularly do, because shit happens.
that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in the case of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but the system as a whole isn’t a particularly helpful frame when you’re trying to access your specific account
centralised services are just inherently more stable for the same type of workload because they tend to be less complex and have less networking interconnectedness to cause issues, and you can focus a lot more energy on building out automation and recovery instead of repeatedly building the same things… with federation that energy is distributed, but it’s still finite human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery
Right, but even if individual instances go down, you don’t end up with headlines all over the world of half the internet being down. Because half the internet isn’t down, the network is self-healing. It temporarily blocks off the problem area, and then when the instance comes back, it resynchronizes and continues as normal.
Services might be temporarily degraded, but not gone entirely.
but that’s a compromise… it’s not categorically better
you can’t run a bank like you run distributed instances, for example
services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and the second time ever that i can remember there’s been a global AWS incident like this
and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just aren’t there yet with the knowledge of how to build decentralised services at that scale, so you can’t simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this: it seems really easy, and arguably you can build a microblogging service in about 30min, but scaling it to the traffic twitter handles is incredibly difficult and involves a lot of computer science (not just software engineering)
That was my point. But as somebody else pointed out here, maintaining the degree of security we currently enjoy as Signal users gets much harder, and that security starts to get eroded away.
Padding isn’t anything special. Most practical uses of block ciphers require it.
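For reference, the padding most block-cipher modes need is just ordinary PKCS#7, something like this (a minimal sketch, distinct from the traffic-analysis bucket padding discussed above):

```python
BLOCK = 16  # AES block size in bytes

def pkcs7_pad(data: bytes) -> bytes:
    """Append n bytes, each of value n, to reach the next block boundary."""
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")  # real code must avoid a padding oracle here
    return data[:-n]

assert pkcs7_unpad(pkcs7_pad(b"hello")) == b"hello"
assert len(pkcs7_pad(b"0123456789abcdef")) == 32  # a full extra block when already aligned
```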
sending identical blocks of data
Nitpicking here, but judging from the previous words in your comment, I assume you mean blocks of data of identical length.
Although it should really be multiples of an identical size, I suppose.
Anyway, sorry for nitpicking.
Just use Matrix…
I did; it’s a buggy, undercooked mess that doesn’t work half the time. The officially supported app is missing half the features. Trying to get people to switch to it is like pulling teeth, as the onboarding process is overly complicated for the average user.
Meanwhile Signal works right out of the box with very little fuss.
I could. Presumably so could the others commenting on this post. But then what are we to do about the privacy- or tech-illiterate people we’ve carried to Signal over the years?
It’s easy to whinge about just doing what you perceive as the optimal solution. It’s more difficult when you need to navigate the path to get there from where we are now.
No
or federated server
Would be very cool to be able to host a Signal homeserver.
https://signal.org/blog/the-ecosystem-is-moving/ here is Moxie’s take on that (former Signal CEO).
So I don’t think it’s happening.
they won’t do that.
Matrix tried for quite a while to get interoperability, but Signal is just too paranoid about distributed hosting or interoperability of their software/protocol. it’s quite annoying
And yet SimpleX exists.
Wait, SimpleX isn’t paid?
No, it’s totally free and open source, and you can host it on your own server if you wish.
I guess the research doesn’t have to be limited to Signal. If other apps can benefit from it, “private communications over the internet” as a whole gets more resilient.
So that’s why Signal didn’t send my messages very quickly today then, maybe.
It’s not completely rolled out yet. That was likely AWS being down.
Also, the new quantum-protected message encryption headers are about 2 KB. If that’s causing issues with your internet, you may want to consider getting new internet.
2 KB? While it may not sound like much, that’s at least two packets’ worth of data (depending on MTU). If you think about it in terms of how TCP sends packets and needs ACKs, there’s actually a lot of round-trip processing going on for just that one part.
TCP will generally send up to 10 packets immediately without waiting for ACKs (that’s the typical initial congestion window; it depends on configuration).
Generally, any message or website under about 14 KB will be transmitted in a single round trip, assuming no packets are dropped.
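The 14 KB figure falls out of quick arithmetic (assuming a 1500-byte MTU and the common Linux default initial congestion window of 10 segments):

```python
MTU = 1500                      # typical Ethernet MTU
MSS = MTU - 20 - 20             # minus IP and TCP headers -> 1460 bytes of payload
INITCWND = 10                   # segments sent before waiting for the first ACK

first_flight = INITCWND * MSS   # 14600 bytes, i.e. ~14 KB in one round trip
pq_header = 2 * 1024            # the ~2 KB post-quantum header

print(first_flight)             # 14600
print(-(-pq_header // MSS))     # ceil(2048 / 1460) = 2 segments
```

So the ~2 KB header fits comfortably in the first flight of packets; no extra round trips needed.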
That was likely AWS being down.
Sorry, yeah, that’s the only thing I was referring to.
My internet connection is 500/500 Mbps, and I can’t change it. 😄👍
Should have been pretty obvious to anyone reading any tech news whatsoever today, especially in the context of where you responded. No apology from you should have been necessary!
You would think 😅 The sorry was slightly sarcastic, but shhh, nobody need know
Why do we keep caring about signal when there’s Matrix?
Because Signal works and Matrix doesn’t.
Because my grandpa can work with Signal, which is still encrypted communication. Thus it’s a low threshold to adoption and a significant increase in cyber hygiene, even for his type of audience.
Because Matrix barely works half the time and still has some significant security/privacy flaws. One of which: if there’s a bug that makes it possible for someone to snoop your metadata and the fix requires a server update, you’re SOL if the servers of the people you’re talking to don’t get the update.
Bearing in mind that we are not even close to breaking classical cryptography with quantum computing, I doubt this was the best investment of their time.
Once quantum computers break classical cryptography, it’s going to be too late to develop post-quantum cryptography, mate.
The best time to develop resilience is right now.
It’s not going to happen this century, probably never
Even if quantum computing turns out to actually be infeasible and classical cryptography is secure for the next millennium, it’s still a good feature to have a third independent encryption layer in the protocol. It makes the protocol that much less reliant on the other two layers being bulletproof.
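The way layering works, an attacker has to break every layer, not just one, because the final key mixes all of the shared secrets together. A sketch of the idea (illustrative, not Signal’s actual KDF):

```python
import hashlib
import hmac

def combine(*secrets: bytes) -> bytes:
    """Derive one key from several independent shared secrets via chained HMAC-SHA256."""
    key = b"\x00" * 32
    for s in secrets:
        key = hmac.new(key, s, hashlib.sha256).digest()
    return key

# Placeholder values standing in for real key-exchange outputs:
ecdh_secret = b"classical X25519 shared secret"
kem_secret = b"post-quantum KEM shared secret"
third_secret = b"third, independent layer"

message_key = combine(ecdh_secret, kem_secret, third_secret)
# Knowing any two of the inputs still leaves the output unpredictable,
# as long as the remaining one stays secret.
```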
Maybe. I don’t know at which point all that extra processing stops being worth it.
How sure are you? Assign a percentage chance to it and the cost of exposing old messages, and compare that to the cost of this dev effort.
We know governments are using it, and there’s likely a lot of sensitive data transmitted through Signal, so the cost of a break happening in the next 20 years would still be substantial. Even if the chance of that timeline is small, there’s still value in investing in forward secrecy.
They also want nuclear fusion reactors, and there are none on the horizon after 50 years of research and development (even though many want to sell the idea that there are).
By that argument, you could start preparing for post-hypercomputation cryptography too.
There’s hardly ever glory in prevention…
Their core feature is secure messaging, so I’d say this result highlights their dedication to the secure aspect of it. So it’s an excellent feature in terms of branding, and it probably has benefits in other places too, e.g. attracting talent: developers can now see that Signal offers great opportunities to work on complex problems.
So I’m curious; what do you think would be better investment of their time?
Like allowing a federated system instead of a central one, not depending on external libraries and services, and so on. I bet there are many things that would actually improve security, instead of this, which is more of a marketing point.
they will not make a federated system and they said so, quite strongly. if you want that you’ll need to wait for matrix to grow up.
Simplex is ready today, assuming you just want 1:1 messaging.
the best time was yesterday. the next best time is today. securing systems after they’re broken, when data could actively be collected prior to the breakthrough, is not the way to approach security.
There are nation states just straight up intercepting and storing Signal data on their networks in hopes that it can be decrypted in the future. 20-year-old messages will still be useful.
Also known as “harvest now, decrypt later”. And it’s a serious security threat that Signal must consider and handle.
Lol, it shows how much hype quantum computing has sold and how detached public perception of it is from reality.
I’m friends with two quantum computing researchers, and they are pretty sure quantum computing will never be practical because of how noise and errors scale with system size.
The quantum computing hype is really annoying, but we don’t know the future. One day there might be a breakthrough in noise reduction. I’d rather Signal have post-quantum cryptography and not need it than get blindsided if there’s suddenly a quantum computer that can break RSA with Shor’s algorithm. Not to mention intelligence agencies doing store-now/decrypt-later stuff.
It’s future-proofing. It means my messages are not only safe today but, even if they are intercepted or leaked somehow, will also be safe in the future.
I doubt that the first ones to break it will be eager to communicate their findings to the public.
This tech is far too valuable for military/espionage purposes. For all we know it already exists.
We’re as close to quantum computers as we are to ChatGPT becoming sentient.