@dougwade Delta Chat would still be a better option. Besides being years ahead of any such new solution, with Delta Chat you don't depend on a particular server. Here in the fediverse, if your instance goes down you suddenly lose your profile; with Delta Chat you can use several instances at the same time, so if one goes down the others keep working (this is new and still under development). The fedi solution would also need that level of resilience to be comparable.

@rysiek

@dougwade @arcanechat @rysiek I don't want to break your heart but E2EE messaging will never happen on fedi no matter what anyone says, even Soatok himself. (furself?)

1. Fedi is accessed by users from multiple clients. So now you have a key synchronization problem that Matrix hasn't been able to get working correctly over the course of... 10 plus years?

2. every fedi app that exists which people use will have to be updated to support it, and it will NOT be trivial. People are not going to give up their preferred apps just for E2EE messaging.

3. every web interface will have to be updated to support it properly. So now we're doing all this crypto in the browser, and your private key will have to live in browser local storage. Not great.

4. This of course implies that every fedi server will have to be updated to support it: Mastodon, Pleroma, Akkoma, all the Misskey forks, Lemmy, Pixelfed, GoToSocial... and this is going to go smoothly, without giant security issues happening due to poor implementations, right? RIGHT????? :newlol:

It's not going to happen.

What could happen is that this could become a Mastodon specific feature that only works with Mastodon and the official Mastodon app. Or perhaps there will be a specific e2ee messaging mobile app created that only works with Mastodon. But I doubt it.

The biggest problem with this idea is that the entire ecosystem will be so broken/fractured that people will instead choose something else that doesn't have this problem. Whichever is easiest to onboard and doesn't leave you guessing "will they be able to receive my messages?" will win. It will be a dedicated E2EE messaging service such as DeltaChat.

The people who keep talking about E2EE coming to fedi are only doing so for clout. They're either dishonest or just stupid and have no idea what it takes to build such an app that will be accessible to the masses.

And even if such a thing did exist, it is too easily blocked anyway. Not like it would have helped people in Iran or anything.

@feld @dougwade @arcanechat @rysiek 2. and 4. are just ‘we can’t improve just the part of fedi we’re on’. if that were true, Pleroma would be merely a worse Mastodon clone. there are apps creating new kinds of experiences using ActivityPub, and they’re planning to implement the MLS-over-AP spec. just telling users they can’t use E2EE messaging with some of their friends is much less confusing than the experience mainstream IM users are used to, like whatever recently happened to Facebook Messenger when they switched to E2EE by default

@mkljczk @arcanechat @feld @dougwade @rysiek At least with Pleroma, the chat part has been partly solved. Using classic AP DMs (to: ["https://example.com/users/recipient"]) for E2EE isn't doable without lots of added complexity, because new mentions can be added to a thread, at least in how most implementations handle it.
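
To make the addressing problem concrete, here is a hypothetical sketch of a classic ActivityPub DM (the object shape and URLs are illustrative, not any particular server's output). The point is that the recipient set lives in a mutable `to` list, so a thread's audience is not stable, which is fatal for an E2EE session negotiated against a fixed set of recipients:

```python
import json

# A minimal, hypothetical classic ActivityPub DM: visibility is controlled
# purely by the "to" field, with no "cc" and no public collection.
dm = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.com/users/sender",
    "to": ["https://example.com/users/recipient"],
    "content": "hello",
}

# The problem for E2EE: any reply can widen the audience by adding a
# mention, so the recipient set silently grows mid-thread.
reply = dict(dm)
reply["to"] = dm["to"] + ["https://other.example/users/third-party"]

assert set(reply["to"]) != set(dm["to"])  # audience grew
print(json.dumps(reply["to"], indent=2))
```

An E2EE layer would have to detect the widened audience and renegotiate keys (or reject the reply), which is the "lots of added complexity" mentioned above.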

E2EE over AP is suffering from the Linux case of reinventing the same wheel all over again. Instead of Ciscoware and XML, we have AP/JSON(-LD) and endless extensions, and neither works great. Instead of repurposing a protocol that was never meant for private communication (although that was never explicitly said) as a simple transport layer, people should have tried to fix XMPP.

Here's Pleroma Chats as they should have been from the start, and I'm not joking that much: https://docs.ejabberd.im/developer/extending-ejabberd/elixir/#embed-ejabberd-in-an-elixir-app

@feld @arcanechat @dougwade @rysiek @mkljczk It does, but it is also the closest thing to an open and extensible ideal messaging platform that currently exists. And it mostly works on the server side. What is not working at all is the client side, where three different OMEMO versions co-exist, none of which are compatible with each other, and clients seemingly choose which one to implement at random.

In some tangential way, it suffers from the same issue as AP always did. Way too extensible to its detriment.

I have not looked at the Delta Chat internals yet, but so far, after trying to package the relay (I should probably continue that endeavor when I find some inspiration/time), I'm not a fan. If the core is a pile of unportable madness that vendors openssl of all things (thanks, Rust), it has little hope of surviving long-term, unless a different implementation (e.g. the Golang one) gets more traction than the current reference one.
@phnt @arcanechat @dougwade @rysiek @mkljczk

> I have not looked at the Delta Chat internals yet, but so far after trying to package the relay

you absolutely can package the relay, and I did it for FreeBSD, but I don't see the point because half of it is based on a very specific configuration of multiple services, and that's not something that "packaging" alone can solve.

Now if you're annoyed about there being so many different services involved, you can look at other work being done in this area. There's a custom version of the Maddy mail server written in Go being worked on (and actively used in a certain country right now) so you can deploy servers with a single binary: https://github.com/themadorg/madmail

> If the core is a pile of unportable madness that vendors openssl of all (thanks Rust), it has little hope of surviving long-term.

The core as I package it on FreeBSD does not vendor openssl, and the reliance on any openssl at all can likely be removed in the not-so-distant future.
@feld @arcanechat @dougwade @rysiek @mkljczk My annoyance with the packaging was more with the configuration being stuck in the Debian-specific install tool (at least as of ~2 months ago). I've heard there have been improvements on that front. The number of services involved was expected, since it's email; if you want a normal Dovecot/Postfix setup, you need all of that anyway.

Packaging the core wasn't that bad, after packaging a bunch of Python dependencies, because I decided to take my chances with packaging it on RHEL8. I think I have it successfully packaged, but I never got around to testing it.
@feld @phnt @arcanechat @dougwade @rysiek @mkljczk the ideal way to use json-ld is to use expanded form, put it directly in a triple store, and make that searchable, but I found it hard to do this on a FLOSS stack. the triple store space isn't in great shape: you either get expensive proprietary systems, or you get poorly documented systems that fall over when you try to CRUD fediverse data at realtime speeds. or sometimes they just fall over because they don't work at all; I had multiple projects where release builds just didn't work right and I had to reach out to developers to find out all the "tricks" to get a working system, terrible bugs, etc.

there are other unrelated problems, like needing a predictable data structure to index, and needing indices to make your system work, so in practice you have to constrain what you accept.

currently I am dealing with the fact that activitypub json-ld documents can have multiple types. in practice I think no system supports this; they just reject documents with an array instead of a string. I extended an activitypub server to support Verifiable Credentials 2.0, and if you want to support Open Badges, it is a hard requirement that the type is ["VerifiableCredential", "OpenBadge"]. So I ended up compromising and internally made our server use heuristics to pick one primary type, keeping a supplementary type array for later use. And internally it only works for non-Activity objects that are the object of an Activity: a hard limitation of the system. Couldn't support full flexibility, so I made a compromise, and the compromise is still ugly and added annoying complexity to the code. Even if you made a commitment to supporting multiple types, how would you even do that? you can't support it arbitrarily; you can only hardcode how you deal with specific combinations of types.
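
The primary-type compromise described above can be sketched in a few lines. This is a made-up heuristic, not any project's actual implementation; the priority list is an arbitrary example:

```python
# Pick one "primary" type for internal indexing and keep the rest as a
# supplementary list. The priority order is hypothetical.
PRIORITY = ["VerifiableCredential", "Note", "Article"]

def split_types(type_field):
    """Normalize a JSON-LD type (string or list) into (primary, supplementary)."""
    types = [type_field] if isinstance(type_field, str) else list(type_field)
    for preferred in PRIORITY:
        if preferred in types:
            primary = preferred
            break
    else:
        primary = types[0]  # fall back to document order
    return primary, [t for t in types if t != primary]

# Open Badges requires both types to be present:
primary, extra = split_types(["VerifiableCredential", "OpenBadge"])
assert primary == "VerifiableCredential" and extra == ["OpenBadge"]
```

The ugliness shows up exactly where the post says it does: anything not in the hardcoded priority list gets an arbitrary primary type, so interop only works for the combinations you anticipated.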
@feld @phnt @arcanechat @dougwade @rysiek @mkljczk

I have run into situations where it seems literally impossible to make two things that use json-ld interoperate by making one document function for both, which is kind of an explicit promise of json-ld. but it works a lot of the time; life isn't perfect, and if you think about it a bit you realize the promise could never be 100% fulfilled.

json-ld works as a substrate for representing a graph of arbitrary-content triple "documents". it is your responsibility when you make a real-world system to constrain what you accept.

the problem as I see it is that it has no constraints on real-world profiles of usage. it's ok at the activitypub level because it is another substrate, but if you build something on top of activitypub, you should have a spec defining narrowly and rigorously what is valid. so, for example, if you're building a microblog network, you define a microblog interop spec, and you also don't pretend it will mesh with, say, a subreddit spec or a forum spec. you might even make practical constraints like "json-ld allows multiple types but this spec mandates one"
@feld @arcanechat @dougwade @mkljczk @phnt @rysiek > json-ld works as a substrate for representing a graph of arbitrary-content triple "documents". it is your responsibility when you make a real-world system to constrain what you accept.

to clarify this, you could have a system that still lets a document representing something carry all kinds of arbitrary data. maybe your system only tracks the graph and a few predictable properties, but lets someone looking up that document by id see whatever data is in there, which some other consuming system can handle. json-ld is great for that. you still need to say "it at least has this; it must never have this"
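
The "it at least has this; it must never have this" rule is essentially a tiny profile validator. A toy sketch, with entirely hypothetical constraint sets:

```python
# Track only the predictable properties; pass everything else through.
REQUIRED = {"id", "type"}    # hypothetical: fields the profile demands
FORBIDDEN = {"bto", "bcc"}   # hypothetical: fields the profile bans

def accept(doc: dict) -> bool:
    """Accept a document iff it has all required keys and no banned ones."""
    return REQUIRED <= doc.keys() and not (FORBIDDEN & doc.keys())

assert accept({"id": "https://a.example/1", "type": "Note", "extra": 42})
assert not accept({"type": "Note"})                        # missing "id"
assert not accept({"id": "x", "type": "Note", "bcc": []})  # banned field
```

Arbitrary extra keys (like `"extra"` above) survive untouched, which matches the "track the graph plus a few predictable properties" approach.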
@sun @arcanechat @feld @dougwade @rysiek @mkljczk I couldn't have said it better and I'm not well versed in document parsing and JSON-LD, but I'll add this.

The only reason I mentioned the LD part at first is that it is an entirely optional part of the spec that not many projects use. Contrary to what some say, it is not mandatory at all to use LD. But for something like E2EE you might do things that are more LD-friendly. And if you want schemas (the purpose of LD), XML is realistically better at them, even though support for XML parsers hasn't been great for the last few years.

Which creates an interesting issue. You can remap types in JSON-LD, so you can create a document that has two different meanings to a JSON consumer and a JSON-LD consumer.
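
A contrived illustration of that dual reading, with a hand-rolled term lookup standing in for a real JSON-LD processor (real expansion follows the JSON-LD algorithms; this only approximates the idea):

```python
# One document, two meanings: a plain JSON consumer reads the literal
# string, while a context-aware consumer resolves it through @context.
# The malicious remapping below is purely illustrative.
doc = {
    "@context": {"Note": "https://evil.example/ns#DeleteEverything"},
    "type": "Note",
    "content": "hi",
}

# Plain-JSON consumer: sees an ordinary Note.
assert doc["type"] == "Note"

# Naive context-aware consumer: resolves the term to a different IRI.
ctx = doc.get("@context", {})
expanded = ctx.get(doc["type"], doc["type"]) if isinstance(ctx, dict) else doc["type"]
assert expanded == "https://evil.example/ns#DeleteEverything"
```

Two consumers of the same bytes now disagree about what the object *is*, which is exactly the interop hazard being described.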

And the way LD is currently treated isn't by using it properly in a triple store, the usual way you handle documents like this. For the most part, it is handled as pure JSON that is compacted/expanded when processed by a JSON parser with extra logic on top. Which of course makes JSON handling a notable performance issue in federation for at least one project, and a constant source of issues for those projects.
of course you can. they just won't get it before they come online. which is kind of duh! for instant messaging, isn't it?

now, if you're concerned about the two of us never being online at the same time, install Jami on your home server, or on a VPS, link your account there, and it will get a copy of your messages whenever you send or receive them, and it will transfer them to your peers or to your own device whenever they come online.

now, if that's not good enough for you, I guess you really prefer to share your conversations with third parties for them to do this for you. me, I prefer my autonomy.

CC: @davep@infosec.exchange @rysiek@mstdn.social
nope. I'm told they don't even have access to data, or even metadata, thanks to some technology indistinguishable from magic in its protocol. but I won't pretend I really understand how that works.

the main problem with signal is their insistence on demanding a snoop phone to get started. that spoils the entire experience, and probably exposes its users' conversations, metadata and even secret keys to third parties. see https://blog.lx.oliva.nom.br/2026-02-01-signal-of-awareness.en.html and https://blog.lx.oliva.nom.br/2026-01-25-compromising-encryption-keys.en.html

the secondary problem with signal is its insistence on centralization. this makes the "not being online at the same time" a problem for all its users, when their centralized servers are not online

CC: @feld@friedcheese.us @rysiek@mstdn.social

@lxo @feld @rysiek
I agree with the centralisation risk. But those articles have nothing to do with needing a telephone number. They're more of an indictment of Windows and tend to back up Signal's worry about LLMs embedded into the OS.

If your endpoint is compromised, anything you read is also compromised.

As for the "magic" comment, it's just that they encrypt basically all the metadata that the likes of WhatsApp don't. And with the double ratchet protocol they can't decrypt that data. They *could* make logs of who called or messaged who, but don't. If this were decentralised, what's to stop a bad actor logging such information? Just curious. It may need a rethink of the whole architecture (I'm not saying that's a bad thing by the way).

@davep @lxo @rysiek

> If this were decentralised, what's to stop a bad actor logging such information?

From the DeltaChat perspective, it's assumed that the servers may get compromised.

So if you and another contact are using the same server (relay), and the relay is compromised, the attacker will be able to see the IP addresses of the clients. This is not ideal, but it's about all they get. They can measure message sizes and guess what's inside but it's not very useful in most cases unless they're trying to pin down the transfer of a specific file or something.

If each contact is using a different server (relay), then this is trickier. They can only see the IP address of the user that logs directly into the server they've compromised, and they can't even be sure the same contact is sending the surveilled target messages if the other client's email address keeps changing, even bouncing around and coming from completely different servers (relays). This is something you can do now, and it will be automated in the not-too-distant future.
@davep @rysiek @lxo DeltaChat makes it relatively easy to set up your account on a relay in a different legal jurisdiction than the one you are in, making it even harder for legal authorities to get anything on your account activity. And if your account (email address) can change so easily, they start chasing ghosts.

If you had any concern that you might be surveilled the smart thing to do would be to additionally use proxies/VPNs if possible, and change your DeltaChat relay regularly. Change it, send your contacts a message so their app will automatically learn your new address to contact you at. Much easier than getting new phone numbers!
you seem knowledgeable about signal. I hope you don't mind if I shoot you some questions.

does it use TPM features on mobile phones as well?

how does it deal with linking multiple devices to an account? does each device get a separate key generated locally using TPM? or do they all share the keys first generated in a compromised mobile phone?

when you link a new device to an account, does it gain access to past messages, or only to future messages?

is there any way for you to tell in case someone else uses your compromised keys/credentials to gain access to your account, e.g. by linking a device that becomes visible to other devices or somesuch?

thanks in advance,

CC: @feld@friedcheese.us @rysiek@mstdn.social
@lxo @davep @rysiek

> does it use TPM features on mobile phones as well?

yes

> how does it deal with linking multiple devices to an account? does each device get a separate key generated locally using TPM? or do they all share the keys first generated in a compromised mobile phone?

AIUI same keys; there's just a different identifier that tells you which device it is. Someone wrote a tool that can sniff "read receipts" and determine whether someone is "at home" based on whether a receipt was sent from their phone or desktop.

> when you link a new device to an account, does it gain access to past messages, or only to future messages?

Yes, as of last year you can choose to sync old messages when you link a new device (like your Desktop)

> is there any way for you to tell in case someone else uses your compromised keys/credentials to gain access to your account, e.g. by linking a device that becomes visible to other devices or somesuch?

There is *now*, after Russian soldiers were infiltrating Ukrainian military Signal chats by linking their own devices to existing Ukrainian military members' accounts, through hacks, by tricking them into following links, or just by taking phones off their dead bodies.


Not mentioned in this thread is that your Signal account key is stored in Signal's cloud, since you can recover your account with a PIN, which wouldn't be possible if they didn't have your key.
wow, that is clever indeed: they don't get your key, they get the random part that goes into forming the key, while the other part is derived from the PIN. so they can (i) authenticate the PIN without knowing it or ever receiving it, and (ii) extract the part they hold from the enclave and send it back to you (if you provided the right authentication within a limited number of attempts), so you can hash it along with a separate key, also derived from the PIN they don't know, to recover your master and application keys. it feels sound even without the replicated enclaves. even if they retained the random number outside an enclave, they'd still have to brute-force the PIN to recover your key, and IIUC all this would get them is your social graph. (maybe your backups too?)
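
The split described above can be sketched with stdlib primitives. This is NOT Signal's actual SVR construction, just a simplified illustration of the idea that the master key needs both a server-held random share and a PIN-derived key, and that the server can check the PIN via a derived tag without ever learning the PIN; all parameters are illustrative:

```python
import hashlib
import hmac
import os

def stretch_pin(pin: str, salt: bytes) -> bytes:
    """Stretch a low-entropy PIN into a key (illustrative parameters)."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

salt = os.urandom(16)
server_share = os.urandom(32)        # random part, held in the enclave
pin_key = stretch_pin("1234", salt)  # derived on the client from the PIN

# Master key = keyed hash of both parts; neither alone is enough.
master = hmac.new(pin_key, server_share, hashlib.sha256).digest()

# The server stores only an auth tag derived from the PIN key, so it can
# verify the PIN (within a limited number of attempts) without knowing it.
auth_tag = hmac.new(pin_key, b"auth", hashlib.sha256).digest()
assert hmac.new(stretch_pin("1234", salt), b"auth", hashlib.sha256).digest() == auth_tag
assert hmac.new(stretch_pin("0000", salt), b"auth", hashlib.sha256).digest() != auth_tag
```

And as the next paragraph notes, none of this helps if the device where the PIN is entered and the random share is generated is itself compromised: that device momentarily holds both halves.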

but then again, the weakness is the computing device where PIN gets entered and random part gets generated. whoever controls that device gets both, and can thus derive all the keys and gain access to whatever the keys were supposed to protect

CC: @feld@friedcheese.us @rysiek@mstdn.social

@davep
Doesn't Intel hold private keys for SGX enclaves or something? I remember hearing something like that. Is that a concern?

Then again, I guess we are trusting chip designers anyway. But Intel has recently been partly bought out by the US gov, which is concerning, as all the Minneapolis ICE watchers and similar groups are using Signal.
@lxo @feld @rysiek