I was wondering why there is no web app for Signal.
I remember when I used to use Telegram Desktop; then the web app came along and I switched entirely to that because it was so much more convenient.
Is there any technical reason why there is no web app? Is it because a web app can be less secure than the Chrome app or other native apps?
Currently, I see a discussion on making an Electron app, which I think is a solid step.
But it would have been awesome to see some effort toward making a web app.
All the major messaging apps have a web app version.
So my question is: is Signal at a disadvantage - due to its design around local storage and/or because it's open source on both the client and server side - such that it's not pragmatic to make a web app?
I'm not sure; if the CA is compromised, what would prevent the attacker from modifying the signature in addition to modifying the script? The file could be compromised and then signed, and the signature and the script injected into the web page.
However, one can also argue that Signal relies on HTTPS to authenticate the server anyway. Maybe what could be done is to use Subresource Integrity and sign the real hashes with an anti-forgery token known only by the real server. There would be one token per page served. Then the Signal server would only need to validate this token to ensure that the client has not been compromised by a rogue CA.
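To make that idea a bit more concrete, here is a rough sketch of the SRI-hash side of it in the browser; the per-page anti-forgery token and the server-side validation are left out, and the function name and parameters are purely illustrative:

```typescript
// Hypothetical sketch: recompute an SRI-style digest for a fetched script and
// compare it against the expected value that (in this proposal) the server
// would have signed together with a per-page anti-forgery token.
async function matchesSriHash(url: string, expectedSri: string): Promise<boolean> {
  const body = await (await fetch(url)).arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-384", body);
  const bytes = Array.from(new Uint8Array(digest));
  const b64 = btoa(String.fromCharCode(...bytes));
  return `sha384-${b64}` === expectedSri;
}
```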
This strikes me as very general advice (which is IMHO the cancer of the software world). It mostly applies when you have, for example, an e-commerce website built with WordPress that loads dozens of JavaScript files from obscure vendors through plugins.
When you are in total control of the web page (you develop it from the bottom up) and are a professional who knows what they are doing, you can audit the code you insert, and you can even sandbox things using iframes. The only legitimate concerns for a web client are the CA and code-signing aspects. I think the simple scheme I described should provide good enough security, even compared to the robustness of signing a binary.
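For the iframe point, a minimal sketch of what I have in mind (the embedded URL is just a placeholder for whatever third-party code you don't fully trust):

```typescript
// Load untrusted third-party code in a sandboxed iframe so it cannot touch
// the parent page's DOM, cookies, or local storage. The URL is hypothetical.
const frame = document.createElement("iframe");
frame.src = "https://widgets.example.com/embed.html";
frame.sandbox.add("allow-scripts"); // scripts may run, but without same-origin access
document.body.appendChild(frame);
```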
One thing in favor of web clients from a security perspective is that web browsers can constitute a great sandbox that is cleaned on every reload (while a user device may be compromised, since not every device has advanced security features).
However, one thing against them from a similar perspective is what I think is a surprisingly underrated threat: browser plugins. Most plugins request, and are given, access to the whole execution context of every page the user visits. I think this can be quite dangerous for an application like Signal, especially considering the current state of naivety/unawareness regarding browser plugins. As much as I hate to say it, this alone strikes me as a deal breaker until web browsers improve on this front. Standalone applications win hands down here thanks to the segmentation/security features of OSes.
I have also been thinking about how to make a client-side-JavaScript-only version of signalapp, as an alternative-slash-augmentation to the standalone Electron app. Under the hood, Electron actually ships a forked copy of the Chromium browser, which gets around the "what about untrusted plugins" problem by simply controlling the entire browser binary! This has a downside in RAM usage, but avoids some security pitfalls that the signal4chromeApp approach (and any future browser-hosted signal4browser idea) will have to overcome.
I think one of the most serious problems is endusers wanting to use signal4browser on machines where they ought not be using it at all. If you are chatting in a secure group chat on the thumb keyboard and small screen of your signal4smartphone, and get tired of the mechanical inefficiencies those impose, you can always:
100% trusted system == sit down somewhere and get your laptop out of your bag, which is a device you control and where you can run full-blown signal4desktop because you installed it long ago. This is the signal4desktop use-case, but has the downside that you must lug around a bulky laptop (or perhaps buy a weird kind of tablet/convertible which runs fullblown linux rather than android or fullblown windows rather than winCE-and-friends)
99% trusted system == sit down somewhere and use a machine that you mostly trust, but which you do not necessarily have sole control over - the shared PC in the family room, your desktop PC at work, or whatever. This is the primary use case of signal4browser, where you want to be able to open up a new tab, use it for secure messaging, and then close the tab and have all traces disappear from THAT system (which other people have access to - you trust them enough that you do not believe they will install a keylogger / rootkit / spycam, but would prefer that you not tempt them to be nosy). The stuff would still be synced to your own devices, e.g. signal4smartphone, of course. See this related thread, Scan barcode when opening Signal, for a real-world use-case example.
50% trusted system == unfortunately, endusers would also want the convenience of signal4browser in situations where it is not very intelligent to trust the hardware and software of the device in question. Public kiosk in the library (security depends primarily on how tech-savvy the librarian is), public kiosk in an internet cafe (again, the security of the box in question is a crapshoot), borrowing a device with a browser - might be a laptop or a desktop or a smartphone or a tablet or a videogame console - from an acquaintance of some kind (again, the security of this move will primarily depend on who the sysadmin is).
completely untrusted system == unless the helpdocs and UI/UX pathways of signal4browser are extremely clear on this point, endusers will also be tempted to think "hey, I can use signal4browser and speak freely" when they KNOW they cannot trust the sysadmin: a draconian boss who installs spyware on all endpoint devices, a neighborhood script-kiddie that offers a honeypot system for unwary passersby, and other unwise choices.
If we offer them the option to "install" signal4browser onto any random device with an internet connection, simply by opening up a URL with a little green "you are secure" checkbox next to it, then plenty of endusers will believe they are secure even when they are not. So in addition to solving the various technological hurdles that make signal4browser difficult to implement securely even in the case when the base machine in question is 99% trusted, we also need to consider how to make the UI/UX and the "login process" properly convey that signal4browser should only be used on an endpoint device where the enduser trusts the sysadmin of that device.
Totally agree that users should only ever use a web version on systems they can totally (or almost totally?) trust. The thing is, you can't trust users, can you?
As you say, it's a convenience factor - I get tired of looking at my small phone screen and typing on its frustrating on-screen keyboard, and want to switch my busy chat to a real keyboard and a nice big screen. That's my own main reason for wanting a web or desktop app. My other reason is in case my phone is lost or stolen, so I can have another active Signal instance where I can still communicate with my contacts until I replace my phone.
Both solved with signal4desktop… unless all my devices were stolen, in which case signal4web would do nicely. But then, where would I access that from? A public PC in a library/cafe/airport/school/workplace? Bingo, I am no longer secure or private.
A common scenario where web apps can be compromised is a workplace that uses HTTPS interception. While that happens silently behind the scenes when using a work device, many workplace networks will ask to install their own certificate on a user's own personal device if that device is used on the company network. If the user doesn't understand what HTTPS interception is and clicks "yes, please install this certificate thing on my machine so I can avoid all these annoying HTTPS warnings that keep showing up in my browser", then signal4web might as well be running on HTTP.
In a whistleblowing scenario where someone wants to send a Signal message regarding their employer to an outside recipient, this situation can even become life-threatening, while the user might still believe they are perfectly private and secure.
SRI is entirely orthogonal to the problem. Every single time you start the hypothetical web app, the server and anyone else able to intercept your SSL connection to it can modify, add or remove any of these hashes - or anything else you're downloading and executing, really.
Since browsers are allowed in most corporate and institutional environments, a browser app would allow more users to securely communicate using Signal.
I think he's saying that JavaScript cryptography has significant issues that render it as good as non-existent if anyone is actually interested in seeing the content of your messages. The article linked in this post explains it.
I believe many of the issues in this article are outdated and can be solved in modern browsers.
You can use the Web Crypto API to verify a code signature on any JavaScript file you download. From there you can cache all downloaded scripts using Service Workers. Any update that gets downloaded goes through the same code-signing check, and since everything is cached, you can ensure that scripts haven't been tampered with. The only time a third party can tamper with the page is when downloading the initial script that checks the code signature. But the same can be said about downloading the desktop app from Signal for the first time.
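A rough sketch of what that verification step could look like, assuming the page ships a pinned public key plus a detached signature next to each script (the URLs, signature scheme, and cache name are all just assumptions for illustration):

```typescript
// Hypothetical sketch: verify a detached ECDSA signature over a downloaded
// script before caching it. The pinned public key comes from the initial page
// load, which is the one step that still has to be trusted.
async function verifyAndCache(scriptUrl: string, sigUrl: string, pubKey: CryptoKey) {
  const [script, signature] = await Promise.all([
    fetch(scriptUrl).then((r) => r.arrayBuffer()),
    fetch(sigUrl).then((r) => r.arrayBuffer()),
  ]);
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    pubKey,
    signature,
    script,
  );
  if (!ok) throw new Error(`Signature check failed for ${scriptUrl}`);
  // Store the verified bytes in the Cache API so a Service Worker can serve
  // them on later loads instead of re-fetching them from the network.
  const cache = await caches.open("verified-scripts");
  await cache.put(scriptUrl, new Response(script, {
    headers: { "Content-Type": "text/javascript" },
  }));
}
```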
I think the only real issue is that browser extensions are a serious security threat. And I don't expect an API to disable extensions any time soon, since that would make any ad blocker entirely useless.
Perhaps I'm missing something crucial, but I think that in terms of security, PWAs are almost identical to native applications, except for the extensions issue, that is.
If you read the article carefully, you will see that most of the aspects he analyzes there are fundamental problems with how JavaScript works in browsers. That can't change in modern browsers, and it will never change as long as browsers work the way they do right now.
There is a difference, though, between a web app (Chrome app) and a direct web version for encrypted messaging, like the one WhatsApp is offering.
Pure web access is insecure, and that will never change.
Yes, WA has built a mini webserver into its Android and iOS clients. For access in not-entirely-safe places, like work PCs, that's ideal IMO. Tricks like hardlinking the Signal Desktop data directory to a TrueCrypt image are not for the masses (and require admin rights on the computer, so they are unusable in many work environments).
I beg to differ; I believe browsers have already changed. That article is from 2011 and mentions things like people being stuck with browsers from 2008, CSPRNGs being practically impossible in browsers, and being unable to control the caching behavior of the whole browser with any specificity. Furthermore, I think the article is mostly targeted at using crypto as a replacement for HTTPS.
Fast forward to 2020: we now have browsers that automatically keep themselves up to date, a random number generator is built into 93% of browsers, and caching can be controlled very precisely using Service Workers.
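As a rough illustration of those two points (the cache-first strategy and names here are just my own choices for the sketch, not how Signal would necessarily do it):

```typescript
// 1) A CSPRNG is available through the Web Crypto API:
const nonce = crypto.getRandomValues(new Uint8Array(32)); // 32 random bytes

// 2) Fine-grained caching inside a Service Worker (sw.ts, compiled against the
//    "webworker" lib): serve cached responses and fall back to the network.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request)),
  );
});
```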
I am by no means an expert on encryption or code-signing applications. It's just that I read through the article again in its entirety and I couldn't find anything major that prevents us from creating a secure chat client in a browser today, except for the side channels (a.k.a. extensions).
Could you tell me what part of the article specifically is still an issue?