The Encryption Debate Is Over - Dead At The Hands Of Facebook

Retraction from Schneier on 8/2:

Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.


I knew this would happen; I assumed it was already happening in WhatsApp, so in a way it’s nice to know for sure.

I 100% see this becoming a thing on Google’s Android, and in devices from Samsung and some Chinese vendors. I’m sure it’ll be something along the lines of software embedded in Google Keyboard for the sake of user convenience, the way Google is already beginning to push Allo-style “Google Assistant” features in the default SMS Messages app on Android.

I don’t see this ever happening on AOSP itself, or on any Apple devices, or on most computer hardware. The security implications are huge, and while I think most consumers will happily ignore them, it’s hard to see something like this passing muster with an enterprise IT department, for example. So an OEM like Dell or HP embedding “similar tools directly into devices themselves” would kill their business.

I think this is a very bad thing, but I also think it’s silly to proclaim there will be no more secure devices. Devices that don’t have this functionality will always exist.


We are doomed :expressionless:
I doubt governments will ban (more) secure devices and programs, e.g. Purism or Briar. The problem is, what’s the point of my using a Librem or PinePhone or LineageOS with E2EE when the other side uses a regular Android phone with all the Google services and programs (Gboard)?


The title of the Forbes article is mendacious and click-baiting. Facebook is not the master of the world. Just because they intend to put a backdoor in their instant messaging app does not mean everybody will follow suit.

Forbes says in effect that this will allow censorship of content, but why is that a problem? Why is it pointed out only when big bad Facebook is doing it?

Mastodon, for one, also has censorship rules on its books. It will ban you if you say certain things, which pretty much overlap with Facebook’s pet hates.

However, Mastodon is teeny-weeny and nice and fluffy and all “privacy” oriented, so it’s supposed to be good for you. It’s also open source, so it can’t do evil, can it?

Smacks of double standards, if you ask me.

You can host your own Mastodon instance with your own rules, or even make a fork. But if the owners of other instances don’t want to “speak” with you, that’s not censorship. It’s the owner’s choice.


This is actually pretty funny. FB are really marketing their setup as “encryption,” WTF?

What they should really do is “encrypt” everything with ROT13 in Y1, then in Y2 switch to ROT26 and say that they improved encryption strength by 200%.
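To make the joke concrete, here’s a quick sketch (the `rot` helper is made up for illustration): rotating the alphabet by 26 positions maps every letter back to itself, so “ROT26” is literally no encryption at all.

```python
import string

def rot(text, n):
    """Rotate ASCII letters by n positions (a Caesar cipher); other chars pass through."""
    shift = n % 26
    lower = string.ascii_lowercase
    upper = string.ascii_uppercase
    table = str.maketrans(
        lower + upper,
        lower[shift:] + lower[:shift] + upper[shift:] + upper[:shift],
    )
    return text.translate(table)

msg = "Attack at dawn"
print(rot(msg, 13))  # "Nggnpx ng qnja" -- trivially reversible
print(rot(msg, 26))  # "Attack at dawn" -- identity, zero security
```

And of course applying ROT13 twice gets you back where you started, which is the whole gag.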

I never believed WhatsApp had real encryption, and now there is some evidence that it’s BS. Surprise, surprise.

I’m not speaking about self-hosting. As soon as you utter that word, you shrink the potential audience dramatically.

I’m speaking about platforms with the potential to grow to a mass audience. Free speech is antithetical to limited audiences. It’s not free if it stays limited to fringe participants.

Never mind forking. This requires being a software developer to be able to speak one’s mind.

Something worth considering is that Mastodon is hyper-niche. If it were a mainstream thing, one could indeed host their own Mastodon instance with their own rules (and I assume others could block that instance if they felt it wasn’t worth dealing with). But at this point, we are in a state where you pretty much have to use FB/Insta/WA, and to a lesser extent Twitter, if you want to have an impact and/or communicate with friends. Same with Google/Apple/MS products.

That’s why, in addition to supporting the break-up of FB/Google/Amazon/Apple/etc., I think we need strict and open-ended rules around federated interaction/interoperability. Perhaps to the point where it would be legal to violate their copyrights (and other corporate allowances) to guarantee true service federation.

Look at something like MS Office. It is a 100% monopoly in business office suites (I am not talking about a startup sharing a WIP spreadsheet) and perhaps even in consumer-level office functions (try sending an OpenDocument resume to a recruiter). If the consequence of “acting out” against interoperability were getting their source code GPL’d, they would do everything in their power to truly comply with federation/interoperability.

Didn’t MS actually add support for OpenDocument formats in Office as a way to manage PR around the monopoly back in the day? Same thing with the Windows N versions. If the risk of pulling a fast one (à la Windows N / OpenDocument support) were getting Windows/Office GPL’d, they would act in good faith. They wouldn’t have any other option.

I recognize that what I am saying is somewhat of a pipe dream, but you also have to remember that copyright doesn’t actually exist (in the physical sense). We decided that it has benefits for society (and it does), but that also means we can modify it to suit broader social goals. There is no utilitarian argument for why we should have special rules for MS/Google/Apple/FB/etc. shareholders.


Would it be possible to avoid if you used a different smartphone OS, or is it something you could change by rooting your device and installing your own operating system (or something along those lines)? I think we just need to get smarter about it.

That’s a great remark. Indeed, Mastodon is very niche at the moment. It could grow to be a viable alternative some day, and that’s why I mentioned it. We need more of them, and they need to be more free.

The original model of Mastodon was to allow every community to enforce its own rules, and that makes sense, because it mimics the way people organize in real life. It became useless the minute Mastodon added common rules for everyone, and unsurprisingly, those rules turned out to be exactly the same as Facebook’s, Twitter’s, et al. So why not use the original, which has much broader reach anyway?

It’s interesting you mention Microsoft, because in my opinion Microsoft, the old Microsoft to be precise, is the gold standard for “empowering” users (remember that word?), meaning adding to their freedom, not taking away from it.

I still use Microsoft Office 2003, and it’s a joy compared to the web of obligations and constraints that modern computing has become.

The big corporations you mention have indeed grown to monopoly status, and whatever the (very real) benefits they bring, this calls for regulation, maybe breakup in some cases, maybe something else.

has anyone come across a second source? i’ve only seen this speculation from Forbes / Kalev Leetaru

ultimately - if we’re going to stay true to the focus area of this forum/website, knowing which apps are definitely doing this vs. not would be most helpful - i.e. WhatsApp is still relatively safe, vs. Facebook has indeed extended its reach into your device, now via WhatsApp (we already know the extent of Messenger’s intrusion).


PTIO especially names WhatsApp as something to avoid

If you are currently using an Instant Messenger like LINE, Telegram, Viber, WhatsApp or plain SMS messages you should pick an alternative here.

yes, i know it’s discouraged (and i agree), but there is a big difference between questionable encryption and the app enabling device-level trojans which a) allow agency backdoors and b) send device activity back to Facebook datacenters through unencrypted means for marketing purposes.



(translated to English) snippet from the third root-level source in the Forbes article.

[Update 24.05.2019, 16:30] Meanwhile, the Federal Ministry of the Interior has commented and speaks of “considerations”. A statement says: “What is needed is a clear and technology-neutral approach that reconciles the freedom to use encryption with the unavoidable needs of the security authorities. Providers remain free to decide how they offer encrypted communication as a rule; state access to the communication content, insofar as providers can technically enable such access, would be a legally regulated exception for their users.”

sorry, but using “WhatsApp” or any other closed-source, proprietary company service makes any discussion about “real privacy” (if such a thing even exists) nonsense. Governments (at least in Europe) have access to any plaintext communication or data processed by a commercial communication product, be it an “app”, a service, or the links/carriers in between. And with the new DSGVO/GDPR it is even easier for governments to locate “interesting” data about people and obtain it. Any company has to provide “access” to its customers’/users’ data in some way, be it a backdoor, a sniffer, handing over hard disks or a key, or letting agents access running systems, if the government “needs” it; any judge (or even lower-level officials) can compel this, and it is known how easily and widely government entities use it. The company is not allowed to inform the customer or any other party about that forced disclosure (even after the case is closed).

secondly, even “foolproof encryption” is of low value against traffic correlation, which allows governments to de-anonymize Tor traffic in most cases (at least in Germany, where strong regulation means there is no truly decentralized internet infrastructure, and government security services operate many “mirror ports”). For governments, the most interesting question is often who is communicating with whom, and when. Encryption, even end-to-end encryption, does not help with that.
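To illustrate the traffic-correlation point with a toy sketch (all names, timestamps, and the `correlation_score` helper are invented for illustration): even if every payload is opaque, matching send and receive times observed at different points on the network reveals who talks to whom.

```python
# Times (in seconds) when each party's ISP observed *something* being
# sent or received; the contents are assumed to be fully encrypted.
alice_sends = [10.0, 42.5, 97.1, 130.4]
bob_receives = [10.3, 42.9, 97.4, 130.7]   # closely trails Alice's sends
carol_receives = [5.0, 60.0, 200.0]        # unrelated traffic

def correlation_score(sends, receives, max_delay=1.0):
    """Fraction of sends followed by some receive within max_delay seconds."""
    hits = sum(
        any(0 <= r - s <= max_delay for r in receives) for s in sends
    )
    return hits / len(sends)

print(correlation_score(alice_sends, bob_receives))    # 1.0 -> likely a pair
print(correlation_score(alice_sends, carol_receives))  # 0.0 -> unrelated
```

Real traffic analysis is far more sophisticated (packet sizes, flow volumes, statistical tests over long observation windows), but the principle is the same: the metadata alone identifies the communicating pair, and no amount of payload encryption changes that.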

maximum personal privacy (“real” is the wrong word) requires personal responsibility: self-determined use of internet technology, and thus at least a fundamental knowledge of the (real) internet and its open, independent standards and protocols, and how to use them independently of any single service provider or proprietary software vendor. Awareness of what privacy means, and against which entities, is fundamental.

But this is exactly what typical end users don’t want. Handing that responsibility and awareness to others means putting your privacy in others’ hands; so users will be forced into it by others, which, in the end, is what they want…

Most important: having the government protect your privacy means giving up privacy from the government…

yeah, no. i wasn’t asking for a lecture, i was asking for a second source for what sparked this thread.

You make a great point, and that is happening. There are tags like “Privacy Concerns” on app recommendations that can point out troublesome issues. There is a debate right now about whether to remove Riot from the recommendations, primarily because of data privacy concerns (not concerns about the encryption it uses).

However, this website does not focus on specifically listing which apps NOT to use; that would be a very daunting task. This forum is good for that, though. The website may offer some suggestions on what not to use, but its primary focus is to feature recommendations for apps we SHOULD use.


I agree, and that is where this article reaches the wrong conclusion. Using an app like WhatsApp is totally opt-in; no one is forcing you to use it, and people should know that they are willingly giving up their data when they do. A government requiring all apps to include data capture in a mass-surveillance program is a totally different thing and requires a vastly different debate. Outlawing apps without surveillance is certainly something that requires push-back.


This hits the nail right on the head, imho. The following was posted in a few places on this topic by /u/FvDijk on Reddit, which I just came across. OP direct link:

In a similar post by the website CCN, I wrote the comment below. Its conclusions hold up for this part as well.

TL;DR: Having read the piece, its sources and eventually watched the Facebook Developer talk it came from, I can say that there is a lot of speculation in this article. As such, I would not recommend it as a trusted source.

The rest:

The article states the following:

Mark Zuckerberg’s Facebook is reportedly working on a back-door content-scanner for WhatsApp, tantamount to a wiretapping algorithm. If the reports are correct, Facebook will scan your messages before you send them and report anything suspicious.

This Forbes (F1) link goes to another Forbes article (F2), which links to the Developer talk.

F2 is a speculative article based on the Facebook talk, as one can tell from its second paragraph:

I have long suggested that the encryption debate would not be ended by forced vulnerabilities in the underlying communications plumbing but rather by monitoring on the client side and that the catalyst would be not governmental demands but rather the needs of companies themselves to continue their targeted advertisements, harvest training data for deep learning and combat terroristic speech and other misuse of their platforms.

Facebook suggests that it wants to use AI on the device (“Edge AI”) for automated content moderation on its platform. One of the challenges they name is that they don’t know whether the algorithms work, which requires sending violating content to their servers. They name this as a challenge for privacy.

F2 then infers that this could be used to bypass E2E encryption if moderated content is indeed sent to Facebook’s servers. F2 suggests that encrypted messaging may fall prey to these same algorithms, although Facebook never stated this. Instead they used the vague “our platform”, so it’s not an entirely strange conclusion to draw.

F1 then declares the death of encryption at the hands of Facebook, magnifying the suggestions of F2 into conclusions. We find the link to F2 in this piece of text:

Facebook announced earlier this year preliminary results from its efforts to move a global mass surveillance infrastructure directly onto users’ devices where it can bypass the protections of end-to-end encryption.

On the same site, it went from speculative to conclusive. The CCN piece then links to F1, blindly taking over its alarmist tone and its suggestions presented as conclusions.

Why did I do this?

I dislike misinformation a lot, especially this kind of confirmation bias. When I finished with the Facebook Developer talk, I looked at the original article and found it alarmist and wrong. Let’s instead discuss whether Edge AI should send information to its maintainer. That’s an actual privacy tradeoff question.
