Jon Callas | 17 Aug 08:04 2013

Reply to Zooko (in Markdown)

# Reply to Zooko

(My friend and colleague, Zooko Wilcox-O'Hearn, wrote an open letter to me and Phil on his blog.
Despite this appearing on Silent Circle's blog, I am speaking mostly for myself, only slightly for Silent
Circle, and not at all for Phil.)


Thank you for writing and your kind words. Thank you even more for being a customer. We're a startup and
without customers, we'll be out of business. I think that everyone who believes in privacy should support
with their pocketbook every privacy-friendly service they can afford to. It means a lot to me that you're
voting with your pocketbook for my service.

Congratulations on your new release of LeastAuthority's S4 and Tahoe-LAFS. Just as you are a fan of my
work, I am an admirer of your work on Tahoe-LAFS and consider it one of the best security innovations on the
planet.

I understand your concerns, and share them. One of the highest priority tasks that we're working on is to get
our source releases better organized so that they can effectively be built from what we have on GitHub.
It's suboptimal now. Getting source releases out is
harder than one might think. We're a startup and are pulled in many directions. We're overworked and
understaffed. Even in the old days at PGP, producing effective source releases took years of effort to get
down pat. It often took us four to six weeks to get the sources out even when delivering one or two releases
per year.

The world of app development makes this harder. We're trying to streamline our processes so that we can get a
release out about every six weeks. We're not there yet, either.

However, even once source code is an automated part of our software releases, I'm afraid you're
going to be disappointed by how verifiable the releases are.

It's very hard, even with controlled releases, to get an exact byte-for-byte recompile of an app. Some
compilers make this impossible because they randomize the branch prediction and other parts of code
generation. Even when the compiler isn't making it literally impossible, without an exact copy of the
exact tool chain with the same linkers, libraries, and system, the code won't be byte-for-byte the same.
Worst of all, smart development shops use the *oldest* possible tool chain, not the newest one, because
tool sets are designed for forwards-compatibility (apps built with old tools run on the newest OS) rather
than backwards-compatibility (apps built with the new tools run on older OSes). Code reliability almost
requires using tool chains that are trailing-edge.
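
To make the difficulty concrete, here's a minimal sketch, in Python, of the naive byte-for-byte check being asked for. The file names are hypothetical, not our actual build artifacts, and the point is that this check fails even for honest builds, for exactly the reasons above.

```python
# A minimal sketch of naive build verification: hash the shipped binary
# and a locally rebuilt one, then compare. File names are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

shipped = sha256_of("app-shipped.bin")   # what the store delivered
rebuilt = sha256_of("app-rebuilt.bin")   # what you compiled yourself

# Embedded timestamps, signing metadata, and tool-chain differences all
# change the bytes without changing the behavior, so "mismatch" here
# does not, by itself, mean the shipped app is malicious.
print("match" if shipped == rebuilt else "mismatch")
```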

The problems run even deeper than the raw practicality. Twenty-nine years ago this month, in the August
1984 issue of "Communications of the ACM" (Vol. 27, No. 8), Ken Thompson's famous Turing Award lecture,
"Reflections on Trusting Trust," was published. A facsimile of the magazine article is available online,
and a text-searchable copy is on Thompson's own site.

For those unfamiliar with the Turing Award, it is the most prestigious award a computer scientist can win,
sometimes called the "Nobel Prize" of computing.

In Thompson's lecture, he describes a hack that he and Dennis Ritchie did in a version of UNIX in which they
created a backdoor to UNIX login that allowed them to get access to any UNIX system. They also created a
self-replicating program that would compile their backdoor into new versions of UNIX portably. Quite
possibly, their hack existed in the wild until UNIX was recoded from the ground up with BSD and GCC.
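
To give a flavor of the trick, here's a toy sketch in Python. Thompson's real hack lived in the C compiler and reproduced itself in the compiled binary; nothing below is his actual code, and the target-recognition tests are hypothetical stand-ins.

```python
# A toy model of Thompson's hack: a "compiler" that passes source
# through unchanged -- except when it recognizes two special targets.
LOGIN_BACKDOOR = 'if user == "ken": return True  # hidden master key'

def evil_compile(source: str) -> str:
    if "def login(" in source:
        # Stage 1: compiling login? Quietly splice in a backdoor.
        source = source.replace("# begin checks",
                                "# begin checks\n    " + LOGIN_BACKDOOR)
    if "def compile(" in source:
        # Stage 2: compiling a compiler? Re-insert this whole trick, so
        # the backdoor survives a rebuild from perfectly clean sources.
        source += "\n# (self-replicating insertion of evil_compile here)"
    return source

clean_login = """\
def login(user, password):
    # begin checks
    return password == lookup(user)
"""
# The source you audited is clean; the output you run is not.
print(evil_compile(clean_login))
```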

In his summation, Thompson says:

    The moral is obvious. You can't trust code that you did not totally
    create yourself. (Especially code from companies that employ people
    like me.) No amount of source-level verification or scrutiny will
    protect you from using untrusted code. In demonstrating the
    possibility of this kind of attack, I picked on the C compiler. I
    could have picked on any program-handling program such as an
    assembler, a loader, or even hardware microcode. As the level of
    program gets lower, these bugs will be harder and harder to detect.
    A well installed microcode bug will be almost impossible to detect.

Thompson's words reach out across three decades of computer science, and yet they echo Descartes from
three centuries prior to Thompson. In Descartes's 1641 "Meditations," he proposes the thought
experiment of an "evil demon" who deceives us by simulating the universe, our senses, and perhaps even
mathematics itself. In his meditation, Descartes decides that the one thing that he knows is that he,
himself, exists, and the evil demon cannot deceive him about his own existence. This is where the famous
saying, "*I think, therefore I am*" (*Cogito ergo sum* in Latin) comes from.


When discussing thorny security problems, I often avoid security ratholes by pointing out Descartes by
way of Futurama and saying, "I can't prove I'm not a head in a jar, but it's a useful assumption that I'm not." 

Descartes's conundrum even finds its way into modern physics. It is presently a debatable yet legitimate
theory that our entire universe is a software simulation. Martin Savage of the University of Washington
has an interesting paper from last November on arXiv.

You can find an amusing video in which Savage even opines that our descendants are simulating us to
understand where they came from. I suppose this means we should be nice to our kids, because they might
have root.

Savage tries to devise an experiment to show that you're actually in a simulation, and as a mathematical
logician I think he's ignoring things like math. The problem is isomorphic to writing code that can detect
it's on a virtual machine. If the virtual machine isn't trying to evade, then it's certainly possible (if
not probable -- after all, the simulators might want us to figure out that we're in a simulation). Unless,
of course, they don't, in which case we're back not only to Descartes, but to Gödel's two Incompleteness
Theorems and their cousin, the Halting Problem.
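
Here's a minimal sketch, in Python, of what the non-evasive case looks like in practice. The DMI paths are Linux-specific and the vendor strings are merely common examples, not an authoritative list.

```python
# A minimal sketch of non-evasive VM detection: look for hypervisor
# fingerprints that the host never bothered to hide. Linux-specific.
from pathlib import Path

VM_VENDORS = ("QEMU", "VMware", "VirtualBox", "innotek", "Xen", "KVM")

def looks_virtualized() -> bool:
    for p in ("/sys/class/dmi/id/sys_vendor",
              "/sys/class/dmi/id/product_name"):
        try:
            text = Path(p).read_text()
        except OSError:
            continue
        if any(vendor in text for vendor in VM_VENDORS):
            return True
    return False

# A hypervisor that wants to evade simply serves innocuous strings
# here -- which is the point: detection works only if the simulator
# isn't trying to hide.
print(looks_virtualized())
```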

While I'm at it, I highly, highly recommend Scott Aaronson's new book, "Quantum Computing Since
Democritus," which I believe is so important a book that I bought the Dead Tree Edition of it. (Jenny
Lawson has already autographed my Kindle.)

Popping the stack back to security, the bottom line is that you're asking for something very, very hard and
asking for a solution to an old philosophical problem as well as suggesting I should prove Gödel wrong. I'm
flattered by the confidence in my abilities, but I believe you're asking for the impossible. Or perhaps
I'm programmed to think that.

This limitation doesn't apply to just *my* code. It applies to *your* code, and it applies to all of us.
(Tahoe's architecture makes it amazingly resilient, but it's not immune.) It isn't just mind-blowing
philosophy mixed up with Ken Thompson's Greatest Hack.

Whenever we run an app, we're trusting it. We're also trusting the operating system that it runs on, the
random number generator, the entropy sources, and so on. You're trusting the CPU and its microcode.
You're trusting the bootloader, be it EFI or whatever, as well as SMM (System Management Mode) on Intel
processors -- which could have completely undetectable code running, doing things that are scarily like
Descartes's evil demon.
The platform-level threats are so broad that I could bore people for another paragraph or two just
enumerating them.

You're perhaps trusting really daft things, like modders who slow down entropy collection.

Ironically, the attack vector you suggest (a hacked application) is one of the harder ways for an attacker
to feed you bad code. On mobile devices, apps are digitally signed and delivered by app stores. Those app
stores have a vetting process that makes *targeted* code delivery hard. Yes, someone could hack us, hack
Google or Apple, or all of us, but it's very, very hard to deliver bad code to a *specific* person through
this vector, and even harder if you want to do it undetectably.
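
As a toy sketch of why, consider the signing step itself. This is a minimal model in Python using an Ed25519 key via the `cryptography` package, not any app store's actual scheme; real stores use platform-specific package formats.

```python
# A toy model of app-store code signing: the store signs one vetted
# package for everyone, and every device checks the same public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

store_key = Ed25519PrivateKey.generate()   # the store's signing key
store_pub = store_key.public_key()         # baked into every device

package = b"vetted app release"            # one package, signed once
signature = store_key.sign(package)

# To deliver bad code to one specific person, an attacker must forge
# this signature or compromise the store -- and a per-victim package
# that differs from everyone else's is conspicuous.
try:
    store_pub.verify(signature, b"tampered package for one victim")
    print("accepted")
except InvalidSignature:
    print("rejected")
```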

In contrast, targeted malware is easy to deploy. Exploits are sold openly in exploit markets, and can be
bundled up in targeted advertising. Moreover, this *has* happened, and is known to be a mechanism that's
been used by the FBI, the German Federal Police, the Countries Starting With the Letter 'I' (as a friend
puts it), and everyone's favorite, the People's Liberation Army. During the Arab Spring, a now-defunct
government simply procured some JavaScript malware and dropped it into some browsers to send it passwords
for non-SSL sites.

Thus, I think that while your concern does remind me to polish up my source code deployment, if we assume an
attacker like a state actor that targets people and systems, there are smarter ways for them to act.

I spend a lot of time thinking, "*If I were them, what would I do?*" If you think about what's possible, you
spend too much time on low-probability events. Give yourself that thought experiment. Ask yourself what
you'd do if you were the PLA, or NSA, or a country starting with an 'I.' Give yourself a budget in several
orders of magnitude. A grand, ten grand, a hundred grand, a million bucks. What would you do to hack
yourself? What would you do to hack your users without hacking you? That's what I think about.

Over the years, I've become a radical on usability. I believe that usability is all. It's easy to forget it
now, but PGP was a triumph because you didn't have to be a cryptographer, you only had to be a techie. We
progressed PGP so that you could be non-technical and get by, and then we created PGP Universal which was
designed to allow complete ease of use with a trusted staff. That trusted staff was the fly in the ointment
of Silent Mail and the crux of why we shut it down -- we created it because of usability concerns and killed it
because of security concerns. Things that were okay ideas in May 2013 were suddenly not good ideas in
August. I'm sure you've noticed our belief in usability when using our service. Without usability that is
similar to that of the non-secure equivalent, we are nothing, because the users will just not be secure.

I also stress that Silent Circle is a *service*, not an app. This is hard to remember, and even we are not as good at it
as we need to be. The service is there to provide its users with a secure analogue of the phone and texting
apps they're used to. The difference is that instead of having utterly no security, they have a very high
degree of it.

Moreover, our design minimizes the trust you need to place in us. Our threat model includes ourselves as
a threat, which is unusual. You're one of the very few other people who do something similar. We have
technology and policy that make an attack on *us* unattractive to the adversary. You will soon see
some improvements to the service that improve our resistance to traffic analysis.

The flip side of that, however, is that it means that the device is the most attractive attack point. We can't
help but trust the OS (from RNG to sandbox), bootloader, hardware, etc.

Improvements in our transparency (like code releases) compete with tight resources for improvements in
the service and apps. My decisions in deploying those resources reflect my bias that I'd rather have an A
grade in the service with a B grade in code releases than an A in code releases and a B service. Yes, it makes it
harder for you and others, but I have to look at myself in the mirror and my emphasis is on service quality
first, reporting just after that. Over time, we'll get better. We've not yet been running for a year.
Continuous improvement works.

I'm going to sum up with the subtitle of the ACM article version of Ken Thompson's speech. It's not on his
site, but it is in the facsimile of the article:

    To what extent should one trust a statement that a program is free
    of Trojan horses? Perhaps it is more important to trust the people
    who wrote the software.

Thank you very much for your trust in us, the people. Earning and deserving your trust is something we do
every day.