We've lost something precious — perhaps irretrievably. We've lost trust in our information systems, thanks to Edward Snowden's leaks of National Security Agency documents.
The loss of trust runs deep. We've lost trust in our security APIs, our crypto, our random-number generators, our network security protocols, our operating systems, our firmware, our hardware. To paraphrase my colleague Gary McGraw, it really is turtles all the way down, only these are snapping turtles, and they are bearing down on our most private of private data.
In the six months or so since Snowden's revelations began, our trust has been hog-tied, beaten to a pulp and left for dead.
Restoring that trust will take much longer than it took to lose, if it can be done at all. It's going to be a long and arduous process, measured in years.
While we work to rebuild our lost trust, we have to continue to use the systems we no longer trust. We can't just abandon them and start from scratch. It will be something like completely refurbishing an airplane, engines and all, while it's already in the air.
I'm not here to pass judgment on Snowden, the NSA or any of the other parties caught up in this mess. Let the right and wrong be argued elsewhere. For the purposes of this column, the important point is that the whipped cream is out of the can and there's no getting it back in. So let's focus on how we can start to fix what has been broken.
Since we can't just throw everything away and start over, I think we have to tackle the problem from two directions: from the top down and from the bottom up.
Working from the top down, standards organizations, including the Internet Engineering Task Force, have to assemble the brightest minds to develop security standards that are outside the reach of any one government or agency. Everything, from our crypto algorithms to security protocols, needs to be re-evaluated, and fixed or rewritten if necessary. This will take years of hard work by smart people — digital patriots, I'll call them.
And that's just the start. Just how paranoid do we need to be? If there have been hardware and firmware compromises, as some stories have suggested, the answer might be more than we can bear. But we've got to start somewhere, and our critical evaluations should certainly include supply chain management from the chip level upwards.
And then there's the bottom-up side of things. That effort will mostly rely on software developers. How do we write software that can be trusted at all? We start by always questioning our trust. Why should we trust an API, an algorithm, etc.? I've covered quite a bit of this ground here in my columns, but some of the principles we need to strive for include doing our own key management whenever possible, building our code on robust, rigorously peer-reviewed security foundations, and so on.
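Those principles are abstract, so here is one small illustration — a sketch, not a prescription — of what "doing our own key management" with a peer-reviewed primitive can look like in practice. It uses only the Python standard library: a key generated locally from the OS's CSPRNG, message authentication via HMAC-SHA256 (a well-studied construction, RFC 2104), and constant-time verification. The function names are my own; the point is simply that the key never leaves your hands.

```python
# A minimal sketch of locally managed keys (assumption: Python 3 stdlib only).
import hmac
import hashlib
import secrets

def generate_key() -> bytes:
    # 256-bit key from the OS CSPRNG; you store it, you rotate it,
    # no third-party service ever sees it.
    return secrets.token_bytes(32)

def sign(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA256: a rigorously peer-reviewed construction (RFC 2104).
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison guards against timing side channels.
    return hmac.compare_digest(sign(key, message), tag)

key = generate_key()
tag = sign(key, b"private data")
assert verify(key, b"private data", tag)        # authentic message passes
assert not verify(key, b"tampered data", tag)   # altered message fails
```

None of this makes the underlying platform trustworthy on its own, but it embodies the habit the column argues for: question every link in the chain, and keep the links you can control under your own review.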