The Limits of Formal Security Models

National Computer Systems Security Award Acceptance Speech

Dorothy E. Denning

October 18, 1999



Thank you very much for this award. I am grateful to receive it, but especially grateful to all of you. You encouraged me and supported my research. You gave me feedback on my papers and talks. You challenged me and gave me new ideas to think about. You corrected my mistakes and you were not afraid to tell me when you thought I was wrong.

I would like to take this opportunity to talk about what I have learned about formal security models and what that means to the future of our field. The story begins at Purdue. I had come to pursue a Ph.D. in computer science, having earlier obtained my bachelor's and master's degrees in math.

I was quickly attracted to computer security and chose it for my thesis topic. I wrote a proposal to generalize security beyond access control to the flow of information. My advisor, Herb Schwetman, liked the concept, but said it needed a model. So I set about developing a mathematical model, which eventually became the lattice model of secure information flow. By the time I finished, I was sure that mathematics held the secret to computer security. I spent the next eight years at Purdue and four more at SRI International working on security models for statistical databases, multilevel databases, and intrusion detection.

Computer security models were the driving force in the 70s and early 80s. There was a general consensus in the field that security had to be designed into a computer system, that it could not be achieved with add-ons and patches. Further, the design had to be based on a solid foundation, one that could be modeled and analyzed. The use of tiger teams as a means of demonstrating security was considered totally inadequate. Everyone chanted the mantra: "Testing can only prove the presence of bugs, not their absence." Only after proving mathematically that a model satisfied certain properties could one be sure that no security holes existed.

I learned my first lesson on the limits of formal security models when Mike Harrison, Larry Ruzzo, and Jeff Ullman showed that it was theoretically undecidable whether an arbitrary access-matrix model was safe, that is, whether a subject could get unauthorized access to an object. This result was not surprising in that there are many undecidable questions in computing, but it showed there were limits to the widely used access-matrix model. The result inspired a new class of models, called take-grant models, in which safety was decidable under certain constraints. However, nobody was quite sure what any of this really meant in terms of real systems, which were vastly more complex than any of the models.

A few years later, Fred Cohen showed that the theoretical problems went beyond access control models to models of malicious code. He proved that it was impossible to build an anti-viral tool that would detect all possible computer viruses. The problem of protecting against malicious code would always be a "learn as we go" business. Again, the results were not surprising, but they reinforced my awareness of the limits of mathematics to provide security. Now, it seemed, mathematics was showing that security was complex, sometimes hopelessly intractable.

I learned my second lesson on the limits of models in the mid 80s while working at SRI. I was co-PI for a project to develop a model for a multilevel-secure database system based on views. Peter Neumann, Teresa Lunt, Roger Schell, Bill Shockley, and Mark Heckman were all working with me. Our model, which we called SeaView, grew progressively more complex as we attempted to address the real issues. By the time I left SRI in 1987, I was convinced that I would never want to use a system based on SeaView. Any hope of usability had been killed by a concept called polyinstantiation, which involved instantiating multiple data values within a single field of a record, all with different security classifications. Polyinstantiation was needed to satisfy the mathematical models of multilevel security, but it got uglier and uglier the deeper we went. I learned then that security models could lead to dreadful systems that nobody would ever use.
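To make the idea concrete, here is a toy sketch in Python. The flight data and level ordering are invented for illustration; SeaView's actual rules were far more elaborate than this.

    # Toy illustration of polyinstantiation (invented data, not SeaView itself):
    # the same record key carries different values at different classification
    # levels, so users cleared at different levels see different tuples.
    LEVELS = {"UNCLASSIFIED": 0, "TOP SECRET": 1}

    flights = [
        # (ship, destination, classification of this tuple)
        ("SS Enterprise", "Talos IV", "TOP SECRET"),    # the real mission
        ("SS Enterprise", "Mars",     "UNCLASSIFIED"),  # cover story, kept so a
                                                        # low-level update does not
                                                        # reveal a secret tuple exists
    ]

    def visible(user_level):
        """Return the tuples a user cleared at user_level is allowed to see."""
        return [t for t in flights if LEVELS[t[2]] <= LEVELS[user_level]]

    print(visible("UNCLASSIFIED"))  # only the Mars cover story
    print(visible("TOP SECRET"))    # both tuples -- the high user must reconcile them

Even in this tiny example the usability problem is visible: the more levels and attributes you add, the more mutually inconsistent tuples a highly cleared user has to reconcile.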

I left SRI in 1987 and went to Digital because I was tired of security and disillusioned by it. I wanted to work on user interfaces -- on ways of making systems more usable. But security was in my blood, and I never really gave it up. While there, I learned my third lesson, namely that building systems based on formal models was extraordinarily time-consuming and costly. I had seen this earlier, but it was really brought home to me at DEC. From my West Coast office, I tracked Digital's largest security project -- a multi-million-dollar effort on the East Coast to develop the VAX Secure Virtual System, an A1 operating system. After years of work, the system was scheduled to ship in 1990 and enter formal evaluation, but in February of that year, the project was canceled instead. The projected volume of sales did not justify the projected costs of continuing development and enhancement.

After seeing the cost associated with building systems from security models, I realized that most commercial systems would never be based on formal models. Technology was moving way too fast. Customers wanted the latest systems. They wanted Windows. They wanted to hook up to the Internet. Systems with security flaws were going out the door, and customers gobbled them up. Whatever risks they saw were offset by anticipated benefits. Buyers were not about to wait for something that would be expensive, overly constrained, and obsolete even before it was delivered. Anyone who thought otherwise would miss out on the information technology revolution taking place.

Another thing happened at DEC that taught me a fourth lesson on the limits of security models. I was contacted by a former member of the Legion of Doom who wanted to interview me for his zine called W.O.R.M. I had always wanted to know more about the hacker threat, so I accepted his invitation. Thus began a journey through the computer underground, where I interviewed hackers and read their publications. I emerged with a recognition that our security models failed to address major threats. While we fretted over the star property and covert channels, hackers exploited weak or non-existent passwords, lousy defaults, and implementation errors. And if they couldn't hack the technology, they hacked the people and called it social engineering.

I should have seen the inherent weaknesses of models from my earlier research. For example, while I was still at Purdue, a graduate student, Giovanni Sacco, found a flaw in the Needham-Schroeder protocol for key establishment. He found this by stepping outside the current thinking about protocols and seeing a new form of attack. We found a solution with time stamps, but several years later someone else found a flaw with that. And I'll never forget the day that George Davida called to tell me he had cracked the RSA cryptosystem. Of course he hadn't broken the math, but he found a way of getting plaintext from digital signatures. The problem is solved by hashing messages before signing them, but the attack, like Sacco's, showed that the way to crack a system is to step outside the box, to break the rules under which the protocol was proved secure.
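As an aside, here is a minimal Python sketch of the kind of algebraic mischief that hashing prevents. It uses toy primes and textbook RSA, and it illustrates the general idea rather than reconstructing Davida's actual attack.

    # Textbook RSA with toy parameters (illustrative only; real keys are enormous).
    p, q = 61, 53
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    def sign(m):
        # Textbook RSA signature: s = m^d mod n, with no hashing or padding.
        return pow(m, d, n)

    m1, m2 = 42, 55
    s1, s2 = sign(m1), sign(m2)

    # Because textbook RSA is multiplicative, signatures on chosen messages
    # combine into a valid signature on a message the signer never saw.
    forged = (s1 * s2) % n
    assert forged == sign((m1 * m2) % n)

    # Hashing the message before signing destroys this algebraic structure,
    # since hash(m1) * hash(m2) bears no useful relation to hash(m1 * m2).

The point is not the particular trick; it is that the attacker steps around the assumptions under which the scheme was argued to be secure.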

The lesson I learned was that security models and formal methods do not establish security. They only establish security with respect to a model, which by its very nature is extremely simplistic compared to the system that is to be deployed, and is based on assumptions that represent current thinking. Even if a system is secure according to a model, the most common (and successful) attacks are the ones that violate the model's assumptions. Over the years, I have seen system after system defeated by people who thought of something new.

I returned to academia in 1991, accepting a position on the faculty at Georgetown. I was not there long before I learned my fifth lesson on the limits of security models. Maybe those limits were not entirely bad! My long-held view that security served only noble objectives was shattered when I saw how criminals and terrorists used encryption and other security technologies to evade law enforcement and intelligence collection, and how individuals and organizations got locked out of their own data. This led to my supporting the Clipper chip and key recovery technologies -- as long as government access was tightly controlled through court orders and other safeguards. It also led to a study with Bill Baugh, a former FBI Assistant Director, on how criminals were actually using encryption and emerging technologies to hide their crimes. One of our findings was that law enforcement had so far been successful in many if not most cases. Part of the reason for their success was that commercial systems were usually breakable -- but not because of the math.

Before drawing conclusions from these lessons, let me briefly review them.

  1. Security models have theoretical limits. You cannot always prove that a model satisfies certain security conditions.
  2. Security models based on strict mathematical properties can lead to systems that are totally unusable.
  3. Building systems from rigorous mathematical security models is extremely time-consuming and costly. Most commercial systems will not be based on formal models.
  4. Security models and formal methods do not establish security. Systems are hacked outside the models' assumptions.
  5. Provable security, even if it were achievable, is not a panacea.

From these lessons I gradually came to see that commercial systems would never be perfectly secure. They would always have weaknesses. Even in the domain of cryptography, the race will never end between the code makers and code breakers.

Because systems will never be fully secure, I have come to see the value of -- indeed the necessity for -- security patches, testing, and add-on products and services that cover up underlying holes -- the very things that researchers condemned in the early days. But here we are in 1999, where patches and testing are commonplace, and there is a booming market in add-on products and remote services. While not perfect, they have helped us manage the risk. Let's look at each of these.

First, security patches and updates. There are tons of them. True, they do not solve the problems, but you ignore them at your peril. There are thousands of hackers who know how to exploit existing vulnerabilities and how to find you if you lag behind with the fixes and updates. Patches are an inevitable outcome of our desire for new technology and the difficulty of building flaw-free systems.

Second, testing, not only in the lab but against deployed systems. And not only by hired tiger teams, but also by thousands of hackers who gleefully bang away at your system or the products you use. And when they find a flaw, they announce it to the world and post an exploit script on the Web. The good news is that this "service" is free. The other good news is that the vendors are generally quick to post a fix -- when they can. The bad news is that some of the problems are extremely difficult to fix. The even worse news is that your system might be compromised or trashed before the fixes are released or you have a chance to install them. Of course, some of those who find flaws act responsibly and withhold details on exploits, at least until the problems are solved. But we can't expect a 14-year-old to exercise such discretion, especially when publication brings so much personal recognition and when many argue that it serves the public good.

As I mentioned earlier, testing is often discredited on the grounds that it shows only the presence of flaws, not their absence. But what the critics fail to notice is that nothing can establish the absence of flaws, not even security models and formal methods. Regardless of where you come down on this issue, the fact is that testing has become one of our primary means of conferring -- and denying -- trust. For example, take the Data Encryption Standard. After it had been beaten to death for 10-20 years, most people came to trust it -- that is, until 1997, when a group on the Internet cracked the 56-bit DES key of an RSA challenge cipher. At that point, trust in DES fell through the floor, though trust in Triple-DES, with its 168-bit keys, remained intact. Imperfect as it may be, testing represents one of the best methods we have to date of establishing trust. Another is the reputation of the vendor, but that too comes from a history of experience, that is, real-world testing, not from models and proofs.

A third approach to security has been through add-on products. There are access tokens, smart cards, biometrics, one-time passwords, firewalls, intrusion-detection systems, anti-viral tools, encryption tools, integrity tools, digital signatures, location signatures, vulnerability scanners, and more. As with patches, they won't solve all your problems, but they have gone a long way toward improving security and helping with incident handling and computer forensics when security breaches occur.

A fourth approach has been remote security services. Oddly, the very medium that is a source of threats is becoming one of the biggest sources of safeguards. There are remote services that check a computer or network for vulnerabilities, scan outgoing e-mail for viruses, check suspicious code for new forms of viruses, monitor intrusion-detection systems, and detect and locate stolen laptops. You can download security products and security patches from the Web, and you can find out about new problems by subscribing to one of several security alert services. There are remote encryption and certificate services, and a California company, TriStrata, has developed a system and service to manage the security of an entire enterprise, integrating encryption with access control, security policy management, and auditing.

In summary, I see a bigger future in security patches, testing, add-on products, and remote services than I do in the use of formal models and methods as a means of addressing security threats. They are the only way of keeping up with the rapid pace of technology. This does not mean that I see no value in formal models and security proofs, only that I think their value is limited. Some people might find this outcome discouraging. I do not. It opens up tremendous opportunities for innovation, which is what has made our field so intellectually exciting.

Again, thank you for the award, and I look forward to continued work with all of you as we try to address the enormous security challenges that we face today.