MALICIOUS USE OF KEYS VIA CONTENT-ONLY ATTACKS
FRAMESETS AND SSL
If server SA offers a frameset over server-side SSL but specifies that the browser load an SSL page from SB in a hidden frame, then many browser configurations will happily negotiate SSL handshakes with both servers, but the browser will report only the SA certificate. So, we wondered what would happen if SB requested client-side authentication.
- In Mozilla 1.0.1/Linux (RedHat 7.3 with a 2.4.18-5 kernel), using default options, the browser will happily use a client key to authenticate, without informing the user.
- In IE 6.0/Windows XP, using default options and any level of key, the browser will happily use a client key to authenticate, without informing the user, if the user has already client-side authenticated to SB. If the user has not, a window pops up saying that the server with a specified hostname has requested client-side authentication; which key, and is it OK?
- In Netscape 4.79/Linux (RedHat 7.3 with a 2.4.18-5 kernel), using default options, the browser pops up a window saying that the server with a specified hostname has requested client-side authentication; which key, and is it OK? The browser then authenticates.
The request to SB can easily be a GET request, forging a user’s response to a Web form.
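This content-only attack needs nothing beyond ordinary HTML. A minimal sketch of what SA might serve (hostnames and the query string are hypothetical; the hidden frame silently drives a GET to SB that can trigger client-side authentication):

```html
<!-- Served by the adversary SA over SA's own server-side SSL;
     the browser's lock icon and certificate dialog report only SA. -->
<frameset rows="100%,*">
  <!-- Innocuous content the user actually sees. -->
  <frame src="https://sa.example.com/welcome.html">
  <!-- Hidden frame: a GET to SB forging a form submission. If SB
       requests client-side authentication, many default browser
       configurations will use the user's key without asking. -->
  <frame src="https://sb.example.edu/register?course=CS101&amp;confirm=yes">
</frameset>
```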
Suppose the operator of an honest server SB offers a service where authorization or authentication is important. For example, perhaps SB wants to prove that its content was served to particular authorized parties (and perhaps to prove that those parties requested it; one thinks of Pete Townshend, or a patent challenge), or perhaps SB is offering email or class-registration services, via form elements, to a campus population. If SB had used server-side SSL and required basic authentication or some other password scheme, then one might argue that a service can be executed in a user’s name only if that user authorized it, or shared their password. However, suppose SB uses ‘‘stronger’’ client-side SSL. With Mozilla, NSS security tools, and default options, a user’s request to SB can be forged by a visit to an adversarial site SA. With IE and default options, a user’s request can be forged if the user has already visited SB.
MALICIOUS USE OF KEYS VIA API ATTACKS
IE on Windows is by far the dominant client platform. In trying to establish such a proper browser configuration for IE, we noticed that IE would prompt for a password (on our high-security key) only once per visit to a particular domain. Specifically, we would visit site A, perform a client-side authentication that prompted us for the password, leave site A, and then return; we were never prompted for the key’s password again. Our inability to configure the browser so that a high-security key showed its advertised behavior (which reads ‘‘Request my permission with a password when this item is to be used’’) led us to believe that the flaw must be at a lower level. So we began the third experiment with the question:
‘‘Can we use some of our previous techniques, such as API hijacking, to understand what is happening and then to use the key?’’
THE DEFAULT CSP IS BROKEN
The first step was to convince ourselves that IE really was using our high-security key to perform client-side authentication without requesting our permission, and watching network traffic with a sniffer confirmed our suspicion. We then attempted to reproduce the behavior we observed. Using API hijacking, we were able to attach a debugger to IE and watch the parameters it passes to the CryptoAPI. Reverse engineering in this way allowed us to build a standalone executable that makes the same sequence of calls to the CryptoAPI as IE does, with the same parameters. Our program opens the same keystore IE uses during a CryptAcquireContext. Our code sits in an infinite loop taking a line of input from the command line. It then mimics the sequence of calls that IE makes to the CryptoAPI in order to get data signed: CryptCreateHash, CryptHashData, and CryptSignHash. Since our key is high security, the first call to CryptSignHash prompts for a password, as expected. However, no subsequent calls prompt for a password, even if the data are completely different. Thus, the CSP is failing to ‘‘request my permission with a password when this item is to be used’’.
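The flaw can be modeled abstractly: the password check is tied to acquiring the key container, not to each individual signing operation. A toy Python sketch (all names are ours, not CryptoAPI’s) of why every signing call after the first succeeds silently:

```python
import hashlib

class ToyCSP:
    """Toy model of a CSP whose 'high security' password prompt is
    enforced once per acquired context, not once per private-key use."""

    def __init__(self, password):
        self._password = password
        self._unlocked = False   # the flaw: one flag for the whole context

    def sign(self, data, password_prompt):
        # Prompt only if this context has never been unlocked.
        if not self._unlocked:
            if password_prompt() != self._password:
                raise PermissionError("bad password")
            self._unlocked = True  # never asked again for this context
        # Stand-in for CryptCreateHash / CryptHashData / CryptSignHash.
        return hashlib.sha1(data).hexdigest()

prompts = []
def fake_prompt():
    prompts.append(1)
    return "secret"

csp = ToyCSP("secret")
csp.sign(b"first request", fake_prompt)
csp.sign(b"completely different data", fake_prompt)
assert len(prompts) == 1  # the user was asked for the password only once
```

Any program in the same session that can reach the already-unlocked context gets signatures on arbitrary data without triggering a prompt.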
THE PUNCHLINE: NO CONFIGURATION PREVENTS THIS ATTACK
The attack is possible because the system is designed under the assumption that the entire system is trusted. If one small malicious program with user privileges (such as can arrive when a user clicks on an unknown attachment) finds its way into the system, security can be undermined even with high-security, non-exportable keys, and even assuming everyone does the right thing, no matter how awkward: browser users clear SSL state or kill the browser after each session, and server application writers use forms with hidden nonces.
MALICIOUS USE OF KEYS ON A USB TOKEN
Many in the field suggest getting the private key out of the system altogether and placing it in a separate secure device of some sort. Moving the key to a specialty device (such as an inexpensive USB token) would seem to reduce the likelihood of key theft as well as shrink the amount of software that has to be trusted in order for the system to be secure. Specifically, at first glance, it would appear that only the device and the software which provides access to it (i.e., its CSP) need to be trusted. Since the keys on the device were non-exportable, key theft seemed impossible (assuming we leave ‘‘rubber hose cryptanalysis’’ and hardware attacks out of our attack model), but we wondered if we could use the key as in the previous attacks.
THE TRUST BOUNDARIES DO NOT SHRINK
Unfortunately, just putting the private key on a token isn’t enough. The token’s CSP is still interacting with the whole system (the OS and CryptoAPI), and the entire system still has to be trusted. Putting the private key on a token gives some physical security and makes it harder to steal the key (physical violence notwithstanding), but it doesn’t protect against malicious use, and it doesn’t increase usability. For client-side PKI to be usable, it must behave as expected: it must allow only transactions that the client is aware of and has approved. If we trust the entire desktop, and users ‘‘clear SSL state’’ or kill their browsers after each session, and application writers include and verify hidden nonces, then we might conclude that client-side PKI works. But these are not reasonable assumptions, and, as demonstrated, relaxing them even a little yields security trouble.
It should be easy for a browser user to perceive and approve of the use of their private key; it should be easy for an application writer to build on this.
- ‘‘The path of least resistance’’ for users should result in a secure configuration.
- The interface should expose ‘‘appropriate boundaries’’ between objects and actions.
- Things should be authenticated in the user’s name only as the ‘‘result of an explicit user action that is understood to imply granting.’’
- One might quip that it is hard to find a principle here that the current paradigm does not violate.
- For client-side PKI to work, these principles should apply both to the client user and to the IT staffer setting up a Web page.
THE MINIMUM TRUST BOUNDARY
Clearly, we would like to find the minimum number of components that have to be trusted, as this shrinks the number of potential targets. How can we shrink the trust boundary so that buggy desktops, which receive almost weekly ‘‘Critical Security Updates,’’ are not the cornerstone of our secure systems? Trusting just the kernel doesn’t solve the problem. Trusting a separate cryptographic token doesn’t solve the problem.
One natural area for further attention is a trusted path. The Web, too, needs trusted paths in the other direction (e.g., a Web equivalent of the ‘‘secure attention key’’) and an easy way for Web service writers to invoke one. This may not be as much of a stretch as one might think; already, the standard browsers depart from the HTML specification and require that a user type a value into a file input tag. (Without this feature, malicious servers could provide content that quietly uploads a file of their choosing.) Wouldn’t an authenticate input tag be much easier than trying to work through cryptographic hidden fields? Adding another level of personal certificate that was invokable only via such a tag (and perhaps even signed something) would help.
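No such tag exists in HTML; purely as a hypothetical sketch, it might look like the existing file input tag, with the browser (not the page) controlling the interaction:

```html
<!-- Hypothetical markup: no browser implements this. The browser,
     over a trusted path, would ask the user to approve signing the
     named fields with a designated personal certificate, and the
     server would verify the resulting signature. -->
<form action="https://sb.example.edu/register" method="post">
  <input type="text" name="course">
  <input type="authenticate" name="signature" sign="course">
  <input type="submit" value="Register">
</form>
```

As with the file input tag, the essential property is that content alone could not trigger or script the key use; only an explicit user gesture in browser-controlled UI could.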
TOKENS WITH UI
On a system level, we recommend further examination of the module that stores and wields private keys: perhaps a trustable sub-system with a trusted path to the user. Browsers have a very rich and complex interaction with the rest of the world and can often behave in unexpected and unclear ways; such a device should not be the cornerstone of a secure system. Many researchers have long advocated that private keys are too important to be left exposed on a general-purpose desktop. However, in light of our experiments, we might go further and assert that the user interface governing the use of the private key is too important to be left on the desktop, and too important to be left to the sole determination of the server programmer through a content language not designed to resist spoofing.
Our experiments show that the natural mental model that arises for client-side PKI does not represent the actual system’s behavior. This fact, coupled with the underlying assumption that all of the system’s components are trusted, creates opportunities for a number of devastating attacks. Much work is being done in many places to bring PKI to users, and considerable effort is being invested in the client-side PKI paradigm. We humbly suggest that some of this investment might be better spent rethinking the basic model.