People's Newsroom


Currently, the Web is the dominant paradigm for information services. Typically, the browser issues a request to a server and the server responds with material that the browser renders. 


From the initial perspective of a browser user (or the crafter of a home page), these ‘‘requests’’ correspond to explicit user actions, such as clicking a link or typing a URL; these ‘‘responses’’ consist of HTML files. However, the language of the interaction is richer than this, and not necessarily well-defined. The HTML content a server provides can include references to other HTML content at other servers. Depending on the tastes of the server operator and the browser, the content can also include executable code; Java and JavaScript are fairly universal. This richer content language provides many ways for the browser to issue requests that are more complex than a user might expect, and not necessarily correlated with user actions like ‘‘clicking on a link.’’

As part of a request, the browser will quietly provide parameters such as the browser platform and the REFERER (sic): the URL of the page that contained the link that generated this request. In the current computing paradigm, we also see continual bleeding between Web interaction and other applications. For example, in many desktop configurations, a server can send a file in an application format (such as PDF or Word), which the browser happily hands off to the appropriate application; non-Web content (such as PDF or Word) can contain Web links, and cause the application to happily issue Web requests.
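These quietly supplied parameters travel as ordinary HTTP headers. A minimal sketch using Python's standard urllib (the URLs and the platform string are made up for illustration):

```python
import urllib.request

# Build a request the way a browser implicitly would, attaching the
# platform identifier and the page that held the link being followed.
req = urllib.request.Request("https://server.example/page.html")
req.add_header("User-Agent", "Mozilla/4.0 (X11; U; Linux)")        # browser platform
req.add_header("Referer", "https://other.example/index.html")      # originating page

# The server (or a CGI script behind it) sees these alongside the request.
print(req.get_header("Referer"))
```

Note that the header name really is spelled ‘‘Referer’’: the misspelling was enshrined in the HTTP specification.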


Surfing through hypertext documents constituted the initial vision for the Web and, for many users, its initial use. However, in current enterprise settings, the interaction is typically much richer: the parties (both browser users and server operators) want to map non-electronic processes into the Web, by having client users fill out forms that engender personalized responses (e.g., a list of links matching a search term, or the user’s current medical history) and perhaps have non-Web consequences (such as registering for classes or placing an Amazon order). In the standard way of doing this, the server provides an HTML form element which the browser user fills out and returns to a common gateway interface (CGI) script. The form element can contain input tags that (when rendered by the browser) produce the familiar elements of a Web form: boxes to enter text, boxes with a ‘‘browse’’ tag to enter file names for upload, radio buttons, checkboxes, etc.

For each of these tags, the server may specify a name for the parameter being collected from the user and a default value. The server content associates this form with a submit action (typically triggered by the user pressing a button labeled ‘‘Submit’’), which transforms the parameters and their values into a request for a specific URL. If the submit action specifies the GET method, the parameters are appended to the end of the URL; if it specifies the POST method, the parameters are sent in the body of the request. However, the submit URL specifies an executable script, not a passive HTML file, in the ‘‘Web directory’’ at the server. When a server receives a request for such a script, it invokes the script. The script can interrogate request parameters, such as the form responses, interact with other software on the server side, and dynamically craft content to return to the browser.
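The difference between the two submission methods can be sketched with Python's standard URL-encoding routine; the server name, script path, and parameter names below are hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical form responses collected from the user.
params = {"search": "medical history", "max": "10"}

# GET: the parameters are appended to the end of the URL, after a '?'.
get_url = "https://server.example/cgi-bin/lookup?" + urlencode(params)

# POST: the same encoded string travels in the body of the request instead,
# so it does not appear in the URL (and hence not in histories or logs).
post_body = urlencode(params).encode("ascii")

print(get_url)
print(post_body)
```

Either way, the CGI script at the submit URL receives the same name/value pairs and can act on them.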


In enterprise settings, the server operator may wish to restrict content to authorized browser users. In a situation where the browser user is requesting a service via a form, the server operator may wish to authenticate specific attributes of the user, such as identity and the fact that the user authorizes this request. The Web paradigm provides several standard avenues for doing this.

Client Address

For example, the server may restrict requests to client machines with specific hostname or IP address properties.
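Such a check can be sketched as a small server-side helper; the permitted network range and the CGI-style environment dictionary here are hypothetical:

```python
import ipaddress

# Hypothetical enterprise network from which requests are permitted.
ALLOWED_NET = ipaddress.ip_network("10.0.0.0/8")

def request_allowed(environ):
    """Allow the request only if the client address falls in the permitted network.

    CGI-style servers expose the client address as REMOTE_ADDR."""
    addr = environ.get("REMOTE_ADDR", "")
    try:
        return ipaddress.ip_address(addr) in ALLOWED_NET
    except ValueError:           # missing or malformed address: deny
        return False

print(request_allowed({"REMOTE_ADDR": "10.1.2.3"}))   # inside the network
print(request_allowed({"REMOTE_ADDR": "192.0.2.7"}))  # outside the network
```

Of course, this authenticates only the machine (and only its apparent address), not the human at the browser.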


Passwords

With basic authentication (or the digest authentication variant), the server can require that the user present a username and password, which the browser collects via a special user-interface channel and returns to the server. The server requesting the authentication can provide some text that the browser will display in the password-prompt box. Alternatively, the server may collect such authenticators as part of the form responses from the user. With these various forms of password-based authentication, the server operator would be wise to take steps to ensure that sensitive data are protected in transit. Common approaches include offering the entire service over an SSL channel, and having the form submitted by the POST method, so that the responses are not cataloged in histories, logs, REFERER fields, etc. Indeed, if neither the user nor the server otherwise exposes the user’s password, and if the user has authenticated that he is talking to the intended server, then a strong case can be made that a properly authenticated request requires the user’s awareness and approval. The password had to come from somewhere!
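The authenticator a browser constructs for basic authentication can be sketched in a few lines; note that the credentials are merely base64-encoded, not encrypted, which is exactly why transit protection such as SSL matters:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header a browser sends for HTTP basic
    authentication: the literal scheme name 'Basic', then
    base64(username ':' password)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Authorization: Basic " + token

# Hypothetical credentials, for illustration only.
print(basic_auth_header("alice", "secret"))
```

Anyone who can read this header off the wire can trivially decode the password, so basic authentication without an encrypted channel exposes exactly the secret it depends on.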


Cookies

The server can establish longer-lived state at a browser by saving a cookie there. The server can choose the contents, expiration date, and access policy for a specific cookie; a properly functioning browser will automatically provide the cookie along with any request to a server that satisfies the policy.
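A sketch of how a server might craft such a cookie, using Python's standard http.cookies module (the cookie name, value, and domain are made up):

```python
from http.cookies import SimpleCookie

# Server side: choose the contents, expiration, and access policy.
cookie = SimpleCookie()
cookie["session"] = "abc123"                  # hypothetical session token
cookie["session"]["domain"] = ".example.com"  # which servers get it back
cookie["session"]["path"] = "/"
cookie["session"]["max-age"] = 3600           # expiration: one hour
cookie["session"]["secure"] = True            # only sent over SSL channels

# This string goes out in a Set-Cookie response header; a properly
# functioning browser then attaches the cookie to matching requests.
header = cookie["session"].OutputString()
print("Set-Cookie:", header)
```

The ‘‘secure’’ flag is the policy knob that keeps the cookie off unencrypted channels; without it, the state the server saved travels in the clear.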

Client-side PKI

When prodded, PKI researchers (such as ourselves) will recite a litany of reasons why PKI is a much better way than the alternatives to carry out authentication and authorization in distributed, multi-organizational settings. In practical settings, many enterprises are adopting PKI technology because it allows single sign-on, minimizes the impact of keyboard sniffers, is trendy, and is being heavily marketed by PKI vendors. As mentioned, using various key stores and client-side SSL is a dominant emerging paradigm for bringing PKI to large populations.  On the application end, numerous players preach that client-side SSL is a better way to authenticate users than passwords.

  • At the Server

As noted earlier, SSL permits the browser and user to establish an encrypted, integrity-protected channel over which to carry out their Web interaction: requests, cookies, form responses, basic authentication data, etc. The typical SSL use includes server authentication; newer SSL uses permit the browser to authenticate via PKI as well. The server operator can require that a client authenticate via PKI, and can restrict access based on how it chooses to validate the client certificate; server-side CGI scripts can interrogate client-certificate information, along with the other parameters available.
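As an illustration, a server-side SSL configuration that demands a client certificate might look as follows in Python's ssl module; the CA bundle and server credential file names are hypothetical, so those loading calls are left commented:

```python
import ssl

# Server-side context: require that the client authenticate via PKI.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a certificate

# Validation policy: which CAs the server trusts for client certificates.
# context.load_verify_locations("enterprise-ca.pem")      # hypothetical CA bundle
# context.load_cert_chain("server-cert.pem", "server-key.pem")  # server's own credentials

# After a successful handshake, conn.getpeercert() exposes the
# client-certificate fields that a CGI script could interrogate.
print(context.verify_mode)
```

The choice of CA bundle is where the operator's ‘‘how it chooses to validate the client certificate’’ decision lives.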

  • At the Browser

Different browsers take different approaches to storing keys and certificates. The experiment focuses on the two most commonly used browsers: Netscape and Internet Explorer. Netscape stores its security information in a subdirectory of the user’s home directory named .netscape (.mozilla in Mozilla). Two files are of primary interest: key3.db, which stores the user’s private key, and cert8.db, which stores the certificates recognized by the browser’s security module. Both files are binary data stored in the Berkeley DB 1.85 format. Additionally, the information in these files is encrypted with a keyphrase, so any application capable of reading the Berkeley DB format must still provide a password to read the plaintext or to modify the files without detection.

Internet Explorer relies on the Windows key store and CSP to store the private key. One unfortunate result of this tight coupling between IE and the OS is that versions of IE that run on Macintosh computers have no support for storing or using private keys. By default, Windows uses its own CSPs to store the private key, and these generate low-security keys (i.e., keys not protected by a password) by default. Many organizations (such as the DoD and even Microsoft) recommend against this behavior, noting that the key is only as secure as the user’s account. This implies that if an attacker were to gain access to a user’s account, or convince the user to execute code with the user’s privileges, the attacker would be able to use the private key at will, without having to go through any protections on the key (such as a password challenge).

One way to remedy the lack of password protection is to ‘‘export’’ the private key, placing it in a password-protected .pwl file (for IE 3 and earlier) or a .pfx file that stores the key in PKCS#12 format (for IE 4 through current versions). Once the key is exported, the user must then ‘‘import’’ it at a higher security level: medium security, which prompts the user whenever the key is used, or high security, which requires a password to use the key (assuming the user does not check the box marked ‘‘Remember password’’, which immediately demotes it to a low-security key). While exporting and reimporting the private key may seem like a cumbersome process, it has become standard practice in many organizations.
