MODERN WEB CRYPTOGRAPHY AND CIPHERTEXT

Security mechanisms in this category assume that an intruder may access the desired resource, but the information is modified in a way that makes it unusable for the attacker. The information is pre-processed at the sender before it is transmitted over the communication channel and post-processed at the receiver. While the information is in transit, it resists attacks by being useless to an intruder. One notable exception is attacks against the availability of the information, as an attacker can still interrupt the message. During the processing step at the receiver, modifications or errors that may have occurred earlier can be detected (usually because the information cannot be correctly reconstructed). When no modification has taken place, the information at the receiver is identical to that at the sender before the pre-processing step.

The most important member of this category is cryptography, the science of keeping messages secure. It allows the sender to transform information into what looks like a random data stream from the point of view of an attacker, while allowing an authorized receiver to recover the original content.

The original message is called plain text (sometimes clear text). The process of converting it, through the application of transformation rules, into a format that hides its substance is called encryption. The resulting disguised message is called ciphertext, and the operation of turning it back into plain text is called decryption. It is important to note that the conversion from plain text to ciphertext has to be lossless in order for the receiver to recover the original message under all circumstances.

The transformation rules are described by a cryptographic algorithm. Such algorithms are built on two main principles: substitution and transposition. In the case of substitution, each element of the plain text (e.g. a bit or a block) is mapped onto another element of the alphabet in use. Transposition describes the process where elements of the plain text are rearranged. Most systems involve multiple steps (called rounds) of transposition and substitution to be more resistant to cryptanalysis, the science of breaking ciphers, i.e. discovering the substance of a message behind its disguise. When the transformation rules process the input elements one at a time, the mechanism is called a stream cipher; when they operate on fixed-size input blocks, it is called a block cipher. If the security of an algorithm is based on keeping the way the algorithm works (i.e. the transformation rules) secret, it is called a restricted algorithm. Such algorithms are no longer of any interest today because they allow neither standardization nor public quality control. In addition, this approach cannot be used when a large group of users is involved: a single person leaving the group forces everyone else to change the algorithm.
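
As a toy illustration of these two building blocks, the following Python sketch implements a Caesar-style substitution and a simple columnar transposition. Both are trivially breakable and serve only to show the operations, not to provide security; the shift and column parameters are arbitrary choices.

    # Substitution: map each letter onto another letter of the alphabet.
    def substitute(text: str, shift: int = 3) -> str:
        return "".join(
            chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
            for c in text.upper()
        )

    # Transposition: rearrange the positions of the plain-text elements by
    # writing row-wise into a grid and reading it out column-wise.
    def transpose(text: str, cols: int = 4) -> str:
        padded = text + " " * (-len(text) % cols)
        return "".join(padded[r + c] for c in range(cols)
                       for r in range(0, len(padded), cols))

    print(substitute("ATTACK AT DAWN"))   # DWWDFN DW GDZQ
    print(transpose("ATTACK AT DAWN"))    # same letters, shuffled positions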

Modern cryptosystems solve this problem by basing the receiver's ability to recover encrypted information on the possession of a secret piece of information, usually called the key. Both the encryption and the decryption function have to use a key and are heavily dependent on it. When the security of the cryptosystem is based completely on the security of the key, the algorithm itself may be revealed. Although the security then does not rely on the algorithm being unknown, the cryptographic function itself, the key, and the key length must still be chosen with care. A common assumption is that the attacker has the fastest commercially available hardware at his disposal in his attempt to break the ciphertext.

The most common attack, the known plain text attack, is executed by obtaining ciphertext together with its corresponding plain text. The encryption algorithm must be so complex that even a codebreaker equipped with plenty of such pairs and powerful machines cannot feasibly retrieve the key. An attack is infeasible when the cost of breaking the cipher exceeds the value of the information, or when the time it takes to break it exceeds the useful lifespan of the information. Given pairs of corresponding cipher and plain text, a simple key-guessing algorithm will obviously succeed after some time. The approach of successively trying different key values until the correct one is found is called a brute force attack because no information about the algorithm is utilized whatsoever. For an encryption algorithm to be useful, it is a necessary condition that brute force attacks are infeasible. Depending on the keys that are used, one can distinguish two major cryptographic approaches: secret key and public key cryptosystems.
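
To see why small key spaces make brute force feasible, the following back-of-the-envelope Python sketch estimates the expected search time for an n-bit key. The rate of one billion keys per second is an assumed figure for illustration, not a benchmark.

    # An attacker testing `rate` keys per second needs, on average,
    # 2**(n-1) trials (half the key space) to hit the right n-bit key.
    def expected_brute_force_years(key_bits: int, rate: float = 1e9) -> float:
        trials = 2 ** (key_bits - 1)
        seconds = trials / rate
        return seconds / (365 * 24 * 3600)

    print(expected_brute_force_years(56))    # 56-bit key: about 1.1 years
    print(expected_brute_force_years(128))   # ~5e21 years: infeasible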

SECRET KEY CRYPTOGRAPHY

This is the kind of cryptography that has been used for the transmission of secret information for centuries, long before the advent of computers. These algorithms require that the sender and the receiver agree on a key before communication is started. It is characteristic of this variant (also called single-key or symmetric encryption) that a single secret key is shared between the sender and the receiver. It needs to be communicated in a secure way before the actual encrypted communication can start and has to remain secret for as long as the information is to remain secret. Encryption is achieved by applying an agreed function to the plain text using the secret key. Decryption is performed by applying the inverse function using the same key.
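
A minimal sketch of this symmetric pattern, assuming the pyca/cryptography package is installed; Fernet is used here simply as a convenient authenticated symmetric cipher, and any other shared-key scheme would illustrate the same point.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # the secret both parties must share
    f = Fernet(key)

    ciphertext = f.encrypt(b"meet at dawn")   # sender: plain text -> ciphertext
    plaintext = f.decrypt(ciphertext)         # receiver: same key inverts it
    assert plaintext == b"meet at dawn"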

The classic example of a secret-key block cipher that is widely deployed today is the Data Encryption Standard (DES). It is a block cipher that operates on 64-bit plain text blocks and utilizes a 56-bit key. The algorithm uses 16 rounds that are key-dependent. During each round, 48 key bits are selected and combined with the block that is encrypted. Then, the resulting block is piped through a substitution and a permutation phase (which use known values and are independent of the key) to make cryptanalysis harder. Although there is no known weakness of the DES algorithm itself, its security has been much debated. The small key length makes brute force attacks possible, and several cases have occurred where DES-protected information has been cracked. A suggested improvement, called 3DES, applies the simple DES three times with three different keys. This extends the key length to 168 bits while still resting on the well-analyzed DES core.
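
The encrypt-decrypt-encrypt (EDE) structure of 3DES can be sketched as a composition. The des_encrypt and des_decrypt functions below are hypothetical stand-ins (a plain XOR, so the snippet runs); real DES permutes 64-bit blocks over 16 key-dependent rounds. The sketch also shows why the middle step is a decryption: with all three keys equal, 3DES degenerates to single DES, which kept it backward compatible.

    # Hypothetical stand-ins for the single-DES primitives; XOR is used
    # only so that the composition below is runnable and invertible.
    def des_encrypt(block: int, key: int) -> int:
        return block ^ key

    def des_decrypt(block: int, key: int) -> int:
        return block ^ key

    # 3DES in EDE form: E_k3(D_k2(E_k1(plain text))).
    def triple_des_encrypt(block, k1, k2, k3):
        return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k3)

    def triple_des_decrypt(block, k1, k2, k3):
        return des_decrypt(des_encrypt(des_decrypt(block, k3), k2), k1)

    assert triple_des_decrypt(triple_des_encrypt(42, 1, 2, 3), 1, 2, 3) == 42
    # With k1 == k2 == k3, EDE collapses to a single encryption.
    assert triple_des_encrypt(42, 7, 7, 7) == des_encrypt(42, 7)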

A well-known stream cipher that has been much debated recently is RC4, developed by RSA Security. It is used to protect transmissions in wireless networks that follow the IEEE 802.11 standard and forms the core of the WEP (Wired Equivalent Privacy) mechanism. Although the cipher itself has not been broken, current implementations are flawed and reduce the security of RC4 to a level where the user key can be recovered by statistical analysis within a few hours.
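
The RC4 keystream generator itself is short enough to sketch in full. The following Python version implements the well-known key-scheduling and output phases; it is shown for study only since, as noted above, deployments such as WEP used it in an insecure way.

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling algorithm (KSA): permute S under the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Output phase (PRGA): XOR each data byte with a keystream byte.
        i = j = 0
        out = bytearray()
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    ct = rc4(b"secret key", b"attack at dawn")
    assert rc4(b"secret key", ct) == b"attack at dawn"   # XOR is symmetric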

PUBLIC KEY CRYPTOGRAPHY

Until the advent of public-key cryptography, knowledge of the key used to encrypt a plain text also allowed the inverse process, the decryption of the ciphertext. Public key cryptography utilizes two different keys, one called the public key, the other called the private key. The public key is used to encrypt a message while the corresponding private key is used to do the opposite. The innovation is the fact that it is infeasible to retrieve the private key given the public key. This removes the weakness of having to securely transmit a secret key from the sender to the receiver. The receiver can simply generate his public/private key pair and announce the public key without fear. Anyone can obtain this key and use it to encrypt messages that only the receiver, with his private key, is able to decrypt.

Mathematically, the process is based on trap-door one-way functions. A one-way function is a function that is easy to compute but very hard to invert: given x it is easy to determine f(x), but given f(x) it is hard to recover x. In the context of cryptographically strong one-way functions, hard is defined as computationally infeasible. Although it is obvious that some functions are easier to compute than their inverses (e.g. squaring a value in contrast to taking its square root), there is no mathematical proof that one-way functions exist. There are a number of problems that are considered difficult enough to act as one-way functions (e.g. the factorization of large numbers), but this is more an agreement among cryptographers than a rigorously defined set.
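
Modular exponentiation is a standard example of a conjectured one-way function: the forward direction is fast even for enormous numbers, while the inverse (the discrete logarithm) has no known efficient algorithm. In the Python sketch below, the secret exponent is deliberately tiny so that the brute-force inversion terminates; the parameters are illustrative only.

    p, g = 2 ** 127 - 1, 3   # a large prime modulus and a generator
    x = 91                   # the secret exponent (tiny, for demonstration)

    fx = pow(g, x, p)        # easy direction: fast even for huge numbers

    # Inverse direction: no known shortcut, so we brute-force. This only
    # finishes quickly because x was chosen tiny.
    recovered = next(c for c in range(p) if pow(g, c, p) == fx)
    assert recovered == x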

A one-way function is not directly usable for cryptography, but it becomes so when a trap door exists. A trap door is a mechanism that allows one to easily calculate x from f(x) when additional information y is provided. A common misunderstanding about public-key cryptography is thinking that it makes secret key systems obsolete, either because it is more secure or because it does not have the problem of secretly exchanging keys. As the security of a cryptosystem depends on the length of the key and the transformation rules in use, there is no automatic advantage of one approach over the other. Although a public key elegantly solves the key exchange problem, the process itself is very slow and has its own problems. Secret key systems are typically on the order of a thousand times faster than their public key counterparts. Therefore, most communication is still secured using secret-key systems, and public-key systems are only utilized to exchange the secret key for the subsequent communication. This hybrid approach is the common design to benefit both from the high speed of conventional cryptography (which is often implemented directly in hardware) and from a secure key exchange.
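
A minimal sketch of this hybrid design, assuming the pyca/cryptography package: RSA with OAEP padding transports a freshly generated symmetric key, and the symmetric cipher (Fernet here) carries the bulk data.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.fernet import Fernet

    # Receiver generates a key pair and publishes the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the bulk data with a fresh symmetric session key,
    # then wrap that key with the receiver's public key.
    session_key = Fernet.generate_key()
    bulk_ciphertext = Fernet(session_key).encrypt(b"a long document ...")
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver: unwrap the session key, then decrypt the bulk data.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(bulk_ciphertext) == b"a long document ..."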

A problem in public-key systems is the authenticity of the public key. An attacker may offer the sender his own public key and pretend that it originates from the legitimate receiver. The sender then uses the faked public key to perform his encryption, and the attacker can simply decrypt the message using his private key. In order to thwart an attacker who attempts to substitute his public key for the victim's, certificates are used. A certificate combines user information with the user's public key and the digital signature of a trusted third party that guarantees that the key belongs to the mentioned person. The trusted third party is usually called a certification authority (CA). The certificate of a CA itself is usually verified by a higher-level CA that confirms that the CA's certificate is genuine and contains its public key. The chain of third parties that verify their respective lower-level CAs has to end at a certain point, which is called the root CA. A user who wants to verify the authenticity of a public key and all involved CAs needs to obtain the self-signed certificate of the root CA via an external channel. Web browsers (e.g. Netscape Navigator, Internet Explorer) usually ship with a number of certificates of globally known root CAs. A framework that implements the distribution of certificates is called a public key infrastructure (PKI). An important standard for certificates is X.509.
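
What a certificate binds together can be seen by reading its fields. The sketch below uses pyca/cryptography's x509 module; cert.pem is a hypothetical PEM-encoded certificate file.

    from cryptography import x509

    with open("cert.pem", "rb") as fh:          # hypothetical certificate file
        cert = x509.load_pem_x509_certificate(fh.read())

    print(cert.subject)          # whose key this is (the user information)
    print(cert.issuer)           # the CA that signed the binding
    print(cert.not_valid_after)  # end of the validity period
    public_key = cert.public_key()   # the key being certified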

Another important issue is revocation, the invalidation of a certificate when the corresponding key has been compromised. The most prominent public-key algorithm is RSA, which is still utilized by the majority of current systems, although key lengths have been increased over recent years. This has put a heavier processing load on applications, a burden that has ramifications, especially for sites doing electronic commerce. A competing approach that promises security similar to RSA using far smaller key lengths is elliptic curve cryptography. However, as these systems are newer and have not been subject to sustained cryptanalysis, the confidence level in them is not yet as high as in RSA.
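
The key-size difference can be made concrete: a 256-bit elliptic curve key is commonly rated as comparable in strength to a roughly 3072-bit RSA key. A small sketch, again assuming pyca/cryptography:

    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    ec_key = ec.generate_private_key(ec.SECP256R1())   # 256-bit curve
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    # Comparable security levels despite very different key sizes.
    print(ec_key.curve.key_size, rsa_key.key_size)     # 256 vs 3072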

AUTHENTICATION AND DIGITAL SIGNATURES

An interesting and important feature of public-key cryptography is its possible use for authentication. In addition to making the information unusable for attackers, a sender may utilize cryptography to prove his identity to the receiver. This feature is realized by digital signatures. A digital signature must have properties similar to those of a normal handwritten signature: it must be hard to forge and it has to be bound to a certain document. In addition, one has to make sure that a valid signature cannot be used by an attacker to replay the same (or a different) message at a later time.

A way to realize such a digital signature is by using the sender's private key to encrypt a message. When the receiver is capable of successfully decrypting the ciphertext with the sender's public key, he can be sure that the message is authentic. This approach obviously requires a cryptosystem that allows encryption with the private key, but many (such as RSA) offer this option. It is easy for a receiver to verify that a message has been successfully decrypted when the plain text is in a human-readable format. For binary data, a checksum or a similar integrity-checking footer can be added to verify successful decryption. Replay attacks are prevented by adding a time stamp to the message (e.g. Kerberos uses time stamps to prevent messages to the ticket-granting service from being replayed).
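
A minimal sign/verify sketch, assuming the pyca/cryptography package. Note that its API signs with the private key and verifies with the public key directly, rather than exposing raw private-key encryption; verify raises an exception when the message or signature has been tampered with.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"I owe Alice 10 euros"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())

    # Anyone holding the public key can check authenticity; this raises
    # InvalidSignature if message or signature was altered.
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())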

Usually, the storage and processing overhead of encrypting a whole document is too high to be practical. This is solved by one-way hash functions: functions that map the content of a message onto a short value called a message digest. Similar to one-way functions, it is difficult to construct a message given only its hash value. Instead of encrypting the whole message, it is enough to encrypt the message digest and send it together with the original message. The receiver can then apply the known hash function (e.g. MD5) to the document and compare the result to the decrypted digest. When both values match, the message is authentic.
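
The fixed-size property is easy to demonstrate with Python's standard hashlib module. MD5 is used below because the text mentions it, though it is considered broken for collision resistance today and SHA-256 is the usual modern choice.

    import hashlib

    # However long the message, the digest stays 128 bits (16 bytes).
    message = b"the complete document, possibly megabytes long" * 10_000
    digest = hashlib.md5(message).hexdigest()
    print(len(message), "bytes ->", digest)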