The ability to create, manipulate, and share digital documents has created a host of new applications (email, word processing, and e-commerce websites) but also a new set of problems, namely, how to protect the privacy and integrity of digital data when stored and transmitted. The invention of public key cryptography in the 1970s pointed the way to a solution to those problems: most important, the ability to encrypt data without a shared key and the ability to "sign" data, ensuring its origin and integrity. Although these operations are conceptually straightforward, they both rely on the ability to bind a public key (which is typically a large, essentially random number) reliably to an identity sensible to the application or user (for example, a globally unique name, a legal identifier, or an email address). Public key infrastructure (PKI) is the umbrella term for the protocols and machinery used to perform this binding. The most important security protocols used on the Internet rely on PKI to bind names to keys, a crucial function that allows authentication of users and websites. A set of attacks in 2011 called into question the security of the PKI architecture [18,19], especially when governmental entities might be tempted to subvert Internet security assumptions. A number of interesting evolutions of the PKI architecture have been proposed as potential countermeasures to these attacks. Even in the face of these attacks, PKI remains the most important and reliable method of authenticating networked entities.
- CRYPTOGRAPHIC BACKGROUND

To understand how PKI systems function, it is necessary to grasp the basics of public key cryptography. PKI systems enable the use of public key cryptography and also use public key cryptography as the basis for their operation. Although there are thousands of varieties of cryptographic algorithms, we can understand PKI operations by looking at only two: signature and encryption.
Digital Signatures

The most important cryptographic operation in PKI systems is the digital signature. If two parties are exchanging some digital document, it may be important to protect those data so that the recipient knows that the document has not been altered since it was sent, and that any document received was indeed created by the sender. Digital signatures provide these guarantees by creating a data item, typically attached to the document in question, that is uniquely tied to the data and the sender. The recipient then has some verification operation that confirms that the signature data matches the sender and the document. Fig. 48.1 illustrates the basic security problem that motivates signatures. An attacker controlling communications between the sender and receiver can insert a bogus document, fooling the receiver. The aim of the digital signature is to block this attack by attaching a signature that can only be created by the sender, as shown in Fig. 48.2. Cryptographic algorithms can be used to construct secure digital signatures. These techniques (for example, the Rivest-Shamir-Adleman (RSA) algorithm or the Digital Signature Algorithm) all have the same three basic operations, as shown in Table 48.1.
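The three operations can be sketched with textbook RSA. The primes, exponent, and messages below are illustrative assumptions, and the key is far too small to be secure; real systems use vetted libraries and 2048-bit or larger keys.

```python
# Toy illustration of the three signature operations: key generation,
# signing, and verification. Textbook RSA with tiny primes -- NOT secure.
import hashlib

def keygen():
    p, q = 61, 53                       # toy primes (assumption, for clarity)
    n = p * q                           # public modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)
    return (n, e), (n, d)

def digest(message, n):
    # Hash the document, reduced into the (tiny) modulus range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(private_key, message):
    n, d = private_key
    return pow(digest(message, n), d, n)

def verify(public_key, message, signature):
    n, e = public_key
    return pow(signature, e, n) == digest(message, n)

public, private = keygen()
sig = sign(private, b"pay Alice $100")
print(verify(public, b"pay Alice $100", sig))   # True: genuine document
print(verify(public, b"pay Alice $900", sig))   # False: altered document
```

Note how only the private-key holder can produce a signature that the public verification operation accepts, which is the property the attacker in Fig. 48.1 cannot defeat.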
Public Key Encryption

Variants of the three operations used to construct digital signatures can also be used to encrypt data. Encryption uses a public key to scramble data in such a way that only the holder of the corresponding private key can unscramble it (Fig. 48.3). Public key encryption is accomplished with variants of the same three operations used to sign data, as shown in Table 48.2. Note that in actual implementations, the algorithms used to encrypt and sign may be different. The security of signature and encryption operations depends on two factors: first, the ability to keep the private key private; and, second, the ability to tie a public key reliably to an application or user identity. If a private key is known to an attacker, he can then perform the signing operation on arbitrary bogus documents, and can also decrypt any document encrypted with the matching public key. The same attacks can be performed if an attacker can convince a sender or receiver to use a bogus public key. PKI systems are built to distribute public keys securely, thereby preventing attackers from inserting bogus public keys. They do not directly address the security of private keys, which are typically defended by measures at a particular end point, such as keeping the private key on a smart card, encrypting private key data using operating system facilities, or other, similar mechanisms. The remainder of this section will detail the design, implementation, and operation of public key distribution systems.
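A matching sketch of the encryption direction, under the same toy assumptions (tiny illustrative primes, no padding; real deployments use a vetted library with proper padding such as OAEP):

```python
# Toy public key encryption: anyone with the public key can encrypt, but
# only the private key holder can decrypt. Textbook RSA -- NOT secure.
def keygen():
    p, q = 61, 53                       # toy primes (assumption)
    n = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)

def encrypt(public_key, m):
    n, e = public_key
    assert 0 <= m < n, "message must fit in the modulus"
    return pow(m, e, n)

def decrypt(private_key, c):
    n, d = private_key
    return pow(c, d, n)

public, private = keygen()
ciphertext = encrypt(public, 42)
print(ciphertext != 42)                 # True: the data are scrambled
print(decrypt(private, ciphertext))     # 42: only the private key recovers it
```

The sender needs only the recipient's public key, which is exactly the value a PKI must deliver reliably; if an attacker can substitute a bogus public key here, he can decrypt everything sent with it.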
- OVERVIEW OF PUBLIC KEY INFRASTRUCTURE

PKI systems solve the problem of associating meaningful names with essentially meaningless cryptographic keys. For example, when encrypting an email, the user will typically specify a set of recipients that should be able to decrypt that mail. The user will want to specify these as some kind of name (an email address or a name from a directory), not as a set of public keys. In the same way, when signed data are received and verified, the user will want to know which user signed the data, not which public key correctly verified the signature. (By way of contrast, some systems such as the Bitcoin currency protocol use keys directly as identities and thereby avoid some of the complexities associated with PKI-based designs.) The design goal of PKI systems is to connect user identities securely and efficiently to the public keys used to encrypt and verify data.
The original Diffie-Hellman article that outlined public key cryptography proposed that this binding would be done by storing public keys in a trusted directory. Whenever users wanted to encrypt data for other users, they would consult the "public file" and request the public key corresponding to some user. The same operation would yield the public key needed to verify the signature on signed data. The disadvantage of this approach is that the directory must be online and available for every new encryption and verification operation. (Although this older approach was never widely implemented, variants of it are now reappearing in newer PKI designs. For more information, see the section on Alternative Public Key Infrastructure Architectures.) PKI systems solve this online problem and accomplish identity binding by distributing "digital certificates," data structures that contain an identity and a key, bound together by a digital signature. They may also provide a mechanism to check the validity of these certificates. Certificates, which were first invented by Kohnfelder in 1978, are essentially a digitally signed message from some authority stating that "Entity X is associated with public key Y." Communicating parties can then rely on this statement (to the extent that they trust the authority signing the certificate) to use the public key Y to validate a signature from X or to send an encrypted message to X. Because time may pass and identities may change between when the signed certificate was produced and when someone uses that certificate, it may be useful to have a validation mechanism to check that the authority still stands by a particular certificate. We will describe PKI systems in terms of producing and validating certificates. There are multiple standards that describe how certificates are formatted. The X.509 standard, promulgated by the International Telecommunication Union (ITU), is the most widely used and is the certificate format used in the Transport Layer Security/Secure Socket Layer (TLS/SSL) protocols for secure Internet connections and the Secure/Multipurpose Internet Mail Extensions (S/MIME) standards for secured email. The X.509 certificate format also implies a particular model of how certification works. Other standards have attempted to define alternate models of operation and associated certificate formats. Among the other standards that describe certificates are Pretty Good Privacy (PGP) and the Simple Public Key Infrastructure (SPKI) [spki]. In this section, we will describe the X.509 PKI model and then describe how these other standards attempt to remediate problems with X.509.
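A certificate in this sense is just a signed statement binding a name to a key. The sketch below, reusing the earlier toy RSA signing (tiny illustrative primes, hypothetical names and key values; NOT secure), shows an authority producing such a statement and a relying party validating it offline:

```python
# A minimal Kohnfelder-style certificate: a digitally signed message saying
# "Entity X is associated with public key Y". Toy RSA -- NOT secure.
import hashlib, json

def keygen(p=61, q=53, e=17):
    n = p * q
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

def sign(private_key, data):
    n, d = private_key
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(public_key, data, signature):
    n, e = public_key
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

# The authority binds an identity to a key by signing the pair.
ca_public, ca_private = keygen()
tbs = json.dumps({"subject": "alice@example.com",    # hypothetical identity
                  "subjectPublicKey": 9999,          # hypothetical key value
                  "issuer": "Example CA"}, sort_keys=True).encode()
certificate = {"tbsCertificate": tbs, "signature": sign(ca_private, tbs)}

# A relying party that already trusts ca_public checks the binding without
# contacting an online directory.
print(verify(ca_public, certificate["tbsCertificate"],
             certificate["signature"]))              # True
```

Because the binding travels with the certificate, the directory no longer has to be online for every operation; only the authority's public key must be known in advance.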
- THE X.509 MODEL

The X.509 model is the most prevalent standard for certificate-based PKIs, although the standard has evolved to the point that PKI-using applications on the Internet are mostly based on the set of Internet Engineering Task Force (IETF) standards that have extended the ideas in X.509. X.509-style certificates are the basis for SSL, TLS, many virtual private networks, the US Federal Government PKI, and many other widely deployed systems.
The History of X.509

A quick historical preface is useful to explain some of the properties of X.509. X.509 is part of the X.500 directory standard owned by the ITU Telecommunications Standardization Sector. X.500 specifies a hierarchical directory useful for the X.400 set of messaging standards. As such, it includes a naming system (called "distinguished naming") that describes entities by their position in some hierarchy. A sample X.500/X.400 name might look like this:

- CN=Joe Davis, OU=Human Resources, O=WidgetCo, C=US

This name describes a person with a Common Name (CN) of "Joe Davis" who works in an Organizational Unit (OU) called "Human Resources," in an Organization called "WidgetCo" in the United States. These name components were intended to be run by their own directory components (so, for example, there would be "Country" directories that would point to "Organizational" directories, etc.), and this hierarchical description was ultimately reflected in the design of the X.509 system. Many of the changes made by the IETF and other bodies that have evolved the X.509 standard were made to reconcile this hierarchical naming system with the more distributed nature of the Internet.
The X.509 Certificate Model

The X.509 model specifies a system of Certifying Authorities (CAs) that issue certificates for end entities (users, websites, or other entities that hold private keys). A CA-issued certificate will contain (among other data) the name of the end entity, the name of the CA, the end entity's public key, a validity period, and a certificate serial number. All of this information is signed with the CA's private key. (Additional details on the information in a certificate and how it is encoded are in Section 6.) To validate a certificate, a relying party uses the CA's public key to verify the signature on the certificate, checks that the time falls within the validity period, and may also consult a server associated with the CA to ensure that the CA has not revoked the certificate. This process leaves out one important detail: Where did the CA's public key come from? The answer is that another certificate is typically used to certify the public key of the CA. This "chaining" action of validating a certificate by using the public key from another certificate can be performed any number of times, allowing for arbitrarily deep hierarchies of CAs. Of course, this must terminate at some point, typically at a self-signed certificate that is trusted by the relying party. Trusted self-signed certificates are typically referred to as "root" certificates. Once the relying party has verified the chain of signatures from the end-entity certificate to a trusted root certificate, it can conclude that the end-entity certificate is properly signed, and then move on to whatever other validation steps (proper key usage fields, validity dates in some time window, etc.) are required to trust the certificate fully. Fig. 48.4 shows the structure of a typical certificate chain. One other element is required for this system to function securely: CAs must be able to "undo" a certification action.
Whereas a certificate binds an identity to a key, there are many events that may cause that binding to become invalid. For example, a CA operated by a bank may issue a certificate to a newly hired person that gives that user the ability to sign messages as an employee of the bank. If that person leaves the bank before the certificate expires, the bank needs some way to undo that certification. The physical compromise of a private key is another circumstance that may require invalidating a certificate. This is accomplished by a validation protocol in which (in the abstract) a user examining a certificate can ask the CA whether a certificate is still valid. In practice, revocation protocols are used that delegate the processing of revocation checks to a dedicated set of servers. Root certificates are critical to the process of validating public keys through certificates. They must be inherently trusted by the application, because no other certificate signs these certificates. This trust is most commonly established by shipping a set of root certificates as part of the application that will use them. For example, Internet Explorer uses X.509 certificates to validate keys used to make TLS/SSL connections. Internet Explorer has a large set of root certificates installed that can be examined by opening the Internet Options menu item and selecting "Certificates" in the "Content" tab of the Options dialogue. A list like the one in Fig. 48.5 will appear. In Windows, the list of allowed root certificates for a given computer can be viewed in the Control Panel under Administrative Tools/Manage Computer Certificates. Both certificate dialogues can also be used to inspect these root certificates. Microsoft Root certificate details are shown in Fig. 48.6. The meaning of these fields will be explored in subsequent sections.
- X.509 IMPLEMENTATION ARCHITECTURES

Although in theory the Certification Authority is the entity that creates and validates certificates, in practice it may be desirable or necessary to delegate the actions of user authentication and certificate validation to other servers. The security of the CA's signing key is crucial to the security of a PKI system. By limiting the functions of the server that holds that key, it should be subject to less risk of disclosure or illegitimate use. The X.509 architecture defines a delegated server role, the Registration Authority (RA), which allows delegation of authentication. Subsequent extensions to the core X.509 architecture have created a second delegated role, the Validation Authority (VA), which handles queries about the validity of a certificate after creation. An RA is typically used to distribute the authentication function needed to issue a certificate without distributing the CA key. The RA's function is to perform the authentication needed to issue a certificate and then send the CA a signed statement containing the fact that it performed the authentication, the identity to be certified, and the key to be certified. The CA validates the RA's message and issues a certificate in response. For example, suppose a large multinational corporation wants to deploy a PKI system using a centralized CA. It wants to issue certificates on the basis of in-person authentication, so it needs some way to distribute authentication to multiple locations in different countries. Copying and distributing the CA signing key creates a number of risks, not only because the CA key will be present on multiple servers, but also because of the complexities of creating and managing these copies. Sub-CAs could be created for each location, but this requires careful attention to controlling the identities allowed to be certified by each sub-CA (otherwise, an attacker compromising one sub-CA could issue a certificate for any identity he liked). One possible way to solve this problem is to create RAs at each location and have the CA check that the RA is authorized to authenticate a particular employee when a certificate is requested. If an attacker subverts a given RA signing key, he can request certificates for employees in the purview of that RA, but it is straightforward, once discovered, to deauthorize the RA, solve the security problem, and create a new RA key.
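The RA flow described above can be sketched as follows. An HMAC over a key shared with the CA stands in for the RA's signature, and all names, keys, and authorization scopes are hypothetical:

```python
# Sketch of the RA-to-CA exchange: the RA authenticates the user and sends a
# signed attestation; the CA checks the RA's signature and authorization
# scope before issuing. HMAC stands in for a real signature (assumption).
import hmac, hashlib, json

RA_KEYS = {"ra-paris": b"ra-paris-secret"}     # each RA's key, shared with CA
RA_SCOPE = {"ra-paris": "example.fr"}          # domain each RA may vouch for

def ra_attest(ra_name, identity, public_key):
    msg = json.dumps({"ra": ra_name, "identity": identity,
                      "publicKey": public_key}, sort_keys=True).encode()
    tag = hmac.new(RA_KEYS[ra_name], msg, hashlib.sha256).hexdigest()
    return {"msg": msg, "tag": tag}

def ca_issue(attestation):
    fields = json.loads(attestation["msg"])
    expected = hmac.new(RA_KEYS[fields["ra"]], attestation["msg"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return None                            # forged attestation
    if not fields["identity"].endswith("@" + RA_SCOPE[fields["ra"]]):
        return None                            # RA outside its purview
    return {"subject": fields["identity"], "publicKey": fields["publicKey"],
            "issuer": "Central CA"}

cert = ca_issue(ra_attest("ra-paris", "jdupont@example.fr", 4242))
print(cert["subject"])
```

The scope check in `ca_issue` is what limits the damage from a compromised RA key: the attacker can only request certificates within that RA's purview.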
VAs are given the ability to revoke certiﬁcates (the speciﬁc methods used to effect revocation are detailed in the X.509 Revocation Protocols section) and ofﬂoad that function from the CA. Through judicious use of RAs and VAs, it is possible to construct certiﬁcation architectures in which the critical CA server is accessible to only a small number of other servers, and network security controls can be used to reduce or eliminate threats from outside network entities.
- X.509 CERTIFICATE VALIDATION

X.509 certificate validation is a complex process that can be done to several levels of confidence. This section will outline a typical set of steps involved in validating a certificate, but it is not an exhaustive catalog of the possible methods that can be used. Different applications will often require different validation techniques, depending on the application's security policy. It is rare for an application to implement certificate validation itself, because there are several application program interfaces and libraries available to perform this task. Microsoft CryptoAPI, OpenSSL, and the Java JCE all provide certificate validation interfaces. The Server-Based Certificate Validation Protocol (SCVP) can also be used to validate a certificate. However, all of these interfaces offer a variety of options, and understanding the validation process is essential to using them properly. Although a complete specification of the certificate validation process would require hundreds of pages, we supply a sketch of what happens during certificate validation. It is not a complete description and is purposefully simplified. The certificate validation process typically proceeds in three steps and typically takes three inputs. The first is the certificate to be validated, the second is any intermediate certificates acquired by the application, and the third is a store containing the root and intermediate certificates trusted by the application. The following steps are a simplified outline of how certificates are typically validated. In practice, the introduction of bridge CAs and other nonhierarchical certification models has led to more complex validation procedures. IETF Request for Comments (RFC) 3280 presents a complete specification for certificate validation, and RFC 4158 presents a specification for constructing a certification path in environments where nonhierarchical certification structures are used.
Validation Step 1: Construct the Chain and Validate Signatures

The contents of the target certificate cannot be trusted until the signature on the certificate is validated, so the first step is to check the signature. To check the signature, the certificate for the authority that signed the target certificate must be located. This is done by searching the intermediate certificates and certificate store for a certificate with a Subject field that matches the Issuer field of the target certificate. If multiple certificates match, the validator can search the matching certificates for a Subject Key Identifier extension that matches the Issuer Key Identifier extension in the candidate certificates. If multiple certificates still match, the most recently issued candidate certificate can be used. (Note that, because of potentially revoked intermediate certificates, multiple chains may need to be constructed and examined through Steps 2 and 3 to find the actual valid chain.) Once the proper authority certificate is found, the validator checks the signature on the target certificate using the public key in the authority certificate. If the signature check fails, the validation process can be stopped and the target certificate deemed invalid. If the signature matches and the authority certificate is a trusted certificate, the constructed chain is then subjected to Steps 2 and 3. If not, the authority certificate is treated as a target certificate, and Step 1 is called recursively until it returns a chain to a trusted certificate or fails. Constructing the complete certificate path requires that the validator be in possession of all certificates in that path. This requires that the validator keep a database of intermediate certificates or that the protocol using the certificate supply the needed intermediates. The SCVP provides a mechanism to request a certificate chain from a server, which can eliminate these requirements. SCVP is described in more detail in a subsequent section.
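The Subject/Issuer matching loop can be sketched on simplified certificate records. Signature checking is elided here, and the field names are illustrative rather than real X.509 encodings:

```python
# Sketch of Step 1: build a chain from the target certificate to a trusted
# root by matching each certificate's "issuer" to a candidate's "subject".
def build_chain(target, intermediates, trusted_roots):
    chain = [target]
    current = target
    subjects = {c["subject"]: c for c in intermediates + trusted_roots}
    while True:
        issuer_cert = subjects.get(current["issuer"])
        if issuer_cert is None:
            return None                      # no path to a trust anchor
        # (a real validator verifies the signature with issuer_cert's key here)
        if issuer_cert in trusted_roots:
            return chain + [issuer_cert]     # reached a trusted root
        if issuer_cert in chain:
            return None                      # loop guard for cross-certified CAs
        chain.append(issuer_cert)
        current = issuer_cert

root = {"subject": "Root CA", "issuer": "Root CA"}        # self-signed
inter = {"subject": "Intermediate CA", "issuer": "Root CA"}
leaf = {"subject": "www.example.com", "issuer": "Intermediate CA"}
chain = build_chain(leaf, [inter], [root])
print([c["subject"] for c in chain])
# ['www.example.com', 'Intermediate CA', 'Root CA']
```

The recursion in the text appears here as iteration; either way, the process terminates at a trusted self-signed certificate or fails.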
Step 2: Check Validity Dates, Policy, and Key Usage

Once a chain has been constructed, various fields in the certificate are checked to ensure that the certificate was issued correctly and that it is currently valid. The following checks should be run on the candidate chain: The certificate chain times are correct. Each certificate in the chain contains a validity period with a not-before and not-after time. For applications other than validating the signature on a stored document, the current time must fall after the not-before time and before the not-after time. Some applications may require "time nesting," meaning that the validity period for a certificate must fall entirely within the validity period of the issuer's certificate. It is up to the policy of the application whether it treats out-of-date certificates as invalid or as a warning case that can be overridden by the user. Applications may also treat certificates that are not yet valid differently from certificates that have expired. Applications that are validating the certificate on a stored document may have to treat the validity time as the time when the document was signed, as opposed to the time when the signature was checked. There are three cases of interest. The first, and easiest, is where the document signature is checked and the certificate chain validating the public key contains certificates that are currently within their validity time interval. In this case, the validity times are all good, and verification can proceed. The second case is where the certificate chain validating the public key is currently invalid because one or more certificates are out of date, and the document is believed to have been signed at a time when the chain was out of date. In this case, the validity times are all invalid, and the user should be at least warned. The ambiguous case arises when the certificate chain is currently out of date, but the chain is believed to have been valid with respect to the time when the document was signed. Depending on its policy, the application can treat this case in several different ways.
It can assume that the certificate validity times are strict and fail to validate the document. Alternatively, it can assume that the certificates were good at the time of signing and validate the document. The application can also take steps to ensure that this case does not occur, by using a time-stamping mechanism in conjunction with signing the document, or by providing some mechanism for resigning documents before certificate chains expire. Once the certificate chain has been constructed, the verifier must also verify that various X.509 extension fields are valid. Some common extensions that are relevant to the validity of a certificate path are:

- BasicConstraints: This extension is required for CAs, and limits the depth of the certificate chain below a specific CA certificate.
- NameConstraints: This extension limits the namespace of identities certified underneath the given CA certificate. This extension can be used to limit a specific CA to issuing certificates for a given domain or X.400 namespace.
- KeyUsage and ExtendedKeyUsage: These extensions limit the purposes for which a certified key can be used. CA certificates must have KeyUsage set to allow certificate signing. Various values of ExtendedKeyUsage may be required for some certification tasks.
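The core Step 2 checks might be sketched like this, on simplified certificate records (field names and values are illustrative, not X.509 encodings):

```python
# Sketch of Step 2: validity-window and KeyUsage checks on simplified
# certificate dicts.
from datetime import datetime, timezone

def check_validity(cert, at=None):
    # "at" lets a verifier check against the signing time of a stored
    # document rather than the current time.
    now = at or datetime.now(timezone.utc)
    return cert["not_before"] <= now <= cert["not_after"]

def check_ca_key_usage(cert):
    # A CA certificate must be allowed to sign certificates.
    return "keyCertSign" in cert.get("key_usage", ())

ca_cert = {
    "not_before": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "not_after": datetime(2034, 1, 1, tzinfo=timezone.utc),
    "key_usage": ("keyCertSign", "cRLSign"),
}
signing_time = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(check_validity(ca_cert, at=signing_time))  # True: inside the window
print(check_ca_key_usage(ca_cert))               # True: may sign certificates
```

Passing an explicit `at` time corresponds to the "validity time is the signing time" policy discussed above; whether an out-of-window result is fatal or merely a warning remains an application policy decision.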
Step 3: Consult Revocation Authorities

Once the verifier has concluded that it has a suitably signed certificate chain with valid dates and proper KeyUsage extensions, it may want to consult the revocation authorities named in each certificate to check whether the certificates are currently valid. Certificates may contain extensions that point to Certificate Revocation List (CRL) storage locations or to Online Certificate Status Protocol (OCSP) responders. These methods allow the verifier to check that a CA has not revoked the certificate in question. Note that each certificate in the chain may need to be checked for revocation status. The next section on certificate revocation details the mechanisms used to revoke certificates.
- X.509 CERTIFICATE REVOCATION

Because certificates are typically valid for a significant period of time, it is possible that during the validity period of the certificate a key may be lost or stolen, an identity may change, or some other event may occur that causes a certificate's identity binding to become invalid or suspect. To deal with these events, it must be possible for a CA to revoke a certificate, typically by some kind of notification that can be consulted by applications examining the validity of a certificate. Two mechanisms are used to perform this task: CRLs and OCSP. The original X.509 architecture implemented revocation via a CRL, a periodically issued document containing a list of certificate serial numbers that are revoked by that CA. X.509 has defined two basic CRL formats, V1 and V2. When CA certificates are revoked by a higher-level CA, the serial number of the CA certificate is placed on an Authority Revocation List (ARL), which is formatted identically to a CRL. CRLs and ARLs, as defined in X.509 and IETF RFC 3280, are ASN.1-encoded objects that contain the information shown in Table 48.3. This header is followed by a sequence of revoked certificate records. Each record contains the information shown in Table 48.4. The list of revoked certificates is optionally followed by a set of CRL extensions that supply additional information about the CRL and how it should be processed. To process a CRL, the verifying party checks that the CRL has been signed with the key of the named issuer and that the current date is between the thisUpdate time and the nextUpdate time. This time check is crucial: if it is not performed, an attacker could use a revoked certificate by supplying an old CRL on which the certificate had not yet appeared. Note that expired certificates are typically removed from the CRL, which prevents the CRL from growing unboundedly over time. The costs of maintaining and transmitting CRLs to verifying parties have been repeatedly identified as an important component of the cost of running a PKI system [3,13], and several alternative revocation schemes have been proposed to lower this cost. The cost of CRL distribution was also a factor in the emergence of online certificate status-checking protocols such as OCSP and SCVP.
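The CRL processing steps just described, minus the signature check, can be sketched as follows (the structures and dates are illustrative, not ASN.1):

```python
# Sketch of CRL processing: check the freshness window, then look up the
# certificate's serial number.
from datetime import datetime, timezone

def is_revoked(serial, crl, now=None):
    now = now or datetime.now(timezone.utc)
    # Freshness check: without it, an attacker could replay an old CRL
    # issued before the certificate was revoked.
    if not (crl["thisUpdate"] <= now <= crl["nextUpdate"]):
        raise ValueError("stale or not-yet-valid CRL; fetch a fresh one")
    return serial in crl["revokedSerials"]

crl = {
    "thisUpdate": datetime(2025, 3, 1, tzinfo=timezone.utc),
    "nextUpdate": datetime(2025, 3, 8, tzinfo=timezone.utc),
    "revokedSerials": {1001, 1057},       # illustrative serial numbers
}
now = datetime(2025, 3, 4, tzinfo=timezone.utc)
print(is_revoked(1001, crl, now))   # True: certificate was revoked
print(is_revoked(2002, crl, now))   # False: not on the list
```

A real verifier would additionally verify the CRL's signature against the issuer's key before trusting any of these fields.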
Delta Certificate Revocation Lists

In large systems that issue many certificates, CRLs can become lengthy. One approach to reducing the network overhead associated with sending the complete CRL to every verifier is to issue a Delta CRL along with a Base CRL. The Base CRL contains the complete set of revoked certificates up to some point in time, and the accompanying Delta CRL contains only the certificates added over some time period. Clients that are capable of processing Delta CRLs can then download the Base CRL less frequently and download the smaller Delta CRL to obtain recently revoked certificates. Delta CRLs are formatted identically to CRLs but carry a critical extension denoting that they are a Delta, not a Base, CRL. IETF RFC 3280 details how Delta CRLs are formatted, and the set of certificate extensions that indicate that a CA issues Delta CRLs.
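A sketch of how a client combines the two lists (simplified: real Delta CRLs are tied to a specific Base CRL via CRL number extensions, and entries can also be removed, for example when a certificate hold is released):

```python
# The effective revocation set is the Base CRL's serials updated with the
# Delta CRL's additions (simplified sketch; serials are illustrative).
def effective_revocations(base_serials, delta_serials):
    return set(base_serials) | set(delta_serials)

base = {101, 102, 103}     # large, downloaded infrequently
delta = {104, 105}         # small, downloaded often
revoked = effective_revocations(base, delta)
print(sorted(revoked))     # [101, 102, 103, 104, 105]
print(104 in revoked)      # True: caught without re-fetching the Base CRL
```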
Online Certificate Status Protocol

OCSP was designed with the goal of reducing the costs of CRL transmission and eliminating the time lag between certificate invalidity and certificate revocation inherent in CRL-based designs. The idea behind OCSP is straightforward. A CA certificate contains a reference to an OCSP server. A client validating a certificate transmits the certificate serial number, a hash of the issuer name, and a hash of the issuer key to that OCSP server. The OCSP server checks the certificate status and returns an indication as to the current status of the certificate. This removes the need to download the entire list of revoked certificates and also allows for essentially instantaneous revocation of invalid certificates. It has the design trade-off of requiring that clients validating certificates have network connectivity to the required OCSP server. OCSP responses contain the basic information as to the status of the certificate in the set of "good," "revoked," or "unknown." They also contain a thisUpdate time, similar to a CRL, and are signed. Responses can also contain a nextUpdate time, which indicates how long the client can consider the OCSP response definitive. The reason the certificate was revoked can also be returned in the response. OCSP is defined in IETF RFC 2560.
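The request/response shape can be sketched as follows. The responder database, hash labels, and timestamps are illustrative, and a real response is signed by the responder and verified by the client:

```python
# Sketch of an OCSP-style exchange: the client asks about one serial number
# and gets a status back, instead of downloading a whole CRL.
def ocsp_respond(status_db, issuer_hash, serial):
    status = status_db.get((issuer_hash, serial), "unknown")
    return {"certStatus": status,                    # good/revoked/unknown
            "thisUpdate": "2025-03-04T00:00:00Z",    # when status was current
            "nextUpdate": "2025-03-05T00:00:00Z"}    # how long to rely on it

responder_db = {
    ("hash(Example CA)", 1001): "revoked",
    ("hash(Example CA)", 1002): "good",
}
print(ocsp_respond(responder_db, "hash(Example CA)", 1002)["certStatus"])  # good
print(ocsp_respond(responder_db, "hash(Example CA)", 9999)["certStatus"])  # unknown
```

The per-certificate query is what trades CRL download cost for an online-connectivity requirement.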
- SERVER-BASED CERTIFICATE VALIDITY PROTOCOL The X.509 certiﬁcate path construction and validation process requires a nontrivial amount of code, the ability to fetch and cache CRLs, and, in the case of mesh and bridge CAs, the ability to interpret CA policies. The SCVP  was designed to reduce the cost of using X.509 certiﬁcates by allowing applications to delegate the task of certiﬁcate validation to an external server. SCVP offers two levels of functionality: Delegated Path Discovery (DPD), which attempts to locate and construct a complete certiﬁcate chain for a given certiﬁcate, and Delegated Path Validation (DPV), which performs a complete path validation, including revocation checking, on a certiﬁcate chain. The main reason for this division of functionality is that a client can use an untrusted SCVP server for DPD operations, because it will validate the resulting path itself. Only trusted SCVP servers can be used for DPV, because the client must trust the server’s assessment of a certiﬁcate’s validity. SCVP also allows certiﬁcates to be checked according to some deﬁned certiﬁcation policy. They can be used to centralize policy management for an organization that wishes all clients to follow some set of rules with respect to what set of CAs are trusted, what certiﬁcation policies are trusted, etc. To use SCVP, the client sends a query to an SCVP server, which contains the following parameters: l QueriedCerts. This is the set of certiﬁcates for which the client wants the server to construct (and optionally validate) paths. l Checks. The Checks parameter speciﬁes what the client wants the server to do. The checks parameter can be used to specify that the server should build a path, should build a path and validate it without checking revocation, or should build and fully validate the path. l WantBack. The WantBack parameter speciﬁes what the server should return from the request. 
This can range from the public key from the validated certiﬁcate path (in which case the client is fully delegating certiﬁcate validation to the server) to all certiﬁcate chains that the server can locate.
- ValidationPolicy. The ValidationPolicy parameter instructs the server how to validate the resultant certification chain. This parameter can be as simple as "use the default RFC 3280 validation algorithm" or it can specify a wide range of conditions that must be satisfied. Among the conditions that can be specified with this parameter are:
  - KeyUsage and ExtendedKeyUsage. The client can specify a set of KeyUsage or ExtendedKeyUsage fields that must be present in the end-entity certificate. This allows the client to accept, for example, only certificates that are allowed to perform digital signatures.
  - UserPolicySet. The client can specify a set of certification policy object identifiers (OIDs) that must be present in the CAs used to construct the chain. CAs can assert that they follow some formally defined policy when issuing certificates, and this parameter allows the client to accept only certificates issued under some set of these policies. For example, if a client wanted to accept only certificates acceptable under the Medium Assurance Federal Bridge CA policies, it could assert that policy identifier in this parameter. For more information on policy identifiers, see the section on X.509 Certificate Extensions.
  - InhibitPolicyMapping. When issuing bridge or cross-certificates, a CA can assert that a certificate policy identifier in one domain is equivalent to some other policy identifier within its domain. By using this parameter, the client can state that it does not want these policy equivalences to be used when validating certificates against values in the UserPolicySet parameter.
  - TrustAnchors. The client can use this parameter to specify a set of certificates that must be at the top of any acceptable certificate chain. By using this parameter, a client could, for example, say that only VeriSign Class 3 certificates are acceptable in this context.
- ResponseFlags. This specifies various options as to how the server should respond (whether it needs to sign or otherwise protect the response) and whether a cached response is acceptable to the client.
- ValidationTime. The client may want a validation performed as of a specific time, so that it can find whether a certificate was valid at some point in the past. (Note that SCVP does not allow "speculative" validation, in the sense of asking whether a certificate will be valid in the future.) This parameter allows the client to specify the validation time to be used by the server.
- IntermediateCerts. The client can use this parameter to supply additional certificates that can potentially be used to construct the certificate chain. The server is not obligated to use these certificates. This parameter is useful where the client has received a set of intermediate certificates from a communicating party and is not certain whether the SCVP server has possession of these certificates.
- RevInfos. Like the IntermediateCerts parameter, the RevInfos parameter supplies extra information that may be needed to construct or validate the path. Instead of certificates, the RevInfos parameter supplies revocation information such as OCSP responses, CRLs, or delta CRLs.
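Viewed as a data structure, an SCVP request bundles these parameters together. The following sketch is a hypothetical, simplified model; the field names mirror the parameters described above, but this is illustrative only, not the protocol's actual ASN.1 structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Set

@dataclass
class ValidationPolicy:
    policy_id: str = "default-rfc3280"
    required_key_usage: Set[str] = field(default_factory=set)
    user_policy_set: Set[str] = field(default_factory=set)   # acceptable policy OIDs
    inhibit_policy_mapping: bool = False
    trust_anchors: List[str] = field(default_factory=list)   # required chain roots

@dataclass
class SCVPRequest:
    queried_certs: List[str]
    validation_policy: ValidationPolicy
    response_flags: dict = field(default_factory=dict)       # e.g. {"signed": True}
    validation_time: Optional[datetime] = None               # None means "now"
    intermediate_certs: List[str] = field(default_factory=list)  # hints; server may ignore
    rev_infos: List[str] = field(default_factory=list)       # CRLs/OCSP responses

# Ask the server to validate a certificate as of a past date, accepting only
# chains whose end entity is permitted to produce digital signatures.
req = SCVPRequest(
    queried_certs=["end-entity.example"],
    validation_policy=ValidationPolicy(required_key_usage={"digitalSignature"}),
    validation_time=datetime(2010, 6, 1, tzinfo=timezone.utc),
)
```

The point of the sketch is that everything a client would otherwise configure locally (trust anchors, acceptable policies, usage constraints) travels in the request, which is what lets the server centralize validation decisions.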
- X.509 BRIDGE CERTIFICATION SYSTEMS

In practice, large-scale PKI systems proved to be more complex than could easily be handled under the X.509 hierarchical model. For example, Polk and Hastings identified a number of policy complexities that presented difficulties when attempting to build a PKI system for the US Federal Government. In this case, certainly one of the largest PKI projects ever undertaken, they found that the traditional model of a hierarchical certification system was simply unworkable. They stated:
The initial designs for a federal PKI were hierarchical in nature because of government's inherent hierarchical organizational structure. However, these initial PKI plans ran into several obstacles. There was no clear organization within the government that could be identified and agreed upon to run a governmental "root" CA. While the search for an appropriate organization dragged on, federal agencies began to deploy autonomous PKIs to enable their electronic processes. The search for a "root" CA for a hierarchical federal PKI was abandoned, due to the difficulties of imposing a hierarchy after the fact.

Their proposed solution to this problem was to use a "mesh CA" system to establish a Federal Bridge Certification Authority. This Bridge architecture has since been adopted in large PKI systems in Europe and the financial services community in the United States. The details of the European Bridge CA can be found at http://www.bridge-ca.org. This part of the chapter details the technical design of bridge CAs and the various X.509 certificate features that enable bridges.
Mesh Public Key Infrastructures and Bridge Certifying Authorities

Bridge CA architectures are implemented using a nonhierarchical certification structure called a mesh PKI. The classic X.509 architecture joins together multiple PKI systems by subordinating them under a higher-level CA. All certificates chain up to this CA, and that CA essentially
creates trust between the CAs below it. Mesh PKIs join together multiple PKI systems using a process called "cross-certification" that does not create this type of hierarchy. To cross-certify, the top-level CA in a given hierarchy creates a certificate for an external CA called the Bridge CA. This bridge CA then becomes, in a manner of speaking, a sub-CA under the organization's CA. However, the Bridge CA also creates a certificate for the organizational CA, so it can also be viewed as a top-level CA certifying that organizational CA. The end result of this cross-certification process is that if two organizations, A and B, have joined the same bridge CA, they can both create certificate chains from their respective trusted CAs through the other organization's CA to end-entity certificates that it has created. These chains will be longer than traditional hierarchical chains but have the same basic verifiable properties. Fig. 48.7 shows how two organizations might be connected through a bridge CA, and what the resultant certificate chains look like. In the case illustrated in Fig. 48.7, a user that trusts certificates issued by PKI A (that is, PKI A Root is a "trust anchor") can construct a chain to certificates issued by the PKI B Sub-CA, because it can verify Certificate 2 via its trust of the PKI A Root. Certificate 2 then chains to Certificate 3, which chains to Certificate 6. Certificate 6 then is a trusted issuer certificate for certificates issued by the PKI B Sub-CA.
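A cross-certified chain of the kind shown in Fig. 48.7 can be sketched with a toy model. The names and data layout here are illustrative (real chains carry keys and signatures, and this sketch assumes the mesh contains no cycles), but the chain-building logic has the same shape:

```python
# Toy model of certificates as (issuer, subject) pairs.
certs = [
    ("PKI A Root", "PKI A Root"),         # self-signed trust anchor
    ("PKI A Root", "Bridge CA"),          # PKI A cross-certifies the bridge
    ("Bridge CA", "PKI B Root"),          # bridge certifies PKI B's root
    ("PKI B Root", "PKI B Sub-CA"),
    ("PKI B Sub-CA", "alice@b.example"),  # end-entity certificate
]

def build_chain(trust_anchor, end_entity):
    """Walk issuer->subject edges from the anchor to the end entity (DFS)."""
    def walk(subject, path):
        if subject == end_entity:
            return path
        for issuer, subj in certs:
            if issuer == subject and subj != subject:  # skip self-signed loops
                found = walk(subj, path + [(issuer, subj)])
                if found:
                    return found
        return None
    return walk(trust_anchor, [])

chain = build_chain("PKI A Root", "alice@b.example")
# The chain crosses both PKIs via the bridge: four certificates instead of
# the two a purely hierarchical chain within PKI A would need.
```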
Mesh architectures create two significant technical problems: path construction and policy evaluation. In a hierarchical PKI system, there is only one path from the root certificate to an end-entity certificate. Creating a certificate chain is as simple as taking the current certificate, locating the issuer in the subject field of another certificate, and repeating until the root is reached (completing the chain) or no certificate can be found (failing to construct the chain). In a mesh system, there can be cyclical loops where this process fails to terminate with either failure or success. This is not a difficult problem to solve, but it is more complex to deal with than the hierarchical case. Policy evaluation also becomes much more complex in the mesh case. In the hierarchical CA case, the top-level CA can establish policies that are followed by Sub-CAs, and these policies can be encoded into certificates in an unambiguous way. When multiple PKIs are joined by a bridge CA, these PKIs may have similar policies that are expressed under different names. PKI A and PKI B may both certify "medium assurance" CAs that perform a certain level of authentication before issuing certificates, but may have different identifiers for these policies. When joined by a bridge CA, clients may reasonably want to validate certificates issued by both CAs and understand the policies under which those certificates are issued. The PolicyMapping technique allows similar policies under different names from disjoint PKIs to be translated at the bridge CA.
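A visited set is the standard fix for the cycle problem. A minimal sketch, again using a toy certificate model rather than real X.509 parsing:

```python
def find_path(certs, anchor, target):
    """Depth-first path discovery that terminates even when the mesh of
    cross-certificates contains cycles (e.g. A certifies B and B certifies A)."""
    visited = set()

    def walk(subject, path):
        if subject == target:
            return path
        visited.add(subject)  # never expand the same CA twice
        for issuer, subj in certs:
            if issuer == subject and subj not in visited:
                found = walk(subj, path + [(issuer, subj)])
                if found is not None:
                    return found
        return None

    return walk(anchor, [])

# A and B cross-certify each other: a naive issuer-following loop would
# bounce between them forever; the visited set terminates the search.
mesh = [("A", "B"), ("B", "A"), ("B", "C")]
path = find_path(mesh, "A", "C")     # [("A", "B"), ("B", "C")]
missing = find_path(mesh, "A", "Z")  # None: the search fails cleanly
```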
Although none of these problems is insurmountable, they increase the complexity of certificate validation code, and they helped drive the invention of server-based validation protocols such as SCVP. These protocols delegate path discovery and validation to an external server rather than require applications to integrate this functionality. This may lower application complexity, but the main benefit of this strategy is that questions of acceptable policies and translation can be configured at one central verification server rather than distributed to every application doing certificate validation.
- X.509 CERTIFICATE FORMAT

The X.509 standard (and the related IETF RFCs) specifies a set of data fields that must be present in a properly formatted certificate, a set of optional extension data fields that can be used to supply additional certificate information, how these fields must be signed, and how the signature data are encoded. All of these data fields (mandatory fields, optional fields, and the signature) are specified in Abstract Syntax Notation One (ASN.1), a formal language that allows for exact definitions of the content of data fields and how those fields are arranged in a data structure. An associated specification, the Distinguished Encoding Rules (DER), is used with specific certificate data and the ASN.1 certificate format to create the actual binary certificate data. The ASN.1 notation is authoritatively defined in ITU-T Recommendation X.680, and DER in ITU-T Recommendation X.690. (For an introduction to ASN.1 and DER, see [kaliski].)
X.509 V1 and V2 Format

The first X.509 certificate standard was published in 1988 as part of the broader X.500 directory standard. X.509 was intended to provide public key-based access control to an
X.500 directory, and defined a certificate format for that use. This format, which is now referred to as X.509 v1, defined a static format containing an X.500 Issuer name (the name of the CA), an X.500 Subject name, a validity period, the key to be certified, and the signature of the CA. Although this basic format allowed for all basic PKI operations, it required that all names be in the X.500 form and did not allow for any other information to be added to the certificate. The X.509 v2 format added two Unique ID fields but did not fix the primary deficiencies of the v1 format. As it became clear that name formats would have to be more flexible and certificates would have to accommodate a wider variety of information, work began on a new certificate format.
X.509 V3 Format

The X.509 certificate specification was revised in 1996 to add an optional extensions field that allows a set of additional data fields to be encoded into the certificate (Table 48.5). This change may seem minor, but in fact it allowed certificates to carry a wide array of information useful for PKI implementation, and also allowed a certificate to contain multiple, non-X.500 identities. These extension fields allow key usage policies, CA policy information, revocation pointers, and other relevant information to live in the certificate. The V3 format is the most widely used X.509 variant and is the basis for the certificate profile in RFC 3280 issued by the IETF.
X.509 Certificate Extensions

This section is a partial catalog of common X.509 V3 extensions. There is no canonical directory of V3 extensions, so there are undoubtedly extensions in use outside this list. The most common extensions are defined in RFC 3280, which contains the IETF certificate profile used by S/MIME and many SSL/TLS implementations. These extensions address a number of deficiencies in the base X.509 certificate specification and in many cases are essential for constructing a practical PKI system. In particular, the Certificate Policy, Policy Mapping, and Policy Constraints extensions form the basis for the popular bridge CA architectures.
Authority Key Identifier

The Authority Key Identifier extension identifies which specific private key owned by the certificate issuer was used to sign the certificate. The use of this extension allows a single issuer to use multiple private keys and unambiguously identifies which key was used. This allows issuer keys to be refreshed without changing the issuer name and enables handling of events such as an issuer key being compromised or lost.
Subject Key Identifier

Like the Authority Key Identifier, the Subject Key Identifier extension indicates which subject key is contained in the certificate. This extension provides a way to identify quickly which certificates belong to a specific key owned by a subject. If the certificate is a CA certificate, the Subject Key Identifier can be used to construct chains by connecting a Subject Key Identifier with a matching Authority Key Identifier.
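Chain building by key identifier can be sketched as follows. The records are toy data; a real implementation compares the DER-encoded extension values rather than strings:

```python
# Toy certificate records: each carries its subject key identifier (SKI)
# and the authority key identifier (AKI) of the key that signed it.
certs = [
    {"subject": "Root CA",      "ski": "aa:11", "aki": "aa:11"},  # self-signed
    {"subject": "Sub CA",       "ski": "bb:22", "aki": "aa:11"},
    {"subject": "leaf.example", "ski": "cc:33", "aki": "bb:22"},
]

def chain_by_key_id(leaf_subject):
    """Link certificates by matching each AKI to the issuer's SKI."""
    by_ski = {c["ski"]: c for c in certs}
    cert = next(c for c in certs if c["subject"] == leaf_subject)
    chain = [cert]
    while cert["aki"] != cert["ski"]:   # stop at the self-signed root
        cert = by_ski[cert["aki"]]      # issuer = cert whose SKI matches our AKI
        chain.append(cert)
    return [c["subject"] for c in chain]

# chain_by_key_id("leaf.example") -> ["leaf.example", "Sub CA", "Root CA"]
```

Matching on key identifiers rather than names is what lets an issuer roll over to a new key without renaming itself: the new certificates simply carry a new SKI.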
Key Usage

A CA may wish to issue a certificate that limits the use of a public key. This may increase overall system security by segregating encryption keys from signature keys, and even segregating signature keys by use. For example, an entity may have a key used for signing documents and a key used for decryption of documents. The signing key may be protected by a smart card mechanism that requires a personal identification number per signing, whereas the encryption key is always available when the user is logged in. The use of this extension allows the CA to express that the encryption key cannot be used to generate signatures, and notifies communicating users that they should not encrypt data with the signing public key. The key usage capabilities are defined in a bit field, which allows a single key to have any combination of the defined capabilities (see checklist, "An Agenda for Action to Define Key Usage Capabilities").
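The bit-field nature of the extension can be illustrated with a small sketch. The flag names follow the KeyUsage bits defined in RFC 3280, but the numeric layout here is illustrative, not the actual DER BIT STRING encoding:

```python
from enum import IntFlag

class KeyUsage(IntFlag):
    DIGITAL_SIGNATURE = 1 << 0
    NON_REPUDIATION   = 1 << 1
    KEY_ENCIPHERMENT  = 1 << 2
    DATA_ENCIPHERMENT = 1 << 3
    KEY_AGREEMENT     = 1 << 4
    KEY_CERT_SIGN     = 1 << 5
    CRL_SIGN          = 1 << 6

# A signing-only certificate: any combination of bits is possible.
signing_only = KeyUsage.DIGITAL_SIGNATURE | KeyUsage.NON_REPUDIATION

def may_encrypt(usage):
    """A relying party should refuse to encrypt to a key lacking this bit."""
    return bool(usage & KeyUsage.KEY_ENCIPHERMENT)

# may_encrypt(signing_only)               -> False
# may_encrypt(KeyUsage.KEY_ENCIPHERMENT)  -> True
```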
Subject Alternative Name

This extension allows the certificate to define non-X.500-formatted identities for the subject. It supports a variety of namespaces, including email addresses, Domain Name System names for servers, Electronic Document Interchange party names, Uniform Resource Identifiers, and Internet Protocol (IP) addresses, among others.
Policy Extensions

Three important X.509 certificate extensions (Certificate Policy, Policy Mapping, and Policy Constraints) form a complete system for communicating CA policies regarding how certificates are issued or revoked and how CA security is maintained. They are interesting in that they communicate information that is more relevant to business and policy decision making than the other extensions, which are used in the technical processes of certificate chain construction and validation. As an example, a variety of CAs run multiple Sub-CAs that issue certificates according to a variety of issuance policies, ranging from "Low Assurance" to "High Assurance." The CA will typically formally define all of its operating policies in a policy document, state them in a practice statement, define an ASN.1 OID that names each policy, and distribute it to parties that will validate those certificates. The policy extensions allow a CA to attach a policy OID to its certificates, translate policy OIDs among PKIs, and limit the policies that can be used by Sub-CAs.
Certificate Policy

The Certificate Policy extension, if present in an issuer certificate, expresses the policies that are followed by the CA, both in terms of how identities are validated before certificate issuance and how certificates are revoked, as well as the operational practices that are used to ensure the integrity of the CA. These policies can be expressed in two ways: as an OID, which is a unique number that refers to one given policy, and as a human-readable Certificate Practice Statement (CPS). One Certificate Policy extension can contain both the computer-sensible OID and a printable CPS. One special OID has been set aside for "AnyPolicy," which states that the CA may issue certificates under a free-form policy. IETF RFC 2527 gives a complete description of what should be present in a CA policy document and CPS. More details on the 2527 guidelines are given in the section on PKI Policy Description.
Policy Mapping

The Policy Mapping extension contains two policy OIDs: one for the Issuer domain and one for the Subject domain. When this extension is present, a validating party can consider the two policies identical, which is to say, the Subject OID, when present in the chain below the given certificate, can be considered the same as the policy named in the Issuer OID. This extension is used to join together two PKI systems with functionally similar policies that have different policy reference OIDs.
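The effect of policy mapping during validation can be sketched like this. The OIDs and data layout are purely illustrative:

```python
# Policy mappings asserted by the bridge CA: issuer-domain OID -> subject-domain OID.
# Here PKI A's "medium assurance" OID is declared equivalent to PKI B's.
mappings = {"1.2.3.4.medium": "5.6.7.8.medium"}

def acceptable(chain_policy_oids, user_policy_set, inhibit_policy_mapping=False):
    """Check a chain's asserted policies against the policies the client
    accepts, optionally translating OIDs through the bridge's mappings."""
    for oid in chain_policy_oids:
        if oid in user_policy_set:
            return True
        # Translate through the mapping unless the client inhibited mapping
        # (cf. the InhibitPolicyMapping parameter described under SCVP).
        if not inhibit_policy_mapping and mappings.get(oid) in user_policy_set:
            return True
    return False

# A client that accepts only PKI B's OID still accepts a PKI A chain via mapping:
# acceptable(["1.2.3.4.medium"], {"5.6.7.8.medium"})        -> True
# ...unless mapping is inhibited:
# acceptable(["1.2.3.4.medium"], {"5.6.7.8.medium"}, True)  -> False
```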
Policy Constraints

The Policy Constraints extension enables a CA to disable policy mapping for CAs farther down the chain, and to require explicit policies in all of the CAs below a given CA.
- PUBLIC KEY INFRASTRUCTURE POLICY DESCRIPTION

In many application contexts, it is important to understand how and when CAs will issue and revoke certificates. Especially when bridge architectures are used, an administrator may need to evaluate a CA's policy to determine how and when to trust certificates issued under that authority. For example, the US Federal Bridge CA maintains a detailed specification of its operating procedures and requirements for bridged CAs at the US Chief Information Officers office website (http://www.cio.gov/fpkipa/documents/FBCA_CP_RFC3647.pdf). More information about the Federal Bridge CA can be found at http://www.idmanagement.gov. Many other commercial CAs, such as VeriSign, maintain similar documents. To make policy evaluation easier and more uniform, IETF RFC 2527 specifies a standard format for CAs to communicate their policy for issuing and revoking certificates. This specification divides a policy specification document into the following sections:

- Introduction: This section describes the type of certificates that the CA issues, the applications in which those certificates can be used, and the OIDs used to identify CA policies. The Introduction also contains the contact information for the institution operating the CA.
- General Provisions: This section details the legal obligations of the CA, any warranties given as to the reliability of the bindings in the certificate, and details as to the legal operation of the CA, including fees and relationship to any relevant laws.
- Identification and Authentication: This section details how certificate requests are authenticated at the CA or RA, and how events such as name disputes or revocation requests are handled.
- Operational Requirements: This section details how the CA will react in case of key compromise, how it renews keys, how it publishes CRLs or other revocation information, how it is audited, and what records are kept during CA operation.
- Physical, Procedural, and Personnel Security Controls: This section details how the physical location of the CA is controlled and how employees are vetted.
- Technical Security Controls: This section explains how the CA key is generated and protected through its life cycle. CA key generation is typically done through an audited, recorded key generation ceremony to assure certificate users that the CA key was not copied or otherwise compromised during generation.
- Certificate and CRL Profile: The specific policy OIDs published in certificates generated by the CA are given in this section. The information in this section is sufficient to accomplish the technical evaluation of a certificate chain published by this CA.
- Specification Administration: The last section explains the procedures used to maintain and update the certificate policy statement itself.

These policy statements can be substantial documents. The Federal Bridge CA policy statement is at least 93 pages long, and other certificate authorities have similarly exhaustive documents. The aim of these statements is to provide enough legal backing for certificates produced by these CAs so that they can be used to sign legally binding contracts and automate other legally relevant applications.
- PUBLIC KEY INFRASTRUCTURE STANDARDS ORGANIZATIONS

The PKI X.509 (PKIX) Working Group was established in the fall of 1995 with the goal of developing Internet standards to support X.509-based PKIs. These specifications form the basis for numerous other IETF specifications that use certificates to secure various protocols, such as S/MIME (for secure email), TLS [for secured Transmission Control Protocol (TCP) connections], and Internet Protocol Security (for securing Internet packets).
Internet Engineering Task Force Public Key Infrastructure X.509

The PKIX working group has produced a complete set of specifications for an X.509-based PKI system. These specifications span 36 RFCs; at least eight more RFCs are being considered by the group. In addition to the basic core of X.509 certificate profiles and verification strategies, the PKIX drafts cover the format of certificate request messages, certificates for arbitrary attributes (rather than for public keys), and a host of other certificate techniques. Other IETF groups have produced a group of specifications that detail the use of certificates in various protocols and applications. In particular, the S/MIME group, which details a method for encrypting email messages, and the SSL/TLS group, which details TCP/IP connection security, use X.509 certificates. In May 2015, the IETF established the Automated Certificate Management Environment (ACME) working group to build protocols that would simplify the issuance of certificates, with the goal of making encrypted network connections easier to establish and thus, it was hoped, more
commonplace. The ACME group published a draft RFC for the ACME protocol, which enables a CA to establish domain ownership and create certificates with little or no user intervention. This protocol was then used to create the "Let's Encrypt" CA, which issues zero-cost certificates using ACME. Websites can use Let's Encrypt (https://letsencrypt.org) to provision certificates automatically and thereby enable encryption with no need to pay a commercial CA. This approach has proven popular, with more than a million certificates issued in the first year of operation. Several commercial CAs have followed with offerings of low- or zero-cost domain authentication certificates, and this approach may lead to wider use of encrypted SSL/TLS connections for Web communications.
SDSI/SPKI

The Simple Distributed Security Infrastructure (SDSI) group was chartered in 1996 to design a mechanism for distributing public keys that would correct some of the perceived complexities inherent in X.509. In particular, the SDSI group aimed to build a PKI architecture [sdsi] that would not rely on a hierarchical naming system, but would instead work with local names that would not have to be enforced to be globally unique. The eventual SDSI design, produced by Ron Rivest and Butler Lampson, has a number of unique features:

- Public key-centric design. The SDSI design uses the public key itself (or a hash of the key) as the primary identifying name. SDSI signature objects can contain naming statements about the holder of a given key, but the names are not intended to be the "durable" name of an entity.
- Free-form namespaces. SDSI imposes no restrictions on what form names must take and imposes no hierarchy that defines a canonical namespace. Instead, any signer may assert identity information about the holder of a key, but no entity is required to use (or believe) the identity bindings of any other particular signer. This allows each application to create a policy about who can create identities, how those identities are verified, and even what constitutes an identity.
- Support for groups and roles. The design of many security constructions (access control lists, for example) often includes the ability to refer to groups or roles instead of the identities of individuals. This allows access control and encryption operations to protect data for groups, which may be more natural in some situations.

The SPKI group was started at nearly the same time, with goals similar to those of the SDSI effort. The two groups were eventually merged, and the SDSI/SPKI 2.0 specification was produced, incorporating ideas from both architectures.
Internet Engineering Task Force OpenPGP

The PGP public key system, created by Philip Zimmermann, is a widely deployed PKI system that allows for the signing and encryption of files and email. Unlike the X.509 PKI architecture, the PGP PKI system uses the notion of a "Web of Trust" to bind identities to keys. The Web of Trust (WoT) replaces the X.509 idea of identity binding via an authoritative server with identity binding via multiple semitrusted paths. In a WoT system, the end user maintains a database of matching keys and identities, each of which is given two trust ratings. The first trust rating denotes how trusted the binding is between the key and the identity, and the second denotes how trusted a particular identity is to "introduce" new bindings. Users can create and sign certificates, and import certificates created by other users. Importing a new certificate is treated as an introduction. When a given identity and key in the database are signed by enough trusted identities, that binding is treated as trusted. Because PGP identities are not bound by an authoritative server, there is also no authoritative server that can revoke a key. Instead, the PGP model states that the holder of a key can revoke that key by posting a signed revocation message to a public server. Any user seeing a properly signed revocation message then removes that key from the database. Because revocation messages must be signed, only the holder of the key can produce them, so it is impossible to produce a false revocation without compromising the key. If an attacker does compromise the key, production of a revocation message from that compromised key actually improves the security of the overall system, because it warns other users not to trust that key.
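The "enough trusted identities" rule can be sketched as follows. The one-full-or-two-marginal threshold mirrors classic PGP defaults, but the exact policy is user-configurable, and the data layout here is illustrative:

```python
# Signatures on a key/identity binding, each annotated with how much the
# local user trusts the signer as an introducer.
FULL, MARGINAL, NONE = "full", "marginal", "none"

def binding_is_valid(signer_trust_levels, fulls_needed=1, marginals_needed=2):
    """Classic PGP-style rule: a binding is valid if signed by one fully
    trusted introducer or by two marginally trusted ones."""
    fulls = signer_trust_levels.count(FULL)
    marginals = signer_trust_levels.count(MARGINAL)
    return fulls >= fulls_needed or marginals >= marginals_needed

# binding_is_valid([FULL])                     -> True
# binding_is_valid([MARGINAL])                 -> False
# binding_is_valid([MARGINAL, MARGINAL, NONE]) -> True
```

Note that the decision is made locally by each user from their own trust ratings; no server's verdict is involved, which is exactly the property that distinguishes the WoT from the X.509 model.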
- PRETTY GOOD PRIVACY CERTIFICATE FORMATS

To support the unique features of the WoT system, PGP invented a flexible packetized message format that can encode encrypted messages, signed messages, key database entries, key revocation messages, and certificates. This packetized design, described in IETF RFC 2440, allows a PGP certificate to contain a variable number of names and signatures, as opposed to the single-certification model used in X.509. A PGP certificate (known as a transferable public key) contains three main sections of packetized data. The first section contains the main public key itself, potentially followed by some set of relevant revocation packets. The next section contains a set of User ID packets, which are identities to be bound to the main public key. Each User ID packet is optionally followed by a set of Signature packets,
each of which contains an identity and a signature over the User ID packet and the main public key. Each of these Signature packets essentially forms an identity binding. Because each PGP certificate can contain any number of these User ID/Signature elements, a single certificate can assert that a public key is bound to multiple identities (for example, multiple email addresses that correspond to a single user), certified by multiple signers. This multiple-signer approach enables the WoT model. The last section of the certificate is optional and may contain multiple subkeys, which are single-function keys (for example, an encryption-only key) also owned by the holder of the main public key. Each of these subkeys must be signed by the main public key. PGP Signature packets contain all information needed to perform a certification, including the time intervals for which the signature is valid. Fig. 48.8 shows how the multiname, multisignature PGP format differs from the single-name, single-signature X.509 format.
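The packet structure can be illustrated by decoding RFC 2440 "old format" packet headers, where a single octet carries a tag identifying the packet type (6 = public key, 13 = User ID, 2 = signature, 14 = public subkey). This sketch reads only the tags and skips the bodies, and the sample bytes are synthetic placeholders rather than real key material:

```python
PACKET_NAMES = {2: "signature", 6: "public key", 13: "user id", 14: "public subkey"}

def packet_tags(data):
    """Decode RFC 2440 old-format packet headers: each header octet is
    0b10TTTTLL (TTTT = tag, LL = length type; LL=0 means a 1-octet length)."""
    tags, i = [], 0
    while i < len(data):
        octet = data[i]
        assert octet & 0x80, "not a packet header"
        assert (octet & 0x40) == 0, "new-format packets not handled in this sketch"
        assert (octet & 0x03) == 0, "only 1-octet lengths handled in this sketch"
        tag = (octet >> 2) & 0x0F
        body_len = data[i + 1]
        tags.append(PACKET_NAMES.get(tag, str(tag)))
        i += 2 + body_len            # skip header octet, length octet, and body
    return tags

# A synthetic transferable public key: key, one User ID, one certifying
# signature, and a subkey (bodies are placeholder bytes, not real key data).
cert = bytes([0x98, 2, 0, 0,          # tag 6: public key
              0xB4, 5, *b"alice",     # tag 13: user id
              0x88, 3, 0, 0, 0,       # tag 2: signature binding the id
              0xB8, 2, 0, 0])         # tag 14: public subkey
# packet_tags(cert) -> ["public key", "user id", "signature", "public subkey"]
```

A certificate binding more identities or carrying more signatures simply repeats the User ID and Signature packets, which is how the format supports the variable-length structure described above.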
- PRETTY GOOD PRIVACY PUBLIC KEY INFRASTRUCTURE IMPLEMENTATIONS

The PGP PKI system is implemented in commercial products sold by the PGP Corporation and several open source projects, including Gnu Privacy Guard and OpenPGP. Thawte offers a WoT service that connects people with "WoT notaries" who can build trusted introductions. PGP Corporation operates a PGP Global Directory that contains PGP keys, along with an email confirmation service to make key certification easier. The OpenPGP group (www.openpgp.org) maintains the IETF specification (RFC 2440) for the PGP message and certificate format.
- WORLD WIDE WEB CONSORTIUM

The World Wide Web Consortium standards group published a series of standards on encrypting and signing eXtensible Markup Language (XML) documents. These standards, XML Signature and XML Encryption, have a companion PKI specification called the XML Key Management Specification (XKMS). The XKMS specification describes a meta-PKI that can be used to register, locate, and validate keys that may be certified by an outside X.509 CA, a PGP referrer, an SPKI key signer, or the XKMS infrastructure itself. The specification contains two protocol specifications: the XML Key Information Service Specification (X-KISS) and the XML Key Registration Service Specification (X-KRSS). X-KISS is used to find and validate a public key referenced in an XML document, and X-KRSS is used to register a public key so that it can be located by X-KISS requests.
- IS PUBLIC KEY INFRASTRUCTURE SECURE?

PKI has formed the basis of Internet security protocols such as S/MIME for securing email and SSL/TLS for securing communications between clients and Web servers. The essential job of PKI in these protocols is to bind a name such as an email address or domain name to a key that is controlled by that entity. As seen in this chapter, that job boils down to a CA issuing a certificate for an entity. The security of these systems thus rests on the trustworthiness of the CAs trusted within an application. If a CA issues a set of bad certificates, the security of the entire system can be called into question. The issue of a subverted CA was largely theoretical until attacks on the Comodo and DigiNotar CAs [18,19] in 2011. Both of these CAs discovered that an attacker had bypassed their internal controls and obtained certificates for prominent Internet domains (google.com, yahoo.com). These certificates were revoked, but the incident caused the major browser vendors to revisit their policies about which CA roots are trusted, and led to the removal of many CAs. In the case of DigiNotar, these attacks ultimately led to the bankruptcy of the company. By attacking a CA and obtaining a false certificate for a given domain, the attacker can set up a fake version of the
domain's website and, using that certificate, create secure connections to clients that trust that CA's root certificate. This secure connection can be used as a "man-in-the-middle" server that reveals all traffic between the client (or clients) and the legitimate website. Can these attacks be prevented? There are research protocols such as "Perspectives" that attempt to detect false certificates that might be signed by a legitimate CA. These protocols use third-party repositories to track which certificates and keys are used by individual websites. A change that is noticed by only some subset of users may indicate an attacker using a certificate to gain access to secured traffic. In 2015, the IETF published RFC 7469, which specifies "certificate pinning," a method to allow websites to specify an acceptable key or certificate that cannot be changed outside a specified time window. The pinning method (which is now implemented in several browsers) prevents a rogue CA from publishing illegitimate certificates for a given site. To pin a certificate, the site sends a hash that must match the SubjectPublicKeyInfo field of a certificate in the site's certificate chain. Although a malicious actor could spoof this field, it would fool only browsers that had not visited the legitimate site. Once a browser has seen the pinning data, it will refuse to connect to sites with nonconforming certificate chains. Pinning can also be used in other protocols by providing a way to communicate an essentially permanent constraint on the certificate chain used to validate an entity. Pinning has become more popular over time and may evolve into a standard mechanism to limit CA-based attacks on Internet protocols. Some other products that rely on PKI certification have introduced new features to make these attacks harder to execute. Google's Chrome browser has also incorporated security features intended to foil attacks on PKI infrastructure.
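The pin computation itself is small: RFC 7469 defines a pin as the base64 encoding of the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo. A sketch using the standard library (the SPKI bytes here are placeholders, not valid DER):

```python
import base64
import hashlib

def spki_pin_sha256(spki_der: bytes) -> str:
    """Compute an RFC 7469 pin: base64 of the SHA-256 digest of the
    certificate's DER-encoded SubjectPublicKeyInfo field."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes stand in for a real DER-encoded SubjectPublicKeyInfo,
# which in practice would be extracted from the site's certificate.
spki = b"\x30\x82\x01\x22placeholder-not-real-der"

pin = spki_pin_sha256(spki)
# The browser compares this value against the pin-sha256 values the site
# previously advertised; any chain lacking a certificate whose SPKI
# matches a pinned hash is rejected.
```

Pinning the SPKI rather than the whole certificate is a deliberate design choice: the site can renew or re-issue its certificate without breaking the pin, as long as the underlying key pair is retained.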
The Mozilla Foundation, owner of the Firefox browser, has instituted an audit and review system that requires all trusted CAs to attest that they have specific kinds of security mechanisms in place to prevent the issuance of illegitimate certificates. As a general principle, systems built to rely on PKI for security should account for the risks involved in CA compromise, and should understand how critical it is to control exposure to these kinds of attacks. One simple mechanism for doing this is to restrict the number of CAs that are trusted by the application. The number of roots trusted by the average Web browser is large, which makes auditing of the complete list of CAs difficult.
- ALTERNATIVE PUBLIC KEY INFRASTRUCTURE ARCHITECTURES

PKI systems have proven to be remarkably effective tools for some protocols, most notably SSL, which has emerged
as the dominant standard for encrypting Internet traffic. Deploying PKI systems for other types of applications, or as a general key management system, has not been as successful. The differentiating factor seems to be that PKI keys for machine end-entities (such as websites) do not encounter the usability hurdles that emerge when issuing PKI keys for human end-entities. Peter Gutmann [notdead] has written a number of overviews of PKI that present the fundamental difficulties of classic X.509 PKI architectures. Alma Whitten and Doug Tygar published "Why Johnny Can't Encrypt," a study of various users attempting to encrypt email messages using certificates. This study showed substantial user failure rates resulting from the complexities of understanding certificate naming and validation practices. A subsequent study showed similar results when using X.509 certificates with S/MIME encryption in Microsoft Outlook Express. Most of the research on PKI alternatives has focused on making encryption easier to use and deploy.
- MODIFIED X.509 ARCHITECTURES Some researchers have proposed modiﬁcations or redesigns of the X.509 architecture to make obtaining a certiﬁcate easier and to lower the cost of operating applications that depend on certiﬁcates. The goal of these systems is often to allow Internet-based services to use certiﬁcate-based signature and encryption services without requiring the user to consciously interact with certiﬁcation services, or even understand that certiﬁcates are being used.
Perlman and Kaufman’s User-Centric Public Key Infrastructure Perlman and Kaufman proposed the “User-centric PKI” [Perlman], which allows the user to act as his own CA, with authentication provided through individual registration with service providers. It has several features that attempt to protect user privacy by allowing the user to pick what attributes are visible to a speciﬁc service provider.
Gutmann’s Plug and Play Public Key Infrastructure Gutmann’s proposed “Plug and Play PKI” [gutmann-pnp] provides for similar self-registration with a service provider and adds location protocols to establish how to contact certifying services. The goal is to build a PKI that provides a reasonable level of security and that is essentially transparent to the end user.
Callas’ Self-assembling Public Key Infrastructure In 2003, Jon Callas proposed a PKI system that would use existing, standard PKI elements bound together by a “robot” server that would examine messages sent between users and attempt to ﬁnd certiﬁcates that could be used to secure the messages. In the absence of an available certiﬁcate, the robot would create a key on behalf of the user and send a message requesting authentication. This system has the beneﬁt of speeding deployment of PKI systems for email authentication, but loses many of the strict authentication attributes that drove the development of the X.509 and IETF PKI standards.
- ALTERNATIVE KEY MANAGEMENT MODELS PKI systems can be used for encryption as well as digital signatures, but these two applications have different operational characteristics. In particular, systems that use PKIs for encryption require an encrypting party to be able to locate certiﬁcates for its desired set of recipients. In digital signature applications, a signer requires access only to his own private key and certiﬁcate. The certiﬁcates required to verify the signature can be sent with the signed document, so there is no requirement for veriﬁers to locate arbitrary certiﬁcates. These difﬁculties have been identiﬁed as factors contributing to the difﬁculty of practical deployment of PKI-based encryption systems such as S/MIME.

In 1984, Adi Shamir proposed an identity-based cryptosystem for email encryption. In the identity-based model, any string can be mathematically transformed into a public key, typically using some public information from a server. A message can then be encrypted with this key. To decrypt, the message recipient contacts the server and requests a corresponding private key, which the server is able to derive mathematically and return to the recipient. Shamir showed how to perform a signature operation in this model but did not give a solution for encryption; a practical identity-based encryption (IBE) scheme remained an open problem for many years.

This approach has signiﬁcant advantages over the traditional PKI model of encryption. The most obvious is the ability to send an encrypted message without locating a certiﬁcate for a given recipient. There are other points of differentiation:
- Key recovery. In the traditional PKI model, if a recipient loses the private key corresponding to a certiﬁcate, all messages encrypted to that certiﬁcate’s public key cannot be decrypted. In the IBE model, the server can recompute lost private keys. If messages must be recoverable for legal or other business reasons, PKI systems typically add mandatory secondary public keys to which senders must encrypt messages.
- Group support. Because any string can be transformed into a public key, a group name can be supplied instead of an individual identity. In the traditional PKI model, groups are handled either by expanding a group to a set of individuals at encryption time or by issuing group certiﬁcates. Group certiﬁcates pose serious difﬁculties with revocation, because individuals can be removed from a group only as often as revocation information is updated.

In 2001, Boneh and Franklin gave the ﬁrst fully described secure and efﬁcient method for IBE. This was followed by a number of variant techniques, including Hierarchical Identity-Based Encryption (HIBE) and Certiﬁcateless Encryption. HIBE allows multiple key servers to be used, each of which controls part of the namespace used for encryption. Certiﬁcateless Encryption adds the ability to encrypt to an end user using an identity, but in such a way that the key server cannot read messages. IBE systems have been commercialized and are the subject of standards under the IETF (RFC 5091) and the Institute of Electrical and Electronics Engineers (1363.3).
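The operational shape of the identity-based model can be sketched structurally. In the toy below, the cryptography is deliberately mocked with an HMAC over a master secret (real IBE, such as Boneh and Franklin’s scheme, uses pairing-based mathematics so that senders never contact the server); the sketch only illustrates the operational points above: the server can rederive any private key on demand (key recovery), and any string, including a group name, acts as an identity.

```python
import hashlib
import hmac
import os

class ToyIdentityKeyServer:
    """Workflow mock of an identity-based key server. NOT real IBE:
    key derivation is an HMAC stand-in so only the message flow is shown."""

    def __init__(self) -> None:
        self.master_secret = os.urandom(32)  # known only to the server

    def extract_private_key(self, identity: str) -> bytes:
        # Deterministic derivation from the master secret: the server can
        # recompute any identity's key at any time (built-in key recovery).
        return hmac.new(self.master_secret, identity.encode(),
                        hashlib.sha256).digest()

server = ToyIdentityKeyServer()

# Key recovery: a "lost" private key is simply rederived on request.
k1 = server.extract_private_key("alice@example.com")
k2 = server.extract_private_key("alice@example.com")
assert k1 == k2

# Group support: any string, including a group name, is a valid identity.
group_key = server.extract_private_key("engineering@example.com")
assert group_key != k1
```

The identities and the HMAC construction here are purely illustrative; the point is that private keys are a function of the master secret and the identity string, which is what gives the model its key-recovery and group-naming properties.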
- SUMMARY A PKI is the key management environment for public key information in a public key cryptographic system. As discussed in this chapter, there are three basic PKI architectures, distinguished by the number of Certiﬁcate Authorities (CAs) in the PKI, where users of the PKI place their trust (known as a user’s trust point), and the trust relationships between CAs within a multi-CA PKI. The most basic PKI architecture is one that contains a single CA that provides the PKI services (certiﬁcates, certiﬁcate status information, etc.) for all users of the PKI. Multiple-CA PKIs can be constructed using one of two architectures based on the trust relationship between CAs. A PKI constructed with superior-subordinate CA relationships is called a hierarchical PKI architecture. Alternatively, a PKI constructed of peer-to-peer CA relationships is called a mesh PKI architecture.
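The difference between these architectures shows up in certiﬁcation-path building. The sketch below (all CA and user names are hypothetical) models each issued certiﬁcate as an issuer-to-subject edge: in a hierarchy every path descends from the single root, while a mesh-style peer edge lets a relying party whose trust point is another CA still reach the target.

```python
# Each entry "issuer: [subjects]" represents certificates that issuer
# has signed. "RootCA" -> CAs is hierarchical (superior/subordinate);
# "CA-West" -> "CA-East" is a peer-to-peer (mesh-style) cross edge.
certs = {
    "RootCA":  ["CA-East", "CA-West"],
    "CA-East": ["alice"],
    "CA-West": ["CA-East", "bob"],
}

def find_path(trust_point, target, seen=None):
    """Depth-first search from a relying party's trust point to the
    target end entity, following issued certificates."""
    seen = seen or set()
    if trust_point == target:
        return [target]
    seen.add(trust_point)
    for subject in certs.get(trust_point, []):
        if subject not in seen:
            tail = find_path(subject, target, seen)
            if tail:
                return [trust_point] + tail
    return None

print(find_path("RootCA", "alice"))   # hierarchical path via CA-East
print(find_path("CA-West", "alice"))  # mesh path across the peer edge
```

Real path building must also check validity periods, name constraints, and revocation status at every edge, which is why mesh path discovery is considerably harder in practice than this toy suggests.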
Directory Architectures As discussed in this chapter, early PKI development was conducted under the assumption that a directory infrastructure (speciﬁcally a global X.500 directory) would be used to distribute certiﬁcates and CRLs. Unfortunately, the global X.500 directory did not emerge, which resulted in PKIs being deployed using various directory architectures based on how directory requests are serviced. If the initial directory cannot service a request, the directory can forward the request to other known directories using directory
chaining. Another way a directory can resolve an unserviceable request is to return a referral to the initiator of the request, indicating a different directory that might be able to service the request. If the directories cannot provide directory chaining or referrals, pointers to directory servers can be embedded in a PKI certiﬁcate using the Authority Information Access and Subject Information Access extensions. In general, all PKI users interface to the directory infrastructure using the Lightweight Directory Access Protocol, regardless of how the directory infrastructure is navigated.
Bridge Certiﬁcation Authorities and Revocation Modeling Bridge Certiﬁcation Authorities provide the means to leverage the capabilities of existing corporate PKIs as well as federal PKIs. PKIs are being ﬁelded in increasing size and numbers, but operational experience to date has been limited to a relatively small number of environments. As a result, there are still many unanswered questions about the ways in which PKIs will be organized and operated in large-scale systems. Some of these questions involve the ways in which individual certiﬁcation authorities (CAs) will be interconnected. Others involve the ways in which revocation information will be distributed.

Most of the proposed revocation distribution mechanisms have involved variations of the original CRL scheme. Examples include the use of segmented CRLs and Delta CRLs. However, some schemes do not involve the use of any type of CRL (online certiﬁcate status protocols and hash chains). The model of certiﬁcate revocation presented in this chapter describes mathematically the timings of validations by relying parties. The model is used to determine how request rates for traditional CRLs change over time, and is then extended to show how request rates are affected when CRLs are segmented. This chapter also presented a technique for distributing revocation information: overissued CRLs. Overissued CRLs are identical to traditional CRLs but are issued more frequently. The result of overissuing CRLs is to spread out requests from relying parties and thus to reduce the peak load on the repository.

A more efﬁcient use of Delta CRLs employs this model of certiﬁcate revocation to analyze various methods of issuing Delta CRLs. The analysis begins with the “traditional” method of issuing Delta CRLs and shows that under some circumstances, issuing Delta CRLs in this manner fails to provide the efﬁciency gains for which Delta CRLs were designed.
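The effect of overissuing can be seen in a toy simulation (the parameters and caching model below are illustrative assumptions, not the chapter’s exact mathematics): each relying party fetches a fresh CRL only when its cached copy has expired, so issuing CRLs more often than once per validity period staggers cache expirations and ﬂattens the peak repository load.

```python
import random

def peak_repository_load(n_parties, validity, issue_interval,
                         horizon, p_validate, seed=1):
    """Toy model: per time slot, each relying party validates a certificate
    with probability p_validate; it hits the repository only if its cached
    CRL has expired, then caches the most recently issued CRL."""
    rng = random.Random(seed)
    expiry = [0.0] * n_parties        # cache-expiry time per relying party
    load = [0] * horizon
    for t in range(horizon):
        for p in range(n_parties):
            if rng.random() < p_validate:
                if expiry[p] <= t:    # cached CRL expired: fetch a new one
                    load[t] += 1
                    last_issue = (t // issue_interval) * issue_interval
                    expiry[p] = last_issue + validity
    return max(load[2 * validity:])   # peak after the initial transient

# Traditional issuance (one CRL per validity period) vs. 5x overissuing:
trad = peak_repository_load(500, 10, 10, 60, 0.2)
over = peak_repository_load(500, 10, 2, 60, 0.2)
assert over < trad                    # overissuing flattens the peak
```

With traditional issuance every cached copy expires at the same instant, so validation attempts pile up into a synchronized spike; with overissuing, parties that fetched at different times hold CRLs with different expiry times, spreading the same total number of requests across many slots.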
A new method of issuing Delta CRLs, sliding window Delta CRLs, was presented. Sliding window Delta CRLs are similar to traditional Delta CRLs, but provide a constant amount of historical information. Whereas this does not affect the request rate for