Chapter 1. Introduction
In today’s networked world, many applications need security, and cryptography is one of the
primary tools for providing that security. The primary goals of cryptography (data confidentiality,
data integrity, authentication, and non-repudiation, or accountability) can be used to thwart
numerous types of network-based attacks, including eavesdropping, IP spoofing, connection
hijacking, and tampering. OpenSSL is a cryptographic library; it provides implementations of the
industry’s best-regarded algorithms, including encryption algorithms such as 3DES (“Triple DES”),
AES and RSA, as well as message digest algorithms and message authentication codes.
Using cryptographic algorithms in a secure and reliable manner is much more difficult than most
people believe. Algorithms are just building blocks in cryptographic protocols, and cryptographic
protocols are notoriously difficult to get right. Cryptographers have a difficult time devising
protocols that resist all known attacks, and the average developer tends to do a lot worse. For
example, developers often try to secure network connections simply by encrypting data before
sending it, then decrypting it on receipt. That strategy often fails to ensure the integrity of data. In
many situations, attackers can tamper with data, and sometimes even recover it. Even when
protocols are well designed, implementation errors are common. Most cryptographic protocols
have limited applicability, such as secure online voting. However, protocols for securely
communicating over an insecure medium have ubiquitous applicability. That’s the basic purpose
of the SSL protocol and its successor, TLS (when we generically refer to SSL, we are referring to
both SSL and TLS): to provide the most common security services to arbitrary (TCP-based)
network connections in such a way that the need for cryptographic expertise is minimized.
Ultimately, it would be nice if developers and administrators didn’t need to know anything about
cryptography or even security to protect their applications. It would be nice if security was as
simple as linking in a different socket library when building a program. The OpenSSL library
strives toward that ideal as much as possible, but in reality, even the SSL protocol requires a good
understanding of security principles to apply securely. Indeed, most applications using SSL are
susceptible to attack.
Nonetheless, SSL certainly makes securing network connections much simpler. Using SSL doesn’t
require any understanding of how cryptographic algorithms work. Instead, you only need to
understand the basic properties of the important algorithms. Similarly, developers do not need to
worry about cryptographic protocols; SSL doesn’t require any understanding of its internal
workings in order to be used. You only need to understand how to apply the protocol properly.
The goal of this book is to document the OpenSSL library and how to use it properly. This is a
book for practitioners, not for security experts. We’ll explain what you need to know about
cryptography in order to use it effectively, but we don’t attempt to write a comprehensive
introduction on the subject for those who are interested in why cryptography works. For that, we
recommend Applied Cryptography, by Bruce Schneier (John Wiley & Sons). For those interested
in a more technical introduction to cryptography, we recommend Menezes, van Oorschot, and
Vanstone’s Handbook of Applied Cryptography (CRC Press). Similarly, we do not attempt to
document the SSL protocol itself, just its application. If you’re interested in the protocol details,
we recommend Eric Rescorla’s SSL and TLS (Addison-Wesley).
1.1 Cryptography for the Rest of Us
For those who have never had to work with cryptography before, this section introduces you to the
fundamental principles you’ll need to know to understand the rest of the material in this book. First,
we’ll look at the problems that cryptography aims to solve, and then we’ll look at the primitives
that modern cryptography provides. Anyone who has previously been exposed to the basics of
cryptography should feel free to skip ahead to the next section.
1.1.1 Goals of Cryptography
The primary goal of cryptography is to secure important data as it passes through a medium that
may not be secure itself. Usually, that medium is a computer network.
There are many different cryptographic algorithms, each of which can provide one or more of the
following services to applications:
Confidentiality
Data is kept secret from those without the proper credentials, even if that data travels
through an insecure medium. In practice, this means potential attackers might be able to
see garbled data that is essentially “locked,” but they should not be able to unlock that
data without the proper information. In classic cryptography, the encryption (scrambling)
algorithm was the secret. In modern cryptography, that isn’t feasible. The algorithms are
public, and cryptographic keys are used in the encryption and decryption processes. The
only thing that needs to be secret is the key. In addition, as we will demonstrate a bit later,
there are common cases in which not all keys need to be kept secret.
Integrity
The basic idea behind data integrity is that there should be a way for the recipient of a
piece of data to determine whether any modifications are made over a period of time. For
example, integrity checks can be used to make sure that data sent over a wire isn’t
modified in transit. Plenty of well-known checksums exist that can detect and even
correct simple errors. However, such checksums are poor at detecting skilled intentional
modifications of the data. Several cryptographic checksums do not have these drawbacks
if used properly. Note that encryption does not ensure data integrity. Entire classes of
encryption algorithms are subject to “bit-flipping” attacks. That is, an attacker can change
the actual value of a bit of data by changing the corresponding encrypted bit of data.
Authentication
Cryptography can help establish identity for authentication purposes.
Non-repudiation
Cryptography can enable Bob to prove that a message he received from Alice actually
came from Alice. Alice can essentially be held accountable when she sends Bob such a
message, as she cannot deny (repudiate) that she sent it. In the real world, this holds only
as long as Alice’s cryptographic keys are assumed not to be compromised. The SSL
protocol does not support non-repudiation, but it is easily added by using digital signatures.
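The “bit-flipping” attack described above can be demonstrated with a toy XOR stream cipher. This is a sketch, not a real algorithm, but any cipher that simply XORs a keystream into the data (RC4, for instance) is malleable in exactly this way:

```python
import hashlib

def keystream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR the data with a SHA-256-derived keystream.
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

key = b"shared secret"
plaintext = b"PAY $0100 TO MALLORY"
ciphertext = keystream_cipher(key, plaintext)

# Without knowing the key, flip one ciphertext bit; the corresponding
# plaintext bit flips too -- encryption alone provided no integrity.
tampered = bytearray(ciphertext)
tampered[5] ^= 0x08            # flips a bit inside the amount "0100"
print(keystream_cipher(key, bytes(tampered)))   # -> b'PAY $8100 TO MALLORY'
```

Note that the attacker changed the amount without ever decrypting the message; a MAC over the ciphertext (discussed later) is what detects this.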
These simple services can be used to stop a wide variety of network attacks, including:
Snooping (passive eavesdropping)
An attacker watches network traffic as it passes and records interesting data, such as
credit card information.
Tampering
An attacker monitors network traffic and maliciously changes data in transit (for example,
an attacker may modify the contents of an email message).
Spoofing
An attacker forges network data, appearing to come from a different network address than
he actually comes from. This sort of attack can be used to thwart systems that authenticate
based on host information (e.g., an IP address).
Hijacking
Once a legitimate user authenticates, a spoofing attack can be used to “hijack” the connection.
Capture replay
In some circumstances, an attacker can record and replay network transactions to ill effect.
For example, say that you sell a single share of stock while the price is high. If the
network protocol is not properly designed and secured, an attacker could record that
transaction, then replay it later when the stock price has dropped, and do so repeatedly
until all your stock is gone.
Many people assume that some (or all) of the above attacks aren’t actually feasible in practice.
However, that’s far from the truth. Thanks to tool sets such as dsniff
(http://www.monkey.org/~dugsong/dsniff/), it doesn’t take much experience to launch any of
the above attacks if you have access to any node on the network path between the two endpoints.
Attacks are equally easy if you’re on the same local network as one of the endpoints. Talented high
school students who can use other people’s software to break into machines and manipulate them
can easily manage to use these tools to attack real systems.
Traditionally, network protocols such as HTTP, SMTP, FTP, NNTP, and Telnet don’t provide
adequate defenses against the above attacks. Before electronic commerce started taking off in the mid-1990s,
security wasn’t really a large concern, especially considering the Internet’s origins as a platform for
sharing academic research and resources. While many protocols provided some sort of
authentication in the way of password-based logins, most of them did not address confidentiality
or integrity at all. As a result, all of the above attacks were possible. Moreover, authentication
information could usually be among the information “snooped” off a network.
SSL is a great boon to the traditional network protocols, because it makes it easy to add
transparent confidentiality and integrity services to an otherwise insecure TCP-based protocol. It
can also provide authentication services, the most important being that clients can determine if
they are talking to the intended server, not some attacker that is spoofing the server.
1.1.2 Cryptographic Algorithms
The SSL protocol covers many cryptographic needs. Sometimes, though, it isn’t good enough. For
example, you may wish to encrypt HTTP cookies that will be placed on an end user’s browser.
SSL won’t help protect the cookies while they’re stored on disk. For situations like this,
OpenSSL exports the underlying cryptographic algorithms used in its implementation of the SSL protocol.
Generally, you should avoid using cryptographic algorithms directly if possible. You’re not likely
to get a totally secure system simply by picking an algorithm and applying it. Usually,
cryptographic algorithms are incorporated into cryptographic protocols. Plenty of nonobvious
things can be wrong with a protocol based on cryptographic algorithms. That is why it’s better to
try to find a well-known cryptographic protocol to do what you want to do, instead of inventing
something yourself. In fact, even the protocols invented by cryptographers often have subtle holes.
If not for public review, most protocols in use would be insecure. Consider the original WEP
protocol for IEEE 802.11 wireless networking. WEP (Wired Equivalent Privacy) is the protocol
that is supposed to provide the same level of security for data that physical lines provide. It is a
challenge, because data is transmitted through the air, instead of across a wire. WEP was designed
by veteran programmers, yet without soliciting the opinions of any professional cryptographers or
security protocol developers. Although to a seasoned developer with moderate security knowledge
the protocol looked fine, in reality, it was totally lacking in security.
Nonetheless, sometimes you might find a protocol that does what you need, but can’t find an
implementation that suits your needs. Alternatively, you might find that you do need to come up
with your own protocol. For those cases, we do document the OpenSSL cryptographic API.
Five types of cryptographic algorithms are discussed in this book: symmetric key encryption,
public key encryption, cryptographic hash functions, message authentication codes, and digital
signatures.
1.1.2.1 Symmetric key encryption
Symmetric key algorithms encrypt and decrypt data using a single key. As shown in Figure 1-1,
the key and the plaintext message are passed to the encryption algorithm, producing ciphertext.
The result can be sent across an insecure medium, allowing only a recipient who has the original
key to decrypt the message, which is done by passing the ciphertext and the key to a decryption
algorithm. Obviously, the key must remain secret for this scheme to be effective.
Figure 1-1. Symmetric key cryptography
The primary disadvantage of symmetric key algorithms is that the key must remain secret at all
times. In particular, exchanging secret keys can be difficult, since you’ll usually want to exchange
keys on the same medium that you’re trying to use encryption to protect. Sending the key in the
clear before you use it leaves open the possibility of an attacker recording the key before you even
begin to send data.
One solution to the key distribution problem is to use a cryptographic key exchange protocol.
OpenSSL provides the Diffie-Hellman protocol for this purpose, which allows for key agreement
without actually divulging the key on the network. However, Diffie-Hellman does not guarantee
the identity of the party with whom you are exchanging keys. Some sort of authentication
mechanism is necessary to ensure that you don’t accidentally exchange keys with an attacker.
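The Diffie-Hellman idea can be sketched in a few lines of pure Python. The modulus here is absurdly small so the arithmetic is readable (real exchanges use primes of 1,024 bits or more), and, as noted above, nothing in the exchange itself authenticates either party:

```python
import secrets

# Public parameters, agreed on in the open.
p = 4294967291        # a small prime modulus (2**32 - 5); real groups are far larger
g = 5                 # a public base

# Each side picks a private exponent that it never transmits.
a = secrets.randbelow(p - 2) + 1     # Alice's secret
b = secrets.randbelow(p - 2) + 1     # Bob's secret

A = pow(g, a, p)      # Alice sends A across the network
B = pow(g, b, p)      # Bob sends B across the network

# Each side combines its own secret with the other's public value.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
print(alice_key == bob_key)   # True: a shared key that never crossed the wire
```

An eavesdropper sees p, g, A, and B, but recovering the shared key from those values is the (believed hard) discrete logarithm problem. An active attacker, however, can still sit in the middle and run one exchange with each side, which is why an authentication mechanism is required.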
Right now, Triple DES (usually written 3DES, or sometimes DES3) is the most conservative
symmetric cipher available. It is in wide use, but AES, the new Advanced Encryption Standard,
will eventually replace it as the most widely used cipher. AES is certainly faster than 3DES, but
3DES has been around a lot longer, and thus is a more conservative choice for the ultra-paranoid.
It is worth mentioning that RC4 is widely supported by existing clients and servers. It is faster
than 3DES, but is difficult to set up properly (don’t worry, SSL uses RC4 properly). For purposes
of compatibility with existing software in which neither AES nor 3DES are supported, RC4 is of
particular interest. We don’t recommend supporting other algorithms without a good reason. For
the interested, we discuss cipher selection in Chapter 6.
Security is related to the length of the key. Longer key lengths are, of course, better. To ensure
security, you should only use key lengths of 80 bits or higher. While 64-bit keys may be secure,
they likely will not be for long, whereas 80-bit keys should be secure for at least a few years to
come. AES supports only 128-bit keys and higher, while 3DES has a fixed 112 bits of effective
security. Both of these should be secure for all cryptographic needs for the foreseeable future.
Larger keys are probably unnecessary. Key lengths of 56 bits (regular DES) or less (40-bit keys
are common) are too weak; they have proven to be breakable with a modest amount of time and
money. 3DES provides 168 bits of security against brute-force attacks, but there is an attack that reduces
the effective security to 112 bits. The enormous space requirements for that attack make it about
as practical as brute force (which is completely impractical in and of itself).
1.1.2.2 Public key encryption
Public key cryptography suggests a solution to the key distribution problem that plagues
symmetric cryptography. In the most popular form of public key cryptography, each party has two
keys, one that must remain secret (the private key) and one that can be freely distributed (the
public key). The two keys have a special mathematical relationship. For Alice to send a message to
Bob using public key encryption (see Figure 1-2), Alice must first have Bob’s public key. She then
encrypts her message using Bob’s public key, and delivers it. Once encrypted, only someone who
has Bob’s private key can successfully decrypt the message (hopefully, that’s only Bob).
Figure 1-2. Public key cryptography
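The public/private key relationship can be sketched with “textbook” RSA and deliberately tiny primes. This is an illustration only: real keys are 1,024 bits or more, and real systems add padding before encrypting.

```python
# Textbook RSA with tiny primes (illustration only).
p, q = 61, 53
n = p * q                       # modulus; (n, e) is Bob's public key
e = 17                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: inverse of e mod phi(n)

message = 65                    # a message, encoded as a number < n
ciphertext = pow(message, e, n)     # anyone can encrypt with Bob's public key
recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt
print(ciphertext, recovered)        # -> 2790 65
```

The security of the scheme rests on the difficulty of recovering d from (n, e), which would require factoring n; with 61 and 53 that is trivial, but with two 512-bit primes it is believed to be infeasible.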
Public key encryption solves the problem of key distribution, assuming there is some way to find
Bob’s public key and ensure that the key really does belong to Bob. In practice, public keys are
passed around with a bunch of supporting information called a certificate, and those certificates
are validated by trusted third parties. Often, a trusted third party is an organization that does
research (such as credit checks) on people who wish to have their certificates validated. SSL uses
trusted third parties to help address the key distribution problem.
Public key cryptography has a significant drawback, though: it is intolerably slow for large
messages. Symmetric key cryptography can usually be done quickly enough to encrypt and
decrypt all the network traffic a machine can manage. Public key cryptography is generally
limited by the speed of the cryptography, not the bandwidth going into the computer, particularly
on server machines that need to handle multiple connections simultaneously.
As a result, most systems that use public key cryptography, SSL included, use it as little as
possible. Generally, public key encryption is used to agree on an encryption key for a symmetric
algorithm, and then all further encryption is done using the symmetric algorithm. Therefore,
public key encryption algorithms are primarily used in key exchange protocols and when
non-repudiation is required.
RSA is the most popular public key encryption algorithm. The Diffie-Hellman key exchange
protocol is based on public key technology and can be used to achieve the same ends by
exchanging a symmetric key, which is used to perform actual data encryption and decryption. For
public key schemes to be effective, there usually needs to be an authentication mechanism
involving a trusted third party that is separate from the encryption itself. Most often, digital
signature schemes, which we discuss below, provide the necessary authentication.
Keys in public key algorithms are essentially large numbers with particular properties. Therefore,
the bit lengths of keys in public key ciphers aren’t directly comparable to those of symmetric
algorithms. With public key encryption algorithms, you should use keys of 1,024 bits or more to
ensure reasonable security. 512-bit keys are probably too weak. Anything larger than 2,048 bits
may be too slow, and chances are it will not buy much additional practical security. Recently,
there’s been some concern that 1,024-bit keys are too weak, but as of this writing, there hasn’t
been conclusive proof.
Certainly, 1,024 bits is a bare minimum for practical security from short-term attacks. If your keys
potentially need to stay protected for years, then you might want to go ahead and use 2,048-bit keys.
When selecting key lengths for public key algorithms, you’ll usually need to select symmetric key
lengths as well. Recommendations vary, but we recommend using 1,024-bit keys when you are
willing to work with symmetric keys that are less than 100 bits in length. If you’re using 3DES or
128-bit keys, we recommend 2,048-bit public keys. If you are paranoid enough to be using 192-bit
keys or higher, we recommend using 4,096-bit public keys.
Requirements for key lengths change if you’re using elliptic curve cryptography (ECC), which is a
modification of public key cryptography that can provide the same amount of security using faster
operations and smaller keys. OpenSSL currently doesn’t support ECC, and there may be some
lingering patent issues for those who wish to use it. For developers interested in this topic, we
recommend the book Implementing Elliptic Curve Cryptography, by Michael Rosing (Manning).
1.1.2.3 Cryptographic hash functions and Message Authentication Codes
Cryptographic hash functions are essentially checksum algorithms with special properties. You
pass data to the hash function, and it outputs a fixed-size checksum, often called a message digest,
or simply digest for short. Passing identical data into the hash function twice will always yield
identical results. However, the result gives away no information about the data input to the
function. Additionally, it should be practically impossible to find two inputs that produce the same
message digest. Generally, when we discuss such functions, we are talking about one-way
functions. That is, it should not be possible to take the output and algorithmically reconstruct the
input under any circumstances. There are certainly reversible hash functions, but we do not
consider such things in the scope of this book.
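These properties are easy to observe with Python’s hashlib module:

```python
import hashlib

# Identical input -> identical digest; a tiny change -> an unrelated digest.
d1 = hashlib.sha1(b"attack at dawn").hexdigest()
d2 = hashlib.sha1(b"attack at dawn").hexdigest()
d3 = hashlib.sha1(b"attack at dusk").hexdigest()

print(d1 == d2)       # True: the function is deterministic
print(d1 == d3)       # False: one changed word scrambles the whole digest
print(len(d1) * 4)    # 160: SHA1 digests are always a fixed 160 bits
```

The digest is always the same fixed size regardless of how large the input is, and nothing about d1 reveals the input that produced it.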
For general-purpose usage, a minimally secure cryptographic hash algorithm should have a digest
twice as large as a minimally secure symmetric key algorithm. MD5 and SHA1 are the most
popular one-way cryptographic hash functions. MD5’s digest length is only 128 bits, whereas
SHA1’s is 160 bits. For some uses, MD5’s digest length is suitable, and for others, it is risky. To be
safe, we recommend using only cryptographic hash algorithms that yield 160-bit digests or larger,
unless you need to support legacy algorithms. In addition, MD5 is widely considered “nearly
broken” due to some cryptographic weaknesses in part of the algorithm. Therefore, we
recommend that you avoid using MD5 in any new applications.
Cryptographic hash functions have been put to many uses. They are frequently used as part of a
password storage solution. In such circumstances, logins are checked by running the hash function
over the password and some additional data, and checking it against a stored value. That way, the
server doesn’t have to store the actual password, so a well-chosen password will be safe even if an
attacker manages to get a hold of the password database.
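Such a scheme can be sketched as follows, using Python’s standard library rather than OpenSSL’s API. The random salt is the “additional data” mentioned above, and iterating the hash slows down guessing attacks against a stolen database:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these, never the password itself, are stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored):
    """Recompute the salted, iterated hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse")
print(check_password("correct horse", salt, stored))   # True
print(check_password("wrong guess", salt, stored))     # False
```

Because the hash is one-way, an attacker holding (salt, digest) pairs must guess passwords one at a time; a well-chosen password remains safe.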
Another thing people like to do with cryptographic hashes is to release them alongside a software
release. For example, OpenSSL might be released alongside an MD5 checksum of the archive.
When you download the archive, you can also download the checksum. Then you can compute the
checksum over the archive and see if the computed checksum matches the downloaded checksum.
You might hope that if the two checksums match, then you securely downloaded the actual
released file, and did not get some modified version with a Trojan horse in it. Unfortunately, that
isn’t the case, because there is no secret involved. An attacker can replace the archive with a
modified version, and replace the checksum with a valid value. This is possible because the
message digest algorithm is public, and there is no secret information input to it.
If you share a secret key with the software distributor, then the distributor could combine the
archive with the secret key to produce a message digest that an attacker shouldn’t be able to forge,
since he wouldn’t have the secret. Schemes for using keyed hashes, i.e., hashes involving a secret
key, are called Message Authentication Codes (MACs). MACs are often used to provide message
integrity for general-purpose data transfer, whether encrypted or not. Indeed, SSL uses MACs for
this purpose. The most widely used MAC, and the only one currently supported in SSL and in
OpenSSL, is HMAC. HMAC can be used with any message digest algorithm.
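The keyed-checksum idea can be sketched with Python’s hmac module, which implements the same HMAC construction (the key and archive contents below are, of course, made up):

```python
import hashlib
import hmac

key = b"secret shared with the distributor"
archive = b"...contents of the released archive..."

# The distributor publishes this tag alongside the archive.
tag = hmac.new(key, archive, hashlib.sha256).digest()

# You recompute the tag over what you downloaded and compare.
ok = hmac.compare_digest(tag, hmac.new(key, archive, hashlib.sha256).digest())
print(ok)   # True

# An attacker who swaps in a trojaned archive cannot produce a matching
# tag without the key -- unlike a plain, unkeyed checksum.
forged = hmac.new(b"attacker guess", b"trojaned archive", hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))   # False
```

Contrast this with the plain MD5 checksum described above: because HMAC mixes in a secret, publishing the algorithm costs nothing, and only key holders can produce valid tags.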
1.1.2.4 Digital signatures
For many applications, MACs are not very useful, because they require agreeing on a shared
secret. It would be nice to be able to authenticate messages without needing to share a secret.
Public key cryptography makes this possible. If Alice signs a message with her secret signing key,
then anyone can use her public key to verify that she signed the message. RSA provides for digital
signing. Essentially, the public key and private key are interchangeable. If Alice encrypts a
message with her private key, anyone can decrypt it. If Alice didn’t encrypt the message, using her
public key to decrypt the message would result in garbage.
There is also a popular scheme called DSA (the Digital Signature Algorithm), which the SSL
protocol and the OpenSSL library both support.
Much like public key encryption, digital signatures are very slow. To speed things up, the
algorithm generally doesn’t operate on the entire message to be signed. Instead, the message is
cryptographically hashed, and then the hash of the message is signed. Nonetheless, signature
schemes are still expensive. For this reason, MACs are preferable if any sort of secure key
exchange has taken place.
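Hash-then-sign can be sketched with the same textbook RSA used earlier (tiny primes, illustration only; the digest is truncated so it fits under the toy modulus):

```python
import hashlib

# Toy RSA key pair (real signing keys are 1,024 bits or more).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))        # Alice's private signing exponent

def toy_hash(msg):
    """A real digest, truncated to fit under the toy modulus n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"I owe Bob $20. -- Alice"
signature = pow(toy_hash(msg), d, n)     # sign the hash, not the whole message

# Anyone holding Alice's public key (n, e) can verify the signature.
print(pow(signature, e, n) == toy_hash(msg))   # True
# A modified message (or signature) will almost surely fail the same check.
print(pow(signature, e, n) == toy_hash(b"I owe Bob $2000. -- Alice"))
```

Signing the fixed-size digest instead of the message is what keeps the expensive public key operation constant-cost, no matter how long the message is.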
One place where digital signatures are widely used is in certificate management. If Alice is willing
to validate Bob’s certificate, she can sign it with her private key. Once she’s done that, Bob can
attach her signature to his certificate. Now, let’s say Bob gives the certificate to Charlie. Charlie
cannot tell on his own that the certificate really belongs to Bob, but he would believe Alice if she
vouched for it. In this case, Charlie can validate Alice’s signature, thereby demonstrating that the
certificate does indeed belong to Bob.
Since digital signatures are a form of public key cryptography, you should be sure to use key
lengths of 1,024 bits or higher to ensure security.
1.2 Overview of SSL
SSL is currently the most widely deployed security protocol. It is the security protocol behind
secure HTTP (HTTPS), and thus is responsible for the little lock in the corner of your web
browser. SSL is capable of securing any protocol that works over TCP.
An SSL transaction (see Figure 1-3) starts with the client sending a handshake to the server. In the
server’s response, it sends its certificate. As previously mentioned, a certificate is a piece of data
that includes a public key associated with the server and other interesting information, such as the
owner of the certificate, its expiration date, and the fully qualified domain name associated with
the server. By fully qualified, we mean that the server’s hostname is written out in a full, unambiguous
manner that includes specifying the top-level domain. For example, if our web server is named
“www”, and our corporate domain is “securesw.com”, then the fully qualified domain name for that
host is “www.securesw.com”. No abbreviation of this name would be considered fully qualified.
Figure 1-3. An overview of direct communication in SSL
During the connection process, the server will prove its identity by using its private key to
successfully decrypt a challenge that the client encrypts with the server’s public key. The client
needs to receive the correct unencrypted data to proceed. Therefore, the server’s certificate can
remain public—an attacker would need a copy of the certificate as well as the associated private
key in order to masquerade as a known server.
However, an attacker could always intercept server messages and present the attacker’s certificate.
The data fields of the forged certificate can look legitimate (such as the domain name associated
with the server and the name of the entity associated with the certificate). In such a case, the
attacker might establish a proxy connection to the intended server, and then just eavesdrop on all
data. Such an attack is called a “man-in-the-middle” attack and is shown in Figure 1-4. To thwart a
man-in-the-middle attack completely, the client must not only perform thorough validation of the
server certificate, but also have some way of determining whether the certificate itself is
trustworthy. One way to determine trustworthiness is to hardcode a list of valid certificates into
the client. The problem with this solution is that it is not scalable. Imagine needing the certificate
for every secure HTTP server you might wish to use on the net stored in your web browser before you even begin surfing.
The practical solution to this problem is to involve a trusted third party that is responsible for
keeping a database of valid certificates. A trusted third party, called a Certification Authority,
signs valid server certificates using its private key. The signature indicates that the Certification
Authority has done a background check on the entity that owns the certificate being presented,
thus ensuring to some degree that the data presented in the certificate is accurate. That signature is
included in the certificate, and is presented at connection time.
The client can validate the authority’s signature, assuming that it has the public key of the
Certification Authority locally. If that check succeeds, the client can be reasonably confident the
certificate is owned by an entity known to the trusted third party, and can then check the validity
of other information stored in the certificate, such as whether the certificate has expired.
Although rare, the server can also request a certificate from the client. Before certificate validation
is done, client and server agree on which cryptographic algorithms to use. After the certificate
validation, client and server agree upon a symmetric key using a secure key agreement protocol
(data is transferred using a symmetric key encryption algorithm). Once all of the negotiations are
complete, the client and server can exchange data at will.
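The client side of such a connection can be sketched with Python’s standard ssl module (itself built on OpenSSL). The hostname and request in fetch_page are placeholders; the point is the shape of the flow: connect, handshake with certificate and hostname validation, then exchange data at will.

```python
import socket
import ssl

# A default client context requires a valid, trusted certificate chain and
# a matching hostname -- the validation steps described above.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
print(ctx.check_hostname)                     # True

def fetch_page(host):
    """TCP connect, run the SSL/TLS handshake, then send one HTTP request."""
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(4096)
```

If the server presents a certificate that doesn’t chain to a trusted Certification Authority, or whose name doesn’t match server_hostname, the handshake in wrap_socket fails rather than silently talking to a man in the middle.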
The details of the SSL protocol get slightly more complex. Message Authentication Codes are
used extensively to ensure data integrity. Additionally, during certificate validation, a party can go
to the Certification Authority for Certificate Revocation Lists (CRLs) to ensure that certificates
that appear valid haven’t actually been stolen.
We won’t get into the details of the SSL protocol (or its successor, TLS). For our purposes, we can
treat everything else as a black box. Again, if you are interested in the details, we recommend Eric
Rescorla’s book SSL and TLS.
S/MIME is a competing standard to PGP (Pretty Good Privacy) for the secure exchange of email.
It provides authentication and encryption of email messages using public key cryptography, as
does PGP. One of the primary differences in the two standards is that S/MIME uses a public key
infrastructure to establish trust, whereas PGP does not. Trust is established when there is some
means of proving that someone with a public key is actually that person, and that the key belongs
to that person.
PGP was written and released in 1991 by Phil Zimmermann. It quickly became the de facto
standard for the secure exchange of information throughout the world. Today, PGP has become an
open standard known as OpenPGP, and is documented in RFC 2440. Because PGP does not rely
on a public key infrastructure to establish trust, it is easy to set up and use. Today, one of the most
common methods of establishing trust is obtaining someone’s public key either from a key server
or directly from that person, and manually verifying the key’s fingerprint by comparing it with the
fingerprint information obtained directly from the key’s owner over some trusted medium, such as
the telephone or paper mail. It is also possible to sign a public key, so if Alice trusts Bob’s key,
and Bob has used his key to sign Charlie’s key, Alice knows that she can trust Charlie’s key if the
signature matches Bob’s. PGP works for small groups of people, but it does not scale well.
S/MIME stands for Secure Multipurpose Internet Mail Exchange. RSA Security developed the
initial version in 1995 in cooperation with several other software companies; the IETF developed
Version 3. Like PGP, S/MIME also provides encryption and authentication services. A public key
infrastructure is used as a means of establishing trust, which means that S/MIME is capable of
scaling to support large groups of people. The downside is that it requires the use of a public key
infrastructure, which means that it is slightly more difficult to set up than PGP because a
certificate must be obtained from a Certification Authority that is trusted by anyone using the
certificate to encrypt or verify communications. Public keys are exchanged in the form of X.509
certificates, which require a Certification Authority to issue certificates that can be used. Because
a Certification Authority is involved in the exchange of public keys, trust can be established if the
Certification Authority that issued a certificate is trusted. Public key infrastructure is discussed in detail in Chapter 3.
S/MIME messages may have multiple recipients. For an encrypted message, the body of the
message is encrypted using a symmetric cipher, and the key for the symmetric cipher is encrypted
using the recipient’s public key. When multiple recipients are involved, the same symmetric key is
used, but the key is encrypted using each recipient’s public key. For example, if Alice sends the
same message to Bob and Charlie, two encrypted copies of the key for the symmetric cipher are
included in the message. One copy is encrypted using Bob’s public key, and the other is encrypted
using Charlie’s public key. To decrypt a message, the recipient’s certificate is required to
determine which encrypted key to decrypt.
The command-line tool provides the smime command, which supports encryption, decryption,
signing, and verifying S/MIME v2 messages (support for S/MIME v3 is limited and is not likely
to work). Email applications that do not natively support S/MIME can often be made to support it
by using the command-line tool’s smime command to process incoming and outgoing messages.
The smime command does have some limitations, and we do not recommend its use in any kind
of production environment. However, it provides a good foundation for building a more powerful
and fully featured S/MIME implementation.
The following examples illustrate the use of the S/MIME commands:
$ openssl smime -encrypt -in mail.txt -des3 -out mail.enc cert.pem
Obtains a public key from the X.509 certificate in the file cert.pem and encrypts the
contents of the file mail.txt using that key and 3DES. The resulting encrypted S/MIME
message is written to the file mail.enc.
$ openssl smime -decrypt -in mail.enc -recip cert.pem -inkey key.pem -out mail.txt
Obtains the recipient’s public key from the X.509 certificate in the file cert.pem and
decrypts the S/MIME message from the file mail.enc using the private key from the file
key.pem. The decrypted message is written to the file mail.txt.
$ openssl smime -sign -in mail.txt -signer cert.pem -inkey key.pem -out mail.sgn
The signer’s X.509 certificate is obtained from the file cert.pem, and the contents of the
file mail.txt are signed using the private key from the file key.pem. The certificate is
included in the S/MIME message that is written to the file mail.sgn.
$ openssl smime -verify -in mail.sgn -out mail.txt
Verifies the signature on the S/MIME message contained in the file mail.sgn and writes
the result to the file mail.txt. The signer’s certificate is expected to be included as part of the S/MIME message.
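Because the smime command reads standard input and writes standard output by default, these operations can be combined; for example, a message can be signed and then encrypted in one pipeline. The file names below are illustrative:

```shell
# Sign with our own certificate and key, then encrypt the signed message
# to the recipient's certificate.
openssl smime -sign -in mail.txt -signer mycert.pem -inkey mykey.pem |
    openssl smime -encrypt -des3 -out mail.enc recipient.pem

# The recipient reverses the steps: decrypt, then verify the signature.
openssl smime -decrypt -in mail.enc -recip recipient.pem -inkey rkey.pem |
    openssl smime -verify -CAfile mycert.pem -out mail.txt
```

Signing before encrypting means the signature itself is hidden from eavesdroppers, which is usually what you want for private correspondence.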
Example 3-2. A simple CA configuration definition
[ ca ]
default_ca = exampleca
[ exampleca ]
dir = /opt/exampleca
certificate = $dir/cacert.pem
database = $dir/index.txt
new_certs_dir = $dir/certs
private_key = $dir/private/cakey.pem
serial = $dir/serial
default_crl_days = 7
default_days = 365
default_md = md5
policy = exampleca_policy
x509_extensions = certificate_extensions
[ exampleca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = supplied
emailAddress = supplied
organizationName = supplied
organizationalUnitName = optional
[ certificate_extensions ]
basicConstraints = CA:false
Now that we’ve created a configuration file, we need to tell OpenSSL where to find it. By default,
OpenSSL uses a system-wide configuration file. Its location is determined by your particular
installation, but common locations are /usr/local/ssl/lib/openssl.cnf or /usr/share/ssl/openssl.cnf.
Since we’ve created our own configuration file solely for the use of our CA, we do not want to use
the system-wide configuration file. There are two ways to tell OpenSSL where to find our
configuration file: using the environment variable OPENSSL_CONF, or specifying the filename
with the -config option on the command line. Since we will issue a sizable number of commands
that should make use of our configuration file, the easiest way for us to tell OpenSSL about it is
through the environment (see Example 3-3).
Example 3-3. Telling OpenSSL where to find our configuration file
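In a Bourne-style shell, setting the variable looks like this (the path is assumed to match the dir setting in Example 3-2):

```shell
# Point all subsequent openssl commands at our CA's configuration file.
OPENSSL_CONF=/opt/exampleca/openssl.cnf
export OPENSSL_CONF
```

Once exported, every openssl command run from that shell will pick up our CA's configuration instead of the system-wide file.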
3.3.3 Creating a Self-Signed Root Certificate
Before we can begin issuing certificates with our CA, it needs a certificate of its own with which
to sign the certificates that it issues. This certificate will also be used to sign any CRLs that are
published. Any certificate that has the authority to sign certificates and CRLs will do. By this
definition, a certificate from another CA or a self-signed root certificate will work. For our
purposes, we should create our own self-signed root certificate to do the job.
The first thing that we need to do is add some more information to our configuration file. Example
3-4 shows the newly added information. Note that we’ll be using the command-line tool’s req
command, so we’ll start by adding a new section by the same name. Since we will use only this
configuration file for our CA, and since we will use only the command-line tool’s req command
this one time, we’ll put all of the necessary information that OpenSSL allows in the configuration
file rather than typing it out on the command line. It’s a little more work to do it this way, but it is
the only way to specify X.509v3 extensions, and it also allows us to keep a record of how the
self-signed root certificate was created.
Example 3-4. Configuration file additions for generating a self-signed root
[ req ]
default_bits = 2048
default_keyfile = /opt/exampleca/private/cakey.pem
default_md = md5
prompt = no
distinguished_name = root_ca_distinguished_name
x509_extensions = root_ca_extensions
[ root_ca_distinguished_name ]
commonName = Example CA
stateOrProvinceName = Virginia
countryName = US
emailAddress = firstname.lastname@example.org
organizationName = Root Certification Authority
[ root_ca_extensions ]
basicConstraints = CA:true
The default_bits key in the req section tells OpenSSL to generate a private key for the
certificate with a length of 2,048 bits. If we don’t specify this, the default is to use 512 bits. A key
length of 2,048 bits provides significantly more protection than 512, and for a self-signed root
certificate, it’s best to use all of the protection afforded to us. With the vast computing power that
is affordable today, the speed penalty for using a 2,048-bit key over a 512-bit key is well worth the
trade-off in protection, since the security of this one key directly impacts the security of all keys
issued by our CA.
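If you'd rather not rely on default_bits, the key length can also be given explicitly on the command line with the rsa: prefix to the -newkey option. The 4096-bit length, file names, and subject below are purely illustrative:

```shell
# Generate a self-signed certificate with an explicitly sized RSA key,
# overriding whatever default_bits says in the configuration file.
# (-nodes leaves the test key unencrypted; don't do that for a real CA key.)
openssl req -x509 -newkey rsa:4096 -nodes -keyout testkey.pem \
    -out testcert.pem -subj "/CN=Key length test" -days 1
```

The command-line value always wins over the configuration file, which is handy for one-off experiments without editing openssl.cnf.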
The default_keyfile key in the req section tells OpenSSL where to write the newly
generated private key. Note that we’re specifying the same directory for output as we specified
earlier in the ca section as the location of the private key for the certificate. We can’t use the $dir
“macro” here because the dir key is private to the ca section, so we need to type out the full path.
The default_md key in the req section tells OpenSSL which message digest algorithm to use to
sign the key. Since we specified MD5 as the algorithm to use when signing new certificates and
CRLs, we’ll use the same algorithm here for consistency. The SHA1 algorithm is actually a
stronger algorithm and would be preferable, but for the sake of this example, we’ve chosen MD5
because it is more widely used and all but guaranteed to be supported by any software that could
possibly be using our certificates. If you will be using only software that you know supports
SHA1 with your certificates, we recommend that you use SHA1 instead of MD5.
The prompt and distinguished_name keys determine how OpenSSL gets the information it
needs to fill in the certificate’s distinguished name. By setting prompt to no, we’re telling
OpenSSL that it should get the information from the section named by the
distinguished_name key. The default is to prompt for the information, so we must explicitly
turn prompting off here. The keys in the distinguished_name section that we’ve defined by
the name of root_ca_distinguished_name are the names of the fields making up the
distinguished name, and the values are the values that we want placed in the certificate for each
field. We’ve included only the distinguished name fields that we previously configured as required
and omitted the one optional field.
Finally, the x509_extensions key specifies the name of a section that contains the extensions
that we want included in the certificate. The keys in the section we’ve named
root_ca_extensions are the names of the extension fields that we want filled in, and the
values are the values we want them filled in with. We discussed the basicConstraints key
earlier in this chapter. We’ve set the cA component of the extension to true, indicating that this
certificate is permitted to act as a CA to sign both certificates and CRLs.
Now that we have the configuration set up for generating our self-signed root certificate, it’s time
to actually create the certificate and generate a new key pair to go along with it. The options
required on the command line are minimal because we’ve specified most of the options that we
want in the configuration file. From the root directory of the CA, /opt/exampleca, or whatever
you’ve used on your system, execute the following command. Make sure that you’ve set the
OPENSSL_CONF environment variable first so that OpenSSL can find your configuration file!
openssl req -x509 -newkey rsa -out cacert.pem -outform PEM
When you run the command, OpenSSL prompts you twice to enter a passphrase to encrypt your
private key. Remember that this private key is a very important key, so choose your passphrase
accordingly. If this key is compromised, the integrity of your CA is compromised, which
essentially means that any certificates issued, whether they were issued before the key was
compromised or after, can no longer be trusted. The key will be encrypted with 3DES, using a key
derived from your passphrase. Example 3-5 shows the output from running the command,
followed by a textual dump of the resulting certificate. Because your key pair will differ from
ours, your certificate will differ too, but the output should look similar.
Example 3-5. Output from generating a self-signed root certificate (abridged)
$ openssl req -x509 -newkey rsa -out cacert.pem -outform PEM
Using configuration from /opt/exampleca/openssl.cnf
Generating a 2048 bit RSA private key
writing new private key to '/opt/exampleca/private/cakey.pem'
Enter PEM pass phrase:
Verifying password - Enter PEM pass phrase:
$ openssl x509 -in cacert.pem -text -noout
Version: 3 (0x2)
Serial Number: 0 (0x0)
Signature Algorithm: md5WithRSAEncryption
Issuer: CN=Example CA, ST=Virginia,
Not Before: Jan 13 10:24:19 2002 GMT
Not After : Jan 13 10:24:19 2003 GMT
Subject: CN=Example CA, ST=Virginia,
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
RSA Public Key: (2048 bit)
Modulus (2048 bit):
Exponent: 65537 (0x10001)
X509v3 Basic Constraints:
Signature Algorithm: md5WithRSAEncryption
You’ll notice in Example 3-5’s output that when OpenSSL displays a DN in a shortened form, it
uses a nonstandard representation that can be somewhat confusing. In this example, we see
C=US/Email=email@example.com as an example of this representation. What’s confusing here
is the slash separating the two fields. The reason for this is that the Email and O fields are
nonstandard in a DN. OpenSSL lists the standard fields first and the nonstandard fields second, separating them with a slash.