- INTRODUCTION When UNIX was first booted on a PDP-7 computer at Bell Labs, it already had a basic notion of user isolation, separation of kernel and user memory space, and process security. It was conceived from the start as a multiuser system, and as such, security could not be added on as an afterthought. In this respect, UNIX was different from a whole class of computing machinery that had been targeted at single-user environments. Linux is a mostly GNU software-based operating system (OS) with a kernel originally written by Linus Torvalds, complemented by many popular utilities from the GNU project of the Free Software Foundation and other open-source organizations. GNU/Linux implements the same interfaces as most current UNIX systems, including the Portable Operating System Interface (POSIX) standards. As such, Linux is a UNIX-style OS, even though it was not derived from the original AT&T/Bell Labs UNIX code base. Debian is a distribution originally developed by Ian Murdock of Purdue University. Debian's express goal is to use only open and free software, as defined by its guidelines. Ubuntu is a derivative Linux distribution based on the Debian system; it emphasizes ease of use and gives beginning users easy access to a comprehensive Linux distribution. All versions of MacOS X are built on UNIX OSs, namely the Mach microkernel and the University of California's FreeBSD code. Although the graphical user interface and some other system enhancements are proprietary, MacOS has an XNU kernel and includes most of the command-line utilities commonly found in UNIX OSs. The examples in this chapter refer to Solaris, MacOS, and Ubuntu Linux, a distribution by Canonical, Inc., built on the popular Debian distribution.
- UNIX AND SECURITY As already indicated, UNIX was originally created as a multiuser system. Initially these systems were not necessarily networked, but with the integration of the Berkeley Software Distribution (BSD) TCP/IP v4 stack in 1984, UNIX-based systems quickly became the backbone of the rapidly growing Internet. As such, UNIX servers started to provide critical services to network users as well.
The Aims of System Security In general, secure computing systems must guarantee the conﬁdentiality, integrity, and availability of resources. This is achieved by combining different security mechanisms and safeguards, including policy-driven access control and process separation.
Authentication When a user is granted access to resources on a computing system, it is vitally important to establish and verify the identity of the requesting entity. This process is commonly referred to as authentication (sometimes abbreviated as AuthN).
Authorization As a multiuser system, UNIX must protect resources from unauthorized access. To protect user data from other users and nonusers, the OS has to put up safeguards against unauthorized access. Determining the eligibility of an authenticated (or anonymous) user to access or modify a resource is usually called authorization (sometimes abbreviated as AuthZ).
Availability Guarding a system (including all of its subsystems, such as the network) against security breaches is vital to keep the system available for its intended use. The availability of a system must be properly defined: Any system is physically available, even if it is turned off; however, a shut-down system would not be useful. In the same way, a system that has only the core OS running but not the services that are supposed to run on it is considered unavailable.
Computer and Information Security Handbook. http://dx.doi.org/10.1016/B978-0-12-803843-7.00011-9 Copyright © 2017 Elsevier Inc. All rights reserved.
Integrity Closely tied to availability, integrity must also be preserved: a system that is compromised cannot be considered available for regular service. Ensuring that the UNIX system is running in the intended way is crucial, especially because the system might otherwise be used maliciously by a third party, such as serving as a relay or a member of a botnet.
Conﬁdentiality Protecting resources from unauthorized access and safe- guarding the content is referred to as conﬁdentiality. As long as it is not compromised, a UNIX system will main- tain the conﬁdentiality of system user data by enforcing access control policies and separating processes from each other. There are two fundamentally different types of access control: discretionary and mandatory. Users themselves manage the former, whereas the system owner sets the latter. We will discuss the differences later in this chapter.
- BASIC UNIX SECURITY OVERVIEW UNIX security has a long tradition, and although many concepts of the earliest UNIX systems still apply, a large number of changes have fundamentally altered the way the OS implements these security principles. One of the reasons why it is complicated to talk about UNIX security is that a lot of variants of UNIX and UNIX-like OSs are on the market. In fact, if you look at only some of the core POSIX standards that have been set forth to guarantee minimal consistency across different UNIX flavors (Fig. 11.1), almost every OS on the market qualifies as UNIX (or, more precisely, as POSIX compliant). Examples include not only traditional UNIX OSs such as Solaris, HP-UX, and AIX but also Windows NT-based OSs (such as Windows XP, through the native POSIX subsystem or the Services for UNIX extensions) and even z/OS.
Traditional UNIX Systems Most UNIX systems share some internal features, though: Their approaches to authentication and authorization are similar, their delineation between kernel space and user space goes along the same lines, and their security-related kernel structures are roughly comparable. In the past few years, however, there have been major advancements in extending the original security model by adding role-based access control1 (RBAC) models to some OSs. In addition to RBAC, most UNIX-based systems can support mandatory access control (MAC) models by implementing kernel-level object tagging and rule enforcement. A more detailed discussion of MAC is provided later in this chapter.
Kernel Space Versus User Land UNIX systems typically execute instructions in one of two general contexts: the kernel or the user space. Code executed in a kernel context has (at least in traditional systems) full access to the entire hardware and software capabilities of the computing environment. Although some systems extend security safeguards into the kernel, in most cases not only can a rogue kernel execution thread cause massive data corruption, it can effectively bring down the entire OS. Obviously, a normal user of an OS should not wield so much power. To prevent this, user execution threads in UNIX systems are not executed in the context of the kernel but in a less privileged context, the user space, which is sometimes facetiously called "user land." It is common to restrict user-land access to certain more privileged execution commands by switching the operational context of the processor. For example, the common Intel x86 architecture (including the AMD 64-bit extensions) has a ring model in which privileged commands are available only in the more privileged, lower-numbered rings. Most OSs execute the kernel in ring 0 and user processes in ring 3. The UNIX kernel defines a structure called process (Fig. 11.2) that associates metadata about the user as well as potentially other environmental factors with the execution thread and its data. Access to computing resources such as memory, input/output (I/O) subsystems, and so on
is safeguarded by the kernel; if a user process wants to allocate a segment of memory or access a device, it has to make a system call, passing some of its metadata as parameters to the kernel. The kernel then performs an authorization decision2 and either grants the request or returns an error. It is then the process's responsibility to react properly to either the results of the access or the error. If this model of user space process security is so effective, why not implement it for all OS functions, including most kernel operations? The answer is that to a large extent the overhead of evaluating authorization metadata is computationally expensive. If most or all operations (which are, in the classical kernel space, often hardware-related device access operations) were run in user space or in a comparable way, the performance of the OS would suffer severely. There is a class of OS with a microkernel that implements this approach; the kernel implements only the most rudimentary functions (processes, scheduling, and basic security); all other operations, including device access and those typically carried out by the kernel, run in separate user processes. The advantage is a higher level of security and better safeguards against rogue device drivers. Furthermore, new device drivers or other OS functionality can be added or removed without having to reboot the kernel. The performance penalties are so severe, however, that no major commercial OS implements a comprehensive microkernel architecture.3
Many modern UNIX-based systems operate in a mixed mode: Whereas many device drivers such as hard drives, video, and I/O systems operate in the kernel space, they also provide a framework for user space drivers. For example, the "Filesystem in Userspace" (FUSE) framework allows additional drivers to be loaded for file systems. This allows the mounting of devices that are not formatted with the default file system types that the OS supports, without having to execute with elevated privileges. It also allows the use of file systems or other hardware drivers in situations in which license conflicts prevent the integration of a file system driver into the kernel (for example, the Common Development and Distribution License ZFS file system into the General Public License Linux kernel).
Semantics of User Space Security In most UNIX systems, security starts with access control to resources. Because users interact with the system through processes and files, it is important to know that every user space process structure has two important security fields: the user identifier (UID) and the group identifier (GID). These identifiers are typically positive integers that are unique for each user.4 Every process that is started5 by or on behalf of a user inherits the UID and GID values for that user account. These values are usually immutable for the lifetime of the process. Access to system resources must go through the kernel by calling the appropriate function that is accessible to user processes. For example, a process that wants to reserve some system memory for data access will call the malloc() library function with the requested size, which in turn obtains the memory from the kernel through system calls such as brk() or mmap(). The kernel then evaluates this request, determines whether enough virtual memory (physical memory plus swap space) is available, reserves a section of memory, and returns a pointer to the address where the block starts. Users who have the UID 0 have special privileges: They are considered superusers, able to override many of the security guards that the kernel sets up. The default UNIX superuser is named root.
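As a quick illustration (a sketch for a Linux system; the /proc interface is Linux-specific), the credentials the kernel attaches to a process can be inspected from the shell:

```shell
# Print the numeric UID and GID the current process runs with;
# every child process inherits these values.
id -u
id -g

# On Linux, the kernel exposes each process's credentials in /proc.
# The Uid: line lists the real, effective, saved, and filesystem UIDs.
grep '^Uid:' /proc/self/status
```

If the current user is root, both `id -u` and the Uid: fields show 0, the superuser identity described above.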
Standard File and Device Access Semantics File access is a fundamental task, and it is important that only authorized users get read or write access to a given
ﬁle. If any user were able to access any file, there would be no privacy and security could not be maintained, because the OS would not be able to protect its own permanent records, such as configuration information or user credentials. Most UNIX OSs use an identity-based access control (IBAC) model, in which access policies are expressed in terms of the identity of the user. The most common IBAC policy describing who may access or modify files and directories is commonly referred to as an access control list (ACL). Note that there is more than just one type of ACL; standard UNIX ACLs are well-known, but different UNIX variants or POSIX-like OSs might implement different ACLs and only define a mapping to the simple POSIX 1003 semantics. Good examples are the Windows NTFS ACLs and the NFS v4 ACLs. ACLs for files and for devices represented through device files are stored within the file system as metadata alongside the file information itself. This is different from other access control models in which policies may be stored in a central repository.
Read, Write, Execute From its earliest days, UNIX implemented a simple but effective way to set access rights for users. Normal ﬁles can be accessed in three fundamental ways: read, write, and execute. The ﬁrst two ways are obvious; execution requires a little more explanation. A ﬁle on disk may be executed only as a binary program or a script if the user has the right to execute this ﬁle. If the execute permission is not set, the system call exec() or execve() to execute a ﬁle image will fail. In addition to a user’s permissions, there must be a notion of ownership of ﬁles and sometimes other resources. In fact, each ﬁle on a traditional UNIX ﬁle system is associated with a user and a group. The user and group are not identiﬁed by their name but by UID and GID. In addition to setting permissions for the user owning the ﬁle, two other sets of permissions are set for ﬁles: for the group and for all others. Similar to being owned by a user, a ﬁle is also associated with one group. All members of this group6 can access the ﬁle with the permissions set for the group. In the same way, the other set of permissions applies to all users of the system.
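These semantics can be demonstrated with standard commands; the following is a minimal sketch using a temporary file:

```shell
# Create a file and grant: owner read+write, group read, others nothing.
f=$(mktemp)
chmod 640 "$f"

# %a shows the octal mode, %A the symbolic rwx triplets
# for user, group, and other (here: rw- r-- ---).
stat -c '%a %A' "$f"

# Ownership is recorded as numeric UID and GID, not as names.
stat -c '%u %g' "$f"
rm -f "$f"
```

The three digits of the octal mode map directly to the three permission sets: the first digit to the owning user, the second to the owning group, and the third to all others.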
Special Permissions In addition to the standard permissions, there are a few special permissions.
SetID Bit This permission applies only to executable files, and it can be set only for the user or the group. If this bit is set, the resulting process does not run with the UID or GID of the invoking user; instead, its effective UID or GID is set to that of the file's owner or group. For example, a program owned by the superuser can have the SetID bit set and execution allowed for all users. In this way a normal user can execute a specific program with elevated privileges.
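A well-known real-world example is passwd(1), which must modify a root-owned file on behalf of ordinary users. Setting and inspecting the bit can be sketched as follows (the temporary file stands in for an executable; it is not a real privileged binary):

```shell
# The leading 4 in the octal mode is the set-UID bit
# (2 would be the set-GID bit).
f=$(mktemp)
chmod 4755 "$f"

# In the symbolic form, the owner's execute slot shows 's' instead of 'x'.
stat -c '%a %A' "$f"   # 4755 -rwsr-xr-x
rm -f "$f"
```

Because set-UID programs run with elevated privileges, auditing a system for unexpected set-UID files is a standard hardening step.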
Sticky Bit When the sticky bit is set on an executable file, its data (specifically the text segment) are kept in memory even after the process exits. This is intended to speed execution of commonly used programs. A major drawback of setting the sticky bit is that when the executable file changes (for example, through a patch), the permission must be unset and the program started once more. When this process exits, the executable is unloaded from memory and the file can be changed. On modern systems this use is largely historical; today the sticky bit is mostly applied to shared directories such as /tmp, where it restricts the deletion or renaming of files to their respective owners.
Mandatory Locking Mandatory file and record locking refers to a file's ability to have its reading or writing permissions locked while a program is accessing that file. In addition, there might be additional implementation-specific permissions. These depend on the capabilities of the core operating facilities, including the kernel, but also on the type of file system. For example, most UNIX OSs can mount Microsoft DOS-based file allocation table (FAT) file systems, which do not support any permissions or user and group ownership. Because the internal semantics require some values for ownership and permissions, these are typically set for the entire file system.
Permissions on Directories The semantics of permissions on directories (Fig. 11.3) are different from those on ﬁles.
Read and Write Mapping these permissions to directories is fairly straight- forward: The read permission allows ﬁles to be listed in the directory and the write permission allows us to create ﬁles. For some applications it can be useful to allow writing but not reading.
Execute If this permission is set, a process can set its working directory to this directory. Note that with the basic per- missions there is no limitation on traversing directories, so a process might change its working directory to a child of a directory, even if it cannot do so for the directory itself.
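The following sketch shows two common directory modes (note that the superuser bypasses these checks, so the restrictions are visible only to unprivileged users):

```shell
d=$(mktemp -d)

# Owner-only access: the owner may list (r), create and delete (w),
# and traverse or cd into (x) the directory; everyone else may not.
chmod 700 "$d"
stat -c '%a' "$d"   # 700

# A "drop box": group and others may create files (w) and traverse (x),
# but without r they cannot list what the directory contains.
chmod 733 "$d"
stat -c '%a' "$d"   # 733
rmdir "$d"
```

The drop-box pattern is the "writing but not reading" case mentioned above, used for example for submission directories.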
SetID Semantics may differ here. For example, on Solaris this changes the behavior for default ownership of newly created ﬁles from the System V to the BSD semantics.
Other File Systems As mentioned, the set of available permissions and authorization policies depends on the underlying OS capabilities, including the file system. For example, the UFS file system in Solaris since version 2.5 allows additional ACLs on a per-user basis. Furthermore, NFS version 4 or higher defines additional ACLs for file access; it is obvious that the NFS server must have an underlying file system capable of recording these additional metadata. Fig. 11.4 exhibits a list of extended file system ACLs available on MacOS-based UFS. This list is similar for FreeBSD and other UFS-based file systems.
Discretionary Versus Mandatory Access Control The access control semantics described so far establish a "discretionary" access control (DAC) model: any user may determine what type of access he or she wants to give to other specific users, groups, or anybody else. For many applications this is sufficient: for example, for systems that deliver a single network service and do not allow interactive login, the service's ability to determine itself what data will be shared with network users may be sufficient. In systems that need to enforce access to data based on centralized, system operator-administered policies, DAC may not be sufficient. For example, for systems that need to operate in multilevel security environments, confidentiality of data can be achieved only through a MAC model. A number of MAC implementations for UNIX-based systems are currently available, including Solaris Trusted Extensions for Solaris 10, SELinux for Linux-based OSs, jails for
FreeBSD, the sandbox facility for MacOS and iOS, and TrustedBSD for BSD-based distributions. MAC can be designed and implemented in many different ways. A common approach is to label OS objects in both user and kernel space with a classification level and enforce appropriate MAC policies, such as the Bell-LaPadula (BLP) model for data confidentiality or the Biba model for data integrity. Many UNIX OSs provide a rudimentary set of MAC options by default: The SELinux-based Linux Security Module interface is part of the core kernel, and some OS vendors such as Red Hat ship their OSs with a minimal set of MAC policies enabled. To operate in a true multilevel security environment, further configuration and often additional software modules are necessary to enable a BLP- or Biba-compliant set of MAC policies. Whereas MAC-based systems have traditionally been employed in government environments, modern enterprise architectures have a growing need for enforced access control regarding confidentiality or integrity. For example, data leakage protection systems or auditing systems will benefit from certain types of centralized MAC policies. FreeBSD jails are commonly used to isolate subsystems or applications, such as in the popular FreeNAS distribution. Consumer-facing architectures are also increasingly adopting additional application isolation through MAC: iOS applications and many commercial types of MacOS software leverage the sandbox facility to limit process activity.
- ACHIEVING UNIX SECURITY Achieving a high level of system security for a UNIX system is a complex process that involves technical, operational, and managerial aspects of system operations. The subsequent list is a cursory overview of some of the most important aspects of securing a UNIX system. To achieve a level of OS security suitable for operating Internet-facing systems or in a mission-critical environment, additional configuration steps should be taken. These need to include all vendor-suggested configuration and standard maintenance procedures, but they should include additional measures. For example, the Center for Internet Security7 (CIS) publishes "Security Benchmarks" for most major OSs and major software packages. These are freely available from the CIS in the form of PDF documents and provide detailed secure configuration options for most OS subsystems. To its participating members, the CIS also makes a Security Content Automation Protocol (SCAP)-compliant configuration scanner and the associated SCAP8 files available. Other sources for configuration profiles can be obtained from the US Defense Information Systems Agency (DISA) in the form of SCAP-compliant Security Technical Implementation Guide (STIG)9 files.
System Patching Before anything else, it is vitally important to emphasize the need to keep UNIX systems up to date. No OS or other program can be considered safe without up-to-date patches; this point cannot be stressed enough. Having a system with the latest security patches is the first and most often the best line of defense against intruders and other cyber security threats.
All major UNIX systems have a patching mechanism to bring the system up to date. Depending on the vendor and the mechanism used, it is possible to "back out" of patches. For example, on Solaris it is usually possible to remove a patch through the patchrm(1M) command. On Debian-based systems this is not as easy, because a patch replaces the software package to be updated with a new version. Undoing this is possible only by installing the earlier package.
Locking Down the System In general, all system services and facilities that are not needed for regular operation should be disabled or even uninstalled. Because any software package increases the attack surface of the system, removing unnecessary soft- ware ensures a better security posture.
Minimizing User Privileges User accounts that have far-reaching access rights within a system have the ability to affect or damage a large number of resources, potentially including system management or system service resources. As such, user access rights should be minimized by default, in line with the security principle of "least privilege." For example, unless interactive access to the system is absolutely required, users should not be permitted to log in.
Detecting Intrusions With Audits and Logs By default, most UNIX systems log kernel messages and important system events from core services. The most common logging tool is the syslog facility, which is controlled from the /etc/syslog.conf ﬁle.
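A hedged sketch of what such a configuration can contain (the selectors and file paths are illustrative and vary by distribution; newer systems often run rsyslog or syslog-ng, which accept a compatible syntax):

```
# /etc/syslog.conf (illustrative fragment)
# facility.priority          destination
auth,authpriv.*              /var/log/auth.log    # login and su/sudo events
kern.*                       /var/log/kern.log    # kernel messages
*.emerg                      *                    # broadcast emergencies to all users
*.info;auth,authpriv.none    /var/log/messages    # everything else of interest
```

Prefixing a destination with @hostname forwards the events to a central log server, which makes it considerably harder for an intruder to erase local traces.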
- PROTECTING USER ACCOUNTS AND STRENGTHENING AUTHENTICATION In general, a clear distinction must be made between users obtaining a command shell for a UNIX system ("interactive users") and consumers of UNIX network services ("noninteractive users"). In most instances, the former should be limited to administrators who need to configure and monitor the system, especially because interactive access is almost always a necessary first step to obtaining administrative access.10
Establishing Secure Account Use For any interactive session, UNIX systems require the user to log into the system. To do so, the user must present a valid credential that identifies him (he must authenticate to the system). The type of credentials a UNIX system uses depends on the capabilities of the OS software itself and on the configuration set forth by the systems administrator. The most traditional user credential is a username and a text password, but there are many other ways to authenticate to the OS, including Kerberos, SSH-based public and private keys, and X.509 security certificates.
The UNIX Login Process Depending on the desired authentication mechanism (Fig. 11.5 shows some commonly used authentication systems), the user will have to use different access protocols or processes. For example, console or directly attached terminal sessions usually support only password credentials or smart card logins, whereas a secure shell connection supports only Rivest-Shamir-Adleman (RSA) or digital signature algorithm (DSA)-based cryptographic tokens over the Secure Shell (SSH) protocol. The login process is a system daemon that is responsible for coordinating authentication and process setup for interactive users. To do this, the login process does the following:
1. Draw or display the login screen.
2. Collect the credential.
3. Present the user credential to any of the configured user databases [typically these can be files, NIS, Kerberos servers, or Lightweight Directory Access Protocol (LDAP) directories] for authentication.
4. Create a process with the user's default command-line shell, with the home directory as the working directory.
5. Execute system-wide, user, and shell-specific startup scripts.
The commonly available X11 windowing system does not use the text-oriented login process but instead provides its own facility to perform roughly the same kind of login sequence. Access to interactive sessions using the SSH protocol follows a similar general pattern, but the authentication is significantly different from the traditional login process.
Controlling Account Access Simple conﬁguration ﬁles were the ﬁrst method available to store and manage user account data. Over the course of years many other user databases have been implemented. We examine these here.
Local Files Originally, UNIX supported only a simple password file for storing account information. The username and the information required for the login process (UID, GID, shell, home directory, password hashes, and General Electric Comprehensive Operating System (GECOS) information) are stored in this file, which is typically at /etc/passwd. This approach is highly insecure because this file needs to be readable by all for a number of different services, which thus exposes the password hashes to potential attackers. In fact, a simple dictionary or even brute-force attack can reveal simple or even more complex passwords. To protect against an attack like this, most UNIX variants use a separate file to store the password hashes (/etc/shadow) that is readable and writable only by the system.
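The colon-separated layout of /etc/passwd can be inspected directly from the shell; a short sketch:

```shell
# Each /etc/passwd line: name:password:UID:GID:GECOS:home:shell.
# On shadowed systems the password field holds only a placeholder ('x');
# the real hashes live in /etc/shadow, readable by the system alone.
head -n 1 /etc/passwd

# List every account with UID 0: each one is a full superuser and
# should be audited. Normally only 'root' appears.
awk -F: '$3 == 0 { print $1 }' /etc/passwd
```

The second command is a simple but useful audit: any unexpected UID 0 account is a strong indicator of misconfiguration or compromise.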
Network Information System The Network Information System (NIS) was introduced in the 1980s to simplify the administration of small groups of computers. Originally, Sun Microsystems called this service Yellow Pages, but the courts decided that this name constituted a trademark infringement on the British Telecom Yellow Pages. However, most commands that are used to administer the NIS still start with the yp prefix (such as ypbind and ypcat). Systems within the NIS are said to belong to an NIS domain. Although there is no correlation between the NIS domain and the Domain Name System (DNS) domain of the system, it is common to use DNS-style domain names to name NIS domains. For example, a system with the DNS name system1.sales.example.com might be a member of the NIS domain nis.sales.Example.COM. Note that NIS domains (unlike DNS domains) are case sensitive.
The NIS uses a simple master-slave server system: The master NIS server holds all authoritative data and uses an Open Network Computing Remote Procedure Call (ONC RPC)-based protocol to communicate with the slave servers and clients. Slave servers cannot easily be upgraded to a master server, so careful planning of the infrastructure is highly recommended. Client systems are bound to one NIS server (master or slave) during runtime. The addresses for the NIS master and the slaves must be provided when joining a system to the NIS domain. Clients (and servers) can only ever be members of one NIS domain. To use the NIS user database (and other NIS resources, such as automount maps, netgroups, and host tables) after the system is bound, use the name service configuration file (/etc/nsswitch.conf), as shown in Fig. 11.6.
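The corresponding /etc/nsswitch.conf entries might resemble the following sketch (the source names and their order are site-specific choices, not defaults):

```
# /etc/nsswitch.conf (illustrative fragment)
# Consult local files first, then fall back to the NIS maps.
passwd:    files nis
group:     files nis
shadow:    files nis
hosts:     files dns nis
netgroup:  nis
automount: files nis
```

Listing `files` before `nis` ensures that local accounts such as root remain usable even when the NIS servers are unreachable.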
Using Pluggable Authentication Modules to Modify Authentication These user databases can easily be configured for use on a given system through the /etc/nsswitch.conf file. However, in more complex situations, the administrator might want to fine-tune the types of acceptable authentication methods, such as Kerberos, or even configure multifactor authentication. Traditionally, the pluggable authentication module (PAM) system is configured through the /etc/pam.conf file, but more modern implementations use a directory structure, similar to the System V init scripts. For these systems the administrator needs to modify the configuration files in the /etc/pam.d/ directory. Using the system-auth PAM configuration, administrators can also require users to create and maintain complex passwords, including settings for a specific length, a minimal number of numeric or nonletter characters, etc. Fig. 11.7 illustrates a typical system-auth PAM configuration.
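What such password rules can look like is sketched below (module names, argument spellings, and the file name itself, system-auth on Red Hat-style systems versus common-password on Debian-style systems, differ between implementations; pam_pwquality is one commonly used module):

```
# /etc/pam.d/system-auth (illustrative fragment)
# Require passwords of at least 12 characters with at least
# one digit (dcredit=-1) and one non-alphanumeric character (ocredit=-1).
password  requisite  pam_pwquality.so retry=3 minlen=12 dcredit=-1 ocredit=-1
# Store accepted passwords as salted SHA-512 hashes in the shadow file.
password  sufficient pam_unix.so sha512 shadow use_authtok
```

Because PAM stacks are evaluated in order, the quality check runs before pam_unix ever records the new password.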
Noninteractive Access The security configuration of noninteractive services can vary significantly. In particular, popular network services, such as LDAP, Hypertext Transfer Protocol (HTTP), or Windows file shares, can use a wide variety of authentication and authorization mechanisms that do not even need to be provided by the OS. For example, an Apache Web server or a MySQL database server might use its own user database without relying on any OS services such as passwd files or LDAP directory authentication. Monitoring how noninteractive authentication and authorization are performed is critically important because most users of UNIX systems will use them in only noninteractive ways. To ensure the most comprehensive control over the system, it is highly recommended to follow the suggestions in Sections 7 and 8 of this chapter to minimize the attack surface and verify that the system makes only a clearly defined set of services available on the network.
Other Network Authentication Mechanisms In 1983, BSD introduced the rlogin service. UNIX administrators have been using RSH, RCP, and other tools from this package for a long time; they are easy to use and configure and provide simple access across a small network of computers. The login was facilitated through a simple trust model: Any user could create a .rhosts file in her home directory and specify foreign hosts and users from which to accept logins without proper credential checking. Over the rlogin protocol (TCP 513), the username of the rlogin client would be transmitted to the host system, and in lieu of an
authentication, the rshd daemon would simply verify the preconﬁgured values. To prevent access from untrusted hosts, the administrator could use the /etc/hosts.equiv ﬁle to allow or deny individual hosts or groups of hosts (the latter through the use of NIS netgroups).
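The format of these trust files is simple; an illustrative sketch (all host and user names are hypothetical):

```
# /etc/hosts.equiv - system-wide trust for the r-services
trusted1.example.com             # all users on this host are trusted
trusted2.example.com  alice      # only alice, when coming from trusted2
+@adminhosts                     # trust every member of this NIS netgroup
-untrusted.example.com           # explicitly deny this host
```

A user-level ~/.rhosts file follows the same host/user layout, which is exactly why the mechanism is so dangerous: a single writable dotfile grants unauthenticated logins.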
Risks of Trusted Hosts and Networks Because no authentication takes place, this trust mechanism should not be used. Not only does this system rely entirely on the correct functioning of the hostname resolution system, there is no way to determine whether a host was actually replaced.11 Also, although rlogin-based trust systems might work for small deployments, they become extremely hard to set up and operate with large numbers of machines.
Replacing Telnet, rlogin, and File Transfer Protocol Servers and Clients With Secure Shell The most sensible alternative to traditional interactive session protocols such as Telnet is the SSH system. It is popular on UNIX systems, and pretty much all versions ship with a version of SSH. Where SSH is not available, the open-source package OpenSSH can easily be used instead.12
SSH combines the ease-of-use features of the rlogin tools with a strong cryptographic authentication system. On the one hand, it is fairly easy for users to enable access from other systems; on the other hand, the SSH protocol uses strong cryptography to:
- authenticate the connection: that is, establish the authenticity of the user;
- protect the privacy of the connection through encryption;
- guarantee the integrity of the channel through signatures.
This is done using either the RSA or DSA security algorithm, both of which are available for the SSH v2 protocol.13 The cipher (Fig. 11.8) used for encryption can be explicitly selected. It is important to review the cipher suites used for SSH access periodically and update them to the latest cryptographic algorithms. The user must first create a public/private key pair through the ssh-keygen(1) tool. The output of the key generator is placed in the .ssh subdirectory of the user's home directory. This output consists of a private key file called id_dsa or id_rsa. This file must be owned by the user and be readable only by the user. In addition, a file containing the public key is created, named in the same way, with the extension .pub appended. The public key is then appended to the authorized_keys file in the .ssh subdirectory of the user's home directory on the target system.
Once the public and private keys are in place and the SSH daemon is enabled on the host system, all clients that implement the SSH protocol can create connections. There are four common applications of SSH:
- An interactive session is the replacement for Telnet and rlogin. Using the ssh(1) command line, the sshd daemon creates a new shell and transfers control to the user.
- For a remotely executed script or command, ssh(1) allows a single command with arguments to be passed. This way, a single remote command (such as a backup script) can be executed on the remote system, as long as this command is in the default path for the user.
- An SSH-enabled file transfer program can be used to replace the standard FTP or FTP over Secure Sockets Layer (SSL) protocol.
- Finally, the SSH protocol is able to tunnel arbitrary protocols. This means that any client can use the privacy and integrity protection offered by SSH. In particular, the X-Window system protocol can tunnel through an existing SSH connection by using the -X command-line switch.
SSH can also be configured to leverage PKCS#11-encoded security certificates: Instead of relying on configuring the appropriate public keys for each user, the SSH daemon can be configured to trust a certificate authority's signature.
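For the remote-command use case, OpenSSH's authorized_keys options can additionally restrict a key to a single command. A minimal sketch, with an illustrative script path and a shortened key:

```
# ~/.ssh/authorized_keys on the target system (one entry per line).
# command= forces this key to run only the named program regardless of
# what the client requests; the no-* options disable tunneling extras.
# Script path and key material below are illustrative.
command="/usr/local/bin/backup.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3Nza...rest-of-key user@client
```

This way, a key used by an automated backup job cannot be repurposed for an interactive shell if it is ever stolen.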
Other Authentication Options: Example of One-Time Password There are any number of other network-based or local authentication systems, including Kerberos v5 and pure LDAPv3-based authentication, but it goes beyond the scope of this chapter to discuss all of them. A slightly arcane but interesting authentication system, sometimes used in environments with a high risk of replay attacks (such as public networks), is the use of one-time passwords (OTPs). An implementation commonly found in BSD-based systems, One Time Passwords in Everything (OPIE), is also available on Windows, MacOS, and most Linux distributions. The basic idea behind OPIE is to create an initial set of single-use passwords and enable their use through insecure channels. Once the OS has created the user's OTPs through OPIE, they can be used each time the user logs in. Obviously, it would be inconvenient and insecure to write down, say, 500 OTPs, so the OPIE system implements two components: a server-side OTP management system and a client-side on-demand generation facility. Once the OPIE database has been initialized, it uses a secret password, a seed (consisting of two letters and five numbers), and an iteration count as input to OTP generation. Upon interactive login (e.g., through Telnet or SSH), the user is presented with an OTP challenge consisting of the iteration count and the seed. The user then needs to use a client-side tool (opiekey) to create the OTP and enter it for authentication. Because opiekey requires the use of the secret generation password, the client must be trusted. Fig. 11.9 shows an example session.
- LIMITING SUPERUSER PRIVILEGES The superuser has almost unlimited power on a UNIX system, which can be a significant problem. On systems that implement mandatory access controls, the superuser account can be configured so that it cannot affect user data, but the problem of overly powerful root accounts remains for standard, DAC-only systems. From an organizational and managerial perspective, access to privileged functions on a UNIX OS should be tightly controlled. For example, operators who have access to privileged functions on a UNIX system not only should undergo special training but should also be investigated regarding their personal background and trustworthiness. Finally, it may be advisable to enforce a policy according to which operators of critical systems can access privileged functions only with at least two operators present (e.g., through multifactor authentication technologies). There are a number of technical ways to limit access for the root user. Let us look at a few.
Configuring Secure Terminals Most UNIX systems allow us to restrict root logins to special terminals, typically the system console. This approach is effective, especially if the console or allowed terminals are under strict physical access control. The obvious downside is that remote access to the system is limited: With this approach, root access cannot be configured over any Transmission Control Protocol/Internet Protocol (TCP/IP)-based connection, thus requiring a direct connection such as a directly attached terminal or a modem. Configuration differs among the various UNIX systems; Fig. 11.10 compares Solaris and Debian.
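The two configuration styles that Fig. 11.10 compares can be sketched as follows (file contents abbreviated; entries shown are the conventional ones, not an exhaustive listing):

```
# Solaris: /etc/default/login
# Root may log in only on the device named here; remote root logins
# are refused, forcing administrators through su(1) or sudo(1).
CONSOLE=/dev/console

# Traditional Debian/Linux: /etc/securetty
# Lists the terminals on which root logins are accepted; keeping only
# console entries blocks root logins on all other terminals.
console
tty1
```

Note that /etc/securetty governs only local terminal logins; remote root access over SSH is controlled separately via the PermitRootLogin directive in sshd_config.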
Gaining Root Privileges With Su The su(1) utility allows the identity of an interactive session to be changed. This effectively mediates the issues that come with restricting root access to secure terminals: Although only normal users can obtain access to the machine through the network (ideally limited to access protocols that protect the privacy of the communication, such as SSH), they can then change their interactive session to a superuser session.
Using Groups Instead of Root If users should be limited to executing certain commands with superuser privileges, it is possible and common to create special groups of users. For these groups, we can set the execution bit on programs (while disabling execution for all others) and the SetID bit for the owner, in this case the superuser. Therefore, only users of such a special group can execute the given utility with superuser privileges.
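The permission bits involved can be sketched as follows. A real deployment would require root to chown the binary to root and an administrative group; here we only demonstrate the mode bits on a scratch file we own ourselves.

```shell
# Sketch: permission bits for a group-restricted SetUID utility.
TOOLDIR=$(mktemp -d)
touch "$TOOLDIR/admin_tool"

# 4750 = SetUID bit (4), owner rwx (7), group r-x (5), others none (0):
# members of the file's group may execute it with the owner's UID;
# everyone else is denied even read access.
chmod 4750 "$TOOLDIR/admin_tool"

ls -l "$TOOLDIR/admin_tool"
```

In the ls output, the mode appears as -rwsr-x---: the "s" in the owner's execute position marks the SetUID bit, and the empty "others" triplet enforces the group restriction.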
Using the sudo(1) Mechanism The sudo(1) mechanism is far more flexible and easier to manage than the group-based approach to enabling privileged execution. Originally an open-source program, sudo(1) is available for most UNIX distributions. The detailed configuration is complex, and the manual page is informative. From a process perspective, sudo(1) allows the execution of specific commands (including command-line shells) under a different UID or GID. Although the implementations for various OSs may vary slightly, sudo(1) basically ensures that the command to be executed is created in an appropriate execution environment. Depending on the OS and compile-time options, sudo(1) may use the PAM framework, stay alive until the command finishes, or fork the child process. Configuration for sudo(1) is typically performed in the /etc/sudoers file, which supports a number of different options:
- The user may be allowed to execute certain commands as a different user (including root) without authentication.
- Execution of specific commands, all commands, or shells may be permitted.
- Detailed logging and mail notifications can be set.
- Environment variables and other settings are taken into account.
Fig. 11.11 contains a sample sudoers file from an Ubuntu distribution. Note that this file must be edited with the visudo(8) command to ensure that the respective privileges are set correctly.
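A minimal sketch of sudoers directives of the kinds listed above (group and script names are illustrative; always edit with visudo(8)):

```
# /etc/sudoers sketch -- names are illustrative, edit only with visudo(8).

# Sanitize the environment and mail the admin on failed attempts:
Defaults        env_reset, mail_badpass

# root may run anything:
root            ALL=(ALL:ALL) ALL

# Members of group admin may run any command after authenticating:
%admin          ALL=(ALL) ALL

# Members of group backup may run exactly one command, without a password:
%backup         ALL=(root) NOPASSWD: /usr/local/bin/backup.sh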
- SECURING LOCAL AND NETWORK FILE SYSTEMS For production systems, there is an effective way to prevent the modification of system-critical resources by unauthorized users or malicious software. Critical portions of the file systems (such as the locations of binary files, system libraries, and some configuration files) do not necessarily change often.
Directory Structure and Partitioning for Security In fact, any system-wide binary code should probably be modiﬁed only by systems administrators. In these cases, it is effective to partition the ﬁle system properly.
Employing Read-Only Partitions The reason for partitioning the file system properly (Fig. 11.12) is so that only frequently changing files (such as user data, log files, and the like) are hosted on writable file systems. All other storage can then be mounted on read-only partitions.
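Such a partitioning scheme can be sketched in /etc/fstab as follows (device names and mount points are illustrative):

```
# /etc/fstab sketch: static system partitions read-only, changing data writable.
# <device>   <mount>  <type>  <options>        <dump> <pass>
/dev/sda2    /usr     ext4    ro,nodev          0      2   # binaries, libraries: read-only
/dev/sda3    /var     ext4    rw,nosuid,nodev   0      2   # logs, spools: writable, no SetUID
/dev/sda4    /home    ext4    rw,nosuid,nodev   0      2   # user data: writable, no SetUID
```

For software updates, the read-only partitions are temporarily remounted writable [e.g., mount -o remount,rw /usr] and then returned to read-only afterward.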
Finding Special Files To prevent inadvertent or malicious access to critical data, it is vitally important to verify the correct ownership and permission set for all critical ﬁles in the ﬁle system.
Ownership and Access Permissions The UNIX ﬁnd(1) command is an effective way to locate ﬁles with certain characteristics. In the following, a number of sample command-line options for this utility are given to locate ﬁles.
Locate SetID Files Because executables with the SetID bit set are often used to allow the execution of a program with superuser privileges, it is vitally important to monitor these files on a regular basis. Another critical permission set is that of world-writable files; there should be no system-critical files in this list, and users should be aware of any world-writable files in their home directories (Fig. 11.13). Finally, files and directories that are not owned by current users can be found by the code shown in Fig. 11.14:

$ find / -nouser

For groups, use -nogroup instead.
Locate Suspicious Files and Directories Malicious software is sometimes stored in nonstandard directories, such as subdirectories named ".", that will not immediately be noticed. Administrators should pay special attention to such files and verify whether their content is part of a legitimate software package. In addition, appropriate end-point monitoring tools (including file integrity monitoring, signature-based antimalware tools, etc.) should be deployed on sensitive or critical systems.
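The find(1) searches discussed above can be sketched as follows, run here against a scratch directory so that the expected hits are known in advance; on a real system the search root would be /.

```shell
# Demonstrate the find(1) searches from this section in a scratch directory.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/setuid_prog" "$SCRATCH/world_writable" "$SCRATCH/normal"
chmod 4755 "$SCRATCH/setuid_prog"     # SetUID bit set
chmod 666  "$SCRATCH/world_writable"  # writable by everyone
chmod 644  "$SCRATCH/normal"

# SetUID or SetGID executables (on a real system: the root / instead):
find "$SCRATCH" -type f \( -perm -4000 -o -perm -2000 \)

# World-writable files:
find "$SCRATCH" -type f -perm -0002

# Files without a valid owner (empty here, since we own everything):
find "$SCRATCH" -nouser
```

On a production system these commands should be run periodically and their output compared against a known-good baseline.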
Encrypting File Systems Setting up encrypting file systems used to be a complex process, and the resulting on-demand decryption during runtime was processor- and disk-intensive. As such, encrypting file systems were usually used only for fairly sensitive environments in which the loss of mobile devices could result in significant damage to the system owner. However, significant improvements in processing capabilities, including high-performing multicore processors and the easy availability of solid-state disk drives, make the use of encrypting file systems achievable for many systems that do not require extreme throughput. In general, it is advisable to perform a general risk assessment before deciding on an encryption strategy. It is important to remember that data-at-rest encryption provides significant protection against data exfiltration when the drive is in the adversary's physical possession. The first decision point should be the likely exposure of the system to potential disk drive theft. For example, a system that is located within a private suite in a secure data center with limited physical access, security guards, and strong access protocols has a significantly lower likelihood of drive theft than does a laptop or other mobile device. The second major evaluation point should be the sensitivity of the data on the system: A simple test or demo system with limited data is less critical than a mobile point-of-sale device or a laptop used to access and manage health records. Finally, performance, scalability, and commercial constraints need to be put into perspective as well: Whereas it may be technically possible to encrypt the file systems of a high-performance enterprise resource planning database, doing so will likely be cost prohibitive. For UNIX systems, encrypting file systems can typically be injected as kernel drivers into the system call stack. This enables a seamless environment with little or no interaction required from regular users. Interaction with the kernel's standard device mapper target, dm-crypt, is through cryptsetup(8), which can be used to create encrypted partitions on any block device (such as a hard drive partition), but also on logical volumes. Most Linux systems offer the possibility of setting up encrypted partitions during the initial installation process to ensure that the root file system can be encrypted. The installation program also uses cryptsetup(8) underneath to create and manage the encrypted file systems. Note that if you encrypt the root file system, the OS will boot from a separate partition to obtain the cryptsetup(8) runtime and then prompt for a decryption password during startup. In addition to the root file system, the user may want to encrypt the swap space, especially when expecting to pause (or "sleep") a mobile device. Other OSs such as MacOS have built-in support for encrypted file systems as well. For MacOS, enabling FileVault 2 will automatically encrypt the root file system with a strong symmetric key.
This symmetric key is then stored encrypted in the boot partition, and users that have been conﬁgured with permission to boot the system can decrypt the disk key with their user password.
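On Linux, the wiring for a cryptsetup(8)-managed data partition can be sketched as follows; device names, mapping names, and mount points are illustrative, and the one-time formatting commands (which require root) are shown as comments.

```
# One-time setup, as root (/dev/sdb1 is illustrative):
#   cryptsetup luksFormat /dev/sdb1
#   cryptsetup open /dev/sdb1 cryptdata
#   mkfs.ext4 /dev/mapper/cryptdata

# /etc/crypttab -- map the encrypted partition at boot (prompts for a passphrase):
# <name>     <device>    <keyfile>      <options>
cryptdata    /dev/sdb1   none           luks

# Swap can use a fresh random key each boot, since its contents are disposable:
cryptswap    /dev/sdb2   /dev/urandom   cipher=aes-xts-plain64,size=256,swap

# /etc/fstab -- mount the mapped, decrypted devices:
/dev/mapper/cryptdata  /srv/data  ext4  defaults  0 2
/dev/mapper/cryptswap  none       swap  sw        0 0
```

The luksFormat step destroys existing data on the device, so it belongs to initial provisioning only; day-to-day operation is handled entirely by the crypttab and fstab entries.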
- NETWORK CONFIGURATION Because many UNIX systems are used as network servers, most users will never log into these systems interactively. Consequently, the most significant threat sources for UNIX-based systems are defective or badly configured network services. However, attacks on such services are often used only to gain initial interactive access to the system. Once an attacker can access a command-line shell, other layers of security must be in place to prevent an elevation of privileges (superuser access).
Basic Network Setup UNIX user space processes access networks by calling a variety of functions from the system libraries, namely the socket() system call and related functions. Whereas other network protocols such as DECnet or IPX may still be supported, the TCP/IP family of protocols plays by far the most important role in today's networks. As such, we will focus solely on these protocols. A number of files are relevant for configuring access to networks:
- /etc/hostname (and sometimes also /etc/nodename) sets the name under which the system identifies itself. This name is also often used to determine the system's own IP address through hostname resolution.
- /etc/protocols defines the available list of protocols such as IP, TCP, Internet Control Message Protocol, and User Datagram Protocol (UDP).
- The /etc/hosts and /etc/networks files define which IP hosts and networks are locally known to the system. They typically include the local host definition (which is always 127.0.0.1 for IPv4 networks) and the loopback network (defined as 127.0.0.0/8), respectively.
- /etc/nsswitch.conf is available on many UNIX systems and allows fine-grained configuration of name resolution for a number of network and other resources, including UIDs and GIDs. Typical settings include purely local resolution (i.e., through the files in the /etc directory), resolution through NIS, host resolution through DNS, and user and group resolution through LDAP.
- /etc/resolv.conf is the main configuration file for the DNS resolver libraries used in most UNIX systems. It points to the IP addresses of the default DNS name servers and may include the local domain name and any other search domains.
- /etc/services (and/or /etc/protocols) contains a list of well-known services and the port numbers and protocol types to which they are bound. Some system commands [such as netstat(1)] use this database to resolve ports and protocols into user-friendly names.
Depending on the UNIX flavor and version, there are many other network configuration files that apply to the base OS.
In addition, many other services that are commonly used on UNIX systems, such as HTTP servers, application servers, and databases, have their own configuration files that need to be configured and monitored in deployed systems.
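A few of the files listed above can be sketched as follows; all addresses and domain names are illustrative (192.0.2.0/24 is the reserved documentation range).

```
# /etc/resolv.conf -- DNS resolver configuration:
nameserver 192.0.2.53
nameserver 192.0.2.54
search example.com

# /etc/nsswitch.conf -- resolution order per database (excerpt):
passwd:   files ldap      # users: local files first, then LDAP
group:    files ldap
hosts:    files dns       # hostnames: /etc/hosts first, then DNS

# /etc/hosts -- local host definitions:
127.0.0.1   localhost
192.0.2.10  web01.example.com  web01
```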
Detecting and Disabling Standard UNIX Services To protect systems against outside attacks and reduce the overall attack surface, it is highly recommended to disable any service not needed to provide the intended functionality. To do this, a simple process can be followed that will likely turn off most system services that are not needed:
1. Examine the startup scripts for your system. Startup procedures have changed significantly for UNIX systems over time. Early systems used /etc/inittab to determine runlevels and startup scripts. UNIX System V introduced the /etc/init.d scripts and the symbolic links in the /etc/rc*.d/ directories. Most current UNIX systems either still use this technology or implement an interface to start and stop services (such as Solaris). Debian-based distributions have a System V backward compatibility facility. BSD-based systems typically use an /etc/rc.conf- and /etc/rc.d-based service management interface. Administrators should determine their startup system and disable any services and features not required for the function of that system. Ideally, facilities and services not needed should be uninstalled, to minimize the potential attack surface for external attacks as well as for privilege escalation attacks that run potentially harmful binaries.
2. In addition, administrators can examine the processes that are currently running on a given system [e.g., by running the ps(1) command]. Processes that cannot be traced to a particular software package or functionality should be killed and their file images ideally uninstalled or deleted.
3. The netstat(1) command can be used to display currently open network sockets, specifically for TCP and UDP connections. By default, netstat(1) will use the /etc/services and /etc/protocols databases to map numeric values to well-known services. Administrators should verify that only those ports are open that are expected to be used by the software installed on the system. In addition, the lsof(1) command with the -i parameter (where implemented) provides a mapping between listening ports or established connections and the associated processes (Fig. 11.15).
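Step 3 can be sketched as follows. Many current Linux systems no longer install netstat(1) by default, so this sketch prefers its iproute2 successor ss(8) and falls back to netstat(1); the lsof(1) check is guarded since lsof may not be installed everywhere.

```shell
# Sketch: list listening TCP sockets, preferring ss(8) over netstat(1).
if command -v ss >/dev/null 2>&1; then
    LISTENERS=$(ss -tln)       # -t TCP, -l listening, -n numeric ports
else
    LISTENERS=$(netstat -tln)
fi
echo "$LISTENERS"

# Map sockets back to their processes where lsof(1) is available:
if command -v lsof >/dev/null 2>&1; then
    lsof -i -P -n | head -20   # -i network files, -P/-n numeric output
fi
```

Each listening port in the output should be attributable to an installed, intended service; anything unexplained warrants investigation.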
Host-Based Firewall One of the best ways to limit the attack surface for external attackers is to close down all network sockets that are not
being actively used by network clients. This is true for systems attached directly to the Internet as well as systems on private networks. The IP stacks of most UNIX systems can be conﬁgured to accept only speciﬁc protocols (such as TCP) and connections on speciﬁc ports (such as port 80). Fig. 11.16 shows how to limit ssh(1) access to systems on a speciﬁc IP subnet. Depending on the network stack, this can be achieved with a setting in the System Preferences for MacOS, or the iptables(1) command for Linux systems.
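On Linux, a rule set of the kind shown in Fig. 11.16 can be sketched in iptables-restore(8) format (e.g., /etc/iptables/rules.v4); the 192.0.2.0/24 management subnet is illustrative.

```
# Sketch: default-deny inbound policy, ssh only from a management subnet.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Keep established sessions and local traffic working:
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Allow ssh (TCP port 22) only from the management subnet:
-A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
COMMIT
```

The default-deny INPUT policy means any service not explicitly opened here is unreachable from the network, which complements the service-disabling steps described earlier.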
Restricting Remote Administrative Access If possible, interactive access to UNIX-based systems should be limited to dedicated administrative terminals. This may be achieved by limiting root access to directly attached consoles and terminals, or by creating dedicated private networks for the express purpose of allowing remote access through ssh(1), SNMP, or Web administration utilities.
Consoles and Terminals on Restricted Networks As described previously, root access can be limited to specific devices such as terminals or consoles. If the terminals or consoles are provided through TCP/IP-capable terminal concentrators or keyboard-video-mouse (KVM) switches, interactive network access can be achieved by connecting these console devices through restricted networks to dedicated administrative workstations.
Dedicated Administrative Networks Similarly, interactive access can be restricted to a small number of workstations and access points through the following technologies:
- Dedicated physical interface or virtual local area network (VLAN) segmentation: If all interactive or administrative access is limited to separate networks, preferably disconnected from operational networks, the potential attack surface is significantly reduced.
- Logical interface: If no physical or VLAN infrastructure is available, UNIX networking stacks typically allow the assignment of additional IP addresses to a single physical networking interface. Although more susceptible to lower-level attacks, this approach may be sufficient for the effective separation of networks.
- Routing and firewall table design: As a fairly high-level approach, administrators may limit access to specific services from preconfigured IP addresses or networks through careful design of the host-based firewall and the routing tables of the IP stack.
- Yale University has an old but useful UNIX networking checklist at http://security.yale.edu/network/unix.html that describes a number of general security settings for UNIX systems in general, and Solaris specifically. A similar, older checklist is available from Carnegie Mellon University's Software Engineering Institute Computer Emergency Readiness Team (CERT) at https://www.cert.org/tech_tips/unix_configuration_guidelines.html.
- Special topics in system administration that also address security issues such as auditing, configuration management, and recovery can be found on the Usenix website at https://www.usenix.org/lisa/books.
- Apple provides a detailed document on locking down MacOS X 10.6 Server: http://images.apple.com/support/security/guides/docs/SnowLeopard_Server_Security_Config_v10.6.pdf.
- The US Federal Government operates US-CERT at https://www.us-cert.gov, targeted at technical and nontechnical users in both the government and the private sector. In addition to general information, US-CERT provides information from the National Vulnerability Database, security bulletins, and current threat information.
Finally, let us briefly look at how to improve the security of Linux and UNIX systems. The following part of the chapter describes how to modify Linux and UNIX systems and fix their potential security weaknesses.
- IMPROVING THE SECURITY OF LINUX AND UNIX SYSTEMS A security checklist should be structured to follow the life cycle of Linux and UNIX systems, from planning and installation to recovery and maintenance. The checklist is best applied to a system before it is connected to the network for the ﬁrst time. In addition, the checklist can be reapplied on a regular basis, to audit conformance (see checklist: “An Agenda for Action for Linux and UNIX Security Activities”).
- ADDITIONAL RESOURCES There are a large number of useful tools to assist administrators in managing UNIX systems, including tools for verifying their security.
Useful Tools The following discussion of useful tools should not be seen as exhaustive, but rather as a simple starting point.
Webmin Webmin is a useful general-purpose graphical system management interface that is available for a large number of UNIX systems. It is implemented as a Web application running on port 10000 by default. Webmin allows the management of basic UNIX functionality such as user and group management, network and printer configuration, file system management, and much more. It also comes with modules for managing commonly used services such as the OpenLDAP directory server, the BIND DNS server, a number of different mail transfer agents, databases, etc. Webmin is particularly useful for casually maintained systems that do not require tight configuration management and may expose a Web application interface. It is not recommended to use Webmin on mission-critical systems or in environments where systems are exposed to unknown external users (such as on the Internet or on large private networks). Even for systems where Webmin is an acceptable risk, it is recommended to ensure that the Web interface is protected by transport-level security (HTTP over SSL) and preferably restricted to dedicated administration networks or stations.
Nmap For testing the open ports on a given host or subnet, Nmap is an excellent tool. It allows a given IP address or IP address range to be scanned to test which TCP and UDP ports are accessible. It is flexible and can easily be extended, and it comes with a number of modules that allow the OS of an IP responder to be determined based on the fingerprint of its TCP/IP stack responses.
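A typical invocation can be sketched as follows, scanning the local host; the sketch is guarded because nmap may not be installed everywhere.

```shell
# Sketch: TCP connect scan of the local host with Nmap.
if command -v nmap >/dev/null 2>&1; then
    # -sT: TCP connect scan (no raw sockets, so no root privileges needed);
    # -p 1-1024: restrict the scan to the well-known port range.
    RESULT=$(nmap -sT -p 1-1024 127.0.0.1)
else
    RESULT="nmap not installed; typical invocations: nmap -sT <host>, nmap -sU <host>, nmap -O <host>"
fi
echo "$RESULT"
```

Comparing such scan output against the list of intentionally offered services is a quick external cross-check of the netstat/lsof audit described earlier.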
Local Configuration System Local Configuration System (LCFG) is an effective configuration management system for complex UNIX deployments. It compiles machine-specific and default configurations for all aspects of a given UNIX system into an XML file and distributes these files to the client machines. More information on LCFG can be found at http://www.lcfg.org/.
Further Information Because this chapter can provide only an introduction to fully securing UNIX-based systems, the following list of resources is recommended for a more in-depth treatment of this topic. Users are also advised to consult vendor-specific information about the secure configuration of their products. By far the most comprehensive guidance on security configuration for UNIX systems is available from the US Defense Information Systems Agency (DISA). DISA and the National Institute of Standards and Technology (NIST) create, publish, and update Security Technical Implementation Guides (STIGs) for a number of OSs at http://iase.disa.mil/stigs/os/. Beyond the general STIG for UNIX security, there are vendor-specific STIGs for Red Hat Linux, Solaris, HP-UX, and AIX. Industry best practices for secure system configurations are published and frequently updated by the Center for Internet Security (CIS) at https://cisecurity.org/. The Security Benchmarks are freely available in the form of PDF documents that provide detailed recommended settings. Members of the CIS can also download the CIS-CAT tool, a Java-based scanner that uses SCAP files created by the CIS. The SCAP files can also be leveraged by other SCAP-compliant configuration scanners.
- SUMMARY This chapter covered communications interfaces between HP-UX, Solaris, Linux, and AIX servers and the communications infrastructure (firewalls, routers, etc.). The use of Oracle in configuring and managing HP-UX, Solaris, Linux, and AIX servers to support large databases and applications was also covered. There was also a discussion of other UNIX systems such as Solaris, Linux, and AIX, as well as how to perform alternate information assurance officer duties for HP-UX, Solaris, Linux, and AIX midtier systems. This chapter also showed entry-level security professionals how to provide support for UNIX security error diagnosis, testing strategies, and resolution of problems normally found in SMC Ogden server HP-UX, AIX, Solaris, and Linux environments. In addition, the chapter showed security professionals how to implement CIS Benchmarks and DISA security requirements (STIGs) and UNIX Security Readiness Reviews. This chapter helped security professionals gain experience in installing and managing applications in UNIX/Sun/Linux/AIX environments. It also showed security professionals how to apply DISA STIGs with regard to installing, configuring, and setting up UNIX/Linux environments under mandatory security requirements. The chapter also showed security professionals how to work with full life-cycle information technology projects, as well as how to gain proficiency in the environments of J2EE, EJB, Sun Solaris, IBM WebSphere, Oracle, DB/2, Hibernate, JMS/MQ Series, Web Services, SOAP, and XML. It also offered guidance for UNIX/Solaris administrators of large-scale multiuser enterprise systems.
With regard to certification exams, this chapter helped students gain general experience (including operations experience) with large-scale computer systems and multiserver local area networks; broad knowledge of and experience with system technologies (including networking concepts, hardware, and software); and the capability of determining system, network, and application performance capabilities. It also helped students gain specialized experience in administering UNIX-based systems and Oracle configuration knowledge, along with security administration skills. Finally, let us move on to the real interactive part of this chapter: review questions/exercises, hands-on projects, case projects, and an optional team case project. The answers and/or solutions by chapter can be found in Appendix K.