Intranet security
Headline dramas such as the ones shown in the accompanying sidebar (Intranet Security as News in the Media) are embarrassing nightmares to top brass in any large corporation. These events have a lasting impact on a company's bottom line because the company's reputation and customer trust take a direct hit. Once events such as these occur, customers and current and potential investors never look at the company in the same trusting light again, regardless of remediation measures. The smart thing, then, is to avoid the limelight. The onus of preventing such embarrassing security gaffes falls squarely on the shoulders of information technology (IT) security chiefs (the chief information security officer and security officers), who are sometimes hobbled by unclear mandates from government regulators and insufficient budgets to tackle those mandates. However, federal governments across the world are not taking breaches of personal data lightly (see sidebar: TJX: Data Breach With 45 Million Data Records Stolen). In view of a massive plague of publicized data thefts over the past decade, mandates such as the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley, and the Payment Card Industry Data Security Standard (PCI-DSS) within the United States have teeth. These mandates even spell out stiff fines and personal jail sentences for chief executive officers who neglect data breach issues.
exercise; security inside the firewall was all but nonexistent. There was a feeling of implicit trust in the internal user. After all, if you hired that person and trained him for years, how could you not trust him? In the new millennium, the Internet has come of age, and so have its users. The last largely computer-agnostic generation has exited the user scene; their occupational shoes have been filled by the X and Y generations. Many of these young people have grown up with the Internet, often familiar with it since elementary school. It is common today to find young college students who started their programming interests in the fifth or sixth grade. With such a level of computer expertise among users, the game of intranet security has changed (see sidebar: Network Breach Readiness: Many Are Still Complacent). Resourceful as ever, these new users have gotten used to the idea of being hyperconnected to the Internet using mobile technology such as personal digital assistants (PDAs) and smartphones, with little regard for firewalled barriers. For a corporate intranet that uses older ideas of employing access control as the cornerstone of data security, such mobile access to the Internet at work needs careful analysis and control. The idea of building a virtual moat around your well-constructed castle (investing in a firewall and hoping to call it an intranet) is gone. Hyperconnected "knowledge workers" with laptops, PDAs, and universal serial bus (USB) keys that have whole operating systems built in have made sure of it.
Network Breach Readiness: Many Are Still Complacent

The level of readiness for breaches among IT shops across the country is still far from optimal. The Ponemon Institute, a security think tank, surveyed industry personnel and came up with some startling revelations. It is hoped that these statistics will change in the future:

- A total of 85% of industry respondents reported that they had experienced a data breach.
- Of those responding, 43% had no incident response plan in place and 82% did not consult legal counsel before responding to the incident.
- After a breach, 46% of respondents still had not implemented encryption on portable devices (laptops or PDAs) with company data stored on them.8
- “Ponemon Institute Announces Result of Survey Assessing the Business Impact of a Data Security Breach,” May 15, 2007, www.ponemon.org/ press/Ponemon_Survey_Results_Scott_and_Scott_FINAL1.pdf.
If we could reuse the familiar vehicle advertising tagline of the 1980s, we would say that the new intranet is no longer “your father’s intranet.” The intranet as just a simple place to share files and list a few policies and procedures has ceased to be. The types of changes can be summed up in the following list of features, which shows that the
intranet has become a combined portal as well as a public dashboard. Some of the features include:

- a searchable corporate personnel directory of phone numbers by department; often the list is searchable only if the exact name is known
- expanded activity guides and a corporate calendar with links for various company divisions
- several Really Simple Syndication (RSS) feeds for news according to divisions such as IT, human resources (HR), finance, accounting, and purchasing
- company blogs (weblogs) by top brass that talk about the current direction of the company in reaction to recent events, a sort of "mission statement of the month"
- a search engine for searching company information, often helped by a search appliance from Google; Microsoft also has its own search software on offer that targets corporate intranets
- one or several "wiki" repositories for company intellectual property, some of which is of a mission-critical nature; usually granular permissions are applied for access here (one example could be court documents for a legal firm with rigorous security access applied)
- a section describing company financials and other mission-critical indicators; this is often a separate Web page linked to the main intranet page
- a "live" section with IT alerts regarding specific downtimes, outages, and other critical time-sensitive company notifications; often embedded within the portal, this is displayed in "ticker-tape" fashion or similar to an RSS-type dynamic display

Of course, this list is not exhaustive; some intranets have other unique features not listed here. In any case, intranets these days do a lot more than simply list corporate phone numbers. Knowledge management systems have presented another challenge to intranet security postures.
Companies that count knowledge as a prime protected asset (virtually all companies these days) have started deploying "mashable" applications (apps) that combine social networking (such as Facebook and LinkedIn), texting, and microblogging (such as Twitter) features to encourage employees to "wikify" their knowledge and information within intranets. One of the bigger vendors in this space, Socialtext, has introduced a mashable wiki app that operates like a corporate dashboard for intranets.9,10
Socialtext has individual widgets, one of which, "Socialtext Signals," is a microblogging engine. In the corporate context, microblogging entails sending Short Message Service (SMS) messages to apprise colleagues of recent developments in the daily routine. Examples could be short messages on progress toward any major project milestone: for example, joining up major airplane assemblies or getting Food and Drug Administration testing approval for a special experimental drug. These emerging scenarios present special challenges to security personnel guarding the borders of an intranet. The border as it once existed has ceased to be. One cannot block stored knowledge from leaving the intranet when most corporate mobile users are accessing intranet wikis from anywhere using inexpensive mininotebooks that are given away with cell phone contracts.11 If we consider the impact of national and international privacy mandates on these situations, the picture is compounded further for C-level executives in multinational companies, who have to come up with responses to privacy mandates in each country in which the company does business. Privacy mandates regarding private customer data have always been more stringent in Europe than in North America, which is a consideration for doing business in Europe. It is hard enough to block entertainment-related Flash video traffic as time-wasting Internet abuse without also blocking the video of last week's corporate meeting at headquarters. Letting in traffic only on an exception basis becomes untenable or impractical because of the high level of personnel involvement needed for every ongoing security change. Simply blocking YouTube.com or Vimeo.com is not sufficient. Video, which has myriad legitimate work uses nowadays, is hosted on all sorts of content-serving (caching and streaming) sites worldwide, which makes it well-nigh impossible to block using Web filters.
The Internet Content Adaptation Protocol (ICAP), which lets Web proxies hand traffic off to dedicated content-filtering and categorization servers, is still evolving. However, ICAP does not solve the problem of the dissolving networking "periphery."12 Guarding movable and dynamic data, which may be moving in and out of the perimeter without notice, flouting every possible mandate, is a key feature of today's intranet. The dynamic nature of data has rendered the traditional confidentiality, integrity, and availability (CIA) architecture somewhat less relevant. The changing nature
of data security necessitates some specialized security considerations:

- Intranet security policies and procedures (P&Ps) are the first step toward a legal regulatory framework. The P&Ps needed for any of the security controls listed subsequently should be compliant with federal and state mandates (such as HIPAA, Sarbanes-Oxley, European Directive 95/46/EC on the protection of personal data, and PCI-DSS, among others). These P&Ps have to be signed off by top management and placed on the intranet for review by employees. There should be sufficient teeth in all procedural sections to enforce the policy, explicitly spelling out sanctions and other consequences of noncompliance, leading up to discharge.
- In fact, none of these government mandates spells out the details of implementing any security control; that is the vague nature of federal and international mandates. Interpretation of the security controls is better left to an entity such as the National Institute of Standards and Technology (NIST) in the United States or the Geneva-based International Organization for Standardization (ISO). These organizations have extensive research and publication guidance for any specific security initiative. Most of NIST's documents are offered as free downloads from its website.13 ISO security standards such as ISO/IEC 27002-27005 are also available for a nominal fee from the ISO site.

Once finalized, P&Ps need to be automated as much as possible (one example is mandatory password changes every 3 months). Automating policy compliance takes the error-prone human factor out of the equation (see sidebar: Access Control in the Era of Social Networking). Numerous software tools are available to help accomplish security policy automation.
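As one concrete example, the mandatory 3-month password rotation mentioned above lends itself to automation by a scheduled job that flags stale accounts. The sketch below is a minimal illustration in Python; the account records and field names are assumptions made for the example, not the interface of any real directory service or compliance tool.

```python
from datetime import datetime, timedelta

# Hypothetical export of account records from a directory service;
# the field names here are illustrative only.
accounts = [
    {"user": "alice", "pwd_last_set": datetime(2024, 1, 10)},
    {"user": "bob", "pwd_last_set": datetime(2024, 5, 2)},
]

# "Mandatory password changes every 3 months" expressed as a hard limit.
MAX_PASSWORD_AGE = timedelta(days=90)

def expired_accounts(accounts, now):
    """Return the users whose password age exceeds the policy maximum."""
    return [a["user"] for a in accounts
            if now - a["pwd_last_set"] > MAX_PASSWORD_AGE]

if __name__ == "__main__":
    for user in expired_accounts(accounts, datetime(2024, 6, 1)):
        print(f"{user}: password change required")
```

A real deployment would read the account list from the directory service and trigger notification or forced-change workflows rather than printing; the point is that the policy check itself runs without human involvement.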
SMARTPHONES AND TABLETS IN THE INTRANET

The proliferation of mobile devices for personal and business use has gained unprecedented momentum, reminiscent of the proliferation of personal computers (PCs) at the start of the 1980s. Back then, the rapid proliferation of PCs was rooted in the wide availability of common PC software and productivity packages such as Microsoft's Excel or Borland's products. Helping with kids' homework and spreadsheets at home was part of the wide appeal. A large part of the PC revolution was also rooted in the change in interactivity patterns. Interaction using graphical user interfaces (GUIs) and mice had made PCs widely popular compared with the Disk Operating System (DOS) character screen. The consumer PC revolution did not really take off until Windows PCs and Mac Classics brought along mice, starting in the early 1990s. It was a quantum leap for ordinary people unfamiliar with DOS commands. Today, in what some now call the post-PC era,14 the interaction between people and computers has again evolved. The finger (touch) has replaced keyboards and mice as the input device in smartphones and tablets, which invariably use a mobile-oriented operating system (OS) such as Android or iOS, as opposed to Mac OS, Linux, or Windows. Android and iOS were built from the ground up with the "touch interface" in mind. This marks a sea change.15 Within the next couple of years, most smartphones will have the computing power of a full-size PC from only 5 years earlier. These powerful smartphones and portable tablets (such as iPads and Android devices), enabled with multimedia and gaming capabilities, are converging toward becoming one and the same device. The increasing speed and functionality for the price ("bang for the buck") will only gather a
more rapid pace as user demand becomes more intense. The success of smartphones and tablet devices over traditional full-size laptops stems from two primary reasons:

1. The functionality and ease of use of "voice," "gesture," and "touch" interfaces. As opposed to mice and keyboards, the voice-enabled, touch, and gesture-based interfaces used in mobile devices offer a degree of ease unseen in traditional laptops.
2. The availability of customized apps (applications). Given the number of specialized apps found in the Apple App Store (and Android's equivalent market), these apps give the new mobile devices increased versatility compared with traditional laptops. In Apple's case, the closed ecosystem of apps (allowed into the store only after testing for security) decreases the possibility of hacking iPads using uncertified apps. In the iPhone 4S, the use of Siri as a speech-aware app only portends the increasing ease of use for this class of device.16 Using Siri, the iPhone can be issued voice commands to set appointments, read back messages, and notify people if one is going to be late, among myriad other things, all without touching a keypad. In Android version 4.0 or higher, face-recognition authentication using the onboard camera is another ease-of-use feature. There are bugs in these applications, of course, but they indisputably point to a pattern of interactivity change compared with a traditional laptop.

There are a few other trends to watch in the integration of mobile devices in the enterprise:

1. Mobile devices let today's employees stretch work far beyond traditional work hours. Because of rich interactivity and ease of use, these devices blur the boundary between work and play. Companies benefit from this employee availability at nontraditional work hours. The very concept of being at work has changed compared with even 10 years ago.
2. The iteration life cycles of mobile devices are now more rapid.
Unlike laptops, which had life cycles of almost 3 years, a new version of the iPad comes out almost every year with evolutionary changes. This makes it increasingly less feasible for IT to set standardization for mobile devices or even pay for them. IT is left in most cases with supporting these devices. However, IT can put in recommendations about which devices it is able or unable to support for feasibility reasons.
3. Because of these cost reasons, it is no longer feasible for most IT departments to dictate the brand or platform of
mobile device that employees use to access the corporate network. It is often a Bring Your Own Device (BYOD) situation. As long as specialized software can be used to partition the personally owned mobile device to store company data safely (where it cannot be breached), this approach is feasible.
4. The mobile device that seems numerically most ready for the enterprise is the same one that has been most successful commercially: the Apple iPad. Already in its third iteration in 2012 since the first one came out in 2010, the iPad 3, with its security and VPN features, seems ready to be managed by most mobile device management (MDM) packages. It has been adopted extensively by executives and sales staff at larger corporate entities.17 It is also starting to be adopted by state and local governments to expedite certain e-government initiatives.18
5. Compared with the Apple iPhone, however, Android smartphones generally have had better adoption
rates. BlackBerry adoption, however, is on the wane.19 Smartphones will need more specially designed Web pages to cope with interactivity on a smaller screen.
6. According to a major vendor in the MDM space (Good Technology), the financial services sector saw the highest level of iPad (iOS) activation, accounting for 46% in the third quarter of 2011, which tripled the activations in any other industry (Fig. 15.1).20

When it comes to mobile devices (see sidebar: The Commoditization of Mobile Devices and Impact Upon Businesses and Society) and smartphones, one can reasonably surmise that the act of balancing security versus business considerations has clearly tilted toward the latter. The age of mobile devices is here, and IT has to adapt security measures to conform to it. The common IT security concept of protecting the host may have to convert to protecting the network from itinerant hosts.

The Commoditization of Mobile Devices and Impact Upon Businesses and Society

Many increasingly visible trends portend the commoditization of mobile devices and their resulting impact on businesses:

1. The millennial generation is more familiar with mobile technology. Because of the widespread use of smartphones and iPhones as communication devices for computing use (other than simply voice), and now iPads, which have taken their place, familiarity with mobile hardware is far higher than it was for previous generations. The technological sophistication of these devices has made most of the current generation of young people far more tech savvy than previous generations, because they have grown up around this mobile communication-enabled environment.
2. Mobile devices are eating into the sales of PCs and laptops. The millennial generation is no longer content with traditional bulky PCs and laptops as computing devices. Lightweight devices with pared-down mobile OSs and battery-efficient mobile devices with 10-h lives are quickly approaching the computing power of traditional computers and are also far more portable. The demands of lowered cost, immediacy, and ease of use of mobile devices have caused traditional laptop sales to slow in favor of tablet and iPad sales.
3. Social media use on mobile devices (Facebook, Twitter, and Flickr) encourages collaboration and has given rise to use of these sites employing nothing other than mobile devices, which was not possible previously. Most smartphones (and even an increasingly common class of global positioning system-enabled point-and-shoot cameras) enable upload of images and videos to these sites directly from the device itself using Wi-Fi connections and Wi-Fi-enabled storage media (SD cards). This has engendered a mobile lifestyle for this generation, to which sites such as Facebook, Twitter, and Flickr are only too happy to cater.
Employees using social media represent new challenges to businesses because protection of business data (enforced by privacy-related federal and state regulations and mandates) has become imperative. Business data have to be separated and protected from personal use of public social media if the two have to exist on the same mobile device. Several vendors already offer products that enable this separation. In industry parlance, this area of IT is known as MDM.
4. The mobile hardware industry has matured. Margins have been pared to the bone, and mobile devices have become a buyers' market. Because of wider appeal, it is no longer just the tech savvy (opinion leaders and early adopters) who determine the success of a mobile product. It will be the wider swath of nontechie users, looking for attractive devices with a standardized set of often-used functions, who determine the success of a product. Sooner rather than later, this will force manufacturers to compete based on price rather than product innovation. This downward pressure on price will also make it cost-effective for businesses to adopt these mobile devices within their corporate IT infrastructure. Amazon's Kindle and Barnes & Noble's Nook are both nimbly designed and priced at around a $200 price point, compared with the $500 iPad. They have begun to steal some market share from full-featured tablets such as the iPad, though enterprise adoption of the cheaper devices remains to be sorted out.
5. Users expect the "cloud" to be personal. In their personal lives, users have become used to customizing their Internet digital persona, being offered limitless choices in doing so. In their use of the business "cloud," they expect the same level of customization. This customization can easily be built into the back end using the likes of Active Directory and collaboration tools such as Microsoft's SharePoint once a user authenticates through the VPN.
Customization can be built upon access permissions in a granular manner per user while tracking their access to company information assets. These accumulated access data can be used later to refine ease of use for remote users of the company portal.
In 2011, Apple shipped 172 million portable computing devices, including iPads, iPods, and iPhones. Among these were 55 million iPads. Sales of mostly Android tablets by other manufacturers are also increasing at a rapid pace. By the end of the first decade of the new millennium, it became clear that sales of iPads and tablets had made serious dents in the sale of traditional PCs, indicating a shift in consumer preference toward a rapid commoditization of computing. Fig. 15.2, from the Forrester Research consumer PC and tablet forecast, helps illustrate this phenomenon.21 Several reasons can be attributed to this shift. The millennial generation was already reinventing the idea of how business is conducted, and where. Because of increased demands on employee productivity, business had to be conducted in real time at the employee's location (home, hotel, or airport) and not just at the traditional workplace. With gas prices in the United States hovering at around $5.00/gallon (essentially a doubling of prices over the first half-decade of the millennium), traditional 9-to-5 work hours with long commutes are no longer practical. iPads and tablets therefore became de rigueur not only for field employees but also for workers at headquarters. Instead of imposing rigid 9-to-5 attendance on employees, progressive companies had to let employees be flexible in meeting sales targets or deliverables on their own deadlines, which improved employee morale and productivity. Popular devices such as the iPad, the Samsung Galaxy Android tablet, and many types of smartphones are already
capable of accessing company intranets using customized intranet apps. This ensures that access to critical company data needed for a sale or demonstration will not stand in the way of closing an important deal. All of this had already been enabled by laptops, but the touch-enabled tablet eases the process with more functional media usage and richer interactivity features. Ultimately, it will matter less what device employees use to access the company portal or where they are, because their identity will be the deciding factor in the information to which they gain access. Companies will do better to provide customized "private cloud environments" for workers, accessible from anywhere. Employees may still use PCs while at the office (see sidebar: Being Secure in the
Post-Personal Computer Era) and mobile devices while in the field conducting business, but they will increasingly demand the same degree of ease in accessing company data regardless of their location or the means used to access it. The challenge for IT will be to cater to these versatile demands without losing sight of security and protecting privacy.
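The identity-centric access model sketched in this passage can be reduced to a simple decision function: what a user may reach is determined by the roles carried by the authenticated identity, not by the device or location used for access. The role names and asset labels below are hypothetical, chosen only to illustrate the idea.

```python
# Identity-driven access sketch: the decision keys off the authenticated
# identity's roles, not the device type or network location.
ROLE_PERMISSIONS = {
    "sales": {"crm", "product_catalog"},
    "finance": {"ledger", "crm"},
}

def can_access(identity, asset, role_permissions=ROLE_PERMISSIONS):
    """Grant access if any role attached to the identity permits the asset."""
    return any(asset in role_permissions.get(role, set())
               for role in identity.get("roles", []))

user = {"name": "pat", "roles": ["sales"]}
print(can_access(user, "crm"))     # a salesperson can reach the CRM
print(can_access(user, "ledger"))  # but not the finance ledger
```

In practice, the role lookup would come from a directory service such as Active Directory after VPN authentication, and each decision would be logged to build the per-user access trail mentioned earlier.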

SECURITY CONSIDERATIONS

Many risks need to be resolved when approaching intranet security with regard to mobile devices:

1. Risk of size and portability: Mobile devices are prone to loss. An Apple staffer's "loss" of a fourth-generation iPhone to a Gizmodo staffer during a personal outing to a bar is well known. There is no denying that, because of their size, smartphones are easy theft targets in the wrong place at the wrong time. The loss of a few hundred dollars of hardware, however, is nothing compared with an invaluable client list lost and fallen into a competitor's hands. These are the nightmare scenarios that keep chief information officers (CIOs) up at night.
2. Risk of access via multiple paradigms: Mobile devices can access unsafe sites using cellular networks and download malware into storage. The malware, in turn, can bypass the company firewall to enter the company network and wreak havoc. Old paradigms of enforcing security by controlling perimeter network access are no longer feasible.
3. Social media risks: By definition, mobile devices are designed so that they can easily access social media sites, which are the new target for malware-propagating exploits. Because they are personal devices, mobile media devices are much more at risk of having exploits sent to them and being "pwned" (so to speak).
These issues can be approached and dealt with by using a solid set of technical as well as administrative controls:

1. Establish a customized corporate usage policy for mobile devices. This policy/procedure must be signed by new hires at orientation and by all employees who ask for access to the corporate VPN using mobile devices (even personal ones). Ideally this should be in the form of a contract and should be signed by the employee before a portion of the employee's device storage is partitioned for access and storage of corporate data. Normally, there should be yearly training highlighting the do's and don'ts of using mobile devices to access a corporate VPN. The first thing emphasized in this training should be how to secure company data using passwords and, if cost-effective, two-factor authentication using hardware tokens.
2. Establish a policy for reporting theft or misplacement. This policy should identify, at the least, how quickly thefts of mobile devices containing company data should be reported and how quickly a remote wipe should be initiated. The policy can optionally detail how the mobile device feature (app) enabling location of the misplaced or stolen device will be used.
3. Establish a well-tested SSL VPN for remote access. Reputable vendors with experience in mobile device VPN clients should be chosen. The quality, functionality, and adaptability of use (and proven reputation) of the VPN clients should be key in determining
the choice of vendor. The advantage of an SSL VPN over IPsec or L2TP for mobile use is well known. The SSL VPN should be capable of supporting two-factor authentication using hardware tokens. For example, Cisco's "Cisco AnyConnect Secure Mobility Client" and Juniper's "Junos Pulse App" are free app downloads available within the Apple iTunes App Store. Other VPN vendors have these apps available, and they can be tested to see how smooth and functional the access process is.
4. Establish inbound and outbound malware scanning. Inbound scanning should occur for obvious reasons, but outbound traffic should also be scanned in case the company's email servers become spam relays and get blacklisted on sites such as Lashback or forcibly blocked by external sites.
5. Establish Wi-Fi Protected Access 2 (WPA2) encryption for Wi-Fi traffic. For now, WPA2 is the best encryption available, compared with Wired Equivalent Privacy (WEP) encryption, which is dated and not recommended.
6. Establish logging metrics and granular controls. Keeping regular tabs on information asset access by users and configuring alerts on unusual activity (such as large-scale access or exceeded failed-logon thresholds) is a good way to prevent data leakage.

Mobile devices accessing enterprise intranets using VPNs are subject to the same factors as any other device remotely accessing VPNs, namely (Fig. 15.3):

1. protection of data while in transmission
2. protection of data while at rest
3. protection of the mobile device itself (in case it falls into the wrong hands)
4. app security

At a minimum, the following standards are recommended for managing tablets and smartphones with MDM appliances:

1. Protection of data while in transmission: Transmission security for mobile devices is concerned primarily with VPN security as well as Wi-Fi security. With regard to VPNs, the primary preference for most mobile devices should be for Web-based or SSL VPNs, because IPsec and L2TP VPN implementations are still buggy as of this writing on all but iOS devices (iPhones and iPads). SSL VPNs can also be implemented as clientless. Regarding Wi-Fi, the choice is simply to configure WPA2 Enterprise using 128-bit Advanced Encryption Standard (AES) encryption for mobile devices connecting via Wi-Fi. Again, MDM appliances can be used to push these policies out to the mobile devices.
2. Protection of data while at rest: The basis of protecting stored data on a mobile device is the password; the stronger the password, the harder it is to break the encryption. Some devices (including the iPad) support 256-bit AES encryption. Most recent mobile devices also support remote wipe and progressive wipe. The latter feature progressively increases the lockout duration until finally initiating an automatic remote wipe of all data on the device. These wipe features are designed to protect company data from falling into the wrong hands. All of these features can be queried and configured for mobile devices via either Exchange ActiveSync policies or configuration policies from MDM appliances.
3. Protection of the mobile device: Passwords for mobile devices have to conform to the same corporate "strong password" policy as for other wired network devices.
This means that password length and content (minimum of eight characters; alphanumeric and special characters), password rotation and expiry (no reuse of the last three passwords; rotation every 2-3 months), and password lockout (three to five attempts) have to be enforced. Complete sets of configuration profiles can be pushed to tablets, smartphones, and iPads using MDM appliances, specifying app installation privileges and YouTube and iTunes content-rating permissions, among many others.
4. App security: In both Android and iOS, significant changes have been made to bolster app security. For example, in both OSs, apps run in their own silos and cannot access other apps' or system data. Although iPhone apps are theoretically capable of accessing users' contact information
and, in some cases, their locations, Apple's signing process for every app that appears in the iTunes App Store takes care of this. On iOS devices it is possible to encrypt data using either software methods (such as AES, RC4, or 3DES) or hardware-accelerated encryption activated when a lockout occurs. In iOS, designating an app as managed can prevent its content from being uploaded to iCloud or iTunes. In this manner, MDM appliances or Exchange ActiveSync can prevent leakage of sensitive company data.

Although there are many risks in deploying mobile devices within the intranet, with careful configuration these risks can be minimized to the point where the myriad benefits outweigh them. One thing is certain: these mobile devices and the efficiency they promise are for real, and they are not going away. Empowering employees is the primary idea behind the popularity of these devices. Corporate IT will only serve its own interest by designing security that enables these devices and lets employees be more productive.

PLUGGING THE GAPS: NETWORK ACCESS CONTROL AND ACCESS CONTROL

The first priority of an information security officer in most organizations is to ensure that there is a relevant corporate policy on access controls. Simple on the surface, the subject of access control is often complicated by the variety of ways the intranet is connected to the external world. Remote users coming in through traditional or SSL (browser-based) VPNs, and control over the use of USB keys, printouts, and CD-ROMs, all require that a comprehensive end-point security solution be implemented. Past years have seen large-scale adoption of network access control (NAC) products in midlevel and larger IT shops to manage end-point security.
End-point security ensures that whoever is plugging into or accessing any hardware anywhere within the intranet has to comply with the minimum baseline corporate security policy standards. This can include add-on access credentials, but it goes far beyond access. Often these solutions ensure that traveling corporate laptops are compliant with a minimum patching level, scans, and antivirus definition levels before being allowed to connect to the intranet. NAC appliances that enforce these policies often require a NAC fat client to be installed on every PC and laptop. This rule can be enforced during logon using a logon script. The client can also be part of the standard OS image for deploying new PCs and laptops. Microsoft has built an NAC-type framework into some versions of its client OSs (Vista and XP SP3) to ease compliance with its NAC server product, called MS Network Policy Server, which works closely with its Windows 2008 Server product (see sidebar: The Cost of a Data Breach). The company has been able to convince many industry networking heavyweights (notably Cisco and Juniper) to adopt its NAP standard.22
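As a hedged sketch (not any vendor's actual client logic), the baseline check a NAC client performs before admitting a laptop might look like the following; the field names and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of the baseline check a NAC client performs before
# granting an endpoint access to the intranet. Thresholds and field names
# are illustrative, not taken from any specific NAC product.
from dataclasses import dataclass
from datetime import date

@dataclass
class Endpoint:
    patch_level: int          # installed OS patch revision
    av_definitions: date      # date of last antivirus definition update
    last_full_scan: date      # date of last completed antivirus scan

MIN_PATCH_LEVEL = 42          # assumed corporate baseline
MAX_DEFINITION_AGE_DAYS = 7
MAX_SCAN_AGE_DAYS = 14

def is_compliant(ep: Endpoint, today: date) -> bool:
    """Return True only if the endpoint meets every baseline rule."""
    return (
        ep.patch_level >= MIN_PATCH_LEVEL
        and (today - ep.av_definitions).days <= MAX_DEFINITION_AGE_DAYS
        and (today - ep.last_full_scan).days <= MAX_SCAN_AGE_DAYS
    )

today = date(2024, 3, 15)
stale = Endpoint(patch_level=42, av_definitions=date(2024, 3, 1),
                 last_full_scan=date(2024, 3, 10))
fresh = Endpoint(patch_level=43, av_definitions=date(2024, 3, 14),
                 last_full_scan=date(2024, 3, 10))
print(is_compliant(stale, today))  # stale definitions: quarantined
print(is_compliant(fresh, today))
```

A noncompliant endpoint would typically be quarantined to a remediation network segment rather than simply refused.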
Essentially, the technology has three parts: a policy-enforceable client, a decision point, and an enforcement point. The client could be an XP SP3 or Vista client (either a roaming user or a guest user) trying to connect to the company intranet. The decision point in this case would be the Network Policy Server product, checking to see whether the client requesting access meets the minimum baseline to allow it to connect. If it does not, the decision point would pass this data to the enforcement point, a network access product such as a router or switch, which would then cut off access. The scenario repeats at every connection attempt, allowing the network's health to be maintained on an ongoing basis. Microsoft's NAP page has more details and animation to explain this process.24 Access control in general terms is a relationship triad among internal users, intranet resources, and the actions internal users can take on those resources. The idea is to give users only the least amount of access they require to perform their jobs. Tools used to ensure this in Windows shops employ Active Directory for Windows logon scripting and Windows user profiles. Granular classification is needed for users, actions, and resources to form a logical
and comprehensive access control policy that addresses who gets to connect to what, while keeping the intranet safe from unauthorized access or data security breaches. Many off-the-shelf solutions geared toward this market combine inventory control and access control under a "desktop life-cycle" planning umbrella. Typically, security administrators start with a "deny-all" policy as a baseline before slowly building in the access permissions. As users migrate from one department to another, are promoted, or leave the company, in large organizations this job can occupy one person by herself. This person often has a close working relationship with Purchasing, Helpdesk, and HR, getting coordination and information from these departments about users who have separated from the organization and computers that have been surplused, and deleting and modifying user accounts and assignments of PCs and laptops. Helpdesk software usually has an inventory control component that is readily available to Helpdesk personnel to update and/or pull up to access details on computer assignments and user status. Optimal use of form automation can ensure that these steps occur on time (such as deleting a user on the day of separation) to avoid the possibility of an unwelcome data breach.
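The deny-all baseline and the separation-day cleanup described above can be sketched as follows; the user, resource, and action names are hypothetical:

```python
# Minimal sketch of a deny-all access baseline: every (user, resource,
# action) triple is denied unless an explicit grant exists. Names are
# illustrative only.
grants = set()  # start from deny-all

def grant(user, resource, action):
    grants.add((user, resource, action))

def revoke_user(user):
    """On separation, remove every permission the user holds."""
    global grants
    grants = {g for g in grants if g[0] != user}

def allowed(user, resource, action):
    return (user, resource, action) in grants

grant("alice", "hr-share", "read")
print(allowed("alice", "hr-share", "read"))   # explicitly granted
print(allowed("alice", "hr-share", "write"))  # denied by default
revoke_user("alice")
print(allowed("alice", "hr-share", "read"))   # gone on the day of separation
```

The point of starting from deny-all is that an omission fails safe: forgetting a grant inconveniences one user, whereas forgetting a revocation under an allow-all baseline leaves a standing breach risk.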
- MEASURING RISK: AUDITS
Audits are another cornerstone of a comprehensive intranet security policy. To start an audit, an administrator should know and list what he is protecting, as well as know the relevant threats and vulnerabilities to those resources. Assets that need protection can be classified as either tangible or intangible. Tangible assets are, of course, removable media (USB keys), PCs, laptops, PDAs, Web servers, networking equipment, digital video recording (DVR) security cameras, and employees' physical access cards. Intangible assets can include company intellectual property, such as corporate email and wikis, user passwords, and, especially for HIPAA and Sarbanes–Oxley mandates, personally identifiable health and financial information, which the company could be legally liable to protect. Threats can include the theft of USB keys, laptops, PDAs, and PCs from company premises, resulting in a data breach (for tangible assets), and weak passwords and unhardened operating systems in servers (for intangible assets). Once a correlated listing of assets and associated threats and vulnerabilities has been made, we have to measure the impact of a breach, which is known as risk. The common rule of thumb to measure risk is: Risk = Value of asset × Threat × Vulnerability. It is obvious that an Internet-facing Web server faces greater risk and requires priority patching and virus scanning because the vulnerability and threat components are high in that case (these servers routinely get sniffed and scanned over the Internet by hackers looking to find holes in their armor). However, this formula can standardize the priority list so that the actual audit procedure (typically carried out weekly or monthly by a vulnerability-scanning device) is standardized by risk level.
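The rule-of-thumb formula can be sketched as a simple prioritization aid; the 1–5 scales and the sample assets are illustrative assumptions, not values from the text:

```python
# The rule-of-thumb formula from the text, Risk = Asset value x Threat x
# Vulnerability, used to rank assets for audit priority. The 1-5 scales
# and the sample assets are assumptions for illustration.
def risk(asset_value, threat, vulnerability):
    return asset_value * threat * vulnerability

assets = {
    # name: (asset value, threat level, vulnerability), each on a 1-5 scale
    "internet-facing web server": (5, 5, 4),
    "internal wiki":              (3, 2, 2),
    "print server":               (1, 1, 2),
}

ranked = sorted(assets, key=lambda a: risk(*assets[a]), reverse=True)
for name in ranked:
    print(name, risk(*assets[name]))
```

As the text predicts, the Internet-facing server tops the list because both its threat and vulnerability components are high, so it gets priority patching and scanning.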
Vulnerability-scanning appliances usually scan server farms and networking appliances only, because these are high-value targets within the network for hackers who are looking for either unhardened server configurations or network switches with default factory passwords left on by mistake. To illustrate the situation, consider Fig. 15.4, which shows a Structured Query Language (SQL) injection attack on a corporate database.25
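For a concrete (and hedged) illustration of the attack class shown in Fig. 15.4, the following sketch contrasts a query built by string concatenation with a parameterized query; the table and input are toy examples:

```python
# Illustration of the SQL injection pattern: a query built by string
# concatenation executes attacker-supplied SQL, while a parameterized
# query treats the same input as plain data. Toy in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text itself.
unsafe = conn.execute(
    "SELECT ssn FROM users WHERE name = '" + malicious + "'"
).fetchall()
print("unsafe query leaked", len(unsafe), "row(s)")

# Safe: the driver binds the input as a value, not as SQL.
safe = conn.execute(
    "SELECT ssn FROM users WHERE name = ?", (malicious,)
).fetchall()
print("parameterized query returned", len(safe), "row(s)")
```

The concatenated version turns the WHERE clause into a tautology and leaks every row; the parameterized version matches no user named after the attack string.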

The value of an asset is subjective and can be assessed only by the IT personnel in that organization (see sidebar: Questions for a Nontechnical Audit of Intranet Security). If the IT staff has an Information Technology Infrastructure Library (ITIL) process under way, the value of an asset will often already have been classified and can be used. Otherwise, a small spreadsheet can be created with classes of various tangible and intangible assets (as part of a hardware/software cataloging exercise) and values assigned that way.
- GUARDIAN AT THE GATE: AUTHENTICATION AND ENCRYPTION
To most lay users, authentication in its most basic form is two-factor authentication, meaning a username and a password. Although adding further factors [such as additional autogenerated personal identification numbers (PINs) and/or biometrics] makes authentication stronger by magnitudes, one can do a lot with just the password within a two-factor situation. Password strength is determined by how hard the password is to crack using a password-cracker application that makes repetitive tries employing common words (sometimes from a stored dictionary) to match the password. Some factors will prevent the password from being cracked easily and make it a stronger password:
- password length (more than eight characters)
- use of mixed case (both uppercase and lowercase)
- use of alphanumeric characters (letters as well as numbers)
- use of special characters (such as !, ?, %, and #)
The access control list (ACL) in a Windows AD environment can be customized to demand up to all four factors in the setting or renewal of a password, which will render the password strong. Until a few years ago, the complexity of a password (the last three items in the preceding list) was favored as a measure of strength. However, the latest preference as of this writing is to use uncommon passwords: sentences joined together to form passphrases that are long but do not have much in the way of complexity. Password authentication ("what you know") as two-factor authentication is not as secure as adding a third factor to the equation (a dynamic token password).
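The four strength factors listed above can be sketched as a simple checker; the rules mirror the list, while the sample passwords and the factor counting are illustrative assumptions:

```python
# The four password-strength factors from the list above, expressed as a
# checker. The rules mirror the text (length over eight, mixed case,
# alphanumeric, special characters); the samples are illustrative.
import string

def strength_factors(password: str) -> dict:
    return {
        "length": len(password) > 8,
        "mixed_case": any(c.islower() for c in password)
                      and any(c.isupper() for c in password),
        "alphanumeric": any(c.isalpha() for c in password)
                        and any(c.isdigit() for c in password),
        "special": any(c in string.punctuation for c in password),
    }

weak = strength_factors("sunshine")
strong = strength_factors("Blue!Horse9Battery")
print(sum(weak.values()), "of 4 factors met")
print(sum(strong.values()), "of 4 factors met")
```

Note that the long passphrase style the text now prefers scores well on length even when it deliberately forgoes some of the complexity factors.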
Common types of third-factor authentication include biometrics (fingerprint scan, palm scan, or retina scan: in other words, "what you are") and token-type authentication (software or hardware PIN-generating tokens: that is, "what you have"). Proximity or magnetic swipe cards and tokens have seen common use for physical premises-access authentication in high-security buildings (such as financial and R&D companies), but not for network or hardware access within IT.
When remote or teleworker employees connect to the intranet via VPN tunnels or Web-based SSL VPNs (the outward extension of the intranet once called an extranet), the connection needs to be encrypted with strong Triple Data Encryption Algorithm (3DES) or AES-type encryption to comply with patient data and financial data privacy mandates. The standard authentication setup is usually a username and a password, with an additional hardware token-generated random PIN entered into a third box. Until recently, RSA was one of the bigger players in the hardware-token field; incidentally, the company also invented the RSA algorithm for public-key encryption. As of this writing, hardware tokens cost under $30 per user in quantities greater than a couple hundred pieces, compared with about $100 only a decade ago. Most vendors offer free lifetime replacements for hardware tokens. Instead of a separate hardware token, some inexpensive software token generators can be installed within PC clients, smartphones, and BlackBerry devices. Tokens are probably the most cost-effective enhancement to security today.
- WIRELESS NETWORK SECURITY
Employees using the convenience of wireless to log into the corporate network (usually via laptop) need to have their laptops configured with strong encryption to prevent data breaches. The first-generation encryption type known as Wired Equivalent Privacy (WEP) was easily deciphered (cracked) using common hacking tools and is no longer widely used. The latest standard in wireless authentication is Wi-Fi Protected Access (WPA or WPA2; 802.11i), which offers stronger encryption than WEP. Although wireless cards in laptops can offer all of the choices previously noted, they should be configured with WPA or WPA2 if possible. There are many hobbyists, equipped with powerful Wi-Fi antennas and wardriving software, roaming corporate areas looking for open wireless access points (transmitters); a common package is NetStumbler.
Wardriving was originally meant to log the presence of open Wi-Fi access points on websites (see sidebar: Basic Ways to Prevent Wi-Fi Intrusions in Corporate Intranets), but there is no guarantee that actual access and use ("piggybacking," in hacker terms) will not occur, because curiosity is human nature. If there is a profit motive, as in the TJX example, access to corporate networks will take place, although the risk of getting caught and of the resulting criminal prosecution will be high. In addition, installing a Remote Authentication Dial-In User Service (RADIUS) server to check access authentication for roaming laptops is a must.
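The hardware and software tokens discussed in this section typically generate their one-time PINs with the TOTP construction (RFC 6238): an HMAC over a 30-second time counter, dynamically truncated to six digits. A minimal sketch, using a toy shared secret:

```python
# Sketch of how a token-generated one-time PIN works, following the TOTP
# construction (RFC 6238): HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to six digits. The secret here is a toy value.
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = unix_time // step              # which 30-second window we are in
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = b"12345678901234567890"  # toy shared secret
# Server and token agree as long as they share the secret and the clock.
print(totp(secret, 59))
```

The server performs the same computation, so a stolen static password alone is useless without the current window's PIN.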
- SHIELDING THE WIRE: NETWORK PROTECTION
Firewalls are, of course, the primary barrier to a network. Typically rule based, firewalls prevent unwarranted traffic from getting into the intranet from the Internet. These days, firewalls also do some stateful inspection within packets to peer a little into the header contents of an incoming packet to check validity: that is, to check whether a streaming video packet is really what it says it is, and not malware masquerading as streaming video. Intrusion prevention systems (IPSs) are a newer type of inline network appliance that uses heuristic analysis (based on a weekly updated signature engine) to find patterns of malware identity and behavior and block malware from entering the periphery of the intranet. The IPS and the intrusion detection system (IDS) operate differently, however. IDSs typically do not sit inline; they sniff traffic occurring anywhere in the network, cache extensively, and can correlate events to find malware. The downside of IDSs
is that unless their filters are modified extensively, they generate copious amounts of false positives, so much so that "real" threats become impossible to sift out of all the noise. IPSs, in contrast, work inline and inspect packets rapidly to match packet signatures. The packets pass through many hundreds of parallel filters, each containing matching rules for a different type of malware threat. Most vendors publish new sets of malware signatures for their appliances every week. However, signatures for common worms and injection exploits such as SQL Slammer, Code Red, and NIMDA are sometimes hardcoded into the application-specific integrated chip that controls the processing for the filters. Hardware-enhancing a filter helps avert massive-scale attacks because the matching is performed in hardware, which is more rapid and efficient than software signature matching. Incredible numbers of malicious packets can be dropped from the wire using the former method. The buffers in an enterprise-class IPS are smaller than those in IDSs and are fast, akin to a high-speed switch, to preclude latency (often as low as 200 ms during the highest
load). A top-of-the-line midsize IPS box's total processing threshold for all input and output segments can exceed 5 gigabits per second using parallel processing.26 However, to avoid overtaxing CPUs and for efficiency's sake, IPSs usually block only a limited number of important threats out of the thousands of malware signatures listed. Tuning IPSs can be tricky: just enough blocking to silence the false-positive noise, yet enough to make sure all critical filters are activated to block important threats. The most important factors in designing a critical data infrastructure are resiliency, robustness, and redundancy regarding the operation of inline appliances. Whether one is talking about firewalls or inline IPSs, redundancy is
paramount (see sidebar: Types of Redundancy for Inline Security Appliances). Intranet robustness is a primary concern where data have to be available on a 24/7 basis. Most security appliances come with syslog reporting (event and alert logs sent usually via port 514 User Datagram Protocol) and email notification (set to alert beyond a customizable threshold) as standard. The syslog reporting can be forwarded to a security event management appliance, which consolidates syslogs into a central threat console for the benefit of event correlation and forwards warning emails to administrators based on preset threshold criteria. Moreover, most firewalls and IPSs can be configured to forward their own notification email to administrators in case of an impending threat scenario. For special circumstances in which a wireless-type local area network (LAN) connection is the primary one
(whether microwave beam, laser beam, or satellite-type connection), redundancy can be ensured by a secondary connection of equal or smaller capacity. For example, in certain northern Alaska towns where digging trenches into the hardened icy permafrost is expensive and rigging wire across the tundra is impractical because of the extreme cold, the primary network connections between towns are always via microwave link, often operating in dual redundant mode.
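The threshold-based syslog alerting described above can be reduced to a toy sketch; real appliances receive events on UDP port 514, whereas here the events, sources, and threshold are illustrative in-memory assumptions:

```python
# Toy version of threshold-based syslog alerting: count events per source
# and flag any source that crosses a preset threshold. Real deployments
# receive these on UDP port 514; here the events are an in-memory list.
from collections import Counter

ALERT_THRESHOLD = 3  # assumed per-source alert threshold

events = [
    ("firewall-1", "deny tcp 10.0.0.5:445"),
    ("ips-1", "sig match: sql-slammer"),
    ("firewall-1", "deny tcp 10.0.0.5:445"),
    ("firewall-1", "deny tcp 10.0.0.5:139"),
]

counts = Counter(source for source, _ in events)
alerts = [src for src, n in counts.items() if n >= ALERT_THRESHOLD]
for src in alerts:
    print("ALERT: threshold exceeded on", src)
```

A security event management appliance does essentially this at scale, correlating across sources before emailing administrators.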
- WEAKEST LINK IN SECURITY: USER TRAINING
Intranet security awareness is best communicated to users in two primary ways: during new employee orientation and through ongoing targeted training for users in various departments, with specific user audiences in mind. A formal security training policy should be drafted and signed off by management, with well-defined scopes, roles, and responsibilities for various individuals, such as the CIO and the information security officer, and posted on the intranet. New recruits should be given a copy of all security policies to sign off on before they are granted user access. The training policy should also spell out the roles of the HR, compliance, and public relations departments in the training program. Training can be given using the PowerPoint seminar method in large gatherings before monthly "all-hands" departmental meetings and also via an emailed Web link to a Flash video presentation. The latter can also be configured to have an interactive quiz at the end, which should pique audience interest in the subject and help people remember relevant issues.
With regard to topics to be included in the training, any applicable federal or industry mandate such as HIPAA, Sarbanes–Oxley, PCI-DSS, or ISO 27002 should be discussed extensively first, followed by discussions on tackling social engineering, spyware, viruses, and so on. The topics of data theft and corporate data breaches are frequently in the news; these can be discussed extensively, with emphasis on how to protect personally identifiable information in a corporate setting. Password policy and access control are always good topics to discuss; at a minimum, users need to be reminded to sign off their workstations before going on a break.
- DOCUMENTING THE NETWORK: CHANGE MANAGEMENT
Controlling the IT infrastructure configuration of a large organization is more about change control than anything else. Often the change control guidance comes from documents such as the ITIL series of guidebooks. After a baseline configuration is documented, change control, a deliberate and methodical process, ensures that any change to the baseline IT configuration of the organization (such as changes to network design, AD design, and so on) is extensively documented and authorized only after prior approval. This is done to ensure that unannounced or unplanned changes are not allowed to hamper the day-to-day efficiency and business functions of the overall intranet infrastructure. In most government entities, even small changes must go through change management (CM); however, management can give managers leeway to approve a certain minimal level of ad hoc change that has no potential
to disrupt operations. In most organizations in which mandates are a day-to-day affair, no ad hoc change is allowed unless it goes through supervisory-level CM meetings. The goal of CM is largely to comply with mandates, but for some organizations, waiting for a weekly meeting can slow things significantly. If justified, an emergency CM meeting can be called to approve a time-sensitive change. Practically speaking, the CM process works as follows: a formal CM document is filled out (usually a multitab online Excel spreadsheet) and forwarded to the CM ombudsman (often a project management person). For some CM form details, see the sidebar: Change Management Spreadsheet Details to Submit to a CM Meeting. The document must have approval from the requestor's supervisor before proceeding to the ombudsman. The ombudsman posts this change document on a section of the intranet for all other supervisors and managers within the CM committee to review in advance. Done this way, the CM committee, meeting in its weekly or biweekly change approval meetings, can voice reservations or ask clarifying questions of the change-initiating person, who is usually present to explain the change. At the end of the deliberations, the committee votes to approve, deny, modify, or delay the change (sometimes with preconditions).
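The CM flow just described is essentially a linear state machine; the state names below are paraphrases of the text, not terms from any CM product:

```python
# The change-management flow described above, reduced to a linear state
# machine. State names paraphrase the text's stages.
STATES = [
    "drafted",              # CM spreadsheet filled out by the requestor
    "supervisor_approved",  # requestor's supervisor signs off
    "posted",               # ombudsman posts it for committee review
    "voted",                # committee approves/denies/modifies/delays
]

def advance(state: str) -> str:
    i = STATES.index(state)
    if i == len(STATES) - 1:
        raise ValueError("change already decided")
    return STATES[i + 1]

state = "drafted"
while state != "voted":
    state = advance(state)
print(state)
```

The linearity is the point: no change skips a stage, which is what makes the audit trail defensible under a mandate.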
Change Management Spreadsheet Details to Submit to a Change Management Meeting
- name and organizational details of the change requestor
- actual change details, such as the time and duration of the change
- any possible impacts (high, low, or medium) to significant user groups or critical functions
- the amount of advance notice needed for affected users via email (typically 2 working days)
- evidence that the change has been tested in advance
- signature and approval of the supervisor and her supervisor (manager)
- whether and how rollback is possible
- a "postmortem" tab that, after the change, has to confirm whether the change process was successful, with any revealing comments or notes for the conclusion
- an optional "attachment" tab containing Visio diagrams or Word documentation embedded within the Excel sheet to aid discussion
If approved, the configuration change is then made (usually within the following week). The postmortem section of the change document can then be updated to note any issues that occurred during the change (such as a rollback after change reversal, and its causes).
Some organizations have started to operate the CM collaborative process using social networking tools at work. This allows disparate flows of information, such as emails, departmental wikis, and file-share documents, to belong to a unified thread for future reference.
- REHEARSE THE INEVITABLE: DISASTER RECOVERY
Possible disaster scenarios can range from the mundane to the biblical in proportion. In intranet or general IT terms, recovering successfully from a disaster means resuming critical IT support functions for mission-critical business functions. Whether such recovery is smooth and hassle-free depends on how well prior disaster recovery (DR) planning occurred and how thoroughly the plan was tested to address all relevant shortcomings. The first task when planning for DR is to assess the business impact of a certain type of disaster on the functioning of an intranet using business impact analysis (BIA). BIA involves certain metrics; again, off-the-shelf software tools are available to assist with this effort. The scenario could be a hurricane-induced power outage or a human-induced critical application crash. In any of these scenarios, one needs to assess the type of impact in terms of time, productivity, and finance. BIAs can take into consideration the breadth of impact. For example, if the power outage is caused by a hurricane or an earthquake, support from generator vendors or the electric utility could be hard to get because of the large demand for their services. BIAs also need to take into account historical and local weather priorities. Although hurricanes could conceivably occur in California or earthquakes along the Gulf Coast of Florida, for most practical purposes the chances of those disasters taking place in those locales are remote. Historical data can be helpful for prioritizing contingencies. Once the business impacts are assessed and critical systems categorized, a DR plan can be organized and tested. Criteria for recovery involve two metrics: a recovery point objective (RPO) and a recovery time objective (RTO). In the DR plan, the RPO refers to how far back, or "to what point in time," backup data have to be recovered.
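The RPO and RTO arithmetic can be made concrete with a small sketch; the backup time, failure time, and the 5-h restore objective are illustrative assumptions:

```python
# Sketch of the RPO/RTO arithmetic: the data lost in a failure is the gap
# back to the last good backup (what the RPO must tolerate), and the
# recovery deadline is the failure time plus the RTO. Times are illustrative.
from datetime import datetime, timedelta

last_backup = datetime(2024, 3, 15, 1, 0)    # nightly incremental at 01:00
failure     = datetime(2024, 3, 15, 16, 30)  # afternoon outage
rto = timedelta(hours=5)                     # agreed tape-restore objective

data_loss_window = failure - last_backup     # must be within the RPO
recovery_deadline = failure + rto

print("data lost spans", data_loss_window)
print("service must resume by", recovery_deadline)
```

With nightly incrementals, the worst-case loss window approaches a full day, which is exactly why the backup frequency is dictated by the RPO rather than the other way around.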
This time frame generally dictates how often tape backups are taken, which again can depend on the criticality of the data. The most common scenario for medium-sized IT shops is daily incremental backups and a weekly full backup on tape. Tapes are sometimes changed automatically by tape backup appliances. One important thing to remember is to rotate tapes (that is, put them on a life-cycle plan by marking them for expiry) to make sure that tapes have complete data integrity
during a restore. Most tape manufacturers have marking schemes for this task. Although tapes are still relatively expensive, the extra amount spent on always having fresh tapes ensures that there are no nasty surprises at the time of a crucial data recovery. RTO refers to how long it takes to restore backed-up or recovered data to its original state for resuming normal business processes. The critical factor here is cost: it costs much more to restore data within an hour using an online backup process, or to resume operations using a hotsite, than to perform a 5-h restore from stored tape backups. If business process resumption is critical, though, cost becomes a less important factor. DR also has to take into account resumption of communication channels. If network and telephone links are not up, having a timely tape restore does little good toward resuming business functions. Extended campus network links often depend on leased lines from major vendors such as Verizon and AT&T, so having a trusted vendor relationship with agreed-on service level agreement (SLA) standards is a requirement. Depending on budgets, one can configure DR to happen almost instantly, if so desired, but that is a far more costly option. Most shops with "normal" data flows are fine with business being resumed within about 3–4 h, or even a full working day, after a major disaster. Balancing costs with business expectations is the primary factor in the DR game. Spending inordinately on a rare disaster that might never happen is a waste of resources; it is fiscally imprudent (not to mention futile) to try to prepare for every possible contingency. Once the DR plan is more or less finalized, a DR committee can be set up under an experienced DR professional to orchestrate the routine training of users and managers to simulate disasters on a frequent basis.
In most shops this means management meeting every 2 months to simulate a DR "war room" (command center) situation and employees going through mandatory interactive disaster recovery training every 6 months, including the list of DR personnel to contact. Within the command center, roles are preassigned, and each member of the team carries out his or her role as though it were a real emergency or disaster. DR coordination is frequently modeled on the guidelines of the US Federal Emergency Management Agency, an active entity that has training and certification tracks for DR management professionals. Simulated "generator shutdowns" in most shops are scheduled on a biweekly or monthly basis to see how the systems actually function. The systems can include UPSs, emergency lighting, email and cell phone notification methods, and alarm enunciators and sirens. Because electronic items in a server room are sensitive to moisture damage, gas-based Halon fire-extinguishing systems are
used. These Halon systems also have a provision to be tested (often twice a year) to determine their readiness. The vendor will be happy to be on retainer for these tests, which can be made part of the purchasing agreement as an SLA. If equipment is tested on a regular basis, shortcomings and major hardware maintenance issues with major DR systems can easily be identified, documented, and redressed. In a severe disaster situation, priorities need to be exercised regarding what to salvage first. Clearly, recovering employee records, payroll records, and critical business mission data such as customer databases will take precedence; anything irreplaceable or not easily replaceable needs priority attention. We can divide the levels of redundancy and backup into a few progressive segments. The level of backup sophistication depends, of course, on (1) the criticality and (2) the time-to-recovery criteria of the data involved. At the most basic level, we can opt not to back up any data, or not even have procedures to recover data, which means that data recovery would be a failure. Understandably, this is not a common scenario. More typical is contracting with an archival company that operates a local warehouse within a 20-mile radius. Tapes are backed up onsite and stored offsite, with the archival company picking up the tapes from your facility on a daily basis. The time to recover depends on retrieving the tapes from archival storage, getting them onsite, and starting a restore. The advantage here is lower cost; however, the time needed to transport tapes and recover them might not be acceptable, depending on the type of data and the recovery scenario. Often a "coldsite" or "hotsite" is added to the intranet backup scenario.
A coldsite is a smaller, scaled-down copy of the existing intranet data center that has only the most essential pared-down equipment supplied and tested for recovery, but not in a perpetually ready state (powered down, as in "cold," with no live connection). These coldsites can house the basics, such as a Web server, domain name servers, and SQL databases, to get an informational site started up in very short order. A hotsite is the same thing as a coldsite, except that the servers are always running and the Internet and intranet connections are "live" and ready to be switched over much more quickly than at a coldsite. These are just two examples of how business resumption and recovery times can be shortened. Recovery can be made rapid if the hotsite is linked to the regular data center using fast leased-line links (such as a DS3 connection). Backups synchronized in real time with an identical redundant array of inexpensive disks at the hotsite, over redundant high-speed data links, afford the shortest recovery time. In larger intranet shops based in defense-contractor companies, sometimes there are requirements for even
faster data recovery with far more rigid standards for data integrity. To-the-second real-time data synchronization in addition to hardware synchronization ensures that duplicate sites thousands of miles away can be up and running within a matter of seconds, even faster than a hotsite. Such extreme redundancy is typically needed for critical national databases (air traffic control or customs databases that are accessed 24/7, for example). At the highest level of recovery performance, most large database vendors offer "zero data loss" solutions, with a variety of cloned databases synchronized across the country that automatically fail over and recover instantaneously to preserve a consistent status, often free from human intervention. Oracle's version is called Data Guard; most mainframe vendors offer similar products, varying in their tiers and features. The philosophy here is simple: the more dollars you spend, the more readiness you can buy. However, the expense has to be justified by the level of criticality of the data's availability.
- CONTROLLING HAZARDS: PHYSICAL AND ENVIRONMENTAL PROTECTION
Physical access and environmental hazards are relevant to security within the intranet. People are the primary weak link in security (as previously discussed), and controlling the activity and movement of authorized personnel, as well as preventing access by unauthorized personnel, falls within the purview of these security controls. This important area of intranet security must first be formalized within a management-sanctioned and published P&P. Physical access to data center facilities (as well as IT working facilities) is typically controlled using card readers. These were scanning types over the past 2 decades but are increasingly being converted to near-field or proximity-type access card systems. Some high-security facilities (such as bank data centers) use smartcards, which use encryption keys stored within the cards for key matching. Some important and commonsense topics should be discussed within the subject of physical access. First, disbursal of cards needs to be a deliberate and high-security affair requiring the signatures of at least two supervisory-level people who can vouch for the authenticity of, and the actual need for, a person's access credentials to specific areas. Access card permissions need to be highly granular: an administrative person will probably never need to be in the server room, so that person's access to the server room should be blocked. Areas should be categorized and cataloged by sensitivity, and access permissions granted accordingly.
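The granular, area-by-area card permissions described above can be sketched as follows; the areas, sensitivity ratings, and card holders are hypothetical:

```python
# Sketch of granular physical access: areas are cataloged by sensitivity
# and a card opens only areas its holder was explicitly granted. All
# names and ratings are illustrative.
AREA_SENSITIVITY = {"lobby": 1, "office floor": 2, "server room": 5}

card_grants = {
    "admin-assistant": {"lobby", "office floor"},  # no server room access
    "sysadmin": {"lobby", "office floor", "server room"},
}

def may_enter(card_holder: str, area: str) -> bool:
    """Deny by default: unknown holders and ungranted areas are refused."""
    return area in card_grants.get(card_holder, set())

print(may_enter("admin-assistant", "server room"))
print(may_enter("sysadmin", "server room"))
```

As with logical access control earlier in the chapter, the default is denial; the sensitivity catalog guides which grants the two supervisory signers should refuse.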
Physical data transmission access points to the intranet have to be monitored via DVR and closed-circuit cameras if possible. Physical electronic eavesdropping can occur at unmonitored network access points, in both wireline and wireless ways. There have been known instances of thieves intercepting LAN communication from unshielded Ethernet cable (usually hidden above the plenum or false ceiling for longer runs). All a data thief needs is to place a tap box and a miniature (Wi-Fi) wireless transmitter at entry or exit points to the intranet to copy and transmit all communications. At the time of this writing, these transmitters are the size of a USB key. The miniaturization of electronics has made data theft possible even for part-time thieves, and spy-store sites give determined data thieves plenty of workable options at relatively little cost. Using a DVR solution to monitor and store access logs to sensitive areas, and correlating them to the timestamps on the physical access logs, can help forensic investigations in case of a physical data breach, malfeasance, or theft. It is important to remember that DVR records typically rotate and are erased every week. One person has to be in charge of the DVR so that records are saved to optical disks weekly before they are erased. DVR tools need some tending because their sophistication level often does not come up to par with other network tools. Written or PC-based sign-in logs must be kept at the front reception desk, with timestamps. Visitor cards should have limited access to private and/or secured areas. Visitors must provide official identification and log times coming in and going out, as well as the names of the persons to be visited and the reason for their visit. If possible, visitors should be escorted to and from the specific person to be visited, to minimize the chances of subversion or sabotage. Entries to courthouses and other special facilities have metal detectors, but these may not be needed for every facility.
The same goes for bollards and concrete entry barriers to prevent car bombings. In most government facilities where security is paramount, even physical entry points to parking garages have special personnel (usually deputed from the local sheriff's department) to check under cars for hidden explosive devices. Contractor laptops must be registered and physically checked in by field support personnel. If these laptops are going to be plugged into the local network, they need to be virus-scanned by data-security personnel and checked for unauthorized utilities or suspicious software (such as hacking utilities, Napster, or other P2P threats).
The supply of emergency power to the data center and the servers has to be robust to protect the intranet from corruption caused by power failures. Redundancy has to be exercised all the way from the utility connection to the servers themselves. This means there has to be more than one power connection to the data center (from more than one substation/transformer, if it is a larger data center).
There has to be provision of an alternate power supply (a ready generator to supply some, if not all, power requirements) in case of a power failure. Power supplied to the servers has to come from more than one UPS, because most servers have two removable power inputs. Data center racks typically have two UPSs at the bottom supplying power to two separate power strips on either side of the rack for this redundancy purpose (for seamless switchover). In case of a power failure, the UPSs instantly take over the supply of power and start beeping, alerting personnel to shut down servers gracefully. UPSs usually have reserve power for brief periods (less than 10 min) until the generator kicks in, relieving the UPS of the large burden of the server power loads. Generators come on trailers or are skid-mounted and are designed to run as long as fuel is available in the tank, which can be about 3-5 days, depending on the model and generating capacity (in thousands of kilowatts). Increasingly, their expensive, polluting batteries have made UPSs fall out of favor in larger data centers compared with flywheel power supplies, a cleaner, battery-less technology for supplying interim power. Maintenance of this technology is about half as costly as that of a UPS, and it offers the same functionality.
Provision has to be made for rechargeable emergency luminaires within the server room, as well as all areas occupied by administrators, so that entry and exit are not hampered during a power failure. Provision for fire detection and firefighting must also be made. As mentioned previously, Halon gas fire-suppression systems are appropriate for server rooms because sprinklers will inevitably damage expensive servers if the servers are still turned on during sprinkler activation. Sensors have to be placed close to the ground to detect moisture from plumbing disasters and resultant flooding.
Master shutoff valve locations for water have to be marked and identified, and personnel need to be periodically trained on performing shutoffs. Complete environmental control packages, with cameras geared toward detecting any type of temperature, moisture, or sound abnormality, are offered by many vendors. These sensors are connected to monitoring workstations using Ethernet LAN cabling. Reporting can occur through email if customizable thresholds are met or exceeded.
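The threshold-based reporting these packages perform can be sketched as follows. The sensor names and threshold values are illustrative assumptions, not vendor defaults.

```python
# Sketch of threshold-based environmental alerting. Sensor names
# and acceptable ranges below are illustrative assumptions.

THRESHOLDS = {
    "temperature_c": (10.0, 27.0),   # acceptable low/high for a server room
    "humidity_pct":  (20.0, 60.0),
}

def out_of_range(readings):
    """Return an alert string for each reading outside its thresholds."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS[sensor]
        if not (low <= value <= high):
            alerts.append(f"{sensor}={value} outside {low}-{high}")
    return alerts

print(out_of_range({"temperature_c": 31.5, "humidity_pct": 45.0}))
```

In a deployment, the returned strings would be handed to an email step (for example, via `smtplib`), matching the email reporting the text describes.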
- KNOW YOUR USERS: PERSONNEL SECURITY
Users working within intranet-related infrastructures have to be known and trusted. Often the data contained within the intranet is highly sensitive, such as new product designs and financial or market-intelligence data gathered after much research and at great expense. Assigning personnel to sensitive areas in IT entails attaching security categories and parameters to the
positions. Attaching security parameters to a position is akin to attaching tags to a photograph or blog: some parameters will be more important than others, but all describe the item to some extent. The categories and parameters listed on the personnel access form should correlate to access permissions for sensitive installations such as server rooms. Access permissions should comply with the organizational security policy in force at the time. Personnel, especially those who will be handling sensitive customer data or individually identifiable health records, should be screened before hiring to ensure that they do not have felonies or misdemeanors on their records.
During transfers and terminations, all sensitive access tools should be reassessed and reassigned (or deassigned, in the case of termination) for both logical and physical access. Access tools can include such items as encryption tokens, company cell phones, laptops or PDAs, card keys, metal keys, entry passes, and any other company identification provided for employment. For people who are leaving the organization, an exit interview should be conducted. System access should be terminated within the hour after former personnel cease to be employees of the company.
- PROTECTING DATA FLOW: INFORMATION AND SYSTEM INTEGRITY
Information integrity measures protect information and data flows while they move between users' desktops and the intranet. System integrity measures protect the systems that process the information (usually servers such as email or file servers). Processes to protect information can include antivirus tools, IPS and IDS tools, Web-filtering tools, and email encryption tools. Antivirus tools are the most common security tools available to protect servers and users' desktops.
Typically, enterprise-level antivirus software from larger vendors such as Symantec and McAfee will contain a console listing all machines on the network and will enable administrators to see graphically (through color or icon differentiation) which machines need virus remediation or updates. All machines will have a software client installed that does some scanning and reports on the individual machine to the console. To save bandwidth, the management server that contains the console is updated with the latest virus (and spyware) definitions from the vendor. It is then the management console's job to gradually update the software client on each computer with the latest definitions. Sometimes the client itself will need an update, and the console allows this to be done remotely.
An IDS detects malware within the network from the traffic and communications that malware uses. Certain patterns of behavior are attached to each type of malware, and those signatures are what an IDS matches against. Currently,
IDSs are mostly defunct. The major problems with IDSs were that (1) they produced too many false positives, which made sifting out actual threats a huge, frustrating exercise; and (2) they had no teeth: that is, their functionality was limited to reporting and raising alarms. IDS devices could not stop malware from spreading because they could not block it. Compared with IDSs, IPSs have seen much wider adoption across corporate intranets because IPS devices sit inline at the periphery, processing traffic, and can block traffic or malware based on much more sophisticated heuristic algorithms than IDS devices use. Although IPSs are mostly signature based, there are experimental IPS devices that can stop threats based not on signatures but solely on suspicious or anomalous behavior. This is good news because the number of "zero-day" threats is increasing, and their signatures are mostly unknown to the security vendors at the time of infection.
Web-filtering tools have become more sophisticated as well. A decade ago, Web filters could block traffic to specific sites only if the URL matched. Today, most Web filter vendors have large research arms that classify specific websites under certain categories. Some vendors have realized the enormity of this task and have allowed the general public to contribute to the effort. The website www.trustedsource.org is an example; a person can go in and submit one or more URLs for categorization. If they are examined and approved, the site category will then be added to the vendor's next signature update for their Web filter solution. Web filters not only match URLs, they also do a fair bit of packet examination these days, just to make sure that a JPEG frame is indeed a JPEG frame and not a worm in disguise.
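Both checks just described, matching traffic against known byte signatures and verifying that content claiming to be an image really is one, can be sketched together. The "signature" below is a placeholder byte pattern, not a real malware signature; only the JPEG magic number is a genuine file-format fact.

```python
# Sketch of signature matching (IDS-style) plus content verification.
# MALWARE_SIGNATURES holds an illustrative placeholder pattern;
# JPEG files genuinely begin with the bytes FF D8 FF.

MALWARE_SIGNATURES = {
    "demo-worm": b"\xde\xad\xbe\xef",   # invented pattern for the sketch
}

JPEG_MAGIC = b"\xff\xd8\xff"

def match_signatures(payload: bytes):
    """Return the names of any known signatures found in the payload."""
    return [name for name, sig in MALWARE_SIGNATURES.items() if sig in payload]

def looks_like_jpeg(payload: bytes) -> bool:
    """Check that data claiming to be a JPEG starts with the JPEG magic."""
    return payload.startswith(JPEG_MAGIC)

print(match_signatures(b"header\xde\xad\xbe\xefrest"))  # ['demo-worm']
print(looks_like_jpeg(b"\xff\xd8\xff\xe0JFIF"))         # True
```

Real IDS/IPS engines use far larger signature sets with optimized multi-pattern matching (and, as the text notes, heuristics beyond signatures), but the matching principle is the same.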
The categories of websites blocked by a typical midsized intranet vary, but some surefire blocked categories would be pornography, erotic sites, discrimination/hate, weapons/illegal activities, and dating/relationships. Web filters are not there merely to enforce the moral values of management. These categories, if not blocked at work, openly enable one employee to offend another (especially pornography or discriminatory sites) and are fertile grounds for a liability lawsuit against the employer.
Finally, email encryption has been in the news because of various mandates such as Sarbanes-Oxley and HIPAA. Both mandates specifically call for encrypting personally identifiable financial or patient medical data while in transit. California (among other states) has adopted a resolution to discontinue fund disbursements to any California health organization that does not use email encryption as a matter of practice. This has caught many Californian companies and local government entities unaware, because email encryption software is relatively hard to implement. The toughest
challenge yet is to train users to get used to the tool. Email encryption works by requiring the recipient to enter a set of credentials to access the email, rather than simply having email pushed to the user, as within an email client such as Outlook.
- SECURITY ASSESSMENTS
A security assessment (usually done on a yearly basis for most midsized shops) not only uncovers various misconfigured items on the network and server-side sections of IT operations, it also serves as a convenient blueprint for IT to activate necessary changes and gain credibility for budgetary assistance from the accounting folks. Typically, most consultants take 2-4 weeks to conduct a security assessment (depending on the size of the intranet), and they primarily use open-source vulnerability scanners such as Nessus. GFI LANguard, Retina, and Core Impact are examples of commercial vulnerability-testing tools. Sometimes testers also use other suites of tools (special open-source tools such as the Metasploit Framework or Fragrouter) to conduct "payload-bearing attack exploits," thereby evading the firewall and the IPS to gain entry. In the case of intranet Web servers, cross-site scripting attacks can occur (see sidebar: Types of Scans Conducted on Servers and Network Appliances During a Security Assessment).
Types of Scans Conducted on Servers and Network Appliances During a Security Assessment
- firewall and IPS device configuration
- regular and SSL VPN configuration
- Web server hardening (most critical; available as guides from vendors such as Microsoft)
- demilitarized zone (DMZ) configuration
- email vulnerabilities
- domain name server anomalies
- database servers (hardening levels)
- network design and access control vulnerabilities
- internal PC health, such as patching levels and incidence of spyware, malware, and so on
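At their core, the network-facing scans in the sidebar start with simple service discovery: probing which ports accept connections. A minimal TCP connect-scan sketch follows; the host and ports are examples, and such probing should only ever be run against systems you are authorized to test.

```python
import socket

# Minimal TCP connect-scan sketch of the service discovery step a
# vulnerability scanner performs. Host/ports below are examples;
# scan only systems you are authorized to assess.

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Full scanners such as Nessus or Retina go much further, fingerprinting the services found and matching their versions against vulnerability databases, but discovery of listening services is where each scan begins.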
The results of these penetration tests are usually compiled as two separate items: (1) a full-fledged technical report for IT, and (2) a high-level executive summary meant for and delivered to top management, to discuss strategy with IT after the engagement.
- RISK ASSESSMENTS
Risk is defined as the probability of loss. In IT terms, we are talking about compromising the confidentiality, integrity, or availability (CIA) of data. Risk management is a way to manage the probability of threats causing an
impact. Measuring risks through a risk assessment exercise is the first step toward managing or mitigating a risk. Risk assessments can identify network threats, their probabilities, and their impacts; risk can be reduced by reducing any of these three factors.
Regarding intranet risks and threats, we are talking about anything from unpatched PCs picking up viruses and spyware (with hidden keylogging software) to network-borne denial-of-service attacks and even large, publicly embarrassing Web vandalism threats, such as someone defacing the main page of the company website. The last is a high-impact threat but is mostly perceived to be a remote probability, unless, of course, the company has experienced it before. Awareness of security among vendors as well as users is at an all-time high because security is a high-profile news item.
Any security threat assessment needs to explore and list exploitable vulnerabilities and gaps. Many midsized IT shops run specific vulnerability assessment (VA) tools in-house on a monthly basis. eEye's Retina Network Security Scanner and Foundstone's scanning appliance are two examples of VA tools that can be found in use at larger IT shops. These tools are consolidated on ready-to-run appliances that are usually managed through remote browser-based consoles. Once the gaps are identified and quantified, steps can be taken to mitigate these vulnerabilities gradually, minimizing the impact of threats.
In intranet risk assessments, we primarily identify Web server and database threats residing within the intranet, but we should also be mindful of the periphery, to guard against breaches through the firewall or IPS. Finally, securing intranet infrastructure and applications can be a complex task. It becomes more complex, and even confusing, when information is obtained from the different sources normally found at security conferences around the world.
Frequently, these conference sources give a high-level overview, talking about generic compliance, but none gives the full picture and the details required for quick implementation. So, with the preceding in mind, and because of the frequency of poor security practices and far-too-common security failures on the intranet, let us briefly look at an intranet security implementation process checklist.
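Before turning to the checklist, the risk arithmetic used in this section (risk as the probability of a threat times its impact, reduced by lowering either factor) can be sketched in a few lines. The scoring scale and the sample threats are illustrative assumptions, not figures from an actual assessment.

```python
# Sketch of the risk arithmetic above: risk = probability x impact.
# Probabilities and impact scores below are invented for illustration.

def risk_score(probability: float, impact: int) -> float:
    """probability in [0, 1]; impact on a 1-10 scale."""
    return probability * impact

threats = {
    "unpatched PC picks up spyware": (0.6, 5),
    "public website defacement":     (0.05, 9),
}

# Rank threats by score so mitigation effort goes to the top items.
for name, (prob, impact) in sorted(threats.items(),
                                   key=lambda kv: -risk_score(*kv[1])):
    print(f"{risk_score(prob, impact):5.2f}  {name}")
```

The ranking makes the text's point concrete: a spectacular threat such as defacement can score below a mundane one once its low probability is factored in, which is exactly why assessments quantify both factors rather than reacting to impact alone.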
- INTRANET SECURITY IMPLEMENTATION PROCESS CHECKLIST
With this checklist, you get all of your questions in one place. This not only saves time, it is also cost-effective. The following high-level checklist lists the questions that are typically raised during the implementation process (see checklist: An Agenda for Action for Intranet Security Implementation Process Activities).
An Agenda for Action for Intranet Security Implementation Process Activities
The following high-level checklist of intranet security implementation process questions should be addressed (check all tasks completed):
_____1. How do you validate the intranet?
_____2. How do you validate Web applications?
_____3. How do you ensure and verify the accuracy of file transfer through email?
_____4. Does one need third-party certificates for digital signatures?
_____5. How do you ensure limited and authorized access to closed and open systems?
_____6. How can one safely access the company intranet while traveling?
_____7. How do you best protect the intranet from Internet attacks?
_____8. How do you handle security patches?
_____9. Does the stakeholder expect to be able to find a procedure using a simple search interface?
_____10. How many documents will be hosted?
_____11. What design will be used (centralized, hub and spoke, etc.)?
_____12. Who will be involved in evaluating projects?
_____13. What is the budget?
_____14. Who will be responsible for maintaining the site after it goes "live"?
- SUMMARY
It is true that the level of Internet hyperconnectivity among Generation X and Y users has mushroomed, and the network periphery we used to take for granted as a security shield has been diminished, largely because of the explosive growth of social networking and the resulting connectivity boom. However, with the various new types of incoming application traffic [Voice over Internet Protocol, Session Initiation Protocol, and Extensible Markup Language (XML) traffic] reaching their networks, security administrators need to stay on their toes and deal with these new protocols by implementing newer tools and technology.
One example of such new technology is the application-level firewall used to connect outside vendors to intranets (also known as an XML firewall, typically placed within a DMZ), which protects the intranet from malformed XML and Simple Object Access Protocol message exploits coming from outside-sourced applications.27
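The core checks such an XML firewall applies before a message reaches the intranet can be sketched as follows: reject input that is not well-formed XML, and reject input that exceeds simple size or nesting limits. The limits below are illustrative assumptions; production systems would also enforce schema validation and entity-expansion defenses (for which a hardened parser such as the third-party `defusedxml` is generally preferred over the stdlib parser used here).

```python
import xml.etree.ElementTree as ET

# Sketch of XML-firewall-style admission checks. MAX_BYTES and
# MAX_DEPTH are illustrative limits, not values from any product.

MAX_BYTES = 64 * 1024
MAX_DEPTH = 10

def xml_depth(elem, level=1):
    """Recursively compute the nesting depth of an element tree."""
    return max([xml_depth(child, level + 1) for child in elem] + [level])

def accept_message(raw: bytes) -> bool:
    """Admit a message only if it is small, well-formed, shallow XML."""
    if len(raw) > MAX_BYTES:
        return False
    try:
        root = ET.fromstring(raw)
    except ET.ParseError:
        return False          # malformed XML never reaches the intranet
    return xml_depth(root) <= MAX_DEPTH

print(accept_message(b"<order><id>42</id></order>"))  # True
print(accept_message(b"<order><id>42</order>"))       # False (malformed)
```

Dropping malformed or oversized documents at the DMZ, before any intranet application parses them, is precisely the protection against malformed-XML and SOAP message exploits the text describes.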
Published @ September 30, 2021 11:56 am