About This eBook
ePUB is an open, industry-standard format for eBooks. However, support of ePUB and
its many features varies across reading devices and applications. Use your device or app
settings to customize the presentation to your liking. Settings that you can customize often
include font, font size, single or double column, landscape or portrait mode, and figures
that you can click or tap to enlarge. For additional information about the settings and
features on your reading device or app, visit the device manufacturer’s Web site.
Many titles include programming code or configuration examples. To optimize the
presentation of these elements, view the eBook in single-column, landscape mode and
adjust the font size to the smallest setting. In addition to presenting code and
configurations in the reflowable text format, we have included images of the code that
mimic the presentation found in the print book; therefore, where the reflowable format
may compromise the presentation of the code listing, you will see a “Click here to view
code image” link. Click the link to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your device or app.
Security in Computing
FIFTH EDITION
Charles P. Pfleeger
Shari Lawrence Pfleeger
Jonathan Margulies
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco
New York • Toronto • Montreal • London • Munich • Paris • Madrid
Capetown • Sydney • Tokyo • Singapore • Mexico City
Many of the designations used by manufacturers and sellers to distinguish their
products are claimed as trademarks. Where those designations appear in this book, and the
publisher was aware of a trademark claim, the designations have been printed with initial
capital letters or in all capitals.
The authors and publisher have taken care in the preparation of this book, but make no
expressed or implied warranty of any kind and assume no responsibility for errors or
omissions. No liability is assumed for incidental or consequential damages in connection
with or arising out of the use of the information or programs contained herein.
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs; and content
particular to your business, training goals, marketing focus, or branding interests), please
contact our corporate sales department at corpsales@pearsoned.com or (800) 382-3419.
For government sales inquiries, please contact governmentsales@pearsoned.com.
For questions about sales outside the U.S., please contact international@pearsoned.com.
Visit us on the Web: informit.com/ph
Library of Congress Cataloging-in-Publication Data
Pfleeger, Charles P., 1948–
Security in computing / Charles P. Pfleeger, Shari Lawrence Pfleeger, Jonathan Margulies.—Fifth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-0-13-408504-3 (hardcover : alk. paper)—ISBN 0-13-408504-3 (hardcover : alk. paper)
1. Computer security. 2. Data protection. 3. Privacy, Right of. I. Pfleeger, Shari Lawrence. II. Margulies, Jonathan. III. Title.
QA76.9.A25P45 2015
005.8—dc23
2014038579
Copyright © 2015 Pearson Education, Inc.
All rights reserved. Printed in the United States of America. This publication is
protected by copyright, and permission must be obtained from the publisher prior to any
prohibited reproduction, storage in a retrieval system, or transmission in any form or by
any means, electronic, mechanical, photocopying, recording, or likewise. To obtain
permission to use material from this work, please submit a written request to Pearson
Education, Inc., Permissions Department, One Lake Street, Upper Saddle River, New
Jersey 07458, or you may fax your request to (201) 236-3290.
ISBN-13: 978-0-13-408504-3
ISBN-10: 0-13-408504-3
Text printed in the United States on recycled paper at Courier in Westford, Massachusetts.
First printing, January 2015
Executive Editor
Bernard Goodwin
Editorial Assistant
Michelle Housley
Managing Editor
John Fuller
Project Editor
Elizabeth Ryan
Copy Editor
Mary Lou Nohr
Proofreader
Linda Begley
Cover Designer
Alan Clements
Compositor
Shepherd, Inc.
To Willis Ware, a hero of
computer security and privacy.
Contents
Foreword
Preface
Acknowledgments
About the Authors
Chapter 1 Introduction
1.1 What Is Computer Security?
Values of Assets
The Vulnerability–Threat–Control Paradigm
1.2 Threats
Confidentiality
Integrity
Availability
Types of Threats
Types of Attackers
1.3 Harm
Risk and Common Sense
Method–Opportunity–Motive
1.4 Vulnerabilities
1.5 Controls
1.6 Conclusion
1.7 What’s Next?
1.8 Exercises
Chapter 2 Toolbox: Authentication, Access Control, and Cryptography
2.1 Authentication
Identification Versus Authentication
Authentication Based on Phrases and Facts: Something You Know
Authentication Based on Biometrics: Something You Are
Authentication Based on Tokens: Something You Have
Federated Identity Management
Multifactor Authentication
Secure Authentication
2.2 Access Control
Access Policies
Implementing Access Control
Procedure-Oriented Access Control
Role-Based Access Control
2.3 Cryptography
Problems Addressed by Encryption
Terminology
DES: The Data Encryption Standard
AES: Advanced Encryption Standard
Public Key Cryptography
Public Key Cryptography to Exchange Secret Keys
Error Detecting Codes
Trust
Certificates: Trustable Identities and Public Keys
Digital Signatures—All the Pieces
2.4 Exercises
Chapter 3 Programs and Programming
3.1 Unintentional (Nonmalicious) Programming
Oversights
Buffer Overflow
Incomplete Mediation
Time-of-Check to Time-of-Use
Undocumented Access Point
Off-by-One Error
Integer Overflow
Unterminated Null-Terminated String
Parameter Length, Type, and Number
Unsafe Utility Program
Race Condition
3.2 Malicious Code—Malware
Malware—Viruses, Trojan Horses, and Worms
Technical Details: Malicious Code
3.3 Countermeasures
Countermeasures for Users
Countermeasures for Developers
Countermeasures Specifically for Security
Countermeasures that Don’t Work
Conclusion
Exercises
Chapter 4 The Web—User Side
4.1 Browser Attacks
Browser Attack Types
How Browser Attacks Succeed: Failed Identification and Authentication
4.2 Web Attacks Targeting Users
False or Misleading Content
Malicious Web Content
Protecting Against Malicious Web Pages
4.3 Obtaining User or Website Data
Code Within Data
Website Data: A User’s Problem, Too
Foiling Data Attacks
4.4 Email Attacks
Fake Email
Fake Email Messages as Spam
Fake (Inaccurate) Email Header Data
Phishing
Protecting Against Email Attacks
4.5 Conclusion
4.6 Exercises
Chapter 5 Operating Systems
5.1 Security in Operating Systems
Background: Operating System Structure
Security Features of Ordinary Operating Systems
A Bit of History
Protected Objects
Operating System Tools to Implement Security Functions
5.2 Security in the Design of Operating Systems
Simplicity of Design
Layered Design
Kernelized Design
Reference Monitor
Correctness and Completeness
Secure Design Principles
Trusted Systems
Trusted System Functions
The Results of Trusted Systems Research
5.3 Rootkit
Phone Rootkit
Rootkit Evades Detection
Rootkit Operates Unchecked
Sony XCP Rootkit
TDSS Rootkits
Other Rootkits
5.4 Conclusion
5.5 Exercises
Chapter 6 Networks
6.1 Network Concepts
Background: Network Transmission Media
Background: Protocol Layers
Background: Addressing and Routing
Part I—War on Networks: Network Security Attacks
6.2 Threats to Network Communications
Interception: Eavesdropping and Wiretapping
Modification, Fabrication: Data Corruption
Interruption: Loss of Service
Port Scanning
Vulnerability Summary
6.3 Wireless Network Security
WiFi Background
Vulnerabilities in Wireless Networks
Failed Countermeasure: WEP (Wired Equivalent Privacy)
Stronger Protocol Suite: WPA (WiFi Protected Access)
6.4 Denial of Service
Example: Massive Estonian Web Failure
How Service Is Denied
Flooding Attacks in Detail
Network Flooding Caused by Malicious Code
Network Flooding by Resource Exhaustion
Denial of Service by Addressing Failures
Traffic Redirection
DNS Attacks
Exploiting Known Vulnerabilities
Physical Disconnection
6.5 Distributed Denial-of-Service
Scripted Denial-of-Service Attacks
Bots
Botnets
Malicious Autonomous Mobile Agents
Autonomous Mobile Protective Agents
Part II—Strategic Defenses: Security Countermeasures
6.6 Cryptography in Network Security
Network Encryption
Browser Encryption
Onion Routing
IP Security Protocol Suite (IPsec)
Virtual Private Networks
System Architecture
6.7 Firewalls
What Is a Firewall?
Design of Firewalls
Types of Firewalls
Personal Firewalls
Comparison of Firewall Types
Example Firewall Configurations
Network Address Translation (NAT)
Data Loss Prevention
6.8 Intrusion Detection and Prevention Systems
Types of IDSs
Other Intrusion Detection Technology
Intrusion Prevention Systems
Intrusion Response
Goals for Intrusion Detection Systems
IDS Strengths and Limitations
6.9 Network Management
Management to Ensure Service
Security Information and Event Management (SIEM)
6.10 Conclusion
6.11 Exercises
Chapter 7 Databases
7.1 Introduction to Databases
Concept of a Database
Components of Databases
Advantages of Using Databases
7.2 Security Requirements of Databases
Integrity of the Database
Element Integrity
Auditability
Access Control
User Authentication
Availability
Integrity/Confidentiality/Availability
7.3 Reliability and Integrity
Protection Features from the Operating System
Two-Phase Update
Redundancy/Internal Consistency
Recovery
Concurrency/Consistency
7.4 Database Disclosure
Sensitive Data
Types of Disclosures
Preventing Disclosure: Data Suppression and Modification
Security Versus Precision
7.5 Data Mining and Big Data
Data Mining
Big Data
7.6 Conclusion
Exercises
Chapter 8 Cloud Computing
8.1 Cloud Computing Concepts
Service Models
Deployment Models
8.2 Moving to the Cloud
Risk Analysis
Cloud Provider Assessment
Switching Cloud Providers
Cloud as a Security Control
8.3 Cloud Security Tools and Techniques
Data Protection in the Cloud
Cloud Application Security
Logging and Incident Response
8.4 Cloud Identity Management
Security Assertion Markup Language
OAuth
OAuth for Authentication
8.5 Securing IaaS
Public IaaS Versus Private Network Security
8.6 Conclusion
Where the Field Is Headed
To Learn More
8.7 Exercises
Chapter 9 Privacy
9.1 Privacy Concepts
Aspects of Information Privacy
Computer-Related Privacy Problems
9.2 Privacy Principles and Policies
Fair Information Practices
U.S. Privacy Laws
Controls on U.S. Government Websites
Controls on Commercial Websites
Non-U.S. Privacy Principles
Individual Actions to Protect Privacy
Governments and Privacy
Identity Theft
9.3 Authentication and Privacy
What Authentication Means
Conclusions
9.4 Data Mining
Government Data Mining
Privacy-Preserving Data Mining
9.5 Privacy on the Web
Understanding the Online Environment
Payments on the Web
Site and Portal Registrations
Whose Page Is This?
Precautions for Web Surfing
Spyware
Shopping on the Internet
9.6 Email Security
Where Does Email Go, and Who Can Access It?
Interception of Email
Monitoring Email
Anonymous, Pseudonymous, and Disappearing Email
Spoofing and Spamming
Summary
9.7 Privacy Impacts of Emerging Technologies
Radio Frequency Identification
Electronic Voting
VoIP and Skype
Privacy in the Cloud
Conclusions on Emerging Technologies
9.8 Where the Field Is Headed
9.9 Conclusion
9.10 Exercises
Chapter 10 Management and Incidents
10.1 Security Planning
Organizations and Security Plans
Contents of a Security Plan
Security Planning Team Members
Assuring Commitment to a Security Plan
10.2 Business Continuity Planning
Assess Business Impact
Develop Strategy
Develop the Plan
10.3 Handling Incidents
Incident Response Plans
Incident Response Teams
10.4 Risk Analysis
The Nature of Risk
Steps of a Risk Analysis
Arguments For and Against Risk Analysis
10.5 Dealing with Disaster
Natural Disasters
Power Loss
Human Vandals
Interception of Sensitive Information
Contingency Planning
Physical Security Recap
10.6 Conclusion
10.7 Exercises
Chapter 11 Legal Issues and Ethics
11.1 Protecting Programs and Data
Copyrights
Patents
Trade Secrets
Special Cases
11.2 Information and the Law
Information as an Object
Legal Issues Relating to Information
The Legal System
Summary of Protection for Computer Artifacts
11.3 Rights of Employees and Employers
Ownership of Products
Employment Contracts
11.4 Redress for Software Failures
Selling Correct Software
Reporting Software Flaws
11.5 Computer Crime
Why a Separate Category for Computer Crime Is Needed
Why Computer Crime Is Hard to Define
Why Computer Crime Is Hard to Prosecute
Examples of Statutes
International Dimensions
Why Computer Criminals Are Hard to Catch
What Computer Crime Does Not Address
Summary of Legal Issues in Computer Security
11.6 Ethical Issues in Computer Security
Differences Between the Law and Ethics
Studying Ethics
Ethical Reasoning
11.7 Incident Analysis with Ethics
Situation I: Use of Computer Services
Situation II: Privacy Rights
Situation III: Denial of Service
Situation IV: Ownership of Programs
Situation V: Proprietary Resources
Situation VI: Fraud
Situation VII: Accuracy of Information
Situation VIII: Ethics of Hacking or Cracking
Situation IX: True Representation
Conclusion of Computer Ethics
Conclusion
Exercises
Chapter 12 Details of Cryptography
12.1 Cryptology
Cryptanalysis
Cryptographic Primitives
One-Time Pads
Statistical Analysis
What Makes a “Secure” Encryption Algorithm?
12.2 Symmetric Encryption Algorithms
DES
AES
RC2, RC4, RC5, and RC6
12.3 Asymmetric Encryption with RSA
The RSA Algorithm
Strength of the RSA Algorithm
12.4 Message Digests
Hash Functions
One-Way Hash Functions
Message Digests
12.5 Digital Signatures
Elliptic Curve Cryptosystems
El Gamal and Digital Signature Algorithms
The NSA–Cryptography Controversy of 2012
12.6 Quantum Cryptography
Quantum Physics
Photon Reception
Cryptography with Photons
Implementation
12.7 Conclusion
Chapter 13 Emerging Topics
13.1 The Internet of Things
Medical Devices
Mobile Phones
Security in the Internet of Things
13.2 Economics
Making a Business Case
Quantifying Security
Current Research and Future Directions
13.3 Electronic Voting
What Is Electronic Voting?
What Is a Fair Election?
What Are the Critical Issues?
13.4 Cyber Warfare
What Is Cyber Warfare?
Possible Examples of Cyber Warfare
Critical Issues
13.5 Conclusion
Bibliography
Index
Foreword
From the authors: Willis Ware kindly wrote the foreword that we published in
both the third and fourth editions of Security in Computing. In his foreword he
covers some of the early days of computer security, describing concerns that are
as valid today as they were in those earlier days.
Willis chose to sublimate his name and efforts to the greater good of the
projects he worked on. In fact, his thoughtful analysis and persuasive leadership
contributed much to the final outcome of these activities. Few people recognize
Willis’s name today; more people are familiar with the European Union Data
Protection Directive that is a direct descendant of the report [WAR73a] from his
committee for the U.S. Department of Health, Education, and Welfare. Willis would have
wanted it that way: the emphasis on the ideas and not on his name.
Unfortunately, Willis died in November 2013 at age 93. We think the lessons
he wrote about in his Foreword are still important to our readers. Thus, with
both respect and gratitude, we republish his words here.
In the 1950s and 1960s, the prominent conference gathering places for practitioners and
users of computer technology were the twice yearly Joint Computer Conferences (JCCs)
—initially called the Eastern and Western JCCs, but later renamed the Spring and Fall
JCCs and even later, the annual National (AFIPS) Computer Conference. From this
milieu, the topic of computer security—later to be called information system security and
currently also referred to as “protection of the national information infrastructure”—
moved from the world of classified defense interests into public view.
A few people—Robert L. Patrick, John P. Haverty, and myself among others—all then
at The RAND Corporation (as its name was then known) had been talking about the
growing dependence of the country and its institutions on computer technology. It
concerned us that the installed systems might not be able to protect themselves and their
data against intrusive and destructive attacks. We decided that it was time to bring the
security aspect of computer systems to the attention of the technology and user
communities.
The enabling event was the development within the National Security Agency (NSA) of
a remote-access time-sharing system with a full set of security access controls, running on
a Univac 494 machine, and serving terminals and users not only within the headquarters
building at Fort George G. Meade, Maryland, but also worldwide. Fortuitously, I knew
details of the system.
Persuading two others from RAND to help—Dr. Harold Peterson and Dr. Rein Turn—
plus Bernard Peters of NSA, I organized a group of papers and presented it to the SJCC
conference management as a ready-made additional paper session to be chaired by me. [1]
The conference accepted the offer, and the session was presented at the Atlantic City (NJ)
Convention Hall in 1967.
Soon thereafter and driven by a request from a defense contractor to include both
defense classified and business applications concurrently in a single mainframe machine
functioning in a remote-access mode, the Department of Defense, acting through the
Advanced Research Projects Agency (ARPA) and later the Defense Science Board (DSB),
organized a committee, which I chaired, to study the issue of security controls for
computer systems. The intent was to produce a document that could be the basis for
formulating a DoD policy position on the matter.
The report of the committee was initially published as a classified document and was
formally presented to the sponsor (the DSB) in January 1970. It was later declassified and
republished (by The RAND Corporation) in October 1979. [2] It was widely circulated
and became nicknamed “the Ware report.” The report and a historical introduction are
available on the RAND website. [3]
Subsequently, the United States Air Force (USAF) sponsored another committee
chaired by James P. Anderson. [4] Its report, published in 1972, recommended a 6-year
R&D security program totaling some $8M. [5] The USAF responded and funded several
projects, three of which were to design and implement an operating system with security
controls for a specific computer.
Eventually these activities led to the “Criteria and Evaluation” program sponsored by
the NSA. It culminated in the “Orange Book” [6] in 1983 and subsequently its supporting
array of documents, which were nicknamed “the rainbow series.” [7] Later, in the 1980s
and on into the 1990s, the subject became an international one leading to the ISO standard
known as the “Common Criteria.” [8]
It is important to understand the context in which system security was studied in the
early decades. The defense establishment had a long history of protecting classified
information in document form. It had evolved a very elaborate scheme for
compartmenting material into groups, sub-groups and super-groups, each requiring a
specific personnel clearance and need-to-know as the basis for access. [9] It also had a
centuries-long legacy of encryption technology and experience for protecting classified
information in transit. Finally, it understood the personnel problem and the need to
establish the trustworthiness of its people. And it certainly understood the physical
security matter.
Thus, the computer security issue, as it was understood in the 1960s and even later, was
how to create in a computer system a group of access controls that would implement or
emulate the processes of the prior paper world, plus the associated issues of protecting
such software against unauthorized change, subversion and illicit use, and of embedding
the entire system in a secure physical environment with appropriate management
oversights and operational doctrine and procedures. The poorly understood aspect of
security was primarily the software issue with, however, a collateral hardware aspect;
namely, the risk that it might malfunction—or be penetrated—and subvert the proper
behavior of software. For the related aspects of communications, personnel, and physical
security, there was a plethora of rules, regulations, doctrine and experience to cover them.
It was largely a matter of merging all of it with the hardware/software aspects to yield an
overall secure system and operating environment.
However, the world has now changed and in essential ways. The desk-top computer and
workstation have appeared and proliferated widely. The Internet is flourishing and the
reality of a World Wide Web is in place. Networking has exploded and communication
among computer systems is the rule, not the exception. Many commercial transactions are
now web-based; many commercial communities—the financial one in particular—have
moved into a web posture. The “user” of any computer system can literally be anyone in
the world. Networking among computer systems is ubiquitous; information-system
outreach is the goal.
The net effect of all of this has been to expose the computer-based information system
—its hardware, its software, its software processes, its databases, its communications—to
an environment over which no one—not end-user, not network administrator or system
owner, not even government—has control. What must be done is to provide appropriate
technical, procedural, operational and environmental safeguards against threats as they
might appear or be imagined, embedded in a societally acceptable legal framework.
And appear threats did—from individuals and organizations, national and international.
The motivations to penetrate systems for evil purpose or to create malicious software—
generally with an offensive or damaging consequence—vary from personal intellectual
satisfaction to espionage, to financial reward, to revenge, to civil disobedience, and to
other reasons. Information-system security has moved from a largely self-contained
bounded environment interacting with a generally known and disciplined user community
to one of worldwide scope with a body of users that may not be known and are not
necessarily trusted. Importantly, security controls now must deal with circumstances over
which there is largely no control or expectation of avoiding their impact. Computer
security, as it has evolved, shares a similarity with liability insurance; they each face a
threat environment that is known in a very general way and can generate attacks over a
broad spectrum of possibilities; but the exact details or even time or certainty of an attack
is unknown until an event has occurred.
On the other hand, the modern world thrives on information and its flows; the
contemporary world, society and institutions cannot function without their computer-communication-based information systems. Hence, these systems must be protected in all
dimensions—technical, procedural, operational, environmental. The system owner and its
staff have become responsible for protecting the organization’s information assets.
Progress has been slow, in large part because the threat has not been perceived as real or
as damaging enough; but also in part because the perceived cost of comprehensive
information system security is seen as too high compared to the risks—especially the
financial consequences—of not doing it. Managements, whose support with appropriate
funding is essential, have been slow to be convinced.
This book addresses the broad sweep of issues above: the nature of the threat and
system vulnerabilities (Chapter 1); cryptography (Chapters 2 and 12); software
vulnerabilities (Chapter 3); the Common Criteria (Chapter 5); the World Wide Web and
Internet (Chapters 4 and 6); managing risk (Chapter 10); and legal, ethical and privacy
issues (Chapter 11). The book also describes security controls that are currently available
such as encryption protocols, software development practices, firewalls, and intrusion-detection systems. Overall, this book provides a broad and sound foundation for the
information-system specialist who is charged with planning and/or organizing and/or
managing and/or implementing a comprehensive information-system security program.
Yet to be solved are many technical aspects of information security—R&D for
hardware, software, systems, and architecture; and the corresponding products.
Notwithstanding, technology per se is not the long pole in the tent of progress.
Organizational and management motivation and commitment to get the security job done
is. Today, the collective information infrastructure of the country and of the world is
slowly moving up the learning curve; every mischievous or malicious event helps to push
it along. The terrorism-based events of recent times are helping to drive it. Is it far enough
up the curve to have reached an appropriate balance between system safety and threat?
Almost certainly, the answer is “no, not yet; there is a long way to go.” [10]
—Willis H. Ware
RAND
Santa Monica, California
Citations
1. “Security and Privacy in Computer Systems,” Willis H. Ware; RAND, Santa
Monica, CA; P-3544, April 1967. Also published in Proceedings of the 1967
Spring Joint Computer Conference (later renamed to AFIPS Conference
Proceedings), pp 279 seq, Vol. 30, 1967.
“Security Considerations in a Multi-Programmed Computer System,”
Bernard Peters; Proceedings of the 1967 Spring Joint Computer Conference
(later renamed to AFIPS Conference Proceedings), pp 283 seq, vol 30,
1967.
“Practical Solutions to the Privacy Problem,” Willis H. Ware; RAND,
Santa Monica, CA; P-3544, April 1967. Also published in Proceedings of
the 1967 Spring Joint Computer Conference (later renamed to AFIPS
Conference Proceedings), pp 301 seq, Vol. 30, 1967.
“System Implications of Information Privacy,” Harold E. Peterson and Rein
Turn; RAND, Santa Monica, CA; P-3504, April 1967. Also published in
Proceedings of the 1967 Spring Joint Computer Conference (later renamed
to AFIPS Conference Proceedings), pp 305 seq, vol. 30, 1967.
2. “Security Controls for Computer Systems,” (Report of the Defense Science
Board Task Force on Computer Security), RAND, R-609-1-PR. Initially
published in January 1970 as a classified document. Subsequently, declassified
and republished October 1979.
3. http://rand.org/publications/R/R609.1/R609.1.html, “Security Controls for
Computer Systems”; R-609.1, RAND, 1979
http://rand.org/publications/R/R609.1/intro.html, Historical setting for R-609.1
4. “Computer Security Technology Planning Study,” James P. Anderson; ESD-TR-73-51, ESD/AFSC, Hanscom AFB, Bedford, MA; October 1972.
5. All of these documents are cited in the bibliography of this book. For images
of these historical papers on a CD-ROM, see the “History of Computer Security
Project, Early Papers Part 1,” Professor Matt Bishop; Department of Computer
Science, University of California at Davis.
http://seclab.cs.ucdavis.edu/projects/history
6. “DoD Trusted Computer System Evaluation Criteria,” DoD Computer
Security Center, National Security Agency, Ft George G. Meade, Maryland;
CSC-STD-001-83; Aug 15, 1983.
7. So named because the cover of each document in the series had a unique and
distinctively colored cover page. For example, the “Red Book” is “Trusted
Network Interpretation,” National Computer Security Center, National Security
Agency, Ft. George G. Meade, Maryland; NCSC-TG-005, July 31, 1987.
USGPO Stock number 008-000-00486-2.
8. “A Retrospective on the Criteria Movement,” Willis H. Ware; RAND, Santa
Monica, CA; P-7949, 1995. http://rand.org/pubs/papers/P7949/
9. This scheme is nowhere, to my knowledge, documented explicitly. However,
its complexity can be inferred by a study of Appendices A and B of R-609.1
(item [2] above).
10. “The Cyberposture of the National Information Infrastructure,” Willis H. Ware;
RAND, Santa Monica, CA; MR-976-OSTP, 1998. Available online at:
http://www.rand.org/publications/MR/MR976/mr976.html.
Preface
Tablets, smartphones, TV set-top boxes, GPS navigation devices, exercise monitors,
home security stations, even washers and dryers come with Internet connections by which
data from and about you go to places over which you have little visibility or control. At
the same time, the list of retailers suffering massive losses of customer data continues to
grow: Home Depot, Target, T.J. Maxx, P.F. Chang’s, Sally Beauty. On the one hand, people want the convenience and benefits that added connectivity brings; on the other hand, people are worried, and some are seriously harmed, by the impact of such incidents.
Computer security brings these two threads together as technology races forward with
smart products whose designers omit the basic controls that can prevent or limit
catastrophes.
To some extent, people sigh and expect security failures in basic products and complex
systems. But these failures do not have to be. Every computer professional can learn how
such problems occur and how to counter them. Computer security has been around as a
field since the 1960s, and it has developed excellent research, leading to a good
understanding of the threat and how to manage it.
One factor that turns off many people is the language: Complicated terms such as
polymorphic virus, advanced persistent threat, distributed denial-of-service attack,
inference and aggregation, multifactor authentication, key exchange protocol, and
intrusion detection system do not exactly roll off the tongue. Other terms sound intriguing
but opaque, such as worm, botnet, rootkit, man in the browser, honeynet, sandbox, and
script kiddie. The language of advanced mathematics or microbiology is no less
confounding, and the Latin terminology of medicine and law separates those who know it
from those who do not. But the terms and concepts of computer security really have
straightforward, easy-to-learn meanings and uses.
Vulnerability: weakness
Threat: condition that exercises vulnerability
Incident: vulnerability + threat
Control: reduction of threat or vulnerability
The premise of computer security is quite simple: Vulnerabilities are weaknesses in
products, systems, protocols, algorithms, programs, interfaces, and designs. A threat is a
condition that could exercise a vulnerability. An incident occurs when a threat does exploit
a vulnerability, causing harm. Finally, people add controls or countermeasures to prevent,
deflect, diminish, detect, diagnose, and respond to threats. All of computer security is built
from that simple framework. This book is about bad things that can happen with
computers and ways to protect our computing.
Why Read This Book?
Admit it. You know computing entails serious risks to the privacy of your personal data,
the integrity of your data, or the operation of your computer. Risk is a fact of life: Crossing
the street is risky, perhaps more so in some places than others, but you still cross the street.
As a child you learned to stop and look both ways before crossing. As you became older
you learned to gauge the speed of oncoming traffic and determine whether you had the
time to cross. At some point you developed a sense of whether an oncoming car would
slow down or yield. We hope you never had to practice this, but sometimes you have to
decide whether darting into the street without looking is the best means of escaping
danger. The point is all these matters depend on knowledge and experience. We want to
help you develop comparable knowledge and experience with respect to the risks of secure
computing.
The same thing can be said about computer security in everything from personal
devices to complex commercial systems: You start with a few basic terms, principles, and
concepts. Then you learn the discipline by seeing those basics reappear in numerous
situations, including programs, operating systems, networks, and cloud computing. You
pick up a few fundamental tools, such as authentication, access control, and encryption,
and you understand how they apply in defense strategies. You start to think like an
attacker, predicting the weaknesses that could be exploited, and then you shift to selecting
defenses to counter those attacks. This last stage of playing both offense and defense
makes computer security a creative and challenging activity.
Uses for and Users of This Book
This book is intended for people who want to learn about computer security; if you have
read this far you may well be such a person. This book is intended for three groups of
people: college and university students, computing professionals and managers, and users
of all kinds of computer-based systems. All want to know the same thing: how to control
the risk of computer security. But you may differ in how much information you need about
particular topics: Some readers want a broad survey, while others want to focus on
particular topics, such as networks or program development.
This book should provide the breadth and depth that most readers want. The book is
organized by general area of computing, so that readers with particular interests can find
information easily.
Organization of This Book
The chapters of this book progress in an orderly manner, from general security concerns
to the particular needs of specialized applications, and then to overarching management
and legal issues. Thus, this book progresses through six key areas of interest:
1. Introduction: threats, vulnerabilities, and controls
2. The security practitioner’s “toolbox”: identification and authentication, access
control, and encryption
3. Application areas of computer security practice: programs, user–Internet
interaction, operating systems, networks, data and databases, and cloud
computing
4. Cross-cutting disciplines: privacy, management, law and ethics
5. Details of cryptography
6. Emerging application domains
The first chapter begins like many other expositions: by laying groundwork. In Chapter
1 we introduce terms and definitions, and give some examples to justify how these terms
are used. In Chapter 2 we begin the real depth of the field by introducing three concepts
that form the basis of many defenses in computer security: identification and
authentication, access control, and encryption. We describe different ways of
implementing each of these, explore strengths and weaknesses, and tell of some recent
advances in these technologies.
Then we advance through computing domains, from the individual user outward. In
Chapter 3 we begin with individual programs, ones you might write and those you only
use. Both kinds are subject to potential attacks, and we examine the nature of some of
those attacks and how they could have been prevented. In Chapter 4 we move on to a type
of program with which most users today are quite familiar: the browser, as a gateway to
the Internet. The majority of attacks today are remote, carried from a distant attacker
across a network, usually the Internet. Thus, it makes sense to study Internet-borne
malicious code. But this chapter’s focus is on the harm launched remotely, not on the
network infrastructure by which it travels; we defer the network concepts to Chapter 6. In
Chapter 5 we consider operating systems, a strong line of defense between a user and
attackers. We also consider ways to undermine the strength of the operating system itself.
Chapter 6 returns to networks, but this time we do look at architecture and technology,
including denial-of-service attacks that can happen only in a network. Data, their
collection and protection, form the topic of Chapter 7, in which we look at database
management systems and big data applications. Finally, in Chapter 8 we explore cloud
computing, a relatively recent addition to the computing landscape, but one that brings its
own vulnerabilities and protections.
In Chapters 9 through 11 we address what we have termed the intersecting disciplines:
First, in Chapter 9 we explore privacy, a familiar topic that relates to most of the six
domains from programs to clouds. Then Chapter 10 takes us to the management side of
computer security: how management plans for and addresses computer security problems.
Finally, Chapter 11 explores how laws and ethics help us control computer behavior.
We introduced cryptography in Chapter 2. But the field of cryptography involves entire
books, courses, conferences, journals, and postgraduate programs of study. And this book
needs to cover many important topics in addition to cryptography. Thus, we made two
critical decisions: First, we treat cryptography as a tool, not as a field of study. An
automobile mechanic does not study the design of cars, weighing such factors as
aerodynamics, fuel consumption, interior appointment, and crash resistance; a mechanic
accepts a car as a given and learns how to find and fix faults with the engine and other
mechanical parts. Similarly, we want our readers to be able to use cryptography to quickly
address security problems; hence we briefly visit popular uses of cryptography in Chapter
2. Our second critical decision was to explore the breadth of cryptography slightly more in
a later chapter, Chapter 12. But as we point out, entire books have been written on
cryptography, so our later chapter gives an overview of more detailed work that interested
readers can find elsewhere.
Our final chapter detours to four areas having significant computer security hazards.
These are rapidly advancing topics for which the computer security issues are much in
progress right now. The so-called Internet of Things, the concept of connecting many
devices to the Internet, raises potential security threats waiting to be explored. Economics
govern many security decisions, so security professionals need to understand how
economics and security relate. Convenience is raising interest in using computers to
implement elections; the easy steps of collecting vote totals have been done by many
jurisdictions, but the hard part of organizing fair online registration and ballot-casting has
been done in only a small number of demonstration elections. And the use of computers in
warfare is a growing threat. Again, a small number of modest-sized attacks on computing
devices have shown the feasibility of this type of campaign, but security professionals and
ordinary citizens need to understand the potential—both good and bad—of this type of
attack.
How to Read This Book
What background should you have to appreciate this book? The only assumption is an
understanding of programming and computer systems. Someone who is an advanced
undergraduate or graduate student in computing certainly has that background, as does a
professional designer or developer of computer systems. A user who wants to understand
more about how programs work can learn from this book, too; we provide the necessary
background on concepts of operating systems or networks, for example, before we address
the related security concerns.
This book can be used as a textbook in a one- or two-semester course in computer
security. The book functions equally well as a reference for a computer professional or as
a supplement to an intensive training course. And the index and extensive bibliography
make it useful as a handbook to explain significant topics and point to key articles in the
literature. The book has been used in classes throughout the world; instructors often
design one-semester courses that focus on topics of particular interest to the students or
that relate well to the rest of a curriculum.
What Is New in This Book
This is the fifth edition of Security in Computing, first published in 1989. Since then,
the specific threats, vulnerabilities, and controls have changed, as have many of the
underlying technologies to which computer security applies. However, many basic
concepts have remained the same.
Most obvious to readers familiar with earlier editions will be some new chapters,
specifically, on user–web interaction and cloud computing, as well as the topics we raise
in the emerging topics chapter. Furthermore, pulling together the three fundamental
controls in Chapter 2 is a new structure. Those are the big changes, but every chapter has
had many smaller changes, as we describe new attacks or expand on points that have
become more important.
One other feature some may notice is the addition of a third coauthor. Jonathan
Margulies joins us as an essential member of the team that produced this revision. He is
currently director of the security practice at Qmulos, a newly launched security consulting firm. He brings many years of experience with Sandia National Labs and the National Institute of Standards and Technology. His focus meshes nicely with our existing skills to
extend the breadth of this book.
Acknowledgments
It is increasingly difficult to acknowledge all the people who have influenced this book.
Colleagues and friends have contributed their knowledge and insight, often without
knowing their impact. By arguing a point or sharing explanations of concepts, our
associates have forced us to question or rethink what we know.
We thank our associates in at least two ways. First, we have tried to include references
to their written works. References in the text cite specific papers relating to particular
thoughts or concepts, but the bibliography also includes broader works that have played a
more subtle role in shaping our approach to security. So, to all the cited authors, many of
whom are friends and colleagues, we happily acknowledge your positive influence on this
book.
Rather than name individuals, we thank the organizations in which we have interacted
with creative, stimulating, and challenging people from whom we learned a lot. These
places include Trusted Information Systems, the Contel Technology Center, the Centre for
Software Reliability of the City University of London, Arca Systems, Exodus
Communications, The RAND Corporation, Sandia National Lab, Cable & Wireless, the
National Institute of Standards and Technology, the Institute for Information Infrastructure
Protection, Qmulos, and the Editorial Board of IEEE Security & Privacy. If you worked
with us at any of these locations, chances are high that your imprint can be found in this
book. And for all the side conversations, debates, arguments, and light moments, we are
grateful.
About the Authors
Charles P. Pfleeger is an internationally known expert on computer and
communications security. He was originally a professor at the University of Tennessee,
leaving there to join computer security research and consulting companies Trusted
Information Systems and Arca Systems (later Exodus Communications and Cable and
Wireless). With Trusted Information Systems he was Director of European Operations and
Senior Consultant. With Cable and Wireless he was Director of Research and a member of
the staff of the Chief Security Officer. He was chair of the IEEE Computer Society
Technical Committee on Security and Privacy.
Shari Lawrence Pfleeger is widely known as a software engineering and computer
security researcher, most recently as a Senior Computer Scientist with the Rand
Corporation and as Research Director of the Institute for Information Infrastructure
Protection. She is currently Editor-in-Chief of IEEE Security & Privacy magazine.
Jonathan Margulies is the CTO of Qmulos, a cybersecurity consulting firm. After
receiving his master’s degree in Computer Science from Cornell University, Mr. Margulies
spent nine years at Sandia National Labs, researching and developing solutions to protect
national security and critical infrastructure systems from advanced persistent threats. He
then went on to NIST’s National Cybersecurity Center of Excellence, where he worked
with a variety of critical infrastructure companies to create industry-standard security
architectures. In his free time, Mr. Margulies edits the “Building Security In” section of
IEEE Security & Privacy magazine.
1. Introduction
In this chapter:
• Threats, vulnerabilities, and controls
• Confidentiality, integrity, and availability
• Attackers and attack types; method, opportunity, and motive
• Valuing assets
On 11 February 2013, residents of Great Falls, Montana, received the following warning
on their televisions [INF13]. The transmission displayed a message banner on the bottom
of the screen (as depicted in Figure 1-1).
FIGURE 1-1 Emergency Broadcast Warning
And the following alert was broadcast:
[Beep Beep Beep: the sound pattern of the U.S. government Emergency
Alert System. The following text then scrolled across the screen:]
Civil authorities in your area have reported that the bodies of the dead are
rising from their graves and attacking the living. Follow the messages on
screen that will be updated as information becomes available.
Do not attempt to approach or apprehend these bodies as they are
considered extremely dangerous. This warning applies to all areas
receiving this broadcast.
[Beep Beep Beep]
The warning signal sounded authentic; it had the distinctive tone people recognize for
warnings of serious emergencies such as hazardous weather or a natural disaster. And the
text was displayed across a live broadcast television program. On the other hand, bodies
rising from their graves sounds suspicious.
What would you have done?
Only four people contacted police for assurance that the warning was indeed a hoax. As
you can well imagine, however, a different message could have caused thousands of
people to jam the highways trying to escape. (On 30 October 1938 Orson Welles
performed a radio adaptation of the H. G. Wells novel The War of the Worlds that did cause a
minor panic of people believing that Martians had landed and were wreaking havoc in
New Jersey.)
The perpetrator of this hoax was never caught, nor has it become clear exactly how it
was done. Likely someone was able to access the system that feeds emergency broadcasts
to local radio and television stations. In other words, a hacker probably broke into a
computer system.
You encounter computers daily in countless situations, often in cases in which you are
scarcely aware a computer is involved, like the emergency alert system for broadcast
media. These computers move money, control airplanes, monitor health, lock doors, play
music, heat buildings, regulate hearts, deploy airbags, tally votes, direct communications,
regulate traffic, and do hundreds of other things that affect lives, health, finances, and
well-being. Most of the time these computers work just as they should. But occasionally
they do something horribly wrong, because of either a benign failure or a malicious attack.
This book is about the security of computers, their data, and the devices and objects to
which they relate. In this book you will learn some of the ways computers can fail—or be
made to fail—and how to protect against those failures. We begin that study in the way
any good report does: by answering the basic questions of what, who, why, and how.
1.1 What Is Computer Security?
Computer security is the protection of the items you value, called the assets of a
computer or computer system. There are many types of assets, involving hardware,
software, data, people, processes, or combinations of these. To determine what to protect,
we must first identify what has value and to whom.
A computer device (including hardware, added components, and accessories) is
certainly an asset. Because most computer hardware is pretty useless without programs,
the software is also an asset. Software includes the operating system, utilities and device
handlers; applications such as word processing, media players or email handlers; and even
programs that you may have written yourself. Much hardware and software is off-the-shelf, meaning that it is commercially available (not custom-made for your purpose) and
that you can easily get a replacement. The thing that makes your computer unique and
important to you is its content: photos, tunes, papers, email messages, projects, calendar
information, ebooks (with your annotations), contact information, code you created, and
the like. Thus, data items on a computer are assets, too. Unlike most hardware and
software, data can be hard—if not impossible—to recreate or replace. These assets are all
shown in Figure 1-2.
FIGURE 1-2 Computer Objects of Value
These three things—hardware, software, and data—contain or express things like the
design for your next new product, the photos from your recent vacation, the chapters of
your new book, or the genome sequence resulting from your recent research. All of these
things represent intellectual endeavor or property, and they have value that differs from
one person or organization to another. It is that value that makes them assets worthy of
protection, and they are the elements we want to protect. Other assets—such as access to
data, quality of service, processes, human users, and network connectivity—deserve
protection, too; they are affected or enabled by the hardware, software, and data. So in
most cases, protecting hardware, software, and data covers these other assets as well.
Computer systems—hardware, software, and data—have value and
deserve security protection.
In this book, unless we specifically distinguish between hardware, software, and data,
we refer to all these assets as the computer system, or sometimes as the computer. And
because processors are embedded in so many devices, we also need to think about such
variations as mobile phones, implanted pacemakers, heating controllers, and automobiles.
Even if the primary purpose of the device is not computing, the device’s embedded
computer can be involved in security incidents and represents an asset worthy of
protection.
Values of Assets
After identifying the assets to protect, we next determine their value. We make value-based decisions frequently, even when we are not aware of them. For example, when you
go for a swim you can leave a bottle of water and a towel on the beach, but not your wallet
or cell phone. The difference relates to the value of the assets.
The value of an asset depends on the asset owner’s or user’s perspective, and it may be
independent of monetary cost, as shown in Figure 1-3. Your photo of your sister, worth
only a few cents in terms of paper and ink, may have high value to you and no value to
your roommate. Other items’ value depends on replacement cost; some computer data are
difficult or impossible to replace. For example, that photo of you and your friends at a
party may have cost you nothing, but it is invaluable because there is no other copy. On
the other hand, the DVD of your favorite film may have cost a significant portion of your
take-home pay, but you can buy another one if the DVD is stolen or corrupted. Similarly,
timing has bearing on asset value. For example, the value of the plans for a company’s
new product line is very high, especially to competitors. But once the new product is
released, the plans’ value drops dramatically.
FIGURE 1-3 Values of Assets
Assets’ values are personal, time dependent, and often imprecise.
The Vulnerability–Threat–Control Paradigm
The goal of computer security is protecting valuable assets. To study different ways of
protection, we use a framework that describes how assets may be harmed and how to
counter or mitigate that harm.
A vulnerability is a weakness in the system, for example, in procedures, design, or
implementation, that might be exploited to cause loss or harm. For instance, a particular
system may be vulnerable to unauthorized data manipulation because the system does not
verify a user’s identity before allowing data access.
A vulnerability is a weakness that could be exploited to cause harm.
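To make this example concrete, consider the following minimal sketch (our illustration, not code from any real system; all names are hypothetical): a lookup routine that returns stored data without ever verifying who is asking.

# Hypothetical sketch of the vulnerability described above: the system
# does not verify a user's identity before allowing data access.

RECORDS = {"alice": "alice's medical history"}

def read_record(requester: str, owner: str) -> str:
    # Weakness: nothing checks that `requester` may see `owner`'s data.
    return RECORDS[owner]

# Any party at all, authorized or not, can read the data:
print(read_record("mallory", "alice"))  # prints alice's medical history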
A threat to a computing system is a set of circumstances that has the potential to cause
loss or harm. To see the difference between a threat and a vulnerability, consider the
illustration in Figure 1-4. Here, a wall is holding water back. The water to the left of the
wall is a threat to the man on the right of the wall: The water could rise, overflowing onto
the man, or it could stay beneath the height of the wall, causing the wall to collapse. So the
threat of harm is the potential for the man to get wet, get hurt, or be drowned. For now, the
wall is intact, so the threat to the man is unrealized.
FIGURE 1-4 Threat and Vulnerability
A threat is a set of circumstances that could cause harm.
However, we can see a small crack in the wall—a vulnerability that threatens the man’s
security. If the water rises to or beyond the level of the crack, it will exploit the
vulnerability and harm the man.
There are many threats to a computer system, including human-initiated and computer-initiated ones. We have all experienced the results of inadvertent human errors, hardware
design flaws, and software failures. But natural disasters are threats, too; they can bring a
system down when the computer room is flooded or the data center collapses from an
earthquake, for example.
A human who exploits a vulnerability perpetrates an attack on the system. An attack
can also be launched by another system, as when one system sends an overwhelming flood
of messages to another, virtually shutting down the second system’s ability to function.
Unfortunately, we have seen this type of attack frequently, as denial-of-service attacks
deluge servers with more messages than they can handle. (We take a closer look at denial
of service in Chapter 6.)
How do we address these problems? We use a control or countermeasure as
protection. That is, a control is an action, device, procedure, or technique that removes or
reduces a vulnerability. In Figure 1-4, the man is placing his finger in the hole, controlling
the threat of water leaks until he finds a more permanent solution to the problem. In
general, we can describe the relationship between threats, controls, and vulnerabilities in
this way:
Controls prevent threats from exercising vulnerabilities.
A threat is blocked by control of a vulnerability.
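Continuing the earlier sketch (again a hypothetical illustration of ours, not the book's code), a control for that vulnerability verifies the requester's identity and authorization before any data access, so the threat can no longer exercise the weakness.

# Hypothetical countermeasure for the earlier sketch: an identity and
# authorization check is added before data access is allowed.

RECORDS = {"alice": "alice's medical history"}
AUTHORIZED_READERS = {"alice": {"alice", "dr_bob"}}  # owner -> allowed parties

def read_record(requester: str, owner: str) -> str:
    # Control: reduce the vulnerability by checking authorization first.
    if requester not in AUTHORIZED_READERS.get(owner, set()):
        raise PermissionError(f"{requester} may not read {owner}'s data")
    return RECORDS[owner]

print(read_record("dr_bob", "alice"))  # authorized party: access succeeds
try:
    read_record("mallory", "alice")    # the control blocks the threat
except PermissionError as err:
    print("denied:", err)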
Before we can protect assets, we need to know the kinds of harm we have to protect
them against, so now we explore threats to valuable assets.
1.2 Threats
We can consider potential harm to assets in two ways: First, we can look at what bad
things can happen to assets, and second, we can look at who or what can cause or allow
those bad things to happen. These two perspectives enable us to determine how to protect
assets.
Think for a moment about what makes your computer valuable to you. First, you use it
as a tool for sending and receiving email, searching the web, writing papers, and
performing many other tasks, and you expect it to be available for use when you want it.
Without your computer these tasks would be harder, if not impossible. Second, you rely
heavily on your computer’s integrity. When you write a paper and save it, you trust that
the paper will reload exactly as you saved it. Similarly, you expect that the photo a friend
passes you on a flash drive will appear the same when you load it into your computer as
when you saw it on your friend’s computer. Finally, you expect the “personal” aspect of a
personal computer to stay personal, meaning you want it to protect your confidentiality.
For example, you want your email messages to be just between you and your listed
recipients; you don’t want them broadcast to other people. And when you write an essay,
you expect that no one can copy it without your permission.
These three aspects, confidentiality, integrity, and availability, make your computer
valuable to you. But viewed from another perspective, they are three possible ways to
make it less valuable, that is, to cause you harm. If someone steals your computer,
scrambles data on your disk, or looks at your private data files, the value of your computer
has been diminished or your computer use has been harmed. These characteristics are both
basic security properties and the objects of security threats.
We can define these three properties as follows.
• availability: the ability of a system to ensure that an asset can be used by any
authorized parties
• integrity: the ability of a system to ensure that an asset is modified only by
authorized parties
• confidentiality: the ability of a system to ensure that an asset is viewed only
by authorized parties
These three properties, hallmarks of solid security, appear in the literature as early as
James P. Anderson’s essay on computer security [AND73] and reappear frequently in
more recent computer security papers and discussions. Taken together (and rearranged),
the properties are called the C-I-A triad or the security triad. ISO 7498-2 [ISO89] adds
to them two more properties that are desirable, particularly in communication networks:
• authentication: the ability of a system to confirm the identity of a sender
• nonrepudiation or accountability: the ability of a system to confirm that a
sender cannot convincingly deny having sent something
The U.S. Department of Defense [DOD85] adds auditability: the ability of a system to
trace all actions related to a given asset. The C-I-A triad forms a foundation for thinking
about security. Authenticity and nonrepudiation extend security notions to network
communications, and auditability is important in establishing individual accountability for
computer activity. In this book we generally use the C-I-A triad as our security taxonomy
so that we can frame threats, vulnerabilities, and controls in terms of the C-I-A properties
affected. We highlight one of these other properties when it is relevant to a particular
threat we are describing. For now, we focus on just the three elements of the triad.
C-I-A triad: confidentiality, integrity, availability
What can happen to harm the confidentiality, integrity, or availability of computer
assets? If a thief steals your computer, you no longer have access, so you have lost
availability; furthermore, if the thief looks at the pictures or documents you have stored,
your confidentiality is compromised. And if the thief changes the content of your music
files but then gives them back with your computer, the integrity of your data has been
harmed. You can envision many scenarios based around these three properties.
The C-I-A triad can be viewed from a different perspective: the nature of the harm
caused to assets. Harm can also be characterized by four acts: interception, interruption,
modification, and fabrication. These four acts are depicted in Figure 1-5. From this point
of view, confidentiality can suffer if someone intercepts data, availability is lost if
someone or something interrupts a flow of data or access to a computer, and integrity can
fail if someone or something modifies data or fabricates false data. Thinking of these four
kinds of acts can help you determine what threats might exist against the computers you
are trying to protect.
FIGURE 1-5 Four Acts to Cause Security Harm
To analyze harm, we next refine the C-I-A triad, looking more closely at each of its
elements.
Confidentiality
Some things obviously need confidentiality protection. For example, students’ grades,
financial transactions, medical records, and tax returns are sensitive. A proud student may
run out of a classroom screaming “I got an A!” but the student should be the one to choose
whether to reveal that grade to others. Other things, such as diplomatic and military
secrets, companies’ marketing and product development plans, and educators’ tests, also
must be carefully controlled. Sometimes, however, it is not so obvious that something is
sensitive. For example, a military food order may seem like innocuous information, but a
sudden increase in the order could be a sign of incipient engagement in conflict. Purchases
of food, hourly changes in location, and access to books are not things you would
ordinarily consider confidential, but they can reveal something that someone wants to be
kept confidential.
The definition of confidentiality is straightforward: Only authorized people or systems
can access protected data. However, as we see in later chapters, ensuring confidentiality
can be difficult. For example, who determines which people or systems are authorized to
access the current system? By “accessing” data, do we mean that an authorized party can
access a single bit? the whole collection? pieces of data out of context? Can someone who
is authorized disclose data to other parties? Sometimes there is even a question of who
owns the data: If you visit a web page, do you own the fact that you clicked on a link, or
does the web page owner, the Internet provider, someone else, or all of you?
In spite of these complicating examples, confidentiality is the security property we
understand best because its meaning is narrower than that of the other two. We also
understand confidentiality well because we can relate computing examples to those of
preserving confidentiality in the real world.
Confidentiality relates most obviously to data, although we can think of the
confidentiality of a piece of hardware (a novel invention) or a person (the whereabouts of
a wanted criminal). Here are some properties that could mean a failure of data
confidentiality:
• An unauthorized person accesses a data item.
• An unauthorized process or program accesses a data item.
• A person authorized to access certain data accesses other data not authorized
(which is a specialized version of “an unauthorized person accesses a data
item”).
• An unauthorized person accesses an approximate data value (for example, not
knowing someone’s exact salary but knowing that the salary falls in a particular
range or exceeds a particular amount).
• An unauthorized person learns the existence of a piece of data (for example,
knowing that a company is developing a certain new product or that talks are
underway about the merger of two companies).
Notice the general pattern of these statements: A person, process, or program is (or is
not) authorized to access a data item in a particular way. We call the person, process, or
program a subject, the data item an object, the kind of access (such as read, write, or
execute) an access mode, and the authorization a policy, as shown in Figure 1-6. These
four terms reappear throughout this book because they are fundamental aspects of
computer security.
FIGURE 1-6 Access Control
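To make the subject, object, access mode, and policy vocabulary concrete, here is a minimal sketch of a policy check. It is our illustration of the model, not code from any real system, and the subjects and objects are hypothetical:

policy = {
    ("alice", "grades.db"): {"read", "write"},   # instructor
    ("bob",   "grades.db"): {"read"},            # student: view only
}

def is_authorized(subject: str, obj: str, mode: str) -> bool:
    """True only if the policy grants `subject` the access `mode`
    on `obj`; any (subject, object) pair not listed is denied."""
    return mode in policy.get((subject, obj), set())

assert is_authorized("alice", "grades.db", "write")
assert not is_authorized("bob", "grades.db", "write")   # protects integrity
assert not is_authorized("eve", "grades.db", "read")    # protects confidentiality

Note the default-deny design: a subject receives only the access modes the policy explicitly grants, which is the safer reading of "is (or is not) authorized."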
One word that captures most aspects of confidentiality is view, although you should not
take that term literally. A failure of confidentiality does not necessarily mean that someone
sees an object and, in fact, it is virtually impossible to look at bits in any meaningful way
(although you may look at their representation as characters or pictures). The word view
does connote another aspect of confidentiality in computer security, through the
association with viewing a movie or a painting in a museum: look but do not touch. In
computer security, confidentiality usually means obtaining but not modifying.
Modification is the subject of integrity, which we consider in the next section.
Integrity
Examples of integrity failures are easy to find. A number of years ago a malicious
macro in a Word document inserted the word “not” after some random instances of the
word “is;” you can imagine the havoc that ensued. Because the document was generally
syntactically correct, people did not immediately detect the change. In another case, a
model of the Pentium computer chip produced an incorrect result in certain circumstances
of floating-point arithmetic. Although the circumstances of failure were rare, Intel decided
to manufacture and replace the chips. Many of us receive mail that is misaddressed
because someone typed something wrong when transcribing from a written list. A worse
situation occurs when that inaccuracy is propagated to other mailing lists such that we can
never seem to correct the root of the problem. Other times we find that a spreadsheet
seems to be wrong, only to find that someone typed “space 123” in a cell, changing it from
a numeric value to text, so the spreadsheet program misused that cell in computation.
Suppose someone converted numeric data to Roman numerals: One could argue that IV is
the same as 4, but IV would not be useful in most applications, nor would it be obviously
meaningful to someone expecting 4 as an answer. These cases show some of the breadth
of examples of integrity failures.
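The spreadsheet mishap is easy to reproduce. The following sketch, our illustration with made-up values, shows how a single stray space silently changes a computation:

# Integrity failure by silent type confusion: "space 123" is text, not a number.
cells = [100, 200, " 123"]

# A computation that skips non-numeric cells quietly drops the value:
total = sum(c for c in cells if isinstance(c, (int, float)))
print(total)   # 300, not the 423 the user expects

# Converting explicitly recovers the intended value (and would fail
# loudly on genuinely non-numeric text instead of hiding the problem):
print(sum(float(c) for c in cells))   # 423.0; float(" 123") strips the space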
Integrity is harder to pin down than confidentiality. As Stephen Welke and Terry
Mayfield [WEL90, MAY91, NCS91a] point out, integrity means different things in
different contexts. When we survey the way some people use the term, we find several
different meanings. For example, if we say that we have preserved the integrity of an item,
we may mean that the item is
• precise
• accurate
• unmodified
• modified only in acceptable ways
• modified only by authorized people
• modified only by authorized processes
• consistent
• internally consistent
• meaningful and usable
Integrity can also mean two or more of these properties. Welke and Mayfield recognize
three particular aspects of integrity—authorized actions, separation and protection of
resources, and error detection and correction. Integrity can be enforced in much the same
way as can confidentiality: by rigorous control of who or what can access which resources
in what ways.
Availability
A computer user’s worst nightmare: You turn on the switch and the computer does
nothing. Your data and programs are presumably still there, but you cannot get at them.
Fortunately, few of us experience that failure. Many of us do experience overload,
however: access gets slower and slower; the computer responds but not in a way we
consider normal or acceptable.
Availability applies both to data and to services (that is, to information and to
information processing), and it is similarly complex. As with the notion of confidentiality,
different people expect availability to mean different things. For example, an object or
service is thought to be available if the following are true:
• It is present in a usable form.
• It has enough capacity to meet the service’s needs.
• It is making clear progress, and, if in wait mode, it has a bounded waiting time.
• The service is completed in an acceptable period of time.
We can construct an overall description of availability by combining these goals.
Following are some criteria to define availability; a short code sketch after the list makes
the timing-related criteria concrete.
• There is a timely response to our request.
• Resources are allocated fairly so that some requesters are not favored over
others.
• Concurrency is controlled; that is, simultaneous access, deadlock management,
and exclusive access are supported as required.
• The service or system involved follows a philosophy of fault tolerance,
whereby hardware or software faults lead to graceful cessation of service or to
work-arounds rather than to crashes and abrupt loss of information. (Cessation
does mean end; whether it is graceful or not, ultimately the system is
unavailable. However, with fair warning of the system’s stopping, the user may
be able to move to another system and continue work.)
• The service or system can be used easily and in the way it was intended to be
used. (This is a characteristic of usability, but an unusable system may also
cause an availability failure.)
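Here is that sketch: a request that either answers within a time bound or fails explicitly rather than hanging. The service and its delay are hypothetical stand-ins:

import concurrent.futures
import time

def fetch_balance(account_id: str) -> float:
    """Stand-in for a back-end call that is slow under overload."""
    time.sleep(5)                  # simulate an overloaded service
    return 42.0

def fetch_balance_bounded(account_id: str, timeout_s: float = 2.0):
    """Bounded waiting time: answer within timeout_s or degrade
    gracefully with an explicit failure instead of an indefinite hang."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch_balance, account_id)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return None                # a timely, explicit "unavailable" answer
    finally:
        pool.shutdown(wait=False)  # do not block on the stuck worker

print(fetch_balance_bounded("acct-001"))   # None after about 2 seconds

The caller gets a clear answer within the bound; a production system would also cancel or cap the stuck work, which this sketch leaves running.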
As you can see, expectations of availability are far-reaching. In Figure 1-7 we depict
some of the properties with which availability overlaps. Indeed, the security community is
just beginning to understand what availability implies and how to ensure it.
FIGURE 1-7 Availability and Related Aspects
A person or system can do three basic things with a data item: view it, modify it, or use
it. Thus, viewing (confidentiality), modifying (integrity), and using (availability) are the
basic modes of access that computer security seeks to preserve.
Computer security seeks to prevent unauthorized viewing
(confidentiality) or modification (integrity) of data while preserving access
(availability).
A paradigm of computer security is access control: To implement a policy, computer
security controls all accesses by all subjects to all protected objects in all modes of access.
A small, centralized control of access is fundamental to preserving confidentiality and
integrity, but it is not clear that a single access control point can enforce availability.
Indeed, experts on dependability will note that single points of control can become single
points of failure, making it easy for an attacker to destroy availability by disabling the
single control point. Much of computer security’s past success has focused on
confidentiality and integrity; there are models of confidentiality and integrity, for example,
see David Bell and Leonard La Padula [BEL73, BEL76] and Kenneth Biba [BIB77].
Availability is security’s next great challenge.
We have just described the C-I-A triad and the three fundamental security properties it
represents. Our description of these properties was in the context of things that need
protection. To motivate your understanding we gave some examples of harm and threats to
cause harm. Our next step is to think about the nature of threats themselves.
Types of Threats
For some ideas of harm, look at Figure 1-8, taken from Willis Ware’s report [WAR70].
Although it was written when computers were so big, so expensive, and so difficult to
operate that only large organizations like universities, major corporations, or government
departments would have one, Ware’s discussion is still instructive today. Ware was
concerned primarily with the protection of classified data, that is, preserving
confidentiality. In the figure, he depicts humans such as programmers and maintenance
staff gaining access to data, as well as radiation by which data can escape as signals. From
the figure you can see some of the many kinds of threats to a computer system.
FIGURE 1-8 Computer [Network] Vulnerabilities (from [WAR70])
One way to analyze harm is to consider the cause or source. We call a potential cause of
harm a threat. Harm can be caused by either nonhuman events or humans. Examples of
nonhuman threats include natural disasters like fires or floods; loss of electrical power;
failure of a component such as a communications cable, processor chip, or disk drive; or
attack by a wild boar.
Threats are caused both by human and other sources.
Human threats can be either benign (nonmalicious) or malicious. Nonmalicious kinds
of harm include someone’s accidentally spilling a soft drink on a laptop, unintentionally
deleting text, inadvertently sending an email message to the wrong person, and carelessly
typing “12” instead of “21” when entering a phone number or clicking “yes” instead of
“no” to overwrite a file. These inadvertent, human errors happen to most people; we just
hope that the seriousness of harm is not too great, or if it is, that we will not repeat the
mistake.
Threats can be malicious or not.
Most computer security activity relates to malicious, human-caused harm: A
malicious person actually wants to cause harm, and so we often use the term attack for a
malicious computer security event. Malicious attacks can be random or directed. In a
random attack the attacker wants to harm any computer or user; such an attack is
analogous to accosting the next pedestrian who walks down the street. An example of a
random attack is malicious code posted on a website that could be visited by anybody.
In a directed attack, the attacker intends harm to specific computers, perhaps at one
organization (think of attacks against a political organization) or belonging to a specific
individual (think of trying to drain a specific person’s bank account, for example, by
impersonation). Another class of directed attack is against a particular product, such as
any computer running a particular browser. (We do not want to split hairs about whether
such an attack is directed—at that one software product—or random, against any user of
that product; the point is not semantic perfection but protecting against the attacks.) The
range of possible directed attacks is practically unlimited. Different kinds of threats are
shown in Figure 1-9.
FIGURE 1-9 Kinds of Threats
Threats can be targeted or random.
Although the distinctions shown in Figure 1-9 seem clear-cut, sometimes the nature of
an attack is not obvious until the attack is well underway, or perhaps even ended. A
normal hardware failure can seem like a directed, malicious attack to deny access, and
hackers often try to conceal their activity to look like ordinary, authorized users. As
computer security experts we need to anticipate what bad things might happen, instead of
waiting for the attack to happen or debating whether the attack is intentional or accidental.
Neither this book nor any checklist or method can show you all the kinds of harm that
can happen to computer assets. There are too many ways to interfere with your use of
these assets. Two retrospective lists of known vulnerabilities are of interest, however. The
Common Vulnerabilities and Exposures (CVE) list (see http://cve.mitre.org/) is a
dictionary of publicly known security vulnerabilities and exposures. CVE’s common
identifiers enable data exchange between security products and provide a baseline index
point for evaluating coverage of security tools and services. To measure the extent of
harm, the Common Vulnerability Scoring System (CVSS) (see
http://nvd.nist.gov/cvss.cfm) provides a standard measurement system that allows accurate
and consistent scoring of vulnerability impact.
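To see how such scoring works mechanically, here is a sketch of the base-score arithmetic from CVSS version 3.1, a revision later than the one current when this chapter was written. The weights below come from the published v3.1 specification, and the official calculator remains authoritative:

import math

# CVSS v3.1 base score, restricted to scope-unchanged vulnerabilities.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI  = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.00, "L": 0.22, "H": 0.56}               # C, I, A impact levels

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The specification rounds *up* to one decimal place.
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10

# A network-reachable, low-complexity flaw needing no privileges or user
# interaction, with high C-I-A impact, scores 9.8 ("critical"):
print(base_score("N", "L", "N", "N", "H", "H", "H"))   # 9.8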
Advanced Persistent Threat
Security experts are becoming increasingly concerned about a type of threat called
advanced persistent threat. A lone attacker might create a random attack that snares a
few, or a few million, individuals, but the resulting impact is limited to what that single
attacker can organize and manage. A collection of attackers—think, for example, of the
cyber equivalent of a street gang or an organized crime squad—might work together to
purloin credit card numbers or similar financial assets to fund other illegal activity. Such
attackers tend to be opportunistic, picking unlucky victims’ pockets and moving on to
other activities.
Advanced persistent threat attacks come from organized, well-financed, patient
assailants. Often affiliated with governments or quasi-governmental groups, these
attackers engage in long-term campaigns. They carefully select their targets, crafting
attacks that appeal specifically to those targets; email messages called spear phishing
(described in Chapter 4) are intended to seduce their recipients. Typically the attacks are
silent, avoiding any obvious impact that would alert a victim, thereby allowing the
attacker to exploit the victim’s access rights over a long time.
The motive of such attacks is sometimes unclear. One popular objective is economic
espionage. A series of attacks, apparently organized and supported by the Chinese
government, was used in 2012 and 2013 to obtain product designs from aerospace
companies in the United States. There is evidence the stub of the attack code was loaded
into victim machines long in advance of the attack; then, the attackers installed the more
complex code and extracted the desired data. In May 2014 the Justice Department indicted
five Chinese hackers in absentia for these attacks.
In the summer of 2014 a series of attacks against J.P. Morgan Chase bank and up to a
dozen similar financial institutions allowed the assailants access to 76 million names,
phone numbers, and email addresses. The attackers—and even their country of origin—
remain unknown, as does the motive. Perhaps the attackers wanted more sensitive
financial data, such as account numbers or passwords, but were only able to get the less
valuable contact information. It is also not known if this attack was related to an attack a
year earlier that disrupted service to that bank and several others.
To imagine the full landscape of possible attacks, you may find it useful to consider the
kinds of people who attack computer systems. Although potentially anyone is an attacker,
certain classes of people stand out because of their backgrounds or objectives. Thus, in the
following sections we look at profiles of some classes of attackers.
Types of Attackers
Who are attackers? As we have seen, their motivations range from chance to a specific
target. Putting aside attacks from natural and benign causes, we can explore who the
attackers are and what motivates them.
Most studies of attackers actually analyze computer criminals, that is, people who have
been convicted of a crime, primarily because that group is easy to identify and
study. The ones who got away or who carried off an attack without being detected may
have characteristics different from those of the criminals who have been caught. Worse, by
studying only the criminals we have caught, we may not learn how to catch attackers who
know how to abuse the system without being apprehended.
What does a cyber criminal look like? In television and films the villains wore shabby
clothes, looked mean and sinister, and lived in gangs somewhere out of town. By contrast,
the sheriff dressed well, stood proud and tall, was known and respected by everyone in
town, and struck fear in the hearts of most criminals.
To be sure, some computer criminals are mean and sinister types. But many more wear
business suits, have university degrees, and appear to be pillars of their communities.
Some are high school or university students. Others are middle-aged business executives.
Some are mentally deranged, overtly hostile, or extremely committed to a cause, and they
attack computers as a symbol. Others are ordinary people tempted by personal profit,
revenge, challenge, advancement, or job security—like perpetrators of any crime, using a
computer or not. Researchers have tried to find the psychological traits that distinguish
attackers, as described in Sidebar 1-1. These studies are far from conclusive, however, and
the traits they identify may show correlation but not necessarily causality. To appreciate
this point, suppose a study found that a disproportionate number of people convicted of
computer crime were left-handed. Does that result imply that all left-handed people are
computer criminals or that only left-handed people are? Certainly not. No single profile
captures the characteristics of a “typical” computer attacker, and the characteristics of
some notorious attackers also match many people who are not attackers. As shown in
Figure 1-10, attackers look just like anybody in a crowd.
FIGURE 1-10 Attackers
No one pattern matches all attackers.
Sidebar 1-1 An Attacker’s Psychological Profile?
Temple Grandin, a professor of animal science at Colorado State University and
a sufferer from a mental disorder called Asperger syndrome (AS), thinks that
Kevin Mitnick and several other widely described hackers show classic
symptoms of Asperger syndrome. Although quick to point out that no research
has established a link between AS and hacking, Grandin notes similar behavior
traits among Mitnick, herself, and other AS sufferers. An article in USA Today
(29 March 2001) lists the following AS traits:
• poor social skills, often associated with being loners during childhood; the
classic “computer nerd”
• fidgeting, restlessness, inability to make eye contact, lack of response to
cues in social interaction, such as facial expressions or body language
• exceptional ability to remember long strings of numbers
• ability to focus on a technical problem intensely and for a long time,
although easily distracted on other problems and unable to manage several
tasks at once
• deep honesty and respect for laws
Donn Parker [PAR98] has studied hacking and computer crime for many
years. He states “hackers are characterized by an immature, excessively
idealistic attitude … They delight in presenting themselves to the media as
idealistic do-gooders, champions of the underdog.”
Consider the following excerpt from an interview [SHA00] with “Mixter,” the
German programmer who admitted he was the author of a widespread piece of
attack software called Tribal Flood Network (TFN) and its sequel TFN2K:
Q: Why did you write the software?
A: I first heard about Trin00 [another piece of attack software] in July ’99
and I considered it as interesting from a technical perspective, but also
potentially powerful in a negative way. I knew some facts of how Trin00
worked, and since I didn’t manage to get Trin00 sources or binaries at that
time, I wrote my own server-client network that was capable of performing
denial of service.
Q: Were you involved … in any of the recent high-profile attacks?
A: No. The fact that I authored these tools does in no way mean that I
condone their active use. I must admit I was quite shocked to hear about the
latest attacks. It seems that the attackers are pretty clueless people who
misuse powerful resources and tools for generally harmful and senseless
activities just “because they can.”
Notice that from some information about denial-of-service attacks, he wrote
his own server-client network and then a sophisticated attack. But he was “quite
shocked” to hear they were used for harm.
More research is needed before we can define the profile of a hacker. And
even more work will be needed to extend that profile to the profile of a
(malicious) attacker. Not all hackers become attackers; some hackers become
extremely dedicated and conscientious system administrators, developers, or
security experts. But some psychologists see in AS the rudiments of a hacker’s
profile.
Individuals
Originally, computer attackers were individuals, acting with motives of fun, challenge,
or revenge. Early attackers acted alone. Two of the most well known among them are
Robert Morris Jr., the Cornell University graduate student who brought down the Internet
in 1988 [SPA89], and Kevin Mitnick, the man who broke into and stole data from dozens
of computers, including the San Diego Supercomputer Center [MAR95].
Organized, Worldwide Groups
More recent attacks have involved groups of people. An attack against the government
of the country of Estonia (described in more detail in Chapter 13) is believed to have been
an uncoordinated outburst from a loose federation of attackers from around the world.
Kevin Poulsen [POU05] quotes Tim Rosenberg, a research professor at George
Washington University, warning of “multinational groups of hackers backed by organized
crime” and showing the sophistication of prohibition-era mobsters. He also reports that
Christopher Painter, deputy director of the U.S. Department of Justice’s computer crime
section, argues that cyber criminals and serious fraud artists are increasingly working in
concert or are one and the same. According to Painter, loosely connected groups of
criminals all over the world work together to break into systems and steal and sell
information, such as credit card numbers. For instance, in October 2004, U.S. and
Canadian authorities arrested 28 people from 6 countries involved in an international,
organized cybercrime ring to buy and sell credit card information and identities.
Whereas early motives for computer attackers such as Morris and Mitnick were
personal, such as prestige or accomplishment, recent attacks have been heavily influenced
by financial gain. Security firm McAfee reports “Criminals have realized the huge
financial gains to be made from the Internet with little risk. They bring the skills,
knowledge, and connections needed for large scale, high-value criminal enterprise that,
when combined with computer skills, expand the scope and risk of cybercrime.” [MCA05]
Organized Crime
Attackers’ goals include fraud, extortion, money laundering, and drug trafficking, areas
in which organized crime has a well-established presence. Evidence is growing that
organized crime groups are engaging in computer crime. In fact, traditional criminals are
recruiting hackers to join the lucrative world of cybercrime. For example, Albert Gonzales
was sentenced in March 2010 to 20 years in prison for working with a crime ring to steal
40 million credit card numbers from retailer TJMaxx and others, costing over $200
million (Reuters, 26 March 2010).
Organized crime may use computer crime (such as stealing credit card numbers or bank
account details) to finance other aspects of crime. Recent attacks suggest that professional
criminals have discovered just how lucrative computer crime can be. Mike Danseglio, a
security project manager with Microsoft, said, “In 2006, the attackers want to pay the rent.
They don’t want to write a worm that destroys your hardware. They want to assimilate
your computers and use them to make money.” [NAR06a] Mikko Hyppönen, Chief
Research Officer with Finnish security company F-Secure, agrees that today’s attacks often
come from Russia, Asia, and Brazil; the motive is now profit, not fame [BRA06]. Ken
Dunham, Director of the Rapid Response Team for VeriSign, says he is “convinced that
groups of well-organized mobsters have taken control of a global billion-dollar crime
network powered by skillful hackers.” [NAR06b]
Organized crime groups are discovering that computer crime can be
lucrative.
McAfee also describes the case of a hacker-for-hire: a businessman who hired a 16-
year-old New Jersey hacker to attack the websites of his competitors. The hacker barraged
the site for a five-month period and damaged not only the target companies but also their
Internet service providers (ISPs) and other unrelated companies that used the same ISPs.
By FBI estimates, the attacks cost all the companies over $2 million; the FBI arrested both
hacker and businessman in March 2005 [MCA05].
Brian Snow [SNO05] observes that hackers want a score or some kind of evidence to
give them bragging rights. Organized crime wants a resource; such criminals want to stay
under the radar to be able to extract profit from the system over time. These different
objectives lead to different approaches to computer crime: The novice hacker can use a
crude attack, whereas the professional attacker wants a neat, robust, and undetectable
method that can deliver rewards for a long time.
Terrorists
The link between computer security and terrorism is quite evident. We see terrorists
using computers in four ways:
• Computer as target of attack: Denial-of-service attacks and website
defacements are popular activities for any political organization because they
attract attention to the cause and bring undesired negative attention to the object
of the attack. An example is the massive denial-of-service attack launched
against the country of Estonia, detailed in Chapter 13.
• Computer as method of attack: Launching offensive attacks requires the use of
computers. Stuxnet, an example of malicious computer code called a worm, is
known to attack automated control systems, specifically a model of control
system manufactured by Siemens. Experts say the code is designed to disable
machinery used in the control of nuclear reactors in Iran [MAR10]. The persons
behind the attack are unknown, but the infection is believed to have spread
through USB flash drives brought in by engineers maintaining the computer
controllers. (We examine the Stuxnet worm in more detail in Chapters 6 and 13.)
• Computer as enabler of attack: Websites, web logs, and email lists are
effective, fast, and inexpensive ways to allow many people to coordinate.
According to the Council on Foreign Relations, the terrorists responsible for the
November 2008 attack that killed over 200 people in Mumbai used GPS systems
to guide their boats, Blackberries for their communication, and Google Earth to
plot their routes.
• Computer as enhancer of attack: The Internet has proved to be an invaluable
means for terrorists to spread propaganda and recruit agents. In October 2009
the FBI arrested Colleen LaRose, also known as JihadJane, after she had spent
months using email, YouTube, MySpace, and electronic message boards to
recruit radicals in Europe and South Asia to “wage violent jihad,” according to a
federal indictment.
We cannot accurately measure the degree to which terrorists use computers, because
terrorists keep secret the nature of their activities and because our definitions and
measurement tools are rather weak. Still, incidents like the one described in Sidebar 1-2
provide evidence that all four of these activities are increasing.
Sidebar 1-2 The Terrorists, Inc., IT Department
In 2001, a reporter for the Wall Street Journal bought a used computer in
Afghanistan. Much to his surprise, he found that the hard drive contained what
appeared to be files from a senior al Qaeda operative. The reporter, Alan
Cullison [CUL04], reports that he turned the computer over to the FBI. In his
story published in 2004 in The Atlantic, he carefully avoids revealing anything
he thinks might be sensitive.
The disk contained over 1,000 documents, many of them encrypted with
relatively weak encryption. Cullison found draft mission plans and white papers
setting forth ideological and philosophical arguments for the attacks of 11
September 2001. Also found were copies of news stories on terrorist activities.
Some of the found documents indicated that al Qaeda was not originally
interested in chemical, biological, or nuclear weapons, but became interested
after reading public news articles accusing al Qaeda of having those capabilities.
Perhaps most unexpected were email messages of the kind one would find in
a typical office: recommendations for promotions, justifications for petty cash
expenditures, and arguments concerning budgets.
The computer appears to have been used by al Qaeda from 1999 to 2001.
Cullison notes that Afghanistan in late 2001 was a scene of chaos, and it is
likely the laptop’s owner fled quickly, leaving the computer behind, where it fell
into the hands of a secondhand goods merchant who did not know its contents.
But this computer’s contents illustrate an important aspect of computer
security and confidentiality: We can never predict the time at which a security
disaster will strike, and thus we must always be prepared to act immediately if it
suddenly happens.
If someone on television sneezes, you do not worry about the possibility of catching a
cold. But if someone standing next to you sneezes, you may become concerned. In the
next section we examine the harm that can come from the presence of a computer security
threat on your own computer systems.
1.3 Harm
The negative consequence of an actualized threat is harm; we protect ourselves against
threats in order to reduce or eliminate harm. We have already described many examples of
computer harm: a stolen computer, modified or lost file, revealed private letter, or denied
access to data. These events cause harm that we want to avoid.
In our earlier discussion of assets, we noted that value depends on owner or outsider
perception and need. Some aspects of value are immeasurable, such as the value of the
paper you need to submit to your professor tomorrow; if you lose the paper (that is, if its
availability is lost), no amount of money will compensate you for it. Items on which you
place little or no value might be more valuable to someone else; for example, the group
photograph taken at last night’s party can reveal that your friend was not where he told his
wife he would be. Even though it may be difficult to assign a specific number as the value
of an asset, you can usually assign a value on a generic scale, such as moderate or
minuscule or incredibly high, depending on the degree of harm that loss or damage to the
object would cause. Or you can assign a value relative to other assets, based on
comparable loss: This version of the file is more valuable to you than that version.
In their 2010 global Internet threat report, security firm Symantec surveyed the kinds of
goods and services offered for sale on underground web pages. The item most frequently
offered in both 2009 and 2008 was credit card numbers, at prices ranging from $0.85 to
$30.00 each. (Compare those prices to an individual’s effort to deal with the effect of a
stolen credit card or the potential amount lost by the issuing bank.) Second most frequent
was bank account credentials, at $15 to $850; these were offered for sale at 19% of
websites in both years. Email accounts were next at $1 to $20, and lists of email addresses
went for $1.70 to $15.00 per thousand. At position 10 in 2009 were website administration
credentials, costing only $2 to $30. These black market websites demonstrate that the
market price of computer assets can be dramatically different from their value to rightful
owners.
The value of many assets can change over time, so the degree of harm (and therefore
the severity of a threat) can change, too. With unlimited time, money, and capability, we
might try to protect against all kinds of harm. But because our resources are limited, we
must prioritize our protection, safeguarding only against serious threats and the ones we
can control. Choosing the threats we try to mitigate involves a process called risk
management, and it includes weighing the seriousness of a threat against our ability to
protect.
Risk management involves choosing which threats to control and what
resources to devote to protection.
Risk and Common Sense
The number and kinds of threats are practically unlimited because devising an attack
requires an active imagination, determination, persistence, and time (as well as access and
resources). The nature and number of threats in the computer world reflect life in general:
The causes of harm are limitless and largely unpredictable. Natural disasters like
volcanoes and earthquakes happen with little or no warning, as do auto accidents, heart
attacks, influenza, and random acts of violence. To protect against accidents or the flu, you
might decide to stay indoors, never venturing outside. But by doing so, you trade one set
of risks for another; while you are inside, you are vulnerable to building collapse. There
are too many possible causes of harm for us to protect ourselves—or our computers—
completely against all of them.
In real life we make decisions every day about the best way to provide our security. For
example, although we may choose to live in an area that is not prone to earthquakes, we
cannot entirely eliminate earthquake risk. Some choices are conscious, such as deciding
not to walk down a dark alley in an unsafe neighborhood; other times our subconscious
guides us, from experience or expertise, to take some precaution. We evaluate the
likelihood and severity of harm, and then consider ways (called countermeasures or
controls) to address threats and determine the controls’ effectiveness.
Computer security is similar. Because we cannot protect against everything, we
prioritize: Only so much time, energy, or money is available for protection, so we address
some risks and let others slide. Or we consider alternative courses of action, such as
transferring risk by purchasing insurance or even doing nothing if the side effects of the
countermeasure could be worse than the possible harm. The risk that remains uncovered
by controls is called residual risk.
A basic model of risk management involves a user’s calculating the value of all assets,
determining the amount of harm from all possible threats, computing the costs of
protection, selecting safeguards (that is, controls or countermeasures) based on the degree
of risk and on limited resources, and applying the safeguards to optimize harm averted.
This approach to risk management is a logical and sensible approach to protection, but it
has significant drawbacks. In reality, it is difficult to assess the value of each asset; as we
have seen, value can change depending on context, timing, and a host of other
characteristics. Even harder is determining the impact of all possible threats. The range of
possible threats is effectively limitless, and it is difficult (if not impossible in some
situations) to know the short- and long-term impacts of an action. For instance, Sidebar 1-3
describes a study of the impact of security breaches over time on corporate finances,
showing that a threat must be evaluated over time, not just at a single instance.
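A common way to make this model concrete is annualized loss expectancy (ALE): expected yearly harm is the loss per incident times the expected incidents per year, and a safeguard earns its keep only if the harm it averts exceeds its cost. This formalization is a standard one rather than notation from this book, and the numbers below are invented purely for illustration:

def ale(loss_per_incident: float, incidents_per_year: float) -> float:
    """Annualized loss expectancy: expected harm per year."""
    return loss_per_incident * incidents_per_year

before = ale(loss_per_incident=50_000, incidents_per_year=0.4)   # 20,000/yr
after  = ale(loss_per_incident=50_000, incidents_per_year=0.1)   # 5,000/yr with the control
control_cost = 8_000                                             # per year

net_benefit = (before - after) - control_cost
print(net_benefit)   # 7,000 > 0: the safeguard averts more harm than it costs

The 5,000 per year still expected with the control in place is part of the residual risk, along with whatever threats the model failed to anticipate.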
Sidebar 1-3 Short- and Long-term Risks of Security Breaches
It was long assumed that security breaches would be bad for business: that
customers, fearful of losing their data, would veer away from insecure
businesses and toward more secure ones. But empirical studies suggest that the
picture is more complicated. Early studies of the effects of security breaches,
such as that of Campbell [CAM03], examined the effects of breaches on stock
price. They found that a breach’s impact could depend on the nature of the
breach itself; the effects were higher when the breach involved unauthorized
access to confidential data. Cavusoglu et al. [CAV04] discovered that a breach
affects the value not only of the company experiencing the breach but also of
security enterprises: On average, the breached firms lost 2.1 percent of market
value within two days of the breach’s disclosure, but security developers’ market
value actually increased 1.36 percent.
Myung Ko and Carlos Dorantes [KO06] looked at the longer-term financial
effects of publicly announced breaches. Based on the Campbell et al. study, they
examined data for four quarters following the announcement of unauthorized
access to confidential data. Ko and Dorantes note many types of possible
breach-related costs:
“Examples of short-term costs include cost of repairs, cost of replacement of the system, lost
business due to the disruption of business operations, and lost productivity of employees.
These are also considered tangible costs. On the other hand, long-term costs include the loss
of existing customers due to loss of trust, failing to attract potential future customers due to
negative reputation from the breach, loss of business partners due to loss of trust, and
potential legal liabilities from the breach. Most of these costs are intangible costs that are
difficult to calculate but extremely important in assessing the overall security breach costs to
the organization.”
Ko and Dorantes compared two groups of companies: one set (the treatment
group) with data breaches, and the other (the control group) without a breach but
matched for size and industry. Their findings were striking. Contrary to what
you might suppose, the breached firms had no decrease in performance for the
quarters following the breach, but their return on assets decreased in the third
quarter. The comparison of treatment with control companies revealed that the
control firms generally outperformed the breached firms. However, the breached
firms outperformed the control firms in the fourth quarter.
These results are consonant with the results of other researchers who conclude
that there is minimal long-term economic impact from a security breach. There
are many reasons why this is so. For example, cus...