COMPUTER SECURITY
PRINCIPLES
AND
PRACTICE
Fourth Edition
William Stallings
Lawrie Brown
UNSW Canberra at the Australian Defence Force Academy
330 Hudson Street, New York, NY 10013
Director, Portfolio Management: Engineering, Computer Science & Global Editions: Julian
Partridge
Specialist, Higher Ed Portfolio Management: Tracy Johnson (Dunkelberger)
Portfolio Management Assistant: Meghan Jacoby
Managing Content Producer: Scott Disanno
Content Producer: Robert Engelhardt
Web Developer: Steve Wright
Rights and Permissions Manager: Ben Ferrini
Manufacturing Buyer, Higher Ed, Lake Side Communications Inc (LSC): Maura Zaldivar-Garcia
Inventory Manager: Ann Lam
Product Marketing Manager: Yvonne Vannatta
Field Marketing Manager: Demetrius Hall
Marketing Assistant: Jon Bryant
Cover Designer: Marta Samsel
Cover Photo: E+/Getty Images
Full-Service Project Management: Kirthika Raj, SPi Global
Credits and acknowledgments borrowed from other sources and reproduced, with permission, in
this textbook appear on page 755.
Copyright © 2018, 2015, 2012, 2008 by Pearson Education, Inc., Hoboken, New Jersey 07030.
All rights reserved. Manufactured in the United States of America.
This publication is protected by Copyright, and permission should be obtained from the publisher
prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or
by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain
permission(s) to use material from this work, please submit a written request to Pearson
Education, Inc., Permissions Department, Pearson Education, Inc., Hoboken, New Jersey 07030.
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as
trademarks. Where those designations appear in this book, and the publisher was aware of a
trademark claim, the designations have been printed in initial caps or all caps.
Library of Congress Cataloging-in-Publication Data
Names: Stallings, William, author. | Brown, Lawrie, author.
Title: Computer security : principles and practice / William Stallings, Lawrie Brown, UNSW
Canberra at the Australian Defence Force Academy.
Description: Fourth edition. | Upper Saddle River, New Jersey : Pearson Education, Inc., [2017] |
Includes bibliographical references and index.
Identifiers: LCCN 2017025135 | ISBN 9780134794105 | ISBN 0134794109
Subjects: LCSH: Computer security. | Computer networks--Security measures.
Classification: LCC QA76.9.A25 S685 2017 | DDC 005.8--dc23 LC record available at
https://lccn.loc.gov/2017025135
ISBN-10: 0-13-479410-9
ISBN-13: 978-0-13-479410-5
CONTENTS
Cover
Title Page
Copyright
Dedication
ONLINE CHAPTERS AND APPENDICES
Preface xii
Notation xxi
About the Authors xxii
Chapter 1 Overview 1
1.1 Computer Security Concepts 2
1.2 Threats, Attacks, and Assets 9
1.3 Security Functional Requirements 15
1.4 Fundamental Security Design Principles 17
1.5 Attack Surfaces and Attack Trees 21
1.6 Computer Security Strategy 24
1.7 Standards 26
1.8 Key Terms, Review Questions, and Problems 27
PART ONE COMPUTER SECURITY TECHNOLOGY AND PRINCIPLES 30
Chapter 2 Cryptographic Tools 30
2.1 Confidentiality with Symmetric Encryption 31
2.2 Message Authentication and Hash Functions 37
2.3 Public-Key Encryption 45
2.4 Digital Signatures and Key Management 50
2.5 Random and Pseudorandom Numbers 55
2.6 Practical Application: Encryption of Stored Data 57
2.7 Key Terms, Review Questions, and Problems 58
Chapter 3 User Authentication 63
3.1 Digital User Authentication Principles 65
3.2 Password-Based Authentication 70
3.3 Token-Based Authentication 82
3.4 Biometric Authentication 87
3.5 Remote User Authentication 92
3.6 Security Issues for User Authentication 95
3.7 Practical Application: An Iris Biometric System 97
3.8 Case Study: Security Problems for ATM Systems 99
3.9 Key Terms, Review Questions, and Problems 102
Chapter 4 Access Control 105
4.1 Access Control Principles 106
4.2 Subjects, Objects, and Access Rights 109
4.3 Discretionary Access Control 110
4.4 Example: UNIX File Access Control 117
4.5 Role-Based Access Control 120
4.6 Attribute-Based Access Control 126
4.7 Identity, Credential, and Access Management 132
4.8 Trust Frameworks 136
4.9 Case Study: RBAC System for a Bank 140
4.10 Key Terms, Review Questions, and Problems 142
Chapter 5 Database and Data Center Security 147
5.1 The Need for Database Security 148
5.2 Database Management Systems 149
5.3 Relational Databases 151
5.4 SQL Injection Attacks 155
5.5 Database Access Control 161
5.6 Inference 166
5.7 Database Encryption 168
5.8 Data Center Security 172
5.9 Key Terms, Review Questions, and Problems 178
Chapter 6 Malicious Software 183
6.1 Types of Malicious Software (Malware) 185
6.2 Advanced Persistent Threat 187
6.3 Propagation—Infected Content—Viruses 188
6.4 Propagation—Vulnerability Exploit—Worms 193
6.5 Propagation—Social Engineering—Spam E-mail, Trojans 202
6.6 Payload—System Corruption 205
6.7 Payload—Attack Agent—Zombie, Bots 207
6.8 Payload—Information Theft—Keyloggers, Phishing, Spyware 209
6.9 Payload—Stealthing—Backdoors, Rootkits 211
6.10 Countermeasures 214
6.11 Key Terms, Review Questions, and Problems 220
Chapter 7 Denial-of-Service Attacks 224
7.1 Denial-of-Service Attacks 225
7.2 Flooding Attacks 233
7.3 Distributed Denial-of-Service Attacks 234
7.4 Application-Based Bandwidth Attacks 236
7.5 Reflector and Amplifier Attacks 239
7.6 Defenses Against Denial-of-Service Attacks 243
7.7 Responding to a Denial-of-Service Attack 247
7.8 Key Terms, Review Questions, and Problems 248
Chapter 8 Intrusion Detection 251
8.1 Intruders 252
8.2 Intrusion Detection 256
8.3 Analysis Approaches 259
8.4 Host-Based Intrusion Detection 262
8.5 Network-Based Intrusion Detection 267
8.6 Distributed or Hybrid Intrusion Detection 273
8.7 Intrusion Detection Exchange Format 275
8.8 Honeypots 278
8.9 Example System: Snort 280
8.10 Key Terms, Review Questions, and Problems 284
Chapter 9 Firewalls and Intrusion Prevention Systems 288
9.1 The Need for Firewalls 289
9.2 Firewall Characteristics and Access Policy 290
9.3 Types of Firewalls 292
9.4 Firewall Basing 298
9.5 Firewall Location and Configurations 301
9.6 Intrusion Prevention Systems 306
9.7 Example: Unified Threat Management Products 310
9.8 Key Terms, Review Questions, and Problems 314
PART TWO SOFTWARE AND SYSTEM SECURITY 319
Chapter 10 Buffer Overflow 319
10.1 Stack Overflows 321
10.2 Defending Against Buffer Overflows 342
10.3 Other Forms of Overflow Attacks 348
10.4 Key Terms, Review Questions, and Problems 355
Chapter 11 Software Security 357
11.1 Software Security Issues 358
11.2 Handling Program Input 362
11.3 Writing Safe Program Code 373
11.4 Interacting with the Operating System and Other Programs 378
11.5 Handling Program Output 391
11.6 Key Terms, Review Questions, and Problems 393
Chapter 12 Operating System Security 397
12.1 Introduction to Operating System Security 399
12.2 System Security Planning 400
12.3 Operating Systems Hardening 400
12.4 Application Security 404
12.5 Security Maintenance 406
12.6 Linux/Unix Security 407
12.7 Windows Security 411
12.8 Virtualization Security 413
12.9 Key Terms, Review Questions, and Problems 421
Chapter 13 Cloud and IoT Security 423
13.1 Cloud Computing 424
13.2 Cloud Security Concepts 432
13.3 Cloud Security Approaches 435
13.4 The Internet of Things 444
13.5 IoT Security 448
13.6 Key Terms and Review Questions 456
PART THREE MANAGEMENT ISSUES 458
Chapter 14 IT Security Management and Risk Assessment 458
14.1 IT Security Management 459
14.2 Organizational Context and Security Policy 462
14.3 Security Risk Assessment 465
14.4 Detailed Security Risk Analysis 468
14.5 Case Study: Silver Star Mines 480
14.6 Key Terms, Review Questions, and Problems 485
Chapter 15 IT Security Controls, Plans, and Procedures 488
15.1 IT Security Management Implementation 489
15.2 Security Controls or Safeguards 489
15.3 IT Security Plan 498
15.4 Implementation of Controls 499
15.5 Monitoring Risks 500
15.6 Case Study: Silver Star Mines 502
15.7 Key Terms, Review Questions, and Problems 505
Chapter 16 Physical and Infrastructure Security 507
16.1 Overview 508
16.2 Physical Security Threats 509
16.3 Physical Security Prevention and Mitigation Measures 516
16.4 Recovery from Physical Security Breaches 519
16.5 Example: A Corporate Physical Security Policy 519
16.6 Integration of Physical and Logical Security 520
16.7 Key Terms, Review Questions, and Problems 526
Chapter 17 Human Resources Security 528
17.1 Security Awareness, Training, and Education 529
17.2 Employment Practices and Policies 535
17.3 E-mail and Internet Use Policies 538
17.4 Computer Security Incident Response Teams 539
17.5 Key Terms, Review Questions, and Problems 546
Chapter 18 Security Auditing 548
18.1 Security Auditing Architecture 550
18.2 Security Audit Trail 554
18.3 Implementing the Logging Function 559
18.4 Audit Trail Analysis 570
18.5 Security Information and Event Management 574
18.6 Key Terms, Review Questions, and Problems 576
Chapter 19 Legal and Ethical Aspects 578
19.1 Cybercrime and Computer Crime 579
19.2 Intellectual Property 583
19.3 Privacy 589
19.4 Ethical Issues 596
19.5 Key Terms, Review Questions, and Problems 602
PART FOUR CRYPTOGRAPHIC ALGORITHMS 605
Chapter 20 Symmetric Encryption and Message Confidentiality 605
20.1 Symmetric Encryption Principles 606
20.2 Data Encryption Standard 611
20.3 Advanced Encryption Standard 613
20.4 Stream Ciphers and RC4 619
20.5 Cipher Block Modes of Operation 622
20.6 Key Distribution 628
20.7 Key Terms, Review Questions, and Problems 630
Chapter 21 Public-Key Cryptography and Message Authentication 634
21.1 Secure Hash Functions 635
21.2 HMAC 641
21.3 Authenticated Encryption 644
21.4 The RSA Public-Key Encryption Algorithm 647
21.5 Diffie-Hellman and Other Asymmetric Algorithms 653
21.6 Key Terms, Review Questions, and Problems 657
PART FIVE NETWORK SECURITY 660
Chapter 22 Internet Security Protocols and Standards 660
22.1 Secure E-mail and S/MIME 661
22.2 DomainKeys Identified Mail 664
22.3 Secure Sockets Layer (SSL) and Transport Layer Security (TLS) 668
22.4 HTTPS 675
22.5 IPv4 and IPv6 Security 676
22.6 Key Terms, Review Questions, and Problems 681
Chapter 23 Internet Authentication Applications 684
23.1 Kerberos 685
23.2 X.509 691
23.3 Public-Key Infrastructure 694
23.4 Key Terms, Review Questions, and Problems 697
Chapter 24 Wireless Network Security 700
24.1 Wireless Security 701
24.2 Mobile Device Security 704
24.3 IEEE 802.11 Wireless LAN Overview 708
24.4 IEEE 802.11i Wireless LAN Security 714
24.5 Key Terms, Review Questions, and Problems 729
Chapter 25 Linux Security
25.1 Introduction
25.2 Linux’s Security Model
25.3 The Linux DAC in Depth: Filesystem Security
25.4 Linux Vulnerabilities
25.5 Linux System Hardening
25.6 Application Security
25.7 Mandatory Access Controls
25.8 Key Terms, Review Questions, and Problems
Chapter 26 Windows and Windows Vista Security
26.1 Windows Security Architecture
26.2 Windows Vulnerabilities
26.3 Windows Security Defenses
26.4 Browser Defenses
26.5 Cryptographic Services
26.6 Common Criteria
26.7 Key Terms, Review Questions, Problems, and Projects
Chapter 27 Trusted Computing and Multilevel Security
27.1 The Bell-LaPadula Model for Computer Security
27.2 Other Formal Models for Computer Security
27.3 The Concept of Trusted Systems
27.4 Application of Multilevel Security
27.5 Trusted Computing and the Trusted Platform Module
27.6 Common Criteria for Information Technology Security Evaluation
27.7 Assurance and Evaluation
27.8 Key Terms, Review Questions, and Problems
Appendix A Projects and Other Student Exercises for Teaching Computer Security 732
A.1 Hacking Project 732
A.2 Laboratory Exercises 733
A.3 Security Education (SEED) Projects 733
A.4 Research Projects 735
A.5 Programming Projects 736
A.6 Practical Security Assessments 736
A.7 Firewall Projects 736
A.8 Case Studies 737
A.9 Reading/Report Assignments 737
A.10 Writing Assignments 737
A.11 Webcasts for Teaching Computer Security 738
Appendix B Some Aspects of Number Theory
Appendix C Standards and Standard-Setting Organizations
Appendix D Random and Pseudorandom Number Generation
Appendix E Message Authentication Codes Based on Block Ciphers
Appendix F TCP/IP Protocol Architecture
Appendix G Radix-64 Conversion
Appendix H The Domain Name System
Appendix I The Base-Rate Fallacy
Appendix J SHA-3
Appendix K Glossary
Acronyms 739
List of NIST and ISO Documents 740
References 742
Credits 755
Index 758
ONLINE CHAPTERS AND
APPENDICES 1
1 Online chapters, appendices, and other documents are Premium Content, available via the access card at the
front of this book.
Chapter 25 Linux Security
25.1 Introduction
25.2 Linux’s Security Model
25.3 The Linux DAC in Depth: Filesystem Security
25.4 Linux Vulnerabilities
25.5 Linux System Hardening
25.6 Application Security
25.7 Mandatory Access Controls
25.8 Key Terms, Review Questions, and Problems
Chapter 26 Windows and Windows Vista Security
26.1 Windows Security Architecture
26.2 Windows Vulnerabilities
26.3 Windows Security Defenses
26.4 Browser Defenses
26.5 Cryptographic Services
26.6 Common Criteria
26.7 Key Terms, Review Questions, Problems, and Projects
Chapter 27 Trusted Computing and Multilevel Security
27.1 The Bell-LaPadula Model for Computer Security
27.2 Other Formal Models for Computer Security
27.3 The Concept of Trusted Systems
27.4 Application of Multilevel Security
27.5 Trusted Computing and the Trusted Platform Module
27.6 Common Criteria for Information Technology Security Evaluation
27.7 Assurance and Evaluation
27.8 Key Terms, Review Questions, and Problems
Appendix B Some Aspects of Number Theory
Appendix C Standards and Standard-Setting Organizations
Appendix D Random and Pseudorandom Number Generation
Appendix E Message Authentication Codes Based on Block Ciphers
Appendix F TCP/IP Protocol Architecture
Appendix G Radix-64 Conversion
Appendix H The Domain Name System
Appendix I The Base-Rate Fallacy
Appendix J SHA-3
Appendix K Glossary
PREFACE
WHAT’S NEW IN THE FOURTH EDITION
Since the third edition of this book was published, the field has seen continued innovations and
improvements. In this new edition, we try to capture these changes while maintaining a broad and
comprehensive coverage of the entire field. To begin the process of revision, the third edition of
this book was extensively reviewed by a number of professors who teach the subject and by
professionals working in the field. The result is that in many places the narrative has been
clarified and tightened, and illustrations have been improved.
Beyond these refinements to improve pedagogy and user-friendliness, there have been major
substantive changes throughout the book. The most noteworthy changes are as follows:
Data center security: Chapter 5 includes a new discussion of data center security, including
the TIA-942 specification of reliability tiers.
Malware: The material on malware in Chapter 6 has been revised to include additional
material on macro viruses and their structure, as they are now the most common form of virus
malware.
Virtualization security: The material on virtualization security in Chapter 12 has been
extended, given the rising use of such systems by organizations and in cloud computing
environments. A discussion of virtual firewalls, which may be used to help secure these
environments, has also been added.
Cloud security: Chapter 13 includes a new discussion of cloud security. The discussion
includes an introduction to cloud computing, key cloud security concepts, an analysis of
approaches to cloud security, and an open-source example.
IoT security: Chapter 13 includes a new discussion of security for the Internet of Things
(IoT). The discussion includes an introduction to IoT, an overview of IoT security issues, and
an open-source example.
SIEM: The discussion of Security Information and Event Management (SIEM) systems in
Chapter 18 has been updated.
Privacy: The section on privacy issues and its management in Chapter 19 has been extended
with additional discussion of moral and legal approaches, and the privacy issues related to big
data.
Authenticated encryption: Authenticated encryption has become an increasingly widespread
cryptographic tool in a variety of applications and protocols. Chapter 21 includes a new
discussion of authenticated encryption and describes an important authenticated encryption
algorithm known as offset codebook (OCB) mode.
BACKGROUND
Interest in education in computer security and related topics has been growing at a dramatic rate
in recent years. This interest has been spurred by a number of factors, two of which stand out:
1. As information systems, databases, and Internet-based distributed systems and
communication have become pervasive in the commercial world, coupled with the
increased intensity and sophistication of security-related attacks, organizations now
recognize the need for a comprehensive security strategy. This strategy encompasses the
use of specialized hardware and software and trained personnel to meet that need.
2. Computer security education, often termed information security education or information
assurance education, has emerged as a national goal in the United States and other
countries, with national defense and homeland security implications. The NSA/DHS
National Center of Academic Excellence in Information Assurance/Cyber Defense is
spearheading a government role in the development of standards for computer security
education.
Accordingly, the number of courses in universities, community colleges, and other institutions in
computer security and related areas is growing.
OBJECTIVES
The objective of this book is to provide an up-to-date survey of developments in computer
security. Central problems that confront security designers and security administrators include
defining the threats to computer and network systems, evaluating the relative risks of these
threats, and developing cost-effective and user-friendly countermeasures.
The following basic themes unify the discussion:
Principles: Although the scope of this book is broad, there are a number of basic principles
that appear repeatedly as themes and that unify this field. Examples are issues relating to
authentication and access control. The book highlights these principles and examines their
application in specific areas of computer security.
Design approaches: The book examines alternative approaches to meeting specific computer
security requirements.
Standards: Standards have come to assume an increasingly important, indeed dominant, role
in this field. An understanding of the current status and future direction of technology requires
a comprehensive discussion of the related standards.
Real-world examples: A number of chapters include a section that shows the practical
application of that chapter’s principles in a real-world environment.
SUPPORT OF ACM/IEEE COMPUTER SCIENCE
CURRICULA 2013
This book is intended for both an academic and a professional audience. As a textbook, it is
intended as a one- or two-semester undergraduate course for computer science, computer
engineering, and electrical engineering majors. This edition is designed to support the
recommendations of the ACM/IEEE Computer Science Curricula 2013 (CS2013). The CS2013
curriculum recommendation includes, for the first time, Information Assurance and Security (IAS)
as one of the Knowledge Areas in the Computer Science Body of Knowledge. CS2013 divides all
course work into three categories: Core-Tier 1 (all topics should be included in the curriculum),
Core-Tier 2 (all or almost all topics should be included), and Elective (desirable to provide breadth
and depth). In the IAS area, CS2013 includes three Tier 1 topics, five Tier 2 topics, and
numerous Elective topics, each of which has a number of subtopics. This text covers all of the
Tier 1 and Tier 2 topics and subtopics listed by CS2013, as well as many of the elective topics.
Table P.1 shows the support for the IAS Knowledge Area provided in this textbook.
Table P.1 Coverage of CS2013 Information Assurance and Security (IAS) Knowledge Area
IAS Knowledge Units | Topics | Textbook Coverage

Foundational Concepts in Security (Tier 1)
Topics: CIA (Confidentiality, Integrity, and Availability); Risk, threats, vulnerabilities, and attack vectors; Authentication and authorization, access control (mandatory vs. discretionary); Trust and trustworthiness; Ethics (responsible disclosure)
Coverage: 1—Overview; 3—User Authentication; 4—Access Control; 19—Legal and Ethical Aspects

Principles of Secure Design (Tier 1)
Topics: Least privilege and isolation; Fail-safe defaults; Open design; End-to-end security; Defense in depth; Security by design; Tensions between security and other design goals
Coverage: 1—Overview

Principles of Secure Design (Tier 2)
Topics: Complete mediation; Use of vetted security components; Economy of mechanism (reducing trusted computing base, minimize attack surface); Usable security; Security composability; Prevention, detection, and deterrence
Coverage: 1—Overview

Defensive Programming (Tier 1)
Topics: Input validation and data sanitization; Choice of programming language and type-safe languages; Examples of input validation and data sanitization errors (buffer overflows, integer errors, SQL injection, and XSS vulnerability); Race conditions; Correct handling of exceptions and unexpected behaviors
Coverage: 11—Software Security

Defensive Programming (Tier 2)
Topics: Correct usage of third-party components; Effectively deploying security updates
Coverage: 11—Software Security; 12—OS Security

Threats and Attacks (Tier 2)
Topics: Attacker goals, capabilities, and motivations; Malware; Denial of service and distributed denial of service; Social engineering
Coverage: 6—Malicious Software; 7—Denial-of-Service Attacks

Network Security (Tier 2)
Topics: Network-specific threats and attack types; Use of cryptography for data and network security; Architectures for secure networks; Defense mechanisms and countermeasures; Security for wireless, cellular networks
Coverage: 8—Intrusion Detection; 9—Firewalls and Intrusion Prevention Systems; Part 5—Network Security

Cryptography (Tier 2)
Topics: Basic cryptography terminology; Cipher types; Overview of mathematical preliminaries; Public key infrastructure
Coverage: 2—Cryptographic Tools; Part 4—Cryptographic Algorithms
COVERAGE OF CISSP SUBJECT AREAS
This book provides coverage of all the subject areas specified for CISSP (Certified Information
Systems Security Professional) certification. The CISSP designation from the International
Information Systems Security Certification Consortium (ISC)² is often referred to as the “gold
standard” when it comes to information security certification. It is the only universally recognized
certification in the security industry. Many organizations, including the U.S. Department of
Defense and many financial institutions, now require that cyber security personnel have the
CISSP certification. In 2004, CISSP became the first IT program to earn accreditation under the
international standard ISO/IEC 17024 (General Requirements for Bodies Operating Certification of
Persons).
The CISSP examination is based on the Common Body of Knowledge (CBK), a compendium of
information security best practices developed and maintained by (ISC)², a nonprofit organization.
The CBK is made up of 8 domains that comprise the body of knowledge that is required for
CISSP certification.
The 8 domains are as follows, with an indication of where the topics are covered in this textbook:
Security and risk management: Confidentiality, integrity, and availability concepts; security
governance principles; risk management; compliance; legal and regulatory issues; professional
ethics; and security policies, standards, procedures, and guidelines. (Chapter 14)
Asset security: Information and asset classification; ownership (e.g., data owners, system
owners); privacy protection; appropriate retention; data security controls; and handling
requirements (e.g., markings, labels, storage). (Chapters 5, 15, 16, 19)
Security engineering: Engineering processes using secure design principles; security
models; security evaluation models; security capabilities of information systems;
vulnerabilities of security architectures, designs, and solution elements; web-based systems
vulnerabilities; mobile systems vulnerabilities; embedded devices and cyber-physical systems
vulnerabilities; cryptography; secure principles of site and facility design; and physical security.
(Chapters 1, 2, 13, 15, 16)
Communication and network security: Secure network architecture design (e.g., IP and
non-IP protocols, segmentation); secure network components; secure communication
channels; and network attacks. (Part Five)
Identity and access management: Physical and logical assets control; identification and
authentication of people and devices; identity as a service (e.g., cloud identity); third-party
identity services (e.g., on-premise); access control attacks; and identity and access
provisioning lifecycle (e.g., provisioning review). (Chapters 3, 4, 8, 9)
Security assessment and testing: Assessment and test strategies; security process data
(e.g., management and operational controls); security control testing; test outputs (e.g.,
automated, manual); and security architectures vulnerabilities. (Chapters 14, 15, 18)
Security operations: Investigations support and requirements; logging and monitoring
activities; provisioning of resources; foundational security operations concepts; resource
protection techniques; incident management; preventative measures; patch and vulnerability
management; change management processes; recovery strategies; disaster recovery
processes and plans; business continuity planning and exercises; physical security; and
personnel safety concerns. (Chapters 11, 12, 15, 16, 17)
Software development security: Security in the software development lifecycle; development
environment security controls; software security effectiveness; and acquired software security
impact. (Part Two)
SUPPORT FOR NSA/DHS CERTIFICATION
The U.S. National Security Agency (NSA) and the U.S. Department of Homeland Security (DHS)
jointly sponsor the National Centers of Academic Excellence in Information Assurance/Cyber
Defense (IA/CD). The goal of these programs is to reduce vulnerability in our national information
infrastructure by promoting higher education and research in IA and producing a growing number
of professionals with IA expertise in various disciplines. To achieve that purpose, NSA/DHS have
defined a set of Knowledge Units for 2- and 4-year institutions that must be supported in the
curriculum to gain a designation as an NSA/DHS National Center of Academic Excellence in
IA/CD. Each Knowledge Unit is composed of a minimum list of required topics to be covered and
one or more outcomes or learning objectives. Designation is based on meeting a certain threshold
number of core and optional Knowledge Units.
In the area of computer security, the 2014 Knowledge Units document lists the following core
Knowledge Units:
Cyber Defense: Includes access control, cryptography, firewalls, intrusion detection systems,
malicious activity detection and countermeasures, trust relationships, and defense in depth.
Cyber Threats: Includes types of attacks, legal issues, attack surfaces, attack trees, insider
problems, and threat information sources.
Fundamental Security Design Principles: A list of 12 principles, all of which are covered in
Section 1.4 of this text.
Information Assurance Fundamentals: Includes threats and vulnerabilities, intrusion
detection and prevention systems, cryptography, access control models,
identification/authentication, and audit.
Introduction to Cryptography: Includes symmetric cryptography, public-key cryptography,
hash functions, and digital signatures.
Databases: Includes an overview of databases, database access controls, and security issues
of inference.
This book provides extensive coverage in all of these areas. In addition, the book partially covers
a number of the optional Knowledge Units.
PLAN OF THE TEXT
The book is divided into five parts:
Computer Security Technology and Principles
Software and System Security
Management Issues
Cryptographic Algorithms
Network Security
The text is also accompanied by a number of online chapters and appendices that provide more
detail on selected topics.
The text includes an extensive glossary, a list of frequently used acronyms, and a bibliography.
Each chapter includes homework problems, review questions, a list of key words, and
suggestions for further reading.
INSTRUCTOR SUPPORT MATERIALS
The major goal of this text is to make it as effective a teaching tool for this exciting and fast-moving subject as possible. This goal is reflected both in the structure of the book and in the
supporting material. The text is accompanied by the following supplementary material to aid the
instructor:
Projects manual: Project resources including documents and portable software, plus
suggested project assignments for all of the project categories listed in the following section.
Solutions manual: Solutions to end-of-chapter Review Questions and Problems.
PowerPoint slides: A set of slides covering all chapters, suitable for use in lecturing.
PDF files: Reproductions of all figures and tables from the book.
Test bank: A chapter-by-chapter set of questions.
Sample syllabuses: The text contains more material than can be conveniently covered in one
semester. Accordingly, instructors are provided with several sample syllabuses that guide the
use of the text within limited time. These samples are based on real-world experience by
professors with the first edition.
All of these support materials are available at the Instructor Resource Center (IRC) for this
textbook, which can be reached through the publisher’s Website www.pearsonhighered.com/stallings
or by clicking on the link labeled Pearson Resources for Instructors at this book’s
Companion Website at WilliamStallings.com/ComputerSecurity. To gain access to the IRC,
please contact your local Pearson sales representative via
pearsonhighered.com/educator/replocator/requestSalesRep.page or call Pearson Faculty Services at 1-800-526-0485.
The Companion Website, at WilliamStallings.com/ComputerSecurity (click on the Instructor
Resources link), includes the following:
Links to Web sites for other courses being taught using this book.
Sign-up information for an Internet mailing list for instructors using this book to exchange
information, suggestions, and questions with each other and with the author.
STUDENT RESOURCES
For this new edition, a tremendous amount of original supporting material for students has been
made available online, at two Web locations. The Companion Website, at
WilliamStallings.com/ComputerSecurity (click on Student Resources link), includes a list of
relevant links organized by chapter and an errata sheet for the book.
Purchasing this textbook now grants the reader 12 months of access to the Premium Content
Site, which includes the following materials:
Online chapters: To limit the size and cost of the book, three chapters of the book are
provided in PDF format. The chapters are listed in this book’s table of contents.
Online appendices: There are numerous interesting topics that support material found in the
text but whose inclusion is not warranted in the printed text. A total of eleven online
appendices cover these topics for the interested student. The appendices are listed in this
book’s table of contents.
Homework problems and solutions: To aid the student in understanding the material, a
separate set of homework problems with solutions is available. These enable the students to
test their understanding of the text.
To access the Premium Content site, click on the Premium Content link at the Companion Web
site or at pearsonhighered.com/stallings and enter the student access code found on the card
in the front of the book.
PROJECTS AND OTHER STUDENT EXERCISES
For many instructors, an important component of a computer security course is a project or set of
projects by which the student gets hands-on experience to reinforce concepts from the text. This
book provides an unparalleled degree of support for including a projects component in the course.
The instructor’s support materials available through Pearson not only include guidance on how to
assign and structure the projects but also include a set of user manuals for various project types
plus specific assignments, all written especially for this book. Instructors can assign work in the
following areas:
Hacking exercises: Two projects that enable students to gain an understanding of the issues
in intrusion detection and prevention.
Laboratory exercises: A series of projects that involve programming and experimenting with
concepts from the book.
Security education (SEED) projects: The SEED projects are a set of hands-on exercises, or
labs, covering a wide range of security topics.
Research projects: A series of research assignments that instruct the students to research a
particular topic on the Internet and write a report.
Programming projects: A series of programming projects that cover a broad range of topics
and that can be implemented in any suitable language on any platform.
Practical security assessments: A set of exercises to examine current infrastructure and
practices of an existing organization.
Firewall projects: A portable network firewall visualization simulator is provided, together with
exercises for teaching the fundamentals of firewalls.
Case studies: A set of real-world case studies, including learning objectives, case description,
and a series of case discussion questions.
Reading/report assignments: A list of papers that can be assigned for reading and writing a
report, plus suggested assignment wording.
Writing assignments: A list of writing assignments to facilitate learning the material.
Webcasts for teaching computer security: A catalog of webcast sites that can be used to
enhance the course. An effective way of using this catalog is to select, or allow the student to
select, one or a few videos to watch, and then to write a report/analysis of the video.
This diverse set of projects and other student exercises enables the instructor to use the book as
one component in a rich and varied learning experience and to tailor a course plan to meet the
specific needs of the instructor and students. See Appendix A in this book for details.
NOTATION

Expression      Meaning
D(K, Y)         Symmetric decryption of ciphertext Y using secret key K
D(PRa, Y)       Asymmetric decryption of ciphertext Y using A's private key PRa
D(PUa, Y)       Asymmetric decryption of ciphertext Y using A's public key PUa
E(K, X)         Symmetric encryption of plaintext X using secret key K
E(PRa, X)       Asymmetric encryption of plaintext X using A's private key PRa
E(PUa, X)       Asymmetric encryption of plaintext X using A's public key PUa
K               Secret key
PRa             Private key of user A
PUa             Public key of user A
H(X)            Hash function of message X
x + y           Logical OR: x OR y
x • y           Logical AND: x AND y
¬x              Logical NOT: NOT x
C               A characteristic formula, consisting of a logical formula over the values of attributes in a database
X(C)            Query set of C, the set of records satisfying C
|X(C)|          Magnitude of X(C): the number of records in X(C)
X(C) ∩ X(D)     Set intersection: the number of records in both X(C) and X(D)
x || y          x concatenated with y
ABOUT THE AUTHORS
Dr. William Stallings has authored 18 textbooks, and, counting revised editions, a total of 70 books
on various aspects of computer security, computer networking, and computer architecture. His
writings have appeared in numerous ACM and IEEE publications, including the Proceedings of the
IEEE and ACM Computing Reviews. He has received the award for the best Computer Science
textbook of the year from the Text and Academic Authors Association 13 times.
In over 30 years in the field, he has been a technical contributor, technical manager, and an
executive with several high-technology firms. He has designed and implemented both TCP/IP-based and OSI-based protocol suites on a variety of computers and operating systems, ranging
from microcomputers to mainframes. Currently he is an independent consultant whose clients
have included computer and networking manufacturers and customers, software development
firms, and leading-edge government research institutions.
He created and maintains the Computer Science Student Resource Site at Computer
ScienceStudent.com. This site provides documents and links on a variety of subjects of general
interest to computer science students (and professionals). He is a member of the editorial board
of Cryptologia, a scholarly journal devoted to all aspects of cryptology.
Dr. Lawrie Brown is a visiting senior lecturer in the School of Engineering and Information
Technology, UNSW Canberra at the Australian Defence Force Academy.
His professional interests include communications and computer systems security and
cryptography, including research on pseudo-anonymous communication, authentication, security
and trust issues in Web environments, the design of secure remote code execution environments
using the functional language Erlang, and on the design and implementation of the LOKI family of
block ciphers.
During his career, he has presented courses on cryptography, cybersecurity, data
communications, data structures, and programming in Java to both undergraduate and
postgraduate students.
CHAPTER 1 OVERVIEW
1.1 Computer Security Concepts
A Definition of Computer Security
Examples
The Challenges of Computer Security
A Model for Computer Security
1.2 Threats, Attacks, and Assets
Threats and Attacks
Threats and Assets
1.3 Security Functional Requirements
1.4 Fundamental Security Design Principles
1.5 Attack Surfaces and Attack Trees
Attack Surfaces
Attack Trees
1.6 Computer Security Strategy
Security Policy
Security Implementation
Assurance and Evaluation
1.7 Standards
1.8 Key Terms, Review Questions, and Problems
LEARNING OBJECTIVES
After studying this chapter, you should be able to:
Describe the key security requirements of confidentiality, integrity, and availability.
Discuss the types of security threats and attacks that must be dealt with and give examples of
the types of threats and attacks that apply to different categories of computer and network
assets.
Summarize the functional requirements for computer security.
Explain the fundamental security design principles.
Discuss the use of attack surfaces and attack trees.
Understand the principal aspects of a comprehensive security strategy.
This chapter provides an overview of computer security. We begin with a
discussion of what we mean by computer security. In essence, computer
security deals with computer-related assets that are subject to a variety of
threats and for which various measures are taken to protect those assets.
Accordingly, the next section of this chapter provides a brief overview of the
categories of computer-related assets that users and system managers wish to
preserve and protect, and a look at the various threats and attacks that can be
made on those assets. Then, we survey the measures that can be taken to
deal with such threats and attacks. This we do from three different viewpoints,
in Sections 1.3 through 1.5. We then lay out in general terms a computer
security strategy.
The focus of this chapter, and indeed this book, is on three fundamental
questions:
1. What assets do we need to protect?
2. How are those assets threatened?
3. What can we do to counter those threats?
1.1 COMPUTER SECURITY
CONCEPTS
A Definition of Computer Security
The NIST Internal/Interagency Report NISTIR 7298 (Glossary of Key Information Security Terms,
May 2013) defines the term computer security as follows:
Computer Security: Measures and controls that ensure confidentiality, integrity, and availability
of information system assets including hardware, software, firmware, and information being
processed, stored, and communicated.
This definition introduces three key objectives that are at the heart of computer security:
Confidentiality: This term covers two related concepts:
Data confidentiality:1 Assures that private or confidential information is not made
available or disclosed to unauthorized individuals.
1RFC 4949 (Internet Security Glossary, August 2007) defines information as “facts and ideas, which
can be represented (encoded) as various forms of data,” and data as “information in a specific physical
representation, usually a sequence of symbols that have meaning; especially a representation of
information that can be processed or produced by a computer.” Security literature typically does not
make much of a distinction; nor does this book.
Privacy: Assures that individuals control or influence what information related to them may
be collected and stored and by whom and to whom that information may be disclosed.
Integrity: This term covers two related concepts:
Data integrity: Assures that information and programs are changed only in a specified and
authorized manner.
System integrity: Assures that a system performs its intended function in an unimpaired
manner, free from deliberate or inadvertent unauthorized manipulation of the system.
Availability: Assures that systems work promptly and service is not denied to authorized
users.
These three concepts form what is often referred to as the CIA triad. The three concepts embody
the fundamental security objectives for both data and for information and computing services. For
example, the NIST standard FIPS 199 (Standards for Security Categorization of Federal
Information and Information Systems, February 2004) lists confidentiality, integrity, and availability
as the three security objectives for information and for information systems. FIPS 199 provides a
useful characterization of these three objectives in terms of requirements and the definition of a
loss of security in each category:
Confidentiality: Preserving authorized restrictions on information access and disclosure,
including means for protecting personal privacy and proprietary information. A loss of
confidentiality is the unauthorized disclosure of information.
Integrity: Guarding against improper information modification or destruction, including
ensuring information nonrepudiation and authenticity. A loss of integrity is the unauthorized
modification or destruction of information.
Availability: Ensuring timely and reliable access to and use of information. A loss of
availability is the disruption of access to or use of information or an information system.
Although the use of the CIA triad to define security objectives is well established, some in the
security field feel that additional concepts are needed to present a complete picture (see Figure
1.1). Two of the most commonly mentioned are as follows:
Figure 1.1 Essential Network and Computer Security Requirements
Authenticity: The property of being genuine and being able to be verified and trusted;
confidence in the validity of a transmission, a message, or message originator. This means
verifying that users are who they say they are and that each input arriving at the system came
from a trusted source.
Accountability: The security goal that generates the requirement for actions of an entity to be
traced uniquely to that entity. This supports nonrepudiation, deterrence, fault isolation,
intrusion detection and prevention, and after-action recovery and legal action. Because truly
secure systems are not yet an achievable goal, we must be able to trace a security breach to
a responsible party. Systems must keep records of their activities to permit later forensic
analysis to trace security breaches or to aid in transaction disputes.
Note that FIPS 199 includes authenticity under integrity.
Examples
We now provide some examples of applications that illustrate the requirements just enumerated.2
For these examples, we use three levels of impact on organizations or individuals should there be
a breach of security (i.e., a loss of confidentiality, integrity, or availability). These levels are
defined in FIPS 199:
2These examples are taken from a security policy document published by the Information Technology Security
and Privacy Office at Purdue University.
Low: The loss could be expected to have a limited adverse effect on organizational
operations, organizational assets, or individuals. A limited adverse effect means that, for
example, the loss of confidentiality, integrity, or availability might: (i) cause a degradation in
mission capability to an extent and duration that the organization is able to perform its primary
functions, but the effectiveness of the functions is noticeably reduced; (ii) result in minor
damage to organizational assets; (iii) result in minor financial loss; or (iv) result in minor harm
to individuals.
Moderate: The loss could be expected to have a serious adverse effect on organizational
operations, organizational assets, or individuals. A serious adverse effect means that, for
example, the loss might: (i) cause a significant degradation in mission capability to an extent
and duration that the organization is able to perform its primary functions, but the effectiveness
of the functions is significantly reduced; (ii) result in significant damage to organizational
assets; (iii) result in significant financial loss; or (iv) result in significant harm to individuals that
does not involve loss of life or serious life-threatening injuries.
High: The loss could be expected to have a severe or catastrophic adverse effect on
organizational operations, organizational assets, or individuals. A severe or catastrophic
adverse effect means that, for example, the loss might: (i) cause a severe degradation in or
loss of mission capability to an extent and duration that the organization is not able to perform
one or more of its primary functions; (ii) result in major damage to organizational assets; (iii)
result in major financial loss; or (iv) result in severe or catastrophic harm to individuals
involving loss of life or serious life-threatening injuries.
CONFIDENTIALITY
Student grade information is an asset whose confidentiality is considered to be highly important
by students. In the United States, the release of such information is regulated by the Family
Educational Rights and Privacy Act (FERPA). Grade information should only be available to
students, their parents, and employees that require the information to do their job. Student
enrollment information may have a moderate confidentiality rating. While still covered by FERPA,
this information is seen by more people on a daily basis, is less likely to be targeted than grade
information, and results in less damage if disclosed. Directory information, such as lists of
students or faculty or departmental lists, may be assigned a low confidentiality rating or indeed no
rating. This information is typically freely available to the public and published on a school’s
website.
INTEGRITY
Several aspects of integrity are illustrated by the example of a hospital patient’s allergy
information stored in a database. The doctor should be able to trust that the information is correct
and current. Now, suppose an employee (e.g., a nurse) who is authorized to view and update this
information deliberately falsifies the data to cause harm to the hospital. The database needs to be
restored to a trusted basis quickly, and it should be possible to trace the error back to the person
responsible. Patient allergy information is an example of an asset with a high requirement for
integrity. Inaccurate information could result in serious harm or death to a patient, and expose the
hospital to massive liability.
An example of an asset that may be assigned a moderate level of integrity requirement is a
website that offers a forum to registered users to discuss some specific topic. Either a registered
user or a hacker could falsify some entries or deface the website. If the forum exists only for the
enjoyment of the users, brings in little or no advertising revenue, and is not used for something
important such as research, then potential damage is not severe. The Webmaster may
experience some data, financial, and time loss.
An example of a low integrity requirement is an anonymous online poll. Many websites, such as
news organizations, offer these polls to their users with very few safeguards. However, the
inaccuracy and unscientific nature of such polls is well understood.
AVAILABILITY
The more critical a component or service is, the higher will be the level of availability required.
Consider a system that provides authentication services for critical systems, applications, and
devices. An interruption of service results in the inability for customers to access computing
resources and staff to access the resources they need to perform critical tasks. The loss of the
service translates into a large financial loss in lost employee productivity and potential customer
loss.
An example of an asset that would typically be rated as having a moderate availability
requirement is a public website for a university; the website provides information for current and
prospective students and donors. Such a site is not a critical component of the university’s
information system, but its unavailability will cause some embarrassment.
An online telephone directory lookup application would be classified as a low availability
requirement. Although the temporary loss of the application may be an annoyance, there are
other ways to access the information, such as a hardcopy directory or the operator.
The Challenges of Computer Security
Computer security is both fascinating and complex. Some of the reasons are as follows:
1. Computer security is not as simple as it might first appear to the novice. The requirements
seem to be straightforward; indeed, most of the major requirements for security services
can be given self-explanatory one-word labels: confidentiality, authentication,
nonrepudiation, and integrity. But the mechanisms used to meet those requirements can be
quite complex, and understanding them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks are
designed by looking at the problem in a completely different way, therefore exploiting an
unexpected weakness in the mechanism.
3. Because of Point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious from the
statement of a particular requirement that such elaborate measures are needed. Only when
the various aspects of the threat are considered do elaborate security mechanisms make
sense.
4. Having designed various security mechanisms, it is necessary to decide where to use
them. This is true both in terms of physical placement (e.g., at what points in a network are
certain security mechanisms needed) and in a logical sense [e.g., at what layer or layers of
an architecture such as TCP/IP (Transmission Control Protocol/Internet Protocol) should
mechanisms be placed].
5. Security mechanisms typically involve more than a particular algorithm or protocol. They
also require that participants be in possession of some secret information (e.g., an
encryption key), which raises questions about the creation, distribution, and protection of
that secret information. There may also be a reliance on communications protocols whose
behavior may complicate the task of developing the security mechanism. For example, if
the proper functioning of the security mechanism requires setting time limits on the transit
time of a message from sender to receiver, then any protocol or network that introduces
variable, unpredictable delays may render such time limits meaningless.
6. Computer security is essentially a battle of wits between a perpetrator who tries to find
holes, and the designer or administrator who tries to close them. The great advantage that
the attacker has is that he or she need only find a single weakness, while the designer
must find and eliminate all weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little
benefit from security investment until a security failure occurs.
8. Security requires regular, even constant monitoring, and this is difficult in today's short-term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the design is
complete, rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to
efficient and user-friendly operation of an information system or use of information.
The difficulties just enumerated will be encountered in numerous ways as we examine the various
security threats and mechanisms throughout this book.
A Model for Computer Security
We now introduce some terminology that will be useful throughout the book.3 Table 1.1 defines
terms and Figure 1.2, based on [CCPS12a], shows the relationship among some of these terms.
We start with the concept of a system resource, or asset, that users and owners wish to protect.
The assets of a computer system can be categorized as follows:
3See Chapter 0 for an explanation of RFCs.
Table 1.1 Computer Security Terminology
Source: Stallings, William, Computer Security: Principles and Practice, 4e, © 2019. Reprinted and electronically reproduced by permission of Pearson
Education, Inc., New York, NY.
Adversary (threat agent)
Individual, group, organization, or government that conducts or has the intent to conduct detrimental
activities.
Attack
Any kind of malicious activity that attempts to collect, disrupt, deny, degrade, or destroy information system
resources or the information itself.
Countermeasure
A device or technique that has as its objective the impairment of the operational effectiveness of
undesirable or adversarial activity, or the prevention of espionage, sabotage, theft, or unauthorized access
to or use of sensitive information or information systems.
Risk
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a
function of 1) the adverse impacts that would arise if the circumstance or event occurs; and 2) the likelihood
of occurrence.
Security Policy
A set of criteria for the provision of security services. It defines and constrains the activities of a data
processing facility in order to maintain a condition of security for systems and data.
System Resource (Asset)
A major application, general support system, high impact program, physical plant, mission critical system,
personnel, equipment, or a logically related group of systems.
Threat
Any circumstance or event with the potential to adversely impact organizational operations (including
mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the
Nation through an information system via unauthorized access, destruction, disclosure, modification of
information, and/or denial of service.
Vulnerability
Weakness in an information system, system security procedures, internal controls, or implementation that
could be exploited or triggered by a threat source.
Figure 1.2 Security Concepts and Relationships
Hardware: Including computer systems and other data processing, data storage, and data
communications devices.
Software: Including the operating system, system utilities, and applications.
Data: Including files and databases, as well as security-related data, such as password files.
Communication facilities and networks: Local and wide area network communication links,
bridges, routers, and so on.
In the context of security, our concern is with the vulnerabilities of system resources. [NRC02]
lists the following general categories of vulnerabilities of a computer system or network asset:
The system can be corrupted, so it does the wrong thing or gives wrong answers. For
example, stored data values may differ from what they should be because they have been
improperly modified.
The system can become leaky. For example, someone who should not have access to some
or all of the information available through the network obtains such access.
The system can become unavailable or very slow. That is, using the system or network
becomes impossible or impractical.
These three general types of vulnerability correspond to the concepts of integrity, confidentiality,
and availability, enumerated earlier in this section.
Corresponding to the various types of vulnerabilities to a system resource are threats that are
capable of exploiting those vulnerabilities. A threat represents a potential security harm to an
asset. An attack is a threat that is carried out (threat action) and, if successful, leads to an
undesirable violation of security, or threat consequence. The agent carrying out the attack is
referred to as an attacker or threat agent. We can distinguish two types of attacks:
Active attack: An attempt to alter system resources or affect their operation.
Passive attack: An attempt to learn or make use of information from the system that does not
affect system resources.
We can also classify attacks based on the origin of the attack:
Inside attack: Initiated by an entity inside the security perimeter (an “insider”). The insider is
authorized to access system resources but uses them in a way not approved by those who
granted the authorization.
Outside attack: Initiated from outside the perimeter, by an unauthorized or illegitimate user of
the system (an “outsider”). On the Internet, potential outside attackers range from amateur
pranksters to organized criminals, international terrorists, and hostile governments.
Finally, a countermeasure is any means taken to deal with a security attack. Ideally, a
countermeasure can be devised to prevent a particular type of attack from succeeding. When
prevention is not possible, or fails in some instance, the goal is to detect the attack then recover
from the effects of the attack. A countermeasure may itself introduce new vulnerabilities. In any
case, residual vulnerabilities may remain after the imposition of countermeasures. Such
vulnerabilities may be exploited by threat agents representing a residual level of risk to the
assets. Owners will seek to minimize that risk given other constraints.
1.2 THREATS, ATTACKS, AND
ASSETS
We now turn to a more detailed look at threats, attacks, and assets. First, we look at the types of
security threats that must be dealt with, and then give some examples of the types of threats that
apply to different categories of assets.
Threats and Attacks
Table 1.2, based on RFC 4949, describes four kinds of threat consequences and lists the kinds of
attacks that result in each consequence.
Table 1.2 Threat Consequences, and the Types of Threat Actions that Cause Each
Consequence
Source: Based on RFC 4949
Unauthorized Disclosure: A circumstance or event whereby an entity gains access to data for
which the entity is not authorized.
Exposure: Sensitive data are directly released to an unauthorized entity.
Interception: An unauthorized entity directly accesses sensitive data traveling between
authorized sources and destinations.
Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data
(but not necessarily the data contained in the communication) by reasoning from
characteristics or by-products of communications.
Intrusion: An unauthorized entity gains access to sensitive data by circumventing a system's
security protections.

Deception: A circumstance or event that may result in an authorized entity receiving false data
and believing it to be true.
Masquerade: An unauthorized entity gains access to a system or performs a malicious act by
posing as an authorized entity.
Falsification: False data deceive an authorized entity.
Repudiation: An entity deceives another by falsely denying responsibility for an act.

Disruption: A circumstance or event that interrupts or prevents the correct operation of system
services and functions.
Incapacitation: Prevents or interrupts system operation by disabling a system component.
Corruption: Undesirably alters system operation by adversely modifying system functions or
data.
Obstruction: A threat action that interrupts delivery of system services by hindering system
operation.

Usurpation: A circumstance or event that results in control of system services or functions by an
unauthorized entity.
Misappropriation: An entity assumes unauthorized logical or physical control of a system
resource.
Misuse: Causes a system component to perform a function or service that is detrimental to
system security.
Unauthorized disclosure is a threat to confidentiality. The following types of attacks can result in
this threat consequence:
Exposure: This can be deliberate, as when an insider intentionally releases sensitive
information, such as credit card numbers, to an outsider. It can also be the result of a human,
hardware, or software error, which results in an entity gaining unauthorized knowledge of
sensitive data. There have been numerous instances of this, such as universities accidentally
posting confidential student information on the Web.
Interception: Interception is a common attack in the context of communications. On a shared
local area network (LAN), such as a wireless LAN or a broadcast Ethernet, any device
attached to the LAN can receive a copy of packets intended for another device. On the
Internet, a determined hacker can gain access to e-mail traffic and other data transfers. All of
these situations create the potential for unauthorized access to data.
Inference: An example of inference is known as traffic analysis, in which an adversary is able
to gain information from observing the pattern of traffic on a network, such as the amount of
traffic between particular pairs of hosts on the network. Another example is the inference of
detailed information from a database by a user who has only limited access; this is
accomplished by repeated queries whose combined results enable inference.
Intrusion: An example of intrusion is an adversary gaining unauthorized access to sensitive
data by overcoming the system’s access control protections.
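The inference-by-repeated-queries idea can be sketched with the query-set notation X(C) and |X(C)| from the Notation list. The mini-database, names, and salary figures below are invented for illustration; the point is that two individually permitted aggregate queries, combined, disclose one individual's value.

```python
# Hypothetical mini-database; names, departments, and salaries are invented.
records = [
    {"name": "Alice", "dept": "CS", "salary": 52000},
    {"name": "Bob",   "dept": "CS", "salary": 48000},
    {"name": "Carol", "dept": "EE", "salary": 61000},
]

def X(C):
    """Query set of C: the records satisfying characteristic formula C."""
    return [r for r in records if C(r)]

def total_salary(C):
    """An 'aggregate-only' query the database is willing to answer."""
    return sum(r["salary"] for r in X(C))

# Two individually harmless-looking aggregate queries:
everyone = lambda r: True
all_but_alice = lambda r: r["name"] != "Alice"

# |X(C)| > 1 for each query set, so each answer appears safe on its own...
assert len(X(everyone)) == 3 and len(X(all_but_alice)) == 2

# ...yet differencing the two answers infers Alice's individual salary,
# even though no query ever targeted Alice directly:
alice_salary = total_salary(everyone) - total_salary(all_but_alice)
assert alice_salary == 52000
```

Real statistical databases counter this with query-set-size restrictions, query auditing, or noise addition, topics developed in the database security chapter.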
Deception is a threat to either system integrity or data integrity. The following types of attacks
can result in this threat consequence:
Masquerade: One example of masquerade is an attempt by an unauthorized user to gain
access to a system by posing as an authorized user; this could happen if the unauthorized
user has learned another user’s logon ID and password. Another example is malicious logic,
such as a Trojan horse, that appears to perform a useful or desirable function but actually
gains unauthorized access to system resources, or tricks a user into executing other malicious
logic.
Falsification: This refers to the altering or replacing of valid data or the introduction of false
data into a file or database. For example, a student may alter his or her grades on a school
database.
Repudiation: In this case, a user either denies sending data, or a user denies receiving or
possessing the data.
Disruption is a threat to availability or system integrity. The following types of attacks can result
in this threat consequence:
Incapacitation: This is an attack on system availability. This could occur as a result of
physical destruction of or damage to system hardware. More typically, malicious software,
such as Trojan horses, viruses, or worms, could operate in such a way as to disable a system
or some of its services.
Corruption: This is an attack on system integrity. Malicious software in this context could
operate in such a way that system resources or services function in an unintended manner. Or
a user could gain unauthorized access to a system and modify some of its functions. An
example of the latter is a user placing backdoor logic in the system to provide subsequent
access to a system and its resources by other than the usual procedure.
Obstruction: One way to obstruct system operation is to interfere with communications by
disabling communication links or altering communication control information. Another way is to
overload the system by placing excess burden on communication traffic or processing
resources.
Usurpation is a threat to system integrity. The following types of attacks can result in this threat
consequence:
Misappropriation: This can include theft of service. An example is a distributed denial of
service attack, when malicious software is installed on a number of hosts to be used as
platforms to launch traffic at a target host. In this case, the malicious software makes
unauthorized use of processor and operating system resources.
Misuse: Misuse can occur by means of either malicious logic or a hacker who has gained
unauthorized access to a system. In either case, security functions can be disabled or
thwarted.
Threats and Assets
The assets of a computer system can be categorized as hardware, software, data, and
communication lines and networks. In this subsection, we briefly describe these four categories
and relate these to the concepts of integrity, confidentiality, and availability introduced in Section
1.1 (see Figure 1.3 and Table 1.3).
Figure 1.3 Scope of Computer Security
Note: This figure depicts security concerns other than physical security, including controlling
access to computer systems, safeguarding data transmitted over communications systems,
and safeguarding stored data.
Table 1.3 Computer and Network Assets, with Examples of Threats
Hardware
Availability: Equipment is stolen or disabled, thus denying service.
Confidentiality: An unencrypted USB drive is stolen.

Software
Availability: Programs are deleted, denying access to users.
Confidentiality: An unauthorized copy of software is made.
Integrity: A working program is modified, either to cause it to fail during execution or to cause it to do some unintended task.

Data
Availability: Files are deleted, denying access to users.
Confidentiality: An unauthorized read of data is performed. An analysis of statistical data reveals underlying data.
Integrity: Existing files are modified or new files are fabricated.

Communication Lines and Networks
Availability: Messages are destroyed or deleted. Communication lines or networks are rendered unavailable.
Confidentiality: Messages are read. The traffic pattern of messages is observed.
Integrity: Messages are modified, delayed, reordered, or duplicated. False messages are fabricated.
HARDWARE
A major threat to computer system hardware is the threat to availability. Hardware is the most
vulnerable to attack and the least susceptible to automated controls. Threats include accidental
and deliberate damage to equipment as well as theft. The proliferation of personal computers and
workstations and the widespread use of LANs increase the potential for losses in this area. Theft
of USB drives can lead to loss of confidentiality. Physical and administrative security measures
are needed to deal with these threats.
SOFTWARE
Software includes the operating system, utilities, and application programs. A key threat to
software is an attack on availability. Software, especially application software, is often easy to
delete. Software can also be altered or damaged to render it useless. Careful software
configuration management, which includes making backups of the most recent version of
software, can maintain high availability. A more difficult problem to deal with is software
modification that results in a program that still functions but that behaves differently than before,
which is a threat to integrity/authenticity. Computer viruses and related attacks fall into this
category. A final problem is protection against software piracy. Although certain countermeasures
are available, by and large the problem of unauthorized copying of software has not been solved.
DATA
Hardware and software security are typically concerns of computing center professionals or
individual concerns of personal computer users. A much more widespread problem is data
security, which involves files and other forms of data controlled by individuals, groups, and
business organizations.
Security concerns with respect to data are broad, encompassing availability, secrecy, and
integrity. In the case of availability, the concern is with the destruction of data files, which can
occur either accidentally or maliciously.
The obvious concern with secrecy is the unauthorized reading of data files or databases, and this
area has been the subject of perhaps more research and effort than any other area of computer
security. A less obvious threat to secrecy involves the analysis of data and manifests itself in the
use of so-called statistical databases, which provide summary or aggregate information.
Presumably, the existence of aggregate information does not threaten the privacy of the
individuals involved. However, as the use of statistical databases grows, there is an increasing
potential for disclosure of personal information. In essence, characteristics of constituent
individuals may be identified through careful analysis. For example, if one table records the
aggregate of the incomes of respondents A, B, C, and D and another records the aggregate of
the incomes of A, B, C, D, and E, the difference between the two aggregates would be the
income of E. This problem is exacerbated by the increasing desire to combine data sets. In many
cases, matching several sets of data for consistency at different levels of aggregation requires
access to individual units. Thus, the individual units, which are the subject of privacy concerns,
are available at various stages in the processing of data sets.
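The aggregate-income example above can be sketched in a few lines of Python; all names and figures here are invented for illustration:

```python
# Inference from aggregate statistics: the database answers only "harmless"
# aggregate queries, yet their difference exposes one individual's income.
incomes = {"A": 52_000, "B": 61_000, "C": 48_000, "D": 75_000, "E": 90_000}

# Two aggregate queries a statistical database might legitimately answer:
sum_abcd = sum(incomes[p] for p in ("A", "B", "C", "D"))
sum_abcde = sum(incomes[p] for p in ("A", "B", "C", "D", "E"))

# The difference reveals a value the database never released directly.
inferred_income_e = sum_abcde - sum_abcd
print(inferred_income_e)  # 90000 == incomes["E"]
```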
Finally, data integrity is a major concern in most installations. Modifications to data files can have
consequences ranging from minor to disastrous.
COMMUNICATION LINES AND NETWORKS
Network security attacks can be classified as passive attacks and active attacks. A passive attack
attempts to learn or make use of information from the system, but does not affect system
resources. An active attack attempts to alter system resources or affect their operation.
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal
of the attacker is to obtain information that is being transmitted. Two types of passive attacks are
the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic
mail message, and a transferred file may contain sensitive or confidential information. We would
like to prevent an opponent from learning the contents of these transmissions.
A second type of passive attack, traffic analysis, is more subtle. Suppose we had a way of
masking the contents of messages or other information traffic so opponents, even if they captured
the message, could not extract the information from the message. The common technique for
masking contents is encryption. If we had encryption protection in place, an opponent might still
be able to observe the pattern of these messages. The opponent could determine the location
and identity of communicating hosts and could observe the frequency and length of messages
being exchanged. This information might be useful in guessing the nature of the communication
that was taking place.
Passive attacks are very difficult to detect because they do not involve any alteration of the data.
Typically, the message traffic is sent and received in an apparently normal fashion and neither the
sender nor receiver is aware that a third party has read the messages or observed the traffic
pattern. However, it is feasible to prevent the success of these attacks, usually by means of
encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather than
detection.
Active attacks involve some modification of the data stream or the creation of a false stream,
and can be subdivided into four categories: replay, masquerade, modification of messages, and
denial of service.
Replay involves the passive capture of a data unit and its subsequent retransmission to produce
an unauthorized effect.
A masquerade takes place when one entity pretends to be a different entity. A masquerade
attack usually includes one of the other forms of active attack. For example, authentication
sequences can be captured and replayed after a valid authentication sequence has taken place,
thus enabling an authorized entity with few privileges to obtain extra privileges by impersonating
an entity that has those privileges.
Modification of messages simply means that some portion of a legitimate message is altered, or
that messages are delayed or reordered, to produce an unauthorized effect. For example, a
message stating, “Allow John Smith to read confidential file accounts” is modified to say, “Allow
Fred Brown to read confidential file accounts.”
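A message authentication code (MAC) is the standard countermeasure to this kind of modification; the following sketch, with an invented key and message, shows how the tampered message fails verification:

```python
import hmac
import hashlib

# Sender and receiver share a secret key (value invented for illustration).
key = b"shared-secret-key"
msg = b"Allow John Smith to read confidential file accounts"
tag = hmac.new(key, msg, hashlib.sha256).digest()  # sent alongside the message

# An attacker alters the message in transit but cannot forge a matching tag.
tampered = b"Allow Fred Brown to read confidential file accounts"

# Receiver recomputes the MAC and compares in constant time.
print(hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag))       # True
print(hmac.compare_digest(hmac.new(key, tampered, hashlib.sha256).digest(), tag))  # False
```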
The denial of service prevents or inhibits the normal use or management of communication
facilities. This attack may have a specific target; for example, an entity may suppress all
messages directed to a particular destination (e.g., the security audit service). Another form of
service denial is the disruption of an entire network, either by disabling the network or by
overloading it with messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks
are difficult to detect, measures are available to prevent their success. On the other hand, it is
quite difficult to prevent active attacks absolutely, because to do so would require physical
protection of all communication facilities and paths at all times. Instead, the goal is to detect them
and to recover from any disruption or delays caused by them. Because the detection has a
deterrent effect, it may also contribute to prevention.
1.3 SECURITY FUNCTIONAL REQUIREMENTS
There are a number of ways of classifying and characterizing the countermeasures that may be
used to reduce vulnerabilities and deal with threats to system assets. In this section, we view
countermeasures in terms of functional requirements, and we follow the classification defined in
FIPS 200 (Minimum Security Requirements for Federal Information and Information Systems).
This standard enumerates 17 security-related areas with regard to protecting the confidentiality,
integrity, and availability of information systems and the information processed, stored, and
transmitted by those systems. The areas are defined in Table 1.4.
Table 1.4 Security Requirements
Source: Based on FIPS 200
Access Control: Limit information system access to authorized users, processes acting on behalf of
authorized users, or devices (including other information systems) and to the types of transactions and
functions that authorized users are permitted to exercise.
Awareness and Training: (i) Ensure that managers and users of organizational information systems are
made aware of the security risks associated with their activities and of the applicable laws, regulations, and
policies related to the security of organizational information systems; and (ii) ensure that personnel are
adequately trained to carry out their assigned information security-related duties and responsibilities.
Audit and Accountability: (i) Create, protect, and retain information system audit records to the extent
needed to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or
inappropriate information system activity; and (ii) ensure that the actions of individual information system
users can be uniquely traced to those users so they can be held accountable for their actions.
Certification, Accreditation, and Security Assessments: (i) Periodically assess the security controls in
organizational information systems to determine if the controls are effective in their application; (ii) develop
and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in
organizational information systems; (iii) authorize the operation of organizational information systems and
any associated information system connections; and (iv) monitor information system security controls on an
ongoing basis to ensure the continued effectiveness of the controls.
Configuration Management: (i) Establish and maintain baseline configurations and inventories of
organizational information systems (including hardware, software, firmware, and documentation) throughout
the respective system development life cycles; and (ii) establish and enforce security configuration settings
for information technology products employed in organizational information systems.
Contingency Planning: Establish, maintain, and implement plans for emergency response, backup
operations, and postdisaster recovery for organizational information systems to ensure the availability of
critical information resources and continuity of operations in emergency situations.
Identification and Authentication: Identify information system users, processes acting on behalf of users,
or devices, and authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite
to allowing access to organizational information systems.
Incident Response: (i) Establish an operational incident-handling capability for organizational information
systems that includes adequate preparation, detection, analysis, containment, recovery, and user-response
activities; and (ii) track, document, and report incidents to appropriate organizational officials and/or
authorities.
Maintenance: (i) Perform periodic and timely maintenance on organizational information systems; and
(ii) provide effective controls on the tools, techniques, mechanisms, and personnel used to conduct
information system maintenance.
Media Protection: (i) Protect information system media, both paper and digital; (ii) limit access to
information on information system media to authorized users; and (iii) sanitize or destroy information system
media before disposal or release for reuse.
Physical and Environmental Protection: (i) Limit physical access to information systems, equipment, and
the respective operating environments to authorized individuals; (ii) protect the physical plant and support
infrastructure for information systems; (iii) provide supporting utilities for information systems; (iv) protect
information systems against environmental hazards; and (v) provide appropriate environmental controls in
facilities containing information systems.
Planning: Develop, document, periodically update, and implement security plans for organizational
information systems that describe the security controls in place or planned for the information systems and
the rules of behavior for individuals accessing the information systems.
Personnel Security: (i) Ensure that individuals occupying positions of responsibility within organizations
(including third-party service providers) are trustworthy and meet established security criteria for those
positions; (ii) ensure that organizational information and information systems are protected during and after
personnel actions such as terminations and transfers; and (iii) employ formal sanctions for personnel failing
to comply with organizational security policies and procedures.
Risk Assessment: Periodically assess the risk to organizational operations (including mission, functions,
image, or reputation), organizational assets, and individuals, resulting from the operation of organizational
information systems and the associated processing, storage, or transmission of organizational information.
Systems and Services Acquisition: (i) Allocate sufficient resources to adequately protect organizational
information systems; (ii) employ system development life cycle processes that incorporate information
security considerations; (iii) employ software usage and installation restrictions; and (iv) ensure that
third-party providers employ adequate security measures to protect information, applications, and/or
services outsourced from the organization.
System and Communications Protection: (i) Monitor, control, and protect organizational communications
(i.e., information transmitted or received by organizational information systems) at the external boundaries
and key internal boundaries of the information systems; and (ii) employ architectural designs, software
development techniques, and systems engineering principles that promote effective information security
within organizational information systems.
System and Information Integrity: (i) Identify, report, and correct information and information system flaws
in a timely manner; (ii) provide protection from malicious code at appropriate locations within organizational
information systems; and (iii) monitor information system security alerts and advisories and take appropriate
actions in response.
The requirements listed in FIPS 200 encompass a wide range of countermeasures to security
vulnerabilities and threats. Roughly, we can divide these countermeasures into two categories:
those that require computer security technical measures (covered in Parts One and Two), either
hardware or software, or both; and those that are fundamentally management issues (covered in
Part Three).
Each of the functional areas may involve both computer security technical measures and
management measures. Functional areas that primarily require computer security technical
measures include access control, identification and authentication, system and communication
protection, and system and information integrity. Functional areas that primarily involve
management controls and procedures include awareness and training; audit and accountability;
certification, accreditation, and security assessments; contingency planning; maintenance;
physical and environmental protection; planning; personnel security; risk assessment; and
systems and services acquisition. Functional areas that overlap computer security technical
measures and management controls include configuration management, incident response, and
media protection.
Note the majority of the functional requirements areas in FIPS 200 are either primarily issues of
management or at least have a significant management component, as opposed to purely
software or hardware solutions. This may be new to some readers, and is not reflected in many
of the books on computer and information security. But as one computer security expert
observed, “If you think technology can solve your security problems, then you don’t understand
the problems and you don’t understand the technology” [SCHN00]. This book reflects the need to
combine technical and managerial approaches to achieve effective computer security.
FIPS 200 provides a useful summary of the principal areas of concern, both technical and
managerial, with respect to computer security. This book attempts to cover all of these areas.
1.4 FUNDAMENTAL SECURITY DESIGN PRINCIPLES
Despite years of research and development, it has not been possible to develop security design
and implementation techniques that systematically exclude security flaws and prevent all
unauthorized actions. In the absence of such foolproof techniques, it is useful to have a set of
widely agreed design principles that can guide the development of protection mechanisms. The
National Centers of Academic Excellence in Information Assurance/Cyber Defense program, which
is jointly sponsored by the U.S. National Security Agency and the U.S. Department of Homeland
Security, lists the following as fundamental security design principles [NCAE13]:
Economy of mechanism
Fail-safe defaults
Complete mediation
Open design
Separation of privilege
Least privilege
Least common mechanism
Psychological acceptability
Isolation
Encapsulation
Modularity
Layering
Least astonishment
The first eight listed principles were first proposed in [SALT75] and have withstood the test of
time. In this section, we briefly discuss each principle.
Economy of mechanism means the design of security measures embodied in both hardware
and software should be as simple and small as possible. The motivation for this principle is that
relatively simple, small design is easier to test and verify thoroughly. With a complex design, there
are many more opportunities for an adversary to discover subtle weaknesses to exploit that may
be difficult to spot ahead of time. The more complex the mechanism is, the more likely it is to
possess exploitable flaws. Simple mechanisms tend to have fewer exploitable flaws and require
less maintenance. Furthermore, because configuration management issues are simplified,
updating or replacing a simple mechanism becomes a less intensive process. In practice, this is
perhaps the most difficult principle to honor. There is a constant demand for new features in both
hardware and software, complicating the security design task. The best that can be done is to
keep this principle in mind during system design to try to eliminate unnecessary complexity.
Fail-safe defaults means access decisions should be based on permission rather than exclusion.
That is, the default situation is lack of access, and the protection scheme identifies conditions
under which access is permitted. This approach exhibits a better failure mode than the alternative
approach, where the default is to permit access. A design or implementation mistake in a
mechanism that gives explicit permission tends to fail by refusing permission, a safe situation that
can be quickly detected. On the other hand, a design or implementation mistake in a mechanism
that explicitly excludes access tends to fail by allowing access, a failure that may long go
unnoticed in normal use. For example, most file access systems work on this principle and
virtually all protected services on client/server systems work this way.
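The principle can be sketched as a default-deny permission check; the ACL structure and names below are hypothetical:

```python
# Fail-safe defaults: access is granted only when an explicit permission
# entry exists; every lookup failure falls through to denial.
ACL = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # An unknown user, resource, or action yields the empty set, so a
    # design or implementation mistake here fails by refusing access.
    return action in ACL.get((user, resource), set())

print(is_allowed("bob", "payroll.db", "write"))     # True: explicitly granted
print(is_allowed("mallory", "payroll.db", "read"))  # False: no entry, denied
```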
Complete mediation means every access must be checked against the access control
mechanism. Systems should not rely on access decisions retrieved from a cache. In a system
designed to operate continuously, this principle requires that, if access decisions are remembered
for future use, careful consideration be given to how changes in authority are propagated into
such local memories. File access systems appear to provide an example of a system that
complies with this principle. However, typically, once a user has opened a file, no check is made
to see if permissions have changed. To fully implement complete mediation, every time a user reads a
field or record in a file, or a data item in a database, the system must exercise access control.
This resource-intensive approach is rarely used.
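A minimal sketch of complete mediation, with a hypothetical ACL: the check runs on every read rather than once at open, so a change in authority takes effect immediately:

```python
# Complete mediation: the access control decision is re-evaluated on every
# access, never cached from the initial open.
ACL = {("alice", "report.txt"): {"read"}}

class MediatedResource:
    def __init__(self, name: str, data: str):
        self.name, self.data = name, data

    def read(self, user: str) -> str:
        # Checked on every call; a revoked permission is honored at once.
        if "read" not in ACL.get((user, self.name), set()):
            raise PermissionError(f"{user} may not read {self.name}")
        return self.data

doc = MediatedResource("report.txt", "quarterly figures")
print(doc.read("alice"))                      # allowed: permission present
ACL[("alice", "report.txt")].discard("read")  # authority changes...
# doc.read("alice") would now raise PermissionError: no stale cached decision
```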
Open design means the design of a security mechanism should be open rather than secret. For
example, although encryption keys must be secret, encryption algorithms should be open to public
scrutiny. The algorithms can then be reviewed by many experts, and users can therefore have
high confidence in them. This is the philosophy behind the National Institute of Standards and
Technology (NIST) program of standardizing encryption and hash algorithms, and has led to the
widespread adoption of NIST-approved algorithms.
Separation of privilege is defined in [SALT75] as a practice in which multiple privilege attributes
are required to achieve access to a restricted resource. A good example of this is multifactor user
authentication, which requires the use of multiple techniques, such as a password and a smart
card, to authorize a user. The term is also now applied to any technique in which a program is
divided into parts that are limited to the specific privileges they require in order to perform a
specific task. This is used to mitigate the potential damage of a computer security attack. One
example of this latter interpretation of the principle is removing high privilege operations to
another process and running that process with the higher privileges required to perform its tasks.
Day-to-day interfaces are executed in a lower privileged process.
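The multifactor example can be sketched as two independent checks that must both succeed; the stored hash and one-time code below are invented, and a real system would use salted password hashing and a proper token protocol:

```python
import hmac
import hashlib

# Separation of privilege: access requires two independent attributes, so
# compromising either one alone is not enough.
STORED_PW_HASH = hashlib.sha256(b"correct horse").hexdigest()
CURRENT_OTP = "491287"  # e.g. from a hardware token; value invented

def authenticate(password: str, otp: str) -> bool:
    pw_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_PW_HASH)
    otp_ok = hmac.compare_digest(otp, CURRENT_OTP)
    return pw_ok and otp_ok  # neither factor alone grants access

print(authenticate("correct horse", "491287"))  # True: both factors verify
print(authenticate("correct horse", "000000"))  # False: password alone fails
```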
Least privilege means every process and every user of the system should operate using the
least set of privileges necessary to perform the task. A good example of the use of this principle
is role-based access control, as will be described in Chapter 4. The system security policy can
identify and define the various roles of users or processes. Each role is assigned only those
permissions needed to perform its functions. Each permission specifies a permitted access to a
particular resource (such as read and write access to a specified file or directory, and connect
access to a given host and port). Unless permission is granted explicitly, the user or process
should not be able to access the protected resource. More generally, any access control system
should allow each user only the privileges that are authorized for that user. There is also a
temporal aspect to the least privilege principle. For example, system programs or administrators
who have special privileges should have those privileges only when necessary; when they are
doing ordinary activities the privileges should be withdrawn. Leaving them in place just opens the
door to accidents.
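A minimal role-based sketch of least privilege, with invented roles, users, and resources: each role carries only the permissions its function needs, and anything not granted is denied:

```python
# Least privilege via role-based access control: permissions attach to
# roles, users hold roles, and unlisted combinations are denied by default.
ROLE_PERMS = {
    "auditor": {("read", "audit_log")},
    "operator": {("read", "audit_log"), ("restart", "web_service")},
}
USER_ROLES = {"carol": {"auditor"}, "dave": {"operator"}}

def permitted(user: str, action: str, resource: str) -> bool:
    return any((action, resource) in ROLE_PERMS[role]
               for role in USER_ROLES.get(user, ()))

print(permitted("carol", "read", "audit_log"))       # True: auditors may read
print(permitted("carol", "restart", "web_service"))  # False: beyond her role
```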
Least common mechanism means the design should minimize the functions shared by different
users, providing mutual security. This principle helps reduce the number of unintended
communication paths and reduces the amount of hardware and software on which all users
depend, thus making it easier to verify if there are any undesirable security implications.
Psychological acceptability implies the security mechanisms should not interfere unduly with
the work of users, and at the same time meet the needs of those who authorize access. If
security mechanisms hinder the usability or accessibility of resources, users may opt to turn off
those mechanisms. Where possible, security mechanisms should be transparent to the users of
the system or at most introduce minimal obstruction. In addition to not being intrusive or
burdensome, security procedures must reflect the user’s mental model of protection. If the
protection procedures do not make sense to the user, or if the user must translate his or her
image of protection into a substantially different protocol, the user is likely to make errors.
Isolation is a principle that applies in three contexts. First, public access systems should be
isolated from critical resources (data, processes, etc.) to prevent disclosure or tampering. In
cases where the sensitivity or criticality of the information is high, organizations may want to limit
the number of systems on which that data are stored and isolate them, either physically or
logically. Physical isolation may include ensuring that no physical connection exists between an
organization’s public access information resources and an organization’s critical information.
When implementing logical isolation solutions, layers of security services and mechanisms should
be established between public systems and the secure systems responsible for protecting
critical resources. Second, the processes and files of individual users should be isolated from one
another except where it is explicitly desired. All modern operating systems provide facilities for
such isolation, so individual users have separate, isolated process space, memory space, and file
space, with protections for preventing unauthorized access. And finally, security mechanisms
should be isolated in the sense of preventing access to those mechanisms. For example, logical
access control may provide a means of isolating cryptographic software from other parts of the
host system and for protecting cryptographic software from tampering and the keys from
replacement or disclosure.
Encapsulation can be viewed as a specific form of isolation based on object-oriented
functionality. Protection is provided by encapsulating a collection of procedures and data objects
in a domain of its own so that the internal structure of a data object is accessible only to the
procedures of the protected subsystem and the procedures may be called only at designated
domain entry points.
Modularity in the context of security refers both to the development of security functions as
separate, protected modules, and to the use of a modular architecture for mechanism design and
implementation. With respect to the use of separate security modules, the design goal here is to
provide common security functions and services, such as cryptographic functions, as common
modules. For example, numerous protocols and applications make use of cryptographic functions.
Rather than implementing such functions in each protocol or application, a more secure design is
provided by developing a common cryptographic module that can be invoked by numerous
protocols and applications. The design and implementation effort can then focus on the secure
design and implementation of a single cryptographic module, including mechanisms to protect the
module from tampering. With respect to the use of a modular architecture, each security
mechanism should be able to support migration to new technology or upgrade of new features
without requiring an entire system redesign. The security design should be modular so that
individual parts of the security design can be upgraded without the requirement to modify the
entire system.
Layering refers to the use of multiple, overlapping protection approaches addressing the people,
technology, and operational aspects of information systems. By using multiple, overlapping
protection approaches, the failure or circumvention of any individual protection approach will not
leave the system unprotected. We will see throughout this book that a layering approach is often
used to provide multiple barriers between an adversary and protected information or services.
This technique is often referred to as defense in depth.
Least astonishment means a program or user interface should always respond in the way that
is least likely to astonish the user. For example, the mechanism for authorization should be
transparent enough to a user that the user has a good intuitive understanding of how the security
goals map to the provided security mechanism.
1.5 ATTACK SURFACES AND ATTACK TREES
Section 1.2 provided an overview of the spectrum of security threats and attacks facing computer
and network systems. Section 8.1 will go into more detail about the nature of attacks and the
types of adversaries that present security threats. In this section, we elaborate on two concepts
that are useful in evaluating and classifying threats: attack surfaces and attack trees.
Attack Surfaces
An attack surface consists of the reachable and exploitable vulnerabilities in a system [BELL16,
MANA11, HOWA03]. Examples of attack surfaces are the following:
Open ports on outward facing Web and other servers, and code listening on those ports
Services available on the inside of a firewall
Code that processes incoming data, e-mail, XML, office documents, and industry-specific
custom data exchange formats
Interfaces, SQL, and web forms
An employee with access to sensitive information vulnerable to a social engineering attack
Attack surfaces can be categorized in the following way:
Network attack surface: This category refers to vulnerabilities over an enterprise network,
wide-area network, or the Internet. Included in this category are network protocol
vulnerabilities, such as those used for a denial-of-service attack, disruption of communications
links, and various forms of intruder attacks.
Software attack surface: This refers to vulnerabilities in application, utility, or operating
system code. A particular focus in this category is Web server software.
Human attack surface: This category refers to vulnerabilities created by personnel or
outsiders, such as social engineering, human error, and trusted insiders.
An attack surface analysis is a useful technique for assessing the scale and severity of threats to
a system. A systematic analysis of points of vulnerability makes developers and security analysts
aware of where security mechanisms are required. Once an attack surface is defined, designers
may be able to find ways to make the surface smaller, thus making the task of the adversary
more difficult. The attack surface also provides guidance on setting priorities for testing,
strengthening security measures, or modifying the service or application.
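The prioritization described above can be sketched in code. The following is a minimal illustration, not a method from the text: it inventories hypothetical attack surface items under the three categories defined below (network, software, human), tallies the size of each category, and lists the points of vulnerability in descending order of an assumed severity score, as guidance for testing and hardening priorities. All item names and scores are invented for the example.

```python
# Minimal sketch of an attack surface inventory (illustrative only).
# Each entry: (category, description, assumed severity score 1-5).
from collections import defaultdict

surface = [
    ("network",  "open port 443 on web server", 3),
    ("network",  "SNMP enabled on router",      4),
    ("software", "web server CGI handler",      5),
    ("human",    "staff with DB credentials",   4),
]

# Tally items per category to gauge the scale of each surface.
by_category = defaultdict(int)
for category, item, severity in surface:
    by_category[category] += 1

# Highest-severity points first: candidates for testing and hardening.
for category, item, severity in sorted(surface, key=lambda e: -e[2]):
    print(f"{severity}  [{category}] {item}")
```

Reducing the attack surface then corresponds to deleting entries from such an inventory (closing ports, disabling services), while the severity ordering guides where countermeasures are applied first.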
As illustrated in Figure 1.4, the use of layering, or defense in depth, and attack surface reduction
complement each other in mitigating security risk.
Figure 1.4 Defense in Depth and Attack Surface
Attack Trees
An attack tree is a branching, hierarchical data structure that represents a set of potential
techniques for exploiting security vulnerabilities [MAUW05, MOOR01, SCHN99]. The security
incident that is the goal of the attack is represented as the root node of the tree, and the ways by
which an attacker could reach that goal are iteratively and incrementally represented as branches
and subnodes of the tree. Each subnode defines a subgoal, and each subgoal may have its own
set of further subgoals, and so on. The final nodes on the paths outward from the root, that is, the
leaf nodes, represent different ways to initiate an attack. Each node other than a leaf is either an
AND-node or an OR-node. To achieve the goal represented by an AND-node, the subgoals
represented by all of that node’s subnodes must be achieved; and for an OR-node, at least one of
the subgoals must be achieved. Branches can be labeled with values representing difficulty, cost,
or other attack attributes, so that alternative attacks can be compared.
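The AND/OR semantics just described lend themselves to a simple recursive evaluation. The sketch below is illustrative, not from the text: it assumes each leaf branch carries a hypothetical cost attribute, and computes the cheapest way to reach the root goal by summing costs over AND-nodes (all subgoals required) and taking the minimum over OR-nodes (any one subgoal suffices). Node labels and costs are invented for the example.

```python
# Minimal sketch of an attack tree with AND/OR nodes (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    kind: str = "LEAF"            # "AND", "OR", or "LEAF"
    cost: float = 0.0             # attack attribute on leaf branches
    children: List["Node"] = field(default_factory=list)

def min_attack_cost(node: Node) -> float:
    """Cheapest way to achieve the goal represented by this node."""
    if node.kind == "LEAF":
        return node.cost
    child_costs = [min_attack_cost(c) for c in node.children]
    if node.kind == "AND":        # all subgoals must be achieved
        return sum(child_costs)
    return min(child_costs)       # OR: at least one subgoal suffices

# Root goal: compromise an account, via phishing OR (dropper AND capture).
tree = Node("compromise account", "OR", children=[
    Node("phishing e-mail", cost=30),
    Node("install malware", "AND", children=[
        Node("deliver dropper", cost=25),
        Node("capture credentials", cost=10),
    ]),
])
print(min_attack_cost(tree))      # 30: phishing beats the 35-cost malware path
```

The same traversal works for other branch attributes, such as difficulty or probability of detection, letting an analyst compare alternative attacks quantitatively.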
The motivation for the use of attack trees is to effectively exploit the information available on
attack patterns. Organizations such as CERT publish security advisories that have enabled the
development of a body of knowledge about both general attack strategies and specific attack
patterns. Security analysts can use the attack tree to document security attacks in a structured
form that reveals key vulnerabilities. The attack tree can guide both the design of systems and
applications, and the choice and strength of countermeasures.
Figure 1.5, based on a figure in [DIMI07], is an example of an attack tree analysis for an Internet
banking authentication application. The root of the tree is the objective of the attacker, which is to
compromise a user’s account. The shaded boxes on the tree are the leaf nodes, which represent
events that comprise the attacks. The white boxes are categories that consist of one or more
specific attack events (leaf nodes). Note that in this tree, all the nodes other than leaf nodes are
OR-nodes. The analysis used to generate this tree considered the three components involved in
authentication:
Figure 1.5 An Attack Tree for Internet Banking Authentication
User terminal and user (UT/U): These attacks target the user equipment, including the
tokens that may be involved, such as smartcards or other password generators, as well as the
actions of the user.
Communications channel (CC): This type of attack focuses on communication links.
Internet banking server (IBS): These types of attacks are offline attacks against the servers
that host the Internet ...