SECURE, RESILIENT, AND AGILE
SOFTWARE DEVELOPMENT
Mark S. Merkow, CISSP, CISM, CSSLP
Boca Raton London New York
CRC Press is an imprint of the
Taylor & Francis Group, an informa business
AN AUERBACH BOOK
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2020 by Taylor & Francis Group
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed on acid-free paper
International Standard Book Number-13: 978-0-367-33259-4 (Hardback)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Trademarks Used in This Publication
Adobe® is a registered trademark of Adobe, Inc., in San Jose, CA.
Alert Logic® is a registered trademark of Alert Logic Inc., in Houston, TX.
Amazon Web Services® is a registered trademark of Amazon Technologies, Inc., in Seattle, WA.
AppScan® and IBM® are registered trademarks of International Business Machines Corporation,
in Armonk, NY.
Atlassian® and Jira® are registered trademarks of Atlassian Pty Ltd., Sydney, Australia.
Azure® is a registered trademark of Microsoft Corporation, in Redmond, WA (on hold pending
further action as of 2019/09).
Barracuda® is a registered trademark of Barracuda Networks Inc., in Campbell, CA.
Cigital® is a registered trademark of Synopsys, Inc., in Mountain View, CA.
Citrix® is a registered trademark of Citrix Systems, Inc.
Contrast Security® is a registered trademark of Contrast Security, Inc., in Los Altos, CA.
CSSLP® and (ISC)2® are registered trademarks of International Information Systems Security
Certification Consortium, Inc., in Clearwater, FL.
CVE® is a registered trademark and CWE™ is a trademark of MITRE Corporation, in McLean,
VA.
Dell® and Dell® EMC® are registered trademarks of Dell Inc. or its subsidiaries.
Ethereum® is a registered trademark of Stiftung Ethereum (Foundation Ethereum).
F5 Silverline® is a registered trademark of F5 Networks Inc., in Seattle, WA.
Fortify® is a registered trademark of EntIT Software LLC, in Sunnyvale, CA.
GCP® is a registered trademark and Google™ is a trademark of Google, Inc., in Mountain View,
CA.
ImmuniWeb® is a globally registered trademark owned by High Tech Bridge SA, in Geneva,
Switzerland.
Imperva® is a registered trademark of Imperva Inc. in Redwood City, CA.
ISACA® is a registered trademark of Information Systems Audit and Control Association, Inc.,
in Schaumburg, IL.
IriusRisk® is a registered trademark of Continuum Security, SL, in Spain.
Jama Connect™ is a trademark of Jama Software, in Portland, OR.
Kubernetes® is a registered trademark of The Linux Foundation, in San Francisco, CA.
LinkedIn® is a registered trademark of LinkedIn Corporation, in Sunnyvale, CA.
Microsoft® is a registered trademark of Microsoft Corporation, in Redmond, WA.
NICERC™ is a trademark of National Integrated Cyber Education Research Center, in Bossier
City, LA.
Offensive Security® is a registered trademark of Offensive Security Limited, in George Town,
Grand Cayman.
OWASP is the subject of a trademark application for which a non-final office action has been issued (clarification pending as of 2019/09).
Qualys® is a registered trademark of Qualys Inc., in Foster City, CA.
Radware® is a registered trademark of Radware, in Mahwah, NJ.
ScienceSoft® is a registered trademark of ScienceSoft USA Corporation, in McKinney, TX.
SonarQube™ is a trademark of SonarSource SA, in Switzerland.
Sonatype® is a trademark of Sonatype Inc., in Fulton, MD.
Synopsys® and Synopsys Coverity® are registered trademarks of Synopsys, Inc., in the U.S. and/
or other countries.
ThreatModeler® is a registered trademark of ThreatModeler Software, Inc., in Jersey City, NJ.
Wallarm® is a registered trademark of Wallarm Inc., in San Francisco, CA.
Dedication
This book is dedicated to the next generation of application security
professionals to help alleviate the struggle to reverse the curses
of defective software, no matter where it shows up.
vii
Contents
Dedication
Contents
Preface
About the Author
Chapter 1: Today’s Software Development Practices Shatter Old Security Practices
1.1 Over the Waterfall
1.2 What Is Agile?
1.3 Shift Left!
1.4 Principles First!
1.5 Summary
References
Chapter 2: Deconstructing Agile and Scrum
2.1 The Goals of Agile and Scrum
2.2 Agile/Scrum Terminology
2.3 Agile/Scrum Roles
2.4 Unwinding Sprint Loops
2.5 Development and Operations Teams Get Married
2.6 Summary
References
Chapter 3: Learning Is FUNdamental!
3.1 Education Provides Context and Context Is Key
3.2 Principles for Software Security Education
3.3 Getting People’s Attention
3.4 Awareness versus Education
3.5 Moving into the Education Phase
3.6 Strategies for Rolling Out Training
3.7 Encouraging Training Engagement and Completion
3.8 Measuring Success
3.9 Keeping the Drumbeat Alive
3.10 Create and Mature a Security Champion Network
3.11 A Checklist for Establishing a Software Security Education, Training, and Awareness Program
3.12 Summary
References
Chapter 4: Product Backlog Development—Building Security In
4.1 Chapter Overview
4.2 Functional versus Nonfunctional Requirements
4.3 Testing NFRs
4.4 Families of Nonfunctional Requirements
4.4.1 Availability
4.5 Capacity
4.6 Efficiency
4.7 Interoperability
4.8 Manageability
4.8.1 Cohesion
4.8.2 Coupling
4.9 Maintainability
4.10 Performance
4.11 Portability
4.12 Privacy
4.13 Recoverability
4.14 Reliability
4.15 Scalability
4.16 Security
4.17 Serviceability/Supportability
4.18 Characteristics of Good Requirements
4.19 Eliciting Nonfunctional Requirements
4.20 NFRs as Acceptance Criteria and Definition of Done
4.21 Summary
References
Chapter 5: Secure Design Considerations
5.1 Chapter Overview
5.2 Essential Concepts
5.3 The Security Perimeter
5.4 Attack Surface
5.4.1 Mapping the Attack Surface
5.4.2 Side Channel Attacks
5.5 Application Security and Resilience Principles
5.5.1 Practice 1: Apply Defense in Depth
5.5.2 Practice 2: Use a Positive Security Model
5.5.3 Practice 3: Fail Securely
5.5.4 Practice 4: Run with Least Privilege
5.5.5 Practice 5: Avoid Security by Obscurity
5.5.6 Practice 6: Keep Security Simple
5.5.7 Practice 7: Detect Intrusions
5.5.8 Practice 8: Don’t Trust Infrastructure
5.5.9 Practice 9: Don’t Trust Services
5.5.10 Practice 10: Establish Secure Defaults
5.6 Mapping Best Practices to Nonfunctional Requirements (NFRs) as Acceptance Criteria
5.7 Summary
References
Chapter 6: Security in the Design Sprint
6.1 Chapter Overview
6.2 Design Phase Recommendations
6.3 Modeling Misuse Cases
6.4 Conduct Security Design and Architecture Reviews in Design Sprint
6.5 Perform Threat and Application Risk Modeling
6.5.1 Brainstorming Threats
6.6 Risk Analysis and Assessment
6.6.1 Damage Potential
6.6.2 Reproducibility
6.6.3 Exploitability
6.6.4 Affected Users
6.6.5 Discoverability
6.7 Don’t Forget These Risks!
6.8 Rules of Thumb for Defect Removal or Mitigation
6.9 Further Needs for Information Assurance
6.10 Countering Threats through Proactive Controls
6.11 Architecture and Design Review Checklist
6.12 Summary
References
Chapter 7: Defensive Programming
7.1 Chapter Overview
7.2 The Evolution of Attacks
7.3 Threat and Vulnerability Taxonomies
7.3.1 MITRE’s Common Weaknesses Enumeration (CWE™)
7.3.2 OWASP Top 10—2017
7.4 Failure to Sanitize Inputs is the Scourge of Software Development
7.5 Input Validation and Handling
7.5.1 Client-Side vs. Server-Side Validation
7.5.2 Input Sanitization
7.5.3 Canonicalization
7.6 Common Examples of Attacks Due to Improper Input Handling
7.6.1 Buffer Overflow
7.6.2 OS Commanding
7.7 Best Practices in Validating Input Data
7.7.1 Exact Match Validation
7.7.2 Exact Match Validation Example
7.7.3 Known Good Validation
7.7.4 Known Bad Validation
7.7.5 Handling Bad Input
7.8 OWASP’s Secure Coding Practices
7.9 Summary
References
Chapter 8: Testing Part 1: Static Code Analysis
8.1 Chapter Overview
8.2 Fixing Early versus Fixing Later
8.3 Testing Phases
8.3.1 Unit Testing
8.3.2 Manual Source Code Reviews
8.4 Static Source Code Analysis
8.5 Automated Reviews Compared with Manual Reviews
8.6 Peeking Inside SAST Tools
8.7 SAST Policies
8.8 Using SAST in Development Sprints
8.9 Software Composition Analysis (SCA)
8.10 SAST is NOT for the Faint of Heart!
8.11 Commercial and Free SAST Tools
8.12 Summary
References
Chapter 9: Testing Part 2: Penetration Testing/Dynamic Analysis/IAST/RASP
9.1 Chapter Overview
9.2 Penetration (Pen) Testing
9.3 Open Source Security Testing Methodology Manual (OSSTMM)
9.4 OWASP’s ASVS
9.5 Penetration Testing Tools
9.6 Automated Pen Testing with Black Box Scanners
9.7 Deployment Strategies
9.7.1 Developer Testing
9.7.2 Centralized Quality Assurance Testing
9.8 Gray Box Testing
9.9 Limitations and Constraints of Pen Testing
9.10 Interactive Application Security Testing (IAST)
9.11 Runtime Application Self-Protection (RASP)
9.12 Summary
References
Chapter 10: Securing DevOps
10.1 Overview
10.2 Challenges When Moving to a DevOps World
10.2.1 Changing the Business Culture
10.3 The Three Ways That Make DevOps Work
10.4 The Three Ways Applied to AppSec
10.5 OWASP’s DevSecOps Maturity Model
10.6 OWASP’s DevSecOps Studio
10.7 Summary
References
Chapter 11: Metrics and Models for AppSec Maturity
11.1 Chapter Overview
11.2 Maturity Models for Security and Resilience
11.3 Software Assurance Maturity Model—OpenSAMM
11.3.1 OpenSAMM Business Functions
11.3.2 Core Practice Areas
11.4 Levels of Maturity
11.4.1 Objective
11.4.2 Activities
11.4.3 Results
11.4.4 Success Metrics
11.4.5 Costs
11.4.6 Personnel
11.4.7 Related Levels
11.4.8 Assurance
11.5 Using OpenSAMM to Assess Maturity Levels
11.6 The Building Security In Maturity Model (BSIMM)
11.7 BSIMM Organization
11.8 BSIMM Software Security Framework
11.8.1 Governance
11.8.2 Intelligence
11.8.3 SSDL Touchpoints
11.8.4 Deployment
11.9 BSIMM’s 12 Practice Areas
11.10 Measuring Results with BSIMM
11.11 The BSIMM Community
11.12 Conducting a BSIMM Assessment
11.13 Summary
References
Chapter 12: Frontiers for AppSec
12.1 Internet of Things (IoT)
12.1.1 The Industry Responds
12.1.2 The Government Responds
12.2 Blockchain
12.2.1 Security Risks with Blockchain Implementations
12.2.2 Securing the Chain
12.3 Microservices and APIs
12.4 Containers
12.4.1 Container Security Issues
12.4.2 NIST to the Rescue Again!
12.5 Autonomous Vehicles
12.6 Web Application Firewalls (WAFs)
12.7 Machine Learning/Artificial Intelligence
12.8 Big Data
12.8.1 Vulnerability to Fake Data Generation
12.8.2 Potential Presence of Untrusted Mappers
12.8.3 Lack of Cryptographic Protection
12.8.4 Possibility of Sensitive Information Mining
12.8.5 Problems with Granularity of Access Controls
12.8.6 Data Provenance Difficulties
12.8.7 High Speed of NoSQL Databases’ Evolution and Lack of Security Focus
12.8.8 Absent Security Audits
12.9 Summary
References
Chapter 13: AppSec Is a Marathon—Not a Sprint!
13.1 Hit the Road
13.2 Getting Involved with OWASP
13.3 Certified Secure Software Lifecycle Professional (CSSLP®)
13.3.1 Why Obtain the CSSLP?
13.4 Higher Education
13.5 Conclusion
References
Appendix A: Sample Acceptance Criteria for Security Controls
Appendix B: Resources for AppSec
Training
Cyber Ranges
Requirements Management Tools
Threat Modeling
Static Code Scanners: Open Source
Static Code Scanners: Commercial
Dynamic Code Scanners: Open Source
Dynamic Code Scanners: Commercial
Maturity Models
Software Composition Analysis
IAST Tools
API Security Testing
Runtime Application Self-Protection (RASP)
Web Application Firewalls (WAFs)
Browser-centric Protection
Index
Preface
This book was written from the perspective of someone who began his software
security career in 2005, long before we knew much about it. Making all the
rookie mistakes one tends to make without any useful guidance quickly turns
what’s supposed to be a helpful process into one that creates endless chaos and
lots of angry people. After a few rounds of these rookie mistakes, it finally
dawned on me that we were going about it all wrong. Software security is actually a
human factor issue, not a technical or process issue alone. Throwing technology
into an environment that expects people to deal with it but failing to prepare
them technically and psychologically with the knowledge and skills needed is a
certain recipe for bad results.
Think of this book as a collection of best practices and effective implementation recommendations that are proven to work. I’ve taken the boring details of
software security theory out of the discussion as much as possible to concentrate
on practical applied software security for practical people.
This is as much a book for your personal benefit as it is for your organization’s benefit. Professionals who are skilled in secure and resilient software
development and related tasks are in tremendous and growing demand, and
the market will remain that way for the foreseeable future. As you integrate
these ideas into your daily duties, your value increases to your company, your
management, your community, and your industry.
Secure, Resilient, and Agile Software Development was written with the following people in mind:
• AppSec architects and program managers in information security organizations
• Enterprise architecture teams with application development focus
• Scrum teams
○ Scrum masters
○ Engineers/developers
○ Analysts
○ Architects
○ Testers
• DevOps teams
• Product owners and their management
• Project managers
• Application security auditors
• Agile coaches and trainers
• Instructors and trainers in academia and private organizations
How This Book Is Organized
• Chapter 1 brings the state of software development up to date after the tsunami
of changes that have flipped software development and application security
practices on their head since 2010, when I co-authored Secure and Resilient Software: Requirements, Test Cases, and Testing Methods.
• Chapter 2 takes a detailed look at the Agile and Scrum software development
methodology to explore how security controls need to change in light of an
entirely new paradigm on how software is developed and how software is used.
• Chapter 3 focuses on ways to educate everyone who has a hand in any software
development project with appropriate and practical skills to Build Security In.
We look at ways of influencing development teams to espouse software security
in their day-to-day activities, establishing a role-based curriculum for everyone,
suggestions on how to roll out training, and ways to “keep the drumbeat alive”
all year long through outreach and events.
• Chapter 4 looks at the specification steps for new or altered software, with ways
to incorporate security controls and other nonfunctional requirements (NFRs)
into user stories that bring to life the concepts of “shift left” and Building Security In. This chapter examines 15 families of nonfunctional requirements and 11
families of application security controls.
• Chapter 5 moves into foundational and fundamental principles for secure application design. It covers important concepts, techniques, and design goals to meet
well-understood acceptance criteria on features an application must implement.
• Chapter 6 examines how the design sprint is adapted for proper consideration of
security and other NFRs and ways to conduct threat modeling, application risk
analysis, and practical remediation while the design is still malleable.
• Chapter 7 on defensive programming includes information on the Common
Weaknesses Enumeration (CWE™), the OWASP Top 10 (2017), and some ways
to address the fundamental scourge of application security vulnerabilities—failure
to sanitize inputs.
• Chapter 8 is focused on white box application analysis with sprint-based activities to improve security and quality of an application under development. Static
code analysis is covered in depth for context on what these tools do and the
assumptions under which they operate.
• Chapter 9 looks at black box or gray box analysis techniques and tools for testing
a running version of an application for security or quality shortcomings.
• Chapter 10 is focused on techniques and activities to help transform the DevOps
process into a DevSecOps process with appropriate controls, metrics, and monitoring processes.
• Chapter 11 looks at two popular software maturity and metrics models for helping you determine the effectiveness and maturity of your secure development
program.
• Chapter 12 takes a survey of the frontier in which software use is expanding.
It covers topics including the Internet of Things (IoT), AI, machine learning,
blockchains, microservices, APIs, containers, and more.
• Chapter 13 closes the book with a call to action to help you gain access to education, certification programs, and industry initiatives to which you can contribute.
Each chapter logically builds on prior chapters to help you paint a complete
picture of what’s required for secure, resilient, and Agile application software
as you learn how to implement environment-specific, effective controls and
management processes that will make you the envy of your peers!
About the Author
Mark S. Merkow, CISSP, CISM, CSSLP, works at WageWorks in Tempe,
Arizona, leading application security architecture and engineering efforts in the
office of the CISO. Mark has over 40 years of experience in IT in a variety
of roles, including application development, systems analysis and design, security engineering, and security management. Mark holds a Master of Science
in Decision and Information Systems from Arizona State University (ASU),
a Master of Education in Distance Education from ASU, and a Bachelor of
Science in Computer Information Systems from ASU. In addition to his day
job, Mark engages in a number of extracurricular activities, including consulting, course development, online course instruction, and book writing.
Mark has authored or co-authored 17 books on IT and has been a contributing editor to four others. Mark remains very active in the information security
community, working in a variety of volunteer roles for the Phoenix Chapter
of (ISC)2®, ISACA®, and OWASP. You can find Mark’s LinkedIn® profile at:
linkedin.com/in/markmerkow
Chapter 1
Today’s Software Development Practices Shatter Old Security Practices
In the decade since Secure and Resilient Software: Requirements, Test Cases, and
Testing Methods1 was published, the world of software development has flipped
on its head, shed practices from the past, brought about countless changes,
and revolutionized how software is designed, developed, maintained, operated, and managed.
These changes crept in slowly at first, then gained momentum and have
since overtaken most of what we “know” about software development and the
tried-and-true security methods that we’ve relied on and implemented over the
years. Involvement from application security (appsec) professionals—if it
happened at all—happened WAY too late, after executive decisions had
already been made to supplant old practices and the ink had already dried on contracts with companies hired to make the change.
This late (or nonexistent) involvement in planning how to address security
hobbles appsec practitioners who are forced to bargain, barter, or somehow convince development teams that they simply cannot ignore security. Compound
this problem with the nonstop pace of change, and appsec professionals must
abandon old “ways” and try to adapt controls to a moving target. Furthermore,
the risks with all-new attack surfaces (such as autonomous vehicles), reliance on
the Internet of Things (IoT), and software that comes to life with kinetic activity can place actual human lives in real danger of injury or death.
Although we may have less work on our hands to convince people that
insecure software is a clear and present danger, appsec professionals have to
work much harder to get everyone on board to apply best practices that we are
confident will work.
A decade ago, we were striving to help appsec professionals convince
development organizations to—minimally—address software security in every
phase of development, and for the most part over the decade we saw far
more attention being paid to appsec within the software development lifecycle
(SDLC). Now, however, we’re forced to adapt how we do things to new processes that
may be resistant to any change that slows things down, even as the risks and
impacts of defective software increase exponentially.
Here’s the definition of software resilience that we’ll use throughout the
book. This definition is an adaptation of the National Infrastructure Advisory
Council (NIAC) definition of infrastructure resilience:
Software resilience is the ability to reduce the magnitude and/or duration of
disruptive events. The effectiveness of a resilient application or infrastructure
software depends upon its ability to anticipate, absorb, adapt to, and/or
rapidly recover from a potentially disruptive event.2
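The “anticipate, absorb, adapt to, and/or rapidly recover” qualities in this definition can be made concrete in code. Below is a minimal, illustrative Python sketch (my own example, not from the book) of one common resilience tactic, retry with exponential backoff, which lets a caller absorb a transient disruption and recover quickly:

```python
import time

def call_with_retry(operation, max_attempts=3, base_delay=0.01):
    """Invoke `operation`, absorbing transient failures.

    Retries with exponential backoff so a short-lived disruption
    (timeout, dropped connection) does not become an outage.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # the disruption outlasted our tolerance
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Simulate a service that fails twice, then succeeds.
calls = {"n": 0}

def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient fault")
    return "ok"

print(call_with_retry(flaky_service))  # recovers on the third attempt
```

Real resilient software layers many such tactics (timeouts, circuit breakers, graceful degradation); this sketch only shows the recovery dimension of the definition.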
In this chapter, we’re going to survey this new landscape for these changes to
update our own models on how to adapt to the Brave New World and maintain
software security, resilience, and agility.
1.1 Over the Waterfall
New paradigms have rapidly replaced the Waterfall model of software development that we’ve used since the beginning of the software age. Agile and Scrum
SDLCs have all but displaced Waterfall’s rigorous (and sometimes onerous) activities,
and most certainly displaced the notion of “phase containment,” which appsec
professionals have counted on as a reliable means to prevent defects from creeping into subsequent phases.
This new landscape includes Agile/Scrum, DevOps, continuous integration/
deployment (CI/CD), and the newest revolution working its way in, site reliability engineering (SRE). To adapt to these changes, we need to understand
how the rigor we’ve put into Waterfall-based projects and processes has been
swept away by the tsunami of change that demands more software, faster and
cheaper.
Changes in the software development paradigm force changes in the software security paradigm, which MUST work hand-in-hand with what development teams are expected to do. Whereas we typically had a shot at inspecting
software for security issues at the end of the development cycle (because of
phase containment), this control point no longer exists. The new paradigm we
had to adopt is called “shift left,” preserving the notion that there are still phases
in the SDLC while recognizing the fact that there aren’t.
1.2 What Is Agile?
In essence, Agile and Scrum are based on the philosophy that software takes on
a life of its own, constantly being improved, extended, and enhanced, and these
changes can be delivered in hours, rather than weeks, months, or years.
Let’s take a look at the overall scope of the Agile/Scrum process, as shown
in Figure 1.1. This diagram encapsulates all the processes described by Scrum
and some suggested time frames showing how it compresses time into digestible bites that continue to produce software. Some new roles are also indicated
(e.g., product owner and Scrum master), and teams are composed of ALL the
roles you’d formerly find on separate teams using Waterfall methods. This means
that one team is composed of the roles responsible for analysis, design, coding,
testing, coordination, and ongoing delivery as new features are added, changes
are made, or defects removed. It also means that work is not tossed over the wall
to the next person in line to do “something.” The team is responsible for all the
effort and results.
A minimum viable product (MVP) is the first release of a new software
application that’s considered “bare bones” but has sufficient functionality for
release to the market before the competition releases their own version. While
the actions are not shown in each sprint, they typically follow the same activities you’d find in the Waterfall model, but with more iterations and fewer
phase gates that control when software is tested and released. Software is then
changed regularly and is never really considered “complete.” This creates severe
challenges for appsec.
We’ll examine the Agile/Scrum process in depth in Chapter 2 and look
inside each sprint to see where security controls can work.
1.3 Shift Left!
Shifting left requires that development teams address software security from the
very inception of a project (Build Security In) and in every step along the way
Figure 1.1 Agile/Scrum Framework (Source: Neon Rain Interactive, licensed under CC BY-ND 3.0 NZ)
Today’s Software Development Practices Shatter Old Security Practices 5
to its manifestation. This means that everyone who has a hand in the specification and development of this new software “product” clearly understands their
security obligations and is prepared and able to meet those obligations. Security
teams can no longer “do” security for development teams—the teams must be
responsible and able to prove they’re living up to those expectations. We’ll talk
about how to make this happen with development team awareness, training,
and education in Chapter 3.
Shifting left also requires new approaches to how designers create solutions based
on the requirements and how they vet those solutions for potential security
problems, since they clearly understand that design changes made once an application is developed can cost potentially hundreds of times more than if the defects
were caught while architecture and engineering are underway.
Developers are affected because they’re not given the luxury of time for
extensive testing, as they often had with former practices. Now, developers may
release new code all day and see it deployed within minutes, so it’s vital that
these developers “own” the responsibility for securing it, which means developing
it using a defensive programming state of mind. Shifting left in the development activity involves active use of—and appropriate response to—security
checks built directly into their integrated development environment (IDE)—for
example, Visual Studio or Eclipse. Although these checks are on incomplete segments of an overall application, coding provides the first opportunity for security
inspection and is needed to continue the cycle of appsec.
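As a taste of what a “defensive programming state of mind” means in practice (Chapter 7 treats this in depth), here is a minimal Python sketch, my own illustration rather than the book’s code, of allow-list (“known good”) input validation:

```python
import re

# Known-good pattern: a US ZIP code is exactly 5 digits,
# optionally followed by a hyphen and 4 more digits.
ZIP_PATTERN = re.compile(r"^\d{5}(-\d{4})?$")

def validate_zip(raw: str) -> str:
    """Accept input only if it matches the known-good pattern.

    Anything else is rejected outright; we never try to "clean up"
    hostile input and guess at the sender's intent.
    """
    if not ZIP_PATTERN.fullmatch(raw):
        raise ValueError(f"rejected input: {raw!r}")
    return raw

print(validate_zip("85281"))       # accepted
print(validate_zip("85281-0001"))  # accepted
try:
    validate_zip("85281; DROP TABLE users")  # rejected outright
except ValueError as err:
    print(err)
```

The point is the mindset: define what good input looks like and refuse everything else, rather than enumerating known-bad patterns after the fact.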
Testing presents a major challenge to appsec, because tolerance for long-running
(finished this time) application is needed for comprehensive testing, product
managers won’t wait anymore while security tests are run, and vulnerable applications may be deployed (ship it now—fix it later). Shifting left in this environment forces security testing to happen incrementally, in what we used to call
integration testing—the point in development at which all the elements come
together to build as a new version of the software. If the implementation of
security testing is done correctly and responsively to the needs of the product
managers, it can serve as a control to actually “break” a build and force remediation of defects. We’ll discuss this at length in Chapters 8 and 9 on testing.
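Incremental security testing of this kind typically “breaks” a build through a small gate script in the CI pipeline. The sketch below is a hypothetical Python illustration, not any specific product’s interface; the findings format and the severity policy are assumptions for demonstration:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return True if the build may proceed.

    `findings` is a list of dicts like {"id": ..., "severity": ...},
    as might be parsed from a scanner's JSON report. Any finding at
    or above the `fail_at` severity blocks the build and forces
    remediation before the change can ship.
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return not blocking

# Example report with one finding severe enough to stop the build.
report = [
    {"id": "CWE-79", "severity": "medium"},
    {"id": "CWE-89", "severity": "critical"},
]
print("build passes" if gate(report) else "build broken")
```

Done responsively, a gate like this gives product managers a fast, predictable answer instead of a long-running security test cycle.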
Taken together, shifting left in the appsec space makes it possible to gain the
assurance we need that our applications are appropriately secure, but it changes
the role of appsec professionals from “doing” appsec to needing to empower
everyone who touches software in the SDLC with practical and appropriate
knowledge, skills, and abilities.
Although the names and accelerated pace have significantly changed how
we deal with appsec, the activities of software development, as we understood
them in Waterfall methodologies, are still present. Requirements are still being
gathered, designs are still being built, coders are still coding, testers are still testing, and operators are still deploying and managing applications in production.
We can apply what we know works to help secure applications in development,
but we have to step back and let those who are intimate with the application
do the heavy lifting and prove to us that they’ve done what they needed to do!
At the end of the day, software security is a human factors issue—not a
technical issue—and for appsec professionals to succeed in implementing application controls, it’s vital to treat the human factor in ways we know work, rather
than throwing more tools at the problem.
1.4 Principles First!
Before we dig into the details on how to create and maintain excellence in
application security programs, let’s cover some enduring principles that we need
to live by in everything we do to secure application software and the processes
used to create it:
• Secure application development is a PEOPLE issue, not a technical one.
• Every intervention into the SDLC affects people.
• Shift Left as much of the appsec work within the SDLC as you can—performing security work as close as possible to the point at which defects
are introduced is the surest way of eliminating them and preventing
“defect creep” from one phase or activity to the next.
• AppSec tools are great but are of questionable use if the people using them don't understand:
○ What the tool is telling them
○ Why their code is vulnerable
○ How their code is vulnerable
○ What to do about the vulnerability
• People can only deal with so much change at one time; too many changes all at once to their processes lead to chaos and ultimately rebellion.
• Automate everything that you can (scanning, remediation planning,
retesting, etc.).
• There are only so many of us in information security departments, but thousands of development team staff who need accountability. Don't treat security as a punishment or barrier; convince development team members that it empowers them and makes them more valuable as employees and members of the development community, and they'll quickly learn that it does all these things!
Today’s Software Development Practices Shatter Old Security Practices 7
1.5 Summary
In Chapter 1, we surveyed the modern-day landscape of how software is developed, operated, and managed to understand the impacts these changes have forced on how we design, develop, and implement control mechanisms to assure software security and resilience. We'll begin to explore how appsec professionals can use Agile practices to improve Agile practices with security controls, and how baking in security from the very start is the surest way to gain assurance that your applications can stand up to and recover from chronic attacks.
References
1. Merkow, M. and Raghavan, L. (2011). Secure and Resilient Software: Requirements, Test Cases, and Testing Methods. 1st Ed. Auerbach Publications.
2. Critical Infrastructure Resilience Final Report and Recommendations, National
Infrastructure Advisory Council. Retrieved June 11, 2019, from http://www.dhs.
gov/xlibrary/assets/niac/niac_critical_infrastructure_resilience.pdf
Chapter 2
Deconstructing Agile and Scrum
For purposes of context setting and terminology, we’re going to deconstruct
the Agile/Scrum development methodology to discover areas in which appsec
controls help in securing software in development and also help to control the
development methodology itself. We’ll look at ways to use Agile to secure Agile.
Let’s revisit the overall scope of the Agile/Scrum process, shown in Figure 2.1
(originally Figure 1.1).
There’s Agile/Scrum as a formal, strict, tightly controlled process, then there’s
Agile/Scrum as it’s implemented in the real world. Implementation of Agile will
vary from the fundamentalist and purist views to various elements that appear
as Agile-like processes, and everything in between. It’s less important HOW it’s
implemented in your environment than it is to understand what your specific
implementation means to your appsec efforts.
2.1 The Goals of Agile and Scrum
Agile software development refers to software development lifecycle (SDLC)
methodologies based on the idea of iterative development, in which requirements and solutions evolve through collaboration between self-organizing,
cross-functional teams. Agile development is designed to enable teams to
deliver value faster, with greater quality and predictability and greater abilities
to respond to change.1
Figure 2.1 Agile/Scrum Framework (Source: Neon Rain Interactive, licensed under CC BY-ND 3.0 NZ)
Scrum and Kanban are the dominant implementations of Agile, and Scrum
is the one most often found in software development organizations.
2.2 Agile/Scrum Terminology
Here are some common terms and roles found within Agile SDLCs:
Product—the application under development, enhancement, or replacement.
Product Backlog—the list of features or requirements that the product must
include. These features are prioritized by the product owner for submission to
sprints.
Product Owner—typically someone from a business unit or business area who becomes responsible for the creation of new products through the specifications (user stories) they create and add to the product backlog. Think of the product owner as the sponsor of the work the team performs overall. Often, the Scrum team and the product owner work in entirely different organizations (e.g., a business unit outside of the technology division of the firm).
User Stories—User stories help to shift the focus from writing about requirements to talking about them.2 Stories use non-technical language to provide
context for the development team and their efforts. After reading a user story,
the team knows why they are building what they’re building and what value it
creates.3 User stories are added to sprints and “burned down” over the duration
of the sprint.
Figure 2.2 depicts an example of a typical user story.4
Sprint—a fixed, time-boxed period of time (typically from 2–4 weeks), during
which specific prioritized requirements (user stories) are fed in from the product
backlog for design or development.
Definition of Done (DoD)—each Scrum team has its own DoD: consistent acceptance criteria applied across all user stories. A DoD drives the quality of work and is used to assess when a user story has been completed.
Figure 2.2 A Typical User Story and Its Lifecycle (Used with permission of Seilevel. Source: Stowe, M. Going Agile: The Anatomy of a User Story, at: https://seilevel.com/requirements/the-anatomy-of-a-user-story)
2.3 Agile/Scrum Roles
Scrum team role titles are relevant only in establishing each person's specific expertise; they don't lock those who hold a role into performing only that activity. Teams are self-organizing, so expertise is shared across the team as needed to meet their objectives. The following are the team roles you commonly find in Scrum:
• Scrum Master—the person who serves as conductor and coach to help
team members carry out their duties. Scrum masters are experts on Scrum,
oversee the project throughout, and offer advice and direction. The Scrum
master most often works on one project at a time, provides it their full
attention, and focuses on improving the team’s effectiveness.5
• Analyst roles work with the product owner and Scrum master to develop
and refine the user stories that are submitted for development within a
development sprint.
• Architect roles work on the application architecture and design to meet the requirements described by the user stories. Design work is conducted in a design sprint, sometimes called sprint zero.
• Designers work on the aspects of the product that relate to user interfaces
and user experience with the product. Essentially, designers are translators
for the following aspects of the product6:
○ Translate users’ desires and concerns for product owners.
○ Translate features for users—how the product will actually work and
what it looks like.
○ Translate experiences and interfaces for engineers.
• Engineer/Lead Engineer/Developers work to build (code) the features
for the applications based on user story requirements for each sprint.
• Testers/Quality Assurance (QA) Leads are those who work to determine
if the application being delivered meets the acceptance criteria for the user
stories and help to provide proof for the DoD for those user stories and,
ultimately, the product.
As you’ll see in Chapter 3, each of these roles require specialized application
security training to help them to gain the skills they need for ownership and
responsibility of security for their product.
2.4 Unwinding Sprint Loops
With the basic model and understanding of the roles people play within Scrum,
let’s now take a look at what happens inside each sprint as the cycle of development proceeds. Figure 2.3 expands on the steps inside a sprint loop.7
Under the paradigm of Building Security In, you can find opportunities for
effective security controls throughout the product’s lifecycle.
Figure 2.3 Expanded Activities Inside a Sprint
Beginning with Requirements Refinement for product backlog development, this is the opportunity to specify security functional and nonfunctional requirements (NFRs) as user stories, or as constraints on existing user stories in the form of acceptance criteria that drive the DoD. This approach forces everyone on the team not only to consider security, but also to describe exactly how they plan to meet the control needs. This basic step will drive all follow-on activity to address security requirements and concerns head on through the analysis stage, the design stage, the development stage, the testing phases, and ultimately the acceptance phase prior to deployment of that release.
As constrained user stories enter a sprint, business systems analysts will refine
these into specifications that are suitable for architecture and design work. As
that work progresses and a final draft of a system design is available, the process
of threat modeling and attack surface analysis will help to remove design defects
that could lead to the most expensive and hardest to remediate vulnerabilities.
Performing this work while still in the design sprint enables development teams to discover and fix design flaws, or add controls that might otherwise be missed, and serves as a phase-containment control to prevent defect creep. Threat modeling
and other techniques for risk assessment are covered in Chapter 7.
Once an application design goes into the development activity, developers
can use integrated development environment (IDE)-based security testing tools
that can help them to identify and remove unit-based defects, such as use of
insecure functions, failures to sanitize inputs, etc.
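To make those unit-level defects concrete, here is a small illustration of the kind of finding an IDE-based checker raises: a SQL query built by string concatenation versus its parameterized remediation. This is a sketch only; it uses Python's standard sqlite3 module, and the users table and queries are invented for illustration rather than taken from any particular tool's output.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged by IDE security checkers: untrusted input is concatenated
    # directly into the SQL text, enabling SQL injection.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Remediation: a parameterized query keeps data out of the SQL text,
    # so the input is treated as a value, never as SQL.
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Demonstration with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A classic injection payload leaks every row from the unsafe version...
payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 rows leak
# ...but matches nothing when the query is parameterized.
print(len(find_user_safe(conn, payload)))    # 0 rows
```

The same pattern applies to any data store or language: keeping untrusted input out of the code being executed eliminates the whole class of injection defects, which is exactly the kind of fix these tools nudge developers toward at the point the defect is introduced.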
As the product comes together from the various developers working on
units of it, and these units are collected for the application build, you find the
first opportunity to perform static application security testing (SAST). Scanning can
be set up within sandboxes in the development environment to help the team
eliminate defects that can only be discovered in or after integration steps. Teams
should be encouraged to use the IDE-based checker and the sandbox testing
continuously as the application gains functionality. Open Source components
and libraries used in the application can also be inspected for known vulnerabilities, using software composition analysis (SCA), and updated as needed
with newer versions that patch those issues. Once static code scanning is complete, the latest, clean scan can be promoted as a policy gate scan as proof the
application meets at least one DoD for security acceptance criteria.
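As a rough sketch of what an SCA check does, the following compares a pinned dependency manifest against an advisory list and reports the upgrade that patches each known issue. All package names, versions, and advisory entries here are hypothetical; real SCA tools draw on curated databases of published vulnerabilities (e.g., CVE feeds) rather than a hand-built dictionary.

```python
# Pinned dependencies, as they might appear in a manifest file (invented names)
manifest = {"webframework": "2.1.0", "xml-parser": "1.4.2", "logger": "3.0.1"}

# Hypothetical advisory feed: package -> (vulnerable version, fixed version)
advisories = {
    "xml-parser": ("1.4.2", "1.4.3"),
    "webframework": ("1.9.0", "2.0.0"),
}

def scan_dependencies(manifest, advisories):
    """Return remediation advice for each dependency pinned to a version
    with a known vulnerability."""
    findings = []
    for package, pinned in manifest.items():
        advisory = advisories.get(package)
        if advisory and pinned == advisory[0]:
            findings.append(
                {"package": package, "pinned": pinned, "fixed_in": advisory[1]})
    return findings

findings = scan_dependencies(manifest, advisories)
for f in findings:
    print(f"{f['package']} {f['pinned']} is vulnerable; upgrade to {f['fixed_in']}")
# Only xml-parser is reported: webframework's advisory applies to an older version.
```

A finding list that comes back empty is what gets promoted as the policy-gate evidence described above: proof that the build's components meet at least one security acceptance criterion in the DoD.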
As the methodology describes, this process repeats with each new sprint
until a functionally complete, high-quality application is primed for release
and deployment.
2.5 Development and Operations Teams Get Married
With the successful rise of Scrum and its proven ability to speed up software development, further changes to speed up HOW software is deployed came on the scene with the marriage of development and operations.
In the “old” days, development teams would prepare change requests to
throw the application over the wall for deployment and operations. These
change requests would go through multiple levels of review and approvals,
across multiple departments, to launch their software into production or make
it available to the world. This process alone often took weeks or months.
Now, development teams and operations teams work together as partners
for ongoing operations, enhancements, defect removal, and optimization of
resources as they learn how their product operates in the real world.
As appsec professionals began integrating into planning for CI/CD, DevOps,
and new models for data center operations, DevOps began to transform into
what we’ll call DevSecOps. It’s also referred to as Rugged DevOps, SecDevOps,
and just about any permutation you can think of.
Essentially, DevOps (and DevSecOps) strives to automate as much as possible, leaving time for people to perform quality-related activities that help to
optimize how the application works.
From the time a version of software (a feature branch) is integrated and built (compiled and packaged) for release, automation takes over. This automation is often governed by a gatekeeper function that orchestrates the process, runs suites of tests on the package, and will only allow the application to release if all the gate control requirements have been met. If a test reports an outcome that the gatekeeper's policy treats as a failure, the gatekeeper function can stop, or break, the build, necessitating attention and remediation from the product team. Testing automation might include using a broad set of testing tools that perform a wide variety of tests, such as functional tests, code quality and reliability tests, and technical debt checks. This is also an opportunity to include security-related tests, but that testing in the continuous integration/continuous deployment (CI/CD) pipeline must complete in a matter of seconds (or at worst, a few minutes); otherwise it won't be included as a gate for gatekeeper purposes, or worse, may not be run at all. Five minutes is a good rule of thumb for the maximum extra time you may be allotted to test in the CI/CD pipeline. This specific constraint on testing is a primary driver of the shift left paradigm for adapting security controls within the SDLC. Figure 2.4 is a simple depiction of how Agile and DevOps work in unison.8
Figure 2.4 Agile and DevOps
Figure 2.5 DevSecOps Cycle (Used with permission of L. Maccherone, Jr. Source: https://twitter.com/lmaccherone/status/843647960797888512)
Figure 2.5 shows what the marriage of Dev and Ops teams looks like when
comprehensive security controls transform DevOps into DevSecOps.9
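The gatekeeper behavior described above can be sketched in a few lines. This is an illustrative model only, not any CI/CD product's API: each gate is a named check sharing a time budget, and the build "breaks" when a gate fails its policy or the security tests overrun their roughly five-minute allowance.

```python
import time

FIVE_MINUTES = 300  # seconds of extra time budgeted for security tests

def run_gates(gates, budget=FIVE_MINUTES):
    """Run each (name, check) pair; return (passed, reasons for breaking)."""
    reasons = []
    start = time.monotonic()
    for name, check in gates:
        if not check():
            reasons.append(f"{name}: failed policy")
        if time.monotonic() - start > budget:
            reasons.append(f"{name}: exceeded {budget}s time budget")
            break  # stop the pipeline rather than slow every release
    return (not reasons, reasons)

# Illustrative gates: a clean SAST scan promoted as a policy gate, plus an
# SCA check for vulnerable components (both stubbed out here).
gates = [
    ("sast-policy-scan", lambda: True),   # latest scan was clean
    ("sca-known-vulns", lambda: False),   # a vulnerable library was found
]
passed, reasons = run_gates(gates)
print(passed)   # False -> the gatekeeper breaks the build
print(reasons)  # ['sca-known-vulns: failed policy']
```

In a real pipeline the checks would invoke scanners and test suites, and the gate policy would live in versioned configuration, but the principle is the same: the release proceeds only when every gate control requirement is met within the time the pipeline can afford.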
Throughout the rest of the book, we'll look at how these controls can be implemented in your own environment to operate seamlessly with your existing practices.
2.6 Summary
In Chapter 2, we took a deeper dive into the new and improved software development world to see what's changed and what's stayed the same as we explored areas of opportunity to effectively implement security controls and practices. We examined the overall Agile/Scrum SDLC, roles, activities, and responsibilities. Next, we saw how the marriage of development and operations teams provides opportunities for appsec professionals to "ruggedize" how applications are managed and operated to yield high quality and resilience every time.
References
1. Trapani, K. (2018, May 22). What Is AGILE? - What Is SCRUM? - Agile FAQ’s.
Retrieved from https://www.cprime.com/resources/what-is-agile-what-is-scrum/
2. Cohn, M. (n.d.). User Stories and User Story Examples by Mike Cohn. Retrieved
from https://www.mountaingoatsoftware.com/agile/user-stories
3. Atlassian. (n.d.). User Stories. Retrieved from https://www.atlassian.com/agile/
project-management/user-stories
4. User Story. (n.d.). Retrieved from https://milanote.com/templates/user-storytemplate
5. Understanding Scrum Methodology—A Guide. (2018, January 11). Retrieved
from https://www.projectmanager.com/blog/scrum-methodology
6. Tan Yun (Tracy). (2018, July 3). Product Designers in Scrum Teams, Part 1. Retrieved from https://uxdesign.cc/design-process-in-a-scrum-team-part-1-d5b356559d0b
7. A Project Management Methodology for Agile Scrum Software Development.
(2017, October 31). Retrieved from https://www.qat.com/project-managementmethodology-agile-scrum/
8. Agile vs DevOps: Demystifying DevOps. (2012, August 3). Retrieved from
http://www.agilebuddha.com/agile/demystifying-devops/
9. Maccherone, L. (2017, March 19). DevSecOps Cycle [Diagram]. Retrieved from
https://twitter.com/lmaccherone/status/843647960797888512
Chapter 3
Learning Is FUNdamental!
As it turns out, throwing technology at defective software is likely the worst
way to address appsec and ignores the basic tenet—software security is a human
factors issue, not a technical issue. Tools are seductive with their coolness factor, ease of acquisition and use, and quick results that tell you that you do, in fact, have an issue with software security. Taking tools to the next step is where things quickly fall apart.
Suddenly, development teams are bombarded by reams of proof that their
software is defective, and with finger-pointing from security teams, they’re left
in a state of upset and overall chaos. Furthermore, these development team
members often don’t understand what this proof is telling them and are completely unprepared to address these defects in any meaningful way.
Agile leads to an environment in which the incentives for developing new applications reward delivering software quickly and as inexpensively as possible. Goodness or quality (or resilience or security) is not directly rewarded, and development teams often aren't given the extra time required to address it.
Making matters worse, traditional college and private education that prepares programmers and other IT roles for new technologies, new languages, and new platforms doesn't arm students with the skills they need to meet the demands of organizations that require resilient, high-quality applications constructed quickly at acceptable cost. Many development team members enter the workforce never having heard the term nonfunctional requirement.
Each organization then finds it owns the responsibility to break old bad habits, instill new good habits, and educate the workforce adequately to fill these gaps. To start the process, awareness of software security as an institutional issue is needed to set the stage for everything that follows. Awareness drives interest and curiosity and places people on the path to wanting to learn more. This awareness greases the skids that enable smooth engagement in software security education and ongoing involvement in appsec-related activities that "keep the drumbeat" alive throughout the year.
In this chapter, we're going to explore ways to bootstrap an awareness program that leads development team members into role-specific training to gain the right knowledge, skills, and abilities to succeed as defensive development teams.
3.1 Education Provides Context and Context Is Key
Without proper context, any mandates for high-quality, secure applications
won’t get communicated effectively to those who need to know. It’s of little use
to run around shouting that applications are vulnerable to cross-site scripting,
SQL injection, buffer overruns, and so forth, if the people you’re screaming at
have little clue as to what they’re hearing and even fewer skills or know-how to
do something about it. Even though prevention is always better than remediation and rework, programmers typically learn that their applications are insecure long after they've released them to production and to the malicious users who are pervasive throughout the Internet.
Software security is a special-topic area within an overall practice of security education, training, and awareness (SETA) programs, in which various levels of
awareness and training are needed to get through to the right people in their
various roles within the software development lifecycle (SDLC). An effective
program for building the right level of detail for each group of stakeholders uses
a layering approach that builds on foundational concepts that are relevant and
timely for each role in each phase.
3.2 Principles for Software Security Education
Here are some basic principles for what should be included or addressed when setting up an appsec awareness and education program:
• Executive management sets the mandate. With management mandates for secure application development that are widely communicated,
you’re given the appropriate license to drive a program from inception forward to continuous improvement. You’ll need this executive support for
establishing a program, acquiring an adequate budget and staff, and keeping the program going in the face of setbacks or delays.
• Awareness and training must be rooted in company goals, policies, and standards for software security. Establishing, then using, documented organizational goals, policies, and controls for secure application development as the basis for your awareness and training program creates a strong connection to developer actions that lead to compliance and "Defense in Depth" brought to life.
• Learning media must be flexible and tailored to the specific roles within your SDLC. Not everyone can attend an in-person instructor-led course, so alternatives should be provided, such as computer-based training, recorded live presentations, and so forth.
• Learning should happen as close as possible to the point where it's needed. A lengthy course that covers a laundry list of problems and solutions won't be useful when a specific issue crops up and the learner can't readily access whatever was mentioned related to the issue.
• Learning and practicing go hand in hand. The more people personally experience the "how to" of new skills, the better the questions they ask, and the quicker the knowledge becomes regular practice.
• Use examples from your own environment. The best examples of security problems come from your own applications. When people see issues with code and systems they're already familiar with, the consequences of exploiting the code's vulnerabilities hit close to home and become more real and less theoretical. Furthermore, demonstrating where these examples stray from internal standards for secure software helps people make the connection between what they should be doing and what they've been doing.
• Add learning milestones into your training and education program. People are less motivated to learn and retain discrete topics and information if learning is treated as a "check box" activity. People want milestones in their training efforts that show progress and help them gain recognition. As you prepare a learning curriculum for your various development roles, build in a way to recognize people as they successfully advance through the courses, and make sure everyone knows about it.
• Make your program relevant to your company culture. Find icons or internally well-known symbols in your organization that resonate with employees and incorporate them in your program, or build your program around them.
• BOLO. Be On the Look Out for people who participate in your awareness and training program who seem more enthusiastic or engaged than others. These people are your candidates for becoming internal application security evangelists or application security champions. People love thought leaders, especially when they're local, and you can harness their enthusiasm and interest to help you advance your program and your cause.
3.3 Getting People’s Attention
When we’re honest with ourselves, we know that software security is not the
most exciting or engaging topic around. Apathy is rampant, and too many conflicting messages from prior attempts at software security awareness often cause
people’s eyes to glaze over, which leads to even further apathy and disconnection from the conversation.
Peter Sandman, who operates a risk communication practice, has identified a strategy for communication that’s most appropriate for software security
awareness, as well as other issues where apathy reigns but the hazards are serious (e.g., radon poisoning, employee safety). The strategy, called Precaution
Advocacy,1 is geared toward motivating people by overcoming boredom with the
topic. Precaution Advocacy is used on high-hazard, low-outrage situations in
Sandman’s Outrage Management Model. The advocacy approach arouses some
healthy outrage and uses this attention to mobilize people to take, or demand,
precautions.
Software security is a perfect fit for this approach: apathy and disinformation are difficult to overcome, yet only people can address and solve the underlying issues.
Precaution Advocacy suggests four ways to get people to listen, and then learn:
1. Learning without involvement—The average television viewer pays little attention to the commercials, but nevertheless knows dozens of advertising jingles by heart. Repetition is the key here. Posters, closed-circuit
TV segments, mouse pads, elevator wraps, etc., are some useful examples.
2. Make your campaign interesting/entertaining—If you can arouse
people’s interest or entertain them, you’ll get their attention, and eventually you won’t need so much repetition. It’s best if you can make your
awareness efforts impart interesting or entertaining messages often and
liberally.
3. Need to know—Whetting people's appetite to learn encourages the learner to seek information, and it's easy to deliver information to people who are actively seeking it. Sandman1 advises developers of awareness programs to focus less on delivering the information and more on motivating their audience to want to receive it. Empowering people helps you educate them. The more people understand that insecure software is a software engineering, human-based problem (not a network security problem), the more they'll want to learn how best to prevent these problems. Making software security a personal issue for those who can effect improvements, then giving them the tools and skills to use, will make them more valuable team members and lead to better secured application software.
4. Ammunition—Psychologist Leon Festinger’s “theory of cognitive dissonance”2 argues that a great deal of learning is motivated by the search
for ammunition to reduce the discomfort (or “dissonance”) that people
feel when they have done something or decided something they’re not
confident is wise. Overcoming cognitive dissonance is a vital step early in
your awareness program, so people experience your information as supportive of their new behavior, rather than as hostile to their old behavior.
People also need ammunition in their arguments with others. If others already believe that software security is a hazardous, organization-wide issue (with no cognitive dissonance), they won't need to pay so much attention to the arguments for addressing it.
The last thing you want to do is frighten people or lead them to believe the
sky is falling, but you do want to motivate them into changing their behavior
in positive ways that improve software security and contribute to the organization’s goals and success. As your program progresses, metrics can show how
improvements in one area lead to reduced costs in other areas, simpler and
less frequent bug fixing, improved pride of code ownership, and eventually
best practices and reusable code components that are widely shared within the
development community.
3.4 Awareness versus Education
Security awareness is the effective sharing of knowledge about potential and
actual threats and an ability to anticipate and communicate what types of security threats developers face day after day. Security awareness and security training are designed to modify any employee behavior that endangers or threatens
the security of the organization’s information systems and data.
Beginning with an awareness campaign that’s culturally sensitive, interesting, entertaining, memorable, and engaging gives you the head start you need
to effect positive changes.
Awareness needs to reach everyone who touches software development in
your organization—from requirements analysts to post-production support
personnel. As you engage your workforce, be sure to keep the material fresh and
in step with what’s going on inside your organization. Provide employees with
the information they need to engage in the follow-on steps of training and education and make those steps easy to complete and highly visible to anyone who’s
looking. Awareness needs to begin with an assumption of zero knowledge; don’t
assume your developers understand application security concepts, principles,
and best practices—lay them out so they’re easy to find and easy to assimilate.
Define common terms (e.g., threat, exploit, defect, vulnerability) so everyone
understands them the same way, reducing confusion.
As your awareness efforts take hold and people come to realize how their
approach to software development affects the security of applications—and they
begin asking the right questions—they’ll be prepared to “do” something about
security, and that’s where education programs can take root. The BITS Software
Security Framework3 notes that, “[an] education and training program in a
mature software security program represents the ‘lubricant’ to behavior change
in developers and as a result, is an essential ingredient in the change process.”
3.5 Moving into the Education Phase
While awareness efforts are never really considered “done,” they can be progressively more detailed, preparing people for an education regimen that’s tailored
to their role.
People will naturally fall into one of several specific roles, and each role has
specific needs for specific information. Don’t invent new roles for development
team members. Use the roles that are in place and hitch your wagon to the
internal efforts to roll out and support Agile. All the roles on the Scrum team
should be addressed with role-based appsec training:
• Architects and leads
○ Secure code starts with secure requirements and design.
○ Secure code does not equal secure applications.
○ Security is required through all phases of the process.
○ Gain skills in threat modeling and attack surface analysis.
• Developers/engineers/lead engineers
○ Match training to level of awareness and technologies used.
• Testers
○ Someone must verify the security of the end product.
○ Testers can vary in capability and may need hand-holding while they
gain confidence in using security scanners and penetration testing tools.
• Information security personnel
○ They know security, but they don't necessarily know about application development or application-specific security concerns.
• Management, including Scrum masters
○ Project and program management, line management and upper
management.
○ Basics of application security risk assessments and risk management concepts and strategies.
○ Need to understand specific risks so they budget the time and resources
to address them.
Bundles or collections of courses can be assembled to address basic or baseline, introductory, intermediate, and advanced or expert education. Figure 3.1 is one example of how courses might be bundled to address the various roles and levels of learning.
3.6 Strategies for Rolling Out Training
Here are a few suggested approaches for rolling out your training program:
• Everybody gets everything.
○ Broadly deploy training level by level, team by team.
• Core training plus security specialists.
○ Specialists by functional groups, skills, or projects.
• Base training plus candidates for “software security champions or
evangelists.”
○ Less training for all, but a few go-to people embedded in groups or projects.
○ “Train the Trainer” approach.
○ Multi-level support for developers with base training.
• Start slow.
○ Roll out to test group or organization.
○ Mix and match models and test.
Selecting one of these strategies, or a hybrid, will depend on several factors that are specific to your organization. These factors include the geographical dispersion of your development teams, separation or concentration of groups who are responsible for mission-critical applications, existing infrastructures for educating employees, the number of people available to conduct training, the number of people needing training, etc. Learning programs come in
all shapes and sizes. Some programs are suited to in-person training, others to
online, computer-based training (CBT), or hybrids of the two. Gamification of
learning has entered the field and includes the use of cyber ranges (discussed
later) and computer-based learning delivered in a game-like environment.
Figure 3.1 Bundles of Courses Stratified by Role in the SDLC (© Security Innovation, Inc. Used with permission. Source: Whitepaper: Rolling Out an Effective Application Security Training Program.4)
3.7 Encouraging Training Engagement and Completion
Team members are under very tight time pressure to produce and have little time for “extra” training in software development security. Assigning training to people who are already strapped for time often provokes push-back from team members and their management. On the other hand, some managers stand out for how quickly their teams complete the training (within 3–4 months of its being assigned).
What some of these managers do is set aside some time in the Agile process
itself to run a “training sprint,” in which the 2–4 weeks of time allotted for a
sprint is used for everyone to complete their training. Other managers set aside
a work day and take their staff off-site for a training day, bring in lunch, and
make it easier for team members to concentrate on training, and not the issue of
the moment. You can even turn the effort into a type of friendly competition,
using learning modules based on gamification of training.
3.8 Measuring Success
The OWASP OpenSAMM Maturity Model, discussed at length in Chapter 11—
Metrics and Models for AppSec Maturity—describes a Level II of program
maturity that supports a role-based education curriculum for development staff,
under the Education and Guidance domain of OpenSAMM5:
Conduct security training for staff that highlights application security in the
context of each role’s job function. Generally, this can be accomplished via
instructor-led training in 1–2 days or via computer-based training with
modules taking about the same amount of time per person. For managers
and requirements specifiers, course content should feature security requirements planning, vulnerability and incident management, threat modeling, and misuse/abuse case design. Tester and auditor training should focus
on training staff to understand and more effectively analyze software for
security-relevant issues. As such, it should feature techniques for code review,
architecture and design analysis, runtime analysis, and effective security test
planning. Expand technical training targeting developers and architects to
include other relevant topics such as security design patterns, tool-specific
training, threat modeling, and software assessment techniques. To roll out such training, it is recommended to mandate annual security awareness training and periodic specialized topics training. Courses should be available (either instructor-led or computer-based) as often as required based on head-count per role.
30 Secure, Resilient, and Agile Software Development
At Level III maturity of Education and Guidance in OpenSAMM, the notion
of certification for development roles appears. These types of certifications
may be available internally through a custom-developed certification process,
or in the marketplace through such programs as ISC2’s CSSLP6 and/or SANS’
GIAC7 programs.
3.9 Keeping the Drumbeat Alive
Awareness efforts don’t end as training efforts kick in—new people are hired all
the time, team members change roles or teams, new technology is introduced,
or strategic initiatives force all teams to re-examine how their applications are
constructed, deployed, and operated.
Keeping the appsec program on the radar screen, amid multiple competing messages vying for the same attention, is vital if development teams are to authentically take ownership of and personal responsibility for their application’s security.
Some ideas to keep the message alive include:
• Brown bag sessions on special topics of interest to a general audience of
IT professionals. These topics should be at a sufficiently high level for
engagement by those outside of software development: topics such as
securing IoT for an organization that uses or produces them, trends in
software development and software security, metrics from internal efforts,
and invited guest speakers from the industry or the products you use
to help with appsec. These brown bag sessions can be set up and conducted easily with desktop tools that are already in place (WebEx, Adobe
Connect, etc.)
• Newsletters on specific topics in appsec for the development community
are a great way to share information about new enhancements to appsec
efforts, featured people (discussed later in this chapter), new tools coming on the scene, changes in security standards related to development,
and even “Find the Bug” or “Find the Design Defect” challenges to help
encourage applied application security.
• Cyber ranges are collections of resources that permit a participant to
explore them on their own, use them in a competition, as a learning tool,
or as a supplement to another form of training. A typical cyber range presents the user with a deliberately vulnerable application that “knows” when
a vulnerability is exploited and awards points to the user to help them
keep track of progress. Each user has their own version of the vulnerable
application, so it differs from a hackathon in which participants attack or
defend a single system. You can run cyber range activities in an in-person
setting, virtually for a fixed period of time, or some combination of the
two. Cyber ranges are often available as a cloud offering, so you don’t need
to build the infrastructure to operate and manage it. You can find a list of
cyber range resources at the end of this book.
• Team- or unit-based security working groups. As the maturity of your
program increases, and as team members take a deeper interest in their
specific product or collection of products, you may be able to encourage
the teams to form and operate a special-interest security working group for
information and best practices sharing.
3.10 Create and Mature a Security Champion Network
Ideally, every Agile team will have its own security champion. These people act
as liaisons to the appsec team in the security department, possess an intimate
knowledge of the product they support, and demonstrate an interest in appsec.
These champions will be the ones to help you promote and propagate changes
or new activities that contribute to appsec.
Now and then, these people have questions that affect everyone who works on products in IT. Creating a community for your security champions is an important step you can take to maintain the program’s growth and adoption. Bring this community together as often as is reasonable to help them help themselves. Provide a forum or office hours for anyone who reaches out
for help, and encourage your security champions to engage in advanced learning topics and help them if they elect to pursue certifications in IT security
or appsec.
Aside from formal activities and media that you provide for your AppSec
SETA Program, recognizing individuals for significant contributions or advancement of appsec efforts is another powerful tool for encouraging further engagement. Feature them and their effort in your newsletter or during a brown bag,
and you’ll gain a supporter of your program for life!
3.11 A Checklist for Establishing a Software Security
Education, Training, and Awareness Program
The following checklist (see Table 3.1) is offered to help remind you of key
principles and practices that we’re certain will work. Consider these elements as
you’re formulating your overall customized appsec SETA program.
Table 3.1 Checklist for Education Program Success
Requirement for Program Success
Executive management establishes the mandate for software security and budgets
the time, expense, and delegation of authority to improve software security.
Company goals, policies, standards, and controls are in place for software security
throughout the SDLC.
Learning media is geared to your audience based on their availability, geographic
dispersion, access to materials (intranet based vs. Internet based), language considerations, and sensitivity to the time zones where personnel are located.
Reference tools are readily available to developers and are usable for just-in-time
access for solving specific software security issues.
Examples of high-quality and secure source code are available to show developers
what needs to be accomplished and why.
Code examples come from familiar internal sources.
Courses are stratified by well-defined roles in the SDLC.
Progress of courses and completion of course bundles include reward and recognition
steps that further motivate learners.
A metrics program has been established to show trends over time and help to identify
components that are working as planned vs. those that need intervention or changes.
Program maturity is measurable and is used consistently.
3.12 Summary
AppSec SETA programs address an all-encompassing and difficult problem; building an effective program requires dedication, effort, patience, and time. Awareness and education are vital for success and require a
many-hats approach that includes psychology, creativity, engaging materials,
formal structures for learners to navigate, and a solid rooting in how people
learn and apply new skills in their jobs. As you apply these concepts and plan
activities, events, and media for your program’s ongoing communications, you
will be well on the way to building the best program possible for yourself, your
development teams, and your organization.
References
1. Snow, E. (n.d.). Motivating Attention: Why People Learn about Risk . . . or
Anything Else (Peter Sandman article). SnowTao Editing Services. Retrieved
from http://www.psandman.com/col/attention.htm
2. Cognitive Dissonance. (2007, February 5). Retrieved from https://www.simply
psychology.org/cognitive-dissonance.html
3. BITS Software Security Framework. (2012). Retrieved from BITS website: http://
www.bits.org/publications/security/BITSSoftwareAssurance0112.pdf
4. Security Innovation. (n.d.). Rolling Out an Effective Application Security Training Program. Retrieved from https://web.securityinnovation.com/rolling-out-an-effective-application-security-training-program/thank-you?submissionGuid=
8c214b9b-e3fe-4bdb-8c86-c542d4cf1529
5. SAMM—Education & Guidance—2. (n.d.). Retrieved from https://www.owasp.
org/index.php/SAMM_-_Education_&_Guidance_-_2
6. Software Security Certification | CSSLP—Certified Secure Software Lifecycle
Professional | (ISC)². (n.d.). Retrieved from https://www.isc2.org/csslp/default.aspx
7. GIAC Security Certifications | Software Security Certifications. (n.d.). Retrieved
from http://www.giac.org/certifications/software-security
Chapter 4
Product Backlog Development—Building Security In
Chapter 1 defines software resilience as the ability to reduce the magnitude and/or
duration of disruptive events. The effectiveness of a resilient application or infrastructure software depends on its ability to anticipate, absorb, adapt to, and/or
recover rapidly from a potentially disruptive event.
4.1 Chapter Overview
Chapter 4 shifts the focus to the beginning steps for product development,
in which features are selected and written up as user stories and added to the
product backlog. We’ll first examine the classes and families for constraints
on the product that need to be specified for purposes of resilience. We’ll then
look for ways to apply these constraints as acceptance criteria and Definition of
Done attainment.
With a clear understanding of the nonfunctional requirements that constrain how a user story feature will be designed, developed, and tested, all of
those on the Scrum team are working from the same playbook. By specifying
these constraints up front, you’ve added key ingredients to the product development lifecycle that not only Build Security In, but enable various other desirable
aspects, such as scalability, portability, reliability, and so on. Because one of the Agile goals for user stories is a change from specifying what needs to be present to talking about what needs to be present, you can neatly include elements of performance, reliability, uptime, security, and so forth.
We’ll examine 15 categories of nonfunctional requirements to help you to
decide which characteristics are essential or desirable as you discuss user stories.
From there we’ll look at some concrete examples on how to use nonfunctional
requirements (NFRs) as acceptance criteria and Definition of Done.
4.2 Functional versus Nonfunctional Requirements
Software is useful for what it does. People purchase software because it fulfills
their need to perform some function. These functions (or features) can be as
simple as allowing a user to type a letter or as complex as calculating the fuel
consumption for a rocket trip to the moon. Functions and features are the reasons people purchase or pay for the development of software, and it’s in these
terms that people think about software.
What software is expected to “do” is described by a product owner as user
stories, or requirements in the old vernacular. These requirements show up on
the product backlog as they’re collected and prioritized for development.
NFRs are the quality, security, and resiliency aspects of software that only
show up in software specifications when they’re deliberately added. These
requirements come out when the major set of stakeholders who meet to discuss the planned product gets expanded beyond the people who will use it and
includes the people who will:
• Operate it
• Maintain it
• Oversee the governance of the software development life cycle
• Serve as security professionals
• Represent legal and regulatory compliance groups who have a stake in assuring that the software is in compliance with local, state, and federal laws.
Although functional requirements state what the system must do, NFRs
constrain how the system must accomplish the what.
In commercial software, you don’t see these features or aspects of software
advertised or even discussed on the package or in marketing literature for the
software. Developers won’t state that their program is more secure than their
competitor’s products, nor do they tell you much about the environment under
which the software was developed. As purchasers of software, we don’t tend to
Product Backlog Development—Building Security In 37
ask for the characteristics related to uptime, reliability, accuracy, or speed. We
simply assume those characteristics are present. But providing these features is
not free, cheap, or automatic. Someone has to build these in from the moment a
user story is written!
Figure 4.1 illustrates what happens when requirements are ill-understood,
poorly documented, or just assumed by development and support teams.
Although this comic has been around for four decades, it’s as relevant today as
when it first came out.
4.3 Testing NFRs
Once software is developed, testing begins with making sure it meets its functional requirements: Does it do what the user stories specify, in the way they specify it? Tests are developed for each
use case or scenario described by the users, and if the software behaves as the
acceptance criteria indicates it should, it’s passed on for user acceptance testing.
Software testing that focuses only on functionality testing for user acceptance can uncover errors (defects or bugs or flaws) in how the software operates.
If the system responds to input in the ways the users expect it to respond, it’s
stamped as ready to ship. If the system responds differently, the bugs are worked
out in successive remediation and retesting sprints until it behaves as desired.
Testing for resilience in software is a whole other ballgame. Developers
cannot test their own programs for anything more than determining whether
a function works. Developers rarely test their programs for security flaws or
stress the software to the point where its limitations are exposed or it fails to
continue operating.
Resilience and security testing flip the problem of user acceptance testing on its head. Resilience tests not only verify that the functions designed to meet a nonfunctional requirement or service (e.g., security functions) operate as expected, they also validate that the implementation of those functions is not flawed or haphazard.
This kind of testing can only be performed effectively by experts and special-purpose tools. Over time, you can train QA testers on Scrum teams to run
and consume the results of these tools to help enable the team’s security selfsufficiency, or these tools can be automatically run earlier in a sprint to help
prevent security defects from reaching QA testing.
Gaining confidence that a system does not do what it’s not supposed to do is
akin to proving a negative, and everyone knows that you can’t prove a negative.
What you can do, however, is subject a system to brutal types of testing, and
with each resisted attack, gain increasing confidence that it was developed with a secure and resilient mindset from the very beginning.
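As an illustration of what “testing that the system does not do what it’s not supposed to do” can look like in practice, the sketch below is not from the book: the `lookup_user` function and its whitelist rule are invented for this example. Alongside a normal functional check, an abuse-case test asserts that hostile input fails safely rather than being processed.

```python
# Hypothetical abuse-case tests: assert what the system must NOT do,
# complementing ordinary functional acceptance tests.

USERS = {"alice": {"role": "admin"}, "bob": {"role": "user"}}

def lookup_user(username):
    """Illustrative lookup that validates input instead of trusting it."""
    # Reject anything but simple alphanumeric names (whitelist validation).
    if not username.isalnum():
        raise ValueError("invalid username")
    return USERS.get(username)

def test_rejects_injection_style_input():
    # A classic SQL-injection-style payload must be rejected, not processed.
    try:
        lookup_user("alice' OR '1'='1")
        assert False, "hostile input was accepted"
    except ValueError:
        pass  # failing safely is the expected behavior

def test_normal_input_still_works():
    assert lookup_user("bob") == {"role": "user"}

test_rejects_injection_style_input()
test_normal_input_still_works()
```

Each successfully rejected attack is one more data point in the “increasing confidence” the text describes; the point is that these negative tests are written deliberately, because functional testing alone will never exercise them.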
Figure 4.1 Software Development Pitfalls
4.4 Families of Nonfunctional Requirements
Resilient software demonstrates several characteristics that help to improve the
lives of everyone who has a stake in or responsibility for developing it, maintaining it, supporting it, or using it as a foundation on which new features and
functions are added. These characteristics fall into natural groups that address
the following. They are listed alphabetically, not in order of importance:
• Availability
• Capacity
• Efficiency
• Extensibility
• Interoperability
• Manageability
• Maintainability
• Performance
• Portability
• Privacy
• Recoverability
• Reliability
• Scalability
• Security
• Serviceability
You may hear or see NFRs also called design constraints, quality requirements, or “-ilities,” after the last part of their names. You’ll also see that there is some overlap among NFRs: some requirements address more than one aspect of quality and resilience, and it’s not important where this shows
up, so long as it winds up as part of acceptance criteria or Definition of Done
(or both), is accounted for in all development activities, and is tested to assure
its presence and correct operation.
Here we’ll examine these various areas and discuss some broad and some
specific steps and practices to assure their inclusion in the final product.
4.4.1 Availability
Availability shows up again later as a goal of security, but other availability
requirements address the specific needs of the users who access the system. These
include maintenance time windows during which the software might be stopped for
various reasons. To help users determine their availability requirements, experts
recommend that you ask the following questions:
• What are your scheduled operations?
• What times of the day and what days of the week do you expect to be
using the system or application?
The answers to these questions can help you identify times when the system or application must be available. Normally, responses coincide with users’
regular working hours. For example, users may work with an application primarily from 8:00 a.m. to 5:00 p.m., Monday through Friday. However, some
users want to be able to access the system for overtime or telecommuting work.
Depending on the number of users who access the system during off-hours, you
can choose to include those times in your normal operating hours. Alternatively,
you can set up a procedure for users to request off-hours system availability at
least three days in advance.
When external users or customers access a system, its operating hours are
often extended well beyond normal business hours. This is especially true with
online banking, Internet services, e-commerce systems, and other essential
utilities such as electricity, water, and communications. Users of these systems
usually demand availability 24 hours a day, 7 days a week, or as close to that
as possible.
How often can you tolerate system outages during the times that you’re
using the system or application? Your goal is to understand the impact on users
if the system becomes unavailable when it’s scheduled to be available. For example, a user may be able to afford only two outages a month. This answer tells
you whether you can ever schedule an outage during times when the system is
committed to be available. You may want to do so for maintenance, upgrades,
or other housekeeping purposes. For instance, a system that should be online 24
hours a day, 7 days a week may still require a scheduled downtime at midnight
to perform full backups.
How long can an outage last, if one does occur? This question helps identify
how long the user is willing to wait for the restoration of the system during an
outage, or to what extent outages can be tolerated without severely affecting
the business. For example, a user may say that any outage can last for up to a
maximum of only three hours. Sometimes a user can tolerate longer outages if
they are scheduled.1
Availability Levels and Measurements
Depending on the answers to the questions above, you should be able to specify
which category of availability your users require, then proceed with design steps
accordingly:
• High availability—The system or application is available during specified
operating hours with no unplanned outages.
• Continuous operations—The system or application is available 24 hours a
day, 7 days a week, with no scheduled outages.
• Continuous availability—The system or application is available 24 hours
a day, 7 days a week, with no planned or unplanned outages.
The higher the availability requirements, the more costly the implementation
will be to remove single points of failure and increase redundancy.
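Although the text doesn’t give a formula, availability targets are commonly quantified as the percentage of scheduled time the system is up. A quick sketch of that arithmetic (the numbers below are illustrative, not from the book) shows why each additional “nine” is so much more costly:

```python
# Illustrative availability arithmetic: availability is often expressed as
# the percentage of scheduled time the system is actually available.

def availability_pct(uptime_hours, scheduled_hours):
    return 100.0 * uptime_hours / scheduled_hours

def allowed_downtime_minutes_per_year(target_pct):
    # Minutes of outage per year that a given availability target permits.
    return (1 - target_pct / 100.0) * 365 * 24 * 60

# "Three nines" (99.9%) allows roughly 526 minutes (~8.8 hours) of
# downtime per year; "five nines" (99.999%) allows only about 5 minutes.
assert round(allowed_downtime_minutes_per_year(99.9)) == 526
assert round(allowed_downtime_minutes_per_year(99.999)) == 5
```

Moving from high availability toward continuous availability shrinks the permitted outage budget toward zero, which is exactly where the cost of removing single points of failure and adding redundancy climbs steeply.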
4.5 Capacity
When software designs call for the ability for support personnel to “set the
knobs and dials” on a software configuration, instrumentation is the technique
that’s used to implement the requirement. With a well-instrumented program,
variables affecting the runtime environment for the program are external to
the program (not hard coded) and saved in an external file separate from the
executing code. When changes are needed to add additional threads for processing, programmers need not become involved if system support personnel
can simply edit a configuration file and restart the application. Capacity planning is made far simpler when runtime environments can be changed on the
fly to accommodate changes in user traffic, changes in hardware, and other
runtime-related considerations.
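A minimal sketch of such instrumentation, assuming a JSON configuration file and made-up key names: the “knobs and dials” live in a file outside the executing code, so support personnel can change them and restart the application without involving programmers.

```python
# Sketch of externalizing runtime "knobs and dials" (the key names are
# invented for illustration). Operators edit the file and restart the
# application; no code change is needed.
import json
import os
import tempfile

DEFAULTS = {"worker_threads": 4, "queue_depth": 100}

def load_runtime_config(path):
    """Merge an operator-edited JSON file over built-in defaults."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    return config

# Simulate an operator raising the thread count to handle more traffic.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"worker_threads": 16}, f)
    path = f.name

cfg = load_runtime_config(path)
assert cfg["worker_threads"] == 16   # operator override applied
assert cfg["queue_depth"] == 100     # untouched values keep defaults
os.unlink(path)
```

If the file is missing, the built-in defaults apply, so a bad or absent configuration never leaves the application unable to start.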
4.6 Efficiency
Efficiency refers to the degree that a system uses scarce computational resources,
such as CPU cycles, memory, disk space, buffers, and communication channels.2 Efficiency can be characterized using these dimensions:
• Capacity—Maximum number of users or transactions
• Degradation of service—The effects on a system with a capacity of X transactions per unit of time when it receives X+1 transactions in the same period
NFRs for efficiency should describe what the system should do when its limits
are reached or its use of resources becomes abnormal or out of pattern. Some
examples here might be to alert an operator of a potential condition, limit further
connections, throttle the application, or launch a new instance of the application.
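As a sketch of the “limit further connections” option (the class and capacity value below are invented for illustration), a simple admission throttle sheds the X+1th transaction instead of letting it degrade service for everyone already being served:

```python
# Illustrative degradation policy: beyond a configured capacity, new
# transactions are shed (rejected) rather than degrading all service.

class Throttle:
    def __init__(self, capacity):
        self.capacity = capacity  # maximum simultaneous transactions (X)
        self.active = 0
        self.rejected = 0

    def admit(self):
        """Admit a transaction if under capacity; otherwise shed it."""
        if self.active >= self.capacity:
            self.rejected += 1      # the X+1th transaction is refused
            return False
        self.active += 1
        return True

    def finish(self):
        self.active -= 1

t = Throttle(3)
results = [t.admit() for _ in range(4)]
assert results == [True, True, True, False]  # X admitted, X+1 shed
assert t.rejected == 1
```

The same skeleton could instead alert an operator or launch a new instance at the capacity threshold; the NFR’s job is to say which of those behaviors is required.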
4.7 Interoperability
Interoperability is the ability of a system to work with other systems or software
from other developers without any special effort on the part of the user, the
implementers, or the support personnel. Interoperability affects data exchanges
at a number of levels: ability to communicate seamlessly with an external system or trading partner, semantic understanding of data that’s communicated,
and ability to work within a changing environment of hardware and support
software. Interoperability can only be implemented when everyone involved in
the development process adheres to common standards. Standards are needed
for communication channels (e.g., TCP/IP), encryption of the channel when
needed (e.g., SSL/TLS), databases (e.g., SQL), data definitions (e.g., using
XML and standard Document Type Definitions, JSON objects), interfaces
between common software functions and microservices (e.g., APIs), and so on.
Interoperability requirements should dictate what standards must be applied
to these elements and how the designers and developers can get their hands on
them to enable compliant application software.
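The idea can be sketched in a few lines, using JSON as the agreed standard (the field layout here is invented for illustration): two independently written functions interoperate because both adhere to the same wire format, with no custom adapter between them.

```python
# Sketch: interoperability through agreed-on standards. Two systems that
# both speak JSON with an agreed field layout can exchange data without
# any special effort by users or implementers.
import json

def export_order(order_id, amount_cents, currency):
    """System A serializes to the agreed wire format."""
    return json.dumps({"order_id": order_id,
                       "amount_cents": amount_cents,
                       "currency": currency})

def import_order(wire_text):
    """System B, written independently, parses the same standard format."""
    record = json.loads(wire_text)
    return record["order_id"], record["amount_cents"], record["currency"]

# Round trip: what A exports, B imports, with no loss of meaning.
assert import_order(export_order("A-42", 1999, "USD")) == ("A-42", 1999, "USD")
```

The requirement that makes this work is not the code but the shared specification of field names and types, which is exactly what interoperability NFRs should dictate.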
Interoperability is also concerned with use of internal standards and tools
for development. When possible, new systems under development should take
advantage of any existing standardized enterprise tools to implement specific
features and functions—for example, single sign-on, cryptographic libraries,
and common definitions of databases and data structures for internal uses.
4.8 Manageability
Manageability encompasses several other areas of NFRs but is focused on easing the ability for support personnel to manage the application. Manageability
allows support personnel to move the application around available hardware as
needed or run the software in a virtual machine, which means that developers
should never tie the application to specific hardware or external non-supported
software. Manageability features require designers and developers to build software as highly cohesive and loosely coupled. Coupling and cohesion are used as
software quality metrics as defined by Stevens, Myers, and Constantine in an
IBM Systems Journal article.3
4.8.1 Cohesion
Cohesion is increased when the responsibilities (methods) of a software module have many common aspects and are focused on a single subject, and when those methods carry out a small number of related activities rather than operating on a variety of unrelated sets of data. Low cohesion can lead to the following problems:
• Increased difficulty in understanding the modules.
• Increased difficulty in maintaining a system, because logical changes in
the domain may affect multiple modules, and because changes in one
module may require changes in related modules.
• Increased difficulty in reusing a module, because most applications won’t
need the extraneous sets of operations that the module provides.
4.8.2 Coupling
Strong coupling happens when a dependent class contains a pointer directly to a
concrete class that offers the required behavior (method). Loose coupling occurs
when the dependent class contains a pointer only to an interface, which can
then be implemented by one or many concrete classes. Loose coupling provides
extensibility and manageability to designs. A new concrete class can easily be
added later that implements the same interface without ever having to modify
and recompile the dependent class. Strong coupling prevents this.
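The distinction can be sketched in code (the class names are invented for illustration), using Python’s abstract base classes to stand in for “an interface”: the dependent class holds a reference only to the interface, so a concrete class added later needs no changes to the dependent class.

```python
# Sketch of loose coupling via an interface. OrderService depends only on
# the Notifier interface, never on a concrete class.
from abc import ABC, abstractmethod

class Notifier(ABC):
    """The interface the dependent class points to."""
    @abstractmethod
    def send(self, message): ...

class EmailNotifier(Notifier):
    def send(self, message):
        return f"email: {message}"

class SmsNotifier(Notifier):
    # Added later: implements the same interface, so no caller changes.
    def send(self, message):
        return f"sms: {message}"

class OrderService:
    # Loosely coupled: any Notifier implementation can be substituted
    # without modifying (or recompiling) this class.
    def __init__(self, notifier):
        self.notifier = notifier

    def place_order(self, item):
        return self.notifier.send(f"order placed: {item}")

assert OrderService(EmailNotifier()).place_order("book") == "email: order placed: book"
assert OrderService(SmsNotifier()).place_order("book") == "sms: order placed: book"
```

Had `OrderService` constructed an `EmailNotifier` internally (strong coupling), adding SMS support would have meant modifying and retesting `OrderService` itself.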
4.9 Maintainability
Software maintenance refers to the modification of a software application after
delivery to correct faults, improve performance or other attributes, or adapt
the product to a modified environment, including a DevSecOps environment.4
Software maintenance is an expensive and time-consuming aspect of development. Software system maintenance costs are a substantial part of life-cycle
costs and can cause other application development efforts to be stalled or postponed while developers spend inordinate amounts of time maintaining their
own or other developers’ code. Maintenance is made more difficult if the original developers leave the application behind with little or no documentation.
Maintainability within the development process requires that the following
questions be answered in the affirmative:
1. Can I find the code related to the problem or the requested change?
2. Can I understand the code?
3. Is it easy to change the code?
4. Can I quickly verify the changes—preferably in isolation?
5. Can I make the change with a low risk of breaking existing features?
6. If I do break something, is it easy to detect and diagnose the problem?
Maintenance is not an application-specific issue, but a software development
environment issue: If there are few or no controls over what documentation is required, how documentation is obtained and disseminated, or how the documentation itself is maintained, or if developers are not given sufficient time to prepare original documentation, then maintainability of the application
will suffer. It’s not enough to include a requirement that “the software must be
maintainable”; specific requirements to support maintainability with actionable events must be included in design documents. The Software Maintenance
Maturity Model (SMmm)5 was developed to address the assessment and
improvement of the software maintenance function by proposing a maturity
model for daily software maintenance activities. The SMmm addresses the
unique activities of software maintenance while preserving a structure similar
to that of the Software Engineering Institute’s Capability Maturity Model Integration (CMMI).
4.10 Performance
Performance (sometimes called quality-of-service) requirements generally address
three areas:
• Speed of processing a transaction (e.g., response time)
• Volume of simultaneous transactions (e.g., the system must be able to
handle at least 1,000 transactions per second)
• Number of simultaneous users (e.g., the system must be able to handle a
minimum of 50 concurrent user sessions)
The end users of the system determine these requirements, and they must be
clearly documented if there’s to be any hope of meeting them.
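A sketch of testing the first of these dimensions, response time (the 50 ms budget and the workload function below are made up for illustration): measure a call and compare it against the documented budget, so a documented performance NFR becomes an executable check.

```python
# Illustrative response-time check: a documented performance NFR
# (hypothetical 50 ms budget) turned into an executable assertion.
import time

RESPONSE_TIME_BUDGET_S = 0.05  # assumed budget from the requirements

def do_work():
    # Stand-in for a real transaction.
    return sum(range(10_000))

def measure(fn):
    """Return the elapsed wall-clock time of one call to fn."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

elapsed = measure(do_work)
assert elapsed < RESPONSE_TIME_BUDGET_S, f"too slow: {elapsed:.4f}s"
```

Volume and concurrency requirements get the same treatment with load-testing tools rather than a unit test, but the principle is identical: a number written in the requirements, checked by an automated measurement.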
4.11 Portability
Software is considered portable if the cost of porting it to a new platform is
less than the cost of rewriting it from scratch. The lower the cost of porting
software, relative to its implementation cost, the more portable it is. Porting
is the process of adapting software so that an executable program can be created for a computing environment that is different from the one for which it
was originally designed (e.g., different CPU, operating system, mobile device,
or third-party library). The term is also used in a general way to refer to the
changing of software/hardware to make them usable in different environments.
Portability is most possible when there is a generalized abstraction between the
application logic and all system interfaces. When there’s a requirement that
the software under development be able to run on several different computing
platforms—as is the case with web browsers, email clients, etc.—portability is
a key issue for development cost reduction, and sufficient time must be allowed
to determine the optimal languages and development environments needed to
meet the requirement without the risk of developing differing versions of the
same software for different environments, thus potentially increasing the costs
of development and maintenance exponentially.
4.12 Privacy
Privacy is related to security in that many privacy controls are implemented as
security controls, but privacy also includes non-security aspects of data collection and use. When designing a web-based application, it’s tempting to collect
whatever information is available to help with site and application statistics, but
some of the practices used to collect this data could become a privacy concern.
Misuse and overcollection of data should be prevented with specific requirements that state what data to collect, how to store it, how long to retain it, and what uses of the data are permitted, and that let data providers (users, in most cases) decide whether they want that data collected in the first place.
The U.S. Federal Trade Commission offers specific guidance on fair information practice principles in four areas, along with additional principles for collecting information from children:6

1. Notice/Awareness
2. Choice/Consent
3. Access/Participation
4. Integrity/Security
1. Notice/Awareness—In general, a website should tell the user how it collects
and handles user information. The notice should be conspicuous, and the privacy policy should state clearly what information the site collects, how it collects
it (e.g., forms, cookies), and how it uses it (e.g., Is information sold to market
research firms? Available to meta-search engines?). Also, the policy should state
how the site provides the other “fair practices”: choice, access, and security.
2. Choice/Consent—Websites must give consumers control over how their personally identifying information is used. This includes marketing directly to
the consumer, activities such as “purchase circles,” and selling information to
external companies such as market research firms. The primary problems found
here involve collecting information for one purpose and using it for another.
3. Access/Participation—Perhaps the most controversial of the fair practices: users should be able to review, correct, and in some cases delete personally identifying information held by a particular website. Inaccurate information, or information used out of context, can ruin a person's life or reputation.
4. Security/Integrity—Websites must do more than reassure users that their information is secure with a "feel-good" policy statement. The site must implement policies, procedures, and tools that prevent everything from unauthorized access to personal information to hostile attacks against the site. Of biggest concern is the loss of financial information such as credit card numbers and bank account numbers. You'll find a separate section on security requirements later in this chapter.
In 2018, the European Union (EU) General Data Protection Regulation
(GDPR) took effect as a legal framework that sets guidelines for the collection
and processing of personal information from individuals who live in the EU.
Since...