CIS 205 UB: Incorporating Human Error Education into Software Engineering Essay

CIS 205, University of Baltimore

Description

Requirements:

  • Relevant to the topic "Mistakes and Errors" (for a Human-Computer Interaction class)
  • 300-500 words
  • Paper must be from a peer-reviewed publication (ACM, IEEE, HFES, or UXPA)
  • Paper must be published recently (2015-present)
  • Cite the paper in APA format and include a link to, or PDF of, the paper

Content requirements: Answer these questions

  • What problem did the authors intend to solve, how did they solve it, and were they successful?
  • Were there any flaws or novel approaches in their methodology?
  • How might the results of this work be applied in the real world?
  • What was one new thing you learned by reading this paper?

Guidelines for discussions:

  • Summarize the paper and provide your critique of the problem, methods, and outcomes (include the qualifications of the authors)
  • Lead a discussion of how the results could be applied in the real world.

Unformatted Attachment Preview

Incorporating Human Error Education into Software Engineering Courses via Error-based Inspections

Vaibhav Anu, Department of Computer Science, North Dakota State University, Fargo, USA (vaibhav.anu@ndsu.edu)
Gursimran Walia, Department of Computer Science, North Dakota State University, Fargo, USA (gursimran.walia@ndsu.edu)
Gary Bradshaw, Department of Psychology, Mississippi State University, Mississippi State, USA (glb2@psychology.msstate.edu)

SIGCSE '17, March 8-11, 2017, Seattle, WA, USA. © 2017 ACM. ISBN 978-1-4503-4698-6/17/03. DOI: http://dx.doi.org/10.1145/3017680.3017782

ABSTRACT

In spite of the human-centric nature of the software engineering (SE) discipline, human error knowledge has been ignored by SE educators, as it is often thought of as something that belongs in the realm of Psychology. The SE curriculum is also severely devoid of educational content on human errors, while other human-centric disciplines (aviation, medicine, process control) have developed human error training and other interventions. To evaluate the feasibility of using such interventions to teach students about human errors in SE, this paper describes an exploratory study of whether requirements inspections driven by human errors can be used to deliver both requirements validation knowledge (a key industry skill) and human error knowledge to students. The results suggest that human error based inspections can enhance the fault detection abilities of students, a primary learning outcome of the inspection exercises conducted in software engineering courses. Additionally, the results showed that students found human error information useful for understanding the underlying causes of requirement faults.

Keywords: Human error; requirements inspection; taxonomy; psychology
1. INTRODUCTION

Requirements engineering (RE) is essentially a social and highly human-centric activity that involves several stakeholders (e.g., developers, end users, analysts) working with each other to determine customer needs and translate them into a clear and precise set of requirements. There is a strong need to get system requirements right, as failure to do so leads to systematic failure in the software products built from incorrect requirements.

Because of the above-mentioned importance of obtaining correct requirements, RE is an integral part of the software engineering (SE) curriculum [7]. Students are trained on various RE activities like eliciting, analyzing, specifying, and validating requirements. However, these technical aspects of RE are not by themselves sufficient to equip students, who are future software engineers, with all the skills required to produce quality requirements. This is because RE in the real world, as stated earlier, is very human-centric. Consequently, RE requires students to be trained on a variety of hard and soft skills associated with a wide range of disciplines like cognitive psychology, sociology, and reliability engineering.

In this paper, we explore the feasibility of using requirements validation exercises to train SE students on an important RE skill that is associated with the science of Cognitive Psychology: human errors. Human error researchers study the cognitive and psychological processes that produce errors in human behavior. Human error research can help us understand the nature and mechanism of the errors people can make during the RE process. Safety-critical domains like aviation and medicine have successfully used human error research for process improvement [2, 14]. However, human error knowledge is still viewed by SE students, faculty, and practitioners as a component of Psychology, and most students are not aware of human errors and their importance in creating quality requirements.

Because RE is human-centric, it is not surprising that most RE faults can be traced back to human errors [13]. Lanubile and Basili were the first to note that requirements faults are essentially manifestations of underlying human errors [4]. They proposed that requirements validation should be driven by the underlying human errors rather than by conventional fault-based (e.g., fault checklist) methods, and coined the term error-based inspections, which require inspectors to analyze faults for underlying human errors (through a process called error abstraction) and then use the error information to find additional related faults. In comparison, fault-based inspections (which are conventionally taught in SE courses) simply guide inspectors to focus on specific fault types (manifestations of errors) like incorrectness, incompleteness, and ambiguity using a pre-defined set of questions.

The severe lack of human error training for SE students is also reflected in the fact that SE textbooks and literature contain competing, and sometimes contradictory, definitions of the terms error and fault; the two terms are often used interchangeably [4, 12]. Lanubile and Basili separated the terms as follows:

Error: An error is a failure in human cognition. An error occurs when a developer's thought process is flawed. Examples of human cognitive failures include inattention, carelessness, and misunderstanding.

Fault: A fault is a concrete manifestation of an error, for example, an incorrect fact getting recorded in a requirements artifact.

As part of our NSF grant, through interaction with SE and psychology researchers, we have developed a Human Error Taxonomy (HET) of requirement errors based on comprehensive psychological frameworks of human errors proposed in the Cognitive Psychology literature. The current paper explores whether the HET can be used to educate students on human errors and on tracing faults back to errors, which in turn could help them detect other related faults (manifestations of errors). To evaluate this, two inspection studies were conducted in separate graduate-level SE courses to determine whether error-based inspections are useful in training students on human errors.
The rest of this paper is structured as follows: Section 2 provides a brief background on human error research and the human error instruments used to train students during the inspection studies. Section 3 presents the primary goals of the inspection studies and the procedure followed. Section 4 describes the analyses performed on the inspection data. Section 5 provides a summary of results, and Section 6 draws conclusions regarding the learning imparted through the inspection exercises.

2. BACKGROUND

Researchers have used several human error classification systems, like Reason's Swiss Cheese model [9], the Human Factors Analysis and Classification System (HFACS) [11], Norman's classification [5], and Rasmussen's taxonomy [8], to show how people's actions and decisions can be erroneous in different situations. Most of these classification systems find their fundamental principles in the theory proposed by James Reason in his well-respected body of work on human errors [9]. Section 2.1 describes Reason's error classification and its applicability through both everyday examples of human errors and RE-specific examples.

2.1 Reason's Classification of Human Errors

Reason's theory provides a framework for understanding the specific breakdowns (or human errors) in human information processing. Reason proposed that human errors are induced during the two primary cognitive activities that humans perform when faced with any problem-solving situation: planning and execution [9, 10]. Reason further associated the human errors that happen during planning and execution with commonly observed erroneous human behaviors. The human errors of execution are associated with inattentiveness, carelessness, and forgetfulness.
Errors associated with inattentiveness and carelessness are referred to as slips, and those associated with forgetfulness are called lapses. The human errors of planning are induced by a lack of adequate knowledge when creating the plan and are referred to as mistakes.

Slips, which result from inattention while executing routine tasks, show up in common day-to-day activities like typing incorrectly ("fat-fingering") due to inattention or carelessness. Lapses also occur when executing routine tasks, but are failures of memory. For instance, having planned to repair or replace a broken machine part but forgetting to do so because of an interruption (e.g., taking a break) is a common lapse. From an RE perspective, slips and lapses typically occur during mundane activities like typing, taking notes, reading, and filing.

Mistakes, which are planning failures, occur when planning a solution for an unfamiliar problem; for example, a doctor misdiagnosing a patient either because the patient's symptoms were not studied properly or because the doctor has no experience with the symptoms this specific patient exhibits. Mistakes are generally the result of a lack of adequate knowledge, which in turn arises when working in novel situations, where individuals attempt to apply rules or procedures that have worked in similar situations in the past. Mistakes are particularly applicable to RE, because RE is a creative activity in which software engineers are building a solution for a new problem (i.e., a new user need).

Owing to its applicability in a wide range of domains, Reason's theory is frequently used for training students and industry practitioners on human errors. Educational material based on Reason's human error theory is used to train the operators of safety-critical equipment in domains like medicine, aviation, and nuclear power plants [6]. In medicine, patient safety is a major concern, and many academic healthcare institutions rely on Reason's theory to impart patient safety education to healthcare providers like nurses, hospital administrators, and physicians. An example of such a safety training module published by Duke University School of Medicine can be found here: http://patientsafetyed.duhs.duke.edu/module_e/definitions.html

Furthermore, in both aviation and healthcare, researchers frequently use Reason's theory to develop educational content for training individuals like medical students, healthcare personnel, flight safety crews, and pilots on human errors [6, 14]. Given the widespread use of Reason's theory in academic institutions and in industry for human error training, it can be inferred that Reason's theory is an appropriate learning resource for SE students to learn about human errors.

2.2 Error-Based Inspections and HET

Originally proposed by Lanubile et al. [4], error-based inspections add an extra step to the fault-based inspection technique: inspectors analyze the faults found during a fault-based inspection for underlying human errors and then use the human error information to find additional faults. Faults are analyzed for underlying human errors through a process called Error Abstraction (EA), wherein the inspector retrospectively analyzes each fault to determine the human error that led to its injection. The premise of error-based inspections is that if inspectors are made aware of the underlying human errors, they can use the error information to find additional faults that are otherwise overlooked or missed when focusing on faults alone.

The Human Error Taxonomy (HET) [1] was developed to support the error abstraction leg of error-based inspections. The HET provides a structured list of the most commonly occurring requirements-phase human errors. Without the HET, inspectors have to rely on their creativity and their experience with the requirements development process when abstracting human errors from faults, because error abstraction is a retrospective investigation into which human error(s) could have led to the injection of the fault under investigation. The HET aids the inspector during error abstraction by providing a tangible list of commonly occurring requirements-phase human errors.

The HET (Figure 1) classifies the requirements-phase human errors into Reason's slips, lapses, and mistakes. The HET was developed by first collecting data from the SE literature about the common failure processes (human errors) associated with RE, and then interpreting the identified errors in light of human information processing limitations (slips, lapses, and mistakes). For example, the SE literature reports that requirements practitioners sometimes analyze user needs without knowledge of the overall functionality of the system, which leads to incorrect specifications. These errors (Application Errors in Figure 1) are a classic example of Reason's mistakes, as a problem is being solved without adequate knowledge of the problem space. Fifteen such commonly occurring human errors were identified and classified as either a slip, a lapse, or a mistake to create the HET.

Figure 1. Human Error Taxonomy (HET) [figure not reproduced in this preview]

An Error Abstraction instrument called Human Error Abstraction Assist (HEAA) [3] was also developed to act as a human error intervention tool during requirements inspections. The HEAA is also used as a training tool to teach students the fault-to-human-error trace-back process.
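To make the fault-to-error trace-back bookkeeping concrete, here is a minimal Python sketch of the taxonomy and the error-abstraction record described above. This is an illustrative reading of the process, not code from the paper; the class names, the example fault, and the abstract_error helper are all hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class ReasonCategory(Enum):
        """Reason's three high-level human error types."""
        SLIP = "slip"        # inattention/carelessness during routine execution
        LAPSE = "lapse"      # memory failure during routine execution
        MISTAKE = "mistake"  # planning failure due to inadequate knowledge

    @dataclass
    class Fault:
        """A concrete manifestation of an error in a requirements artifact."""
        artifact: str        # e.g., an SRS section identifier (hypothetical)
        description: str

    @dataclass
    class AbstractedError:
        """Result of error abstraction (EA): a fault traced back to a human error."""
        fault: Fault
        het_error_class: str      # one of the 15 HET error classes
        category: ReasonCategory  # the Reason category the HET class falls under

    def abstract_error(fault: Fault, het_error_class: str,
                       category: ReasonCategory) -> AbstractedError:
        """Record the inspector's retrospective judgment about a fault's root error."""
        return AbstractedError(fault, het_error_class, category)

    # Hypothetical example: an incorrect specification traced to an Application
    # Error, which the HET classifies under Reason's mistakes.
    fault = Fault("RIM SRS 3.2", "Order total omits tax for carry-out orders")
    error = abstract_error(fault, "Application Error", ReasonCategory.MISTAKE)
    print(error.category.value)  # -> "mistake"

The point of the structure is that each reported fault carries its abstracted error along with it, so an inspector can later group faults by error class and go hunting for other faults the same error may have injected.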
3. STUDY DETAILS

Inspection studies were conducted in two separate graduate-level SE courses using the error-abstraction technique (supported by the HET). This section presents the study goals and other details.

3.1 Study Objectives

The major objective of having students perform error-based inspections was to help them understand the underlying human errors that can lead to the insertion of requirement faults. Additionally, we hypothesize that focusing on underlying human errors will enable a higher coverage of faults that are otherwise overlooked or missed when focusing only on the fault information:

Objective 1: Evaluate whether error-based inspections (supported by the HET and the EA instrument) can help enhance students' fault detection abilities as compared to fault-based inspections.

Objective 2: Evaluate whether students find error-based inspections a useful learning resource for understanding the human errors that can occur during the requirements development process.

3.2 Study Design

This section describes the procedure followed during the inspection exercises along with the data collected during the study run. The courses were chosen because they required students to learn about requirements inspections as part of their learning objectives.

3.2.1 Exercise I

Participants: 16 graduate students enrolled in the Software Requirements Definition and Analysis course at North Dakota State University were trained and subsequently inspected the requirements for a software system for errors and corresponding faults.

Procedure: This inspection exercise was conducted over a two-week period. During the first week, students were trained on common requirement fault types and on how to use the fault-checklist technique to locate faults in an SRS document. Students then used the training to inspect an externally developed document specifying the requirements for a restaurant order/inventory management system called Restaurant Interactive Menu (RIM) and reported faults. The result of this step was 16 individual fault lists. Next, students were trained on the requirements-phase human errors (HET), using Reason's account of cognitive failings (slips, lapses, mistakes), and on how to analyze faults to abstract human errors, followed by a re-inspection of the requirements document guided by the identified error information. Students then individually analyzed the faults they found during the first step, abstracted the human error(s) behind each fault, and classified each abstracted error into one of the 15 human error classes of the HET. The result of this step was 16 individual error lists containing human errors that occurred during the creation of the RIM requirements. Finally, the students individually re-inspected the RIM document using the identified human errors to locate new faults (those not found during the first inspection). This step resulted in 16 new fault lists.

Data collected: The faults found by each student were validated as true positives by comparing them against a list of true-positive faults known to be present in the RIM document; false positives were removed prior to data analysis. Similarly, the error-abstraction correctness of the human errors reported by each subject was validated by comparing the student's abstraction result with the abstraction results obtained by consulting a cognitive psychologist, Dr. Bradshaw (co-author). The data collected from each student included: (1) the number of real faults found during the first inspection of RIM; (2) the number of human errors correctly abstracted and classified from the faults found during the first step; and (3) the number of new faults found during the re-inspection of RIM using the identified errors.
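As a minimal sketch of what the validation step implies computationally, the snippet below filters a student's reported faults against a list of known true-positive faults. The fault identifiers and the student's report are made-up placeholders; the paper does not describe its tooling.

    # Hypothetical fault identifiers; the actual RIM fault list is not published here.
    known_true_positives = {"F01", "F02", "F03", "F04", "F05", "F06", "F07", "F08"}

    student_report = ["F02", "F05", "F09", "F03"]  # F09 is a false positive

    # Keep only faults that match the gold list (report order preserved).
    validated = [f for f in student_report if f in known_true_positives]
    print(validated)       # ['F02', 'F05', 'F03']
    print(len(validated))  # 3 real faults counted toward the student's total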
3.2.2 Exercise II

Participants: 34 graduate (Master's and PhD) students enrolled in the Software Development Processes course participated.

Procedure: As in the first exercise, students were trained on the requirements-phase human errors (HET), on how to abstract human errors from faults (EA instrument), and on using the error information to locate faults recorded in a requirements document. The primary focus of this exercise was to evaluate whether performing an error-based inspection of an externally developed requirements document can help students understand requirements-phase human errors. To evaluate that, students were provided with a document specifying the requirements for a Parking Garage Control System (PGCS), which was seeded with 30 realistic faults. The students were then given 10 faults (randomly selected from the 30 seeded) and asked to analyze them to abstract human errors and classify each into one of the HET error classes. The result of this task was 34 individual error lists containing the human errors (and their classifications) that may have occurred during the creation of the PGCS requirements. Next, the students were asked to re-inspect the PGCS document using the abstracted human errors, with the goal of locating the remaining faults. The result of this task was 34 individual fault lists. The inspection exercise was followed by a survey that gathered students' feedback on the error-based inspection exercise and on their understanding of the human errors and cognitive failure mechanisms that affect the requirements development process.

Table I. Post-study survey (Inspection Exercise II). N = 33 and Median = 4 for every item; Mean (SD) shown per item.

Questions that evaluated the effectiveness of error-based inspections for imparting knowledge of human errors:
  Q1. I feel confident I can distinguish between a slip and a lapse. Mean 3.9 (SD 0.6)
  Q2. I feel confident I can distinguish between a slip and a mistake. Mean 4.1 (SD 0.8)
  Q3. The HET documentation had sufficient detail to allow me to understand the human errors that occur during the requirements development process. Mean 3.8 (SD 1.0)

Questions related to the educational value of human errors and error-based inspections:
  Q4. The effort spent learning human errors is valuable and worthwhile for finding faults in a requirements document. Mean 3.97 (SD 0.9)
  Q5. I am confident that human errors represent real problems in the requirements development process. Mean 4.2 (SD 0.8)
  Q6. Human error information helped me detect faults I might otherwise have overlooked. Mean 3.9 (SD 0.8)

Questions related to the effectiveness of the training provided about human errors and error-based inspections:
  Q7. Rate the usefulness of the training while abstracting human errors from faults. Mean 3.7 (SD 0.7)
  Q8. Rate the usefulness of the training when using abstracted human error information to find new faults. Mean 3.8 (SD 0.6)

Figure 3. Effect of human error knowledge on fault detection effectiveness [figure not reproduced in this preview]
… the re-inspection (using the identified human error information). The results showed that students found an average of 6 faults during the fault-based inspection, but that number increased to an average of 14 new faults (not found during the first inspection) during the error-based re-inspection. While we certainly expected the number of new faults found during the re-inspection to be greater than zero (students were, after all, inspecting the same document a second time), an average fault-detection effectiveness increase of 233% is noteworthy. The results of a one-sample t-test (using a comparison value of 6 faults) found this increase to be significant at p…
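For readers unfamiliar with the test being reported, the snippet below shows how a one-sample t-test against a comparison value of 6 faults would be run. The per-student counts are hypothetical placeholders (only the averages of 6 and 14 are given above), so this is a sketch of the method, not a reproduction of the paper's analysis.

    # One-sample t-test of re-inspection fault counts against the comparison
    # value of 6 (the average found in the first, fault-based inspection).
    from scipy import stats

    # Hypothetical per-student new-fault counts; the paper reports only the mean (14).
    new_faults = [12, 15, 14, 16, 13, 14, 15, 12, 16, 14, 13, 15, 14, 16, 13, 14]

    t_stat, p_value = stats.ttest_1samp(new_faults, popmean=6)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates significance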

Explanation & Answer



Review Paper: Incorporating Human Error Education into Software
Engineering Courses via Error-based Inspections
Student’s Name
Institution
Date

Review Paper.
One of the primary problems the authors intended to solve was to assess whether error-based inspections (reinforced by the Human Error Taxonomy (HET) and the Error Abstraction (EA) instrument) can enhance students' fault detection skills. The authors also intended to examine whether students find error-based inspections a beneficial learning resource for comprehending the human errors that can occur during the requirements…

