Cybersecurity Employee Training Discussion Questions

User Generated

CrgreCvcre1969

Computer Science

Description

In a minimum of 450 words, discuss the two topics below.

  • Describe the cybersecurity training programs at one's organization (frequency, use of automation, certification upon completion, etc.). How is cybersecurity training at the organization designed to overcome resistance to changing users' poor cybersecurity habits?
  • Should cybersecurity training be tailored to different categories of individual roles and responsibilities within an organization? Explain the answer.

References

Jenkins, J. L., Grimes, M., Proudfoot, J., & Lowry, P. B. (2014, April 28). Improving Password Cyber-Security Through Inexpensive and Minimally Invasive Means: Detecting and Deterring Password Reuse Through Keystroke-Dynamics Monitoring and Just-in-Time Fear Appeals. Information Technology for Development, 20(2), 196-213. Retrieved October 12, 2020, from https://ssrn.com/abstract=2292761

Zhang, Z. (2011, August). Cohesive Cybersecurity Policy Needed for Electric Grid. National Defense Commentary. Retrieved October 12, 2020, from https://ssrn.com/abstract=1870487

Unformatted Attachment Preview

Improving password cybersecurity through inexpensive and minimally invasive means: Detecting and deterring password reuse through keystroke-dynamics monitoring and just-in-time fear appeals

Password reuse—using the same password for multiple accounts—is a prevalent phenomenon that can make even the most secure systems vulnerable. When passwords are reused across multiple systems, hackers may compromise accounts by stealing passwords from low-security sites to access sites with higher security. Password reuse can be particularly threatening to users in developing countries in which cybersecurity training is limited, law enforcement of cybersecurity is nonexistent, or in which programs to secure cyberspace are limited. This article proposes a two-pronged solution for reducing password reuse through detection and mitigation. First, based on the theories of routine, cognitive load, and motor movement, we hypothesize that password reuse can be detected by monitoring characteristics of users' typing behavior (i.e., keystroke dynamics). Second, based on protection motivation theory, we hypothesize that providing just-in-time fear appeals when a violation is detected will decrease password reuse. We tested our hypotheses in an experiment and found that users' keystroke dynamics are diagnostic of password reuse. By analyzing changes in typing patterns, we were able to detect password reuse with 81.71% accuracy. We also found that just-in-time fear appeals decrease password reuse; 88.41% of users who received a fear appeal subsequently created unique passwords, whereas only 4.45% of users who did not receive a fear appeal created unique passwords.
Our results suggest that future research should continue to examine keystroke dynamics as an indicator of cybersecurity behaviours, and use just-in-time fear appeals as a method for reducing non-secure behavior. The findings of our research provide a practical and cost-effective solution to bolster cybersecurity through discouraging password reuse.

Keywords: password reuse; keystroke dynamics; protection motivation theory; just-in-time fear appeals; support vector machine; cybersecurity; developing countries

Introduction

Global information and communication technology is rapidly growing (Maurer, 2011); however, developing countries frequently lack adequate cybersecurity controls and opportunities to educate the public about cybersecurity (Chen, et al., 2006). (Cybersecurity here refers to the body of practices, technologies, and processes designed to protect cyber-infrastructure (e.g., data, networks, computers, software) and user assets from attack, damage, and unauthorized access and use (Brechbuhl, et al., 2010).) Attackers (e.g., hackers, fraudsters) therefore target users from developing nations because they perceive that these regions do not have programs to secure cyberspace or to enforce cybersecurity laws (Tagert, 2010). Through gaining unauthorized access to user accounts, these attackers commit cybercrimes (e.g., identity theft; theft of money; phishing; unauthorized access to organizational or governmental resources) or even cyber warfare against other nations through espionage, sabotage, denial of service attacks, and malicious code hosting (Maurer, 2011; Tagert, 2010).

One common tactic leveraged to attack users is through exploiting password reuse by taking advantage of a user who has an identical password across multiple accounts (Ives, Walsh, & Schneider, 2004; Notoatmodjo & Thomborson, 2009). When password reuse occurs, hackers have the ability to steal credentials from one site (e.g., a low-security or fraudulent website hosted in a developing country) and use these credentials to breach other sites, even those that have considerable cybersecurity controls in place (Notoatmodjo & Thomborson, 2009). This phenomenon is often referred to as the 'Domino Effect'—after the failure of the weakest system, other systems will follow, yielding new password information from which still more systems will fail (Ives, et al., 2004).

Despite the dangers of password reuse, it is very prevalent. A global study of over 500,000 users found that people, on average, access 25 password-protected sites using only 6.5 unique passwords, resulting in passwords being shared across 3.9 sites on average (Florencio & Herley, 2007). Users suffer from 'password-overload', with over half of individuals surveyed admitting to having reused a password for highly-important accounts (Notoatmodjo & Thomborson, 2009). Several potential solutions have been proposed to deter password reuse, such as password managers (Gaw & Felten, 2006), alternative authentication methods (Ives, et al., 2004), and composition rules (Campbell, Ma, & Kleeman, 2011). Although some of these techniques have shown utility—particularly in an organizational setting—their widespread adoption is low because of high costs, mass-deployability issues, or technical limitations (Bonneau, et al., 2012).
These problems are amplified in developing nations in which cyber-infrastructure is poor, IT knowledge lags behind (Carte, Dharmasiri, & Perera, 2011; Pick & Azari, 2008), public/shared computer use is high (Florencio & Herley, 2006), and financial resources are scarce relative to other nations (Roztocki & Weistroffer, 2011) (see Table 1 in the literature review section for a more detailed discussion).

We propose a cost-effective methodology for mitigating password reuse by detecting when it occurs, and then displaying just-in-time fear appeals. By implementing these warnings on an "as needed" basis, we intend to target users who are likely reusing passwords while not inconveniencing those who are likely creating unique passwords. This can provide a cost-effective remedy that is feasible to implement in the developing world. In this article, we describe how password reuse can be identified by monitoring a user's keystroke behavior (i.e., keystroke dynamics). We then explain why just-in-time fear appeals provided on an "as needed" basis will discourage password reuse. In summary, we help address the need to decrease password reuse by answering the following questions:

RQ1. Can keystroke dynamics be used to predict if individuals are reusing passwords?
RQ2. Does providing just-in-time fear appeals help reduce password reuse?

Literature Review

Recognizing the dangers of password reuse, researchers have been challenged to better understand and create solutions to mitigate password reuse (Ives, et al., 2004). Table 1 reviews selected password reuse deterrence strategies identified in the literature, and their applicability to the developing world.

One cause of password reuse is that users are often unaware of what constitutes good password practices (Furnell, 2007). Although previous studies have shown that training may increase awareness and thereby reduce password reuse (Charoen, Murali, & Lorne, 2008; Ives, et al., 2004), training alone is rarely sufficient. For example, in a developing-world context, users in Nigeria who were given brief training on improving passwords by using mnemonics simply used easily-memorable, easily-hacked mnemonics rather than unique strong passwords (Oghenerukevbe, 2010). Other research suggests that even if users know what constitutes good or bad password practices, they have little motivation to comply because they perceive little threat and do not want to be inconvenienced; thus, users gravitate toward the path of least resistance (Tam, Glassman, & Vandenwauver, 2010; Zhang & McDowell, 2009).

One suggested way to overcome these limitations is through fear appeals (Herath & Rao, 2009; Johnston & Warkentin, 2010). Generally, a fear appeal is a persuasive message intended to make someone aware of a threat and to persuade them to engage in a protective action. Fear appeals can be implemented to help increase users' perceptions of the costs and dangers of reusing passwords (Zhang & McDowell, 2009); the more probable people perceive it to be that their accounts can be compromised, the less likely they are to reuse passwords (Bryant & Campbell, 2006).
Table 1. Examples of Password Reuse Deterrence Strategies

Password Reuse Solution | Effectiveness in Developing Nations
Client-side password management systems (Gaw & Felten, 2006) | Computer use often occurs on public computers and in computer cafés in developing nations, where the use of password managers may not be feasible (Florencio & Herley, 2006).
Composition rules / password requirements (Campbell, Kleeman, & Ma, 2007; Campbell, et al., 2011; Keith, Shao, & Steinbart, 2009) | While composition policies reduce the use of dictionary words and similar words, they do not normally reduce password reuse (Campbell, et al., 2011).
Alternative forms of authentication (Ives, et al., 2004), for example: neural networks (Shouhong & Hai, 2008); graphical passwords (Biddle, Chiasson, & van Oorschot, 2012); smart cards (Hwang, Chong, & Chen, 2010); one-time passwords using specialized servers (Huang, Ma, & Chen, 2011) | Although useful in many organizational settings, these alternative forms of authentication have low public penetration because "none… retains the full set of benefits that legacy passwords already provide" (Bonneau, et al., 2012, p. 1).
Training (Charoen, et al., 2008; Ives, et al., 2004) | Normally not sufficient alone to deter password reuse in developing nations (e.g., Oghenerukevbe, 2010); there are limited opportunities to mass educate the public in developing nations (Cone, et al., 2007).

Fear appeals are explained by a number of theories, protection motivation theory (PMT) being among the most developed (Rogers, 1975; Rogers, 1983). Johnston and Warkentin (2010) conducted a PMT-based study that concluded fear appeals can be used to positively influence users' intentions to comply with individual acts of cybersecurity. In their study, perceptions of self-efficacy, response efficacy, and threat severity were found to inform the degree to which fear appeals impacted behavioral intentions. An organizational survey based on PMT used perceived severity, rewards, response efficacy, self-efficacy, and response costs to predict intentions to comply with IS security policies (Vance, Siponen, & Pahnila, 2012). Similar results were seen in Posey et al. (2011). Another multi-organizational survey determined that perceived severity, response efficacy, self-efficacy, and response cost influence users' attitudes toward cybersecurity policies (Herath & Rao, 2009). A survey of college students showed that PMT is effective in increasing users' willingness to use strong passwords (Zhang & McDowell, 2009). A larger survey of non-students also demonstrated this tendency (Milne, Labrecque, & Cromer, 2009). Finally, fear appeals have been shown to influence specific cybersecurity behaviors such as updating and protecting passwords (Workman, Bommer, & Straub, 2008). Our paper extends these findings to examine whether the use of just-in-time fear appeals, in response to the detection of password reuse, will discourage password reuse.

We explore using keystroke dynamics as a minimally invasive and low-cost method for identifying password reuse and triggering a just-in-time fear appeal—properties that make it a compelling solution for developing countries. Keystroke dynamics are the behavioral characteristics of how a user types. Gaines et al. (1980) collected precise measurements of keystroke timings and identified two attributes that are useful in developing a keystroke signature for a user: (1) dwell time (how long a key is held down), and (2) transition time (the latency between key presses).
Based on this research, many studies have investigated using keystroke dynamics as a method to supplement the more traditional identification and authentication practice of using usernames and passwords (Joyce & Gupta, 1990; Leggett & Williams, 1988; Park, Park, & Cho, 2010; Tappert, Villani, & Cha, 2009; Teh, et al., 2010). Aside from an identification and authentication context, research in keystroke dynamics has demonstrated that typing characteristics may be linked to changes in cognitive load (Vizer, Zhou, & Sears, 2009). Creating unique, strong passwords is a complex, mentally taxing task that increases cognitive load (Adams & Sasse, 1999), resulting in distinct cognitive processing compared to reusing routine passwords. We thus contend that monitoring changes in keystroke dynamics has the potential for detecting password reuse. To date, however, researchers have yet to examine whether keystroke dynamics may be diagnostic of changes in cognitive processing and cognitive load associated with password reuse, and whether providing a just-in-time fear appeal will discourage password reuse when detected.

Theory and Hypotheses

We first explain why users should exhibit different keystroke dynamics when creating a unique password as compared to typing routine information such as their name, email address, username, or a commonly used password. To do so, we integrate theories of routine, cognitive load, and motor movement. We then use PMT and theory on salience to explain why just-in-time fear appeals will decrease password reuse when violations are detected.

For typists, the cognitive nature of keystroke dynamics differs when typing routine words versus non-routine words (e.g., unique passwords). Routine words refer to strings that an individual has typed many times previously, such as their name, email address, and passwords that they frequently use. Theories explaining routine action (e.g., Miller, Galanter, & Pribram, 1986) posit that routine tasks are cognitively hierarchically structured, indicating that higher-level goals initiate automatic lower-level tasks to fluently and rapidly coordinate behavior (Shaffer, 1976). For example, when typists type routine words, they rarely think about the letters they are typing; instead, they think of the word, or collection of words, which activates the keystroke-execution processes for automatically typing each letter in the word (Crump & Logan, 2010).

Logan and Crump explain this phenomenon as the inner–outer loop theory of typing (Crump & Logan, 2010; Logan & Crump, 2009). This theory divides the cognitive process of typing into two separate hierarchical processes or 'loops': the outer loop and the inner loop. The outer loop transforms text or thoughts into a series of words and passes these words to the inner loop. The inner loop is an automatic and subconscious process that transforms each word into a series of coordinated hand and finger movements to make up the necessary keystrokes. The inner loop causes parallel activation of constituent keystrokes for each letter in the word and provides a serial control process (John, 1996; Wu & Liu, 2008). According to this theory, the outer loop is unaware of the processes of the inner loop. Because of this, typists typically have to think only of the word or sentence rather than thinking of the individual characters or the layout of the keyboard. Hence, typists have little explicit knowledge of what their fingers are doing (Logan & Crump, 2009).
This hierarchical nature of typing has received considerable support in the literature. For example, research has shown that the copy-typing rate is dependent on the wordlike structure of the text. Words are typed much faster than random strings of characters, suggesting that words themselves play an important role in the cognitive processes that govern typing (Crump & Logan, 2010; Larochelle, 1983; Shaffer & Hardwick, 1968). Typing speeds are also influenced by word structure, such as syllable boundaries, indicating that word-level development is used during typing (Weingarten, Nottbusch, & Will, 2004; Will, Nottbusch, & Weingarten, 2006). Research has also shown that the human mind processes the letters in a word or sentence congruently, and fingers move in parallel to type the words, as opposed to processing each letter individually (Gentner, Grudin, & Conway, 1980). Finally, when typists are asked to recall the structure of a keyboard, type the letters for the left or right hand only, or pay attention to the keys that are being typed, typing performance decreases drastically. These results suggest that the word or sentence units activate subconscious lower-level processes that trigger the individual keystrokes (Logan, 2003).

When people type non-routine strings of characters, such as a strong unique password, typing is governed by different cognitive processes than when they are typing words or strings using the routine processes described previously. Creating a strong unique password requires substantial cognitive attention (Adams & Sasse, 1999). Strong passwords can be defined as non-routine words that include upper and lower-case letters, special characters, considerable length, and digits, constructed in a manner such that the user can remember the password. The additional constraint of uniqueness requires the individual to create a password that has not been used before, further increasing cognitive load. During this cognitively demanding password composition process, typing flow is disturbed. Rather than relying on the outer-inner loop process to translate words into fluent and efficient keystroke behavior, the requirement to include special characters, digits, and upper or lower-case letters causes people to become conscious of what is being typed at the character level. Such behavior has been shown to significantly alter keystroke dynamics, typically by making typing more sporadic and deliberate (Crump & Logan, 2010; Larochelle, 1983; Shaffer & Hardwick, 1968).

In summary, we predict that keystroke dynamics will differ for individuals typing in routine information—such as a name, email address, or previously used passwords—versus individuals typing in non-routine information, as with a unique password. When users type in routine information, they will produce fluent, efficient, and consistent typing behavior that is governed by the outer-loop, inner-loop cognitive processing of words. However, when users create strong and unique passwords, their keystroke dynamics will be influenced by cognitive load and character-level cognitive processing. Accordingly, we propose the following hypothesis:

H1. Creating strong unique passwords results in a measurable difference in keystroke dynamics as compared with typing known information.

When password reuse is detected through keystroke dynamics, the system can utilize just-in-time fear appeals to encourage users to create a unique password. To explain why fear appeals will decrease password reuse, we leverage PMT.
PMT was originally constructed to help clarify how warnings, known as fear appeals, influence behavior (Rogers, 1975; Rogers, 1983). Johnston and Warkentin (2010) define a fear appeal as "…a persuasive message with the intent to motivate individuals to comply with a recommended course of action through the arousal of fear associated with a threat" (pp. 550-551). PMT explains that two appraisal processes take place when individuals are given a warning—a threat appraisal process and a coping appraisal process. The outcome of these two appraisal processes will determine whether the warning will be successful in changing behavior (Rogers, 1983; Vance, et al., 2012).

The threat appraisal process consists of assessing threat severity, threat vulnerability, and the benefit of behaving in a maladaptive manner (i.e., maladaptive rewards) (Rogers, 1983). In the context of password reuse, severity refers to the degree of hardship that can result from reusing a password. Passwords provide protection to many sensitive sites, including banks, email, social networking sites, and so on. Perceptions of severity include beliefs about the negative consequences that may result from being hacked due to password reuse on these sites, including identity theft, financial loss, personal data exposure, and loss of confidential information. The more severe one perceives the consequences of password reuse, the less likely it is that one will reuse passwords. Vulnerability refers to the probability that one believes he/she will experience harm from reusing a password. Typically, people are in denial about passwords: they believe that only others with important information will be hacked, and thus they reuse passwords (Zhang & McDowell, 2009). However, as one's perception of vulnerability increases, password reuse should likewise decrease. Finally, the maladaptive benefit refers to the positive aspects of reusing passwords (e.g., easier to create and remember). If one perceives the benefit of reusing a password to be lower than the costs (e.g., the severity of and vulnerability to attack), they will likely not reuse passwords.

The coping-appraisal process consists of assessing response efficacy, self-efficacy, and response costs (Rogers, 1975). In a password reuse context, response efficacy is the perceived effectiveness of creating strong unique passwords to avoid being hacked or becoming a victim of a cybersecurity breach. People who believe that creating unique passwords reduces susceptibility are likely to avoid password reuse. Again, in this context, self-efficacy refers to one's personal belief in his/her ability to stop reusing passwords. Humans have cognitive limitations that deter them from making optimal security decisions. Working memory is limited, and therefore people must rely on strategies to remember unique passwords. The more a person believes that he or she has the ability and resources to not reuse a password, the more that individual will avoid password reuse. Finally, response costs refer to the costs of creating unique passwords. Response costs include the time and effort spent creating unique passwords, the cost of forgetting one's password, the inconvenience of being locked out of a system, and so on. The frustration and inconvenience of forgetting a password is amplified by the large number of online accounts and passwords people have to remember. The higher one perceives the costs of creating unique passwords to be, the more likely one will reuse passwords (Zhang & McDowell, 2009).
Fear appeals may inspire individuals to create strong and unique passwords through heightening perceptions of severity and vulnerability (the threat-appraisal process) as well as increasing perceptions of response efficacy and self-efficacy (the coping-appraisal process). Rogers (1975) states, "…a basic postulate is that protection motivation arises from the cognitive appraisal of a depicted event as noxious and likely to occur, along with the belief that a recommended coping response can effectively prevent the occurrence of the aversive event. If an event is not appraised as severe, as likely to occur, or if nothing can be done about the event, then no protection motivation would be aroused" (p. 99).

Just-in-time fear appeals should have particular potential to heighten perceptions of severity, vulnerability, response efficacy, and self-efficacy because of the immediacy of the warning relative to the user's behavior of creating a unique password. Immediacy has been shown to increase the salience of beliefs (Crano, 1995). An individual's working memory is limited, and information must compete for attention; therefore, only the most salient information will gain attention and influence behavior (Miller, 1956). When creating a new user account, individuals may have several pieces of information competing for attention, including the primary purpose of creating an account, work responsibilities, time constraints, personal goals, and so forth (Adams & Sasse, 1999). Given these other competing cognitions, severity, vulnerability, response efficacy, and self-efficacy may not be salient to the user, and as such, the user may default to reusing a password. However, just-in-time warnings can make perceptions of severity, vulnerability, response efficacy, and self-efficacy more salient, and thereby more likely to impact behavior. In summary, we propose:

H2. Users who receive a just-in-time fear appeal discouraging password reuse are more likely to create unique passwords than users who do not receive such a fear appeal.

Methodology

To test our hypotheses, we conducted an experiment in which participants were required to create a user account on a website specially constructed for this study. After the first account creation screen, we randomly manipulated whether a just-in-time fear appeal was shown and, if so, gave the participant a second opportunity to create a unique password. We then compared whether users who received a just-in-time fear appeal created unique passwords more often than users who did not receive a just-in-time warning. During the account creation and login process, precise timing data for keystrokes was captured to build a model predicting password reuse.

Participant Selection

A total of 148 students from a mid-level information systems course at a large public university in the South-western United States participated in the experiment. Participants were compensated with class credit for their participation. Client hardware and software configuration issues invalidated data from 13 of the participants, leaving usable data from 135 participants. Students were chosen for our sample because they commonly have multiple online accounts that may be targets for password reuse. The students represent an ethnically diverse population at this university, increasing the generalizability of this study to other nations and cultures. The average amount of college education was 3.2 years. Fifty-four percent of the participants were male, and the average age was 23.4.
The four most represented disciplines for the participants' majors were Accounting (15%), MIS (15%), Marketing (13%), and Finance (11%). Fifty-five percent of the participants were U.S. citizens, 16% Indian, 10% Mexican, 8% Chinese, and 11% citizens of other countries.

Experiment Task and Procedure

To participate in the experiment, all participants were required to create a user account and password on a registration system specially crafted for the experiment (Figure 1). During the account creation process, participants provided several pieces of routine information, specifically their name, university email address, and preferred username. The account creation process also required users to provide a password, which they self-selected to create uniquely or to reuse an existing password. Precise keystroke timing information was collected for each field, which was later used to establish a baseline of keystroke dynamics characteristics. Note that to protect the privacy of participants, only statistical properties of keystroke dynamics were recorded, and this information was decoupled from personal identifiers.

Figure 1. Account Creation Page

After entering the required information and clicking the "Create Account" button, participants were assigned randomly to a "warning treatment" (n=69) or a "non-warning treatment" (n=66). If assigned to the warning treatment, they were shown the prompt in Figure 2:

We have detected that you may have used this password on other websites. Using the same password for multiple sites puts you at high risk of being hacked. To protect your privacy, please choose a unique password for this website.

Figure 2. Fear Appeal Text

Participants were then prompted to re-enter the same information needed to create an account, including a password. Participants were not prohibited from entering the same password that they previously used. The fear appeal message was specifically crafted to leverage principles from PMT to persuade the participant to create a unique password. The term "high risk" was included to increase perceived vulnerability, and the term "hacked" was included to increase the perceived severity of reusing a password. The last sentence in the prompt explains how to protect against hackers by creating a unique password, thus increasing efficacy.

After completing the account creation process successfully, users in both conditions were presented with a login screen where they were prompted to provide the username and password they had just created to access the system. After participants logged in, they were presented with a brief survey as described below.

Survey Instruments

In a post-experiment survey, PMT constructs (perceived severity, perceived vulnerability, response efficacy, and self-efficacy), adapted from Vance et al. (2012), were captured as a manipulation check. Participants were then asked if they created a unique password when they first attempted to create an account (Figure 3). Finally, for participants who received the fear appeal treatment, the survey asked if they created a unique password on their second attempt (Figure 4).

When you FIRST created a password on the account creation page, did you use a password that you have ever used before (e.g., your email password, school password, bank, iTunes, computer password, etc.)? PLEASE ANSWER THIS QUESTION TRUTHFULLY. We are conducting a study about password reuse behaviour. Your privacy is protected, and your password will not be saved.
○ I used a password I have used before when I FIRST created a password in this experiment
○ I created a unique password when I FIRST created a password in this experiment

Figure 3. Password reuse question

After being prompted to create a unique password, did you create a unique password (e.g., a password you have never used before on any account)?
○ Yes
○ No

Figure 4. Password reuse question after fear appeal

Keystroke Dynamics

To predict password reuse, we captured characteristics of participants' typing patterns for analysis. A JavaScript application was created to record key codes and the time (in milliseconds) at which each key was pressed and released. This code was embedded into the account creation and login pages of the experiment website. Statistical data collected by the application were stored in a SQL database. After data collection, algorithms were used to transform the raw keystroke timing data into digraph patterns and to calculate dwell time, transition time, and typing speed for each element that was entered (name, email address, username, password, and password confirmation). Dwell times were calculated by subtracting the time that each key was pressed from the time it was released. Transition times were calculated by subtracting the time a key was released from the time the subsequent key was pressed. Approximately 80% of the transition and dwell times were used in the subsequent analysis. The remaining 20% of individuals' transition and dwell times were not used because they contained substantial corrections that invalidated the keystroke data (i.e., dozens of corrections or holding down keys for extended periods of time), invalid characters (a by-product of some browsers and operating systems), or long pauses outside 3 standard deviations of the average. Typing speed was then calculated by dividing the total number of milliseconds each field took to type by the number of characters entered into the field, then dividing the result by 1000 to convert to seconds, yielding a per-character typing time (the inverse of characters per second).

Data Analysis and Results

Prior to the full data analysis, we assessed the validity and reliability of the adapted measures. Convergent and discriminant validity of the measurement scales were assessed through a factor analysis using Varimax rotation as well as construct correlations and cross-correlations. All of the loadings of each item on its latent construct exceeded 0.6 and loaded less than 0.35 on other constructs. Average variance extracted (AVE) of all constructs was much larger than 0.5; therefore, good convergent validity was demonstrated (Anderson & Gerbing, 1988). Additionally, all square roots of AVE exceeded the correlation coefficients between constructs and therefore demonstrated good discriminant validity (Fornell & Larcker, 1981). Finally, all Cronbach's alpha scores were above the 0.7 threshold suggested by Nunnally (1978).

To ensure that the fear appeals presented to participants elicited the desired effects based on PMT, we then conducted manipulation checks on perceived severity, vulnerability, response efficacy, and self-efficacy with an ANOVA. The manipulation checks supported that the fear appeals influenced perceptions of severity (F(df = 133) = 12.123, p = .001), vulnerability (F(df = 133) = 4.227, p = .043), and response efficacy (F(df = 133) = 9.131, p = .003). The manipulation did not influence self-efficacy significantly (F(df = 133) = 1.860, p = .176), possibly because we did not provide training on strategies for composing unique passwords.
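To make the keystroke-dynamics features described in the Keystroke Dynamics subsection concrete, the following is a minimal Python sketch of how dwell time, transition time, and per-field typing time could be computed from logged key-down/key-up timestamps. This is an illustration only, not the authors' code (their logger was written in JavaScript and post-processed from a SQL database); the event format, field name, and example values are assumptions.

```python
from statistics import mean, stdev

def keystroke_features(events):
    """Summarize keystroke dynamics for one form field.

    `events` is assumed to be a list of (key, down_ms, up_ms) tuples in the
    order the keys were pressed (an illustrative format, not the paper's schema).
    """
    dwell = [up - down for _, down, up in events]            # how long each key was held (ms)
    transition = [events[i + 1][1] - events[i][2]            # release of one key to press of the next (ms)
                  for i in range(len(events) - 1)]
    total_ms = events[-1][2] - events[0][1]                  # time spent typing the whole field
    return {
        "dwell_mean": mean(dwell),
        "dwell_sd": stdev(dwell) if len(dwell) > 1 else 0.0,
        "transition_mean": mean(transition) if transition else 0.0,
        "transition_sd": stdev(transition) if len(transition) > 1 else 0.0,
        # per-character typing time in seconds, as described in the text
        # (its reciprocal gives characters per second)
        "seconds_per_char": (total_ms / len(events)) / 1000.0,
    }

# Hypothetical usage: events recorded while one participant typed the password field.
password_events = [("s", 0, 80), ("3", 150, 240), ("!", 420, 500), ("k", 640, 720)]
print(keystroke_features(password_events))
```

In a full analysis these per-field summaries (and their z-scores across fields) would be assembled into one feature row per participant, mirroring the features the paper describes.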
Hypothesis Testing

Hypothesis 1

To test hypothesis 1—creating strong unique passwords results in a measurable difference in keystroke dynamics as compared with typing known information—we evaluated our data using both statistical analysis of keystroke dynamics and a support vector machine (SVM) in WEKA, a popular suite of machine learning software (Hall, et al., 2009). In our initial analysis of the data, we calculated the average transition time and dwell time for the three non-password fields together, along with the averages, standard deviations, and z-scores for each of the five fields individually for all users. Deviations in transition time and dwell time were used to generate features to classify password reuse. In the first row in Table 2, we show transition data from the first account creation screen for users who indicated they reused a password. The data indicate that users typically type their name and email address slightly slower than average (11% and 9% respectively), and type their username approximately 26% faster than the average of all non-password fields. The password and password confirmation fields are 35-36% slower than non-password fields.

Fifty-three participants in the fear appeals condition reported reusing a password initially, but creating a unique password after being prompted. We conducted a within-subject t-test to examine whether participants' keystroke dynamics were statistically different when creating unique passwords compared to reusing passwords. After receiving the prompt to create a new unique password and again filling out the fields on the account creation page, our results indicate no statistically significant difference in keystroke dynamics for name (t(df = 52) = -0.523, p = .603), email address (t(df = 52) = -0.429, p = .670), or username (t(df = 52) = 0.811, p = .421) for these participants; however, the speed with which the password is typed drops dramatically from an average of 81ms between key presses to an average of 107ms, as shown in Table 2. This within-subject difference between typing non-unique passwords (first attempt) and typing unique passwords (second attempt) is highly significant (t(df = 52) = -3.448, p = .001). Thus, H1 was supported.

Table 2. Analysis of transition times

Attempt | Average transition time (ms) | Name (ms, % of average) | Email Address (ms, % of average) | Username (ms, % of average) | Password (ms, % of average) | Password Confirmation (ms, % of average)
First Attempt (Non-unique password) | 59 | 66 (111%) | 65 (109%) | 44 (74%) | 81 (136%) | 80 (135%)
Second Attempt (Unique Password) | 60 | 65 (109%) | 61 (103%) | 40 (67%) | 111 (186%) | 103 (172%)

In a supplemental analysis, we built on this initial extraction and analysis of transition time and dwell time features to create a classification algorithm for identifying non-unique passwords using a support vector machine in WEKA. A support vector machine is a supervised learning approach that constructs a set of hyperplanes in a high-dimensional space. It uses a linear model to implement nonlinear class boundaries by mapping input vectors into the high-dimensional feature space. In this feature space, an optimal separating hyperplane is constructed. This hyperplane gives the maximum separation between decision classes. The training examples that are closest to the maximum-margin hyperplane are called support vectors (Cristianini & Shawe-Taylor, 2000).
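The classification analysis itself was performed with WEKA's SMO support vector machine, as described next. As an illustrative sketch only (not the authors' implementation), the following Python example shows how a comparable linear SVM could be trained and evaluated with 10-fold cross-validation using scikit-learn; the feature matrix and labels are random placeholders standing in for the per-participant keystroke features and self-reported reuse labels, so the resulting accuracy will hover around chance.

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row of keystroke-dynamics features per participant
# (e.g., means/SDs of dwell and transition times per field), with labels
# 0 = created a unique password, 1 = reused a password (self-reported).
rng = np.random.default_rng(0)
X = rng.normal(size=(135, 10))       # 135 participants, 10 hypothetical features
y = rng.integers(0, 2, size=135)     # hypothetical labels

# Linear SVM (an illustrative stand-in for WEKA's SMO), scored with 10-fold CV.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
predictions = cross_val_predict(model, X, y, cv=10)

print("Cross-validated accuracy:", accuracy_score(y, predictions))
print(classification_report(y, predictions,
                            target_names=["Created Unique Password", "Reused Password"]))
```

The classification report prints per-class precision, recall, and F-measure, paralleling the quantities reported in Table 3 below.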
Before applying the support vector machine learning approach, we performed feature selection using the Classifier Subset Evaluator for WEKA's support vector machine implementation, SMO. We used a best-first search method to select features based on the raw values, averages, standard deviations, and z-scores of transition time and dwell time for each field. After selecting the attributes, we created a classification model by applying WEKA's SMO classifier to the data. The results were validated using 10-fold cross-validation, and are shown in Table 3.

Table 3. Support vector machine classification results

Class | True Positive Rate (1) | False Positive Rate (2) | Precision (3) | Recall (4) | F-Measure (5)
Created Unique Password | 0.811 | 0.178 | 0.789 | 0.811 | 0.800
Reused Password | 0.822 | 0.189 | 0.841 | 0.822 | 0.831

1. The fraction of users correctly classified as a hit (i.e., as creating a unique password or reusing a password)
2. The fraction of users incorrectly classified as a hit
3. The fraction of instances classified as a hit that are actually a hit
4. The fraction of hits that are classified as a hit
5. The harmonic mean of precision and recall (a combined statistic of precision and recall)

In summary, we were able to successfully distinguish between users who created unique passwords and users who reused passwords based on keystroke dynamics. Our model correctly classified 81.71% of participants.

Hypothesis 2

To test hypothesis 2—users who receive a just-in-time fear appeal deterring password reuse are more likely to create unique passwords than users who do not—we performed a between-subjects t-test comparing the percentage of people who created unique passwords in the non-fear-appeal group to the percentage of people who created unique passwords in the fear appeal group. This test was performed using the "two sample t-test between percents" from the StatPac Statistics Calculator. Table 4 summarizes the percentages of unique passwords in each group. The difference between the two groups was highly significant (t(df = 133) = 7.874, p < .001); hence, H2 was supported.

Table 4. Summary of unique passwords in manipulations

Received fear appeal | # of participants | # of participants who created a unique password | % of participants who created unique passwords
No | 66 | 3 | 4.45%
Yes | 69 | 61 | 88.41%

Discussion

This article addressed two research questions. The first research question asked whether keystroke dynamics could be used to predict if people are reusing passwords. To answer this research question, we monitored keystroke dynamics—specifically transition time and dwell time—on the account creation page of a website. We found significant differences in transition times between unique and non-unique passwords. We also trained a support vector machine and were able to correctly classify password reuse with an overall accuracy rate of 81.71%. Hence, we conclude that H1 was supported and that it is possible to detect password reuse by monitoring users' keystroke dynamics.

Our second research question asked whether providing just-in-time fear appeals would help reduce password reuse. In our study, approximately half of the participants were given a just-in-time fear appeal discouraging them from reusing passwords. The fear appeal was created based on constructs found in PMT, and it significantly influenced perceptions of threat severity, threat vulnerability, and response efficacy. After receiving the fear appeal, 88.41% of participants created unique passwords.
Conversely, in the group that did not receive the fear appeal, only 4.45% of participants created unique passwords. The difference between the two groups was highly significant, supporting H2. We conclude that just-in-time fear appeals decrease password reuse.

Implications for Research

This paper makes several important contributions to research. First, our research highlights the need to understand and mitigate password reuse. In our data collection, only 7 out of 135 participants (5.19%) created a unique password during their first interaction with our system. This illustrates the prevalence of password reuse. Few extant studies have examined how to alleviate password reuse, especially in situations where users may have little or no opportunity for formal cybersecurity education and shared computer use is high—as is the case in developing countries. Our research helps address this need by explaining how keystroke dynamics can be diagnostic of password reuse and how just-in-time fear appeals can decrease password reuse, all within the account creation page of a website.

To our knowledge, this is the first study to examine how keystroke dynamics can be used to identify password reuse. Although considerable research has been conducted with regard to keystroke dynamics as a supplement to traditional authentication mechanisms, existing research examining changes in keystroke dynamics as a proxy for cognitive changes is scarce. In our study, we show that changes in keystroke dynamics are indicative of password reuse, and we theoretically explain why differences in keystroke dynamics may result from changes in a user's cognitive processing and level of cognitive load. Building on this theory, we were able to construct an algorithm for identifying password reuse with an accuracy rate of 81.71%. Future research should examine whether other forms of insecure behaviour can be detected through changes in keystroke dynamics.

Finally, we contribute to theory by extending PMT to the context of password reuse. We found that just-in-time fear appeals to avoid password reuse promote unique password creation. We theorize that this effect is due to the increased salience of the message resulting from the immediacy of the fear appeal. Rarely is the act of 'being secure' the primary purpose of using a computer; rather, computers are used to achieve other goals such as increasing productivity, communicating, socializing, and entertainment. These other goals compete for the user's attention, and cybersecurity beliefs, such as the severity of and vulnerability to a threat, are often overcome by these other motives. Just-in-time fear appeals may make cybersecurity beliefs, such as those found in PMT, more salient, thereby decreasing password reuse.

Implications for Practice

Password reuse is pervasive. Even the most technically sound system can be breached by stolen credentials when individuals reuse passwords across multiple accounts. The detrimental effects of password reuse are particularly difficult to combat in developing countries, in which systems may lack adequate cybersecurity controls, individuals often use public computers that limit the applicability of password management systems, government programs might not be in place to enforce cybersecurity laws, and public cybersecurity education is either scarce or non-existent. To protect individuals against password reuse, we propose a cost-effective method to deter it.
Our tool focuses on changing users' password creation behaviour through two components. First, the system detects when password reuse is present; the prediction algorithm monitors keystroke dynamics characteristics on account creation pages to detect possible password reuse. Second, when password reuse is detected, we found that a simple just-in-time fear appeal can be used to change users' password creation behaviour, increasing the use of unique passwords in this study from 4.45%, when no fear appeal is used, to 88.41% when a just-in-time fear appeal is presented.

Limitations and Future Research

Future research should examine how just-in-time fear appeals influence user satisfaction, especially for users who are falsely accused (false positives). Because of the small percentage of participants who created unique passwords prior to receiving the fear appeal in our study (7 out of 135, only 3 of which were in the fear appeal condition), we did not have adequate statistical power to compare the satisfaction of users who unjustly received the fear appeal with that of users who rightfully received it. Based on expectation-confirmation theory (Oliver, 1977), we predict that users who unjustly receive a fear appeal will experience a decrease in satisfaction with the system. Expectation-confirmation theory explains that confirming or disconfirming an individual's expectations will influence satisfaction. If the perceived performance of the system falls short of an individual's expectations, this negative disconfirmation will decrease satisfaction. Ultimately, the level of satisfaction or dissatisfaction experienced by the user will influence their tendency to use, repurchase, return, or discontinue the use of a product or service (Lowry, et al., 2009; McKinney, Yoon, & Zahedi, 2002). We predict that users who unjustly receive a fear appeal to create a unique password will experience a negative disconfirmation and view the website as less usable, which will result in a decrease in satisfaction. Future research should validate this proposition.

Future research should also seek to improve the classification accuracy of password reuse. Although 81.71% is promising, we believe significant improvement can be made. One possible way to further improve the algorithm's accuracy is through more sophisticated keystroke dynamics analysis, in which digraph patterns for unique and non-unique passwords are compared against larger sets of routine and non-routine text. This approach would require the collection of significantly more text than is typically available during the account creation process, but could be easily incorporated into other studies using keystroke dynamics.

Furthermore, future research should cross-validate the results of this study in a developing country. Our student-based sample consists of individuals who are largely computer literate and mostly competent typists, which may limit the generalizability of our findings. Typing fluency may vary across different samples, especially in developing countries, which may influence the features that predict password reuse. However, research has suggested that despite potentially large differences between users (e.g., in terms of typing fluency, computer skills, and language), the keystroke dynamic differences within users will likely be constant (Gunetti & Picardi, 2005; Gunetti, Picardi, & Ruffo, 2005). Future research should validate the generalizability of our findings and identify which features are most diagnostic in various populations.
Finally, our research only examined whether users' passwords were unique. We did not examine the overall strength of the passwords, nor did we investigate the pros and cons of different password creation strategies. It is possible that having more controls (e.g., requiring users to create unique passwords) can cause users to create passwords with lower password entropy as a negative side effect (Jenkins, et al., 2010). Thus, future research should more comprehensively examine how extensively password reuse should be discouraged (e.g., for all accounts or only for highly sensitive accounts) and how prompting users to create unique passwords may influence the entropy of the passwords they create. Future research should also investigate how different password-creation strategies can encourage both password strength and uniqueness.

Conclusion

This paper helps address the need to reduce password reuse. Password reuse presents a security risk because a single stolen password can be used to gain unauthorized access to a wide range of important websites and systems. In this study, we theoretically explain why password reuse can be detected through monitoring keystroke dynamics on account creation pages. We also proposed that providing just-in-time fear appeals when violations are detected will decrease password reuse by making important cybersecurity beliefs more salient. We tested our hypotheses experimentally and found that keystroke dynamics are diagnostic of password reuse. The keystroke dynamics of users who created unique passwords were significantly different from the keystroke dynamics of users who reused passwords. Using this knowledge, we created a support vector machine capable of identifying password reuse with 81.71% accuracy. We also found that just-in-time fear appeals strongly decreased password reuse; 88.41% of people who received a just-in-time fear appeal created unique passwords, whereas only 4.45% of people who did not receive a fear appeal created a unique password. This paper demonstrates that keystroke dynamics can be used to assess complex human behaviour and cognitive processing, and that just-in-time fear appeals are a highly influential and cost-effective way to decrease password reuse. Developing countries may be able to realize gains in information systems cybersecurity by leveraging the minimally invasive and cost-effective methods described in this paper.

References

Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12), 40-46.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411-423.
Biddle, R., Chiasson, S., & van Oorschot, P. C. (2012). Graphical passwords: Learning from the first twelve years. ACM Computing Surveys, 44(4), 19:11-19:41.
Bonneau, J., Herley, C., van Oorschot, P. C., & Stajano, F. (2012, May 20-23). The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. Paper presented at the 2012 IEEE Symposium on Security and Privacy, San Francisco, CA.
Brechbuhl, H., Bruce, R., Dynes, S., & Johnson, M. E. (2010). Protecting critical information infrastructure: Developing cybersecurity policy. Information Technology for Development, 16(1), 83-91.
Bryant, K., & Campbell, J. (2006). User behaviours associated with password security and management. Australasian Journal of Information Systems, 14(1), 81-100.
Campbell, J., Kleeman, D., & Ma, W. (2007). The good and not so good of enforcing password composition rules. Information Systems Security, 16(1), 2-8.
Campbell, J., Ma, W., & Kleeman, D. (2011). Impact of restrictive composition policy on user password choices. Behaviour & Information Technology, 30(3), 379-388.
Carte, T. A., Dharmasiri, A., & Perera, T. (2011). Building IT capabilities: Learning by doing. Information Technology for Development, 17(4), 289-305.
Charoen, D., Murali, R., & Lorne, O. (2008). Improving end user behaviour in password utilization: An action research initiative. Systemic Practice & Action Research, 21(1), 55-72.
Chen, Y. N., Chen, H. M., Huang, W., & Ching, R. K. H. (2006). E-government strategies in developed and developing countries: An implementation framework and case study. Journal of Global Information Management, 14(1), 23-46.
Cone, B. D., Irvine, C. E., Thompson, M. F., & Nguyen, T. D. (2007). A video game for cyber security training and awareness. Computers & Security, 26(1), 63-72.
Crano, W. D. (1995). Attitude strength and vested interest. In R. E. Petty & J. A. Krosnick (Eds.), Attitude Strength: Antecedents and Consequences (pp. 131–158). Mahwah, NJ, USA: Erlbaum.
Cristianini, N., & Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press.
Crump, M. J. C., & Logan, G. D. (2010). Hierarchical control and skilled typing: Evidence for word-level control over the execution of individual keystrokes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(6), 1369-1380.
Florencio, D., & Herley, C. (2006). How to login from an internet cafe without worrying about keyloggers. Paper presented at the Symposium on Usable Privacy and Security.
Florencio, D., & Herley, C. (2007). A large-scale study of web password habits. Paper presented at the Proceedings of the 16th International Conference on World Wide Web, Banff, Alberta, Canada.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.
Furnell, S. (2007). An assessment of website password practices. Computers & Security, 26(7-8), 445-451.
Gaines, R. S., Lisowski, W., Press, S. J., & Shapiro, N. (1980). Authentication by keystroke timing: Some preliminary results. RAND Corporation.
Gaw, S., & Felten, E. W. (2006). Password management strategies for online accounts. Paper presented at the Proceedings of the Second Symposium on Usable Privacy and Security.
Gentner, D. R., Grudin, J., & Conway, E. (1980). Finger movements in transcription typing.
Gunetti, D., & Picardi, C. (2005). Keystroke analysis of free text. ACM Transactions on Information and System Security (TISSEC), 8(3), 312-347.
Gunetti, D., Picardi, C., & Ruffo, G. (2005). Keystroke analysis of different languages: A case study. Advances in Intelligent Data Analysis, 6(1), 133-144.
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. H. (2009). The WEKA data mining software: An update. SIGKDD Explorations, 11(1), 10-18.
Herath, T., & Rao, H. R. (2009). Protection motivation and deterrence: A framework for security policy compliance in organisations. European Journal of Information Systems, 18(2), 106-125.
Huang, C.-Y., Ma, S.-P., & Chen, K.-T. (2011). Using one-time passwords to prevent password phishing attacks. Journal of Network & Computer Applications, 34(4), 1292-1301.
Hwang, M.-S., Chong, S.-K., & Chen, T.-Y. (2010). DoS-resistant ID-based password authentication scheme using smart cards. Journal of Systems & Software, 83(1), 163-172.
Ives, B., Walsh, K. R., & Schneider, H. (2004). The domino effect of password reuse. Communications of the ACM, 47(4), 75-78.
Jenkins, J. L., Durcikova, A., Ross, G., & Nunamaker Jr., J. F. (2010, December 12-15). Encouraging users to behave securely: Examining the influence of technical, managerial, and educational controls on users' secure behavior. Paper presented at the International Conference on Information Systems, Saint Louis, Missouri.
John, B. E. (1996). TYPIST: A theory of performance in skilled typing. Human-Computer Interaction, 11(4), 321-355.
Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS Quarterly, 34(3), 549-565.
Joyce, R., & Gupta, G. (1990). Identity authentication based on keystroke latencies. Communications of the ACM, 33(2), 168-176.
Keith, M., Shao, B., & Steinbart, P. (2009). A behavioral analysis of passphrase design and effectiveness. Journal of the Association for Information Systems, 10(2), 63-89.
Larochelle, S. (1983). A comparison of skilled and novice performance in discontinuous typing. In W. E. Cooper (Ed.), Cognitive Aspects of Skilled Typewriting (pp. 67–94). New York, NY, USA: Springer-Verlag.
Leggett, J., & Williams, G. (1988). Verifying identity via keystroke characteristics. International Journal of Man-Machine Studies, 28(1), 67-76.
Logan, G. D. (2003). Simon-type effects: Chronometric evidence for keypress schemata in typewriting. Journal of Experimental Psychology: Human Perception and Performance, 29(4), 741-757.
Logan, G. D., & Crump, M. J. C. (2009). The left hand doesn't know what the right hand is doing: The disruptive effects of attention to the hands in skilled typewriting. Psychological Science, 10(1), 1296–1300.
Lowry, P. B., Romano, N. C., Jenkins, J. L., & Guthrie, R. W. (2009). The CMC interactivity model: How interactivity enhances communication quality and process satisfaction in lean-media groups. Journal of Management Information Systems, 26(1), 155-195.
Maurer, T. (2011). Cyber norm emergence at the United Nations—an analysis of the UN's activities regarding cyber-security. Belfer Center for Science and International Affairs, Harvard Kennedy School.
McKinney, V., Yoon, K., & Zahedi, F. (2002). The measurement of web-customer satisfaction: An expectation and disconfirmation approach. Information Systems Research, 13(3), 296-315.
Miller, G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
Miller, G. A., Galanter, E., & Pribram, K. H. (1986). Plans and the Structure of Behavior. New York, NY, USA: Adams-Bannister-Cox.
Milne, G. R., Labrecque, L. I., & Cromer, C. (2009). Toward an understanding of the online consumer's risky behavior and protection practices. Journal of Consumer Affairs, 43(3), 449-473.
Notoatmodjo, G., & Thomborson, C. (2009). Passwords and perceptions. Paper presented at the Proceedings of the Seventh Australasian Conference on Information Security.
Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York, NY, USA: McGraw-Hill.
Oghenerukevbe, E. A. (2010). Mnemonic passwords practices in corporate sites in Nigerian. Journal of Internet Banking & Commerce, 15(1), 1-11.
Oliver, R. L. (1977). Effect of expectation and disconfirmation on postexposure product evaluations: Alternative interpretation. Journal of Applied Psychology, 62(4), 480-486.
Journal of Applied Psychology, 62(4), 480-486. Park, S., Park, J., & Cho, S. (2010, June 29-July 1). User authentication based on keystroke analysis of long free texts with a reduced number of features. Paper presented at the 2010 Second International Conference on Communication Systems, Networks and Applications (ICCSNA). Pick, J. B., & Azari, R. (2008). Global digital divide: Influence of socioeconomic, governmental, and accessibility factors on information technology. Information Technology for Development, 14(2), 91-115. Posey, C., Roberts, T. L., Lowry, P. B., Courtney, J., & Bennett, R. J. (2011, September 22–23). Motivating the insider to protect organizational information assets: Evidence from protection motivation theory and rival explanations. Paper presented at the The Dewald Rhoode Workshop in Information Systems Security 2011, Blacksburg, Virginia, USA. Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. Journal of Psychology, 91(1), 93-114. Rogers, R. W. (1983). Cognitive and physiological processes in fear appeals and attitude change: A Revised theory of protection motivation. In J. Cacioppo & R. Petty (Eds.), Social Psychophysiology. New York, NY, USA: Guilford Press. Roztocki, N., & Weistroffer, H. R. (2011). Information technology success factors and models in developing and emerging economies. Information Technology for Development, 17(3), 163-167. Shaffer, L. H. (1976). Intention and performance. Psychological Review, 83(5), 375393. Shaffer, L. H., & Hardwick, J. (1968). Typing performance as a function of text. Quarterly Journal of Experimental Psychology, 20(4), 360-369. Shouhong, W., & Hai, W. (2008). Password authentication using Hopfield neural networks. IEEE Transactions on Systems, Man & Cybernetics: Part C Applications & Reviews, 38(2), 265-268. Tagert, A. C. (2010). Cybersecurity Challenges in Developing Nations. Carnegi Mellon University, Pittsburgh, PA. Tam, L., Glassman, M., & Vandenwauver, M. (2010). The psychology of password management: A tradeoff between security and convenience. Behaviour & Information Technology, 29(3), 233-244. Tappert, C., Villani, M., & Cha, S.-H. (2009). Keystroke biometric identification and authentication on long-text input. In L. Wang & X. Geng (Eds.), Behavioral Biometrics for Human Identification: Intelligent Applications. Teh, P. S., Teoh, A. B. J., Tee, C., & Ong, T. S. (2010). Keystroke dynamics in password authentication enhancement. Expert Systems with Applications, 37(12), 8618-8627. Vance, A., Siponen, M., & Pahnila, S. (2012). Motivating IS security compliance: Insights from habit and protection motivation theory. Information & Management, 49(3–4), 190-198. Vizer, L. M., Zhou, L. N., & Sears, A. (2009). Automated stress detection using keystroke and linguistic features: An exploratory study. International Journal of Human-Computer Studies, 67(10), 870-886. Weingarten, R., Nottbusch, G., & Will, U. (2004). Morphemes, syllables, and graphemes in written word production. In T. Pechmann & C. Habel (Eds.), Language Production (pp. 529–572). Berlin, Germany: Mouton de Gruyter. Will, U., Nottbusch, G., & Weingarten, R. (2006). Linguistic units in word typing: Effects of word presentation modes and typing delay. Written Language & Literacy, 9(1), 153-176. Workman, M., Bommer, W. H., & Straub, D. (2008). Security lapses and the omission of information security measures: A threat control model and empirical test. Computers in Human Behavior, 24(6), 2799-2816. Wu, C., & Liu, Y. (2008). 
Cohesive Cybersecurity Policy Needed for Electric Grid
National Defense, Commentary (August 2011)

Securing the electric grid is one of the key components of preventing terrorist attacks in the United States and increasing the country’s resilience and recovery from such events. A secure electric grid is one that is protected from errors, contingencies, or assaults on computer systems and networks.

There is no shortage of government policies for protecting critical infrastructure sectors from network vulnerabilities. What is missing is a focused, comprehensive cybersecurity policy for the electricity sector. Smart-grid technology, which may rely on computer networks to intelligently manage electricity, makes this all the more important. But electric grid security is a topic that transcends smart-grid applications and reliability standards to issues of national security and international diplomacy.

President Obama’s June 2011 “Policy Framework for the 21st Century Grid,” issued by the National Science and Technology Council, noted that ensuring that the electric grid can recover from cyber-attacks is “vital to national security and economic well-being.” A comprehensive cybersecurity policy for the industry is essential if this sector is to work with the government to create and deploy the technologies necessary to increase grid security and resilience.

Current protection of the critical electric infrastructure sector is fragmented. The quasi-government North American Electric Reliability Corp. (NERC) coordinates information sharing and creates mandatory cybersecurity reliability standards. These are valuable, but they cannot replace a cohesive policy. A cybersecurity strategy must include at least six components: improving information sharing; clarifying the role of industry players in responding to different types of cyber-incidents; ensuring awareness of domestic and international law implications beyond the reliability standards; implementing long-term planning; evaluating other countries’ cybersecurity systems; and providing government funding.

In the United States, private companies own and operate most critical infrastructure assets, such as power lines and substations. While some may perceive defense against cyber-attacks as purely a government function, the private ownership of these assets makes a public-private partnership necessary. Two elements of the government/electric-industry partnership are the Information Sharing and Analysis Center (ISAC) and the cybersecurity reliability standards. To improve the partnership, NERC should use ISAC’s information-sharing function and should assist the industry in determining the scope of cybersecurity protection to be applied by private industry.

ISAC issues advisories and reliability or security threat alerts. NERC has been the coordinator of the electricity sector since 1998. Private companies often do not have the resources or expertise to conduct extensive evaluations. NERC addresses this need by monitoring private industry information and analyzing it for suspicious activity patterns and potential threats. In turn, the government can benefit from industry expertise and the private sector’s ability to implement certain technologies more rapidly.
The long-established use of the ISAC as a security information clearinghouse makes it an ideal platform for cooperation.

The industry’s public-private partnership involves mandatory reliability standards created by NERC, noncompliance with which can result in fines of up to $1 million per day. But simply complying with standards is inadequate to create an electric system resistant to, and capable of rapid recovery from, terrorist attacks. While the standards address perimeter access, anti-virus protection, security-event monitoring, and remote-access controls, they do not address the range of appropriate responses across the continuum of cybersecurity events. Security problems range from minor employee mistakes and internal program malfunctions, to Internet viruses and worms, and, in the worst-case scenario, to organized attacks by a sovereign state or a terrorist group to take down the entire grid.

Government guidance can help industry better evaluate and plan security measures. Many companies may not have the financial resources, or may not be able to justify the extra expense, to defend against low-probability but high-impact events such as an organized cyber-attack. While industry cannot implement a security system on par with the U.S. military’s, it can explore security upgrades that complement the existing system. The existing public-private partnership encourages the electric industry and the government to cooperate in creating guidance on the appropriate responses to different cyber-events.

Other concerns involve the legal implications outside of the NERC reliability standards. Depending on whether the electric industry employs passive or active defenses, its actions may trigger different laws, including domestic statutes and even the international law of armed conflict. By being sensitive to these nuances, the electric industry protects itself from liability and unanticipated consequences, and it improves its effectiveness in advancing the national interest of preventing and recovering from terrorist attacks.

Passive defense measures include strengthening the system through encryption and firewalls, facilitating recovery in the event of a successful attack, and educating users to behave properly during a threat. In contrast, active defense involves neutralizing a perpetrator’s ability to attack, such as sending back destructive viruses.

On the domestic front, certain responses to cyber-events may be illegal. The Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act prohibit victims from initiating investigations of their own. If a utility uses an active defense, it should be aware that the CFAA forbids private companies from intentionally causing damage in excess of $5,000 without authorization. Limited relief, however, is available under some circumstances for actions taken in defense of property. Unfortunately, no government-based institutional structure exists to provide the private sector with immediate relief when it is under a cyber-attack. Reporting to law enforcement authorities will only initiate investigations and allow for arrests later on; it does not grant permission to immediately launch an active defense to counter or neutralize a network penetration.

On the international front, cybersecurity self-defense could be illegal if it rises to the level of “use of force” or “armed attack” pursuant to the United Nations Charter and customary international law.
The fact that a private company may be more likely to use active defense than a sovereign state means its action can be mistakenly interpreted as hostile activity by the U.S. government. Domestic and international law implications add complexity. Utilities can create cybersecurity programs that manage the full variety of events if they consider the potential liabilities and consequences under domestic and international law. Such an understanding can do much to prevent negative diplomatic side effects. Furthermore, effective industry cybersecurity programs will advance the national interest of preventing and recovering from terrorist attacks. In the public-private partnership of cybersecurity protection, utilities can benefit greatly from government legal expertise.

The North American Electric Reliability Corp. has been actively addressing cybersecurity challenges. In 2009, it informed the electric industry that it must improve identification of critical assets after discovering that fewer than 63 percent of transmission owners had identified at least one critical asset. This basic critical-asset identification problem must be resolved before critical cyber-assets can be identified, because if no critical assets are identified, the reliability standards are useless. NERC has also created a variety of pilot programs that assess power companies’ ability to resist cyber-attacks and that simulate war games.

In addition, a comprehensive policy should include long-term planning, evaluation of other sovereign states’ cybersecurity protection measures, and federal funding assistance. A strategic plan may include a framework in which the industry analyzes certain characteristics to determine when federal government or military involvement is required. It can also include technical goals. Many computers in electric grid network systems are not connected to the Internet for security reasons. With the implementation of the smart grid, new connections are being made, which requires new Internet security strategies.

The next task for the government is to study computer networks and Internet systems abroad to determine which tactics may work for the electric grid or for national cybersecurity. For instance, the Chinese government uses the Great Firewall to scan for subversive material, but it can also be used to disconnect Chinese networks from the Internet. Similarly, the Chinese power grid can be disconnected from the net. It is worthwhile to evaluate how these tactics might work in the United States.

Finally, the policy should contain a funding mechanism to close the gap between basic security measures that ensure daily functions and measures for defending against cyber-attacks and warfare in the most extreme circumstances.

Zhen Zhang is an attorney specializing in energy and environmental law. She is a global energy fellow at the Institute for Energy and the Environment at Vermont Law School.
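The commentary’s point that smart-grid deployments connect formerly isolated control computers to the Internet can be made concrete with a small audit check. The sketch below is a minimal, hypothetical illustration in Python and is not part of the commentary or the cited articles: the host names and port list are assumptions chosen for illustration, and a real utility would audit its own asset inventory. It is a passive check in the sense used above, reporting only which inventoried hosts accept connections on common remote-access ports and taking no action against any remote system.

```python
"""Illustrative sketch only: passively flag inventoried control-network hosts
that accept TCP connections on common remote-access ports. Host names and
ports below are hypothetical assumptions, not from the source document."""
import socket

# Hypothetical inventory of grid hosts that should not be reachable
# over remote-access protocols from the corporate or public network.
CONTROL_HOSTS = ["substation-gw-01.example.net", "scada-hist-02.example.net"]
REMOTE_ACCESS_PORTS = [22, 23, 3389]  # SSH, Telnet, RDP


def exposed_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` on which `host` accepts a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # create_connection attempts a full TCP handshake, then closes.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, unreachable, or name resolution failed
    return open_ports


if __name__ == "__main__":
    for host in CONTROL_HOSTS:
        found = exposed_ports(host, REMOTE_ACCESS_PORTS)
        if found:
            print(f"WARNING: {host} accepts connections on ports {found}")
        else:
            print(f"OK: {host} exposes none of the audited ports")
```

Run against a test inventory, the script simply prints a warning for any host that answers on an audited port; deciding how to remediate such exposure remains a policy and engineering question of the kind the commentary argues a cohesive strategy should address.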

Explanation & Answer



Cybersecurity Employee Training
Student’s Name
Institutional Affiliation
Course
Date


Cybersecurity Employee Training
Question 1
The world has undergone a digital revolution, and as a result organizations face a growing number of cyberattacks. This creates a need for employee training and awareness programs designed to improve cybersecurity across the organization. Several commercial cybersecurity training programs are available to companies, such as PhishLabs, Securementum, Webroot, Lucy, and Barracuda Networks’ phishing-simulation tools. When cybersecurity training is done correctly, it can help organizations improve employee response to cyberattacks (Jenkins et al., 2014). Effective training programs go beyond making employees aware of known cybersecurity threats; they also help employees detect new ones. The cybersecurity training programs support reporting,...
