In a minimum of 500 words, discuss the following questions:

Anonymous
Asked: Dec 12th, 2018
Budget: $40

Question Description

  • What are Greitzer and Hohimer's (2011) arguments about the difficulty of picking up the trail before the fact, in order to provide time to intervene and prevent an insider cyber attack?
  • Do you agree with them? Why or why not?

Unformatted Attachment Preview

Journal of Strategic Security, Volume 4, Number 2 (Summer 2011: Strategic Security in the Cyber Age), Article 3, pp. 25-48
DOI: http://dx.doi.org/10.5038/1944-0472.4.2.2
Available at: http://scholarcommons.usf.edu/jss/vol4/iss2/3

Modeling Human Behavior to Anticipate Insider Attacks

Frank L. Greitzer, Ph.D., Pacific Northwest National Laboratory (frank.greitzer@pnnl.gov)
Ryan E. Hohimer, Pacific Northwest National Laboratory (ryan.hohimer@pnl.gov)
Richland, WA, USA

Recommended Citation: Greitzer, Frank L., and Ryan E. Hohimer. "Modeling Human Behavior to Anticipate Insider Attacks." Journal of Strategic Security 4, no. 2 (2011): 25-48.

Author Biographies

Dr. Frank L. Greitzer is a Chief Scientist at the Pacific Northwest National Laboratory (PNNL), where he conducts R&D in human decision making for diverse problem domains. At PNNL, Dr. Greitzer leads the cognitive informatics R&D focus area, which addresses human factors and social/behavioral science challenges through modeling and advanced engineering/computing approaches. This research focuses on the intelligence domain, including human behavior modeling applied to identifying and predicting malicious insider cyber activities, modeling socio-cultural factors as predictors of terrorist activities, and human information interaction concepts for enhancing intelligence analysis decision making. Dr. Greitzer's research interests also include evaluation methods and metrics for assessing the effectiveness of decision aids, analysis methods, and displays.

Ryan Hohimer is a Senior Research Scientist at PNNL. His research interests include knowledge representation and reasoning, probabilistic reasoning, semantic computing, cognitive modeling, image analysis, data management, and data acquisition and analysis. He is currently serving as Principal Investigator of a Laboratory-Directed Research and Development project that has designed and developed the CHAMPION reasoner.

Abstract

The insider threat ranks among the most pressing cyber-security challenges that threaten government and industry information infrastructures. To date, no systematic methods have been developed that provide a complete and effective approach to prevent data leakage, espionage, and sabotage. Current practice is forensic in nature, relegating to the analyst the bulk of the responsibility to monitor, analyze, and correlate an overwhelming amount of data. We describe a predictive modeling framework that integrates a diverse set of data sources from the cyber domain, as well as inferred psychological/motivational factors that may underlie malicious insider exploits. This comprehensive threat assessment approach provides automated support for the detection of high-risk behavioral "triggers" to help focus the analyst's attention and inform the analysis. Designed to be domain-independent, the system may be applied to many different threat and warning analysis/sense-making problems.

Introduction

Imagine this (very general) scenario: John has been a productive employee for several years, but is extremely disappointed when he feels that other coworkers have taken credit for some of his accomplishments and he is passed over for a coveted promotion. Filled with bitterness and frustration after being accused of inappropriate conduct at work, he negotiates with an outside entity to exploit his position to the benefit of the competition, planning later to join the competitor's organization. This brief scenario is a high-level description of a typical insider threat case.
The insider threat refers to harmful acts that trusted insiders might carry out, such as acts that cause harm to the organization or unauthorized acts that benefit the individual. Information "leakage," espionage, and sabotage involving computers and computer networks are the most notable examples of insider threats, and these acts are among the most pressing cyber-security challenges that threaten government and private-sector information infrastructures. The insider threat is manifested when human behaviors depart from established policies, regardless of whether they result from malice, disregard, or ignorance.

In the scenario above, if we jump back to the time of the Revolutionary War, we can see close parallels to the case of Benedict Arnold, who in 1780 conspired with the British to work towards the surrender of West Point, following events between 1777 and 1779 involving his being passed over for promotion and being accused of financial schemes. Viewing the general scenario in more modern times, one can see parallels with the career of Aldrich Ames, a CIA operative from the late 1950s to the late 1980s. Ames initially received enthusiastic and positive reviews, but had continuing problems with alcoholism, security violations leading to reprimands, extramarital affairs that violated policy, and financial problems that reportedly led him to become a spy for the Soviet Union. In even more contemporary times, we may consider the case of accused WikiLeaks insider Bradley Manning, a despondent and disillusioned Army intelligence analyst who experienced a series of emotional upheavals, including the breakup of a personal relationship, and whose disgruntled and inappropriate workplace behavior led to his demotion to Private First Class before he allegedly leaked hundreds of thousands of U.S.
Department of Defense and Department of State diplomatic cables.[1]

Surveys and studies conducted over the last decade and a half have consistently shown the critical nature of the problem in both the government and private sectors. A 1997 Department of Defense (DoD) Inspector General report found that 87% of identified intruders into DoD information systems were either employees or others internal to the organization.[2] The annual e-Crime Watch Survey conducted by Chief Security Officer (CSO) Magazine in conjunction with other institutions reveals that for both the government and commercial sectors,[3] the most costly or damaging cybercrime attacks were caused by insiders, such as current or former employees and contractors. A recent report covering over 143 million data records collected by Verizon and the U.S. Secret Service analyzed a set of 141 confirmed breach cases in 2009 and found that 46% of data breaches were attributed to the work of insiders.[4] Of these, 90% were the result of deliberate, malicious acts; 6% were attributed to inappropriate actions, such as policy violations and other questionable behavior; and 4% to unintentional acts.

One might legitimately ask: Can we pick up the trail before the fact, providing time to intervene and prevent an insider attack? Why is this so hard?
There are several reasons why the development and deployment of approaches to addressing the insider threat, particularly proactive approaches, are so challenging: (a) the lack of sufficient real-world data with "ground truth" enabling adequate scientific verification and validation of proposed solutions; (b) the difficulty of distinguishing between malicious insider behavior and what can be described as normal or legitimate behavior (since we generally do not have a good understanding of normal versus anomalous behaviors and how these manifest themselves in the data); (c) the enormous scalability challenges produced by the potential quantity of data and the resultant number of "associations" or relationships that may emerge; and (d) the fact that, despite ample evidence suggesting that in a preponderance of cases the perpetrator exhibited observable "concerning behaviors" in advance of the exploit, there has been almost no attempt by researchers and developers of insider threat analysis technologies and tools to address such human factors.[5]

Both the similarities and differences in cases throughout history reveal challenges for efforts to combat and predict insider threats. While the human factor has remained somewhat constant, the methods and skills that apply to insider exploits have changed drastically in the last few decades. In the time of Benedict Arnold, and even up to the time of the exploits of Aldrich Ames and another notorious insider, Robert Hanssen, an insider had to possess requisite knowledge, direct access to the information to be leaked, physical access to the recipient of the information, and a physical copy of the information to be exfiltrated. By contrast, even twenty years ago it would have been necessary to use an 18-wheeler truck to transport the several hundred thousand documents involved in the WikiLeaks case.
Today, insider crime does not even require specific knowledge of the information to be leaked, and gigabytes or more of information can be exfiltrated using various means, including thumb drives, email, and other modern information technology tools. Attribution is hard, and the ability to predict or catch a perpetrator in the act is severely limited, especially if the only means of detection is driven by workstation and network monitoring. Indeed, we have suggested that the only way to be proactive is for the insider threat warning/analysis system to take non-IT "behavioral" or psychosocial data into account in order to capitalize on the signs and precursors of malicious activity that are often evident in "concerning behaviors" prior to the execution of the crime.[6]

In this regard, research suggests that in a significant number of cases, the malicious intent of the perpetrator was "observable" prior to the insider exploit. For example, a study by the Computer Emergency Response Team (CERT) Insider Threat Center,[7] a federally-funded research and development entity at Carnegie Mellon University's Software Engineering Institute, shows that 27% of insiders had come to the attention of either a supervisor or coworker for some concerning behavior prior to the incident. Examples of concerning behaviors include increasing complaints to supervisors regarding salary, increased cell phone use at the office, refusal to work with new supervisors, increased outbursts directed at coworkers, and isolation from coworkers.[8] As described in a recent article on rogue insiders, the extensive and ongoing investigation of insider threat by CERT has determined that most cases follow a distinct pattern.
According to CERT's technical manager Dawn Cappelli, "Usually the employees either have announced their resignation or have been formally reprimanded, demoted, or fired."[9] In such cases, the article continues, the Human Resources department is aware of these high-risk personnel. The malefactors typically fall into one of two groups: either those who are moving to a new job and want to take their work with them, or those who are part of a well-coordinated spy ring operating for the benefit of a foreign government or organization.

Goal of Insider Threat Research

In an operational context, security analysts must review and interpret a huge amount of data to draw conclusions about possible suspicious behaviors that indicate policy violations or other potentially malicious activities. They apply their domain knowledge to perceive and recognize patterns within the data. The domain knowledge that analysts possess facilitates the process of identifying the relevance of and connections among the data. In our examination of current practice by security, cyber-security, and counterintelligence analysts, we have observed that the analyst typically uses a number of tools that monitor different types of data to provide alerts or reports about suspicious activities.
This is done primarily in a forensic mode and within certain domains of data, such as output from Security Event and Incident Management (SEIM) systems, network and workstation/system log reports, web-monitoring tools, access-control monitoring tools, and data loss/data leak protection (DLP) tools.[10] While these tools provide varying levels of protection, they are primarily forensic in nature, and in general the analyst bears the critical and difficult responsibility for data fusion that integrates the analysis and "sense-making" across these disparate domains. To date, no systematic methods have been developed that provide a complete and effective solution to the insider threat.

Our goal is to create, adapt, and apply technology to the insider threat problem by incorporating into a reasoning system the capability to integrate different types of information that provide a useful picture of a person's motivation, as well as the capability and opportunity to carry out the crime. In this general context, our specific objective is to detect anomalous behaviors (insider threat indicators) before or shortly after the initiation of a malicious exploit.

Insider-threat assessment falls into the class of problems referred to as ill-structured, ill-defined, and wicked. Rittel and Webber defined "wicked problems" as those having goals that are incomplete, changing, and occasionally conflicting.[11] Klein suggests that most real-world problems are not well-specified and do not involve "explicit knowledge."[12] Wicked problems defy clarifying goals at the start; we need to reassess our original understanding of the goals, and the goals become clearer as we learn more. Methodologies are heuristic, involving discovery and learning through an iterative process.
Thus, the objectives, concepts of operations, requirements, and other dictates of the proposed research and development are subject to periodic change and maturation as insider threat algorithms and software development mature.

The neocortex was the inspirational metaphor for the design of our reasoning framework, called CHAMPION (Columnar Hierarchical Autoassociative Memory Processing In Ontological Networks). The neocortex metaphor serves as inspiration for a functional (not structural) design that adopts the functional requirements shown in Figure 1.

[Figure 1: Functional requirements inspired by neocortex metaphor.]

The processing unit of the neocortex is the cortical column. For the CHAMPION reasoning system, the central processing unit is the Autoassociative Memory Component (AMC), which mimics the functionality of the cortical column.

Multiple Domains of Data

The insider threat problem manifests itself within a socio-technical system, a combination of social, behavioral, and technical factors that interact in complex ways.[13] Knowledge within each of the factors is captured (or modeled) in a domain-specific ontology. This modeling approach organizes the notional concepts within a specific domain into a hierarchical mapping of those concepts. The ontologies are used by the computational reasoning system to interpret patterns within the data.
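As an illustration of this kind of hierarchical concept mapping, here is a minimal Python sketch; the concept names (ConcerningBehavior, DisgruntledBehavior, OutburstAtCoworker) are hypothetical examples echoing the concerning behaviors listed earlier, not concepts taken from the actual CHAMPION ontologies:

```python
# Minimal sketch of a domain-specific concept hierarchy (ontology).
# All concept names below are illustrative assumptions, not drawn
# from the CHAMPION ontologies themselves.

class Concept:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # "is-a" link to a more general concept

    def ancestors(self):
        """Walk up the hierarchy, returning the more general concepts."""
        node, chain = self, []
        while node.parent is not None:
            node = node.parent
            chain.append(node.name)
        return chain

# Psychosocial domain: a small hierarchical mapping of notional concepts.
behavior = Concept("ConcerningBehavior")
disgruntled = Concept("DisgruntledBehavior", parent=behavior)
outburst = Concept("OutburstAtCoworker", parent=disgruntled)

# An observed event is interpreted against the hierarchy: an outburst
# "is-a" disgruntled behavior, which "is-a" concerning behavior.
print(outburst.ancestors())  # ['DisgruntledBehavior', 'ConcerningBehavior']
```

The point of the hierarchy is that a reasoner can generalize: an event matched at a specific concept automatically counts as evidence for every more general concept above it.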
The rationale for our approach of integrating across different domains of data is based on a body of scientific research and case studies in the fields of insider threat, cyber security, and the social/behavioral sciences, from which it has been widely concluded that behavioral and psychosocial indicators of threat risk should be taken into account by insider threat analysis systems.[14] Indeed, Gudaitis and Schultz have separately argued that integrated solutions are required.[15, 16] As Schultz observed, there is a need for a "new framework" for insider threat detection based on multiple indicators that address not only workstation and network activity logs but also preparatory behavior and verbal behavior, among others. Thus, analysis of workstation and network data is a necessary, but not sufficient, condition for proactive insider threat analysis.

A recent review describes many technical approaches to intrusion detection (including insider threats) that may be categorized as threshold, anomaly, rule-based, and model-based methods.[17, 18] Threshold detection is essentially summary statistics (such as counting events and setting off an alarm when a threshold is exceeded). Anomaly detection is based on identifying events or behaviors that are statistical outliers; a major drawback of this approach is its inability to effectively combat the strategy of insiders who work below the statistical threshold of tolerance and, over time, train systems to recognize increasingly abnormal behavior patterns as normal.
Rule- or signature-based methods are limited to the bounds of the defined signature database; variations of known signatures are easily created to thwart such misuse detectors, and completely novel attacks will nearly always be missed. Model-based methods seek to recognize attack scenarios at a higher level of abstraction than the other approaches, which largely focus on audit records exclusively as data sources. A sample of data sources for host- and network-based monitoring data that are relevant to insider threa ...
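To make the threshold/anomaly distinction concrete, here is a minimal Python sketch over a stream of daily event counts; the baseline window, fixed limit, and z-score cutoff are all illustrative assumptions, not parameters from the review:

```python
from statistics import mean, stdev

def threshold_alert(event_count, limit=100):
    """Threshold detection: alarm when a simple count exceeds a fixed limit."""
    return event_count > limit

def anomaly_alert(history, event_count, z_cutoff=3.0):
    """Anomaly detection: alarm when a count is a statistical outlier
    relative to an observed baseline. Note the weakness described above:
    an insider who stays below the cutoff both goes unnoticed and, if the
    baseline is re-learned over time, gradually shifts it, training the
    detector to accept increasingly abnormal activity as normal."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return event_count != mu
    return abs(event_count - mu) / sigma > z_cutoff

baseline = [40, 45, 38, 50, 42, 47, 44]   # illustrative daily file-access counts
print(threshold_alert(120))               # True: fixed limit exceeded
print(anomaly_alert(baseline, 120))       # True: far outside the baseline
print(anomaly_alert(baseline, 55))        # False: "low and slow" goes unnoticed
```

The third call illustrates the drawback the review identifies: activity just outside normal bounds stays below the statistical threshold of tolerance, which is one reason the authors argue that purely statistical monitoring of cyber data is insufficient.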

Tutor Answer

Professor_Rey
School: University of Virginia

...


