SJHS Importance of Proper Information Systems Operations and What Is Considered Discussion

User Generated



St Jude High School



The IASB has achieved great success in extending the adoption of international financial reporting standards and harmonising financial reporting practice across the globe.

Using Nobes (2013) and Yip and Young (2012) and any other relevant academic materials, critically discuss to what extent IFRS contributes to the international harmonisation of financial reporting.


Nobes, C. (2013) The continued survival of international differences under IFRS. Accounting and Business Research, 43 (2), 83-111.

Yip, R.W.Y. and Young, D. (2012) Does Mandatory IFRS Adoption Improve Information Comparability? The Accounting Review, 87 (5),

(You can obtain these articles from the electronic journals in the library. After logging into MUSE you should click on StarPlus – Library Catalogue. Then click on Browse EJournals. After that, you should type in the journal name (e.g., for the 1st article, type in “Accounting and Business Research”). Then you should navigate through the electronic library for this journal to find the article. If you have any problems finding the journal you should contact the staff in the library for assistance.)

Unformatted Attachment Preview

Domain 4: Information Systems Operations, Maintenance, & Support – Part 2
MIS 415 001 – Information Systems Audit and Control
Saiid Ganjalizadeh

TOPICS
• IS Hardware, Software, and Architecture
• Computer Architecture
• IS Architecture
• Data Governance
• Database Management Systems
• GDPR
• Tokenization

Information Systems Operations, Maintenance, & Support
Part 1 – This Week
• Information Systems Operations
• Patch Management
• Information Lifecycle
• Cloud Audit
• Backup, Redundancy, & Administration Considerations
• Job Scheduling and Monitoring
Part 2 – Next Week
• IS Hardware, Software, and Architecture
• Computer Architecture
• IS Architecture
• Data Governance
• Database Management Systems
• GDPR
• Tokenization
• Virtual Machines
• APIs
• Microservices
Part 3 – Week After Next
• OSI Model and Networking
• Wireless Technologies
• Auditing Network

Computer Architecture
Processors
The processor is the brain of the computer; each processor type has a specific architecture and a set of instructions that it can carry out. The operating system, and the programs that run on the operating system, have to be written for the specific instruction set provided by the chip.
Components of the Central Processing Unit (CPU)
• Registers: temporary storage locations located in the chip
• Arithmetic Logic Unit (ALU): performs mathematical functions and logical operations on data
• Control Unit: fetches and interprets the code, and oversees the execution of the instructions

Computer Architecture Cont.
Buses
A bus is a subsystem that connects all the components of the system: it connects the CPU to the RAM and to the other I/O devices; basically, it connects everything together.
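As a rough illustration of how the control unit, registers, and ALU cooperate, here is a minimal sketch of a fetch-decode-execute loop for a made-up two-register machine; the instruction names and encoding are invented for this example and do not correspond to any real chip's instruction set.

```python
# Toy fetch-decode-execute loop: a made-up 2-register machine.
# The loop plays the role of the control unit (fetch, decode, advance);
# the arithmetic on the register file plays the role of the ALU.

def run(program):
    regs = {"R0": 0, "R1": 0}    # register file: temporary on-chip storage
    pc = 0                       # program counter
    while pc < len(program):
        op, *args = program[pc]  # fetch and decode the next instruction
        if op == "LOAD":         # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":        # ADD dst, src  ->  dst = dst + src (the "ALU")
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1                  # control unit moves to the next instruction
    return regs

# A tiny program: R0 = 2, R1 = 3, then R0 = R0 + R1
result = run([("LOAD", "R0", 2), ("LOAD", "R1", 3), ("ADD", "R0", "R1"), ("HALT",)])
print(result["R0"])  # 5
```

Real processors add pipelining, caches, and interrupts on top of this basic cycle, but the fetch-decode-execute skeleton is the same.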
Memory (one of the system’s most critical resources)
• Registers: on the chip, running at the same speed as the chip (very fast and expensive)
• Cache: runs either at the speed of the chip or close to it
• RAM: temporary storage for quick access
• ROM: a chip burnt at the factory with a particular set of data, to be read many times
• Virtual: more of a scheme used in modern computer systems

Virtual Memory
Virtual memory uses both hardware and software to enable a computer to compensate for physical memory shortages, temporarily transferring data from random access memory (RAM) to disk storage.

Types of Memory
The faster the RAM, and the closer it runs to the actual CPU speed, the more expensive it is. Disk storage is the slowest and cheapest form of storage.

Device Drivers
A device driver:
• Is the software that runs a device controller, such as a graphics card or network card.
• Is used by the operating system to communicate with a device controller.
• Provides an interface for the operating system, and for programs, to actually use that device.

Processes and Threads
A process is a set of instructions that is actually running/executing. When a process is created, the operating system assigns resources to it:
✓ A memory segment
✓ CPU time slots
✓ Access to system APIs, and directory and file access
A thread is a set of instructions that must all be executed together. In modern operating systems and environments, the trend is:
✓ To be more multithreaded
✓ To take advantage of parallelization (chop a task up into little mini subtasks and get them all executed in parallel)

Interrupts
• An interrupt marks the time slice that each process is given.
• A process can only interact with the CPU when its interrupt is “called”.
• More Useful/Important Terms (for Exam):
✓ Multithreading: the idea of chopping a task into little subtasks that can all be done in parallel (processing several pieces of code [threads] at one time).
✓ Multitasking: the idea that you can have more than one program (process) running at the same time.
✓ Multiprocessing: the idea that a system has more than one CPU, so multiple processes can truly run at once.
✓ Virtual Machine: a simulated environment from which to run something. Virtual machines are based on computer architectures and provide the functionality of a physical (real) machine.

Information System
A discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination or disposition of information.
Source: National Institute of Standards and Technology (NIST)

Parallel Computing
The idea of taking a task and breaking it down into subtasks that can be processed independently, meaning on different systems.
A parallel and distributed computing model provides improved:
• Expandability (the ability to rapidly add more nodes to the network and get more processing capability)
• Manageability (it's easier to manage)
• Efficiency (more efficient use of each of the nodes in the big pool)
• Reliability (if we lose one or two nodes, we still have several other nodes to take up the work)

Cloud Computing
A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of resources such as networks, servers, storage, applications, and other services.
NIST SP 800-145 describes 5 essential characteristics of cloud computing:
• On-demand self-service
• Broad network access
• Resource pooling
• Rapid elasticity
• Measured service (pay for a certain amount of CPU or RAM, etc.)
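The parallel-computing idea described earlier (splitting one job into independent subtasks and combining the partial results) can be sketched with Python's standard concurrent.futures module. The chunk size and the sum-of-squares workload are arbitrary choices for illustration:

```python
# Split a big summation into independent chunks, process them in
# parallel, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent subtask.
    return sum(x * x for x in chunk)

numbers = list(range(10_000))
chunks = [numbers[i:i + 2_500] for i in range(0, len(numbers), 2_500)]

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(partial_sum, chunks))

# Same answer as the serial computation.
assert parallel_total == sum(x * x for x in numbers)
```

For CPU-bound work in Python, ProcessPoolExecutor (separate processes, i.e., true multiprocessing) is usually the better fit than threads; the structure of the code is the same either way.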
Cloud Service Models

Cloud Deployment Models
• Private: a cloud for one single organization
• Community: a cloud for several organizations that have some kind of common goal or common mission
• Public: a cloud owned and operated by a third-party provider and shared among many customers, such as Microsoft Azure, Amazon, Google, or Oracle's cloud
• Hybrid: a private cloud that you own yourself, which shares resources with or connects up to some public cloud

Cyber-Physical Systems
NIST’s definition of a CPS: smart systems that include engineered interacting networks of physical and computational components.
Examples: Internet of Things (IoT) devices, robotic arms, thermostats, etc. (all are controlled by a computer/software)

Supervisory Control and Data Acquisition (SCADA)
• An old, engineering-based protocol; it is what is called an industrial control system.
• It's a way to control things in industry, such as a robot arm in a welding facility that makes cars, for example.
• It is implemented in just about every sector:
✓ Energy (electric, oil, and gas)
✓ Food and beverage
✓ Manufacturing
✓ Transportation
✓ Water and sewer

Database Management Systems (DBMSs)
Database
• A collection of interrelated data
• Stored and organized on a computer
Database Management System
• Computer programs to store, modify, and extract information from a database
• Provides tools for data input, verification, storage, retrieval, query, and manipulation
Database Model
• Describes relationships between data elements
• Used to represent the conceptual organization of data
• A formal method of presenting information

Timeline of Database Models

More on Database Models
• Traditional Files (Flat-File DB)
• Hierarchical DB
✓ Stores information in a tree-like fashion
✓ Information is traced from a major group to subgroups
✓ Predetermines the access paths to data stored in the DB
✓ Each record has one parent record and may have many child records
• Network DB
✓ A hierarchical model in which each record can have multiple parent and child records
• Object-Oriented DB
✓ Stores objects and entities containing data and actions (functions, procedures)
✓ The operations carried out on information items are considered part of their definition
✓ Allows modeling and creation of data as objects
• Object-Relational DB
✓ A relational database and an object-oriented database combined, so that you can access it either through a fourth-generation SQL client or an API

Relational Database
• Data is stored in a two-dimensional table (with rows and columns)
• A column is called a “field” (or attribute or key)
• A row is called a “record” (or tuple)
• Each cell contains one data value (atomic)
• A row is identified uniquely by an identifier (primary key)
• The order of columns is irrelevant
• The order of rows is irrelevant

Relational Database Cont.
Foreign Key
• A column in a table that contains the same values as a primary key of another table
• Has an established relationship with that primary key
• FK/PK relationships define a relational join (link)

Database Integrity
Entity Integrity
• No PK attribute can have a “null” value
• The PK value must be unique
✓ Primary key = unique identifier for a set of values
Referential Integrity
• When there is a relationship between 2 entities, those entities must exist
• No record can contain a reference to a key of a nonexistent record
✓ A database has referential integrity if all FKs reference existing PKs

Tokenization for Data Protection
Tokenization is the process of turning a meaningful piece of data, such as an account number, into a random string of characters called a token that has no meaningful value if breached. Tokens serve as a reference to the original data, but cannot be used to guess those values. That’s because, unlike encryption, tokenization does not use a mathematical process to transform the sensitive information into the token.
Source: McAfee

There is no key, or algorithm, that can be used to derive the original data from a token. Instead, tokenization uses a database, called a token vault, which stores the relationship between the sensitive value and the token. The real data in the vault is then secured, often via encryption.
Source: McAfee

Tokenization – How It Works

PCI DSS Requirements
1. The tokenization system does not provide PAN in any response to any application, system, network, or user outside of the merchant’s defined CDE (Cardholder Data Environment).
2. All tokenization components are located on secure internal networks that are isolated from any untrusted and out-of-scope networks.
3. Only trusted communications are permitted in and out of the tokenization system environment.
4.
The tokenization solution enforces strong cryptography and security protocols to safeguard cardholder data when stored and during transmission over open, public networks.

Data Governance vs Data Management
Data governance is a strategic business program that determines and prioritizes the financial benefit data brings to the organization, as well as mitigating the business risk of poor data practices and quality.
Data management is an IT program and set of technologies that enables and executes business-defined, prioritized policies, standards and rules to ensure data supports the information requirements of customers, employees, partners and shareholders.
Source: Goetz, Michele, Information Management, Oct 2015

Data Management – Audit Objectives
1. To determine if personal information is clearly defined and appropriate tools are utilized to de-identify personal information.
2. To determine if use, processing, storage and/or retention of personal information occur only for legitimate business purposes or as authorized.
3. To determine if proper processes are in place to manage the electronic and physical records containing personally identifiable information.
Source: ISACA Data Privacy Audit Workprogram

Sensitive Data Types
1. Personally Identifiable Information (PII): sensitive information that is associated with an individual person (e.g., SSN, date of birth, etc.)
2. Protected Health Information (PHI): individually identifiable health information (e.g., medical records, SSN, email address). The HIPAA Privacy Rule provides federal protections for personal health information held by covered entities and gives patients an array of rights with respect to that information. At the same time, the Privacy Rule is balanced so that it permits the disclosure of personal health information needed for patient care and other important purposes.

Sensitive Data Types – Examples
1. Personally Identifiable Information (PII): SSN, date of birth, driver’s license number, bank account number, passport number, address
2. Protected Health Information (PHI): injury code, medical test results, medical claims information, date of birth, SSN

Data Privacy – Audit Objectives
Data privacy: the appropriate handling of data based on the sensitivity of the information it contains; protecting user identity and controlling how data is used.
1. To evaluate data governance for privacy, confidentiality and compliance (DGPC) and determine whether effective data management exists
2. To assess controls surrounding data in various phases of movement: during collection, in transit or at rest
3. To review controls around data access
4. To review third-party management of data
5. To evaluate incident-management policies and practices
Source: ISACA Data Privacy Audit Workprogram

Data Security – Audit Objectives
Protecting data:
1. To determine if access to PII is defined and enforced throughout the enterprise.
2. To determine if suitable measures exist to prevent personal data from being read, copied, altered or deleted by unauthorized parties during transmission or during the transport of the data media.
3. To determine if established processes and procedures exist for managing the security of data at rest.
• Appropriate encryption standards for data at rest and in transit
• Physical security of data at rest is enforced and applied across the enterprise

Data Classification (Example)
Source: Data Stewardship | University Policy | George Mason University

Data Security (Example)
Transparent Data Encryption (TDE) column encryption protects confidential data, such as credit card and Social Security numbers, that is stored in table columns. TDE column encryption uses a two-tiered key-based architecture to transparently encrypt and decrypt sensitive table columns. The TDE master encryption key is stored in an external security module, which can be an Oracle software keystore or hardware keystore. This TDE master encryption key encrypts and decrypts the TDE table key, which in turn encrypts and decrypts data in the table column.

Data Classification (Example)
Classification & distribution policy alerts example

General Data Protection Regulation (GDPR)
1. GDPR is the toughest privacy and security law in the world.
2. If you process the personal data of EU citizens or residents, or you offer goods or services to such people, then the GDPR applies to you even if you’re not in the EU.
3. The regulation was put into effect on May 25, 2018. The GDPR levies harsh fines against those who violate its privacy and security standards, with penalties reaching into the tens of millions of euros.
4. Penalty: there are two tiers of penalties, which max out at €20 million or 4% of global revenue (whichever is higher), plus data subjects have the right to seek compensation for damages.

• Data Protection: the organization must, “by design and by default,” consider data protection and the data protection principles in the design of any new product or activity.
• Accountability: data controllers have to be able to demonstrate they are GDPR compliant. If you think you are compliant with the GDPR but can’t show how, then you’re not GDPR compliant.
• Data Security: organizations are required to handle data securely by implementing “appropriate technical and organizational measures.”
• Consent: must be “freely given, specific, informed and unambiguous.” Data subjects can withdraw previously given consent whenever they want, and you have to honor their decision.
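The token-vault idea described earlier can be sketched in a few lines: the token is random (not mathematically derived from the data), and the only route back to the original value is a lookup in the vault. This is a minimal illustration only; a real tokenization system would encrypt the vault itself, enforce access controls, and often produce format-preserving tokens. The `tok_` prefix and the sample PAN are made up.

```python
# Tokenization sketch: random tokens, with a "vault" mapping token -> value.
import secrets

vault = {}  # in a real system this store is itself encrypted and locked down

def tokenize(sensitive_value):
    # The token is random: there is no key or algorithm to reverse it.
    token = "tok_" + secrets.token_hex(8)
    vault[token] = sensitive_value
    return token

def detokenize(token):
    # Only a lookup in the vault can map a token back to the real value.
    return vault[token]

pan = "4111111111111111"
token = tokenize(pan)
assert token != pan            # the token reveals nothing about the PAN
assert detokenize(token) == pan
```

This is also why PCI DSS scoping focuses on the vault: compromise of tokens alone yields nothing, so protecting the vault (and the systems allowed to call detokenize) is the whole game.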
GDPR Fines

Domain 4: Information Systems Operations, Maintenance, & Support – Part 1
MIS 415 – Information Systems Audit and Control
Saiid Ganjalizadeh

TOPICS
• Information Systems Operations
• Patch Management
• Information Lifecycle
• Cloud Audit
• Backup, Redundancy, and Administration Considerations
• Job Scheduling and Monitoring

Information Systems Operations, Maintenance, & Support
Part 1 – This Week
• Information Systems Operations
• Patch Management
• Information Lifecycle
• Cloud Audit
• Backup, Redundancy, & Administration Considerations
• Job Scheduling and Monitoring
Part 2 – Next Week
• Data Governance
• Virtual Machines
• APIs
• Microservices
Part 3 – Week After Next
• OSI Model and Networking
• Wireless Technologies
• Auditing Network

Information Systems Operations
Operational responsibility: the routine, day-to-day activities that keep the systems and networks up and running. Examples:
• Performing backups and recovery
• Applying patches and hotfixes
• Media management
• Configuration management
• Handling and escalating incidents

Asset Identification and Management
Knowing what the company owns:
• Hardware
• Firmware
• OS
• Language runtime environments (e.g., Java)
• Applications
• Individual libraries

Are All Patches Applied?

Patching Considerations

Patching Issues

Patch Tuesday
Patch Tuesday (also known as Update Tuesday) is an unofficial term for the day on which Microsoft, Adobe, Oracle and others regularly release software patches for their software products. It is widely referred to in this way by the industry. Microsoft formalized Patch Tuesday in October 2003.

Configuration Management
• CM is different from the older-school term “Change Control/Change Management”.
• It is the management of the logical description of the IT environment.
✓ Diagrams, documentation, configuration of hardware, software, settings, source code, etc.
✓ Sometimes it includes policies, procedures, standards, and other documentation
• Should ideally be maintained in a Configuration Management Database (CMDB)

Change Management
• Establish baselines of configuration management, documentation, system hardware, software, settings
• Formally control changes to the baselines
• Changes can only be made with prior approval
• Usually implemented with configuration control boards or groups

Release Management
• Change control of production software
• A process to ensure only authorized software versions are released into production

Enterprise Monitoring
• Network/security operations center
✓ Event logging
✓ Traffic monitoring
✓ Security monitoring

Problem Management
• Lowering the impact of problems on services
• Reducing the number of failures to an acceptable baseline level
• Preventing the same problem from occurring again
• Types of problems:
✓ Software
✓ Hardware
✓ Availability
✓ Network
✓ Environmental
✓ Security and safety

Root Cause Analysis (RCA)
• A process that runs alongside problem management.
• It's about getting to the underlying cause, the root cause, of the problems we manage.
• It's the difference between just putting a Band-Aid on the problem and actually fixing the problem.
• Don't fix the symptom; fix the actual cause of the problem, and prevent it from happening again.
• When facing a problem, track back to the root cause and figure out how to prevent it from happening again.

Incident Handling
• An event is just something you can monitor or track happening. If one or more of those events turns out to be something bad, that's an incident.
• We should have a well-documented policy and procedure for handling incidents as they pop up.
• You want to make sure that you get back up and running, and back to normal operations, in the least affected way.
• Document the steps to follow once an incident is discovered.
• Design an escalation process.

Help Desk/Support
• Resolves end-user and system technical or operational problems
• Usually implemented in tiers (tier 1 ➔ tier 2 ➔ tier 3)

IT Service Management
• Manages the IT operations that serve all the other departments in the organization, which are like customers to the IT shop
• The IT shop provides efficient and effective services to those departments
• Provides:
✓ IT service delivery (e.g., email/HR/network systems)
✓ IT service support (e.g., help desk)

IT Service Management Frameworks
• Information Technology Infrastructure Library (ITIL), a British standard. It is broken down into five volumes and delivers service in a best-practice fashion.
• ISO 20000 (from the International Organization for Standardization), ratified in 2011, a set of best practices for doing IT service management.
✓ It follows a PDCA (plan-do-check-act) methodology.
✓ Sometimes that's referred to as a Deming or Shewhart cycle.
✓ It's a circular approach.
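The tiered help-desk model above (tier 1 ➔ tier 2 ➔ tier 3) amounts to a simple escalation rule: if the current tier cannot resolve the ticket, pass it up. A hypothetical sketch; the tier assignments and issue names below are invented for illustration, not taken from any real help-desk product:

```python
# Tiered help-desk escalation sketch: route each ticket to the lowest
# tier that can resolve it, escalating upward when a tier cannot.

TIERS = {
    1: {"password reset", "account unlock"},     # tier 1: routine requests
    2: {"software install", "printer failure"},  # tier 2: technical issues
    3: {"database outage", "network outage"},    # tier 3: specialists
}

def route_ticket(issue):
    """Return the lowest tier able to resolve the issue, or None if unknown."""
    for tier in sorted(TIERS):                   # try tier 1 first, then escalate
        if issue in TIERS[tier]:
            return tier
    return None                                  # falls outside the defined tiers

assert route_ticket("password reset") == 1
assert route_ticket("database outage") == 3      # escalated past tiers 1 and 2
```

The same structure doubles as an incident escalation path: tickets that no tier can resolve (the `None` case) are exactly the ones the documented escalation process should catch.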
Information Lifecycle Management
A typical diagram of ILM

Acquisition
➢ Information is acquired in two ways:
• Received from external sources
• Created internally

Data Classification and Marking
➢ The goal is to provide confidentiality, integrity, and availability
o Classification types: Top Secret, Secret, Internal Use Only
o Confidentiality: disclosure or unauthorized access
o Integrity: unauthorized modification
o Availability: hard/digital copy

Use and Archival
➢ The phases with the most challenges in terms of ensuring confidentiality, integrity, and availability:
• Controls should be in place
• Information should be available to only the right people
• A backup is a copy of a data set currently in use, for the purpose of recovering from the loss of the original data
• An archive is a copy of a data set that is no longer in use, but is kept in case it is needed at some future point

Destruction
➢ The goal is to delete/get rid of information in a secure way; for sensitive data:
o Zeroization (erasing sensitive data permanently)
o Multiple overwrite (e.g., overwriting a disk several times)
o Degaussing (eliminating magnetic fields on old tapes or hard drives)
o Physical destruction

Auditing the Information Lifecycle
As an auditor, you want to know if the organization has the following:
• A data classification policy
✓ Are they following it?
• Controls in place to protect:
✓ Confidentiality
✓ Integrity
✓ Availability
• Secure archival tools
• Secure destruction tools/processes (shredder, degausser, etc.)
• Are these tools in place, and have they been tested recently?

Patching and Security Updates
On September 7, 2017, Equifax announced a cybersecurity incident affecting 143 million consumers. This number eventually grew to 148 million, nearly half the U.S. population and 56 percent of American adults.

In 2005, former Equifax Chief Executive Officer (CEO) Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, information technology (IT) systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems and expanded data security risks. In August 2017, three weeks before Equifax publicly announced the breach, Smith boasted Equifax was managing “almost 1,200 times” the amount of data held in the Library of Congress every day. Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable.

On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax’s Global Threat and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a March 16 meeting about this vulnerability. Equifax, however, did not fully patch its systems. Equifax’s Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed.
Source: U.S. House of Representatives Committee on Oversight and Government Reform

On May 13, 2017, attackers began a cyberattack on Equifax. The attack lasted for 76 days. The attackers dropped “web shells” (a web-based backdoor) to obtain remote control over Equifax’s network. They found a file containing unencrypted credentials (usernames and passwords), enabling the attackers to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases. On July 31, Chief Information Officer (CIO) David Webb informed Richard Smith of the cyber incident. Equifax suspected the attackers exploited the Apache Struts vulnerability during the data breach.
Source: U.S. House of Representatives Committee on Oversight and Government Reform

Equifax should have addressed at least two points of failure to mitigate, or even prevent, this data breach. First, Equifax’s IT management structure lacked accountability and clear lines of authority, leading to an execution gap between IT policy development and operation. As an example, Equifax had allowed over 300 security certificates to expire, including 79 certificates for monitoring business-critical domains. Second, Equifax’s aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and the antiquated nature of Equifax’s IT systems made IT security especially challenging. Equifax recognized the inherent security risks of operating legacy IT systems, and had begun a legacy infrastructure modernization effort. This effort, however, came too late to prevent the breach.
Source: U.S. House of Representatives Committee on Oversight and Government Reform

Equifax was unprepared for these risks. An August 2016 report by the financial index provider MSCI Inc. assigned Equifax’s data security efforts a rating of zero out of ten. The provider’s April 2017 rating remained unchanged. Both reports concluded: “Equifax’s data security and privacy measures have proved insufficient in mitigating data breach events. The company’s credit reporting business faces a high risk of data theft and associated reputational consequences . . . . The company’s data and privacy policies are limited in scope and Equifax shows no evidence of data breach plans or regular audits of its information security policies and systems.”
Source: U.S. House of Representatives Committee on Oversight and Government Reform

What is Cloud Computing?
cloud com·put·ing (noun)
“a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies” (Gartner, 2016)
“a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, and applications) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (National Institute of Standards and Technology, 2011)

Benefits of the Cloud
The cloud is:
• Elastic leasing of pooled computer resources over the Internet
• Elastic
✓ Automatically adjusts for unpredictable demand
✓ Limits financial risks
• Pooled
✓ Same physical hardware
✓ Economies of scale
Why is the cloud used?
• Lower costs: cheap processors, essentially free data communication and storage
• Ubiquitous access
• Improved scalability
• Elasticity
• Virtualization technology
• Internet-based standards enable flexible, standardized processing capabilities

Types of Cloud Services
• Software as a Service (SaaS): e.g., Salesforce sales tracking as a service
• Platform as a Service (PaaS): e.g., Microsoft SQL Azure
• Infrastructure as a Service (IaaS): e.g., Amazon’s S3 (Simple Storage Service)
New cloud services:
• Analytics as a Service (AaaS): provides access to data analysis software and tools through the cloud
• Business Process as a Service (BPaaS): delivery of business process outsourcing (BPO) services
• Everything as a Service (EaaS): a concept of being able to call up re-usable, fine-grained software components across a network

Shared Responsibility Model
• AWS responsibility: “Security of the Cloud”
• Customer responsibility: “Security in the Cloud”
Source: AWS

Cloud Audit – Network Configuration & Management
1. Determine if the network security architecture is baselined and supports the enterprise’s security requirements
2. Determine if the enterprise can identify and take timely action against inappropriate network traffic
3. Network communications are managed through a formal network traffic-management program
4. Privileged access is provisioned to personnel in accordance with valid business need
5. Connectivity between the enterprise and the cloud platform
Source: ISACA AWS Audit Workprogram

Cloud Audit – Network Configuration & Management (Example)
Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

Cloud Audit – Asset Configuration & Management
1. Determine if changes made to cloud applications and related resources are authorized
2. Determine if potential risk (changes to the environment that adversely affect operations) is mitigated through monitoring of cloud assets
3. The enterprise maintains the environment's integrity by establishing change schedules
4. Cost containment is supported through identification and removal of unnecessary assets in a timely manner
5. Data retention and purge directives for cloud assets ensure data are retained only for the time required by law or for business needs
Source: ISACA AWS Audit Workprogram

Cloud Audit – Asset Configuration & Management (Example)
Amazon CloudWatch is basically a metrics repository. An AWS service, such as Amazon EC2, puts metrics into the repository, and you retrieve statistics based on those metrics. If you put your own custom metrics into the repository, you can retrieve statistics on those metrics as well.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records configurations, and allows you to automate the evaluation of recorded configurations against desired configurations.

Cloud Audit – Encryption
Source: AWS

Job Scheduling and Monitoring
Job scheduling tools enable IT to automate repetitive tasks within a defined schedule, and to notify IT operations of errors (job abends*), failures in data validations, and delays in processing a transaction. The IT auditor should review the set-up of critical jobs identified for a process or activity under audit. The review should include the validation checks performed, error notifications, and the completeness/accuracy of processing. The IT auditor should also review who has access to make changes to the job configuration.
*An abend is an unexpected or abnormal termination of an application or operating system that results from a problem with the software.

Backup, Redundancy, and Administration Considerations
Backup types:
• Full
✓ All files are backed up
✓ Fastest restoration process
✓ Takes the longest to perform
• Incremental
✓ Backs up files that have changed since the last backup (of any type)
✓ Backups can be performed quickly, but restoration takes longer
✓ The full backup must be restored first, and then each incremental backup
• Differential
✓ Backs up files that have changed since the last full backup
✓ For restoration, the full backup is restored and then the differential backup is restored

Single Point of Failure
• Single points of failure are a big deal when it comes to IT operations, and dealing with them is a big deal.
• Any system or network that relies on only one piece or component to do some kind of job essentially has a single point of failure: if it breaks, you're down.
• Can cause failures that take too long to recover from.

Mitigation
• Redundant and fault-tolerant technologies
• Redundant LAN routes
• On-demand backup WAN connections
• Documented procedures on how to deal with failures
• Properly trained personnel

Solutions for Redundancy and Fault Tolerance
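The three backup types described above differ only in the reference point used to decide whether a file has changed: an incremental compares against the last backup of any kind, while a differential compares against the last full backup. A sketch of that selection logic, using simple integer timestamps; the file names and times are made up:

```python
# Which files does each backup type copy? Compare each file's
# modification time against the relevant reference point.

def files_to_back_up(mtimes, backup_type, last_full, last_backup):
    """mtimes maps filename -> modification time (integers here for clarity)."""
    if backup_type == "full":
        return set(mtimes)                                   # everything
    if backup_type == "differential":
        return {f for f, t in mtimes.items() if t > last_full}
    if backup_type == "incremental":
        return {f for f, t in mtimes.items() if t > last_backup}
    raise ValueError(f"unknown backup type: {backup_type}")

mtimes = {"a.doc": 5, "b.xls": 12, "c.txt": 20}
# Suppose the last full backup ran at t=10 and an incremental ran at t=15.
assert files_to_back_up(mtimes, "full", 10, 15) == {"a.doc", "b.xls", "c.txt"}
assert files_to_back_up(mtimes, "differential", 10, 15) == {"b.xls", "c.txt"}
assert files_to_back_up(mtimes, "incremental", 10, 15) == {"c.txt"}
```

The restoration trade-off follows directly: a differential restore needs only the full backup plus the latest differential, while an incremental restore needs the full backup plus every incremental since, in order.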

Explanation & Answer

View attached explanation and answer. Let me know if you have any questions.


Importance of Proper Information Systems Operations and What Is Considered

Student's name
University Affiliation
Course Name and Number
Instructor's Name

Importance of Proper Information Systems Operations and What Is Considered
This topic concerns the significance of proper information systems operations in an organization. I believe this topic focuses on the advantages created by the existence of properly functioning systems operations. The functions of information systems operations are also