Identify a business problem or opportunity at a company where you work or with which you're familiar

Asked by Anonymous: Nov 29th, 2016

Question description

Number of pages: 4

Format: APA

Sources: Appropriate

Time: 2100 hrs (Today)

CLO Business Decision Making Project, Part 1

Identify a business problem or opportunity at a company where you work or with which you're familiar. This will be a business problem that you use for the individual assignments in Weeks 3-5. It should be a problem/opportunity for which gathering and analyzing some type of data would help you understand the problem/opportunity better.

Identify a research variable within the problem/opportunity that could be measured with some type of data collection.

Consider methods for collecting a suitable sample of either qualitative or quantitative data for the variable.

Consider how you will know if the data collection method would be valid and reliable.

Develop a 1,050-word analysis to describe a company, problem, and variable.

Include the following in your submission:

  • Identify the name and description of the selected company.
  • Describe the problem at that company.
  • Identify one research variable from that problem.
  • Describe the methods you would use for collecting a suitable sample of either qualitative or quantitative data for the variable (Note: do not actually collect any data).
  • Analyze how you will know if the data collection method would generate valid and reliable data (Note: do not actually collect any data).

Format your assignment consistent with APA guidelines.

Click the Assignment Files tab to submit your assignment.

CLO Business Decision Making Project, Part 2

Use the same business problem/opportunity and research variable you wrote about in Week 3.

Remember: do not actually collect any data; think hypothetically.

Develop a 1,050-word report in which you:

  • Identify the types of descriptive statistics that might be best for summarizing the data, if you were to collect a sample.
  • Analyze the types of inferential statistics that might be best for analyzing the data, if you were to collect a sample.
  • Analyze the role probability or trend analysis might play in helping address the business problem.
  • Analyze the role that linear regression for trend analysis might play in helping address the business problem.
  • Analyze the role that a time series might play in helping address the business problem.

Format your assignment consistent with APA guidelines.

Click the Assignment Files tab to submit your assignment.

CLO Business Decision Making Project, Part 3

Prepare an 11- to 15-slide Microsoft® PowerPoint® presentation for the senior management team based on the business problem or opportunity you described in Weeks 3-4.

Include on the slides what you'd want the audience to see (include appropriate visual aids/layout) and include in the Speaker's Notes section what you'd say as you present each slide. If any source material is quoted or paraphrased in the presentation, use APA citations and references.

Draw on material you developed in the Weeks 3-4 assignments.

Include the following in your presentation:

  • Introduction slide
  • Agenda slide
  • Briefly describe the organization
  • Explain the business problem or opportunity
  • Analyze why the business problem is important
  • Identify which variable would be best to measure for this problem, and explain why
  • Apply data analysis techniques to this problem (indicate which techniques should be used: descriptive statistics, inferential statistics, probability, linear regression, time series), and explain why
  • Propose a possible solution to the problem/opportunity, with a rationale
  • Evaluate how data could be used to measure the implementation of such a solution.
  • Conclusion
  • References slide (if any source material is quoted or paraphrased throughout the presentation)


Creating and Implementing a Data Collection Plan

Overview

Welcome to the e-learning lesson on Creating and Implementing a Data Collection Plan. Data collection is a crucial step in the process of measuring program outcomes. By measuring outcomes, an organization can better recognize the effectiveness and value of its programs, and pinpoint where changes or improvements need to be made. Before collecting data, your organization should have a solid understanding of the purpose of the program you wish to evaluate. You should have a working logic model that identifies your desired outcomes, the resources and activities necessary to accomplish these outcomes, and a detailed list of the specific measures you will take to implement them. Once this piece is complete, you can begin gathering relevant data through surveys, interviews, focus groups, or other methods.

Data collection happens before analysis and reporting.

Valid and reliable data is the backbone of program analysis. Collecting this data, however, is just one step in the greater process of measuring outcomes. The five steps include:

  1. Identify outcomes and develop performance measures.
  2. Create and implement a data collection plan (discussed in this lesson).
  3. Analyze the data.
  4. Communicate the results.
  5. Reflect, learn, and do it again.

This lesson will illustrate effective options and techniques for data collection.

At the end of this lesson, you will be able to understand how to plan for and implement data collection for a specific program; identify the most appropriate and useful data collection methods for your purposes; and manage and ensure the integrity of the data you collect.

CHAPTER 1: Data Collection Methods

Your data collection process will include attention to all the elements of your logic model: what resources you had available, what activities you actually provided, how many of each output you delivered, and to what degree you accomplished your outcomes. In collecting indicator data, you are likely to use one or more of these four methods: surveys, interviews or focus groups, observations, and record or document review. In selecting the best method for data collection, you will need to consider the type of information you need; the method’s validity and reliability; the resources you have available, such as staff, time, and money; and cultural appropriateness, or how well the method fits the language, norms, and values of the individuals and groups from whom you are collecting data.

Surveys are standardized written instruments that can be administered by mail, email, or in person.

The primary advantage of surveys is their cost in relation to the amount of data you can collect. Surveying generally is considered efficient because you can include large numbers of people at a relatively low cost. There are two key disadvantages: First, if the survey is conducted by mail, response rates can be very low, jeopardizing the validity of the data collected. There are mechanisms to increase response rates, but they will add to the cost of the survey. We will discuss tips for boosting response rates later in this lesson. Written surveys also don’t allow respondents to clarify a confusing question. Thorough survey pre-testing can reduce the likelihood that problems will arise.
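As a purely illustrative aside (the numbers are invented, not part of the original lesson), the following Python sketch shows how nonresponse can bias a mail survey: if satisfied recipients are more likely to return the form, the respondent average overstates true satisfaction.

```python
import random

random.seed(42)

# Hypothetical population of 1,000 survey recipients with true
# satisfaction scores on a 1-5 scale (invented for illustration).
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(1000)]

# Assume more-satisfied people are more likely to mail the form back:
# response probability rises from 10% (score 1) to 50% (score 5).
respondents = [score for score in population if random.random() < 0.10 * score]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)

print(f"Response rate:       {len(respondents) / len(population):.0%}")
print(f"True mean score:     {true_mean:.2f}")
print(f"Observed mean score: {observed_mean:.2f}  (biased upward)")
```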

Here are some examples of ways to use surveys:

  • Track grassroots organizations’ use of and satisfaction with technical assistance services you provide.
  • Survey all organizations receiving technical assistance to learn about changes in their fundraising tactics and the results of their efforts to raise more money.

Click here to download the “Technical Assistance Survey Template.” You can adapt this template for use in your program evaluation.

Interviews are more in-depth, but can be cost-prohibitive.

Interviews use standardized instruments but are conducted either in person or over the telephone. In fact, an interview may use the same instrument created for a written survey, although interviewing generally offers the chance to explore questions more deeply. You can ask more complex questions in an interview since you have the opportunity to clarify any confusion. You also can ask the respondents to elaborate on their answers, eliciting more in-depth information than a survey provides. The primary disadvantage of interviews is their cost. It takes considerably more time (and therefore costs more money) to conduct telephone and in-person interviews. Often, this means you collect information from fewer people. Interview reliability also can be problematic if interviewers are not well-trained. They may ask questions in different ways or otherwise bias the responses.

Here are some examples of ways to use interviews:

  • Talk to different grassroots organizations to learn about the way in which they are applying new knowledge of partnership development.
  • Interview individuals within an organization to explore their perceptions of changes in capacity and ability to deliver services.

Focus groups are small-group discussions based on a defined area of interest.

While interviews with individuals are meant to solicit data without any influence or bias from the interviewer or other individuals, focus groups are designed to allow participants to discuss the questions and share their opinions. This means people can influence one another in the process, stimulating memory or debate on an issue. The advantage of focus groups lies in the richness of the information generated. The disadvantage is that you can rarely generalize or apply the findings to your entire population of participants or clients. Focus groups often are used prior to creating a survey to test concepts and wording of questions. Following a written survey, they are used to explore specific questions or issues more thoroughly.

Here are some examples of ways to use focus groups:

  • Hold a structured meeting with staff in a community-based organization to learn more about their grants management practices, what worked during the year, and what did not.
  • Conduct a discussion with staff from several organizations to explore their use of computer technology for tracking financial data.

Observations can capture behaviors, interactions, events, or physical site conditions.

Observations require well-trained observers who follow detailed guidelines about whom or what to observe, when and for how long, and by what method of recording. The primary advantage of observation is its validity. When done well, observation is considered a strong data collection method because it generates firsthand, unbiased information by individuals who have been trained on what to look for and how to record it. Observation does require time (for development of the observation tool, training of the observers, and data collection), making it one of the costlier methods.

Here are some examples of ways to use observations:

  • Observe individuals participating in training to track the development of their skill in the topic.
  • Observe community meetings sponsored by grassroots organizations to learn about their partnership-building techniques and collaborative behavior.

Record or document review involves systematic data collection from existing records.

Internal records available to a capacity builder might include financial documents, monthly reports, activity logs, purchase orders, etc. The advantage of using records from your organization is the ease of data collection. The data already exists and no additional effort needs to be made to collect it (assuming the specific data you need is actually available and up-to-date).

If the data is available and timely, record review is a very economical and efficient data collection method. If not, it is likely well worth the time to make improvements to your data management system so you can rely on internal record review for future outcome measurement work. Just a few changes to an existing form can turn it into a useful data collection tool. A small amount of staff training can increase the validity and reliability of internally generated data.

Here are some examples of documents or records from which you can gather data:

  • Sign-in logs from a series of workshops to track attendance in training, measuring consistency of attendance as an indicator of organizational commitment to learning.
  • Feedback forms completed by workshop participants to learn about satisfaction with training provided.

Official records can include Federal, state, or local government sources such as the U.S. Census Bureau, health departments, law enforcement, school records, assessor data, etc. If the data is relevant and accessible, then official record review is very low-cost.

CHAPTER 2: Validity and Reliability

Validity and reliability are two critical concepts in implementing effective outcome measurement systems. These concepts can be illustrated through the example of a bathroom scale. Let’s say Jennifer weighs herself every day and finds that the scale always reads 150 pounds. Assuming she is not losing or gaining weight, and that she actually does weigh 150 pounds, then the scale is valid and reliable. It’s valid because it displays an accurate reading, and it’s reliable because it displays this reading each time it’s used. The scale could be reliable without being valid. For example, if Jennifer’s weight is displayed each day as 135 pounds, then the scale is not providing a valid reading because the number is not accurate. It doesn’t measure Jennifer’s true weight. If, however, the scale always reads 135 pounds, and she is not gaining or losing weight, then it is still reliable because it reflects a consistent reading.

Validity is the accuracy of the information generated.

Valid data accurately reflects what you set out to measure. In the bathroom scale example, the scale is valid when the number it displays matches Jennifer's true weight. In outcome measurement, validity is threatened when an instrument misses its target: survey questions are confusing or misread, response rates are so low that respondents no longer represent the whole group, or observers record impressions rather than the behaviors they were trained to watch.

Reliability refers to consistency.

Reliability can also be thought of as the extent to which data are reproducible. Do items or questions on a survey, for example, repeatedly produce the same response regardless of when the survey is administered or whether the respondents are men or women? Bias in the data collection instrument is a primary threat to reliability and can be reduced by repeated testing and revision of the instrument.

You cannot have a valid instrument if it is not reliable. However, you can have a reliable instrument that is not valid. Think of shooting arrows at a target. Reliability is getting the arrows to land in about the same place each time you shoot. You can do this without hitting the bull’s-eye. Validity is getting the arrow to land on the bull’s-eye. Lots of arrows landing in the bull’s-eye means you have both reliability and validity.
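To make the scale analogy concrete, here is a minimal Python sketch (an illustration added to this lesson, not part of the original) that treats validity as low bias and reliability as low spread across repeated measurements. The true weight of 150 pounds and the miscalibrated reading of 135 pounds come from the example above; the small day-to-day noise is assumed.

```python
import statistics

TRUE_WEIGHT = 150.0  # Jennifer's actual weight, per the example above

def assess(readings):
    """Mean error from the true value gauges validity (accuracy);
    the spread of repeated readings gauges reliability (consistency)."""
    bias = statistics.mean(readings) - TRUE_WEIGHT
    spread = statistics.stdev(readings)
    return bias, spread

# Scale A: valid and reliable -- reads about 150 every day.
scale_a = [150.0, 150.1, 149.9, 150.0, 150.0]

# Scale B: reliable but NOT valid -- consistently reads about 135.
scale_b = [135.0, 135.1, 134.9, 135.0, 135.0]

for name, readings in [("Scale A", scale_a), ("Scale B", scale_b)]:
    bias, spread = assess(readings)
    print(f"{name}: bias = {bias:+.1f} lb, spread = {spread:.2f} lb")
```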

CHAPTER 3: Deciding When and How to Collect Data

Once you have identified the data collection methods you intend to use, and after you have carefully tested to make sure your methods are as valid and reliable as possible, you need to decide when you will collect the data and how often. Using an appropriate schedule to gather data—such as before, during, or after a program—is vital. It’s also important to prepare your clients for the data collection process and to assure them that you will protect the confidentiality of their feedback.

Consider the most appropriate data collection design for your program.

Here are descriptions for five approaches or designs you are likely to use for your data collection. You may want to employ more than one type of design.

Design 1: Post-only Measures

  • Data are collected once: at the end of the program, service, or activity
  • Example: Level of participant knowledge on a survey after a training workshop

Design 2: Pre/Post Measures

  • Data are collected twice: at the beginning to establish a baseline and at the end of the program
  • Example: Comparison of an organization’s documented fundraising success before and after receiving technical assistance
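If you eventually did collect pre/post data, a paired analysis would be the natural fit. The sketch below is a minimal illustration with invented numbers (remember, no real data is being collected); it uses SciPy's paired t-test to ask whether fundraising changed after technical assistance.

```python
from scipy.stats import ttest_rel

# Hypothetical annual funds raised (in $1,000s) by eight organizations
# before and after receiving technical assistance (invented numbers).
before = [42, 55, 38, 61, 47, 50, 33, 58]
after = [51, 60, 45, 66, 52, 58, 40, 63]

# A paired t-test compares each organization with itself, which
# controls for baseline differences between organizations.
t_stat, p_value = ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the before/after change is unlikely to be
# chance alone; it does not by itself prove the assistance caused it.
```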

Design 3: Time Series

  • Data are collected a number of times: during an ongoing program and in follow-up
  • Example: Monthly observations of an organization’s collaboration meetings to track changes in partnership development and communication
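Time-series data from Design 3 can be summarized with a simple trend line. A minimal sketch, assuming twelve months of invented collaboration ratings, fits a line with NumPy; the slope estimates the average change per month.

```python
import numpy as np

# Hypothetical monthly collaboration ratings (1-10 scale) recorded by
# a trained observer over twelve months of an ongoing program.
months = np.arange(1, 13)
scores = np.array([4.0, 4.2, 4.1, 4.6, 5.0, 4.8,
                   5.3, 5.5, 5.4, 5.9, 6.1, 6.0])

# Fit a straight line; the slope is the average change per month.
slope, intercept = np.polyfit(months, scores, 1)
print(f"Trend: {slope:+.2f} points per month (intercept {intercept:.2f})")
```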

Design 4: Measures with a Comparison Group

  • Data are collected from two groups: one group that receives the intervention and one that doesn’t
  • Example: Comparison of data on skill development from individuals who participated in training and those who have not yet taken your workshop
  • Note: Comparison groups can be very useful in demonstrating the success of your intervention. The main question is, can you find a group of people or organizations that is just like the group with whom you are working? In order to provide a valid comparison, the two groups must have the same general characteristics. A similar group may be difficult to find. However, if you are working with different groups at different times, and the groups are similar, this approach may work for you.
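If a suitable comparison group is found, the two groups' outcomes can be compared directly. Here is a sketch under assumed data, using Welch's independent-samples t-test from SciPy (a common default when the groups' variances may differ):

```python
from scipy.stats import ttest_ind

# Hypothetical post-training skill scores for workshop participants
# and for a similar group that has not yet taken the workshop.
trained = [78, 85, 82, 90, 76, 88, 84, 79]
not_yet_trained = [70, 74, 68, 77, 72, 75, 69, 73]

# equal_var=False selects Welch's t-test, a safer default when the
# two groups may differ in variance as well as in mean.
t_stat, p_value = ttest_ind(trained, not_yet_trained, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```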

Design 5: Measures with a Comparative Standard

  • Data are collected once: at the end of the program, service, or activity, and are compared with a standard
  • Example: Comparison of this year’s data on organizations’ success in fundraising versus last year’s data
  • Note: Comparative standards are standards against which you can measure yourself. There are standards of success in some fields (e.g., health mortality and morbidity rates, student achievement scores, teen birth rates). For intermediaries, however, there are unlikely to be many regarding your program outcomes or indicators. You can, however, compare your results for one time period to an earlier one, as shown in the example above. You collect data for the first time period as your baseline and use it as your standard in the future.

Implement data collection procedures.

It will be vital to find reliable and trustworthy people to collect and manage the data. Consider who will collect the data and how you will recruit these people. What steps will they need to take to collect the data? How will you train them? Finally, who will be responsible for monitoring the data collection process to ensure you are getting what you need? It’s important to answer each of these questions during your planning. You don’t want to be surprised halfway through the process to discover your three-month follow-up surveys were not mailed out because you didn’t identify who would do so!

Prepare your clients (FBOs and CBOs, i.e., faith-based and community-based organizations) for data collection.

Communicate with the organizations you serve or the program’s staff to inform them of this step in the evaluation process. Make sure they know that you will be collecting data, either at the time of service or in follow-up. Clarify why it is important to you and how you intend to use the data. Organizations often have outcome reporting requirements themselves, so they usually are responsive if they have been alerted to your needs ahead of time. Advising them in advance about your data collection plans will help increase their willingness to participate during implementation.

Protect individuals’ confidentiality and get informed consent.

Anonymous and confidential do not mean the same thing. “Anonymous” means you do not know who provided the responses. “Confidential” means you know or can find out who provided the responses, but you are committed to keeping the information to yourself.

You must ensure that you protect the confidentiality of any individual’s data or comment. It is easy to make your surveys anonymous, but if you want to track people over time, you’ll likely need to attach ID numbers to each person, keeping a list of the names and numbers in a locked file.
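One common way to implement the ID-number approach just described (a sketch of one option, not the lesson's prescribed procedure) is to assign each participant a random ID, store the name-to-ID key in a separate restricted file, and keep only IDs in the survey data:

```python
import csv
import secrets

def assign_ids(names):
    """Give each participant a random ID so that survey files never
    need to contain names; secrets makes the IDs hard to guess."""
    return {name: secrets.token_hex(4) for name in names}

participants = ["A. Rivera", "B. Chen", "C. Okafor"]  # hypothetical names
key = assign_ids(participants)

# The key file (names and IDs together) belongs in a locked or
# encrypted location, separate from the survey responses...
with open("id_key.csv", "w", newline="") as f:
    csv.writer(f).writerows(key.items())

# ...while the survey data file carries only the IDs.
print(sorted(key.values()))
```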

It is important to inform people that you are measuring your program’s outcomes and may use data they provide in some way. You must let them know that their participation is voluntary and explain how you will maintain the confidentiality of their data.

CHAPTER 4: Strategies for Quality Assurance and Boosting Response Rates

As data is collected and entered into a storage mechanism, checking for errors and data quality is an important step that is easily overlooked. Build time into your overall timeline to review data and follow up on discrepancies; the more data you collect, the more time you will need to assure its quality. Additionally, low response rates can threaten an outcome measurement effort. Follow-up surveys with organizations that no longer actively participate in a capacity building program are especially vulnerable to this problem.

Use these strategies:

  1. Double entry. This entails setting up a system to enter the same data twice and then compare the two versions for discrepancies (see the sketch after this list). This can be costly and time-consuming, but is the most thorough method for quality control.
  2. Spot checking. This entails reviewing a random sample of data and comparing it to the source document for discrepancies or other anomalies. If discrepancies are found, the first step is to identify any patterns (data entered in a particular time period or by a specific staff person; data associated with a particular beneficiary organization; a specific type of data that is incorrect across many records—for example, if all data for additional persons served at an organization was formatted as a percentage instead of as a whole number). The capacity builder may need to review all the data entered, especially if there is no discernible pattern to the errors.
  3. Sort data to find missing, high, or low values. If you are using a database or spreadsheet function, identifying outliers (those pieces of data at either extreme) is very easy, whether through the use of formulas or sorting functions.
  4. Use automation, such as drop-down menus. Automating data collection provides a uniform way to report information and makes sorting and analyzing data much easier. For example, organizations reporting the number of additional persons served will all use the same language to report the outcome, whereas without such automation the language could vary significantly from report to report. Additionally, more sophisticated forms can pre-populate performance goals from an existing database, which reduces data entry errors made by those filling out the forms.
  5. Format a database to accept only numbers. Whether organizations are filling out forms directly or your staff is entering data from a handwritten form, formatting your data fields to accept only numbers reduces errors related to typos.
  6. Review data for anomalies. This strategy requires that a staff person who is familiar with the organization’s capacity building interventions and who has a good eye for detail reviews the data collected and identifies anomalies. Some of these anomalies may not appear with general sorting.
  7. Discuss data discrepancies with the organization. If, after implementing any of these quality assurance mechanisms, discrepancies remain unexplained, take the data back to the organization for discussion and clarification.
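To make strategies 1 and 3 concrete, here is a minimal pandas sketch with invented data: it compares double-entered records for discrepancies and sorts a numeric field so missing and extreme values surface at the ends.

```python
import pandas as pd

# Strategy 1 (double entry): the same three records keyed in twice.
entry_1 = pd.DataFrame({"org": ["A", "B", "C"], "served": [120, 85, 230]})
entry_2 = pd.DataFrame({"org": ["A", "B", "C"], "served": [120, 58, 230]})

# Rows where the two entries disagree point back to the source form.
mismatches = entry_1[entry_1["served"] != entry_2["served"]]
print("Discrepancies to recheck:")
print(mismatches)

# Strategy 3 (sort to find missing, high, or low values).
served = pd.Series([120, 85, 230, None, 12000])  # 12000 is a likely typo
print("Missing values:", int(served.isna().sum()))
print(served.sort_values().to_string())  # extremes land at the ends
```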

Boosting your response rates can help ensure sufficient and timely data.

It’s important to have response rates that are as high as possible considering your circumstances. If response rates are low, you may be excluding valuable opinions, feedback, and responses that can help you shape future training and technical assistance programs. You may also get an inaccurate picture of how your current program is proceeding. The following strategies can help you increase your response rates.

  1. Tie data collection to project milestones. Throughout the course of the capacity building relationship, it is relatively simple to require organizations to report desired data. For example, an evaluation could be due as a requirement for moving on to the next phase of the project, such as releasing funds for a capacity building project or approving a consultant to begin work. However, once the organization exits the capacity building program, the capacity builder loses this leverage.
  2. Conduct an exit interview once the engagement is complete. Participation in this interview can be mandated in a memorandum of understanding. An exit interview is close enough to the intervention that the organization may still be invested in maintaining its relationship with the capacity builder and follow through on the commitment. However, the organization may not have realized all its possible outcomes and therefore, the data may not capture some of the ripple effects where outcomes are realized after the data has been collected.
  3. Stay in touch. By holding monthly meetings or conference calls with organizations after they exit the program, the capacity builder can maintain more informal connections and provide reminders. The organizations have access to advice and support and may be more likely to participate in a follow-up data collection effort. Establishing a community of practice among organizations so they have even more reason to be in touch with each other (and you) is one way to implement this strategy.
  4. Provide the outcome data to the organization. Offer organizations a short, summary report card of the data you collect from them and demonstrate how it can be used as a marketing tool. This summary can be invaluable to a program and may increase the number of responses you get to your data surveys. If you can use the merging functions available in software like Microsoft Word and Outlook, generating report cards for tens or even hundreds of organizations may take just a few hours.
  5. Offer multiple collection methods. Be available to complete the survey on the phone with the organization. Be available to go to the organization’s headquarters and do it in person. Be prepared to offer language translation if necessary, offer the survey electronically, or mail the survey with a stamped envelope. The easier it is for an individual to complete the survey, the more response rates will increase.
  6. Be culturally competent. Capacity builders may take great steps to ensure that training and technical assistance are culturally appropriate, and the same care should extend to data collection efforts. Moreover, if you are engaging a third party to collect data (a consultant or a team of interns, for example), remember that a third party has not had the benefit of getting to know an organization and its staff through the course of the capacity building engagement. Language barriers, cultural differences, and individual preferences can influence whether you are likely to get a response.
  7. Introduce your external data collectors. If you are working with such third parties, introduce them to the organizations early on. An established relationship helps improve response rates; the lack of one hurts them. As a caveat, be sure to maintain confidentiality about the results, especially if a third party is collecting direct feedback about your services.

Summary

Thank you for viewing this e-learning lesson on creating and implementing a data collection plan. Data collection is a critical part of measuring program outcomes. The bottom line: how you collect your data, and the attention you pay to the people you collect it from, determine the quality of the data you bring to the analysis phase, which is the next step in the process of outcome measurement.

Thank you for taking the time to learn about data collection plans.

You should now have a better understanding of how to plan for and implement data collection for a specific program. Identifying the most appropriate and useful data collection method for a specific program will help you effectively ensure the integrity of the data you collect.

These resources can offer additional guidance in creating your plan.

Data Collection Plan Worksheet:
This worksheet will help you lay out your data collection plan step by step, from the outcomes to be measured to who will collect and manage data.
Download it here: Data Collection Plan Worksheet

The Outcome Measurement Resource Network (United Way of America):
The Resource Network offers information, downloadable documents, and links to resources related to the identification and measurement of program- and community-level outcomes.
http://www.liveunited.org/outcomes/

Outcome Indicators Project (The Urban Institute):
The Outcome Indicators Project provides a framework for tracking nonprofit performance. It suggests candidate outcomes and outcome indicators to assist nonprofit organizations that seek to develop new outcome monitoring processes or improve their existing systems.
http://www.urban.org/center/cnp/projects/outcomeindicators.cfm
