Statistics Hw.

foq27
Asked: Apr 30th, 2018

Question Description

Please find the attached slides; you may need them for these questions.

1. In a one-way between-subjects ANOVA with 3 groups and 10 participants in each group, the df for the SST (the sum of squared total deviations) is _____.

a. 9

b. 10

c. 27

d. 29

2. In one-way ANOVA, the SSB is calculated as _______.

a. SSB = ∑nj(Xij − X̄)²

b. SSB = ∑(X̄j − X̄)²

c. SSB = ∑nj(X̄j − X̄)²

d. SSB = ∑nj(X̄ − X̄j)²

3. Researchers study the effect of distracted driving on driving behavior. Driving behavior is measured by the number of mistakes recorded on a simulator. There are 36 participants who are randomly assigned to one of the three groups: driving while eating, driving while texting, and normal driving without distraction. Each group has 12 participants and the group means are 22, 25, and 16, respectively, and SST = 2016. What is the MSW based on the information provided here?

a. 43.2

b. 45.818

c. 252

d. 1512

4. In calculating an ANOVA, when n1 = 5, n2 = 6, and n3 = 7 and the group means are X̄1 = 1.2, X̄2 = 2.5, and X̄3 = 2.7, the overall sample mean X̄ = ______.

a. 1.96

b. 2.10

c. 2.22

d. 2.53

5. Given group means X̄1 = 1.2, X̄2 = 2.5, and X̄3 = 2.7 and n1 = n2 = n3 = 8, calculate the SSB.

a. 10.61

b. 12.55

c. 13.27

d. 15.38

6. Researchers study the effect of distracted driving on driving behavior. Driving behavior is measured by the number of mistakes recorded on a simulator. There are 36 participants who are randomly assigned to one of the three groups: driving while eating, driving while texting, and normal driving without distraction. Each group has 12 participants and the group means are 22, 25, and 16, respectively and SST = 2016. What is the calculated F for the one-way ANOVA based on the information provided here?

a. 4.37

b. 5.50

c. 6.78

d. 9.11

7. The sum of squares within groups is largely attributable to factors ______.

a. related to the independent variable

b. that are controllable by researchers

c. that are out of the control of researchers

d. based upon variation between groups

8. The sum of squares within groups is calculated by ______.

a. SSW = ∑∑(Xij − X̄j)²

b. SSW = ∑∑(Xij − X̄j)³

c. SSW = ∑∑nj(Xij − X̄j)²

d. SSW = ∑∑(Xij − X̄)²

9. Researchers study the effect of distracted driving on driving behavior. Driving behavior is measured by the number of mistakes recorded on a simulator. There are 36 participants who are randomly assigned to one of the three groups: driving while eating, driving while texting, and normal driving without distraction. Each group has 12 participants and the group means are 22, 25, and 16, respectively, and SST = 2016. What are the df for the SSB (sum of squares between groups) based on the information provided here?

a. 1

b. 2

c. 3

d. 4

10. In a one-way ANOVA, SST = 502, the group means are X̄1 = 26, X̄2 = 24, and X̄3 = 28, and n1 = n2 = n3 = 10. Calculate the SSW.

a. 450

b. 422

c. 305

d. 204

11. Researchers study the effect of distracted driving on driving behavior. Driving behavior is measured by the number of mistakes recorded on a simulator. There are 36 participants who are randomly assigned to one of the three groups: driving while eating, driving while texting, and normal driving without distraction. Each group has 12 participants and the group means are 22, 25, and 16, respectively, and SST = 2016. What is the rejection zone for the one-way ANOVA based on the information provided here?

a. F > 4.17

b. F > 4.08

c. F > 3.32

d. F > 3.23

12. The alternative hypothesis of an ANOVA states that ______.

a. one group mean is not equal to the others

b. all group means are equal

c. none of the group means are equal

d. not all group means are equal

13. There are 30 participants who are randomly assigned to one of the three groups: driving while eating, driving while texting, and normal driving without distraction. Each group has 10 participants and the ANOVA summary table is shown below.

Source            SS      df         MS            F
Between groups    410     dfB = 2    MSB = 205     4.486
Within groups     1234    dfW = 27   MSW = 45.70
Total             1644    dfT = 29

What is the effect size for the distracted driving conditions on driving behavior?

a. 4.486

b. 2.243

c. 0.892

d. 0.249

14. Many people lose their jobs during economic downturns. Assume that an Industrial and Organizational psychologist is studying the level of psychological stress of job seekers. One of the independent variables is age. Age is classified into three levels: younger than 40, 40–55, and older than 55. The descriptive statistics of psychological stress scores for all three groups are reported in the table below. Higher numbers mean higher reported psychological stress.

Descriptive statistics of psychological stress scores for all three groups:

Age               N     X̄      s
Younger than 40   20    41.3    5.34
40–55             20    43.7    5.97
Older than 55     20    48.8    6.15
Total             60    44.6

SST = 2524.4

What is the SSB for the psychological stress among job seekers?

a. 586.8

b. 602.4

c. 1922

d. 1937.6

15. The chi-square results are stable as long as no more than ____________ of the cells in a chi-square test have an expected frequency less than 5.

a. 5%

b. 10%

c. 15%

d. 20%

16. The degrees of freedom for the Chi-square goodness-of-fit test are

a. (c-1), where c is the number of columns or categories.

b. (r-1)(c-1), where r is the number of rows and c is the number of columns.

c. n-2, where n is the sample size.

d. n-1, where n is the sample size.

17. A small and insignificant χ2 for an independence test means that

a. two variables are moving in the same direction.

b. we can use variable X to predict variable Y.

c. two variables have similar means.

d. two variables are independent.

18. A medical device company went through a sales workforce reduction. There were 100 people in sales positions, 67 of them female sales reps. The company laid off 30 sales reps in total, including 25 female sales reps. What is the expected number of female sales reps laid off?

a. 7.5

b. 16.8

c. 20.1

d. 42

19. For a chi-square test with α = .05 and df = 2, the rejection zone is ____.

a. χ2 > 3.841

b. χ2 > 5.991

c. χ2 < 3.841

d. χ2 < 5.991


Assume that you suspect that a die has been tampered with. You throw the die 42 times and record the number of times each value on the die occurs in the following table. Conduct a hypothesis test to see if this die is a fair die, assuming α = .10. Show the four-step hypothesis-test procedure in your answer.

X    1    2    3    4    5    6
O    10   5    8    6    4    9

Attachment Preview (course slides)

Straightforward Statistics, Chieh-Chen Bowen (© SAGE Publications 2016)
Chapter 14: Chi-Square Tests for Goodness of Fit and Independence

What You Know
• The four-step hypothesis-testing procedure continues to be relevant in Chapter 14.
• There are only minor modifications in the four-step hypothesis test for chi-squares, and they will be specified.

What Is New
• The various t-tests, correlation, regression, and ANOVA are parametric statistics.
• Chi-square, χ², is the first nonparametric statistic you encounter.

Chi-Square, χ², Concept
• Chi-square is based on frequencies.
− O, the observed frequencies, can only be whole numbers.
− E is the expected frequency.
− No more than 20% of the cells in a chi-square test should have an expected frequency less than 5.

Chi-Square, χ², Formula
• χ² = ∑ (O − E)² / E
• Large discrepancies between O and E lead to a large calculated χ².
• Small discrepancies between O and E lead to a small calculated χ².

E in the χ² Goodness-of-Fit Test
• In a one-way frequency table (a goodness-of-fit test), E = np, where n = sample size and p = population proportion or theoretical probability.
• p is calculated based on the probability theories discussed in Chapter 5.

E in the χ² Independence Test
• The expected frequency for each cell is calculated under the independence assumption: E = (row total × column total) / n.
• For each observed frequency in a two-way contingency table, you can calculate a corresponding expected frequency.
• The expected frequency in each cell has the same proportion as its row total and column total relative to the sample size.
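The χ² statistic and both expected-frequency rules above can be sketched in a few lines of Python (a minimal illustration; the function names are my own, not from the slides):

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def expected_goodness_of_fit(n, probabilities):
    """Goodness-of-fit expected frequencies: E = n * p for each category."""
    return [n * p for p in probabilities]

def expected_independence(row_totals, col_totals, n):
    """Independence-test expected frequencies: E = row total * column total / n."""
    return [[r * c / n for c in col_totals] for r in row_totals]

# A fair six-sided die thrown 36 times: E = 36 * (1/6) = 6 per face.
print(expected_goodness_of_fit(36, [1/6] * 6))
```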
• Both observed frequencies and expected frequencies add up to the same row totals and column totals.

Goodness-of-Fit Tests
• The critical values of chi-square identify the values that set the boundaries of the rejection zones.
• The df for the goodness-of-fit test is (c − 1), where c is the number of columns or categories in the one-way frequency table.
• You need the df to find the rejection zone, because the rejection zone is identified as χ² > χ²(c − 1).
• Chi-squares can only be positive numbers.

Hypothesis Testing With the Chi-Square Goodness-of-Fit Test
The four-step hypothesis-test procedure:
1. State the pair of hypotheses.
2. Identify the rejection zone.
3. Calculate the test statistic.
4. Draw the correct conclusion.

Step 1. State the Pair of Hypotheses
• Chi-square is a nonparametric statistic, so the hypotheses do not describe population parameters. The pair of hypotheses for one-way frequency tables:
H0: A particular variable distribution fits its theoretical distribution (e.g., uniform distribution, binomial distribution).
H1: A particular variable distribution does not fit its theoretical distribution.

Step 2. Identify the Rejection Zone
• Critical values of chi-square are presented in Appendix H.
• You need the α level and df to identify the rejection zone: χ² > χ²(c − 1).

Critical Values of Chi-Square (right-tailed α level)

df    0.10     0.05     0.025    0.01     0.005
1     2.706    3.841    5.024    6.635    7.879
2     4.605    5.991    7.378    9.210    10.597
3     6.251    7.815    9.348    11.345   12.838
4     7.779    9.488    11.143   13.277   14.860
5     9.236    11.070   12.833   15.086   16.750
6     10.645   12.592   14.449   16.812   18.548
7     12.017   14.067   16.013   18.475   20.278
8     13.362   15.507   17.535   20.090   21.955
9     14.684   16.919   19.023   21.666   23.589

Calculated χ² and Conclusion
• Step 3. Calculate the test statistic: χ² = ∑ (O − E)² / E.
• Step 4. Draw the correct conclusion.
− If the calculated χ² is within the rejection zone, we reject H0.
− If the calculated χ² is not within the rejection zone, we fail to reject H0.

Example 14.2, the Goodness-of-Fit Test
• Assume that you suspect that a die has been tampered with. You throw the die 36 times and record the number of times each value on the die comes up. Conduct a hypothesis test to see if this die is a fair die. Assume α = .05.
• Step 1. H0: The die is fair. H1: The die is not fair.
• Step 2. With α = .05 and df = (c − 1) = 6 − 1 = 5, the rejection zone is χ² > 11.07.
• Step 3. χ² = ∑ (O − E)² / E = 4.667.
• Step 4. The calculated χ² = 4.667 is not within the rejection zone. Therefore, we fail to reject H0. The evidence is not strong enough to claim that the die is not a fair die.

Hypothesis Testing With the Chi-Square Independence Test
• Step 1. State the pair of hypotheses.
H0: Two variables are independent of each other.
H1: Two variables are not independent of each other.
• Step 2. Identify the rejection zone.
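The four goodness-of-fit steps above can be sketched end to end in Python. The observed counts below are hypothetical (the slides do not list the raw frequencies for Example 14.2); 9.236 is the critical value for α = .10 with df = 5 from the table above.

```python
def chi_square_gof(observed, probs, critical):
    """Four-step chi-square goodness-of-fit test.
    Steps 1 and 2 (hypotheses and rejection zone chi2 > critical) are
    fixed by the caller; this function performs Steps 3 and 4."""
    n = sum(observed)
    expected = [n * p for p in probs]                               # E = n * p
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    reject = chi2 > critical                                        # Step 4
    return chi2, reject

# Hypothetical counts for 42 throws of a die; H0: the die is fair (p = 1/6 each).
chi2, reject = chi_square_gof([8, 6, 7, 5, 9, 7], [1/6] * 6, critical=9.236)
print(round(chi2, 3), reject)
```

With these made-up counts χ² is well below the critical value, so the test fails to reject H0.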
− With df = (r − 1)(c − 1), the rejection zone for a chi-square independence test is identified as χ² > χ²(r − 1)(c − 1).

Chi-Square Independence Test
• Step 3. Calculate the test statistic: χ² = ∑ (O − E)² / E, where E = (row total × column total) / n.
• Step 4. Draw the correct conclusion.
− If the calculated χ² is within the rejection zone, we reject H0.
− If the calculated χ² is not within the rejection zone, we fail to reject H0.

Example 14.4, Chi-Square Test for Independence
• A company specializing in medical devices underwent a workforce reduction for its sales force. The outcome for an individual employee can be classified as either "laid off" or "not laid off." The employee's gender can be classified as either "male" or "female."
• Conduct a hypothesis test to see if the layoff decision and employee's gender are independent, using α = .05.
• Step 1. State the pair of hypotheses.
H0: The layoff decision and employee's gender are independent.
H1: The layoff decision and employee's gender are not independent.
• Step 2. Identify the rejection zone. When α = .05, with df = (r − 1)(c − 1) = (2 − 1)(2 − 1) = 1, the rejection zone is χ² > 3.841.
• Step 3. Calculate the test statistic, with E = (row total × column total) / n.

Expected frequency table:

          Laid off   Not laid off
Male      9.9        23.1
Female    20.1       46.9

χ² = ∑ (O − E)² / E = 5.17

• Step 4. Draw the correct conclusion. The calculated χ² = 5.17 is within the rejection zone, so we reject H0.
• Interpretation: The evidence is strong enough to support the claim that the layoff decision and employee's gender are not independent. The probability of obtaining such observed frequencies is quite small (p < .05) under the assumption that layoff decisions are independent of employee's gender.

Example 14.5
• The personnel files in the company's Midwest region showed that there were 3,300 male employees, and among them, 960 were promoted in the past 6 months. During the same time, there were 2,700 female employees, and among them, 700 were promoted. Conduct a four-step hypothesis-testing procedure to test whether promotion decisions and an employee's gender are independent, using α = .05.
• Step 1. State the pair of hypotheses.
H0: The promotion decision and employee's gender are independent.
H1: The promotion decision and employee's gender are not independent.
• Step 2. Identify the rejection zone. When α = .05, with df = 1, the rejection zone is χ² > 3.841.
• Step 3. Calculate the test statistic.

Observed frequency table:

          Promoted   Not promoted   Total
Male      960        2,340          3,300
Female    700        2,000          2,700
Total     1,660      4,340          6,000

Expected frequency table:

          Promoted   Not promoted
Male      913        2,387
Female    747        1,953

χ² = ∑ (O − E)² / E = 7.43

• Step 4. Draw the correct conclusion. The calculated χ² = 7.43 is within the rejection zone, so we reject H0.
• Interpretation: The evidence is strong enough to support the claim that the promotion decision and employee's gender are not independent. The probability of obtaining such observed frequencies is quite small (p < .05) under the assumption that the promotion decision and employee's gender are independent.
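As a sketch, the independence test of Example 14.4 can be reproduced from its observed counts (5 of 33 men and 25 of 67 women laid off); the helper function below is my own, not from the text:

```python
def chi_square_independence(table):
    """Chi-square test of independence for a 2D list of observed counts."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / n   # E = row * column / n
            chi2 += (o - e) ** 2 / e
    return chi2

# Rows: male, female; columns: laid off, not laid off (Example 14.4).
chi2 = chi_square_independence([[5, 28], [25, 42]])
print(round(chi2, 2))   # 5.17, which exceeds 3.841 at alpha = .05 with df = 1
```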
Chapter 13: One-Way Analysis of Variance (ANOVA)

What You Know and What Is New
• Independent-samples t-tests
• Dependent-samples t-tests
• Simple regression
• Analysis of variance (ANOVA)
− Between-subjects design: different participants are assigned to different groups.
− Within-subjects design: research participants are assigned to all levels of the independent variables.

Introducing ANOVA
• Analysis of variance (ANOVA) is a statistical method to test the equality of group means by partitioning variance due to different sources.
• ANOVA is applied to comparisons between two or more group means.
• Single-factor ANOVA (one-way ANOVA) uses one independent variable.

Introducing ANOVA Terms
• The subscript i refers to individuals in each group, and the subscript j refers to groups.
• X̄1, X̄2, and X̄3 symbolize the mean for group 1, the mean for group 2, and the mean for group 3, respectively.
• X̄ refers to the mean for the entire sample.
• The number of participants in each group is denoted by n1, n2, and n3.
• The entire sample size is the sum of the group sizes: n = n1 + n2 + n3.

Visualizing SST
• Sum of squared total deviations: SST = ∑∑(Xij − X̄)².
− The first subscript refers to the individual and the second to the group.
• SST sums the squared difference between every individual value in every group, Xij, and the overall sample mean, X̄.

Sum of Squares Between (SSB)
• SSB shows the variability between different groups.
− The differences between group means are largely attributable to systematic differences due to group effects.
• SSB = ∑ nj(X̄j − X̄)², where X̄j is the mean of group j, nj is the number of participants in group j, and X̄ is the sample mean.

Sum of Squares Within (SSW)
• The sum of squares within groups (SSW) reflects the variability within groups.
• SST = SSB + SSW, so SSW = SST − SSB.

df Between Groups (dfB) and Within Groups (dfW)
• The degrees of freedom between groups (dfB) are the number of groups minus 1: with k groups, dfB = k − 1.
• The df within groups for group 1 is (n1 − 1), for group 2 is (n2 − 1), ..., for group k is (nk − 1). The dfW is the sum of the df for each group:
dfW = (n1 − 1) + (n2 − 1) + ... + (nk − 1) = (n1 + n2 + ... + nk) − k = n − k
• The sample size is n = n1 + n2 + ... + nk, and dfT = n − 1.

Hypothesis Testing With ANOVA
• Step 1. State the pair of hypotheses for ANOVA.
H0: µ1 = µ2 = µ3 = ... = µk.
H1: Not all µk are equal.
• If the outcome is to fail to reject H0, no further analysis needs to be done.
• If the outcome of an ANOVA is to reject H0, additional post hoc comparisons need to be done to identify exactly where the significant difference comes from.
• Step 2. Identify the rejection zone.
− The critical value of F can be identified by F(k − 1, n − k) in the F table for ANOVA. The rejection zone is identified as the calculated F > F(k − 1, n − k).
• Step 3. Calculate the test statistic.
− Once the ANOVA summary table is complete, F = MSB / MSW.
• Step 4. Draw the correct conclusion.
− If the calculated F is within the rejection zone, we reject H0. If the calculated F is not within the rejection zone, we fail to reject H0.

Example 13.2
• Assume that a food-manufacturing company conducts three focus groups to test different advertising strategies on customers' perceptions of food quality.
− Three advertising strategies are tested: (1) weight control, (2) all-natural ingredients, and (3) organic/environmental friendliness.
− There are eight participants in each focus group.
− The company wants to find out whether significant differences exist in the evaluation of food quality among the three groups.
• The food quality ratings in the three focus groups give SST = 47.833.
• Step 1. State the pair of hypotheses. H0: µ1 = µ2 = µ3. H1: Not all µ are equal.
• Step 2. Identify the rejection zone. The critical value of F is F(2, 21) = 3.47; the rejection zone is calculated F > 3.47.
• Step 3. Calculate the test statistic. With equal group sizes, the sample mean is X̄ = (6 + 7 + 7.75)/3 = 6.917.
SSB = ∑ nj(X̄j − X̄)² = 8(6 − 6.917)² + 8(7 − 6.917)² + 8(7.75 − 6.917)² = 12.333
SSW = SST − SSB = 47.833 − 12.333 = 35.5
• Step 4. Draw the correct conclusion from the ANOVA summary table.
− The calculated F = 3.648 is within the rejection zone, so we reject H0.
− A significant F test does not tell us exactly where the difference occurs.
− Post hoc comparisons are designed to fulfill this purpose.
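Step 3 of Example 13.2 can be reproduced from the group means, group sizes, and SST alone (a minimal sketch; the function name is my own):

```python
def one_way_anova(group_means, group_sizes, sst):
    """Partition SST into SSB and SSW and compute F = MSB / MSW."""
    n, k = sum(group_sizes), len(group_means)
    # Grand mean as the size-weighted average of the group means.
    grand = sum(m * s for m, s in zip(group_means, group_sizes)) / n
    ssb = sum(s * (m - grand) ** 2 for m, s in zip(group_means, group_sizes))
    ssw = sst - ssb                         # SST = SSB + SSW
    msb, msw = ssb / (k - 1), ssw / (n - k)
    return ssb, ssw, msb / msw

ssb, ssw, f = one_way_anova([6, 7, 7.75], [8, 8, 8], sst=47.833)
print(round(ssb, 3), round(ssw, 3), round(f, 3))   # 12.333 35.5 3.648
```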
Post Hoc Comparisons
• Post hoc means "after the fact."
• Once you conclude that not all group means are equal, you must identify exactly where that significant difference occurs.
• Specifically, in Example 13.2, the calculated F = 3.648 is within the rejection zone.
• To identify the source of the significant difference, you have to compare all possible pairs simultaneously: µ1 versus µ2, µ1 versus µ3, and µ2 versus µ3.

Least Significant Difference (LSD)
• The first post hoc method was developed by Fisher in 1935 to calculate the least significant difference (LSD) between two group means using the equivalent of multiple t tests.
• Any difference between two group means greater than LSD is significant. Any group mean difference equal to or less than LSD is not significant.
• LSD = t · sqrt(MSW(1/n1 + 1/n2)), where t is the critical value of t from a t table with df = (n − k) at the two-tailed α level, and n1 and n2 are the numbers of participants in groups 1 and 2, respectively.
• When the two groups have equal numbers of participants, n1 = n2 = n per group, the formula simplifies to LSD = t · sqrt(2 · MSW / n).
• In Example 13.2, we already calculated MSW = 1.69 with df = 21, and n1 = n2 = 8. The critical t value for a two-tailed α = .05 with df = 21 is 2.08.
LSD = 2.08 × sqrt(2 × 1.69 / 8) = 1.352
• The mean difference between group 1 and group 2 is X̄2 − X̄1 = 7 − 6 = 1, which is less than LSD, so it is not significant.
• The mean difference between group 1 and group 3 is X̄3 − X̄1 = 7.75 − 6 = 1.75.
− This difference is larger than LSD, so it is significant.
• The mean difference between group 2 and group 3 is X̄3 − X̄2 = 7.75 − 7 = 0.75.
− This difference is smaller than LSD, so it is not significant.
• A special caution to keep in mind: LSD does not keep the familywise error rate under control when conducting multiple t tests simultaneously.
• The accumulated Type I error rate from conducting similar comparisons multiple times among k groups is called the familywise error rate (FWER): FWER = 1 − (1 − α)^C, where C is the number of comparisons.

Tukey's HSD Test
• Tukey's honestly significant difference (HSD) formula calculates a critical value for group mean differences while keeping the familywise risk of committing a Type I error under control:
HSD = q · sqrt(MSW / n), where n is the number of participants in each group and both groups have the same number of participants.
• Any group mean difference greater than HSD is significant; any group mean difference equal to or less than HSD is not significant.
• Similar to the t value in the LSD test, a q value needs to be found in a table. The values of q are shown in Appendix G, the Critical Values of the Studentized Range Distribution (q).
• The critical value of q is identified by the same α level as the ANOVA and two degrees of freedom: k, the number of groups, and the df for the MSW, (n − k).
• In Example 13.2, α = .05, k = 3, and n − k = 21. The closest df for MSW in the q table is 20, so the closest q value shown in the table is 3.58.
• HSD = 3.58 × sqrt(1.69 / 8) = 1.645.
− The mean difference between group 1 and group 2 is X̄2 − X̄1 = 7 − 6 = 1. Not significant.
− The mean difference between group 1 and group 3 is X̄3 − X̄1 = 7.75 − 6 = 1.75. Significant.
− The mean difference between group 2 and group 3 is X̄3 − X̄2 = 7.75 − 7 = 0.75. Not significant.
• Because HSD controls for the familywise risk of committing a Type I error, its critical value is set higher than that of LSD, which does not control for that risk: HSD > LSD.

Bonferroni Correction
• The Bonferroni correction simply adjusts the α level of each individual pairwise comparison to control the familywise α level at .05 or another predetermined value:
αB = αFW / C, where αB is the adjusted α level for an individual pairwise comparison, αFW is the predetermined acceptable familywise α level (the default is .05), and C is the number of all possible pairwise comparisons.

Statistical Assumptions of an ANOVA
1. All k populations have equal variance.
2. All k populations are normally distributed. Normal distribution is a basic requirement when group means are compared.
3. All k samples are randomly selected and independent from one another. In one-way ANOVA, we only deal with between-subjects factors; each research participant is assigned to only one group, and each group consists of different individuals.

Effect Size for One-Way ANOVA
• The most common measure of effect size for the one-way ANOVA is η² (eta-squared), which measures the percentage of total variance explained by the independent variable: η² = SSB / SST.
• The hypothesis test can tell you whether the ratio of the between-group variance to the within-group variance is statistically significant. However, when a large sample size is involved, even a small, trivial between-group variance can lead to a statistically significant F value.
• Using the numbers in Example 13.2: η² = 12.333 / 47.833 = .258. The three advertising strategies explained 25.8% of the variance in customers' perceptions of food quality.
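The LSD, HSD, and η² values quoted above can be checked numerically (a sketch; t = 2.08 and q = 3.58 are the table values cited in the slides, and the function names are my own):

```python
import math

def lsd(t, msw, n):
    """Fisher's LSD for equal group sizes: t * sqrt(2 * MSW / n)."""
    return t * math.sqrt(2 * msw / n)

def hsd(q, msw, n):
    """Tukey's HSD: q * sqrt(MSW / n)."""
    return q * math.sqrt(msw / n)

def eta_squared(ssb, sst):
    """Effect size: proportion of total variance explained."""
    return ssb / sst

print(round(lsd(2.08, 1.69, 8), 3))            # 1.352
print(round(hsd(3.58, 1.69, 8), 3))            # 1.645
print(round(eta_squared(12.333, 47.833), 3))   # 0.258
```

Note that HSD (1.645) is larger than LSD (1.352), matching the slides' point that controlling the familywise error rate raises the critical value.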