MBA5652 Columbia Southern Correlation and Regression Analysis & Sun Coast Data Set


Business Finance

MBA5652

Columbia Southern University

Description

Correlation and Regression Analysis Using Sun Coast Data Set

Using the Sun Coast data set, perform a correlation analysis, simple regression analysis, and multiple regression analysis, and interpret the results.

Please follow the Unit V Scholarly Activity template here to complete your assignment.

You will utilize the Microsoft Excel Analysis ToolPak for this assignment.

Example:

  • Correlation Analysis
    • Restate the hypotheses.
    • Provide data output results from the Excel Analysis ToolPak.
    • Interpret the correlation analysis results.
  • Simple Regression Analysis
    • Restate the hypotheses.
    • Provide data output results from the Excel Analysis ToolPak.
    • Interpret the simple regression analysis results.
  • Multiple Regression Analysis
    • Restate the hypotheses.
    • Provide data output results from the Excel Analysis ToolPak.
    • Interpret the multiple regression analysis results.

The title and reference pages do not count toward the page requirement for this assignment. This assignment should be no less than two pages in length, follow APA-style formatting and guidelines, and use references and citations as necessary.

Unformatted Attachment Preview

UNIT V STUDY GUIDE
Data Analysis: Correlation and Regression

Course Learning Outcomes for Unit V

Upon completion of this unit, students should be able to:

6. Differentiate between various research-based tools commonly used in businesses.
   6.1 Determine the most appropriate statistical procedure to use from among correlation, simple regression, and multiple regression to test hypotheses.

7. Test data for a business research project.
   7.1 Establish whether to accept or reject null and alternative hypotheses by using correlation, simple regression, and multiple regression.

Course/Unit Learning Outcomes and Learning Activities

6.1: Unit Lesson; Video: How to Find Correlation in Excel with the Data Analysis Toolpak; Video: How to Use Excel-The PEARSON Function; Video: Excel 2016 Correlation Analysis; Video: How to Calculate a Correlation (and p value) in Microsoft Excel; Video: Correlation Coefficient in Excel; Video: How to Perform a Linear or Multiple Regression (Excel 2013); Video: Multiple Regression Interpretation in Excel; Unit V Scholarly Activity

7.1: Unit Lesson; Video: Excel 2016 Correlation Analysis; Video: How to Calculate a Correlation (and p value) in Microsoft Excel; Video: Correlation Coefficient in Excel; Video: Multiple Regression Interpretation in Excel; Unit V Scholarly Activity

Reading Assignment

In order to access the following resources, click the links below:

Glen, S. (2013, December 14). How to find correlation in Excel with the Data Analysis Toolpak [Video file]. Retrieved from https://www.youtube.com/watch?v=AjQA78tI39Q

TheRMUoHP Biostatistics Resource Channel. (2014, November 6). How to use Excel-The PEARSON Function [Video file]. Retrieved from https://www.youtube.com/watch?v=JO-Gc5bEG70

Porterfield, T. (2017, May 18). Excel 2016 correlation analysis [Video file]. Retrieved from https://www.youtube.com/watch?v=kr64tfZmiGA
Quantitative Specialists. (2014, September 15). How to calculate a correlation (and p-value) in Microsoft Excel [Video file]. Retrieved from https://www.youtube.com/watch?v=vFcxExzLfZI

MrSnyder88. (2009, November 8). Correlation coefficient in Excel [Video file]. Retrieved from https://www.youtube.com/watch?v=s2TVkYmmCAs

economistician.com. (2015, May 15). How to perform a linear or multiple regression (Excel 2013) [Video file]. Retrieved from https://www.youtube.com/watch?v=wBocR96UdyY

TheWoundedDoctor. (2013, May 6). Multiple regression interpretation in Excel [Video file]. Retrieved from https://www.youtube.com/watch?v=tlbdkgYz7FM

Unit Lesson

Data Analysis: Correlation and Regression

Unit IV discussed descriptive statistics and the importance of testing the data to ensure assumptions are met before using parametric statistical procedures. When using descriptive statistics, the data that are collected are described by the researcher both visually and statistically. The visual representation alone can reveal information about whether assumptions are met. Although all statistical tests have different assumptions, normality is universally shared and is relatively easy to observe through the use of histograms.

It is preferable to use parametric tests since they are more powerful than non-parametric tests, which have fewer assumptions that must be met. Regardless of the statistical procedure under consideration, the assumptions must be met before the researcher can have confidence in the validity of the results. Units V through VII will focus on inferential statistics, which include the parametric tests of correlation, regression, t test, and ANOVA.
Inferential Statistics

Unlike descriptive statistics, inferential statistics go beyond simply describing the data to making inferences, or predictions, about a population. The inferences are often based on the characteristics of a sample. Inferences, or predictions, are stated in the form of hypotheses. Results of statistical tests on samples are used to generalize those results to a population (Zikmund, Babin, Carr, & Griffin, 2013). Descriptive statistics and inferential statistics are not mutually exclusive. In fact, performing descriptive statistics should always be a precursor to inferential statistics, in order to test the assumptions of the statistical procedures being considered.

Populations, Samples, and Generalization

Statistical procedures are used to answer questions about a population. A population can be people or things, such as a company's entire consumer base or the total units produced for a new product. A population can be very large or very small. For example, a company may collect productivity data on its 100 employees. The company is interested in knowing if there is a relationship between the size of merit increases and job productivity. The 100 employees represent the entire population, which would be considered a census. Since data are collected from all 100 employees, the company can have certainty that the statistical results represent the entire population.

In many instances, however, it is impractical and cost prohibitive to collect data from all participants in the population. In these scenarios, data are collected from a sample of the population. The statistical results from the sample are then used to generalize the findings to the population. Using the example above, now assume the company has a population of 200,000 employees. The company decides to select a random sample of 100 employees to whom it has provided various merit increases.
Like the example above, the company's interest is to understand if there is a relationship between the size of merit increases and job productivity. If it determines that there is a statistically significant relationship between the size of merit increases and productivity, it can generalize those results to the population of 200,000 employees. This can inform its decision-making and planning regarding the size of raises to provide for the next fiscal year and the productivity increase it can forecast. This is the function of inferential statistics.

Relationships or Differences

Statistical analysis can be simplified as either looking for relationships (or associations) between variables or looking for differences between variables or groups. This unit considers statistical testing that looks for relationships between variables. The statistical procedures highlighted to test for relationships will be correlation, simple regression, and multiple regression. Correlation and regression analyses are parametric tests. Chi-square is a corresponding non-parametric test.

Correlation

Although many course concepts in research methods may be new and foreign, correlation may feel more familiar and comfortable. The concept of correlation makes intuitive sense to most people since relationships between variables (e.g., years of education and income, safety training hours and lost time hours, and hours of exercise and weight loss) occur frequently in daily life. Relationships naturally occurring between variables can be positive or negative. A positive or negative relationship between variables does not mean positive or negative in the context of making a value judgment of good or bad. A positive or negative relationship, in statistical terms, indicates the direction of the relationship. An example of a positive relationship between variables is durable goods orders and the S&P 500 index.
When durable goods orders decrease, there is a decrease in the S&P 500 index. When durable goods orders increase, there is an increase in the S&P 500 index. This is a positive relationship because both variables move in the same direction. As one variable increases, the other increases. Conversely, when one variable decreases, the other decreases.

An example of a negative relationship between variables is outdoor temperature and heating oil expenditures. When the outdoor temperature increases, heating oil expenditures decrease. When the outdoor temperature decreases, heating oil expenditures increase. This is a negative relationship because the variables move in opposite directions. As one variable increases, the other decreases. Conversely, when one variable decreases, the other increases.

Another important distinction that must be understood is the difference between correlation and causation. Even if a statistical test (e.g., Pearson's r) indicates a statistically significant relationship between variables, it must never be said that one variable causes the change in the other variable. For example, there is a positive correlation between ice cream sales and violent crime in New York City (both increase in the warmer months of the year, and both decrease in the cooler months). It would be absurd to say that ice cream causes violent crime, even though the relationship between the variables does exist. This extreme example makes the point that correlation does not mean causation. Causation can only be statistically shown via experimental research designs, which have tight controls to manipulate variables.

Pearson Correlation Coefficient (r)

When conducting correlation analysis, the Pearson correlation coefficient (r) is the most commonly used parametric measure of association between two variables (Norusis, 2008).
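The direction of a relationship shows up directly in the sign of Pearson's r. The short Python sketch below computes r as the standardized covariance of two variables; the numbers are hypothetical, invented only to mirror the durable goods and heating oil examples above.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's r: the standardized covariance between two variables."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    ss_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (ss_x * ss_y)

# Hypothetical positive relationship: durable goods orders and the S&P 500 index
orders = [10, 12, 15, 18, 21]
sp500 = [2400, 2500, 2650, 2800, 2950]
print(pearson_r(orders, sp500))    # positive r: the variables move together

# Hypothetical negative relationship: outdoor temperature and heating oil expenditures
temperature = [30, 40, 50, 60, 70]
heating_oil = [500, 420, 300, 180, 90]
print(pearson_r(temperature, heating_oil))  # negative r: the variables move in opposite directions
```

The Analysis ToolPak's Correlation tool and Excel's PEARSON function return this same statistic.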
The Pearson statistic is represented by r, which is the standardized covariance between the variables, and measures the linear relationship between variables (Field, 2005). The Pearson correlation coefficient is sometimes represented by R, but R is normally used in the context of regression analysis. One can determine how to calculate r by hand by referring to a statistics textbook, but it is much easier and faster to use statistical software to calculate the Pearson correlation coefficient. For the purposes of this course, it is most important to understand what Pearson's r is, what it measures, and how to interpret it, rather than how to calculate it by hand.

When using correlation analysis, a hypothesis is tested that there is no statistically significant relationship between variables. The null and alternative hypotheses would be stated as follows.

Ho1: There is no statistically significant relationship between X and Y.
Ha1: There is a statistically significant relationship between X and Y.

As mentioned above, the r statistic can indicate a positive relationship or a negative relationship between variables. The r statistic can also indicate no relationship at all between variables. An r of +1 indicates a perfect positive correlation, while an r of -1 indicates a perfect negative correlation (Field, 2005). The r statistic will always fall between +1 and -1. An r of 0 indicates that no correlation exists between the variables.

When reviewing the literature for research articles, it is very common to find r statistics less than .5. Given the fact that an r of 1 indicates a perfect correlation, a statistically significant r of .5 or less hardly seems large enough to get excited about; however, the American Psychological Association would disagree.
The American Psychological Association (as cited in Kerr, Garvin, Heaton, & Boyle, 2006) concluded that psychologists studying highly complex human behavior should be satisfied with correlations in the r = 0.10 to 0.20 range, and they should be generally pleased with correlations in the 0.25–0.35 area. The best new variables typically increase predictions, for instance, of job performance between 1% and 4%. A 10% contribution of emotional intelligence would be considered very large (Kerr et al., 2006).

Although there are no concrete guidelines for interpreting r and R2, the following chart suggests some general guidelines that are fairly consistent with other rule-of-thumb published guidelines.

Adapted from Guideline for Interpreting Correlation Coefficient by I. Phanny, 2014 (https://www.slideshare.net/phannithrupp/guideline-for-interpreting-correlation-coefficient/2).

Coefficient of Determination (R2)

The Pearson's r is useful in itself, but the closely related coefficient of determination (R2) is also very informative. Simply squaring r produces R2, which indicates the amount of variability in one variable that is explained by the other variable (Field, 2005). According to the American Psychological Association (as cited in Kerr et al., 2006), a researcher should be generally pleased with a correlation of r = .25, which translates to a coefficient of determination of R2 = .0625. This means that the variable x explains 6.25% of the variability in the variable y. Most statistical software programs will calculate both r and R2 when running correlation analysis, so it is easy to see the strength of the association and the explained variance. Again, it is important not to confuse correlation with causation.
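The arithmetic behind the coefficient of determination is just squaring, as this small Python sketch shows for the r values discussed above:

```python
# R^2 is the square of Pearson's r; it gives the share of variance explained.
for r in (0.10, 0.25, 0.30, 0.50):
    r_squared = r ** 2
    print(f"r = {r:.2f} -> R^2 = {r_squared:.4f} ({r_squared:.2%} of variance explained)")
```

Note how quickly explained variance shrinks: the APA's "generally pleasing" r = .25 explains only 6.25% of the variance.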
Examples of r and R2:

r = .10, R2 = .01: explains 1% of the total variance between the variables being tested
r = .30, R2 = .09: explains 9% of the total variance between the variables being tested
r = .50, R2 = .25: explains 25% of the total variance between the variables being tested

Interpreting Correlation Output Results

The following correlation analysis looked for a statistically significant relationship between the variables of height and weight. The results show that there is a moderately strong correlation, r = .6 (Pearson's correlation). It is also necessary to assess whether the correlation is statistically significant using an alpha of .05. The results indicate a p value of .023 < .05. Therefore, the null hypothesis is rejected, and the alternative hypothesis is accepted.

Reject Ho1: There is no statistically significant relationship between weight and height.
Accept Ha1: There is a statistically significant relationship between weight and height.

Although the information obtained through correlation analysis is revealing and useful, it is limited in that correlation analysis cannot be used to make predictions (Field, 2005). To be able to predict the value of a dependent variable (DV) from observations of the independent variable (IV), regression analysis must be used.

Regression Analysis

Relationships between variables can be useful for making predictions. Regression analysis is a concept that many students have heard of, even if they are not entirely comfortable with it. If the relationship between the variables X and Y is known, predictions can be made about how a change in X will relate to a change in Y. Remember that this is not stating that a change in X causes a change in Y. It is only possible to predict a change based on the relationship between variables. Regression analysis can be powerful, especially when multiple X variables are included (multiple regression) to make a prediction about a change in a single Y variable.
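The predictive model regression produces is a fitted line. A minimal ordinary least squares fit can be sketched in Python as follows; the data are hypothetical, used only to illustrate the mechanics that Excel's Regression tool performs.

```python
def fit_simple_regression(xs, ys):
    """Ordinary least squares fit of Y = a + b*X; returns (a, b)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of X and Y divided by the variance of X
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the fitted line passes through the point of means
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data in the spirit of the cigarette consumption (IV) vs. mortality (DV) example
x = [5, 10, 15, 20, 25]
y = [60, 90, 110, 150, 170]
a, b = fit_simple_regression(x, y)
print(f"Y = {a:.2f} + {b:.2f}(X)")   # -> Y = 32.00 + 5.60(X)
```

This is the "line of best fit" idea: the coefficients minimize the squared differences between the observed points and the line.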
When using regression analysis, a hypothesis is tested that there is no statistically significant prediction of the dependent variable (i.e., Y, or outcome variable) by one or more independent variables (X). If a single independent variable is used to predict Y, it is termed simple regression. If two or more independent X variables are used to predict Y, it is termed multiple regression. The null and alternative hypotheses would be stated as follows.

Ho1: There is no statistically significant relationship to predict Y from X1, X2…and Xn.
Ha1: There is a statistically significant relationship to predict Y from X1, X2…and Xn.

Regression analysis uses a linear model to apply a line of best fit to the data. The line of best fit is optimal because it results in the smallest amount of difference between the observed data points and the line (Field, 2005). As the linear regression example below shows, a line of best fit is applied to the data for the variables mortality (DV) and cigarette consumption (IV). This is an example of simple linear regression because there is only one IV. If all of the data points fell on a straight line, it would be a perfect linear relationship, which would allow us to make a perfect prediction of the Y-axis variable by looking at the X-axis variable (Norusis, 2008). A perfect linear relationship is rare, so we develop the regression model as Y = a + b(X). The resulting mathematical model is tested for statistical significance. If it is statistically significant, at a p value of less than .05, the IV data can be plugged into the model to be multiplied by the calculated coefficient and added to the calculated constant (Y-intercept, or a0), resulting in the predicted DV. The statistical software will calculate the model and values for a0 and b1, which will appear as the following equation:

Y = a0 + b1(X)
or
DV = a0 + b1(IV1)

Adapted from images in Multiple Linear Regression by J. Neill, 2008 (https://www.slideshare.net/jtneill/multiple-linear-regression).

Simple regression creates the statistical model, shown above, with a single independent variable (IV), sometimes referred to as a predictor variable, and a single DV, sometimes referred to as the outcome or criterion variable. Multiple regression creates a statistical model with a single DV and two or more IVs. The multiple regression model is similar to the simple regression equation in that it still contains a Y-intercept, or a, but the multiple regression model contains multiple IVs and multiple corresponding coefficients, or bx, as shown below.

Y = a0 + b1X1 + b2X2 +…+ bnXn
or
DV = a0 + b1(IV1) + b2(IV2) +…+ bn(IVn)

If the multiple regression model is statistically significant, at a p value of less than .05, the IV data can be plugged into the model to be multiplied by the calculated coefficients and added to the calculated constant (Y-intercept, or a0), resulting in the predicted DV.

Interpreting Regression Output Results

Interpreting simple and multiple regression output is similar. There are several key test statistics and p values that are returned in a regression analysis that must be evaluated to a) determine statistical significance and b) assess the strength of the linear regression model.

Adapted from images in Multiple Linear Regression by J. Neill, 2008 (https://www.slideshare.net/jtneill/multiple-linear-regression).

Multiple R: This is Pearson's r, as discussed in the correlation section; regression often uses a capital R instead of r. It is the square root of R2 and describes the strength of the correlation between the model and the dependent variable. In the regression output below, the multiple R figure of 99.2% indicates a very strong positive correlation between the regression model and the dependent (output) variable.
R square (R2): This is the coefficient of determination, as discussed in the correlation section; regression often uses a capital R. R2 indicates the amount of variation in the dependent (output) variable that is explained by the regression model. In the regression output below, the R square figure indicates that 98.3% of the variation in the dependent variable is explained by the regression model. This is a very high R2.

ANOVA: This indicates whether the regression model is statistically significant in its ability to predict the dependent variable. ANOVA uses significance F for probability, and this is synonymous with the p value discussed previously in the course. A significance level of F < .05 indicates statistical significance. In the regression output below, the significance level of F = .000009 < .05 would indicate that the null hypothesis should be rejected, and the alternative accepted, that there is a statistically significant relationship between the regression model and the dependent variable.

The t Stat: This assesses the statistical significance of the individual predictor variable coefficients. A p value < .05 for any given t stat indicates statistical significance for the corresponding coefficient. In the regression output below, the p values for the coefficients of variables 1, 2, and 3 are all

Explanation & Answer

See attached:)

[Attached workbook, Sheet 1 — correlation data: 103 job sites, with columns "job site" (1–103), "microns" (particulate size), and "mean annual sick days per employee". Sample rows: site 1: 4 microns, 11 sick days; site 2: 6.5 microns, 7 sick days; …; site 103: 5 microns, 9 sick days.]

Correlation of microns with mean annual sick days per employee: r = -0.715984185
SUMMARY OUTPUT (simple regression: microns predicting mean annual sick days per employee)

Regression Statistics
Multiple R          0.715984185
R Square            0.512633354
Adjusted R Square   0.507807941
Standard Error      1.327783455
Observations        103

ANOVA
             df    SS            MS          F          Significance F
Regression    1    187.2953239   187.2953    106.2362   1.89059E-17
Residual    101    178.0638994   1.763009
Total       102    365.3592233

            Coefficients    Standard Error   t Stat     P-value     Lower 95%      Upper 95%
Intercept   10.08144483     0.315156969      31.98865   1.17E-54    9.456258184    10.70663
microns     -0.522376554    0.050681267      -10.3071   1.89E-17    -0.622914554   -0.42184
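Plugging the estimated coefficients from the output above into the simple regression equation Y = a0 + b1(X) yields a predicted mean annual sick days figure for any particulate size. A minimal Python sketch:

```python
# Coefficients taken from the regression output above
a0 = 10.08144483      # intercept
b1 = -0.522376554     # coefficient for microns

def predicted_sick_days(microns):
    """Predicted mean annual sick days per employee: Y = a0 + b1 * X."""
    return a0 + b1 * microns

# The negative slope means larger particulate sizes predict fewer sick days in this model.
print(predicted_sick_days(4.0))   # approx. 7.99 sick days
print(predicted_sick_days(9.5))   # approx. 5.12 sick days
```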

[Attached workbook, Sheet 2 — simple regression data: 223 contracts, with columns "contract #", "safety training expenditure", and "lost time hours". Sample rows: contract 1116: expenditure 1985.12, 10 lost time hours; contract 205: expenditure 1500.00, 30 hours; contract 2146: expenditure 1126.46, 40 hours.]

SUMMARY OUTPUT (simple regression: safety training expenditure predicting lost time hours)

Regression Statistics
Multiple R          0.939559
R Square            0.882772
Adjusted R Square   0.882241
Standard Error      24.61329
Observations        223

ANOVA
             df    SS         MS        F          Significance F
Regression    1    1008202    1008202   1664.211   7.7E-105
Residual    221    133884.9   605.814
Total       222    1142087

                              Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept                     273.4494       2.665262         102.5976   2.1E-188   268.1968    278.702
safety training expenditure   -0.14337       0.003514         -40.7947   7.7E-105   -0.15029    -0.13644
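As with the first model, the coefficients above can be plugged into Y = a0 + b1(X) to forecast lost time hours from a planned safety training expenditure. A minimal Python sketch:

```python
# Coefficients taken from the regression output above
a0 = 273.4494     # intercept
b1 = -0.14337     # coefficient for safety training expenditure

def predicted_lost_time_hours(expenditure):
    """Predicted lost time hours: Y = a0 + b1 * X."""
    return a0 + b1 * expenditure

# The negative slope: higher safety training spending predicts fewer lost time hours.
print(predicted_lost_time_hours(1000.00))   # approx. 130.1 hours
print(predicted_lost_time_hours(1500.00))   # approx. 58.4 hours
```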

[Attached workbook, Sheet 3 — multiple regression data, with columns "contract #", "Frequency (Hz)", "Angle in Degrees", "Chord Length", "Velocity (Meters per Second)", "Displacement", and "Decibel". The preview of this sheet, including its regression output, is truncated.]
0.1327
0.0372
0.1477
0.1161
0.1541
0.0489
0.0591
0.179
0.1765
0.1491
0.0971
0.154
0.1933
0.1458
0.1364
0.1705
0.1915
0.0907

39.6
39.6
39.6
39.6
39.6
39.6
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
31.7
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6

0.00495741
0.00495741
0.00495741
0.00495741
0.00495741
0.00495741
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00529514
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00497773
0.00579636
0.00579636
0.00579636
0.00579636
0.00579636
0.00579636
0.00579636
0.00579636
0.00579636

120.162
118.922
116.792
115.792
114.042
110.652
123.118
125.398
127.548
128.698
128.708
126.838
124.838
122.088
120.088
119.598
118.108
115.608
113.858
109.718
126.395
128.175
129.575
130.715
131.615
131.755
131.015
129.395
126.645
124.395
123.775
121.775
119.535
117.785
116.165
113.665
110.905
107.405
123.543
126.843
128.633
130.173
131.073
130.723
128.723
126.343
123.213

4
187
134
24
6
135
132
101
34
27
86
184
35
140
173
131
42
95
17
68
197
59
2
119
19
28
65
18
95
201
40
199
1116
205
2146
1603
914
437
598
2801
1420
682
714
2336
2216
1327
1025

2000
2500
3150
4000
5000
1250
1600
2000
2500
3150
4000
5000
6300
8000
10000
12500
16000
20000
315
400
500
630
800
1000
1250
1600
2000
2500
3150
4000
5000
6300
315
400
500
630
800
1000
1250
1600
2000
2500
3150
4000
5000
6300
315

4
4
4
4
4
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0

0.1071
0.0701
0.0824
0.0889
0.1509
0.1841
0.0952
0.1142
0.0788
0.1754
0.0478
0.1279
0.1299
0.1322
0.1343
0.1186
0.0951
0.1286
0.0852
0.0318
0.069
0.1949
0.0807
0.194
0.083
0.1852
0.0719
0.0748
0.0907
0.1268
0.125
0.0859
0.1641
0.0708
0.0366
0.1473
0.1221
0.0533
0.0359
0.0752
0.0322
0.0525
0.0813
0.049
0.1688
0.1373
0.0339

39.6
39.6
39.6
39.6
39.6
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
71.3
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
55.5
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
39.6
31.7

0.00579636
0.00579636
0.00579636
0.00579636
0.00579636
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00214345
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00229336
0.00253511
0.00253511
0.00253511...
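The assignment asks for a correlation analysis and a simple regression analysis of two Sun Coast variables using the Excel ToolPak. As a minimal sketch of what those two ToolPak outputs compute, the Pearson correlation coefficient and the least-squares line can be written out by hand. The data below are toy placeholder values, not actual Sun Coast columns, and the names `x` and `y` are hypothetical stand-ins for a predictor and an outcome variable.

```python
# Hedged sketch: Pearson correlation and simple linear regression computed
# by hand, mirroring the quantities Excel's Data Analysis ToolPak reports.
# x and y are hypothetical placeholders, not actual Sun Coast column names.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def simple_ols(x, y):
    """Intercept b0 and slope b1 of the least-squares line y = b0 + b1*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return b0, b1

if __name__ == "__main__":
    # Toy data standing in for two Sun Coast columns.
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.1, 3.9, 6.2, 8.1, 9.8]
    r = pearson_r(x, y)
    b0, b1 = simple_ols(x, y)
    print(f"r = {r:.4f}, intercept = {b0:.2f}, slope = {b1:.2f}")
```

In the ToolPak output, `r` corresponds to "Multiple R" in the regression summary, and `b0`/`b1` appear as the "Intercept" and X-variable coefficients; the hypothesis tests in the assignment then rest on the p-values Excel attaches to those coefficients.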

