Math 448
Final
May 7
Name:
ID number:
Instructions:
• Read problems very carefully.
• The correct final answer alone is not sufficient for full credit. Because this is an at-home exam, you have more time to explain your answers clearly.
• Answers that are not fully explained will not receive full credit.
• Answer at most one question per piece of paper, and scan or photograph your work to upload to Gradescope. Label each problem on Gradescope. PROBLEMS THAT ARE NOT CORRECTLY LABELED AND ORIENTED MAY NOT BE GRADED!
• Your final answers need to be simplified only if this is required in the statement of the problem. Otherwise, there is no need to simplify numerical quantities like 0.3 + (3/4) · 0.3 · 7 + 234/5, to reduce fractions like 36/547 to lowest terms, or to simplify factorials such as 5! or binomial coefficients such as (7 choose 3).
• You may use any relevant test or result that we discussed in class, unless the problem explicitly asks you to derive a test.
• When computing p-values, state what you use to compute them; you may instead give bounds on the p-value by using the tables in the book.
Resources used:
I have read the above instructions and the work in this exam is solely my own:
Signature:
Question:   1    2    3    4    5    Total
Points:    10   10   10   10   10    50
Score:
1. (10 points) An infectious disease has two mutations, and researchers are interested in what proportion of the population has each mutation. They took a random sample of n people and checked whether each person had mutation 1, mutation 2, or neither. Let pi be the proportion of the population with mutation i, for i = 1, 2.
Recall: If X1 is the number of people with mutation 1 and X2 is the number of people with mutation 2, then the joint distribution of X1, X2 is multinomial, that is,

    p_{X1,X2}(x1, x2) = (n choose x1, x2, n − x1 − x2) · p1^{x1} p2^{x2} (1 − p1 − p2)^{n − x1 − x2}

where

    (n choose x1, x2, n − x1 − x2) = n! / (x1! x2! (n − x1 − x2)!)
(a) Derive the likelihood ratio test for H0 : p1 = p2 against Ha : p1 ≠ p2 in the large-sample regime. Be sure to indicate what Ω0 and Ω are, as well as the maximum value of the likelihood function on each of these sets. Finding the values where the maximum occurs might look complicated, but this location is somewhat intuitive.
(b) Suppose they test 100 people and find that 18 have mutation 1, 8 have mutation 2, and the remaining people do not have the disease. Is there sufficient evidence to claim the proportions are different at α = .05? What is the p-value?
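As a numerical sanity check for part (b), the following sketch assumes the standard large-sample result that −2 ln λ is approximately chi-square with 1 degree of freedom (the difference in dimension between Ω and Ω0); deriving that statistic is the point of part (a), so this only verifies the arithmetic:

```python
import math

# Observed counts from problem 1(b): n = 100, 18 with mutation 1, 8 with mutation 2.
n, x1, x2 = 100, 18, 8

# Unrestricted MLEs over Omega, and the restricted MLE over Omega_0 (p1 = p2 = p).
p1_hat, p2_hat = x1 / n, x2 / n      # 0.18, 0.08
p_hat = (x1 + x2) / (2 * n)          # pooled estimate 0.13

# -2 ln(lambda): the (1 - p1 - p2) factors cancel here because
# 1 - p1_hat - p2_hat = 1 - 2 * p_hat = 0.74 on both sets.
stat = 2 * (x1 * math.log(p1_hat / p_hat) + x2 * math.log(p2_hat / p_hat))

# Chi-square survival function for 1 df: P(X > x) = erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(stat / 2))

print(round(stat, 3), round(p_value, 3))
```

Comparing the statistic to the χ²(1) critical value 3.841 from the book's tables gives the α = .05 decision.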
2. (10 points) You have two coins, Coin 1 and Coin 2, each with an unknown probability of showing heads when flipped. Each coin is tossed 200 times: Coin 1 shows heads 37 times, and Coin 2 shows heads 44 times. Let p1 be the probability that Coin 1 shows heads, and p2 the probability that Coin 2 shows heads.
(a) Construct an approximate 99% confidence interval for p1 − p2 .
(b) Does the experiment provide sufficient evidence to indicate that p1 and p2 are different? Use an α = 0.01 level.
(c) Use the test statistic given by the difference in the number of heads in each trial to test H0 : p1 = p2 versus Ha : p1 ≠ p2 with α = 0.01.
(d) What is the p-value associated with this test?
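A sketch of the standard large-sample computations for parts (a) and (b), reading the problem as 200 tosses of each coin (an assumption; the wording is ambiguous). The interval uses the unpooled standard error, the test the pooled one:

```python
import math

n = 200
p1_hat, p2_hat = 37 / n, 44 / n
diff = p1_hat - p2_hat

# Unpooled standard error for the difference of two sample proportions.
se = math.sqrt(p1_hat * (1 - p1_hat) / n + p2_hat * (1 - p2_hat) / n)

z = 2.576  # approximate upper .005 standard normal quantile for a 99% interval
ci = (diff - z * se, diff + z * se)

# Two-sided p-value for the large-sample z test, pooling under H0: p1 = p2.
p_pool = (37 + 44) / (2 * n)
se0 = math.sqrt(2 * p_pool * (1 - p_pool) / n)
z_obs = diff / se0
p_value = math.erfc(abs(z_obs) / math.sqrt(2))  # 2 * (1 - Phi(|z_obs|))

print(ci, round(p_value, 3))
```

An interval containing 0 (equivalently, a p-value above α) is consistent with failing to reject H0 at the 0.01 level.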
3. (10 points) Suppose Y is a random sample of size 1 from a population with density function

    f(y | θ) = y^{θ−1} e^{−y} / Γ(θ),  for y ≥ 0,
    f(y | θ) = 0,                      otherwise,

where θ > 0 is an unknown parameter.
Based on the single observation of Y, find the uniformly most powerful test at level α for testing H0 : θ = 1 versus Ha : θ < 1. Explain where you use the fact that Ha is one-sided.
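If the Neyman–Pearson analysis ends up producing a rejection region of the form {Y ≤ c} (a hypothetical form here; deriving the region is the point of the problem), the cutoff can be checked numerically, since at θ = 1 the density above reduces to the standard exponential:

```python
import math

alpha = 0.05  # example level; the problem leaves alpha general

# Hypothetical rejection region {Y <= c}: under H0 (theta = 1) the density
# is e^{-y}, so P(Y <= c) = 1 - exp(-c), giving c = -ln(1 - alpha).
c = -math.log(1 - alpha)

size = 1 - math.exp(-c)  # should recover alpha exactly
print(round(c, 4), size)
```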
4. (10 points) A professor wants to make an exam that is a good assessment of the students' ranking, so they want the score distribution over all students to have a mean of 75 and a standard deviation of 9. The test is given to 5 randomly selected students, whose scores are 71, 67, 59, 89, and 93 out of 100.
Assume the distribution of student scores is normal.
(a) What is the p-value for the test of the null hypothesis H0 : µ = 75 against the alternative hypothesis Ha : µ ≠ 75?
(b) What is the p-value for the test of the null hypothesis H0 : σ² = 9² against the alternative hypothesis Ha : σ² > 9²?
(c) Give a two-sided .95-confidence interval for µ.
(d) Give an upper-tail .95-confidence interval for σ².
What should the professor conclude?
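A sketch of the small-sample summary statistics behind parts (a) and (b), assuming the usual t statistic for a normal mean and the chi-square statistic (n − 1)s²/σ0² for a normal variance; the p-values themselves can then be bounded from the t and χ² tables with n − 1 = 4 degrees of freedom:

```python
import math

scores = [71, 67, 59, 89, 93]
n = len(scores)
ybar = sum(scores) / n                               # sample mean
s2 = sum((y - ybar) ** 2 for y in scores) / (n - 1)  # sample variance

# t statistic for H0: mu = 75 (compare to a t table, 4 df).
t_stat = (ybar - 75) / math.sqrt(s2 / n)

# chi-square statistic for H0: sigma^2 = 9^2 (compare to a chi-square table, 4 df).
chi2_stat = (n - 1) * s2 / 81

print(ybar, s2, round(t_stat, 3), round(chi2_stat, 3))
```

A tiny t statistic with a large chi-square statistic is the pattern to look for when deciding what the professor should conclude about the mean versus the spread.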
5. (10 points) You want to determine whether the percentage of a population that has a disease is greater than 30%, but you have very limited resources. So you propose the following test: people are randomly selected from the population and tested until someone tests positive. Once someone tests positive, the experiment is stopped and the total number of people tested is recorded. Let p be the proportion of the population that has the disease.
We test the null hypothesis H0 : p = .3 against the alternative hypothesis Ha : p > .3.
If 3 or fewer people were tested, you reject H0 in favor of Ha.
(a) In words, what is a type I error for this test?
(b) In words, what is a type II error for this test?
(c) What is α, the probability of a type I error?
(d) What is β, the probability of a type II error, if p = .4?
(e) What is the power of this test as a function of p?
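Since the number of people tested, N, is geometric with success probability p, the rejection probability has a closed form, and parts (c)–(e) can be sketched directly:

```python
# N is geometric with success probability p, so P(N <= 3) = 1 - (1 - p)^3.
# The test rejects H0 exactly when N <= 3.

def power(p):
    """Probability of rejecting H0 as a function of the true p."""
    return 1 - (1 - p) ** 3

alpha = power(0.3)     # type I error: probability of rejecting when p = .3
beta = 1 - power(0.4)  # type II error at p = .4: probability of not rejecting

print(alpha, beta)
```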
Continuous Distributions

Uniform: f(y) = 1/(θ2 − θ1) for θ1 ≤ y ≤ θ2; mean (θ1 + θ2)/2; variance (θ2 − θ1)²/12; MGF (e^{tθ2} − e^{tθ1}) / [t(θ2 − θ1)].

Normal: f(y) = [1/(σ√(2π))] exp[−(y − µ)²/(2σ²)] for −∞ < y < +∞; mean µ; variance σ²; MGF exp(µt + t²σ²/2).

Exponential: f(y) = (1/β) e^{−y/β} for 0 ≤ y < ∞; mean β; variance β²; MGF (1 − βt)^{−1}.

Gamma: f(y) = y^{α−1} e^{−y/β} / [Γ(α) β^α] for 0 ≤ y < ∞; mean αβ; variance αβ²; MGF (1 − βt)^{−α}.

Chi-square (v df): f(y) = y^{v/2−1} e^{−y/2} / [2^{v/2} Γ(v/2)] for y > 0; mean v; variance 2v; MGF (1 − 2t)^{−v/2}.

Beta: f(y) = [Γ(α + β)/(Γ(α)Γ(β))] y^{α−1} (1 − y)^{β−1} for 0 ≤ y ≤ 1; mean α/(α + β); variance αβ / [(α + β)²(α + β + 1)]; MGF does not exist in closed form.

Discrete Distributions

Poisson: p(y) = λ^y e^{−λ} / y! for y = 0, 1, 2, …; mean λ; variance λ; MGF exp[λ(e^t − 1)].

Negative binomial: p(y) = (y − 1 choose r − 1) p^r (1 − p)^{y−r} for y = r, r + 1, …; mean r/p; variance r(1 − p)/p²; MGF [p e^t / (1 − (1 − p) e^t)]^r.
MATHEMATICAL STATISTICS WITH APPLICATIONS
Seventh Edition
Dennis D. Wackerly, University of Florida
William Mendenhall III, University of Florida, Emeritus
Richard L. Scheaffer, University of Florida, Emeritus
Mathematical Statistics with Applications, Seventh Edition
Dennis D. Wackerly, William Mendenhall III, Richard L. Scheaffer
Statistics Editor: Carolyn Crockett
Assistant Editors: Beth Gershman, Catie Ronquillo
Editorial Assistant: Ashley Summers
Technology Project Manager: Jennifer Liang
Marketing Manager: Mandy Jellerichs
Marketing Assistant: Ashley Pickering
Marketing Communications Manager: Darlene
Amidon-Brent
Project Manager, Editorial Production: Hal Humphrey
Art Director: Vernon Boes
Print Buyer: Karen Hunt
Production Service: Matrix Productions Inc.
Copy Editor: Betty Duncan
Cover Designer: Erik Adigard, Patricia McShane
Cover Image: Erik Adigard
Cover Printer: TK
Compositor: International Typesetting and Composition
Printer: TK
© 2008, 2002 Duxbury, an imprint of Thomson Brooks/Cole, a part of The Thomson Corporation. Thomson, the Star logo, and Brooks/Cole are trademarks used herein under license.
Thomson Higher Education, 10 Davis Drive, Belmont, CA 94002-3098, USA
ALL RIGHTS RESERVED. No part of this work
covered by the copyright hereon may be reproduced
or used in any form or by any means—graphic, electronic,
or mechanical, including photocopying, recording,
taping, web distribution, information storage and retrieval
systems, or in any other manner—without the written
permission of the publisher.
Printed in the United States of America
1 2 3 4 5 6 7 14 13 12 11 10 09 08 07
ExamView® and ExamView Pro® are registered
trademarks of FSCreations, Inc. Windows is a registered
trademark of the Microsoft Corporation used herein under
license. Macintosh and Power Macintosh are registered
trademarks of Apple Computer, Inc. Used herein under
license.
© 2008 Thomson Learning, Inc. All Rights Reserved.
Thomson Learning WebTutorTM is a trademark of
Thomson Learning, Inc.
International Student Edition
ISBN-13: 978-0-495-38508-0
ISBN-10: 0-495-38508-5
For more information about our products, contact
us at: Thomson Learning Academic Resource
Center 1-800-423-0563
For permission to use material from this text or
product, submit a request online at
http://www.thomsonrights.com.
Any additional questions about permissions
can be submitted by e-mail to
thomsonrights@thomson.com.
CONTENTS

Preface
Note to the Student

1 What Is Statistics?
1.1 Introduction
1.2 Characterizing a Set of Measurements: Graphical Methods
1.3 Characterizing a Set of Measurements: Numerical Methods
1.4 How Inferences Are Made
1.5 Theory and Reality
1.6 Summary

2 Probability
2.1 Introduction
2.2 Probability and Inference
2.3 A Review of Set Notation
2.4 A Probabilistic Model for an Experiment: The Discrete Case
2.5 Calculating the Probability of an Event: The Sample-Point Method
2.6 Tools for Counting Sample Points
2.7 Conditional Probability and the Independence of Events
2.8 Two Laws of Probability
2.9 Calculating the Probability of an Event: The Event-Composition Method
2.10 The Law of Total Probability and Bayes' Rule
2.11 Numerical Events and Random Variables
2.12 Random Sampling
2.13 Summary

3 Discrete Random Variables and Their Probability Distributions
3.1 Basic Definition
3.2 The Probability Distribution for a Discrete Random Variable
3.3 The Expected Value of a Random Variable or a Function of a Random Variable
3.4 The Binomial Probability Distribution
3.5 The Geometric Probability Distribution
3.6 The Negative Binomial Probability Distribution (Optional)
3.7 The Hypergeometric Probability Distribution
3.8 The Poisson Probability Distribution
3.9 Moments and Moment-Generating Functions
3.10 Probability-Generating Functions (Optional)
3.11 Tchebysheff's Theorem
3.12 Summary

4 Continuous Variables and Their Probability Distributions
4.1 Introduction
4.2 The Probability Distribution for a Continuous Random Variable
4.3 Expected Values for Continuous Random Variables
4.4 The Uniform Probability Distribution
4.5 The Normal Probability Distribution
4.6 The Gamma Probability Distribution
4.7 The Beta Probability Distribution
4.8 Some General Comments
4.9 Other Expected Values
4.10 Tchebysheff's Theorem
4.11 Expectations of Discontinuous Functions and Mixed Probability Distributions (Optional)
4.12 Summary

5 Multivariate Probability Distributions
5.1 Introduction
5.2 Bivariate and Multivariate Probability Distributions
5.3 Marginal and Conditional Probability Distributions
5.4 Independent Random Variables
5.5 The Expected Value of a Function of Random Variables
5.6 Special Theorems
5.7 The Covariance of Two Random Variables
5.8 The Expected Value and Variance of Linear Functions of Random Variables
5.9 The Multinomial Probability Distribution
5.10 The Bivariate Normal Distribution (Optional)
5.11 Conditional Expectations
5.12 Summary

6 Functions of Random Variables
6.1 Introduction
6.2 Finding the Probability Distribution of a Function of Random Variables
6.3 The Method of Distribution Functions
6.4 The Method of Transformations
6.5 The Method of Moment-Generating Functions
6.6 Multivariable Transformations Using Jacobians (Optional)
6.7 Order Statistics
6.8 Summary

7 Sampling Distributions and the Central Limit Theorem
7.1 Introduction
7.2 Sampling Distributions Related to the Normal Distribution
7.3 The Central Limit Theorem
7.4 A Proof of the Central Limit Theorem (Optional)
7.5 The Normal Approximation to the Binomial Distribution
7.6 Summary

8 Estimation
8.1 Introduction
8.2 The Bias and Mean Square Error of Point Estimators
8.3 Some Common Unbiased Point Estimators
8.4 Evaluating the Goodness of a Point Estimator
8.5 Confidence Intervals
8.6 Large-Sample Confidence Intervals
8.7 Selecting the Sample Size
8.8 Small-Sample Confidence Intervals for µ and µ1 − µ2
8.9 Confidence Intervals for σ²
8.10 Summary

9 Properties of Point Estimators and Methods of Estimation
9.1 Introduction
9.2 Relative Efficiency
9.3 Consistency
9.4 Sufficiency
9.5 The Rao–Blackwell Theorem and Minimum-Variance Unbiased Estimation
9.6 The Method of Moments
9.7 The Method of Maximum Likelihood
9.8 Some Large-Sample Properties of Maximum-Likelihood Estimators (Optional)
9.9 Summary

10 Hypothesis Testing
10.1 Introduction
10.2 Elements of a Statistical Test
10.3 Common Large-Sample Tests
10.4 Calculating Type II Error Probabilities and Finding the Sample Size for Z Tests
10.5 Relationships Between Hypothesis-Testing Procedures and Confidence Intervals
10.6 Another Way to Report the Results of a Statistical Test: Attained Significance Levels, or p-Values
10.7 Some Comments on the Theory of Hypothesis Testing
10.8 Small-Sample Hypothesis Testing for µ and µ1 − µ2
10.9 Testing Hypotheses Concerning Variances
10.10 Power of Tests and the Neyman–Pearson Lemma
10.11 Likelihood Ratio Tests
10.12 Summary

11 Linear Models and Estimation by Least Squares
11.1 Introduction
11.2 Linear Statistical Models
11.3 The Method of Least Squares
11.4 Properties of the Least-Squares Estimators: Simple Linear Regression
11.5 Inferences Concerning the Parameters βi
11.6 Inferences Concerning Linear Functions of the Model Parameters: Simple Linear Regression
11.7 Predicting a Particular Value of Y by Using Simple Linear Regression
11.8 Correlation
11.9 Some Practical Examples
11.10 Fitting the Linear Model by Using Matrices
11.11 Linear Functions of the Model Parameters: Multiple Linear Regression
11.12 Inferences Concerning Linear Functions of the Model Parameters: Multiple Linear Regression
11.13 Predicting a Particular Value of Y by Using Multiple Regression
11.14 A Test for H0 : βg+1 = βg+2 = · · · = βk = 0
11.15 Summary and Concluding Remarks

12 Considerations in Designing Experiments
12.1 The Elements Affecting the Information in a Sample
12.2 Designing Experiments to Increase Accuracy
12.3 The Matched-Pairs Experiment
12.4 Some Elementary Experimental Designs
12.5 Summary

13 The Analysis of Variance
13.1 Introduction
13.2 The Analysis of Variance Procedure
13.3 Comparison of More Than Two Means: Analysis of Variance for a One-Way Layout
13.4 An Analysis of Variance Table for a One-Way Layout
13.5 A Statistical Model for the One-Way Layout
13.6 Proof of Additivity of the Sums of Squares and E(MST) for a One-Way Layout (Optional)
13.7 Estimation in the One-Way Layout
13.8 A Statistical Model for the Randomized Block Design
13.9 The Analysis of Variance for a Randomized Block Design
13.10 Estimation in the Randomized Block Design
13.11 Selecting the Sample Size
13.12 Simultaneous Confidence Intervals for More Than One Parameter
13.13 Analysis of Variance Using Linear Models
13.14 Summary

14 Analysis of Categorical Data
14.1 A Description of the Experiment
14.2 The Chi-Square Test
14.3 A Test of a Hypothesis Concerning Specified Cell Probabilities: A Goodness-of-Fit Test
14.4 Contingency Tables
14.5 r × c Tables with Fixed Row or Column Totals
14.6 Other Applications
14.7 Summary and Concluding Remarks

15 Nonparametric Statistics
15.1 Introduction
15.2 A General Two-Sample Shift Model
15.3 The Sign Test for a Matched-Pairs Experiment
15.4 The Wilcoxon Signed-Rank Test for a Matched-Pairs Experiment
15.5 Using Ranks for Comparing Two Population Distributions: Independent Random Samples
15.6 The Mann–Whitney U Test: Independent Random Samples
15.7 The Kruskal–Wallis Test for the One-Way Layout
15.8 The Friedman Test for Randomized Block Designs
15.9 The Runs Test: A Test for Randomness
15.10 Rank Correlation Coefficient
15.11 Some General Comments on Nonparametric Statistical Tests

16 Introduction to Bayesian Methods for Inference
16.1 Introduction
16.2 Bayesian Priors, Posteriors, and Estimators
16.3 Bayesian Credible Intervals
16.4 Bayesian Tests of Hypotheses
16.5 Summary and Additional Comments

Appendix 1 Matrices and Other Useful Mathematical Results
A1.1 Matrices and Matrix Algebra
A1.2 Addition of Matrices
A1.3 Multiplication of a Matrix by a Real Number
A1.4 Matrix Multiplication
A1.5 Identity Elements
A1.6 The Inverse of a Matrix
A1.7 The Transpose of a Matrix
A1.8 A Matrix Expression for a System of Simultaneous Linear Equations
A1.9 Inverting a Matrix
A1.10 Solving a System of Simultaneous Linear Equations
A1.11 Other Useful Mathematical Results

Appendix 2 Common Probability Distributions, Means, Variances, and Moment-Generating Functions
Table 1 Discrete Distributions
Table 2 Continuous Distributions

Appendix 3 Tables
Table 1 Binomial Probabilities
Table 2 Table of e^{−x}
Table 3 Poisson Probabilities
Table 4 Normal Curve Areas
Table 5 Percentage Points of the t Distributions
Table 6 Percentage Points of the χ² Distributions
Table 7 Percentage Points of the F Distributions
Table 8 Distribution Function of U
Table 9 Critical Values of T in the Wilcoxon Matched-Pairs, Signed-Ranks Test; n = 5(1)50
Table 10 Distribution of the Total Number of Runs R in Samples of Size (n1, n2); P(R ≤ a)
Table 11 Critical Values of Spearman's Rank Correlation Coefficient
Table 12 Random Numbers

Answers to Exercises
Index
PREFACE
The Purpose and Prerequisites of this Book
Mathematical Statistics with Applications was written for use with an undergraduate
1-year sequence of courses (9 quarter- or 6 semester-hours) on mathematical statistics.
The intent of the text is to present a solid undergraduate foundation in statistical
theory while providing an indication of the relevance and importance of the theory
in solving practical problems in the real world. We think a course of this type is
suitable for most undergraduate disciplines, ...