Fundamentals of Statistical Reasoning in Education, 2nd Edition
by Theodore Coladarci (Univ. of Maine); Casey D. Cobb (Univ. of Connecticut); Edward W. Minium (San Jose State Univ.); Robert C. Clarke (San Jose State Univ.) - ISBN: 9780470084069 | 0470084065
- Cover: Paperback
- Copyright: 9/1/2008
Introduction | p. 1 |
Why Statistics? | p. 1 |
Descriptive Statistics | p. 2 |
Inferential Statistics | p. 3 |
The Role of Statistics in Educational Research | p. 4 |
Variables and Their Measurement | p. 5 |
Some Tips on Studying Statistics | p. 9 |
Descriptive Statistics | p. 13 |
Frequency Distributions | p. 15 |
Why Organize Data? | p. 15 |
Frequency Distributions for Quantitative Variables | p. 15 |
Grouped Scores | p. 17 |
Some Guidelines for Forming Class Intervals | p. 18 |
Constructing a Grouped-Data Frequency Distribution | p. 19 |
The Relative Frequency Distribution | p. 21 |
Exact Limits | p. 22 |
The Cumulative Percentage Frequency Distribution | p. 24 |
Percentile Ranks | p. 25 |
Frequency Distributions for Qualitative Variables | p. 27 |
Summary | p. 28 |
Graphic Representation | p. 37 |
Why Graph Data? | p. 37 |
Graphing Qualitative Data: The Bar Chart | p. 37 |
Graphing Quantitative Data: The Histogram | p. 38 |
The Frequency Polygon | p. 42 |
Comparing Different Distributions | p. 43 |
Relative Frequency and Proportional Area | p. 44 |
Characteristics of Frequency Distributions | p. 46 |
The Box Plot | p. 49 |
Summary | p. 51 |
Central Tendency | p. 59 |
The Concept of Central Tendency | p. 59 |
The Mode | p. 59 |
The Median | p. 60 |
The Arithmetic Mean | p. 62 |
Central Tendency and Distribution Symmetry | p. 64 |
Which Measure of Central Tendency to Use? | p. 66 |
Summary | p. 67 |
Variability | p. 75 |
Central Tendency Is Not Enough: The Importance of Variability | p. 75 |
The Range | p. 76 |
Variability and Deviations from the Mean | p. 77 |
The Variance | p. 78 |
The Standard Deviation | p. 79 |
The Predominance of the Variance and Standard Deviation | p. 81 |
The Standard Deviation and the Normal Distribution | p. 81 |
Comparing Means of Two Distributions: The Relevance of Variability | p. 82 |
In the Denominator: n vs. n - 1 | p. 85 |
Summary | p. 85 |
Normal Distributions and Standard Scores | p. 91 |
A Little History: Sir Francis Galton and the Normal Curve | p. 91 |
Properties of the Normal Curve | p. 92 |
More on the Standard Deviation and the Normal Distribution | p. 93 |
z Scores | p. 95 |
The Normal Curve Table | p. 97 |
Finding Area When the Score Is Known | p. 99 |
Reversing the Process: Finding Scores When the Area Is Known | p. 102 |
Comparing Scores from Different Distributions | p. 104 |
Interpreting Effect Size | p. 105 |
Percentile Ranks and the Normal Distribution | p. 107 |
Other Standard Scores | p. 108 |
Standard Scores Do Not "Normalize" a Distribution | p. 110 |
The Normal Curve and Probability | p. 110 |
Summary | p. 111 |
Correlation | p. 119 |
The Concept of Association | p. 119 |
Bivariate Distributions and Scatterplots | p. 119 |
The Covariance | p. 124 |
The Pearson r | p. 130 |
Computation of r: The Calculating Formula | p. 133 |
Correlation and Causation | p. 135 |
Factors Influencing Pearson r | p. 136 |
Judging the Strength of Association: r² | p. 139 |
Other Correlation Coefficients | p. 141 |
Summary | p. 142 |
Regression and Prediction | p. 149 |
Correlation versus Prediction | p. 149 |
Determining the Line of Best Fit | p. 150 |
The Regression Equation in Terms of Raw Scores | p. 153 |
Interpreting the Raw-Score Slope | p. 156 |
The Regression Equation in Terms of z Scores | p. 157 |
Some Insights Regarding Correlation and Prediction | p. 158 |
Regression and Sums of Squares | p. 161 |
Measuring the Margin of Prediction Error: The Standard Error of Estimate | p. 163 |
Correlation and Causality (Revisited) | p. 168 |
Summary | p. 169 |
Inferential Statistics | p. 179 |
Probability and Probability Distributions | p. 181 |
Statistical Inference: Accounting for Chance in Sample Results | p. 181 |
Probability: The Study of Chance | p. 182 |
Definition of Probability | p. 183 |
Probability Distributions | p. 185 |
The OR/Addition Rule | p. 187 |
The AND/Multiplication Rule | p. 188 |
The Normal Curve as a Probability Distribution | p. 189 |
"So What?" Probability Distributions as the Basis for Statistical Inference | p. 192 |
Summary | p. 192 |
Sampling Distributions | p. 197 |
From Coins to Means | p. 197 |
Samples and Populations | p. 198 |
Statistics and Parameters | p. 199 |
Random Sampling Model | p. 200 |
Random Sampling in Practice | p. 202 |
Sampling Distributions of Means | p. 202 |
Characteristics of a Sampling Distribution of Means | p. 204 |
Using a Sampling Distribution of Means to Determine Probabilities | p. 207 |
The Importance of Sample Size (n) | p. 211 |
Generality of the Concept of a Sampling Distribution | p. 212 |
Summary | p. 213 |
Testing Statistical Hypotheses about μ When σ Is Known: The One-Sample z Test | p. 221 |
Testing a Hypothesis about μ: Does "Homeschooling" Make a Difference? | p. 221 |
Dr. Meyer's Problem in a Nutshell | p. 222 |
The Statistical Hypotheses: H₀ and H₁ | p. 223 |
The Test Statistic z | p. 225 |
The Probability of the Test Statistic: The p Value | p. 226 |
The Decision Criterion: Level of Significance (α) | p. 227 |
The Level of Significance and Decision Error | p. 229 |
The Nature and Role of H₀ and H₁ | p. 231 |
Rejection versus Retention of H₀ | p. 232 |
Statistical Significance versus Importance | p. 233 |
Directional and Nondirectional Alternative Hypotheses | p. 235 |
Prologue: The Substantive versus the Statistical | p. 237 |
Summary | p. 239 |
Estimation | p. 247 |
Hypothesis Testing versus Estimation | p. 247 |
Point Estimation versus Interval Estimation | p. 248 |
Constructing an Interval Estimate of μ | p. 249 |
Interval Width and Level of Confidence | p. 252 |
Interval Width and Sample Size | p. 253 |
Interval Estimation and Hypothesis Testing | p. 253 |
Advantages of Interval Estimation | p. 255 |
Summary | p. 256 |
Testing Statistical Hypotheses about μ When σ Is Not Known: The One-Sample t Test | p. 263 |
Reality: σ Often Is Unknown | p. 263 |
Estimating the Standard Error of the Mean | p. 264 |
The Test Statistic t | p. 266 |
Degrees of Freedom | p. 267 |
The Sampling Distribution of Student's t | p. 268 |
An Application of Student's t | p. 270 |
Assumption of Population Normality | p. 272 |
Levels of Significance versus p Values | p. 273 |
Constructing a Confidence Interval for μ When σ Is Not Known | p. 275 |
Summary | p. 275 |
Comparing the Means of Two Populations: Independent Samples | p. 283 |
From One Mu to Two | p. 283 |
Statistical Hypotheses | p. 284 |
The Sampling Distribution of Differences Between Means | p. 285 |
Estimating σ_{X̄₁−X̄₂} | p. 288 |
The t Test for Two Independent Samples | p. 289 |
Testing Hypotheses about Two Independent Means: An Example | p. 290 |
Interval Estimation of μ₁ − μ₂ | p. 293 |
Appraising the Magnitude of a Difference: Measures of Effect Size for X̄₁ − X̄₂ | p. 295 |
How Were Groups Formed? The Role of Randomization | p. 299 |
Statistical Inferences and Nonstatistical Generalizations | p. 300 |
Summary | p. 301 |
Comparing the Means of Dependent Samples | p. 309 |
The Meaning of "Dependent" | p. 309 |
Standard Error of the Difference Between Dependent Means | p. 310 |
Degrees of Freedom | p. 312 |
The t Test for Two Dependent Samples | p. 312 |
Testing Hypotheses about Two Dependent Means: An Example | p. 315 |
Interval Estimation of μ_D | p. 317 |
Summary | p. 318 |
Comparing the Means of Three or More Independent Samples: One-Way Analysis of Variance | p. 327 |
Comparing More Than Two Groups: Why Not Multiple t Tests? | p. 327 |
The Statistical Hypotheses in One-Way ANOVA | p. 328 |
The Logic of One-Way ANOVA: An Overview | p. 329 |
Alison's Reply to Gregory | p. 332 |
Partitioning the Sums of Squares | p. 333 |
Within-Groups and Between-Groups Variance Estimates | p. 337 |
The F Test | p. 337 |
Tukey's "HSD" Test | p. 339 |
Interval Estimation of μ_i − μ_j | p. 342 |
One-Way ANOVA: Summarizing the Steps | p. 343 |
Estimating the Strength of the Treatment Effect: Effect Size (ω²) | p. 345 |
ANOVA Assumptions (and Other Considerations) | p. 346 |
Summary | p. 347 |
Inferences about the Pearson Correlation Coefficient | p. 357 |
From μ to ρ | p. 357 |
The Sampling Distribution of r When ρ = 0 | p. 357 |
Testing the Statistical Hypothesis That ρ = 0 | p. 359 |
An Example | p. 359 |
Table E | p. 361 |
The Role of n in the Statistical Significance of r | p. 363 |
Statistical Significance versus Importance (Again) | p. 364 |
Testing Hypotheses Other Than ρ = 0 | p. 364 |
Interval Estimation of ρ | p. 365 |
Summary | p. 367 |
Making Inferences from Frequency Data | p. 375 |
Frequency Data versus Score Data | p. 375 |
A Problem Involving Frequencies: The One-Variable Case | p. 376 |
χ²: A Measure of Discrepancy Between Expected and Observed Frequencies | p. 377 |
The Sampling Distribution of χ² | p. 379 |
Completion of the Voter Survey Problem: The χ² Goodness-of-Fit Test | p. 380 |
The χ² Test of a Single Proportion | p. 381 |
Interval Estimate of a Single Proportion | p. 383 |
When There Are Two Variables: The χ² Test of Independence | p. 385 |
Finding Expected Frequencies in the Two-Variable Case | p. 386 |
Calculating the Two-Variable χ² | p. 387 |
The χ² Test of Independence: Summarizing the Steps | p. 389 |
The 2 x 2 Contingency Table | p. 390 |
Testing a Difference Between Two Proportions | p. 391 |
The Independence of Observations | p. 391 |
χ² and Quantitative Variables | p. 392 |
Other Considerations | p. 393 |
Summary | p. 393 |
Statistical "Power" (and How to Increase It) | p. 403 |
The Power of a Statistical Test | p. 403 |
Power and Type II Error | p. 404 |
Effect Size (Revisited) | p. 405 |
Factors Affecting Power: The Effect Size | p. 406 |
Factors Affecting Power: Sample Size | p. 407 |
Additional Factors Affecting Power | p. 408 |
Significance versus Importance | p. 410 |
Selecting an Appropriate Sample Size | p. 410 |
Summary | p. 414 |
References | p. 419 |
Review of Basic Mathematics | p. 421 |
Introduction | p. 421 |
Symbols and Their Meaning | p. 421 |
Arithmetic Operations Involving Positive and Negative Numbers | p. 422 |
Squares and Square Roots | p. 422 |
Fractions | p. 423 |
Operations Involving Parentheses | p. 424 |
Approximate Numbers, Computational Accuracy, and Rounding | p. 425 |
Answers to Selected End-of-Chapter Problems | p. 426 |
Statistical Tables | p. 448 |
Index | p. 461 |
Useful Formulas | p. 479 |
Table of Contents provided by Ingram. All Rights Reserved.