Overview
Citation
Licensing
Dedication
Preface
Preface to Version 0.70.1
Preface to Version 0.70
Preface to Version 0.65
Preface to Version 0.6
Preface to Version 0.5
Preface to Version 0.4
Preface to Version 0.3
Part I. Background
1
Why do we learn statistics?
1.1
On the psychology of statistics
1.1.1
The curse of belief bias
1.2
The cautionary tale of Simpson’s paradox
1.3
Statistics in psychology
1.4
Statistics in everyday life
1.5
There’s more to research methods than statistics
2
A brief introduction to research design
2.1
Introduction to psychological measurement
2.1.1
Some thoughts about psychological measurement
2.1.2
Operationalisation: defining your measurement
2.2
Scales of measurement
2.2.1
Nominal scale
2.2.2
Ordinal scale
2.2.3
Interval scale
2.2.4
Ratio scale
2.2.5
Continuous versus discrete variables
2.2.6
Some complexities
2.3
Assessing the reliability of a measurement
2.4
The “role” of variables: predictors and outcomes
2.5
Experimental and non-experimental research
2.5.1
Experimental research
2.5.2
Non-experimental research
2.6
Assessing the validity of a study
2.6.1
Internal validity
2.6.2
External validity
2.6.3
Construct validity
2.6.4
Face validity
2.6.5
Ecological validity
2.7
Confounds, artifacts and other threats to validity
2.7.1
History effects
2.7.2
Maturation effects
2.7.3
Repeated testing effects
2.7.4
Selection bias
2.7.5
Differential attrition
2.7.6
Non-response bias
2.7.7
Regression to the mean
2.7.8
Experimenter bias
2.7.9
Demand effects and reactivity
2.7.10
Placebo effects
2.7.11
Situation, measurement and subpopulation effects
2.7.12
Fraud, deception and self-deception
2.8
Summary
Part II. An introduction to jamovi
3
Getting started with jamovi
3.1
Installing jamovi
3.1.1
Starting up jamovi
3.2
Analyses
3.3
The spreadsheet
3.3.1
Variables
3.3.2
Computed variables
3.3.3
Copy and Paste
3.3.4
Syntax mode
3.4
Loading data in jamovi
3.4.1
Importing data from csv files
3.5
Importing unusual data files
3.5.1
Loading data from text files
3.5.2
Loading data from SPSS (and other statistics packages)
3.5.3
Loading Excel files
3.6
Changing data from one level to another
3.7
Installing add-on modules into jamovi
3.8
Quitting jamovi
3.9
Summary
Part III. Working with data
4
Descriptive statistics
4.1
Measures of central tendency
4.1.1
The mean
4.1.2
Calculating the mean in jamovi
4.1.3
The median
4.1.4
Mean or median? What’s the difference?
4.1.5
A real life example
4.1.6
Mode
4.2
Measures of variability
4.2.1
Range
4.2.2
Interquartile range
4.2.3
Mean absolute deviation
4.2.4
Variance
4.2.5
Standard deviation
4.2.6
Which measure to use?
4.3
Skew and kurtosis
4.4
Descriptive statistics separately for each group
4.5
Standard scores
4.6
Summary
4.6.1
Epilogue: Good descriptive statistics are descriptive!
5
Drawing graphs
5.1
Histograms
5.2
Boxplots
5.2.1
Violin plots
5.2.2
Drawing multiple boxplots
5.2.3
Using box plots to detect outliers
5.3
Bar graphs
5.4
Saving image files using jamovi
5.5
Summary
6
Pragmatic matters
6.1
Tabulating and cross-tabulating data
6.1.1
Creating tables for single variables
6.1.2
Adding percentages to a contingency table
6.2
Logical expressions in jamovi
6.2.1
Assessing mathematical truths
6.2.2
Logical operations
6.2.3
Applying logical operations to text
6.3
Transforming and recoding a variable
6.3.1
Creating a transformed variable
6.3.2
Collapsing a variable into a smaller number of discrete levels or categories
6.3.3
Creating a transformation that can be applied to multiple variables
6.4
A few more mathematical functions and operations
6.4.1
Logarithms and exponentials
6.5
Extracting a subset of the data
6.6
Summary
Part IV. Statistical Theory
Prelude to Part IV
On the limits of logical reasoning
Learning without making assumptions is a myth
7
Introduction to probability
7.1
How are probability and statistics different?
7.2
What does probability mean?
7.2.1
The frequentist view
7.2.2
The Bayesian view
7.2.3
What’s the difference? And who is right?
7.3
Basic probability theory
7.3.1
Introducing probability distributions
7.4
The binomial distribution
7.4.1
Introducing the binomial
7.5
The normal distribution
7.5.1
Probability density
7.6
Other useful distributions
7.7
Summary
8
Estimating unknown quantities from a sample
8.1
Samples, populations and sampling
8.1.1
Defining a population
8.1.2
Simple random samples
8.1.3
Most samples are not simple random samples
8.1.4
How much does it matter if you don’t have a simple random sample?
8.1.5
Population parameters and sample statistics
8.2
The law of large numbers
8.3
Sampling distributions and the central limit theorem
8.3.1
Sampling distribution of the mean
8.3.2
Sampling distributions exist for any sample statistic!
8.3.3
The central limit theorem
8.4
Estimating population parameters
8.4.1
Estimating the population mean
8.4.2
Estimating the population standard deviation
8.5
Estimating a confidence interval
8.5.1
A slight mistake in the formula
8.5.2
Interpreting a confidence interval
8.5.3
Calculating confidence intervals in jamovi
8.6
Summary
9
Hypothesis testing
9.1
A menagerie of hypotheses
9.1.1
Research hypotheses versus statistical hypotheses
9.1.2
Null hypotheses and alternative hypotheses
9.2
Two types of errors
9.3
Test statistics and sampling distributions
9.4
Making decisions
9.4.1
Critical regions and critical values
9.4.2
A note on statistical “significance”
9.4.3
The difference between one sided and two sided tests
9.5
The \(p\) value of a test
9.5.1
A softer view of decision making
9.5.2
The probability of extreme data
9.5.3
A common mistake
9.6
Reporting the results of a hypothesis test
9.6.1
The issue
9.6.2
Two proposed solutions
9.7
Running the hypothesis test in practice
9.8
Effect size, sample size and power
9.8.1
The power function
9.8.2
Effect size
9.8.3
Increasing the power of your study
9.9
Some issues to consider
9.9.1
Neyman versus Fisher
9.9.2
Bayesians versus frequentists
9.9.3
Traps
9.10
Summary
Part V. Statistical Tools
10
Categorical data analysis
10.1
The \(\chi^2\) (chi-square) goodness-of-fit test
10.1.1
The cards data
10.1.2
The null hypothesis and the alternative hypothesis
10.1.3
The “goodness-of-fit” test statistic
10.1.4
The sampling distribution of the GOF statistic
10.1.5
Degrees of freedom
10.1.6
Testing the null hypothesis
10.1.7
Doing the test in jamovi
10.1.8
Specifying a different null hypothesis
10.1.9
How to report the results of the test
10.1.10
A comment on statistical notation
10.2
The \(\chi^2\) test of independence (or association)
10.2.1
Constructing our hypothesis test
10.2.2
Doing the test in jamovi
10.2.3
Postscript
10.3
The continuity correction
10.4
Effect size
10.5
Assumptions of the test(s)
10.6
The Fisher exact test
10.7
The McNemar test
10.7.1
Doing the McNemar test in jamovi
10.8
What’s the difference between McNemar and independence?
10.9
Summary
11
Comparing two means
11.1
The one-sample \(z\)-test
11.1.1
The inference problem that the test addresses
11.1.2
Constructing the hypothesis test
11.1.3
A worked example, by hand
11.1.4
Assumptions of the \(z\)-test
11.2
The one-sample \(t\)-test
11.2.1
Introducing the \(t\)-test
11.2.2
Doing the test in jamovi
11.2.3
Assumptions of the one-sample \(t\)-test
11.3
The independent samples \(t\)-test (Student test)
11.3.1
The data
11.3.2
Introducing the test
11.3.3
A “pooled estimate” of the standard deviation
11.3.4
Completing the test
11.3.5
Doing the test in jamovi
11.3.6
Positive and negative \(t\) values
11.3.7
Assumptions of the test
11.4
The independent samples \(t\)-test (Welch test)
11.4.1
Doing the Welch test in jamovi
11.4.2
Assumptions of the test
11.5
The paired-samples \(t\)-test
11.5.1
The data
11.5.2
What is the paired-samples \(t\)-test?
11.5.3
Doing the test in jamovi
11.6
One sided tests
11.7
Effect size
11.7.1
Cohen’s \(d\) from one sample
11.7.2
Cohen’s \(d\) from a Student’s \(t\) test
11.7.3
Cohen’s \(d\) from a paired-samples test
11.8
Checking the normality of a sample
11.8.1
QQ plots
11.8.2
Shapiro-Wilk tests
11.8.3
Example
11.9
Testing non-normal data with Wilcoxon tests
11.9.1
Two sample Mann-Whitney U test
11.9.2
One sample Wilcoxon test
11.10
Summary
12
Correlation and linear regression
12.1
Correlations
12.1.1
The data
12.1.2
The strength and direction of a relationship
12.1.3
The correlation coefficient
12.1.4
Calculating correlations in jamovi
12.1.5
Interpreting a correlation
12.1.6
Spearman’s rank correlations
12.2
Scatterplots
12.2.1
More elaborate options
12.3
What is a linear regression model?
12.4
Estimating a linear regression model
12.4.1
Linear regression in jamovi
12.4.2
Interpreting the estimated model
12.5
Multiple linear regression
12.5.1
Doing it in jamovi
12.6
Quantifying the fit of the regression model
12.6.1
The \(R^2\) (R-squared) value
12.6.2
The relationship between regression and correlation
12.6.3
The adjusted \(R^2\) (R-squared) value
12.7
Hypothesis tests for regression models
12.7.1
Testing the model as a whole
12.7.2
Tests for individual coefficients
12.7.3
Running the hypothesis tests in jamovi
12.8
Regarding regression coefficients
12.8.1
Confidence intervals for the coefficients
12.8.2
Calculating standardised regression coefficients
12.9
Assumptions of regression
12.10
Model checking
12.10.1
Three kinds of residuals
12.10.2
Three kinds of anomalous data
12.10.3
Checking the normality of the residuals
12.10.4
Checking for collinearity
12.11
Model selection
12.11.1
Backward elimination
12.11.2
Forward selection
12.11.3
A caveat
12.11.4
Comparing two regression models
12.12
Summary
13
Comparing several means (one-way ANOVA)
13.1
An illustrative data set
13.2
How ANOVA works
13.3
Two formulas for the variance of \(Y\)
13.4
From variances to sums of squares
13.5
From sums of squares to the \(F\)-test
13.7
A worked example
13.8
Running an ANOVA in jamovi
13.9
Using jamovi to specify your ANOVA
13.10
Effect size
13.11
Multiple comparisons and post hoc tests
13.12
Running “pairwise” \(t\)-tests
13.13
Corrections for multiple testing
13.14
Bonferroni corrections
13.15
Holm corrections
13.16
Writing up the post hoc test
13.17
Assumptions of one-way ANOVA
13.18
Checking the homogeneity of variance assumption
13.19
Running the Levene test in jamovi
13.20
Removing the homogeneity of variance assumption
13.21
Checking the normality assumption
13.22
Removing the normality assumption
13.23
The logic behind the Kruskal-Wallis test
13.25
How to run the Kruskal-Wallis test in jamovi
13.26
Repeated measures one-way ANOVA
13.27
Repeated measures ANOVA in jamovi
13.28
The Friedman non-parametric repeated measures ANOVA test
13.29
On the relationship between ANOVA and the Student \(t\)-test
13.30
Summary
14
Factorial ANOVA
14.1
Factorial ANOVA 1: balanced designs, no interactions
14.1.1
What hypotheses are we testing?
14.1.2
Running the analysis in jamovi
14.1.3
How are the sums of squares calculated?
14.1.4
What are our degrees of freedom?
14.1.5
Factorial ANOVA versus one-way ANOVAs
14.1.6
What kinds of outcomes does this analysis capture?
14.2
Factorial ANOVA 2: balanced designs, interactions allowed
14.2.1
What exactly is an interaction effect?
14.2.3
Degrees of freedom for the interaction
14.2.4
Running the ANOVA in jamovi
14.2.5
Interpreting the results
14.3
Effect size
14.3.1
Estimated group means
14.4
Assumption checking
14.4.1
Homogeneity of variance
14.4.2
Normality of residuals
14.5
Analysis of Covariance (ANCOVA)
14.5.1
Running ANCOVA in jamovi
14.6
ANOVA as a linear model
14.6.1
Some data
14.6.2
ANOVA with binary factors as a regression model
14.6.3
How to encode non-binary factors as contrasts
14.6.4
The equivalence between ANOVA and regression for non-binary factors
14.6.5
Degrees of freedom as parameter counting!
14.7
Different ways to specify contrasts
14.7.1
Treatment contrasts
14.7.2
Helmert contrasts
14.7.3
Sum to zero contrasts
14.7.4
Optional contrasts in jamovi
14.8
Post hoc tests
14.9
The method of planned comparisons
14.10
Factorial ANOVA 3: unbalanced designs
14.10.1
The coffee data
14.10.2
“Standard ANOVA” does not exist for unbalanced designs
14.10.3
Type I sum of squares
14.10.4
Type III sum of squares
14.10.5
Type II sum of squares
14.10.6
Effect sizes (and non-additive sums of squares)
14.11
Summary
15
Factor Analysis
15.1
Exploratory Factor Analysis
15.1.1
Checking assumptions
15.1.2
What is EFA good for?
15.1.3
EFA in jamovi
15.1.4
Writing up an EFA
15.2
Principal Component Analysis
15.2.1
Performing PCA in jamovi
15.3
Confirmatory Factor Analysis
15.3.1
CFA in jamovi
15.3.2
Reporting a CFA
15.4
Multi-Trait Multi-Method CFA
15.4.1
MTMM CFA in jamovi
15.5
Internal consistency reliability analysis
15.5.1
Reliability analysis in jamovi
15.6
Summary
16
Bayesian statistics
16.1
Probabilistic reasoning by rational agents
16.1.1
Priors: what you believed before
16.1.2
Likelihoods: theories about the data
16.1.3
The joint probability of data and hypothesis
16.1.4
Updating beliefs using Bayes’ rule
16.2
Bayesian hypothesis tests
16.2.1
The Bayes factor
16.2.2
Interpreting Bayes factors
16.3
Why be a Bayesian?
16.3.1
Statistics that mean what you think they mean
16.3.2
Evidentiary standards you can believe
16.3.3
The \(p\)-value is a lie.
16.3.4
Is it really this bad?
16.4
Bayesian \(t\)-tests
16.4.1
Independent samples \(t\)-test
16.4.2
Paired samples \(t\)-test
16.5
Summary
Part VI. Endings, alternatives and prospects
17
Epilogue
17.1
The undiscovered statistics
17.1.1
Omissions within the topics covered
17.1.2
Statistical models missing from the book
17.1.3
Other ways of doing inference
17.1.4
Miscellaneous topics
17.2
Learning the basics, and learning them in jamovi
References
Learning statistics with jamovi: a tutorial for psychology students and other beginners (Version 0.70)