library(here)
stress.data = read.csv(here("data/stress.csv"))

library(psych)
describe(stress.data$Stress)
##    vars   n mean   sd median trimmed  mad  min   max range skew kurtosis   se
## X1    1 118 5.18 1.88   5.27    5.17 1.65 0.62 10.32  9.71 0.08     0.22 0.17
mr.model <- lm(Stress ~ Support + Anxiety, data = stress.data)
summary(mr.model)
## 
## Call:
## lm(formula = Stress ~ Support + Anxiety, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1958 -0.8994 -0.1370  0.9990  3.6995 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.31587    0.85596  -0.369 0.712792    
## Support      0.40618    0.05115   7.941 1.49e-12 ***
## Anxiety      0.25609    0.06740   3.799 0.000234 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.519 on 115 degrees of freedom
## Multiple R-squared:  0.3556, Adjusted R-squared:  0.3444 
## F-statistic: 31.73 on 2 and 115 DF,  p-value: 1.062e-11
In the case of univariate regression:
$$se_b = \frac{s_Y}{s_X}\sqrt{\frac{1 - r^2_{xy}}{n - 2}}$$
In the case of multiple regression:
$$se_{b_i} = \frac{s_Y}{s_{X_i}}\sqrt{\frac{1 - R^2_{Y\hat{Y}}}{n - p - 1}}\sqrt{\frac{1}{1 - R^2_{i.jkl...p}}}$$
The second radical is the reciprocal of the tolerance of $X_i$. Tolerance, $1 - R^2_{i.jkl...p}$, is the proportion of variance in $X_i$ that cannot be explained by the other predictors.

Large tolerance (little overlap with the other predictors) means the standard error will be small.
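To make this concrete, the tolerance of each predictor in the stress model can be computed by regressing it on the other predictor(s). A minimal sketch using the stress data loaded above (if the car package is available, car::vif() reports the reciprocal of these values):

# Tolerance = 1 - R^2 from regressing each predictor on the other predictor(s)
tol.support = 1 - summary(lm(Support ~ Anxiety, data = stress.data))$r.squared
tol.anxiety = 1 - summary(lm(Anxiety ~ Support, data = stress.data))$r.squared
tol.support  # values near 1 indicate little overlap, hence smaller standard errors
tol.anxiety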
What does this mean for including a lot of variables in your model?

Your goal should be to match the population model (theoretically).

Including many variables will not bias the parameter estimates, but it will consume degrees of freedom and potentially inflate standard errors; in other words, putting too many variables in your model may make it more difficult to find a statistically significant result.
But that's only the case if you add variables unrelated to Y or X; there are some cases in which adding the wrong variables can lead to spurious results. [Stay tuned for the lecture on causal models.]
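A tiny simulation can illustrate the cost. This is a sketch with hypothetical noise predictors (junk1 through junk10 and big.model are illustrative names, not part of the original example):

set.seed(123)  # hypothetical illustration: add 10 predictors unrelated to everything
junk = as.data.frame(replicate(10, rnorm(nrow(stress.data))))
names(junk) = paste0("junk", 1:10)
big.model = lm(Stress ~ ., data = cbind(stress.data[c("Stress", "Support", "Anxiety")], junk))
summary(big.model)$coefficients["Support", ]  # vs. mr.model: similar estimate, typically a larger SE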
Simultaneous: Enter all of your IVs in a single model.

$$Y = b_0 + b_1X_1 + b_2X_2 + b_3X_3 + e$$

Hierarchically: Build a sequence of models in which every successive model includes one more (or one fewer) IV than the previous.

$$Y = b_0 + e$$
$$Y = b_0 + b_1X_1 + e$$
$$Y = b_0 + b_1X_1 + b_2X_2 + e$$
$$Y = b_0 + b_1X_1 + b_2X_2 + b_3X_3 + e$$
This is known as hierarchical regression. (Note that this is different from Hierarchical Linear Modeling or HLM [which is often called Multilevel Modeling or MLM].) Hierarchical regression is a subset of model comparison techniques.
Model comparison: Comparing how well two (or more) models fit the data in order to determine which model is better.
If we're comparing nested models by incrementally adding or subtracting variables, this is known as hierarchical regression.
Multiple models are calculated
Each predictor (or set of predictors) is assessed in terms of the variance it explains at the point it is entered
Order is dependent on an a priori hypothesis
m.1 <- lm(Stress ~ Support, data = stress.data)
m.2 <- lm(Stress ~ Support + Anxiety, data = stress.data)
anova(m.1, m.2)
## Analysis of Variance Table
## 
## Model 1: Stress ~ Support
## Model 2: Stress ~ Support + Anxiety
##   Res.Df    RSS Df Sum of Sq      F    Pr(>F)    
## 1    116 298.72                                  
## 2    115 265.41  1    33.314 14.435 0.0002336 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(m.1)
## Analysis of Variance Table
## 
## Response: Stress
##            Df Sum Sq Mean Sq F value   Pr(>F)    
## Support     1 113.15 113.151  43.939 1.12e-09 ***
## Residuals 116 298.72   2.575                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(m.2)
## Analysis of Variance Table
## 
## Response: Stress
##            Df  Sum Sq Mean Sq F value    Pr(>F)    
## Support     1 113.151 113.151  49.028 1.807e-10 ***
## Anxiety     1  33.314  33.314  14.435 0.0002336 ***
## Residuals 115 265.407   2.308                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
summary(m.2)

## 
## Call:
## lm(formula = Stress ~ Support + Anxiety, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1958 -0.8994 -0.1370  0.9990  3.6995 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.31587    0.85596  -0.369 0.712792    
## Support      0.40618    0.05115   7.941 1.49e-12 ***
## Anxiety      0.25609    0.06740   3.799 0.000234 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.519 on 115 degrees of freedom
## Multiple R-squared:  0.3556, Adjusted R-squared:  0.3444 
## F-statistic: 31.73 on 2 and 115 DF,  p-value: 1.062e-11
summary(m.1)

## 
## Call:
## lm(formula = Stress ~ Support, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8215 -1.2145 -0.1796  1.0806  3.4326 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  2.56046    0.42189   6.069 1.66e-08 ***
## Support      0.30006    0.04527   6.629 1.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.605 on 116 degrees of freedom
## Multiple R-squared:  0.2747, Adjusted R-squared:  0.2685 
## F-statistic: 43.94 on 1 and 116 DF,  p-value: 1.12e-09
m.0 <- lm(Stress ~ 1, data = stress.data)
m.1 <- lm(Stress ~ Support, data = stress.data)
anova(m.0, m.1)
## Analysis of Variance Table
## 
## Model 1: Stress ~ 1
## Model 2: Stress ~ Support
##   Res.Df    RSS Df Sum of Sq      F   Pr(>F)    
## 1    117 411.87                                 
## 2    116 298.72  1    113.15 43.939 1.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
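Comparing against the intercept-only model reproduces the overall F-test printed by summary(m.1); a quick check:

summary(m.1)$fstatistic  # value, numdf, dendf: F = 43.94 on 1 and 116 df, as in the anova() table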
When predictors are entered sequentially, the total variance explained decomposes into a sum of squared (semi-partial) correlations, each reflecting the increment in $R^2$ at the step that predictor is entered:

$$R^2_{Y.1234...p} = r^2_{Y1} + r^2_{Y(2.1)} + r^2_{Y(3.21)} + r^2_{Y(4.321)} + ...$$
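A sketch of this decomposition for the two-predictor stress model, using the sequential sums of squares from anova(m.2) above (ss and ss.total are illustrative names):

ss = anova(m.2)$"Sum Sq"     # sequential SS: Support, Anxiety (given Support), residual
ss.total = sum(ss)
ss[1] / ss.total             # r^2_Y1: Support alone (.2747, the R^2 of m.1)
ss[2] / ss.total             # r^2_Y(2.1): Anxiety's squared semi-partial correlation
sum(ss[1:2]) / ss.total      # total R^2 (.3556), matching summary(m.2)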
One of the benefits of using regression (instead of partial correlations) is that it can handle both continuous and categorical predictors and allows both in the same model.

Categorical predictors with more than two levels are broken up into several smaller variables. In doing so, we take variables that don't have any inherent numerical value (i.e., nominal and ordinal variables) and assign them numbers that allow us to calculate meaningful statistics.
You can choose just about any numbers to represent your categorical variable. However, there are several commonly used methods that result in very useful statistics.
In dummy coding, one group is selected to be a reference group. From your single nominal variable with K levels, K−1 dummy code variables are created; for each new dummy code variable, one of the non-reference groups is assigned 1; all other groups are assigned 0.
Occupation | D1 | D2 |
---|---|---|
Engineer | 0 | 0 |
Teacher | 1 | 0 |
Doctor | 0 | 1 |
The dummy codes are entered as IVs in the regression equation. (A short R sketch of this coding follows the tables below.)
Person | Occupation | D1 | D2 |
---|---|---|---|
Billy | Engineer | 0 | 0 |
Susan | Teacher | 1 | 0 |
Michael | Teacher | 1 | 0 |
Molly | Engineer | 0 | 0 |
Katie | Doctor | 0 | 1 |
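As a sketch, the coding above could be built by hand in R (the occ data frame simply mirrors the hypothetical table; it is not one of the lecture's datasets):

# Hypothetical data illustrating K - 1 = 2 dummy codes with Engineer as the reference
occ = data.frame(person     = c("Billy", "Susan", "Michael", "Molly", "Katie"),
                 occupation = c("Engineer", "Teacher", "Teacher", "Engineer", "Doctor"))
occ$D1 = ifelse(occ$occupation == "Teacher", 1, 0)  # 1 = Teacher, 0 otherwise
occ$D2 = ifelse(occ$occupation == "Doctor", 1, 0)   # 1 = Doctor, 0 otherwise
occ
# Given some outcome Y, the model would then be fit as lm(Y ~ D1 + D2, data = occ)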
Solomon’s paradox describes the tendency for people to reason more wisely about other people’s problems than about their own. One potential explanation for this paradox is that people tend to view other people’s problems from a psychologically distant perspective, whereas they view their own problems from a psychologically immersed perspective. To test this possibility, researchers asked romantically involved participants to think about a situation in which their partner cheated on them (self condition) or a friend’s partner cheated on the friend (other condition). Participants were also instructed to take a first-person perspective (immersed condition) by using pronouns such as I and me, or a third-person perspective (distanced condition) by using pronouns such as he and she.
library(here)
solomon = read.csv(here("data/solomon.csv"))
Grossmann, I., & Kross, E. (2014). Exploring Solomon’s paradox: Self-distancing eliminates self-other asymmetry in wise reasoning about close relationships in younger and older adults. Psychological Science, 25, 1571-1580.
psych::describe(solomon[,c("ID", "CONDITION", "WISDOM")], fast = T)
##           vars   n  mean    sd   min    max  range   se
## ID           1 120 64.46 40.98  1.00 168.00 167.00 3.74
## CONDITION    2 120  2.46  1.12  1.00   4.00   3.00 0.10
## WISDOM       3 115  0.01  0.99 -2.52   1.79   4.31 0.09
library(knitr)
library(kableExtra)
library(dplyr)

head(solomon) %>%
  select(ID, CONDITION, WISDOM) %>%
  kable() %>%
  kable_styling()
ID | CONDITION | WISDOM |
---|---|---|
1 | 3 | -0.2758939 |
6 | 4 | 0.4294921 |
8 | 4 | -0.0278587 |
9 | 4 | 0.5327150 |
10 | 2 | 0.6229979 |
12 | 2 | -1.9957813 |
solomon = solomon %>%
  mutate(dummy_2 = ifelse(CONDITION == 2, 1, 0),
         dummy_3 = ifelse(CONDITION == 3, 1, 0),
         dummy_4 = ifelse(CONDITION == 4, 1, 0))

solomon %>%
  select(ID, CONDITION, WISDOM, matches("dummy")) %>%
  kable() %>%
  kable_styling()
ID | CONDITION | WISDOM | dummy_2 | dummy_3 | dummy_4 |
---|---|---|---|---|---|
1 | 3 | -0.2758939 | 0 | 1 | 0 |
6 | 4 | 0.4294921 | 0 | 0 | 1 |
8 | 4 | -0.0278587 | 0 | 0 | 1 |
9 | 4 | 0.5327150 | 0 | 0 | 1 |
10 | 2 | 0.6229979 | 1 | 0 | 0 |
12 | 2 | -1.9957813 | 1 | 0 | 0 |
14 | 3 | -1.1514699 | 0 | 1 | 0 |
18 | 2 | -0.6912011 | 1 | 0 | 0 |
21 | 2 | 0.0053117 | 1 | 0 | 0 |
25 | 4 | 0.2863499 | 0 | 0 | 1 |
26 | 4 | -1.8217968 | 0 | 0 | 1 |
30 | 1 | -1.2823302 | 0 | 0 | 0 |
32 | 1 | -2.3358379 | 0 | 0 | 0 |
35 | 4 | 0.2710307 | 0 | 0 | 1 |
50 | 1 | 0.7179373 | 0 | 0 | 0 |
53 | 1 | -2.0595072 | 0 | 0 | 0 |
57 | 4 | -0.2327698 | 0 | 0 | 1 |
58 | 4 | 0.0214245 | 0 | 0 | 1 |
60 | 3 | 0.1112851 | 0 | 1 | 0 |
62 | 1 | -1.7895030 | 0 | 0 | 0 |
65 | 2 | 0.9330889 | 1 | 0 | 0 |
68 | 1 | -0.3152235 | 0 | 0 | 0 |
71 | 4 | 0.7765844 | 0 | 0 | 1 |
76 | 4 | 1.1960573 | 0 | 0 | 1 |
84 | 2 | 0.0248331 | 1 | 0 | 0 |
86 | 3 | 1.2175357 | 0 | 1 | 0 |
88 | 3 | 0.5025819 | 0 | 1 | 0 |
89 | 1 | -0.4693998 | 0 | 0 | 0 |
95 | 4 | 0.4821839 | 0 | 0 | 1 |
99 | 1 | -0.0352657 | 0 | 0 | 0 |
102 | 1 | 1.1155606 | 0 | 0 | 0 |
105 | 2 | 1.4556172 | 1 | 0 | 0 |
117 | 1 | NA | 0 | 0 | 0 |
122 | 2 | 0.4161299 | 1 | 0 | 0 |
143 | 1 | -1.3339417 | 0 | 0 | 0 |
145 | 4 | NA | 0 | 0 | 1 |
152 | 4 | 0.6508028 | 0 | 0 | 1 |
153 | 2 | -1.8543092 | 1 | 0 | 0 |
159 | 2 | -0.8511141 | 1 | 0 | 0 |
168 | 2 | 0.0029835 | 1 | 0 | 0 |
2 | 4 | 0.1340113 | 0 | 0 | 1 |
3 | 4 | -0.8836265 | 0 | 0 | 1 |
4 | 4 | 0.9063644 | 0 | 0 | 1 |
5 | 1 | 1.7905951 | 0 | 0 | 0 |
7 | 1 | -0.9868494 | 0 | 0 | 0 |
11 | 3 | 1.0372247 | 0 | 1 | 0 |
13 | 3 | -2.4860158 | 0 | 1 | 0 |
15 | 2 | 1.1166410 | 1 | 0 | 0 |
16 | 3 | 0.0412327 | 0 | 1 | 0 |
17 | 3 | 0.1183208 | 0 | 1 | 0 |
19 | 2 | -1.2353752 | 1 | 0 | 0 |
20 | 3 | 0.5182724 | 0 | 1 | 0 |
22 | 3 | 0.6202474 | 0 | 1 | 0 |
23 | 3 | -0.6130326 | 0 | 1 | 0 |
24 | 2 | 0.0114708 | 1 | 0 | 0 |
27 | 4 | 0.5735473 | 0 | 0 | 1 |
29 | 1 | -0.9486002 | 0 | 0 | 0 |
31 | 1 | 0.1183208 | 0 | 0 | 0 |
33 | 3 | -0.0208230 | 0 | 1 | 0 |
34 | 3 | 0.9004090 | 0 | 1 | 0 |
36 | 4 | 0.8704434 | 0 | 0 | 1 |
37 | 3 | 0.9556476 | 0 | 1 | 0 |
38 | 2 | 1.0240299 | 1 | 0 | 0 |
39 | 3 | -0.1556817 | 0 | 1 | 0 |
40 | 3 | 0.6229979 | 0 | 1 | 0 |
41 | 2 | -0.8691839 | 1 | 0 | 0 |
42 | 4 | 1.2319783 | 0 | 0 | 1 |
43 | 1 | -1.4556055 | 0 | 0 | 0 |
44 | 4 | 0.9341692 | 0 | 0 | 1 |
45 | 4 | -0.2287715 | 0 | 0 | 1 |
46 | 1 | -0.2903366 | 0 | 0 | 0 |
47 | 2 | 0.7034946 | 1 | 0 | 0 |
48 | 3 | 0.7551061 | 0 | 1 | 0 |
49 | 3 | -0.5291273 | 0 | 1 | 0 |
51 | 1 | 0.7262208 | 0 | 0 | 0 |
52 | 2 | 0.6108835 | 1 | 0 | 0 |
54 | 3 | -0.1134342 | 0 | 1 | 0 |
55 | 3 | 0.4150495 | 0 | 1 | 0 |
56 | 3 | 1.2991128 | 0 | 1 | 0 |
59 | 1 | -2.3324293 | 0 | 0 | 0 |
61 | 3 | -1.1745673 | 0 | 1 | 0 |
63 | 3 | 0.8560007 | 0 | 1 | 0 |
64 | 2 | -0.0486279 | 1 | 0 | 0 |
66 | 2 | 0.9532683 | 1 | 0 | 0 |
67 | 4 | NA | 0 | 0 | 1 |
69 | 2 | 0.8188319 | 1 | 0 | 0 |
70 | 4 | 1.6041250 | 0 | 0 | 1 |
72 | 2 | 0.9870285 | 1 | 0 | 0 |
73 | 4 | 0.1554896 | 0 | 0 | 1 |
74 | 1 | 0.3141548 | 0 | 0 | 0 |
75 | 1 | NA | 0 | 0 | 0 |
77 | 1 | -2.3046244 | 0 | 0 | 0 |
78 | 1 | 0.2277028 | 0 | 0 | 0 |
79 | 4 | 0.0545949 | 0 | 0 | 1 |
80 | 3 | -0.1217177 | 0 | 1 | 0 |
81 | 1 | -0.8641051 | 0 | 0 | 0 |
82 | 3 | 0.3524040 | 0 | 1 | 0 |
83 | 3 | 0.1565700 | 0 | 1 | 0 |
85 | 3 | 0.3430401 | 0 | 1 | 0 |
87 | 2 | 1.1792865 | 1 | 0 | 0 |
90 | 1 | 0.4329007 | 0 | 0 | 0 |
91 | 2 | -0.8083760 | 1 | 0 | 0 |
92 | 1 | 1.1427757 | 0 | 0 | 0 |
93 | 1 | 0.4101745 | 0 | 0 | 0 |
94 | 3 | 0.2387368 | 0 | 1 | 0 |
96 | 1 | -1.3751088 | 0 | 0 | 0 |
97 | 2 | 0.0834802 | 1 | 0 | 0 |
98 | 1 | -0.9282022 | 0 | 0 | 0 |
100 | 4 | 1.6584869 | 0 | 0 | 1 |
101 | 1 | -0.5150559 | 0 | 0 | 0 |
103 | 3 | 0.2421454 | 0 | 1 | 0 |
104 | 4 | -1.2128165 | 0 | 0 | 1 |
106 | 1 | -0.9736546 | 0 | 0 | 0 |
107 | 3 | 0.1843749 | 0 | 1 | 0 |
108 | 1 | -2.5231846 | 0 | 0 | 0 |
134 | 1 | 0.7839913 | 0 | 0 | 0 |
135 | 2 | 0.5787934 | 1 | 0 | 0 |
146 | 3 | 0.4955462 | 0 | 1 | 0 |
149 | 3 | 1.0877557 | 0 | 1 | 0 |
154 | 3 | NA | 0 | 1 | 0 |
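Writing an ifelse() for every level gets tedious. A more compact route (a sketch) is model.matrix(), which builds the same columns from a factor:

# Columns 2-4 reproduce dummy_2, dummy_3, dummy_4; column 1 is the intercept
head(model.matrix(~ factor(CONDITION), data = solomon))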
mod.1 = lm(WISDOM ~ dummy_2 + dummy_3 + dummy_4, data = solomon)
summary(mod.1)
## 
## Call:
## lm(formula = WISDOM ~ dummy_2 + dummy_3 + dummy_4, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.5593     0.1686  -3.317 0.001232 ** 
## dummy_2       0.6814     0.2497   2.729 0.007390 ** 
## dummy_3       0.7541     0.2348   3.211 0.001729 ** 
## dummy_4       0.8938     0.2524   3.541 0.000583 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
When working with dummy codes, the intercept can be interpreted as the mean of the reference group.
$$\hat{Y} = b_0 + b_1D_2 + b_2D_3 + b_3D_4$$
$$\hat{Y} = b_0 + b_1(0) + b_2(0) + b_3(0)$$
$$\hat{Y} = b_0$$
$$\hat{Y} = \bar{Y}_{\text{Reference}}$$
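We can verify this against the raw data (condition 1 is the reference group here):

# Mean wisdom in the reference group should equal the intercept, -0.5593
mean(solomon$WISDOM[solomon$CONDITION == 1], na.rm = TRUE)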
What does each of the slope coefficients mean?
From this equation, we can get the mean of every single group.
newdata = data.frame(dummy_2 = c(0, 1, 0, 0),
                     dummy_3 = c(0, 0, 1, 0),
                     dummy_4 = c(0, 0, 0, 1))
predict(mod.1, newdata = newdata, se.fit = T)
## $fit
##          1          2          3          4 
## -0.5593042  0.1220847  0.1948435  0.3344884 
## 
## $se.fit
##         1         2         3         4 
## 0.1686358 0.1841382 0.1634457 0.1877848 
## 
## $df
## [1] 111
## 
## $residual.scale
## [1] 0.9389242
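These fitted values are simply the observed condition means, which can be confirmed directly:

# Observed group means match $fit from predict() above
aggregate(WISDOM ~ CONDITION, data = solomon, FUN = mean)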
The test of each slope coefficient is the significance test comparing that group's mean to the reference group's mean; conceptually, this is an independent-samples t-test (though the regression pools its error variance across all four groups).
The test of the intercept is a one-sample t-test comparing the reference group's mean to 0.
summary(mod.1)$coef
##               Estimate Std. Error   t value     Pr(>|t|)
## (Intercept) -0.5593042  0.1686358 -3.316641 0.0012319438
## dummy_2      0.6813889  0.2496896  2.728944 0.0073896074
## dummy_3      0.7541477  0.2348458  3.211247 0.0017291997
## dummy_4      0.8937927  0.2523909  3.541303 0.0005832526
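As a rough check of the dummy_2 test, here is the corresponding two-group t-test (a sketch; it uses only conditions 1 and 2, so its df and standard error differ slightly from the regression, which pools error variance across all four groups):

# Pooled-variance t-test of condition 1 vs. condition 2
t.test(WISDOM ~ CONDITION, data = subset(solomon, CONDITION %in% c(1, 2)),
       var.equal = TRUE)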
What if you wanted to compare groups 2 and 3?
solomon = solomon %>%
  mutate(dummy_1 = ifelse(CONDITION == 1, 1, 0),
         dummy_3 = ifelse(CONDITION == 3, 1, 0),
         dummy_4 = ifelse(CONDITION == 4, 1, 0))

mod.2 = lm(WISDOM ~ dummy_1 + dummy_3 + dummy_4, data = solomon)
summary(mod.2)
## 
## Call:
## lm(formula = WISDOM ~ dummy_1 + dummy_3 + dummy_4, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  0.12208    0.18414   0.663  0.50870   
## dummy_1     -0.68139    0.24969  -2.729  0.00739 **
## dummy_3      0.07276    0.24621   0.296  0.76816   
## dummy_4      0.21240    0.26300   0.808  0.42104   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
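An alternative to recomputing dummy codes by hand is to change the reference level of a factor with relevel(). A sketch (COND.f and mod.2b are hypothetical names):

# Make condition 2 the reference level; R builds the dummy codes automatically
solomon$COND.f = relevel(as.factor(solomon$CONDITION), ref = "2")
mod.2b = lm(WISDOM ~ COND.f, data = solomon)
summary(mod.2b)$coef  # coefficients now compare each group to condition 2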
In all multiple regression models, we have to consider the correlations between the IVs; highly correlated predictors make it more difficult to detect the significance of any particular X. One useful way to think about the relationship between two variables is: "Does knowing someone's score on X1 change my guess for their score on X2?"

Are dummy codes associated with a categorical predictor correlated or uncorrelated? They are (negatively) correlated: a person who is 1 on one dummy code must be 0 on all the others.
cor(solomon[,grepl("dummy", names(solomon))], use = "pairwise")
##            dummy_2    dummy_3    dummy_4    dummy_1
## dummy_2  1.0000000 -0.3306838 -0.2833761 -0.3239068
## dummy_3 -0.3306838  1.0000000 -0.3387900 -0.3872466
## dummy_4 -0.2833761 -0.3387900  1.0000000 -0.3318469
## dummy_1 -0.3239068 -0.3872466 -0.3318469  1.0000000
R will automatically convert factor variables into dummy codes -- just make sure your variable is a factor before adding it to the model!
class(solomon$CONDITION)
## [1] "integer"
solomon$CONDITION = as.factor(solomon$CONDITION)
mod.3 = lm(WISDOM ~ CONDITION, data = solomon)
summary(mod.3)
## 
## Call:
## lm(formula = WISDOM ~ CONDITION, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.5593     0.1686  -3.317 0.001232 ** 
## CONDITION2    0.6814     0.2497   2.729 0.007390 ** 
## CONDITION3    0.7541     0.2348   3.211 0.001729 ** 
## CONDITION4    0.8938     0.2524   3.541 0.000583 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
Compare the three summaries above: mod.1 (manual dummy codes, condition 1 as reference), mod.2 (condition 2 as reference), and mod.3 (factor) all have identical fit (R-squared = .1262, F(3, 111) = 5.343, p = .0018). Changing the coding scheme or the reference group changes the meaning of the individual coefficients, not the overall fit of the model.
anova(mod.3)
## Analysis of Variance Table
## 
## Response: WISDOM
##            Df Sum Sq Mean Sq F value   Pr(>F)   
## CONDITION   3 14.131  4.7105  5.3432 0.001783 **
## Residuals 111 97.855  0.8816                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

This omnibus F (5.34 on 3 and 111 df) is exactly the F-statistic at the bottom of the regression summary: a one-way ANOVA is a special case of the linear model.
Analysis of Variance (the long way)