fit <- lm(y~x1+x2,data=data)
> summary(fit)

Call:
lm(formula = y ~ x1 + x2, data = data)

This fits y = beta0 + beta1 x1 + beta2 x2 + e, where the e are iid N(0, sigma^2).

Residuals:
     Min       1Q   Median       3Q      Max
  (summary of the distribution of the residuals)

Coefficients:
            Estimate   Std. Error     t value                 Pr(>|t|)
(Intercept) beta0hat   se(beta0hat)   beta0hat/se(beta0hat)   p-value for test of H_0: beta0=0
x1          beta1hat   se(beta1hat)   beta1hat/se(beta1hat)   p-value for test of H_0: beta1=0
x2          beta2hat   se(beta2hat)   beta2hat/se(beta2hat)   p-value for test of H_0: beta2=0
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: sigmahat on n-3 degrees of freedom
  (sigmahat is the estimate of the SQUARE ROOT of the error variance)
  NOTE: 3 is the number of betas in this model: beta0, beta1, and beta2

Multiple R-squared: SSR/SST = 1 - SSE/SST,  Adjusted R-squared: 1 - (SSE/(n-3))/(SST/(n-1))
F-statistic: (SSR/2)/(SSE/(n-3)) on 2 and n-3 DF,  p-value: p-value for test of H_0: beta1=beta2=0
  (This compares the models y = beta0 + e and y = beta0 + beta1 x1 + beta2 x2 + e.
   Prefer the bigger model if the p-value is small!)

> anova(fit)
Analysis of Variance Table

Response: y
          Df    Sum Sq       Mean Sq        F value                      Pr(>F)
x1         1    SSR(x1)      SSR(x1)/1      (SSR(x1)/1)/(SSE/(n-3))      (use the p-values from summary
x2         1    SSR(x2|x1)   SSR(x2|x1)/1   (SSR(x2|x1)/1)/(SSE/(n-3))    instead of these p-values)
Residuals  n-3  SSE          SSE/(n-3)
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

SSR(x1) tells how much x1 reduces the unexplained squared error when it is added to a model
with only an intercept.
SSR(x2|x1) tells how much x2 reduces the unexplained squared error when it is added to a model
that already has x1 and an intercept.

To get SSR(x1|x2), reverse the order of the predictors in the formula:
fit <- lm(y~x2+x1,data=data)
anova(fit)

IMPORTANT NOTE: SSE/(n-3) = Mean Squared Error (MSE), and MSE is "sigma-squared hat",
the estimated error variance.
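A small self-contained sketch of the points above, using simulated data (the variable names and true coefficients here are made up for illustration): the anova() table's sums of squares are sequential, so swapping the predictor order changes the rows from SSR(x1), SSR(x2|x1) to SSR(x2), SSR(x1|x2), and the Residuals mean square equals the squared residual standard error reported by summary().

```r
set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 2*x1 + 3*x2 + rnorm(n)          # true betas: 1, 2, 3 (assumed for the demo)
data <- data.frame(y, x1, x2)

fit12 <- lm(y ~ x1 + x2, data = data)     # rows: SSR(x1), SSR(x2|x1), Residuals
fit21 <- lm(y ~ x2 + x1, data = data)     # rows: SSR(x2), SSR(x1|x2), Residuals

a12 <- anova(fit12)
a21 <- anova(fit21)

# Sequential SS depend on the order the predictors enter the model:
a12["x1", "Sum Sq"]                        # SSR(x1)
a21["x1", "Sum Sq"]                        # SSR(x1|x2) -- generally different

# MSE = SSE/(n-3) from the anova table is the estimated error variance,
# i.e. the square of the "Residual standard error" in summary():
mse <- a12["Residuals", "Mean Sq"]
all.equal(mse, summary(fit12)$sigma^2)     # TRUE
```

Note that both orderings give the same fitted model and the same Residuals row; only the split of the regression sum of squares between x1 and x2 changes.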