Outline:
definitional formula
ANOVA:
1. Partition Variance into components associated with the sources of variability
2. Compare the variances to determine if part due to something of interest is large with respect to variability within groups
One-way ANOVA broke total variance into IV and Error components, then compared the effect of the IV to error variance to see whether it was a meaningfully large effect.
In two-way ANOVA we are interested in the Main Effect of A, the Main Effect of B, and the Interaction of A and B. Thus we will partition variance into parts caused by IV_{A}, IV_{B}, Int_{AxB}, and Error, then compare the variance associated with each effect of interest to error variance to see whether each effect is meaningful.
Language issues again:
X_{iab}: an X score; i identifies the individual, a the level of IV_{A}, b the level of IV_{B}.
Each cell will have a mean: M_{a,b}
There will be marginal Means associated with each level of each IV.
                         IV A
                  a=1          a=2         marginal Means
IV B   b=1    X_{1,1,1}    X_{1,2,1}
              X_{2,1,1}    X_{2,2,1}
              X_{3,1,1}    X_{3,2,1}      M_{B=1}
       b=2    X_{1,1,2}    X_{1,2,2}
              X_{2,1,2}    X_{2,2,2}
              X_{3,1,2}    X_{3,2,2}      M_{B=2}
marginal Means   M_{A=1}      M_{A=2}     M_{T}
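The cell and marginal means above can be sketched in code. A minimal example with made-up scores for a 2x2 design, 3 subjects per cell (the data values are illustrative, not from the notes):

```python
# data[(a, b)] holds the scores X_{i,a,b} for level a of IV A, level b of IV B.
data = {
    (1, 1): [4, 6, 5],
    (2, 1): [8, 9, 10],
    (1, 2): [3, 5, 4],
    (2, 2): [6, 7, 8],
}

def mean(xs):
    return sum(xs) / len(xs)

# Cell means M_{a,b}
cell_means = {cell: mean(xs) for cell, xs in data.items()}

# Marginal means: pool every score at one level of an IV, collapsing
# across the other IV. which=0 selects a level of IV A, which=1 of IV B.
def marginal_mean(level, which):
    return mean([x for cell, xs in data.items() if cell[which] == level for x in xs])

M_A1 = marginal_mean(1, 0)                            # M_{A=1}
M_B1 = marginal_mean(1, 1)                            # M_{B=1}
M_T  = mean([x for xs in data.values() for x in xs])  # grand mean M_{T}
```

Note that a marginal mean averages all six scores at that level, not the two cell means; for a balanced design those give the same answer, but pooling scores is the safer habit.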
Partitioning Variance
To get to variance, we go through several steps. Variance is SS/df, and SS is the sum of the squared deviations, so we will start at the deviation level and partition each deviation into components for each source. These components are based on the mathematical way of looking at MEs and interactions.
Note what is missing in the ANOVA definitions: ANOVA defines a Main Effect as an overall difference among the levels of an IV, but is not concerned with whether those differences are consistent across the levels of the other IV.
Deviations
Dev_{T} = Dev_{A} + Dev_{B} + Dev_{AxB} + Dev_{E}
(X_{iab} - M_{T}) = (M_{A=a} - M_{T}) + (M_{B=b} - M_{T}) + (M_{a,b} - M_{A=a} - M_{B=b} + M_{T}) + (X_{iab} - M_{a,b})
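The deviation identity can be checked numerically. A sketch with made-up numbers (one score and the means it is compared against; the values are illustrative only):

```python
# Hypothetical values for one score X_{iab} and its associated means.
X    = 4.0   # the score
M_ab = 5.0   # its cell mean M_{a,b}
M_A  = 4.5   # marginal mean for its level of IV A
M_B  = 7.0   # marginal mean for its level of IV B
M_T  = 6.25  # grand mean

dev_T   = X - M_T
dev_A   = M_A - M_T
dev_B   = M_B - M_T
dev_AxB = M_ab - M_A - M_B + M_T   # cell effect left after removing both MEs
dev_E   = X - M_ab

# The components sum back to the total deviation, whatever the numbers are.
assert abs(dev_T - (dev_A + dev_B + dev_AxB + dev_E)) < 1e-12
```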
Sums of Squares
Yes, things add up at this level: SS_{T} = SS_{A} + SS_{B} + SS_{AxB} + SS_{E}
degrees of freedom
Things add up at this level as well:
df_{T} = df_{A} + df_{B} + df_{AxB} + df_{E}
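For a balanced design the standard df formulas make the additivity easy to verify; the level counts and cell size here are assumed for illustration:

```python
# df formulas for a balanced two-way design (assumed sizes).
a_levels, b_levels, n_per_cell = 2, 3, 5
N = a_levels * b_levels * n_per_cell       # total number of scores

df_A   = a_levels - 1
df_B   = b_levels - 1
df_AxB = (a_levels - 1) * (b_levels - 1)
df_E   = a_levels * b_levels * (n_per_cell - 1)   # within-cells df
df_T   = N - 1

assert df_T == df_A + df_B + df_AxB + df_E
```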
Mean Squares (Variances)
Things don't add up at this level, so don't compute a total variance. At this point, however, you have completed the first part of an ANOVA: partitioning variance (this is where all the math work is).
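The whole partition can be sketched end to end. A minimal example with made-up scores for a 2x2 design, 3 scores per cell, confirming that the SS components add up to SS_{T}:

```python
# Made-up data: data[(a, b)] holds the scores for that cell.
data = {
    (1, 1): [4, 6, 5],
    (2, 1): [8, 9, 10],
    (1, 2): [3, 5, 4],
    (2, 2): [6, 7, 8],
}

def mean(xs):
    return sum(xs) / len(xs)

M_T    = mean([x for xs in data.values() for x in xs])   # grand mean
M_cell = {c: mean(xs) for c, xs in data.items()}         # cell means
M_A = {lvl: mean([x for c, xs in data.items() if c[0] == lvl for x in xs]) for lvl in (1, 2)}
M_B = {lvl: mean([x for c, xs in data.items() if c[1] == lvl for x in xs]) for lvl in (1, 2)}

# SS for each source: square the matching deviation for every score, then sum.
SS_T = SS_A = SS_B = SS_AxB = SS_E = 0.0
for (a, b), xs in data.items():
    for x in xs:
        SS_T   += (x - M_T) ** 2
        SS_A   += (M_A[a] - M_T) ** 2
        SS_B   += (M_B[b] - M_T) ** 2
        SS_AxB += (M_cell[(a, b)] - M_A[a] - M_B[b] + M_T) ** 2
        SS_E   += (x - M_cell[(a, b)]) ** 2

# SS add up; the MS (= SS/df) computed from them would not.
assert abs(SS_T - (SS_A + SS_B + SS_AxB + SS_E)) < 1e-9
```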
Comparing Variances
Usually present the comparison of variances (in which you see if the
effects of interest are big compared to variability within groups) in a
source table. The F statistic is the comparison of the MS for each effect
to the MSE. This allows you to check how likely it is to get an effect
that size given chance (the p level!). You can then decide if you think
the effect is just due to chance (not significant) or so unlikely that
you assume it is caused by something.
Source    SS         df         MS                  F
A         SS_{A}     df_{A}     SS_{A}/df_{A}       MS_{A}/MS_{E}
B         SS_{B}     df_{B}     SS_{B}/df_{B}       MS_{B}/MS_{E}
AxB       SS_{AxB}   df_{AxB}   SS_{AxB}/df_{AxB}   MS_{AxB}/MS_{E}
Error     SS_{E}     df_{E}     SS_{E}/df_{E}
Total     SS_{T}     df_{T}
The basic issue here: Is the size of my effect (or interaction) so large compared to what would be expected given the variability within my groups that it is very unlikely the effects are due to chance? If so, then assume the differences observed were caused by the IV (or the interaction).
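That decision rule can be sketched in code. The SS and df values below are made up (consistent with a 2x2 design, 3 scores per cell), and the critical value is read from a standard F table:

```python
# Hypothetical SS and df values for a 2x2 design with 3 scores per cell.
SS = {"A": 36.75, "B": 6.75, "AxB": 0.75, "E": 8.0}
df = {"A": 1, "B": 1, "AxB": 1, "E": 8}

# Source-table computations: MS = SS/df, F = MS_effect / MS_E.
MS = {src: SS[src] / df[src] for src in SS}
F  = {src: MS[src] / MS["E"] for src in ("A", "B", "AxB")}

F_crit = 5.32   # critical F(1, 8) at alpha = .05, from an F table
for src in ("A", "B", "AxB"):
    verdict = "significant" if F[src] > F_crit else "not significant"
    print(f"{src:>3}: F = {F[src]:6.2f}  ->  {verdict}")
```

With these numbers both main effects exceed the critical value, while the interaction does not, so only the main effects would be called significant.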