Rapple Journal 10

Back to Lisa's Journal or Go to Journal 11

To determine whether there is an association between two variables, there are two steps to take. Before you begin, you need to look at the variables to see what level of measurement they are. Based on that measurement level you will choose the proper test.

1. Choose **Lambda** if one or both variables are **categorical**. **Lambda** will tell you whether there is an association or not: 0 (zero) for no association, 1 for total association. It canNOT tell you whether the association is direct or inverse. If the Lambda is 0.5, that means knowing the independent variable gives you a 50% improvement in your ability to predict the dependent variable. Next look at the **Chi-Square** value to determine the "goodness of fit". Chi-Square is used when you want to see if there is a statistically significant difference between what is expected and what is measured. In other words, it compares what would occur at random with what has occurred in actuality. The null hypothesis supports a purely random occurrence, so a significant Chi-Square can support rejecting the null.
2. Choose **Gamma** if one or both of the variables are **ordinal** (but NOT categorical).
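To make the Lambda and Chi-Square steps concrete, here is a small sketch (my own illustration, not from the journal) that computes Goodman-Kruskal Lambda by hand and runs a chi-square test with `scipy.stats.chi2_contingency`. The contingency table values are made up for the example; rows play the independent variable, columns the dependent one.

```python
from scipy.stats import chi2_contingency

def goodman_kruskal_lambda(table):
    """Lambda: proportional reduction in error when predicting the
    column (dependent) variable from the row (independent) variable."""
    n = sum(sum(row) for row in table)
    col_totals = [sum(col) for col in zip(*table)]
    e1 = n - max(col_totals)                  # prediction errors ignoring the rows
    e2 = n - sum(max(row) for row in table)   # prediction errors using the rows
    return (e1 - e2) / e1

# Invented 2x2 table: e.g. rows = exposure yes/no, cols = outcome yes/no
table = [[30, 10],
         [10, 30]]

lam = goodman_kruskal_lambda(table)           # 0.5: 50% better prediction
chi2, p, dof, expected = chi2_contingency(table)
print(f"lambda = {lam:.2f}, chi-square p = {p:.4f}")
```

With this table Lambda comes out to 0.5 (the "50% ability to predict" case above), and the chi-square p-value is well below 0.05, so the observed counts differ significantly from what pure chance would give.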

Probability - the question to ask: is this statistically significant or not?
 * **STATISTICAL SIGNIFICANCE** is determined by using Chi-Square for nominal and ordinal variables (in all their different combinations). A t-test is used to check for statistical significance for continuous variables. If p <= 0.05, it IS SIGNIFICANT. So, for example, a Chi-Square test with p = .03 is significant. A p of .006 is significant too.
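For the continuous-variable case, a quick sketch of the t-test using `scipy.stats.ttest_ind`. The two groups of measurements are invented for illustration; the point is just the p <= 0.05 check described above.

```python
from scipy.stats import ttest_ind

# Made-up continuous measurements for two groups
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
group_b = [5.9, 6.1, 5.8, 6.0, 6.2, 5.7]

t_stat, p = ttest_ind(group_a, group_b)  # independent two-sample t-test
alpha = 0.05
verdict = "SIGNIFICANT" if p <= alpha else "not significant"
print(f"t = {t_stat:.2f}, p = {p:.4f} -> {verdict}")
```

Here the group means are clearly far apart relative to the spread, so p lands well under 0.05 and the difference counts as significant.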

If something is statistically significant, it means that we believe our observation/measurement did not occur by happenstance or accident. To show this, we try to demonstrate that the probability of it happening randomly is extremely low. The **p-value (probability value)** is calculated using a statistical test chosen based on the type of variables we are using. P-value cutoffs are typically set at 0.05 or 0.01. If p is less than 0.05, that means there is a less than 5% chance that the observation/measurement was random (so p < 0.01 means a less than 1% chance). The alpha is the cutoff you set in advance: your p-value must be smaller than the alpha to be statistically significant. IF OUR P-VALUE IS LESS THAN THE ALPHA, we REJECT THE NULL HYPOTHESIS and say "there appears to be a difference between the groups, or a relationship between the variables, that is significant." But if the p-value is not below the alpha and only slightly over it, the NULL cannot be rejected, and we would say there is marginal significance. If there is a large effect with a rather small sample size, this might point to further research with a larger sample.
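The decision rule above can be sketched as a tiny function. The "slightly over" band for marginal significance is my own judgment call (here, up to 1.5x alpha), since the journal doesn't pin down a number.

```python
def decide(p_value, alpha=0.05):
    """Apply the p-vs-alpha decision rule described above."""
    if p_value < alpha:
        return "reject the null (significant)"
    elif p_value < alpha * 1.5:  # "just slightly over" -- an assumed cutoff
        return "fail to reject (marginal significance)"
    else:
        return "fail to reject the null"

print(decide(0.03))  # below alpha
print(decide(0.06))  # just slightly over alpha
print(decide(0.40))  # well over alpha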

Here is the brain twister for me: we have a hypothesis (idea) that we want to show may be true. To do that, we instead turn our attention to the NULL hypothesis. If we can show that it is very unlikely to be true, that gives evidence that our hypothesis is true by default. So, back to the p-value: if the probability is very low that our findings happened by chance, we can "reject the null hypothesis." By rejecting it, the hypothesis wins! "If the null isn't true, the hypothesis must be true" - though strictly speaking we never prove anything outright; we just show the null is a poor explanation for the data.
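A small simulation (my own illustration, not from the journal) can make this logic feel less twisty: if the null really IS true (a fair coin), a test at alpha = 0.05 should only reject it rarely. The coin-flip setup and trial counts are invented for the demo; the test used is `scipy.stats.binomtest`.

```python
import random
from scipy.stats import binomtest

random.seed(0)
alpha, trials, flips = 0.05, 1000, 100

# Run many experiments where the null hypothesis (fair coin) is true,
# and count how often we wrongly reject it.
false_rejections = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    p = binomtest(heads, flips, 0.5).pvalue
    if p < alpha:
        false_rejections += 1

rate = false_rejections / trials
print(f"rejected a TRUE null in {rate:.1%} of trials")
```

The false-rejection rate comes out near (at most) alpha, which is exactly what "less than a 5% chance this was random" is promising: rejecting the null is usually safe, but not guaranteed.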