Well hello, my friends.
Now we begin a new module on comparative analysis.
This video is an introduction to comparative analysis, and
I'll remind you that facts are stubborn things, but
statistics are more pliable.
Comparative analysis is about comparing
values across groups.
Now, I want to tell you, there are many ways to do
comparative analysis.
This is like when I was a child, getting on a spinning
merry-go-round.
You just have to get on somewhere, and then once
you're on, you gain your footing and you move forward.
So we're going to look at the simplest form of comparative
analysis, which compares one value across two groups.
And we will begin with two groups that are independent of
each other and that are normally distributed.
Normality of the dependent variable is required for
t-Test, ANOVA, and MANOVA.
Our comparison of one value across two
groups is called a t-Test, which is listed right here.
A t-Test compares one normally distributed dependent variable
across two groupings.
For instance, we might compare a
normally distributed variable--
we might call it, say, age--
across the groupings of male and female.
Age would be the dependent value, and the groupings would
be male or female.
The dependent value should be composed of continuous data.
Now, continuous data--
ratio are preferred.
Sometimes, though, people will compare interval data with a
t-Test, which, for some people, is acceptable.
I prefer that we compare ratio data.
The groupings are usually nominal.
Now, they can be ordinal, but groupings may be things like
male or female, young or old.
You might compare African American
males to Hispanic males.
You might compare white males to alligators.
You have to have some sort of grouping.
The t-Test is comparing two normally distributed curves to
see if they differ.
Now, I want you to keep that in mind-- normally
distributed.
The values that we are comparing for the groups are
required to be normally distributed.
If they are not normally distributed, then we will do a
non-parametric design, which we shall study
in the later modules.
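The lecture doesn't name any software, but as an illustrative sketch, a normality check such as the Shapiro-Wilk test can be run on each group's values before choosing the t-Test. The two samples below are made-up assumptions: one drawn from a normal distribution and one deliberately skewed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Two made-up samples: one from a normal distribution,
# one from a clearly skewed (exponential) distribution.
normal_sample = rng.normal(loc=30, scale=5, size=200)
skewed_sample = rng.exponential(scale=5, size=200)

# Shapiro-Wilk: a small p-value means we reject normality.
_, p_normal = stats.shapiro(normal_sample)
_, p_skewed = stats.shapiro(skewed_sample)

print(f"normal sample p = {p_normal:.4f}")
print(f"skewed sample p = {p_skewed:.2e}")  # tiny p: not normal
```

If the p-value for a group is very small, normality is rejected for that group, and a non-parametric design is the safer choice.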
The t-Test is often referred to as the
difference of two means.
In other words, we have two distributions, and we are
comparing them to see if they differ.
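As a sketch of that "difference of two means" idea, here is an independent-samples t-Test run with SciPy. The age numbers and group sizes are purely illustrative assumptions, echoing the earlier age-by-gender example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical ages for two independent groups
# (for example, the male and female groupings).
ages_group1 = rng.normal(loc=30, scale=5, size=40)
ages_group2 = rng.normal(loc=33, scale=5, size=40)

# Independent-samples t-Test: do the two group means differ?
t_stat, p_value = stats.ttest_ind(ages_group1, ages_group2)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

A small p-value suggests the two distributions' means differ; a large one suggests no detectable difference.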
Now, the t-Test might be represented with this picture.
You'll notice that we have group one and group two.
These groups might be ordinal, but generally they are
nominal, and we're comparing the same variable for each
group: value one against value one.
These values must be normally distributed.
Now, let's look at how a t-Test compares to ANOVA.
ANOVA has three or more groups.
It can have, actually, two or more groups.
We'll come back to that in just a moment.
But in an ANOVA, you're comparing two or more groups,
and one value for each group, and the value must be normally
distributed.
I hope you catch on to something very clever here.
A t-Test is, in fact, an ANOVA with only two groupings.
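That relationship can be checked numerically: with exactly two groups, a one-way ANOVA's F statistic equals the squared t statistic, and the two p-values agree. A small sketch with made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=10, scale=2, size=30)
b = rng.normal(loc=11, scale=2, size=30)

# Pooled-variance t-Test and one-way ANOVA on the same two groups.
t_stat, p_t = stats.ttest_ind(a, b)   # equal_var=True by default
f_stat, p_f = stats.f_oneway(a, b)

# For exactly two groups: F == t**2 and the p-values match.
print(f"t^2 = {t_stat**2:.6f}, F = {f_stat:.6f}")
print(f"p (t-Test) = {p_t:.6f}, p (ANOVA) = {p_f:.6f}")
```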
The next thing that we will look at in this
module will be MANOVA.
With MANOVA, we can compare two or more groups if we choose to.
And when doing so, we can compare two or more variables
at the same time.
In this design, we have three groupings with two variables
in each group, and we can compare those all in one
single glance.
Now, again, the groupings are generally nominal or ordinal.
The variable values, the dependent values that we're
examining, must be normally distributed, and we prefer
that they be ratio.
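As a hedged sketch of what that design can look like in code, the statsmodels library provides a MANOVA class. The three groupings and two dependent variables below are made-up assumptions matching the three-group, two-variable design just described.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)

# Three hypothetical groupings, two dependent variables per case.
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 20),
    "y1": rng.normal(loc=0.0, scale=1.0, size=60),
    "y2": rng.normal(loc=0.0, scale=1.0, size=60),
})

# Compare both dependent variables across all three groups at once.
fit = MANOVA.from_formula("y1 + y2 ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```

The multivariate test statistics it reports (such as Wilks' lambda) assess all the dependent variables in one single glance, just as the lecture describes.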
Now, the basic assumptions of the t-Test, the ANOVA, and
MANOVA include the following.
Now, I want to remind you: do you see the similarity
from MANOVA to ANOVA to t-Test?
A MANOVA with only one variable value is an ANOVA,
and an ANOVA with only two groupings is a t-Test.
Isn't that cool?
So the assumptions for MANOVA apply to ANOVA,
which apply to t-Test.
As we add more groupings, we may pick up a few extra
requirements that we have to notice.
The basic assumptions of the t-Test, ANOVA, and MANOVA
include the following.
Now, these are basic assumptions.
You will recall, as we add more groupings and we add more
variables, we may pick up a couple of additional
assumptions that we would have to consider.
First of all, we have independence of the variables
across the groups.
In other words, one group--
the males--
is not telling the females what to do.
I could comment on that, but they say: watch a man who says
he's the head of his house; he'll lie about other things.
And Sharon and I were married in 1978, and I know who runs
this household.
So, you have to have independence of
values across the group.
You do independent random sampling.
I laugh about the little child who said, "I'm smart."
And you say, "Well, how do you know you're smart?"
"Because Grandma and Grandpa and Mom and Daddy said
I was smart."
I don't know if that's an independent
random sample or not.
Groupings are categorical.
Again, they would be at least ordinal,
and we prefer nominal--
such as male or female, that type of grouping.
They may be public two-year community colleges versus
private two-year community colleges.
We can do those things.
The dependent variables are continuous or scale.
I prefer ratio data for the dependent variables.
And we have homogeneity of variance.
Now, I've already told you that the dependent variables
must be normally distributed, but these tests also
require that the distributions
have the same variance.
In other words, we're not comparing little skinny, narrow
distributions to great big wide normal distributions.
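That equal-variance assumption can be checked with Levene's test. Here is a minimal illustrative sketch with two made-up samples, one narrow and one wide:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# A "little skinny" distribution vs. a "great big wide" one.
narrow = rng.normal(loc=0.0, scale=1.0, size=50)
wide = rng.normal(loc=0.0, scale=4.0, size=50)

# Levene's test: a small p-value means the variances differ,
# so the homogeneity-of-variance assumption is violated.
stat, p_value = stats.levene(narrow, wide)
print(f"Levene statistic = {stat:.3f}, p = {p_value:.2e}")
```

When Levene's test rejects equal variances, a variance-robust alternative (such as Welch's t-Test for the two-group case) is the usual fallback.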
So we have some basic assumptions.
Now, again, I want to thank you very much for your support.
I appreciate your patronage as we go through this.
Live long and prosper, or, to be a little bit more modern,
may the odds be ever in your favor.
You have a good one.
This is the Old Dawg signing off.