Art Reingold: Good morning. Good morning. So,
um, a couple of administrative announcements. So first of all
I understand that some people would like at least a copy of the
presentation on Friday. If you were here on Friday you know
that Len Syme doesn't need a Power Point in order to talk
intelligently unlike some faculty. There is no Power Point
presentation. As far as I know the audio should be available.
If he spoke into the microphone in theory that was captured and
should be posted on bspace or wherever it's posted. That
should be there.
Secondly, in terms of the request for the derivation of the attributable risk formula: that's going to be discussed in section this week, and the entire derivation will be posted on bspace. Just to point out, that's one difference between 250 A and 250 B. In 250 B I actually derive all the formulas and equations. Here we state them and don't derive them. For
those interested, you'll see the full derivation. Despite my failure to initially remember it, the last slide I showed on Wednesday with population attributable risks and odds ratios was in fact for triggers of heart attack, of myocardial infarction. Several of you e-mailed me. It's interesting
to look at that particular graph and see the many, many things
that can potentially set off a heart attack. Some have a high
odds ratio but low attributable risk because they are uncommon
exposures. Others have a lower odds ratio but in fact a higher
attributable risk because they are quite common exposures. I
think it nicely illustrates the relationship between those
things. One last administrative thing. Because I need to be
in Argentina on Friday of this week professor Bates has agreed
to do the two presentations on experimental studies with
individual randomization. So we slightly changed things.
Michael will be presenting today and Wednesday. So they come
in the intelligent sequence and then on Friday Lisa Barcellos
will be talking about genetic factors in disease.
So basically there's a switch between what's on the
schedule for Wednesday and for Friday. And then next Monday
I'll pick up the discussion on randomized studies in which we
randomize groups of individuals rather than randomize
individuals. So any questions about the schedule? Okay.
Michael.
Michael Bates: Thank you, Art.
So, as Art said I'll be talking about randomized trials
with individual interventions and this is basically the list of
topics that I'm going to be covering. I've got two hours I
guess. Today and Wednesday.
So I haven't split it into two separate distinct talks.
I basically have one talk. I'll finish at some point today and
do the rest of it on Wednesday basically. These are the things
I'm basically going to be covering. The general features of
experimental studies.
And I'm going to talk some detail about parallel study
design which is really the main type of design, randomization,
blinding, also known as masking. Efficacy and effectiveness,
data analysis and some alternative study designs which I
probably won't get to until Wednesday. Some ethical issues and
talk about early termination of studies. That's the list.
So, when I teach the summer epidemiology class I like
to constantly remind the class of this hierarchy. At least my
hierarchy of study designs maybe not everybody in epidemiology
would completely agree with it. There are some areas here
which, you know, you could swap some of these around possibly.
But basically it's a hierarchy starting down at the bottom with what I call least informative. I could replace that with hypothesis
generating. And moving up through this list here to the top
where we have analytic study designs. Have you talked about
the difference between descriptive and analytic studies? Good.
These are more hypothesis testing. And today we're going to be
talking about experimental studies. The other day I talked
about ecological studies which are more towards the descriptive
study design. This is more of a continuum; it's not like there's a clear cut point where descriptive studies change over to analytic studies. It's more of a
gradation.
But today we're going to be talking about experimental
studies often regarded as the gold standard of epidemiology
studies. When they are properly conducted and they have a good
enough sample size they have a very high degree of validity.
In other words they are free of bias. I know you haven't
talked about bias or confounding, but you will be. This will
be clearer. Unfortunately I'm going to have to introduce some
of these terms like bias and confounding today because it's
unavoidable when talking about experimental studies. The thing about epidemiology is that you can't learn it in a completely linear way. Once you get to the end of the course you'll have to cycle
through the course notes a few times and things that didn't
perhaps make complete sense the first go around will make a lot
more sense when you revisit them.
And you can also think of experimental studies in humans as, to some extent, the human analog of, say, toxicology studies. I came from a background in toxicology involving rats and mice, but I found that human studies were more interesting; that's why I switched to epidemiology. Basically experimental studies in humans can be thought of a bit like
toxicology studies in animals in terms of the overall study
design.
So there's a major split if we look at the
range of epidemiology study designs as I set out in the
hierarchy. There's really a very big divide here between
experimental studies which you're going to be talking about
today and specifically individual level studies and
observational studies which we're not dealing with today but
you will cover in some detail. The thing about observational studies is that we observe people in place. We record data on
their exposures and health outcomes. There's no interference
with these people. We try not to interfere and see what
happens with them in their natural living circumstances. In experimental studies, as I'll describe, we do interfere with the
people in terms of their exposures. The main defining feature
of experimental studies is the allocation of treatment. That
is really the big thing that distinguishes experimental studies
from observational studies. Usually the allocation is
randomized. There may be a few instances, for example, phase
one studies in the hierarchy of clinical trials where
randomization may not take place. We'll talk about what
randomization means and how to go about it.
Basically the people are randomized to groups and the different treatments are allocated. The participants are followed over time and the health outcomes they experience are monitored.
These studies go under different names, and people use quite a bit of terminology to describe them, such as intervention
studies. And as I'm going to explain they can be used for
treatment purposes or for prevention purposes, but they may
have completely different names. So treatment studies may be
called randomized controlled trials, clinical trials,
therapeutic trials, maybe some other trials as well. In
prevention settings they may be called field trials or
community intervention trials. I think you're going to be
talking about community trials next Monday. We'll stick to the
studies which have the individual people as participants today.
So as I said, studies may be concerned with either
prevention or treatment.
So, the prevention, this involves interventions given
to people who are basically healthy. They may be higher risk
individuals. They may have a particular genetic make up that
puts them at higher risk. It's to prevent disease. Trials of
vaccines would fall into this category. Another example here: does tamoxifen lower the incidence of breast cancer in high-risk women compared to women not given tamoxifen?
I think what the studies have found is that it's only in a particular genotype, a particular genetic makeup, that tamoxifen is actually protective. And then there are the treatment studies, where the intervention is given to people to try and cure them: your study populations are people who have the disease, and you split them into different groups and see whether the treatment cures them, causes remission, reduces their risk of recurrence, or whatever. These are the two basic study types.
The first instance usually cited of an experimental study is by James Lind, a British naval surgeon in the eighteenth century. At the time scurvy, which most of you know is caused by vitamin C deficiency, was a major problem. The British empire more or less stretched around the world, and sailors were out to sea for many years.
Anyway, Lind, who was a naval surgeon, did this early clinical trial: he took 12 men with scurvy and divided them into six groups of two each.
And he administered these treatments. This is supposedly a picture of him treating the sailors. And he found that of the six treatments, oranges and lemons were the only ones that were really effective. But as is often the case in public health it took, how long, it took 50 years or so before this was adopted. In those days there were many instances of this, where early public health discoveries were not adopted until maybe 50 to a hundred years later. Eventually the British sailors were given lemon juice or lime juice, which is why the British are sometimes referred to as limeys.
So, okay. We're going to be talking about individual
persons today, not the community interventions, which are preventive only. You don't treat communities for disease generally. You
give community interventions for prevention. Individual person
interventions can be therapeutic or preventive. Here's an example: do women with stage one breast cancer survive with lumpectomy alone as long as women who are given lumpectomy plus radiation? This is an example of a treatment study with individual people.
So, there's a key ethical requirement in terms of being
able to carry out an experimental study. Treatment can only be
given in an experimental study if it has potential benefits.
That can be curative or preventive. It must have potential
benefits. This places major limitations on what exposures or what treatments we can give people. What would be
an example of something we couldn't test in an experimental
study?
Clearly obvious. Many things. Anthrax, smoking, any
of these things. This is in some senses a problem that makes
it more difficult for people who are mainly in observational
epidemiology such as myself to actually conclusively determine
cause and effect. With experimental studies if they are
properly conducted with a large sample size then you can fairly
clearly identify causal relations but with observational
studies we can't. There are many things we can only
investigate in observational studies. This is the primary
requirement. It has to have some sort of potential benefit. We'll come back to this, probably on Wednesday, when we discuss the issue of equipoise.
Okay. So, this is the general schema for the conduct
of experimental studies. So you start by having some sort of
hypothesis. Some sort of purpose of your study. Design the
study and obtain funding.
And then, as with all epidemiologic studies, you need to get IRB approval. Berkeley has its own IRB which
we on this campus would apply to. And I'll talk about that
more on Wednesday. And then you need to have some study
participants selection and recruitment procedures based on
certain eligibility criteria for the study participants and
then you need to get informed consent.
And at that point the eligible and willing participants are randomly allocated to the study treatment or intervention groups, if you like to call them that. You basically follow them for some period of time and then investigate whatever the particular study outcome is that you are investigating, whether it be cure or remission, or first occurrence of disease in prevention studies, and so on.
And then basically the statistical analysis, at least relative to observational studies, is relatively straightforward: comparing the rates of the outcome in one group with another.
So the question of number of participants. So, this
involves power and sample size calculations. You are not covering those in this course, but if you go into 250 B or do some of
the statistics classes you will definitely do this.
So basically we have to determine the study size. This
is determined up front. It's based on a number of assumptions,
particularly in terms of the number of disease cases that are
likely to occur.
And the implication of these sample size calculations is that prevention studies usually have much larger sample sizes than therapeutic studies, because in therapeutic studies we are starting off with people with the disease and you are looking for some appreciable benefit in terms of remission or cure. Whereas with prevention studies you may have to have a very large population, because the rate of the disease may not be very high but you want to detect a difference between the two groups. Maybe one is getting a vaccine
and another one getting a placebo. You may need a large
population size, maybe tens of thousands in order to
confidently detect a difference in the incidence of disease
between the two groups.
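To make that concrete, here is a minimal sketch, not from the lecture, of the standard normal-approximation sample size formula for comparing two proportions. The disease rates, cure rates, and function name are hypothetical choices of my own, but they illustrate why prevention trials tend to be so much larger than treatment trials.

```python
# A minimal sketch (hypothetical numbers, not from the lecture) of the usual
# normal-approximation sample size formula for comparing two proportions.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate number of participants per group to detect p1 vs. p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance criterion
    z_beta = norm.ppf(power)            # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Prevention trial: rare outcome, small absolute difference -> very large n
print(round(n_per_group(0.02, 0.01)))   # roughly 2,300 per group
# Treatment trial: common outcome, large difference -> much smaller n
print(round(n_per_group(0.50, 0.30)))   # roughly 90 per group
```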
Now, this is the general schema for randomized trials.
So, you have some reference population which is a
population that you are wanting to be able to make some sort of
inferences for. From this reference population you take an
experimental population. Some of these people may be unwilling
to participate. You have some number of non-participants.
Then you have the willing people, and treatment allocation into two groups: a treatment group and a comparison group. It's fairly straightforward.
Um, and this basically just summarizes the groups. The reference population is the general group to which the results of a trial are intended to be applicable. That may be a large
population or a somewhat smaller population.
Um, and then you have your experimental population.
People who are considered for the enrollment in a trial. And
then you have your actual study participants.
Well, I should say first that we have some selection criteria which we use to take people from the reference population to determine our experimental population, from which the study participants derive. The study participants are the ones actually willing to
participate. And you have a number of exclusion and inclusion
criteria for study participation.
And sometimes this is a compromise between the population more suitable for research and the population best for generalization. So, for example, in a prevention trial, if you want to see whether a particular treatment is effective in reducing the rate of lung cancer, you might just focus on smokers, because they are more likely to develop lung cancer, rather than take a general population sample.
And you may decide to exclude people who are likely to be lost to follow up. You follow people from the beginning of treatment to the end of the trial, so you may make some decisions to exclude people who may not be likely to be reliable. They may not be compliant. They may be planning to leave the area and so on. These are the individual
decisions made in the study design. Clinical trials have
various phases of testing. These are basically the phases.
There's preclinical testing likely to involve studies in
chemistry and cell cultures and animals.
Before you get to the stage of actually being ready to
treat human beings. And then we have these sequential phases.
First screening for safety, establishing the testing protocol
and final testing. I'll give you more detail on these. The
phase one studies involve a relatively small number of usually
healthy people and small amounts of the drug are administered
just to determine its safety and also to see how it is
metabolized, how rapidly it's excreted. Whether or not the
metabolic parameters in humans are similar to what are being
observed in animals. If they are very different, then the animal models may not really be predicting what might happen in the
human situation. That's phase one.
If a new treatment survives phase one, then you may
move onto phase two. Which is testing for safety and efficacy
in a larger population. And you may look at different dose
levels.
And this is to help with the designing of the larger
trial. If you move into phase three.
So, the larger trial, phase three, involves a much larger study population. It could be thousands of people. And
usually you're testing in comparison with the current standard
therapy or if there's no current standard therapy you may be
testing against the placebo. And this is the last phase
required before in this country at least the FDA will approve a
treatment for wider use. But that's not the end of the story.
Then there are the phase four studies, which are various follow-up studies in which you may explore different patient outcomes and different patient populations, looking for side effects, all
that sort of thing.
So this is general post-marketing surveillance.
So these are the main studies. In each one of these
there are a multitude of different study designs and things
that can take place.
>>>: It says a small number of healthy people and sometimes advanced disease cases. What does that mean?
Michael Bates: Yes, for example if the drug was
intended to treat a serious cancer. And it may be that you
decide not to use healthy people. And some of these cancer treatment drugs can
have very serious side effects. They are very potent
chemicals. So you may just use people who are at a very
advanced stage. Particularly if there's no other treatment
available. It would depend on the nature of the situation.
But certainly some of these cancer therapeutic agents they are
highly cytotoxic. You may not want to give them in small doses
to healthy people. Whereas a blood pressure medication you
might be happy to use relatively healthy people.
Okay. So, let me talk first about the parallel design
trials. These are the most common. This is the most common
study design. And basically you take your eligible and willing
participants and you randomize them into two or more treatment
groups. Usually two but sometimes more than that. But each
person, each individual person only receives one treatment.
And then the comparison group either gets the current standard
treatment or they get a placebo. We'll come back to
placebos. The groups are followed in a very consistent way and
their outcomes are measured.
So, um, we, I mentioned randomization before. So
randomization is carried out at the stage of having recruited
eligible and willing participants. And the intent of the randomization is to achieve baseline comparability between the various groups. This has great benefits, particularly in terms of equalizing across the groups any factors that could otherwise influence the outcome. I'm particularly talking
about confounding factors. I know you haven't yet talked about
confounding in this class. When you come to it, it will be
much clearer. Randomization is an excellent way of eliminating
confounding. Here's the good thing about it. Often we know about particular confounding factors in our observational studies, and we take them into account when we carry out the data analysis in cohort and case-control studies. But we can only take them into account if we know what they are; then we can adjust for them in the multivariate models. The great thing about randomization is that we don't have to know what the confounding factors are. There's
always the possibility in an observational study there are
unknown confounding factors which are completely influencing
our results.
But the great thing about randomization is that if every person has an equal chance of getting into any of the groups, then we equalize both known and unknown factors between the groups, and so they cannot influence the study outcome.
If the randomization has taken place properly, then we can say the groups should be essentially identical except for the intervention.
And the other thing is that it eliminates any sort of
conscious or unconscious selection bias by either the physician
or the patient. So physicians may -- they have patients and
they may be more inclined to give one drug to people who have a
more serious condition and another one to people that have a
less serious condition. If it's done properly, this eliminates the selection bias and prevents the patient from opting for one drug or the other, or for the treatment or the placebo. Of course most patients naturally do not want to be
part of the placebo group. They hope they are not part of the
placebo group. That's a chance they take. If they had any
idea which group was which almost all of them would opt for the
treatment group.
Now, even when randomization is carried out properly, if you are only randomizing a small group, then you may not achieve this ideal situation of having the groups identical. We'll talk about this, we'll talk about
baseline comparability.
So, there is a problem if you have very small groups.
We have some other methods I'll talk about such as blocked
randomization and stratified randomization which can help with
that. The key thing is that if you have a very large population and you properly randomize, your chances of having the groups be essentially identical are very high. If you have a small group then that chance decreases, and that's a problem.
>>>: I just had a quick question. When someone
says they are now doing experimental cancer treatment or
experimental chemo, they are not necessarily getting the
experimental chemo. They could be in the control.
Michael Bates: They are in the trial. Yeah.
They are in the trial. There may be two drugs being compared.
>>>: Or they may be getting nothing.
Michael Bates: They may be getting a placebo.
Yeah. It's a possibility.
>>>: Can you use a placebo?
Michael Bates: We'll come back to placebos. But it would affect the statistical power. Statistical power is the ability of a study, given its design features, including the sample size, to detect a real effect if one is actually there.
And if you have unequal group sizes, it's regarded as
unbalanced and it reduces your chances of detecting an effect,
a difference between the two groups. This applies to any study
design you talk about including case control studies. The
optimal study design is equal number of cases and controls.
Sometimes in those situations you have to increase the ratio of controls to cases because you have a limited number of cases. But in an experimental study, if you have two groups, one placebo and one experimental, I can't see any reason to decrease the number of controls. Can you? Could you think of a reason, Art?
Art Reingold: No.
Michael Bates: I don't think there would be any
reason. Optimally equal size.
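As an illustration of why unbalanced groups are statistically inefficient, here is a rough sketch, my own and not from the lecture, using the same normal approximation as above; the 200-participant trial and the cure rates are hypothetical.

```python
# A rough sketch (hypothetical numbers) of how an unbalanced allocation lowers
# statistical power when the total sample size is held fixed.
from math import sqrt
from scipy.stats import norm

def approx_power(p1, p2, n1, n2, alpha=0.05):
    """Approximate power to detect a difference between two proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # SE of the difference
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(p1 - p2) / se - z_alpha)

# Hypothetical trial: 200 participants total, response rates of 50% vs. 30%
print(approx_power(0.5, 0.3, 100, 100))   # balanced 1:1   -> about 0.84
print(approx_power(0.5, 0.3, 160, 40))    # unbalanced 4:1 -> about 0.68
```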
Um, so, let's talk about confounding a little, even though you haven't studied this in any formal sense. In normal clinical practice, physicians may, for example, give a particular antibiotic to people who have the more severe cases. Somebody with a milder form of the condition they may give another antibiotic.
So that's typical clinical practice. And the problem
is if you were just to carry out an observational study and you
took people who are, who received say treatment A and compared
them with people who received treatment B in the normal course
of clinical practice, this can lead to the two treatment groups
appearing to be differently effective. It could simply be because of the tendency of the physicians to allocate treatment A to one category, say the more severely affected patients, and treatment B to the less severely affected patients. So treatment A, because it's given to the more severely affected patients, may seem less effective. But it has been given to more severely affected participants, who may have a lower survival rate. It can completely distort the
comparison.
And this is referred to as confounding by indication.
And in this instance the confounding factor would be
the disease severity or some other factor that determined
whether people got treatment A or treatment B.
So the point of the randomization process is to completely avoid this problem of confounding by indication, to eliminate the possibility that subjective factors involving the physician could influence the allocation.
Let me talk about some randomization methods. Starting
with simple randomization which is the most common one and the
most intuitively obvious one.
And so, each participant should have the same chance of
receiving each possible treatment. And of course by treatment
I'm also referring to placebo. That's regarded as a treatment.
A couple of ways of randomly allocating people to one group or another, say there are just two groups, would be to use a random number table or a random number generator on a computer. So
basically random number tables you can get books of random
numbers. Just 0 to 9, just randomly completely random. You
arbitrarily would pick one point in the random number table and
then start there. If it's an odd number you would give
treatment A. If it's even you might give treatment B. It can
be done with a similar thing, a random number generator. This
will give you a completely random sequence of allocations that wouldn't be influenced by any other characteristics.
Completely random.
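Here is a minimal sketch, my own rather than the lecturer's, of simple randomization with a random number generator: an odd digit assigns treatment A and an even digit treatment B, just as with the random number table described above. The seed is a hypothetical choice so the sequence can be reproduced.

```python
# Simple randomization: each participant allocated independently.
# Odd digit -> treatment A, even digit -> treatment B. No balance guaranteed.
import random

random.seed(20240101)  # hypothetical seed, only for reproducibility

def simple_allocation(n_participants):
    allocations = []
    for _ in range(n_participants):
        digit = random.randint(0, 9)   # like reading one digit from a table
        allocations.append("A" if digit % 2 == 1 else "B")
    return allocations

print(simple_allocation(10))  # e.g. ['B', 'A', 'A', 'B', ...]
```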
But there are a number of pseudorandom allocation methods that might be used. For example, the first patient might get treatment A, the second patient might get treatment B, the third treatment A, and so on. Or you could base it on the day of the week of allocation or the month of birth. There are all sorts of ways you could potentially do this. The last digit of the hospital record number. But the
problem with this is there may be some hidden bias built in
there.
That is out of the control of the people running the
study. For example, the day of the week of allocation. It could be, for example, that a particular physician might suggest that certain patients, knowing they have certain characteristics, come in on a Tuesday, and Tuesday is the day they give treatment B, or something like that. You can imagine the various ways this thing could be manipulated.
So, generally speaking these methods, because of the possibility of some hidden biasing factor, are better to avoid. So what about, and this is a question for you: what
about tossing a coin?
Let's say, for example, say that a patient comes in,
they are eligible and willing. And you decide it would be
heads for treatment A and tails would be treatment B. What
about coin tossing as a method? How would that be? Would that
be a good method?
>>>: If I don't get the answer I like I do best
two out of three.
Michael Bates: You toss the coin again. It's
easy to say I made a mistake until you get the right one. The
one you want. It's completely able to be manipulated by the
people allocating the treatment.
So you really don't want to use anything like that.
Anything that's manipulable.
The people running the study are probably not the people allocating the treatment; you'll maybe have treatment centers around the country. So you have to take it completely out of their hands. So you might have, say, a series of packets: one set of packets for
treatment A and another set for treatment B. They are randomly
ordered by the people running the study by say a random number
table. Then they are numbered like 1, 2, 3, 4, 5 and so on.
And then they go to the physicians who are going to
provide the treatment and they just take the next one in order,
in sequential order. They don't know what it is. They don't
get a chance to manipulate the process. Even with the best of intentions, they might think, well, I'd rather give this patient the treatment than the placebo.
So they don't have that option, because it's taken completely out of their hands and they just take the next packet, which is the allocation: treatment A or treatment B.
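Here is a small sketch, hypothetical rather than taken from any actual trial, of how a coordinating office might prepare such sealed, sequentially numbered packets: shuffle the treatment labels once, then number the packets in order for the clinics to open one at a time.

```python
# Pre-randomized, sequentially numbered treatment packets prepared centrally.
import random

random.seed(42)  # hypothetical seed chosen by the coordinating office

packets = ["A"] * 16 + ["B"] * 16   # 32 packets, half for each treatment
random.shuffle(packets)             # random order fixed before the trial starts

for number, contents in enumerate(packets, start=1):
    # The clinician only sees the packet number; the contents stay sealed.
    print(f"Packet {number:02d}: treatment {contents}")
```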
Now that was a simple randomization. But if you are in
a situation where you have a fairly small number of study
participants, say you have 30 participants in a study. And,
you know, it may be in the early stages or rare disease. You
have 30 participants. And if you do some sort of randomized split using, say, a random number table, then ideally you would like 15 in each group. As I said, the maximum study power comes from having equal numbers in each group. By simply taking random numbers you have about a 20 percent chance of getting an 11 to 19 split or something more uneven, 5 to 25 or anything worse than 11 to 19. This is, as I said, statistically
inefficient. So the blocked randomization process is a very
good way of ensuring you get treatment groups of equal size, of
15 each. This is the process, it's a little tricky to
understand the first time through it. I'll try and take you
through it reasonably slowly. You start the process with
blocks. Blocks that have a number of patients that is a
multiple of the number of treatment groups. For example, in
this example we have two treatment groups. Treatment A and
treatment B.
So two groups.
So we need a multiple of two, a multiple of the number of groups. We'll take a block size of four: two times two is four. Okay. The next step is to identify all the possible
permutations of A and B with equal numbers of As and Bs.
So, in other words, two As and two Bs. Obviously there are more permutations overall, but we only take those with two As and two Bs. There are six alternatives: six possible permutations arranging As and Bs in equal numbers.
And then you assign the patients, as they enter the trial, to blocks of four. So the first four patients are a block. And then you use a process to randomly select from the six possible permutations.
So say that you randomly selected BAAB: then patient one gets B, patients 2 and 3 get A, and patient four gets B.
And so if you carry this through the whole process then
you will end up with an equal number of participants in each
group.
Does that make sense? So you'll have 15 on treatment A
and 15 on treatment B or whatever the numbers happen to be.
Is that clear? Feel free to say no because I find when
I talk about this usually many people are not quite clear.
>>>: So when you say randomly select one
permutation for each block, who is doing the selecting? The
patient?
Michael Bates: No, not the patient. The people
running the study. You don't want the patients to control it.
The patients, so the patients don't all turn up in one block on
one day. So, when you start the study you have to wait some
period of time because they get diagnosed. They'll come in in
some order and you'll basically allocate them; you will kind of organize beforehand that the first block of four gets this permutation, this one is selected for the second block of four, and so on. It's all worked out in advance, all planned in advance and ordered, so you can order the packets. If you are the principal investigator and the people running the study, you can set this all up in advance so that it basically goes right through the study. Say there are 32 participants; it's divisible by four, so you can have it arranged. This will ensure that when you have recruited your 32 participants, 16 will be on treatment A and 16 will be on treatment B.
>>>: What if four walk in the door at the same
moment?
Michael Bates: That would be most unlikely to
happen. Think of it as a rare disease. Usually it would be a
rare disease when you have this situation. Unless you have a
small study budget. It's most unlikely four people would walk
through simultaneously.
>>>: Or two people. Then there's some selection
bias.
Michael Bates: In that, I'm sure there's a way of
discriminating who came through first and who came through
second. It would be an extremely unusual circumstance. The
patients usually make an appointment. They come at different
times. They are not just walking into the office in a block.
>>>: Okay.
Michael Bates: There are different times in terms
of their diagnosis and so on. I think in practice that would
not be an issue.
So that's the blocked randomization process. It's
useful when you have a relatively small number of participants.
>>>: How do you determine the block size? In this instance it's two by two. What if there were seven groups?
Michael Bates: If there were seven groups. The
thing is though, if you really only had a small number of
participants it's most unlikely you would have seven groups.
You would be spreading your participants too thinly. Really if
you have a small number of participants it's better just to
have say just two groups that you are comparing. Otherwise you
have very few in each group and you wouldn't be able to make
any useful comparisons between them.
>>>: Blocked is only used with small groups.
Michael Bates: Small total numbers of
participants, yeah. If you have a very large study with many participants, then the chance of getting, just by chance, a huge number in group A and a very small number in group B is very low.
It's easy to show that with a large sample size that you randomize, there may be a few differences between them, but if you have 3000 people, say, it's going to be close to 1500 in group A and 1500 in group B. But with 30, you could have an imbalance. Any other questions on that?
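To tie those last points together, here is a minimal sketch, mine and not the lecturer's, of blocked randomization with blocks of four, together with a quick binomial check of the claim that simple randomization of 30 people gives roughly a 20 percent chance of a split at least as uneven as 11 versus 19. The seed and participant count are hypothetical.

```python
# Blocked randomization with blocks of four: pick one of the six orderings of
# two As and two Bs for each block, so the A/B counts stay equal throughout.
import random
from itertools import permutations
from math import comb

random.seed(4)  # hypothetical seed for a reproducible allocation list

blocks = sorted(set(permutations("AABB")))   # the six distinct permutations

def blocked_allocation(n_participants):
    sequence = []
    while len(sequence) < n_participants:
        sequence.extend(random.choice(blocks))   # one random block of four at a time
    return sequence[:n_participants]

print(blocked_allocation(32))   # exactly 16 As and 16 Bs

# Check on the earlier claim: probability that simple randomization of 30
# people leaves 11 or fewer in one group (a split of 11-19 or worse).
p_uneven = 2 * sum(comb(30, k) for k in range(0, 12)) / 2 ** 30
print(round(p_uneven, 2))   # about 0.20
```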
Let's move onto stratified randomization. And it may
be that you want to have sort of perhaps a balanced number of
males and females, for example, in your trial.
But it may be that the disease occurs more commonly in
males or in females. In one group or the other.
So if you just took a random selection of patients, then you might get a relatively large number of males compared to females. But it may be that
you want the study results to be potentially applicable to both
males and females. So you may want to have a more balanced
group in terms of the same number of males and same number of
females. Or this could also be in regard to age groups. So it
may be the disease occurs more commonly in elderly people. But
it may also occur in younger people. So you may want to make
sure you have it balanced and have your study population
balanced in terms of the distribution of ages.
The first step is really to define the different
stratification categories. It could be strata, males and
females or it could be different age strata. And then you
separately select the desired number of participants from each
of those strata.
And that will then give you what you want.
And then it may be that you have many people in one stratum. For example, the condition may occur more frequently
in males. So some males who are eligible may not get to
participate. You may wait a longer time to get enough women to
participate, for example.
>>>: Do you mean you break them up into two groups, males and females?
Michael Bates: You could use the blocked randomization. You can use blocked randomization on top of the stratified randomization if you have small numbers. But you may not need to use the blocking if the numbers are adequate. If you just want to have a balance in terms of the age groups or gender and so on, you could do this. Yes, you can
combine the two.
And so this is actually, I think I took this out of
your book, Aschengrau. This is just a simple example: eligible women stratified according to gestational age at entry, so different gestational ages. And then they are randomized at that point to the treatment or the placebo. This is for ***,
maternal *** infant transmission. A simple example.
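Here is a minimal sketch, with an entirely hypothetical participant list, of stratified randomization: group participants by stratum first, then run a separate blocked randomization within each stratum, which is the combination mentioned in the question a moment ago.

```python
# Stratified randomization: a separate blocked allocation within each stratum.
import random

random.seed(7)  # hypothetical seed

BLOCKS = ["AABB", "ABAB", "ABBA", "BABA", "BAAB", "BBAA"]   # the six permutations

def blocked_sequence(n):
    seq = []
    while len(seq) < n:
        seq.extend(random.choice(BLOCKS))
    return seq[:n]

# Hypothetical participants: (id, stratum)
participants = [("P01", "female"), ("P02", "male"), ("P03", "female"),
                ("P04", "male"), ("P05", "female"), ("P06", "female"),
                ("P07", "male"), ("P08", "female")]

allocation = {}
for stratum in ("female", "male"):
    members = [pid for pid, s in participants if s == stratum]
    for pid, arm in zip(members, blocked_sequence(len(members))):
        allocation[pid] = arm

print(allocation)   # A/B counts balanced within each stratum (up to block size)
```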
There's another method which is used to prevent human factors, subjective factors, from impacting the study, but I see we've reached 10 o'clock. Maybe this is a good point to stop. And
I'll continue on Wednesday. (Applause)