The following
is an edited transcript of an interview between Warren Ewens
and Anya Plutynski. The interview was conducted on December
1, 2004 in Philadelphia, Pennsylvania.
Interview Table of Contents:
Personal Background
AP: When did you first develop an interest in genetics? Who
were your teachers? What texts did you read as a graduate
student?
WE: I majored in mathematical statistics at the University
of Melbourne as an undergraduate. After that, I moved (in
1961) to the Australian National University to do a Ph.D.
There were various persons there with whom I could do this,
and I decided that I would work with Pat Moran, who at that
time had developed an interest in the statistical aspects
of genetics. This decision was made despite the fact that
I knew essentially nothing about genetics. It was not a random
choice but a reasoned one, since I was certain that genetics
would be an increasingly important area of science. That view
has been borne out by subsequent events, and I’m very
happy that I did choose to work in genetics.
Concerning teachers, Moran was the only teacher I had when
I was doing my PhD. In Australia at that time one did not
do coursework as part of a PhD, but started immediately on
research. Thus one’s “teacher” was one’s
supervisor, with no other teacher involved in any formal way.
However I did receive a lot of help from Joe Gani, who was
in Moran’s department, and who had a very broad interest
in stochastic processes, including an interest in genetics.
In fact my first genetics paper was written with him.
So far as textbooks are concerned, there were very few readily
available in Australia in those days, and such books as one
could get were expensive, because they all came from overseas.
Thus one often did work without the help of any textbook at
all. The only textbook in mathematical genetics that I read
as a graduate student was C.C. Li’s “First Course
in Population Genetics”, which I found to be very useful.
The lack of textbooks was mitigated to a large extent by first-class
library facilities, with all relevant journals represented
and up-to-date. Although I would not call it a textbook in
the normal sense, Fisher’s great book “The Genetical
Theory of Natural Selection” was of course extremely
influential to me in my Ph.D. work.
AP: I was wondering if I could follow up your comment that
you thought genetics was an increasingly interesting and important
area. Why did you think so? Were there any social or historical
factors influencing that choice?
WE: No, the factors influencing this view were largely scientific.
As I said, I was not very knowledgeable about genetics generally,
but I was certainly aware, as any educated person should have
been, about Watson and Crick and the discovery of the nature
of DNA in 1953. It did not take too much imagination to realize
that, with this new evidence about the nature of the genetic
material, there would need to be a rewriting of the Darwinian
evolutionary paradigm in terms of the actual molecular genetic
material. This was also emphasized to Moran and me by David
Catcheside, who was the Professor of Genetics at the Australian
National University at the time.
AP: Who were your influences after your PhD?
WE: In 1964, when I completed my PhD, I did a six-month post-doc
at Stanford, working with Sam Karlin, who influenced me enormously.
At that time he was already a famous mathematician, and fortunately
for me he had recently become interested in genetics. In fact
I was his first post-doc in the genetics area. His colleague
Jim McGregor, with whom Karlin had written a series of important
papers in stochastic processes, also influenced me greatly.
At that time, Karlin and McGregor ran a regular Monday evening
seminar series in mathematical genetics, and apart from them
and myself, this was attended by Walter Bodmer and Oscar Kempthorne.
They were both senior, well-known people in genetics, and
in Kempthorne’s case in statistics as well. So this
was tough company. I was influenced very much by Kempthorne,
who taught me to emphasize the genetics rather than the mathematics,
to make such mathematics as one did relevant to a genetic
reality, rather than doing the mathematics entirely for its
own sake. He was quite scathing about people who did the latter.
While I was at Stanford, Jim Crow came by to give a seminar.
I got to know him very quickly, possibly because I said in
the question period after his talk that I thought that what
he had said was wrong! Despite this, (or perhaps because of
it, because he was a great and generous man), he invited me
to give some talks at Madison, which I did in June of that
year (1964). He influenced me very much as well, then and
later. Crow was an extremely senior figure in population genetics,
and one could say that he and his close associate Kimura formed
the major force in population genetics theory at that time.
I have kept in close touch with both Crow and Karlin since
those times, with Crow influencing me in the same way that
Kempthorne did. That is to say, as a geneticist, he put the
genetics first and the mathematics second, an attitude that
I found very appealing. From then on, that was the way in
which I tried to address mathematical genetics questions.
Genetic Load
AP: The mention of Crow brings up a further question I had.
Were many biologists concerned about the problems of genetic
load and mutation load at the time you began your graduate
career?
WE: Yes. There was an interest in two load concepts. The first
was the mutational load, and interest in that concept came
from the concern about genetic damage caused by atomic bombs.
This was discussed in detail by the great geneticist Muller
in 1950. Jim Neel and Jack Schull had gone to Japan shortly
after the 1939-45 war to conduct an examination of the mutational
effects of the atomic bomb. Their work on this matter was
very well known, and so the question of how much genetic damage
had been produced by the bomb was uppermost in many people’s
minds. That damage came to be analyzed mathematically as a mutational
load.
A second form of the load concept was introduced by the
British biologist-mathematician Haldane who claimed, in 1957,
that substitutions in a Darwinian evolutionary process could
not proceed at more than a certain comparatively slow rate,
because if they were to proceed at a faster rate, there would
be an excessive “substitutional load.” Since Haldane
was so famous, that concept attracted a lot of attention.
In particular, Crow and Kimura made various substitutional
load calculations around 1960, that is, at about the time
that I was becoming interested in genetics.
Perhaps the only disagreement I ever had with Crow concerned
the substitutional load, because I never thought that the
calculations concerning this load, which he and others carried
out, were appropriate. From the very start, my own calculations
suggested to me that Haldane’s arguments were misguided
and indeed erroneous, and that there is no practical upper
limit to the rate at which substitutions can occur under Darwinian
natural selection.
AP: Can I follow that up? Can you, in layman’s terms,
explain why you think that there is no upper limit in the
way that Haldane suggested?
WE: I can, but it becomes rather mathematical. Let me approach
it this way. Suppose that you consider one gene locus only,
at which a superior allele is replacing an inferior allele
through natural selection. In broad terms, what this requires
is that individuals carrying the superior allele have on average
somewhat more offspring than the mean number of offspring
per parent, otherwise the frequency of the superior allele
would not increase. This introduces a concept of a “one-locus
substitutional load,” and a formal numerical value for
this load is fairly easily calculated. However, the crux of
the problem arises when one considers the many substitution
processes, perhaps hundreds or even thousands, that are being carried
out at any one time. In his mathematical treatment of this
“multi-locus” situation, Kimura, for example,
in effect simply multiplied the loads at the various individual
substituting loci to arrive at an overall total load. The
load so calculated was enormous. This uses a reductionist
approach to the load question, and to me, this reductionist
approach is not the right way of doing things. Further, the
multiplicative assumption is, to me, unjustified. It is the
selectively favored individuals, carrying a variety of different
genes at different loci, who are reproducing and being required
to contribute more offspring than the average. If you consider
load arguments from that individual-based, non-reductionist
basis, the mathematical edifice which Kimura built up just
evaporates, and in my view the very severe load calculations
which he obtained by his approach became irrelevant and misleading.
The individual-based calculations that I made indicated to
me that there is no unbearable substitutional load.
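To make the multiplicative calculation concrete, here is a small Python sketch; the per-locus load and the number of loci are hypothetical values, chosen only to illustrate the scale of the numbers involved:

```python
import math

# Hypothetical figures, chosen only for scale: a per-locus substitutional
# load of 1% at each of 1000 loci undergoing substitution at the same time.
per_locus_load = 0.01
n_loci = 1000

# Kimura-style (reductionist) combination: the one-locus loads are in
# effect multiplied across loci, so mean fitness is (1 - L)^n and the
# total substitutional load is 1 - (1 - L)^n.
mean_fitness = (1 - per_locus_load) ** n_loci
total_load = 1 - mean_fitness

print(f"mean fitness under the multiplicative assumption: {mean_fitness:.2e}")
print(f"total substitutional load: {total_load:.5f}")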
AP: Did you or your teachers have much concern about the effects
of the bomb?
WE: Not in the sense that any such concern motivated my own
theoretical work. On the other hand, as I said earlier, there
was of course a general concern: any rational person would
obviously be concerned about possible genetic damage caused
by atomic bombs. But in the 1950’s I was too young to
understand the relevant mathematical calculations, and I think
that I also felt that the data were not sufficiently firm
at that stage for definitive conclusions. So, although I had
a general “citizen’s” interest about mutational
damage, I did not have a professional interest in it in the
sense of carrying out calculations concerning it.
Diffusion Processes and Molecular
Evolution
AP: What were your main research interests as a graduate student?
WE: As I have said, my background was in statistics, so my
research interests were largely in the statistical or mathematical
side of genetics, and in particular evolutionary genetics.
One of the things I did in my Ph.D. thesis was the following.
The stochastic genetical model which was analyzed in some
detail at that time is the so-called “Wright-Fisher”
model, which is a simple Markov chain model. One of the problems
with that model is that many quantities which you would like
to calculate cannot be calculated exactly. The mathematics
is just too difficult, and to this day nobody has been able
to calculate even quite straightforward properties of this
model. However, it has been known since the 1920’s that
you could approximate the Wright-Fisher model by a so-called
diffusion process. A diffusion process in this context is
one for which time is taken to be continuous, and the frequency
of any allele is also taken to be continuous, that is capable
of taking any value in a continuous range of values. Although
in reality time is of course continuous, the frequency of
any allele must be discrete, that is it can take only a discrete
set of values. So one of the questions that was in the air
at that time was how accurate calculations made for the diffusion
process would be as approximations to the corresponding unknown values
for the Wright-Fisher model. This is a purely mathematical question. This was one
of the problems that I took up in my thesis, and I was able
to get quite specific answers for it. Broadly speaking, what
one found was that these approximations were extremely accurate,
even in very small populations, and that a bound could often
be placed on the error incurred by making them. Since the
diffusion calculations are very simple, and the formulae which
you get from them are very transparent and make very clear
to you the effects of various parameters, such as mutation
rates and selection differentials, one was very happy to have
very simple, albeit approximate calculations, rather than
exact Wright-Fisher model values that in any event are impossibly
difficult to calculate.
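The kind of accuracy check described here can be sketched in a few lines of Python; the population size, starting frequency, and replicate count below are arbitrary illustrative choices, not values from Ewens's thesis:

```python
import math
import random

random.seed(0)  # reproducible illustration

def wf_absorption_time(two_n, i0):
    """Generations until loss or fixation of a neutral allele in a
    Wright-Fisher population of two_n genes, starting from i0 copies."""
    i, t = i0, 0
    while 0 < i < two_n:
        p = i / two_n
        # Binomial(two_n, p) draw built from Bernoulli trials, to keep
        # the sketch within the standard library.
        i = sum(1 for _ in range(two_n) if random.random() < p)
        t += 1
    return t

two_n = 100   # 2N genes, i.e. N = 50 diploid individuals
p0 = 0.5      # starting allele frequency
reps = 1000

sim_mean = sum(wf_absorption_time(two_n, two_n // 2) for _ in range(reps)) / reps

# Diffusion approximation to the mean absorption time, in generations:
#   t(p) = -2 * (2N) * (p ln p + (1 - p) ln(1 - p))
diff_mean = -2 * two_n * (p0 * math.log(p0) + (1 - p0) * math.log(1 - p0))

print(sim_mean, diff_mean)  # both near 2 * 2N * ln 2, about 138.6 generations
```

Even at this small population size the diffusion value tracks the simulated Wright-Fisher mean closely, which is the sense in which the approximations were "extremely accurate."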
AP: Can I follow that up? You mentioned two advantages of
the diffusion approximations of the Wright-Fisher model –
ease of calculation and results that made clear the effects
of various parameters on those results. Were there different
assumptions that one needed to be sensitive to in using diffusion
models in some applications?
WE: No, I don’t think so. The main difference was in
replacing a model that is discrete in time and space by one
that is continuous in time and space. This sort of thing is
done often in applied work when some calculation using one
approach is easy and the corresponding calculation using the
other approach is difficult or impossible. Further, it has
been argued that diffusion formulae would probably give results
closer to reality than the Wright-Fisher formulae in that
diffusion processes assume continuous time, which is more
appropriate than the discrete time Wright-Fisher assumption.
Another reason that one would be happy to use diffusion results
is that the Wright-Fisher model is only a model. Nobody would
claim that it represents true reality, because it makes many
simplifying assumptions. Therefore, any calculation made by
using that model need not necessarily represent a true “real-world”
value at all closely.
AP: When did you first hear about techniques for observing
genetic variation? What did you expect that work would find?
Were you surprised?
WE: The first time I heard about results concerning the amount
of genetic variation in natural populations was when I read
the well-known papers of Lewontin and Hubby in 1966. In these
papers they showed, by using electrophoretic techniques, that
there was a lot more genetic variation in natural populations
than perhaps many people had previously thought would be the
case. I had no a priori particular view on the amount of variation
existing in natural populations, because as a mathematician,
I had no specialized knowledge about that. Many biologists
had claimed that there would be very little variation, that
we would all be very similar genetically, and I was quite
prepared to believe that. However, this view was taken not
only for purely observational reasons, but also for theoretical
reasons: calculations made in the 1960’s using the concept
of segregation load – a third concept of genetic load
– led to the view that there could not be much genetic
variation from one person to the next. I very
quickly came to disagree with this load calculation, as I
did also for other forms of load calculations, as I have mentioned
earlier. I therefore saw no load-based limit to the amount
of genetic variation that could exist in natural populations.
All the same, I was quite surprised at the extent of variation
revealed by the Lewontin and Hubby results. At that time,
in 1966, two years before Kimura put forward the neutral theory,
I started thinking about whether one could use the patterns
of genetic variation which they observed to assess whether
that variation was neutral or selectively induced. However,
my work on that at that stage was very naïve. I did not
have the mathematical equipment to approach that question
in depth, so I did not publish anything on that in the 1960’s.
Insofar as I did do any (unpublished) work on it, my calculations
suggested that the variation which had been revealed by Lewontin
and Hubby in Drosophila - and also by Harris’s work
on humans at essentially the same time – looked to me as
though it could, on the whole, reasonably be regarded as selectively
neutral. Although, as I said, I published nothing on it at
the time, I was fascinated by the fact that there was all
this variation, and I was clear in my mind that the pattern
of variation which one observed should tell you something
about the nature of the evolutionary processes that had led to these
patterns.
AP: Were you familiar with work in the 50’s and 60’s
by molecular biologists such as Margoliash and Fitch on the
molecular clock?
WE: I knew Walter Fitch quite well in Madison in 1971, when
I visited there to work with Crow. Although I was fascinated
by his work, I did not work in the same area that he did,
having other interests at that time. It was however clear
to me that, following the work of Watson and Crick, one would
have to recast evolutionary theory, and specifically the mathematical
aspects of it, in a molecular framework, and Fitch was a leader
in that effort at that time.
My own interests in molecular genetics were more technical
and mathematical, and focused on Kimura’s so-called
infinite sites model, now called the infinitely many sites
model. This model formed the start of a recasting of the type
of evolutionary theory in which I was interested on a molecular
basis. The word “site” of course referred to a
nucleotide site, and “infinitely many” referred
to the approximation that in any genome there might be billions
of nucleotide sites. As an approximation you could take this
to be infinitely many sites, an assumption that made the mathematics
a bit easier. It was a great credit to Kimura that he in effect
said: “we have to rethink classical evolutionary
population genetics theory on a molecular basis.” His
first major paper on this model was more or less contemporary
with the work that Fitch and Margoliash were doing. Kimura’s
work could be described as intra-populational and Fitch and
Margoliash’s as inter-populational.
AP: Kimura built upon your own work. What was the relationship
between your original work on diffusion modeling and Kimura’s?
WE: I would definitely not say that Kimura built on my work.
People built on Kimura’s work. Kimura was extremely
brilliant and did not need to rely on other people. The relationship
between Kimura and myself at that time was that he was the
top dog and I was just a junior person involved in the same
field. And, I might add, he would make that sort of thing
very clear to you.
(Laughs)
So the influence went entirely the other way. I subsequently
had many disagreements with Kimura, since his work, although
path-breaking, was often incorrect, and the relationship between
us was pretty tense for decades. But that did not stop me
from admiring his work very much and in effect saying this
in my published work. For example, in my 1979 book “Mathematical
Population Genetics”, I made as many positive references
to his work as I could. As just one example, I said that his
work heralded a rebirth of population genetics theory, which
I believed then, and still believe now, to be a true statement.
In cases where I disagreed with his work, I said so quietly
and blandly, but also firmly, in that book.
AP: It is true, though, isn’t it, that the work you
did on diffusion models, in particular, the testing of the
diffusion approximation to the Wright-Fisher model, was something
that Kimura re-derived – in particular, the results
of the infinite alleles model with the diffusion approach.
WE: No, he did not borrow diffusion work from me. He had been
doing diffusion work himself well before I even got into the
field - he was a master of diffusion work. There was no way
in which he used me or anybody else in using diffusion theory.
Frequently I did independently find some of the results which
he had previously found and of which I was unaware. On the
other hand, after a while I did get some results that he had
not obtained, in particular the sampling formula named after
me.
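The sampling formula mentioned here (the Ewens sampling formula) gives the neutral-theory probability of any allelic configuration in a sample, and is easy to state in code. A minimal sketch, with an arbitrary sample size and mutation parameter theta chosen only for illustration:

```python
import math

def esf_probability(counts, theta):
    """Ewens sampling formula: probability that a sample of
    n = sum(j * a_j) genes contains exactly a_j allelic types each
    represented j times, where counts[j] = a_j, under neutrality
    with scaled mutation parameter theta."""
    n = sum(j * a for j, a in counts.items())
    rising = math.prod(theta + k for k in range(n))  # theta(theta+1)...(theta+n-1)
    prob = math.factorial(n) / rising
    for j, a in counts.items():
        prob *= (theta / j) ** a / math.factorial(a)
    return prob

def partitions(n, max_part=None):
    """All partitions of n, each as a list of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

# Sanity checks with illustrative values n = 6, theta = 1.5:
n, theta = 6, 1.5
total = sum(esf_probability({j: p.count(j) for j in set(p)}, theta)
            for p in partitions(n))
print(total)  # ≈ 1.0: the formula is a distribution over sample configurations

# Two sampled genes are of the same allelic type with probability 1/(1+theta).
print(esf_probability({2: 1}, theta))  # 0.4 = 1/(1 + 1.5)
```

The second check recovers a classical result of neutral theory: the probability that two randomly sampled genes are identical is 1/(1+θ).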
AP: When did you first encounter the neutral theory? Were
you surprised when it was suggested?
WE: I first encountered it when I read Kimura’s 1968
paper proposing the theory, which in effect claimed that many
allelic substitutions that occurred in evolutionary history
were the result of purely random processes and had no selective
significance. I was surprised when he put this view forward,
because most people in the field were in effect selectionists
at that time, and thought that selection was the major vehicle
for allelic frequency change. So the view that the great majority
of allelic substitutions were not driven selectively but were
just random chance events, was, to me, bold and innovative,
perhaps even reckless. He was criticized by many people, including
Dobzhansky, the famous geneticist, who was a confirmed Darwinian.
Dobzhansky said something like: “We hear these irritating
comments from people from time to time. They are just derived
from theory, which is irrelevant. It will be found later that
the great majority of replacements are selective.”
I personally took no position on this matter, ever. The reason
for this was that my interests were focused on the purely mathematical
side of the question. I was more interested in addressing
the question of what patterns of gene variation would you
expect to see if indeed many of the substitutions were neutral.
In particular, I was curious as to whether the Lewontin and
Hubby patterns looked selective or neutral.
On the other hand, one problem that I had with the neutral
theory was that the main argument that Kimura put forward
for it was a theoretical and mathematical one, and derived
from the Haldane substitutional load argument and calculations
that we discussed earlier. Kimura argued that the very large
number of substitutions which one knew were going on at any
one time in many populations could not be selectively induced,
but had to be purely random, because of the large substitutional
load that he claimed would arise if they were selective. To
the extent that I thought the whole substitutional load argument
was incorrect, I thought that that was an extraordinarily
weak basis for the neutral theory arguments. I said so then,
and have said so frequently since.
Tests of Neutral Molecular Evolution
AP: So, was part of your motivation for developing a statistical
test for neutrality because you thought load arguments were
ineffective?
WE: No, I was just interested in the purely mathematical question
of the patterns of variation to be expected under neutrality.
This interest derived from two things, as I stated earlier.
The first was the data becoming available in large volume
on genetic variation, and that one wanted to explain the patterns
exhibited by this variation, and the second was the mathematics
of the neutral theory. And so it became to me an obvious problem
to think about.
In doing this I approached this question as a statistician.
In statistical terms the null hypothesis is that the variation
one observed was caused neutrally, or randomly. Another aspect
of randomness that has to be allowed for in the calculations
is that the variation that is observed was not from an entire
population, but from a sample of a population. The sampling
process brings about a second level of randomness. It thus
became necessary to work out what patterns of variation you
would expect to see from a random sample of the genetic material
in some population in which allelic frequencies were changing
purely randomly. Having worked that out, it then became a
statistical question of developing a test to see whether the
patterns actually observed are consistent with those that
the neutral theory predicted were likely.
It was, for me at least, quite difficult work to do, and I
think that I was very lucky to be able to do it. (Now, of
course, the analysis can be done very quickly and efficiently,
using Kingman’s concept of the coalescent.) Once it
was done, however, I did not pursue that work very strongly
after I published my paper describing my conclusions, because
it was immediately clear that the test of neutrality that
I devised was not, in statistical terms, very powerful. That
is to say, even if there is quite a reasonable amount of selection
involved, it’s very hard to observe that by the statistical
procedures that I developed. The reason for this is rather
hard to discuss without showing the mathematical formulae,
but I thought, if you can’t really pick up very much
evidence of selection from these tests, if the tests are inherently
not very powerful, then why pursue this topic? So, I moved
to other things. Other people subsequently developed further
tests, which I can describe, which are slightly more powerful
than the test I produced, and which therefore have been useful
in testing the neutral theory.
AP: Perhaps I should ask you then about those other tests,
and their relationship to yours?
WE: My test assumed several things. First, it is based on
the so-called infinitely many alleles model, and not the infinitely
many sites model that I described earlier. I did this because
this was more appropriate for testing the Lewontin-Hubby form
of data for neutrality. It differed from tests now available
that are based on the infinitely many sites model, and which
use nucleotide sequence data. Since these are the ultimate
form of genetic data, these tests can be expected to be more
efficient than my test. Secondly, I considered the genetic
variation at just one gene locus only. The lack of power in
my test arises in part because of this, and a test that uses
data from many loci can be expected to be more powerful than
mine. Some current tests do this. There is however a problem
with doing so, since the hypothesis tested now becomes whether
all of the loci are selectively neutral. To me, on the other
hand, the important question was: is this one particular gene
locus selectively neutral? So you have to give up something
in terms of the question which you ask if you use multi-locus
tests.
One reason why essentially all tests of neutrality are not
very powerful centers around the fact of co-ancestry. The
allelic types found in a sample of, say, 100 genes are not
analogous to the results, or types (heads or tails) found
in 100 tosses of a coin. The different tosses of a coin give
independent results, but the corresponding result is not true
of genes. The genes in any sample of genes have an ancestry,
and this ancestry implies a dependence from one gene to another
in the nature of the genetic material. This means in effect
that even though you might have one hundred genes in your
sample, you only have perhaps what might be called five or
ten independent observations. Think of the simple example
of identical twins, who have identical co-ancestry. The genetic
data in both twins tells you no more than the genetic data
in one of the twins. Although we are not all identical twins
to each other we are all related, even if distantly. Returning
to the coin case, if you toss a coin only five or ten times
you have almost no power to assess whether the coin is
fair or not. Correspondingly, in the genetic case, if in effect
you only have a small number of independent genes, then you
don’t have very much power in your test.
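The coin analogy can be made quantitative. The sketch below computes the power of an exact two-sided binomial test of fairness against a hypothetical true heads probability of 0.7; the sample sizes and significance level are illustrative assumptions only:

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k heads in n independent tosses with P(heads) = p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def two_sided_power(n, p_true, alpha=0.05):
    """Power of a symmetric two-sided exact test of p = 0.5 from n tosses:
    reject when the head count is at least c or at most n - c, where c is
    the smallest cutoff keeping the size at or below alpha (assumes n is
    large enough for such a cutoff to exist)."""
    for c in range(n // 2 + 1, n + 1):
        size = 2 * sum(binom_pmf(n, k, 0.5) for k in range(c, n + 1))
        if size <= alpha:
            break
    upper = sum(binom_pmf(n, k, p_true) for k in range(c, n + 1))
    lower = sum(binom_pmf(n, k, p_true) for k in range(0, n - c + 1))
    return upper + lower

print(two_sided_power(10, 0.7))   # ~0.15: ten tosses rarely detect the bias
print(two_sided_power(100, 0.7))  # ~0.98: a hundred tosses almost always do
```

With only ten effectively independent observations, even a substantial bias goes undetected about 85% of the time; this is the sense in which a sample of 100 genes with heavy co-ancestry can carry very little testing power.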
AP: You said your test was developed as an infinitely many
alleles test, and not an infinite sites model. Can you explain
why that shift occurred and what the distinction is between
these two?
WE: My test was designed for the electrophoretic data that
Lewontin and Hubby, as well as many others, were obtaining
in the late 1960’s, and used the infinitely many alleles
model. Strictly speaking, this model and thus this test were
not appropriate for these data, but I felt that a revised
version of my test might be constructed for them.
The shift occurred because sequence data became available.
As I mentioned earlier, these are the ultimate form of genetic
data, so that any test based on them uses more information
than a test using the infinitely many alleles model. Such
a test would then be based on the infinitely many sites evolutionary
model, not (as mine was), on the infinitely many alleles model.
I was of course aware of this at the time that I introduced
my test, but I did not work out the mathematics of a test
of neutrality based on the infinitely many sites model because
I felt that detailed sequence data would be a long time coming.
AP: When Kimura first proposed his theory, he was thinking
of neutrality of sites that have functional roles, or loci,
not neutrality – in the way we think of it today –
at the molecular level, where you can have synonymous substitutions.
Do you think that that was part of the reason why it was so
controversial?
WE: I don’t think so.
AP: Or, was there confusion about what was being proposed
as neutral?
WE: I asked him this question several times, and he gave me
very brusque answers. In fact he never answered that question,
at least to me.
There is one case where neutrality could be regarded as trivial,
or uninteresting. Because of the redundancy properties of
the genetic code, you might have two different nucleotide
sequences which produce exactly the same amino acid sequence.
And since it is the amino acids which are the relevant entities,
you can argue that two different nucleotide sequences, that
is two different alleles, that code for the same amino acid
sequence, might well be taken to be selectively equivalent.
I therefore sometimes asked Kimura if the neutral theory was
a theory concerning nucleotide or amino acid sequences, but
he never gave me an answer.
On a different point, the word “neutrality” in
the theory means selective neutrality, meaning, for example,
no differential viability or fecundity. This is as opposed
to no difference in function. Now, I could perhaps with some
difficulty imagine two different alleles whose functions were
different but which were selectively equivalent. In this case
one would say, in terms of the theory, that they were
selectively neutral with respect to each other. On the other
hand, and this was the argument that Dobzhansky made, one
might find it very hard to imagine a situation where two alleles
have a different function and that there be no selective implication
of that difference in function. Claims that different alleles
with different functions are selectively equivalent have of
course been frequently made, and the issue has been hotly
contested.
Drift
AP: One of the things I noticed when you discuss the neutral
theory is that you mention neutral substitutions and you mention
causing something to be substituted neutrally, but you haven’t
used the word “drift.” I’m curious why, when
you discuss Kimura’s theory, you didn’t talk about
drift causing substitution of neutral alleles.
WE: There is no strong implication about that, because it
is more or less implicitly assumed that neutral substitutions
arise effectively only by random drift. Neutral changes are
in effect just drift changes – the two expressions are
used interchangeably. So if I did not use the words “drift
substitution” and used an expression like “neutral
substitution,” that would just mean drift substitution.
AP: Can I follow that up? It seems, in the case of classical
population genetics, that drift has a very distinct meaning.
That is, random binomial sampling, and thus the effects of
drift increase with smaller population sizes. Whereas for
the neutral theory, substitution via drift is independent
of population size. So it seems like the concept of drift
in those two cases is very different. What causes drift in
one case is different from what causes drift in the other.
Is that a confusion on my part?
WE: I would not place such a big difference on the two situations
as you do. The word “drift” originated probably
with Wright, and was related to his theory of evolution as
taking place best in a large population divided into smaller
sub-populations. One component of his theory was drift, that
is to say random changes in allelic frequencies, in these
small sub-populations. It is certainly true, as you state,
that drift is a more important factor in small rather than
in large populations. So, the word “drift,” coming
from Wright, did tend to have the connotation of arising in
small populations. (As a side issue, it is, however, important
to note that Wright used drift arguments only in one part
of his evolutionary argument, and used selection in much of
his theory.)
It is also certainly true, as you state, that the size of
the population played comparatively little part in Kimura’s
neutral theory. He would have claimed that neutral substitutions
occur in populations of any size. So to that extent you could
say that the implication of the word “drift” is
slightly different in Kimura than it was in Wright.
However, the important point is this. The mechanism of the
drift is the same in both cases – random sampling of
genes from a parental generation making up the daughter generation.
To that extent there is no difference between Wright’s
and Kimura’s use of the word “drift”. That
is why I do not place any real difference between the use
of drift by Wright and Kimura.
AP: Would you say that the cause of substitution via drift
in the neutral theory is simply neutrality of the alleles,
whereas the cause of substitution of alleles in classical
population genetics is reduction in population size? For Wright,
for instance, a gene that did have a selective effect could be
substituted via drift. Whereas, for Kimura, the alleles
were neutral in their effect, and that explains why they
are substituted at a regular rate.
WE: Yes, you could say that. In the case of Wright, everyone
would agree that drift is relatively important in a very small
population, and that even if there is a modest amount of selection,
the effect of drift will dominate that of selection in a sufficiently
small population. On the other hand, in a very large population
small selective differences have a much more important effect,
and in a huge population, it is quite unlikely that a selectively
unfavored allele will randomly go to fixation. This arises
because such a long time would be needed for it to go to fixation
that the cumulative force of the selective disadvantage would
make it very unlikely that that allele would in fact go to
fixation. And, to return to your question, to the extent that
Kimura claimed that the neutral theory applied to large as
well as small populations, he would require a much more rigid
concept of neutrality, maybe complete selective equivalence
not allowing for even very small selective differences.
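The quantitative point here, that selection dominates drift once the population is large enough, can be illustrated with Kimura's standard diffusion formula for the fixation probability of a single new mutant. The formula and the numbers below are an editorial sketch, not part of the interview.

```python
import math

def fixation_prob(N, s):
    """Kimura's diffusion approximation to the probability that a single
    new mutant (initial frequency 1/(2N)) ultimately fixes, given a
    selection coefficient s (negative s means the allele is disfavored)."""
    p = 1.0 / (2 * N)
    if s == 0:
        return p  # a strictly neutral mutant fixes with probability 1/(2N)
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

# A slightly disfavored mutant (s = -0.001) in small vs. large populations:
# the ratio of its fixation probability to that of a neutral mutant
# shrinks rapidly as N grows, which is the point made above.
for N in (100, 1000, 10000):
    ratio = fixation_prob(N, -0.001) / fixation_prob(N, 0.0)
    print(N, ratio)
```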
AP: Do you think that part of the reason why there might have
been confusion or dispute about the neutral theory had to
do with Kimura’s deployment of Wright’s concept
of drift? Or, do you think that the neutralists might have
been more sympathetic to Wright’s view of evolution,
and the anti-neutralists might have been more sympathetic
to a Fisherian view?
WE: That is possible. There might have been an overlap between
those who adopted what you might call the Wrightian view of
evolution in classical genetics and those who adopted the
Kimurian view of neutral evolution. However, one should not
take that line of argument too far, because I’ve heard
it from people who discussed it with Wright that Wright himself
actually disagreed with Kimura on the neutral theory. If he
did, he did not say so very strongly. So it might be a mistake
to see too close an association between the Wrightian paradigm
and the neutral theory.
AP: How exactly did the shift to nucleotide data change the
debate concerning the neutral theory? Once nucleotide data
became available, did the tests for neutrality change, did
the debate change? Why, if so, in your view?
WE: My answer here is something of a guess, because I moved
almost entirely to human genetics questions before there was
a significant amount of molecular data.
The best-known test of neutrality these days is the so-called
Tajima test, which is based on nucleotide sequence data. Because
of that, one could certainly say that the form of the data
which was used to discuss the neutral theory did indeed change
towards nucleotide sequence data. This change was inevitable,
because the ultimate tests of the neutral theory would presumably
have to be based on that ultimate form of data.
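The Tajima test mentioned here compares two estimates of variability from aligned nucleotide sequences. As a hedged sketch (the full test, from Tajima 1989, also divides by an estimated standard deviation, omitted here), the core quantity is the average number of pairwise differences minus Watterson's estimate S/a_n:

```python
from itertools import combinations

def tajima_numerator(seqs):
    """Core quantity of Tajima's test: average pairwise differences (pi)
    minus Watterson's estimate S/a_n, where S is the number of
    segregating sites. Under strict neutrality this difference is
    expected to be near zero; the full test standardizes it."""
    n = len(seqs)
    pairs = list(combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs) / len(pairs)
    S = sum(len(set(col)) > 1 for col in zip(*seqs))  # segregating sites
    a_n = sum(1 / i for i in range(1, n))
    return pi - S / a_n

# Toy alignment of four hypothetical sequences:
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "TCGTACGT"]
print(tajima_numerator(seqs))
```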
AP: You speak in your 1979 paper about a shift to testing
for “generalized” neutrality as a better null
hypothesis than “strict” neutrality. I
was wondering what you meant by that distinction, first, and
second, does that bear on Ohta’s shift to the nearly
neutral theory?
WE: The strict neutrality theory would claim that essentially
all the substitutions which have taken place were strictly
neutral. The generalized theory would allow for the fact that
many mutations, when they first arose, were perhaps slightly
deleterious, but that the selective differential is so slight
that these mutations could, just by chance, increase in frequency
and become fixed in the population. I would call that the
“generalized” theory.
That is essentially the Ohta theory. Her claim could be said
to be stronger than Kimura’s, in that she would have
claimed, I believe, that most substitution processes derive
from slightly deleterious alleles rather than strictly neutral
ones. Now, you might say, why could it possibly be that a
slightly deleterious allele is more likely to fix than a strictly
neutral one? Of course, any one deleterious mutation is less
likely to fix than a strictly neutral mutation, but her argument
is based on the view that since there are so many more slightly
deleterious mutations than strictly neutral mutations, there
will tend to be more fixations overall of deleterious rather
than neutral alleles.
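Ohta's supply-versus-probability argument can be put in arithmetic form. The sketch below uses the standard diffusion fixation probability and purely hypothetical relative mutation rates; the point is only that a lower per-mutation fixation probability can be outweighed by a larger supply of mutations.

```python
import math

def fixation_prob(N, s):
    """Diffusion approximation: probability that a single new mutant
    (initial frequency 1/(2N)) with selection coefficient s fixes."""
    p = 1.0 / (2 * N)
    if s == 0:
        return p
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

N = 1000
u_neutral = fixation_prob(N, 0.0)
u_del = fixation_prob(N, -0.0005)   # slightly deleterious: fixes less often
mut_neutral, mut_del = 1.0, 100.0   # hypothetical relative mutation supply

# Each slightly deleterious mutation is individually less likely to fix,
print(u_del < u_neutral)
# yet, with a much larger supply, more of them fix overall.
print(mut_del * u_del > mut_neutral * u_neutral)
```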
What I did in the 1979 paper was to consider what the effect
would be on my test for neutrality if indeed many alleles
were slightly deleterious. What I found (and this is tied
up with the fact that these tests are not very powerful) that
you would not easily be able to pick up, by using purely statistical
methods, the fact that an allele was slightly deleterious
as compared with strictly neutral.
AP: Have there been tests developed now that can discriminate
between the slightly deleterious theory and the strictly neutral
theory?
WE: I don’t know of any such tests. I think that if
there were any, they would need huge amounts of data before
any such slight difference could be established.
AP: What is your present view on the neutral theory? How has
it changed in the past 25 years?
WE: As I said, I left the field quite some time ago, so I
don’t have an informed view on that matter, and I don’t
know how much it has changed in the last 25 years. But, the
theory has clearly been very influential. For example, you
could argue that junk DNA is selectively neutral. (There are
reasons why one might not make that argument, but it would
at least be a plausible one to entertain.) If that argument
were true, then the neutral theory would become important,
because, as just one example, you could use junk DNA to assist
you in the reconstruction of phylogenetic trees. All the mathematical
approaches to that reconstruction that I know of in effect
assume neutrality, and this is so whether the data refer to
junk DNA or DNA coding for genes. So, to that extent, you
can think of the neutral theory as being important. Similar
arguments apply to much of the theory surrounding the concept
of the coalescent. Coalescent theory is very simple and elegant
in the selectively neutral case, but quite difficult in the
selective case.
Of course there is a strong possibility that the neutral theory
is assumed not because it is appropriate but because the math
of that theory is so very simple compared to the math applying
for any selective theory.
AP: Can I follow that up? Do you think that that has led
to models of phylogenetic change that are not very well supported
by the evidence?
WE: I think that that is quite possible. However, here we
enter into another question. In mathematical population genetics
theory you know from the very start that you are making big
simplifying assumptions. You are in a very different position
from a physicist, who might believe that his mathematical
models describe reality exactly. No sensible population geneticist
would make any claim along those lines. He or she is forced
to simplify, because reality is so complicated that you don’t
know it in any detail, and even if you did know it and used
math describing it faithfully, the analysis would be impossible
to carry through. So simplification is unavoidable. I do not
know whether the use of the neutral theory is too much of
a simplification and has led us to incorrect and distorted
views about the true evolutionary tree, its shape and
dimensions, but I suspect that there has been quite a significant
distortion.
Tape 2:
AP: I had further follow-up questions for you about notions
of drift operating in classical and molecular population genetics.
In particular, I want to ask about the use of different kinds
of models for drift. I was wondering whether you can discuss
what the assumptions or implications would be for using a
classical random sampling model versus a diffusion model?
WE: I can answer that question best by talking about the Wright-Fisher
model, which we discussed earlier. Random events, and thus
drift, are unavoidable in biological evolution, and this randomness
is modeled in the Wright-Fisher model by the binomial sampling
process central to that model.
The Wright-Fisher model is approximated by a diffusion process
in the following way. The properties of a diffusion process
are determined entirely by the mean change, and the variance
of the change, in some quantity in a given small amount of
time. When a diffusion process is used as an approximation
to the Wright-Fisher model, this quantity is the frequency
of a given allele in the population. The mean and the variance
of the change in this frequency for the Wright-Fisher model
are found using binomial sampling formulae, and these are
then used for the diffusion process. One could then say, roughly,
that the difference between the Wright-Fisher process and
the corresponding diffusion process is just the difference
between a discrete process and a continuous process that have
the same mean and variance for the change in allelic frequency
in a given time period.
As I mentioned earlier, it can be shown that formulae found
for various quantities from the diffusion model, for example,
the probability of fixation of an allele having a given selective
advantage over the alternative allele, provide very accurate
approximations to the (unknown) Wright-Fisher model values.
They are indeed so accurate that they are often used for
the Wright-Fisher values, and formulae in textbooks claiming
to be Wright-Fisher formulae are very often the corresponding
diffusion formulae.
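This relationship can be checked directly. The sketch below simulates the Wright-Fisher binomial sampling described above, with a small population and a favored mutant, and compares the observed fixation frequency with the diffusion-approximation formula; the parameter values are illustrative.

```python
import math
import random

random.seed(1)

def wf_fixation_freq(N, s, reps):
    """Simulate the Wright-Fisher model: each generation the 2N genes of
    the daughter generation are drawn binomially from the parental
    generation, with the sampling probability tilted by selection."""
    fixed = 0
    for _ in range(reps):
        j = 1                                   # one copy of the favored allele
        while 0 < j < 2 * N:
            p = j / (2 * N)
            p_sel = p * (1 + s) / (1 + p * s)   # selection acts before sampling
            j = sum(random.random() < p_sel for _ in range(2 * N))
        fixed += (j == 2 * N)
    return fixed / reps

def diffusion_fixation(N, s):
    """The corresponding diffusion-approximation fixation probability."""
    p = 1 / (2 * N)
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

N, s = 20, 0.05
print(wf_fixation_freq(N, s, 5000))   # simulated Wright-Fisher value
print(diffusion_fixation(N, s))       # diffusion approximation: very close
```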
AP: Here’s my hypothesis behind that question. You can
dispute this if you like. Because Kimura was looking at enormous
amounts of molecular changes over long periods of evolutionary
time, it might make more sense to use a diffusion model, to
think of gene frequency changes as continuous instead of discrete,
whereas for smaller populations, with smaller numbers of gene
changes, you might want to use a binomial sampling model.
Is that a confused hypothesis?
WE: I don’t think I would agree with the basic argument
that you are making. Even though Kimura might have been interested
in long time frames, as you observe, he grew up within the
Wright-Fisher paradigm, and he did a great deal of diffusion
theory within that paradigm well before he introduced the
neutral theory. He was thus well aware of the convenience
and simplicity involved in using the diffusion mechanism.
So I don’t think he introduced diffusion theory because
of the different time scale involved with the neutral theory.
It was simply the most convenient mechanism to use, whatever
the time scale.
Human Genetics
AP: You said earlier on in our discussion that you’ve
moved on in your research, and you’re more interested
now in human genetics. I was wondering if you could say a
little bit about your recent research, and what problems
you think are important in population genetics, and genetics
in general, today?
WE: I moved to human genetics for several reasons around about
1980. The first one was, as I mentioned, that I felt that
tests of the neutral theory did not have enough power to be
worth developing further. More broadly, I felt that the
discussion of the neutral theory was becoming too arcane,
and that I should try to work on something more relevant and
useful. I had several colleagues at Penn working on human
genetics problems, and who told me that there were many mathematical
problems in human genetics, for example those associated with
finding disease genes, that I should take up. I thought that
these suggestions made good sense and that I should move to
the human genetics/disease area. I found that shift very hard
to make, because one had to think about quite different questions
and with quite different mathematics. Things like diffusion
theory just weren’t involved. It took me so much effort
to get into this new way of thinking that I soon paid very
little attention to problems in evolutionary genetics.
My main interest in this (for me) new area was linkage analysis.
Linkage analysis tries to establish the chromosome on which
some disease gene might lie, and then to find its approximate
location on that chromosome. Once this is done, an examination
of the DNA in that location can be used to try to find, more
exactly, the location of the disease gene. One can think of
the linkage analysis part as indicating the general area in
which to find a needle in a haystack, leaving it to other
methods to locate it within that area.
The disease might be caused in whole or in part by the replacement
of a single “good” nucleotide by a “bad”
one, which in turn might mean the replacement of one amino
acid by another. Sometimes it is caused by the deletion of
some genetic material, in which case things can go more seriously
wrong and a more serious disease eventuates. Thus molecular
genetic considerations enter in, and this forms something
of a link between molecular evolution analysis and human disease
investigation. More broadly, evolutionary genetics and human
genetics, which in 1980 barely spoke the same language, are
now increasingly interrelated.
AP: You developed a test to help determine the location of
a disease gene using family data. Could you explain that, and
how you came to develop it?
WE: This is the so-called transmission disequilibrium test, often
called by the acronym TDT. The historical background to this
test is as follows. People want to locate where a disease
gene is in the genome. One method for doing this is through
the use of “marker” alleles. A marker locus has
two important properties. First, one knows where it is on
the genome, and second, one knows any person’s genotype
at that marker locus. So if there are two alleles at
this marker locus, which we might call M1 and M2, (M standing
for marker), one can tell whether somebody is M1M1, M1M2,
or M2M2 at the marker locus. One imagines that the polymorphism
at the marker locus has been in existence for a long time.
At some time in the past, maybe several thousand years ago,
there was an original mutation causing a disease. Let us suppose
that the disease locus is very close to the marker locus.
Now that disease mutation will have occurred on a chromosome
that had either M1 or M2 at the marker locus in the individual
in whom the mutation occurred. Let’s assume that it
was M1 – then there is an immediate association between
having M1 at the marker locus and having the disease. As time
goes on, recombination events occur between disease and marker
loci, and this association tends to break down. But if the
marker locus is very close to the disease locus there will
not be many such recombination events, even over quite long
periods. Suppose now that you take a sample of individuals,
some of whom have the disease (cases) and some of whom do not
(controls). If disease and marker loci are indeed very closely
linked, there will tend to be an association between having
the disease and having the M1 allele. Such an association can be
tested for by a classical 2x3 chi-squared test, where the
two “row” categorizations are case versus control
and the three “column” categorizations are the
genotypes at the marker locus.
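As a sketch of the classical association test just described, with purely hypothetical counts, the Pearson chi-square statistic for the 2x3 table is computed as follows:

```python
def chi_square_2x3(table):
    """Pearson chi-square statistic for a 2x3 table whose rows are
    case/control status and whose columns are the three marker
    genotypes; compared against chi-square with 2 degrees of freedom."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / total) ** 2
               / (rows[i] * cols[j] / total)
               for i in range(2) for j in range(3))

# Hypothetical counts; columns are the genotypes M1M1, M1M2, M2M2.
cases = [60, 30, 10]
controls = [30, 40, 30]
print(chi_square_2x3([cases, controls]))
```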
This chi-square test (of association) was used in the past
as a surrogate for a test of linkage. However, it was subsequently
observed that associations can arise for reasons quite different
from linkage. One such reason is population stratification.
You might have a disease which occurs more frequently in one
human group, let’s say Caucasians, or whites, and less
frequently, let’s say, in blacks. Further, there might
be a gene which occurs more frequently in whites and less
frequently in blacks. If so, you will see an association between
the gene and having the disease in a sample containing both
blacks and whites, but this has nothing to do necessarily
with linkage. This means that the test which I described a
few moments ago has a serious flaw in it – the inference
that the association arises because of linkage is not necessarily
correct. This could be a serious problem, so several people
attempted to overcome it. The test which I helped to develop
was in this direction. In broad terms, the way in which we
were able to overcome the stratification problem was by using
data within families. We looked at a mother and a father and
an affected child, a so-called trio, and were able to develop
a test for linkage using “within family” data,
whose properties were free of any population stratification
which there might be. As I said, the details of the procedure
are rather mathematical, not easy to describe verbally, but
I think I’ve given you the broad background of how it
works.
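The published form of the TDT (Spielman, McGinnis and Ewens, 1993) reduces, in its simplest version, to a comparison of transmissions from heterozygous parents; a minimal sketch:

```python
def tdt_statistic(b, c):
    """Simplest form of the transmission disequilibrium test. Among
    heterozygous (M1M2) parents of affected children, b counts parents
    transmitting M1 and c counts parents transmitting M2. With no
    linkage, transmission is 50:50 whatever the population structure,
    so (b - c)^2 / (b + c) is approximately chi-square with 1 df."""
    return (b - c) ** 2 / (b + c)

# Hypothetical transmission counts from a sample of trios:
print(tdt_statistic(62, 38))  # large values suggest linkage
```

Because only within-family transmissions enter the statistic, population stratification cannot inflate it, which is the property described above.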
More recently I have moved into the area of genomics and bioinformatics,
which could be described as the analysis of questions previously
considered at the single gene locus level by an analysis at
the whole genome level, using of course whole genome data.
The bioinformatics part of this work involves the use of computer
technology; because one has massive amounts of data when one
looks at the entire genome, one cannot easily manipulate,
absorb or analyze these without the use of computers. Such
analyses are often involved with the investigation of genetic
diseases, so the three areas in which I have been interested
evolutionary genetics, human genetics, and genomics, are
increasingly coming together.
This increasingly implies a change of direction in research
in theoretical evolutionary population genetics. The original
work of Fisher and Wright was prospective – it looked
to the future, attempting and succeeding in validating
the Darwinian theory within a Mendelian hereditary paradigm,
and in quantifying the Darwinian theory in Mendelian terms.
Of course, there are many people who do not believe in evolution,
but to me it’s as solid a fact as you could imagine,
largely because of this work. The emphasis has now changed
in evolutionary genetics to retrospective questions, looking
backwards in time. These are data-induced questions, of the
form: “These are the genetic data that we now have,
how did we get here?” The reconstruction of the phylogenetic
tree linking all contemporary species is perhaps the most
obvious example of that sort of question. Human genetics often
also looks retrospectively: “We have these data about
people who are affected by this disease, what can we say about
when and where the original disease mutation arose?”
It is this similarity in research directions that has led
to the increased unification that I have just referred to,
especially since the data involved in approaching these questions
is increasingly genomic.
AP: Can I follow that up? What do you think that this unification
will entail, in terms of rejecting all or part of classical
or molecular genetics? Do you think that some of the original
theory will be challenged?
WE: That is a very hard question to answer. Some will have
to be thrown out. Many questions will be addressed using molecular
theory, because molecular data are the data that we now have.
But, on the other hand, the original classical theory still
has relevance, and there will always be a place for its broad
implications. At the same time, a lot of the classical theory
will have to be changed. Biological
reality is unbelievably complicated, and these complications
were ignored in much of the classical theory. This was perhaps
inevitable, since it is natural to start with simplifying
assumptions that lead to an amenable mathematical analysis.
This leads one to ask: “When is mathematics useful in
a scientific discipline?” It is no coincidence that
mathematics grew up strongly in physics, and most strongly
in simple areas of physics, for example in considering the
motion of a single planet around a sun, where one can get
a fairly complete theory. More complicated areas of physics
do not yield so easily to mathematics. As a trivial example,
even if we know the weather today, it is impossible to state
in detail what the weather will be like in three weeks: there
are too many complexities involved. I think that biology is
in that situation. It’s incredibly complex, and many
of the simplified mathematical models considered in the past
were almost hopelessly oversimplified. To me it is a fascinating
and difficult question to know how useful mathematics will
be for the very complex biological reality that is now being
investigated.
AP: Thank you, Warren.
Genetic Draft Addendum:
WE: We are talking now about John Gillespie and his drift
and draft views, which I have discussed with him recently.
His view, which I believe is very compelling, is that if you
have a favored allele which is moving rapidly to fixation
at some locus, it will drag along with it genetic material
at closely linked loci. This may include loci at which the
alleles are selectively neutral. Under this view the changes
in the allelic frequencies at those neutral loci are caused
not only by pure random drift, but are also induced by the
draft of the frequency change of the favored allele. An expression
often used in this connection is a “selective sweep”:
the selective process sweeps along with it segments of genetic
material close to the selective locus. John’s argument
would be that we have to recast evolutionary genetics theory
in terms of “draft,” rather than “drift.”
AP: So, is he effectively saying that there is no or very
little drift?
WE: You could say that he was saying what Fisher said in a
somewhat different context. Fisher would always admit that
there was drift at any one locus, but he would claim that
selection at that locus would ultimately overcome the effects
of drift. John’s argument comes to the same conclusion
in a different context. He would admit that there is drift
at a neutral locus, but he would claim that that is unimportant
compared with the draft caused by selection, not at that locus,
but at a closely linked locus. Of course many calculations
have been made in connection with this argument, a central
parameter being the recombination fraction between the neutral
and the selected locus.
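A deterministic two-locus sketch can show the draft effect and the role of the recombination fraction. The haplotype recursion below is a standard selection-plus-recombination model; the parameter values are illustrative and not taken from Gillespie's calculations.

```python
def sweep(s, r, gens=2000):
    """Deterministic two-locus model of a selective sweep. Haplotypes
    are AB, Ab, aB, ab; allele A has selective advantage s and arises
    on a chromosome carrying the neutral allele B; r is the
    recombination fraction. Returns the final frequency of B."""
    x = [0.01, 0.0, 0.49, 0.50]            # AB, Ab, aB, ab
    for _ in range(gens):
        w = [1 + s, 1 + s, 1.0, 1.0]       # selection acts only on A
        wbar = sum(xi * wi for xi, wi in zip(x, w))
        x = [xi * wi / wbar for xi, wi in zip(x, w)]
        D = x[0] * x[3] - x[1] * x[2]      # linkage disequilibrium
        x = [x[0] - r * D, x[1] + r * D,
             x[2] + r * D, x[3] - r * D]
    return x[0] + x[2]                     # frequency of the neutral allele B

# Tight linkage: the sweep drags B far above its initial frequency of 0.5.
print(sweep(0.05, 0.0005))
# Loose linkage: recombination decouples B, which stays near 0.5.
print(sweep(0.05, 0.1))
```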
AP: Thank you.