Theory: The Necessary Evil
Howard S. Becker
Originally published in Theory and Concepts in Qualitative Research: Perspectives from the Field, David J. Flinders and Geoffrey E. Mills, eds. (New York: Teachers College Press, 1993), pp. 218-229.
Epistemological Worries
Qualitative
researchers in education have begun to question the epistemological premises of
their work. Or, at least, someone in the arena is questioning those premises,
and the questioning worries the researchers who actually do the work of
studying schools, students, and education close up. Attacks on qualitative
research used to come exclusively from the methodological right, from the
proponents of positivism and statistical and experimental rigor. But now the
attack comes from the cultural studies left as well, from the proponents of the
"new ethnography," who argue that there is no such thing as
"objective knowledge" and that qualitative research is no more than an insidious disguise for the old enemy of
positivism and pseudo-objectivity.
The attack can conveniently, if somewhat misleadingly, be described as "theoretical." Conveniently, because it is not a question of empirical findings; misleadingly, because the theory involved is not substantive. These worries, which used to
take the form of a concern with the theoretical bases of our substantive
findings, now focus on the theory of knowledge that underlies the whole
enterprise. What bothers qualitative researchers in education, I think, is that
they are no longer sure, as they once were, that they
are doing things the "right way." They worry that the work they do
may be built on sand, despite all their care and precautions, all their
attempts to answer the multitude of criticisms that greet their efforts, and
all their attempts to still the qualms that arise from within--that the whole
thing will count for nothing in the end. Not only that our work will not be
accepted as scientific, but also that that model of scientific work we aspired
to is now discovered to be philosophically unsound and in need of serious
rethinking. Who are we kidding with all this science talk? Why don't we admit
that what we do is just another kind of story, no better or worse than any
other fiction?
How shall we
understand all this? Are these realistic worries? Are we building on sand? Is
what we do just another story?
Theory or Social Organization?
We
can take these debates at face value, worry over their content, and try to
answer all the questions asked of us. That's the conventional way to deal with
such problems. The literature discussing qualitative research in education
parallels similar discussions in sociology and, especially, anthropology. These discussions center on the relative
merits of qualitative and quantitative research, on the problems or virtues of
positivism, on the importance (or danger) of subjectivity, and so on.
These
are, of course, serious problems in the philosophy of social science. It is not
clear how any of us, qualitative or quantitative, can justify what we produce
as certified or warranted or credible knowledge. Whatever safeguards we take,
whatever new tricks we try, questions can be, and are, raised.
Qualitative research--we might better say research that is designed in the
doing, that therefore is not systematic in any impersonal way, that leaves room
for, indeed insists on, individual judgment, that takes account of historical,
situated detail, and context and all that--research of that kind is faulted for
being exactly all of those things and therefore not able to produce
"scientific," objective, reliable knowledge that will support prediction
and control. Research which tries to be systematic and impersonal, arithmetic
and precise, and thereby scientific, is faulted for leaving out too much that
needs to be included, for failing to take account of crucial aspects of human behavior and social life, for being unable to advance our
understanding, for promising much more in the way of prediction and control
than it ever delivers.
Epistemological
issues, for all the arguing, are never settled, and I think it fruitless to try
to settle them, at least in the way the typical debate aims to. If we haven't
settled them definitively in two thousand years, more or less, we probably
aren't ever going to settle them. These are simply the commonplaces, in the
rhetorical sense, of scientific talk in the social sciences, the framework in
which debate goes on. So be it.
Also, so what? Because I don't mean those
remarks fatalistically. I don't counsel resignation, acceptance of an
inescapable tragic fate. No. There's nothing tragic about it. It's clearly
possible, on the evidence we have all around us, to find out things about
social life in ways that are more or less good enough, at least for the people
we are working with now. It's happened often enough in the past and there's no
reason to think it can't continue to happen.
In
fact, this is exactly the import of Thomas Kuhn's analysis of science, as I
understand it. Whenever scientists can agree on what the questions are, what a
reasonable answer to them would look like, and what ways of getting such
answers are acceptable--then you have a period of scientific advance. At the
price, Kuhn is careful to point out, of leaving out most of what needs to be
included in order to give an adequate picture of whatever we are studying, at
the price of leaving a great deal that might properly be subjected to
investigation, that in fact desperately needs investigation, uninspected and untested.
That's
all right. Because, though everything can be questioned, we needn't question it
all at once. We can stand on some shaky epistemological ground Over Here for as
long as it takes to get an idea about what can be seen from this vantage point.
Then we can move Over There, to the place we had been treating as problematic
while we took Over Here for granted and, taking Over There for granted, make
Over Here problematic for a while. It's John Dewey's
point: Reality is what we choose not to question at the moment. (There's also
Lily Tomlin's point, as it comes out of the mouth of Trudy the Bag Lady, no
mean philosopher herself: "After all, what is reality anyway? Nothin' but a collective hunch." And, she adds, "Reality is the leading cause of stress amongst those in touch with it.")
Any
working scientist must have a position on such questions, implicit or explicit
(and the better shape the science is in the more the positions are implicit), just in order to get on with the work. Any working
researcher's positions on these questions are likely--the chief fear of the
philosophically minded--to be inconsistent, just because they have to be taken
ad hoc, to deal with immediate problems of getting the work done. Not only is
inconsistency unavoidable, it is the basis of everyday scientific practice.
For
instance: I am devoted to qualitative work and think that the criticisms made
of "simple minded counting" are often quite correct. But I also rely,
whenever I can, on data from the Census, whatever the flaws in the way those numbers are gathered: people report their own race and age, for instance, and not always accurately.
Similarly,
the hardest-nosed positivists, if anyone will admit to being such any more,
routinely take into account all sorts of knowledge acquired with the help of "soft" methods, without which they couldn't
make sense of their data. They may not admit it, but the interpretations they
make of "hard findings" rely on their own understanding of the less
easily measured, though still easily observed, aspects of social life.
In short, we
all, qualitative and quantitative workers alike, have to use methods we
disapprove of, philosophically and methodologically, just to get on with it and
take account of what must be taken account of to make sense of the world.
The Necessary Evil
So
we all have to be epistemological theorists, know it or not, because we
couldn't work at all if we didn't have at least an implicit theory of
knowledge, wouldn't know what to do first. In that sense, theory is necessary.
But
the questions raised about the justification for what we do, which is what
these theories are, cannot be definitively answered. That's an empirical
generalization, based on the simple observation that we are still discussing
the matter. To spend a lot of time on unanswerable questions is a waste of time
(see Stanley Lieberson's discussion in Making It
Count) and quite paralyzing. If you have convinced yourself that what you
are doing can't be justified reasonably, it's hard to get up the energy
necessary to do it. It seems better to continue discussing the problem in hope
of finding an answer that satisfies you and the people who are aggravating you
about the warrant for your conclusions.
In that sense,
the pursuit of epistemological and similar questions in the philosophy of
social science is evil. If you're accustomed to this dilemma it isn't a great
trouble--you make a choice and go about your business. But some researchers--graduate students most of all--are especially vulnerable to the questioning doubts that paralyze thought and will and work. For them the
evil is serious. To repeat, we still have to do the theoretical work, but we
needn't think we are being especially virtuous when we do. Theory is a
dangerous, greedy animal, and we need to be alert to keep it in its cage.
Social Organization
From a different
vantage point, we can see debates over method and its justification as the kind
of thing that happens in the world of social science, as a recurring social
phenomenon to be investigated rather than a serious epistemological problem--in
other words, to paraphrase ethnomethodologist Harold Garfinkel, as a topic rather than an aggravation. And we
can ask sociological questions about debates like this: When, in the life of a
discipline, or of a researcher, as my remarks about graduate students
suggested, or of a piece of research, do these questions become troubling? Who
is likely to be exercised about them? How do such unresolved and unresolvable debates fit into the social organization of
the discipline?
The Relativistic Specter
To
ask such questions immediately raises the specter of
a paradoxical situation in which I presume, on the basis of a social
science analysis which is itself philosophically unjustified, to give you
the social science lowdown on a critique of what I am
at the moment doing. It's a kind of debunking, not unlike psychoanalytically
inclined writers who respond to criticism with an analysis of the unconscious
motives of their critics. That is just the problem that is giving some
contemporary sociologists of science fits, because they understand perfectly
well that their analysis of the workings of science is in some sense a critique
of science. If the critique is correct, then it applies to the analysis that
produced the critique. You can see where that leads.
An alternative
position is to accept the reflexivity this involves, indeed to embrace it, and
then use our knowledge of the social organization of science to solve the
problems so raised. In other words, if it's an organizational problem, the
solution has to be organizational. You don't solve organizational problems by
clarifying terms or arguments. Organizations are not philosophies and people
don't base their actions on philosophical analyses. Not even scientists do
that.
Science Worlds, Chains of Association
What
does it mean to speak of the social organization of an intellectual or
scientific discipline? We can speak here of scientific worlds in analogy
to the analyses that have been made of art worlds. These analyses focus on a
work of art--a film, a painting, a concert, a book of poetry--and ask: who are
all the people who had to cooperate so that that work could come out the way it
did? This is not to say that there is any particular way the work has to come
out, only that if you want your movie to have orchestral music in the
background, you will have to have someone compose the music and musicians play
it; you can easily, of course, have no music, but then it will be a different
film than the one whose action is accompanied by a score.
An
art world is made up of all the people who routinely cooperate in that way to
produce the kind of works they usually produce: the composers, conductors and
performers who produce concert music; the playwrights, actors, directors,
designers and business people who produce theatre works; the writers,
designers, editors and business people who produce novels; the long list of
everyone from director and actors to grips and accountants and caterers and
transportation captains who work together to make Hollywood films; and so on.
The
cooperation that makes up an art world and produces its characteristic works
depends on the use of conventions, standardized ways of doing things everyone
knows and depends on. Examples are musical scales, forms like the sonnet or the three-movement sonata, and so on.
That's
an art world. A science world, by analogy, would consist of all those people
who cooperate to produce the characteristic activities and products of that
science. This means more than the people who make up the scientific community
to which Kuhn called our attention. It includes, for instance, the people who
provide the materials with which the science works: the experimental animals,
the purified chemicals and water to experiment on them with, the carefully
controlled spaces to do it all in. For social science it typically means,
importantly, the people who provide us with data by gathering statistics, doing
interviews, being interviewed, letting us observe them, collecting and giving
us access to documents. Just as with art works, the kinds of cooperation that are available, and the terms on which they are available, necessarily affect the
kind of science that can be done. A contemporary example is the conflict over
the use of laboratory animals in biological research.
One
of the distinctive characteristics of science worlds (as opposed, e.g., to art
worlds) is the emphasis on proof and persuasion, on being able to convince
someone else by commonly accepted "rational" methods to accept what
you say even though they'd rather not. Bruno Latour
has made this the cornerstone of his analysis of "science in action."
He speaks of scientists trying to get more and more people to accept their statements,
by enrolling "allies" with whom opponents of their statements will
also have to contend. Footnotes and appeals to the literature serve to line up
allies with whom people who disagree with you will also have to disagree. In Latour's analysis, people agree with each other not because
there is a basic scientific logic which decides disputes, and certainly not
because Nature or Reality adjudicate the dispute, but because one side or the
other has won a "trial of strength," on whatever basis such trials
are decided in that community. In a series of provocative dicta, Latour says things like (I'm paraphrasing), "It is not
that scientists agree when the facts require them to, but rather that when they
agree, what they agree on becomes the facts."
A
beginning on this kind of (what we might call) organizational epistemology is
to note that every way of doing research and arriving at results is good
enough, good enough for someone situated at some point in the research process.
If it weren't good enough for someone, no one would be doing it. Who it has to
be good enough for and when it has to be that good are empirical questions that
depend on the social organization in which that bit of knowledge arises.
The
most general finding here is that, though every scientific method has easily
observed technical flaws and is based on not very well hidden philosophical
fallacies, they are all used routinely without much fear or worry within some
research community. The results they produce are good enough for the community
of scientific peers that uses them. The flaws will be recognized and discounted; the fallacies will be acknowledged and ignored. Everyone knows all about
it, knows that everyone else knows all about it, and they have all agreed not
to bother each other about it. So the Census, with all the flaws I alluded to,
is plenty good enough for the rough differentiations social scientists usually
want to make. But that's because the social scientists who use census data have
made the collective hunch that these data are good enough for the purposes they
will put them to, not because the flaws don't exist. Too few people we would ordinarily think of as white say they are black, and too few people we would ordinarily think of as black say they are white, to change any conclusions we base on these numbers; and we don't think the difference between twenty-four and twenty-five large enough to invalidate the conclusions we base on age statistics.
An
interesting corollary of this is that what methods and data are acceptable
depends on the stage of the scientific process at which they are used and
presented, and the purpose they are used for. At an early stage of the
scientific process, for instance, we are mainly playing, exploring ideas for
the further ideas or explorations they might lead us to. We don't much care
whether the results are valid or not, or whether the conclusions are true. What
we really care about is that the discussion proceed, that we find something
interesting to talk about. This stage may take place over a cup of coffee, in a
seminar, in casual conversation with a colleague. I remember a seminar with
Everett Hughes, in which a student interrupted one of his discursive
explorations of a "fact" he had heard somewhere to say that later
research had shown the fact wasn't true. Without breaking stride, Hughes asked
what the new fact was, and continued to explore its possibilities.
In
fact, it is often seen as an intellectual mistake to dismiss ideas at this
stage of work just because they might not be true. The worst thing that can
happen to a research community, in some sense, is to run out of researchable
problems. Yuval Yonay has pointed out that
researchers will often accept all sorts of anomalies if the general position
containing them opens up a lot of new researchable questions, whose exploration
can produce publishable papers and the feeling of progress.
At
a somewhat later stage in the research process, we are mainly interested in
getting an idea worth the time and effort we are going to put into it. At this
point, not just any idea will do. We want some assurance that the idea we
choose will bear the weight we are going to put on it, that it is not so unsupported in fact that taking it as a starting point will leave us stranded, that taking it seriously will in fact produce a result. So we look in
the literature to see what others have done and how it worked out. Before we go
to the trouble of writing a research proposal or setting up a project--a more
sizeable investment than one makes in a casual conversation--we want to know
that we are building on a solid foundation. We subject what we find in earlier
reports to careful scrutiny, and bring more rigorous methodological standards
to bear, because we don't want to waste our time. If there's something wrong
with this way of working, we want to know it now. Putting down a larger bet, we
want better odds.
We
could pursue this analysis through a variety of steps. What kinds of rigor do
we demand before we accept a journal article for publication or a paper for the
annual meeting of the tribe? (Here we might note the role of practical
considerations. While everyone insists that only the highest standards are
employed in choosing papers for these purposes, it is also well-known that
scientific associations commit themselves to fill, with paying customers, a certain number of rooms in the hotels in which they meet; otherwise they will be
charged for the meeting rooms, the Presidential Suite, and so on. The best way
to ensure that a sufficient number attend the meeting is to accept their papers
for the program and require that everyone on the program register for the
meeting. The people who organize these programs usually receive a nicely worded
double message: maintain standards and maximize participation. It's not clear
that these are compatible.)
A final stage
has to do with what work receives the highest honor,
which does not take the form of a prize but rather of imitation. What research
becomes paradigmatic in the Kuhnian sense, providing exemplars
of the work that particular scientific community has standardized on, has taken
as exemplifying the problems, methods, and styles of reasoning that everyone
will work on? Oddly enough, at this stage we aren't really very critical,
precisely because a whole community has accepted this work as paradigmatic. All
the mechanisms of scientific training and community formation Kuhn describes
combine to convince people that what everyone already believes is what they
better believe too. Obviously it doesn't always work that way but, of
necessity, it does work that way every time a scientific community adopts a
paradigmatic way of working.
Specialization (Philosophical and Methodological Worry as a Profession)
When
intellectual specialties reach a size sufficient to support specialization
(this is one of those demographic facts I spoke of earlier) they often (and in
the social sciences almost invariably) develop specialties in theory and
methodology and philosophy of science (as it applies to their particular
discipline). The specialists in these topics do some work which members of the
discipline think is necessary to the entire enterprise but which has become too
complex and specialized for everyone to do for themselves.
The
social sciences have probably (this is speculative intellectual history, and
could be checked out in the appropriate monographs, although I haven't done
that) developed specialized methodologists and philosophers of science because
they have come under attack, in ways that hurt, from people who think that the
enterprise is not philosophically (especially "scientifically")
defensible. The attacks have frequently come from the natural sciences, and
have had serious practical consequences in the struggle for academic recognition
and advantages (faculty positions, research funds, etc.), so they have been
seen as requiring answers. The job therefore must be done and, to be done
right, must be done by people who can hold their own in that kind of argument,
people who know the latest stuff and the most professional styles of argument.
One
consequence of turning this part of our business over to specialists is that
the specialists have interests which don't fully coincide with ours. They play
to different audiences. Philosophers of science, even if they come from our own
ranks, have as at least part of their audience the world of professional
philosophy, at least that part of it which concerns itself with their topic.
What makes them useful to us is also what makes them difficult. They know all
the tricks of philosophers of science in large part because they have become
philosophers and are part of that world. In consequence, they are sensitive to
the opinions of other philosophers of science, philosophers who do not have one
foot in one of the social sciences, even when those people's opinions push them
in directions that are not relevant to the concerns of working scientists.
Philosophers
and theorists of knowledge, concerned to meet the standards of the
philosophical discourse they are involved in, frequently follow their logic to
conclusions which make the day-to-day work of science impractical or
impossible. They seem to conclude that social science, as we now do it, can't
be done. I'm reminded of Donald Campbell, who used to say that these people are
very convincing but, if they're right, then what have we been doing all these
years? That is, to say that it can't be done is only to say that it can't be
done in a way that meets some set of standards that is not extant in the
research community in which the work is actually being done.
The
same thing is true when we consider the specialists who deal with technical
questions, claiming to derive the warrant for their strictures from
philosophical premises. Science is, remember, a cooperative enterprise in which
all the cooperators have something to say about what
is done. That includes, to bring this down to some earthy and necessary
considerations, the people who pay for what is done and the people who are the
objects (or subjects, since what term we use to describe these people is
contested) of our study.
A
simple example: some years ago a distinguished sociological methodologist
reasoned that the newly invented technique of path analysis could be used to
deal with measurement error in survey research. It was quite easy and
straightforward: all you had to do was have the same interviewers interview the
same respondents on three separate occasions using the same interview guide.
Easy enough, except that neither interviewers nor respondents would cooperate.
The interviewers felt like fools asking the same people the same questions over
and over again and, when they got their nerve up to do it, the respondents
wouldn't answer: "You asked that twice already. Are you stupid? Or what?"
The philosophical theory and its technical application were clear; the social
logic was off.
Great
advances in social science often depend on increases in funding. For years,
most of what was known about fertility came from detailed analyses of the data
of the Indianapolis Study, in its time the most detailed body of materials
available on married couples' choices about how many children to have and when
to have them. A major step forward occurred when increased funding made it
possible to use national samples to study the decisions of couples to have
children. It had never, of course, been methodologically defensible to use data from a single city as the basis for conclusions about the whole country; researchers did it because that was the data they could get.
In other words,
general statements of what must be done to be scientifically adequate rely,
usually without acknowledgement, on practical matters and, in this, they follow
rather than lead everyday practice.
Audiences
Audiences
(and especially the people whose lives and activities we study) react to what
we say in variable ways and researchers worry about that. Some of our
philosophical and epistemological and theoretical concerns have to do with
justifying what we do to such "external" audiences.
Educational
research is particularly vulnerable to problems of justification. Everything
educational researchers do has some consequence for people in the education
business. Do we find that one method of teaching is superior to others? The
people who are committed to the others--not just "philosophically"
but also by virtue of not knowing how to do the new thing or having built their
reputations on the way they now do it--will want to find reasons why these
results are not valid.
I
don't mean that it's just mercenary. It's more complicated than that. If you
have a reason to look for trouble, you're more likely to look. Every method
having flaws, if you look, you'll find them. As I remarked earlier, every way of
doing business is good enough--for someone at some time for some purpose.
Conversely, no way is good for all purposes and all people at all times. So it
is always possible to criticize how things are done if you are a different
person at a different time with a different purpose.
Finally
To come full circle,
the reasons and the people and the times for research are organizational facts,
not philosophical constructs. Epistemology and philosophy of science are
problems insofar as we cohabit with the people who make those topics their
business and are thus sensitive to their opinions, questions, and complaints.
Educational researchers, poised uneasily as they are between the institutions
of (mostly) public education, the scientific and scholarly communities of the
university world, and the people who give money in Washington, who aren't sure
which of those constituencies they ought to take seriously, have the unenviable
task of inventing a practice that will answer to all of them more or less
adequately. The difficulties are compounded by the splintering of the academic
component of the mix into a variety of disputatious factions, which is mostly
what I have been discussing. No amount of careful reasoning or thoughtful
analysis will make the difficulties go away. They are grounded in different standards
and demands based in different worlds. In particular, as long as theory
consists of a one-way communication from specialists who live in the world of
philosophical discourse, empirical researchers will not be able to satisfy
them. In my own view, we (the empirical researchers, among whom I still count
myself) should listen carefully to those messages, see what we can use, and be
polite about the rest of it. After all, as Joe E. Brown remarked in the last
scene of "Some Like It Hot," when he discovered that the woman he
wanted to marry was a man after all, "Nobody's perfect!"
Bibliographical Note
Thomas
Kuhn's ideas can be found in his The Structure of Scientific Revolutions
(Chicago: University of Chicago Press, 1962). The remarks of Trudy the Bag Lady
appear in Jane Wagner, The Search for Signs of Intelligent Life in the
Universe (New York: Harper and Row, 1986), p. 18. Stanley Lieberson's Making It Count was published by the University of California Press (Berkeley, 1985).