After the Last
Generation: Rethinking Scholarship in the Days of Serious Play
Stuart Moulthrop
smoulthrop@ubalt.edu
ABSTRACT
This paper picks up on speculations about likely changes in the nature
and structure of higher education under the influence of video games and other
forms of cybertext. It looks specifically at changes
in the primary mode of academic production, arguing that significant space
needs to be made for practical engagement alongside theory and criticism. A new
genre of formal academic work is proposed, called the intervention, a serious work of application
intended to contribute to pragmatics as well as abstract understanding.
Categories and Subject Descriptors
J.5 [Arts
and Humanities]
General Terms
Human Factors, Theory.
Keywords
Cybertext, Games, Theory, Academics,
Literacy, Intervention.
I was just living my life. I just liked to go live at the
edge of the system, where things were breaking off and breaking down. It took
me a long time to figure out what I was really doing, that I was always in some
place where the big story was turning into little weird counterstories.
But now I'm wising up to my situation, because I'm old now, and I know enough
to get along in the world.
-- Lekhi
Starlitz [26]
1. APOCALYPSE, THEN
For some of us, it's always the end of the world. I say this to frame an
interesting remark heard recently from the linguist, literacy theorist, and
born-again video gamer James Paul Gee. Gee had earlier addressed a group of
English teachers, arguing that techniques and conventions developed in games
were likely to unsettle very deeply the future of education [10]. Learning at all
levels, he said, would come to depend more heavily on simulation and discovery,
on iterative, intensely personal encounters with information, rather than
traditional methods based in authority and exposition. As Gee has written:
If the
principles of learning in good video games are good, then better theories of
learning are embedded in the video games many children in elementary and
particularly high school play than in the schools they attend. Furthermore, the
theory of learning in good video games fits better with the modern, high-tech,
global world today's children and teenagers live in than do the theories (and
practices) of learning that they see in school.... Is it a wonder, then, that
by high school, very often both good students and bad ones, rich ones and poor
ones, don't like school? [9]
Traditionalists may object that schooling is meant to be endured, not
enjoyed, so students' discomfort may actually signal success. But such
dismissals cede any practical advantage to games, whose appeal rests on their
ability to help us find the pleasure in unpleasure.
Drawing on his acquaintance with neuroscience, Steven Johnson argues that video
games achieve this trick by exploiting basic brain chemistry. Because they
create "a system where rewards are both clearly defined and achieved by
exploring an environment," game designers can motivate players to find all
manner of tedious, repetitive work ultimately and strangely satisfying [15].
Clearly, effects of the sort Gee and Johnson describe portend great
institutional change, perhaps along the lines laid out by Pierre Lévy in his thinking about general reform in education:
Traditional
representations of learning in parallel, graded steps, pyramids structured into
levels and organized on the basis of prerequisites, converging toward
"higher" forms of knowledge, should be replaced by the image of
merging knowledge spaces, open, continuous, in flux, nonlinear, reorganizing
themselves according to the goal or context, where each participant occupies a
singular and evolving position. [17]
Those who have worked in cybertext or
interactive design may detect a ring of familiarity toward the end of that
sentence. Though we maintain healthy doubts about the claim, many of us
grudgingly acknowledge that so-called new media blend the identity of the
receiver with that of designer or author. The effect has important limits, but
it is real enough up to a point. The "singular and evolving" role of
the learner may thus remind us of the hypertextual
reader, or the user of an interactive multimedia system, or the player of a
video game. By analogy, then, the
seriously playful student
envisioned by Lévy and Gee begins to look like a
teacher or researcher.
Given such a shift in identity, what becomes of that "singular and
evolving" subject position currently known as faculty? Will the university as
"knowledge space" need such people? Could we instead imagine a
community of co-learners, staffed by progressively senior students? I raised
these points in a forum following Gee's talk, asking if the generation of the postwar Boom might be the last cohort of tenured
professors. With due respect both for the senselessness of endings and the
bluntness of my question, Gee said: "As we know them, yes."
This response leaves gracious room for interpretation. Those who know
Gee's writings, especially The New Work Order and What Video Games Have
to Teach Us About Learning and Literacy, can probably guess how he might
have developed the idea himself. In the rest of this paper, I offer my own
variations on the theme, which may wander far from anything Gee would have
said, but which orbit more or less elliptically around a shared concern for the
implications of new media.
Speaking of a last generation of tenured faculty invokes a fairly
familiar apocalypse, at least for those of us who hold or aspire to such
positions. These concerns cannot be dismissed out of hand.
But dire as this all may sound, it's actually not the end of the world.
Political and economic factors are themselves affected by more basic influences;
and while I imply no absolute or primary determinism, changes in technology,
particularly in the realm of communication, surely rank high among these
shaping forces. Thus we may attempt a more sophisticated view, where politics
and economics provide perspective for a subtler inquiry into identities and
practices, involving vectors of change on longer, larger scales.
If not exactly optimistic, this orientation is at least responsible to reality.
Yes, something nasty no doubt hangs above our heads, promising to wreak havoc
with all those finely tuned systems evolved since the second world war; but the
end of the world is after all just a language game, and the show must go on. Invoking Gee's
escape clause -- the end of professors "as we know them" -- allows us
to posit new species, as yet not fully described or understood. The older one
gets, the harder it is to cross the event horizon that demarcates these new
identities. Still, it is at least possible to trace their origins, and perhaps
to speculate about their future.
2. NOT SO MUCH FOR YOU WITH THE WRITING
Lévy notes that "[t]he emergence of cyberspace will
most likely have -- already has had -- as radical an effect on the pragmatics of
communication as the discovery of writing" [17]. Given the scope of its
terms, some might dismiss this remark as hyperbole. Humanist scholars in
particular seldom seem willing to set anything on a par with the invention of
writing, so centrally does that milestone figure in our profession's primal
scene. As Walter Ong noted long ago, our enormous
dependence on writing leads to a curious naturalization or internalization of
technology. Scholars often assume -- fallaciously, Ong
insisted -- that thought is identical to language, while language is, if not
identical, then at least readily convertible to writing. As Ong
observed, this tendency explains much of the staunch resistance among
academics, at least in the humanities, to the introduction of media other than
writing and print [20].
Never particularly successful, that resistance has lately become less
futile than simply irrelevant. This is the point of Lévy's
pronouncement. Earlier so-called communications revolutions wrought only
partial transformations: the increased emphasis on the image in photography and
film; the recovery of orality in telegraphy,
telephony, and radio; the creation of mass consciousness through broadcasting.
Though they began to challenge writing as the primary foundation of culture,
these media did not affect the conditions of writing itself. This was good news
for academics. It was possible to study just about any medium through the
miracle of content
-- by which we meant written representations of our experience of the other medium --
without having to become much more than auditors or spectators. Among other
things, this allowed the academy to draw a bright line between production work
in various media (mere techne) and the writing of criticism
and theory (the primary work of scholars).
With the coming of cybernetic communication systems -- hypertext, the
World Wide Web, soon now the Semantic Web -- the conditions of all media are
strongly transformed, and writing is clearly included. As Mark Poster and Lev Manovich each point out, a digital storage system is not
the same thing as an archive of
written text. To begin with,
digital information is not statically inscribed, but rather copied,
distributed, indexed, and linked according to specific logical processes [22].
The locus of reading and writing has changed from stable page to flickering
screen, and as Manovich puts it, "the screen
keeps alternating between the dimensions of representation and control,"
between the supposed transparency of image and the opacity of menus and diagrams
[18]. In Markku Eskelinen's
terms, we experience a major shift in "user function," from the interpretative
in writing
to the configurative
in
cybernetic media [7].
Not surprisingly, Eskelinen makes this
observation in a discussion of video games, the medium which at the moment
represents the most interesting form of cybertext.
Games (as I will hereafter familiarly refer to them) imply the most extensive
transformation of the media object: from the work or text of writing to Manovich's dyad of database and interface. As we have seen,
they seem also to embody principles of learning that have been neglected or
suppressed in conventional models of education. But most significant for our
purposes, they demand a momentous change in method from those who study them.
As Espen Aarseth announces:
Games are
both object and process; they can’t be read as texts or listened to as music,
they must be played. Playing is integral, not coincidental like the
appreciative reader or listener. The creative involvement is a necessary
ingredient in the uses of games. [1]
Johnson reports with devastating accuracy what happens when academics
ignore Aarseth's precept: they tend to condemn video
games as antisocial, deficient in "content," and tied to instant
gratification [15]. These characterizations are, of course, either reducible to
subjective judgement or (especially in the last instance) demonstrably wrong.
People who write them have probably not spent much time handling a game
controller, or have failed to understand the experience. In place of
"creative involvement," they prefer critical insulation, substituting
content-as-writing for the real essence of gaming, which is a dynamic encounter
with a consistent simulation or virtual world -- in other words, serious play.
Aarseth's notion of play as "creative
involvement" augurs a new conception of scholarship and critical response,
one built on extensive practical engagement with games and other cybertexts. Surely this shift from representation to
experiential immersion may be one defining feature of a new academic identity
on which we have begun to speculate. But important as this difference may be,
it is not sufficient to describe the species. We will need to expand what is
meant by "creative involvement," pushing beyond Aarseth's
primary injunction to play.
3. BY FITS UNBALANCED
As Ong, C.P. Snow, or Walter Benjamin might
have said, scholars of the text seem often to back blindly and reluctantly into
the future, gazing steadily at the past. Even the most apparently progressive
have a strong inclination to revert. Since I will be handing out blame here, I
begin at home: my own work in hypertext fiction, along with my persistence in
solo authorship and continuing addiction to narrative, surely count as
retrograde. Moving to more illustrious company, we might remember Jay Bolter's
remark that hypertext represents "the revenge of the text upon
television" [2] or his and Richard Grusin's
later notion of "remediation," with its useful and problematic
emphasis on the integration of old media with new [3].
We seem to spend a lot of energy on recuperation in this passage from
revenge to remedy. As the predominant prefix re-minds, we keep looking back.
Consider this interesting remark from Gee:
When people
learn to play video games, they are learning a new literacy. Of course, this is not the way
the word "literacy" is normally used. Traditionally, people think of
literacy as the ability to read and write. Why, then, should we think of
literacy more broadly, in regard to video games or anything else, for that
matter? [9]
Why indeed? Gee bases his answer both on the now canonical concept of
"multimodal" literacy, a scheme of interpretation based upon sound
and images as well as words, and on the idea of socially situated literacies, which focuses less on the ability to recognize patterns of letters than on the ability to master and manipulate socially constructed memes.
Presumably, these approaches to media employ the term literacy
as a kind of
pivot, swinging almost instantly from any genuine concern with letters into
concepts quite distant from writing. We would thus speak of video
game literacy only to
signify a system of competencies that permits increasingly sophisticated forms
of understanding, on the analogy of reading and writing. In this sense, the
formulation looks like one of Orwell's "dying metaphors,"
constructions so far removed from their original frame of reference that the
remaining connections seem almost arbitrary [21]: for instance, when we find dialing
instructions beside a
touchtone phone.
To Orwell's original categories, dying and dead metaphors, we need to
add a third option: the revenant or undead metaphor,
whose referent uncannily haunts the living language. As Anne Wysocki and Johndan Johnson-Eilola pointedly wonder: "What are we likely to carry
with us when we ask that our relationship with all technologies should be like
that we have with the technology of printed words?" [28]. Applying that
question here, we might say that expanded notions of literacy imply something
like a franchise scheme -- by which I do not mean franchise as universal
entitlement, but something more like McDonald's or 7-11: a distribution of
proprietary interest. In this new conglomerate, the alphabet plays the role of
corporate mascot, the sign in which we prosper. The franchisees of greater
literacy carry over both the afflatus of high culture and the familiar method
of content representation, maintaining their lasting investment in print.
Comics, movies, or video games thus become McBooks,
which we proceed to McRead, though our
standard of taste remains the haute
cuisine of the bound
volume and scholarly monograph.
If this treatment seems unjust, I concede the point: I have no real
right to cast this stone, since every word you read here confirms my complicity
as an academic writer. If the critique seems unjustified, though, consider what
might happen if we blindly assert the priority of the printed page over
cybernetic media. First, let us suppose that not every scholar will be as scrupulous
and dedicated as Gee, whose advocacy of video games is informed by extensive,
omnivorous play. The breadth of his gaming repertoire puts many of my aspiring
undergraduate game designers to shame. Lesser lights may stint on the
"creative involvement" and write from something less than adequate
experience, with predictable results. But beyond this, even if we can define
and insist upon some minimum of practical engagement, should we be satisfied
with a regime where play and reflection remain separate?
In this respect, one thread in recent thinking about games seems notably
problematic: the assertion, following Huizinga, that play is more primitive
than culture [24]. The point may be beyond factual dispute -- plenty of
mammalian behavior probably counts as play, and
lately primate researchers have found that chimpanzees can decisively outscore
humans in PacMan [11]. However, these observations
raise some unsettling questions. If play itself is outside culture, how do we
understand the theory of play? Surely it belongs on the inside: only one sort
of primate produces academic essays. Do game theory and criticism thus
constitute an interface between the primal and the civilized, the viewport
through which our playful, animal selves are exposed to reflection, humanism --
and writing?
Resistance to this stance seems at least conceivable. For example, we
might adopt Donna Haraway's neobiological
continuum of animal-human-cyborg, allowing us to push
the origins of language and culture back beyond the primal scene of writing,
certainly far enough to include play. Yet this approach will probably strike
many as extreme, if not as Haraway says,
"blasphemous" [12]. Most academics will be far more comfortable
distinguishing play from reflection. This view preserves the old separation of
media, whereby all things not of the letter must be exchanged for letters in
order to enter the system of learning. It also echoes yet again that mainstay
of western patriarchy, the segregation of mutable, laboring
body from abstracting, discursive mind.
As several generations of feminist critique have shown, this distinction
always entails significant risk [4]. Aarseth rightly
portrays the cybernetic renaissance of games as an important cultural opening,
an opportunity for new syntheses of theory and practice; but the outcome of
this development remains in doubt. Separating play from culture, or games from
writing, would create a situation reminiscent of that "dissociation of
sensibility" T.S. Eliot found in English poetry [6]. As Eliot put it,
everyone after the Renaissance "thought and felt by fits,
unbalanced," unlike Donne and the other Metaphysicals,
who in Eliot's view were the last to hold reason and emotion in a unified
linguistic field. Graduate school taught me to scoff at this idea, for Eliot
was a mere formalist, a knuckle-walker from the days before Structuralism; but
whatever its limits as literary theory, the basic logic of Eliot's dichotomy
seems worth reviving, if only in a death-defying metaphor.
In place of thinking and feeling, our new axis of dissociation opposes
action to reflection. We play games, then we write about the experience. Play
first, then write. If we remain true to this course, we will likely produce for
game culture an academic field very much like literary studies, film studies,
and other established specialties. No doubt such conformity has its advantages,
but it would seriously restrict our horizons.
4. REWRITING WRITING
Assuming we choose to reunify our sensibilities, how can this be done,
especially when we face such enormous differences between the written word and
media like video games? Could learning, as Gee suggests, become literally more
like play, and what implications would this have for institutions and
practices?
To approach these questions, let us begin with an admittedly ambiguous
move, revisiting the amalgamation of media that underlies the new literacy, but
this time with a crucial difference. As noted earlier, scholarly reflection
depends almost exclusively upon the letter; but so in fact do video games and
other ergodic forms, whose substrate consists (at
least on one level) of alphanumeric code. This comparison may seem at first
simply to replicate the new literacy; but every classical equation can be read
two ways, and in this case we will read backwards, not exporting the ethos of
writing to new media, but vice versa. As John Cayley
declares:
Programming
is writing, writing recognised as prior and provisional, the detailed
announcement of a performance which may soon take place (on the screen, in the
mind) an indication of what to read and how. Programming will reconfigure the
process of writing and incorporate 'programming' in its technical sense,
including the algorithms of text generators, textual movies, all the 'performance-design'
publication and production aspects of text-making. [5]
Cayley's identification of programming
and writing appears to close the same gap addressed by the new literacy, but in
fact its implications are radically different. Multimodal or culturally-based literacies do not attempt to alter the status of writing,
even if they imply significant changes in method, rhetoric, or genre. Setting
the letter alongside music or video makes no changes in the operation of the
glyph. Writing is still writing, even with funkier friends. But when Cayley opens the definition of writing to include
programming, he registers a change in the status of the letter itself -- crucially, a change that flows
into writing from cybernetic media. The elements of programming code,
understood within their proper configuration, always signify on at least two
levels: as elements of a syntax readable by humans, and
as instructions to be performed
by software and hardware. This sort of writing is not simply intelligible, but
also executable. When we identify writing with programming, we move the letter
from the domain of inscription to that of computation.
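To make this dual signification concrete, consider a minimal sketch in JavaScript, one of the ECMAScript derivatives discussed below; the function name and phrasing are invented purely for illustration. The same few lines can be read as a small piece of English and, when executed, performed as instructions:

// Read as prose, these lines describe a gesture of greeting;
// run as a program, they perform that gesture.
function greet(reader) {
  const line = "Hello, " + reader + ", and welcome to the text.";
  console.log(line);   // the sentence is displayed
  return line;         // and also returned as data for further processing
}
greet("player");       // executing the writing

The triviality of the example is beside the point; what matters is that a single inscription is simultaneously legible and operative.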
Cayley's shift turns literacy from undead metaphor into a very live wire indeed, since it
connects not merely by analogy, but in actual practice, to all the media that
can be managed by cybernetic means. To be sure, Wysocki
and Johnson-Eilola's question still applies here --
what do we carry over? -- but the answer comes out differently. No doubt we
still import methods and ideologies from the history of writing, and now also
from the origins of cybernetics. This point will need attention before we
finish. In addition, however, and of more immediate interest, we export
operations of writing itself, syntax, grammar, and even style, albeit in highly
specialized, variant forms. These operations now coexist with performative features, such as modularity, inheritance, and
recursion, producing text with radically new dimensions. In effect, Cayley's opening rewrites writing.
So how does this maneuver address our primary
problem, the dissociation of experience and reflection? Most obviously, by
expanding the ambit of writing to include not just the secondary creativity of
play, but also the primary production tasks of programming, and by extension,
media design. In fact, by situating the letter within the cybernetic process or
feedback loop, this extended literacy directly connects writing with play. I
mean not simply that it reveals the control structures that govern our
experience of play, but that those structures themselves become
objects of play.
This claim takes a bit of explaining. As veterans of the field know,
game design is itself a game, a friendly but unstinting competition with other
developers, distributors, hardware engineers, and most crucially, with the
players themselves. On some level, the basic logic of game play applies to
design as well: just as the player's performance can regularly be improved,
subject to exhaustion or diminishing returns, so there must be evolution both
within the responses of any game itself, and in the developmental sequence to
which all games belong. This is another major difference between inscription
and computation (though Barthes' transition "from work to text" points
in this direction). Writing as "work" tends to fix itself in time,
but cybernetic writing leans into the future. The code base of a successful
game is at least momentarily stable, but while its popularity lasts it will
remain in flux, subject to upgrades, service releases, versioning, sequelization -- not to mention unscheduled expansion by modders and other intensive participants.
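To give that claim a toy-scale but concrete form, here is a hypothetical JavaScript sketch (names and values invented for illustration) of a familiar pattern in game development: the control structure is exposed as editable data, so that the rules themselves, and not merely a single playthrough, become available for modification and play.

// A governing structure expressed as data rather than fixed in code:
// changing these values changes the game itself, not just one session of play.
const rules = {
  gravity: 9.8,        // how fast things fall
  livesPerGame: 3,     // how forgiving the system is
  scorePerGoal: 100    // what the system chooses to reward
};

// The "engine" consults the rules rather than hard-coding them.
function scoreGoal(state) {
  return { ...state, score: state.score + rules.scorePerGoal };
}

// A player-as-designer intervenes in the control structure itself:
rules.scorePerGoal = -100;            // a satirical mod: success is now penalized
console.log(scoreGoal({ score: 0 })); // { score: -100 }

Exposing rules as data in this way is one reason modding cultures flourish: the structure that governs play is itself written, and therefore rewritable.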
5. INTERVENTION
There is much more to say on the level of theory, but practical
questions present themselves most urgently at this point. What exactly does
this rewriting of writing imply for readers, players, teachers, and learners?
How will the shift into cybernetic textuality shape
the new academic identities we are trying to define?
Should we insist, for instance, that all serious students of games and
new media be able to make things with code? The point could be argued,
especially if we are willing to count competence with ECMAScript
derivatives (JavaScript, ActionScript), or similarly
simplified tools like Lingo, VisualBASIC, or Squeak,
for at least partial fulfillment of the prerequisite.
Perhaps some advanced proficiency with electronic publishing tools such as
Extensible Markup Language can suffice in some cases.
Certainly we could maintain, as Janet Murray has done for many years
now, that students of new media should master "procedural" methods
closely attached to code [19]. These methods may stretch beyond Cayley's initial equation of writing and programming. His
remarks were originally addressed to cybertextual
poetry, a genre where the convergence of executable and deliverable text is
most apparent, and for which a single author will often suffice. The production
of games and other large-scale, multimedia cybertexts
involves more skills and more hands. It implicates sound design,
three-dimensional modeling, lighting and texturing,
motion capture, and animation, not to mention quality assurance and play
testing. Software products used for these tasks generally offer graphical and
parametric controls and require no knowledge of the programming languages in
which they were written. Because these tools do generate code, albeit in an
invisible or indirect way, and because designers must ultimately integrate
their work into a general code structure, it seems feasible to include them
within cybernetic writing.
We arrive, then, at an important expansion of "the creative
involvement" with new media, one that includes a substantial, productive
engagement with code, either directly or at a minimal remove. To put this very
simply, an alternation of play and reflection is not enough. We must also play
on a higher level, which means that we must build.
The received structures for criticism and theory are familiar: notes,
reviews, papers, chapters, dissertations, books. How can the new-model faculty
earn appropriate professional credit as designers and builders? To that end I propose a new genre of formal academic work, the intervention.
Since I am always more inclined toward particulars than abstractions, I remain
at best an amateur theorist, and thus advance the concept more as provocative
sketch than complete working model. Hopefully others will massage and modify
it, or find in its limitations the germ of better ideas.
To count as an intervention, a project must satisfy four criteria:
1. It should belong somewhere in the domain of cybertext, constituted as an interface to a database and including a feedback structure and generative logic to accommodate active engagement (a minimal illustrative sketch appears below).
2. It should
be a work of production crafted with commonly available media and tools.
3. It should
depart discernibly from previous practice and be informed by some overt
critical stance, satirical impulse, or polemical commitment, possibly laid out
in an argument or manifesto.
4. It should have provocative, pedagogic, or exemplary value, and be freely or widely distributed through some channel that maximizes this value, such as the World Wide Web.
A fifth requirement is left implicit, namely that the value of the work
will ultimately be established through robust, transparent peer review. Thus I assume
both that interventions will be recognized as valid scholarly efforts and that
some adequate community of reception will grow up around them.
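By way of illustration of the first criterion only, here is a deliberately minimal, hypothetical sketch in JavaScript; the phrases, names, and structure are invented, and the point is the pattern rather than the program. A small database of stored material, a generative logic that recombines it, and a feedback structure through which the participant's contribution re-enters the system together form the skeleton of active engagement:

// A toy engine: database + generative logic + feedback loop.
const database = ["the last generation", "serious play", "a new literacy",
                  "the singular and evolving reader"];

// Generative logic: recombine stored phrases into a new line of text.
function generate(seed) {
  const pick = () => database[Math.floor(Math.random() * database.length)];
  return seed + " gives way to " + pick() + ".";
}

// Feedback structure: what the participant types is stored and fed back,
// so that engagement reshapes what the system can say next.
function respond(playerInput) {
  const line = generate(playerInput);
  database.push(playerInput);   // the interface writes back to the database
  return line;
}

console.log(respond("the tenured professor"));

An actual intervention would of course be far richer than this, and would need the critical stance and distribution required by the other criteria; the sketch only marks the minimum machinery the first criterion asks for.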
We can find many forerunners and early examples of this new type of
cultural product, and while illustrations will add substance to my scheme,
there is always a danger in making lists, especially short ones. The survey
that follows is not intended as any kind of proto-canon. It is simply a
starting point for further discussion, and it certainly omits many worthy
examples, either through economy or ignorance.
Offerings from several independent game developers come immediately to
mind, from Brenda Laurel and Purple Moon [16] to the younger generation that
includes Mary Flanagan, Ian Bogost, and Gonzalo Frasca. Bogost's polemical games,
many of which appear on his Web site watercoolergames.com, offer excellent
illustrations. Eric Zimmerman of Gamelab also
deserves mention, both for his advocacy of independent game development and his
conceptual experiments in game design (e.g., Sissyfight
2000) [24].
Much of the work loosely known as Net Art fits at least parts of my
definition, for instance the interface pranksterism
of the Jodi unit, many of the projects of the Media Lab's Media and Culture
Group, and experiments by latter-day Oulipists such
as Mark Amerika, Rob Wittig, Nick Montfort, William
Gillespie, and Scott Rettberg. At this point we shade
over into electronic literature, where again some relevant specimens can be
found. We might turn first to Talan Memmott, who has already explored quite extensively the interface between code and conventional writing. Memmott's assimilation of psychoanalytic and poststructuralist theory into forms of digital expression opens a potentially rich borderland between traditions of academic writing and design work in new media. Projects like "Lexia to Perplexia" provide good examples of extant interventions. At its most playful, Memmott's
work converges with Cayley's concept of "textual
instruments" and Noah Wardrip-Fruin's corresponding
work with "playable texts." Some of my own attempts in this line,
including "Reagan Library" and "Pax,"
might also deserve mention, along with the interactive fictions of Adam Cadre,
whose remarkable text adventures regularly reinterpret both their genre and the
larger conventions of cybertextual writing.
We might ask how this very loose set of examples helps define the new
academic identity. Though most of those named above hold regular academic
posts, some do not (e.g., Wittig, Cayley, Cadre), and
nothing in my definition restricts it to work-for-tenure. As Jill Walker's
recent discussion of "feral hypertext" illustrates, we need to
consider both formal and informal contexts of production when thinking about cybertext [27]. Many people who produce interventions will
be master designers, public intellectuals, outsider artists, dedicated fans,
and non-academics of other stripes.
So we should consider possibilities that satisfy only some of my
criteria, but might still be argued onto the list. Take for instance the work
of the satirists at Rooster Teeth Productions,
creators of the "machinima" series Red
vs. Blue and The
Strangerhood. These efforts use popular video games (Halo
and The
Sims) essentially
as puppet rigs, combining original voiceover with video content made by
manipulating game characters. The resulting movies deconstruct and otherwise
send up digital culture in various ways. These products are not cybertexts, since they take the form of video for playback,
and their idiom has more to do with Comedy Central than Leonardo; but they suggest the potential
of what Johnson calls "media riffing,"
recombining and redeploying assets from mainstream products as radically
personal forms of expression [14]. Taken further, this do-it-yourself aesthetic
raises interesting possibilities for interventions in massively multiplayer
role-playing and particularly in the emerging area of alternative reality
games, which recruit the ordinary structures of digital communication (blogs,
e-mails, GPS systems) for purposes of performance and play.
6. GETTING ALONG IN THE WORLD
Treated as a valid form of academic work, the intervention would give a
generation that understands writing's cybernetic turn the chance to act upon
that knowledge in a recognized way. It would support an academic identity that
includes production as well as theory, situating itself not in a culture
removed from play, but within a rapidly evolving culture of
play, thus
avoiding the dissociative tendencies of our present institutions.
Yet while these speculations no doubt imply radical changes, they are
not entirely at odds with the status quo. Honoring
Bolter's emphasis on continuity and remediation, I do not propose that
interventions entirely replace familiar forms of scholarship, at least for
those who aspire to relatively conventional careers. Criticism and theory in
their present form would certainly continue, and scholars would still be
expected to produce a certain portion of their work in presently accepted
forms. Cybernetic
writing is founded upon
inscription, and no viable structure can destroy its own foundation.
Indeed, many who might serve as models for the new scholarly practice,
people like Bogost, Wardrip-Fruin,
Montfort, and Zimmerman, have produced notable efforts on both sides of the
horizon, books as well as cybertexts. I sometimes
think of this upcoming generation as elegant amphibians, equipped for survival
in new worlds as well as old -- or if this looks like turning one's
acquaintances into frogs, say instead birds of play, able to cruise for miles in the cybertextual
element, but ready to plant their feet once again in the library.
Awkwardness about totem animals aside, that trope of metamorphosis
broaches a theme that may prove troubling. Where did this talk of evolution and
adaptation come from, and what does it mean? Why should we assume that those
who come after the last generation must live in two worlds -- carrying, in
effect, a double load of professional expectation? Before attempting answers,
we need to bring back a question we have left hovering over this discussion: Wysocki and Johnson-Eilola's
enduring reservation about ideology. Since we have gained much so far by
inversion, however, let us turn the question inside out. What do we carry with
us when we ask that our relationship with newer technologies not
resemble our
older investment in printed words?
As we have already noticed, interventions and programming-as-writing
situate the new-model scholar within the greater game of software development.
When writing enters the domain of computation, it falls under the jurisdiction of Moore's Law and the ideology of endless expansion that drives the software industry.
We may wish this were not so. C.P. Snow observed that academic humanists
are "natural Luddites," inclined whenever possible to disconnect
themselves from machines [25]. To those who hold that increasingly old line,
the ideology of endless expansion no doubt represents a monumental threat. Thinkers
of this sort will of course reject software interventions, preferring forms of
resistance that defend the original identity and function of the letter. Given
the difficulties inherent in the new identity, we may feel the attraction of
this position, whether we are of the insurgent generation ourselves, or just
among those who wish for their success. There is a fundamental injustice in
this intellectual deflation, with its assumption that tomorrow all goods will
be better, more abundant, and two for the price of one.
In spite of this unfairness, however, there remains a persuasive reason
not to abandon the cybernetic turn in writing and its possibilities for
intervention -- because for all its dark, Satanic machinations, and for all its
ideology of ever-Moore excess, the world of cybertext
contains in embryo the next great human invention after the discovery of
writing. Lévy names this concept "the universal
without totality," the model for a communication system that effectively
internalizes its own deconstruction, legitimating itself not by any
metaphysics, but through its own infinitely extensible discourse. He writes:
The ongoing
process of global interconnection will indeed realize a form of the universal,
but the situation is not the same as for static writing. In this case the
universal is no longer articulated around a semantic closure brought about by decontextualization. Quite the contrary. This universal
does not totalize through meaning; it unites us through contact and general
interaction. [17]
Here we have lapsed, of course, into the language of high theory; but
the enormous importance of Lévy's work lies not in
its heady abstractions, but in its compelling particularity. The universal
without totality provides a remarkably suggestive scheme for thinking about
many of the great things in life -- natural language, for instance, and quite
possibly the organization of the brain. For Lévy,
though, the primary example of a universal without totality is the Internet,
with its consensual protocols, its aspiration to truly universal coverage, and
its lack of central control. So the universal without totality is the world of
text as we know it; but at the same time, it is also the end of an older order
some of us once knew, a culture that was not yet ready to connect theory with
practice.
Now begins the time of contact and interaction, of engagement and
intervention, of ideas in action. The new must in some way displace or
transfigure what came before, but at the end of this day there is no sense of
tragedy, only a certain sadness and frustration. Every moment has its
discontents, its challenges and failures. Yet no moment is ever truly last, at
least not so long as we persist in human conversation. Play somehow resumes,
albeit under the new burden of seriousness that must come with any real
cultural advance. To rewrite mythology, it is the Icarians
who fly above our labyrinths, and if like Dedalus or Lekhi Starlitz one has to say
"I'm old now," there is at least the tardy consolation of wisdom, of
figuring out, however late in the day, how to get along in the world. We may be
the last of our kind, but other kinds come after, just across that strange
horizon where world and word both change.
7. REFERENCES
[1] Aarseth, E. "Computer Game Studies, Year One." Game Studies 1, no. 1 (2001).
[2] Bolter, J. Writing Space: The Computer, Hypertext, and the History of Writing. Lawrence Erlbaum, Hillsdale, NJ, 1991.
[3] Bolter, J. and R. Grusin. Remediation: Understanding New Media. MIT Press, Cambridge, MA, 1999.
[4] Bordo, S. Unbearable Weight: Feminism, Western Culture, and the Body. University of California Press, Berkeley, CA, 1993.
[5] Cayley, J. "The Writing of Programming in the Age of Digital Transliteration." Cybertext Seminar.
[6] Eliot, T.S. "The Metaphysical Poets." In F. Kermode, ed., Selected Prose of T.S. Eliot. Harcourt Brace Jovanovich, New York, 1975, 59-67.
[7] Eskelinen, M. "The Gaming
Situation." Game Studies 1, no. 1 (2001).
[8] Gee, J.P., G. Hull, and C. Lankshear. The New Work Order: Behind the Language of the New Capitalism. Westview Press, Boulder, CO, 1996.
[9] Gee, J.P. What Video Games Have to Teach Us About Learning and Literacy. Palgrave Macmillan, New York, 2003.
[10] Gee, J.P. "Pleasure, Passion, and Provocation in Video Games" (2005 Garth Boomer Address). Australian Association for the Teaching of English/Australian Literacy Educators' Association National Meeting, 2005.
[11] Greenspan, S. and Shanker, S. The First Idea: How Symbols, Language, and Intelligence Evolved from our Primate Ancestors to Modern Humans. Da Capo Press, Cambridge, MA, 2004.
[12] Haraway, D. Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge, New York, 1991.
[13] Hardison, O.B. Disappearing through
the Skylight: Culture and Technology in the 20th Century. Penguin, New York, 1990.
[14] Johnson, S. Interface Culture: How Technology Transforms the Way We Create and Communicate. Perseus, 1997.
[15] Johnson, S. Everything Bad is Good for You: How Today's Popular
Culture is Actually Making Us Smarter. Riverhead, New York, 2005.
[16] Laurel, B. Utopian Entrepreneur. MIT Press, Cambridge, MA, 2001.
[17] Lévy, P. Cyberculture. Trans. Robert Bononno. University of Minnesota Press, Minneapolis, MN, 2001.
[18] Manovich, L. The Language of New Media. MIT Press, Cambridge, MA, 2001.
[19] Murray, J. "Humanistic Approaches for Digital-Media
Studies." Chronicle
of Higher Education, June 24, 2005.
[20] Ong, W. Orality and Literacy: The Technologizing of the Word. Methuen, London, 1982.
[21] Orwell, G. "Politics and the English Language." In A
Collection of Essays.
[22] Poster, M. The Mode of Information: Poststructuralism and Social Context. University of Chicago Press, Chicago, IL, 1990.
[23] Perelman, L. School's Out: Hyperlearning, the New Technology, and the End of Education. William Morrow, New York, 1992.
[24] Salen, K. and E. Zimmerman. Rules
of Play: Game Design Fundamentals. MIT Press, Cambridge, MA, 2004.
[25] Snow, C.P. The Two Cultures. Cambridge University Press, Cambridge, UK, 1993.
[26] Sterling, B. Zeitgeist. Bantam, New York, 2000.
[27] Walker, J. "Feral Hypertext: When Hypertext Literature Escapes Control." In Proceedings of the Sixteenth ACM Conference on Hypertext and Hypermedia (Salzburg, Austria, 2005). ACM Press, New York, 2005.
[28] Wysocki, A. and J. Johnson-Eilola. "Blinded by the Letter: Why Are We Using Literacy as a Metaphor for Everything Else?" In G. Hawisher and C. Selfe, eds., Passions, Pedagogies, and 21st Century Technologies. Utah State University Press, Logan, UT, 1999.
URL: http://iat.ubalt.edu/moulthrop/essays/dac2005.pdf