In the trenches of the contemporary Canadian university, the winter conversation surrounding ChatGPT and its evil twin, Sydney, lacked something of the heady mix of doomsday prognostications and jubilant AI utopianism that was on display in Silicon Valley and its colonies.
While the startlingly wicked machinations of Sydney were well noted, the practical work of pedagogy during the academic year naturally moved conversations in the direction of immediate threats: ChatGPT’s production of passable work. This new form of “plagiarism,” as one is inclined to call it, is not plagiarism at all. There is no Ur-text, other than the internet in its entirety. And though we may call it a peculiar kind of pastiche, it is truer to see these generated documents as the perfection of Baudrillard’s simulacrum: imitations without an original.
Within what remains of university humanities programs, the discussion around ChatGPT tends in at least two directions.
On the one hand, there is a pronounced push to embrace AI. This often means, practically, allowing it to generate first drafts of papers (which students may then improve in what constitutes a new version of old-style plagiarism) or, more creatively, using it to assist with various forms of literary analysis, language acquisition, and more.
On the other hand, there are pedagogues whose critique of the neo-liberal university is animated by a deep antipathy to technocratic regimes. For them, any move to shake hands with ChatGPT marks one more moment in a long series of capitulations by a bureaucratic technocracy whose rubrics, content warnings, industrial-scale class sizes, and moralizing tone undermine the essentially human and communal work of education. Many of these people would readily agree with Irina Dumitrescu’s recent argument that the point of the humanities is to teach students how to cultivate their humanity, not to prize efficiency at the cost of self-knowledge.
In either case, AI is perceived as a watershed moment for higher education. And no doubt it is.
Nonetheless, discussions about the challenge AI poses thus far have overlooked the extent to which ChatGPT is not in the first instance a tool to be moderated, assimilated, or repudiated. Rather, the current iterations of AI in the academy are symptomatic of deeper and long-term fissures in contemporary pedagogy and modern understandings of the university. Indeed, for those with ears to hear, the intrusion of AI-generated content into the university sphere is a strange kind of judgement, even an old-style apocalypse, whose real gift is neither its productive power nor the opportunity it provides to declare one’s Luddite bona fides. ChatGPT is, both more simply and more profoundly, a revelation.
But if so, of what?
The most tempting answer is to suggest that AI reveals and provides what the modern university desires: a narrow bandwidth of moral and intellectual conformity framed by the increasingly therapeutic interventions of a growing administrative infrastructure wrapped up in prose that, if not inspired, at least strives for the inoffensive.
There is some truth in this. To recontextualize an observation made recently by Stephen Marche, in the same way that the critique of metanarratives was a constitutional feature of student papers in the 1990s, taking itself to be radical when it was in fact reproducing the normative assumptions of a normative academic milieu, so today the narrative binary of oppressed and oppressor structures much of the intellectual work produced by students every year, offering up less a radical critique of cultural commonplaces than their reproduction. Indeed, the rhetorical similarities between administrative diversity documents, student assignments, and even textbooks in the Arts and Social Sciences are often astonishing.
But still, what AI reveals is something more significant than the no doubt perennial concern of cookie-cutter content. Rather, current AI illuminates the much larger technological capture of higher education which has been decades in the making and was radically accelerated by COVID. This institutional capture, as Dorothy Sayers noted in 1948, reflects and reprises a larger cultural captivity playing itself out in our private and public lives. In this sense, the university is downstream of larger cultural trends despite being a key “upstream” institution.
The surface signs of “technopoly” in the university are easy enough to see: the swift and almost total disappearance of hard-copy course packs and books in favour of digital texts; the processes of automated submission and grading (the efficiency of which is as undeniable as the loss of human contact in the submission process is lamentable); ubiquitous tools for the statistical analysis of student populations; branding and re-branding processes that explicitly commodify students, teachers, institutions, and learning. In an educational context that normalizes laptop note-taking, grading rubrics, and online research, ChatGPT is just another window on a screen.
But behind these pedagogical developments (which largely occur in the name of streamlining or “meeting students where they are”), there is a larger story of technology and culture that has to do with what the late George Grant identified as the technological imperative of modern liberalism: for every problem, a technological answer.
The transformation of this assumption into a way of life is everywhere present, from the simple and salutary production of pain killers to fix my headaches, to the emergence of medications for non-medical problems (the pill, for example) to support my reproductive choices, to the normalizing of MAID as a technique for mitigating both existential dread and late-in-life health care costs. As Grant notes in Technology and Empire, modern liberalism and modern skepticism alike have cleared the way not for freedom (as they hoped) but for “the totally technological society by destroying anything from before the age of progress which might inhibit its victory.”
Grant’s critique of technology reminds us that (for moderns) technology is not simply a pile of tools we may take up or put down. It is, rather, a way of being in the world whose homogenizing and reductive tendencies find embodiment in the growing bureaucratic structures of modern institutions. The near-total victory of technological modernity was nowhere better manifest than during the COVID pandemic, which revealed, at the very heart of our democratic and liberal institutions, a totalitarian impulse that ran roughshod over even the most commonsensical and humane of insights: that young people will suffer if confined to their homes and that this suffering should be prevented or at least minimized.
That this technological orientation may be the unintended modus operandi of modern humanities education is not self-evident. Despite the various ways that the bureaucratic life of the university reproduces modern technopoly, students do still read Shakespeare (hardly a technological feat). And remarkable educators still encounter brilliant young scholars.
And yet, it seems increasingly that the ends for which students read and learn and write have little to do with the cultivation of their humanity or, as one might hope, with an eye towards self-knowledge. Rather, the implicit goals of humanities education seem less the cultivation of awe, memory, and contemplation as the preconditions of responsible worldly action, than the production of outrage and activism that manifest a stifling political homogeneity.
These ends may not be intentional, but they do seem hard-wired into the techniques of modern curricular development, which cannot be entirely separated from the endless efforts to justify the humanities in productive (or at least relevant) categories. Largely abandoned in all of this is the useful uselessness of the novel or the poem: the subtle way in which the reading of literature refuses productivity in the name of amplifying interiority. Lost also is the sense of the past as populated by something other than antagonists. That is, by neighbours (as Alan Jacobs notes in Breaking Bread with the Dead) whose companionship can help address the oppressive loneliness of our cultural obsession with “being present.” This “presence” is agonizingly exacerbated by modern devices that translate, almost imperceptibly, our solitude into loneliness as, to paraphrase Byung-Chul Han, the real disappears.
The apocalypse of ChatGPT is a prophetic revelation whose greatest benefit to higher education is not the conversation it is generating around how teachers or students should use it. This conversation is critical, of course, but when confined to questions of use it will prove to be (to borrow from Marshall McLuhan) “idiotic.” Rather, the gift of ChatGPT is its apocalyptic manifestation of the totalizing technological claims already animating both our culture and our universities.
That AI is provoking professors to explain the urgency of the humanities in the face of this new technology is an important first step in responding to this revelation. An important second step will require universities to acknowledge their unreflective assimilation of pedagogical technologies and the extent to which even the EDIA (Equity, Diversity, Inclusion and Accessibility) efforts of the modern institution may be shaped by a “technics” of diversification.
In any case, so much will depend on the quality of the justifications offered for the humanities in the face of AI and on the ability to describe the work of the humanities in terms that are not always already technological. How can this be done? How do we stand outside what seems to determine our thinking and foreclose the possibilities we can imagine?
For George Grant, doing so requires courage, as does all real thinking. But more fundamentally, doing so requires a simple recognition that is fundamental to the great creative activity of the human spirit across cultures: to perceive the world, and what really matters in it, as gift.
To know the givenness of things apart from our making and outside all our planning is to be re-oriented from the manipulative pathologies and neuroses that so easily colour even our best intentions, to the posture of gratitude… even adoration. And the cultivation of adoration may be the seed of all that is worthwhile in human life and, by extension, in the fruitful study of that life.
To cite Byung-Chul Han again, “We owe the cultural achievements of humanity—which include philosophy—to deep, contemplative attention. Culture presumes an environment in which deep attention is possible. Increasingly, such immersive reflection is being displaced by an entirely different form of attention: hyperattention. A rash change of focus between different tasks, sources of information, and processes characterizes this scattered mode of awareness…. A purely hectic rush produces nothing new. It reproduces and accelerates what is already available.”
If universities can become places not wholly dedicated to the hermeneutic of suspicion but, rather, dedicated in the first instance to awe, even adoration, then ChatGPT will have been a prophetic word indeed. As Rita Felski notes, the twentieth-century commitment to negative critique obscures the more humane experiences of “attachment, engagement, [and] identification” that motivate our encounters with traditions of human creativity. If education can re-ground itself on the principle of sympathetic reflection rather than rage, we might anticipate with hope, in a tired academic milieu, the appearance of something new.
Image credit: “Clickworker” by Max Gruber via Wikimedia Commons
A superbly reflective essay, Mr. Snook. The colorful introductory paragraph certainly caught my attention. Your recognition of accepting the world as a gift, including gratitude for the political accomplishments of past generations, is very perceptive and highly relevant nowadays.
I followed your link to Irina Dumitrescu’s essay in The Walrus, which summarized the value of learning to write, in contrast to cranking out a product:
“Functionally, the bot ends up imitating the form of a standard essay. The problem, [John] Warner explains, is that students are often taught to write according to formulas too. If students are judged based on how well they stick to a model, it’s understandable that they will look for the most efficient way to reach that goal. For them, writing is a product they deliver in return for a grade.”
Expository writing necessarily asks students to assimilate information and then express (a few) new ideas in their summary. The plethora of information nowadays makes this task more complex, and it’s no surprise that new tools become attractive to “fix the problem” as you so cogently noted.
Indeed, the urgency of the old meme “time is money” is all around us, so why not ingest a pill to enable you to get back to work, rather than take time out to explore your headache and find a more natural cure?
Ironically, some of your writing is *too* precise, symptomatic of the disease, rather than supporting a possible cure. Shortly after telling us how “the reading of literature refuses productivity”, you indulge in explanatory overkill:
“That AI is provoking professors to explain the urgency of the humanities in the face of this new technology is an important first step in responding to this revelation.”
As a former copy editor and writing instructor, I wonder why “in the face of this new technology” needs to be reiterated here as a modifier of urgency. And further, how does the phrase “in responding to this revelation” add anything but obvious verbiage to your outlining of the first step?
Many of us trained by academia have a tendency to express ourselves thoroughly when criticizing. That’s why some of my comments at FPR are so long!
The current education industry, from pre-K through to universities, is an abomination that ChatGPT or any other “disruptive technology” can’t possibly make worse. Perhaps it can serve as “individualized tutors” to allow for more student-specific education, to allow every kid to learn at his or her own pace (which of course will be viciously fought by the “equity” industry that has taken over primary education). And if it can separate job training from the university system, that would also greatly benefit students, society, and those higher education institutions that actually might still care about the lofty ideals you mention.
> The current education industry, from pre-K through to universities, is an abomination that ChatGPT or any other “disruptive technology” can’t possibly make worse.
Disagree. As dumb as things currently are, they can always be dumbed down further. If current students are losing the understanding that knowledge takes work, making things ever easier for them can only make it all worse.
Self-programmed learning dates back even before personal computers. I was a guinea pig for one such project nearly 60 years ago, conducted at a local community college with a handful of students of various ages.
One big flaw of that method persisted for at least half a century, which is what bored my kids when they ventured into the free online Khan Academy. Namely, you cannot move on to the next topic unless you correctly answer ALL of the questions about the previous topic.
That suggests an underlying assumption about what proves a person has learned something. So the experts and/or government enter the picture anyway.
University = job training is basically a consequence of what was called “relevance,” demanded by leftist radicals in the 1970s and later by conservatives railing against public education as impractical and wasteful.
As for ChatGPT helping someone learn, I strongly recommend that you follow the link Christopher gave for Irina Dumitrescu, who makes a thorough case for the scales weighing heavily toward net disadvantage rather than advantage.
Don’t get me wrong, I’m no ChatGPT stan, and I don’t think it is going to lead to a wonderful teaching utopia.
It’s just a glorified bot after all. I think the current system is completely broken and is actively malign in so many cases, especially for smarter kids, that it can’t make things worse and will almost certainly make things better.