
Post #118: The Shadow of the Future (AI)

23 June 2024


[This text is a slightly revised and edited version of a talk I gave on Friday, June 21st, on concerns about the use of AI in the classroom.]


     The world is ever new—from day to day and year to year, from innovation to innovation and from generation to generation. But behind the perennial renewal it is also very old, not only in geological time, but also with a view to the hundred or so human generations of which we know specifics, and the ten thousand that stand behind them in outline, in myth, or in conjecture.

     As our world keeps renewing itself continuously, it is therefore never only a question of what we want to change and improve, but also of what we would like to preserve, or what losses we would face if we failed to do so. “A state without the means of change is without the means of its conservation,” Edmund Burke conceded in his Reflections, while even his most revolutionary counterparts sat on chairs and benches, ate bread, drank wine, and nursed in their hearts dreams of equality that hearken back to ages when humans still roamed the savannah in small bands. No matter how eager we are for a brighter future, we all need grounding in the ways of the past.

     What I have to say here, then, is much more than an academic concern for me: it is a question of faith, and of a way of life. We are always at a crossroads in human affairs, but perhaps sometimes more so than others. The issues before us in the form of AI and its encroachments on the classroom go to the very heart of education as we know it, and as future generations may no longer if we are not careful about how we lay the tracks for them—fancying, perhaps, that they will not need to stay on the ground at all, but fly about at will, or be beamed up or down by Scotty.

     All the same, even as it falls to me to play the part of Cassandra today, to make the case for the prosecution and categorically against ceding any ground to large language models in the classroom, I would not be taken for a technophobe or a Luddite. I appreciate the blessings of technology as much as anyone, when they really are such, and I think the world of Steve Jobs, may he rest in peace. To say nothing of the magical MacBook; I would not want to be without my iPhone either, despite all there is to be said about what smartphones are doing to human culture in its slower and more traditional modes. The internet looks like a place of miracles and wonders to me, no less; but all these marvels need to be used judiciously, with a proper weighing of their likely benefits and costs, the dangers of virtual sorcery no less than the benevolent spirits of the digital fairies.

     The displacement of established ways by technological innovation is as old as technology itself, and it has often meant very real losses not only for cultures (as when ancient crafts got sidelined or outright destroyed by mechanization, for example) but also for the individuals, often a great many, with a lifetime of investments in them, material and emotional. Such is structural change: never perfectly painless for anyone, often exceedingly bitter for those who have the most to lose.

     Yet it is also through this process that a staggering range of goods once reserved to the most privileged (or unimaginable even to them) has come within the reach of the masses, who could barely dream of them before. As commercial aviation moved from active flying to autopilot over recent decades, for example, an art was certainly lost, and its practitioners had ample reason to mourn its passing; but there was a great gain in safety at the same time. Such instances could be endlessly multiplied, and AI will make them still more plentiful, with many beneficial results, no doubt.

     In an educational setting, however, it is not a question of producing as efficiently (or safely) as possible things that others value for their own sake, but of learning how to produce them oneself, and of earning credit by assessment, relative to others, for one’s acquired ability, which is then used for making all manner of social judgments with serious, sometimes lifelong, consequences. As one may be a great admirer of all that calculators can do for us so easily, yet remain adamant that they should be kept out of certain classrooms, because the point of studying math is not coming up with correct answers for others to consume or make use of, but the acquisition of useful mental habits and techniques—so one may question fundamentally the place of AI in schooling, whatever its benefits elsewhere, and never more so than at a liberal arts college, which is supposed to aim at the development of our mental faculties as a worthy end in itself, over and above what the economic demands of the day may be.

     In the wider academic context too—that is, for teachers and scholars—the main thing is surely to demonstrate one’s abilities and get them publicly recognized by one’s peers, be the inherent value of academic research what it may. Where such demonstrated learning and the corresponding credit earned are the main thing, whether among students or teachers, the use of AI looks highly problematic, even if it may also produce wonders in making results much easier and quicker to arrive at. Whether learning (and “higher learning” especially, as the quaint phrase would have it) really is, or ought to be, a matter of such facility in producing outcomes, and not rather of steady application to difficult problems, is another pressing question that AI turbocharges without bringing us any closer to an answer.

     As a practical matter, I fear that the temptation to use chatbots in ways that no one would defend as appropriate in an educational setting must often be well-nigh irresistible—not even necessarily with any conscious intent to cheat, but sometimes out of desperation, and perhaps even more often still out of a student’s self-consciousness about the weaknesses of his or her own voice and work. Ironically, the most well-intentioned students too can fall into this trap, as I have seen them do despite the most urgent warnings, precisely because they are so keen on producing “quality” writing and too easily impressed, in their diffidence, by the specious smoothness of machine-generated “improvements” on their own more spotty but also more authentic and valid productions.

     Against these ever-mounting pressures we have only two lines of defense, it seems to me. The first is policing, which can never be rigorous enough, even while it undermines the relationship of trust that one may hope to foster in the classroom. The second is digging in and reinforcing with all means at our disposal, pedagogical and moral, the principle that to pass off as your own anything that is not so in fact must remain gravely dishonest and dishonorable whichever way you twist and turn it, and in an educational context, especially, cannot look anything other than shameful. This may sound unfashionably censorious, but who could deny that it is taking a teacher’s stance—if there is anything left of the idea that education should also prepare for life and bring some marginal improvements in one’s appreciation of rights and wrongs, and the infinite shades of nuance between them.

     If we allow AI to be presented, instead, as the way of the future, inside the classroom or outside, then it will not only become every day more difficult to draw robust and credible lines around it, but we will ourselves be contributing to opening the floodgates that shall soon make reliable judgments of scholastic merit impossible. In such a world, even more than in today’s, the evaluation of our students’ abilities, with all the difference it may make to their lives, will become more than ever a lottery slanted not in favor of the capable but the crafty.

     The potential benefits of AI are, once again, not in dispute. What is at issue is how compatible the emerging paradigm will prove with education as we know it, and if we do side with the new against the old, just how far our losses will extend. The implications for the liberal arts as traditionally conceived are potentially devastating, and inasmuch as some of us are still championing that time-honored ideal, AI looks decidedly more menacing than promising, especially considering how many other technological changes are also forcing the pace away from traditional learning.

     The struggle to get students to read in anything like a mature manner is already daunting enough, and worsening with every passing year. If, at this pivotal moment, we not only relax our guard but take our defenses down altogether; or if, even worse, we actively encourage our students in the already all-but-irresistible belief that it might be legitimate to “get a little help” from the machine; then before we know it, I am afraid, there will be no more clear lines of honesty and authorship to uphold, as bionic writing sweeps everything before it, swamping all unaided human effort and making any reliable distinction and demarcation between the two impossible and therefore practically irrelevant.

     It may be that technical standards of literacy will improve—in the products, not the producers—as a consequence of new-model writing displacing the unaided human hand and mind. Such tools are nothing new, one might say, since we do not usually write on cave walls with our fingers dipped in blood. But there is something unprecedented here: the older technologies, though they too impacted human thinking heavily, never proposed to reshape or replace it altogether, thus calling into question the very survival of the human art of writing, of thinking on paper (or pieces of bark, wax tablets, electronic screens) by one’s own efforts, as we have known it for almost three thousand years. If that should prove a lost cause, as it well may, it is unclear whether the Humanities will still deserve their name, or what could take their place.

     Allow me to digress, for just a minute, from the more practical and pedagogical side of the argument to the more personal, putting on my hat as a thinker and a writer, which is where my heart really is. To let the bots do our writing for us is in no small measure to let them do our thinking too—and whose thinking is that really? I concede that as a technical feat, the productions of the chatbots are becoming quite impressive; but as I understand it, they are operating on a kind of meta-groupthink by mining what has been said online, and that is hardly the way to make me feel much confidence in the soundness of their synapses. Even their inventors, brilliant mathematicians that they may be, have no clear understanding of where their creatures are taking us, or by what routes exactly, and to a more bookish mind, all this conjures up visions of Golems and Frankensteins, or if the comic mode be preferred to the tragic, of the sorcerer’s apprentice in Goethe’s famous poem, who was able to conjure up all manner of magical powers, but not to call a halt to their mischief or clean up the resulting mess. For that he had to rely on the return of the master magician—and I would like to know who that might be in our case, because I see no very credible candidates on the scene.

     The same Goethe observed, at the age of eighty, that all his days he had sought to become a proper reader, but that even near the end of his life, he did not imagine that he had been entirely successful. And so it goes, I believe, for all serious writers. “You are what you eat,” it has been said. One might add, in a similar vein, that you write what you’ve read and properly digested, and that it is at least half, and perhaps the better half, of becoming a fine writer to become a discerning reader. Writers model themselves on other writers they admire. We seek out their company almost as if they were alive, as Machiavelli immortalized in a famous letter, telling of how when he had been banished to his farm, the highlight of his days was donning his ceremonial robes after sunset and conversing with the spirits of the past in his library. Everything in this process depends, as Albert Jay Nock observed in his Memoirs of a Superfluous Man, on the quality of the company we keep in literature—elevating and bracing, if it is good, a mere waste of time, or even debilitating and enfeebling, if it is bad. “In books one has the best company in the world at one’s command, if one knows where to look; but if not, also the worst.”

     Just what, I would like to be told, is becoming of this selectiveness, this all-important discernment, when it comes to the chatbots? Who exactly is teaching them to write, and what exemplars of style do they use to train themselves? What are they reading, and where do they get their standards of good and bad? I am not sure that what they are doing can be called reading at all, in the distinctively human sifting, evaluating, distinguishing sense; and as their reading (such as it is) gets done in great bulk but indiscriminately, so they end up writing competently enough, perhaps, but without finesse, elegance, or voice. And how could it be otherwise? Who could have taught them that rare element of refinement, or even where to look for it? They could have been raised on a library of great books, I suppose, in a better world; but hardly at the hands of programmers whose specialty is writing code.

     Like their creators, the chatbots are not known for their humility or restraint. If you feed them high literature, they will gladly improve it for you, or so they presume without hesitation or shame—and turn it into mash. Feed them Edward Gibbon and Edmund Burke, as I have done, and see what happens. Whether the results are more cause for laughing or crying is debatable, but it is not what I want for the future, and I don’t see how it could be anything that any other humanist should want either.

     The forces at work being much greater than any one institution, it is not a matter of whether we can stop AI from spreading, and sooner or later invading universities everywhere, whether we like it or not. The vital choice we do still face, however, is whether we want to add our weight to it, or rather to oppose its gathering momentum, while we still can. Even if opposition may prove ultimately futile (and it is premature to conclude so just yet), I see no reason for us to be in a rush to join the great throng, welcoming and cheering the advent of the brave new world of AI in the classroom, rather than doing what we can to delay the dread day and mourn when nothing more can be done to keep it at bay—availing ourselves of the last human freedom, when forced to give way before overwhelming forces, of doing so under protest and with as much noisy alarm about the coming evils as we can muster under the circumstances of a rout.

     It has never been and cannot be the responsibility of the Humanities to be at the cutting edge of progress; these ancient disciplines, more than others, are a last repository for learned and humane traditions that have survived the millennia looking old-fashioned all along, and for which we have to answer before past and future generations alike. We, right now, are the final link in that precious and precarious chain by which these cultural treasures and achievements will either be passed on, or else perish along the way, and that gives me a very clear sense of where I think we should, nay where we well-nigh must take our stand as humanists.

     No doubt such a conclusion will sound far too strident and uncompromising to many. As a one-time political scientist (I would not be able to say what I am now), I certainly appreciate a good consensus position, or even a mediocre compromise that gives both sides at least some of what they want. But I also know, as a political theorist, that first principles do not trade off well against each other. Much as we should look for common ground, there are junctures when we cannot avoid choosing one direction, one way of life, as against another. From what I can tell, we stand before just such a fork in the road today—with the important caveat that I do not presume to speak for others and that whether I am ultimately right or wrong is not for me to determine.

     What I see, alas, is not just a nuisance or a false turn among so many that are inevitable in life, but an existential threat to something I hold very dear: the culture of the written word as we’ve known it for centuries and millennia—that logos of which it is said, at the outset of the Book of John, that it was there from the very beginning. I know nothing of such mystical beginnings, but I can say with undaunted worldly conviction that the ideal of a liberal education is, to me, no mere phrase to which one pays lip service even as one disregards its substance; it is, as a Socratic might put it, a question of the life well-lived and the good order of one’s soul, and as such intricately bound up with the word, though not exhausted by it. (The original Socrates left no writings behind, I am aware; but our Socrates comes to us, for better or for worse, via the ancient texts.)

     It is not up to me, or up to us, to decide what becomes of traditional book-learning in this brave new world of ours; we are responsible only for our tiny corner of it. But that little corner, at least, I would have us defend with a blind determination that may look obstinate to some, or naïve, or perhaps even foolish or retrograde to others. I would prefer not to be seen in such an unflattering light, but it would not change my mind if I were, not because I am deaf to new arguments or because I so cherish the role of recalcitrant, but because what is before us concerns the very heart of my educational philosophy and my outlook on life. I would have to take my stand on the defense even if it turned out that the battle was hopeless and the war unwinnable. Here I stand, I can do no other.

     I am powerless to prevent or push back in any big way against the ravages of that technologically savvy, outwardly smooth barbarism which I see descending upon us on all sides. All I can do in my small way is to put what strength remains into holding the line right before me, proclaiming with the faithful of yore that ignorance and evil there must always be in the world—but let it not come by me, or under my watch.


Daniel Pellerin

(c) Daniel Pellerin 2023
