
Greetings from Bard College, where I am attending a weeklong class on “The Academic Essay in the Age of AI” at Bard’s Institute for Writing and Thinking. I was a little stunned by the size of this gathering—hundreds of people from around the world, many of them repeat participants. I’ve long thought of Bard as an innovative place—at the vanguard of the arts, for example, and famous for its Prison Initiative—and this week has proved to be one more manifestation of that spirit. Bard president Leon Botstein, who has presided over the school for a half-century, gave us a puckish address about his days as a student of Hannah Arendt and Leo Strauss.
I was pleased when my bosses at Greenwich Country Day School agreed to underwrite my attendance, because, like many of my colleagues, I regard Artificial Intelligence as an existential threat to the enterprise of teaching, in particular the foundational task of coaching students to write. In recent months, I’ve been listening keenly to how high school and college faculty have been grappling with AI. There’s a surprising amount of complacency on one hand (it doesn’t work; I can recognize it when I see it) and denial on the other (I refuse to allow it in my classes—as if that were an option). This particular course, taught by a kindly, seasoned pro with extensive teaching and administrative experience, consists largely of English teachers—who in my experience are the most implacably opposed to AI, though in this case we have people willing to grapple with it—and there’s more in the way of creative writing prompts than I might like. But it’s good for me to be stretched outside my usual zone.
Over the last year, I’ve tried and failed to design AI-proof assignments. I’ve also resorted to the most obvious alternatives, like timed in-class essays written with internet-blocking software and oral exhibitions in which students have to demonstrate familiarity with content and an ability to think on their feet. This is one of the paradoxes of AI: it ends up placing a premium on older models of education, a shift I suspect will be difficult for some students but one that will also be important for their success in the occupational marketplace, where a classic liberal arts background may prove a more flexible and useful asset than it might initially appear. (Of course, people have been saying such things for at least half a century.)
Be that as it may, I’ve also felt that simply resorting to AI workarounds is insufficient. My goal has been to come up with ways in which AI can be used, whether actively deployed or passively allowed, in assignments where students still have to do meaningful intellectual work. In that regard, the key thing to understand about AI is that it’s literally synthetic: Large Language Models like ChatGPT distill existing information. What they don’t do—and what students desperately need—is help foster the most elemental kind of empiricism that makes successful people successful: closely observing a person, place, or thing. In short, reading—text, data, faces, situations.
It won’t work, however, simply to assign students a generalized task of close reading—“write me an essay that explains what’s going on in this poem”—and expect them to do it the way you hope they will. Instead, you have to ask them to apply what they’re studying to situations where it’s stipulated at the outset that there are multiple forks in the road, and where they have to make and defend informed judgments. Doing so will, in a potentially meaningful way, reveal who they are.
Let me be more specific. In the class, we watched a soliloquy from Macbeth in which the titular character talks himself into murdering Duncan. Then we were told to ask our chatbots how King James I and Thomas Jefferson would likely react to Macbeth’s speech. The results are neat parlor tricks, especially because the answers you get will sometimes come in the voices of such figures. But doing such things can also be enervating, because they lead students to think there’s little point in developing a perspective of their own when an endless variety of opinions can be summoned in seconds. Such exercises can also be deeply misleading, because James I was a complex and contradictory figure (like all of us), and Thomas Jefferson could be a hypocrite of legendary proportions, truths you may not get from a Large Language Model trained to give you what it thinks you want to hear—and that doesn’t know what you don’t know.
As a result, my thought for an essay assignment goes something like this: Write about a situation in which, like Macbeth, you found yourself rationalizing questionable behavior. Explain—using specific examples from the play and from the readings and discussions of Jefferson we’d presumably have—how our conversations have helped you better understand, in retrospect, what you did or didn’t do, and how they might help you improve your judgment in the future.
Another example from the class: We did some classroom analysis of Ralph Waldo Emerson’s classic essay “Self-Reliance.” A little bored and restless with this, I asked ChatGPT to rewrite Emerson’s pithy epigrams in the language of Bruce Springsteen and Taylor Swift, and got some cringeworthy results. I then asked who would make a good foil for Emerson. One (obvious) answer I got: Nathaniel Hawthorne. My thought for an essay assignment: Given what you know of our school’s student population, which of the two would make the better fictive graduation speaker? Again, this would require the student not only to attend to what these authors actually say, but also to provide evidence for why one of them would be a better choice of speaker for a given community than the other. That kind of thing.
The key, I’ve come to understand, is triangulation: creating situations in which students need to toggle among a specific bounded text (like a novel or a work of history); the information they can pull out of a chatbot (for which you can prep them by discussing where and how to cast their line in the digital river); and a third dimension in which they have to apply the first two by making, and justifying, a decision for which they know there are more than one or two options.
In short, what AI clarifies is the importance of developing powers of discrimination. As I often tell my students, life is an existential state of insufficient information: if we always knew what we wanted to know, it would be easy to invest, take a job, choose a marriage partner, whatever. We’re awash in data, but it’s not always clear which data are really relevant and how to use them. Fostering the ability to figure out what matters has always been the goal of education. That hasn’t changed. Indeed, it may matter more than ever.
Dear professor,
Thank you very much for sharing this experience.
Roberto PESENTI
From the Harvard Business School: "If you torture data enough, it will admit to anything." Savor your week at Bard. I am jealous.