Diary of the Late Republic, #25
I’ve taught many courses about many things in the last 35 years, but through them all I’ve always considered myself first and foremost a writing teacher. I never considered it a glamorous profession—sometimes editing bad prose seems akin to changing dirty diapers—but always a necessary one. For all the tinkering at the technological margins, I never believed my job would change all that much. Or be engineered out of existence (or, at any rate, beyond recognition).
No more.
In November of 2022, the tech company OpenAI unveiled ChatGPT (the “GPT” stands for “Generative Pre-trained Transformer”), an artificial intelligence program that created a sensation in the media at large. Like all sensations, it dissolved fairly quickly, breathless stories of transformation and doom giving way to accounts of “hallucinations,” in which such programs would invent facts in the process of answering questions. I think we all understood that refinements were in the offing. Someday.
Someday is here. This is the message of Ethan Mollick’s Co-Intelligence: Living and Working with AI, published last month and already in its third printing. Mollick is a professor of management at the University of Pennsylvania’s Wharton School, and this short, notably well-written book surveys a world that we’re already inhabiting—a world in which AI does a lot of things, from writing business plans (in poetry, if you’d like) to drawing pictures, better than humans do. Moreover, unlike previous innovations, this one is not a matter of automating repetitive tasks but of generating ideas. Rather than disrupting labor from the bottom up, this time it’s top down.
As Mollick makes clear, none of us really understand the larger implications of all of this. But it’s dawning on me that I’m going to have to accelerate changes in my methodology. Kids can now ask an AI program to answer pretty much any question in any way with more ease and speed than they could ever manage themselves, and telling them not to use it when writing an essay is akin to telling them not to use a calculator to solve a math problem: not only an implausible request, but very possibly a counterproductive one. The toothpaste is not going back into the tube.
There are ways around it. I have long used in-class essays as a pedagogical arrow in my quiver, but they’ve now become of central importance. In such assignments, students don’t see the question until they arrive in class, and they can use any information they like as long as they generate the answer themselves (AI programs are blocked in school buildings). I will also likely step up my use of oral presentations as assessment tools, for similar reasons. Chief among those reasons: the ability to think on one’s feet is a hallmark of an educated person. So is the ability to frame good and useful questions, since outputs remain directly related to inputs.
Will there be shortcuts and limits to such efforts? Of course; we’ve long had a variety of forms of cheating (tutoring prominent among them), and even the best-designed curriculum often fails to elicit the kinds of work and measurements that may matter most. I suspect that only a minority of students in any given class are truly devoted to learning; the rest are just trying to get by—grades are the only currency that matters to them—which will typically include a few who actively (and sometimes creatively) cut corners. As with so much else, education is a percentages business.
I don’t think the news is all bad. As Mollick documents, students can use AI to generate ideas and compare versions of things in ways that can really jump-start and/or deepen their work. Often, he says, it’s mediocre workers who benefit the most—it’s akin to having a tutor. To that end, I just gave permission to a group of students to use AI for a project they’ve been working on—with the proviso that they explain what they did. In my happier moments, I imagine reading essays that I don’t have to correct for spelling, grammar, and structure, assessing them instead for the value of their ideas—which I would then ask students to extrapolate, revise, or apply to novel situations. One of the most interesting things about AI is that it often gives you different answers to the same question—or different answers depending on the way you frame the question. My job would then require me to be more imaginative in framing worthwhile inquiries.
What worries me about this brave new world is the degree to which AI will tempt students—which is to say the next generation—to abjure thinking altogether. They’re reluctant enough to do it as it is; I can’t tell you how many essays I read that say, in effect, “This topic says very important things about which many people have opinions, depending on your perspective,” which is not simply not thinking but actual anti-thinking: actively avoiding engagement. How much easier it will be if AI not only writes your essay but tells you the best place to have dinner or go on vacation—or, for that matter, whom to date or what to do for a living, based on vague preferences you can barely articulate. (See Tyler Austin Harper’s new essay in The Atlantic on this.)
Are these concerns exaggerated? Almost surely. In any case, the kinds of skills I try to foster have always been a minority avocation. I just hope enough of us will have the will—and skill—to stay a step ahead of machines that can master us.