A Broken Clock is Right Twice a Day
[Editor’s Note: Dr. Martha Nadell is the chair of the English Department at Brooklyn College; teaches courses in American, African-American, and Brooklyn literature, as well as in college writing; and is a graduate of Harvard College and Harvard University. Martha is also a thought leader on AI among the professoriate; she participated in an AI Literacy institute in conjunction with CUNY and co-created a series of chatbots to guide student critical thinking and career development. I’ve enjoyed a series of coffees with Martha in which we’ve explored some of the themes in this article. Personally, I find it very reassuring to see experienced educators humbly and seriously wrestling with how to use AI effectively in the classroom.
In this, our first guest post, Martha explains how her deepening understanding of the imperfections of AI helped her to reflect on her own opportunities for improvement as an educator as well as on how to partner more successfully with these powerful new tools.]
It was 10:30 that opened my eyes – not 10:30 on a random Wednesday morning, but my request that ChatGPT construct an image of an analog clock that displayed the time. But that smooth-talker, that silky-voiced ChatGPT, which had convinced at least some of us that it was on the verge of replacing human intelligence, couldn’t do it.1 No matter how many times I asked, it couldn’t get the hour hand to point to 10 and the minute hand to point to 6, something a second grader with a crayon and a piece of paper could manage with ease. This glitch, about which I learned in a recent workshop, changed my thinking about AI.2
From Panic to Pedagogy
Late in 2022, when ChatGPT first made headlines, academia seemed to lose its collective mind; the Great AI Panic of 2023 was about to begin. Some of my colleagues immediately went apocalyptic, imagining a world in which AI took over; Skynet was about to go live, they warned. Others started stockpiling blue books and reworking their assessments into the written and oral exams of years ago. A few were ready to have AI integrated, somehow, into their brains. But others stuck their heads in the sand and pretended it didn’t exist.
That was my approach; I was an ostrich. Accustomed to the glacial pace of academia, I thought I would have time for some research during my winter break or over the summer. In the meantime, I would teach as if AI didn’t exist and ask my students to buy into that approach. In those first few semesters, when AI still felt new, my students and I agreed that they would not use it, or so I thought.
That was the wrong strategy, of course. AI was too tempting for my students; it saved them time, it was easy, and it was accessible. And so, when, in my first-year writing class at a mid-sized public college, I received a paper about “democratic patenting” for an assignment about democratic parenting, I knew I had to act. I had to take my metaphorical head out of the sand and really think about generative AI and its impact not only on my classroom but also on higher education.
Early on, it was very easy to spot generative AI-produced work. ChatGPT (that’s all that students seemed to be using) was producing solidly mediocre work, C+ at best. The problems were obvious: deeply conventional language, workaday structures, and unoriginal thought. Students were offloading their cognitive work to a pattern-matching machine, which could produce prose that possessed an air of authority, as long as you didn’t read too closely.
Today, the irony of my concerns isn’t lost on me. There I was, lamenting the death of originality and decrying the loss of creativity, wondering if my students would be able to write papers in distinctive voices and to present original thoughts. But I had recycled my curricular materials repeatedly, deriving them from a handbook for college composition whose readings and suggested activities were available to innumerable instructors, as well as to the websites that sold academic papers. I had created conditions that were ripe for plagiarism, for those students so inclined, and I had to do something.
Hanging on to Process
Pre-AI, I had thought that, with enough scaffolding, enough creative and engaging classroom activities, enough training and warning about plagiarism, my students wouldn’t, in fact, plagiarize. By and large, I was correct; I had very few cases of plagiarism, which were always easy to spot and address. Most of my students developed a strong, individualized writing process, which was one of my explicit goals in the course. But now, with the days of cut-and-paste plagiarism long past, I realize that, although I focused on helping students develop a viable writing process, I was expecting and accepting a product that was too formulaic and unoriginal, essays that were conventional at best, precisely because I was reusing my course material each year. The AI revolution has forced me to make sure that my own practices in teaching writing focus even more on process rather than product, and more on academic integrity and transparency rather than on catching plagiarism.
But the challenge of AI isn’t, of course, just plagiarism, which at its most basic is claiming work that you did not do as your own. In fact, the challenge is hanging on to process. When I teach, I ask my students to follow a model proposed by Anne Lamott in her book Bird by Bird. The idea is that everyone, except the rarest among us, has to write in drafts. That first one is the “really, really shitty first draft,” where we get our ideas down, where writing is thinking and thinking is writing, where we don’t have to pay attention to the editor whispering in our ear but can improvise and play with thought and language.3 In the age of AI, we have to figure out how we can retain that “shitty first draft,” whatever that may mean for each discipline,4 when, as a recent article claimed, “everyone is cheating their way through college.”5 After all, it’s that shitty first draft – that messy, unpolished, initial encounter with a set of ideas – that allows and encourages students to think for themselves. Brainstorming – writing down one’s thoughts without judgment – is, after all, the first step for some serious cognitive work. As Ted Chiang reminds us, “Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say.”6
Developing Critical AI Literacy
One way to retain an emphasis on process, when product is so tempting, is to engage in “critical AI literacy.” This could include unpacking the ethical and environmental implications of the production and use of large language models: how the datasets on which LLMs rely are collected and coded; the enormous energy demands of training and running LLMs; or their potential to upend careers through elimination and speed-up.
Critical AI literacy can also be thought of as an understanding of what AI is and isn’t and of how it works and where it fails: LLMs are fallible predictors of what humans may say. They do not think for themselves, no matter how polished their prose may become, and yet they may be useful. To this end, I ask my students to think through if and how they can use AI as a tool and as part of their own process, and when it’s appropriate to do so. I, along with many others, have developed activities and even our own chatbots that deploy AI as a conversational thought partner, bouncing ideas back and forth with students and helping them hone their arguments by offering counter-arguments or contradictions; as a co-researcher that may not be entirely accurate but may provide useful sources; as an editor that may help with typos but may also nudge students toward somewhat generic prose; and as a tutor, always available to help students with their grammar, thesis statements, conclusions, or something else. Depending on the discipline and interest of instructors and students, this list can be even more expansive. But, no matter how AI is used in the classroom, a critical distance, a stance of skepticism, has to remain, because AI can’t seem to tell time.
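As an illustration of how a classroom “thought partner” of this kind might be wired up, here is a minimal sketch in Python. It assumes the OpenAI Python client; the model name, system prompt, and function are illustrative assumptions, not the prompts behind the chatbots described above.

```python
# A minimal sketch of a "thought partner" chatbot. Assumes the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment; the
# model name and system prompt are illustrative, not taken from this essay.
from openai import OpenAI

client = OpenAI()

THOUGHT_PARTNER_PROMPT = (
    "You are a thought partner for a first-year writing student. "
    "Never write any part of the essay for the student. Instead, ask one "
    "probing question at a time, offer counter-arguments to the student's "
    "claims, and point out where evidence is missing or reasoning is thin."
)

def respond(conversation: list[dict]) -> str:
    """Send the running conversation to the model and return its reply."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": THOUGHT_PARTNER_PROMPT}]
        + conversation,
    )
    return completion.choices[0].message.content

# Example exchange: the bot questions and pushes back instead of drafting prose.
history = [{"role": "user", "content": "My thesis: social media harms democracy."}]
print(respond(history))
```

The constraint in the prompt is the point: the bot questions and contradicts rather than drafts, so the shitty first draft stays in the student’s hands.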
AI’s Fallibility, the Importance of the Human, and the Liberal Arts
Back to 10:30. AI’s inability to construct an image of an analog clock that reads 10:30, a simple request, one would think, reminds us of AI’s limits and fallibility. It reminds us that AI isn’t human, that it can’t engage in the metacognition that we prize as critical thinking. While it may be adept at creating code or producing generic lab reports, it cannot replace creativity, originality, empathy, and ethics. As AI becomes more and more integrated into academic endeavors and workplaces, we need people who understand the latter even more than the former.
And this is where the liberal arts will flourish. With their orientation toward analysis, ethics, empathy, and creativity, the liberal arts can foster in students that necessary skepticism about AI and also guide them toward an approach that deploys it effectively.
My students learn how to read closely and critically; they develop skills in making arguments based on solid evidence. They question, test, and search for the strongest ideas when confronted with competing narratives. These aren’t skills that will become obsolete in the face of advancements in generative AI, which is, at its core, a tool so good at replicating and predicting word sequences that many anthropomorphize it. These are skills in thinking, which will become even more essential as AI improves, using chatty language to mask the fact that LLMs are sophisticated pattern-generating machines, neither conscious, nor creative, nor capable of understanding or empathy. Liberal arts majors will be the ones who remind us that, although generative AI may be adept at predicting what a human would say or write, it cannot predict what a human will think or feel, at least not yet.
The liberal arts are also where students will be able to develop ethical practices for using AI. The humanities especially will be the place where faculty develop strategies to teach students how to use AI as the tool that it is and to remind them that AI can produce confident-sounding and plausible “hallucinations,” those invented sources that don’t exist but could, presented with a veneer of authority that the most distinguished of professors might envy.
The place of the liberal arts in an AI-inflected world is especially important right now, when the liberal arts, and especially the humanities, are presumed to lack relevance to students’ post-graduate lives and when colleges and universities have to address the career readiness of their graduates. At this moment, when the danger in AI is its uncritical or thoughtless use, it is essential to hang on to what is distinctly human, what AI is not yet or may never be able to do. The liberal arts are where critical thinking happens, where students are able to recognize the limits of what AI is good at – predicting the likelihood of common and formulaic arrangements of language and thought – and can think through ethical quandaries with empathy. Liberal arts students have the creativity to go for the unlikely and the unpredictable in pursuit of new knowledge or new expression. And this is why liberal arts graduates will flourish in AI-inflected work environments, for they possess the interest and skills, honed in years of humanities, social science, arts, and sciences classes, to maintain the human.
In the age of AI, the liberal arts remind us that we have to double down on the things that make us irreplaceably human: our ability to question, analyze, imagine, and speculate rather than just predict probable outcomes; our willingness to grapple with ethical dilemmas through the lens of our own experiences; and our impulse to create something genuinely new rather than just remixing what already exists.
1. See this thread on Reddit, which locates the source of this problem in the training data and pattern-recognition architecture for generative AI.
2. Thanks to Zach Muhlbauer at the “Technical Interlude & Tinkering,” Critical AI Literacy Institute, The City University of New York, May 16, 2025.
3. Anne Lamott, Bird by Bird: Some Instructions on Writing & Life (Anchor Books, 2019), 20.
4. Thanks to Luke Walzer for making the connection between the “shitty first draft” in writing and the “shitty first draft” in other disciplines (Critical AI Literacy Institute, The City University of New York, May 16, 2025).
5. James D. Walsh, “Everyone Is Cheating Their Way Through College,” New York Magazine, May 7, 2025.
6. Ted Chiang, “ChatGPT Is the Blurry JPEG of the Web,” The New Yorker, February 9, 2023.

Dr. Nadell,
Thank you for writing this. I have been having similar thoughts myself. I'm more on the tech side of the questions you are raising, but developing in the AI space has forced me to re-evaluate my ethical stance, my intentions, and my cognitive processes. I have begun engaging with AI as a way to elevate all three, and I find myself on a philosophical journey where I am reading the likes of Murdoch, Ricoeur, Midgley, Baudrillard... alongside mathematicians like Kurt Goedel and even transformer theory. Confronting this technology is, for me, a massive internal struggle, and I am so grateful that my humanities background, however lacking it might have been, has left me with the tools I need to face it head on. At least I know my deficits, and I have some idea how to approach them.
I would worry less about kids faking it and more about engaging them in a lifelong passion. This technology can be an accelerated gateway to that passion; at least it is for me.
Though I do have to ask: would I be this interested in these philosophical questions, if I had not experienced at least some rigor in my (pre-AI) humanities education?
Would I even know that there are such questions?