American institutions are abuzz about AI and its potential. Universities, in particular, seem to be embracing AI without much question. The University of Texas at Austin is celebrating 2024 as “the Year of AI.” Arizona State University and Johns Hopkins University now offer undergraduate and graduate degrees in Artificial Intelligence. Multiple universities, including Penn State, Baylor, Oregon State, and the University of Michigan, have hosted AI events that raise few questions about AI’s ethical ramifications or its effects on human flourishing. Instead, many professors understand AI as a tool that can enhance teaching and learning. In fact, instructors regularly receive advice on how to incorporate AI into their classrooms to enrich student learning. Most seem to believe that professors have an obligation to teach students how to use AI responsibly because they will inevitably encounter it in their future workplaces. This, I think, is the wrong approach.
My understanding of AI comes from a broader view of technology that I have developed over the last ten years. Born in 1998, I grew up as technology advanced, smartphones proliferated, and everyone around me, especially adults, seemed to embrace a technocratic future. In middle school, my teachers installed smartboards in their classrooms. In high school, students received Chromebooks or iPads. At all times, everyone was on their smartphones. When I reflect on my encounters with these technologies from an early age, I cannot help but consider how adults’ ready embrace of technology shaped me in profoundly negative ways.
Constant engagement with technology at school meant that I was more likely to use it outside of the classroom. Because the adults I trusted quickly accepted and adopted new technologies in educational spaces, I figured those technologies were always acceptable. As a result, I was less likely to read, more likely to scroll, and highly likely to feel depressed. By age sixteen, I had developed an eating disorder, thanks in no small part to social media. Such experiences are not uncommon. My learning diminished, my relationships worsened, and I felt like I was slipping away from myself. Sometimes I would delete everything, try to unplug from all media, and find some semblance of peace, but then I’d go back to school. Screens were everywhere.
By the time I reached college, technology was inescapable. Few classrooms were tech-free. Students interacted constantly on social media. On my smartphone alone, my screen time averaged anywhere from five to eight hours per day. There were few instances when some form of digital technology did not shape my day-to-day life. If I reaped any benefits from this embrace of technology, I have yet to discover what they are.
As I grew increasingly worried about my own sanity, everyone I talked to treated the expansion of these new technologies as inevitable. When I heard adults lament the loss of reading a physical book, playing outside, or speaking face-to-face, I also heard some kind of justification. “Well, that’s just how it goes.” “We have to adapt.” “This is the future.” Such responses left me feeling helpless. My future, it seemed, was to be miserable as technology engulfed me.
But this passive acceptance of supposedly inevitable technological change is the wrong response. And we are once again at a moment when we must refuse this mindset, particularly when it comes to AI.
AI’s negative effects are already being documented. I’ll name just a few. First, AI blurs the real and the imaginary. One teenager died by suicide after repeated interactions with an AI chatbot he had developed a relationship with. Second, AI steals from humans. It is no secret that AI companies exploit personal data and use individuals’ work without their permission. Third, AI destroys the planet. To keep AI up and running, companies require constant energy, which results in the unethical extraction of natural resources. And to top it all off, one AI researcher suggested that if the growth of AI continues without checks, then “we are all going to die.” While some have rightly suggested that we teach students responsible engagement with the digital world, this triad of abuses, and what seems to be at stake for humanity, leaves me wondering how we can teach responsible use of AI, a system that is innately irresponsible.
Paul Kingsnorth suggests that we interrogate the consequences of a particular technology and draw lines. For now, I think the responsible action for educators is to draw an AI line. Teachers do not need to bring AI into the classroom. They do not need to encourage student interaction with ChatGPT. Professors do not need to teach students AI “skills.” And no educator needs to support the expansion of AI across their school’s campus. Instead, adults must ask more questions: Why would this technology be valuable for students? Will it help students flourish in my class and beyond? How will AI form those who rely on it? Answering these questions will take time. To rush AI into the classroom or into daily life is to put student well-being at stake. And as Kingsnorth reminds us, refusal to accept certain forms of technology can “enrich rather than impoverish.” By refusing AI in educational spaces, students may, in fact, come out better on the other side. In my own experience, the embrace of technology by trusted adults in a trusted place led to even heavier use of those technologies outside of it. Unbeknownst to those adults, the effects were detrimental.
AI developers, like marketers and advertisers everywhere, want us to believe that denying ourselves AI is to deny ourselves a rich future filled with possibilities. But as a student, I bore the brunt of the unquestioning technological embrace of the adults around me. Only within the last few years have I discovered what a beautiful life exists offline. I spend more time outside, with friends, and reading books. My relationships with myself and with others have vastly improved. I regularly tell my husband that my newfound, healthy relationship with food is nothing short of a miracle. And to think that eight years before I was born, Wendell Berry had already exhorted us to limit our involvement with harmful technology. Indeed, all of this has been possible because I try (and often fail) to abide by Berry’s wisdom. I aim to constrain my interactions with the digital and increase my interactions with the real.
Now, in my role as a teacher, I feel a deep sense of responsibility to my students because I was like them not that long ago. My tech-free classroom, prohibition of AI, and handwritten exams may seem old-fashioned to them at first. But I have had multiple students thank me for asking them to put their phones away, focus their attention, and embrace opportunities to learn. I can only hope that this approach to education leaks into their lives. They, too, might look up from the screen in front of them and wonder what they have been missing.
To deny students AI, then, is not to deny them a future. Perhaps, in fact, it is a way to invite them to live fully into the one before them.
Image credit: Thomas Malton, “Cambridge University: Great Court And Chapel” (1789) via Wikimedia Commons