A college student’s perspective on using AI in class
[Maximilian Milovidov is a freshman at Columbia University and a member of TikTok’s Youth Council. He used a large language model to edit this essay for length and a human to edit for content. This piece also appeared in the Body Electric newsletter. Sign up here for a biweekly guide to move more and doomscroll less.]
Last fall on campus, I attended a reading of The New Yorker article “What Happens After A.I. Destroys College Writing?” During the audience discussion that followed, I shared something that drew unexpected laughs: A course I was taking, “Writing AI,” might be the only one on campus where artificial intelligence was not prohibited but, rather, required. This “AI-first” class was a living thought experiment asking: What if we taught students to use AI critically, rather than insisting they ignore it or assume they’re using it to cheat?
Our choice is not really “AI or no AI,” any more than past generations could choose to halt the spread of the printing press — that widely decried threat to scholarship. Children born today will never know a world without AI. The majority of U.S. teens already use AI chatbots, and over half turn to them for schoolwork. Students will reach for these tools whether universities ban them or not.

A prevailing concern is that generative AI encourages people to outsource their thinking to machines, which weakens understanding — a phenomenon known as “cognitive offloading.” Through that class, I realized this worry holds weight only if AI is treated like an omniscient oracle. When students are encouraged to experiment with and critique large language models (LLMs), AI becomes an on-demand study partner with benefits and drawbacks. At the very least, it’s a sounding board; at best, a viable alternative to a teaching assistant or tutor.
In class, we brought our own ideas and outlines. We fed drafts into a chatbot while documenting its suggestions and then explaining why we accepted or rejected them. We began with our own sparks of inspiration, argument and thought, but learned to prompt chatbots to expose gaps in reasoning or find unseen connections. My professor called this the “friend test”: You would ask a friend for feedback on a paper, but you wouldn’t make the friend write it.
Research shows that AI can supplement education when used as a collaborator for feedback, iteration or ideating. One study found that students using moderate AI assistance during lectures outperformed both the students using fully automated help and those using minimal support. When used in moderation, AI can improve human cognitive performance.
AI can also level the uneven playing field of academia. For the many students without access to private tutors, chatbots can generate practice questions, mock exams and flash cards; give feedback on a paragraph; or suggest a counterargument. Using “study mode” features, LLMs can nudge students toward answers instead of handing them over, the way a good TA would. A 2025 Harvard University study found that students using an AI tutor achieved learning gains that were more than double those in traditional classrooms, and they felt more engaged doing so.
The fact that these systems are trained on biased, Western-centric data is precisely why students must learn to question them. When we fed drafts into a model, it did not magically return A+ prose. More often, it amplified our weaknesses back at us: Vague claims remained vague, filler language spread and it often read as impersonal and corporate.
Sometimes the chatbot’s version of an essay was so horrifyingly bland that I became weirdly proud of my own messy and imperfect paragraphs. Those moments taught me more about my own writing process than any closed-book or in-class essay. When anyone can generate a passable paragraph, what distinguishes us is not whether we can produce text, but whether we can think, judge and revise.
My “Writing AI” class sought to explore appropriate use of AI in academia. In practice, though, we learned how to be thoughtful about why we are reaching for a tool, what we are hoping to get from it, and what is given up in the process. It provided a space to openly discuss our relationship with the defining technology of our generation, without shame or fear of punishment — remarkable amid a climate where campuswide “AI shame” routinely drives student use underground.
These skills will matter after graduation. AI may automate entry-level jobs, which makes it all the more crucial that we are taught how to work alongside these systems. We cannot control what kind of job market we join, but we can demand that our education prepares us to wield this technology, not hopelessly endure it.
This essay was written by Maximilian Milovidov and edited by Phoebe Lett.
You can hear more from Milovidov and why he resents being called a member of the “Anxious Generation” in TED Radio Hour’s episode “Did social media break a generation — or just change it?”