A week ago, I became an anonymous hero amongst teachers at my school. I also became a villain to many of my students.
I have been dealing with an uptick in cheating—or at least direct copying and pasting from Google’s AI Overviews—this school year. The most recent (and perhaps most insidious) infiltration of AI cheating in my classroom has come via Google Lens. Students can quickly screenshot a selection of the screen, and Google Lens does the rest of the work for them. The product is marketed as homework help. Suddenly, students were racing through IXL questions and raising their SmartScores, but when asked to explain the concept, they were lost. Los Angeles teachers are dealing with the same issues.

Image from Google Lens.
My students were not ready for this much AI, I thought, and neither was I. So I consulted my favorite 6th-grade hacker, emailed a few people on the technology side, and got Google Lens disabled. If students are going to cheat, shouldn’t they have to work for it a little?
At least, that was my initial reaction to getting Google Lens removed. As the day went on, however, and students realized that 1) Google Lens had been disabled and 2) I was behind its removal, I began to hear differing opinions. Two students noted that their science teacher encourages them to use Google Lens during daily warm-ups to better understand the processes they are learning. An argument could be made that it can serve math students as a personalized tutor.
This brings me to now. I am a learning design graduate student deeply enmeshed in AI and education. The truth is that I am not ready for AI in my classroom right now, but I cannot disable it forever. And I wonder: am I doing my students a disservice by hiding from it? As a student, I have found it to be an incredibly powerful tool for brainstorming and design organization. I have friends and colleagues who hold that there is no such thing as ethical AI use; I cannot say I agree. I see its potential in tools like MagicSchool, Google’s Learn Your Way experiment, and augmented textbooks (LearnLM Team, 2025), especially in Title I schools like mine.
The AI4K12 (AI for K–12) initiative provides guidelines for AI education in schools. Its 5 Big Ideas could be a useful starting point for explicit instruction.

For further research, the TeachAI principles and the AILit Framework move beyond what AI is to focus on how it should be used. Harvard tells us that we need to explicitly teach AI and use it with our students. Still, I struggle to see the classroom as a place for the most cutting-edge technology; is this how teachers felt when computers first entered the classroom? Sometimes I wonder what it would be like if we took them away altogether.
I remain open-minded about AI in the classroom. I plan to include more explicit instruction on AI, using resources from Common Sense Media, and I will continue to explore its possibilities in my classroom and see what it can do for students.
References
AI4K12. (n.d.). Sparking curiosity in AI. Retrieved December 9, 2025.
Bennett, A. [EduTechforSchools]. (2025, November 3). Google Lens in the classroom: Instant answers or learning partner? [Video]. YouTube.
Google. (n.d.). Google Lens [Computer software].
Google. (n.d.). Learn Your Way [AI learning platform]. Retrieved December 8, 2025.
Jones, C. (2025, December 2). His students suddenly started getting A’s. Did a Google AI tool go too far? Los Angeles Times.
LearnLM Team. (2025). Towards an AI-augmented textbook.
MagicSchool AI. (2025, April 2). AI in education: How schools can use federal funding to purchase MagicSchool.
OECD. (2025). Empowering learners for the age of AI: An AI literacy framework for primary and secondary education (Review draft).
Ross, E. M. (2023, July 20). Embracing artificial intelligence in the classroom. Harvard Graduate School of Education.
TeachAI. (n.d.). TeachAI: AI for education. Retrieved December 8, 2025.
