Is this EdTech trap one of my own making?

My pursuit of efficiency has, at times, led me to rely on tools like Google Forms, EDPuzzle, or Kahoot. While these are quick to grade, they diminish my students' interest. Students do not engage deeply with the material when they know a multiple-choice quiz awaits, no matter how gamified, and they rarely talk with me about what their scores say about their understanding.

The concept of “surveillance in place of care” (Corrigan & Beazley, 2020) resonates deeply with me. These tools collect constant data on student performance, tracking every answer, score, and even time spent. This data is then primarily used for reporting and accountability rather than for genuinely informing student learning. It creates a system where students feel constantly observed and measured, their complex learning often reduced to points. “What do these points even mean?” my students inevitably ask sometime in October, before conferences, after I have already spent class time breaking down how grades work.

This practice of using points as a grade, and of prioritizing the efficiency of EdTech, inadvertently privileges monitoring over genuine learning support, creating the unintended consequences Kohn (2011) describes: students focus on the score, not the learning.

However, I am reminded that even within systemic constraints, I still have agency. Take a recent assessment I designed: small groups of students using flashcards as the foundation of a short-answer test for “I Never Had It Made.” This assessment is my attempt to break free from the cycle of efficiency over feedback. The design incorporates built-in time efficiencies: I can check for understanding of figurative language types during the groups' discussion time, reducing the burden on my limited planning minutes. Collaborative group discussions can run concurrently with other activities, such as students working on i-Ready lessons, maximizing instructional time. Using simple notecards as the “technology” for written responses also intentionally moves away from the opaque nature of some digital platforms, making the process transparent and tangible for students. Further, this design choice reduces both the perceived need for AI-driven surveillance and the temptation for students to turn to AI for cheating.

The most impactful element of this assessment, however, is the pedagogical shift it enables. Getting to actually sit with a group of students, reflect on what they learned from a specific assignment, and discuss their understanding before they earn a grade feels caring and compassionate, especially for 6th graders. This approach, in the spirit of Kohn, creates greater room for intellectual risk-taking. The assessment becomes a tool for learning, and there is a greater chance of trust being built within the class.

References

Corrigan, A., & Beazley, G. (Executive Producers). (2020, October 30). Failure to Disrupt book club with Chris Gilliard [Audio podcast episode]. TeachLab. https://www.teachlabpodcast.com/failure-to-disrupt-book-club-with-chris-gilliard/

Gemini. (2025). Gemini (2.5 Flash) [Large language model]. https://gemini.google.com

Contribution from Google Gemini: Prompted Gemini to develop an ungraded grading system for a group activity where students work toward the following learning objective: Students will identify types of figurative language in “I Never Had It Made,” explain their meaning, and analyze their impact within the text.

Grammarly. (2025). Generative AI assistance. https://app.grammarly.com

Contribution from Grammarly: I utilized Grammarly to enhance the clarity and grammatical correctness of the post, as well as to refine word choice and overall flow.

Kohn, A. (2011). The case against grades. Alfie Kohn. https://www.alfiekohn.org/article/case-grades/
