The Illusion of Growth
A few weeks ago, I shared my surprisingly positive experience with the i-Ready Reading Diagnostic, a standardized test that, against my philosophical leanings, proved to be my most effective assessment experience in teaching 6th-grade English Language Arts. Perhaps the most telling thing about that blog post was how quickly the idea came to me; for all of its drawbacks, I felt confident that i-Ready was a strong assessment. When it came time to name my worst assessment experience, however, I avoided the question entirely: I fished for contributions from my fiancé and friends, prompted Google Gemini with "worst assessments a teacher could give," and deliberated at length before landing on an answer that disappoints me for a multitude of reasons. My worst assessment comes with far less nuance and far more frustration, and it is as innocuous as a multiple-choice pre- and post-test given in my own classroom. Despite its seemingly straightforward design and its intended purpose of measuring growth, this assessment consistently undermined genuine learning, distorted my teaching, and ultimately lacked validity as a measure of student knowledge, making it far from fair for both me and my students.
What It Is and How It Works
This particular assessment is a 20-question multiple-choice digital test. In my department, pre- and post-tests are designed to measure students' understanding of core reading skills, such as theme, plot, and character development. Roughly half of the questions are dedicated to vocabulary and definition matching, while the other half require students to apply these reading skills to brief passages or sample sentences. The digital interface, Illuminate DNA, itself adds a layer of complexity: students must navigate a PDF of the test questions on one side of their screen while simultaneously using a separate digital bubble sheet on the other. This setup often proves confusing, especially early in the school year.

Sample questions on Unit 1 post-test on the testing platform (Illuminate DNA, 2025)
Why It’s My Worst
This pre/post-test stands out as my worst assessment experience primarily because its ultimate purpose shifted away from student learning and toward teacher evaluation. As a key data point in my end-of-year review, ensuring students showed "growth" from the pre- to post-test became imperative. Before recent legislative changes in Michigan, this data was a mandated component of teacher evaluations, meaning the assessment became a way of grading the teacher, not the students. This system creates a perverse incentive: it is in my best interest to guarantee high post-test scores and, at times, even to discourage students from fully engaging with the pre-test.
Consequently, my entire unit design becomes heavily influenced by the impending assessment. As the post-test approaches, I inevitably find myself cramming any skills not yet thoroughly covered, resorting to games that practice the exact question types students will see. We use study guides that mirror the test’s structure, making it the epitome of teaching to the test.
This approach has a severe impact on student engagement and the quality of instruction. Students often do not feel genuinely invested in the learning process, as they recognize the focus on test preparation. An end-of-year survey I conducted confirmed this, with many students expressing a wish for more time with the material and less time preparing for the test. Montenegro and Jankowski (2017) noted, "Assessment, if not done with equity in mind, privileges and validates certain types of learning and evidence of learning over others, [and] can hinder the validation of multiple means of demonstration" (p. 5). My ELA pre- and post-assessment, by narrowing instruction to specific testable skills, inadvertently tells students that only a very particular type of knowledge and demonstration matters, which is far from an equitable approach.
Furthermore, the data generated is deeply problematic. It shows me what I already know: which students have figured out the game of school and which have not or choose not to participate in it. This test is far from a meaningful assessment of what students have genuinely learned after days or weeks of carefully planned instruction. The “growth” data it provides offers my administrators and district an inaccurate and incomplete picture of my students’ actual development. Because of the pressure, the instruction leading up to it is often of lower quality.

Multiple Assessment Growth Summary and Roster Data Sheet (Illuminate DNA, 2025)
A Need for More Meaningful Assessment
What begins as an intended measure of student growth can, under external pressures like teacher evaluation, devolve into a system that undermines genuine learning and forces instruction into rote memorization. I know that pre- and post-assessments are designed for quantification. Still, they often lack validity and fairness in their application, ultimately providing misleading data and diminishing the entire learning experience for both my students and me. This assessment serves as a reminder that numbers alone are insufficient, that grades without feedback are meaningless, and that the value of an assessment lies not in its ability to quantify performance for external systems but in its capacity to genuinely support a learner's growth.
References
Gemini. (2025). Gemini (2.5 Flash) [Large language model]. https://gemini.google.com
Contribution from Google Gemini: I prompted Google Gemini to assess the quality of my draft. Gemini provided questions to extend certain sections; I responded to these questions and embedded the answers in the writing. I also prompted Gemini to help write a title for the blog post.
Grammarly. (2025). Generative AI Assistance. https://app.grammarly.com
Contribution from Grammarly: Grammarly has a feature that "increases the impact" of your text. I used several of Grammarly's suggested improvements to sentence structure and clarity, which helped strengthen the piece in certain areas.
Montenegro, E., & Jankowski, N. A. (2017). Equity and assessment: Moving towards culturally responsive assessment (Occasional Paper No. 29). University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
