Why Benchmark?

By: Kevin Fahy, LinkIt's Chief Client Officer

Read time: 5 minutes

I spent the largest portion of my educational career as a middle school math teacher in New Jersey. Back in 2014, when the NJ state assessment first migrated from the paper-based NJ ASK/HSPA to the online PARCC test (now called the NJSLA), teachers like me were scrambling. My students had little to no experience with online test taking, and I needed a tool to familiarize them with this new, high-stakes testing environment.

I had piloted several free online testing programs and did not find one that met my needs. Fortunately, my district introduced me to LinkIt! and asked me to help spearhead a pilot of the tool in my math classroom. In short order, in addition to an online testing tool that closely mirrored the state testing environment, I found myself awash in useful data and analytics that had a significant instructional impact in my classroom.

The partnership with LinkIt! wasn’t without some personal apprehension, however. You see, as part of our district’s adoption of LinkIt!, we also began administering LinkIt! Benchmark tests. The tests come in three forms (A, B, and C), administered in the fall, winter, and spring, respectively. While the content and questions on each test differ, the standards coverage and rigor do not: all three tests assess mastery of major grade-level standards against end-of-year expectations. This concept, which I came to know as benchmarking, was completely foreign to me.

As you can imagine, I had questions. The first question I asked was, “Why would I give my students a test in September on concepts I haven’t even taught yet? Won’t it simply show me what I already know, that my students don’t know anything?” The initial answer was that the fall test was a baseline measurement for growth. While this made sense — how else can you see how much a student has grown throughout the year if you do not have a starting point — being the stubborn person I was (and still am), I wanted more incentive to administer this test. What is the purpose of assessment if it does not impact what I do in the classroom? But I followed my directive and administered the test in the fall, winter, and again in the spring. And over the course of that first year, I began to discover hidden gems within the data that helped crystallize the purpose of these tests for my students and me.

Allow me to outline these gems below, and how they helped me to help my students.

Differentiation

High-performing students aren't high performing in everything, and low-performing students aren't low performing in everything, even on the fall benchmark. I was surprised to see how many students began the year with a head start on grade-level standards they had yet to learn. Conversely, even among lower-achieving students, I could identify pockets of strength in particular areas. During the winter administration, I could perform the same analysis for standards I had yet to teach. For the standards I did teach between the fall and winter administrations, I could quickly identify students in need of remediation and support.

Item Analysis and Common Errors

LinkIt!'s Item Analysis report is a powerful tool for seeing exactly how students answer questions. Because LinkIt! was transparent with testing content, I could identify strong incorrect distractors that large portions of students chose as they took the test. For standards I hadn’t yet taught, this helped me formulate instructional strategies to address the preconceived notions students were bringing to the material. One of the most powerful activities I could perform was to pull up a question in front of the class and say, “Look around the room. The correct answer was B, but 3 out of every 4 of you chose the same incorrect answer, C. Why might that be? Discuss in your groups the mistake that leads a student to pick C, and then we can discuss it as a class.”

Focusing on Student Growth vs. Achievement

It is easy to become discouraged when you see students score lower than you would expect on a classroom assessment. Understanding the nature of benchmarking requires you to eschew the traditional classroom gradebook model of assessment. It can be even more discouraging for a special education teacher, or in other classrooms with historically low achievement, to see student predictive levels in red, orange, and yellow. That reaction is normal, but there is tremendous variability among students within any achievement level, and especially among lower-performing students. Benchmarking is about growth, and about moving students ahead one small step at a time.

Replication of High-Stakes Testing Environments

I have never subscribed to the notion of teaching to the test. Rather, my goal as a teacher was to teach to standards and assess those standards in an environment that mirrored the rigor, item types, and testing modality of the state test. Because I could see how much time students spent on each benchmark item, and how often they revisited it, I could discern whether a student’s performance was due to a knowledge gap or some other hidden factor, such as test perseverance or even item type. Knowing that my students struggled on drag-and-drop items, for example, allowed me to infuse that item type into my own assessments, thereby helping them answer questions in a way that accurately reflected their level of content knowledge.

Baseline for Growth

As mentioned above, the fall benchmark is a true baseline measure to calculate growth throughout the year. LinkIt! calculates a growth metric known as LGP (LinkIt! Growth Profile) that shows how each student’s growth compares to students with similar baseline achievement levels throughout the state. This helped me — and ultimately my district administrators — to identify instructional and curricular strengths and weaknesses.

Early Indicator

All three forms of the LinkIt! Benchmarks are highly correlated with students' eventual performance on the spring NJSLA state assessment. As such, students are placed into predictive achievement levels based on their test performance, and in the fall I had a strong indicator of where my students were projected to be by spring. This correlation can be a key component of a balanced assessment strategy, helping to identify students for intervention and remediation.

Balanced Assessment System

I don’t think any educator should put all their eggs in one basket. In other words, the LinkIt! benchmark was one of several instructional tools I used to determine what would best help my students in both the short and long term. Curricular assessments, daily and weekly formative assessments, and project-based activities all played a role in helping me adjust instruction throughout the year.

Kevin's focus at LinkIt! is managing and overseeing all aspects of client implementation and success including professional development, client support, and communication. In addition to previously working as a LinkIt! Account Director/Educational Consultant, Kevin has over 20 years of experience in education as a math teacher, instructional coach, and administrator, and successfully implemented the LinkIt! solution in each of these roles before joining our team. Kevin's commitment to client satisfaction and success is an instrumental part of LinkIt's vision.



Copyright © 2024 LinkIt! All rights reserved.