What are your initial reactions to this Calvin & Hobbes comic? Think or jot down a few ideas, then take a look at my initial reactions:
- Ugh, this teacher is putting no thought into using assessment as a platform for meaningful feedback and growth. Which is why it’s no surprise that…
- Ick, Calvin is exemplifying a classic performance or ego orientation to learning, where what excites him is having “performed well” on the task. He doesn’t really care about what he learned, just that he did well on it.
Here’s something that may not surprise a lot of folks – us math teachers can be really bad at feedback. A lot of times that’s because our archetype of mathematics is performance based – the only thing we’ve ever been told matters is whether or not the student “did the math” and “got the answer” – how they performed on a task rather than what learning they demonstrated. Sometimes it’s because we just don’t think we have enough time to give detailed feedback, and (for some reason that does not apply to social studies, English, or science teachers) we never get around to giving effective feedback to students.
One big problem with this meh approach to feedback is that it really messes with our students’ mathematical identity. As Aguirre, Mayfield-Ingram, and Martin home in on throughout their fantastic book The Impact of Identity in K-8 Mathematics, “[feedback] has the power to determine whether children see themselves as mathematically proficient.” Feedback matters, folks.
If you’ve read my extensive (2) postings on this blog, you’ll notice that I’ve mentioned meaningful feedback before. Honestly, meaningful feedback is my bread and butter, so I’d hate to leave it at just one post. So here’s another, specifically focusing on a few techniques I’ve found really helpful for giving meaningful, actionable feedback to students while still getting through 150 students’ work within a week or two (i.e., on a faster timeline than what I accomplished in my first years in the classroom, when ungraded tasks became mountains of paper that posed a significant safety hazard).
What defines a “Math Task”?
First things first, this type of feedback revolves around the idea of performance tasks. Jay McTighe of Understanding by Design fame gives a great overview of what it means for something to be a “performance task,” stating that “[a] performance task is any learning activity or assessment that asks students to perform to demonstrate their knowledge, understanding and proficiency. Performance tasks yield a tangible product and/or performance that serve as evidence of learning. Unlike a selected-response item (e.g., multiple-choice or matching) that asks students to select from given alternatives, a performance task presents a situation that calls for learners to apply their learning in context.”
The layman’s definition I usually give to my teachers (and perhaps some performance purists will prosecute me for this) actually centers on what a performance task emphasizes rather than what sorts of contexts qualify for the definition. I tell my folks the following:
A performance task is any task where you’re asking students to explain and defend their mathematical thinking.
From this definition you can see that the emphasis with a performance task is not just on whether a student can “get the answer” to a problem, but on whether they can engage in metacognitive thinking and explain, either in writing or verbally (yes, you can do this verbally!), their thinking process. It’s a deeper expectation of understanding the content, and it requires a deeper level of feedback. “Be careful with your math” just won’t suffice.
First, Use a Consistent, Holistic Rubric
The teachers I work with accomplish meaningful feedback by using a consistent rubric based on the QUASAR Cognitive Assessment Instrument (QCAI).
This is vital for a few reasons:
- It forces us to give feedback on student strategy and communication, not just on whether the student gets a nice, neat answer. There’s your metacognition.
- It still, though, emphasizes that correctly executing algorithms and getting an appropriate solution matters.
- It allows students to get actionable feedback on their thinking throughout the year. Knowing I need to “work harder” or “get functions better” really doesn’t help me as a student, but when I can see that “oh, my communication was a 2 – that means that the next time around I need to make sure that my explanation is clear and I’m using examples in my response” then suddenly I have something specific to focus on going forward.
- The QUASAR project was also designed from an equity lens, so it has a special place in my heart ❤
Then, Derive a Rubric for Each Task
The biggest issue that using a consistent, generalized rubric presents for new teachers is that they often struggle to apply it to the context of a specific performance task. For instance, what the heck does it mean to identify “all important elements of the problem” for a task involving unit rate? This ambiguity – central to what makes these sorts of rubrics useful to students – is a serious roadblock for new teachers. The solution that we’ve employed to resolve this confusion and to provide consistent, meaningful feedback to students is to create a derived rubric for each task. Basically, let’s use QCAI to give kids consistent feedback on their growth throughout the year, but let’s also know exactly what we’re looking for in any given task before we start trying to rate them. Here’s how we make it, and what it looks like in action.
Sarah Haden, a 7th grade math teacher I work with, sent me the following performance task:
Vanessa and Angela are hosting a birthday party and making two different kinds of hot wings. Vanessa is making regular buffalo wings and Angela is making honey barbeque wings. Vanessa paid $10.50 for 2 ½ pounds of wings. Angela paid $12.20 for 3 1/3 pounds of wings.
Vanessa argued that since she paid less than Angela did for almost as many wings, she got the better deal. Is Vanessa correct? Explain how you know who got the better deal.
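For reference, a quick sketch of the unit-rate arithmetic the task calls for (the variable names below are mine, not part of the task):

```python
# Unit rate = total price / pounds of wings
vanessa_rate = 10.50 / 2.5        # 2 1/2 pounds -> $4.20 per pound
angela_rate = 12.20 / (10 / 3)    # 3 1/3 pounds -> $3.66 per pound

print(f"Vanessa: ${vanessa_rate:.2f}/lb")
print(f"Angela:  ${angela_rate:.2f}/lb")

# Angela's wings cost less per pound, so Vanessa's claim is incorrect.
```

Note that the per-pound comparison runs opposite to Vanessa’s intuition about total price, which is exactly what makes this a good task for explaining and defending reasoning.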
To derive a rubric for this task, we take each rubric row of the QCAI and consider the context of this task.
Math Knowledge (MK) – I think of this as the “destination” when you use your GPS. Did we get to where we wanted to go? Essentially, we are talking about whether we got a correct answer and knew what mathematically was going on with this problem. The highest rating for MK reads:
Shows understanding of the problem’s mathematical concepts and principles; uses appropriate mathematical terminology and notations; and executes algorithms completely and correctly.
For Sarah’s task, we began by considering the first two sentences. In this task, we want students to understand the mathematical concept of unit rate and how it can be used to compare ratios, so we derived that we’d like the following: Arithmetic and written explanation shows understanding of how unit rate can be used to compare ratios (2 points)
We then looked at the last sentences and considered what sort of algorithms students should be correctly executing in this problem, and derived that the students should “Identify the correct comparative prices for each individual without error” (2 points).
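To make the bookkeeping concrete, here’s one way the derived MK row could be captured as data (a sketch only – the structure and names are mine, not part of the QCAI instrument):

```python
# Illustrative structure for a derived rubric row (names are hypothetical).
derived_rubric = {
    "MK": [  # Math Knowledge: the "destination"
        ("Arithmetic and written explanation shows understanding of "
         "how unit rate can be used to compare ratios", 2),
        ("Identify the correct comparative prices for each individual "
         "without error", 2),
    ],
}

# Maximum score available on the MK row:
mk_max = sum(points for _, points in derived_rubric["MK"])
print(mk_max)  # 4
```

Writing the criteria down this explicitly is the point: the rater knows before scoring exactly what each point on the row is awarded for.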
We repeated this process for Strategic Knowledge (SK), which I think of as “the path you took to get to your destination,” and Communication (C), or “how clearly you explain the trip that took you to your destination.” Here’s where we started and where we ended up:
Creating this derived rubric really helps us whizz through our feedback since we now know exactly what we’re looking for, but still allows us to give students feedback in the context of the generalized rubric that they will see in subsequent tasks. It also really helps with providing written feedback, since we know exactly what students’ strengths and areas of growth look like in the context of this specific task. Consider this student’s response (note that Sarah’s original task had an error switching Vanessa’s and Angela’s names in the question – I believe she addressed this in class, because students had crossed out and corrected the two names in many of the student samples she sent my way):
Here’s the feedback I suggested for Sarah using the derived rubric and the “A cubed” feedback approach:
- MK: 1.5 (Process to get the first unit rate was sound, but had an arithmetic error of 2 1/2 x ⅖ = 41/40. Did not identify a price for Angela’s ratio, and the explanation seemed to talk about the total amount of hot wings Vanessa had rather than how they compare proportionally.)
- SK: 2 (Set up one ratio correctly – OMG I love how [student] changed 10.50 into 10 ½! – but did not use those answers to determine a better deal.)
- C: 2 (The sentence starts strong but then runs on – two “because” statements. She does, though, frame it around who had the better deal, but does not connect it explicitly to any of the work she did.)
- Quick A cubed (Affirm, Ask, Advance) feedback I might leave [student]: I love how you set up the ratio for Vanessa’s wings price, and used that to find a unit rate! I don’t see a similar ratio for Angela – how might finding a unit rate for Angela’s wings price help you compare her with Vanessa?
If we were just grading this student on “correctness,” they’d probably be staring at a “0%” right now. Instead, even though this student really struggled with the concept of unit rate, they are able to be affirmed for the effort they put in, get feedback on that effort, and have a clear next step both for this problem and subsequent performance tasks. For this problem, the student should go back and consider Angela’s wing price. For future tasks, this student could look at any of the rubric rows and see what would take her or him to the next level.
The less that we have to rely on seemingly arbitrary and competitive grading schemes, the more that students can focus on their work rather than their rank. That’s the goal as we push for more focus on learning orientation in our math classrooms. Every student can and should be able to feel like each day in math class is their own “Calvin Day,” not just because of where they are, but because of how far they’ve come and where they can go from here. That feat depends on intentionality of feedback, and that responsibility is squarely on our shoulders.