One notable gap in my curricular resume is any sort of performance work based on real-life data. Like most math teachers who work with struggling populations, I think performance task projects waste hours of time for limited learning outcomes. So how could I create a performance-based task that didn't waste time, that had a number of easily achievable goals, and that used real-life data to yield observations worth the time and effort?
What I Borrowed
I got the germ of an idea from Dan Meyer, particularly this picture:
Meyer surveyed about 100 people with some metadata categories and loaded all this information into an Excel spreadsheet. His students generated a list of questions:
Paired off with laptops, the students downloaded the data and were given Excel to analyze it. Meyer looked for the students who didn’t know how to use Excel (manually sorting data, manually calculating differences, and so on).
Meyer’s primary teaching goal was data analysis and tech transfer. I was more interested in getting my kids to think about data organization and description.
What I Changed
Collect the data myself? Nuts to that. I made the kids do the surveying. I created a Survey Monkey poll, which I wouldn’t do again–too much hassle, and even though all the kids have smart phones, they were much happier just filling out a paper form. I ended up doing an hour of data entry. Nothing terrible.
Thirty kids were assigned to collect ten guesses each. I was hoping for at least a hundred responses, got two hundred.
I have traditionally given pretty short shrift to absolute value, because the formula (split it up into two equations/inequalities, solve) has relatively little to do with the concept itself. So I give them some practice but don't focus much on it. As with performance work, one of my goals for the year was beefing up my absolute value instruction, to give kids a good sense of what absolute value is.
I redesigned my AV unit introduction, which I really should write up. It went well, and over the next week the kids did well with the algorithms. It’s still difficult to keep both the concepts and the algorithms unified; they aren’t intuitive and I don’t yet have a meaningful model for a multi-step absolute value equation or inequality. But I’ve been pleased with my additional curriculum work this year and have a good basis to build on.
The Halloween unit had two objectives. First, it would reveal another use of the concept—or, if nothing else, reveal why contestants on The Price is Right have to come the closest without going over, since Bob Barker deals with retail price, not absolute value. But a second, linked learning objective was the meaning of central tendencies. Half the class is sophomores, so a little refresher course on mode, median, mean, and range wouldn't hurt a few months before the CAHSEE.
Doing an about face from the Meyer lesson, I eschewed all technology short of calculators. I mean, seriously, who wants to spend the time getting all the students up and running on Excel? Besides, I wanted the kids to think about ordering the data, finding the mode, literally counting to find the median. I like the occasional tactile exercise and it doesn't hurt to remind kids that way, way back in the day, computer programmers had to write their own data sorting routines, from bubble sorts on up. Sneer if you like at menial tasks, but I'm not a big fan of just pushing buttons. No Excel utilities until you can do the same tasks manually, dammit.
I had eight groups of four students, so I broke the list up into groups of twenty five. To ensure a random sampling, I added a new column, filled it with randomly generated numbers, and then sorted the responses by the random values. (Yes, I used the Excel utilities. I can do the same tasks manually, dammit.)
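For what it's worth, the randomize-and-deal trick is easy to sketch outside Excel, too. Here's a minimal Python version, with made-up guesses standing in for the lost list:

```python
import random

# Made-up guesses standing in for the 200 survey responses
# (the original list wasn't saved).
guesses = [random.randint(115, 9999) for _ in range(200)]

# The Excel trick: pair each response with a random key,
# sort by the key, then deal the shuffled list into eight
# groups of twenty-five.
keyed = sorted((random.random(), g) for g in guesses)
shuffled = [g for _, g in keyed]
groups = [shuffled[i * 25:(i + 1) * 25] for i in range(8)]
```

Sorting by a column of random keys is exactly what a shuffle is, which is why the spreadsheet version works.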
So each group of four got a list of twenty five numbers something like this (not an actual list, which I didn’t save. Argggh.):
I wrote the following instructions on the board:
I described “measures of central tendency”, and prompted the class for the definitions of mean, median, and mode. I pointed out that task 1 was pretty straightforward, that task 2 required some thought and (hint hint) required a concept they’d been working with recently.
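The by-hand versions of those measures are simple enough to write down. Here's a Python sketch of what each group was doing on its whiteboard (a hypothetical helper, not anything the class used):

```python
def central_tendencies(values):
    """Mean, median, mode, and range computed 'by hand':
    order the list, count to the middle, tally repeats."""
    ordered = sorted(values)
    n = len(ordered)

    mean = sum(ordered) / n

    # Median: count to the middle of the ordered list;
    # average the two middle values when n is even.
    mid = n // 2
    median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

    # Mode: tally each value, keep the most frequent.
    counts = {}
    for v in ordered:
        counts[v] = counts.get(v, 0) + 1
    mode = max(counts, key=counts.get)

    return mean, median, mode, ordered[-1] - ordered[0]
```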
Each group was given their own mid-sized whiteboard, which they could lay across their desks or lean against the wall. One group used a full-size whiteboard.
And off they went.
This group had a whiteboard leaning against the wall, despite my exhortations, and were working in pairs because of spatial constraints. So I made them put the desks in a tight group instead of an L, which you can see in the background of this group:
This last guy wasn’t working alone; his teammates just didn’t make it in the picture. Here’s their final product:
Once two groups of four students finished, I paired them up to join forces as a new group of eight and find the information for the list of fifty numbers.
Here’s a sample of boardwork from group 4, a group of girls who typically struggle a bit and had a tough time figuring out the difference between ordering by number and ordering by difference from the winning amount. But they finally had their “aha” moment when they were placing the number 1500.
Four of the original eight groups—by and large, the strongest kids—struggled a bit with the organization of data and the final task of finding the absolute value. They all finished in time to form two new groups compiling fifty numbers.
The other four groups, including group 4 above, finished their first analysis quickly, paired up to produce new lists of fifty and were done with twenty minutes to spare.
Then I said, "Now, guys, put it all together," and they asked me if I was kidding.
Nope. And thus was born the Mega-Group. (I did let them off the hook for median.) Fifteen kids putting all the data together, and I missed getting a picture of it. Sigh. There wasn’t one board with all the data on it, but here’s the final working board I used to talk to the class, with some markup:
I really need to make sure my boardwork is understandable after the fact. I know the kids get it at the time, but it’s a bit like architectural layers.
In brown–two groups’ work. The other eight’s work is on a board to the right. In green–the kids rework their average by finding the sums for the two averages and then adding them to find the total, which they then divide by 100. (They were greatly cheered to discover that they didn’t have to add up all the terms one more time.) In blue–the mean, mode, and range for the hundred numbers.
Then I used the board as the discussion pointer for the whole class. To the right, with a red square around it (on the whiteboard, not my markup), the overall mean, mode, and median for the entire set. In darker brown outline, my additional instructions on using the average formula to recreate the sum. I wasn’t teaching so much as demonstrating a concept (it was about a week to the PSAT) to show how the mega-group had been able to recalculate their averages without going through all the work of adding up the numbers individually.
See? If they’d been working in Excel, they never would have learned a useful algebra short cut, the explanation of which is difficult to see in my admittedly disorganized boardwork, so here you go:
So where’s the “real life”?
Time worked out perfectly; the kids came to a natural stopping point with fifteen minutes left in class. I could have done an entire class on the data discussion, but it’s an algebra II class, not stats. What I wanted was some “aha!” moments, as the kids realized what the data revealed about the contestants. Here’s the unmarked up discussion board:
But then we looked at all the groupwork data, looking for patterns. In almost every case, the median value was somewhere in the high 600s, low 700s. The mode of 480 quickly emerged as the groups combined. Averages consistently dropped as the groups merged.
I put up a picture of the original poster and suddenly I’m looking at a sea of faces that get it.
“So what relationship does the mode of 480 have to the picture you used to collect data?”
“It’s four times the sample size!” said Brad.
“Which means, Tracy?”
“People thought that the jack o’lantern was four times as big as the owl jar.”
“More people guessed exactly four times than any other guess.”
“There you go. So how is that different from the median? Remember, the median reflects the middle value. But if I look around the room, I see that all the groups had middle values in the 600s or so. What does that mean, Mark?”
“That’s 6 times, right?”
“Good. But exactly six times would be 690, and I don’t see 690 as a mode, surprisingly.”
“Yeah, it’s like people said it was about six times, and then, you know, went up or down.”
“Oh, so the people who thought it was 6 times were more likely to estimate, figure ‘a little over or under 690’. So you get a lot of numbers in the high 600s, low 700s. But the people who guessed four times….”
Kevin jumped in, “It’s like the people who guessed four times as big were…not as realistic about what the numbers could be?”
“Oh, interesting. So someone who guessed four times might not have had as much experience thinking about weird questions like how many teeny tiny Lemonheads would fit in and around, so they went with a straight 4:1. The people who guessed six times as much were more versed in estimates, and fudged.”
“Yeah, but it wasn’t six times. It’s more like eight times bigger.”
“Which brings up the biggest point of all, for me! I was really worried. I thought everyone would ballpark a thousand Lemonheads and we’d have 20 people tied for the win. Instead, just three people guessed a thousand, and they tied for second. One of them is in this room–yay, George!”(George is in purple on the left in the first picture.)
Jose said, “Some people guessed really, really high. Did that change the average?”
“Great question. Look around at all the numbers. What’s the lowest you see?”
“We had it,” Manuel said. “115–someone guessed the big and little jars had the same amount, which is crazy.”
“Exactly. So the lowest number was about one thousand less than the actual count, and it was totally ridiculous. What about the highest?”
“9999!” everyone chorused.
“That’s nearly NINE THOUSAND more. So the biggest guess is nine times as far off as the lowest guess.”
“We had a three thousand guess.”
“So did we.”
“Those kinds of guesses are called outliers, data that’s significantly off the average. Many times, statisticians will discard this data to see if it is distorting the outcome. Here’s a question—if you get rid of data that is really large or really small, which measure is it most likely to affect, if any: the mean, the median, or the mode?”
“Not the mode,” Khan said with certainty.
“Right. Unless the outlier data was a repeated number—like, maybe someone bored just kept typing in 9999—the mode is unlikely to be affected.”
“It might change the median, but not a lot.”
“Not if your center is robust, right. And if the center isn’t robust–if taking a few numbers off the top or bottom really changes the center, then maybe the data aren’t outliers.”
“It could really change the mean.”
“Yep. Since the average uses all the numbers in a sum, removing really big ones—even though we’re also reducing the number of terms–can have an impact on the mean. I checked that out, though.”
“Since a few numbers skewed really high, I decided I’d pull out the guesses that were greater than 1.5 times the actual count. That removed six guesses: 9999, 6000, 4000, 3000, 3000, 3000. Those six guesses comprised nearly 17% of the total sum that made up the average.”
“Just six numbers?”
“Yep. Just 3% of the guesses made up 17% of the total. So what would removing them do to the average? I would be dividing by 194 instead of 200, remember.”
“Make the average smaller.”
“Right. How much smaller?”
“Before you answer, I want you all to think about how much your GPA changes just because you get an A instead of a B. A lot or a little?”
“A little!” Jun complained.
“Right. Averages are pretty robust—they don’t change easily. On the other hand, this is a whole bunch of data. So I’ll put it this way. If I remove the big numbers, do you think the average would be higher than the median, or lower?”
“Higher,” said Dante, with certainty.
“Good. How much higher?”
“Not bad. The new average is 744.”
“You took out 9000 and 6000 and the average only went down one hundred and twenty some points?” asked Raman.
“That’s a lot, actually. It’s proof that the big guesses were skewing the mean. But either mean–867 or 744—tells us what about the people guessing?”
“They guessed low.”
“Most people guessed low. Once we remove the outliers, we see that the average guess was just over six times the number in the hint, when the actual amount was nine times. So each central tendency–the mean, the median, and the mode—gives us an insight into our guessers and their decisions.”
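If you want to check the arithmetic, here's the outlier removal in a few lines of Python, reconstructing the total from the average (same trick as on the board; the 173,400 total is recovered from the reported mean, not a number I counted):

```python
n, mean = 200, 867
outliers = [9999, 6000, 4000, 3000, 3000, 3000]

total = mean * n                    # sum recovered from the average
trimmed = (total - sum(outliers)) / (n - len(outliers))
share = sum(outliers) / total       # the outliers' slice of the sum

print(round(trimmed))        # 744
print(round(share * 100))    # 17 (percent)
```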
The kids could see actual human behavior in each description and yes, my performance task skeptic brethren, it did matter that the behavior came from data my students collected themselves.
Designing lessons like this is, I think, one of the great thrills of teaching. It’s not enough to define a task. You have to give careful thought to how you will use the information. Most teachers err by giving the students too much responsibility—discuss the data in groups, and decide what it reveals. But most kids, given this information, would never have come up with the same insights. They’d have just shrugged and given up. You have to give them plenty of success, plenty of faith, before you give them analysis, and even then you need to guide your strugglers.
I read lots of lessons in which the teachers say “Most kids gave up, but my top kids were able to figure out a couple of the tasks.” Or they group kids heterogeneously and the weaker kids don’t do much work except watch the stronger ones. In this case, my strugglers got the charge of being in the top performance group on their own merits.
I don’t mind struggle, but I want the struggle to either pay off for 90% of the class OR be a brief struggle for my weaker students while my top students get a challenge. But far too often, I see or hear of lessons in which the teacher ends up going from group to group or pair to pair explaining what the kids were supposed to figure out. I’d rather do that in a class discussion, or let kids move things along with my guidance. That way I know all the kids are paying attention, rather than talking about the Homecoming rally.
Two people guessed 1035—off just by one. A student’s father won half the candy, but a sophomore in Algebra II/Trig got the other half, along with a cute trick or treat pillowcase. Three runners up guessed 1000, and got a pack of Halloween minipencils.
The learning objectives were met. The class discussion involved the central tendencies, but all the groups quickly realized the relevance of absolute value, and every single group mentioned game shows.
I didn’t put on any weight, despite sneaking an ungodly number of Lemonheads while filling the jack o’lantern. Counting makes me hungry.
So all ended well.