I’m on the train home from a very interesting conference, “Using Data to Make Thoughtful Decisions About Schools” at the ridiculously gorgeous (especially on a sunny late fall day) St. Andrew’s School in Delaware. (In a synchronistic link to a previous post here, St. Andrew’s was a prime filming location for Dead Poets Society, though I only heard a couple of participants allude to this.)
The event was created under the aegis of St. Andrew’s and the people behind the College and Work Readiness Assessment (CWRA), of which I have written before. Most of the several dozen participating schools have had some experience with the CWRA, between one and seven or more years, making the instrument itself practically venerable and, of course, generating lots of data for both the CWRA’s creators, the Council for Aid to Education, and the individual schools that have used it.
Much of our time was spent thinking about how we might drill down (ah! Corporate-speak!) into our schools’ data to figure out what we’re actually accomplishing by way of educating our students and then how we might use this data to show us how to do this even more efficaciously.
Some users—the Virginia Beach public school system and the amazing New Tech Network schools—have made excellent use of their accumulating CWRA data. They are helped along by their size and inclination, which let them actually deploy data experts to dig into the material, and by the fact that they never need to wonder whether their sample sizes are large enough to generate statistically significant results—an issue for me at my school, and I suspect for others among the CWRA’s independent school users.
In the end we each generated a “research question” that would be important in our schools and about which we then assembled a little presentation to make the pitch to the appropriate audience—usually faculties or administrative bodies. This was a great little exercise that now has me vowing to master the intricacies of standard deviations and correlation coefficients, calculations whose purpose I now understand a bit more clearly than before.
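For anyone else vowing to get a grip on those two calculations, they can be sketched in a few lines of Python. The scores below are invented for illustration, not real CWRA results; the standard deviation measures how spread out a cohort’s scores are, and the Pearson correlation coefficient measures whether students who score high on one administration tend to score high on another.

```python
import statistics

# Hypothetical scores for one small cohort, tested twice (invented numbers)
scores_year1 = [1050, 1120, 980, 1200, 1100, 1010]
scores_year2 = [1100, 1150, 1000, 1260, 1130, 1080]

# Standard deviation: typical distance of a score from the cohort mean
spread = statistics.stdev(scores_year1)

def pearson(xs, ys):
    """Pearson correlation coefficient: +1 = move together perfectly,
    0 = no linear relationship, -1 = move in opposite directions."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(scores_year1, scores_year2)
print(f"spread: {spread:.1f}, year-over-year correlation: {r:.2f}")
```

For these made-up numbers the correlation comes out close to 1, since the second year’s scores largely track the first’s; a small sample like this is exactly why statistical significance is a worry for small schools.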
About half of us got to road-test our presentations, which of course led to questions. These, in turn, led me to posit my own statistical theorem, which is: The more data you have, the more questions it will generate. It’s not far-fetched in my mind to imagine each little manipulation yielding results that only demand more data and manipulation, in a process that seems more akin to the mathematics of fractals than to simple arithmetic, where input plus input (or minus, or times, or to the power of) yields an answer, Q.E.D.
The corollary question that follows from this—I’m feeling all mathematical this evening, rocking along on the Amtrak to Boston—is whether there’s ever a final question, the Ultimate Question, the question whose answer lets you know that you are done. The answer is probably no.
I’m indebted for the clarifying response, as I often am these days, to my friend Jonathan Martin, who has been swimming in assessment data of various kinds rather seriously over the past few years and with whom I first spoke, on this very topic, when I was doing some research many years ago for the National Association of Independent Schools’ Commission on Accreditation regarding new assessment instruments and schools’ preparedness for using them.
Jonathan’s point was, well, to the point: “Each new question is probably a better question, and so these chains of questions will keep taking you closer to more and more important things.” It seems to be a kind of statistical Zeno’s Paradox, where you will always be coming closer yet never arrive. But as you do get ever closer, the lights of the destination will provide more and more illumination.
I don’t suppose this is anything very original, but I felt it keenly at St. Andrew’s. I’m generally a kind of Big Data skeptic, mostly because I have no idea of or control over the questions the government and business wonks are asking about data that involves me and my family. No doubt they’re stoked to keep mining and massaging this information until their corporate masters can call the tune on how we will spend all of our cash and their government counterparts can track our every move, word, and thought, all to some end that even the editors of Wired can only see through Google glass, darkly—but that’s a bit bleak, now, isn’t it? Blame it on Amtrak and what looks very like a full moon rising orangely over Orange, Connecticut.
But as an educator I’m pretty okay with grabbing some critical data and then asking questions, a lot of them, even if it feels like opening up an infinite matryoshka doll. I’ve written elsewhere, also in connection with the CWRA’s utility, of independent schools’ penchant for letting lore stand in for actual data, and I’m not a fan of this practice.
We just have to be honest about the questions we ask and most of all resolve to face squarely the answers we discover, even when they do not please us or affirm the wonders we believe we are accomplishing. But the CWRA, like any smart assessment and any good set of questions, is a fine tool for starting this process, and I had a fine old time at this conference for that very reason.