Here’s Why It Hasn’t
For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked “Data Days,” when teachers are expected to spend time not on teaching but on analyzing data: end-of-year and mid-year exams, interim assessments, science and social studies tests, teacher-created and computer-adaptive tests, surveys, and attendance and behavior notes. It’s been this way for more than 30 years, and it’s time to try a different approach.
The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, more testing, and more analysis, and for leaders and policymakers the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease.
How We Got Here
In 2001, Congress adopted No Child Left Behind, key legislation that mandated annual testing and led to data-based decision making for schools. That was the same year I started teaching. When I joined a charter school in Washington, DC, the school had recently expanded. It had a fabulously charismatic CEO with an inspiring life story. All its students completed internships and all the seniors wrote theses about public policy. The best of these made for great stories, to be told to donors and the charter oversight board. But the data — standardized tests required by the new law — revealed that our students, overall, struggled to read and do math anywhere near grade level. The graduation rate stunk.
The new data meant that we could no longer ignore most students’ reality: Our teachers were failing. As Michelle Rhee, former chancellor of the District of Columbia Public Schools, said, “When we took control of this school district in 2007, 8 percent of the 8th graders were operating on grade level in mathematics—8 percent. And if you would have looked at the performance evaluations of the adults in the system at the same time, you would have seen that 95 percent of them were being rated as doing a good job. How can you possibly have a system where the vast majority of adults are running around thinking, ‘I’m doing an excellent job,’ when what we’re producing for kids is 8 percent success?”
One of Michelle Rhee’s core values for the public school system was “Our decisions at all levels must be guided by robust data.” (I worked for Rhee in 2009-10, and I was a total believer.) This gospel spread throughout K-12 education. Under Barack Obama, the federal Race to the Top program demanded measurement of teacher impact as part of evaluations. Teachers got used to setting SMART goals for their lessons (M for Measurable!) and putting up data walls in their classrooms. A guide for principals mandated goal-setting, with the proviso that “each target must be quantifiable…you and your school will be most successful if you can justify a goal and target with hard data.” Another popular book for principals is called, simply, Driven by Data.
By the time I became principal of a middle and high school, the data bug had so thoroughly infiltrated our practice that we effectively shut down all non-test-related activities for six days in the spring for state testing. Earlier in the year, we had six other days of testing to judge where students began in reading and math, and how they were progressing according to nationwide norms. We spent the equivalent of a full day of teacher professional development teaching teachers how to give the tests and avoid the appearance of cheating. An assistant principal, along with an assessment manager, devoted the equivalent of almost two months to attending required trainings, creating testing plans, and completing forms and spreadsheets related to the state testing.
We’ve slid from a reasonable, necessary, straightforward question — are the students learning? — to the current state of education leadership, in which school leaders and policymakers expect too much of data, over-test students to the detriment of learning itself, and get lost in their abundance of numbers.
Leading through Data
The leadership decision at stake is how much data to collect. I’ve heard variations on “In God we trust; all others bring data” at any number of conferences and beginning-of-school-year speeches. But the mantra “we believe in data” is really shorthand for “we believe our actions should be informed by the best available data.” In education, that mostly means testing. In other fields the data and the processes differ, but the issue is the same. The key question is not “Will the data be useful?” (of course it can be) or “Will the data be interesting?” (yes, again). The proper question for leaders to ask is: Will the data help us make decisions enough better to be worth the cost of collecting and using it? So far, the answer is no.
Nationwide data suggests that the growth of data-driven schooling hasn’t worked even by its own lights. Harvard professor Daniel Koretz says “The best estimate is that test-based accountability may have produced modest gains in elementary-school mathematics but no appreciable gains in either reading or high-school mathematics — even though reading and mathematics have been its primary focus.”
We wanted data to help us get past the problem of too many students learning too little, but it turns out that data is an insufficient, even misleading, answer. It’s possible that all we’ve learned from our hyper-focus on data is that better instruction won’t come from more detailed information, but from changing what people do.
What We Do Next
Data is often incredibly useful. In this article, I’ve used data on student performance, on teacher prowess, and from student surveys about their experience. And I’ll be the first to admit how much I relied on it as a principal. “Are the students learning?” is still the most important question, and it can’t be answered without looking at the results. But looking ever closer, and ever more often, won’t make the students learn more. And trying to turn teachers into data analysts instead of helping them become better teachers is a recipe for disaster.
So what do we do instead?
A much more straightforward approach is to change what we have control over: the quality of teaching in our schools. In every driven-by-data guide to instruction, the last step, after all the analysis, is to teach to where the student gaps are. The sad truth is that far too much teaching isn’t what the research says is most effective. The American Federation of Teachers, for instance, recommends 10 research-based instructional strategies all teachers should use. But in my own experience across hundreds of classrooms, very few teachers “ask a large number of questions and check the responses of all students.” More often, they get the right answer from one student and move on. The same is true of the other strategies on the list.
Better instruction won’t come from more detailed information, but from changing what people do. That’s what data-driven reform is meant to do, of course: convince teachers of the need to change and focus their attention where change is needed. But actually changing is the hard part – and the only important one. Don’t try to turn teachers into data analysts; try, instead, to help them be better teachers.
Simon Rodberg was the founding principal of the District of Columbia International School, a public charter middle and high school. His writing has appeared in Educational Leadership and Principal magazine, and he is at work on a book about school leadership.