Education Guide

Course Content


With solutions-oriented reporting, the need for solid evidence is central. Without evidence that an innovation works, shows promise, or falls short, a solutions story risks becoming a flimsy puff piece that provides little value to audiences and can undermine the credibility of a journalist and their news organization. Fortunately for education reporters, there is a wealth of public data, academic research, and other resources to tap when examining the effectiveness of a potential solution or, for that matter, the underlying causes of the problem.

Almost every idea in education claims to be evidence-based, so reporters should approach such claims with skepticism. A good primer is Daniel Willingham’s book 'When Can You Trust the Experts?: How to Tell Good Science from Bad in Education.' I’d also recommend 'Covering Medical Research: A Guide for Reporting on Studies' by Gary Schwitzer, publisher of HealthNewsReview.org.

John Higgins
The Seattle Times

But be wary: The factors that can lead to student success are complex, and the ways that societal factors, individual and school characteristics, and local conditions interact to contribute to both problems and solutions can be difficult to disentangle. In addition, one year is rarely enough time to assess the success of an intervention, since the effects of education can play out, in various ways, over a lifetime.

Even approaches that seem like obvious successes need to be investigated. So, when considering evidence, ask:

  • What other changes have emerged at the school since the program was introduced, and could they also have contributed to improvements in test scores and school culture?
  • Is there a particularly dynamic principal or teacher leading the changes?
  • Could the demographic or academic background of the students be playing a role, and would the program’s success be replicable with different types of students?
  • Are scores rising for all students, or just subsets?
  • How big was the sample of students in the study, and how representative was it?
  • How much does the program cost?

Using Data


It can be easier to feel like a story is important if you are exposing problems. It is more challenging, though equally important, to tell a compelling story about something that’s working. I try to keep my eyes open for those stories, while maintaining a healthy skepticism. Solutions-oriented reporting should in some ways resemble investigative reporting, with as many data points and sources interrogated as possible.

Meredith Kolodner
The Hechinger Report

State education department websites are often the best place to start for K-12 data, including test scores, attendance rates, student-teacher ratios, graduation and dropout rates, discipline rates, demographics, teacher turnover rates, and per-pupil spending. These figures are usually accessible at the school and district level, with state averages providing a point of reference. Even with basic Excel skills, reporters can begin answering questions about which schools are posting the most intriguing results and where the largest improvements, or drops, have occurred. By comparing two variables with a well-established relationship, such as the percentage of low-income students at a school and that school’s dropout rate, reporters can find “bright spots”: schools that aren’t necessarily top performers, but that are doing better than would be expected.
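The “bright spot” comparison can also be sketched in a few lines of Python. The school names, column names, and figures below are invented for illustration; a real analysis would load the state education department’s own spreadsheet:

```python
import numpy as np
import pandas as pd

# Hypothetical school-level data; in practice this would come from a
# state education department download (all names and numbers invented).
schools = pd.DataFrame({
    "school": ["Adams", "Baker", "Cedar", "Dunbar", "Elm"],
    "pct_low_income": [85, 40, 70, 90, 55],
    "dropout_rate": [12.0, 4.0, 6.0, 20.0, 7.0],
})

# Fit a simple linear trend: dropout rate as predicted by poverty level.
slope, intercept = np.polyfit(schools["pct_low_income"],
                              schools["dropout_rate"], 1)
schools["expected"] = intercept + slope * schools["pct_low_income"]
schools["residual"] = schools["dropout_rate"] - schools["expected"]

# "Bright spots" have the most negative residuals: dropout rates well
# below what their share of low-income students would predict.
bright = schools.sort_values("residual").head(2)
print(bright[["school", "dropout_rate", "expected", "residual"]])
```

A school near the top of that list isn’t proof of anything by itself, but it tells a reporter where to start asking the questions listed earlier.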

In addition, some high schools collect and publish data on the college-going rates and postsecondary plans of their students, while others may share data from benchmark tests or other interim assessments. Many states publish reports that link college attendance and performance back to individual high schools. Federal surveys and datasets, such as the National Assessment of Educational Progress, an achievement test known as the Nation’s Report Card, can also provide useful statistics.

Getting Inside Test Scores


In education, you don’t always have the evidence that you’d like, and over-relying on some pieces of evidence can be problematic. In my story about parent involvement in Chicago’s Logan Square neighborhood, a lot of the indicators are qualitative. They didn’t have what you’d ideally want: data that attributes student performance increases to parents in the classroom. Test scores are going up, but we can’t claim a direct link. But in my view, the problem the program is trying to solve is a lack of parent involvement in schools, and on that count, they’ve trained 1,800 parents in 20 years. The perspectives of teachers and principals I spoke with backed that up: the school had surveyed those people to see if the program was valuable, and the results were consistent.

Linda Shaw
The Seattle Times

One of the best pieces of data we have to see if a particular program is working is also the most hazardous: test scores. At the most simplistic level, standardized tests offer a way to be consistent in comparing academic performance of schools and districts.

However, the reliability and quality of these tests is often questioned. Does a snapshot of student performance from a single day truly capture how well all students know the material, or does it only reflect that some are better test takers than others? Are test results affected by the extent to which different teachers prepare their students for the test using test-taking strategies and practice exams? And just as the students who take the test change each year, the tests themselves also often change, which can make long-term comparisons impossible. For instance, in the spring of 2015, more than 40 states administered brand-new exams aligned to the Common Core State Standards, whose results are not comparable to past state tests.

The solution isn’t to ignore test scores, but to examine additional proof points: test scores plus, for example, attendance rates, parent survey results, a research study, interviews and observations.

Research Studies


One of the great frustrations of this beat is that education studies always seem to conflict, kind of like the news that spinach is good for you. No it’s not. Yes it is... It seems like for every study that comes out on charters or vouchers or whatever, another study contradicts it. Consider the source and funder, as always.

Leslie Brody
The Wall Street Journal

There is a wealth of research studies in the field of education. And as with data, journalists should proceed with care: Watch for weaknesses in research design and be clear about caveats with readers. In assessing those studies, you might ask: How big is the sample size? (A national poll, for example, typically samples about 1,000 people, which yields a margin of error of roughly 3 percentage points at a 95 percent confidence level.) And is the study observational or experimental?
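The 1,000-person rule of thumb comes from simple margin-of-error arithmetic. A minimal sketch, assuming a simple random sample and the conservative worst case of a 50/50 split:

```python
import math

def margin_of_error(n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    assuming simple random sampling and worst-case p = 0.5."""
    return z * math.sqrt(0.25 / n)

# A sample of 1,000 gives a margin of error of about 3.1 percentage points.
print(round(margin_of_error(1000) * 100, 1))  # -> 3.1
```

Note that halving the margin of error requires roughly quadrupling the sample, which is one reason national polls settle around 1,000 respondents.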

It’s important to consider the study’s research question. A good study will state a clearly defined purpose, research goals, and background research underlying the question at hand.

It’s also important to be aware of and understand conflicts over research design and findings. One well-known example is the debate over how to measure the effectiveness of charter schools. Two major studies found significantly different results. One compared only students who applied to oversubscribed charter schools, measuring the achievement of those who got in against those who didn’t, and found that charters raised student achievement. The other compared students in charters to similar students in regular public schools, and found that they mostly performed similarly or worse. The different methodologies employed by the researchers became part of the story. Similarly, broad-based studies, such as those conducted by the government to evaluate major programs like Head Start or federally funded after-school services, often mask a great deal of variation across programs.

Especially if you’re reporting on the findings of a single study, try to get a second opinion or note previous, conflicting research. And always keep in mind possible biases:

  • Who funds the organization or researcher?
  • Who runs the organization?
  • Do they have an obvious political orientation or affiliation?
  • Why are they conducting this research?