A native of Wisconsin, Jill Weber has seen educational policy from many perspectives.
After beginning her career as a classroom teacher, Weber became interested in the intersection of education and technology and earned a graduate degree in education from Harvard University. She eventually made the leap from teaching to supporting teachers through professional development programs.
“In the program that I ran, we worked with state departments and school districts to help them integrate technology,” she says. “But there was very little research to indicate whether computers were the panacea everybody thought they would be. At that time, research just wasn’t at the heart of the conversation.”
Now the director of the Regional Educational Laboratory Northeast and Islands (REL-NEI), she has shifted her focus again, this time toward policy. Not surprisingly, her work has affected how she makes sense of education reform.
“Now I think a lot more about the research that education policy is based on,” she says. “Take all the current conversations about turning around schools and the place of technology in the classroom. It all comes back to that fundamental question: How do you know if something is working?”
What works in education? Since 2006, Jill Weber has been pursuing this question as director of REL-NEI. Based at EDC, REL-NEI researches some of the biggest questions facing educators today, creating a bridge between researchers, policymakers, and practitioners.
Describe REL-NEI’s work over the past six years.
Our two biggest studies were our randomized controlled trials: intensive research studies involving students in 101 schools across New England. But we also conducted dozens of research projects on issues of particular interest in our region, such as gender gaps in assessment outcomes and strategies to reduce teacher turnover.
What impact has REL-NEI had in that time?
We’ve made a real difference across our region [New England, New York, Puerto Rico, and the U.S. Virgin Islands]. In Connecticut, for example, we’ve worked with the Commissioner’s Cabinet from early on. Every time they were considering whether to move forward with a policy, they asked us to provide the research upon which they could base their decision. So we were able to advise the cabinet as they developed policies, and we observed their implementation at the district level.
Our biggest math study, Access to Algebra I, was a partnership between EDC researchers and our REL-NEI colleagues at the American Institutes for Research (AIR). The study found that online courses are an effective way to broaden access to Algebra I. But at its heart, this study was really about access and equity in learning opportunities. Small, rural schools—like many of those that participated in the study—often don’t have the resources to offer a face-to-face Algebra I course to the few kids who are ready for it. Consequently, these kids begin high school a step behind students from middle schools that do offer Algebra I. We know that one of the primary reasons administrators turn to online courses is to provide options that students would not have otherwise. It isn’t to replace a face-to-face course or teacher.
This study involved students from across New England. How did you recruit them?
Recruiting was a tough challenge, not only because of the number of students we had to reach, but also because of the types of schools we were working with. For example, we were recruiting small schools in Maine with one, two, maybe three eighth graders. In many cases, the teachers had known their students since kindergarten. And we were asking them to turn over those students’ instruction to us for the year of the study.
What’s impressive is that we had close to a 99 percent retention rate during the study. This rate is almost unheard of and really speaks to EDC’s reputation in math education.
What are some of the challenges of conducting these kinds of long-term studies?
All studies start with a research question, but by the time you finish, you often seem to have more questions than answers. For example, a study we conducted on the effect of the Thinking Reader software program on sixth graders’ reading comprehension didn’t show any impact. But the study followed only one year of classroom implementation; it didn’t focus on the experiences teachers were having as they used this new piece of software. So although the study was successful and well conducted, we know there is still more to learn about how to support teachers when they begin to use programs like Thinking Reader. These lessons might in turn influence the impact such a program can have on student outcomes.