Over the past decade, most states have sharpened their focus on teacher performance, adopting more formal requirements and measures for evaluating educators. In Massachusetts, for example, a revised educator evaluation framework took effect in 2011, and Maine began implementing a new educator evaluation framework earlier this year.
But developing strong evaluation systems poses many challenges for districts and states, which must weigh questions such as how to involve principals, how to integrate student data, and how a school's climate and culture might affect implementation.
As a result, educational leaders still struggle to figure out how to build meaningful educator evaluation systems that promote professional growth and raise student achievement.
To explore these and related questions, a diverse group of professionals, including administrators, educators, and evaluation experts, gathered at Implementation of Educator Evaluation Systems: Examining Problems of Practice, a recent event co-hosted by EDC’s Regional Educational Laboratory Northeast and Islands (REL-NEI) and Harvard University’s National Center for Teacher Effectiveness. Participants discussed how they could work together to foster the conditions needed to build practical systems that work for both teachers and students.
EDC’s Karen Shakman, a researcher with REL-NEI’s Northeast Educator Effectiveness Research Alliance (NEERA), organized the event. She believes that bringing researchers and practitioners together fosters dialogue and promotes the positive school environment needed for these systems to work.
“We want researchers to think critically about how their findings relate to what state and district leaders need to know,” she says. “We’re also really trying to open the lines of communication between researchers and practitioners, and to help them begin to build alliances.”
According to Jill Conrad, senior advisor for human capital strategy at the Boston Public Schools, the event met all of those goals.
“The conference brought me up to date on some recent studies and how research is being used to change the part that really matters, which is instruction,” she says.
She adds that the field has advanced significantly in a short time, with a greater focus on results. “It was refreshing to have a focus on implementation and design issues with implications for policy,” she says.
Liz Hoyt, from the Rodel Foundation in Delaware, says that the discussions helped her understand the practices needed to create better educator evaluation systems.
“We’re all looking to improve the way educator evaluation systems work,” she says. “This was an opportunity to share experiences and to spread best practices.”