Rob Meyer is the director of UW-Madison's Value-Added Research Center.

Rob Meyer can't help but get excited when he hears President Barack Obama talking about the need for states to start measuring whether their teachers, schools and districts are doing enough to help students succeed.

"What he's talking about is what we are doing," says Meyer, director of the University of Wisconsin-Madison's Value-Added Research Center.

If states hope to secure a piece of Obama's $4.35 billion "Race to the Top" stimulus money, they'll have to commit to using research data to evaluate student progress and the effectiveness of teachers, schools and districts.

Crunching numbers and producing statistical models that measure these things is what Meyer and his staff of 50 educators, researchers and various stakeholders do at the Value-Added Research Center, which was founded in 2004. These so-called "value-added" models of evaluation are designed to measure the contributions teachers and schools make to student academic growth. This method not only looks at standardized test results, but also uses statistical models to take into account a range of factors that might affect scores - including a student's race, English language ability, family income and parental education level.

"What the value-added model is designed to do is measure the effect and contribution of the educational unit on a student, whether it's a classroom, a team of teachers, a school or a program," says Meyer. Most other evaluation systems currently in use simply hold schools accountable for how many students at a single point in time are rated proficient on state tests.

Under Meyer's method, for example, one school might report solid standardized test scores, but receive a relatively low value-added mark because the school's demographics indicate the scores should be even higher. Conversely, another school might have middle-of-the-road standardized test results, but receive a high value-added mark because the make-up of that school's population suggests the scores should have been far worse.
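The two-school contrast above can be sketched numerically. The code below is a hypothetical, deliberately simplified illustration (the center's actual models are far more elaborate): it predicts each student's score from a prior-year score and one demographic indicator, then averages the leftover residuals by school. A school whose students outperform their predictions gets a positive value-added estimate; all data and coefficients here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 students split across two schools.
n = 200
school = np.repeat([0, 1], n // 2)
prior = rng.normal(70, 10, n)                      # prior-year score
low_income = rng.binomial(1, [0.2 if s == 0 else 0.7 for s in school])

# True generating process (unknown to the analyst): school 1 adds
# 3 points of genuine growth despite serving more low-income students.
score = 10 + 0.9 * prior - 5 * low_income + 3 * school + rng.normal(0, 4, n)

# Fit one regression over all students: score ~ prior + low_income.
X = np.column_stack([np.ones(n), prior, low_income])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
residual = score - X @ beta                        # actual minus predicted

# A school's value-added estimate is its mean residual.
for s in (0, 1):
    print(f"School {s}: value-added estimate {residual[school == s].mean():+.2f}")
```

On this toy data, the school serving more low-income students comes out ahead once expectations are adjusted, which is exactly the "scores should have been far worse" scenario described above.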

Meyer, an economist, built his first value-added model in 1989 and implemented his first system for the Minneapolis public schools in 1992. But it was his work with Milwaukee Public Schools, starting in 2000, that truly put his work on the map. Since then, Meyer and his co-workers have partnered with major school systems in New York City, Chicago and Dallas, among others.

The Madison Metropolitan School District started using the Value-Added Research Center's evaluation system last year, and Meyer's center currently is working with the Wisconsin Department of Public Instruction and the Cooperative Educational Service Agency No. 2 in south-central Wisconsin to roll out a statewide value-added system.

Meyer also retains close ties to U.S. Secretary of Education Arne Duncan because the Value-Added Research Center was working on measures with Chicago's public schools when Duncan led the district, before he left in December 2008 to join the Obama administration.

Meyer recently sat down with The Cap Times to talk about the work of his center, which is housed within the Wisconsin Center for Education Research in UW-Madison's School of Education. What follows is an edited transcript:

Cap Times: Last week in Madison, President Obama said teachers and schools need to be held accountable for student success, and that states must find ways to measure this. I assume that's something you like to hear?

Meyer: Absolutely. And I think that's coming from his strong trust in Arne Duncan. Arne Duncan, I think, thought that our value-added methods in Chicago had a lot of potential.

Cap Times: How does your value-added number-crunching system differ from past evaluation efforts, and what are its advantages?

Meyer: The first important contrast is to proficiency benchmarks that are related to the No Child Left Behind Act. What these benchmarks do well is measure where kids are at a particular time. What we do with value-added is try to figure out one piece of information, "What did the school do in this last year to impact the score?" Value-added looks at the growth of a student from year to year.

Cap Times: What are the limits of using your value-added measures?

Meyer: Well, you can't think that one test score is the only thing that matters. As we develop value-added, we need to continue to grow every part of our system at the same time - everything needs to keep up. For instance, it would be our vision, and I think the Wisconsin Department of Public Instruction would strongly agree, that we build tests that help us better understand not just what kids know, but also how much students grow. We are very fortunate that DPI has committed to rebuilding and reinventing and redesigning the state test (required by No Child Left Behind). And so these are some new demands we are putting on the assessments. We need to grow the system.

(Note: The state is in the process of phasing out the Wisconsin Knowledge and Concepts Examinations, the statewide tests used to comply with the federal No Child Left Behind law. The new system will likely use state, district and classroom tests.)

Cap Times: How careful do people have to be about reading too much into your value-added score numbers, especially early on when there isn't necessarily a mound of data to analyze?

Meyer: You do have to be careful, and that's one of the reasons we've moved very actively into working with districts on professional development and helping people interpret the reports. Nobody would report a Gallup poll without telling you the confidence interval. And so the same thing applies here. One of the interesting challenges with looking at any kind of statistic is that the numbers will jump around somewhat from year to year because of noise. A Gallup poll will go up or down by a couple of points. So we spend a lot of time dealing with that so people can understand what the informational content is. When you look at a presidential poll and the numbers bounce around by a point or two, people kind of get that. And that's sort of the same thing we're dealing with.
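The Gallup analogy can be made concrete. The sketch below attaches a confidence interval to a single school's value-added estimate, treating it as the mean of per-student residuals (actual minus predicted score). The residual values are invented for illustration; in practice they would come out of a model like the center's. The interval is what tells you how much of a year-to-year jump is just noise.

```python
import math

# Hypothetical per-student residuals for one school in one year.
residuals = [2.1, -0.5, 3.4, 1.2, -1.8, 0.9, 2.7, 0.3, -0.6, 1.5]

n = len(residuals)
mean = sum(residuals) / n
var = sum((r - mean) ** 2 for r in residuals) / (n - 1)  # sample variance
se = math.sqrt(var / n)                                  # standard error

# 95% confidence interval (normal approximation, like a poll's
# margin of error): the estimate plus or minus about 2 standard errors.
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"value-added estimate {mean:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

If next year's estimate lands inside this interval, the change may be nothing more than the "bounce" Meyer describes; with only a handful of students, the interval is wide, which is why small samples early on deserve extra caution.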

Cap Times: If I were a teacher and potential raises were linked to how my students did on tests, there would be a strong incentive to simply teach what was going to be on the standardized tests. Is that a concern?


Meyer: That's definitely a concern. That's true now with tests being used under No Child Left Behind. There are a couple of things one does to address this issue. First, it's very nice if you have more than one assessment. That's why lots of school districts are moving to short-cycle assessments on top of the state assessment. It's also important to change the items on a test every year.

It's also important to include other measures of how teachers are doing, perhaps based on observational ratings of teachers using really good rubrics. That's big and that's something that we're moving into big-time. Milwaukee is very interested in this and I just met with the Chicago folks, and they're going to a district-wide observational rating system that's world-class. The best programs would be a blend of test assessments plus observational ratings.
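A blend of the kind Meyer describes can be as simple as a weighted average of the two measures. The function below is a made-up illustration: the 0-100 scales and the 60/40 weights are assumptions for the sketch, not the center's or any district's actual formula.

```python
# Hypothetical composite: combine a test-based value-added score and an
# observational rubric rating, both rescaled to 0-100. The 60/40 split
# is an illustrative assumption, not a real district's weighting.
def composite(value_added: float, observation: float,
              w_test: float = 0.6, w_obs: float = 0.4) -> float:
    assert abs(w_test + w_obs - 1.0) < 1e-9  # weights must sum to 1
    return w_test * value_added + w_obs * observation

# A teacher with middling test-based results but strong observed practice:
print(composite(72.0, 85.0))
```

The design question a district actually faces is where to set those weights, which, as the interview suggests, is typically negotiated between the district and the teachers union.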

Cap Times: It seems as though the Value-Added Research Center is really a national leader when it comes to these measures. How did that come about?

Meyer: Like many things that come out of academia, this started out with my working with a graduate student. And Milwaukee Public Schools was our research partner. As we shared our ideas and started working more and more closely with Milwaukee, their people said, "Well, you know, it would be very difficult for us to build a system quickly and deliver on time, so you should think of getting into the production of value-added numbers." So I actually took a leave of absence from the university and worked with funding directly from the Milwaukee Public Schools and built a value-added production system. Then over the years our partners have said, "Well, it would be really nice to have reports that are good and clear and not just an Excel table." And then we moved into providing professional development to help people figure out how to use the value-added numbers to become better educators. So we've moved into many, many, many different areas and continued to grow in response to requests from our partners.

Cap Times: Why is this idea that we need to measure student growth really starting to take off?

Meyer: Because we finally have the test scores and data available to do something like value-added analysis. A few places across the country, like Milwaukee, thought it was useful to have assessment data early on. So Milwaukee has been doing standardized tests for years and we could use those results to produce value-added measures. But most school districts didn't do these tests. So the key to all of this is that (No Child Left Behind) required assessment in grades three through eight beginning in the 2005-06 school year. Before that, we often didn't have data to look at. These test scores made this all possible.

Cap Times: What's your opinion of pay-for-performance educator compensation programs?

Meyer: It's a very interesting topic. In Chicago, where we do value-added work, they do this to some degree, and it was very much a collaborative project between the Chicago Teachers Union and Chicago Public Schools. And there are a number of places around the country where teachers unions and districts are working together to develop pay-for-performance programs. Teachers tend to be more open to this if they understand how value-added works.

Cap Times: Who are your biggest critics, or who gives you the biggest push-back, when you talk about your value-added assessments?

Meyer: On the stakeholder side, certainly one of the biggest things we've found is that teachers and their representatives don't want this to be a mysterious black box. They want to know about the value-added model that's producing the numbers, and they want to know how it works. One of the hallmarks of what we do is that we're a completely open book on our model. We have no problem bringing union leaders inside to see what we're doing. When we do that, people have a much easier time buying in.