You’re a Creative, Motivated Teacher. You’re Fired

OK. How many of you have ever given — or received — a performance evaluation that resulted in you, or someone else, losing a job? If you work in business or government, my guess is not many. Most organizations use performance evaluations, at best, to cap pay increases, merit or otherwise. (You know. Everyone is always “excellent” until it comes time to divvy up the merit pay budget.)

That’s not the case in education these days. And teachers are losing their jobs based on evaluation models that are suspect at best. If we want to improve education in this country and give young people the opportunity to be successful in careers and throughout their lives, we need excellent teachers. But not only have we been devaluing teaching as a profession for years — now we are gleefully firing teachers based on so-called “value added” statistical models. (See HuffPost, “Teachers Survey: Job Satisfaction, Security Take A Dive.”)

Here’s an example as reported in WaPo, “Creative…motivating and fired”:

By the end of her second year at MacFarland Middle School, fifth-grade teacher Sarah Wysocki was coming into her own.

“It is a pleasure to visit a classroom in which the elements of sound teaching, motivated students and a positive learning environment are so effectively combined,” Assistant Principal Kennard Branch wrote in her May 2011 evaluation.

Two months later, she was fired.

Wysocki, 31, was let go because the reading and math scores of her students didn’t grow as predicted. Her undoing was “value-added,” a complex statistical tool used to measure a teacher’s direct contribution to test results. The District and at least 25 states, under prodding from the Obama administration, have adopted or are developing value-added systems to assess teachers.

When her students fell short, the low value-added trumped her positives in the classroom. Under the D.C. teacher evaluation system, called IMPACT, the measurement counted for 50 percent of her annual appraisal. Classroom observations, such as the one Branch conducted, represented 35 percent, and collaboration with the school community and schoolwide testing trends made up the remaining 15 percent.

Her story opens a rare window into the revolution in how teachers across the country are increasingly appraised — a mix of human observation and remorseless algorithm that is supposed to yield an authentic assessment of effectiveness. In the view of school officials, Wysocki, one of 206 D.C. teachers fired for poor performance in 2011, was appropriately judged by the same standards as her peers. Colleagues and friends say she was swept aside by a system that doesn’t always capture a teacher’s true value.

Proponents of value-added contend that it is a more meaningful yardstick of teacher effectiveness — growth over time — than a single year’s test scores. They also contend that classroom observations by school administrators can easily be colored by personal sentiments or grudges. Researchers for the Bill & Melinda Gates Foundation reported in 2010 that a teacher’s value-added track record is among the strongest predictors of student achievement gains.

Which is why D.C. school officials have made it the largest component of their evaluation system for teachers in grades with standardized tests. The District aims to expand testing so that 75 percent of classroom teachers can be rated using value-added data. Now, only about 12 percent are eligible.

“We put a lot of stock in it,” said Jason Kamras, chief of human capital for D.C. schools.

Yet even researchers and educators who support value-added caution that it can, in essence, be overvalued. Test results are too vulnerable to conditions outside a teacher’s control, some experts say, to count so heavily in a high-stakes evaluation. Poverty, learning disabilities and random testing day incidents such as illness, crime or a family emergency can skew scores.

Oh, boy. And how about this:

Wysocki said there is another possible explanation: Many students arrived at her class in August 2010 after receiving inflated test scores in fourth grade.

Fourteen of her 25 students had attended Barnard Elementary. The school is one of 41 in which publishers of the D.C. Comprehensive Assessment System tests found unusually high numbers of answer sheet erasures in spring 2010, with wrong answers changed to right. Twenty-nine percent of Barnard’s 2010 fourth-graders scored at the advanced level in reading, about five times the District average.

D.C. and federal investigators are examining whether there was cheating, but school officials stand by the city’s test scores.

Wow. Possible cheating to inflate test scores? I’m shocked. Wonder what Mr. Chief of Human Capital for D.C. Schools thinks about that? But I digress.
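Since we’re digressing anyway, it’s worth pausing on the arithmetic. Here’s a minimal sketch of the 50/35/15 weighting Turque describes above. Only the weights come from the article; the 0–100 component scale and the sample numbers are my own illustrative assumptions, not the official IMPACT rubric:

```python
# Simplified sketch of a weighted composite evaluation score.
# The 50/35/15 split comes from Turque's article; the 0-100 scale and
# the example numbers below are my own assumptions for illustration.

WEIGHTS = {
    "value_added": 0.50,    # student test-score growth model
    "observations": 0.35,   # classroom observations by administrators
    "commitment": 0.15,     # school community / schoolwide testing trends
}

def composite_score(components: dict[str, float]) -> float:
    """Weighted average of the evaluation components (each 0-100)."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# A teacher who excels in the classroom but draws a low value-added number:
teacher = {"value_added": 30.0, "observations": 95.0, "commitment": 90.0}
print(composite_score(teacher))  # 61.75 -- the 50% weight drags it down
```

With half the weight riding on value-added, even a near-perfect observation score can’t pull the composite up. That, in a nutshell, is the arithmetic that ended Wysocki’s time in D.C.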

OK. Let’s have someone else opine on this. Here’s Valerie Strauss, writing in WaPo, “Firing of D.C. teacher reveals flaws in value-added evaluation”:

The firing of a D.C. teacher called “creative,” “visionary” and “motivating” is the latest example of the many things wrong with value-added methods to evaluate teachers, the newest trend in school reform that is sweeping states with a push from the Obama administration.

My colleague Bill Turque tells the story of teacher Sarah Wysocki, who was let go by D.C. public schools because her students got low standardized test scores, even though she received stellar personal evaluations as a teacher.

She was evaluated under the D.C. teacher evaluation system, called IMPACT, a so-called “value-added” method of assessing teachers that uses complicated mathematical formulas that purport to tell how much “value” a teacher adds to how much a student learns.

One of the many profound problems with this is that the measurement for how much a student learns is a standardized test, which we know can only measure a narrow band of student achievement — and that’s only if the test is relatively well written, a student takes the exam without illness, anxiety or exhaustion, and nobody cheats.

The value-added formulas — which supposedly can factor in all of the outside variables that might affect how well a student performs on a test — are prone to so much error as to make them unreliable, according to mathematicians and other assessment experts who have warned against using these models.

Elizabeth Phillips, principal of P.S. 321 in Park Slope, N.Y., is trying to deal with the fallout from bad value-added evaluations at her school. New York City last month released value-added scores for 18,000 teachers over the objections of educators in the state, and Phillips wrote that they were “extremely inaccurate, both in terms of actual mistakes and in how data are interpreted.”

“It is wrong to call a great teacher a failing teacher because a few kids got 3-4 questions wrong one year rather than 2-3 questions wrong the year before,” Phillips wrote.

Wysocki found herself fired because, though her evaluations were high, her students did not score as highly as the complex value-added formula had predicted they would. She argued that this could have happened because more than half of her students’ test scores from the year earlier may have been inflated; the school they attended is now under investigation for cheating.

She was fired anyway, and now teaches in Fairfax County, one of the country’s best public school systems. Good move, D.C.

The Obama administration helped push states down the value-added road by insisting in Race to the Top requirements that student growth be included in evaluation systems. State after state seeking Race to the Top money — and even those who didn’t — jumped on the bandwagon. The arm-twisting on value-added has been so strong that even union leaders have stopped fighting for a blanket prohibition on its use and now are working to keep down the percentage of an evaluation that depends on student test scores.

We live in an era when school reformers keep talking about the importance of making decisions based on “data” — but apparently, only half-baked data will do. It may be that at some point in the future someone will figure out how to fairly and reliably use a mathematical formula to evaluate how well a teacher does his/her job, but we aren’t even close to being there yet.

So how fair is it to use such a system right now?

Not at all. One day, the folks who championed it — including administration officials who say they are concerned about fairness and equity — may well come to regret their myopia. It will be too late, though, for teachers now being smeared with this exercise in delusionary assessment.
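So what is actually inside those “complicated mathematical formulas”? Here’s a toy version of a value-added calculation. To be clear, this is my own bare-bones sketch, not D.C.’s actual model, which layers on demographic controls and error adjustments. The idea: predict each student’s score from last year’s score, then credit (or blame) the teacher with her class’s average prediction error:

```python
# A toy value-added calculation -- my own illustrative sketch with
# simulated data, not D.C.'s actual model.
import numpy as np

rng = np.random.default_rng(0)

# Simulated district: 1,000 students, true growth of ~5 points plus noise.
prior = rng.normal(50, 10, size=1_000)
current = prior + 5 + rng.normal(0, 8, size=1_000)

# Fit the district-wide prediction line: current ~ a * prior + b.
a, b = np.polyfit(prior, current, deg=1)

def value_added(prior_scores, current_scores):
    """Mean residual of a class relative to the district prediction."""
    predicted = a * np.asarray(prior_scores) + b
    return float(np.mean(np.asarray(current_scores) - predicted))

# An average class of 25: value-added hovers near zero, as it should.
cls_prior, cls_current = prior[:25], current[:25]
print(round(value_added(cls_prior, cls_current), 2))

# Same class, but 14 of the 25 arrive with prior scores inflated by 15
# points (the erasure scenario). Their predicted scores jump, their
# actual scores don't, and the teacher's value-added craters through
# no fault of her own.
inflated_prior = cls_prior.copy()
inflated_prior[:14] += 15
print(round(value_added(inflated_prior, cls_current), 2))
```

Run it and the second number lands several points below the first, purely because the prior-year scores were inflated. That is exactly Wysocki’s complaint: if the fourth-grade scores were juiced, her fifth-grade “value added” was sunk before she taught a single lesson.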

I recognize that, in classrooms in every city in America this morning, there are teachers standing before students who are nowhere near excellent. And there is no excuse for allowing a poor teacher to remain; this really is more important than the jobs most people have pushing papers and surfing the Internet for eight or so hours a day in businesses and government. But I would feel better about removing ineffective teachers if we gave them the same level of support that others receive: on-the-job training, mentoring, administrative support and the resources necessary to do the job. Oh, and some parental support and involvement doesn’t hurt either. And I guess it would be kinda nice if all students showed up every day well-fed and healthy.

So this fixation on value-added statistical models strikes me as another reform that continues the educational Slide to the Bottom. [I know. It’s the unions’ fault. Spare me the e-mails.]

Glad I’m quasi-retired.

And for more on all this, read the NYT op-ed, “Confessions of a ‘Bad’ Teacher.”
