LA Times Identifies High, Low Performers

[image from www.latimes.com]

There's been lots of back and forth over the past 48 hours since the publication of the LA Times' eye-opening story about variations in kids' test scores based on which teacher they have -- and naming those teachers publicly.

"The best teachers were not concentrated in schools in the most affluent neighborhoods, nor were the weakest instructors bunched in poor areas...The quality of instruction typically varied far more within a school than between schools."

The UTLA has called for a boycott of the LA Times over the story.

Comments

Leave a comment
  • I think the LA Times presented its story in a far too simplistic manner. The RAND Corporation, which did the analysis for the LA Times, has a very complex review of the statistical approach that forms the basis for the article. See http://www.rand.org/pubs/monographs/2004/RAND_MG158.pdf

    The value-added assessment approach may be a useful way to look at teaching, but it is very complex. I have to wonder whether the LA Times staff has the background in advanced statistics to reasonably make some of the journalistic interpretations they appear to make in the article. I took the full graduate school statistics sequence for education majors, and at points the discussion of the approach went over my head.

    Rod Estvan

  • In reply to Rodestvan:

    I posted that article on the trib when absolute laymen were calling for the "bottom 10%" to be "culled." Some solid work has been done since then ('04), but the same biases remain. One issue is innate student ability, another is tracking, and yet another is x. As I wrote above, I'd hate to see this whole concept thrown away, but it will happen when idiots are spouting off about extremely complicated concepts that will have an immediate and negative impact on children and professionals. It's simply not possible to say, "Uh, look, um, my kid was in the 45th percentile last year and is in the 50th this year. That teacher's a friggin genius." And it's definitely not possible to say, "That teacher should be fired."

    We still have to figure out how to efficiently show the effect of a teacher on student achievement, but, quite aside from the current limitations, I think they're barking up the wrong model. I'd say more, but I'm working on it.

  • In reply to Rodestvan:

    Thanks to spamdingle for posting the link to the Rothstein paper. For those of us looking at special education teachers' effectiveness, any type of value-added approach seems incredibly complex. Moreover, students with certain disabilities can be expected to have slower growth rates in specific academic areas, whereas others -- emotionally disturbed students, for example -- may exhibit academic growth patterns that are driven by personal dynamics. This is far too complex to use for evaluating special education teachers.

    Rod Estvan

  • In reply to Rodestvan:

    I agree with Rod. Special education teachers need to administer individual untimed tests such as the Woodcock-R in order to assess student growth. Using a standardized test such as the ISAT (even untimed) really is not an accurate assessment. I just reviewed the records of my 20 students with disabilities for next year. I realize that some of my students (about 40%) are not really students with learning disabilities but students who actually have cognitive disabilities or emotional disorders. The school psychologist determines cognitive ability, and I am suspicious of all the students I have had over the years who present as cognitively delayed but are labeled as students with learning disabilities. They do not show the same growth as a child with learning disabilities. I wonder whether this is a national trend or just within CPS, and what the rationale is for misdiagnosing students with disabilities. I am concerned because it is really misleading parents.

  • In reply to Rodestvan:

    VAM is likely to misjudge the effectiveness of teachers and schools and could produce incorrect generalizations about their characteristics, hampering systematic efforts to improve education. In addition, much of the relevant discussion is unpublished, and the practical import of these concerns when VAM is applied to student achievement remains largely unclarified and misunderstood.

  • I wonder if Huberman's PM department could understand it?

  • Publishing names without at all understanding value-added modeling and its limitations is criminally stupid. I don't expect anyone to read this whole paper, but at least read the summary:
    http://www.economics.harvard.edu/faculty/staiger/files/rothstein%2Bteacher%2Beffects%2Bqje2010.pdf
    Basically, year-over-year VAMs are highly flawed and cannot, in their present forms, represent teacher efficacy, let alone predict it. In fact, the previous year's teacher has a stronger correlation with present achievement than does the present teacher.

    Look, this holds great potential and I'd hate to see it scrapped without further investigation, but its time has not yet come. I had a principal call the math teacher to her office to help her figure out what percentage of the school was (enter ethnicity here). She couldn't do it. We can't have tweedle dee and tweedle dumber using highly inferential statistical modeling to make staffing decisions.
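The instability the commenters describe is easy to demonstrate. Below is a minimal, hypothetical simulation -- not the LA Times/RAND model, just a naive gain-score version of value added -- showing that two teachers with *identical* true effects can swap rankings from year to year purely because of test-measurement noise averaged over a small class. All parameter values (class size, noise spread, the 2-point true effect) are illustrative assumptions.

```python
# Hypothetical sketch: why naive year-over-year gain scores are noisy
# estimates of a teacher's effect. Not the actual model used by RAND.
import random

random.seed(0)

def naive_value_added(true_effect, n_students=25, noise_sd=15.0):
    """Mean observed score gain for one class in one year:
    the teacher's true effect plus per-student measurement noise,
    averaged over an (assumed) class of 25."""
    gains = [true_effect + random.gauss(0, noise_sd) for _ in range(n_students)]
    return sum(gains) / len(gains)

# Two teachers with the SAME true effect, compared over 100 simulated years.
scores_a = [naive_value_added(true_effect=2.0) for _ in range(100)]
scores_b = [naive_value_added(true_effect=2.0) for _ in range(100)]

# Count how often A ranks below B even though they are truly identical.
flips = sum(a < b for a, b in zip(scores_a, scores_b))
print(f"Teacher A ranked below identical Teacher B in {flips}/100 simulated years")
```

With these assumptions the ranking flips roughly half the time, which is the commenters' point: a single year's percentile movement for one class is mostly noise, so firing (or celebrating) a teacher on that basis is statistically indefensible.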