
Ed Knows Policy

EKP -- a local (Washington, DC) and national blog about education policy, politics, and research.


Wednesday, May 31, 2006

Ticket out of DC

Senator George Voinovich is a bad man. He wants to strangle the program that gives District of Columbia residents a shot at high quality public colleges and HBCUs.

Don't you think DC's Senator should work out a deal with the gentleman from Ohio? Oh, wait, that's right, DC doesn't HAVE any Senate representation.

(hat tip to Mark Lerner)



NCLBlog vs. Education Next

Some of it is just snark, but you gotta love a good point-by-point takedown.



Tuesday, May 30, 2006

NRA Bullish on Education

A quote in a Washington Post story on a recent study of gun violence made me laugh out loud. The piece had dueling quotes about the "Eddie Eagle" program that the NRA designed to teach kids about gun safety.

First, the gun control guy says Eddie Eagle is ineffective:

"Teaching kids to be safe around guns doesn't work" in preventing accidents, said Jon Vernick, co-director of the Center for Gun Policy and Research at Johns Hopkins. Studies have found that children exposed to Eddie Eagle programs are no less likely to play with guns than children who don't take the class, he added.


But the NRA response is priceless:

NRA spokesman Andrew Arulanandam disputed that.

"This is probably the first time I've heard that education is a bad thing and not effective," he said.

NRA spokesman Arulanandam needs to read some education research. But it gets funnier for anyone who knows how decision making works in public schools (on flip):

Its adoption by school districts around the country, Arulanandam said, "is a testament to its effectiveness."


Ah, it reminds me of the Simpsons episode where Bart gets a book about knife safety called "Don't Do What Donny Don't Does."



Wednesday, May 24, 2006

Questions for Sanders et al. on NBPTS study

I'm no NBPTS apologist, but I have to say that I'm underwhelmed by the final report (pdf) written by Sanders, Ashton, and Wright. NBPTS posted a summary of the negative comments offered by peer reviewers of the report, but that set of critiques is pretty unconvincing too. Three major flaws (or potential flaws; it's hard to tell from their report) are:




1. Attrition bias
2. Underpowered tests
3. Omitted variable bias

I'll try to explain those in English on the continuation page.



1. Attrition bias. A potentially huge source of bias is sample attrition. The researchers threw away about 35% of the records (p. 4) without adequately explaining why, or whether it might bias their results. They list six reasons for dropping cases, some of which are harmless and others not so much. For example, "the student record lacked two prior year scores." Well, that would mean that teachers who work with highly mobile populations are going to have fewer records, and they might even drop out of the sample altogether (if they teach fewer than 10 "stayers"). If your stayers are better than your leavers, then this analysis uses a decidedly nonrandom subsample of your students to judge your performance. I'm not sure I want these guys in charge of estimating my value added.



To see why this might be the source of sample loss rather than the other five reasons, just look at the sample size by grade level. Analysis of grades 7 and 8, which requires data on kids who necessarily changed schools to move from elementary to middle school, relies on the smallest samples.
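To make the attrition worry concrete, here's a toy simulation in Python (every number in it is invented by me, not taken from the report). The teacher helps every student by the same amount, but the students who lack two prior-year scores -- the ones who get dropped -- would have grown less than the ones who stay:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                 # students on one teacher's rosters over the study
    teacher_effect = 5.0     # this teacher adds 5 points to every student

    # Suppose 35% of records lack two prior-year scores and get dropped,
    # and suppose those "leavers" would have grown less than the "stayers."
    leaver = rng.random(n) < 0.35
    expected_growth = np.where(leaver, -3.0, 2.0)
    gain = teacher_effect + expected_growth + rng.normal(0, 10, n)

    print("mean gain, full roster:  %.2f" % gain.mean())           # about 5.25
    print("mean gain, stayers only: %.2f" % gain[~leaver].mean())  # about 7

The stayers-only number is inflated, and the size of the inflation depends on how mobile your students are -- which is exactly why teachers of mobile populations can't be judged fairly from the surviving records.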



2. Underpowered tests. I know it's hard to complain that the sample size is too small when the authors report that they have 260,000 records and 4,600 teacher-year-grade-subject combinations. But when people use bullshit units like "teacher-year-grade-subjects," you should start to worry. By my calculations they had 102 National Board Certified teachers (NBCs) in their sample -- still not too shabby, especially considering that the comparison groups were as large or, in the case of "never in NBPTS," much larger. Yet because they did all analyses broken down by grade, that translates into an average of about 20 NBCs per grade, ranging from 41 teachers in grade 1 down to only 8 in grade 8. Maybe they had more teachers and fewer than 3 years per teacher, but it's not reported, so I assumed the worst here.



Why am I harping on sample size here when so many good studies use even fewer teachers? Because when your finding is that the group differences are not statistically significant, there are two possible explanations. One is that there really is no difference (i.e., NBPTS is no good); the other is that you have too few data points to know. The latter explanation is not ruled out here.
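A back-of-the-envelope power calculation makes the point. The effect size (0.3 standard deviations of teacher effects) and the ten-to-one comparison group are assumptions I'm making up; the teacher counts are my calculations from above:

    from statsmodels.stats.power import TTestIndPower

    # Chance of detecting a 0.3 SD difference in mean teacher effects with a
    # two-sample t-test, alpha = .05, comparison group ten times larger.
    analysis = TTestIndPower()
    for n_nbc in (8, 20, 41):   # grade 8, the per-grade average, grade 1
        p = analysis.power(effect_size=0.3, nobs1=n_nbc, alpha=0.05, ratio=10)
        print("NBCs per grade: %2d   power: %.2f" % (n_nbc, p))

With 8 NBCs, power is somewhere around one chance in ten. A null result from a test that weak is almost no information at all.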



In the end, they may be right that teacher variability is so great relative to certification that certification is essentially meaningless, but I think they need to compare NB cert with other signals of teaching quality, like experience, degrees, or exam scores. If NB cert is even a tiny bit better than those other signals, then it's probably worth the money.



3. Omitted variable bias. The paper boasts about how sophisticated its model is (and how superior it is to the studies by Goldhaber and Anthony and by Cavalluzzo) because it treats teacher effects as random. But at the end of the day, the fixed versus random effects question is one of interpretation, not of right or wrong. A fixed effects interpretation says that we will treat the teachers in the study as a fixed sample and explore relationships among them, like their group averages by certification status. The random effects interpretation says that we will assume they come from a larger population, and instead of estimating their average effectiveness, we will estimate what kind of "effectiveness distribution" they seem to have come from. The latter approach is more ambitious, so it's harder to identify parameters or show that entire distributions differ, and hence you're more likely to generate null findings. Combine that with the problem of few NBCs per grade and you're nearly guaranteed to get the result that they did.
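Here's a sketch of the two flavors on fake data of my own (statsmodels' MixedLM standing in for whatever software Sanders et al. actually used). The fixed flavor just compares group means; the random flavor tries to estimate the distribution the teachers came from:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    teachers = pd.DataFrame({"teacher": range(40), "nbc": [1] * 8 + [0] * 32})
    # Teacher-to-teacher variability (SD = 1) dwarfs the certification bump (0.2).
    teachers["effect"] = rng.normal(0.2 * teachers["nbc"], 1.0)

    # 25 students per teacher, with student-level noise on top.
    students = teachers.loc[teachers.index.repeat(25)].reset_index(drop=True)
    students["score"] = students["effect"] + rng.normal(0, 1, len(students))

    # Fixed-effects flavor: compare average scores by certification status.
    print(students.groupby("nbc")["score"].mean())

    # Random-effects flavor: teacher is a random intercept drawn from a
    # population; the nbc coefficient is a shift in that population's mean.
    result = smf.mixedlm("score ~ nbc", students, groups=students["teacher"]).fit()
    print(result.summary())

Run that a few times with different seeds and the nbc coefficient bounces around and is rarely significant -- not because certification can't matter, but because eight certified teachers drawn from a noisy population can't pin it down.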



So what does this have to do with omitted variables? A true value-added score is supposed to account for everything outside the control of the teacher. Ideally, you want to compare teachers with the same kids in the same buildings and the same working conditions. No researcher can do all that, but you can come a lot closer than Sanders et al. did. Their model controls for hardly anything besides prior test score: just the race and sex of the student and the teacher's years of experience.



But wait -- they did use prior test scores, so doesn't that take care of everything? Well, no. Not everyone with the same initial score has the same expected growth. Even conditional on prior score, you will see slower growth for more economically disadvantaged students or those with disabilities or special needs. I'm not asking the authors to do a randomized trial (although that would be nice), but they could have controlled for even a weak proxy for family income like free/reduced price lunch eligibility. They could have included controls for disability status or special needs. Never mind all the harder stuff, like school conditions, that might also contribute to the between-classroom variability they document so carefully.
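A toy example of what omitting free-lunch status can do (again, every number here is mine, invented for illustration): two teachers with identical true effects, one serving a much poorer class. Conditional on prior score, suppose poorer students gain a few points less:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500  # students per class

    for label, frl_share in (("Teacher A, 20% free lunch", 0.2),
                             ("Teacher B, 80% free lunch", 0.8)):
        frl = rng.random(n) < frl_share
        prior = rng.normal(50, 10, n)
        # Identical true teacher effect (+5); free-lunch students gain 4 less.
        post = prior + 5 - 4 * frl + rng.normal(0, 5, n)
        print("%s: naive value added = %.1f" % (label, (post - prior).mean()))

A growth model with no poverty control credits Teacher A with roughly 4.2 points and Teacher B with roughly 1.8, even though both teachers are exactly as good. The gap belongs to the students' circumstances, not the teachers.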



Like I said, this study may show that National Board certification is worthless, but I'm not yet convinced.




Monday, May 22, 2006

Another good blog: VARC Blog

VARC Blog is worth checking out. Author Chris Thorn gives you up-to-date info on Milwaukee Public Schools' use of value-added indicators, his take on the selection of TN and NC as growth model states under NCLB, and other related topics.



Friday, May 19, 2006

NBER working papers I'm reading: Grad school and Alt. Teacher Certification

Maybe I'm crazy, but this is what I like to read in my spare time. It's the list of working papers from the National Bureau of Economic Research, namely those in field "I2," the JEL code that EconLit uses for education.

Here's one paper (pdf) that I've started to read, but I sure wish the authors would put the findings in the abstract. It tries to figure out what aspects of graduate schools cause their students to finish. I'm pretty sure that's their metric of program quality, so it's not at the top of my list, but I do profess some curiosity.

This one (pdf) is at the top of my list. It's the Kane/Rockoff/Staiger paper on alt cert teachers in New York City (not to be confused with the Loeb/Boyd/Wyckoff/Lankford research on alt cert teachers in New York City). (More on flip)

Kane et al. have a fairly massive database on a huge, highly representative sample from NYC, one of the places where N does indeed go to infinity, or at least gets close. They find more variation within routes to teaching than between them, and they find positive returns to experience. What I like about this paper is how they consider the static impacts of certification simultaneously with the dynamics of retention and teacher improvement over time. So far, it looks like TFA comes out looking good because they do well early on. Since I'm not done reading it, I'll quote from the abstract:

    "The classroom performance during the first two years, rather than certification status, is a more reliable indicator of a teacher's future effectiveness... Given relatively modest estimates of experience differentials, even high turnover groups (such as Teach for America participants) would have to be only slightly more effective in their first year to offset the negative effects of their high exit rates."
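To see what "more variation within routes than between them" means in practice, here's a quick decomposition on made-up numbers (not the paper's data): route averages that differ by a few hundredths of a standard deviation, sitting on top of much wider spreads within each route:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    route_means = {"traditional": 0.00, "alt cert": -0.02, "TFA": 0.03}
    df = pd.concat([
        pd.DataFrame({"route": r, "effect": rng.normal(mu, 0.15, 200)})
        for r, mu in route_means.items()
    ])

    grand = df["effect"].mean()
    between = df.groupby("route")["effect"].mean().sub(grand).pow(2).mean()
    within = df.groupby("route")["effect"].var().mean()
    print("between-route variance: %.4f" % between)  # tiny
    print("within-route variance:  %.4f" % within)   # dominates

Which route a teacher came through tells you far less than which teacher she turned out to be.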



Thursday, May 18, 2006

Blogrec: I Thought a Think

Found a nice blog today: I Thought a Think. Teacher blogs usually have great anecdotes and stories, but this one goes further, with a clear interest in the big picture.



Wednesday, May 17, 2006

"What Works?" What gives?

Poor WWC, poor AIR, poor IES. They are all caught up in a large, poorly conceived, poorly executed plan to do something noble. Ed Week writes it up here ($), describing the new website's new look. They just want to serve as the scientists who scan the research base and boil down the evidence for busy practitioners who are ill trained for the task.

All well and good, but they didn't think some things through very well:



• Research hardly ever reaches consensus, especially in education; therefore, "what works?" is a nonsensical question. It should be the "What Are Researchers Tending to Conclude Clearinghouse" (WART-CC).

• There's no one-size-fits-all solution to education problems, so evidence on the effectiveness of an intervention in School District A has to be mulled carefully, whether it is positive or negative, before District B decides whether to adopt or adapt it.

• Anybody can interpret the findings of a randomized experiment, but judging whether a quasi-experimental study is any good is what takes sophisticated judgment, and that judgment has to be subjective. That's why it's better to have our current system, where people and institutions that live off their reputations commit to these judgments and stand by them, with the rest of us deciding whose reputation merits paying attention. It's the same private sector solution that gives us movie and restaurant reviews. It should have been journalists hiring academics, not the government hiring contractors.



Monday, May 15, 2006

Interpreting Research on Immigrant Schoolchildren (Mexico is not a continent)

Kevin Carey downplays the role of Mexicans in immigration over at Education Sector ("Analysis and Perspectives: Crying Wolf About Immigrant Education"). He reads this graph from an Urban Institute report to criticize those who say that immigrants are "primarily" Mexican.


Well, literally, he's right. But over 1/3 of the immigrants are from one country, Mexico. That's more than from any other country in the world. More than the entire continent of Asia. Almost twice Europe and Canada. Almost twice the rest of Latin America. So it depends on how you look at it.

That having been said, I think it's a good thing. This country would be a better place if we were majority Mexicano, but that's a personal opinion. We're discussing graph-reading skills here.

He also points out data showing that undocumented students are a tiny fraction of all students. Yes, true, but the issue for many communities is not the average but their own numbers. Re-do that graph for a border town and see what you get.

Again, I'm complaining about his blatant attempt to twist the data, not about immigration. I'm on record as loving (legal) immigration. Even for illegal immigration, you should never punish children for what their parents do.

UPDATE: I obviously caught Kevin's attention. I should have articulated the criticism better. Here's take two:

Mexico is not a continent. List immigrants by country or by continent. But mixing units is misleading and sneaky.



Honesty

AFT NCLBlog finds the money quote in MDRC's executive summary of their report on their high school studies.

    "Because of methodological issues, lessons in this report should be viewed as judgements, not facts. Almost all the judgements are grounded in evidence, although the evidence is thick in some cases, thinner in others."

Translation: "We did some research, but we're not sure what we have here."

Ouch.



Sunday, May 14, 2006

Teacher Induction: Ed Week Stenographers at it Again

Teacher induction programs, which provide mentoring and other kinds of on-the-job support and training to new teachers, are a much overlooked candidate for the education policy of the future. Induction programs are to teaching what remedial reading is to the first year of college. Effective induction will be absolutely critical until we figure out how to raise the quality and preparation of new entrants to the profession. Overambitious alt cert programs, weak ed schools? You name your villain, but most new teachers are not ready for prime time in their first year (TEACHERS: please educate me otherwise in the comments if I'm wrong about that).

Teacher induction is often a home-grown affair, with districts putting together programs by themselves or in partnership with local colleges of education. But there are national purveyors as well, and the New Teacher Center at Santa Cruz is the big kahuna in this market. NTC's biggest score was a $30m project to work in New York City. The first year implementation report -- a self-evaluation -- is here (pdf).

The thing that really bugs me is having found the report by way of Ed Week ($), which seems to be having trouble with its bullshit filter again. (continued below)


This report on the first year of implementation is about as self-serving as it can be. By the way, not all first year implementation reports are this uninformative -- see Wolf et al. on the DC Opportunity Scholarship program. But I can't blame NTC. They said their program was "promising" because, well, the world didn't explode.

Their mentor ratios ran high -- I think. Reading between the lines, they don't come out and tell you how high, except to say that "a number of mentors" had more than the target of 17 teachers. Seventeen teachers! Well, mentor ratios are like class sizes: it sounds great to shrink them, but it's not clear whether it's cost effective to do so. I'd like to see how effective the program was. Anyway, Ed Week says "roughly 17," but I think they didn't bother to ask NTC what the number really was. If 17 is the target and some mentors went over, then others must have gone under by about as much to average "roughly 17." If it sounds like I'm nitpicking on Ed Week, I am, because they are supposed to be journalists, not stenographers skimming report abstracts and summarizing them. Lowly bloggers can do that (and we throw in snark for free).

As you would expect, Ed Week repeats in their lead the unsupported claim that "the program shows promise for boosting their quality and helping stem the number who leave." When you tack on the attribution "a report has found," you absolve yourself of any responsibility, right, Bess Keller of Ed Week? Wrong. You could have at least pointed out in that first lovely sentence that this is the claim made by the folks selling the program.

That makes a big difference when you read further down that the report recommends extending the program for a second year. As if $30 million for one year was not enough. Of course NTC is going to recommend that policymakers buy the deluxe 2-year edition of their product. Remind me, how is this Ed Week story not an infomercial? The model was well suited to the district. The "structural challenges" (read: excuses) seem to have victimized poor NTC, who fought bravely against all these external problems like poor communication and travel times between schools. If your mentors can't get from school to school in the most densely settled city in the country, how can they do it anywhere else? Come on, Ms. Keller, ask some tough questions for a change. You're charging money for your site. This blog is free and I don't even use ads.



Saturday, May 13, 2006

Report on DC Vouchers -- year 2

The report on Year 2 of the DC school voucher program came out and nobody noticed (pdf of the full report here). At least I didn't notice until just last week.

Still no impact estimates, but for an interim report, this is still worth reading (and it's well written -- hats off to authors Patrick Wolf, Babette Gutmann, Mike Puma, and Marsha Silverberg). The study's design and methods are discussed in an appendix, so this is your last chance to criticize the study on methodological grounds -- it's no fair if you do so after you see the impact findings.

The no-show rate -- kids who were awarded vouchers but didn't use them -- is 26 percent (29 percent in the research sample). I suppose that's about par. They allowed in 20 percent more than capacity in anticipation of this.
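To see how that cushion works (round numbers of my own, not the report's): award 1,200 vouchers against 1,000 funded slots, and a 26 percent no-show rate leaves about 1,200 x 0.74 = 888 students actually using them -- comfortably within capacity.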

The percentage of kids for whom this program was a windfall because they were already attending private schools is 9 percent -- not too shameful, since they still had to be below 185 percent of the poverty level.

By fall of 2005, the program had filled to capacity and generated enough sample members (kids who applied for oversubscribed slots) to do the impact study comparing lottery winners to lottery losers.

Could the DC voucher program/experiment be one of the few things that the U.S. Department of Education is handling in a competent manner? I can't wait till next year to see the findings.




Friday, May 12, 2006

May hiatus -- what I'm not blogging

Sorry if anyone is checking here and not seeing posts. There actually has been quite a bit of activity in education research that I should be blogging, but I'm too busy doing my job.

There was the March AEFA conference. The April AERA conference. Various working papers and journal articles came out in the last few weeks. The states applying to ED to use growth models as part of their NCLB compliance. The Teacher Incentive Fund grant announcement from the feds. And some National Board/American Board release/nonrelease of research below the fold...




There was the National Board's non-release of the study by Sanders and the American Board's release of a non-study by Mathematica -- ok, I forced that. I couldn't resist the wordplay. Actually, ABCTE did release a study. It was done by... ABCTE, and it showed that ABCTE is great (pdf). Surprise! It's actually not too badly done, but they rightly ask you to wait for the independent research to come out of Mathematica. They just need to find some actual American Board candidates for Mathematica to study.

