(Cross-Posted at www.progressivefix.com--I'm behind in getting these up on my blog...)
Everyone should read Matt Yglesias’s post, “How Close Were We, Really?”, which makes a point that I’ve been mulling. The fact that health care reform blew up so quickly after the Brown win implies that whatever consensus had been achieved between the Senate and House was significantly incomplete, weak, or both. House liberals apparently were not prepared to pass anything coming out of conference that didn’t fix the problems they had with the Senate bill. But it’s unclear whether moderate senators or representatives would have stayed on board in that event. If the last week shows nothing else, it reveals that a whole lot of members of Congress were decidedly unexcited about supporting anything resembling either chamber’s bill.

This seems like a job for Keith Hennessey: knowing what we know now about the uneasiness of moderates and the stubbornness of liberals, what was the likelihood that reform would have passed if Coakley had won? (Keith had the probability of collapse given a narrow Coakley victory at 10 percent — and two percent with a big win — before the election.)

If this interpretation is right, it implies that many progressives haven’t given enough credit to how far out on the plank many moderates actually went (which isn’t that surprising given how many of them misread the polls). Pre-Brown, moderates were betting that antagonism toward reform wasn’t so strong that their job — their chance to work on all of their other legislative priorities — was in mortal danger. The Brown win provided new information that clearly affected the calculus (as did the initial freak-out by Massachusetts’s own Barney Frank).

Perhaps one big reason why the Obama team (and everyone else) was caught flat-footed after the election was that they were unaware of how much moderates already felt they had stuck their necks out.

All this said, I think the consensus is correct that Democrats having second thoughts ought to accept that they have no choice but to vote for the final bill. Actually, I think these Democrats have probably reached that conclusion too. But it’s important to note that that wouldn’t be enough to pass something — if House liberals won’t vote for the Senate bill, it doesn’t matter what moderates do. What progressive bloggers need to do is start working the liberal legislators in the House.


(Cross-Posted at www.progressivefix.com--I'm behind in getting these up on my blog...)
Last week, I spent some time looking at the living standards of the middle class, showing that they have improved notably over time and giving evidence that they are better than or comparable to middle-class lifestyles in other industrialized nations. I will be returning to this issue in a later post in order to address the “two-income trap” argument of Elizabeth Warren, which was raised by Reihan Salam and by Rortybomb.

For now though, I want to talk about the living standards of the poor. It’s important to make the distinction between trends (which I’ll discuss today) and absolute levels of material well-being (which I’ll discuss in a later post) because things can have improved a lot at the same time that they are still not all that great.

Let’s return to the comparison I used in my post on the middle class: “the gold standard” of 1973, when median household income was at its pre-stagflation peak, versus 2008. To represent “the poor,” I’ll look at the 20th percentile — the household that is doing better than 20 percent of other households but worse than 80 percent of them. You’ll have to trust me that my research indicates the story would be similar if I were talking about the tenth percentile.

It’s easy to look at only a fairly limited income measure going back to 1973 for the 20th percentile. Doing so indicates that income at the 20th percentile grew from $19,046 to $20,712 (in 2008 dollars, adjusted by the Census Bureau’s preferred CPI-U-RS). That’s obviously not impressive growth, though it should be noted that the poor are a bit better off today than they were in 1973 (and they look a little better comparing 1973 to 2007, which is a fairer comparison). Using the PCE deflator, which the federal Bureau of Economic Analysis uses (and which I prefer because of the evidence that the CPI-U-RS overstates inflation, particularly among the poor), income increased by about $3,000 after accounting for the cost of living, or 16 percent. That’s about the same as for the middle class using the same measures and methods.

As I noted in the middle-class post, the official income definition is pretty limited. The Census Bureau’s “Definition 14” takes into account taxes, public benefits, and the value of health insurance, and it’s easy to look at going back to 1979 (which was at least as good/bad a year for the poor as 1973 was). By this measure, income at the 20th percentile rose from $17,999 to $24,642 from 1979 to 2008 (using the CPI-U-RS). That’s an increase of over one-third—after adjusting for the cost of living. When the PCE is used to adjust for the cost of living, the increase is almost $8,000—45 percent!
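These growth rates are simple percent changes; for readers who want to check them, here is a minimal sketch using only the dollar figures quoted above:

```python
def pct_change(start, end):
    """Percent change from start to end."""
    return (end - start) / start * 100

# Official (limited) income measure, 20th percentile, 2008 dollars via the CPI-U-RS
print(round(pct_change(19_046, 20_712), 1))  # 8.7 -- modest growth, 1973 to 2008

# Census "Definition 14" (taxes, benefits, health insurance), CPI-U-RS
print(round(pct_change(17_999, 24_642), 1))  # 36.9 -- "over one-third," 1979 to 2008
```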

A number of commenters to my post on the middle class didn’t like that the value of health benefits was included in my “comprehensive” income measure. I prefer including them in “income” because employer health care costs have caused earnings growth to be quite a bit lower than it otherwise would have been, and employer- and publicly-provided health insurance contribute to living standards. It is possible that the way the Census Bureau estimates the value of health insurance exaggerates improvements in well-being, but it is not simply the case that rapid health care inflation negates those estimates. Many health economists believe that rising health care costs do reflect corresponding improvements in the quality of care received. At any rate, whether or not you believe I have a dog in this fight, hopefully you believe that the Census Bureau doesn’t.

Nevertheless, we can look at the trend omitting the value of health insurance in 2008. Doing so offers a somewhat conservative estimate of the increase because I can’t omit the value of insurance from 1979. The increase, however, is 21 percent using the CPI-U-RS, and 29 percent using the PCE.

So it seems pretty likely that the living standards of the poor in the U.S. have improved fairly robustly in recent decades. Before leaving behind the question of trends, I should note that there is pretty overwhelming evidence that male workers who don’t get further education beyond high school have seen real wage stagnation (though the story for the median male worker, as I showed in the middle-class posts, is much better). The fact that household incomes at the bottom have grown reflects a decline in taxes paid, an increase in the value of means-tested benefits, and greater work among women (including single women). Computations I have done indicate that confining things to non-elderly households doesn’t affect the story importantly; nor does adjusting incomes for household size.

This issue of greater work among women is one of the last remaining objections to my case that I feel I need to address, because it is obviously key to the question of whether higher incomes really reflect improved living standards broadly construed. After all, we could all work more hours and sleep less, which would improve our incomes but not necessarily our quality of life. I’ll take this up in my next couple of posts, but suffice it to say, you can assume my read of the evidence doesn’t overturn the case I’ve been trying to make thus far.


(Cross-Posted at www.progressivefix.com--I'm behind putting these up on my blog...)

I spent a chunk of time on the train to New York yesterday reading through bloggers’ reactions to Democrats’ reactions to the Scott Brown victory in Massachusetts. And I’m confused.

First, an awful lot of liberal bloggers seem all too eager to advance a pernicious stereotype about the Democratic Party — that it is feckless, weak, wimpy, cowardly, unprincipled, etc. Look, it’s not that every Democrat was scared away from health care reform by the Brown win. As far as we know, very few were. If you want to make accusations of cowardice, aim them at those few specific legislators who have flip-flopped — the rest of the party can’t do much to make them vote in favor of reform. If President Obama didn’t come out as aggressively in favor of passing the Senate bill as you wanted, that’s probably because he knows he doesn’t have the votes and has little interest in self-immolation. By tarring the entire party, you aid and abet Republican efforts to caricature Democrats.

And for the love of God, if you no longer feel energized to elect Democrats in November because some congressman in some other state caved, well, you need to take a deep breath and count to 10. Losing health care would be a huge, regrettable defeat, but by sitting out November, you would also make the progressives in Congress who are worth supporting suffer for the sins of others.


(Cross-Posted at www.progressivefix.com--I'm late getting these up on my blog...)
There will be a mountain of analysis regarding the Brown victory in Massachusetts last night and what it means for health care reform. But what is striking to me this morning, skimming my RSS feeds, is the same thing I have found striking throughout the past year — how willfully ignorant liberal advocates of health care reform continue to be about public opinion on the Senate- and House-passed versions of health care reform.

There’s no need for extended analysis of the polling to make my point. Start with the basic favor/oppose trend for health care reform:

You can argue that people are uninformed. You can argue that Republicans have misled them. You can argue that people support something called “health care reform” as a general concept. But the numbers are what they are — only a minority supports the bills under consideration.

Faced with such numbers, reform advocates have defensively pointed out that much of the opposition to health care reform comes from the left, as if that somehow rendered the bills’ unpopularity irrelevant. What is devastating to their case, however, is a look at the intensity of views toward reform.

When assessing polling results, I have found it is crucial to employ what I call the Kessler Rule, after Third Way’s Jim Kessler. Jim argues that anytime someone tells a pollster that they are “somewhat” supportive or opposed to something, it basically means they don’t have strong feelings one way or another or that they have so little interest in the issue that they haven’t even formed an opinion. Rasmussen has been asking its respondents whether they “strongly” or “somewhat” support or oppose health care reform for months. The first time they asked was in August, during the congressional recess, when they found that 43 percent of respondents were strongly opposed, compared with 23 percent who were strongly supportive. Keep in mind, this was when the public option was still included in all major proposals, so liberal backlash was unlikely to have been much of a factor in this contrast.

The most recent poll Rasmussen conducted was over the weekend. Results: 44 percent strongly opposed, 18 percent strongly supportive.

You would think that such numbers would dent reform advocates’ confidence that the public overwhelmingly shares their preferences. You would be wrong. Instead, incredibly, health care reform was cited throughout the fall and winter as Exhibit A for why we need to get rid of the filibuster in the Senate! If something as popular as health care reform faced such difficulty winning passage, it was argued, then the Senate can no longer govern!

Now with Scott Brown’s defeat of Martha Coakley, advocates have bent over backwards making the case that the election of a conservative in one of the most liberal states in the country — to fill a seat vacated by the patron saint of health care reform, at a time when the result would determine the fate of reform — had nothing to do with public opposition to reform.

Rasmussen’s election night survey says everything you need to know about how much these advocates are kidding themselves: 78 percent of Brown voters strongly oppose the health care bills before Congress.

What’s my point? It’s not that the case for health care reform is bunk or that policymakers should make their decisions based on polls. Like many progressives, I think the House should pass the Senate bill and that they should fix it later. (Unlike most progressives, my “fixes” would involve moving in the direction of Wyden-Bennett or even a more generous version of the House Republican bill rather than in the direction of House Democrats.) It’s not that liberal advocates should not spin issues in ways that promote their policy preferences. It’s that they should not believe their own spin — the country remains moderate. But don’t take it from me — take it from the 2010 electorate in November.
(Cross-Posted at www.progressivefix.com--I'm late putting these up on my blog...)

My last post tackled inequality trends in the U.S. and how progressives ought to think about them. Now I want to look at middle-class living standards. In the course of basically agreeing with Dalton Conley that progressives should be more concerned with poverty than inequality, Kevin Drum argues that what got lost from the Conley analysis is the stagnation of the middle class (“sluggish middle class wages in a country that’s been growing energetically for decades”). And yesterday he endorsed the views of economist Raghuram Rajan, who blames the financial crisis on “the purchasing power of many middle-class households lagging behind the cost of living.”

Kevin has always been one of my favorite bloggers, but I have to disagree with him here. Both in terms of the level of income the typical American has and in terms of recent trends, a careful look at the data implies that the middle class is doing pretty well. The common belief among progressives that this isn’t the case causes us to misdiagnose what the nation’s most pressing economic problems are and to put forth an agenda that doesn’t resonate as strongly as we think it does.

My friend Steve Rose really deserves the most credit for trying to draw attention to the reality of middle-class living standards being better than the left believes. In a much-circulated report for PPI and in his analyses for Third Way, Steve showed that, for instance, when measured correctly, the typical working-age American’s income is much higher than official statistics imply.

Many progressives thought that Steve was somehow pulling a fast one, a view with which I strongly disagree, but let me make similar points in a more transparent way here. First, consider what many progressives consider “the good old days”—the height of the pre-1970s economic boom. In 1973, the median inflation-adjusted income was higher than it had ever been and higher than it would be again until 1978—$45,533 (in 2008 dollars). Call this the gold standard before, in the conventional progressive telling, things started going south.

How much did things go south? Well, in 2008 the median was $50,303. That’s right—about $5,000 higher (after adjusting for changes in the cost of living). This improvement understates things because households also became smaller over time, and because the inflation adjustment here probably overstates inflation. For instance, if one uses the Bureau of Economic Analysis’s Personal Consumption Expenditures deflator, the increase from 1973 to 2008 was about $7,700, or 18 percent. Not only does that still not adjust for declining household size, it also doesn’t include changes in taxes, non-cash benefits, the value of health insurance, and capital gains. Incorporating these adjustments shows an increase in living standards that is more like 40 percent.
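For readers who want to check the arithmetic, here is a quick sketch using only the figures quoted in this post; the PCE-deflated 1973 median is not stated directly, so it is backed out from the “$7,700, or 18 percent” figures:

```python
median_1973_cpi = 45_533  # 1973 median household income, 2008 dollars via CPI-U-RS
median_2008 = 50_303      # 2008 median household income

gain = median_2008 - median_1973_cpi
print(gain)                                    # 4770 -- "about $5,000 higher"
print(round(gain / median_1973_cpi * 100, 1))  # 10.5 -- growth under the CPI-U-RS

# The PCE deflator shows less inflation, so the 1973 median is smaller in
# 2008 dollars; backing it out from the post's 18 percent growth figure:
print(round(median_2008 / 1.18))               # ~42630 implied PCE-deflated 1973 median
```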

Rather than household income, others on the left point to stagnation in men’s wages (women’s wages have increased dramatically by any measure). For example, the Economic Policy Institute estimates that the median male worker’s hourly wage was $16.88 in 1973 and $16.85 in 2007. However, EPI’s figures show that when fringe benefits are taken into account, the median male worker’s hourly compensation increased by somewhere between 5 and 10 percent over this period. And these estimates don’t use the PCE deflator. Nor do they account for changes in taxation and public benefits—the very means we use to mitigate low income.

To review, “stagnation” of household income or male wages means that after adjusting them for the rising cost of living, they are as high as they were in the glory days of the 1960s and early 1970s—they have actually increased. When analysts on the left concede these increases, they then move the goal posts and argue that wages have not grown as much as they should have. Typically, they contrast modest wage growth with more rapid productivity growth. But too often these analyses are done on an apples-to-oranges basis. Critics left, right, and center have all pointed out flaws with the kind of comparisons that EPI and others make. Careful analyses reduce the gap between productivity growth and wage and income growth, though they don’t necessarily eliminate it. At any rate, economic theory says that compensation will increase with productivity all else being equal, and all else has not remained static.

It is certainly true that wage growth has been slower since 1973 than in the two previous decades. But that isn’t a realistic bar to use. The U.S. was the only major economy left standing after World War II, and there was little foreign competition putting downward pressure on manufacturing wages and jobs. The period between WWII and 1973 was anomalous—it could not have been expected to have lasted.

The other way to judge middle-class living standards in the U.S. is to compare them to those in other countries. The Luxembourg Income Study shows that at most points in the income distribution (the 25th percentile, the median, the 75th percentile), income in the U.S. exceeds that in nearly all European countries, including Sweden, the model for many on the left. (The most accessible evidence on this is in a 2002 article in the journal Daedalus by Christopher Jencks.) Determining how to incorporate publicly provided benefits such as education and health care is very complicated, but the evidence we have indicates that American middle-class living standards are at worst comparable to those in European nations.

Trying to persuade the middle class that it is worse off than it is potentially has harmful side effects. For one, as economist Benjamin Friedman and sociologist William Julius Wilson have argued, people are more generous when they feel they are doing well. When they feel economically threatened, they are more inclined to protect what they have than to help others. What’s more, widespread economic malaise can be a self-fulfilling prophecy, preventing people from making the individual choices that ensure, for instance, a strong recovery from recession. In terms of policy, the belief that the middle class is doing poorly can lead to scarce public resources being diverted to those doing relatively well rather than being used to help those truly in need. And politically, it can lead to a tone-deaf and unpersuasive populism that does little to help Democrats win in swing districts and close elections.

Again, the point here is that progressives should care about the facts. Up next…the poor.


(Cross-Posted at www.progressivefix.com -- I'm late adding these to my blog....)

Happy New Year everyone! I am very late to this debate, but I wanted to weigh in on the conversation launched by Dalton Conley’s pre-holiday American Prospect article on progressivism and inequality. In case you missed it, Conley argued that progressives shouldn’t care that much about inequality and that we should instead care about the poor. Inequality, he showed, has grown between the rich and the middle, but not between the middle and the poor. Bruce Bartlett, weighing in from the right, agreed.

I’ll address the living standards of the middle class and the poor in subsequent posts, but let me add my two cents about inequality trends in this one. An analysis I conducted back in November showed that what has likely happened is that the very top—the top one-half of one percent—has pulled away from everyone else, though the increase from 1980 to 2009 has probably been fairly modest. Whether this has been a good or bad thing—or, trends aside, whether higher inequality in the U.S. than elsewhere is a good or bad thing—ought to depend on three questions, empirical and normative, none of which we have much of a handle on.

First, how does letting the rich get richer affect the absolute living standards of everyone else? As Alan Reynolds has argued, measures of inequality tend to reinforce a fixed-pie conception of national wealth—gains by the rich come at the expense of everyone else. But of course, the pie is not fixed in size, and it may be that allowing the rich to get a greater share of the pie makes for a bigger pie and bigger slices for everyone (a point made by Bartlett). Think about Rawls’s maximin rule—that any inequality that results in the worst-off being better off is just. It’s not necessarily the case that greater inequality must help out those who fall behind, but it’s certainly plausible.

Second, how does letting the rich get richer affect the relative deprivation experienced by everyone else? There are two questions here. When the rich get richer, people at the bottom and even in the middle may get priced out of certain goods and services, as prices get bid up by the wealthy. On the one hand, it may be that yachts become less affordable to the non-rich, which presumably no one would get too worked up about. On the other hand, if the price of an Ivy League education or prime neighborhoods becomes unaffordable to the non-rich, that would have bigger implications. Beyond the issue of being priced out of goods and services, inequality may make the non-rich feel less well off—even if their absolute living standards improve. If the Nissan Sentra you own is nicer than the Chevy Cobalt you used to have but feels no better since more people are driving Jaguars than in the past, then there’s room for debate about whether you are “better off”.

Third, if inequality makes most people better off in absolute terms (by making the pie bigger) but makes them feel worse off in relative terms (if their bigger piece feels smaller than before because of how much bigger others’ slices have gotten), then how much weight are we to give each effect? Unlike the other two considerations, this one has empirical and normative dimensions. You may think that being better off but feeling worse off is a net change for the worse, while I may think that it’s only being better off that matters. Robert Frank has made the case—not entirely convincingly, in my view—for the former view.

If you’re looking for the answer to these questions in a blog post, then my heart goes out to you. What I will say is that a situation in which the top 1 in 200 pulls away from the bottom 199 is quite a bit different than a situation in which the top 40 pulls away from the bottom 160, since relative deprivation is likely to be a bigger problem in the latter case.

More to the point, reflexive soak-the-rich tendencies among progressives are unjustified—the details and the facts matter, unless you simply are opposed to inequality regardless of whether it might help the bottom and middle.

Middle-class living standards next…

(This is cross-posted from ProgressiveFix.com, the new online face of the Progressive Policy Institute, where I will be posting regularly.  Give 'em a look.)

To read the first part of this post, click here.

Defining the Center

Let’s examine Hacker and Pierson’s definition of “the center.” When they compare activists to independents, changes in the distance from independents may be due to growing extremism among activists. However, the distance may grow without activists changing their views at all if independents change their views. So saying Republican activists drifted further away from the center than Democratic activists may misstate what occurred; independents may simply have drifted toward Democratic activists over time without activists drifting anywhere. It’s also possible that Republican activists have grown more extreme, which has pushed independents closer to Democratic activists’ (unchanged) views.

Furthermore, secular changes in ideology over time can move people from the independent category into the Democratic and Republican camps and vice versa, making it difficult to say whether the changes identified indicate that activists (or independents) are changing their views, or whether it is just the flows into and out of the parties that are changing. If one of the parties looks more or less extreme, it could simply be that people who would have called themselves independent in the past are now identifying with one of the parties, making the leftover independents look somewhat more extreme in the opposite direction.

Rather than compare activists to independents, why not simply measure how far they are from the midpoint of the ideology scale? When one does so, one obtains the graph below.

By this measure, which avoids all of the problems with using independents as a reference point, the change in extremism among Democratic activists looks exactly the same as the trend for Republican activists. Once again, Republican activists look more extreme in any year, and this time (not shown) this remains the case when one looks at the unsmoothed data points.

A Better Way to Measure Ideology

There is also a problem with Hacker and Pierson’s measure of ideology. If we want to know whether party activists have become ideologically more extreme over time, we should use as pure a measure of ideology as possible. The measure Hacker and Pierson use, however, conflates ideology with tolerance and empathy because it is based on questions asking how warm or cold one feels toward liberals and conservatives. It could be that Democratic activists are simply more tolerant of their opponents than Republican activists rather than being more centrist. One can feel warmly toward a group without identifying oneself with it.

A better measure of changing ideology among party activists would be to look directly at changes in self-identified ideology. The NES asks respondents to place themselves on a 7-point scale ranging from extremely liberal to extremely conservative. Here, then, is a final chart showing trends for activists in each party, with ideology measured as the distance of activists from “4” – the midpoint of the seven-point scale. The actual data points are connected and the smoothed trends are shown as black dashed lines. It should be noted that this chart is based on even smaller sample sizes than Hacker and Pierson’s, so I show the margin of error for the data points as dashed vertical lines. I also omit off-year elections to make the chart less noisy.

This chart confirms that Republican activists more often than not have been more extreme than Democratic activists, though the two groups were statistically tied in 1972, 1976, 1992, and 2004. There is a clear trend toward greater extremism among Republican activists. Among Democratic activists, there was little consistency between 1972 and 1998, but they appear to have moved to the center in 2000 and 2002 before jumping up to the level of Republican extremism in 2004.

Finally, there is the claim by Hacker and Pierson that Democratic activists are more centrist than other Democrats. In my results, this was not true in 2004 whether one used the thermometer index or the self-identified seven-point ideology measure and was not true in 2002 unless one used the seven-point measure (which Hacker and Pierson did not). Regardless, none of the differences between the two groups – in my results or theirs – are statistically significant due to the small sample sizes.

In sum, Republican activists have generally been at least as extreme as Democratic activists and often more so, though not in 2004, which makes the Republican pattern seem less worrisome. Furthermore, while in 2002 it looked like Republican extremism had increased and Democrats had become more moderate, by 2004 Democrats had completely caught up to Republicans. Republican and Democratic activists were equally far from the center in 1972 and in 2004, so the shift was of the same magnitude for both. And there’s no reliable evidence that Democratic activists are more moderate than other Democrats.

The Bush administration and the Republican Congress may have used various tactics in order to pass an agenda that lacked strong support. But they were not “off center” if that phrase is taken to mean that their agenda was outside the bounds of what the public supported. Or more specifically, where Republicans succeeded, their agenda was not out of bounds. Hacker and Pierson downplayed the extent to which Republicans had to reach out to the center in what they did or did not favor. Education spending, for instance, increased more under Bush than under Clinton, in a nod to “compassionate conservatism.” Furthermore, where Republicans truly moved off center, they failed, as with Social Security privatization. And of course, 2006 and 2008 happened.

(This is cross-posted from ProgressiveFix.com, the new online face of the Progressive Policy Institute, where I will be posting regularly.  Give 'em a look.)

OK, to review the debate so far: I wrote a post suggesting progressives might want to think twice before jettisoning the filibuster. Ed thought twice and said, yup, still want to get rid of it. Ezra did the same. I wrote another post saying, oh well whatever nevermind, and tried to shift the subject to polarization being the real problem. I said I’d follow up about whether increasing polarization has been a one-sided affair. Crickets chirped. All hell broke loose on the health care reform front. And here we are.

So…one-sided polarization. Ever since Jacob Hacker and Paul Pierson’s Off Center, all good progressives know that the growing political polarization has been one-sided, with Republicans pulling public policy “off center” through various nefarious means. Right?

Well….yes and no. Hacker and Pierson argued that, as of 2005, Republican activists and legislators had grown more conservative, but Democratic activists and legislators had not grown more liberal (and had even moved to the right themselves in some regards). Along with this shift, Republicans had developed effective strategies to move public policy further rightward than the typical voter preferred.

Since the rightward shift of Republicans occurred during a period in which Hacker and Pierson showed the distribution of self-identified ideology had not changed, the implication was that the electorate was being deprived of the more progressive policies that it desired. But a closer look at their data and analyses shows that while the increase in polarization among legislators has occurred disproportionately among Republicans, the evidence hints that this is because it proceeded from a Nixon-era Democratic Congress that was well to the left of the electorate.

Rather than refuting the idea that policy reflects the preferences of voters in the middle (the “median voter theorem”), as Hacker and Pierson claimed, the evidence actually bolsters this view. Correcting their claims is important if progressives are to govern effectively. Republicans did not simply pull public policy to the right of where Americans preferred, and now that Democrats are back in control of Congress, progressives should not assume that the median voter is leftier than she really is.

Why Off Center Is Off

To argue their case, Hacker and Pierson turned to scores created by Keith Poole and Howard Rosenthal that put members of Congress past and present on a common scale measuring ideological position. Hacker and Pierson report that the polarization of Congress between the early 1970s and the early 2000s was almost entirely due to growing extremism among Republicans. Democratic legislators had not moved nearly as far from the center. Because of the increasing conservatism of Republicans, Congress was, in the early 2000s, far to the right of the median voter, who had not grown more conservative over time. But Hacker and Pierson’s account is flawed.

Consider the Senate.* Poole and Rosenthal’s scores, using every vote by every member of every Congress through the 108th Congress (which ran from 2003 to 2004), indicate that the “center” as of 2003-04 was typified by northeastern Republicans such as Lincoln Chafee, then-Independent Jim Jeffords, and William Cohen; Arlen Specter (now, of course, a Democrat); and by red-state Democrats such as Ben Nelson and John Breaux. In 1971-72, the median senator had a score of -0.056, equivalent to Ben Nelson’s score in 2003-04. By 2003-04, the median senator had a score of 0.061, equivalent to Arlen Specter in 2003-04.

This small change in the median of the Senate as a whole conceals the fact that, as Hacker and Pierson claim, Republican senators did move farther ideologically than Democratic senators did. The evidence that Hacker and Pierson presented describes how the median in one year compared with then-recent senators’ scores. In the early 1970s, according to Hacker and Pierson, the median Republican senator lay “significantly to the left of current GOP maverick John McCain of Arizona—around where conservative
Democrat Zell Miller of Georgia stood” [where the references to McCain and Miller are to their 2003-04 scores, italics in the original]. The median Republican senator’s score then “doubled” by the early 2000s so that it sat “just shy of the ultraconservative position of Senator Rick Santorum.”

These descriptions do not quite reflect what the Poole-Rosenthal scores show. The median Republican senator’s score in 1971-72 was 
equidistant between McCain in 2003-04 and Miller in 2003-04, not closer to Miller, and it was just as close to McCain as the median Republican senator’s score in 2003-04 was to Santorum.

This claim also raises a technical issue. The Poole-Rosenthal scores are not ratio scales with a meaningful zero point. The distance between 0.2 and 0.4 is supposed to be the same as that between 1.2 and 1.4, but 1.2 is not “six times as conservative” as 0.2, because a score of 0 does not indicate the complete absence of conservatism. The zero point is completely arbitrary. The doubling from 0.2 to 0.4 would become an increase of just 50 percent if we added 0.2 to all of the scores (from 0.4 to 0.6). We cannot know whether Republican senators grew twice as conservative between the early 1970s and the early 2000s. Indeed, the phrase “twice as conservative” has no obvious meaning.
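A toy calculation makes the point. The 0.2 and 0.4 scores here are the stylized figures from the discussion above, and the +0.2 shift is just one arbitrary re-zeroing of the scale:

```python
# Stylized NOMINATE-style scores: a median moving from 0.2 to 0.4.
early, late = 0.2, 0.4

pct_increase = (late - early) / early * 100
print(pct_increase)  # the score "doubles" (a 100 percent increase)

# Add an arbitrary constant to every score -- the same ideological movement,
# but the "percent increase" changes because the zero point is arbitrary.
shift = 0.2
pct_increase_shifted = ((late + shift) - (early + shift)) / (early + shift) * 100
print(pct_increase_shifted)  # now only about a 50 percent increase
```

Any statement of the form "twice as conservative" depends entirely on where zero happens to sit.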

More to the point, Hacker and Pierson’s interpretation of these results is an even bigger problem. Rather than the Republican Party drifting ever rightward (the whole time increasingly “off center”), if the Democratic Party was “off center” in the early 1970s, then the movement among Republicans could be interpreted as a restoration of an equilibrium reflecting voter preferences. This is exactly what appears to have happened.

First of all, the medians for the 2003-04 Senate were 0.379 for Republicans and -0.381 for Democrats – essentially identical in magnitude. That means that after this great rightward shift by Republicans, the parties were equally “extreme” by historical standards. Furthermore, the median Democratic senator in 1971-72 wasn’t much less extreme than the median senator from either party in 2003-04.

Second, at least in terms of self-identification, the ideological distribution of Americans was unchanged over this period, with roughly twice as many people calling themselves conservative as calling themselves liberal.**

Taking these facts together – a rightward shift by Republican legislators, an end state where Democrats and Republicans are equally “extreme”, and an ideological distribution among voters that was static over the period (and right-leaning) – the conclusion that best fits is that the Democratic Congress of 1971-72 was off center rather than the Republican Congress of 2003-04. The median Republican became more extreme over time, but that was because Congress became 
more representative of the electorate, not less. The story on the House side is much the same, except that the median Republican was a bit more “extreme” than the median Democrat by 2003-04 (although no more extreme than the median Democrat was in 1971-72).

Comparing the Activists

Hacker and Pierson also argue that Republican activists grew more extreme while Democratic activists became less so (becoming even less extreme than Democrats in general), but these claims are also problematic. Hacker and Pierson began by defining an activist as someone who self-identifies as a Democrat or a Republican and who participated in three out of five election-related activities asked about in the American National Election Studies. They measured ideology using a combination of two “thermometer” items – one of which asks respondents how warm or cold they feel toward liberals and one inquiring about conservatives. These scales range from 0 (cold) to 97 (hot). (The scale ends at 97 rather than 100 because in some years, the NES used codes 98 and 99 as missing value codes.) The liberal score is subtracted from 97 (so that high numbers then signify cold feelings) and then added to the conservative score. This number is divided by two, 0.5 is added to it, and the decimal is dropped. The resulting measure ranges from 0 (extremely warm toward liberals and extremely cold toward conservatives) to 97 (extremely cold toward liberals and extremely warm toward conservatives).
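As a sketch, the construction just described reads like this in code (a minimal translation of the steps above; the function name is mine, not Hacker and Pierson’s):

```python
def activist_index(lib_therm: int, con_therm: int) -> int:
    """Combine the two 0-97 thermometer items into one ideology score.

    lib_therm / con_therm: warmth toward liberals / conservatives
    (0 = cold, 97 = hot).  Returns 0 (warmest toward liberals, coldest
    toward conservatives) through 97 (the reverse).
    """
    reversed_lib = 97 - lib_therm             # high values now mean cold toward liberals
    combined = (reversed_lib + con_therm) / 2
    return int(combined + 0.5)                # add 0.5 and drop the decimal

print(activist_index(97, 0))   # 0: as warm toward liberals as possible
print(activist_index(0, 97))   # 97: the mirror image
```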

To determine how far activists drift from the center, they compared the activist scores on this index to the scores for independent voters. The distance from independents is expressed in percentage terms (e.g., 10 percent more conservative or liberal). Hacker and Pierson plotted the average distance from independents for Republican and Democratic activists and then “smoothed” the trends by imposing curves to describe them. The result is a graph that I replicated, more or less:

The graph shows that Republican activists were more extreme than Democratic activists to begin with, that they became more conservative over time, and that after becoming more liberal, Democratic activists tacked back toward the center. The first important thing to note about this graph is how much the nice, smooth lines depend on fitting the data points to a quadratic equation. The original data – without the smoothing – looks much messier:

The upward trend among Republican activists is still readily apparent, but the trend for Democratic activists no longer points toward moderation. The bouncing around is partly due to different turnout patterns in off-year elections, but also a result of statistical noise, as the sample sizes for each group are less than 70 – and as low as 18 – in each year. Furthermore, Republican and Democratic activists are statistically the same distance from the center for much of the period between 1968 and 1992. To illustrate further how deceptive the smoothed trend lines can be, look what happens to them when 2004 data – which was not available when Hacker and Pierson created the graph – is added:

The Republican line hardly changes, but now Democratic activists appear to grow steadily more liberal. It still appears as though Republican activists drifted from the center more than Democratic activists did, and Republican activists look more extreme in all years.
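The sample-size point is easy to illustrate with a simulation. The true mean and standard deviation below are invented purely to show the scale of the noise, not taken from the NES:

```python
import random
import statistics

random.seed(0)

def yearly_estimate(n, true_mean=10.0, sd=25.0):
    """One survey 'year': the mean distance-from-center for a sample of n activists."""
    return statistics.fmean(random.gauss(true_mean, sd) for _ in range(n))

# Fifty simulated years in which the true activist position never moves:
small_n_years = [yearly_estimate(18) for _ in range(50)]    # the worst-case sample size
large_n_years = [yearly_estimate(2000) for _ in range(50)]

# The n=18 series bounces around roughly ten times as much as the n=2000
# series, even though nothing real is changing underneath.
print(statistics.stdev(small_n_years))
print(statistics.stdev(large_n_years))
```

With samples that small, year-to-year zigzags are exactly what you would expect even if activists never moved at all.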

OK, take a breather. Tomorrow I’ll wrap up with some revealing evidence about how Hacker and Pierson’s definition of “the center” affects these analyses of political activists.

To read the second part of this post, click here.


* Following their recent book, Polarized America (McCarty, Poole, and Rosenthal, 2006), I use scores on Poole and Rosenthal’s first DW-NOMINATE dimension (for details, see  
http://polarizedamerica.com/ and http://www.voteview.com). Hacker and Pierson report using “d nominate” scores, but these are only constructed through the 99th Congress, so I am inclined to believe that they too used the first DW-NOMINATE dimension scores.

** Hacker and Pierson (2004), page 38. Hacker and Pierson cite ANES data. According to Gallup data showing self-identified ideology, the breakdown among Americans as a whole in 2004 was roughly 20 percent liberal, 40 percent moderate, and 40 percent conservative (Wave 2 of the June Poll, Question D10). In 1972, it was 25 percent, 34 percent, and 37 percent (Poll 851, Question 14).
(This is cross-posted from ProgressiveFix.com, the new online face of the Progressive Policy Institute, where I will be posting regularly.  Give 'em a look.)

If you’ll forgive me for egregiously mixing metaphors, I want to draw attention to an implicit assumption among many health care reform advocates about controlling healthcare spending: that if not for the politics involved, it would be fairly easy to rein in costs.

That’s because, the argument goes, there is easily identifiable inefficiency in the way we currently spend health care dollars. There are enormous regional disparities in, for instance, per capita Medicare spending. What is more, these differences are apparently unrelated to differences in the health of the underlying populations, and they don’t produce better outcomes. Rather, the differences reflect the ways that health care providers diagnose and treat patients in different parts of the country. So say the much-revered Dartmouth College health researchers, whose findings have been fairly uncritically embraced by many on the left.

Politics aside (the difficulty being that one person’s wasteful diagnostic test is another’s life-saving intervention), I was always suspicious of this argument. If there are excess profits to be made, why do providers in only some parts of the country go after them or successfully extract them? Then a fascinating study came out that was mostly ignored but should have raised questions about the Dartmouth research.

A potential problem with the Dartmouth research is that if there are unmeasured differences in health between patients who go to different providers, then the finding that greater spending is unrelated to outcomes could simply derive from people in worse health being very expensive to treat. The Dartmouth researchers use relatively crude measures to statistically control for these differences (because they are the only ones available).

MIT economist Joseph Doyle got around this problem by looking at patients who needed emergency care while they were visiting Florida. Because there is no reason to expect that unhealthy tourists are more likely to end up in higher-spending ERs, any differences in outcomes between those who went to high-spending hospitals and those who went to low-spending ones should reflect only the spending difference. Doyle found that higher spending did produce better outcomes.

Disparities in Data

Now MedPAC, the panel that monitors how Medicare reimburses providers and makes recommendations to Congress, has released a study that shows that disparities in Medicare spending are quite a bit smaller when other important factors — such as regional differences in wages and extra reimbursement related to medical education — are taken into account (hat tip to Mickey Kaus). If one looks only at per capita Medicare spending, high-spending areas of the country have costs that are 55 percent higher than low-spending areas of the country (I’m talking about the 90th and 10th percentiles, for those of you statistically inclined). After making MedPAC’s adjustments, however, that difference shrinks to 30 percent.

Thirty percent might still be considered a big number — in a perfect world adjusted spending shouldn’t differ at all — but other evidence in the MedPAC data gives reason to question the precision of any of these kinds of comparisons. I put the figures for all 404 geographic areas into a spreadsheet (which you can get from me if you’re interested — data wants to be free!) and looked at the top and bottom quarter of adjusted spending.
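For the statistically inclined, the 90th-to-10th percentile comparison is simple to compute. The per capita figures below are invented for illustration (they are not MedPAC’s numbers), but they show how adjustment compresses the gap:

```python
from statistics import quantiles

def spending_gap(per_capita):
    """Percent by which 90th-percentile area spending exceeds 10th-percentile spending."""
    cuts = quantiles(per_capita, n=10)   # decile cut points: cuts[0] = P10, cuts[8] = P90
    return (cuts[8] / cuts[0] - 1) * 100

# Made-up figures for twenty areas, raw and after wage/education-style adjustments:
raw = [5800, 6100, 6400, 6700, 7000, 7300, 7600, 7900, 8200, 8500,
       8800, 9100, 9400, 9700, 10000, 10300, 10600, 10900, 11200, 11500]
adjusted = [7200, 7350, 7500, 7650, 7800, 7950, 8100, 8250, 8400, 8550,
            8700, 8850, 9000, 9150, 9300, 9450, 9600, 9750, 9900, 10050]

print(spending_gap(raw))        # a wide gap before adjustment
print(spending_gap(adjusted))   # a much narrower gap after
```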

High-spending areas are dominated by the South, particularly the states stretching from Florida across to Texas and Oklahoma. They also include 15 of the 30 biggest metropolitan areas, including all of the biggest southern and midwestern metros, save Atlanta and Minneapolis, and none of the biggest northeastern or western metros, save Los Angeles, Las Vegas, Phoenix, Denver, and Pittsburgh.

On the other hand, low-spending areas are dominated by the West, particularly Alaska, Hawaii, Washington, Oregon, Idaho, and most of California (with the exception of Los Angeles and San Diego). Also overrepresented are small metropolitan areas in the upper Midwest and Dakotas, in New York, Maine, Virginia, and Georgia. None of the biggest ten metropolitan areas are represented in the bottom quarter, and only four of the biggest thirty are (San Francisco, Seattle, Portland, and Sacramento).

Compare these findings to those of the Dartmouth folks (Map 1). While many of the same conclusions show up in their map, there are some notable differences. Most importantly, California and the Boston-Washington corridor look like they spend a lot more in the Dartmouth map than they do in the MedPAC data (and the Mountain West states look like they spend a lot less).

Fixing Inefficiencies Not a Silver Bullet

If different sets of rankings differ as notably as these two do, then that says to me that there is a lot of noise in these rankings and that perfectly adjusted spending figures would potentially produce a distribution of areas that would look different from either set. In particular, I suspect that it would show that the vast majority of spending variation could be explained by factors that had nothing to do with inefficiencies.

The point is that even discounting the political difficulties of enacting policies that rely on comparative effectiveness research to weed out inefficiencies in healthcare spending, it’s not at all clear that regional variation in healthcare spending is proof that such inefficiencies exist. That’s not to say that there are no inefficiencies, but weeding them out won’t be as simple as making Florida providers act like Minnesota ones.

The views expressed in this piece do not necessarily reflect those of the Progressive Policy Institute.


I was going to title this post, “Ed Kilgore, You are Dead to Me,” but then again, I like Ed a lot, and he’s far more knowledgeable about politics than I am, and I don’t disagree with much of what he’s said about the filibuster.

Just as Ed isn’t “hell-bent on eliminating the filibuster,” neither would I shed many tears if it were to go away. I, too, object to how routine filibuster threats have become. That said, I do think that eliminating it could hurt progressive aims. Ed’s own point that the Senate “has a built-in red-state bias” is the reason: given that bias, red-state priorities are the likelier beneficiaries of the filibuster’s elimination.

What I’d like to do here is write the first of a couple of posts on political polarization defending my position that the filibuster wouldn’t be such a problem if we could make Congress more representative of the nation. I think this point is actually implicit (almost explicit!) in commentary from Mark Schmitt and Ezra Klein that notes how the routinization of the filibuster is a recent phenomenon that owes its timing to the completion of what Bill Galston and Elaine Kamarck have called “The Great Sorting-Out.” Over the past 40 years, liberal Republicans and conservative Democrats have gone the way of the dodo, making the parties more polarized along ideological lines.

LBJ could count on Medicare passing in 1965 because the existence of liberal and moderate Republicans made the successful deployment of the filibuster unlikely. On the GOP side, conservatives would have had to court a sizeable number of right-leaning Democrats to make a filibuster threat credible. The difficulty of doing so (particularly with a southern Democrat as intimidating as LBJ applying countervailing pressure) gave Republican moderates little incentive to go along with such a threat. On the Democratic side, the opportunity for a single senator to engage in grandstanding or deal-making in exchange for his vote was limited by the same dynamics — the ability to get moderate GOP votes would have allowed the leadership to ignore such threats. Unless the issue was one as momentous and controversial as civil rights, southern Democrats and conservative Republicans would not collaborate across the aisle.

Fast-forward to 1994, when there were far fewer conservative Democrats and far fewer moderate Republicans. In such an environment, the filibuster became an obvious strategy — because it could work. The filibuster was not a problem until the completion of The Great Sorting-Out. (And yes, Republicans have deployed filibuster threats far more often than Democrats have, largely because the Democrats are more dependent on their moderates than the Republicans are on theirs — a point to which I’ll return in the next post.)

Now, Ed is right that the power that party primaries give the least-moderate voters is not solely to blame for this (though let’s not discount the likelihood that the primary reforms between 1968 and 1972 accelerated the ideological sorting between the parties). But a solution to political polarization need not address its causes.

The key questions, it seems to me, are (1) whether one thinks that the parties are ideologically representative of their supporters or members and (2) whether one thinks that that is true on both sides. Kicking (2) to my next post, I’ll just say that Morris Fiorina’s research definitively shows that the obvious political polarization among elites, political junkies, and elected officials is not reflected among Americans as a whole. The reason that we have more political polarization — even between presidential candidates — is that the candidates on offer have been chosen by less-moderate primary voters and activists. Because relatively moderate voters still have to choose between two options, the growing polarization of party activists and primary voters translates into growing polarization among elected officials — even as the electorate has remained relatively moderate.

Whether you think the electorate is, in its heart of hearts, moderate is irrelevant in some sense, but what is fairly clear is that at least by the measures available, it has not become more polarized. And to circle back to my original contention that progressives should think twice before wanting to throw out the filibuster, political polarization makes the filibuster more important as a check against small majorities. The less moderate the two caucuses are, the more unrepresentative of popular preferences will be the legislation that can pass with narrow margins.
