Today, the DC progressive/new media promotional machine launches Jacob Hacker and Paul Pierson's new inequality tract, Winner-Take-All Politics, with an event at the New America Foundation.  I don't want to tell you not to buy the book or that it is likely to be wrong—I've bought it myself, but only just started it.  What I do want to tell you is that since Hacker has been making grand statistics-based arguments—beginning with his and Pierson's Off Center, and continuing with The Great Risk Shift—his books have been provocatively and cogently argued, have told progressives exactly what they want to hear, and have been based on statistical evidence that I have found to be completely wrong.

First, in Off Center, Hacker and Pierson argued that Republican success in the aughts invalidated the "median voter hypothesis" that argues that the parties will tend to take policy positions oriented toward the preferences of moderate voters.  They claimed that in recent decades, the Republican caucus had moved steadily rightward (true) while the Democratic caucus had, if anything, moved rightward too (ehhh...OK).  But because the ideological distribution of the electorate hadn't changed, that meant that Republicans had somehow pulled policy "off center", which Hacker and Pierson say was accomplished through various dirty tricks and hard-knuckled tactics.

What actually happened is that at the start of the 1970s, the Democratic Congress was "off center"—to the left of voters—and so the rightward shift of Congress and the Republicans reflected a move that produced a Congress more consistent with the views of voters.  In other words, the median voter hypothesis explains the changes rather well.

In The Great Risk Shift, Hacker argued that economic volatility had skyrocketed—more than doubling between 1974 and 2002 and nearly quadrupling between 1974 and 1994 alone.  Oops—these results turned out to hinge on an arcane methodological issue that Hacker should have caught.  When I uncovered this problem, Hacker was forced to revise his book for the paperback edition (no, you won't find documentation that my discovery was the reason behind the revision, but it's in my in-box archives).  When I produced my own estimates of income swings, I found that they had increased over time, but rather modestly, so that if a household's typical income swings were 15 to 16 percent of their income in the early 1970s, they were probably about 17 to 18 percent in the early 2000s.

I also found that, contrary to Hacker's assertions, the evidence on economic risk in other aspects of life also implied fairly modest changes in recent decades.

(Incidentally, I will present evidence in a month at a Census Bureau conference that Hacker's latest effort, an "economic security index" for the Rockefeller Foundation, is also botched.  Full details once I have the green light to circulate them after the conference.)

I hope to blog over the next two or three weeks on the new book as I get further into it, but today, let me just provide some commentary on two central claims about what has happened to inequality.

Claim #1:  The share of income going to the top 1 percent increased from 8 to 18 percent from 1974 to 2007—from 9 to 24 percent including capital gains.

A strong case can be made that the increase over time was just four percentage points or less, not 10 or 15.  In a post I did a year ago, I showed that much of this increase could be explained by two phenomena: (1) a steady rise in filing as "subchapter S" corporations (with income reported on individual tax returns) instead of "subchapter C" corporations (with income not included on individual tax returns and thus missing from the IRS data Thomas Piketty and Emmanuel Saez use, on whose work Hacker and Pierson rely); and (2) a jump from 1986 to 1988 in the income wealthy taxpayers reported on individual returns, as they shifted compensation away from fringe benefits and stock options not reported on individual returns in response to the tax changes of 1986.  These insights are not mine—they come from Cato's Alan Reynolds, who has been making these points for several years now.  Adjusting the trend line for these changes (using data from Saez), I showed that the top's share of income probably rose only 4 percentage points rather than 10 from 1974 to 2006 (the increase from 1974 to 2008 would be similar—Saez just released his 2008 estimates).

Are these adjustments warranted?  Well, making them produces estimates that match the trend found by Richard Burkhauser and his colleagues using the Current Population Survey.  The adjusted estimates also raise doubts about the claim that the top income share has not been higher since 1928 (since they put the top share, when capital gains are excluded, lower than in every year between 1928 and 1941).

Furthermore, while I have not seen any research examining the question, I am pretty sure that these levels—and the increase over time—would be lower with ideal data.  Piketty and Saez identify the top 1 percent in their data by estimating the number of single adults and married households in the population and using that as their baseline.  From there, they simply count down from the richest tax returns until they have a group equal to one percent of this baseline.  But as Steve Rose notes in his new book, Rebound, their baseline is almost 30 percent higher than the number of households in the U.S., primarily because of multiple tax returns in households with roommates, unmarried romantic partners, and adult and teenage children.  Inflating the overall baseline by nearly 30 percent means inflating the number of tax units counted in the top one percent by 30 percent too, which would not be problematic except that we can presume that essentially all of the inflation in the IRS data occurs in the bottom 90 percent.

Let's talk concrete numbers to give a sense of why this is an issue.  In 2007, Piketty and Saez use 150 million "tax units" as their baseline population, meaning that to look at the top one percent, they need to focus on the 1.5 million richest tax returns in the IRS data.  Rather than look at households, as Rose does, let's just distinguish families and unrelated individuals from each other and look at them (a more conservative approach than looking at households, since there are fewer households).  The Current Population Survey indicates that there were just 134 million of these, about ten percent fewer than the number of tax units.  So the top one percent of families/unrelated individuals included 1.3 million returns rather than 1.5 million.  To know what share of income the "top one percent" received, one should look at the 1.3 million richest tax returns, not the 1.5 million richest.  By looking at the richest 1.5 million, the "true" top one percent is exaggerated by about 15 percent.

To back into the more meaningful figure, we can assign incomes to those 200,000 excess tax returns and subtract them from the aggregate received by the top 1.5 million.  A rough way to do this is to give them the average income received by filers in the top five percent of income, which in 2007 was $364,000 according to the IRS data (in 2008 inflation-adjusted dollars).

One other adjustment should be made—because those 16 million additional tax returns in the IRS data represent people with relatively low incomes (think teenagers and college kids), the aggregate amount of income in the bottom 90 percent is lower than in the CPS.  The difference should be added to what the IRS shows as total aggregate income (the denominator when computing income shares).  

Making these adjustments, the top one percent received 15.5 percent of income in 2007 rather than the 18.3 percent indicated by the Piketty/Saez results.  And that doesn't include the adjustments I outlined above due to tax law changes.  For 1974, the figures are 7.0 versus 8.1 percent.  The increase over time is then 8.5 percentage points rather than 10.2.  Combine this adjustment with the analysis I conducted around the effect of tax law changes, and the increase in the top share since 1974 gets awfully small—probably less than a four-point rise over 35 years.
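For readers who want to check the arithmetic, here is a rough code sketch of the tax-unit adjustment.  The aggregate income totals behind the 18.3 and 15.5 percent shares are not given in this post, so the sketch only works through the intermediate quantities; all inputs come from the paragraphs above, and the 200,000 excess-return figure reflects rounding the CPS top group to 1.3 million (the unrounded count is closer to 160,000).

```python
# Back-of-envelope arithmetic for the tax-unit adjustment described above.
# All inputs are the 2007 figures quoted in the post.

tax_units = 150e6             # Piketty/Saez "tax unit" baseline
families_and_indivs = 134e6   # CPS families plus unrelated individuals

ps_top_group = 0.01 * tax_units              # 1.5 million richest returns
true_top_group = 0.01 * families_and_indivs  # 1.34 million (rounded to 1.3 in the post)

# How much the Piketty/Saez top group overstates the "true" top one percent
exaggeration = ps_top_group / true_top_group - 1   # ~12% unrounded; ~15% using 1.3 million

# Income to strip from the top group's aggregate: assign each excess return
# the average income of the top five percent ($364,000 in 2008 dollars)
excess_returns = ps_top_group - true_top_group     # ~160,000 (post rounds to 200,000)
avg_top5_income = 364_000
income_removed = excess_returns * avg_top5_income  # roughly $58 billion

print(round(excess_returns), round(exaggeration, 3), round(income_removed / 1e9, 1))
```

The adjusted share then comes from removing that amount from the top group's numerator and adding the CPS/IRS bottom-90 gap to the denominator, as the next paragraphs describe.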

Claim #2: The nation has moved steadily from "Broadland"—typified by the expansion of the 1960s, when most of the income gains went to the bottom 90 percent—to "Richistan", where over half the gains go to the top one percent.

These numbers are actually pretty solid and robust to shortcomings of the IRS data.  However, focusing on changes in the income share is actually a pretty uninformative way of looking at things.  Consider the last expansion, from 2002 to 2007.  Something like 60 percent of the income gains went to the top one percent.  It's only a slight oversimplification to say that what happened was that Greenspan and Bernanke juiced the economy by keeping interest rates low, which had the unfortunate side effect of sparking all sorts of crazy in the financial sector (including pay increases almost surely out of line with the value these geniuses added to the economy).  This is Raghuram Rajan talking, but I think he's completely right.  When businesses failed to invest, the result was a weak recovery, small income gains for most Americans, and enormous gains to a bunch of 12-year-olds on Wall St.

For the median family/unrelated individual in the bottom 90 percent, the increase in income according to the CPS was from $37,000 in 2002 to only $38,000 in 2007.  Of course, health insurance costs were rising rapidly during this period, so the increase in total compensation was greater.  But still, warm beer.

Consider the counterfactual, however.  What if the Fed hadn't goosed the economy?  Or what if Congress had taxed the income of financial "wizards" until the gains to the top were much smaller?  Either might have mitigated the share of gains that went to the top.  But neither would have helped the bottom 90 percent much, and without the monetary stimulus (and the tax stimulus from the Bush Administration), the expansion of 2002 to 2007 might instead have been the expansion of 2004 to 2007, with a prolonged recession dragging into the mid-2000s.  Of course, the bursting of the bubble in 2008 would not have happened either in that case, but it's not at all clear (to me) that the bottom 90 percent would be in a better place in this counterfactual scenario, despite having successfully limited gains to the top.

On the other hand, had the expansion been broad and robust, producing solid gains for the bottom 90 percent even as the twelve-year-old Wall Streeters still received 60 percent of the gains, I don't know that there would be as much reason for frustration as many feel today.  Hacker and Pierson's story is about us versus them, but it seems to me that they don't persuasively defend this view.  If we can have a bigger pie, but only if we let the rich have a bigger piece of it, then the whole question gets a lot more complicated.  Research by my former advisor, Christopher Jencks, indicates that higher inequality doesn't seem to increase a country's growth, but nor does it hurt it.  I'm with Dalton Conley—we should care less about inequality and more about living standards at the bottom.

More soon....
 
 

(cross-posted at ProgressiveFix.com and FrumForum)
*added note: Mike informs me that I missed the joke in his title, a Scott Pilgrim vs. the World nod.  I like to think I'm clever and witty, but clearly my lack of sleep from parenting a newborn has left me not so quick on the uptake...

Mike returned from vacation and promptly put up a post criticizing my take-down of Edward Luce's horrible Financial Times piece on "the crisis of the middle class".  It's become apparent to me over the past few years I've been in D.C. that you can't dispute a specific empirical claim about the situation of the poor or middle class (e.g., that it is in crisis, as in much worse off than in the past) without being attacked on much broader grounds than you staked out and being called an opponent of these groups or an insensitive jerk.  I actually don't disagree with much that Mike writes "against" my "views".

What I do disagree with is the contention that the middle class is in crisis.  And I think it's bad to believe (and assert to mass audiences) that it is, because doing so hurts consumer sentiment, prolonging high unemployment, and diverts attention from the truly disadvantaged, who really are in crisis.  Mike can say that that pits me against the middle class (his post was titled, "Scott Winship versus the Middle Class"), but then let me ask Mike and others who would disagree with me a simple question:  Why do you think Americans are deluded about their economic conditions, given that in June, 7 in 10 American adults said their "current household financial situation" is better than "most" Americans' (Q.25; disclosure: the poll was commissioned by my old employer)?  Why are you against the middle class?

Mike says that when I point out that some problem affects a tiny fraction of the population, that's like a hit man saying that he doesn't kill that many people as a fraction of the population--the "Marty Blank gambit," as he calls it.  But look, that's not an apt analogy.  If I were saying that we shouldn't give a rat's ass about the tiny share of the population that experiences a bankruptcy, that would be the Marty Blank gambit.  I never said that, and I wouldn't.  But if you convince everyone in the middle class that they are just one bad break away from bankruptcy, you shouldn't be surprised when they don't spend their money and the recovery continues to stall.  It's important to convey the facts correctly.  Mike is stalling the recovery!  Why are you against the middle class, Mike??

Finally, I think the best chart I've seen that puts all of this into perspective (which I made myself) is the following showing health insurance trends:
[Chart: health insurance coverage trends]

Anyone who wants the data can email me at scott@scottwinship.com.


And contrary to Mike's assertion, the fraction of under-insured has not increased.  You can read the conclusion of my dissertation if you want to see what the facts show.

I'll keep being concerned about the people who are in crisis, but I'm not going to buy in to the conventional wisdom among progressives that the middle class is in crisis.
 
 
Kevin notes my last post and then wonders, “What I'm more curious about is what this looked like in the 50s, 60s, and 70s. Was optimism about our kids' futures substantially higher then?”
  
The results I showed were mostly from a fantastic database of polling questions called “Polling the Nations”, which I recommend to everyone (though it’s not free, it’s not that expensive relative to other resources).  That’s why they only start in the mid-80s, and there’s a gap between the mid-00s and the two or three polls I cite from this year and last (my look at this question was a few years ago).
  
Anyway, Kevin’s query reminded me that there’s another compilation of polling questions that is also amazing—the book What’s Wrong, by public opinion giants Everett Carll Ladd and Karlyn Bowman.  And it’s a free pdf.
  
So, let me add some results to those I posted before.  I’m focusing, to the extent possible, on questions that ask parents about their own children.  When people are asked about “kids today” instead of their own kids, they are much more likely to be Debbie Downers—a phenomenon that journalist David Whitman dubbed the “I’m OK, They’re Not” syndrome, which is much more general than questions about children’s future living standards.  Also, let’s be careful to distinguish between levels and trends.
  
First, let’s look at the confidence parents have that life for their children will be better.
·       Roper Starch Worldwide (1973)—26% were very confident, 36% only fairly confident, and 30% not at all confident
·       Roper Starch Worldwide (1974)—25% very confident vs. 41% only fairly vs. 28% not at all
·       Roper Starch Worldwide (1975)—23% vs. 39% vs. 32%
·       Roper Starch Worldwide (1976)—31% vs. 39% vs. 25%
·       Roper Starch Worldwide (1979)—25% vs. 41% vs. 29%
·       Roper Starch Worldwide (1982)—20% vs. 44% vs. 32%
·       Roper Starch Worldwide (1983)—24% vs. 38% vs. 33%
·       Roper Starch Worldwide (1988)—20% vs. 45% vs. 28%
·       Roper Starch Worldwide (1992)—17% vs. 46% vs. 31%
·       Roper Starch Worldwide (1995)—17% vs. 44% vs. 34%
·       Washington Post/Kaiser Family Foundation/Harvard (2000)—46% said they were confident that life for their children will be better than it has been for them, vs. 48% saying no
  
That last one shouldn’t be directly compared with the others—not only did it offer only a yes-or-no response, it was also asked of all adults.  More on that in a sec.  What we see from the Roper surveys is a fairly steady decline in solid confidence, but not much of a trend in pessimism.  The main dynamic is that parents have moved from being “very” confident to “only fairly” confident.  It looks like there may have been a small decline in optimism from the late 1980s through the mid-1990s.  But it’s interesting that from 1973 to 1995, between 61% and 70% were at least fairly confident that their kids would be better off.
  
The Washington Post polling result provides a nice opportunity to look at the I’m OK, They’re Not pattern, since all adults were asked the question, even though fewer than half had children under 18 in their household.  In a poll my employer* commissioned from Greenberg Quinlan Rosner Research and Public Opinion Strategies, we asked parents about their expectations for their children’s living standards.  We asked people who had no children under 18 at home about “kids today”.  Pooling everyone together, 47% of adults said kids would have higher living standards. But the parents were much more optimistic about their own children, with 62 percent saying their kids’ living standards would improve.  So the Washington Post result might have been right in the range of the Roper results had the question been asked only of parents.
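To get a rough sense of the size of the I’m OK, They’re Not gap, we can back out the optimism implied among non-parents from the pooled figure.  The parent share of adults below is an assumption for illustration (the poll says only that fewer than half of adults had children under 18 at home):

```python
# Implied non-parent optimism from the pooled poll result discussed above.
pooled = 0.47        # all adults: kids will have higher living standards
parents = 0.62       # parents asked about their own children
parent_share = 0.45  # ASSUMED share of adults with children under 18 at home

# pooled = parent_share * parents + (1 - parent_share) * nonparents
nonparents = (pooled - parent_share * parents) / (1 - parent_share)
print(round(nonparents, 2))  # → 0.35
```

Under that assumption, only about 35 percent of non-parents were optimistic about “kids today”, versus 62 percent of parents about their own kids—the I’m OK, They’re Not pattern in a nutshell.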
  
Other polls have asked whether parents think their children will be better off when they are the same age:
·       ABC News/Washington Post (1981)—47% said better off vs. 43% not better off (non-parents told to imagine they had children)
·       ABC News/Washington Post (1982)—43% vs. 41%
·       ABC News/Washington Post (1983)—44% vs. 45%
·       ABC News/Washington Post (1985)—62% vs. 29%
·       ABC News/Washington Post (1986)—74% vs. 19%
·       ABC News/Washington Post (1991)—66% vs. 25%
·       Newsweek (1994)—47% vs. 39% worse off (question uses “better off” rather than “better off financially”, asked only of adults with children under 18 in the household)
·       ABC News/Washington Post (1995)—54% vs. 39%
·       ABC News/Washington Post (1996)—52% vs. 39%
·       Pew Research Center (1996)—51% said their children will be better off than them when they grow up (question uses “better off” rather than “better off financially”, asked only of adults with children under 18 in the household)
·       Pew Research Center (1997)—51%
·       Pew Research Center (1999)—67%
  
So optimism declined between the mid-1980s and early-1990s, recovered starting in the mid-1990s, and generally remained above early-1980s levels (when the economy was in recession).  Except for 1983, majorities or pluralities held the optimistic position.
  
Another series of polls asked parents whether their children will have a better life than they have had.  They also indicate a decline in optimism from the late 1980s to the early 1990s and a subsequent rebound:
·       BusinessWeek (1989)—59% said their children will have a better life than they had (and 25% said about as good)
·       BusinessWeek (1992)—34% said their children will have a better life than they had (and 33% said about as good)
·       BusinessWeek (1995)—46% said their children will have a better life than they have had (and 27% said about as good)
·       BusinessWeek (1996)—50% expected their children would have a better life than they have had (and 26% said about as good)
·       Harris Poll (2002)—41% expected children will have a better life than they have had (and 29% said about as good)

Strong majorities thought their children would have as good a life as them or better, and while more people expected a better life for their kids than expected a worse one, optimism failed to win a majority of parents in a number of years.  The trends appear to reveal a decline in optimism from the mid- or late-1990s to the early 2000s.  Considering all of these trends thus far, a fairly clear cyclical pattern is emerging, as Kevin observed in his post.
  
The early 2000s dip also shows up in Harris Poll questions asking whether parents feel good about their children’s future:
·       Harris Poll (1997)—48% felt good about their children’s future
·       Harris Poll (1998)—65%
·       Harris Poll (1999)—60%
·       Harris Poll (2000)—63%
·       Harris Poll (2001)—56%
·       Harris Poll (2002)—59%
·       Harris Poll (2003)—59%
·       Harris Poll (2004)—63%

The dip is revealed to be related to the 2001 recession, as optimism rebounded thereafter, again following the business cycle. Again, solid majorities generally take the optimistic position.

The longest time series available asks parents whether their children’s standard of living will be higher than theirs.  Unfortunately, it appears that most of these polls ask the question of adults without children too:
·       Cambridge Reports/Research International (1989)—52% said their children’s standard of living will be higher vs. 12% lower
·       Cambridge Reports/Research International (1992)—47% vs. 15%
·       Cambridge Reports/Research International (1993)—49% vs. 17% lower
·       Cambridge Reports/Research International (1994)—43% vs. 22% lower
·       General Social Survey (1994)—45% said their children’s standard of living will be better vs. 20% worse
·       Cambridge Reports/Research International (1995)—46% vs. 17% lower
·       General Social Survey (1996)—47%
·       General Social Survey (1998)—55%
·       General Social Survey (2000)—59%
·       General Social Survey (2002)—61%
·       General Social Survey (2004)—53%
·       General Social Survey (2006)—57%
·       General Social Survey (2008)—53%
·       Economic Mobility Project (2009)—47% said their children’s standard of living will be better (62% among those with kids under 18)
·       Pew Research Center (2010)—45% said their children’s standard of living will be better vs. 26% worse

Once again the cyclical pattern emerges, though it is not quite as clear in the mid-2000s.  Optimism is far more prevalent than pessimism in every year, reaching majorities from the late 1990s until the current recession.  Even today, optimism is no lower than in the mid-1990s, and the EMP poll implies that when looking just at parents with children under 18 living at home, solid majorities continue to believe their kids will have a higher living standard.


Taken together, there is very little evidence that a supposed stagnation in living standards is reflected in Americans’ concerns about how their children will do.  The survey patterns show that parental optimism follows a cyclical pattern, generally is more prevalent than pessimism, and did not decline over time.  In fact, we can compare beliefs in 1946 with those in 1997 for one question—whether “opportunities to succeed” (1946) or the “chance of succeeding” (1997) will be higher or lower than a same-sex parent’s has been:
·       Roper Starch Worldwide (1946)—64% of men said their sons’ opportunities to succeed will be better than theirs (vs. 13% worse); 61% of women said their daughters’ opportunities to succeed will be better than theirs (vs. 20% worse)
·       Princeton Religion Research Center (1997)—62% of men said their sons will have a better chance of succeeding than they did (vs. 21% worse); 85% of women said their daughters will have a better chance (vs. 7% worse)
  
As one would expect, mothers in 1946 believed their daughters would have more opportunity, but surprisingly that view was even more prominent in 1997.  And among men, there was very little change.  Notably, unemployment was slightly lower in 1946 than in 1997, so this isn’t a matter of apples to oranges.
  
Or even more strikingly, consider two polls asking the following question:
Do you think your children’s opportunities to succeed will be better than, or not as good as, those you have? (If no children:) Assume that you did have children. 
·       Roper Starch Worldwide (1939)—61% better vs. 20% not as good vs. 10% same (question asked about opportunities of sons compared with fathers)
·       Roper Starch Worldwide (1990)—61% better vs. 21% not as good vs. 12% same
  
While the 1939 question only refers to males, given the relatively low labor force participation of women at the time, it is perhaps still comparable to the 1990 question.  However, the unemployment rate was 17.2% in 1939 compared with 5.6% in 1990.  Still, the two results are remarkably close.
  
OK, can we put this question to bed?  Americans believe their children will do as well or better than they have done, and this belief hasn’t weakened over time.  Now let’s get back to arguing about objective living standards rather than subjective fears about them.
  


* For the love of God, nothing you’ll ever read on my blog has anything to do with my job—there are people at Pew whose ulcers flare at employees’ side hustles like mine.
 
 
(Cross-posted at ProgressiveFix and Frum Forum)




Everyone’s approvingly linking to this Edward Luce piece on “the crisis of middle-class America”.  I want to set myself on fire.



Seriously, it’s discouraging to see so many people who should know better (because they’ve argued these points with me before) promoting this article.  I can’t think of another piece in the doomsday genre—and there are many—that gets it so consistently wrong.  I'll stipulate that none of the criticisms below are intended to minimize the struggles that many people are facing.  But it's important to get this stuff right.  Let me dive in, with Luce’s words in italics and my responses following:

Yet somehow things don’t feel so good any more. Last year the bank tried to repossess the Freemans’ home even though they were only three months in arrears. 

The share of mortgages either in foreclosure or three or more months delinquent is 11.4 percent, which, because 30 percent of homeowners have paid off their mortgage, translates into 8 percent of homes.  So the Freemans’ situation is typical of about one in twelve homeowners, or just over 5 percent of households (since one-third rent).*

Their son, Andy, was recently knocked off his mother’s health insurance and only painfully reinstated for a large fee.
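A quick check of that foreclosure arithmetic in code, using the shares quoted above:

```python
# Foreclosure/delinquency arithmetic from the response above.
troubled_mortgages = 0.114  # share of mortgages in foreclosure or 3+ months late
have_mortgage = 0.70        # 30% of homeowners have paid off their mortgage
homeowners = 2 / 3          # roughly one-third of households rent

troubled_homes = troubled_mortgages * have_mortgage   # share of owned homes
troubled_households = troubled_homes * homeowners     # share of all households
print(round(troubled_homes, 3), round(troubled_households, 3))  # → 0.08 0.053
```

So the Freemans’ situation describes roughly 8 percent of owned homes and just over 5 percent of households, as stated.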

Luce is arguing that there’s a new crisis facing the current generation.  About 30 percent of those age 18 to 24 were uninsured in 2008 when the National Health Interview Survey contacted them.  I don’t have trends for that age group, but the share of Americans under age 65 without health insurance coverage was 14.7 percent in 2008, up from… 14.5 percent in 1984.

And, much like the boarded-up houses that signal America’s epidemic of foreclosures, the drug dealings and shootings that were once remote from their neighbourhood are edging ever closer, a block at a time. 

Well, the violent crime rate in 2008 was 19.3 per 1,000 people age 12 and up, down from 27.4 in 2000 and 45.2 in 1985.

Once upon a time this was called the American Dream. Nowadays it might be called America’s Fitful Reverie. Indeed, Mark spends large monthly sums renting a machine to treat his sleep apnea, which gives him insomnia. “If we lost our jobs, we would have about three weeks of savings to draw on before we hit the bone,” says Mark, who is sitting on his patio keeping an eye on the street and swigging from a bottle of Miller Lite. “We work day and night and try to save for our retirement. But we are never more than a pay check or two from the streets.”

The key question, again, is whether this is worse than in the past.  The risk of a large drop in household income has risen modestly, but people experiencing a drop end up much better off than in the past.  For example, the risk of a 25 percent drop in income over two years has risen from 7 percent among married couples in the late 1960s to 14 percent in the mid-2000s (based on my computations from Panel Study of Income Dynamics data).  But if you look at the average income of married-couple families after their 25 percent drop, it rose from $40,000 to $63,000 (in constant 2009 dollars).

Solid Democratic voters, the Freemans are evidently phlegmatic in their outlook. The visitor’s gaze is drawn to their fridge door, which is festooned with humorous magnets. One says: “I am sorry I missed Church, I was busy practicing witchcraft and becoming a lesbian.” Another says: “I would tell you to go to Hell but I work there and I don’t want to see you every day.” A third, “Jesus loves you but I think you’re an asshole.” Mark chuckles: “Laughter is the best medicine.”

Hmmm….just a typical American household…..

The slow economic strangulation of the Freemans and millions of other middle-class Americans started long before the Great Recession, which merely exacerbated the “personal recession” that ordinary Americans had been suffering for years. Dubbed “median wage stagnation” by economists, the annual incomes of the bottom 90 per cent of US families have been essentially flat since 1973 – having risen by only 10 per cent in real terms over the past 37 years. That means most Americans have been treading water for more than a generation. Over the same period the incomes of the top 1 per cent have tripled. In 1973, chief executives were on average paid 26 times the median income. Now the multiple is above 300. 

Adjusting for household size and using the PCE deflator to adjust for inflation, median household income in the Current Population Survey rose from $29,800 in 1973 to $40,500 in 2008 (in 2009 dollars, again based on my computations).  Factoring in employer and government noncash benefits would show even more impressive growth.

In the last expansion, which started in January 2002 and ended in December 2007, the median US household income dropped by $2,000 – the first ever instance where most Americans were worse off at the end of a cycle than at the start.

This is entirely a function of changes in the population composition (more Latinos) and in the share of employee compensation going to health insurance and retirement plans.

Worse is that the long era of stagnating incomes has been accompanied by something profoundly un-American: declining income mobility. 

Nope.  The evidence is ambiguous, but the best studies imply that intergenerational economic mobility hasn’t changed that much in the past few decades.  Intra-generational earnings mobility has increased since the 1950s, though it has declined among men.

Alexis de Tocqueville, the great French chronicler of early America, was once misquoted as having said: “America is the best country in the world to be poor.” That is no longer the case. Nowadays in America, you have a smaller chance of swapping your lower income bracket for a higher one than in almost any other developed economy – even Britain on some measures. To invert the classic Horatio Alger stories, in today’s America if you are born in rags, you are likelier to stay in rags than in almost any corner of old Europe.

Tim Smeeding’s research based on the Luxembourg Income Study shows that in general Americans have higher incomes than their European counterparts as long as they are in the top 80 to 90 percent of the income distribution.  Below that, incomes are more comparable across countries, and the living standards of Americans look less impressive.  The US has intergenerational earnings mobility comparable to Europe’s, according to Markus Jantti’s research, except among men (but not women) who start out at the bottom.  In terms of occupational mobility, David Grusky’s research shows we’re as good as or better than anywhere else, but this doesn’t translate into earnings mobility because we let people get rich or poor to a greater extent than other countries do.  Jantti and Anders Bjorklund have estimated that Sweden would have the same mobility as the U.S. if the return to skill were as high there as it is here.  Finally, employer benefits further complicate how “bad” we look.

Combine those two deep-seated trends with a third – steeply rising inequality – and you get the slow-burning crisis of American capitalism. It is one thing to suffer grinding income stagnation. It is another to realise that you have a diminishing likelihood of escaping it – particularly when the fortunate few living across the proverbial tracks seem more pampered each time you catch a glimpse. “Who killed the ­American Dream?” say the banners at leftwing protest marches. “Take America back,” shout the rightwing Tea Party demonstrators. 

The rise in income inequality is mostly about the top 5% of the top 1% pulling away from everyone else, and existing estimates overstate inequality and its growth by ignoring employer and government noncash benefits and possibly by ignoring different rates of inflation in different parts of the income distribution.

Unsurprisingly, a growing majority of Americans have been telling pollsters that they expect their children to be worse off than they are. 

Totally wrong. The key here is to look only at polling questions that ask people about their own kids, not kids in general. Here are the relevant survey results I could find:

General Social Survey (1994)—45% said their children’s standard of living will be better (vs. 20% worse)
General Social Survey (1996)—47%
General Social Survey (1998)—55%
General Social Survey (2000)—59%
General Social Survey (2002)—61% said their children’s standard of living will be better (vs. 10% worse)
General Social Survey (2004)—53%
General Social Survey (2006)—57%
General Social Survey (2008)—53%
Economic Mobility Project (2009)—62% said their children’s standard of living will be better (vs. 10% worse) (unlike the GSS and PRC, this was asked only of those with kids under 18)
Pew Research Center (2010)—45% said their children’s standard of living will be better (vs. 26% worse)
 
BusinessWeek (1989)—59% said their children will have a better life than they had (and 25% said about as good)
BusinessWeek (1992)—34% said their children will have a better life than they had (and 33% said about as good)
BusinessWeek (1995)—46% said their children will have a better life than they have had (and 27% said about as good)
BusinessWeek (1996)—50% expected their children would have a better life than they have had (and 26% said about as good)
Harris Poll (2002)—41% expected children will have a better life than they have had (and 29% said about as good)
 
Harris Poll (1997)—48% felt good about their children’s future
Harris Poll (1998)—65% felt good about their children’s future (17% N.A.)
Harris Poll (1999)—60% felt good about their children’s future (15% N.A.)
Harris Poll (2000)—63% felt good about their children’s future (17% N.A.)
Harris Poll (2001)—56% felt good about their children’s future
Harris Poll (2002)—59% felt good about their children’s future
Harris Poll (2003)—59% felt good about their children’s future
Harris Poll (2004)—63% felt good about their children’s future
 
Pew Research Center (1997)—51% said their children will be better off than them when they grow up
Pew Research Center (1999)—67% said their children will be better off than them when they grow up
 
Bendixen & Schroth (1989)—68% said their children will be better off than they are
Princeton Religion Research Center (1997)—62% of men said their sons will have a better chance of succeeding than they did; 85% of women said their daughters will have a better chance
Angus Reid Group (1998)—78% said children will be better off than them
Washington Post/Kaiser Family Foundation/Harvard (2000)—46% said they were confident that life for their children will be better than it has been for them
Economic Mobility Project (2009)—43% said it would be easier for their children to move up the income ladder
Economic Mobility Project (2009)—45% said it would be easier for their children to attain the American Dream

Also, polls consistently show that Americans say they have higher living standards than their parents.

And although the golden years were driven by the rise of mass higher education, you did not need to have graduated from high school to make ends meet. Like her husband, Connie Freeman was raised in a “working-class” home in the Iron Range of northern Minnesota near the Canadian border. Her father, who left school aged 14 following the Great Depression of the 1930s, worked in the iron mines all his life. Towards the end of his working life he was earning $15 an hour – more than $40 in today’s prices. 

Thirty years later, Connie, who is far better qualified than her father, having graduated from high school and done one year of further education, makes $17 an hour. 

It’s not valid to compare her pay mid-career to her father’s at the end of his career—and also, how much work experience does she have relative to him? Did she take time off to raise kids?

The pace of life has also changed: “We used to sit around the dinner table every evening when I was growing up,” says Connie, who speaks with prolonged vowels of the Midwest. “Nowadays that’s sooooo rare.” 

Time-use surveys show that while parents spend more time working (because of mothers) than in the past, they do not spend less time with children. They spend less time doing things by themselves.

Then there are those, such as Paul Krugman, The New York Times columnist and Nobel prize winner, who blame it on politics, notably the conservative backlash which began when Ronald Reagan came to power in 1980, and which sped up the decline of unions and reversed the most progressive features of the US tax system. 

Fewer than a tenth of American private sector workers now belong to a union. People in Europe and Canada are subjected to the same forces of globalisation and technology. But they belong to unions in larger numbers and their healthcare is publicly funded. 

Though unionization has declined markedly in most of those countries too, and their healthcare systems are increasingly costly. Also, most of the decline in unionization in the U.S. occurred before Reagan took office.

More than half of household bankruptcies in the US are caused by a serious illness or accident. 

This is bad Elizabeth Warren research—she counts a bankruptcy as being “caused” by illness or accident if one was reported, but the household could have been in serious debt before these occurred. At any rate, bankruptcies are exceedingly rare (under 1 percent of households—see Figure 13).

Pride of place in Shareen Miller’s home goes to a grainy photograph of her chatting with Barack Obama at a White House ceremony last year to inaugurate a new law that mandates equal pay for women. 

As an organiser for Virginia’s 8,000 personal care assistants – people who look after the old and disabled in their own homes – Shareen, 42, was invited along with several dozen others to witness the signing. 

Ah…another representative household…

More and more young Americans are put off by the thought of long-term debt.

Evidence???

Had enough?  I have speculated that to the extent economic insecurity has increased, it reflects the impact of a negativistic media (amplified by gloom-and-doom liberalism).

Pieces like Luce’s—and the blog posts it generates—affect consumer sentiment. Ben Bernanke and Tim Geithner aren’t the only people who can inadvertently talk down the economy.



*Originally said "just under 3 percent", which was incorrect.  -srw
 
 
I keep seeing that chart that shows how employment declines in the current recession are so much worse than in past ones.  You know, this one:
[Chart: employment declines in the current recession compared with past recessions]
On many dimensions, of course, the current recession is much worse, but this chart has always seemed funny to me.  And after reading Paul Krugman mock the idea that the recessions of the 1970s and 1980s were at all comparable, I decided to make my own damn chart.  Because the above chart looks at employment levels, which are affected by labor force growth, I decided to look at employment rates instead (subtracting the unemployment rate for each month from 100).  Because the composition of the labor force has also changed over time (lots more married women, most notably), I decided to confine to white men ages 20 and up.  And because it's unclear to me what "peak" is used in this chart (see the vague note at the bottom of Rampell's chart) and since the relationship of the NBER business cycle peak to the unemployment rate involves a lag, I decided to measure from the peak employment level.  Got all that?  Here's my chart:
[Chart: employment rates for white men ages 20 and up, measured from each peak employment rate]
I've labeled the lines the same way that Rampell's chart is labeled, by the recessions that followed each employment rate peak.  The figures are from BLS and are based on their seasonally adjusted series.

This approach makes clear why people were disappointed by the "jobless" recoveries from the recessions of the early 1990s and 2000s, which were no faster than after the much more severe recession of the early 1970s (though of course, the declines in employment were much smaller to begin with).  More to the point, it also shows that while the current recession still looks bad, bad, bad, the decline in employment is comparable to the decline during the double-dip recession of the early 1980s, which is apparent from the "1980" line.  That's not the most fantastic news of course, but it's worth noting.  Unfortunately, I doubt this is the chart you'll see others use and update as things evolve in the next few months.
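For anyone who wants to replicate the chart, the construction boils down to a few steps, sketched here with made-up unemployment rates rather than the actual BLS series:

```python
# Convert monthly unemployment rates into employment rates, locate
# the peak employment rate, and measure each later month's change
# from that peak -- the same steps used to build the chart above.
unemployment = [4.0, 3.9, 3.8, 4.1, 4.6, 5.3, 6.0, 5.8]  # hypothetical

employment = [100.0 - u for u in unemployment]   # employment rates
peak = max(employment)
start = employment.index(peak)                   # month of the peak rate
change_from_peak = [e - peak for e in employment[start:]]
```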
 
 

(cross-posted at ProgressiveFix.com and FrumForum.com)

When it comes to economic conditions, I'm generally a glass-three-quarters-full kind of guy.  Take unemployment.  Quick—what was the risk in 2008 that an American worker would experience at least one bout of unemployment?  Chances are you thought that that risk was higher than one in eight.*  But figures from government surveys indeed suggest that thirteen out of fifteen workers (or would-be workers) had not a single day unemployed during the first year of the "Great Recession".** (Incidentally, the recessions of the mid-1970s and the early 1980s were also called the "Great Recession" by some commentators.)

The 2009 data won't be out until later in the year, but if last year ends up comparable to the depths of the early 1980s recession, then the average worker will "only" have had a seven in nine chance of avoiding unemployment.***  But these figures overstate economic risk because some unemployment is voluntary and much of it is brief.  According to the Congressional Budget Office, the chance that a worker experienced an unemployment spell lasting more than two weeks during the three years from 2001 to 2003 was just one in thirteen—a period covering the last recession.


So as I've been following the debate about unemployment insurance and whether it actually worsens the unemployment rate, I've actually been open to the idea that being able to receive benefits for up to two years might create perverse incentives.  The research is not as uniformly dismissive of the idea as some liberal assessments have implied (go to NBER's website and search the working papers for "unemployment" if you want to check this out yourself).

In particular, the idea that there were 5 people looking for work for every job opening struck me as sounding overly alarmist.  So I started looking into the numbers to determine whether I thought they were reliable.  The figures folks are using rely on a survey from the Bureau of Labor Statistics called the Job Openings and Labor Turnover Survey, which unfortunately only goes back to December of 2000.  But the Conference Board has put out estimates of the number of help wanted ads since the 1950s.  Through mid-2005, the estimates were based on print ads, as far as I can tell, but the Conference Board then switched to monitoring online ads.  You can find the monthly figures for print ads here and the ones for online ads here.  The JOLTS and unemployment figures are relatively easy to find at BLS's website.

When I graphed the two Conference Board series (which requires some indexing to make them consistent--the print ad series being an index pegged to 1987 while the online series gives the actual number of ads) against the number of unemployed, and then the JOLTS series against the unemployed, here's what I found:
[Chart: help-wanted ads and job openings plotted against the number of unemployed]


I'll just say I was shocked and that I am much more sympathetic to extension of unemployment insurance than I was yesterday.
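For the record, the splicing of the two Conference Board series can be sketched like this (hypothetical values; rescaling the online counts to match the print index in an overlap month is one reasonable way to do the indexing, though not necessarily the only one):

```python
# The print series is an index (1987 = 100); the online series is a
# raw count of ads. Rescale the online counts so the two agree in an
# overlap month, yielding one continuous series to plot against the
# number of unemployed.
print_index = {"2005-04": 39.0, "2005-05": 38.0, "2005-06": 37.0}
online_ads = {"2005-06": 2_100_000, "2005-07": 2_150_000}

overlap = "2005-06"                               # month both series cover
scale = print_index[overlap] / online_ads[overlap]  # ads -> index units

spliced = dict(print_index)
for month, ads in online_ads.items():
    spliced[month] = ads * scale
```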

*The post originally said one in ten, which was wrong (the result of mistakenly using a figure I had computed for an older age range).  Technically, the figure was 13.2%, or 1 in 7.6.
** The original post said nine out of ten.
*** The original post said that if it reaches the depths of the 1990s recession, then the average worker will have had a five in six chance of avoiding unemployment.  I located data for the early 1980s recession, which is a better comparison to the current one.
 
 

With Dave Weigel's departure from the Washington Post after the revelation of anti-conservative posts to the private Journolist listserv, the blogosphere has seen commentary from across the ideological spectrum (Ezra, Matt, Ambinder, Sullivan, Frum, Sanchez, Douthat).  Unlike most of the bloggers weighing in, I don't know Dave.  The most sensible take I've seen on the ethics involved comes from Amy Sullivan, for what it's worth.  But I was on Journolist for most of its first year-and-a-half, and my perspective on the listserv differs somewhat from what I've read by other participants.
First, though, let's dispense with the D.C.-centric question: who was the leaker?  Here are my top 3 guesses, based only on my knowledge of the dynamics of the list while I was on it and on a not-particularly-astute understanding of the politics of D.C. journalism:

1. An older member of the group with ties to old-school print journalism who leaked to mediabistro & Daily Caller in concert with WaPo reporters jealous of Dave, Ezra, and Greg Sargent.  That would explain the coincidence of Tucker Carlson's interest in joining the group soon before his site published the posts--the leaker would have been aware of his interest in a Journolist scandal from the group's debate over letting him join.  Think high-level but shadowy insider with a proclivity for behind-the-scenes intra-party distribution of dirt. And who has professional ties to the Post.


2. An older leftier member of the group opposed to Dave's libertarianism and offended by his colorful language (see, "ratfucker")


3. Someone at Politico (definitely not Ben Smith) who wanted to give the Post a black eye. [Just to clarify--that's not sarcasm.]


Discuss...
One important point, on which I strongly agree with other Journolisters, is that conservative fantasies about the list being used to enforce ideological conformity or as a shadow group-editing device are riotously off the mark.  Jim Geraghty of the National Review spins some particularly entertaining fantasies, contrasting Journolist to the conservative Rightblogs list, which he describes as follows:

I’m on a conservative mailing list called Rightblogs, and from what I have seen, it succeeds at hiding conservative disagreements about as effectively as BP controls oil spills. If Rightblogs was set up to ensure that conservatives settled differences among themselves away from the eyes of the public, I think we can declare it an epic catastrophic failure on par with picking Ryan Leaf with the second overall pick in the NFL draft. Of course, I think it was just set up as a way for conservative bloggers to talk to each other; the vast majority of messages seem to be variations of, “Hey, look what I wrote!”

This summary describes the exchanges on Journolist just as well.  People disagreed vigorously over A LOT on the list.  Economists from different think tanks fought about trade and living standards.  Political scientists, historians, and bloggers fought about the proper interpretation of polls.  Men and women fought about gender representation.  Everyone fought about political strategy.  You should have seen the exchanges between Obama and Clinton supporters during the 2008 primary, which were as acrimonious as most left-right debates I've encountered on the blogosphere.
The best summary I've seen of what Journolist entailed comes from Change to Win's Rich Yeselson, a name as obscure as my own in comparison to the bigger-name Journolist members (and if you can think of a left-of-center writer, blogger, or pundit associated with opinion journalism or mainstream print journalism, they were probably on the list).  Rich and I disagreed strongly about a lot of things, and other times we fought alongside each other against other folks.  The great thing about Journolist was the diversity of backgrounds involved, and the leveling of status differences--if Paul Krugman couldn't defend a point about inequality, then he lost the argument, regardless of how well-known his interlocutor was.  It was to Ezra's great credit that he conceived of the list and had the relationships and reputation to assemble the group he put together.  And he was a hands-off manager of the list.  There was NO enforcement of a party line whatsoever.
That said, I think that in practice the list did end up reinforcing liberal perspectives on the issues of the day.  But the way that it did so was simply an extension of the way that the internet reinforces conventional perspectives in other ways.  Much of conventional wisdom on political and policy issues, I believe, can be understood by the simplifying assumptions that most people read sources that simply justify and reinforce their pre-existing views and that they are able to rationalize away facts that challenge those views.  Most left-of-center folks read liberal blogs, and vice versa for conservatives.  Few challenge their own views by reading blogs with which they are in philosophical disagreement, and those who do are often able to convince themselves that arguments causing cognitive dissonance are wrong.  Drew Westen has gotten some mileage out of this insight for the past few years.  Interestingly, if you read Westen's research, it is based on a self-selecting sample of political partisans and ideologues (he advertised for subjects in places where he could nab true believers).

What this means in practice is that complicated viewpoints and worldviews that don't fit into liberal or conservative boxes neatly (and that are relatively scarce in the population of political and policy junkies) are marginalized.  That is true in the blogosphere (how many moderate-hosted blogs can you name versus liberal- or conservative-hosted ones?) and it was true on Journolist.  While I don't have hard evidence to back me up, I strongly suspect that the rise of the blogosphere has hardened political polarization through a dynamic along the following lines: ideologically neat bloggers and writers gain large audiences, those sources become authoritative ones for news and opinion, less ideologically neat bloggers and writers are influenced by the increasing prominence of ideologically neat views and the decreasing availability of ideologically messy views, those bloggers and writers become more ideologically neat.  That is a dynamic that I think Journolist reinforced, simply because most people (liberals included) are ideologically neat, and when ideologically messy members raised arguments with conventional views, few people ever really changed their minds (including the ideologically messy members!), and the ideologically neat people out-numbered the ideologically messy.

Because of my frustration with what I perceived to be these dynamics (and because I needed to write my dissertation rather than spend all my free time fighting with Journolisters!), I left the list in early 2008.  Other ideologically messy people left earlier, others never jumped into the fray to begin with once they joined.  And others, presumably, soldiered on after I left.
To be clear, none of this was Ezra's fault--the membership was at least as diverse as the left-of-center punditry in general--and none of this was the result of a centrally-enforced set of rules or of concerted and organized pressure from members.  It was just the natural result of putting a bunch of people with a strong attachment to their views together such that the composition resembled the wider left-of-center universe.  I'm sure Rightblogs has the same problem.


 
 


(Cross-Posted at www.progressivefix.com--I'm behind in getting these up on my blog...)
Mike Konczal’s inequality post as a guest blogger for Ezra is getting a bit of attention in the blogosphere. Konczal jumps off of an interesting post by Jamelle Bouie to argue that contrary to those who argue that “inequality isn’t so bad,” the unhealthy nature of the cheaper food that is purchased by the poor negates the fact that the poor face a lower inflation rate. Since he suggests I (and Will Wilkinson) think that “inequality isn’t so bad,” I wanted to correct a misconception that Konczal has about the argument of economist Christian Broda that he is responding to. Broda’s actual argument really doesn’t have anything to do with how healthy the things purchased by the poor are.

Here’s Konczal:

One argument that has become popular recently is that the increase in income inequality isn’t quite as bad because both the rich and the poor have different ‘inflation’ rates — the prices at which goods increase for the rich have been increasing much faster than the prices at which goods have been increasing for the poor. So even though the poor or median person hasn’t had any wage growth, he has much more purchasing power because of this effect.



This isn’t quite the argument that has become popular recently. What fans of the Broda research argue (i.e., what Broda and his colleagues argue) is that the apparent increase in income inequality may overstate the actual increase in inequality because the poor appear to have a lower inflation rate than the rich. If true, then it’s not that “the poor or median person hasn’t had any wage growth,” it’s that they have had wage growth because of their lower inflation rate — and the wage growth has been big enough that it has kept the ratio of rich-to-poor incomes roughly constant.

Think of it this way. Broda and his colleagues find that the prices of what the poor buy (that is, “price” when the satisfaction derived, or utility, is held constant) have risen less than the prices of what the rich buy. That’s because when prices of related goods change, the poor are more likely to switch to cheaper goods, all the while maintaining their overall level of satisfaction with their purchases. If it becomes cheaper to maintain a constant level of satisfaction, then one’s wages have effectively grown. So poor consumers may switch from Green Giant frozen veggies to generics when the latter go on sale, or they might buy their frozen veggies at the chain a couple of neighborhoods over rather than the local grocery store when the latter’s prices go up. Rich consumers, on the other hand, may be relatively unlikely to stop buying Whole Foods vegetables when the plebeian chain’s prices are cut. They may not switch to generics as those products become cheaper relative to those on offer at the farmer’s market.

It’s not that we should be excited about how great the generic frozen veggies bought by the poor are compared with the Whole Foods produce. It’s that we should be excited that the poor are either more willing or more able to economize to maintain a constant lifestyle than the rich are, and so inflation eats into their quality of life to a lesser extent than it does among the rich, holding in check other forces that would increase inequality.
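A toy calculation (my own made-up numbers, not Broda's estimates) may make the logic concrete: if nominal incomes grow at the same rate at the top and the bottom but the poor face lower inflation, then the poor's real incomes grow faster even though the nominal income ratio never budges.

```python
# Hypothetical nominal incomes growing 30% for both groups
poor_nominal = {1994: 15_000, 2005: 19_500}
rich_nominal = {1994: 150_000, 2005: 195_000}

# Hypothetical group-specific price indices (1994 = 1.0); the poor's
# prices rise less because they substitute toward cheaper goods
poor_prices = {1994: 1.00, 2005: 1.15}
rich_prices = {1994: 1.00, 2005: 1.30}

def real_growth(nominal, prices):
    # growth factor of inflation-adjusted income, 1994 to 2005
    return (nominal[2005] / prices[2005]) / (nominal[1994] / prices[1994])

poor_growth = real_growth(poor_nominal, poor_prices)
rich_growth = real_growth(rich_nominal, rich_prices)
# here the poor's real incomes grow ~13% while the rich's are flat,
# even though the nominal rich-to-poor ratio stays exactly 10-to-1
```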

Now, Broda’s research is based on purchases of a limited number of commodities and over a limited number of years, but if his findings extend to other goods and services and to earlier periods (which he believes they do), then the implication is that inequality between the poor and the well-off — though not necessarily the richest of the rich — has not grown. We can still worry about the quality of the food purchased by the poor and their health outcomes, but that’s a story about poverty and deprivation, not about inequality or growth in inequality.

 
 


(Cross-Posted at www.progressivefix.com--I'm behind getting these up on my blog...)
James Kwak, coauthor of the new financial crisis book 13 Bankers, recently sought to explain his thesis “in 4 pictures.” And impressive pictures they are. But I’ve been particularly struck by one of them — this chart, from a paper by economists Thomas Philippon and Ariell Reshef, showing the close correspondence between deregulation trends on the one hand and the ratio of financial sector wages to private sector wages on the other. My reaction to the chart was essentially, Huh. Those trend lines look like the basic income inequality trend line.

But to my knowledge, no one has really made this point since the chart has circulated widely. Certainly no one has tried to illustrate it.

Maybe people just lack my whiz-bang PowerPoint and Excel skills, or maybe I’ve actually had an Original Thought. But take a look at the chart I created, which overlays a trend line showing the share of income received by the top one percent (the black line) on top of the Philippon-Reshef chart. The trend line comes from the widely cited work of economists Thomas Piketty and Emmanuel Saez, who used IRS data to look at the incomes of the very rich:



[Chart: the Philippon-Reshef deregulation and relative-wage series, with the Piketty-Saez top one percent income share overlaid]

I’ve argued before that I think the Piketty-Saez top-share trend line overstates the recent rise in income inequality, but I don’t see much reason to doubt the basic U-shape of the trend since the Great Depression. For all of the consensus around the basic inequality trend, there’s surprisingly little agreement or understanding as to why it looks the way it does (a major theme of Paul Krugman’s Conscience of a Liberal). Could it really be as simple as the extent of financial regulation? Every analyst bone in my body says this is too easy, but…but….

Of course, saying it’s all financial regulation trends isn’t necessarily inconsistent with Krugman-esque arguments that it’s all about changes in cultural acceptance of inequality.  Maybe financial regulation flows from public attitudes about inequality.

Anyway, interesting — no?
 
 


(Cross-Posted at www.progressivefix.com--I'm behind getting these up on my blog...)
Ezra Klein links to a Slate article by Ben Eidelson that, I think, is quietly devastating to the idea that the Senate filibuster has somehow destroyed the democratic process. Eidelson shows that from 1991 to 2008, in the typical successful filibuster, the senators behind the filibuster (i.e., opposing the cloture motion) represented states comprising 46 percent of the U.S. population. If filibustering Senators represented 51 percent of the population, then we would conclude that the typical successful filibuster was supported by senators representing a majority of Americans. In that case, at least by small-r republican principles, the filibuster would protect the will of the majority.

Forty-six percent is not 51 percent, of course. But here’s another way of thinking about the effect of the filibuster. It could be argued that, to account for the fact that most Americans’ views on most issues are only weakly held, we should have a higher threshold for legislation passing than support by a simple majority of senators, or even support by enough senators to represent a simple majority of Americans. Instead, for legislation to pass, we might decide that enough senators to represent 55 percent of Americans should support the legislation. If that were the procedural guideline, then on average, the way the filibuster has worked has been consistent with that guideline.
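Such a guideline would be straightforward to check. Here's a sketch with hypothetical states and populations; I'm assuming each senator gets credited with half of his or her state's population, which may differ from Eidelson's exact convention:

```python
# Does a cloture vote clear a 55-percent-of-the-population bar?
state_pop = {"A": 37_000_000, "B": 25_000_000,
             "C": 19_000_000, "D": 1_000_000}   # hypothetical states
total_pop = sum(state_pop.values())

# senators supporting cloture, as state -> number of senators (0-2)
supporters = {"A": 2, "B": 1, "C": 0, "D": 0}

# credit each senator with half of his or her state's population
supporter_share = sum(state_pop[s] * n / 2
                      for s, n in supporters.items()) / total_pop
meets_55_rule = supporter_share >= 0.55
```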

For the practice of the filibuster when Republicans have been in the minority to be consistent with a procedural guideline, the rule would have to be that enough senators to represent 60 percent of Americans should support the legislation (see Eidelson’s table). Interestingly, however, despite the greater use of the filibuster among Republicans, in Eidelson’s data Republican minorities had an average of 20 successful filibusters per Congress, compared with 16.6 successful filibusters per Congress by Democratic minorities. That’s a fairly small difference, although the current Congress is not included in these figures.

Unlike most progressive bloggers, I remain ambivalent about the filibuster. Eidelson’s data shows that Republican filibusters are much more likely to be anti-majoritarian than Democratic filibusters (even if they are not dramatically anti-majoritarian). He proposes, as a compromise, replacing the 60-vote rule for cloture votes with a 55-vote rule, which historically would have eliminated most successful Republican filibusters while retaining most successful Democratic ones. Another compromise that’s consistent with small-r republicanism and small-d democracy that might be more palatable to Republicans would be to implement instead something like a 55-percent-of-the-population rule for cloture votes (while still requiring a majority of senators too). This would set a higher threshold for support than simple majority-senator-rule, would ensure that small-state senators could not thwart the preferences of senators representing a solid majority of Americans, and would not have such dramatically partisan consequences compared with a 55-vote rule (meaning it would have a better chance of being implemented).