“We all have some experience of a feeling which comes over us occasionally, of what we are saying and doing having been said or done before . . . of our having been surrounded, dim ages ago, by the same faces, objects and circumstances—of our knowing perfectly what will be said next, as if we suddenly remembered it.”
Charles Dickens, David Copperfield
Six times in the last century (most recently in the early 1990s), health reform has appeared on the national stage, only to be chased off by anxieties about state power, the promise of private alternatives, and the political clout of doctors and employers and insurers. The results are equally notorious: We spend nearly twice as much on health care as any other country. We insure a dwindling share of the population. And we routinely crowd the bottom of any international ranking of health outcomes.
Not surprisingly, given these familiar facts and the changing of the guard in the nation’s capital, health care is once again at center stage. “From Maine to California, from business to labor, from Democrats to Republicans,” as Barack Obama waxed optimistic during the 2008 campaign, “the emergence of new and bold proposals from across the spectrum has effectively ended the debate over whether or not we should have universal health care in this country.”
Not so fast. Despite assurances that this time things will be different—that the stakes have changed, that powerful interests are now on board, that Congress will be coddled into cooperation—there is a pall of familiar failure settling over the health care debate. The same sense of urgency and optimism accompanied past stabs at reform, all of which were spectacular failures. What lessons does this history offer?
Let’s be clear what we are talking about.
A lot of people (on both sides of the debate) are casually throwing around terms like “national health insurance” and “universal coverage”—but current bills before Congress propose nothing of the kind. Until the 1950s, health insurance in the United States covered not the cost of care but the cost of not working while you were sick. This indemnity coverage (think AFLAC) was gradually displaced by job-based “first dollar” coverage for about two-thirds of the workforce. But almost as soon as that system (at least for covered workers and their families) was in place, coverage began to retreat again. The “managed care” revolution has used co-payments and deductibles to recreate a system of essentially catastrophic coverage, in which insurance kicks in only after substantial out-of-pocket spending.
In the current climate, “health insurance” is now a pretty elastic concept. For many it still means something akin to conventional private or public coverage. For the right, it means virtually any private insurance product, including high-deductible plans and health savings accounts. For employers and employees, it has come to mean almost any job-based benefit—from a collectively bargained health plan to a pharmaceutical discount card. The meaning of “universal health care” is equally ambiguous. In a single-payer system, both coverage and the right to coverage are universal. In the US, even the most optimistic reforms envision a patchwork of coverage based on a variety of claims (job status, age, income). Increasingly, universal coverage is understood to mean only universal access—a threshold satisfied by treating health premiums like home mortgage interest in the tax code. This is the sad irony of the town hall outbursts against socialized medicine and the horror stories imported from Britain and Canada: nothing remotely resembling national health care has ever been on the table.
Don’t move furniture into a burning house.
At the root of our health crisis lies an historical accident: reliance on jobs as a means of distributing and paying for health coverage. This began as an ad hoc arrangement to evade World War II-era wage and tax regulations. The system stuck at the war’s end because the cost of health care was minimal, the prevalence of large-firm employment offered an easy way to spread the risk, and American firms did not face competition from countries where the health care costs and risks were socialized. We don’t live in this world anymore, but our health system does.
There is no good reason to let the distribution of jobs determine the distribution of a basic social good—an arrangement which leaves many workers uninsured, burdens insured workers (for whom a decision to switch jobs, let alone to strike out on their own, exposes them and their families to the capricious risk-rating of private insurers), and penalizes responsible employers whose competitors (bottom-feeders at home, firms with socialized health costs abroad) face no such costs. All of this has been hammered home in a recession marked by stark job losses and persistent health inflation: every percentage point increase in the national unemployment rate yields 1.1 million more uninsured adults; every 10 percent increase in costs pushes another 1.4 million Americans (about a third of them children) into either public programs or the growing ranks of the uninsured.
Despite all of this, reform proposals insist on propping up job-based care and half-heartedly mopping up around its edges. This ignores a couple of pretty compelling problems: For starters, job-based coverage—which is notoriously uneven, capricious, wasteful and expensive—is the problem. As importantly, the coverage and standards of job-based care are in free fall. Only one in three workers gets health care, in their own name, from their own employer. For those lucky enough to have coverage, the burden of higher premiums, co-payments, deductibles, and uncovered services has risen dramatically in recent years (between 2000 and 2008, for example, health insurance premiums grew over three times as fast as wages). Fully 62 percent of personal bankruptcies in 2007 were related to medical expenses—and 80 percent of those filing were insured.
Mopping up around job-based coverage was a bad idea in 1965, when Medicare and Medicaid passed on the premise that private employers and insurers could cherry-pick the good risks and dump the rest onto public programs. It was a bad idea in 1994, when the Clinton Plan offered up a combination of debased private coverage and tepid universalism. And it is a worse idea today, when both the reach and the affordability of job-based coverage are in full retreat.
Sometimes half a loaf is worse than nothing.
Scratch almost any of the “health care for all” ideas floating around and you will find a long list of discrete and often disconnected proposals: “health care for kids,” “health care for students,” “health care for the employees of small businesses,” and so on. This reflects a longstanding reluctance to take on the political or fiscal burden of displacing job-based insurance. Historically, this incrementalism has yielded some important achievements. Medicare not only created near universal access for the elderly, but also dramatically reduced poverty among seniors. The State Children’s Health Insurance Program promises the same for kids. And states (Minnesota is a good example) with expansive eligibility for their public programs have accomplished near universal coverage at modest cost.
But there has been a price. Over the last century, arguments that some citizens (kids, seniors, parents, veterans) are especially deserving of public insurance have also carried the implication that others are not. As a result, the largest cadre of the currently uninsured – working parents and their families – have no real claim on public resources and no recourse but to get another job (or two). This crazy patchwork is also expensive. It costs real money to sort the insured from the uninsured. Public programs, by their very logic, end up with the most expensive and intractable risks. The uninsured do get care, but often too late and in the wrong place. And, of course, all these costs are recovered in the premiums charged to private plans – encouraging employers to dump such commitments and making it less likely that anyone outside a large job-based pool can get insurance at all.
In this respect, one baseline assumption of health reform has remained unchanged since the 1930s: We already pay for national health insurance, we just don’t get any of the benefits. As a direct consequence of our piecemeal and incremental muddling around the edges of the health crisis, we spend as much, in public dollars, on health care as any of our democratic and industrialized peers. We spend as much again in private dollars – for a per capita bill more than double the OECD average — and claim not much (rising uninsurance, widespread insecurity, lousy health outcomes, high business costs) in return.
And yet there is now a bipartisan (and near-religious) devotion to incremental reform, the elements of which—if it ever even gets that far—include the following: expansion of existing public programs, tighter regulation on insurance underwriting, an employer mandate (requiring some to cover workers and penalizing some who don’t), an individual mandate (requiring everyone to carry health insurance and penalizing those who don’t), subsidies for employers and/or individuals who can’t afford to buy insurance, and maybe (or maybe not) a public insurance option.
If you live in Massachusetts, Oregon, Maine, Minnesota, or Tennessee, there should be an eerie ring to this combination of hodgepodge reform and optimistic projections. Nowhere has it worked. None of these reform laboratories has been able to rein in costs or stem the bleeding in job-based care. The reasons are simple. Propping up the current system—or pushing more people into it via individual or employer mandates—does nothing to address the administrative waste, actuarial complexity, or naked profiteering that created the health care crisis in the first place.
Beware of HMOs bearing gifts.
Much has been made in the last year of the important stakeholders who are “on board” this time around. But this is nothing new. Health interests—doctors, hospitals, employers, insurers—have always jockeyed around reform efforts, looking to shape or stem legislation according to their distinct (and often quite contradictory) interests. For much of the last century, the AMA—employing a combination of professional authority, political clout, and precocious lobbying skills—led the charge against national health insurance. Over time, the influence of doctors was eclipsed by that of employers and private insurers—first cooperating against the scourge of socialized medicine, then battling each other over the costs of job-based plans.
Today, things look quite different. The AMA spent the 1990s backing the HMO as the last best alternative to state medicine. The results, for doctors and patients, have been disastrous – a fact which has mobilized the latter but defanged the AMA as a political force. Employers, always torn between spreading the costs of job-based care and abandoning that responsibility altogether, are leaning toward the latter. Insurers and drug companies are now the big dogs in the race, leading the health sector in contributions to both parties.
As importantly, none of these interests speak with one voice. Big firms with big health care commitments (remember GM?) have always flirted with the prospect of socializing those costs. But not so Pizza Hut or Walmart. And the bottom feeders are increasingly calling the shots—not only through organizations like the National Federation of Independent Business but through the Chamber of Commerce as well. Some big insurers are salivating at the prospect of an individual mandate (which would force the uninsured to buy their products) even if they are leery of the public option (which would underscore how overpriced those products are). But many insurers are not sure that increased political oversight is worth the bargain. And while many health professionals support broad-based reform, the AMA’s official position—that the market will heal all—has changed little since the 1980s.
Health reform has faced some formidable opponents over the last century, and has made its task that much harder by trying to anticipate or pre-empt that opposition. Fear of southern segregationists in Congress doomed much of the timid universalism of the New Deal. Deference to the AMA virtually immobilized health reform in the 1940s and 1950s, and vastly inflated the costs of Medicaid and Medicare after 1965. Since then, efforts to appease diverse (and often quite incompatible) interests have routinely turned good intentions into either bad policy or legislative shipwrecks. That is clearly what happened in 1992-94, when the Clintons’ eagerness to please everyone made opponents of them all. And that is clearly what is happening today, as Congressional reform retreats to a plan that manages to be ambitious enough to scare off the stakeholders, but modest enough to make little difference.
Don’t believe the myths.
Our current (and historical) hesitance to address the health crisis has been sustained by a series of tenacious and overlapping myths about the provision of health care. In 1917, insurance executives raised the fear of “Prussian” or “Bolshevik” medicine. In the 1940s, the AMA made up a quote from Lenin—“socialized medicine is the keystone in the arch of the socialist state”—to punctuate its Cold War campaign against public health insurance. The same arguments, further refined in the 1992-94 debate, are being made today. Republican pollster Frank Luntz’s now-infamous memo, “The Language of Healthcare 2009,” lays out the talking points: any public program or option is a slippery slope to “government takeover,” and national health systems (insert Canadian or British horror story here) stifle innovation, encourage malingering, and ration care—either by forcing patients to wait or by pulling the plug.
Let’s examine these in turn. The AMA and others have always raised the specter of “moral hazard,” describing the ways in which insurance might change the behavior of the insured. Those who need a doctor, as the AMA argued in 1955, “will have to compete for the doctor’s time with the whole gamut of people who have only minor complaints, imaginary ailments, trivial requests, or just a desire to ‘cash in’ on whatever benefits are available.” But this assumes that health care is a commodity like mufflers or breakfast cereal, and that people will consume more of it if it is cheaper. This is clearly not the case. People don’t want to consume health services, and there is no evidence from national health care settings that they flock to the proctologist just because it’s free. And, as we know from a generation of “managed competition,” making health care more expensive does not magically conjure up “responsible” consumption. Indeed, those who have trouble affording health care simply put it off—making the system more expensive and less efficient for everyone.
A close analogue to this is the myth of market efficiency—best captured by the discipline supposedly imposed on patients and providers by “managed competition.” But this too is just a useful ideological illusion, because the provision of health care confounds almost all of the assumptions and rules of neoclassical economics. There is no conventional “supply” of health care, which is delivered by a tangle of professional and local monopolies. And there is no conventional “demand” for health care. The top ten percent of the non-elderly population (ranked by their share of expenditures) accounts for almost two-thirds of health spending, the top five percent account for almost half, and the top one percent account for almost one quarter. This is not a market, and it’s sheer folly to expect it to act like one.
Perhaps the most pervasive myth—retold in different ways at hundreds of town hall meetings this summer—is that of government incompetence. Here, of course, is where the examples of Canada and Britain are actually useful. By any measure, we spend more on health care and get less in return than any of our democratic peers. The money squandered on administrative waste (the additional costs of running our patchwork system) alone is enough to cover all of the uninsured. Indeed, the only corner of our health care system which consistently demonstrates both patient satisfaction and administrative efficiency is the one that most resembles the Canadian system—Medicare.
Finally, all of this bedpan rattling about freedom and coercion tends to obscure what is really at stake. The Clinton-era bumper sticker—“National Health Care? The Compassion of the IRS! The Efficiency of the Post Office! All at Pentagon Prices!”—got it exactly backwards. What we have now, under our patchwork system of job-based care and private insurance, is a nightmare of inflation, waste, and bureaucratic intrusion. Unfortunately—as the opposition mobilizes and Congress retreats to incremental muddling—it looks as if real reform will be dead on arrival once again.
Colin Gordon is a professor in the history department of the University of Iowa and author of Dead on Arrival: The Politics of Health Care in Twentieth-Century America.