Umut Sarpel, MD; Bruce C. Vladeck, PhD; Celia M. Divino, MD; Paul E. Klotman, MD
Ann Surg. 2008;247(4):563-569. ©2008 Lippincott Williams & Wilkins
Posted 06/06/2008

Abstract and Introduction


The United States has the most expensive and complex healthcare system in the world. Despite the magnitude of funds spent on the system, Americans do not achieve the high standards of health seen in other developed countries. The current model of health insurance has failed to deliver efficient and effective healthcare. The administrative costs and lack of buying power that arise out of the existing multipayer system are at the root of the problem. The current system also directly contributes to the rising number of uninsured and underinsured Americans. This lack of insurance leads to poorer health outcomes, and a significant amount of money is lost to the system in paying for the resulting complications. Experience from other countries suggests that tangible improvements can occur with conversion to a single-payer system. However, previous efforts at reform have stalled. Many myths about the system are commonly held by both patients and physicians, and the resulting inscrutability of the US healthcare system may be the major deterrent to its improvement. A discussion of these myths can lead to increased awareness of the inequity of our healthcare system and the possibilities for improvement.


Although most people readily agree that the US healthcare system is deeply troubled, there is little consensus on how to resolve its problems. The current system is so cumbersome and confusing that anything but the most superficial discussion seems to require an advanced knowledge of medicine and public policy. As a result, both patients and their physicians are alienated from trying to understand and improve the system. Debates about the plight of the uninsured and the rising costs of health insurance tend to end in an exasperated sigh of resignation to the status quo.

Our peculiar system of employment-based private insurance has been called an accident of history. During World War II, a wage freeze prevented employers from attracting workers by offering higher salaries. However, fringe benefits were not controlled and employers soon discovered that they could compete for workers by providing health insurance packages. This benefit was highly valued by wartime Americans and enrollment in insurance plans soared. Employers willingly participated in this new union because they received financial benefits from the tax-free status of health insurance. Job-linked health insurance gained momentum and evolved into the system that stands today.

Many of the problems encountered in the system are deeply interconnected with other complex issues. For example, it is hard to talk about the reason for the growing number of uninsured without also discussing the history of Medicare/Medicaid, funding for medical education, government lobbyists, and tort reform. These are worthy topics that cannot be addressed within the scope of this paper.

The goal of this article is to give physicians the groundwork to launch conversations about the US healthcare system. The format for this discussion has been adapted from Dr. Robert Lebow’s Health Care Meltdown: Confronting the Myths and Fixing Our Failing System.[1] Lebow identified many myths that are commonly held by both patients and physicians; for this discussion we have focused on 5 common misconceptions. The following myths are not intended to be an all-inclusive list of the woes of our healthcare system. However, they represent key and common misperceptions that are stumbling blocks on the path to improvement.

Perhaps it is important to state up front that this effort makes certain assumptions; it is understood that 1) members of a civilized society agree that there is an obligation to provide healthcare to those who need it, and that 2) there cannot be a tiered class system for access to healthcare, although people who are willing to spend more could have the option to do so. Our goal is simple: to spark discussion and focus attention on the issue of inequity of healthcare.

Myth 1: The US Healthcare System Is the Best in the World

This idea has been called the alpha myth because it is fundamentally the root of all other myths.[1] It is the straightforward belief that Americans have access to the highest quality healthcare available in the world. A different way to present this myth is to state that citizens in other countries experience long waits for healthcare, that they must rely on generalists, and that they suffer worse outcomes as a result.

This belief is widespread and well-entrenched in the American mindset. So it is perhaps surprising that in a 1990 survey of 10 nations on the level of satisfaction with the national healthcare system, the United States ranked 10th.[2] These results were reproduced a decade later.[3] Although Americans believe the US system is the best, clearly they are not as satisfied with the healthcare they receive as are citizens of other countries.

In fact, this disparity between perception and reality has been captured in several studies. In the year 2000, the World Health Organization (WHO) dedicated its annual World Health Report to a comparison of healthcare across the globe.[4] In this exhaustive analysis, American superiority was not borne out: the United States ranked 32nd for infant survival, 24th for life expectancy, and 54th for fairness. The fairness ranking was derived from a comparison of the individual financial contribution required with the quality of healthcare received. The current US system is known as a regressive system; that is, the poor pay relatively more for healthcare. In fact, the poorest fifth of Americans spend 18% of their income on healthcare, whereas the richest fifth of Americans spend about 3%.[5] In this type of regressive system, it is clear why about 50% of personal bankruptcies in the United States are related to medical bills.[6] Tragically, 75% of individuals declaring medical bankruptcy had medical insurance at the onset of their illness.[6] Overall, the WHO ranked the United States 37th in the world.

Similar results were found by the Commonwealth Fund in a recently released scorecard on the performance of the US health system.[7] Outcomes in the United States were compared against those achieved by top countries or the top 10% of US states, hospitals, or other providers. The scorecard evaluated multiple indicators of health outcomes, including mortality, life expectancy, and the prevalence of health conditions that limit the capacity of adults to work or children to learn. The average ratio score for the United States was a 69 out of a possible 100.[7] The United States ranked 15th out of 19 countries with respect to preventable deaths before the age of 75, with a death rate more than 40% higher than the benchmark countries of France, Japan, and Spain. The United States ranked last in infant mortality out of 23 industrialized countries, with rates more than double the benchmark countries of Iceland, Japan, and Finland. The United States tied for last on healthy life expectancy at age 60.[7]

Despite these mediocre results in objective parameters of health outcomes, the United States spends far more than any other country for its healthcare. In 2000 the United States spent 13% of its gross domestic product on national health expenditures.[8] The next highest spending countries were Germany at 10.6% and France at 9.5%. In a graph of life expectancy versus health spending per capita, the United States falls far off the curve, both spending more and gaining less than other countries.[4] Another example of this contradiction is seen in the outcome of patients on hemodialysis. Although there are more hemodialysis centers per capita in the United States,[9] when end-stage renal disease patients were matched for severity of disease in Canada and the United States, patients in the United States were less likely to receive a kidney transplant and also had a higher mortality rate while on hemodialysis.[10]

Myth 2: There Will Always Be a Certain Segment of the Population That Remains Uninsured

There is a general misconception that the uninsured are also unemployed, that they represent the marginalized section of society. Epidemiological studies clearly show that this is false: two-thirds of the nonelderly uninsured are employed.[11] And rather than representing an invisible minority, the number of uninsured persons is steadily growing. At last count, there were 46.6 million uninsured people in the United States,[12] but this number is projected to grow to 56 million by 2013.[13] It is notable that one-third of the recent increase in the number of uninsured adults occurred among those with incomes more than 200% of the federal poverty level, and about half the growth was among young adults ages 19 to 34.[14]

The reasons for the rising number of uninsured are many and complex. In part, the character of the US workforce has changed. From 1977 to 1998, there was a decrease in unionized factory jobs from 25% to 15%, with a concomitant rise in service or clerical work from 19% to 29%.[15] Members of unions receive insurance, whereas workers in the service industry typically have no health coverage. This shift in the workforce created a new rank of people, still employed but now without health insurance.

The single most influential factor in the rise of the uninsured has been the increasing cost of health insurance and the growing share that employers expect employees to absorb. In a system in which employment-based insurance is still the dominant mode, a smaller and smaller proportion of employers now offer health insurance to their employees at all.[16] Of those that do, most are increasing the amount that individual employees must contribute at a rate that is rising much faster than employees’ incomes. This cost-shifting to employees has had an especially significant effect on the coverage of dependents.[16] Finally, the growing cost of health insurance policies has essentially destroyed the market for individual self-purchased policies in most states.

Even for those with insurance, the growing share of expenses that must be covered out of pocket through deductibles and copayments is a mounting problem. Some health economists have recommended high out-of-pocket payments as a means of deterring frivolous discretionary use of health services.[17] Judging by certain parameters, cost-sharing is effective. In the widely cited RAND Health Insurance Experiment, between 1975 and 1982 about 4000 patients in 6 cities around the United States were randomly enrolled into 1 of 4 test plans with varying amounts of cost-sharing. Those patients in plans with the largest deductible used 25% to 30% fewer medical services than patients with free care.[18] However, high-deductible plans also resulted in worse health outcomes in several areas: poorer blood pressure control, vision correction, and oral hygiene.[18,19] In the subgroup of patients who were poor and sick, the subpar control of blood pressure increased the annual likelihood of death by 10%.[20] Thus, although total physician visits dropped, the financial deterrent did not differentiate between necessary and unnecessary visits. Patients who were unable to afford their copayment, despite being insured, chose to forgo necessary visits and suffered a decline in health.[18]

An extraordinary demonstration of the effects of cost-sharing occurred when, in 1996, the province of Quebec instituted a 25% copayment for prescriptions that had previously been free to the elderly and those on welfare.[21] This new policy affected over 1 million Canadians. In the following year there was a reduction in the use of essential medications by 9% in the elderly and 14% in those on welfare (Figure 1). Adverse events rose 117% in the elderly and 97% in those on welfare. Emergency department visits rose 43% in the elderly and 78% in those on welfare.[21]

Figure 1.
Observed and predicted use of essential medication in the prepolicy and postpolicy periods. Reproduced with permission from Tamblyn et al.[21]

Those individuals who are officially insured, but still suffer from lack of healthcare, have been termed the under-insured. They may miss doctors’ appointments, leave prescriptions unfilled, or defer recommended laboratory tests. People without any insurance at all represent only the tip of the iceberg; there are far more individuals who are under-insured. Currently, there are an estimated 50 million individuals who are considered under-insured.[22] Meanwhile, the burden of cost-shifting continues to grow. Average yearly premiums have risen precipitously, at far greater rates than general inflation (Figure 2).[23] Rates of copayments and deductibles continue to rise markedly as well, creating increased burdens on family budgets. In the years 1996-1997 and 2001-2002, the average family’s out-of-pocket spending rose nearly twice as fast as did family income.[24]

Figure 2.
Cumulative changes in health insurance premiums, overall inflation, and workers’ earnings. Reproduced with permission from The Kaiser Family Foundation.

Cost-shifting may decrease healthcare spending up front, but it does not decrease the national healthcare expenditure.[21,25,26] The United States, for example, has both the highest insurance deductibles and the highest national healthcare expenditures in the world. As the number of under-insured grows due to the demands of cost-shifting, more and more people will be driven to give up their insurance completely.

Rather than comprising a static group of people, the uninsured are a steadily growing population, and this new generation of uninsured is made up largely of people who are employed and have, or recently had, medical insurance. The current system of job-based insurance and cost-shifting is creating this dysfunctional environment.

Myth 3: The Uninsured Have Equal Access to Medical Care Through the Emergency Room

While admitting that the system is flawed, many people feel a sense of complacency, believing that the uninsured still have access to healthcare through emergency departments. The same quality of care, although inconvenient to obtain, is ostensibly available to the uninsured and the insured alike. Although this is true in theory, it is not realized in practice.

Multiple studies have shown that the uninsured receive less healthcare and have worse outcomes than the insured.[27,28] The Institute of Medicine reports that working-age Americans without health insurance are more likely to receive too little medical care and receive it too late; be sicker and die sooner; and receive poorer care when they are in the hospital, even for acute situations like a motor vehicle crash.[27] After adjusting for age, gender, smoking, and education, lack of insurance alone increases risk of death by 25%.[29]

This increase in mortality is independent of income level.[30,31] In other words, it is not being poor per se that leads to the adverse effects of being uninsured. The lack of insurance interrupts the patient-physician link, which is necessary to pursue good health. For example, diabetes cannot be cured but its effects on the body can be attenuated with strict glucose control and regular surveillance. In a study comparing insured and uninsured patients with diabetes, those lacking insurance had statistically lower rates of the recommended eye and foot examinations, vaccines, glycosylated hemoglobin screening, and checks of cholesterol level (Figure 3).[32] In patients with kidney failure, the uninsured start dialysis at a later stage of disease. Uninsured patients with HIV are less likely to receive effective drug therapy. The uninsured are screened less often for hypertension and are less likely to take blood pressure medication prescribed to them.[27]

Figure 3.
Diabetes management among insured and uninsured adults. Reproduced with permission from The Institute of Medicine.

The uninsured suffer poorer outcomes across surgical disciplines as well. Although the insured meet goals of 90% for cervical cancer screening, 70% for breast cancer screening, and 50% for colorectal cancer screening, a recent study shows that the uninsured fall well short of these goals, reporting 77%, 52%, and 29%, respectively.[31] As a result, uninsured people (or those with Medicaid) have more advanced cancers at the time of diagnosis and/or lower survival rates for breast cancer, colon cancer, cervical cancer, prostate cancer, and melanoma.[27,33,34] Patients without health insurance present more often with perforated appendicitis.[35] Individuals with ulcerative colitis and private insurance are more likely to undergo colectomy than their uninsured counterparts.[36] In a study of over 5000 patients with abdominal aortic aneurysms, those without medical insurance were significantly more likely than individuals with private insurance to suffer rupture.[37] There are a remarkable number of studies that draw a strong correlation between lack of insurance and poorer health.

Finally, the care administered to the uninsured in the emergency department setting is generally more costly to the national budget. When there is no preexisting patient-doctor relationship, the physician finds it more difficult to gauge a patient’s symptoms and to rely on follow-up. For example, a patient in the emergency room with mild pain on urination will undoubtedly be prescribed an antibiotic. This same patient, if seen in a generalist’s office, might be asked to leave a urine sample and will only be medicated if the culture confirms a urinary tract infection. Similarly, a patient with abdominal pain who might be observed with serial examinations will more likely undergo a computed tomography scan if seen first in the emergency department.

Relying on emergency rooms to provide primary healthcare for the uninsured is clearly a lose-lose situation. The patients themselves experience poorer health outcomes and lost wages, whereas the system suffers from the cost of overly conservative medical decisions.

Myth 4: A Free Market Is the Best Way to Get the Highest Quality Health Insurance for the Lowest Cost

Competition for goods and services generates maximum quality for minimal price. Policy makers often refer to this tenet when defending the multipayer system that exists in the United States. However, a free market only works when the consumer can use buying power to influence the price and quality of goods. In the current healthcare system, insurance is usually purchased by third parties (ie, employers), not by the consumer directly. Also, healthcare is not a discretionary desire; patients cannot delay purchase until prices drop. As a result, the consumer is not in charge of directing the market, and thus there is no feedback loop to increase quality or reduce cost. The current system is not a free market but instead a for-profit system driven by private insurance providers who are immune to the checks and balances associated with the free-market ideal. This system, which has been in place for decades, has led to increases in healthcare expenditures, poorer health outcomes, and less choice in providers.

Despite the lack of market influence, many consumers still prefer private insurance companies over government-run programs. There is a pervasive belief that the government is fundamentally inefficient, and that any private health insurance company will be better run than a public one. In truth, federal health insurance is much more cost-efficient than private insurance because of its ability to streamline costs. The existence of multiple private insurance companies increases the complexity of the system and administrative costs. At present, the US system is overrun by hundreds of for-profit insurance providers. Medicare’s administrative costs run less than 3%, whereas private insurance administrative costs are above 16% of budget (Figure 4).[38-40] These funds are spent on increasing revenue through aggressive marketing and billing, and on decreasing losses with programs such as utilization reviews (labor-intensive patient chart surveys performed to monitor billing practices). This practice does save individual insurance companies money, but there is a substantial overhead involved in the labor-intensive process. Furthermore, there is no evidence that utilization reviews decrease the national healthcare expenditure; they merely shift the financial burden away from the individual company.[41] Private insurance companies vie to cut their own costs without regard to the effects on the national healthcare expenditures.

Figure 4.
Administrative cost as percent of benefits. Reproduced with permission from The Commonwealth Fund.

Other nations with single-payer systems, where only one agency provides insurance, demonstrate significantly lower administrative costs.[42] In the United States, the total administrative costs alone were over $100 billion in 2002.[38] Furthermore, administrative costs are the fastest-growing component of national healthcare expenditures.[39,40] The largest savings lie in decreasing the administrative costs of insurance companies, which are inseparable from a for-profit system.

Finally, because of the special nature of healthcare, even a functional free market system would not result in high quality medical care for all its consumers. There is no incentive in a profit-driven market to attend to low-yield issues such as mental healthcare, preventive care, and chronic illnesses. These fields of medicine are considered money sinks; although clinically important, they do not generate profit for insurance companies. As a result, these areas tend to be neglected. For example, among the elderly, those in HMOs were more likely to suffer a decline in health over a 4-year period than those in a nonmanaged care plan (54% versus 28%).[43] In a profit-driven system, there is no mechanism for those without buying power to affect their care.

Myth 5: We Just Cannot Afford to Cover Everyone

This myth is founded on the belief that you have to pay more to get more; because the United States already spends too much on healthcare expenditures, the nation cannot afford to cover even more of its citizens. But clearly this logic cannot be entirely sound: every other industrialized nation in the world offers universal coverage, and all accomplish it with lower national health expenditures than the United States. Much of our nation’s healthcare money is spent on costs that arise directly from a multipayer system with limited coverage. It is vital to identify these spending sinks to find the funds that will provide for universal coverage. Painless cost control measures reduce costs without a resultant decrease in quality of care. Established targets for painless cost control include providing preventive care, training more generalists, controlling drug prices, decreasing unnecessary procedures, and reducing administrative costs in health insurance.

On average, Americans pay more for the same medications than do patients in other countries.[44,45] This disparity has been defended by the assertion that the United States supports the world by developing more new pharmaceuticals, and that these research costs therefore result in higher drug prices. This altruistic rationalization is unfounded: combined, the European nations produce on average the same number of new pharmaceuticals per year as the United States.[1] Drug prices can be lowered by preventing pharmaceutical companies from advertising directly to the public, by increasing use of generic drugs, and by collective bargaining through a centralized healthcare system.

Extreme end-of-life care is another area of financial inefficiency. Thirteen percent of Medicare’s total funds are spent on healthcare provided during the final 60 days of life. Although we pride ourselves on providing cutting-edge technology to our patients, there is clearly a point where technology no longer provides the best care for our patients’ needs. Lower-cost measures that increase the quality of remaining life should take precedence over high-cost measures that extend only the quantity, not the quality, of life.

Another potential method of cost control lies in reducing the number of unnecessary medical procedures performed in the United States. The rate of coronary angioplasty in the United States is 300% of the rate in Canada, with no associated increase in life expectancy. In 2002, 26% of all births in the United States were by cesarean section.[46] This rate is twice that seen in the next highest country.

Despite these figures, the cost problem in the United States is not solely a matter of overutilization. Other countries with far lower healthcare expenditures have longer hospital stays, perform more imaging, and prescribe more medications than the United States.[44,45] And so even more significant than overuse is overpricing. The United States spends more on healthcare without providing more services than other countries do. This suggests that the difference in spending is largely attributable to higher prices of goods and services: hospitals are more expensive and patients are treated more intensively.[9]

Higher prices for medical goods and services are generated by the incredible complexity of the US system. Whereas in other countries governments bargain directly with suppliers, in the US health system money flows from patients to providers through a vast network of middlemen. This highly fragmented system weakens buying power and results in overall higher prices of goods.

Real-life lessons on cost control can be gleaned from Taiwan’s experience. In 1995 Taiwan transitioned from a US-style system to a single-payer system with universal coverage, similar to the Canadian system. Before the switch in 1995, less than 60% of the population was insured. By 2001, 97% of the population had health coverage.[47] What is remarkable is that this marked expansion in coverage was accomplished with essentially no change in national healthcare expenditures.[47]

It is important to recognize that, in one form or another, we already pay for the health costs of the uninsured. The Institute of Medicine estimates that the value of covering the uninsured is $65 to $130 billion per year.[48] A substantial portion of the cost of universal coverage, approximately half, is already in the system and is being spent by the government on the healthcare costs of the uninsured.[49] It is a matter of redirecting funds to create the greatest good for the most people.


In the effort to ameliorate the problems with our healthcare system, several programs for reform have recently been launched. Although well intentioned, these proposals have limited ability to effect change. For example, employer-mandated plans, such as the one recently instituted in Massachusetts, require individuals to purchase private sector insurance coverage. Pretax dollars are used to purchase policies, and a small portion of the cost is covered by an employer contribution. Although this proposal forces individuals to purchase insurance (under fear of tax penalties), it does not guarantee the existence of affordable plans. Serious concerns exist over the long-term financial viability of the program. As enrollment costs continue to rise, insurance companies can give the illusion of affordability by excluding services. Forcing consumers to purchase stripped-down plans does little to improve the quality of healthcare.

Another attempt at reform is through the creation of health savings accounts. Individuals can shelter part of their income from taxes by making deposits in such accounts and using these funds toward medical bills. By definition such programs favor individuals who are in higher tax brackets since they have more to gain from diverting pretax dollars. Obviously, individuals in these higher tax strata are not the appropriate targets for healthcare reform.

Adding graduated increases in coverage, although politically more palatable, has largely failed to impact the lives of the uninsured. Incremental plans such as the State Children’s Health Insurance Program (SCHIP) are worthwhile but have failed to stem the ever-rising number of uninsured. Although SCHIP is responsible for a modest 25% decrease in the number of uninsured children from 1996 to 2005, it has come at higher costs than anticipated.[50] The program is facing funding shortfalls in several states.

In the end, these attempts at reform suffer from the same fatal flaw: they leave in place a multipayer, for-profit system. It is this infrastructure that is the Achilles’ heel of the United States healthcare system. The crux of effective reform is the development of a simple, streamlined system of universal coverage by a single payer.

Financial savings and good patient care flow naturally from universal coverage by a single payer. All individuals would have access to cost-saving preventive care through generalists. Fewer people would have to rely on inefficient and expensive emergency departments for their primary care. A single-payer system maintains the bargaining power necessary to contract with pharmaceutical companies to lower the costs of medications and biotechnology. In addition, evidence-based utilization standards could be defined to guide selection of medications and procedures.

The largest source of savings in reforming our system would come from cutting the administrative costs associated with multiple private insurance carriers. Competition between for-profit insurance companies drives cost-shifting and ever-increasing out-of-pocket payments for patients. As patients’ costs go up, more and more under-insured people are unable to afford healthcare. When many insurance carriers exist, they must compete for patients, and this competition is financed by massive administrative marketing costs. Many experts believe that universal coverage would likely pay for itself by creating a more efficient system.

Universal coverage and a single-payer plan could be created in different ways. Specific proposals have been published by various groups.[51] Universal coverage does not necessarily mean Medicare for all. Certainly, universal coverage could be provided by a single-payer government-run program as in Canada or the United Kingdom. Although this is the most straightforward approach, other countries have developed successful systems composed of private companies coupled with governmental organizations. For example, most of the German population receives its health insurance through sickness funds, which are nonprofit, closely regulated semiprivate organizations. The key is that these companies are required to cover a broad range of medical services and are prohibited from excluding individuals due to illness. Even in countries like Japan and Germany where health insurance is job-linked, times of unemployment, changes in workplace, and periods of self-employment do not create interruptions in healthcare coverage.

Finally, universal coverage and a single-payer plan do not exclude the option for purchasing additional private insurance. Supplemental insurance could exist that would cover nonessential medical care such as cosmetic surgery, private nursing, or even pay for expedited essential care. A new healthcare plan could be tailored to the preferences of the American population.

Myths have the ability to perpetuate themselves in the absence of supporting evidence. The myths concerning the state of the US healthcare system need to be actively dispelled, and quickly. There are already overwhelming data showing the dangers of uninsurance and the benefits of universal coverage. No further deliberation is needed. We must instead move on to making universal coverage a reality.