The Other Drug War

Richard Epstein*

Earlier this month, the Food and Drug Administration rejected the application of Biomarin Pharmaceutical to market its drug Kyndrisa™ (drisapersen) for use in the treatment of Duchenne muscular dystrophy. The FDA, as is often the case when it rejects a drug application, listed all sorts of technical reasons why the data presented were not sufficient to establish by respectable scientific means that the drug in question was safe and effective in its intended use. Without question, much evidence from the clinical trials revealed serious complications from the drug’s use, including blood-platelet shortages that were potentially fatal, kidney damage, and severe injection-site reactions. But the no-treatment alternative could prove far worse.

Duchenne is a rare but fatal genetic disorder that attacks only young boys, roughly 1 in 3,500 to 5,000. Typically, it first manifests itself between two and five years of age. With time, it relentlessly weakens the skeletal muscles that control movement in the arms, legs, and trunk. Most of its victims are wheelchair-bound between the ages of seven and 13. By 20, many have died.

The source of the problem is the absence from the cell of the key protein dystrophin, which is needed to control muscular movement. The proposed treatment is known as “exon-skipping,” which allows the body to produce the needed quantities of dystrophin. At present, no drugs are on the market to fix the genetic defect, but other drugs are under investigation. If the door is closed for drisapersen, it remains ajar for an unnamed drug produced by Sarepta Therapeutics, which will be reviewed by the FDA shortly. But, based on early rumblings from the FDA, it is likely that this drug too will be kept from the marketplace.

As might be expected, the decision by the FDA has left parent groups and their physicians tied up in knots. You can get a sense of their frustration by looking at the desperate petition of a mother whose son has the disease. Tonya Carlone wrote a public letter to the FDA pleading for the drug to be allowed on the market: “This medication has allowed my son, Gavin, to be able to ride a 2 wheel bike, to play on a soccer team, to run and play with his healthy 10 year old peers. Dr. Craig McDonald of UC Davis Medical Center and a Duchenne expert of over 30 years, has stated that he has never seen a boy with Duchenne at the age of 10 have as much function as Gavin.”

All irrelevant, says the FDA. But it’s critical to understand why parents like Ms. Carlone and physicians like Dr. McDonald are right and why the FDA is dead wrong. The FDA thinks the problem lies in the merits of a particular drug when it really lies in its deeply flawed approval process. That process got started in the early 1960s after Thalidomide was taken off the market for causing serious birth defects and deaths among children.

In its rush to judgment after the incident, Congress passed the 1962 Kefauver-Harris Amendments, which initiated the modern system of drug review and featured two major reforms. The first changed an FDA rule, on the books since 1938, that allowed drugs to reach the market if the FDA did not ask to review them within 60 days after they were ready for market. The second required the FDA to review these new drug applications prior to approval not only for safety, but also for effectiveness. Now, for a drug to reach market, it must be supported by “substantial evidence,” which is “evidence consisting of adequate and well-controlled investigations, including clinical investigations, by experts qualified by scientific training and experience to evaluate the effectiveness of the drug involved.” As the Supreme Court observed in 1973, the FDA’s “strict and demanding standards, barring anecdotal evidence indicating that doctors ‘believe’ in the efficacy of a drug, are amply justified by the legislative history.”

The FDA still treats the 1962 law as a triumphant moment that “revolutionized drug development” because its scientific safeguards ensure “that consumers will not be the victims of unsafe and ineffective medications.” And therein lies the problem. The FDA celebrates the supposed advantages of Kefauver-Harris, but it ignores the major monkey wrench that it has introduced into drug development. Its proclamation looks at only the benefits of the drug-approval process, but wholly ignores its attendant costs by tacitly assuming that the only drugs that the FDA keeps off the market are unsafe and ineffective.

Regrettably, in this, as in all other regulatory endeavors, there are two kinds of error. The FDA is keen to note the bad drugs that it keeps off the market, but it downplays the good drugs caught inside its web that never reach patients. In some cases, there are deadly delays in getting good drugs onto the market. As the period for drug review becomes longer, the cost of getting a drug onto the market rises. Taking into account both the time value of money and out-of-pocket expenditures, that cost has been estimated at $2.6 billion. That figure, of course, does not include the social losses from drugs that never get through to clinical trials because of the heavy obstacles that the FDA places in the path of their development—nor does it include the lives lost or compromised as a result of the FDA’s regulatory hurdles.

To make matters worse, the clinical trial format, which often works well for mainline drugs, like those used to control cholesterol and high blood pressure, is less effective for drugs intended to treat certain rare diseases, where too many potential drugs are chasing too few patients. Thus in the Sarepta study, the treatment and placebo groups each enrolled only 12 patients, which enabled the FDA to challenge the comparability of the two groups, and both parties to dispute individual patient responses. It doesn’t help that the FDA insists that individual patient and doctor reports do not count as scientific evidence because of their anecdotal nature.

Here too, the FDA errs. Individual variation in drug response is a common feature, and individual patients like Gavin Carlone are well advised to continue using the drug, no matter what the FDA’s overall evaluation of the drug’s features. The two forms of information should always be used in tandem. To be sure, the risk of adverse side effects can never be ignored, but neither can the deadly effects of Duchenne, for which there is no recourse. The FDA’s major blunder in this area is to rely exclusively on the statistical significance of various treatment options, ignoring all evidence from other sources.

Doing so became a true disaster. Under the 1962 law, drug after drug was removed from the market after years of successful use because the FDA decided that well-controlled clinical trials, for all their cost and limitation, were better than the long-term success of various drugs in the marketplace. It is for good reason that children, parents, and physicians are asking a different question from that which the FDA puts to itself: are they better off with the drug than without it? And when there is no alternative remedy, the answer is that they are better off with it.

At this point, the first question has nothing to do with abstract standards of scientific evidence. It has to do with the simple issue of who gets to decide what type of evidence, systematic or anecdotal, is most valid. American law today wrongly vests that power in the FDA on the ground that its expertise is needed on matters of public health. But Duchenne and similar genetic diseases are not communicable, as most public health concerns are. They are individual, not interconnected, tragedies.

It is simply mind-boggling that the FDA should extol its naked paternalism in keeping patients from becoming “victims of unsafe and ineffective drugs,” when it is cutting them off from their only chance of salvation. The usual judicial conceit is that FDA regulation just deals with economic matters, as if a child’s fight against a deadly disease is to be treated in the same fashion as a minimum wage or maximum hour law. Both types of regulation are unwise—but, that said, no one should ignore their differential impact, as rich and poor alike are throttled by the FDA.

Fortunately, there are two developments that can help reduce the FDA’s deadly grip on pharmaceutical progress. The first is that it cannot prevent off-label uses of permitted drugs. Under current law, once a drug is approved for any purpose, physicians can prescribe it for any other purpose they please. The FDA is not allowed to regulate the practice of medicine, and thus physicians can put these drugs to use without approval.

Needless to say, by every estimate, off-label uses are common, especially in cancer cases. These uses are not unstructured, as there are many voluntary institutions, such as the National Comprehensive Cancer Network, that collate the clinical experiences that the FDA ignores and make recommendations on which drugs should be used in what sequence and in what combinations to treat various kinds of conditions. The off-label uses commonly set the standard for medical malpractice for physicians in ways that bypass the FDA approval process altogether.

Yet the inability to get that first approval forces desperate people to beg the FDA and drug companies for a compassionate license for experimental use, which often comes too late, even for cancer drugs like Erbitux, which later made it onto the market. Indeed, this off-label process was especially important for thalidomide, for once it was allowed on the market under the brand name Thalomid to treat leprosy, its effectiveness as a miracle drug for cancer became apparent from its off-label use. Dr. Frances Kelsey of the FDA, who discovered the drug’s harmful side effects on children, should have issued stern warnings against its use in pregnancy. But, in retrospect, the FDA was wrong to ban the drug from the market.

The second major development deals with the ability of drug companies to present truthful information about off-label uses to physicians and patients. The FDA has long vigorously asserted that it is a criminal offense for a drug company to make any statements or distribute any information that tends to promote the use of an approved drug for an off-label use. Its view was that these statements made false representations that the drug had been approved for that purpose. The FDA could not prevent the identical statements from being made in medical journals or by individual physicians, but its ban on truthful company promotion obviously slowed down the adoption of off-label uses. Two recent cases have broken the FDA’s stranglehold on information on off-label uses: one in 2012, dealing with drug promotion by individual sales representatives, and a second in 2015, dealing with a company’s publication of the full record of its futile negotiations with the FDA to get formal approval for a new permitted use for a drug already on the market.

The FDA’s richly deserved defeat on this front is consistent with the general libertarian view that government agencies have ample power to prevent fraud and misbranding, but none to prevent speech that is neither false nor misleading. The current First Amendment law thus breaks down the FDA’s monopoly over information. But it does nothing to break down the FDA’s monopoly over the licensing of new drugs, which are all too often kept in limbo with endless haggling over clinical trials. That situation has to change now.

Congress should strip the FDA of its power to keep individuals from receiving drugs for experimental purposes before they receive FDA approval. If not, the courts should do it on constitutional grounds, holding that the current legal regime is an intolerable interference with personal autonomy. No one would ever let the FDA use its now formidable police powers to force people to take medicines that it thinks appropriate. By the same logic, the FDA should not be allowed to prevent informed patients, with the aid of their own physicians, from taking medicines that they think offer the best opportunity for their restored health and survival.

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

Why Democrats Could Win In 2016

Richard Epstein*

With the hotly contested Iowa caucuses only a week away, the level of political polarization is higher than it’s been in decades. Hillary Clinton and Bernie Sanders are veering sharply to the populist left as they each champion a brand of democratic socialism. On the Republican side, the ascendance of Donald Trump and Ted Cruz reveals a muscular conservatism that appeals to the far right. By November, this political divide will become more pronounced. No one will be able to say, to quote George Wallace’s oft-repeated remark, that there is not a “dime’s worth of difference” between the two parties.

One area of huge contention is domestic policy. The two Democratic front-runners are responding to a strong anti-market sentiment by pushing for higher taxes, more income redistribution, and more extensive regulation, targeting both large businesses and wealthier individuals. Income inequality takes center stage. The Republican candidates, by contrast, offer a more pro-growth agenda, but do so only in muted terms. If Republicans want to win in November, they must boldly articulate an alternative to the policies of Clinton and Sanders.

One of the most interesting political trends of recent years is the rise of progressive populism. These days, the American left uses the term “progressive” much more frequently than the word “liberal” to describe itself—which represents a symbolic shift. The word “progressive” consciously hearkens back to the era between 1900 and the election of Franklin Roosevelt in 1932. The Progressive Movement held that, in an industrial age, markets did not function adequately, that the antitrust laws did not meet the challenges of the modern industrial economy, and that the imposition of a large administrative state, technically expert but politically aware, was needed to contain powerful private sector firms. These are essentially the same claims Democrats make today, even though the economy is more regulated than ever before.

The chief targets of the Progressives were many key Supreme Court decisions that opposed their worldview. The Court’s 1895 decision in United States v. E. C. Knight blocked systematic regulation of the economy by ruling that manufacture, mining, and agriculture fell outside the limits of Congress’s commerce power, so much so that in 1906, the passage of the Pure Food and Drug Act only let the federal government regulate drug manufacture in the territories, even though it could block the shipment of adulterated or misbranded drugs in interstate commerce. Likewise, in the 1905 case of Lochner v. New York, the Supreme Court sharply limited the power of both the state and federal government to impose maximum hours or minimum wage laws on the economy. Three years later, the Court struck down a federal collective bargaining statute in Adair v. United States, and seven years after that it did the same with respect to state collective bargaining law. To top it all off, in Loewe v. Lawlor (1908), the Supreme Court unanimously applied the 1890 Sherman Act to secondary union boycotts. That decision sparked a furious progressive blowback in the 1912 Presidential election, after which Loewe was overturned by Section 6 of the 1914 Clayton Act that explicitly exempted unions from the operation of the antitrust laws.

Each of these decisions contributed to the enormous growth in productivity during the period before the New Deal. But as the ideological battles raged on, the entire edifice of the Old Court was eventually swept away in a series of decisions. Most notable were National Labor Relations Board v. Jones & Laughlin Steel (1937) and Wickard v. Filburn (1942). The two cases expanded the power of the federal government to regulate all economic activities. This meant that the 1938 revisions of the federal drug act allowed Congress to expand its jurisdiction to cover manufacture within the state. In addition, Jones & Laughlin Steel brushed aside all opposition to mandatory collective bargaining at the state and federal level. And any objection to the widespread reliance on administrative agencies to run the ambitious new state was ended by key decisions such as Yakus v. United States, which in 1944 sustained a criminal conviction for a violation of price control laws, and NBC v. United States, which in 1943 upheld the allocation of broadcast frequencies under the flaccid standard of the public interest, convenience, and necessity.

The contrast between the progressive and classical liberal views could not be more vivid. Justice Felix Frankfurter blanched at the thought that the Federal Communications Commission could be limited to setting the rules of the road for radio broadcast. He thought its delegated powers allowed it to determine “the composition of that traffic.” A year later in The Road to Serfdom, Friedrich Hayek presented the opposite vision. The genius of a highway system was that it set the rules of the road and allowed private individuals, each in pursuit of their own mission, to enter and exit as they saw fit—a privilege that was denied to commercial carriers under the Motor Carrier Act of 1935. By the end of World War II, the entire classical liberal synthesis had been decisively rejected.

The New Deal victory ushered in a period of consolidation and retrenchment. The Republicans swept back into power in 1946, and that year saw the passage of the Administrative Procedure Act, still very much alive today, to rein in the widely perceived excesses of administrative power under the earlier New Deal statutes. One year later, Congress passed, over a Truman veto, the Taft-Hartley Act that limited the scope of the original 1935 Wagner Act, but which by no means undid the earlier law. And so the stage was set for a longish period of American legal scholarship where the major task of academic lawyers was to make sure that the new administrative state operated with dispatch and good sense. The enterprise should not be belittled today, given the vast difference between an administrative state that tolerates corrupt union elections and one that seeks to give union members a fair shot at appointing the leaders whom they prefer.

Yet, what was notable about this activity was that Congress set the major parameters of regulation, and the courts took the faithful implementation of the law as their task. The mood of the time was captured in the famous mid-twentieth-century course entitled The Legal Process, led by Harvard professors Henry Hart and Albert Sacks, which taught students the virtues of an incremental adjustment of the legal system to the new challenges that it faced. Moving back to first principles was decidedly second-tier. Instead, both left and right made peace with the major innovations of the New Deal, which they sought to rationalize and understand.

Yet the turn away from theory could not last. Even with the apparent mid-century serenity, the rising pressure on race relations, culminating with the 1954 desegregation decision in Brown v. Board of Education, forced questions of theory to the fore on matters of race. And it was only a short time thereafter that this more reflective mode of inquiry made its way into other areas. The rise of the law and economics movement in the 1960s and 1970s was important because it provided various intellectual tools that mounted a challenge to the unquestioned sway of regulation over large swathes of the economy.

Thus in 1959, Ronald Coase wrote “The Federal Communications Commission,” an article with the temerity to argue that the market could do a better job of allocating spectrum than the FCC, which had never created coherent administrative criteria for frequency allocation. In 1962, James Buchanan and Gordon Tullock wrote The Calculus of Consent, which showed how easy it was for majoritarian institutions to fall prey to faction and intrigue. Writers like Henry Manne stressed the importance of the market for corporate control, which was inconsistent with the cozy relationship between big government and big business exemplified by the legal protection afforded to AT&T before its monopoly was broken up in 1982. And so it happened that competition, not regulation, became the new watchword for the overall legal system.

Today, the classical liberal challenge to the New Deal view of the world is stronger than ever. Underneath any piece of New Deal legislation lies the grubby reality of cartel formation in labor and agricultural markets. Fortunately, the high court is taking halting steps to return to the pre–New Deal state of affairs now that Friedrichs v. California Teachers Association may well undermine coerced union contributions and Horne v. Department of Agriculture has already punctured the invulnerability of the raisin cartel.

Yet the progressives have simultaneously revived New Deal principles by advocating for greater regulation to jump-start the flagging economy. In addition, the rise of “critical legal studies” and an ever-greater concern with race, gender, and inequality has moved much of the legal community further to the left. At the same time the law and economics community fractured, giving rise to strong pro-regulatory movements.

None of this conceals the vulnerability of the progressive agenda. Of course, the economy is in vast need of prompt and powerful improvement. But in order to prescribe a cure, it is necessary to diagnose the underlying ailment. Back in 1935, it was hard to find any coherent intellectual opposition to the ever-greater surge of government power. It was all too common then to confuse the size of a firm with its market power, and to assume that any restriction on the contractual freedom of the employer necessarily improved the position of the employee. Anyone who reads the Findings and Policies of the NLRA should be struck by its misguided and outdated argument that its vast administrative apparatus could increase the efficiency of the economic system or the real income of employees.

The tragic feature of today’s populist revival is that it uses obsolete economic and political philosophy to propel the case for additional layers of regulation. These are likely to prove more counterproductive than the initial New Deal blunders because the second round of regulation comes off a diminished economic base. The current slow growth environment and the reduction of labor market participation only compound the miseries of the poor and the vulnerable, whom the progressives want to protect. Yet as long as the Republican candidates fail or refuse to address this challenge, the odds will only increase that the next president will be deeply committed to policies that will cement the Obama legacy of economic stagnation and political turmoil. The intellectual tools to combat these false hopes are available. Let us hope that the Republican nominee will deploy them to maximum advantage. 

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

The Uneasy Legacy Of Henrietta Lacks

Richard Epstein*

Recently, Rebecca Skloot, author of the major best-seller The Immortal Life of Henrietta Lacks, wrote an impassioned plea in the New York Times, urging people to support sweeping revisions to the Federal Policy for the Protection of Human Subjects, which is now under active review in the Department of Health and Human Services. These revisions are directed to the rules that now govern the collection and use of “clinical biospecimens,” which include all the organic substances that are routinely removed from the human body as a consequence of surgery, childbirth, or even normal testing. At first appearance, these materials look like waste products best disposed of in a safe and sanitary manner. But, in fact, they are invaluable in medical research to treat cancer and a host of other genetic and life-threatening diseases.

Without question, the most dramatic illustration of this process involves the so-called HeLa cell line derived from the cancer cells of Henrietta Lacks, an African American tobacco farmer who died of cancer in 1951 at the age of 31. When she was treated at Johns Hopkins Medical Center, her cancer cells were given to the pathologist Dr. George Gey. Gey found to his amazement that, unlike other cancer cells, Lacks’s cells were immortal in that they could be cultured and reproduced indefinitely. Within three years of her death, her cell line helped develop the Salk polio vaccine. In the 65 years since Lacks died, about 20 tons of her cell line have been reproduced and distributed worldwide for medical research.

But just what did Lacks and her family get out of the arrangement? At the time, nothing. In accordance with the then standard practice, the Johns Hopkins researchers collected and used her cells without her knowledge or consent. In more recent years, she has received countless public honors for her contributions to medical research. But, at the same time, the many researchers who worked with her cell line collected substantial royalties from the patented cells and the devices developed with their assistance.

So should Lacks and her family have received some fraction of that wealth? The issue was addressed in Moore v. Regents of the University of California (1990), in which the California Supreme Court held that John Moore did not have property rights in his distinctive cell line. Moore had hairy-cell leukemia, which led to the removal of his “grossly enlarged” and diseased spleen, which proved to be a veritable treasure trove for medical research. Moore’s case did not involve the mere use of cells drawn from his body after his death. Instead, following his initial surgery, the doctors consistently lied to Moore about the supposedly medical purposes for which they collected his various body cells and fluids, which they then used to create a patented cell line of immense value.

Faced with these novel facts, the California Supreme Court issued a split decision. It held that the doctors who took various bodily materials from Moore had not converted his body to their own use, on the odd ground that he did not own the cells after they left his body. Why they could not assert ownership of them before surgery was left unexplained. But, as a way to offset that decision, the Court held that the doctors did breach their duty of informed consent to him. However, this did not allow Moore to recover any royalties from the doctors or any other downstream parties who benefited from using his cell line.

As Skloot and others insist, there is something deeply odd about letting doctors and hospitals profit from cell lines without paying a single dime to the patient from whose body they were obtained, and without obtaining the patient’s permission.

But what’s the best way to correct this odd state of affairs? To people like Skloot, the answer is that all medical researchers should be required to obtain “informed consent” for any research done with a biospecimen, “even if,” as the government proposal puts it, “the investigator is not being given information that would enable him or her to identify whose biospecimen it is.” Such consent would not need to be obtained for each specific research use of the biospecimen, but rather could be obtained using a “broad” consent form in which a person would give permission for future unspecified research uses. Skloot claims optimistically that these people will “probably” say yes, so that research could go on largely as before—but she thinks, as a matter of fundamental fairness, that they should be asked.

There are, however, some powerful objections against the use of the informed consent standard. The consent requirement will result in a vast increase in administrative costs. At a minimum, the new standard will usher in a huge expansion in the number of forms that have to first be explained to, and then filled out by, every patient whose bodily materials are needed for medical research. Because large-scale genomic research is now so common, this means obtaining consent from many thousands of patients. Informed consent would severely slow down such research.

We already have extensive experience with the nightmarish consent requirements under HIPAA (the Health Insurance Portability and Accountability Act of 1996), which created a massive government apparatus for deciding whose consent is needed and when for the myriad uses of routine medical records. The privacy interest with respect to bodily fluids and tissues, especially after death, is far weaker. Why impose an apparatus that costs billions to implement, when there is no real evidence that the current system is broken? After all, the use of these waste products does not affect the patient’s health, well-being, or treatment, even as it facilitates groundbreaking research.

A larger issue arises if an individual chooses not to sign a blanket consent form for the use of his or her biospecimens. Can the patient decide to not sign the broad form, and limit the use of his or her biospecimens only to some but not to all purposes? If consent is originally given, can it thereafter be revoked, perhaps on the ground that background disclosures were not sufficiently precise? Can family members intervene and claim that, with minors and unconscious people, the patient is not competent to give consent? Is a hospital or physician entitled to refuse to treat a patient who does not acquiesce? May they impose extra charges on them to offset their research losses from not being able to use their biospecimens?

This complex game is not worth playing. The simple answer to all of these endless complications in the routine cases is this: each patient coming into the hospital gets the benefit of the accumulated knowledge acquired from previous patients whose biospecimens have been put to good medical use. It is not too much to insist that patients in routine cases be required to continue to participate in the virtuous circle. There may not be consent, but just compensation is supplied in-kind to all patients who benefit from the medical advances made possible by the research conducted using biospecimens.

At the same time, this generalized form of compensation does not work well with the unique cases like Lacks or Moore. The magnitude of their individual contributions should be compensated somehow. But nonetheless, it does not follow, as Skloot insists, that individual consent for using these biospecimens should be required. With transactions this large, it seems highly unlikely that most patients who have been informed of the benefits that can be derived from their biospecimens would happily sign them over to a research hospital free of charge. Rather, they or their guardians would be well advised to hold out for remuneration as a condition of allowing any of their biospecimens to be used in medical research. Those patients could receive large windfalls without bearing any of the economic and development-related risks that the research hospitals bear.

Outside the medical area, the law has long been reluctant to allow any party to exert this form of monopoly power without legal constraint. Starting with the writings of the British jurist Sir Matthew Hale in the late seventeenth century, the common law has held that common carriers with a monopoly business were “affected with the public interest,” and thus not free to charge whatever they choose for their services. Rather, they must restrict themselves to reasonable and nondiscriminatory rates, commonly called RAND. The system did not require public utilities to supply their services for free, but allowed them a risk-adjusted competitive return on their initial investments, while denying them a monopoly profit.

In modern intellectual property law, RAND rules have been carried over to standard-essential patents, which allow competing companies to share information over an integrated network system. Choosing the right measure of compensation for these patents is never easy, but it is not impossible—and this inquiry may well be easier for biospecimens, which should be made available for medical research for a reasonable royalty interest on the basic research patents, perhaps fixed as a matter of law at a modest rate of, say, five percent. Others may prefer to use compulsory arbitration to resolve disagreements over royalty rates. But, critically, both these proposals explicitly reject Skloot’s consent model, which poses a threat to the entire medical research enterprise.

The problem becomes even more acute when, as with Moore but not Lacks, a live patient is asked to contribute further biospecimens to medical research. Usually, the requested intrusions in this case are no greater than those in which the specimens are collected for normal diagnostic purposes, so it is a close question as to whether these transactions should be done solely on a voluntary basis, given the hold-out risk. Alternatively, it is possible to invoke the same compulsory purchase regime that works best for normal waste products.

For the moment, it’s best to keep in place whatever regime is now used. My fear, however, is that any movement toward demanding consent for using biospecimens will undermine the willingness of ordinary patients to participate in medical research. Of course everyone should be uneasy with forced exchanges, and no one should think that individual consent is not needed for ordinary medical treatment. But when transaction costs get high, and monopoly power becomes a serious risk, the model of just compensation in forced exchanges should prevail. It may seem odd to apply standard industrial organization models to biomedical research. But the parallel is precise. The many doctors and hospitals that have vehemently resisted the new proposals that Skloot endorses may not understand the finer points of monopoly power and rate regulation. But they are right to reject unwise proposals to demand broad consent for the use of biospecimens in medical research.

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

End The 'Agency Shop'

Richard Epstein*

On Monday, January 11, the United States Supreme Court will address in Friedrichs v. California Teachers Association two long-standing issues about the status of public unions that have vexed the Court for at least 60 years.

The first of these questions asks whether the Court’s 1977 decision in Abood v. Detroit Board of Education should be overruled. That case allows for so-called “agency shop agreements,” under which a union is entitled to collect from nonmembers fees that cover the cost of its collective bargaining expenses with the government employer, but not those expenses that are said to be properly chargeable to the political activities of the union.

The oft-stated rationale behind Abood, which has been affirmed on multiple occasions, is that the union should be allowed to collect the first set of expenses in order to make sure that non-union members do not “free ride” on the union’s efforts to improve their working conditions for all members of the bargaining unit. Thus in Lehnert v. Ferris Faculty Association, conservative stalwart Justice Antonin Scalia defended the union case as follows: “Mandatory dues allow the cost of [the union’s bargaining] to be fairly distributed; they compensate the union for benefits which ‘necessarily’—that is, by law—accrue to the nonmembers.”

Under the conventional wisdom, moreover, that union power is offset by the so-called duty of fair representation, under which a union, as the exclusive bargaining agent for workers within a defined bargaining unit, must also protect the dissident workers from whom it collects dues. But that quid pro quo logic, Abood holds, does not apply to general political efforts of unions to change the background law and attitudes. To force dissident workers to contribute to these efforts is to require a form of compulsory speech that is blocked by the First Amendment guarantee of freedom of speech.

The second-tier issue in Friedrichs is more technical but also vital. Right now, the law puts the burden on individual workers who disagree with the union’s political activities to opt out on an annual basis. The petitioners in Friedrichs want that burden to be reversed so that the union has, again on an annual basis, to persuade individual members to opt in—that is, to contribute dues. The power of inertia is enormous, as it is always costly to persuade people to move away from any baseline that has been set by legislation. Indeed, we know both from the operation of right-to-work laws and reforms of collective bargaining, such as Wisconsin Act 10, that union membership and union behavior are heavily affected by any changes in the overall structure of collective bargaining laws. The vast attention given to Friedrichs reflects the fact that it is a very important case. However it is decided, it will have a huge impact on the future development of American labor law.

There is no question that Abood has been under siege since the Court’s 2012 decision in Knox v. Service Employees International Union, dealing with highly technical issues, which made it clear that many of the conservative members of the Court were unhappy with the continued application of Abood in the context of public unions. In figuring out how the Court should respond in Friedrichs, everything turns on the starting point of the analysis. The defenders of the National Labor Relations Act start by treating collective bargaining as the highest social good and thus work against narrowing any exceptions to its scope. Most of the modern attacks on the law take as given the constitutionality of collective bargaining statutes and then argue that what matters today is not the low level of protection that is afforded to economic liberties, but the far higher level of scrutiny that necessarily attaches to First Amendment claims.

But why start there? The threshold question is: Does it make any sense to allow for exclusive collective bargaining under the auspices of the National Labor Relations Board? To answer that question, it is useful to examine the free-rider assumption that lies at the core of the Abood synthesis that is now under attack. The basic argument of union leaders is that a rising tide raises all ships and does so equally. To union supporters, the ability of workers to engage under Section 7 of the NLRA in “concerted activities for the purpose of collective bargaining or other mutual aid or protection” is an unassailable premise of the modern law.

Unfortunately, that rosy vision is incorrect for two reasons. The first deals with successful unions, namely, those whose concerted activity raises wages and lowers overall consumer welfare. Historically, the dangers of successful union activity were well understood. Thus in the great 1908 case of Loewe v. Lawlor, commonly known as the Danbury Hatters’ Case, a unanimous Supreme Court held that secondary boycotts, i.e., those against firms that did business with any firm that a union wished to organize, were per se illegal contracts in restraint of trade under the 1890 Sherman Act. Progressive politics overturned this result by statute in Section Six of the 1914 Clayton Act, which announced that “the labor of a human being is not a commodity or article of commerce.”

But it is not as though the legislation prohibited all contracts. Quite the opposite, it exempted unions from the antitrust law. But to this day, its defenders have not explained why concerted union action is any better than concerted actions by business. Successful union activity should be subject to the same scrutiny as any other horizontal agreement when challenged as a contract in restraint of trade.

A key assumption of modern labor laws relates to the supposed internal cohesion of union membership. Thus the free-rider argument takes as its implicit premise that union workers are not heterogeneous in their preferences, but in fact share the same views on all negotiating issues. Given that strong, but wrong, assumption, the free rider argument makes sense. The union that is attentive to the preferences of its members is necessarily equally attentive to the “identical” preferences of its nonmembers as well. On this stout assumption, the union can only survive if nonmembers are made to bear their fair share of the collective costs. Otherwise workers face a giant prisoner’s dilemma game where everyone loses the benefits of collective action because no one is willing to bear his or her fair share of the cost. (Just this public goods argument motivated Mancur Olson’s immensely influential 1965 book, The Logic of Collective Action, which explains forcefully why taxation is necessary to provide standard public goods like law and order.)

But the argument becomes much more difficult to sustain whenever preferences are not homogeneous. Thus, it is easy to explain why every society enforces prohibitions against murder, but far more difficult to explain whether it should intervene in foreign wars whose merits are subject to intense differences in public opinion. We have to tolerate a collective response to that issue, because the United States can pursue only one foreign policy. But in the labor context, where the same heterogeneity of preferences is evident, there is no need to appoint any union the exclusive representative of all the workers. Union selection takes place through bitterly contested elections. Sometimes, a majority of workers favor unionization, but differ in their choice between two competing unions. In other situations, many workers may deeply oppose union representation precisely because they believe it will make them worse off.

Looking at the duty of fair representation, which was so critical to the Abood synthesis, brings this point home. The duty was created by the Supreme Court’s 1944 decision in Steele v. Louisville & Nashville Railroad. Why? Because Steele involved racial injustice that occurred once the leadership of a majority-white union, organized under the collective bargaining provisions of the 1926 Railway Labor Act, got some 21 railroads to amend their collective bargaining agreements to exclude black firemen from their traditional jobs. Chief Justice Stone refused to let that action go unchallenged, and by judicial fiat imposed on union leaders a much-needed duty to represent their minority workers fairly.

Steele teaches us two important lessons. First, it is a lot easier to announce a duty of fair representation than it is to implement it. Some thirteen years later, the black firemen were back in the Supreme Court in Conley v. Gibson, still seeking fair representation. The blunt truth is that these black firemen were far better off when represented by their black leadership in their traditional all-black union, so great were the conflicts when they were integrated into the other union.

Second, most disputes are not touched by Steele because they have no racial overtones. Thus, shortly after the passage of the NLRA, a major conflict emerged between the American Federation of Labor, which favored craft unions, and the newly minted Congress of Industrial Organizations, which wanted negotiations to take place on a plant-wide basis. This protracted conflict was over the distribution of the gains from unionization. Craft unions could hold out for higher wages for their specialties, gains that in any plant-wide setting would be spread among the large number of unskilled or lesser-skilled workers. Different voting arrangements led to different outcomes.

The duty of fair representation places no effective constraints on how unions and employers divvy up the benefits among different groups of covered workers. Indeed one of the reasons why unions in the private sector have fared so badly is that they cannot easily overcome these problems. Within the public sector, the dynamic is usually different because state law often guarantees unions their representative status without having to persuade individual teachers to work for them. At this point, the choices left to antiunion teachers are sharply limited when they cannot just opt out. Some of these teachers may wish to free ride on union efforts.

But others do not, for they correctly perceive that they are worse off with union representation. Thus excellent teachers often favor merit raises. They oppose seniority preferences that tie wages and job protection to years of service. They bridle under rules that give weak or incompetent teachers outsized protection against dismissal. Yet these unhappy teachers cannot quit because they know that all other public school systems are burdened with similar rules. It is therefore perfectly sensible for them to prefer no union at all to one that gives them union representation free of charge.

This simple point undermines Abood, insofar as the case rests on the supposed common interests of all workers. As a matter of first principle, unions are the source of two evils: unified, they wreak harm on public services; divided, they offer shabby treatment to their dissident members. In an ideal world, the Supreme Court would use Friedrichs to dismantle mandatory collective bargaining root and branch. But short of that, what the Court should do, and do unanimously, is set dissenting workers free from union domination by striking down all agency shop provisions. 

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

Is Obamacare Sustainable?

Richard Epstein*

It has been over five years since the Patient Protection and Affordable Care Act (ACA) was passed into law on March 23, 2010. Today, the major legal challenges are over. In 2012, the Supreme Court sustained the power of Congress to enact the law in NFIB v. Sebelius. Three years later, it held that the ACA allowed for the payment of subsidies for all applicants who enrolled through either the state or federal exchanges. Chief Justice Roberts wrote both decisions. They will not be overturned.

But if the legal battle over Obamacare is over, the economic battle has just begun. The issue here is simple enough. Can the plan, which has weathered the legal challenges, survive in today’s highly dynamic economic market? The prospects are uncertain, to say the least, and some clear signposts indicate that the answer is no. The ACA cannot succeed simply by securing first-time enrollments in its exchanges. Insurance policies are subject to annual renewals, and the first year of operations will shape how the second year goes.

On the insurer side, it has proved unclear whether the premiums collected have been sufficient to cover the incurred losses. No one yet knows how the various new types of coverage required by the ACA will be priced going forward.  For plans now running a deficit, belts have to be tightened.

On the insured’s side, a year’s experience could lead many customers to think that they pay too much for benefits they would rather not have. The point is especially true of people who are both healthy and young, from whom Obamacare exacts a heavy cross-subsidy that they won't pay year after year. Market rate insurance will always contain differentials that reflect these risk differences.  Liberals may decry the supposed inequity, but in so doing they overlook the decisive advantage of market rate plans.  They are stable in ways that Obamacare is not, because customers will not leave plans from which they derive a net benefit unless they can get a better alternative.

These forces are now exerting tectonic pressures in many, but not all, states. Across the country, many insurance companies are increasing their rates between 25 and 35 percent as they adjust to the “shock waves set off by the Affordable Care Act” in the marketplace. But the full story is necessarily far more complicated because a lot more goes into providing an insurance policy than setting the annual premium. Equally critical are the rules on coverage, how high the deductibles and copays are, where the plan facilities are located, and what the options are in the choice of physicians. And, of course, there is the tantalizing question of whether the current round of increases is a one-shot adjustment, or the onset of a consistent trend that will repeat itself in future years.

Without detailed information, it is not possible to assess the peculiarities of the individual plans. But it is possible to predict that the slow death of Obamacare has become more likely. Most obviously, any premium increases within the exchanges can lead potential and current enrollees to direct their healthcare dollars elsewhere, perhaps by doing without any insurance at all or by signing up for Medicaid. Ironically, it will be hard to win these defectors back with advertising or improvements in plan coverage, because these options are tightly constrained by Obamacare, which by design limits competition only to the choice of various care levels. Ordinary markets allow for innovation on all dimensions of service, and thus have a resilience that is all too lacking in Obamacare.

Here are some instructive results. As of early June, some 1.5 million people had dropped out of the exchanges by failing to pay premiums, reducing the number covered from a February 2015 high of 11.7 million enrollees to 10.2 million four months later. That figure was still a substantial increase over the 6.3 million people insured at the end of 2014. But in the next three months, the downward trend continued, so that by September 2015 the number of enrollees had tumbled to 9.9 million, which was still above the administration’s goal of having 9 million on the rolls by the end of this year. But the current negative trend line is all the more striking given that some 8.3 million subscribers receive a subsidy of about $270 per month, which works out to a program-wide subsidy of roughly $2.2 billion per month, or about $27 billion per year.
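
A back-of-envelope check of that subsidy figure, using only the enrollment and per-subscriber numbers reported above (a minimal sketch in Python; the variable names are illustrative, and the totals are simple arithmetic rather than official HHS or CBO estimates):

```python
# Back-of-envelope check of the program-wide subsidy figure cited above.
# Inputs are the reported numbers: 8.3 million subsidized subscribers and
# roughly $270 per subscriber per month. The outputs are simple products,
# not official estimates.

subsidized_subscribers = 8_300_000   # reported subscribers receiving a subsidy
avg_monthly_subsidy = 270            # reported average subsidy, dollars per month

monthly_total = subsidized_subscribers * avg_monthly_subsidy
annual_total = monthly_total * 12

print(f"Monthly program-wide subsidy: ${monthly_total / 1e9:.2f} billion")  # ~$2.24 billion
print(f"Annual program-wide subsidy:  ${annual_total / 1e9:.1f} billion")   # ~$26.9 billion
```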

At this point, most of the gain in coverage, about 71 percent of the total, has come through the expansion of Medicaid, which in general offers inferior care to that provided by private insurance carriers. The decline in enrollees on the exchanges represents a displacement of ordinary people from insurance plans that they chose for those which come with a government stamp of approval.

The second straw in the wind is the looming failure of the private co-op plans that were intended in 2010 to offer some stiff competition to the commercial healthcare plans that were otherwise expected to dominate the overall system. The most recent casualty—the ninth to date out of a total of 23—has been Tennessee’s Community Health Care Alliance, with some 27,000 subscribers now forced once again to find coverage in order to stave off payment under the Obamacare individual mandate. Most, if not all, of the remaining 14 plans are also likely to go belly up.

The recent pattern of events raises two questions. First, how did we get here? Second, where do we go next?

The difficulties in the healthcare exchanges can be traced back to the original design choices of Obamacare. Its fundamental conceit was that a federal program could allow for the delivery of higher quality care at lower cost than could be obtained from ordinary private health insurance carriers. To make good on that claim, the centerpiece of the ACA was its benefits package, which offered a list of ten essential benefits covering ambulatory, emergency, laboratory, pediatric, preventive and wellness services, maternal and newborn care, mental health and substance abuse disorder, prescription drugs, rehabilitative and habilitative services, and devices. Some of these items, like mental health coverage, are exceedingly hard to provide because of the difficulty of measuring and monitoring diagnosis and cure. Others, like habilitative services, were rarely if ever offered in the voluntary market.

It is, however, one thing to prepare a list of services. It is quite another thing to specify the level and types of care required in each of the separate categories. Today, officials at the Department of Health and Human Services who have no bottom-line responsibility make those choices. Their tendency is to aim for the moon by requiring coverage that private firms would never offer voluntarily. After all, private firms respond to adverse selection, whereby people who are in greatest need of coverage will flock to the richest plans. They are also attempting to control moral hazard, which implies that the availability of insurance increases the likelihood of the occurrence of the insured event. The government takes it as an article of faith that private plans are inefficient. But that unfortunate mindset leads to additional government oversight. The upshot is reduced business flexibility coupled with an additional layer of administrative costs.

In principle, the burdens could become lighter over time as firms learn how to adjust to the government programs. But if the government continues to push hard, that won’t happen. It is therefore disheartening to know that Obama’s response to the high premiums is the following: “If commissioners do their job and actively review rates, my expectation is that they’ll come in significantly lower than what’s being requested.” Idle talk like that from a ratemaking amateur will only aggravate the problem. It is precisely the risk of insufficient rate hikes that undermines the stable healthcare environment needed for these markets to work. The only way in which to reverse these pressures is to go for partial deregulation.

The only difficulty with a proposal for market liberalization is that it cannot happen in an Obama administration. Given its inflexible commitment to heavy government involvement in the healthcare market, the likely response of the Obama administration to its own failure is to renew its call for a single-payer plan that the less radical Democrats of 2010 were not prepared to endorse. But the utter decimation of the centrist bloc of the Democrats in Congress means that the new claim will be that the exchanges and co-ops failed because they did not go far enough. Sadly, however, any single-payer plan will fall prey to all the pathologies of over-ambition, given that monopoly government agencies cannot manage the complex undertakings that have already frustrated the implementation of the more modest Obamacare plan.

In the absence of any meaningful market role for private healthcare insurers, it will be no longer possible to benchmark public norms to sensible standards of private behavior. But on this issue, progressives believe in the one-way ratchet: When markets fail, turn to government regulation. When government regulation fails, turn to more government regulation.

In the midst of this chaos, the fundamental truth about the superiority of competitive markets is effectively shielded from view. Markets work because they match supply with demand. Consumers vote with their feet and their wallets. They fully know that there is never a perfect balance between the amount paid and the package of benefits secured. But they also know, or at least intuit, that relevant trade-offs between price and quality of service—whether in the choice of doctors, locations, coverage, or deductibles—must be made at the margin. Governments are tone deaf to marginal adjustments.

There is, however, this ray of hope: The high level of deductibles and the reduced level of coverage could help stronger market institutions emerge from the ashes of today’s government failures. As mandated health plans start to crash, people will be left to their own devices in the marketplace. Some will opt for concierge care, whereby they cut out the government middleman and pay a direct monthly fee to doctors for the privilege of having direct personal relations. As with all such models, as the usage increases, the price starts to drop to the point where it could easily make sense to avoid government plans altogether.

And if that does not work in all cases, there are all sorts of new walk-in clinics like City MD that offer cheap and efficient healthcare with plans that anyone can understand: “Need A Doctor? Just Walk In. No Waiting & Open 365 Days A Year. Over 40 Locations. No Appointment Needed. See A Doctor Immediately.”

Back in 2010 no one thought that healthcare markets were perfect. But we took the wrong fork in the road. Instead of opting for systematic deregulation in healthcare and insurance markets, we opted for cradle-to-grave regulation. We need to have the courage to recognize and correct that mistake now.

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

Our Fickle Fed

Richard Epstein*

Two related topics have defined our news cycle of late. The first is the deep populist discontent in the face of prolonged tepid economic growth rates and anemic labor markets. The second is that the Federal Reserve is once again uncertain about whether to raise the interest rate above the near-zero level where it has lingered since December 2008. As recently as September 24, 2015, Fed Chairwoman Janet Yellen warned the financial markets that the low rates would not be kept forever, and that firms should adjust by gradually increasing wages. But less than one month later, her plan seems to have been derailed by the disappointing performance in wages, job creation, and consumer spending. The new thinking is that the Fed wants to wait until prices and wages firm up before it begins raising rates. It may wait until 2016 or later. Some experts, like former Fed advisor Andrew Levin, recommend that interest rate increases be postponed until the labor markets are nursed back to health.

That analysis is necessarily complicated at the outset because it is difficult to estimate the level of slack in the labor markets. Right now the unemployment rate stands at about 5.1 percent, which could be read as relatively full employment. That in turn means that wage levels should start to rise as employers compete for new workers. But the low rate of employment growth in the past several years points to serious economic underemployment, well captured by the consistently low levels of labor market participation. The 5.1 percent official unemployment rate does not include those people who have quit looking for work because their job prospects are grim. Nor does it include the many workers with only part-time employment who would be only too happy to take full-time jobs if they were available.

From this baseline of conventional wisdom, Levin urges that we delay any increase in interest rates until the slack is absorbed so that employers will be willing to assume the costs of higher interest rates. But just when will this take place? Yellen takes the conventional line and assumes that the recent round of bad news is best attributed to temporary factors, including weak energy prices and low import prices. But this is wishful thinking. The low energy prices are in part a function of improved fracking technologies that are only going to improve with time. And they could still go lower if at long last the U.S. allows the export of crude oil, and OPEC members maintain their current high production levels. Yellen also believes that the flood of imports attributable to the relative weakness of the Euro is another factor that explains our weak economy. But this, too, is not likely to change any time soon.

Even if both of my predictions are proven wrong, temporary economic impediments are never in short supply. A cold spell, or hurricane, or political disruption could count as a short-term factor. None of these temporary factors precluded the far higher rates of growth that the United States enjoyed during much of the post-World War II period. But now, these factors always seem to count.

It’s likely that the slack in labor markets has become the new normal, which means that interest rates will never be allowed to rise. We have now tried to boost the economy and labor markets with a zero-interest rate strategy for close to seven years, with nothing to show for it. The belief that low interest rates will make the economy stronger is the triumph of hope over experience. In my view, our sorry state of affairs is no surprise. There is no theoretical reason to believe that any interest rate manipulation could repair fractured labor markets.

Let’s start with the simple observation that if low interest rates work in hard times, why don’t they work in good times? The answer is that artificially low interest rates are bad at all times. The stated rationale is that these interest rates are needed to stimulate borrowing and investment. Of course, low rates induce people to borrow. But from whom? It can only be from the people who will be less willing to lend at low interest rates. The affected classes include the large number of retired individuals whose ability to maintain their standard of living is reduced by the low rates of interest that they are likely to receive. They may now have to cut their consumption in the short run in order to make ends meet, just as their lower savings translate into reduced pools of investment capital. The entire strategy is just an ill-disguised subsidy to borrowers from lenders, which at best is a zero-sum transaction with little or no net economic effect.

Indeed a fuller evaluation is more pessimistic. Individuals and firms must borrow and invest over the long term. They cannot introduce major capital improvements in a day, a month, or even a year. Every time the Fed flips in one direction and then another, the added uncertainty negatively impacts borrowers, lenders, savers, and consumers alike. That added uncertainty reduces participation on both sides of any market, consistent with the low long-term levels of growth.

The unbroken record of disappointment should give rise to a fresh start. The first place to look is the Fed’s own mission statement, which lists as its first duty “conducting the nation’s monetary policy by influencing the monetary and credit conditions in the economy in pursuit of maximum employment, stable prices, and moderate long-term interest rates.” The difficulty here is that the Fed cannot be all things to all people. There is no question that only the Fed can make adjustments in interest rates that will stabilize the money supply and eliminate one measure of uncertainty in lending, labor, and sales markets—thus increasing the ease of voluntary business transactions.

Nonetheless, the successful execution of this part of its mandate is inconsistent with the Fed’s efforts to create maximum employment. There is no way that the Fed can pick a single interest rate that discharges both functions if stable money requires a market rate of interest and maximum employment requires a low rate of interest. There is no reason why the Fed should even try to serve two inconsistent objectives. It is striking that virtually none of the discussions of the Fed’s role in the labor market asks this simple question: What other factors influence efficiency in labor markets?

Why that reticence? The simplest explanation is that the Fed thinks that the interest rate is close to an all-powerful determinant of labor market health. But this view reflects a total lack of awareness of the massive impact that multiple forms of labor market regulation can have on job creation and wages. Instead of relying on complex macroeconomic calculations, let’s take a hard look at some of our microeconomic realities. It is no secret that President Obama thinks that we need a wealth of government interventions in the economy to improve the position of the middle class. As he said at the White House Worker Voice Summit: “Labor unions were often the driving force of progress,” and “the middle class itself was built on a union label.” It is also no secret that the current populist agenda favors a push of the minimum wage to $15, mandatory paid-leave policies, the repeal of right-to-work laws, treating franchisors like McDonald’s as employers subject to the National Labor Relations Act, and expanded overtime coverage under the Fair Labor Standards Act.

The President’s pro-union policies are deeply regressive. They hearken back to a clunky and rigid legal regime from an older economy defined by large assembly lines. That regime cannot keep pace with the dynamic technological innovations that define today’s economy. The President’s harsh rhetoric may play well to a gallery of like-minded enthusiasts. But it strikes terror in the hearts of those individuals who are on the fence about opening a new business or expanding an old one. This entire class of potential employers includes, of course, many people who regard themselves as part of the shrinking and beleaguered middle class. To them any restriction on their freedom to set the terms and conditions of their employment is a good reason not to start a business in the first place. Yet it is just policies of this sort that receive legislative enactment, such as the misnamed California Fair Pay Act. These policies pose a serious threat to the vitality of their respective labor markets.

There is, unfortunately, a close relationship between the Fed’s effort to make labor markets whole and the unending array of government initiatives targeted at the labor market. It is easier for the supposed reformers of labor markets to think that their job is to ensure equity so long as it is tacitly understood that the push toward full employment can be handled best by the Fed. The role of the Fed thus makes it possible to debate labor reform measures in terms of their distributional aspirations, given that the institution is supposedly there to set interest rate policies that will return labor markets to the full employment of the post-World War II era.

This sorry combination gets both sides of the equation wrong. There is almost nothing that monetary policy can do to get the cobwebs out of the labor markets. The Fed should stop acting as though its interest rate decisions can do any good. At the same time, everyone must understand that the current stagnation in labor markets is driven by policies to tighten government control. No amount of political rhetoric can conceal the central point about the sound operation of labor markets. The key government role is to reduce the barriers to voluntary contracts in order to maximize economic activity. Tighter regulation pushes in exactly the wrong direction. It’s like feeding sugar to a diabetic. There is little support for deregulating labor markets so we should expect the stagnation to continue for some time, regardless of what the Fed chooses to do. 

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

The Eleventh Annual Friedrich A. von Hayek Lecture


 

“Our Illiberal Administrative Law”

The Honorable Douglas H. Ginsburg, US Court of Appeals for the District of Columbia Circuit

 

Thursday, October 15, 2015, 6:00 p.m.

New York University School of Law

Greenberg Lounge, Vanderbilt Hall, 40 Washington Square South, New York, New York 10012

 

We are pleased to invite you to the Eleventh Annual Friedrich A. von Hayek Lecture, featuring The Honorable Douglas H. Ginsburg, US Court of Appeals for the District of Columbia Circuit.  Judge Ginsburg will deliver the evening’s keynote address titled “Our Illiberal Administrative Law.”  Trevor Morrison, Dean and Eric M. and Laurie B. Roth Professor of Law, NYU Law, and Richard Epstein, Laurence A. Tisch Professor of Law, NYU Law, will make introductory remarks. 

 

The event is jointly sponsored by the Classical Liberal Institute at NYU Law and the New York University Journal of Law and Liberty and will be held on Thursday, October 15, 2015 from 6:00 to 7:45 p.m. in Vanderbilt Hall, Greenberg Lounge, located at 40 Washington Square South.  A reception will immediately follow the lecture.

 

This event has been approved for 1.5 New York State CLE credits in the Areas of Professional Practice category.  It will be appropriate for both experienced and new attorneys (those admitted to the New York Bar for less than two years) and is presented in traditional (in person) format.

 

If you would like to take this opportunity to register online, please click here or copy and paste the link below:

 

https://nyu.qualtrics.com/jfe/form/SV_9FeUFElfKi8fCgl

 

Senior Circuit Judge Douglas Ginsburg was appointed to the United States Court of Appeals for the District of Columbia Circuit in 1986; he served as Chief Judge from 2001 to 2008.  After receiving his B.S. from Cornell University in 1970, and his J.D. from the University of Chicago Law School in 1973, he clerked for Judge Carl McGowan on the D.C. Circuit and Justice Thurgood Marshall on the United States Supreme Court.  Thereafter, Judge Ginsburg was a professor at the Harvard Law School, the Deputy Assistant and then Assistant Attorney General for the Antitrust Division of the Department of Justice, as well as the Administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget.  Concurrent with his service as a federal judge, Judge Ginsburg has taught at the University of Chicago Law School and the New York University School of Law.  Judge Ginsburg is currently a Professor of Law at the George Mason University School of Law, and a visiting professor at University College London, Faculty of Laws.

 

Judge Ginsburg is the Chairman of the International Advisory Board of the Global Antitrust Institute at the Law and Economics Center of George Mason University School of Law. He also serves on the Advisory Boards of: Competition Policy International; the Harvard Journal of Law and Public Policy; the Journal of Competition Law and Economics; the Journal of Law, Economics and Policy; the Supreme Court Economic Review; the University of Chicago Law Review; The New York University Journal of Law and Liberty; and, at University College London, both the Center for Law, Economics and Society and the Jevons Institute for Competition Law and Economics. 

 

As is the custom with the Hayek lectures, Judge Ginsburg’s lecture will be published in the New York University Journal of Law and Liberty.  The Hayek lecture series has addressed many different topics since its inception, but it remains true to its mission: to challenge audiences to help shape a better world.

 

If you have any questions, please contact Jennifer Canose, Assistant Director, Classical Liberal Institute, at jennifer.canose@nyu.edu.

Filtering the Clean Water Act

Richard Epstein*

On August 27, the Environmental Protection Agency and the Army Corps of Engineers suffered a rare judicial setback. On that date, District Judge Ralph Erickson of North Dakota issued a preliminary injunction that blocked the enforcement of the joint “Clean Water Rule: Definition of Waters of the United States,” which was supposed to go into effect the next day. This decision limits the power of the EPA under the Clean Water Act to expand its jurisdiction by fanciful readings of the statutory term “navigable waters,” which the statute defines as “the waters of the United States.”

Passed in 1972, the CWA sought to “restore and maintain the chemical, physical, and biological integrity of the Nation’s waters.” Once these waters are identified, they are subject to extensive regulation, most notably under Section 404 of the CWA, which requires all private parties and local governments to obtain permits before they can engage in any activity that has some impact upon these navigable waters. Obtaining these permits is never easy, for private and public parties alike.

Oftentimes county and local governments, which do not have the luxury of inaction, find themselves in an awkward position where their own routine maintenance work to clean out waste and debris from ditches can be delayed by permitting requirements. The result is that they can easily find themselves in a double bind. Act, and there are serious penalties to pay. Don’t act, and there is flooding and potential liability for harms attributable to their neglect of basic duties.

Any decision by the EPA and the Corps to expand the scope of their activities will, as the American Farm Bureau points out in painful detail, impose onerous permit requirements on literally thousands of small ditches, often on a case-by-case basis. The same concerns are raised by the expanded definition of what counts as a tributary of a river, or in some instances a tributary of a tributary. None of these definitional anxieties are eased by the constant EPA refrain that its object is to “clarify and simplify implementation of the Act,” with bright line rules no less. In reality, the new rule is replete with areas in which case-by-case determinations need to be made on whether, for example, low-lying farm areas are covered by the CWA.

To see how far the current disputes have moved from the 1972 baseline, it is instructive to go back to some basics. The key statutory definition under the CWA is keyed to “navigable waters,” which in turn are defined as “the waters of the United States, including the territorial seas.” The CWA then makes it unlawful for any person to discharge any pollutant, broadly defined to include rock, sand, and dirt, into these navigable waters. The reference to navigation and the territorial seas makes it clear that the reach of the statute is limited to discharges into large bodies of water, where navigation is possible, if not just by steam ships then at least by canoes.

For many years, the definition of the waters of the United States received that traditional meaning. For major bodies, it makes sense to require a permit before discharging refuse or waste into a river, as lawsuits after the fact are a poor substitute for avoiding pollution in the first place. Permits that limit planned discharges are an effective way to organize pollution control.

That traditional definition, however, soon came under a judicial attack that led to its total transformation. In its 1975 decision in Natural Resources Defense Council v. Callaway, a federal district court issued a one-page opinion that simply declared that Congress by using the phrase “‘the waters of the United States, including the territorial seas,’ asserted federal jurisdiction over the nation’s waters to the maximum extent permissible under the Commerce Clause of the Constitution. Accordingly, as used in the Water Act, the term is not limited to the traditional tests of navigability.”

At this point, a court order held the CWA hostage to the vast expansion of the commerce power inaugurated by the New Deal, which covered much more than interstate transportation and communication. Now, all productive activity within the United States, including agriculture, mining, and manufacturing, fell under federal power. But Callaway offered no explanation as to why that jurisprudence had to be crammed into a statutory definition that works its way back to the 1899 Rivers and Harbors Act, passed when federal commerce power jurisdiction was far more limited.

Nonetheless, this one judicial decision started a massive expansion of the scope of the CWA, as the Army Corps speedily went about making sure that anything that was water related was in fact subject to the permitting process of the CWA. When the stoked up version of the regulations reached the Supreme Court in its 1985 decision in United States v. Riverside Bayview Homes, the new definition of navigable waters included all wetlands, which were in turn “those areas that are inundated or saturated by surface or ground water at a frequency and duration sufficient to support, and that under normal circumstances do support, a prevalence of vegetation typically adapted for life in saturated soil conditions.”

The regulation was unanimously upheld in a strongly statist opinion by Justice White. He first concluded that there was no need to give a narrow definition to the waters of the United States to avoid a potential problem that the government action might take private property without just compensation, and he then further concluded that it was an “easy” decision to conclude that the Corps had acted well within its regulatory powers.

Spurred on by that decision, the Corps continued to expand its jurisdiction. The only modest setback along the way was in the 2001 Supreme Court decision in Solid Waste Agency of Northern Cook County v. U.S. Army Corps of Engineers, where the Court balked at letting the Corps impose its “migratory bird rule,” by which the Corps sought to control intrastate waters “which are or would be used as habitat” by migratory birds.

But that one victory did not slow down the rest of the Corps’ operations. The massive nature of this new regulation is made plain in the introductory paragraph of Justice Antonin Scalia’s 2006 plurality opinion in Rapanos v. United States:

In April 1989, petitioner John A. Rapanos backfilled wetlands on a parcel of land in Michigan that he owned and sought to develop. This parcel included 54 acres of land with sometimes-saturated soil conditions. The nearest body of navigable water was 11 to 20 miles away. Regulators had informed Mr. Rapanos that his saturated fields were “waters of the United States,” that could not be filled without a permit. Twelve years of criminal and civil litigation ensued.

Justice Scalia was right to think that the case itself represented a massive and unwarranted expansion of government power under the CWA. But his opinion only carried four votes. Four dissenters led by Justices Breyer and Stevens took the position that Congress had indeed exercised its maximal powers over the waters of the United States so that the entire matter was best left in administrative hands. Thus, as is often the case, the decisive vote was cast by Justice Kennedy, who sought to split the difference by deciding that the proper definition required that “to constitute ‘navigable waters’ under the Act, a water or wetland must possess a ‘significant nexus’ to waters that are or were navigable in fact or that could reasonably be so made.”

Justice Kennedy’s wobbly position duly became the applicable standard. But just what does it mean? Justice Kennedy himself did not know, so he punted the entire matter back to the lower courts to figure out exactly what his “significant nexus” test meant. It does not take the benefit of hindsight to realize the fatal mistake in that decision. Jurisdictional rules have to be clear, and the significant nexus test is a pure matter of degree. That test might have had some bite if Justice Kennedy had said that the Corps did not come close to meeting the standard in Rapanos. But his remand signals the exact opposite conclusion, that significant in law could turn out to be rather puny in practice.

Note that in claiming permit power, the Corps did not make any claim that Rapanos’s activities actually had any perceptible negative effect, real or anticipated, on the navigable waters of the United States. What the permitting process therefore did was to put the burden on Rapanos to try to prove the negative in a setting in which there is at most a de minimis likelihood that filling in dirt could result in damage to navigable waters located long distances away. At this point, the systematic mistake of a CWA on steroids is that it alters for the worse the standard common law rule that in private disputes an injunction against certain activities will be issued only on a showing of actual or imminent harm, at which point the activity is stopped until the situation is corrected. There is little unnecessary wastage under this rule, and no real loss in environmental protection.

Unfortunately, after Rapanos, the permit process was quickly untethered from any actual pollution. Now permits could turn on how an agency, as Justice Scalia notes, views the “economics,” “aesthetics,” or “recreation” of a proposal, and “in general, the needs and welfare of the people.” These standards are so fluid that virtually any result is consistent with them, and Kennedy’s mysterious requirement of some significant nexus does little in practice to hem in that discretion.

The great vice of the CWA permit system is its overkill: these exhaustive requirements are imposed in countless small cases where there is no risk of any harm. The result is a huge addition to administrative costs, and a delay in various projects. Nonetheless, it is this mindset that the EPA and the Corps seek to use to expand their power, without so much as offering a word of explanation as to just how much benefit can be obtained from this massive investment of private and public funds in the permitting system.

In light of the long history of deference to administrative agencies, it was somewhat of a surprise that Judge Erickson decided to put some teeth back into the significant nexus test. But in this instance, Erickson had enough ammunition to support the conclusion that the expanded definitions could not achieve their stated purpose, and that the rule therefore should be struck down as “arbitrary and capricious,” a tough standard under current administrative law. He took especial umbrage at the proposed rules that targeted “intermittent and remote wetlands” without showing their nexus to navigable waters of the United States, and further challenged a rule under which waters that are located within a 100-year floodplain or “within 4,000 feet of the high tide line or ordinary high water mark of a traditional navigable water, interstate water, the territorial seas, impoundment, or covered tributary” will require a “case-specific determination of a significant nexus.” So much for those clear and objective EPA standards.

The great tragedy of this case is that the litigation battles would never have taken place if the basic constitutional and statutory framework had not been twisted beyond recognition by earlier decisions. But under current law, litigating regulations like the CWA definitions is an exercise in the theater of the absurd. Let us hope that a modicum of common sense remains so that the EPA and Corps are sent back to craft a rule that makes economic and environmental sense.

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

How Democrats Stifle Labor Markets

Richard Epstein*

The National Labor Relations Act of 1935 (NLRA) introduced a major revolution in labor law in the United States. Its reverberations are still acutely felt today, especially after the recent, ill-thought-out decision in the matter of Browning Ferris. There, the three Democratic members of the National Labor Relations Board overturned well-established law over the fierce dissent of its two Republican members. If allowed to stand, this decision could reshape the face of American labor law for the worse by the simple expedient of giving a broad definition to the statutory term “employer.”

Right now, that term covers firms that hire their own workers, and the NLRB subjects those firms to the collective bargaining obligations under the NLRA. Under its new definition of employer, the NLRB majority expands that term to cover any firm that outsources the hiring and management of employees to a second firm, over which it retains some oversight function. In its decision, the NLRB refers to such firms and those to whom they outsource the hiring as “joint employers.”

Just that happened when a Browning Ferris subsidiary contracted out some of its recycling work to an independent business, Leadpoint. Under traditional labor law, Browning Ferris would not be considered the “employer” of Leadpoint’s employees—but the Board’s decision overturns that traditional definition. No longer, its majority says, must the employer’s control be exercised “directly and immediately.” Now “control exercised indirectly—such as through an intermediary—may establish joint-employer status.”

By this one move, the Board ensures that unions will now have multiple targets for their organizing efforts. A union can sue the usual employer who hires and fires, and it may well be able to sue one or more independent firms who have outsourced some of their work to that firm. The exact standards by which this is done are not easy to determine in the abstract. Instead, the new rules depend on some case-by-case assessment of the role that the second firm has in setting the parameters for hiring workers, determining their compensation, and supervising their work.

There is a quiet irony in this ill-considered transformation. The Democratic majority protests that it is only applying “common law,” i.e., judge-made definitions of employers and employees, to fill out the structure of the NLRA, which explicitly repudiates every key move in the common law of labor relations.

The analysis of the term “joint employer” starts with the NLRA’s definition of an employer, which reads: “The term ‘employer’ includes any person acting as an agent of an employer, directly or indirectly,” and then tacks on a list of exclusions including the United States and various state governments. The complementary definition of an “employee” appears in NLRA section 2(3), which, with largely irrelevant qualifications, states none too helpfully: “The term ‘employee’ shall include any employee. . . .”

The NLRA makes no effort to define joint employers of a single employee. Nor does it contain any indication that this category should occupy a large space in the overall analysis, even though Browning Ferris opens up the possibility that a very large fraction of the labor force has multiple employers.

To reach this conclusion the Democratic majority notes that the common law definition of an employee is the benchmark for its statutory choice. Yet the common law approach to labor relations was historically the antithesis of the creaky administrative machinery mandated by the NLRA. Under the common law rules, there were no limitations as to the bargain that could be struck between the employer and the worker. All contractual provisions relating to wages and the conditions of work were left for the parties to decide, as was the case with respect to the duration of the arrangement. In general, most employment contracts were at will, which meant that the employer could fire the employee for a good reason, a bad reason, or no reason at all, just as all employees could quit for a good reason, a bad reason, or no reason at all.

To the naïve, this system of contractual freedom often looks as though it is an open invitation for labor market instability, but in fact it has proved exactly the opposite. The flexibility over contractual duration and terms keeps both sides in line, and thus adds immeasurably to the overall productivity of human capital.

The NLRA, passed 80 years ago, has posed a serious threat to the productivity of labor markets from day one. Its basic structure imposes a duty on each employer to bargain collectively with a union that has been selected by a majority of workers within a statutorily defined bargaining unit. That bargaining system blocks the constant short-term adjustments always needed in response to changing conditions in either labor or product markets by imposing a rigid set of restrictions on any unilateral contract changes offered to workers on a take-it-or-leave-it basis. These proposed contract changes are deemed unfair labor practices and prohibited.

The NLRA’s top-heavy labor agreements impose onerous work rules on once free businesses that hurt workers as well as employers in an actively moving marketplace, and these agreements leave unionized firms at a serious competitive disadvantage with their more nimble competitors. Over the last 60 years, even with little or no change in the substantive law, the level of unionization in the private sector has declined rapidly, from a peak of about 35 percent in 1954, to under 7 percent today. Time has only exposed the defects of a statute flawed from its creation at the height of the New Deal.

The NLRB majority’s effort to cover “joint employers” is clearly an attempt to breathe life into a moribund labor movement, by making hash out of the common law rules on which it purports to rely. From the beginning, the labor law had to ask which individual workers were employees of the firm, subject to unionization, and which were independent contractors who were not.

In the 1944 case of NLRB v. Hearst, the Supreme Court held that individual “newsboys” who sold Hearst papers were to be treated as employees for the purposes of the labor statute, even though their contracts with Hearst had designated them as independent contractors. The majority of the current Board relies on Hearst for its willingness to apply the statutory definitions “broadly . . . by underlying economic facts.” But that majority conspicuously neglects to mention that Hearst was in fact repudiated in the Taft-Hartley Act of 1947, which excluded (most imperfectly) “any individual having the status of an independent contractor,” without giving a workable definition of who those individuals are.

Nonetheless, the Hearst case did have one critical feature that the NLRB’s majority ignored. In Hearst, the only question was whether ordinary workers should be treated as an employee or an independent contractor. There was, however, no third party involved, and hence no issue of whether two or more firms should be treated as joint employers of any individual worker. Exactly the same point holds in the 1968 decision in NLRB v. United Insurance Co. of America, relied on by the majority, where the Supreme Court held that individual “debit agents” of the defendant insurance company were in fact its employees.

It should be evident why United Insurance Co. of America might well be correct. The whole point of the NLRA is to protect the right of employees to unionize. If an employer could redescribe individual employees as independent contractors, the basic protections of the NLRA (however ill-advised in principle) could be circumvented by a simple labeling exercise, without making a difference in the day-to-day operation of the overall business.

The Democratic majority also relied heavily on Section 220 of the Restatement of Agency, which deals with the classification of certain workers as independent contractors or employees. In dealing with this issue, the NLRB majority discusses the ten factors that the Restatement invokes to answer this question, without noting two gaps in its argument. The first is that each factor is directed toward individual workers and a single employer, without reference to any joint employer situation. How else to explain the one factor that observes “(e) whether the employer or the workman supplies the instrumentalities, tools, and the place of work for the person doing the work; . . .?” The Restatement also asks about the distinct nature of the worker’s occupation, the duration of the relationship, and the kind of day-to-day supervision, none of which are relevant to the joint employer question.

Second, this lengthy inquiry arose when the at-will rule at common law let the parties cut whatever deal they saw fit. What the NLRB majority forgot to do was to look at the material that immediately preceded section 220 in Chapter 7, which begins: LIABILITY OF PRINCIPAL TO THIRD PERSONS TORTS. Section 219 is then headed: “When Master Is Liable for Torts of His Servants.” At this point, the punch line should be clear. The independent contractor question did not govern the relationship between the parties, but arose to make sure that the common law employer did not escape liability for the torts committed by his servants against strangers by labeling them as independent contractors. But as between the parties, the question of categories doesn’t matter—at least in the absence of regulation.

Today of course, the argument is that the law has to look over this arrangement to see that other statutory obligations imposed on employers are not breached. It was for that reason that the California Commission in Berwick v. Uber Technologies reverted to the common law definition of an independent contractor to tackle the question in the most inappropriate way possible while determining case-by-case which Uber drivers were employees and which were independent contractors. The same issue arises when employers try to classify part-time individual workers as independent contractors to avoid various statutory obligations on family leave, sick pay, overtime, and the like.

None of those issues is relevant here, where the correct inquiry asks whether the joint employer rules will disrupt the settled historical pattern of collective bargaining. The NLRB majority made a passing effort to justify its decision by quoting some government statistic that indicated an increase in the number of “contingent” and “temporary” employees as of 2005 to about 4.1 percent of employment, or 5 million workers. But that factoid reveals nothing about the efficiency of the proposed modifications to the collective bargaining system.

On that question, the new joint employer rules will likely batter today’s already grim labor market, as they will not only disrupt the traditional workplace but will completely wreck the well-established franchise model for restaurants and hotels. As the majority conceded, the so-called joint employer does not even know so much as the social security number of its ostensible employees. It has no direct control over the way in which the current employer treats its workers, and yet could be hauled into court for its alleged unfair labor practices. That second firm knows little or nothing about the conditions on the ground in the many businesses with which it has forged these alliances, a division of labor that eases the operations of both. Those advantages will be lost if the joint employer rule holds up in court. At the very least, the majority’s decision would require each and every one of these contracts and business relationships to be reworked to handle the huge new burden that will come as a matter of course, leaving everyone but the union worse off than before.

It would be one thing, perhaps, if the majority saw the light at the end of the tunnel. But over and over again it disclaims any grand pronouncements, making the legal question of who counts as an employer a work in progress that will not be finished any time soon. Against this background it is irresponsible to undo the current relationships by a party-line vote. That point should also be clear to the courts and to Congress. The quicker this unfortunate decision is scrubbed from the law books, the better.

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.

The Truth About Campaign Finance Reform

Richard Epstein*

One regrettable feature of modern politics is that presidential campaigns now run for the better part of two years, which gives ample time for all sorts of crackpot ideas to make their way to center stage. It is a sign of the times that Bernie Sanders has made enormous headway on the Democratic side of the ledger, taking away attention from Hillary Clinton, whose misguided reform agenda has been overshadowed by the federal investigations and inquiries into her private email server when serving as Secretary of State. On the Republican side, a boorish know-nothing, Donald Trump, has surged to the lead, giving the mistaken impression that the conservative voters of this country are concerned with little more than “white identity politics.”

But more curious still is the incipient presidential campaign of Harvard Law Professor Lawrence L. Lessig, who has taken it upon himself to become the hero of the common man, by seeking to amass one million dollars by Labor Day to project himself into the election on the singular pledge that, if elected, he will leave office once his major campaign reform has been put into place, yielding the presidency to some Vice President like Bernie Sanders, Hillary Clinton, or Elizabeth Warren.

The centerpiece of the Lessig program is a radical reform in electoral politics, which is intended to make sure that the nameless rich are no longer allowed to “rig the system” in their favor. The sole item on his quixotic campaign is a motley collection of legislative reforms that he bundles into a single package called the Citizens Equality Act of 2017. His calling card reads as follows:

EQUAL RIGHT TO VOTE

We must have a system that guarantees a meaningfully equal freedom to vote. To achieve that, we must at a minimum enact the Voting Rights Advancement Act of 2015 and the Voter Empowerment Act of 2015. We should as well add automatic registration, and shift election day to a national holiday.

I shall pass by the vices of automatic registration to concentrate on two components of Lessig’s plan that are now gathering dust in Congress: the Voting Rights Advancement Act of 2015, and the Voter Empowerment Act of 2015. Not surprisingly, govtrack.us has the same prognosis for both statutes: zero percent chance of passage.

For Lessig, the merits of his legislation are self-evident. The Voting Rights Advancement Act, for example, calls for the introduction of elaborate preclearance provisions on a wide range of “covered practices” that are said to impair the right to vote. It is clearly an effort to resuscitate in reinvigorated form the preclearance of Section 5 of the 1965 Voting Rights Act, which was struck down in Shelby County v. Holder, for the simple reason that the reality of racial exclusion in the South (and for that matter, the North) has changed so markedly in the last 50 years that the turnout rate for black voters is higher than it is for whites. For someone interested in radical reform to clean up electoral politics, it seems odd to support a new and onerous set of preclearance procedures whose primary effect will be to slow down electoral politics and to allow eager Democratic officials to harass local election officials.

To get a flavor of what is afoot, it is sufficient to look at one key provision of the 2015 Advancement Act that is intended to expand the number of situations under which the preclearance procedures of the 1965 Voting Rights Act can be invoked. That bill would first expand the definition of violation to cover not only violations of the Fourteenth and Fifteenth Amendments, but also “violations of this Act, or violations of any Federal law that prohibits discrimination in voting on the basis of race, color, or membership in a language minority group.” To make matters worse, it targets for coverage any state in which “(i) 15 or more voting rights violations occurred in the State during the previous 25 calendar years; or (ii) 10 or more voting rights violations occurred in the State during the previous 25 calendar years, at least one of which was committed by the State itself.”

This appalling provision in effect guarantees that large numbers of states will necessarily be subject to future regulation of their conduct based on actions completed long before the passage of the legislation. It takes little imagination for any expert in the field to tally those states that are already caught within the web, and then to find, under the Act itself, new violations, which will allow the noose to be kept in place for the indefinite future. There is no allowance for differences in the size of the state, for the severity of the violations, or for trends in state or local behavior. The impact of these provisions on legislative outcomes is obscure at best. Why anyone thinks that the robust enforcement of this misguided legislation will do anything other than inflame electoral politics is a mystery to me.

The situation on the ground is in fact worse than this brief analysis suggests. The first point to note is the inherent dangers in all forms of modern democratic politics. As James Madison wrote long ago in Federalist Number 10, the dangers of special interests (what he called “factions”) are everywhere. In some cases, it is an outraged majority that can take advantage of an embattled minority. Sometimes it is a clever minority that can use its influence to collect power and wealth from the disorganized majority. The Lessig narrative is that the system is rigged against the ordinary person. There is no mention that the organized majority, or more accurately, a huge panoply of activist groups, has taken its pound of flesh from businesses and from people of high income, many of whom have fueled the innovation and advances from which the public at large benefits.

There is no easy way to keep score on whose political intrigue matters most, but there is little doubt that the progressive forces in the Obama administration have made the redress of inequality a central theme of their policymaking. This has translated into slow growth brought on by higher taxation and more extensive regulation of that heterogeneous group, known as the rich, whose members are often in conflict with each other.

In my view, any effort to concentrate exclusively on general electoral issues obscures the kind of fundamental reforms that are really needed to get this country back on track. Here are two key proposals needed to reverse the long-term national decline.

First, we need to cut back on the overall scope of federal powers of regulation and taxation. The current laws, as exemplified in the ObamaCare decision, NFIB v. Sebelius, give the government virtually unlimited powers of regulation and taxation over the economy as a whole. This current understanding of the law is at sharp variance with the original, classical liberal constitutional structure, which recognized the serious perils of an inordinate concentration of power. The level of discretion enjoyed by government officials at all levels is a magnet to anyone with a partisan political agenda. The amount of money spent on campaigns at all levels reflects the fact that our current structures make it possible to use legislation to gain advantages from other groups, without having to go through the bother of selling goods and services that other people want to buy.

This simple truth gives rise to the second yawning defect in our current governance structures. The need for influence is felt not only in electoral politics. It also extends to the way the various committees inside each house of Congress operate; these committees have to pass on particular bills that come before them. Thereafter, we can count on a huge amount of slippage between the passage of any piece of legislation and its administration. Under current administrative law, the courts apply the so-called Chevron doctrine to give enormous discretion to administrators who decide how to implement the oft-vague guidelines of particular statutory provisions. And once those regulations or other directives are adopted, they can often be altered at will by the next president and his team of administrators, leading to serious discontinuities in the operation of the law.

These huge flip-flops in legal interpretation carry with them troubling political implications. Power abhors a vacuum, which means that any interest group that has not been able to carry its way in the electoral process will mobilize its resources to target particular rules in particular agencies that matter most to its operation. And contrary to Lessig’s childish populist story, lobbying and logrolling are a game that liberal activist groups can play every bit as skillfully as their conservative rivals. Furthermore, many corporations and individuals give to both parties to cover their bases.

One vital administrative law reform needed is to break down the ability of agencies to adopt bad rules that are often at war with the statutory authority under which they act. And in this regard, the most notable extensions of that authority have come from the Obama administration, with its eagerness to push the envelope on executive power on a wide range of issues from immigration reform to environmental, labor, securities, communications law, and much more. It is all too often the case that business interests are forced to lobby, not in an effort to gain some special favor, but to resist some law or regulation that threatens to drive successful firms, like Uber, out of business.

In dealing with these issues, the populist agenda only aggravates the underlying problem. What it first does is confirm the vast power of government to control all essential matters. The greater the political power, the larger the role of money in politics. Well aware of this, the progressive agenda championed by Lessig and others works overtime to rig the system (yes, it is proper to use their word against them) and then to introduce a set of electoral reforms whose disparate impact will help them cement their own power.

Thus one of many approaches would put sharp limitations on contributions made to PACs, but do nothing to restrain the ability of people to work in electoral campaigns, which would give a huge advantage to labor unions—the most regressive political force in America—against those people with the ability (on all sides of the political spectrum) to use their dollars to obtain political influence. It is for this reason that the progressive movement is so vehement in its opposition to Citizens United v. Federal Election Commission. That decision gives small government forces the means to oppose its agenda—which is the opposite of what Lessig and his fellow progressive supporters want, but precisely what this country needs to counter the growth of government and all of the problems that come with it. 

*Considered one of the most influential thinkers in legal academia, Richard Epstein is known for his research and writings on a broad range of constitutional, economic, historical, and philosophical subjects.