
Dynamic Scoring: Caveat Emptor

Jun 21, 2016 | Economy, Taxes

In recent years, there has been a push at both the federal and state levels—coming primarily from conservatives—to adopt “dynamic scoring” to evaluate the impact of changes in tax and fiscal policies. Dynamic scoring attempts to predict the effects of fiscal policy changes while taking into account their anticipated economic repercussions. It is an alternative to the more traditional “static scoring,” which focuses on the more immediate effects of tax changes without taking into account longer-term macroeconomic consequences.

For example, static scoring of an income tax rate reduction measures the immediate revenue loss. It might also anticipate a partially offsetting revenue gain as taxpayers invest less money in tax-exempt bonds, which become less attractive in light of the lower income tax rates, but that’s about as far as the analysis goes.* Dynamic scoring, on the other hand, attempts to determine the impact of the tax reduction on consumption, job creation, and investment, and the extent to which each of these would offset the initial revenue loss from the rate reduction.
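
To make the distinction concrete, here is a minimal illustrative sketch in Python. All of the figures, and the names static_score, dynamic_score, and feedback, are hypothetical placeholders for this post, not estimates or terminology from DOR, the CBO, or any actual scoring model; the feedback factor simply stands in for whatever macroeconomic assumptions a given dynamic model embeds.

```python
# Illustrative only: every figure and parameter below is hypothetical,
# not an actual estimate from any revenue department or scoring model.

def static_score(tax_base, rate_cut):
    """Immediate revenue change from a rate cut, ignoring macro feedback."""
    return -tax_base * rate_cut

def dynamic_score(tax_base, rate_cut, feedback):
    """Revenue change after applying an assumed macroeconomic feedback
    factor (0 = no offset, 1 = the cut fully pays for itself)."""
    return static_score(tax_base, rate_cut) * (1 - feedback)

TAX_BASE = 100_000_000_000   # hypothetical taxable income base ($100B)
RATE_CUT = 0.01              # hypothetical one-percentage-point rate cut

print(f"Static estimate: {static_score(TAX_BASE, RATE_CUT):,.0f}")

# The dynamic estimate depends entirely on the assumed feedback parameter,
# which is the point at issue: different models embed different assumptions.
for feedback in (0.1, 0.3, 0.6):
    print(f"Dynamic estimate (feedback={feedback}): "
          f"{dynamic_score(TAX_BASE, RATE_CUT, feedback):,.0f}")
```

As the loop shows, the static estimate is a single number, while the dynamic estimate swings widely depending on the assumed feedback; that sensitivity to assumptions is the subject of the rest of this post.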

At first blush, dynamic scoring might seem like a good idea; after all, shouldn’t all of the consequences of tax policy decisions be considered? However, there are good reasons why many fiscal analysts charged with forecasting the revenue impact of tax changes are circumspect about—if not leery of—efforts to base budget decisions on dynamic scoring.

George Bernard Shaw observed, “If all economists were laid end to end, they would not reach a conclusion.” There is a particular lack of consensus among economists about the impact of tax changes on labor supply, savings, investment, and consumption, and about the ultimate effect of all of these changes on tax revenue. For example, economists hold contrasting views about the macroeconomic consequences of changes in top-tier income tax rates.

A dynamic scoring model is no better than the assumptions upon which it is based. Given the lack of consensus among economists, it should not be surprising that there is no single dynamic scoring model but rather a multiplicity of models, each based on a separate set of assumptions and each yielding divergent results. For example, in 2003, the Congressional Budget Office analyzed the impact of the Bush tax cuts on the federal deficit over the following decade using not one but nine different models; two models showed a significant improvement in the deficit, while the other seven did not. A 2003 Wall Street Journal article summarized the results:

Some provisions of the president’s plan would speed up the economy; others would slow it down. Using some models, the plan would reduce the budget deficit from what it otherwise would have been; using others, it would widen the deficit.

The particular outcome of each model is less important than the fact that nine different models had to be used, thereby underscoring the lack of consensus regarding the impact of tax cuts. A multiplicity of outcomes—each based on a separate dynamic scoring model—does not provide much meaningful guidance to policymakers.

The track record of dynamic scoring at the state level is no better than at the federal level. In 2011 testimony before the Senate Tax Committee, staff from the Minnesota Department of Revenue (DOR) noted, “In states that have used them, it has been difficult to generate trust in the models (or those who set the model’s parameters).” As a result, dynamic scoring models are generally not used for budget scoring, but only for informational purposes. In states that have experimented with dynamic scoring, more time was spent arguing about which dynamic scoring model to use than debating the merits of specific policies.

The bottom line is that there is no one dynamic scoring model that will point the way to economic truth (or if there is “one true model,” there is no consensus as to which one it is). Rather than providing guidance to policymakers, the multiplicity of models could give policymakers the option to cherry-pick the model that best suits their ideological predispositions regarding tax policy. This is not likely to improve the quality of public decision making.

Another problem with using dynamic scoring models as a basis for budget decisions is that they generally focus on the consequences of tax cuts or tax increases, but ignore the corresponding impact upon the economy of an increase or decrease in public spending. More on this in the second and final installment in this series.

*Occasionally, the Minnesota Department of Revenue (DOR) will incorporate a behavioral response to a tax policy change, but only if there is a “reasonably objective way to measure the magnitude of the behavioral response…” (Quotation from a 2010 DOR memo.) For alcohol and tobacco tax changes, for example, behavioral responses are incorporated into revenue estimates because there is a well-understood relationship between changes in these taxes and the corresponding impact on consumption. In general, however, DOR does not take into account the kind of behavioral changes considered in dynamic scoring models because there is a lack of consensus as to the nature and magnitude of these behavioral changes and their impact on state revenue.

