It’s hard to know if you are heading in the right direction if you don’t know where you are going. In today’s context of fiscal constraint, it is vital to justify expenditures or to choose among competing options by evaluating how much they are moving us, or potentially will move us, towards our goals. This is particularly true for public policy and action.
The Healthy Transportation Compact, a component of the 2008 law creating today’s Massachusetts Department of Transportation, commits the agency to “support healthy transportation…reducing greenhouse gas emissions…improving access to services for persons with mobility limitations, and increasing opportunities for physical activities… increasing bicycle and pedestrian travel…[creating] complete streets for all users…”
The state’s 2008 Global Warming Solutions Act commits MassDOT to significant reductions in greenhouse gas emissions in the transportation sector, which currently produces over a third of the state’s greenhouse gas emissions. The agency’s GreenDOT program goes even further. Its three “primary goals are to reduce greenhouse gas (GHG) emissions; promote the healthy transportation options of walking, bicycling, and public transit; and support smart growth development….[as well as] incorporate sustainability into all of its activities.”
How are we doing? It’s hard to know – despite the fact that one of the requirements written into the Healthy Transportation Compact is to “develop goals…and measure progress.” First, these rather general and abstract goals have to be quantified in both amount and time frame. Second, we need to select indicators that evaluate progress towards those goals, either directly or through some appropriate surrogate. And, third, we need to actually do the measurements and announce the results.
As good educators know, the value of a report card is not just its snapshot of current status but, even more, its ability to motivate and guide future effort – to serve as an annual review’s progress report rather than as an exit interview’s termination agreement.
Picking indicators that serve both purposes requires that they be relevant, available, transparent, actionable, sensitive, educational, benchmarked, and dramatic. In addition to meeting these criteria, indicators should cover three different categories of information: outcomes, enabling factors, and inputs/processes. And within each category, at least some indicators should evaluate possible disparities among key subpopulations.
This post explores these criteria and categories in the context of proposing indicators that might be used to evaluate bicycle and pedestrian travel at the state level. However, the approach should also be valid for examining transportation, public health, community development, and other issues at the local, regional, and national levels.
Of course, it’s not true that the only things that count are those that are counted. Many (if not most) aspects of a good life, and of good communities, are purely subjective, context sensitive, and hard to measure. The personal experience of lived reality is ultimately more important than any statistic.
Still, numbers are useful. They feel objective and understandable – even though their ultimate “truth” is as malleable as the most cryptic prophecy. In any case, given our culture’s current obsession with transparency and accountability, we give great credence to numerical Report Cards. And, fortunately, with some creativity even most qualitative issues can be summarized numerically or on some kind of simple scale.
But what to count? How to measure? What should be our indicators of progress?
INDICATOR CRITERIA & CATEGORIES:
Other than being numeric, a good indicator should have the following characteristics. It’s a long list, which points out how hard it is to find really good indicators. From the most obvious to the most political, an indicator should be:
- Relevant – The indicator should accurately measure something related to the prevalence of the issue or to strategies for dealing with the issue.
- Available – The best data is that which someone else is already collecting, and is likely to keep collecting for a while. (This often turns out to be the biggest obstacle of all. Recently, while creating a “New England Healthy Weight Trends Report” I found that I was unable to find data from all six states for most of the indicators we wished to use – and we quickly learned that trying to collect it ourselves was more complicated and expensive than we could deal with.)
- Transparent – The measuring methods and the computation of the indicator numbers from those measurements should be straightforward and verifiable by others. Formulas and factor weightings can be somewhat arbitrary, but have to be explicit and explained.
- Actionable – The issues and actions should lie within the scope of responsibility of some (potentially) involved organization.
- Sensitive – The measurements should change in response to changes in status or effort.
- Educational – The indicators should focus public, media, professional-staff, and decision-maker attention on both standard best practices and promising innovations.
- Benchmarked – At least some of the indicators should have a target to aim for over a specific time period, with both intermediate and long-term milestones.
- Dramatic – Through their content, packaging, or marketing, the indicators should be designed to catch public and media attention. This also requires keeping the list short – perhaps divided into provocative “Primary Indicators” and an appendix of “Supporting Statistics” broad enough to make it clear the Primary Indicators are representative rather than anomalous.
In addition, a useful Progress Report, like a good project evaluation, measures degree of achievement in at least three categories, with at least some of the indicators in each category selected to reveal possible disparities among key subpopulations such as income level, race, primary language, age, gender, geographic distribution, etc.:
- Outcomes – both ultimate goals and intermediate targets
- Enabling Factors – such as programs, policies, physical structures, public input and decision-making processes
- Inputs & Processes – amount of time, money, or other resources being devoted to the task.
Outcome Indicators measure how far we’ve come towards our goals. Grand, inspirational visions of perfect conditions are a wonderful starting point, and often key to engaging public interest or support, but they are insufficient for program design or evaluation. “Environmental justice” is a powerful slogan, but by itself means very little. Similarly, “no child left behind” is an inspirational vision for school reform, but educational success is dependent on so many variables beyond a school’s control that setting such a utopian goal means that every school in the nation will inevitably fail (which may have been one of the Bush Administration’s motivations for proposing the original idea, since in most of the country penalties for failure only apply to public schools and not the private schools conservatives prefer).
Just as in science an untestable theory is a useless conjecture, in public life a non-measurable goal is a meaningless promise. To be meaningfully evaluated, an abstract goal must be turned into concrete outcome descriptions, with measurable outcomes over various time periods – including (especially if it is politically impossible to endorse less-than-utopian ultimate goals) partial goals or targets to aim for in the short- and medium-term. Often, getting from vision to specifics requires several steps, with the test of success being whether the process ends up with a do-able action statement whose implementation can actually be measured.
For example, Massachusetts’ transportation goals fall into the too-abstract category. Even the greenhouse gas reductions, which are the most concrete of the bunch, leave much unsaid. But they are a good starting point, needing only that we make their implications explicit. It wouldn’t be too hard to create a set of goals describing a series of 3-year increases in the percentage of trips taken via transit or bicycle by various groups of people for various purposes – the “mode split” for commuting, doing errands, socializing. Similarly, a set of goals could be developed describing a series of 3-year decreases in the average amount of non-renewable energy used per mile (or per trip) by various types of vehicles used in the state. Or a set of goals could be developed describing a series of 3-year changes in the patterns of indirect costs to society of our transportation sector, including environmental, climate volatility, public health, land use, and more.
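The arithmetic behind a mode-split indicator is simple: count trips by mode and express each mode as a share of the total. As a minimal sketch – the trip records, field names, and mode categories below are invented for illustration, not drawn from any actual MassDOT dataset:

```python
from collections import Counter

def mode_split(trips):
    """Return each travel mode's share of total trips, as a percentage."""
    counts = Counter(trip["mode"] for trip in trips)
    total = sum(counts.values())
    return {mode: 100 * n / total for mode, n in counts.items()}

# Hypothetical trip survey records.
trips = [
    {"mode": "car", "purpose": "commute"},
    {"mode": "transit", "purpose": "commute"},
    {"mode": "bicycle", "purpose": "errand"},
    {"mode": "car", "purpose": "social"},
]

print(mode_split(trips))  # {'car': 50.0, 'transit': 25.0, 'bicycle': 25.0}
```

Tracking this number for the same population and trip purposes over successive 3-year periods is what turns the abstract goal into a measurable one.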
Transportation is also deeply connected to economic opportunity, another general term that needs to be operationalized before progress can be evaluated. According to the Brookings Institution report, Missed Opportunity: Transit and Jobs in Metropolitan America, only about 30% of U.S. jobs are readily accessible to the typical metropolitan commuter who depends on transit services to get to work. So another set of goals could be developed describing changes in the percentage of new (or total) jobs accessible to low-income areas by transit.
Outside of Harry Potter, it takes more than waving a wand and muttering a spell to make things happen. Getting from “here” to “there” is always more complicated than we anticipate.
Enabling Factors can be either direct or indirect. Things that directly affect our ability to move towards a goal include structures (e.g. a bike lane), policies (e.g. complete streets), programs (including bike skills in elementary school physical education classes), and processes (e.g. including enough opportunities for public or advocacy input into roadway designs).
But indirect factors can also impact our likelihood of success. It may seem obvious that a zoning provision requiring bicycle parking spaces in commercial buildings will increase the likelihood that employees will bike to work. But transportation is also intimately related to land use, as proponents of “transit oriented development” already know. So “form-based” or other kinds of zoning that allow, or fiscal policies that encourage, mixed-use development will increase the amount of local shopping – along with walking and cycling. Similarly, increasing population density is a prerequisite for mass transit, so the spread of financial incentives for “smart growth” that promotes “village centers” in suburban towns will also lay the foundation for improved regional bus service – as well as the likelihood that people will get around by walking or cycling rather than driving.
For example, if the goal is to increase the number of times people walk to the local store instead of drive, then we need to make sure that there are solid sidewalks, safe intersections, and an inviting surrounding environment. And to get those built, policies have to be in place allowing – or maybe even requiring – their presence: complete streets, zoning, etc. We also need policies creating a design-review process that gives the public – including advocates for walking, bicycling, and wheelchair users – meaningful opportunities for input.
In some cases, an Enabling Factor evaluation is simply a “yes/no” choice. But it is more revealing to dig below the mere existence of a policy or program to measure its impact. For example, it is one thing to know if the state has an official Complete Streets policy. It’s another to track what percentage of recent road repair/upgrade projects, or perhaps what percentage of the state’s entire street inventory, have sidewalks and bike facilities. Similarly, it’s good to know if traffic calming strategies are routinely included in road designs; it’s even better to know what percentage of residential and commercial road projects incorporate those strategies.
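Moving from a yes/no check to an impact measure can be as simple as counting how many projects actually include a feature. A minimal sketch – the project records and field names here are hypothetical, made up to illustrate the calculation:

```python
# Hypothetical road-project records with yes/no feature flags.
projects = [
    {"name": "Elm St resurfacing", "sidewalks": True, "bike_lane": True},
    {"name": "Route 9 repair", "sidewalks": True, "bike_lane": False},
    {"name": "Main St upgrade", "sidewalks": False, "bike_lane": False},
]

def pct_with(projects, feature):
    """Percentage of projects whose record flags the given feature."""
    hits = sum(1 for p in projects if p[feature])
    return 100 * hits / len(projects)

print(f"Sidewalks: {pct_with(projects, 'sidewalks'):.0f}%")   # Sidewalks: 67%
print(f"Bike lanes: {pct_with(projects, 'bike_lane'):.0f}%")  # Bike lanes: 33%
```

The same calculation works whether the denominator is recent projects or the state’s entire street inventory; what matters is that the indicator reports a rate of adoption rather than the bare existence of a policy.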
While the people responsible for a program or achieving a goal need to pay attention to every Enabling Factor, a Report Card containing them all will be both overwhelming and boring to most other people. So a good Progress Report focuses on those factors with the greatest impact.
INPUTS & PROCESSES:
Inputs are often the easiest to measure and the most difficult to get approved – the amount of money being spent, the number of people-hours being devoted to the effort, the acres of land, number of parking places, or other physical resources being used.
Of course, everyone hates “process” – the endless meetings where everyone needs to say their piece. There are times when the Chinese method of “benevolent dictatorship” seems very appealing. But that has its own downsides, as our media are quick to point out. Still, it is important to track the project-selection and implementation decision-making process, including all the provisions for public input. What are the steps needed to move from proposal to action? And has a follow-up analysis been made to see if the original goal has actually been achieved: are more people now walking to the local stores?
As with the Enabling Factors, while some people need to be aware of every step in the process and understand the flow of dependencies from one to the next, a Report Card should focus on the “critical path” – the steps that have to be in place for others to proceed. In the sidewalk example, the critical path might be the money trail – securing local government approval, followed by identifying possible funding, followed by securing the funding, followed by a contract-approval process that builds in good-design and construction safeguards, followed by a final analysis of the success of the program in meeting the original goals at the anticipated level of expenditure.
Previous posts on related topics include: