From the Law of Unintended Consequences to Goal-Free Evaluation

If you want to live a happy life, tie it to a goal, not to people or things. ― Albert Einstein

Goals are important in life. They’re universally accepted as instrumental to social, educational, economic, public health, leadership, and many other kinds of progress. That’s not to say that every goal has value, or that we all set and pursue them. But when goals do exist, they can be motivating.

Although goals can be set and abandoned (as they often are with New Year’s resolutions), when there is follow-through the results can be powerful. Consider President John F. Kennedy’s ambitious 1961 goal of putting Americans on the moon and returning them safely by the end of that decade. On July 20, 1969, the Apollo 11 mission did just that. Comparing Kennedy’s goal with the outcome, there is no conclusion other than that the goal was achieved.

But the pursuit and achievement of goals can also produce side effects that were never intended. That’s particularly true of goals in the social realm. In 1936, the American sociologist Robert K. Merton published his famous study, The Unanticipated Consequences of Purposive Social Action, from which grew the popular meme, the ‘law of unintended consequences’. He made the important point that “unforeseen consequences should not be identified with consequences which are necessarily undesirable.” In short, unintended consequences may be desirable, undesirable, or neither.

So an important question emerges: what gets counted, what gets consideration, when (if) a goal-based initiative, a ‘purposive action’ (e.g., an organizational change initiative, a diversity and inclusion program, an Organizational Development intervention, or an education or training program), gets evaluated? Do evaluators look for and count only what the goal intended? Do any unanticipated desirables or undesirables get summed into the ROI?

That general question is what inspired the great evaluation scholar and philosopher Michael Scriven, in about 1971, to train his attention on goal-based evaluation, and on the idea that the evaluation models and methods developed to that point covered an incomplete range of the evaluation epistemology spectrum.

His remedy was an entirely new epistemology and a model he called Goal-Free Evaluation (GFE). The fundamental characteristic, the foundational rule, of GFE is that the evaluator must evaluate without particular knowledge of the goals and objectives a program is intended to achieve.

Though GFE has been criticized as lacking in methods, social, environmental, medical, scientific, and other histories offer many accounts of programs and products that were accompanied by unanticipated side effects. Australia, for example, has a notorious history of species introduction programs (the European rabbit for recreational hunting, the Common Myna to control locusts) that were later discovered to damage native species’ habitat. Minoxidil, a drug developed to treat hypertension, was found to also stimulate hair growth; it’s now better known as Rogaine and is used to treat balding. But as a 2010 report in Clinical and Translational Science makes clear, the case of minoxidil is exceptional because “Information necessary to recognize unexpected drug efficacy is not routinely collected.”

Michael Scriven observed similarly:

“the whole language of “side-effect” or “secondary effect” or even “unanticipated effect” (the terms were then used as approximate synonyms) tended to be a put-down of what might well be the crucial achievement, especially in terms of new priorities. Worse, it tended to make one look less hard for such effects in the data and to demand less evidence about them ― which is extremely unsatisfactory with respect to the many potentially very harmful side-effects that have turned up over the years.

“It seemed to me, in short, that consideration and evaluation of goals was an unnecessary but also a possibly contaminating step.”

In their 2014 review study, The Selfish Goal, American social psychologists Julie Huang and John Bargh argue that “human goal pursuit – whether operating consciously or unconsciously – constrains a person’s information processing and behaviors in order to increase the likelihood that he or she will successfully attain that goal’s end-state.”

If that’s true, can the constraints that form around our information processing and behaviors for goal pursuit also serve to blind us to unintended consequences? It’s long been established, for example, that scientists can rationalize away data or occurrences that don’t fit their theories and goals. The great historian of science Thomas Kuhn explained it through his “incommensurability thesis”. Professor Merton offered a simple aphorism: Was the rider thrown from his horse or did he simply dismount?

The point of this essay is that while goals are indisputably important to individual and collective achievement and progress, it’s also important to be intellectually diligent and to have methodological checks in place that give us visibility into the backwash effects, the unintended consequences, that arise from our pursuit of goals.
