Saturday Morning Breakfast Cereal had a comic recently explaining the argument against evolution based on the 2nd law of thermodynamics: “Life on Earth can’t get more complex because that would require energy, and the sun doesn’t exist.” The understanding of entropy is there, but conspicuously missing is the distinction between open and closed systems, and the fact that increasing entropy in the overall system does not preclude localized negentropic environments, such as the ones on Earth that sustain life.
I call this specific failure mode of thinking the Spherical Cow fallacy, after the classic physics joke.
For those unfamiliar, here’s the version on Wikipedia:
Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help from academia. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, notebooks crammed with data, where the task of writing the report was left to the team leader. Shortly thereafter the physicist returned to the farm, saying to the farmer, “I have the solution, but it only works in the case of spherical cows in a vacuum”.
This fallacious reasoning also comes up frequently and noticeably in political discussions, in which the actual distributions of demographics, power blocs, or assets are ignored in favor of a model that, while tractable, is about as realistic as the above spherical cows in a vacuum.
Sometimes the omitted dimensions are excluded from consideration for good reasons, but even this reasoned omission results in distortion by generating inappropriate intuitions about probability distributions. This happened in mathematics and physics with linear equations, as Ian Stewart explained in Does God Play Dice?:
Classical mathematics concentrated on linear equations for a sound pragmatic reason: it could not solve anything else … So docile are linear equations, that classical mathematicians were willing to compromise their physics to get them. So the classical theory deals with shallow waves, low-amplitude vibrations, small temperature gradients [that is, linearizes non-linearities]. […] Linearity is a trap. The behaviour of linear equations … is far from typical. But if you decide that only linear equations are worth thinking about, self-censorship sets in. Your textbooks fill with triumphs of linear analysis, its failures buried so deep that the graves go unmarked and the existence of the graves goes unremarked.
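A standard illustration of the compromise Stewart describes (my example, not his) is the simple pendulum: the exact equation of motion is nonlinear and has no elementary closed-form solution, so classical mechanics assumes small swings and replaces the sine with its argument, yielding a docile linear equation with a simple cosine solution:

```latex
% Exact (nonlinear) equation of motion for a pendulum of length L:
\ddot{\theta} + \frac{g}{L}\sin\theta = 0
% Assume small amplitudes, so \sin\theta \approx \theta, and it linearizes:
\ddot{\theta} + \frac{g}{L}\theta = 0,
\qquad
\theta(t) = \theta_0 \cos\!\left(\sqrt{\tfrac{g}{L}}\,t\right)
```

The linearized version is tractable and fills the textbooks, but everything interesting about large-amplitude motion has been quietly buried, exactly as Stewart warns.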
You see a similar gap between rhetoric and reality amongst technical folks in discussing software. The survival and success of “lesser” or imperfect solutions is a source of much weeping and gnashing of teeth in forums and blog comments everywhere, but one pattern that seems to recur in one after another Daily WTF entry is what I’ll call a power gradient.
A power gradient functions in a social system the way the sun does for our terrestrial one. Yes, the tendency of the system is towards entropy, but a strong energy gradient allows for a localized entropic reversal that is conducive to the development and continued existence of life. Analogously, a power gradient allows technical solutions and software to come into existence whose necessary conditions would otherwise never have obtained, and it allows fatally flawed systems to continue to exist against all odds.
The power source may be internal or external. For instance, the sheer volume effect of the success of the Microsoft Office suite or the Google AdSense program functions as a sort of gravitational well, with customer dollars serving as the external power source that comes with the side effect of some strong disincentives. Consider the lengths to which Microsoft has gone, historically, to preserve backwards compatibility in Windows for legacy third-party programs coded in asinine ways. The failure of XHTML and the subsequent investment in HTML 5 was another example of the technically superior solution (from the spherical cow point of view) failing in the face of inertia and mountains of existing, shoddy HTML.
Internally, software projects may persist long beyond their sell-by date because they are attached to an executive’s priority item. They may be overly literal regurgitations of inadequately considered customer specifications (see the glorious short The Expert, or Bob Cringely excoriating IBM for using customer requirements as weapons). Most interesting, to me, is where individuals in positions of power on a project (perhaps a product lead, or an architect, or a lead developer) are shielded from the consequences of failure by the influence of a senior executive. The latter case happens most clearly in nepotist situations, but smaller shops with expert beginners holding seniority or disproportionate influence are not uncommon either.
Thankfully, moving a system where an inferior solution is being sustained by a power gradient into a new equilibrium in which a superior solution can come to replace it is easier than bringing reality into alignment with the creationist argument at the top.
The first step is identifying the sources of the existing solution’s strength. In many legacy systems, it’s the accumulation of years (sometimes decades) of institutional knowledge that a “clean slate” solution would lose permanently. It might be a particular customer demographic that accounts for a majority of sales. It may be a clade that forms a stable sub-system inside your organization’s hierarchy.
Identifying the sources of power does not make changing or removing them easy. In some cases, a successful diagnosis may show the likelihood of change to be almost nil. An example from the world of politics shows this clearly. Gene Sharp has written voluminously and convincingly on the superiority of nonviolent resistance (see Waging Nonviolent Struggle, for instance), but his theory was shown to be dependent on the demographics of participants and the nature of the struts that held the existing power in place. His theory is elegant, and useful, but by ignoring some of the dependent conditions it’s merely a sophisticated variant of a spherical (polyhedral?) cow. Syria saw more than two years of concerted non-violent protest. However, unlike the successes seen by Indians against the British, or by black Americans, the Assad regime was unmoved, and in fact consolidated its power. There was nearly no overlap between the constituencies which ensured the regime’s ability to sustain itself and the constituencies in the streets being shot at (Ammar Abdulhamid summarizes other factors, here). This disconnect, explained by Bruce Bueno de Mesquita and Alastair Smith in The Dictator’s Handbook, applies to businesses and other organizations as well, not simply nation-states. If an IT director insists upon using a particular software package and has sources of support and funding that are insensitive to user feedback or success, your ability to change that situation will be limited.
However, that grim reality doesn’t obviate the need for such an analysis, because barring an intractable situation, you now know what to target to see the situation shift. A remarkable example of this approach can be seen in Pando’s reporting on the campaign in Colorado to legalize marijuana. Steve Fox, who was later involved in the Marijuana Policy Project, noticed a strong correlation between people who understood that the health concerns associated with marijuana were fewer and less severe than those associated with alcohol and nicotine, and people who agreed with legalization. Unlike California and other states, then, the MPP explicitly targeted the health and risk angle, understanding that this would equip voters to counter the fearmongering counter-campaign effectively. The approach paid off, while the spherical cow approach in California (which stressed, for instance, the illogic of paying for prison space for third-strike possession offenders) didn’t.
In short: any attempt at changing a situation with a strong power gradient will take, at minimum, a lot of work and a very clear understanding of the problem. This doesn’t guarantee success, but it creates the necessary conditions for it. As long as you maintain a spherical cow model your proposed solutions will be not even wrong.
Very nice! I like the power gradient analogy, most importantly in giving a name to the shape of the problem.
The Spherical Cow in the title sucked me right in. I like an alternate punch line:
The leader of the team of professors called the farmer and exclaimed “We found your problem! Can we come over tomorrow afternoon?” The dairy farmer agreed, eagerly awaiting their visit. The professors arrived and sat down in his living room with broad smiles all around. The team leader stood up and began: “Postulate a spherical cow…”
I had heard the phrase, “Postulate a spherical camel on a perfectly frictionless surface…” as a rejoinder to ideas that worked well in theory.
Sometimes it seems that our entire society is one big network of anti-pragmatic power gradients. Part of understanding that is acknowledging that the unintended effects of technology are far, far more important than the intended effects. Part of it is that there is no such thing as a fact without an emotion attached to it. We have emotional reactions to science and technology and use emotional “logic” to reach a conclusion. Rationalization of this result comes afterward.
I experienced this when working in the nascent electric vehicle industry back in the 90s. What was most important about a new vehicle design is that it *felt familiar.* There could be anything at all under the hood or hidden behind the bodywork as long as the consumer could have a comforting emotional interaction with it.
I suppose that with software it is the GUI. Hack up the back end any way you want, but change a font in the GUI and you’ll generate endless comment strings.
Dictators and providers of inferior technology rely on this emotional conservatism.
Read the NYT internal report today, and it’s a good example of taking that first step, the hard look:
http://www.niemanlab.org/2014/05/the-leaked-new-york-times-innovation-report-is-one-of-the-key-documents-of-this-media-age/
Jordan, if you haven’t yet read Phil Agre’s work on critical technical practice and his efforts to get AI out of spherical-cow mode, you should.
Thank you very much, I’ve read neither.
I liked the articles as well.
However, I’m skeptical about the prospects of getting AI out of that mode while still preserving it. What is left of AI if one admits that formalizing concepts we understand intuitively, like making “plans,” leads to a reductio ad absurdum, or, to say it less pejoratively, to a programming-language paradigm where the differentia specifica goes away: everything becomes a plan, and then we realize we have just built yet another Turing tarpit?
The supernatural productivity and efficiency of a formalism is not a problem per se, only if we strive to build a mind from the formalization of psychological concepts. Non-psychological abstractions like “artificial neurons” don’t suffer from this criticism, but then one essentially replaces AI with neuroinformatics and closes the case.