Ever hear of American philosopher John Searle’s Chinese Room? If not, here’s the gist of it: Searle’s Chinese Room is a thought experiment devised as an argument against “strong artificial intelligence,” the school of thought maintaining that machines can meet and exceed human cognitive capabilities, given sufficient processing horsepower, proper architecture, the right software and so on. In other words, a computer can not only do some of the things minds do but can actually be a mind (and thus enjoy all the rights and responsibilities we reserve for minds, but that’s a topic for another day).
In the Chinese Room, a person sits isolated, locked in, possessing nothing but the clothes on his or her back and a collection of rules written in English. These rules are simple “if-then” statements that tell the person what to do when presented with patterns of ink on slips of paper, which can be passed one at a time through a small slit in one wall of the room.
As it turns out, those patterns of ink on paper represent questions posed in Chinese. And as it also turns out, the person in the room, who doesn’t know a lick of Chinese, can correctly answer those questions, in Chinese, simply by manipulating the symbols according to the rules. The system works so well, in fact, that those asking the questions believe they’re getting answers from a Chinese-speaking person, instead of an English-speaking person robotically manipulating Chinese symbols.
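To make the mechanics concrete, here’s a minimal sketch of the room in Python. Everything in it is hypothetical: the “rulebook” is just a lookup table with a couple of invented entries, and the only point is that pattern matching alone produces the behavior Searle describes.

```python
# A toy Chinese Room: the "rulebook" is nothing but a lookup table mapping
# input symbols to output symbols. The entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(slip_of_paper: str) -> str:
    """Apply the rules mechanically; no understanding of Chinese required."""
    return RULEBOOK.get(slip_of_paper, "")

print(chinese_room("你好吗？"))  # A fluent-looking reply, produced by pure lookup.
```

From outside the room, replies like that are indistinguishable from a Chinese speaker’s; inside, it’s nothing but symbol shuffling.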
Assuming that’s true, the question is: Does the person sitting in the Chinese Room understand Chinese or not? Certainly, the person seems to behave intelligently, but does he or she have any sense of what’s really going on?
The parallels between the Chinese Room and many modern businesses, where there are lots of rules about how to enact a change but little or no understanding of what’s really going on, should be obvious. But does all that really matter, given that the outcome is the same either way? I’d argue that it does, thanks to a concept at the heart of computer science: GIGO, or garbage in, garbage out.
GIGO means that nonsense fed into a rule-based system will be nothing but garbage when it exits the system (Shakespeare-typing monkeys aside – we don’t have forever to wait on accidentally brilliant, practically unrepeatable outcomes). This is one of the biggest reasons that top-down, authoritarian, because-we-say-so change processes fail.
Rule-based systems behave intelligently only as long as their inputs make sense. When those inputs – competitive changes, organizational changes, random fluctuations in the Matrix – stop making sense, the outputs stop making sense too. It’s as if you asked the guy in the Chinese Room, “What’s the color of funny?” Answer: armadillo marmalade.
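Here’s a rough sketch of that failure mode, again purely illustrative: a made-up rule that answers “what’s the color of X?” questions keeps firing whether or not X makes any sense, so nonsense in yields nonsense out.

```python
# A crude pattern-matching rule, invented for illustration: it fires on any
# "color of X" question, sensible or not.
COLORS = {"the sky": "blue", "grass": "green"}

def respond(question: str) -> str:
    """Answer 'what's the color of X?' by looking up X, no matter what X is."""
    q = question.lower().rstrip("?")
    if "color of " in q:
        subject = q.split("color of ", 1)[1]
        # The rule happily processes a subject it has no sensible answer for.
        return COLORS.get(subject, "armadillo marmalade")
    return "(no rule for that)"

print(respond("What's the color of the sky?"))  # "blue": sensible in, sensible out
print(respond("What's the color of funny?"))    # garbage in, armadillo marmalade out
```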
To counter that, you could ensure that every input makes sense. If you can manage that, you should be running the country. You could also design the system to stop dead in its tracks when it encounters something that doesn’t make sense. Users of Microsoft Windows call this the “blue screen of death” or BSOD (Mac users should feel only slightly smug here – you’ve got the somewhat less frequent GSOD).
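The “stop dead in its tracks” option amounts to refusing to answer rather than emitting garbage. A hypothetical fail-fast variant of the toy rule above might look like this:

```python
def strict_respond(question: str) -> str:
    """Fail-fast variant: answer only what the rules sensibly cover, else halt."""
    colors = {"the sky": "blue", "grass": "green"}
    q = question.lower().rstrip("?")
    if "color of " in q:
        subject = q.split("color of ", 1)[1]
        if subject in colors:
            return colors[subject]
    # Instead of emitting garbage, stop and surface the problem
    # (the organizational equivalent of a blue screen).
    raise ValueError(f"Input doesn't make sense to the rules: {question!r}")

print(strict_respond("What's the color of the sky?"))  # "blue"
print(strict_respond("What's the color of funny?"))    # raises ValueError and halts
```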
Or, instead of becoming omnipotent or rebooting every time a glitch comes along, you could endow the system with strong AI, the capacity to actually understand what it’s doing and why it’s doing it, as opposed to following orders. This is why we focus so much on learning – and, by extension, communication – in change management. This is why it’s so important that everyone in your organization understand the motives and meaning of change, as well as the method. And that’s why, when it comes right down to it, every change initiative is a learning initiative.