Karl Popper—or, more accurately, Karl Popper’s theory of knowledge—came to my rescue when I found myself in intellectual difficulties as an architecture student in London in the 1970s. At that time the architectural profession was still congratulating itself on its collective decision to throw away thousands of years of accumulated architectural knowledge and start from scratch using a supposedly rational approach that it called “functionalism.” Both the name and the method were inspired by Louis Sullivan’s well-known epigram, “Form follows function,” which had been informally adopted as the profession’s new motto.
With the new approach, architects were to concern themselves exclusively with utilitarian functions, and the form of each building was expected to emerge naturally from the process of providing for those functions. While buildings can and do perform many functions, the focus was always on efficiently accommodating the activities to be performed inside the building.
In my first year in architecture school I learned how this method was supposed to work in a studio course called the Housing Unit. Having been given a site plan and a brief (consisting, say, of a list of rooms and their associated floor areas, or a similar list of unit types and sizes), we were expected to sit down at our drawing boards and come up with a design that would satisfy the requirements of the brief.
This approach bothered me for several reasons. In the first place, it seemed absurd to be designing houses from scratch when, as I wrote to a friend, “People have been building houses—millions upon millions of them—for thousands of years. If we had to start from scratch each time we’d still be living in caves.”
Furthermore, it seemed obvious that, in reality, architects didn’t design buildings by making sure the form of each building followed directly from its functional specification. On the contrary, what actually happened was that some architect with a particularly dominant personality or gift for self-promotion would develop a distinctive style, and then that style would be copied by other architects. Both the originator and the imitators would apply that style to everything they designed, from single-family houses to high-rise office buildings, regardless of any specific functional requirements and regardless of the local conditions. The result was that, in any given period, most architect-designed buildings all over the world were built in whatever the dominant style or styles happened to be. (In the middle years of the 20th century the two most dominant styles were Le Corbusier’s concrete boxes on stilts with strip windows and Mies van der Rohe’s glass boxes with exposed steel frames.) Occasionally a new style might come along and at least partially supersede an old one, but when this happened the change from the old style to the new one wouldn’t be in response to a change in functional requirements; it would just be a matter of fashion.
Most damning of all, the buildings that were designed using the functionalist approach actually functioned very badly—the flat roofs leaked; the exposed concrete corroded; the glass walls caused overheating in summer, cold drafts in winter, and high energy bills all year round. Public housing and other “urban renewal” projects destroyed beautiful and well-loved cities while exacerbating the social problems they were supposed to solve, and the modern built environment became so unremittingly ugly and uncomfortable that modern architects and modern architecture came to be reviled by the general public.
Taking all this into consideration, I came to the conclusion that functionalism was not, in fact, an improvement on the traditional approach to architectural design it had replaced. With the traditional approach, the architect would select from a range of inherited architectural forms those that seemed most appropriate in the circumstances, and additions and changes to the stock of inherited architectural forms occurred as needed in response to problems.
During my final year in architecture school I had the good fortune to stumble on an essay by Peter Medawar discussing Popper’s theory of knowledge. This inspired me to start reading Popper’s own works, and I was soon convinced, not only that his theory was correct, but that it could shed some light on the methodological problem that was bothering me. I even wrote a paper in which I said as much, arguing against the direct, “form follows function” approach in which the designer works in isolation, designing individual buildings on a one-off basis by specifying goals and generating designs, and in favor of an indirect, evolutionary approach in which the designer works in the context of a design community, helping to perfect generic designs over multiple cycles of design, construction, and use by obtaining and applying feedback about the actual performance of actual buildings.
I didn’t have any success persuading my professors of these ideas, and, after I graduated and started practicing architecture, I didn’t have any success persuading my professional colleagues either. Most discouraging of all, I didn’t have any success when I tried to explain the ideas to Popper himself.
This happened while I was in England on architectural business in the late 1980s. A family friend—Jean Medawar—very kindly arranged for me to have tea with Sir Karl at his house in Kenley. All I had wanted to do was shake his hand and tell him how much his work had meant to me, but he surprised me by asking if there was anything I wanted to ask him. I tried, rather badly, to explain my ideas about design methodology using some illustrations in a book about ancient Greece that happened to be lying on a table. I pointed to a photo of a Doric temple and suggested that its architects could not have designed columns with such pleasing proportions by applying the modern, form-follows-function method of design. Instead, I said, they must have done so by applying rules (e.g., “Make each column six lower diameters in height,” “Make the capital of each column one-half of one lower diameter in height,” and so on), and those rules must have been refined and improved over time through an evolutionary process similar to the process through which scientific knowledge grows and improves. He didn’t seem to buy it, and I didn’t have the confidence or the fluency to argue the matter, and, in any case, I was happy simply to have met the man, so I let it drop.
Some years later, however, after I’d given up architecture and enrolled in law school, my interest in the practical implications of Popper’s theory of knowledge was revived when I discovered that—like architects—lawyers (or at least lawmakers and legal theorists) tend to take an instrumental, “form-follows-function” approach to practical reasoning. Moreover, unlike architects—who only tried it once—lawmakers have repeatedly chosen to sweep away an accumulation of existing knowledge and—from Robespierre to Pol Pot—the consequences have invariably been far worse than leaking roofs and crumbling concrete! As a law student, therefore, I tried again to make the case for an evolutionary, Popperian alternative. As before, I had no luck convincing anyone, and, as before, I let the matter drop and got on with my life.
My feelings about Popper’s theory of knowledge never changed, however. I continued to believe that Popper’s solution to the problem of induction constitutes a major turning point in the history of ideas—comparable in its way to the Copernican and the Darwinian revolutions—and I continued to believe that his “logical analysis of the method of the empirical sciences” can and should be extended to the practical sciences, i.e., to disciplines like architecture and lawmaking in which decisions are made about what is to be done. What’s more, I still believe those things, which is why, in recent years, I have begun trying again to make the case for a logic of practical discovery—mostly to lawmakers and judges because that’s what my job entails, but also to anyone else who will listen.
Admittedly, it’s often hard going. I find that even intelligent, well-educated people—people who accept without question that Darwinian evolution accounts for the emergence of order in the biological world—nevertheless find it hard to imagine any alternative to intelligent design when it comes to things like human technology and human institutions. Nevertheless, this time around I’m not going to let it drop. I’m going to keep going until the infirmities of old age make it impossible to continue.
Because it’s been so grossly and repeatedly misrepresented in the secondary literature and because it seems to needlessly confuse people, I try to avoid talking about falsifiability as a demarcation criterion. Instead, I focus on the three elements of Popper’s theory that seem to me to be particularly relevant to practical enquiry: its fallibilism, its critical rationalism, and its focus on the growth of objective knowledge. By quoting and paraphrasing Popper’s logical arguments with regard to these three topics, and by supplementing those arguments with examples from the world of physical design—everything from the history of pocket watches to the homely art of baking bread—I encourage practical decision-makers to learn the following “lessons from Popper.”
First, we should abandon the instrumental, “form-follows-function” approach to practical reasoning that I criticized when I was an architecture student and that I now call direct design. All of the problems that Popper identifies in his critique of induction—and some other problems as well—make that approach unworkable. There is no procedure or algorithm for generating answers to practical questions that are bound to be correct simply by virtue of the way they are generated, and, regardless of how they are generated, answers to practical questions can never be verified or justified by means of logical inference. In practical matters as in the natural sciences, we must accept that we are, necessarily, fallible. Attempting to use reason to answer our questions directly, correctly, and once and for all is always futile. Attempting to do so when the outcome of our decision-making will significantly affect others—as is typically the case for architects—is negligent. Attempting to do so when the outcome will be forcibly imposed on others—as is always the case for lawmakers—is grossly negligent, or worse.
Second, in lieu of direct design, we should apply the approach to practical deliberation that I have been advocating ever since I was an architecture student and that I now call evolutionary design. As the practical application of critical rationalism, evolutionary design proceeds, not through algorithmic reasoning or logical inference, but through an endlessly repeated cycle of trial-and-error as problems with existing designs are identified and tentative improvements to those designs are introduced and tested. By systematically applying the evolutionary method, people in practical disciplines can learn from their mistakes in much the same way that scientists learn from theirs—by treating all knowledge as conjectural and by using reason and experience critically to help them discover and eliminate errors.
Third, we must understand that objective knowledge can and should play a role in the practical sciences that is analogous to the one it plays in the natural sciences. The invention of language, writing, and other symbolic systems has enabled mankind to accumulate a vast body of objective knowledge. It includes factual statements like creation myths, sports statistics, and scientific theories, and it also includes prescriptive statements like rules of etiquette, moral precepts, proportional systems for building parts, and laws. Taken as a whole, those prescriptive statements constitute our stock of objective practical knowledge. It has, for the most part, evolved into a comprehensive, reliable, and effective guide to action, and, most of the time, we do well to be guided by its contents. There is far too much suffering in the world, however, for us to simply accept our inherited stock of practical knowledge without question. As evolutionary designers, we must try to improve it by using reason and experience critically to identify problems, by using non-rational, creative intuition to invent potential improvements, and—using reason and experience again—by subjecting these potential improvements to further criticism and testing. By systematically taking that critical, evolutionary approach to all its many elements, we can encourage our stock of objective practical knowledge to become more comprehensive, more reliable, and more effective over time.
Finally, we should be humble and tolerant, and we should take responsibility for the consequences of our actions. Fallibility across the board is an inescapable part of the human condition. We cannot know for certain what constitutes the good or how best to promote it. We cannot know for certain what ought to be done. Because we may be wrong about these things even when we feel sure we are right, we ought to bear the risks of our mistakes ourselves. Because others may be right even when we feel sure they are wrong, we ought to allow them to do what they think is best, providing only that they too are willing to bear the risks of their mistakes. In addition to being an ethical imperative for anyone with a rational understanding of human fallibility and a moral aversion to unnecessary violence, there is a good prudential reason for trying to foster and institutionalize this combination of tolerance and responsibility—it provides an environment in which critical rationalism in general, and evolutionary design in particular, can flourish.
Free people living in an open society can undertake a greater number and a wider range of experiments—and generate more data about more options more quickly—than is possible in a closed and centrally controlled society. That is one of the primary reasons why open societies are invariably more prosperous and more scientifically and technologically advanced than closed ones. Free people make a lot of mistakes, but—as long as the institutional environment encourages, and if necessary requires, them to acknowledge and take responsibility for their mistakes—that is a good thing. The more mistakes we make, the more opportunities we have to learn.
Jon Guze is Director of Legal Studies at the John Locke Foundation. Before joining the John Locke Foundation, Jon practiced law in Durham, North Carolina for over 20 years. He received a J.D., with honors, from Duke Law School in 1994 and an A.B. in history from Harvard College in 1972. In between, he studied architecture and, as a Vice President at HOK, Inc., he managed numerous large architectural and engineering projects across the US and in the UK.
Jon lives in Durham, North Carolina with his wife of more than 40 years. He has four children and six grandchildren.
Jon can be reached at email@example.com.