BRINGING PHILOSOPHY TO LIFE #14
INTELLIGENCE: Why Prefer the Artificial to the Real Thing?
ChatGPT is a remarkable new form of artificial intelligence that blurs the line between human beings and computers. On May 1, 2023, Dr. Geoffrey Hinton, whose research laid the foundations for recent advances in A.I., resigned from Google after working there for more than a decade (Cade Metz, The New York Times, May 1, 2023). He quit “so he can freely speak out about the risks of A.I.” The importance of this development in artificial intelligence, and some of its possible risks, was reported in The Economist on April 20:
“‘Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?’ These questions were asked last month in an open letter from the Future of Life Institute. It called for a six-month ‘pause’ in the creation of the most advanced forms of artificial intelligence, and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in artificial intelligence has sparked anxiety about the potential dangers of the technology” (The Economist, April 20, 2023).
In the 20th century, automation developed to the point that millions of people who work with their hands were replaced by machines, which saved a lot of back-breaking toil for workers and allowed employers to cut labor costs. Jeremy Rifkin told an earlier version of that story in a book called The End of Work (1995). The subtitle of Rifkin’s book is: “The Decline of the Global Labor Force and the Dawn of the Post-market Era.” Unfortunately, Rifkin’s optimistic projection about what would happen in the “post-market era” has been eclipsed by a recent book by Shoshana Zuboff called The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Instead of a post-market era, Zuboff documents the replacement of “industrial capitalism” and its markets with “surveillance capitalism,” a form of market capitalism that would probably have shocked even Karl Marx by its power and extent. The obvious answer to why artificial intelligence is expanding in the marketplace is that it reduces expensive human labor costs by replacing human brains with computers and their programs. Today, machines threaten to replace people who work mostly with their minds, allowing corporations to own the means of production and radically reduce reliance on human beings who demand autonomy and dignity. But should we trust ChatGPT to report the news, diagnose illness, prescribe medical treatment, replace psychiatrists, plan military campaigns, render legal opinions and judgments, or make laws? In reporting on why Dr. Hinton was concerned enough about the current trend to leave Google, Cade Metz put the problem this way: “Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line . . . it could be a risk to humanity” (NYT, May 1, 2023). Earlier, Metz interviewed Sam Altman, who has convinced Microsoft and others to invest more than a billion dollars in Artificial General Intelligence.
Altman’s goal is to develop “a machine that could do anything the human brain can do” (NYT, March 31, 2023). Based on that interview and related research, Metz concluded that ChatGPT is “a fundamental technological shift, as significant as the creation of the web browser or the iPhone.” As Atlantic editor Kelli María Korducki put it, what is so disconcerting is that ChatGPT has “the gift of gab” (“Don’t Be Misled by GPT-4’s Gift of Gab,” The Atlantic, March 15, 2023). Interacting with this program is remarkably like chatting with a knowledgeable neighbor, a family member, a physician, or a lawyer.
Artificial Intelligence has come a long way since 1950, when Alan Turing published his famous paper that posed the question: “Can Machines Think?” I first wrote about this subject in the late 1960s in an article called “Minds and Machines: A Modest Proposal” (https://www.agorapublications.com/minds-and-machines.html). The question I posed in 1968 was not whether machines could “think” but whether they were destined to replace the human species. Even then there was little question that computers and their programs could behave in ways that simulate human thinking by using symbols. However, at that point I was so confident of the uniqueness of human existence and its intrinsic value that I wrote the paper as satire, never dreaming that this would still be a serious issue more than five decades later. Not only are we still asking the philosophical question about the uniqueness and intrinsic value of human beings, but considering recent developments, it is even more difficult to explain what the human brain can do that a computer and its programs could not do without human assistance.
I will expand on the philosophical issues related to this new development in A.I. by offering two suggestions concerning the distinction between biological beings and their brains, on the one hand, and the programs that run on digital computers, on the other. A short version of the first suggestion was posed by Cal Newport in The New Yorker. The difference he proposes lies in the distinction between creation and imitation. Newport provides a helpful review of how ChatGPT works and concludes that “a system like ChatGPT doesn’t create, it imitates” (The New Yorker, April 13, 2023). What is the difference between creation and imitation? Newport says: “The A.I. is simply remixing and recombining existing writing that is relevant to a prompt.” That is imitation, not a genuine creation of something new.
A primary focus of my research and writing throughout my career has been philosophy of the arts, with a special emphasis on the role of creative art in human existence. From the beginning, I have opposed the “scientific philosophy” that prevailed in the 1960s and 1970s, devoting much of my scholarship to exploring and explaining how human values emerge from the arts and how they are implemented in life. This task was especially important because in the 20th century various forms of analytic philosophy systematically denied a viable place for values like justice, goodness, beauty, and holiness in the realm of philosophical truth, which was limited to logical propositions that can be verified by sense experience. A.J. Ayer, a British philosopher who taught at Oxford University, presented an extreme but influential version of what was called “logical positivism” in a book called Language, Truth, and Logic (1936). Digital computers and their programs have subsequently played a special role in promoting that philosophical vision, one that has been embraced by the dominant forms of political economy that place profit and the acquisition of capital at the center of human life. Numbers and data are central to the development of ChatGPT, as the historian Jill Lepore shows in another recent article from The New Yorker (March 30, 2023). We need more than numbers and data to live a proper human life; we need genuine creativity and imagination, not only the kind of simulation that comes from calculation based on sense experience.
My second suggestion is that we step back and reconsider the entire worldview that emerged in the 17th century, which has spawned the illusion that computers and their programs can and should replace human beings and other biological species. That worldview, which I call “scientific materialism,” allows no way of explaining why a dog or a cat, not to mention a human being, is preferable to a robot as a companion. To understand that fundamental difference, it might be helpful to examine both human beings and digital computers from the standpoint of what Aristotle identified as four different kinds of cause: (1) material, (2) efficient, (3) formal, and (4) final. To understand anything fully, it is necessary to consider all four of its causes. I will use an example to show what Aristotle had in mind. Consider the chair sitting in my study next to the desk. (1) The material cause is the cherry wood and glue from which it was made; (2) the efficient cause is the carpenter who turned that wood and glue into a chair; (3) the formal cause is the blueprint or set of instructions the carpenter used to construct the chair; and (4) the final cause is the purpose of the chair—a comfortable place for me or a visitor to sit.
The first three of these causes are sufficient to explain a digital computer and its software. Final causes require a radically different way of thinking. The difference lies in the freedom to choose which values to incorporate. When I purchased that chair, I had many chairs from which to choose, allowing me to integrate aesthetic values and other intrinsic values that were not determined by what already existed. I could have selected a chair for sentimental value (for example, a chair that had belonged to a relative), as an investment (an antique or one created by a famous architect such as Mies van der Rohe), or to add to a collection of chairs spread throughout the house. The final cause is the most important one—it is the very purpose for having the chair at all. “Scientific philosophy” such as that of A. J. Ayer explicitly rejects the intrinsic values that are essential to final causes, especially ideas that are found in ethics, aesthetics, and religion. Artificial intelligence, which relies on data that already exists, has no way to incorporate final causes because they require values that have yet to be imagined. Despite its power to draw upon a huge database, artificial intelligence has no imagination; it can only copy and imitate, not create.
To understand the threat from artificial intelligence, I suggest that we turn to genuine intelligence. What is genuine intelligence? The philosopher Alfred North Whitehead called it “reason” in a series of lectures he gave at Princeton University in 1929—The Function of Reason (https://www.amazon.com/dp/B0864T6J2J?plink=Zg5TL653B0mCW6Pf&ref_=adblp13npsbx_0_2_im). According to Whitehead, “the function of reason is to promote the art of life.” That function, I suggest, lies beyond the power of artificial intelligence. The art of life requires intrinsic values that are freely chosen.
Whitehead presents reason as a unified process that has two aspects, which he illustrates by using two figures from the ancient Greeks: Plato and Ulysses. He says: “The one shares reason with the gods; the other shares it with the foxes” (Whitehead, The Function of Reason, Lecture 1). The reason used by the foxes has clear and achievable goals, such as devising ways to remove chickens from the henhouse. The reason of Ulysses is aligned with the technological reason that has shaped the modern world in both good and frightening ways. Whitehead reminds us that “some of the major disasters of humankind have been produced by the narrowness of people with a good methodology. Ulysses has no use for Plato, and the bones of his companions are strewn on many a reef and many an isle” (Whitehead, The Function of Reason, Lecture 1). Geoffrey Hinton has parted ways with Google because he fears that some such disaster for humankind is on the horizon.
The reason of Plato seeks “complete understanding,” which, for finite human beings like us, must always remain a goal, not an achievement. Even if it can never be attained, the kind of “abstract speculation” sought by Plato is essential. Whitehead says that “abstract speculation has been the salvation of the world” by making systems and then transcending them, taking reason to the limit of abstraction. He calls setting limits to speculation “treason to the future.”
Perhaps the most important role of Plato’s kind of reason is that it will not settle for partial views, so it will never dismiss elements that are vital to human experience. The logical positivists and their current followers did just that: they refused to admit justice, beauty, holiness, goodness, dignity, and other such ideas because they did not conform to their methodology. Rather than separating intelligence into two different realms, I suggest that we think of artificial intelligence as a subset of intelligence. Technical reason is an important part of reason, so we need a way of thinking about intelligence that preserves all its aspects, especially the intrinsic values that we need not just to live and survive but to seek the best possible life.
To unify the worldview that has become so divided and fragmented, I agree with Whitehead’s proposal that we restore the final causes that were discarded from the scientific methodology that emerged in the 17th century.
“Those physiologists who voice the common opinion of their laboratories tell us with practical unanimity that no consideration of final causes should be allowed to intrude into the science of physiology. In this respect physiologists are at one with Francis Bacon, at the beginning of the scientific epoch, and also with the practice of all the natural sciences” (The Function of Reason, Lecture 1).
According to Whitehead, the problem with eliminating final causes and the values they manifest is that “almost every sentence we utter and every judgment we form presuppose our unfailing experience in this element of life. … For example, we speak of the policy of a statesman or of a business corporation. Cut out the notion of final causation, and the word ‘policy’ has lost its meaning” (The Function of Reason, Lecture 1). Without intrinsic values, life itself lacks meaning. If the function of reason is “to promote the art of life,” then it is essential to incorporate final causes into our worldview. That is possible only if we move to another way of thinking, a way that is not limited to what is or what has been. It must include what might be and what should be, which have yet to be imagined. Imagination evolved in the biosphere, the realm of organisms that are driven by possibilities—goals, purposes, desire, love, and intuition.
Today, we need a new form of what Whitehead calls “abstract speculation.” Genuine intelligence is preferable to artificial intelligence because it is present in the actual world, not only in the virtual world of computer programs. It connects both to the material aspect of our being and to the values that make life worth living. In other words, it includes not only material, efficient, and formal causes but also final causes. It incorporates three vital aspects of the cosmos: the geosphere (the inorganic realm of matter), the biosphere (the evolving realm of biological life), and the noosphere (the dynamic realm of the mind or soul). These three aspects are best understood as concentric circles that constitute being itself. We do not yet have a worldview that explains all three of these aspects of the world, especially the noosphere. The reason that comprehensive vision does not yet exist is that it is in the process of being created, a function of the imagination. This is a process that requires final causes, which a digital computer does not have and never could have.