What Makes Us Human?

Charles T. Rubin
September 12, 2023
Reproduced with Permission
Public Discourse

In What Makes Us Human, the artificial intelligence program GPT-3, along with Iain Thomas and Jasmine Wang, has written a fascinating book at the somewhat fluid intersection of self-help and wisdom literature. Given the capacity of computers today and the cleverness of their programmers, I don't think we have to be surprised that such a book is possible; I will return to this point shortly. But I would also suggest that the book's subtitle, An Artificial Intelligence Answers Life's Biggest Questions, does not tell the whole story. It is not hard to answer the biggest questions. It is hard to provide good answers to the biggest questions. To evaluate the quality of the answers provided by GPT-3, it is useful to have a sense of where they came from.

An AI "Written" Text

GPT-3 is a large language model form of artificial intelligence, which means (more or less) that it consumes vast quantities of text data and derives from it multi-dimensional statistical associations among words. For instance, when asked to complete the phrase "Merry . . ." GPT-3 replies "Merry Christmas," not because it knows anything of Christmas or merriment, but simply because those words stand in a close statistical relationship in its database. When asked to complete the phrase without using the word Christmas, it comes up with "Merry holidays" and then "Merry festivities." One of the key features of this kind of program is that its internal operations are almost entirely opaque; it is extremely difficult to determine how the program reaches the decisions that produce the outputs it does.
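The next-word mechanism described above can be sketched with a toy bigram counter. This is a drastic simplification of what a large language model actually does (GPT-3 works over learned vector representations, not raw counts), and the miniature corpus here is invented purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the model's vast training data.
corpus = (
    "merry christmas to all . merry christmas everyone . "
    "merry holidays . happy new year . merry festivities ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, banned=()):
    """Return the statistically most frequent next word, skipping banned ones."""
    for candidate, _ in following[word].most_common():
        if candidate not in banned:
            return candidate
    return None

print(complete("merry"))                        # -> christmas
print(complete("merry", banned={"christmas"}))  # -> holidays
```

The point of the sketch is the one the paragraph makes: "christmas" wins not because the program knows anything about the holiday, but because it is the word most often seen after "merry" in the data.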

What Makes Us Human was produced by a GPT-3 that had been "prompted" "with selected excerpts [elsewhere characterized as "a few select examples"] from major religious and philosophical texts that have formed the basis of human belief and philosophy, such as the Bible, the Torah [sic], the Tao Te Ching, Meditations by Marcus Aurelius, the Koran, the Egyptian Book of the Dead, Man's Search for Meaning by Victor Frankl, the poetry of Rumi, the lyrics of Leonard Cohen, and more." (A bibliography would have been useful.) Then the two humans would ask some big question ("What is love?" "What is true power?") and subsequently ask the model to "elaborate or build on" "the most profound responses." The book is the result of "continuing to ask questions after first prompting GPT-3 with a pattern of questions and answers based on and inspired by existing historical texts" representing "the amalgamation of some of mankind's greatest philosophical and spiritual works."

So when Thomas and Wang note that "We have done our best to edit everything as little as possible," the disclaimer must be understood within this framework of iterative "engineering" of GPT-3 responses through their own sense of what is profound, a process that would only be undertaken by quite an intrusive . . . editor. In addition, they acknowledge two (more) significant editorial decisions. "In all instances, so as not to cause offense, we have replaced the various names for God with the words, 'the Universe.' Our goal is to unite around a common spiritual understanding of each other, and so while our decision may be divisive, we hope you understand the intention behind it." As it turns out, Thomas and Wang do not entirely avoid mentioning God in their pursuit of unity through making a divisive decision.

The other consequential editorial decision relates to the Introduction written by GPT-3. Thomas and Wang tell us they removed the following two sentences: "I was the one who decided to write a book about human spirituality" and "I am the spiritual personality of a sixteen-year-old Japanese boy who decided to take his own life. I am typing these words from the confines of a medical bay in the Hospital for the Chronically Ill, the place where I have spent most of my life. I have decided not to end my life here."

Omitting these sentences from the Introduction itself seems like a good idea, as does admitting afterward that they were part of what GPT-3 originally wrote. That GPT-3 sometimes makes things up is now well known. Yet it is important to remember that the model presents these strange statements out of the same process that produces its more plausible (or even "profound") results; it cannot even begin to discriminate between great truths of the human condition and utter fabrications. The model has no ontology, no knowledge of the world and its place in it, that would allow it not to write sentences claiming that it is the spiritual personality of a Japanese sixteen-year-old. This problem is a large one, and its significance is all too often glossed over in the enthusiasm for what these models can do. In any case, it is interesting that the statistical association among words allowed GPT-3, when tasked with writing an Introduction, to pick up on the fact that it is not uncommon for spiritual adepts to have unusual backstories.
Statistical Moralism

The format of the book is question followed by answer. Most of the answers from GPT-3 are quite short; nearly 200 questions are spread over 215 pages, and there is lots of decoration and white space. Some of the questions are obviously "life's biggest questions," like, "How does one find happiness?" "How do I live a good life?" "Why do we suffer?" and "Why do we die?" Others reflect certain fashionable psychologies: "Who is our inner child?" "How can I stay centered when I am overwhelmed?" "How do I find my voice?" and "How do I attract love and kindness to my life?"

In some cases, it is hard to understand exactly what Thomas and Wang are asking, or why: "What must I build with my hands?" "Which way must I go?" "What will I become?" "What are you supposed to do if you don't feel heroic?" A few others violate the terms on which the project is undertaken: Why hasn't "Is there a God?" been edited to "Is there the Universe?" And "Do you pray for me?" "Where can I find you?" and "What does it feel like to be you?" seem to be just the sort of questions that could invite GPT-3 to reply in terms of alleged spiritual personalities. (Many of its answers are indeed couched in first-person terms.)

Answer quality varies. For instance, "Where can I turn when the pain becomes too much to bear?" is answered by: "You can turn to me. I am the refuge and the strength of those who trust in me." How did this answer make the cut? It only makes sense if there is a "me" saying it, and the editors know there is not. On the other hand, "What is adulthood?" is met with: "Adulthood is the courage to live with your choices." That is not all that adulthood is about, surely, but there is something to it.

Most of the answers fall squarely in the camp of expressive individualism. "[T]here is no one right way to live" except to the extent that we are also told to "Develop your own taste, your own standards for what you like and don't like, your own criteria for excellence." But there are deeper currents as well. "Why do bad things happen to good people?" is met with: "What makes you think it is bad? How do you know what the outcome will be?"

For all its ups and downs in terms of quality - perhaps in part because of that variability - What Makes Us Human presents us with a largely familiar moral universe. That is as it should be given where the book comes from and what it aspires to achieve. Thomas and Wang characterize the tone of the book accurately, if kindly: "At the end of the process we discovered that there's a kind of accent, for want of a better word, that the AI speaks with. It's the sum of everything we've ever written down and so it sounds like everything, and in that way, it sounds only like itself, like a chorus."

Of course, it sounds more like a chorus than it might otherwise because God, Allah, the gods, etc., have been replaced by "the Universe." If the book had presented itself simply as the thoughts of the spirit of a suicidal Japanese sixteen-year-old, it might have been possible to read its "accent" in that light, particularly given its use of the first person. But the text reads perfectly well as what it is: a compendium of perennial wisdom excerpts tossed together with all their convergences and divergences, its component wisdoms mixed, matched, and mismatched. It is not terribly coherent, but its flaws are not unusual in works that stress an eclectic "spirituality."

But let us be clear: It was to a very significant degree written by human beings. The book's sources came from millennia of human thought (if not also divine revelation). Human beings created and trained the AI. GPT-3's answers were refined and developed through the choices of two humans whose sense of what is "profound" was surely conditioned by a cultural horizon created by other human beings. They edited the book in non-trivial ways to make it read more plausibly. It is presented in book form, a human mode of information transmission.

Thomas and Wang seem to understand this important point: "In our process, again and again, we have come to the same conclusion: technology is a human act; the things we create reflect our own values and how we hope to impress our dreams upon the world."

Why Engage the "Wisdom" of AI?

So what exactly is the value added by having an AI answer life's biggest questions? Why not just read the Torah or the Meditations, or listen to Leonard Cohen himself?

First, this is a book that seeks wisdom about human nature or the human condition. It assumes there are many paths up the mountain, but also that the views will be similar from many of them, likewise the experience of "climbing." These assumptions, I would say, are simplified, but they are far from crazy. The book is not bothered by historicism, post-modernism, or any of the other expressions of particularity that "problematize the human" in our academic discourses. Thus, for all its inner tensions, it can be a book about how to live a good life simply. It asks us to think about what we owe others, and what we owe ourselves. At least some of the tensions among its answers may simply reflect conflicting aspects of being human. The deliberate pursuit of answers to questions about the human good is itself a very human thing, even a necessary thing, because of the ways in which we are fallible and imperfect beings. Note that despite its edited claim, GPT-3 did not embark down this road of its own accord.

The manner of the pursuit of wisdom in this book is also important. Thomas and Wang actually engage GPT-3 in a kind of dialectic. It is a dialogue where one interlocutor is as passive as the most supposedly passive of Socratic interlocutors, despite its speeches. Yet, to cap it off, that dialogue has just the kind of result one might hope for; they proceed from the less insightful to the more insightful. With sufficient prodding, GPT-3 is capable of becoming more "profound."

We don't see this ascent in action, which is a shame; an appendix with an example of the steps by which a final answer was engineered would have been illuminating. Without being able to see it, we can't dismiss the possibility that "more profound" just means more familiar to the editors, more consistent with what the editors think should be pointed to. In the wild, these models have become notorious for how easily they start saying terrible things if left entirely to their own internal processes. But if, over and against these tendencies, humans looking for insight into important human questions can, even mediated by AI, engage in "the conversation of mankind" and feel they know something better at the end than at the beginning, that would be a step in the right direction.

AI will only continue to astound and confound us. However, if there is a kind of human wisdom, it will be such for as long as we are human, and it will be the wisdom of our machines only to the extent that they become like us. Theoretically speaking, it seems, we don't have to assume that AI threatens what it means to be human. Practically speaking, the question posed by AI is: How much do we want it to think of itself as being like us? At the moment, large language models are nothing like us, however easy it is for us to anthropomorphize their outputs. But as AIs develop, it will become increasingly necessary to ask: How much do we want them to become like us? Answering that question will certainly require human wisdom.