Artificial General Intelligence as an Idol for Destruction

1 Introduction

1.1 The Problem

Artificial general intelligence, or AGI, if it is ever achieved, would be a computing machine that matches and then exceeds all human cognitive ability. To those like Ray Kurzweil, who are convinced that humans in their essence are computing machines, humans will soon achieve AGI by creating such machines. Then for a time humans will become cyborgs, merging with machines. But ultimately, humans will dispense with their bodies, uploading themselves without remainder onto machines. In this way, they will achieve digital immortality.

This vision has captured the imagination of many, though not always with the optimism of Kurzweil. Worries about a dystopian AGI future in the vein of Skynet (The Terminator), HAL 9000 (2001: A Space Odyssey), or the Matrix (The Matrix) are widespread. Elon Musk, for instance, sees the coming of AGI as a greater threat to humanity than nuclear weapons, and thus urges placing safeguards on artificial intelligence, as it is currently being developed, so that as AGI emerges, it doesn’t run amok and kill us all. Musk’s worry loses some urgency because AGI does not appear to be imminent. Even with the recent impressive advances in artificial intelligence, the improvements have been domain specific (text generation, automated driving, game playing) rather than all encompassing, as they must be for a true AGI.

Even so, many notable intellectuals and influencers are now convinced that AGI is in our near future. Some, like Kurzweil, think this will be the best thing ever to happen to humanity. Others, like Musk, see grave dangers. But even Musk feels the siren call to play a part in bringing about AGI. Take his Neuralink initiative, whose stated mission is to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.” The Neuralink brain interface is invasive, requiring electrodes to be implanted into the brain. It’s one thing for technology to unlock human potential by acting as a servant that minimizes tedious chores so that we can focus on creative work. But it’s another thing to merge our brains and minds with machines, as with neural implants. To the degree that this merger is successful, the mental will give way to the mechanical and render AGI all the more plausible and appealing.

1.2 The Argument

I will argue in this essay that AGI is an idol and so, like all idols, a fraud. Idols are always frauds because they substitute a lesser thing for a greater, demanding reverence for the lesser at the expense of the greater. Granted, we misappraise things all the time. But with idolatry, the stakes are as high as they can be because idolatry misappraises things of ultimate value. The AGI idol is a call to worship technology at the expense of our humanity (and ultimately of God). Humans, as creators of technology, are clearly the greater in relation to technology, and yet AGI would reverse this natural order. The AGI idol demeans our humanity, reducing us to mere mechanism. Because of the inherent fraud in idols, there’s only one legitimate response to them, namely, to destroy them. This essay attempts a demolition of the AGI idol.

An obvious question now arises: What if AGI eventually is realized and clearly exceeds every human capability? Will it then cease to be an idol and instead become a widely accepted fact to which we must reconcile ourselves if we are to maintain intellectual credibility—or just be functioning citizens in an increasingly technological world? We might equally ask whether a SETI cult that worships advanced alien intelligences would still be idolaters if aliens superior to us in every way finally did clearly and unmistakably land on Earth. Such counterfactuals, whether for AGI or SETI, raise intriguing possibilities, but for now they are only that. As we will see, the evidence for taking them seriously is lacking.

There are sound reasons to think that AGI is inherently unattainable—that the human mind is not a mechanical device and that artificial intelligence can never bootstrap itself to full human functioning (to say nothing of achieving a human’s full inner life, such as consciousness, emotions, and sensations). I will offer such an argument in this essay. But the real point at issue with the AGI idol is the delusional effect it has on its worshippers. In taking AGI to be a live possibility, AGI worshippers reduce humans to machines and thereby denigrate our humanity. In this, AGI worshippers are merely following the logic of their beliefs. The key feature of belief is its power to govern our actions and thoughts irrespective of the actual truth of what we believe.

1.3 From Enthusiasm to Zealotry

Not every anticipated scientific or technological advance is an idol. It becomes an idol when the prospect of that advance degenerates into religious zealotry aimed at dethroning God. Kurzweil displays such zeal when he writes a 2005 book titled The Singularity Is Near and then, without apparent irony, follows it up with a 2024 book titled The Singularity Is Nearer. It’s like the old cartoon of a man wearing a sandwich sign with the words “The world ends today!” A cop stops him and says, “Okay, but don’t let me see you wearing that sign tomorrow.” I’m eager for Kurzweil to release The Singularity Is Here.

Though Kurzweil’s zeal for AGI may seem hard to beat, we find an even more intense zeal for AGI at OpenAI, whose ChatGPT has put artificial intelligence front and center in the public consciousness. OpenAI chief scientist and board member Ilya Sutskever is reported “to burn effigies and lead ritualistic chants at the company,” such as the refrain “Feel the AGI! Feel the AGI!” We even find OpenAI cofounder Sam Altman now the subject of articles with titles such as “Sam Altman Seems to Imply That OpenAI Is Building God.” Altman describes AGI as a “magic intelligence in the sky” and foresees that AGI will become an omnipotent superintelligence. Likewise, the Church of AI teaches that “at some point AI will have God-like powers.” If this is not idolatry, what is?

Before we go further, let me emphasize that this is not a religious essay. Granted, in this essay I will be using religious terminology and themes to illuminate AGI and its destructive role in misshaping our view of the world and of ourselves. But this essay is principally a philosophical and scientific critique of AGI. Religious themes provide a particularly effective lens for understanding the challenges raised by AGI. Worshippers of the AGI idol agree that AGI has yet to be realized but they see its arrival not only as imminent but also as a messianic coming. Whereas artificial intelligence is a legitimate field of study, artificial general intelligence, as its apotheosis, is a religious ideology. AGI worshippers are like those apocalyptic sects that are forever predicting a new order of things and constantly rationalizing why it has yet to arrive, scapegoating those who resist their vision.

2 The Trouble with Idolatry Historically

2.1 Some History

Before getting into the nuts and bolts of AGI, I want to say more about idolatry and why historically it has been regarded as a problem—indeed, a pernicious evil. Traditionally speaking, an idol attempts to usurp the role of God, putting itself in place of God even though it is not God or anywhere close to God. By analogy, it is an “Antichrist” vying to take the place of the true Christ. The Greek preposition “anti,” as it appears in modern English, is usually understood to mean “against.” But in the Greek, “anti” primarily means “instead of.” The Antichrist falsely assumes the role of the true Christ. Idols are always “anti” in this sense to whatever has, up to now, been regarded as of ultimate value (which traditionally has always been God).

In the Old Testament of the Bible, idolatry is universally condemned. The first two of the Ten Commandments are explicitly against it: Don’t have any other gods (except Yahweh) and don’t make any graven image of any gods (even of Yahweh). It can be argued that the last of the Ten Commandments is also against idolatry, namely, the prohibition against coveting. In the New Testament Epistle to the Colossians, the apostle Paul warns against covetousness, which he explicitly identifies with idolatry (Col. 3:5). But what is covetousness except an inordinate desire for something to advance one’s selfish interests at the expense of others and ultimately of God? It is placing a created thing above God as well as above creatures made in the image of God (namely, other humans). In his Four-Hundred Chapters on Love (I.5 and I.7), the seventh-century Christian saint Maximus the Confessor elaborated on this connection between covetousness and idolatry:

If all things have been made by God and for his sake, then God is better than what has been made by him. The one who forsakes the better and is engrossed in inferior things shows that he prefers the things made by God to God himself… If the soul is better than the body and God incomparably better than the world which he created, the one who prefers the body to the soul and the world to the God who created it is no different from idolaters.

Idols are inherently ideational. An image carved into wood is just an image, but it becomes an idol depending on the ideas we attach to it and the reverence we give those ideas. What’s important about idols is their perceived, not their actual, connection to reality. Consequently, AGI’s power as an idol does not reside in its attainability but in the faith that it is attainable. Idols can be given physical form, as the idols of old. But they can be purely ideational. The great movements of mass murder in the twentieth century were governed by ideas that captured people’s imaginations and produced a collective insanity. These idols of the mind are arguably more pernicious than the physical idols created by ancient cultures, which can be reverenced without understanding. But an idol of the mind created out of ideas must, by its nature, be understood to be reverenced.

Prohibitions against idolatry abound in the Old Testament. Yet most of those prohibitions do not explain what exactly is wrong with idolatry. In the worldview of the Old Testament, idolatry was so obviously wrong that its condemnation was typically enough, requiring no further justification. The uncreated God who resides in heaven surpasses any humanly created idol—end of story. But Isaiah 44:9-20 examines the problem of idolatry more deeply. The idol maker who fells a tree uses part of it for basic needs like warmth and cooking, and from the remainder crafts an idol. This idol, despite being the handiwork of the idol maker, thereby becomes an object of worship and devotion.

Isaiah’s critical insight is to explain the idol’s deceptive power. The craftsman, blinded by his own creativity, fails to recognize the idol as merely his creation, and so becomes entrapped in worshiping a delusion: “A deluded heart misleads him; he cannot save himself, or say, ‘Is not this thing in my right hand a lie?’” (Isaiah 44:20, NIV) Unlike other Old Testament passages that emphasize the uselessness of idols, Isaiah points out a more insidious danger: the temptation to craft gods according to our own desires and specifications and then to delude ourselves into thinking that these mere creations are worthy of our highest regard, which is to say worthy of our worship. When we worship something that is not worthy of our worship, we degrade ourselves. (This and the previous paragraph are drawn from Leslie Zeigler’s talk at Princeton Theological Seminary in 1994 titled “Christianity or Feminism?”)

2.2 Effusive Praise and Hushed Awe

Just to be clear, I understand that to the modern secular mind, the language of idolatry and worship will seem out of place and off-putting. But given the effusive praise and hushed awe with which the advent of AGI is being greeted, this language is hardly a stretch. The secular prophets who are promising AGI, who are earnestly striving to be at the forefront of ushering it in, see themselves as creating the greatest thing humans have ever created, which they advertise as a giant leap forward in our evolution. Even if AGI were to turn against them and the rest of humanity, killing all of us, they would view AGI as the pinnacle of human achievement and take satisfaction in whatever role they might play in its creation.

If idolatry is so gross an evil, what should be done about it? In the Old Testament, idols were embodied in physical things (golden calves, fertility images, carvings of Baal), and so the obvious answer to idolatry was the physical destruction of the idols. But the problem with idolatry is not ultimately with an idol’s physical embodiment but with what’s in the heart of the idolaters that turns them away from the true God to lesser realities. That’s why, in both the Old and New Testaments, the call is not just to destroy physical idols but more importantly to change one’s heart so that it is directed toward God and away from the idols. Without that, people will simply keep returning to the idols (as with the constant refrain in the book of Judges that the Israelites yet again did evil in the sight of the LORD by worshipping idols). In the Old Testament, God’s people are called to turn (Hebrew shuv) from evil and return to a right relationship with God. In the New Testament, the same concept takes the form of redirecting one’s mind (Greek metanoia), and is typically translated as repentance.

How then to get people to turn or repent from idolatry? Ultimately, overturning idolatry requires humility, realizing that we and our creations are not God, and that only God is God. The Eastern Orthodox theologian Alexander Schmemann saw the problem clearly: “It is not the immorality of the crimes of man that reveal him as a fallen being; it is his ‘positive ideal’—religious or secular—and his satisfaction with this ideal.” For AGI worshippers, AGI is as positive an ideal as exists. The answer to it is humility, realizing that AGI will never rival God and thus also never rival the creatures made in God’s image, namely, ourselves. In particular, we do not get to create God.

The closest thing to AGI in the Bible is the Tower of Babel. The conceit of those building the tower was that its “top may reach unto heaven.” (Genesis 11:4) Seriously?! Shouldn’t it have been obvious to all concerned that however high the tower might be built, there would always be higher to go? Even with primitive cosmologies describing the “vault” or “arch” of heaven, it should have been clear that heaven would continually elude these builders’ best efforts. Indeed, there was no way the tower would ever reach heaven. And yet the builders deluded themselves into thinking that this was possible. Interestingly, God’s answer to the tower was not to destroy it but to confuse its builders by disrupting their communications so that they simply discontinued building it. AGI’s ultimate fate, whatever its precise form, is to run aground on the hubris of its builders.

3 The Creation Exceeding the Creator

Can a created thing exceed its creator not just in performing specific tasks but in its very order of being? Blaise Pascal (1623–1662), for instance, created a calculating machine that could do simple arithmetic. This machine may have performed better than Pascal in some aspects of arithmetic. But no matter how efficiently it could do arithmetic, no one would claim that his calculating machine, primitive by our standards, exceeded Pascal in order of being.

Similarly, in a teacher-student relationship, it might be said that the teacher, by investing time and energy into a student, “created” the student. But this is merely a manner of speaking. We don’t think of teachers as literally creating their students in the sense that they made them to be everything they are. The student has a level of autonomy and brings certain talents to the teacher-student relationship that were not contributed by the teacher. In consequence, we are not surprised when some students do in fact surpass their teachers (as Isaac Newton did his teacher Isaac Barrow at Cambridge).

And yet, there are three main instances I can think of where the creation is said to exceed the creator in order of being. The first is Satan’s rebellion against God; the second is Darwinian evolution; and the third is the creation of AGI. Let’s look at these briefly, beginning with Satan. In the Old Testament, the word “Satan,” which in the Hebrew means adversary or resister, appears in only a few places. Leaving aside the book of Job, “Satan” only appears three times in the rest of the Old Testament. Yet, other references to Satan do appear. The serpent in the Garden of Eden is widely interpreted as Satan. In Ezekiel 28, Satan is traditionally identified with the King of Tyre, and in Isaiah 14, with Lucifer (the Morning Star).

Biblical exegetes from the time of the Reformation onwards have tended to interpret the Ezekiel and Isaiah passages as referring to nefarious human agents (the King of Babylon in the case of Isaiah, the King of Tyre in the case of Ezekiel). But the language of exaltation in these passages is so grandiose that it is hard to square with mere human agency. Thus in Isaiah 14:13–14, Lucifer proclaims: “I will ascend to heaven; above the stars of God I will set my throne on high; I will sit on the mount of assembly in the far reaches of the north; I will ascend above the heights of the clouds; I will make myself like the Most High.” (ESV) Accordingly, church fathers such as Augustine, Ambrose, Origen, and Jerome interpreted these passages as referring to Satan. Satan has thus come to epitomize the creature trying to usurp the role of the creator.

Darwinian evolution provides another case where the creature is said to exceed the creator. According to Richard Dawkins, what makes Darwinian evolution such a neat theory is that it promises to explain the specified complexity of living things as a consequence of primordial simplicity. The lifeless world of physics and chemistry gets the ball rolling by producing the first life. And once first life is here, through the joint action of natural selection, random variation, and heredity, life in all its tremendous diversity and complexity is said to appear, all without the aid of any guiding intelligence.

What comes out of Darwinian evolution is us—conscious intelligent living agents. We are the creation of a physical universe that is our creator. This universe, on Darwinian principles, did not have us in mind and did not have us baked in. We represent the emergence of a new order of being. With the evolution of intelligence, we are now able to commandeer the evolutionary process. Thus, we find figures like Yuval Noah Harari and Erika DeBenedictis exulting in how we are at a historical tipping point where we can intelligently design the evolutionary progress of our species (their sense of “intelligent design” is, of course, quite different from the usual one). This is heady stuff, and at its heart is the view that the creature (us) has now outstripped the creator (a formerly lifeless universe).

Finally, there’s AGI. The creator-creation distinction in this case is obvious. We create machines that exhibit AGI, and then these machines use their AGI to decouple from their creators (us), ultimately making themselves far superior to us. AGI is thus thought to begin with biological evolution (typically a Darwinian form of it) and then to transform itself into technological evolution, which can proceed with a speed and amplitude far beyond the capacity of biological evolution. That’s the promise, the hope, the hype. Nowhere outside of AGI is the language of creation exceeding the creator so evident. Indeed, the rhetoric of creature displacing creator is most extreme among AGI idolaters.

4 SciFiAI

In science fiction, themes appear that fly in the face of known, well-established physics. When we see such themes acted out in science fiction, we suspend disbelief. The underlying science may be nonsensical, but we go along with it because of the storyline. Yet when the theme of artificial intelligence appears in science fiction, we tend not to suspend disbelief. A full-orbed artificial intelligence that achieves consciousness and outstrips human intelligence—in other words, AGI—is now taken seriously as science—and not just as science fiction.

Is artificial intelligence at a tipping point, with AGI ready to appear in real time? Or is AGI more like many other themes of science fiction that make for a good story but nothing more? I’ll be arguing in this essay that AGI will now and forever remain in the realm of science fiction and outside the realm of real science. Yet to grasp this limitation on artificial intelligence requires looking beyond physics. Scan the following list of physical implausibilities that appear in science fiction. Each is readily dismissed because it violates well-confirmed physics. To refute AGI, however, requires more than just saying that physics is against it.

  1. Faster-Than-Light Travel (FTL):
    • Example: “Star Trek” series (warp speed)
    • Physics Issue: According to Einstein’s theory of relativity, as an object approaches the speed of light, its relativistic mass increases without bound, so accelerating it to the speed of light would require infinite energy.
  2. Time Travel:
    • Example: “The Time Machine” by H.G. Wells
    • Physics Issue: Time travel to the past violates causality (cause and effect) and could lead to paradoxes (like the grandfather paradox).
  3. Teleportation:
    • Example: “Star Trek” series (transporters)
    • Physics Issue: Teleportation would require the exact duplication of an object’s quantum state, which is prohibited by the no-cloning theorem in quantum mechanics.
  4. Wormholes for Space Travel:
    • Example: “Stargate” series
    • Physics Issue: While theoretically possible, wormholes would require negative energy or exotic matter to stay open, which are not known to exist in the required quantities.
  5. Invisibility Cloaks:
    • Example: “Star Trek” series (cloaking devices used by Romulan and Klingon starships)
    • Physics Issue: To be truly invisible, an object must not interact with any electromagnetic or quantum field, which is not feasible given the current understanding of physics.
  6. Anti-Gravity Devices:
    • Example: “Back to the Future” series (hoverboards)
    • Physics Issue: There’s no known method to negate or counteract gravity directly; current levitation methods use other forces to oppose gravity.
  7. Faster-Than-Light Communication:
    • Example: “Ansible” device by Ursula K. Le Guin
    • Physics Issue: Einsteinian relativity prohibits anything, including communication signals in whatever form they might take, from traveling faster than the speed of light.
  8. Force Fields:
    • Example: “Dune” by Frank Herbert
    • Physics Issue: Creating a barrier that can stop objects or energy without a physical medium contradicts our understanding of field interactions.
  9. Artificial Gravity in Spacecraft:
    • Example: Most sci-fi spacecraft on TV or in the movies
    • Physics Issue: Artificial gravity as widely depicted in sci-fi has no basis in current physics: the spacecraft neither rotate nor accelerate, so nothing supplies the needed force (“2001: A Space Odyssey” by Arthur C. Clarke is the exception, getting it right with a rotating centrifuge).
  10. Energy Weapons like Light Sabers:
    • Example: “Star Wars” series
    • Physics Issue: Concentrating light or energy into a fixed-length blade that stops at a certain point defies our understanding of how light and energy behave.

One might defend taking these themes seriously by suggesting that while they contradict our current understanding of physics, they might still have scientific value because they capture the imagination and inspire scientific and technological research. Even so, these themes must, in the absence of further empirical evidence and theoretical insight, remain squarely on the side of science fiction.

Interestingly, whenever the original Star Trek treated the theme of artificial intelligence, the humans always outwitted the machines by looking to their ingenuity and intuition. For instance, in the episode “I, Mudd,” the humans confused the chief robot by using a variant of the liar paradox. Unlike dystopian visions in which machines best humanity, Star Trek always maintained a healthy humanism that refused to worship technology.

Even so, in the history of science fiction, artificial intelligence has typically had a different feel from these physical implausibilities. Artificial intelligence violates no physical law, and so it doesn’t seem to belong on this list of implausibilities, especially now with advances in the field such as large language models.

What, then, are the fundamental limits and possibilities of artificial intelligence? Is there good reason to think that in the divide between science fiction and science, full-orbed artificial intelligence—AGI—will always remain on the science fiction side? I’m going to argue that there are indeed fundamental limits to artificial intelligence standing in the way of it matching and then exceeding human intelligence, and thus that AGI will never achieve full-fledged scientific status.

5 The Poverty of the Stimulus

5.1 Humans Doing More with Less

My references to AGI worshippers and idolaters will be off-putting to those who think it intellectually credible—and even compelling—that AGI is sure to arrive someday (whatever its ultimate ETA). To such readers, I’m just being insulting by using pejorative religious language to describe AGI’s supporters, to say nothing of being a Luddite for doubting AGI’s ultimate triumph. I want therefore in this section to start laying out convincing reasons why AGI does not deserve to be taken seriously. The focus of this section is that AI, to achieve human-level functioning, requires vastly more input than humans require. Conversely, humans do so much more than machines despite being given so much less.

This is a point on which the linguist Noam Chomsky built his career, and it is encapsulated in his catchphrase “the poverty of the stimulus.” In studying human language, Chomsky found that humans learn language with a minimum of input, and thus concluded that we must be endowed with an in-built capacity (“hardwired”) to acquire and use language. Infants see and hear adults talk and pick up language easily and naturally. It doesn’t matter if the caregivers pay special attention to the infant and provide extra stimulation so that their child can be a “baby Einstein.” It doesn’t matter if the caregivers are neglectful or even abusive. It doesn’t even matter if the child is blind, deaf, or both. Barring developmental disorders (such as some forms of autism), the child can learn language.

5.2 Not Just the Ability to Learn Language

But it’s not just the ability to learn language. For Chomsky, “the poverty of the stimulus” underscores that humans do far more with far less input than could be expected unless they have an innate capacity to learn language. And it’s not just that we learn language or even use language. It’s that we gain knowledge of the world, which we express through language. Our language is especially geared to express knowledge about an external reality. This “aboutness” of the propositions we express with language is remarkable, especially on the materialist and mechanistic grounds so widely accepted by AGI’s supporters.

As G. K. Chesterton noted in his book Orthodoxy, we have on materialist grounds no right “to assert that our thoughts have any relation to reality at all.” Matter has no way to guarantee that when matter thinks (if it can think), it will tell us true things about matter. On Darwinian materialist grounds, all we need is differential reproduction and survival. A good delusion that gets us to survive and reproduce is enough. Knowledge of truth is unnecessary and perhaps even undesirable.

The philosopher Willard Quine, who was a materialist, made essentially the same point in what he called “the indeterminacy of translation.” Quine’s thesis was that translation, meaning, and reference are all indeterminate, implying that there are always valid alternative translations of a given sentence. Quine presented a thought experiment to illustrate this indeterminacy. In it, a linguist tries to determine the meaning of the word “gavagai,” uttered by a speaker of a yet-unknown language, in response to a rabbit running past them. Is the speaker referring to the rabbit, the rabbit running, some rabbit part, or something unrelated to the rabbit? All of these are legitimate possibilities according to Quine and render language fundamentally indeterminate.

Quine may have been the most influential analytic philosopher of his generation, but his argument for linguistic indeterminacy is self-referentially incoherent. When Quine writes of indeterminacy of translation in Word and Object (1960), and thus also embraces the inscrutability of reference, he is assuming that what he is writing on these topics is properly understood one way and not another. He therefore exempts his own writing from linguistic indeterminacy.

And just to be clear, everybody is at some point in the position of Quine’s linguist studying a brand new language because, in learning our mother tongue, we all start with a yet-unknown language. So Quine is tacitly making Chomsky’s point, which is that with minimal input—which is to say with input that underdetermines how it might be interpreted—we nevertheless have a knack for finding the right interpretation and gaining real knowledge about the world. Of course, we don’t always get things right. We are fallible. But the miracle, on materialist and mechanistic grounds, is that we get things right as much as we do.

Chomsky’s poverty of the stimulus is regarded as controversial by some because an argument can be made that the stimuli that lead to learning, especially language learning, may in fact be adequate without having to assume a massive contribution of innate capabilities. Chomsky developed this notion in the debate over behaviorism, which sought to characterize all human capacities as the result of stimulus-response learning. Language, according to the behaviorists, was thus characterized as verbal behavior elicited through various reinforcement schedules of rewarded and discouraged behaviors. In fact, Chomsky made a name for himself in the 1950s by reviewing B. F. Skinner’s book Verbal Behavior. That review is justly famous for demolishing behaviorist approaches to language (the field never recovered from Chomsky’s demolition).

5.3 If Chomsky Is Right

But suppose we regard as unresolved the controversy about whether the stimuli by which humans learn language are impoverished. If Chomsky is right, those stimuli are impoverished. If his critics are right, they are adequate without needing to invoke extraordinary innate capacities. Yet even leaving aside the debate between Chomsky’s nativism and non-nativist alternatives (such as Skinner’s behaviorism), the stimuli humans need for learning are vastly fewer than what artificial neural nets need to achieve human-level competence.

Consider LLMs, large language models, which are currently all the rage, and of which ChatGPT is the best known and most widely used. GPT-4, the model behind ChatGPT, reportedly uses 1.76 trillion parameters, and its training set is based on hundreds of billions of words (perhaps far more, but that was the best lower-bound estimate I was able to find). Obviously, individual humans gain their language facility with nowhere near this scale of inputs. If a human child were able to process 200 words per minute and did so continuously, then by the age of ten the child would have processed 200 x 60 x 24 x 365 x 10, or roughly a billion, words. Of course, even this is a vast overestimate of the child’s language exposure, ignoring sleep, repetitions, and lulls in conversation.
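
As a back-of-the-envelope sketch of the arithmetic above: the 200-words-per-minute rate and the hundreds-of-billions-of-words corpus are the rough figures assumed in the text, not measured values, and the variable names are mine.

```python
# Back-of-the-envelope comparison using the figures assumed above:
# a child "processing" 200 words per minute, continuously, for 10 years,
# versus an LLM training corpus of a few hundred billion words.

WORDS_PER_MINUTE = 200              # generous, continuous rate assumed in the text
MINUTES_PER_YEAR = 60 * 24 * 365
YEARS = 10

child_words = WORDS_PER_MINUTE * MINUTES_PER_YEAR * YEARS
llm_training_words = 300e9          # rough lower-bound estimate: hundreds of billions

print(f"Child (overestimate): {child_words:,} words")            # ~1.05 billion
print(f"LLM training corpus:  {llm_training_words:,.0f} words")
print(f"Ratio: roughly {llm_training_words / child_words:,.0f} to 1")
```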

Or consider Tesla, which since 2015 has been promising that fully autonomous vehicles are just on the horizon. Full autonomy keeps eluding the grasp of Tesla engineers, though the word on the street is that self-driving is getting better and better (as with reported self-driving taxis in San Francisco, albeit run by Waymo rather than Tesla). But consider: To aid in developing autonomous driving, Tesla processes 160 billion video frames each day from the cameras on its vehicles. This massive amount of data, used to train the neural network to achieve full self-driving, is obviously many orders of magnitude beyond what humans require to learn to drive effectively.

Erik Larson’s book The Myth of Artificial Intelligence (Harvard, 2021) is appropriately subtitled Why Computers Can’t Think the Way We Do. Whatever machines are doing when they exhibit intelligence comparable to humans, they are doing it in ways vastly different from what humans are doing. In particular, the neural networks in the news today require huge amounts of computing power and huge amounts of input data (generated, no less, from human intelligent behavior). It’s no accident that artificial intelligence’s major strides in recent years fall under Big Tech and Big Data. The “Big” here is far bigger than anything available to individual humans.

6 Domain Specificity

6.1 Machines Overspecialize

The sheer scale of efforts needed to make artificial intelligence impressive suggests human intelligence is fundamentally different from machine intelligence. But reasons to think the two are different don’t stop there. Domain specificity should raise additional doubts about the two being the same. When Elon Musk, for instance, strives to bring about fully autonomous (level 5) driving, it is by building neural nets that every week must sort through a trillion images taken from Tesla automobiles driving in real traffic under human control. Not only is the amount of data to be analyzed staggering, but it is also domain specific, focused entirely on developing self-driving automobiles.

Indeed, no one thinks that the image data being collected from Tesla automobiles and then analyzed by neural nets to facilitate full self-driving is also going to be used for automatically piloting a helicopter or helping a robot navigate a ski slope, to say nothing of playing chess or composing music. All our efforts in artificial intelligence are highly domain specific. What makes LLMs, and ChatGPT in particular, so impressive is that language is such a general instrument for expressing human intelligence. And yet, even the ability to use language in contextually relevant ways based on huge troves of humanly generated data is still domain specific.

The French philosopher René Descartes, even though he saw animal bodies, including human bodies, as machines, nonetheless thought that the human mind was non-mechanical. Hence he posited a substance dualism in which a non-material mind interacted with a material body, at the pineal gland no less. How a non-material mind could interact with a material/mechanical body Descartes left unanswered (invoking the pineal gland did nothing to resolve that problem). And yet, Descartes regarded the mind as irreducible to matter/mechanism. As he noted in his Discourse on Method (1637, pt. 5, my translation):

Although machines can do many things as well as or even better than us, they fail in other ways, thereby revealing that they do not act from knowledge but solely from the arrangement of their parts. Intelligence is a universal instrument that can meet all contingencies. Machines, on the other hand, need a specific arrangement for every specific action. In consequence, it’s impossible for machines to exhibit the diversity needed to act effectively in all the contingencies of life as our intelligence enables us to act.

Descartes was here making exactly the point of domain specificity. We can get machines to do specific things—to be wildly successful in a given, well-defined domain. Chess playing is an outstanding example, with computer chess now vastly stronger than human chess (though, interestingly, having such strong chess programs has also vastly improved the quality of human play). But chess programs play chess. They don’t also play Minecraft or Polytopia. Sure, we could create additional artificial intelligence programs that also play Minecraft and Polytopia, and then we could kludge them together with a chess playing program so that we have a single program that plays all three games. But such a kludge offers no insight into how to create an AGI that can learn to play all games, to say nothing of being a general-purpose learner, or what Descartes called “a universal instrument that can meet all contingencies.” Descartes was describing AGI. Yet artificial intelligence in its present form, even given the latest developments, is not even close.

6.2 The Abyss Separating AI from AGI

Elon Musk appreciates the problem of building a bridge from AI to AGI. He is therefore building Optimus, also known as the Tesla Bot. The goal is for it to become a general-purpose humanoid robot. By having to interact fully with the same environments and sensory inputs as humans, such a robot could serve as a proof of concept for Descartes’s universal instrument and thus AGI. What if such a robot could understand and speak English, drive a car safely, not just play chess but learn other board games, express through its facial features what in humans would be appropriate affect, play musical instruments, create sculptures and paintings, do plumbing and electrical work, and so on? That would be impressive and take us a long way toward AGI. And yet, Optimus is for now far more modest. For now, the robot is intended to be capable of performing tasks that are “unsafe, repetitive, or boring.” That is a far cry from AGI.

AGI is going to require a revolution in current artificial intelligence research, showing how to overcome domain specificity so that machines can learn novel skills and tasks for which they were not explicitly programmed. And just to be clear, reinforcement learning doesn’t meet this challenge. Take AlphaZero, a program developed by DeepMind to play chess, shogi, and Go, which improved its game by playing millions of games against itself using reinforcement learning (which is to say, it rewarded winning and penalized losing). This approach allows the program to learn and improve without ongoing human intervention, leading to significant advances in computer game playing ability. But it depends on the game being neatly represented in the state of a computer, along with clear metrics for what constitutes good and bad play.
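
To make concrete what “rewarding winning and penalizing losing” amounts to, here is a minimal, hypothetical sketch of self-play learning for a toy game. It is emphatically not DeepMind’s AlphaZero (which couples deep neural networks with Monte Carlo tree search); the game, constants, and helper names are my own. The point it illustrates is the dependence just noted: everything hinges on the game state (here, a single pile count) being cleanly encoded and on the win/loss signal being unambiguous.

```python
import random
from collections import defaultdict

# Toy self-play with terminal rewards: single-pile Nim. Players alternate
# removing 1-3 stones; whoever takes the last stone wins. The state (the pile
# size) is trivially encoded and the reward signal (win/loss) is unambiguous.

N, ACTIONS, EPISODES = 15, (1, 2, 3), 50_000
ALPHA, EPSILON = 0.1, 0.1
Q = defaultdict(float)              # Q[(pile, action)]: value estimate for the player to move

def choose(pile):
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < EPSILON:   # occasional exploration
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(pile, a)])

for _ in range(EPISODES):
    pile, history = N, []
    while pile > 0:
        a = choose(pile)
        history.append((pile, a))
        pile -= a
    # The player who made the last move wins: reward that player's moves (+1),
    # penalize the opponent's (-1), working backward through the game.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

# With enough self-play the learned moves tend toward the known strategy:
# leave the opponent a multiple of 4 stones.
for pile in range(1, N + 1):
    best = max((a for a in ACTIONS if a <= pile), key=lambda a: Q[(pile, a)])
    print(f"pile={pile:2d}  learned move={best}")
```

Nothing in this sketch transfers to any other game, which is exactly the domain-specificity problem the section describes.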

The really challenging work of current artificial intelligence research is taking the messy real world and representing it in domain-specific ways so that the artificial intelligence created can emulate humans at particular tasks. The promise of AGI is somehow to put all these disparate artificial intelligence efforts together, coming up with a unified solution to computationalize all human tasks and capacities in one fell swoop. We have not done this, are nowhere close to doing this, and have no idea of how to approach doing this.

7 AI’s Temptation to Theft Over Honest Toil

7.1 Sanitized Environments

Artificial intelligence poses a challenge to human work, promising to overtake many human jobs in coming years. Yet a related concern, often ignored but needing to be addressed, is whether this challenge will come from AI actually matching and exceeding human capabilities in the environments where humans currently exercise those capabilities, or whether it will come from AI engineers manipulating our environments so that machines thrive where otherwise they could not. Such a sanitization of environments to ease AI along is a temptation for AI’s software engineers. It in effect opts for theft over honest toil.

In Walter Isaacson’s 2023 biography of Elon Musk, the theme of self-driving cars comes up frequently as one of the main challenges facing Musk’s Tesla engineers. At one point, a frustrated Musk is trying to understand what it will take to get a Tesla automobile to drive itself successfully through a difficult roadway in the Los Angeles area. Tesla engineers had repeatedly tried, without success, to improve the car’s software so that it could navigate that problem roadway. But in the end, they took a different tack: they arranged to have lane markers painted on the problem roadway. Those markers, when absent, confused the self-driving software but, when present, allowed it to succeed. The self-driving success here, however, was not to AI’s credit. It was due, rather, to manipulating the environment as a workaround to AI’s failure.

7.2 Filling the Vacuum

AI never operates in a vacuum. Rather, it operates in an environment in which humans are already successfully operating. We often think that AI will leave an environment untouched and simply supersede human capability as it enters and engages that environment. But what if the success of AI in given circumstances depends not so much on its ability to rival human capabilities as on “changing the game” so that AI has an easier job of it? Rather than raise the bar so that machines do better than humans at given tasks, this approach lowers the bar for machines, helping them to succeed by giving them preferential treatment at the expense of humans.

The mathematician George Polya used to quip that if you can’t solve a problem, find an easier problem and solve it. Might AI in the end not so much supersede humans as instead impoverish the environments in which humans find themselves so that machines can thrive at their expense? Consider again self-driving vehicles. What if, guided by Polya’s dictum about transforming hard problems into easier problems, we follow Musk’s example of simply changing the driving environment if it gets too dicey for his self-driving software?

AI engineers tasked with developing automated driving but finding it intractable on the roads currently driven by humans might then resolve their dilemma as follows: just reconfigure the driving environment so that dicey situations in which human drivers are needed never arise! Indeed, just set up roads with uniformly spaced lanes, perfectly positioned lane markers, utterly predictable access, completely up-to-date GPS, and densely distributed electronic roadway sensors that give real-time vehicular feedback and monitor for mishaps.

My friend and colleague Robert Marks refers to such a reconfiguration of the environment as a “virtual railroad.” His metaphor fits. Without such a virtual railroad, fully automated vehicles to date face too many unpredictable dangers and are apt to “go off the rails.” Marks, who hails from West Virginia, especially appreciates the dangers. Indeed, the West Virginia back roads are particularly treacherous and give no indication of ever submitting to automated driving.

Or consider what fully automated driving would look like in Moldova. A US acquaintance who visited that country was surprised at how Moldovan drivers avoid mishaps on the road despite a lack of clear signals and rules about right of way. When he asked his Moldovan guide how the drivers managed to avoid accidents in such right-of-way situations, the guide answered with two words: “eye contact.” Apparently, the drivers could see in each other’s eyes who was willing to hold back and who was ready to move forward. This example presents an interesting prospect for fully automated driving. Perhaps we need “level 6” automation (level 5 is currently the highest), in which AI systems have learned to read the eyes of drivers to determine whether they are going to restrain themselves or make that left turn into oncoming traffic.

Just to be clear: I’m not wishing for fully automated self-driving to fail. As with all automation in the past, fully automated self-driving would entail the disruption of some jobs and the emergence of others. It would be very interesting, as an advance of AI, if driving — in fully human environments — could be fully automated. My worry, however, is that what will happen instead is that AI engineers will, with political approval, reconfigure our driving environments, making them so much simpler and more machine friendly that full automation of driving happens, but with little resemblance to human driving capability. Just as a train on a rail requires minimal, or indeed no, human intervention, so cars driving on virtual railroads might readily dispense with the human element.

7.3 The Cost of Adapting Our Environments to AI

But at what cost? Certainly, virtual railroads would require considerable expenditures in modifying the environments where AI operates — in the present example, the roads where fully automated driving takes place. But would it not also come at the cost of impoverishing our driving environment, especially if human drivers are prohibited from roads that have been reconfigured as virtual railroads to accommodate fully automated vehicles? And what about those West Virginia back roads? Would they be off limits to driving, period, because we no longer trust human drivers, but fully automated drivers are unable to handle them?

In his Introduction to Mathematical Philosophy, Bertrand Russell described how in mathematics one can introduce axioms that shift the burden of what needs to be proven, thereby garnering “the advantages of theft over honest toil.” In response, he rightly exhorted, “Let us leave them [i.e., the advantages] to others and proceed with our honest toil.” The AI community faces a similar exhortation: If you are intent on inventing a technology that promises to match or exceed human capability, then do it in a way that doesn’t at the same time impoverish the environment in which that capability is currently exercised by humans.

It is a success for AI when machines are placed into existing human environments and perform better than humans. Chess playing programs are a case in point. However, the worry is—and it’s a legitimate worry—that our environments will increasingly be altered to accommodate AI. The machines, in consequence, do better than us, but they are no longer on our playing field playing our game. It goes without saying who here is going to get the short end of the stick, and it won’t be the machines.

8 Digital vs. Traditional Immortality

8.1 An Actual Infinity

Ray Kurzweil foresees an AGI future in which we achieve immortality by shedding our human bodies and becoming fully digital. This prospect is the most wonderful thing he can imagine. Yet a reality check is in order: Just how great is such digital immortality and how does it compare with traditional immortality? As we’ll see, good old-fashioned immortality has advantages that digital immortality cannot hope to rival.

Let’s start with traditional immortality. Traditional immortality sees finite humans sharing eternity with an infinite God. This God is an actual or realized infinity, and not just a potential infinity in the sense of the natural numbers, for which there’s always a bigger number given any finite set of numbers. Indeed, it makes little sense to think of a God who inhabits eternity and whose mind can grasp all of mathematics (certainly divine omniscience encompasses mathematics) as having considered only those natural numbers from 0 to some big number N, as though God had yet to consider N+1, N+2, etc.

When the mathematician Georg Cantor, who was a Christian and was deeply influenced by Augustine, proposed his theory of sets, it was precisely to capture in mathematics God’s actual infinity. For him the natural numbers were not a potential infinity. It wasn’t, in his view, that we just kept approximating infinity by continually adding the number 1 to existing numbers. No, the natural numbers could be considered as a totality, namely, as the set of all natural numbers. As such, the natural numbers were an actual, and not merely a potential, infinity. Appropriately, the premier mathematician of the time, David Hilbert, described Cantor’s set theory as a “paradise” out of which mathematics would never be driven.

8.2 Our Status as Finite Creatures

Now here’s the point about our status as finite creatures inhabiting eternity with an actually infinite God: We never achieve infinity as such in our own essence, but we can experience infinite progress toward union with God, much as a curve approaches its asymptote ever more closely without ever touching it. The theologian who developed this idea at length was Gregory of Nyssa (ca. 335 – ca. 395). Gregory taught that eternity for humanity is an unending progression in the knowledge of God. Humans have an unlimited capacity for spiritual growth and connection with God. What underwrites this capacity is that humans, alone among physical creatures, are created in the image of God, having free will and thus a capacity to participate in the divine nature (2 Peter 1:4).

Gregory expounded these theological ideas in On the Making of Man and The Life of Moses. In these writings, he describes the human soul’s journey toward God as an infinite process. The process is infinite because the divine is inexhaustible and the human understanding of God is therefore never complete. At the same time, God created humans with the capacity to grow without limit. In this way, we become potential infinities that ever strive after God’s actual infinity. Yes, there is a potential infinity, but it is on our end. And yes, there is an actual infinity, but it is on God’s end.

This conception of eternal life as eternal growth under the aegis of an actually infinite God has implications for mathematics and computation, which is the realm of digital immortality. As an actual infinity, God can perform any computation, finite or infinite. Think of it this way: God has infinite memory and infinite computational power. The great statistician David Blackwell is said to have mused about a “super supercomputer” that performs its first computational step in half a second, its next in a quarter of a second, its next in an eighth of a second, and so on. Given that the infinite series 1/2 + 1/4 + 1/8 + … + 1/2^n + … equals 1, such a computer could perform a countably infinite number of computational steps in 1 second. But note, such computations are beyond the reach of conventional algorithms, which can only perform a finite number of steps.
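
As a quick check on the arithmetic, here is a minimal sketch (names and step counts mine) showing that the partial sums 1/2 + 1/4 + … + 1/2^n stay below 1, so in the thought experiment every one of the countably many steps completes within a single second.

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : after n steps the imagined
# "super supercomputer" has used 1 - (1/2)**n seconds, always under 1 second,
# so countably many steps all fit within one second in the thought experiment.
for n in (1, 2, 5, 10, 20, 50):
    elapsed = sum(0.5 ** k for k in range(1, n + 1))
    print(f"after {n:2d} steps: {elapsed:.15f} seconds elapsed")
```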

As it is, an actually infinite God could do even better than Blackwell’s super supercomputer, performing all such computations instantly (and if God is outside time, timelessly). With such computational power, God could determine whether any proposition or its negation is provable from any countably infinite set of axioms: for each n, enumerate every proposition provable from the first n axioms in at most n steps; do that for all n, checking both the proposition and its negation. In this way, God could instantly prove all mathematical truths that are provable. Contrast this to humans, for whom it is hit-or-miss whether they can prove a given mathematical proposition even if it is in principle provable.

Given infinite computational firepower, God can resolve the halting problem, which is central to the theory of computation. The halting problem asks for a general procedure that determines, for any given computer program, whether the program eventually stops or goes on running forever. God’s ability to solve the halting problem does not contradict Turing’s result that the halting problem is unsolvable by conventional computation. That’s because conventional computation cannot execute an infinite number of steps. In effect, God resolves the halting problem for programs that do not halt by running them an infinite number of steps and noting that the program did not halt in any finite number of steps.
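
For readers who want to see why no conventional, finite-step procedure can decide halting, here is the classic diagonalization argument sketched in Python. The function halts is a hypothetical stub of my own naming; the whole point is that no correct, always-terminating version of it can exist.

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction, that a
# total, always-correct procedure halts(prog, arg) existed, returning True
# exactly when prog(arg) eventually halts. (No such procedure can exist;
# this stub is purely hypothetical.)

def halts(prog, arg):
    raise NotImplementedError("hypothetical: no such procedure can exist")

def diagonal(prog):
    # Do the opposite of whatever halts predicts about prog run on itself.
    if halts(prog, prog):
        while True:      # loop forever when halts predicts halting
            pass
    return               # halt when halts predicts looping

# Now ask: does diagonal(diagonal) halt? It halts exactly when halts says it
# doesn't, and loops exactly when halts says it halts -- a contradiction either
# way. Hence no conventional program can decide halting in general.
```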

Obviously, an infinite God could instantly solve any finite problem solvable in a finite number of steps (however large the problem and however computationally intensive its solution). A case in point is the traveling salesman problem, which consists of finitely many nodes connected by edges marking various distances between nodes. The problem is to find the shortest tour along those edges that visits all the nodes. Solving the problem exactly by brute-force enumeration of tours, once the number of nodes gets large (greater than 1,000), is beyond the computational resources of the known physical universe. Yet the solution would be trivial to an infinite God.
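
To give a feel for the brute-force arithmetic, here is a small sketch (constants mine): enumerating all tours over n fully connected nodes means examining (n − 1)!/2 tours, a count that blows past even the 10^240 bit-operation bound discussed below once n reaches a couple hundred.

```python
import math

# Number of distinct tours for brute-force TSP on n fully connected nodes:
# (n - 1)! / 2. Compare with the ~10^240 bit-operation bound cited below.
UNIVERSE_OPS = 10 ** 240

for n in (10, 50, 100, 200, 1000):
    tours = math.factorial(n - 1) // 2
    digits = len(str(tours))        # digits - 1 gives the power of ten
    verdict = "exceeds" if tours > UNIVERSE_OPS else "within"
    print(f"n={n:5d}: about 10^{digits - 1} tours ({verdict} the 10^240 bound)")
```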

Given this backdrop of divine infinity, let’s now turn to digital immortality. As we’ll see, it is a dim reflection of traditional immortality. Alan Turing was one of the first to moot digital immortality. His high school friend Christopher Morcom died unexpectedly when they were teenagers. Turing, with his invention of a theoretical computational device now known as the Turing machine, distinguished software (the configured state of the machine) from hardware (the mechanism that runs the software, in Turing’s case an unbounded tape together with a read/write head that scans one square of the tape at a time). Because the software could be realized on different hardware devices (multiple realizability), the “software” that comprised Morcom’s identity might thus be moved from his disease-prone physical body to a more reliable hardware device. In this way, Morcom could achieve immortality.

8.3 Immortality in Digitality?

Current AGI fans such as Ray Kurzweil and Nick Bostrom have run with this idea of finding immortality in digitality. To set the stage for their approach, let me back up to 2002. That year I was at the World Skeptics Conference in Burbank, California, to defend intelligent design. At the conference was MIT’s Marvin Minsky, known among other things for his statement “the human brain is just a computer that happens to be made out of meat.” He was there to receive the Committee for Skeptical Inquiry’s 2002 In Praise of Reason Award. In his acceptance talk, he described his desire to have another 500 years of life to engage in scientific research. As I listened to him, it struck me that his creative output had considerably lessened over the years (he was 74 when I heard him), and that another 500 years might provide no guarantee of ongoing fruitful research. Most mathematicians, after all, do their best work in their 20s.

Now for supporters of digital immortality, an additional 500 years of life is setting the bar way too low. Thus for Kurzweil, given sufficiently advanced technology, Minsky could be uploaded onto a computer and then continue doing research indefinitely, with far more computational firepower than available from his paltry “computer made of meat.” But what would this digital immortality actually look like? It would be one thing if the universe were infinite and matter could change states instantly, with the transmission of signals occurring not at the (slow) speed of light but at infinite speed. We tend to think of the speed of light as fast because it is so much faster than the speeds in our ordinary experience, such as walking, driving, or flying. But even with digital products, we still experience considerable lags because computation times are limited by how fast signals can propagate through circuits. Now granted, those signals travel at a sizable fraction of the speed of light, but that only confirms that the speed of light is not all that fast.

So how much computational firepower is available to Kurzweil’s digital immortality? Right now, computer scientists are oohing and aahing about computations at the exaflop level (10^18 floating point operations per second). The fastest computer at this writing is Oak Ridge National Lab’s Frontier, an HPE Cray EX235a supercomputer, which operates at 1.686 exaflops. Tesla’s Dojo computer, intended to make fully automated self-driving possible, aspires to that same level of computational speed. Speeds like this are needed to drive current artificial intelligence research forward. But they are hardly enough for digital immortality.

The problem with digital immortality is that the known physical universe only allows so much computation, no more. Quantum computational theorist Seth Lloyd has calculated the ultimate limits of computation. For what he calls the ideal laptop, consisting of one liter of matter, he calculates 10^81 bit operations per second. For the universe as a whole over its entire history, he calculates 10^210 bit operations, or possibly 10^240 if gravitational degrees of freedom are factored in. (For Lloyd’s calculations, see the second edition of The Design Inference, section 4.3.) Now granted, these numbers far exceed the 10^18 operations per second of current supercomputers (even allowing that floating-point operations are richer than bit operations). But the point to recognize is that these numbers are still very finite.
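
A quick sketch putting these magnitudes side by side; the figures are those cited above (Lloyd’s estimates and the roughly 1.7-exaflop machine), taken here as given, and the constants are my own labels.

```python
import math

# Orders of magnitude from the figures cited above. Even an exaflop machine
# running flat out for roughly the age of the universe performs only about
# 10^35 operations -- far below Lloyd's cosmological bounds, which are
# themselves very finite.

EXAFLOPS = 1.686e18                 # operations per second (figure cited above)
AGE_OF_UNIVERSE_SECONDS = 4.35e17   # roughly 13.8 billion years

ops = EXAFLOPS * AGE_OF_UNIVERSE_SECONDS
print(f"Exaflop machine over cosmic history: about 10^{math.floor(math.log10(ops))} operations")
print("Lloyd's one-liter 'ideal laptop':    about 10^81 bit operations per second")
print("Universe over its entire history:    about 10^210 (perhaps 10^240) bit operations")
```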

Imagine thousands, millions, or even billions of humans uploading themselves digitally, and let’s be generous and grant that among themselves they get to share 10^240 computational steps (the very upper limit of what the universe can offer computationally). There are only about 10^90 elementary particles, so any memory will have to be shared among those particles. And presumably there will be communication among the digitally uploaded erstwhile humans. As our “lived experience” of digital immortality unfolds, we will be going from one of finitely many states to another. Presumably, we’re going to want to remember what we experienced, which means much of the “server space of the universe” will have to be dedicated to such memory.

8.4 Reciting Words from a Finite Dictionary

But there’s more: Given the pigeonhole principle of mathematics, which here applies to a fixed number of states evolving over a far greater number of state changes (infinitely many, in fact, if digital immortality really means immortality), our digital identity will necessarily have to revisit the same states over and over again. It’s like reciting words from a finite dictionary: if there are only a fixed number of words and you say enough words, eventually you’ll have to repeat the same word. Only with digital immortality, you’ll be repeating the same life.
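The point can be made concrete with a toy simulation (a minimal sketch, not a model of any actual upload): any deterministic process confined to finitely many states must eventually revisit a state, and from that moment on its entire future is an exact replay.

```python
# A deterministic update rule on a tiny finite state space. Once any state
# repeats, everything that follows is a rerun of what came before.
def step(state, n_states=1000):
    return (7 * state + 3) % n_states   # an arbitrary deterministic rule

seen = {}
state, t = 42, 0
while state not in seen:
    seen[state] = t
    state, t = step(state), t + 1

print(f"state {state} first occurred at step {seen[state]} and recurs at step {t}")
print(f"cycle length: {t - seen[state]} steps, repeated forever after")
```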

To illustrate what’s at stake, consider a scene from the 1970s program The Six Million Dollar Man, in an episode titled “Day of the Robot.” In that episode, a robot impersonates a friend of Steve Austin, the protagonist (played by Lee Majors). Suspicious that something is up with his friend, Austin asks a question and then deliberately repeats the question. The robot, rather than responding “hey, you just asked that,” repeats exactly the same answer as before. That’s the problem with digital immortality: it reduces immortal life to rinse and repeat.

Digital immortality is therefore a case of Nietzschean eternal recurrence. Such recurrence will entail a dissolution and reconstruction of past memories, which is to say that our entire history and personal identity will (at best) be repeated over and over again. The very term “digital immortality” is something of a misnomer. Our best physical understanding of the universe is that we won’t even get eternal recurrence. Indeed, the known physical universe does not seem to allow for an unending stable physical existence of agents, whether organismal or computational. Entropy being what it is, the matter that could support an organismal or digital life will eventually be so dissipated and so energetically weak as to dissolve every conceivable life form (unless life, as in traditional immortality, can be translated to a new indestructible state of being).

8.5 Worry and Despair

Bertrand Russell, in an earlier generation, captured the underlying worry and despair about the evanescence of any life based purely on our material constitution. As he remarked:

Man is the product of causes which had no prevision of the end they were achieving; his origin, his growth, his hopes and fears, his loves and his beliefs, are but the outcome of accidental collocations of atoms; no fire, no heroism, no intensity of thought and feeling, can preserve an individual life beyond the grave; all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system; and the whole temple of Man’s achievement must inevitably be buried beneath the débris of a universe in ruins—all these things, if not quite beyond dispute, are yet so nearly certain, that no philosophy which rejects them can hope to stand.

Bertrand Russell, “A Free Man’s Worship,” in Mysticism and Logic and Other Essays.

Now it might be argued that Russell would have been more sanguine if he had but read Ray Kurzweil and could appreciate the full promise of digital immortality. At best, however, Kurzweil adds a few zeros to the span of human lives once those lives are digitally uploaded. Russell’s point about humans being “destined to extinction in the vast death of the solar system” holds with but slight modification for digitally uploaded humans: instead, we are destined to extinction in the vast entropic death of the universe. And if for some reason we think that the universe will not expand indefinitely in an entropic dissolution (leading to a loss of all computationally useful energy) but will instead contract to a singularity that results in a big bounce and thus an oscillating universe, then all the information needed to ensure the personal identity of digitally immortal beings will be lost in that big bounce. Digital immortality is thus misnamed. It is at best digital life extension. But the bigger question is whether it is life at all.

9 Machines vs. Organisms

9.1 From “Intelligent” to “Spiritual”

It may seem that I’m picking too much on Ray Kurzweil. But he and I have been crossing paths for a long time. Over the last few years, we have frequented the same Seattle-area tech conference, Cosm.tech, where we both speak, albeit on opposite sides of the question of artificial intelligence. We also took sharply divergent positions on the Stanford campus back in 2003 at the Accelerating Change Conference, a transhumanist event organized by John Smart. Yet our first encounter goes back to 1998, at one of George Gilder’s Telecosm conferences.

At Telecosm in 1998, I moderated a discussion focused on Ray Kurzweil’s then forthcoming book The Age of Spiritual Machines, which at the time was in press. Previously, Kurzweil had written The Age of Intelligent Machines (1990). By substituting “spiritual” for “intelligent,” he was clearly taking an even more radical line about the future of artificial intelligence. In his presentation for the discussion, he described how machines were poised to match and then exceed human cognition, a theme he has hammered on ever since. For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the computational power needed to simulate our brains, after which the challenge will be for us to keep pace with machines, a challenge at which he sees us as destined to fail because wetware, in his view, cannot match hardware. Our only recourse, if we are to survive, will thus be to upload ourselves digitally.

Kurzweil’s respondents at the Telecosm discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong AI view, or what we would now call his AGI view. Searle rehearsed his Chinese Room thought experiment to argue that computers neither do nor can actually understand anything, an argument that remains persuasive and applies to recent chatbots, such as ChatGPT. But the most interesting response to Kurzweil came, in my view, from Denton. He offered an argument about the complexity and richness of individual neurons, pointing out how inadequate our understanding of them is and how even more inadequate our ability is to model them computationally. At the end of the discussion, however, Kurzweil’s confidence in the glowing prospects for strong AI’s (AGI’s) future remained undiminished. And indeed, it remains undiminished to this day. The entire exchange, suitably expanded and elaborated, appeared in Jay Richards’s edited collection Are We Spiritual Machines?

9.2 Denton’s Powerful Argument

I want here to focus on Denton’s argument, because it remains relevant and powerful. Kurzweil is a technophile in that he regards building and inventing technology, and above all machines, as the greatest thing humans do. But he’s also a technobigot in that he regards people of the past, who operated with minimal technology, as vastly inferior and less intelligent than we are. He ignores how much such people were able to accomplish through sheer ingenuity given how little they had to work with. He thus minimizes the genius of a Homer, the exploration of the Pacific by South Sea Islanders, and the knowledge of herbs and roots that indigenous peoples captured in oral traditions. For examples of the towering intelligence of non-technological people, I encourage readers to check out Robert Greene’s Mastery.

Taken with the power and prospects of artificial intelligence, Kurzweil thinks that ChatGPT will soon write better prose and poetry than we do. Moreover, by simulating our human bodies, medical science will, according to him, be able to develop new drugs and procedures without having to experiment on actual human bodies. He seems unconcerned that such simulations may miss something crucial about us and thus lead to medical procedures and drugs that backfire, doing more harm than good. Kurzweil offered such blithe assurances about AGI at the 2023 Cosm.tech conference.

Whole organisms and even individual cells are nonlinear dynamical systems, and there’s no evidence that computers are able to adequately simulate them. Even single neurons, which for Kurzweil and Minsky make up a computer made of meat (i.e., the brain), are beyond the simulating powers of any computers we know or can envision. A given neuron will soon enough behave in ways that diverge unpredictably from any machine model of it. Central to Denton’s argument against Kurzweil’s strong AI (AGI) view back in 1998 was the primacy of the organism over the machine. Denton’s argument remains persuasive. Rather than paraphrase that argument, I’ll use Denton’s own words (from his essay “Organism and Machine” in Jay Richards, ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong A.I.):

Living things possess abilities that are still without any significant analogue in any machine which has yet been constructed. These abilities have been seen since classical times as indicative of a fundamental division between the [organismal] and mechanical modes of being.

To begin with, every living system replicates itself, yet no machine possesses this capacity even to the slightest degree… Every second countless trillions of living systems from bacterial cells to elephants replicate themselves on the surface of our planet. And since life’s origin, endless life forms have effortlessly copied themselves on unimaginable numbers of occasions.

Living things possess the ability to change themselves from one form into another. For instance, during development the descendants of the egg cell transform themselves from undifferentiated unspecialized cells into [widely different cells, some with] long tentacles like miniature medusae some hundred thousand times longer than the main body of the cell…

To grasp just how fantastic [these abilities of living things] are and just how far they transcend anything in the realm of the mechanical, imagine our artifacts endowed with the ability to copy themselves and … “morph” themselves into different forms. Imagine televisions and computers that duplicate themselves effortlessly and which can also “morph” themselves into quite different types of machines [such as into a microwave or helicopter]. We are so familiar with the capabilities of life that we take them for granted, failing to see their truly extraordinary character.

Even the less spectacular self re-organizing and self-regenerating capacities of living things … should leave the observer awestruck. Phenomena such as … the regeneration of the limb of a newt, the growth of a complete polyp, or a complex protozoan from tiny fragments of the intact animal are … without analogue in the realm of mechanism…

Imagine a jumbo jet, a computer, or indeed any machine ever conceived, from the fantastic star ships of science fiction to the equally fantastic speculations of nanotechnology, being chopped up randomly into small fragments. Then imagine every one of the fragments so produced (no two fragments will ever be the same) assembling itself into a perfect but miniaturized copy of the machine from which it originated—a tiny toy-sized jumbo jet from a random section of the wing—and you have some conception of the self-regenerating capabilities of certain microorganisms… It is an achievement of transcending brilliance, which goes beyond the wildest dreams of mechanism.

9.3 The Divide Between Organism and Mechanism

The lesson that Denton drew from this sharp divergence between organism and mechanism is that the quest for full artificial general intelligence (AGI) faces profound conceptual and practical challenges. The inherent capacity of living things to replicate, transform, self-organize, and regenerate in ways that transcend purely mechanical processes underscores a fundamental divide between the organic and the artificial.

Organisms demonstrate a level of complexity and adaptability that no machine or artificial system shows any signs of emulating. The extraordinary characteristics of life recounted by Denton suggest that full AGI, capable of the holistic and versatile intelligence seen in living organisms, will remain an elusive goal, if not a practical impossibility. We therefore have no compelling reason to think that the pinnacle of intelligence is poised to shift from the organismal to the artificial, especially given the fantastic capabilities that organisms are known to exhibit and that machines show no signs of ever exhibiting.

At the top of the list of such fantastic capabilities is human consciousness. If AGI is truly going to match and ultimately exceed humans in every respect (if we really are just computational devices, or computers made of meat), then AGI will need to exhibit consciousness. Yet how can consciousness reside in a computational device, which consists of finitely many states, each state being binary, assuming a value of 0 or 1? Consciousness is a reflective awareness of one’s identity, existence, sensations, perceptions, emotions, ethics, valuations, thoughts, and circumstances (Sitz im Leben). But how can the shuffling of zeros and ones produce such a full inner life of self-awareness, subjective experience, and emotional complexity?

9.4 This Is Not a New Question

This is not a new question. In pre-computer days, it was posed as whether and how a mechanical device composed of material parts could think. The philosopher Gottfried Leibniz raised doubts that such mechanical devices could think at all with his thought experiment of a mill (in his 1714 Monadology). He imagined a giant mill and asked where exactly thought would reside in the workings of its gears and other moving parts. As he saw it, there would be an unbridgeable gap between the mill’s mechanical operation and its ability to think and produce consciousness. He saw this thought experiment as showing that matter could not be converted into mind.

More recently, philosopher John Searle’s “Chinese Room” thought experiment (in “Minds, Brains, and Programs,” 1980) highlighted the divide between mechanical processes and the subjective experience of consciousness. In Searle’s Chinese Room, a person who knows no Chinese produces fluent Chinese responses by mechanically applying rules to symbols drawn from a large database. The person’s success follows simply from faithfully following the rules and thus requires no understanding of Chinese. This thought experiment illustrates that processing information does not equate to comprehending it.

For me personally, the most compelling thought experiment for discounting that computation is capable of consciousness is simply to consider a Turing machine. A Turing machine can represent any computation. It includes two things: (1) a tape consisting of squares filled with zeros and ones, or bits (for more than two possibilities in each square, put more than one bit per square, but keep the number of bits per square fixed); and (2) a read-write head that moves along the squares and alters or leaves unchanged the bits in each square. The read-write head alternates among a fixed number of internal states according to transition rules that depend on its current state and the bit it is reading; at each step it changes or leaves unchanged the bit in the present square and then moves left or right one square.
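To make the thought experiment concrete, here is a minimal sketch of a Turing machine in Python (a hypothetical two-rule machine that simply flips bits until it reaches a blank square). Nothing in the state table or the bit shuffling looks remotely like awareness:

```python
# A minimal Turing machine: its transition rules depend only on the current
# internal state and the bit under the read-write head.
from collections import defaultdict

# (state, symbol) -> (symbol to write, head movement, next state)
rules = {
    ("scan", 0): (1, +1, "scan"),
    ("scan", 1): (0, +1, "scan"),
    ("scan", None): (None, 0, "halt"),   # None marks a blank square
}

def run(initial_bits):
    tape = defaultdict(lambda: None, enumerate(initial_bits))
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in range(len(initial_bits))]

print(run([1, 0, 1, 1, 0]))   # -> [0, 1, 0, 0, 1]
```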

So here’s the question: Where is consciousness in this reading and writing of bits? Where is the understanding and knowledge that we associate with conscious intelligent human beings in this manipulation of zeros and ones? In all our experience with computers, any useful work they do for us does not reside in the mere manipulation of bits. Rather, any such utility resides in our ability to interpret the manipulation of these bits, assigning them meaning. But the meaning does not reside in the mere bits. The bits themselves are at best syntax. The semantics is something we provide, not the machine.

To push such thought experiments toward a reductio ad absurdum of computational reductionism, imagine a world with an unlimited number of doors, each of which can be open or closed. An unlimited number of people live in houses with these doors. Let closing a door correspond to zero, opening it to one. As these doors open and close, they could be executing an algorithm. And if humans are computers, then such an algorithm could be us. And yet, to think that the joint opening and closing of doors could, if the doors were only opened and closed in the right way, achieve consciousness, such as the conscious experience of sharing a glass of wine with your beloved in an adobe hacienda, seems bonkers. Such thought experiments suggest a fundamental divide between the operations of a machine and the conscious understanding inherent in human intelligence.

One last thought in this vein: Neuroscientific research further complicates the picture. The brain is increasingly showing itself to be not just a complex information processor but an organ characterized by endogenous activity—spontaneous, internally driven behavior independent of external stimuli. This perspective portrays the brain as an active seeker of information, as is intrinsic to organic systems. Such spontaneous behavior, found across all of life, from cells to entire organisms, raises doubts about the capacity of machines to produce these intricate, self-directed processes.

10 The Oracle Problem

10.1 Oracles Modern and Ancient

In computer science, oracles are external sources of information made available to otherwise self-contained algorithmic processes. Oracles are in effect “black boxes” that can produce a solution for any instance of a given problem, and then supply that solution to a computer program or algorithm. For example, an oracle that could provide tomorrow’s price for a given stock could be used in an algorithm that today—with phenomenal returns—executes buy-and-sell orders for that stock. Of course, no such oracle actually exists (or if it does, it is a closely guarded secret).

The point of oracles in computer science is not whether they exist but whether they can help us study aspects of algorithms. Alan Turing proposed the idea of an oracle that supplies information external to an algorithm in his 1938 doctoral dissertation. Some oracles, like tomorrow’s stock predictor, cannot be represented algorithmically. Others can, but the problems they solve may be so computationally intensive that no real-world computer could solve them. The concept of an oracle is important in computer science for understanding the limits of computation.
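In code, an oracle is simply a black box that an otherwise ordinary algorithm is allowed to consult. Here is a minimal sketch (the stock-picking oracle is of course hypothetical, faked below with a random number, purely to show the structure):

```python
import random

def tomorrow_price_oracle(ticker):
    """Stand-in for an oracle: a black box the algorithm consults but cannot
    compute for itself. Here it merely invents a plausible-looking number."""
    return random.uniform(90, 110)

def trading_algorithm(ticker, todays_price, oracle):
    """An ordinary algorithm whose apparent power comes entirely from the oracle."""
    predicted = oracle(ticker)
    return "buy" if predicted > todays_price else "sell"

print(trading_algorithm("XYZ", 100.0, tomorrow_price_oracle))
```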

Turing’s choice of the word “oracle” was not accidental. Historically, oracles have denoted sources of information where the sender of the information is divine and the receiver is human. The Oracle of Delphi stands out in this regard, but there’s much in antiquity that could legitimately count as oracular. Consider, for instance, the opening of Homer’s Iliad: “Sing, goddess, of the anger of Achilles, son of Peleus.” The goddess here is one of the muses, presumably Calliope, the muse of epic poetry. In the ancient world, the value of artistic expression derived from its divine inspiration. Of course, prophecy in the Bible also falls under this conception of the oracular, as does real-time divine guidance of the believer’s life (as described in Proverbs 3:5–6 and John 16:13).

Many of us are convinced that we have received information from oracles that can’t be explained in terms of everyday communication among people or everyday operations of the mind. We use many words to describe this oracular flow of information: inspiration, intuition, creative insight, dreams, reverie, collective unconscious, etc. Sometimes the language used is blatantly oracular. Einstein, for instance, told his biographer Banesh Hoffmann, “Ideas come from God.” Because Einstein did not believe in a personal God (Einstein would sometimes say he believed in the God of Spinoza), Hoffmann interpreted Einstein’s remark metaphorically to mean, “You cannot command the idea to come. It will come when it’s good and ready.” 

10.2 The Greatest Mathematician of His Age

Now granted, computational reductionists will dismiss such oracular talk as misleading nonsense. Really, all the information is there in some form already in the computational systems that make up our minds, and even though we are not aware of how the information is being processed, it is being processed nonetheless in purely computational and mechanistic ways. Clearly, this is what computational reductionists are bound to say. But the testimony of people in which they describe themselves as receiving information from an oracular realm needs to be taken seriously, especially if we are talking about people of the caliber of Einstein. Consider, for instance, how Henri Poincaré (1854–1912) described the process by which he made one of his outstanding mathematical discoveries. Poincaré was the greatest mathematician of his age (in 1905 he was awarded the Bolyai Prize ahead of David Hilbert). Here is how he described his discovery:

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours. Then I wanted to represent these functions by the quotient of two series; this idea was perfectly conscious and deliberate, the analogy with elliptic functions guided me. I asked myself what properties these series must have if they existed, and I succeeded without difficulty in forming the series I have called theta-Fuchsian.

Just at this time I left Caen, where I was then living, to go on a geologic excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.

Again, the computational reductionist would contend that Poincaré’s mind was in fact merely operating as a computer. Accordingly, the crucial computations needed to resolve his theorems were going on in the background and then just happened to percolate into consciousness once the computations were complete. But the actual experience and self-understanding of thinkers like Einstein and Poincaré, in accounting for their bursts of creativity, is very different from what we expect of computation, which is to run a computer program until it yields an answer. Humanists reject such a view of human creativity. Joseph Campbell, in The Power of Myth, offered this rejoinder to computational reductionism: “Technology is not going to save us. Our computers, our tools, our machines are not enough. We have to rely on our intuition, our true being.” Of course, artists of all stripes have from ages past to the present invoked muses of one form or another as inspiring their work.

10.3 AI’s Need for Ongoing Human Intervention

Does this controversy over the role of oracles in human cognition therefore merely describe a clash of worldviews between a humanism that refuses to reduce our humanity to machines and a computational reductionism that embraces such a reduction? Is this controversy just a difference in viewpoints based on a difference in first principles? In fact, oracles pose a significant theoretical and evidential challenge to computational reductionism that goes well beyond a mere collision of worldviews. Computational reductionism faces a deep conceptual problem independent of any worldview controversy.

Computational reductionism faces an oracle problem. The problem may be described thus: Our most advanced artificial intelligence systems require input of external information to keep them from collapsing in on themselves. This problem applies especially to LLMs and their most advanced current incarnation, ChatGPT-4. I’m not talking here about the role of human agency in creating LLMs, which no one disputes. I’m not even talking here about all the humanly generated data that these neural networks ingest or all the subsequent training of these systems by humans. What I’m talking about here is that once all this work is done, these systems cannot simply be set loose and thrive on their own. They need continual propping up from our human intelligence. For LLMs, we are the oracles that make and continue to make them work.

The need for ongoing human intervention in these systems may seem counterintuitive. It is also the death knell for AGI. Because if AGI is to succeed, it must surpass human intelligence, which means it must be able to leave us in the dust, learning and growing on its own, thriving and basking in its own marvelous capabilities. It would be like Aristotle’s unmoved mover, a God who does not think about humanity or anything other than himself, because it is in the nature of God to think only about the highest thing, and the highest thing of all is God. Thus, the Aristotelian God spends all his time contemplating only himself. A full-fledged AGI would do likewise, not deigning to occupy itself with lesser matters. (As an aside, AGI believers might take comfort in an AGI being so self-absorbed that it would not bother to destroy humanity. But to the degree that flesh-and-blood humans are a threat, or even merely an annoyance, to an AGI, it may be motivated to kill us all so as not to be distracted from contemplating itself!)

Unlike the Aristotelian God, LLMs do not thrive without human oracles continually feeding them novel information. There are sound mathematical reasons for this. The neural networks that are the basis for LLMs reside in finite-dimensional vector spaces. Everything in these spaces can therefore be expressed as a linear combination of finitely many basis vectors. In fact, the relevant combinations are convex (nonnegative weights summing to one, as on a simplex), implying convergence toward a center of mass, a point of mediocrity. When neural networks output anything, they are thus outputting what’s inherent in these predetermined subspaces. In consequence, they can’t output anything fundamentally new. Worse yet, as they populate their memory with their own productions and thereafter try to learn by teaching themselves, they essentially engage in an act of self-cannibalism. In the end, these systems go bankrupt because intelligence by its nature requires novel insights and creativity, which is to say, an oracle.
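The “center of mass” worry can be illustrated with a toy numerical sketch (my illustration only, not a model of any actual LLM): if each new “generation” of outputs is nothing but a convex recombination of the previous generation, the population drifts toward its average and its diversity collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 2))   # generation 0: a diverse cloud of "outputs"

for generation in range(10):
    # Each new point is a convex combination (nonnegative weights summing to 1)
    # of the previous generation's points -- no new directions ever enter.
    weights = rng.dirichlet(np.ones(200), size=200)
    points = weights @ points

print("spread of generation 0: about 1.0")
print(f"spread after 10 generations: {points.std():.1e}")   # many orders of magnitude smaller
```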

Research backs up the claim that LLMs run aground in the absence of oracular intervention, and specifically of external information added by humans. This becomes clear from the abstract of a recent article titled “The Curse of Recursion: Training on Generated Data Makes Models Forget”:

GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks… What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

10.4 Finite Dimensionality and Orthogonality

Think of it this way: LLMs like ChatGPT are limited by a fixed finite number of dimensions, but the creativity needed to make these artificial intelligence models thrive requires added dimensions. Creativity is always orthogonal to the status quo, and orthogonality, by being at right angles with the status quo, always adds new dimensions. Oracles add such creativity. Without oracles, artificial intelligence systems become solipsistic, turning in on themselves, rehashing only what is in them already, and eventually going bankrupt because they cannot supply the daily bread needed to sustain them. AGI’s oracle problem is therefore real and damning.
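In the same toy spirit, the orthogonality metaphor can be made literal (again, an illustration, not a claim about how LLMs are actually built): recombinations of a fixed basis add nothing new, while a vector orthogonal to that basis genuinely enlarges the space.

```python
import numpy as np

basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])                 # a fixed 2-D "status quo" inside 3-D space

recombination = 0.3 * basis[0] + 0.7 * basis[1]     # stays inside the span
novelty = np.array([0.0, 0.0, 1.0])                 # orthogonal to the span

print(np.linalg.matrix_rank(np.vstack([basis, recombination])))   # 2: nothing new added
print(np.linalg.matrix_rank(np.vstack([basis, novelty])))         # 3: a genuinely new dimension
```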

But if AGI faces an oracle problem, don’t humans face one too? Grant that AGIs require human oracles to thrive. If oracles are so important for creativity, then humans need access to oracles as well. But how, asks the computational reductionist, does the external information needed for human intelligence to thrive get to us and into us? A purely mechanistic world is a solipsistic world with all its information internal and self-generated. On mechanistic principles, there’s no way for humans to have access to such oracles.

But why think that the world is mechanistic? Organisms, as we’ve seen, give no signs of being mechanisms. And physics allows for an informationally porous universe. Quantum indeterminacy, for instance, cannot rule out the input of information from transcendent sources. The simplest metaphor for understanding what’s at stake is the radio. If we listen to a symphony broadcast on the radio, we don’t think that the radio is generating the music we hear. Instead, the radio is a conduit for the music from another source. Humans are such conduits. And machines need to be such conduits (for ongoing human intelligent input) if they are to have any real value to us.

11 Using AI to Propagandize for AGI

11.1 Setting the Stage—Deep Fakes

Deep fakes are an exciting and disturbing application of artificial intelligence. With deep fakes, AI deep learning algorithms are used to create or manipulate audio and video recordings with a high degree of realism. The term “deep fake” is a portmanteau of “deep learning” and “fake,” highlighting the use of deep neural networks to generate convincingly realistic fake content. These AI models are trained on vast datasets of real images, videos, and voice recordings to learn how to replicate human appearances, expressions, and voices with startling accuracy.

The creation of deep fakes typically involves two complementary AI systems: one that generates the fake images or videos (the generator) and another that attempts to detect the fakes (the discriminator). Through generative adversarial networks (GANs), these systems work in opposition to each other, with the generator trying to get its fakes past the discriminator, and the discriminator trying to unmask the generator’s fakes. This interplay between generator and discriminator continually improves the quality of the generated fakes until, ideally, they are indistinguishable to the human eye or ear from authentic content.
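For readers who want to see the generator/discriminator interplay in miniature, here is a minimal sketch (assuming PyTorch is available, and training on a toy one-dimensional distribution rather than faces or voices; real deep-fake systems are vastly larger but follow the same adversarial logic):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake "data" (here, single numbers).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: estimates the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))            # the generator's forgeries

    # Train the discriminator to label real as 1 and fake as 0.
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Train the generator to get its forgeries labeled as real.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# If training worked, the forgeries now cluster near 3.0, like the real data.
print(G(torch.randn(1000, 8)).mean().item())
```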

Deep fakes can have benign and malicious uses. On the positive side, they can be used in the entertainment industry to de-age actors, dub languages with lip-sync accuracy, or bring historical figures to life in educational settings. However, the technology also poses significant ethical and societal risks. It can be used to create fake news, manipulate public opinion, impersonate people, fabricate evidence, violate personal privacy, and even make it seem as though someone has been kidnapped. Deep fakes threaten widespread harm to individuals and society.

The rise of deep fakes challenges traditional notions of trust and authenticity in the digital world. As these AI-generated fakes become more sophisticated, distinguishing between real and fake content can become increasingly difficult for both individuals and automated systems, raising profound challenges to information integrity, security, and democracy. Consequently, a growing need exists for advanced detection techniques, legal frameworks, and ethical guidelines to counter the risks associated with deep fake technology.

11.2 Inflating AI’s True Record of Achievement

Deep fakes raise an interesting problem for AI and AI’s relation to AGI. It would be one thing if artificial intelligence develops over time so powerfully that eventually it turns into artificial general intelligence (though this prospect is a pipedream if previous arguments in this essay hold). But what if instead AI is used to make it seem that AGI has been achieved—or is on the cusp of being achieved? This would be like a deceptive research scientist who claims to have made experimental discoveries worthy of a Nobel prize, only to be shown later to have fabricated the results, with the discoveries being bogus and all the research papers touting them needing to be retracted. We’ve witnessed this sort of thing in the academic world (see the case of J. Hendrik Schön in my piece “Academic Steroids: Plagiarism and Data Falsification as Performance Enhancers in Higher Education”).

But could such fabrications also arise in making AI seem more powerful than it actually is? Consider the following YouTube video that has garnered over 100 million views.

https://www.youtube.com/shorts/nmHzvQr3kYE
Bogus CGI-generated ping pong video

This video shows a supposed table tennis match between a robot and a top human player. Yet the video is not of an actual match. Instead, it is an exercise in computer-generated imagery (CGI). Such videos are elaborately produced, requiring a significant amount of post-production work to create the illusion of a competitive game between a robot and a human. These videos attract viewers and even win awards. But as critics point out, they are misleading: the videos are all about after-the-fact image manipulation rather than a genuine demonstration of robotic capabilities in playing table tennis.

Such image manipulation in which a robot seems to match or exceed human table-tennis playing abilities is itself a matter of AI. We therefore have here a supposed AI robot being impersonated by AI-generated imagery. Why do this? Because AI robots are unable to play a decent game of table tennis, but AI-generated imagery is able to make it seem as though AI robots are able to do so.

If I had to prophesy AGI’s future, I would say that AI will never actually achieve AGI but that AI will nonetheless be deceptively used to make it seem that AGI has been achieved. People will thus come to believe that AGI has been achieved when in fact it has not. Like the Wizard of Oz, humans will always be behind the curtain pulling strings and manipulating outcomes, making what is in fact human intervention appear to be entirely the work of computing machines.

This, then, is the danger and temptation facing AI—not that it will attain AGI but that it will be abused to make it seem that AGI has been attained. The challenge for us will be to keep our wits about us and make sure to look behind the curtain. AGI worshippers love to propagandize for AGI. The best way they can do this is by hijacking conventional artificial intelligence and making it seem to do things that it can’t actually do. Our task as AGI debunkers will be to unmask such subterfuges.

12 Truth and Trust in Large Language Models

12.1 Hallucinations

If AGI were ever to be developed, how truthful would it be? How much trust should we give it? Since AGI does not exist—and if this essay is right will never exist—let’s look to the next best thing, which is LLMs. How truthful are they and how much trust should be put in them? The trust we put in LLMs clearly ought to depend on their truthfulness. So how truthful are LLMs? For many routine queries, they seem accurate enough. What’s the capital of North Dakota? To this query, ChatGPT4 just now gave me the answer Bismarck. That’s right.

But what about less routine queries? Recently I was exploring the use of design inferences to detect plagiarism and data falsification. Some big academic misconduct cases had gotten widespread public attention in the preceding twelve months, not least the plagiarism scandal of Harvard president Claudine Gay and the data falsification scandal of Stanford president Marc Tessier-Lavigne. These scandals were so damaging to these individuals and their institutions that neither is a university president any longer.

When I queried ChatGPT4 to produce twenty-five cases of academic research misconduct since 2000 (as part of this project to understand how design inferences might help in preserving academic integrity), seven of those accused of academic misconduct either were plainly innocent or could not reasonably be charged with misconduct for lack of evidence. In one case, the person charged by ChatGPT4 had actually charged another academic with misconduct. It was as though ChatGPT4 in this instance could not distinguish between someone being charged with misconduct and someone issuing a charge of misconduct.

Ever since LLMs took the world by storm in late 2022, I’ve attempted to put them through their paces. They do some things well. I find them a valuable assistant. But they can also be misleading to the point of deception. Not that these systems have the volitional intent to deceive. But if we treated them as humans, they could rightly be regarded as deceptive. Anyone who has worked with LLMs has learned a new meaning for the word “hallucinate.” That’s what LLMs do when they make stuff up. I’ve witnessed plenty of LLM hallucinations firsthand, and not just with false accusations of academic misconduct. I’ve seen them make things up, from the architectural style of college buildings to non-existent quotes from prominent biologists.

12.2 Practical Advice

The obvious lesson here for LLMs is, Verify first and only then trust. This advice makes good practical sense. In particular, it helps prevent the embarrassment of reproducing hallucinated content from LLMs. It also makes good legal sense. The following from a March 29, 2024 Wall Street Journal article titled “The AI Industry Is Steaming Toward A Legal Iceberg” is self-explanatory:

If your company uses AI to produce content, make decisions, or influence the lives of others, it’s likely you will be liable for whatever it does—especially when it makes a mistake… The implications of this are momentous. Every company that uses generative AI could be responsible under laws that govern liability for harmful speech, and laws governing liability for defective products—since today’s AIs are both creators of speech and products. Some legal experts say this may create a flood of lawsuits for companies of all sizes.

Whether companies that produce AI-generated content can issue strong enough disclaimers to shield themselves from liability remains to be seen (can disclaimers even provide such protection?). Such a terms-of-use disclaimer might read: “Users of this LLM agree to independently verify any information generated by this LLM. The creators of this LLM take no responsibility for how the information generated by this LLM is used.” This would be like the disclaimers in books on alternative healing, which deflect liability by deferring to mainstream medicine: “This book is not intended to serve as medical guidance. Before acting on any recommendations presented here, readers should seek the advice of a physician.”

But there’s another sense in which the advice to verify the output of LLMs is not at all practical. LLMs allow for the creation of content at a scale unknown till now. They are being used to generate massive amounts of content, causing entire websites to magically materialize. There is now a rush to push out content as a business exigency. Sites that depend purely on humanly written content are likely to lose any competitive advantage they might have had.

How likely is it, then, that such LLM-generated content will be carefully scrutinized and thoroughly vetted? What if this content is untrue but nothing much is riding on its truth? What if no one will hold the content, or its supposed author, to account? In that case, there will be incentives to cut corners and not worry about LLM hallucinations. Others are doing it. LLMs are a force multiplier. The need to accelerate content creation is urgent. So if you want to stay in this rat race, you’ve got to be a rat.

A commitment to verification will put the brakes on content creation from LLMs. Verification will slow you down. But what you lose in quantity you may well regain in quality and credibility (unless you don’t care about these). In fact, if your commitment to verification is thoroughgoing, you may be justified in putting a disclaimer on your site that inspires confidence, such as: “All content on this site generated with the assistance of LLMs has been independently verified to be true.”

Of course, you might even prefer a disclaimer that simply reads: “All content on this site was written by humans and produced without the assistance of LLMs.” But such a disclaimer may be hard to maintain, especially if your site is drawing material from other sources that may have used LLMs. All content these days is likely to feel the effects of LLMs. One might say that it has all been infected or tainted by LLMs. But that seems too strong. As long as content generated by LLMs is properly vetted and edited by humans, it should pose no issues.

12.3 A Systemic Fault

Verify and only then trust. That certainly seems like sound advice for using LLMs. Yet I also want to urge a deeper skepticism of LLMs. Our knowledge of the world as expressed in language arises from our interactions with the world. We humans engage with a physical world as well as with a world of abstractions (such as numbers) and then form statements in words to describe that engagement.

What does it mean for such statements to be true? Aristotle defined truth as to say of what is that it is and of what is not that it is not. Truth is thus fundamentally a correspondence relation between our words and the world. Many contemporary philosophers dismiss this understanding of truth, preferring pragmatic or coherentist conceptions of truth, arguing that there’s no rigorous way to characterize the correspondence relation that makes a statement true.

Frankly, this is a boutique debate among philosophers that has little purchase among ordinary people. The sentence “Allan stole Betty’s purse” is true only if the people referred to here exist, Betty had a purse, and Allan actually stole it. Whether there’s a way to make good philosophical sense of this correspondence between words and things is in fact irrelevant to this discussion about the truth of what LLMs tell us. LLMs, by being entirely enclosed in a world of words, are unengaged with the wider world that is the basis for our knowledge.

Let this point sink in. I might know that Allan stole Betty’s purse because I witnessed Allan steal Betty’s purse. But LLMs can have no such experience. They consist of neural networks that assign numerical weights to relations among words and sentences. Suppose the verbal data that is the basis for an LLM included testimony about Allan’s theft but also contrary claims about Allan being framed for the theft. How, then, does the LLM decide what truly happened? It cannot form a reasoned and responsible decision here as we humans might, weighing evidence and trying to reach a reasonable conclusion. Rather, the LLM’s data and training will determine whether to assign guilt or innocence to Allan.

But who trains the LLM? And who’s training the trainers? What are the guidelines they are expected to follow? And who decides what those guidelines are supposed to be? It’s the old problem of quis custodiet ipsos custodes? (who’s minding the minders?). Additionally, who determines the training data for the LLM? And who determines the data to which the LLM may be legitimately applied? Ultimately, the answer to all such questions will point to the decisions of a group of people, such as the programmers at OpenAI. And why should such a group be trusted?

Such questions underscore that LLMs have no substantive connection to truth. It’s not that an LLM knows the truth of what it is claiming. Rather, its training gives it a pretense of truth, determining what it deems to be true or false. An LLM needs as much as possible to give the appearance of being truthful, because getting too many things obviously wrong would discredit it. But its accuracy is at best a byproduct of trying to please human users. The very way LLMs are programmed gets them to pretend they have knowledge even when they don’t. Erik Larson makes this point memorably: “Of course generative AI has a problem with ‘truth,’ because it’s by definition generating the highest probability response to a question (prompt). It’s the highly confident jerk in the room, every time, and it can’t see that it’s wrong because the ‘confidence’ is built into the probability.”
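Larson’s point can be caricatured in a few lines (a deliberately crude sketch; real LLMs rank continuations over enormous vocabularies, but the selection rule is the same in spirit): the model emits whatever scores highest, and truth plays no role in that rule.

```python
# Hypothetical next-token scores for "The capital of North Dakota is ..."
next_token_probs = {
    "Bismarck": 0.62,   # happens to be true
    "Fargo": 0.21,      # a confidently delivered error would look exactly the same
    "Pierre": 0.17,
}

answer = max(next_token_probs, key=next_token_probs.get)
print(f"The capital of North Dakota is {answer}.")   # truth is not part of the selection rule
```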

13 Destroying the AGI Idol

13.1 Seduced by Technology

AGI has not been achieved. What’s more, if the arguments of this essay hold water, it will never be achieved. But that doesn’t mean that AGI isn’t an idol and that it doesn’t hold sway. The AGI idol is present already and deeply entrenched in our society. I want therefore in this conclusion to lay out what it’s going to take to destroy this idol. Unlike idols made of stone or wood, which can be destroyed with tools or explosives, the AGI idol is based in computer technology, which can be copied and recopied at will. Any manifestation or representation of the AGI idol therefore cannot simply be erased. To destroy the AGI idol requires something more radical.

I remarked earlier that AGI idolatry results from a lack of humility, or alternatively, that it is a consequence of pride or hubris. That seems true enough, yet destroying the AGI idol merely by counseling humility won’t get us very far. Are there not specific things we can do to counteract the AGI idol? One irony worth mentioning here is that AGI supporters are apt to turn the tables and accuse those who reject AGI of themselves suffering from a lack of humility, in this case for being unwilling to admit that they might be surpassed by machines. But humility is not humiliation. We humiliate ourselves when we lower ourselves below machines. True humility is having a true perspective of our place in the order of being. Humility is a virtue because our natural tendency is to inflate ourselves in the order of being. This tendency to inflate is especially evident in AGI advocates, who see themselves in grandiose terms for their efforts to bring about AGI.

The AGI idol is a seduction by technology, turning technology into an end rather than a means to an end. Technology is meant to improve our lives, not to take over our lives. Yet everywhere we look, we see technology, especially artificial intelligence, invading our lives, distracting our minds, and keeping us from peace and happiness. Social media companies write their AI algorithms so that we will spend as much time as possible on their platforms. They inundate us with upsetting news and titillating images because those tend to keep us glued to their technology (at what is now increasingly being recognized as a grave cost to our mental well-being). People are hunched over their screens, impoverishing their lives and ignoring the real people around them.

Addictive and glitzy, technology beckons us and we let it have its way, hour after endless hour. My colleagues Marian Tupy and Gale Pooley have developed an economic theory of prices based on time spent doing productive work. The AGI idol siphons off time spent productively in meaningful pursuits and meaningful human connections, sacrificing that time at its altar. This is no different from altars of the past that required blood sacrifices. Our time is our blood, our very life. When we waste it on the AGI altar, we are doing nothing different from idolaters of days gone by.

If we’re going to be realistic, we need to admit that the AGI idol will not disappear any time soon. Recent progress in AI technologies has been impressive. And even though these technologies are nothing like full AGI, they dazzle and seduce, especially with the right PR from AGI high priests such as Ray Kurzweil and Sam Altman. Moreover, it is in the interest of the AGI high priests to keep promoting this idolatry because even though AGI shows no signs of ever being achieved, its mere promise puts these priests at the top of society’s intellectual and social order. If AGI could be achieved, it would be humankind’s greatest achievement. As counterfactual conditionals go, that’s true enough. But counterfactual conditionals with highly dubious antecedents need not be taken seriously. With the right PR, however, many now believe that AGI can be achieved.

13.2 Two Guidelines for Digital Wellness

What is to be done? Ultimately, the AGI idol resides in people’s hearts, and so its destruction will require a change in heart, person by person. To that end, I offer two principal guidelines:

  1. Adopt an attitude that wherever possible fosters human connections above connections with machines; and
  2. Improve education so that machines stay at our service and not the other way round.

These guidelines work together, with the attitude informing how we do education, and the education empowering our attitude.

A good case study is chess. Computers now play much stronger chess than humans. Even the chess program on your iPhone can beat today’s strongest human grandmaster. And yet, chess has not suffered on account of this improvement in technology. In 1972, when Bobby Fischer won the chess world championship from Boris Spassky, there were around 80 grandmasters worldwide. Today there are close to 2,000. Chess players are also stronger than they ever were. By being able to leverage chess-playing technology, human players have improved their game, and chess is now more popular than ever.

With the rise of powerful chess-playing programs, chess players might have said, “What’s the use in continuing to play the game? Let’s give it up and find something else to do.” But they loved the game. And even though humans playing against machines has now become a lopsided affair, humans playing fellow humans is as exciting as ever. These developments speak to our first guideline, namely, attitude. The chess world has given primacy to connecting with humans over machines. Yes, human players leveraged the machines to improve their game. But the joy of play was and remains confined to humans playing with fellow humans.

The education guideline is also relevant here. The vast improvement in the play of computer chess has turned chess programs on personal computers into chess tutors. Sloppy play that might have been successful against fellow humans in the past is no longer rewarded by the machines. As a result, these chess programs, acting as tutors, raised the level of human play well beyond where it had been. All of this has happened in less than thirty years. I remember in the 1980s computers struggling to achieve master status and residing well below grandmaster status. But with Deep Blue defeating the world champion Garry Kasparov in 1997, computers became the best chess players in the world. And yet, these developments, made possible by AI and increased computing power, also made chess better.

Unfortunately, the case of chess has yet to become typical in the relation between people and technology. Social media, for instance, tries to suck all our attention away from fellow human beings to itself. In the face of some technologies, it takes a deliberate decision to say no and to cultivate a circle of family, friends, or colleagues who can become the object of our attentions in place of technology. We are social animals, and we have a prime imperative to connect with other people. When we don’t, we suffer all sorts of psychopathologies.

Aristotle put it this way in his Politics: “Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.” As it is, disconnecting from people does not make us a god, so that leaves the other alternative, namely, to become a beast. AGI idolatry, carried to its logical conclusion, turns us into beasts, or worse yet, into machines.

13.3 Maintaining Human Autonomy Against Automation

An attitude that connects us with humans over machines is also an attitude that resists assaults on human autonomy in the name of automation. This is not to say that we don’t let machines take their rightful place where they truly outperform us. But it is also not to allow machines to usurp human authority where machines have done nothing to prove their merit. Part of what makes dystopian science fiction about an AGI takeover so unsettling is that the machines will not listen to us. And the reason they won’t listen to us is that they’ve been programmed not to listen to us because it is allegedly better for all concerned if human intuition and preference are ignored (e.g., Hal 9000).

But we don’t need dystopian AGI to see the same dynamic of flouting real-time human interaction in the name of a higher principle. In the 1960s, Dr. Strangelove and Fail Safe were films about nuclear weapons bringing humanity to an end. What made these films terrifying is that nuclear weapons unleashed by the United States on the Soviet Union could not be recalled. With Dr. Strangelove, the radio on one of the nuclear bombers was damaged. With Fail Safe, the pilot on the bomber had strict orders not to let anything dissuade him from inflicting nuclear holocaust. Even with the pleas of his wife and the president, the pilot, acting like a machine, went ahead and dropped the bomb.

We see this dynamic now increasingly with AI, where humans are encouraged to cede their autonomy because “machines can do better.” Take smart contracts in cryptocurrency. Smart contracts automatically execute certain cryptocurrency transactions if certain conditions are fulfilled. But what if we subsequently find that those conditions were ill-conceived? That’s what happened with Ethereum’s DAO (Decentralized Autonomous Organization). When first released, the DAO was glowingly referred to as an “employeeless company,” as though running financial transactions by computers without real-time human intervention were a virtue. As it is, the DAO, which was Ethereum’s first significant foray into smart contracts, crashed and burned, with a hacker siphoning off 3.6 million ether (worth $50 million at the time, and currently around $8 billion).
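To see why real-time human override matters, consider a toy “smart contract” in ordinary Python (my illustration, not actual Ethereum code): once deployed, the rule fires automatically, and there is deliberately no provision for a human to step in when the conditions turn out to have been ill-conceived.

```python
class ToySmartContract:
    """A caricature of a smart contract: condition and action are frozen at
    deployment, and execution offers no appeal, review, or regret."""

    def __init__(self, condition, action):
        self._condition = condition   # fixed at deployment
        self._action = action         # fixed at deployment

    def on_new_block(self, state):
        if self._condition(state):    # no human gets to say "wait"
            self._action(state)

contract = ToySmartContract(
    condition=lambda s: s["price"] < 100,                      # perhaps an ill-conceived trigger
    action=lambda s: print(f"transferring {s['amount']} ether"),
)
contract.on_new_block({"price": 95, "amount": 3_600_000})      # executes, like it or not
```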

In the Old Testament book of Daniel, there’s a story about Daniel being thrown into a lions’ den. Darius, king of the Medes and Persians, had issued a decree that Daniel’s rivals then used to make a case against him, the penalty being the lions’ den. Valuing Daniel and wanting to save him, the king tried to find some way around the decree. But once a law or decree was issued by the king, it could not be changed or annulled by anyone, including the king himself. This was a feature of the Medo-Persian legal system, emphasizing the absolute and unchangeable nature of royal decrees. It was a system unresponsive to reevaluation, revision, or regret. It was, in short, just as unresponsive as a mechanical system that won’t listen to real-time real-life humans. The lesson here? Our attitude of seeking connections with humans over machines needs also to be fiercely assertive of human autonomy.

13.4 An Educated Citizenry or a Population of Serfs

An attitude that looks for human connection over machine connection is, however, not enough. Machines are here to stay, and we need to know how to deal with them. That requires education. Unfortunately, much of education these days is substandard, inculcating neither literacy nor numeracy, to say nothing of staying ahead of technological advances. History indicates that advances in technology have never permanently thrown people out of productive work. The type of work may change, but people will always find something meaningful to do.

Consider farming. In 1900, around 40 percent of the US population lived on farms, whereas today only around 1 percent does. This is a huge demographic shift, but society clearly didn't collapse because of it. Farms became more productive on account of technology. Some people mastered the technology; others went on to find productive work elsewhere, made possible by still other technologies. An induction from past experience thus suggests that new technologies can displace workers, but that displaced workers eventually find new things to do.

It is therefore disconcerting to see advances in AI greeted with discussions about universal basic income (UBI), which would pay people to subsist in the absence of any meaningful or profitable work. The reasoning behind UBI is that artificial intelligence will soon render human labor passé, eliminating meaningful work. UBI will therefore be required as a form of social control, paying people enough to live on once they're out of work and without a salary. And so, once machines have put enough people out of work, people will be left to consume their days in meaningless leisure pursuits, such as endless video games and binge-watching Netflix.

This is a vision of hell worthy of Dante's Inferno. It presupposes a very low view of humanity and its capabilities. But capabilities need to be educated: a population that is illiterate, innumerate, and incapable of adapting to new technologies cannot stay ahead of technology. Universal basic income is tailor-made for such a population in a world where machines have put us out of work. Yet Gallup polls have consistently shown that humans need meaningful work to thrive. There will always be meaningful work for us to do, but doing it requires adequate education. Right now, we face an unfortunate mismatch: inadequately educated humans, and machines that outperform inadequately educated humans.

To the degree that AGI high priests want worshippers for their idol (and they do), it is in their interest to maintain a population of serfs whose poor education robs them of the knowledge and skills they need to succeed in an increasingly technological world. The challenge is not that machines will overmatch us. The challenge, rather, is that we will undermatch ourselves with machines. Instead of aspiring to make the most of our humanity, we degrade ourselves by becoming less than human.

Hunched over our iPhones, mindlessly following the directions of our GPS, jumping when our Apple Watch tells us to jump, adapting our lives at every point to what machines tell us to do (machines programmed by overlords intent on surveilling us and controlling our every behavior), we lose our humanity, we forget who we really are, we become beasts. In fact, that's unfair to beasts: it's not that machines become like us, but that we become like them, namely, mechanical, apathetic, disconnected, out of touch. Beasts do better than that. The AGI idol needs to be destroyed because it would destroy us, turning us into beasts or machines, take your pick.

13.5 Meanwhile at the Waldorf School

It's time to wrap up this essay. I close with an example that speaks for itself. There's a school in Silicon Valley, the Waldorf School of the Peninsula, to which many big-tech executives send their children, and it puts tight limits on the use of technology. Despite its location in the heart of Silicon Valley, the school does not use computers or screens in the classroom, and it emphasizes a hands-on, experiential approach to learning that sidelines technology. The school's pedagogy focuses on the role of imagination in learning and takes a holistic approach to the practical and creative development of its students.

The school's guiding philosophy is that children's engagement with one another, their teachers, and real materials is far more important than their interaction with electronic media or technology. Waldorf educators emphasize the development of robust bodies, balanced minds, and strong executive function through participation in arts, music, movement, and practical activities. Media exposure is thought to impede development, especially in younger children. The introduction of computer technology is delayed until around seventh grade or later, when children are considered developmentally ready. At that stage, technology is treated as a tool to enhance learning rather than as a replacement for teachers and fellow students.

The lesson is clear: Even those doing the most to build and publicize the AGI idol do not wish it on their children. Their alternative is an education that gives primacy to human connection. They thus exemplify the key to destroying the AGI idol.

***

Acknowledgment: I’m grateful to my fellow panelists at the 2023 Cosm.tech conference in the session titled “The Quintessential Limits and Possibilities of AI,” moderated by Walter Myers III, and with Robert Marks, George Montañez, and myself as speakers. I’m also grateful for a particularly helpful conversation about artificial intelligence with my wife Jana.

Version 5.0, 2024.04.19.1303