Ray Kurzweil and the Singularitarians
Is the Singularity really nearer now? I offer an overview of the man and his ideas, and Chris Edwards reviews Kurzweil's new book The Singularity is Nearer
I have met Ray Kurzweil on a number of occasions and written about him and his work, most recently in my 2018 book Heavens on Earth: The Scientific Search for The Afterlife, Mortality, and Utopia, in the context of the singularitarians and their goal of engineering immortality by, among other things, transferring your soul—the pattern of information that represents your thoughts and memories as stored in the connectome of your brain—into a computer. I am skeptical, inasmuch as even if this were technologically possible (which it isn’t), it would just be a copy of you (your MemorySelf or MEMself); your Point-of-ViewSelf (POVself)—the moment-to-moment experiencing self—would not awaken in the cloud but still be inside your head.
Nevertheless, I find the man and his ideas strangely compelling, almost transcendent in a religious sense. In fact, Kurzweil is called “the transcendent man” in Barry Ptolemy’s documentary film of that title, an apt sobriquet for this scientist, futurist, author, and inventor of such life-changing technologies as the first optical character recognition program and CCD flatbed scanner, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, Kurzweil electronic keyboards, and more. At age 15 he was designing computer programs to aid in homework, and at age 17 he won the Westinghouse Science Talent Search contest, which landed him an invitation to the White House. A recipient of the 1999 National Medal of Technology and an inductee into the National Inventors Hall of Fame, Kurzweil influenced the field of Artificial Intelligence with his books The Age of Intelligent Machines and The Age of Spiritual Machines, and his book The Singularity is Near popularized the term and the hope that one day soon (2045 by his calculations) we will witness the Singularity and achieve immortality.
How will the singularity come about? It begins with what Kurzweil calls “the law of accelerating returns,” which holds not just that change is accelerating, but that the rate of change is accelerating. Moore’s Law has accurately projected the doubling rate of computer power since the 1960s. The Singularity is Moore’s Law on steroids, applied to all science and technology. In the past century the world changed more than it had in the previous 1,000 centuries, which is staggering enough. As we approach the Singularity, says Kurzweil, the world will change more in a decade than in 1,000 centuries, and as the acceleration continues and we reach the Singularity, the world will change more in a year than in all pre-Singularity history.
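The arithmetic behind such claims is ordinary compounding. Here is a minimal sketch, assuming a fixed two-year doubling time (one common reading of Moore’s Law; the numbers are illustrative, not Kurzweil’s own figures):

```python
# Toy illustration of compounding under a fixed doubling time.
# The two-year doubling period is one common reading of Moore's Law;
# the parameters here are illustrative, not Kurzweil's own figures.

def growth_factor(years, doubling_time=2.0):
    """Total multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_time)

print(growth_factor(10))            # one decade of doubling: 32x
print(f"{growth_factor(100):.3e}")  # one century: 2**50, roughly a quadrillion-fold
```

Steady doubling alone produces these dizzying numbers; Kurzweil’s stronger claim is that the doubling time itself shrinks.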
Post-Singularitarians will be to us what we are to our pets: so vastly smarter that we won’t even know how intelligent they are. Within a quarter century, Kurzweil projects, “nonbiological intelligence will match the range and subtlety of human intelligence” then “soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge.” Compare the room-size computers of the 1950s to the pocket-size computers we carry around today, follow that trajectory downward in size over the same amount of time or less, and you arrive at cellular-size computers that can be swallowed in tablet form. Once such nanotechnologies exist in the form of nanorobots that repair cells, tissues, and organs (including brains), and are coupled with other biotechnologies like designer drugs and engineered genes, the aging process will be halted, and possibly even reversed, enabling us “to live long enough to live forever,” as he proclaimed in his book Transcend.
To secure your health until 2045, Kurzweil’s book, Fantastic Voyage: Live Long Enough to Live Forever (co-authored with Terry Grossman), recommends that we adopt “Ray and Terry’s Longevity Program,” which includes 250 supplements a day and weekly rounds of biochemistry reprogramming through intravenous nutritionals and blood cleansing. To boost antioxidant levels, for example, Kurzweil suggests a concoction of “alpha lipoic acid, coenzyme Q10, grapeseed extract, resveratrol, bilberry extract, lycopene, silymarin, conjugated linoleic acid, lecithin, evening primrose oil (omega-6 essential fatty acids), n-acetyl-cysteine, ginger, garlic, l-carnitine, pyridoxal-5-phosphate, and Echinacea.”
Bon appétit.
Ray Kurzweil is not a big man, but when I saw him on stage in full Singularity-is-Near mode at the Singularity Summit in New York City and on the TED stage in Vancouver, BC, he was larger than life. Here he is in 2016, with the full backing of the tech giant Google as one of its directors of engineering, explaining in a Playboy interview what we have to look forward to:
By the 2030s we will have nanobots that can go into a brain non-invasively through the capillaries, connect to our neocortex and basically connect it to a synthetic neocortex that works the same way in the cloud. So we’ll have an additional neocortex, just like we developed an additional neocortex 2 million years ago, and we’ll use it just as we used the frontal cortex: to add additional levels of abstraction.
Not just smarter, but healthier:
As they gain traction in the 2030s, nanobots in the bloodstream will destroy pathogens, remove debris, rid our bodies of clots, clogs and tumors, correct DNA errors and actually reverse the aging process. I believe we will reach a point around 2029 when medical technologies will add one additional year every year to your life expectancy.
Kurzweil calls this longevity escape velocity—as the rate of progress of medical technology accelerates, we will live more than a year longer for every year of life, and then the extra years will pile up for decades, centuries, and beyond…forever.
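The idea can be made concrete with a toy model. The sketch below is my own construction with invented parameters, not Kurzweil’s math: aging subtracts one year of remaining life expectancy per calendar year, while a compounding medical “bonus” adds some back each year.

```python
# A toy model of "longevity escape velocity" (a reviewer's construction,
# not Kurzweil's): ordinary aging subtracts one year of remaining life
# expectancy per calendar year, while medical progress adds back a
# bonus that compounds. All parameters are invented for illustration.

def remaining_life(initial=30.0, years=50, bonus_start=0.2, bonus_growth=1.2):
    remaining = initial
    bonus = bonus_start
    history = []
    for _ in range(years):
        remaining += bonus - 1.0  # medical bonus minus one year of aging
        bonus *= bonus_growth     # progress accelerates each year
        history.append(remaining)
    return history

trajectory = remaining_life()
# Remaining life expectancy dips at first; once the annual bonus
# exceeds one year per year (around year 9 here), it climbs without bound.
```

The model shows why the concept hinges entirely on the assumption that the bonus keeps compounding; if progress plateaus, the curve turns back down.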
If there is one thing I have learned in a third of a century of professional skepticism it is this: Beware the prophet who proclaims that the end of the world, the apocalypse, doomsday, or judgment day is upon us, or that the second coming, the resurrection, paradise, or the Greatest Thing to Happen to Humanity Ever will happen in the prophet’s own lifetime. The belief that we can transcend what no one before us has is due to our natural inclination to assume that we are special and that our generation will witness the new dawn. Call it the Ptolemy Principle: the belief, after its namesake, that we are not only at the center of the universe but are specially created, chosen people, living at the most significant time in history. People have always embraced the Ptolemy Principle, but it is gainsaid by the Copernican Principle, which, after its namesake, holds that the Earth is not the center of the solar system, the solar system is not the center of our galaxy, our galaxy is not the center of the universe, humans are not specially created apart from all other animals, and we are not living in the most important time in history.
Thus it is highly unlikely that even a science-based prophecy such as those proffered by the singularitarians will come true. Prognosticators of both religious and secular utopias always include themselves among the chosen few, with the paradisiacal state just within reach. For once, I would like to hear a scientific futurist or a religious diviner predict that the Big Thing is going to happen in, say, the year 7510. But where’s the hope in that for the living?
As well, I’m skeptical of extrapolating trend lines very far into the future. History is highly contingent, nonlinear, and unpredictable. All those nifty graphs of accelerating technological change may not continue at those rates, nor apply to all technologies. The downsizing of computers from room-size to pocket-size is one thing; it is quite another to go from pocket-size to cellular-size. The miniaturization of computer chips must one day collide with the limits imposed by the laws of physics, impeding the law of accelerating returns that Kurzweil envisions carrying us to forever. Plus, in my opinion, the problems of aging are orders of magnitude harder than anyone anticipated.
Such skepticism aside, Ray Kurzweil is one of the biggest thinkers of our time, and his new book is well worth reading (and it’s available on audio!). Skeptic contributor Chris Edwards, who regularly writes about big ideas in these pages, reviews the book below.
Chris Edwards, EdD, teaches AP world history and an English course on critical thinking at a public high school in the Midwest and is the author of To Explain it All: Everything You Wanted to Know about the Popularity of World History Today; Connecting the Dots in World History; Femocracy: How Educators Can Teach Democratic Ideals and Feminism; and Beyond Obsolete: How to Upgrade Classroom Practice and School Structure.
The Singularity (Still) Isn’t Clear
Chris Edwards
In 2005, the futurist and philosopher Ray Kurzweil published The Singularity is Near: When Humans Transcend Biology. The book achieved bestseller status, and Kurzweil’s core statements about how exponential trends in technological progress will eventually lead to a “transhumanist” future remain embedded in the culture. One need not be a “singularitarian” (a true believer in Kurzweil’s philosophy, although the term predates his work) to see daily the changes created by expanded technological capacities. Nearly two decades later, Kurzweil’s new book The Singularity is Nearer: When We Merge with AI attempts to make another case for the coming “singularity.” However, Kurzweil prophesies on a spectrum and grades his predictions on a curve. For predictions to be valid, they need to be clear and falsifiable, and because the singularity as a concept isn’t any clearer than it was in 2005, it cannot be definitively nearer either.
What Is “the Singularity”?
In 2011, Skeptic published “The Singularity Isn’t Even Close,” in which I outlined three logical and/or scientific errors in the predictions Kurzweil made in his 2005 book, the most serious of which was his claim that in order for the universe to “wake up” (in his Epoch 6 of humanity—more below) information would need to spread out from Earth at a pace faster than the speed of light, which Einstein demonstrated is impossible.
In his new book, Kurzweil presents a less dramatic thesis, but one still plagued with logical errors. Compare, for example, quotes from Kurzweil’s first book and his most recent.
From 2005 (p. 22):
To put the concept of Singularity into further perspective, let’s explore the history of the word itself. “Singularity” is an English word meaning a unique event with, well, singular implications. The word was adopted by mathematicians to denote a value that results when dividing a constant by a number that gets closer and closer to zero.
And (p. 24):
From my perspective, the Singularity has many faces. It represents the nearly vertical phase of exponential growth that occurs when the rate is so extreme that technology appears to be expanding at infinite speed. Of course, from a mathematical perspective, there is no discontinuity, no rupture, and the growth rates remain finite, although extraordinarily large. But from our currently limited framework, this imminent event appears to be an acute and abrupt break in the continuity of progress.
From 2024 (pp. 1-2):
The term “singularity” is borrowed from mathematics (where it refers to an undefined point in a function, like when dividing by zero) and physics (where it refers to the infinitely dense point at the center of a black hole, where the normal laws of physics break down). But it is important to remember that I use the term as a metaphor. My prediction of the technological Singularity does not suggest that rates of change will actually become infinite, as exponential growth does not imply infinity, nor does a physical singularity. A black hole has gravity strong enough to trap even light itself, but there is no means in quantum mechanics to account for a truly infinite amount of mass. Rather, I use the singularity metaphor because it captures our inability to comprehend such a radical shift with our current level of intelligence. But as the transition happens, we will enhance our cognition quickly enough to adapt.
In 2005, Kurzweil described the Singularity as an event, but in 2024 the Singularity has evolved into a metaphor. What makes Kurzweil think that the Singularity is nearer? To make his point, he puts humans on his spectrum of change, and then uses present trends to make predictions about where exponential growth will be going in the future. At the end of his first chapter, “Where are We in the Six Stages?”—Epoch 1: Physics and Chemistry, Epoch 2: Biology, Epoch 3: Brains, Epoch 4: Technology, Epoch 5: Merger of Technology and Human Intelligence, Epoch 6: The Universe Wakes Up—Kurzweil writes (p. 10):
Humans are now in the Fourth Epoch, with our technology already producing results that exceed what we can understand for some tasks. For the aspects of the Turing test that AI has not yet mastered, we are making rapid and accelerating progress. Passing the Turing test, which I have been anticipating for 2029, will bring us to the Fifth Epoch.
A key capability in the 2030s will be to connect the upper ranges of our neocortices to the cloud, which will directly extend our thinking. In this way, rather than AI being a competitor, it will become an extension of ourselves. By the time this happens, the nonbiological portions of our minds will provide thousands of times more cognitive capacity than the biological parts.
As this progresses exponentially, we will extend our minds many millions-fold by 2045. It is this incomprehensible speed and magnitude of transformation that will enable us to borrow the singularity metaphor from physics to describe our future.
In The Singularity is Nearer, Epoch Six no longer seems so important and the “Singularity” now seems to be the moment, in 2045, when humans can employ it as a metaphor for whatever big thing might happen then.
Chapter Two is really where the new stuff begins, and Kurzweil smartly generates a more manageable thesis than he did in his first book by observing (p. 11):
If the whole story of the universe is one of evolving paradigms of information processing, the story of humanity picks up more than halfway through. Our chapter in this larger tale is ultimately about our transition from animals with biological brains to transcendent beings whose thoughts and identities are no longer shackled to what genetics provides. In the 2020s we are about to enter the last phase of this transformation—reinventing the intelligence that nature gave us on a more powerful digital substrate, and then merging with it. In so doing, the Fourth Epoch of the universe will give birth to the Fifth.
If one does not fully understand the definition of “the Singularity” or fully believe in Epoch Six, it is possible to read a more coherent thesis for The Singularity is Nearer in the words quoted above. Kurzweil wants to argue that humanity will merge with Artificial Intelligence and he does so by stating that humans will pass into the next Epoch once AI passes the Turing Test, about which Kurzweil notes two problems (p. 69):
1. Humans are easier to “fool” at lower-level interactions, which is why Kurzweil argues for a strong Turing test where AI can make you think it is human at an immersive level.
2. For AI to pass the strong Turing test, it will need to possess such vast computational power that it is “actually going to have to dumb itself down!”
Question: what if Turing did not intend his famous test to be a “test” at all, but rather an epistemological thought experiment? Essentially, Turing found that you cannot code a machine to tell you when it becomes sentient. The only way that the machine could judge sentience would be to use its own internal standards and definitions. It could declare itself to be sentient any time without employing any actual thought or emotion.
The only way to avoid tautology in the case of machine sentience is to define the term as relational. In other words, a machine becomes sentient when a person says the machine is sentient. If this is the case, then the merging of AI and human consciousness would end the relational definition of sentience and moot the concept. Kurt Gödel did not write that certain mathematical concepts were incomplete as a way of challenging mathematicians; he wrote the Incompleteness Theorem to expose that theoretical mathematics was, at its core, tautological. Why do we assume that Turing was challenging AI developers and not exposing an unsolvable philosophical problem in the concept of sentience? (Indeed, Kurzweil treats the Turing test as if it is a video game boss humans will have to defeat before passing on to the next level.)
In his famous paper on “Computing Machinery and Intelligence,” in which he asks “Can Machines Think?,” Alan Turing cryptically wrote “It might be urged that when playing the ‘imitation game’ the best strategy for the machine may possibly be something other than imitation of the behavior of a man.” What does that mean? And what might an AI that wants to pass a test be thinking about doing to the test, or us? What if AI decides to pass its test by dumbing us down?
If we accept Kurzweil’s definition of the Turing Test, however, his prediction follows: humans will soon be able to meld with a strong AI through the use of nanotechnology, which can be used to connect the human brain to an AI interface. Using nanorobots “won’t require some kind of sci-fi brain surgery—we’ll be able to send nanobots into the brain noninvasively through the capillaries. Instead of human brain size being limited by the need to pass through the birth canal, it can then be expanded indefinitely. That is, once we have the first layer of virtual neocortex added, it won’t be a one-shot deal—more layers can be stacked on top of that one (computationally speaking) for ever more sophisticated cognition” (p. 72).
Let’s return to the Turing Test to understand the problem with Kurzweil’s argument. Turing declared that consciousness is a relational concept. A machine cannot declare itself to be thinking. In the same way that a conversation is defined as an interaction between at least two people, the concept of consciousness must be defined through an interaction of two separate individuals. If two people talking then somehow meld into one person, the new “melded” person is no longer having a conversation, but just talking to himself. If humans and AI merge, we will be doing the same thing and there will be no one to declare that the Turing Test has been passed.
Kurzweil seems to recognize something of this problem under the larger banner of “consciousness,” and in chapter three he tries to define the term. He reaches again for a spectrum argument, defining “consciousness” essentially by computing power (p. 77):
Fruit flies don’t exactly recite Shakespeare, but they do carry out behaviors in response to their environment and have brains consisting of about 250,000 neurons. Cockroaches have about 1,000,000. This is only about 0.001 percent as many neurons as the human brain has, though, so there is much less room for complex and hierarchical networks like our own.
From this, Kurzweil states (admits?) (p. 81):
[T]he Turing test would not just serve to establish human-level functional capability but would also furnish strong evidence for subjective consciousness and, thus, moral rights. While the legal implications of conscious artificial intelligence are profound, I doubt that our political system will adapt fast enough to enshrine such rights in law by the time the first Turing-level AI’s are developed. So initially it will fall to the people developing them to formulate ethical frameworks that can restrain abuses.
No one argues that ethical reasoning improves exponentially. We have all seen what happened when social media giants like Meta tried to generate ethical standards for Facebook and Instagram while simultaneously growing market share. Do we want that process replicated?
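As an aside, the neuron-count comparison in the fruit-fly passage quoted above can be checked with simple arithmetic. The sketch below assumes the commonly cited estimate of roughly 86 billion neurons for the human brain (a figure not given in Kurzweil’s quote):

```python
# Sanity check on the neuron-count comparison Kurzweil makes, assuming the
# commonly cited estimate of roughly 86 billion neurons in the human brain.
cockroach_neurons = 1_000_000
human_neurons = 86_000_000_000

ratio_percent = cockroach_neurons / human_neurons * 100
print(f"{ratio_percent:.4f}%")  # about 0.0012%, i.e., Kurzweil's "0.001 percent"
```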
After Chapter Four, Kurzweil stops trying to torture the data into his Epochs, and his writing really becomes enjoyable. Instead of arguing for a Singularity, Kurzweil makes some manageable predictions about future trends. In fact, from Chapter Four on, the book resembles Kai-Fu Lee’s AI 2041: Ten Visions for Our Future (that’s a compliment). Chapter Four, titled “Life is Getting Exponentially Better,” provides real data to counter the gloom-and-doom bias that infects us news junkies. This reviewer found it enormously refreshing to read an author who accepts unreservedly Steven Pinker’s data about the decline of violence. The graphs that Kurzweil includes about the global spread of flush toilets, home electricity, and other life-enhancing technologies are uplifting. His insight that incremental improvements in the human condition tend not to make the news, while horrific events become lead stories, echoes Pinker’s sentiments, and it cannot be overstated.
There is at least one place where Kurzweil’s excitement for exponential growth gets in the way of the basic laws of economics. Of the meat industry, he writes (p. 170):
Meat taken from animal carcasses has several major disadvantages: it inflicts suffering on innocent creatures, it is often unhealthy for humans, and it causes severe environmental impacts through both toxic pollution and carbon emissions. Growing meat from cultured cells and tissues can solve all those problems. No living animals would suffer, it could be designed to be both healthier and better-tasting, and harm to the environment could be minimized with ever-cleaner technology.
Then, later, he writes about agricultural improvements (p. 202):
During the twentieth century the advent of improved pesticides, chemical fertilizers, and genetic modification led to an explosion in crop yields. For example, in 1840 wheat yields in the United Kingdom were 0.44 ton per acre. As of 2022, they had risen to 3.84 tons per acre. During roughly that same span, the United Kingdom’s population rose from about 27 million to 67 million, so food production was able to not just accommodate a growing number of citizens but make food much more abundant for each person.
While every study of crop production indicates these stunning improvements in yield, it does not follow that crop production will continue to increase if humans start eating lab-grown meat. About half of U.S. crop production goes to feeding livestock, and almost none of the corn grown in the United States goes directly to feed humans: roughly forty percent goes to feed cattle, with much of the rest used for ethanol.
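For perspective, the yield figures Kurzweil cites imply a modest, steady compound growth rate rather than an accelerating one. The arithmetic, using only the numbers quoted above, is simple:

```python
# Compound annual growth rate implied by the UK wheat yields quoted above:
# 0.44 tons per acre in 1840 rising to 3.84 tons per acre in 2022.
yield_1840 = 0.44
yield_2022 = 3.84
years = 2022 - 1840  # 182 years

cagr = (yield_2022 / yield_1840) ** (1 / years) - 1
print(f"{cagr:.2%}")  # about 1.2% per year: steady growth, not accelerating
```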
The last chapters of Kurzweil’s book focus on how the job market will look in Epoch Five, but readers might find themselves struggling to understand how to reconcile such mundane employment concerns with an era where our neocortex receives regular updates from cloud-based nanobots. For anyone who read Annie Jacobsen’s new and utterly horrifying book, Nuclear War: A Scenario, Kurzweil’s conclusions about nuclear weapons sound a little sunny (p. 270):
There is reason for measured optimism about the trajectory of nuclear risk. MAD has been successful for more than seventy years, and nuclear states’ arsenals continue to shrink. The risk of nuclear terrorism or a dirty bomb remains a major concern, but advances in AI are leading to more effective tools for detecting and countering such threats.
Readers might like to hear more about how AI might affect MAD.
Conclusion
Because Kurzweil’s definitions of the “Singularity” are fluid, he can fall back on any form of technological progress as proof that it is getting closer. While his descriptions of technological trends and his analyses of future projections always inform and entertain, he invokes a type of blind faith in technological progress as a means of addressing any concerns. At some level, this is not much different from a believer responding to religious objections with the phrase “God can do anything.” By redefining the Singularity as a metaphor, and by more or less jettisoning the absurdity of Epoch Six with its faster-than-light fantasies, Kurzweil wields a more manageable thesis in his sequel.
Still, the singularity can’t be near if it isn’t clear, and Kurzweil seems willing to grade everything, including his own predictions, on an upward curve.
In Methuselah’s Children, a novel published in the 1940s, Robert Heinlein told us how to extend the human life span with a technology that has been known to farmers since Neolithic times: selectively breed humans for longer life spans.
In the story, the rest of the humans, the ones not part of the long-lived minority, did not believe the longer life spans were the result of selective breeding; thinking the secret must be some medical discovery being kept from them, they tried to capture those who resulted from the breeding program in order to torture the supposed secret of long life out of them. The long-lived were forced to flee Earth to avoid being rounded up and made to reveal a supposed secret they did not possess.
https://en.wikipedia.org/wiki/Methuselah%27s_Children
Today, most of the world is still composed of followers of religions that would consider tampering with the human life span immoral and a defiance of God's will. I suspect the computer freaks have not taken into account the possibility that there might be some opposition to their pipe dreams.
Physics, as we currently understand it, is undeniably incomplete. Einstein's theory of relativity offers profound insights, like the notion that for a photon, no time passes during its journey from the Sun to Earth—a journey that, from our perspective, takes 8 minutes. For the photon, moving along a lightlike worldline, the spacetime distance from emission to absorption is effectively zero. Take that in for a moment.
However, while relativity explains much of the universe, it doesn’t unify with quantum mechanics, nor does the Standard Model account for 95% of the cosmos, which we now attribute to mysterious entities like dark matter and dark energy.
The enigmas of quantum mechanics remain not only unresolved but, in many ways, still defy meaningful interpretation. We are still grappling to understand the full picture of the universe.
To hear Chris dismiss Ray’s vision of “Epoch 6” based on our rudimentary grasp of the universe, while the Standard Model remains woefully inadequate, is akin to cavemen debating the feasibility of traveling to the Moon, or Mars, or beyond.
No doubt, such discussions took place between the Rays and Chrises of that ancient epoch. What we once deemed impossible was simply a reflection of our limited understanding, much like today’s skepticism about the future of AI and the cosmos.