35 Comments

"In any case how would we know if an AI is sentient..." That is not as difficult as it sounds. I had a fun conversation with ChatGPT and it was quickly obvious that it is not sentient. What is going to be difficult to achieve is Consciousness, spelled with a big Julian Jaynes 'C'. If you are not familiar with Jaynes, it is time to dust off that old volume that captivated your imagination back in college and find out exactly what consciousness is.


What do you think are the differences between sentience and consciousness?


I go with the dictionary definition of sentience, which is: able to perceive or feel things. Unfortunately, perception is quite different from feeling, and it isn't clear that either is the same as 'feel things', which invokes the idea of one's fingertips. In any event, an embodiment is implied. Consciousness is far too complex to elaborate on here; let's just say that only humans are conscious, because it requires recursive language and is learned.


I do not agree that consciousness "requires recursive language and is learned." I believe that it is an innate function. I've been reading about it for many years. Here are some definitions for you to consider:

1. Consciousness is the subjective feeling associated with the real-time collection of information by the brain in the state of awareness, alertness, wakefulness, or receptivity. When a person goes to sleep at night and is not dreaming, he is unconscious, and when he awakens in the morning, he is conscious. Consciousness becomes operative in the human organism when the brain reaches a certain size and complexity at around 24 weeks after conception.

Me, Gary Whittenberger

2. “An organism is conscious if there is something that it is like to be that organism.”

From “What Is It Like to Be a Bat?” By Thomas Nagel. The Philosophical Review, Vol. 83, No. 4, 1974, pp. 435-450.

3. “Consciousness is experience.” (P. 1) “Collectively taken, consciousness is lived reality. It is the feeling of life itself.” (P. 1) “Consciousness is the way the world appears and feels to me.” (P. 3)

From “The Feeling of Life Itself: Why Consciousness is Widespread but Can’t be Computed.” By Christof Koch, 2019.

4. “I will use the terms consciousness, subjective awareness, and subjective experience interchangeably...” (P. 3) “The term consciousness has come to mean the act of being conscious of something, rather than the material of which you are conscious.” (P. 4)

From “Rethinking Consciousness: A Scientific Theory of Subjective Experience.” By Michael S. A. Graziano, 2019.

5. “Consciousness is experience itself...” (P. 3)

From “Conscious: A Brief Guide to the Fundamental Mystery of the Mind.” By Annaka Harris, 2019.

6. “Wherever there is a conscious mind, there is a point of view. This is one of the most fundamental ideas we have about minds – or about consciousness. A conscious mind is an observer, who takes in a limited subset of all the information there is.” (P. 101)

From “Consciousness Explained.” By Daniel C. Dennett, 1991.

7. “...when I talk about consciousness, I am talking only about the subjective quality of experience: what it is like to be a cognitive agent.” (P. 6)

From “The Conscious Mind: In Search of a Fundamental Theory.” By David Chalmers, 1996.


I remember reading Chalmers' Conscious Mind some 10 or 12 years ago and making some extensive notes; a fascinating read. But I've always been kind of amused by the quote from the International Dictionary of Psychology that he started his first chapter with:

"Consciousness: The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of confusing consciousness with self-consciousness – to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written about it. (Sutherland 1989) [pg 3]" 🙂

Always been a bit sympathetic to or intrigued by his various conjectures throughout the book:

"Even if consciousness cannot be reductively explained, there can still be a theory of consciousness. We simply need to move to a 'nonreductive' theory instead. We can give up on the project of trying to explain the existence of consciousness wholly in terms of something more basic, and instead admit it as fundamental, giving an account of how it relates to everything else in the world. [pg. 212] ....

I have advocated some counterintuitive views in this work. I resisted mind-body dualism for a long time, but I have now come to the point where I accept it, not just as the only tenable view but as a satisfying view in its own right. .... If God forced me to bet my life on the truth or falsity of the doctrines I have advocated, I would bet fairly confidently that experience is fundamental, and weakly that experience is ubiquitous [pg. 350]"

Moot exactly what would undergird that dualism, but he alludes to process as fundamental:

"Why is all this processing accompanied by an experienced inner life?" [pg. xi]

Part and parcel of process philosophy:

https://en.wikipedia.org/wiki/Process_philosophy

Processes are, of course, ubiquitous, from ticking of clocks, to organs pumping blood and air, to synapses firing. But processes are not things in themselves -- they're progressions from some states to others. Moot of course which processes might "instantiate" consciousness -- maybe those underlying consciousness require hydrocarbon structures, and are incompatible with silicon ones -- but those processes, in themselves, seem a plausible candidate for half of Chalmers' "mind-body dualism".


S1: I remember reading Chalmers Conscious Mind some 10 or 12 years ago, made some extensive notes, a fascinating read.

GW1: I believe that I read all or most of that work.

S1: But I've always been kind of amused by the quote from the International Dictionary of Psychology that he started his first chapter with: "Consciousness: The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of confusing consciousness with self-consciousness – to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written about it. (Sutherland 1989) [pg 3]"

GW1: He seems to be contradicting himself – “We know what consciousness is, but we don’t know what consciousness is.”

S1: Always been a bit sympathetic to or intrigued by his various conjectures throughout the book: "Even if consciousness cannot be reductively explained, there can still be a theory of consciousness. We simply need to move to a 'nonreductive' theory instead. We can give up on the project of trying to explain the existence of consciousness wholly in terms of something more basic, and instead admit it as fundamental, giving an account of how it relates to everything else in the world. [pg. 212] ....

GW1: Right now I lean towards the idea that consciousness is an emergent phenomenon. It emerges from complex brains. How complex does a brain need to be to yield consciousness? Don’t know.

S1: I have advocated some counterintuitive views in this work. I resisted mind-body dualism for a long time, but I have now come to the point where I accept it, not just as the only tenable view but as a satisfying view in its own right. ....

GW1: I agree with a sort of mind-brain dualism.

S1: If God forced me to bet my life on the truth or falsity of the doctrines I have advocated, I would bet fairly confidently that experience is fundamental, and weakly that experience is ubiquitous [pg. 350]"

GW1: I don’t believe consciousness is fundamental in the sense that it would exist without brains.

S1: Moot exactly what would undergird that dualism, but he alludes to process as fundamental: "Why is all this processing accompanied by an experienced inner life?" [pg. xi]

GW1: The “why” question may be inapplicable in a case like this. It may just be a brute fact that brains of some minimum complexity generate consciousness.

S1: Part and parcel of process philosophy: https://en.wikipedia.org/wiki/Process_philosophy Processes are, of course, ubiquitous, from ticking of clocks, to organs pumping blood and air, to synapses firing. But processes are not things in themselves -- they're progressions from some states to others. Moot of course which processes might "instantiate" consciousness -- maybe those underlying consciousness require hydrocarbon structures, and are incompatible with silicon ones -- but those processes, in themselves, seem a plausible candidate for half of Chalmers' "mind-body dualism".

GW1: Consciousness may require processes of neurons connected in some kind of pattern, currently unknown.


GW2: "He seems to be contradicting himself ..."

Intentionally or sardonically, I expect, to emphasize the wide disparity between what is known and what is still to be known -- if ever ....

GW3: "Right now I lean towards the idea that consciousness is an emergent phenomenon. ...."

From re-reading my notes, he certainly seems to endorse that as a plausible hypothesis or avenue of approach. You've probably run across these, but a couple of popularizations I've found illuminating have been these two:

https://www.amazon.ca/Complexity-Emerging-Science-Order-Chaos/dp/0671872346

https://www.amazon.ca/Complexity-Guided-Tour-Melanie-Mitchell/dp/0195124413/

GW4: "I agree with a sort of mind-brain dualism. ...."

Certainly sympathetic to it myself. Chalmers speaks fairly extensively about structure and dynamics, but process seems to encompass the latter.

GW5: "Consciousness may require processes of neurons connected in some kind of pattern, currently unknown."

Certainly a major mystery; that "final frontier" of Star Trek may well lie between our ears. But phenomena like sleep and anesthesia suggest some neurochemical processes "instantiate" consciousness while others don't; puzzle of course is what are the differences.


"I do not agree that consciousness 'requires recursive language and is learned.'"

Sorry you misunderstood, I said Consciousness requires recursive language and is learned.

From Jaynes:

Spatialization (space & time)

The fact that in our minds we sense our environment in space and time. Most important is the time sense that we have, which most likely does not exist in animal minds.

Excerption

We excerpt from the collection of possible attentions to a thing which comprises our knowledge of it. And this is all that it is possible to do since consciousness is a metaphor of our actual behavior.

The Analog 'I'

The metaphor we have of ourselves that can 'move about' vicariously in our 'imagination', 'doing' things that we are not actually doing. We imagine 'ourselves' 'doing' this or that, and thus 'make' decisions on the basis of imagined 'outcomes' that would be impossible if we did not have an imagined 'self' behaving in an imagined 'world'.

The Metaphor 'Me'

The analog 'I' is, however, not simply that. It is also a metaphor 'me'. As we imagine ourselves strolling down the longer path we indeed catch 'glimpses' of 'ourselves', as we did in the exercises of Chapter 1, where we called them autoscopic images. We can both look out from within the imagined self at the imagined vistas, or we can step back a bit and see ourselves perhaps kneeling down for a drink of water at a particular brook. There are of course quite profound problems here, particularly in the relationship of the 'I' to the 'me'.

Narratization

In consciousness, we are always seeing our vicarial selves as the main figures in the stories of our lives. In the above illustration, the narratization is obvious, namely, walking along a wooded path. But it is not so obvious that we are constantly doing this whenever we are being conscious, and this I call narratization. Seated where I am, I am writing a book and this fact is imbedded more or less in the center of the story of my life, time being spatialized into a journey of my days and years.

But it is not just our own analog 'I' that we are narratizing; it is everything else in consciousness. A stray fact is narratized to fit with some other stray fact. A child cries in the street and we narratize the event into a mental picture of a lost child and a parent searching for it. A cat is up in a tree and we narratize the event into a picture of a dog chasing it there. Or the facts of mind as we can understand them into a theory of consciousness.

Conciliation

This is modeled upon a behavioral process common to most mammals. It springs from simple recognition, where a slightly ambiguous perceived object is made to conform to some previously learned schema. We conciliate a new stimulus into our conception, or schema about it, even though it is slightly different.


GW1: "I do not agree that consciousness 'requires recursive language and is learned.'"

HM2: Sorry you misunderstood, I said Consciousness requires recursive language and is learned.

GW2: No, I did not misunderstand you. You are just mistaken in your claim. Consciousness is a feature of some organisms with brains and does not require language at all. For example, human babies have no language but they are conscious.

GW2: As far as the rest of your post, I don’t know if you were quoting Jaynes or paraphrasing him or just commenting on him, but anyway it is mostly baloney, so I won’t comment further on it. I think the problem here is that you just are not using a common definition of “consciousness.”


"I think the problem here is that you just are not using a common definition of 'consciousness.'"

Yep.


Perhaps the problem is not machines becoming smarter, and even striving to do more. The problem, demonstrated by students avoiding learning, is people striving to get dumber and do less.


Force-feed animals and they lose their appetite. Force-feed humans knowledge and they lose interest in learning.


A rather hazardous state of affairs. Reminds me of Wells' Time Machine:

"... Time Machine is interpreted in modern times as a commentary on the increasing inequality and class divisions of Wells' era, which he projects as giving rise to two separate human species: the fair, childlike Eloi, and the savage, simian Morlocks, distant descendants of the contemporary upper and lower classes respectively ..."

https://en.wikipedia.org/wiki/The_Time_Machine


One of my favorite books and metaphors. Who do you think are more conscious, the Eloi or the Morlocks? Which would you choose to be?


Very nice metaphor indeed. Think I read it as a comic book as a kid, saw a movie somewhat later, and finally read the book a few years ago.

Though not sure I entirely agree with Wikipedia's "upper and lower classes" interpretation even if that may be part of it. Sort of got the impression -- maybe from the earlier movie -- of the workers versus the "parasites", the "useless eaters", although that, of course, may be part and parcel of upper and lower classes.

But Aesop's fable about the ant and the grasshopper may be somewhat more to the point:

https://en.wikipedia.org/wiki/The_Ant_and_the_Grasshopper

In any case, hadn't ever thought of it in terms of more or less conscious -- maybe each more conscious of certain aspects of "reality" than the other. Question seems to be which group, and their consciousness, contributes more to keeping the process ticking along.


From my email feed: '10 reasons to worry about generative AI' https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html


I certainly do not have the philosophical or metaphysical background to really understand, but it seems like an indicator of consciousness is debating the meaning of consciousness.


Interesting read, quite agree with your closing "use [AI/ChatGPT] as a tool only and not a replacement of your mind".

Somewhat en passant, something of a thank you for your "The Believing Brain", particularly as I've had reason, and frequent occasion, over the past few years to tweet and post this passage thither and yon:

"As we saw in the previous chapter, politics is filled with self-justifying rationalizations. Democrats see the world through liberal-tinted glasses, while Republicans filter it through conservative shaded glasses. When you listen to both 'conservative talk radio' and 'progressive talk radio' you will hear current events interpreted in ways that are 180 degrees out of phase. So incongruent are the interpretations of even the simplest goings-on in the daily news that you wonder if they can possibly be talking about the same event. Social psychologist Geoffrey Cohen quantified this effect in a study in which he discovered that Democrats are more accepting of a welfare program if they believe it was proposed by a fellow Democrat, even if the proposal came from a Republican and is quite restrictive. Predictably, Cohen found the same effect for Republicans who were far more likely to approve of a generous welfare program if they thought it was proposed by a fellow Republican. In other words, even when examining the exact same data people from both parties arrive at radically different conclusions. [pg. 263]"

Maybe moot whether programs like ChatGPT will add to those "self-justifying rationalizations" or illuminate their problematic aspects and consequences.


Early adopters will survive... many or most of the rest will go extinct... Biden won't save anyone.



MS: My solution is to apply the Copernican Principle to myself: I’m not special. If your brain is wired up similar to mine, and I am sentient and self-aware, then very probably—let’s put it at 99.999% probable—so are you. As for determining whether or not Data on Star Trek is sentient—or whatever comes after ChatGPT—I am open to suggestions as I don’t know.)

GW: I think the “wired up similarly” criterion is correct, but I would add “behaves similarly” also. I doubt that Data is sentient because I doubt that he is wired up similarly.


Does ChatGPT, or any other AI, have microtubules? Roger Penrose has a far more interesting take on consciousness than a complicated Speak & Spell 'learning' to think outside the box of its training data.


Penrose and his collaborator have not demonstrated that microtubules have anything to do with sentience, consciousness, or thinking. Their ideas on this are "fringe" to most neuroscientists, psychologists, and philosophers of mind.


To 'Whatever the “it” is that is coming, it is surely not sentience, but it is impressive.', I skeptically question the assertion that AI will fail to replicate human behavior. That is arguably the existential threat posed by machine learning: not only can it replicate human behavior and run amok, just like primates, but it can attain power that few people comprehend.

I recall four movies (or franchises) addressing the theme: Terminator, Blade Runner, The Matrix, Transcendence. There are doubtless others. Predictive science fiction, or far-fetched science fantasy?


I agree that there are dangers with AI. So, it is important now to devise guardrails, fail-safes, laws, and regulations to limit the capabilities of AI.


I think regulating AI is unrealistic. At present, a teenager with cloud machine access could deploy a learning machine resembling Leeloo in the Fifth Element. I'd hesitate to cripple defenses against unregulated actors like the ChiComs. We have such stellar successes as regulating gain-of-function genetic research. We might fault dysfunctional regulation for Chernobyl, but how do we excuse Three Mile Island? Are nukes insufficiently regulated? We fail to regulate even intermediate missiles, what about the latest hypersonics? Or the Russian autonomous doomsday submarines?


MRF: I think regulating AI is unrealistic.

GW: I totally disagree. I don’t know why you would think that.

MRF: At present, a teenager with cloud machine access could deploy a learning machine resembling Leeloo in the Fifth Element.

GW: No regulation is 100% effective, but it can still be highly effective. It can reduce the severity, duration, and frequency of harm.

MRF: I'd hesitate to cripple defenses against unregulated actors like the ChiComs.

GW: I don’t know what that is, so I won’t comment on it.

MRF: We have such stellar successes as regulating gain-of-function genetic research. We might fault dysfunctional regulation for Chernobyl, but how do we excuse Three Mile Island? Are nukes insufficiently regulated? We fail to regulate even intermediate missiles, what about the latest hypersonics? Or the Russian autonomous doomsday submarines?

GW: Your erroneous assumption is that regulation must be 100% effective to be useful. As an old philosopher friend once told me “Don’t let the perfect be the enemy of the good.”


I entertain a view of people as biological analog computers that are learning machines. The intellectual basis is briefed by https://www.susanblackmore.uk/reviews/review-of-the-robots-rebellion/ I view regulating AI as analogous to regulating people.

I agree that some regulation can be advantageous, but not for emerging science. If we equate people and computers, I think that suggests the complexity and difficulty, orders of magnitude beyond regulating mindless machines like WMD and nuclear power plants. Even disasters of fossil fuel explosions or Bhopal inform.

AI is a knowledge genie that, like nuclear weapons tech, has already escaped the bottle. So like gun regulation, efforts will disarm law-abiding people, rendering them defenseless before megalomaniacs.

The Chinese Communists are using AI to tighten their grip on people. As a freedom-loving individualist, I abhor the tyranny of authoritarian oppression using the manipulative deception of an overriding collective interest. Like the Paris Climate Agreement, I suspect regulating AI would be co-opted by the ChiComs to ensure their megalomania expands at my expense.

I agree that we should suffer some regulation to secure greater individual freedom, e.g., from crime and vigilantism. We have a rich tradition of civil and criminal common law, augmented by statute law, that has long been infiltrated and subverted by authoritarian megalomaniacs. That explains much mismanagement and seemingly senseless regulation.


MRF replied to your comment on ChatGPT, Conspiracies, and AI-Human Relations.

MRF2: I entertain a view of people as biological analog computers that are learning machines.

GW2: I think human persons are both analog and digital.

MRF2: The intellectual basis is briefed by https://www.susanblackmore.uk/reviews/review-of-the-robots-rebellion/

GW2: I have read some of Blackmore’s work, and in general I mostly agree with her.

MRF2: I view regulating AI as analogous to regulating people.

GW2: I can understand that analogy.

MRF2: I agree that some regulation can be advantageous, but not for emerging science.

GW2: I disagree. Some regulation can be advantageous even for an emerging science.

MRF2: If we equate people and computers, I think that suggests the complexity and difficulty, orders of magnitude beyond regulating mindless machines like WMD and nuclear power plants. Even disasters of fossil fuel explosions or Bhopal inform. AI is a knowledge genie that, like nuclear weapons tech, has already escaped the bottle.

GW2: Regulation of nuclear weapons and power plants has already been useful. We certainly need some regulation of AI.

MRF2: So like gun regulation, efforts will disarm law-abiding people, rendering them defenseless before megalomaniacs.

GW2: No, gun regulation need not disarm law-abiding people at all.

MRF2: The Chinese Communists are using AI to tighten their grip on people. As a freedom-loving individualist, I abhor the tyranny of authoritarian oppression using the manipulative deception of an overriding collective interest.

GW2: It is wise for a community, represented by a democratic government, to control the behavior of persons within the community TO SOME DEGREE. AI is a tool, like most or all tools, which can be used for good or ill.

MRF2: Like the Paris Climate Agreement, I suspect regulating AI would be co-opted by the ChiComs to ensure their megalomania expands at my expense.

GW2: Maybe, maybe not. We should take that chance and regulate it.

MRF2: I agree that we should suffer some regulation to secure greater individual freedom, e.g., from crime and vigilantism.

GW2: And to secure an absence or reduction of harm.

MRF2: We have a rich tradition of civil and criminal common law, augmented by statute law, that has long been infiltrated and subverted by authoritarian megalomaniacs. That explains much mismanagement and seemingly senseless regulation.

GW2: But overall, democratic governments have been a very valuable thing for humanity. They must regulate for the individual and common good.


Keith Stanovich (among others) explains how logical processing in the biological brain requires enormous resources that slow processing. So most results attributed to reasoning actually follow heuristic or automatic processing.

I suppose that analog processing can be reduced to a digital model, but that appears unknown. Have I mentioned that there's an intriguing suggestion of the brain additionally supporting a transceiver function? If true, that might help explain some parapsychology.

When in doubt, I prefer to let crowd wisdom emerge from the bottom-up rather than trust a top-down regulation. When Mao lost his mind, the Chinese paid the price for reposing excessive trust in an authoritarian, or so Robert Lawrence Kuhn has reported.


Good one. On the topic of conspiracies -- could you comment on Jeff Gerth's recent CJR article?


Criticizing the government (and the press)? That's not conspiracy, that's sedition!

I miss the USA.
