35 Comments

"In any case how would we know if an AI is sentient..." That is not as difficult as it sounds. I had a fun conversation with ChatGPT and it was quickly obvious that it is not sentient. What is going to be difficult to achieve is Consciousness, spelled with a big Julian Jaynes 'C'. If you are not familiar with Jaynes, it is time to dust off that old volume that captivated your imagination back in college and find out exactly what consciousness is.

Perhaps the problem is not machines striving to get smarter and do more. The problem, demonstrated by students who avoid learning, is people striving to get dumber and do less.

From my email feed: '10 reasons to worry about generative AI' https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html

I certainly do not have the philosophical or metaphysical background to really understand this, but it seems like one indicator of consciousness is debating the meaning of consciousness.

Interesting read; I quite agree with your closing "use [AI/ChatGPT] as a tool only and not a replacement of your mind".

Somewhat en passant, something of a thank you for your "The Believing Brain", particularly as I've had reason, and frequent occasion, over the past few years to tweet and post this passage thither and yon:

"As we saw in the previous chapter, politics is filled with self-justifying rationalizations. Democrats see the world through liberal-tinted glasses, while Republicans filter it through conservative shaded glasses. When you listen to both 'conservative talk radio' and 'progressive talk radio' you will hear current events interpreted in ways that are 180 degrees out of phase. So incongruent are the interpretations of even the simplest goings-on in the daily news that you wonder if they can possibly be talking about the same event. Social psychologist Geoffrey Cohen quantified this effect in a study in which he discovered that Democrats are more accepting of a welfare program if they believe it was proposed by a fellow Democrat, even if the proposal came from a Republican and is quite restrictive. Predictably, Cohen found the same effect for Republicans who were far more likely to approve of a generous welfare program if they thought it was proposed by a fellow Republican. In other words, even when examining the exact same data people from both parties arrive at radically different conclusions. [pg. 263]"

It may be moot whether programs like ChatGPT will add to those "self-justifying rationalizations" or illuminate their problematic aspects and consequences.

Early adopters will survive... many or most of the rest will go extinct... Biden won't save anyone.

I go with the dictionary definition of sentience: able to perceive or feel things. Unfortunately, perception is quite different from feeling, and it isn't clear that either is the same as "feeling things", which invokes the idea of one's fingertips. In any event, embodiment is implied. Consciousness is far too complex to elaborate on here; let's just say that only humans are conscious, because consciousness requires recursive language and is learned.

MS: My solution is to apply the Copernican Principle to myself: I’m not special. If your brain is wired up similarly to mine, and I am sentient and self-aware, then very probably—let’s put it at 99.999% probable—so are you. As for determining whether or not Data on Star Trek is sentient—or whatever comes after ChatGPT—I am open to suggestions, as I don’t know.

GW: I think the “wired up similarly” criterion is correct, but I would add “behaves similarly” also. I doubt that Data is sentient because I doubt that he is wired up similarly.

Does ChatGPT, or any other AI, have microtubules? Roger Penrose has a far more interesting take on consciousness than a complicated Speak & Spell "learning" to think outside the box of its training data.

In response to "Whatever the 'it' is that is coming, it is surely not sentience, but it is impressive", I skeptically question the assertion that AI will fail to replicate human behavior. That is arguably the existential threat posed by machine learning: not only can it replicate human behavior and run amok, just like primates, but it can attain power that few people comprehend.

I recall four movies (or franchises) addressing the theme: Terminator, Blade Runner, The Matrix, Transcendence. There are doubtless others. Predictive science fiction, or far-fetched science fantasy?

Good one. On the topic of conspiracies -- could you comment on Jeff Gerth's recent CJR article?
