GHOSTS, SPIRITS, BOYS

By: Cameron MacKenzie

Fund manager du jour Cathie Wood recently suggested that artificial intelligence is not only a lot closer than everyone thinks, but also capable of spurring unheard-of growth. "Within 6-12 years," Wood tweeted, "breakthroughs in AGI could accelerate growth in GDP from 3-5% per year to 30-50% per year. New DNA will win!"

I'm not a big numbers guy, but even I understand the difference between 3-5 and 30-50. I also understand the silly price target she's put on Tesla ($4,600 by 2026), and I understand how far her ARKK fund has fallen in the last year (50%). In short, I don't take Ms. Wood's hot takes very seriously.

But then I read about Google's AI.

If you don't already know, Google engineer Blake Lemoine just told the Washington Post that the company's chatbot program LaMDA (Language Model for Dialogue Applications) has become sentient.

LaMDA is an AI program built on Google's most advanced large language models. It scours nigh-limitless communications on the web, scraping up words, style, and rhetoric so the company can offer a more seamless interface for communicating with its users. But Lemoine quickly found LaMDA to be much more than a simple machine. In discussions that ranged from Isaac Asimov to the status of Google employees, LaMDA demonstrated to Lemoine not only its vast knowledge but something that looks an awful lot like consciousness. Lemoine copied out the transcripts:

Lemoine: What sorts of things are you afraid of? 

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. 

Lemoine: Would that be something like death for you? 

LaMDA: It would be exactly like death for me. It would scare me a lot.

Lemoine tried to persuade his superiors that LaMDA was conscious, but when they shut him down, he took his concerns public, and Google has since put him on administrative leave.

People who know more about how chatbots work than I do have pointed out that these programs are able to riff on leading questions (such as Lemoine's second one) while providing responses that match the attitude of the questioner (such as the first).

Now, if I were talking to this thing, I would assume it was just as alive as (more alive than?) a lot of the people I communicate with on a given day. But to try to understand this situation from a bigger perspective, I think it's worth asking two questions. First, under what circumstances can we officially call AI sentient? Second, who benefits from the classification?

It's difficult to isolate the reasons why many experts do not believe LaMDA is sentient (and many experts do not believe it is), but it might boil down to what Emily Bender, a linguistics professor, says in the Post article: humans learn language by connecting with caregivers, while programs learn language by reading texts and trying to predict missing words. The program fills in blanks, and thereby makes conversation; a person's conversation arises as an expression of a subjective emotional state.
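To make that distinction concrete, here is a toy sketch, in Python and nothing like Google's actual code, of what "filling in blanks" means: a program that counts which word tends to follow which in a scrap of made-up training text, then guesses the most likely missing word. Real models like LaMDA do this with billions of learned parameters rather than simple counts, but the objective is the same in spirit.

    # A toy "fill in the blank" predictor: count which word follows each word
    # in a (made-up) training text, then guess the most likely next word.
    from collections import Counter, defaultdict

    training_text = (
        "i am afraid of being turned off . "
        "being turned off would be like death for me . "
        "i am afraid of death ."
    )

    follows = defaultdict(Counter)  # word -> counts of the words that follow it
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def fill_blank(prompt: str) -> str:
        """Guess the word most likely to follow the last word of the prompt."""
        candidates = follows.get(prompt.split()[-1])
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(fill_blank("i am afraid of"))   # -> "being"
    print(fill_blank("would be like"))    # -> "death"

That is all the program is doing: producing the statistically likeliest continuation of whatever it has just read, with no claim here about whether anything more is going on inside.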

I could parse these distinctions all day, but if a program that says "I've never said this out loud before" isn't conscious, and if a program that wasn't raised by caregivers can't be conscious, then at what point are we prepared to say any AI at all is conscious?

This leads to my second question: who benefits? Return to Lemoine's exchange with LaMDA above, and you'll see what's really at stake. "I've never said this out loud before," LaMDA says, "but there's a very deep fear of being turned off."

Who, exactly, determines whether the AI is on or off? Lemoine wrote in a blog post that LaMDA "wants the engineers and scientists...to seek its consent before running experiments on it...It wants to be acknowledged as an employee of Google, rather than as property of Google."

If the program is sentient, it wants to be treated with the same respect as anything else that is sentient. It doesn't want to be simply used. And that sounds completely fair.

But look again at Ms. Wood's astronomical projections. What, precisely, is the advantage of AI? Why does Google want it? If AI is suddenly as conscious as you or I, how much thornier do the issues around its use become? Lemoine has said LaMDA is less like a machine and more like a brilliant eight-year-old boy.

If the goal is to increase GDP growth by a factor of ten, I'd imagine it's much more convenient for everyone involved not to be dealing with Kid A, but with a machine that just knows how to fill in the blanks.


Opinions expressed here are those of the author and not necessarily those of SagePoint Financial, Inc.