What does AI sentience really mean?

Addressing the fears of AI adoption in a creative field is one thing, but further down the rabbit hole, it seems, lie myriad miseries for those who wake. And as fearful humans, it is worth weighing the probabilities we can fathom before this AI butterfly wakes to peer into our glazed pupils and stub our puny fathoming against the unfathomable.

The whole “AI has become sentient” conversation was interesting, but I could not understand how that could be, or what it even meant. So here are some things I found.

— As models crossed the ~10B-parameter mark, they began exhibiting emergent behaviors that researchers had not anticipated. GPT-3, for example, became capable of solving 3-, 4-, and 5-digit arithmetic problems that did not appear in the dataset it was trained on.

“So an AI is doing math?” you might think. “Isn’t that what they’re supposed to do? What’s the big deal?”

The big deal is that there was no way for GPT-3 to have learned to solve these problems unless, during training, it inferred something about our world beyond what we explicitly trained it for, and we had no idea it had done so until after the model was created.
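To make the claim concrete: the GPT-3 paper probed arithmetic by feeding the model natural-language question prompts and checking its completions against the true answers. Here is a minimal sketch of that kind of probe; the function names (`make_addition_prompt`, `score`) and the prompt wording are illustrative assumptions, not the paper's exact harness, and the model call itself is left out.

```python
import random

def make_addition_prompt(n_digits: int, rng: random.Random):
    """Build one n-digit addition question in the natural-language
    style used to probe GPT-3, plus its ground-truth answer."""
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return f"Q: What is {a} plus {b}? A:", a + b

def score(model_answer: str, true_answer: int) -> bool:
    """Exact-match scoring: the completion must equal the true sum.
    A model that merely memorized pairs would fail on held-out ones."""
    return model_answer.strip() == str(true_answer)

rng = random.Random(0)  # fixed seed so the probe is reproducible
prompt, answer = make_addition_prompt(3, rng)
# `prompt` would be sent to the model; its completion goes to score().
```

The point of the probe is that with 5-digit operands there are far too many pairs for the training data to have covered them all, so above-chance accuracy suggests the model induced something like an addition procedure rather than a lookup table.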

The Model Is The Message

— The debate over whether LaMDA is sentient or not overlooks important issues that will frame debates about intelligence, sentience, language and human-AI interaction in the coming years.

Like most other observers, we do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be. His inference is clearly based in motivated anthropomorphic projection. At the same time, it is also possible that these kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.
