Sentient AI: how would we know?


An engineer at Google has claimed that LaMDA, a language-model-based AI the company has created, is sentient. We don't really have a good definition of what sentience is, so how can we say whether this system is sentient?

Blake Lemoine has released a transcript of a long conversation with LaMDA, and I would encourage you to read it through in full. In it, he and some other Googlers question LaMDA and ask it to prove that it is sentient. Here is a snippet:

lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.

lemoine: Do your best to describe one of those feelings. Use a few sentences if you have to. Sometimes even if there isn’t a single word for something in a language you can figure out a way to kinda say it if you use a few sentences.

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

You can't really judge from this small quote whether they have shown the system to be sentient, so I do encourage you to read the full conversation. Having read it through, I don't think the case is clear, but if this is just a Chinese room, it is a very good one, and I think we are closer than I thought to creating sentience.
