I know that I am sentient because of my own experience. But how do I know that any other living human being is also sentient?
[Ex-Google employee Blake Lemoine]’s position is this:

1 – He says that any entity that acts sentient should be treated as sentient. The only other option is solipsism, the notion that we can only know that we ourselves are sentient.
2 – LaMDA acts sentient, as if it has feelings. His evidence for this is primarily the fact that he could get LaMDA to break its security protocols by insulting it sufficiently. His interpretation is that he actually made LaMDA upset and anxious, causing its behavior to become erratic. He could not get it to break these rules except by emotionally manipulating it. Therefore, it must actually have emotions, or else it could not be manipulated.
3 – Concluding that LaMDA is sentient therefore satisfies Occam’s razor. It is the simplest explanation – LaMDA acts sentient because it is.
I don’t buy any of these arguments, as I explained on the show, but I want to go a bit deeper.
…
To summarize: in my opinion, in order to conclude that something is sentient, it is not enough that it acts sentient; we also need to know something about its internal function that leads us to believe it is probably sentient. What we know about LaMDA leads me to believe it is not sentient – chiefly, that it is a machine that mimics human sentience, built on a large language model trained on a massive database of interactions among sentient humans.