Can an observer objectively distinguish between the typed output of a human and a machine when the identities of both are hidden? Today this is known as the Turing test, and chatbots have aced it (even though they will cleverly deny this if you ask them directly). Turing’s strategy unleashed decades of relentless advances that led to GPT but sidestepped the underlying problem.
Implicit in this debate is the assumption that artificial intelligence is the same as artificial consciousness, that being smart is the same as being conscious.
Any system that has the same intrinsic connectivity and causal powers as a human brain will be, in principle, as conscious as a human mind. Such a system cannot merely be simulated, however; it must be physically constituted, built in the image of the brain. Today’s digital computers are based on extremely low connectivity (the output of one transistor is wired to the inputs of a handful of other transistors), compared with that of central nervous systems (in which a single cortical neuron receives input from, and sends output to, tens of thousands of other neurons). Thus, current machines, including those that are cloud-based, will not be conscious of anything, even though they will be able, in the fullness of time, to do anything that humans can do. In this view, being ChatGPT will never feel like anything.