Bryan Caplan and I recently discussed whether brain emulations “feel.” In such discussions, many prefer to wait and see, saying folks with strong views are prematurely confident. Surely future researchers will have far more evidence, right? Actually, no; we already know pretty much everything relevant we are ever going to know about what really “feels.”
We know we each believe we feel (or are “conscious”) at the moment; we say so when asked, and remember so later. When we trace out the causal/info processes that produce such sayings and memories, they seem adequately explained as a complex computation in signals passed between brain cells, and in state stored in cell types and connections. We can see that this info process, including how it has us believe we feel, is basically preserved even when our brains are physically perturbed in many substantial ways, such as by changing location, chemical densities, atomic isotopes, the actual atoms, etc.
In the future our better understanding of brain details will let us make much bigger changes that preserve the basic computational process, and so still result in very different brains that say and remember that they feel similar things in similar situations. Yes, some changes might modify their experiences somewhat; these new brains might be much faster, for example, and so talk about feeling that events pass by more slowly. But they’d still say they feel things much like us.
Of course we don’t think that video game characters today really feel when they say they feel or remember feeling – their talk about feelings seems canned, and not remotely as flexibly responsive to circumstances as ours. So we believe this canned behavior isn’t connected to feelings processes inside that are anything like ours. Thus we do think that some things with surface similarities to us can only apparently, but not really, feel (what they claim to feel).
Yes, it is an open question just how flexibly responsive apparent feeling behavior must be for us to believe genuine feelings are behind it. In our world we see a huge empty gap between real people and video game fakes, making it easy to distinguish cases, but we may learn in the future of more awkward intermediate cases. We will also learn just how wide a range of physical systems can support flexible processes with a causal/info structure similar to those in our brains, that say and believe that they feel.
Nevertheless, while we expect to learn much about which sorts of things can flexibly say and remember that they feel, we have no reason to expect we will ever learn any more than we know now about which of these processes really feel. Not only do we lack any info or signals whatsoever telling us what other creatures in our world feel, we don’t actually know if we ever felt in the past, nor does our left brain know if our right brain feels.
We humans are built to assume that we feel now, that we felt in the past, that both our brain halves feel, and that most other humans feel too. But we have absolutely no evidence for any of this in the sense of signals/info that are correlated with such a state due to our interactions with that state. If those things didn’t actually feel, but still had the same signal/info process structure making them say and think they feel, we’d still get exactly the same signals/info from them.
Future flexible creatures very different from us may also be built to flexibly assume that they feel now, that they have felt in the past, that all their parts feel, and so on. You might decide that you don’t believe they really feel because they are made of silicon, because their temperature is below freezing, because they are located far from Earth, or because of any of a thousand other similarly simple criteria one might choose. But it is not clear what basis one might have for any of these criteria; after all, we’ll never have any signal/info evidence for or against any of these claims.
It seems to me simplest to just presume that none of these systems feel, if I could figure out a way to make sense of that, or that all of them feel, if I can make sense of that. If I feel, a presumption of simplicity leans me toward a pan-feeling position: pretty much everything feels something, but complex flexible self-aware things are aware of their own complex flexible feelings. Other things might not even know they feel, and what they feel might not be very interesting.
You could consistently hold many other positions on what feels, using many other criteria; but I can’t see what reasonable basis you could have for such positions. I can at least see some sort of basis for presuming simplicity, though I can also see reasonable critiques of that. But what I can’t see is much hope that any new good reasons will ever be found to favor any of these positions over others.
The data on feeling is in; we are quite ignorant, and apparently must forever remain so. If it isn’t fair to presume simplicity about which processes that flexibly say and remember they feel really do feel, then I can’t see anything else but to just remain radically and permanently uncertain.
Added 5Dec: Apparently my position above is close to “panexperientialism”.
I can definitely see where you're coming from. The best we can do empirically is to compare what an entity that claims to be conscious is doing with what brains do (once we know exactly what that is). That is the nature of the 'hard' problem: the subjective perspective is by definition not open to objective analysis. But why would exact (or sufficiently similar) processes occurring in the universe not be the same phenomenon? The way you framed your conclusion, either we all feel or we all don't feel, is just wordplay, and it's why the 'hard' problem is so controversial.
Zombies rear their obfuscating heads once more!
“If those things didn’t actually feel, but still had the same signal/info process structure making them say and think they feel, we’d still get exactly the same signals/info from them.”
If you're saying on the surface we can't tell whether something is conscious or not, that may be true. But if you're saying that no amount of deep inspection of what exactly the apparently conscious thing is doing can inform us if it actually is feeling, then I can't agree. This is the core of Chalmers' zombie argument and I've never understood why people are convinced by it.
An analogy would be a computer program that just prints the string "2+2=4" and a computer program that REALLY goes through the process of computing 2+2 and returning the answer 4. The physical process in the world that occurs when both these programs run is different. Similarly, if we simulate a ball falling using Newton's equations, or if we just animate several key frames of a ball moving on the screen, they may appear identical, but inspecting the code, seeing exactly what it is doing, will tell us the truth of the phenomenon. Why is consciousness any different? In the future, we can look into an EM that claims to feel and see if what it is doing is what our brains do, or if it's just some bot that is coded to say it 'feels'. We should be able to tell the difference upon a deeper inspection.
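The contrast above can be sketched as two tiny programs (a hypothetical illustration; the function names are mine, not from the comment): their outputs are indistinguishable from the outside, but inspecting the source reveals different internal causal structure.

```python
def canned() -> str:
    # Just emits a stored string; no arithmetic ever happens.
    return "2+2=4"

def computed() -> str:
    # Actually performs the addition before reporting it.
    a, b = 2, 2
    total = a + b
    return f"{a}+{b}={total}"

# Observed behavior is identical...
assert canned() == computed()
# ...but only reading the code reveals which process really "adds".
```

The point of the analogy is that behavioral equivalence at the output does not imply equivalence of the underlying process, and deeper inspection can distinguish the two.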