Models of Sentience
This sub-section is about models of sentience, identity and substance. They are organized as follows:
Models of sentience (What is sentience like? Where does it come from?):
Materialism
Physicalism
Functionalism
Idealism
Emergentism
Immersionism
Panpsychism
Platonism
Models of personal identity (Who feels?):
Open Individualism
Empty Individualism
Closed Individualism
Models based on the number of types of substance (How many types of substance exist?):
Monism
Dualism
Trialism
Quadrialism
Luke and Mr. Tomasik found that they agreed about the following:
- Physicalism and functionalism about consciousness.
- Specifically, Mr. Tomasik endorses “Type A” physicalism, as described in his article “Is There a Hard Problem of Consciousness?” Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism.
- Consciousness will likely turn out to be polymorphic, without a sharp dividing line between conscious and non-conscious systems, just like (say) the line between what does and doesn’t count as “face recognition software.”
- Consciousness will likely vary along a great many dimensions, and Luke and Mr. Tomasik both suspect they would have different degrees of moral caring for different types of conscious systems, depending on how each particular system scores along each of these dimensions.
A core disagreement
In Luke’s view, a system needs to have certain features interacting in the right way in order to qualify as having non-zero consciousness and non-zero moral weight (if one assumes consciousness is necessary for moral patienthood).
In Mr. Tomasik’s view, various potential features (e.g. ability to do reinforcement learning or meta-cognition) contribute different amounts to a system’s degree of consciousness, because they increase that system’s fit with the “consciousness” concept, but all things have non-zero fit with the “consciousness” concept.
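One rough way to picture the disagreement is as two different scoring functions. The features, weights, and threshold in the sketch below are invented for illustration and are not taken from either author; only the shape of the two views matters: one gates to exactly zero when certain required features are absent, while the other assigns every system a strictly positive degree of fit.

```python
# Illustrative sketch only: the features, weights, and threshold below are
# hypothetical, not taken from either author.

REQUIRED = ["global_workspace", "self_model", "valence"]  # hypothetical gating features
WEIGHTS = {"reinforcement_learning": 0.3, "meta_cognition": 0.4,
           "global_workspace": 0.2, "self_model": 0.05, "valence": 0.05}

def threshold_view(system):
    """Luke-style: zero unless certain features are present and interacting."""
    if all(system.get(f, 0.0) > 0.5 for f in REQUIRED):
        return sum(w * system.get(f, 0.0) for f, w in WEIGHTS.items())
    return 0.0

def graded_fit_view(system):
    """Tomasik-style: every feature adds some fit, and nothing scores exactly zero."""
    baseline = 1e-6  # "all things have non-zero fit with the concept"
    return baseline + sum(w * system.get(f, 0.0) for f, w in WEIGHTS.items())

rock = {}
human = {f: 0.9 for f in WEIGHTS}
print(threshold_view(rock), graded_fit_view(rock))    # 0.0 vs. a tiny positive number
print(threshold_view(human), graded_fit_view(human))  # both clearly positive
```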
Luke suggested that this core disagreement stems from the principle described in Mr. Tomasik’s “Flavors of Computation are Flavors of Consciousness”:
It’s unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, “consciousness” is a concept — a “cluster in thingspace” — and all points in thingspace are less than infinitely far away from the centroid of the “consciousness” cluster. By a similar argument, we might say that any system displays nonzero similarity to any concept (except maybe for strictly partitioned concepts that map onto the universe’s fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.
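To make the “cluster in thingspace” reasoning concrete, here is a minimal sketch with invented dimensions and coordinates: because every point sits at some finite distance from the cluster’s centroid, a distance-based similarity score is greater than zero for everything, which is the sense in which panpsychism falls out of this way of treating concepts.

```python
import math

# Invented "thingspace" coordinates along three made-up dimensions; only the
# structure of the argument matters: finite distance implies nonzero similarity.
centroid = (0.9, 0.8, 0.95)  # centroid of the "consciousness" cluster
points = {
    "human":      (0.85, 0.90, 0.90),
    "thermostat": (0.05, 0.00, 0.10),
    "rock":       (0.00, 0.00, 0.00),
}

def similarity(point, center):
    """Similarity decays with distance but stays positive for any finite distance."""
    return math.exp(-math.dist(point, center))

for name, point in points.items():
    print(name, round(similarity(point, centroid), 4))
# Every score is > 0; systems differ only in degree of fit, never in kind.
```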
And if so, should we be continuing to develop it?
I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I’m seeing more and more articles about its creation and the many amazing things it has been able to do so far, but none that discuss the ethical implications of its creation or that reassure me that its existence is not a bad thing. It seems to me that the system is now able to do many complex things, and it worries me that it might also (eventually) be able to experience something akin to suffering.
See also: Is GPT-3 a step to sentience?
Danny Donabedian wrote:
Assuming a panpsychist view of consciousness for a moment, with the potential for suffering subroutines to extend to fundamental physical particles and the simplest of physical systems, I am unsure whether more complex systems, like those found in living creatures and their neural networks, or even unicellular organisms, increase or decrease net suffering.
If it turns out that net universal suffering decreases when matter is incorporated into more complex (life-like) systems, I guess we ought to switch to having more babies.
While I don’t necessarily believe that, I do believe it can’t be excluded as a possibility, given the significant s-risk involved in making such a mistake if it turns out to be true. I guess a third option would be that the suffering of simple systems is no greater or less than that of complex systems.
Though if I had to give a reason, per se, why one might think simple systems contain more suffering: perhaps a cessation of suffering, a tranquilism, or knowledge/certainty of its attainment, is only observed in certain complex systems and is an emergent property of those systems.
Timothy Chan wrote:
Does the idea that complexity decreases net suffering rely on the consciousnesses of simple systems being cancelled by being incorporated into a complex one, though? If that’s the idea, I’m not too sure about the intuition behind it. It seems difficult to draw a boundary around a system and say that it’s the ‘terminal’ system that cancels everything simpler.
Danny Donabedian wrote:
Even if their consciousnesses weren’t fully canceled during such an incorporation, perhaps the simpler systems are affected in some other positive manner (with the suffering component of consciousness broadcast upstream?). But I agree with you that carving up boundaries is challenging and, at least for the time being, not possible.
Manu Herrán wrote:
Another (scary) very similar hypothesis is that the basic state of simple matter is intense suffering, and that it strives toward more complex systems in an attempt to avoid that suffering. I put that idea, long ago, into the lyrics of a song called Hypothesis Mass by a Death Metal band called Mortem Tirana.
Danny Donabedian wrote:
That’s similar to, but a better/updated version of, the Buddhist model of realms, in which hellishness seems to be very simple and correlates with decreased complexity and intense craving, while its converse, happy godliness, is associated with maximal complexity and less overall craving compared to the lower realms.
—
This conversation took place in July 2020 in a thread started by Wolf Bullmann in the “Sounds like something Brian Tomasik would be against but ok” private Facebook group. Excerpts are reproduced with the consent of the authors.
This essay explains my version of an eliminativist approach to understanding consciousness. It suggests that we stop thinking in terms of “conscious” and “unconscious” and instead look at physical systems for what they are and what they can do. This perspective dissolves some biases in our usual perspective and shows us that the world is not composed of conscious minds moving through unconscious matter, but rather, the world is a unified whole, with some sub-processes being more fancy and self-reflective than others. I think eliminativism should be combined with more intuitive understandings of consciousness to ensure that its moral applications stay on the right track.
…
My version of eliminativism does not say that consciousness doesn’t exist. […] Rather, eliminativism says that “consciousness” is not the best concept to use when talking about what minds do.
Compare an insect with a human. Rather than imagining the human as conscious and the insect as not, or even the human as just more conscious than the insect, instead picture the two as you would a professional race car versus a child’s toy car.
Compare your brain with another part of your nervous system — say the peripheral nerves in your hand. Why is your brain considered “conscious” and your hand not? […] The eliminativist approach encourages us to stop thinking about neural operations as “unconscious” or “conscious”.
Those who value conscious welfare […] aim to attribute degrees of sentience to different parts of physics and then value them based on the apparent degree of happiness or suffering of those sentient minds. Because it’s mistaken to see consciousness as a concrete thing, sentience-based valuation, like the other valuation approaches, involves a projection in the mind of the person doing the valuing. But this shouldn’t be so troubling, because metaethical anti-realists already knew that ethics as a whole was a projection by the actor onto the world. The eliminativist position just adds that the thing being (dis)valued, consciousness, is itself something of a fiction of the moral agent’s invention.
Actually, calling “consciousness” a fiction is too strong.
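As a toy rendering of the valuation procedure described above, the sketch below uses entirely invented numbers: the valuer attributes a degree of sentience to each system, estimates its apparent welfare, and computes a sentience-weighted sum. The point is that both columns of numbers are the valuer’s own projection rather than a read-off of objective fact.

```python
# Toy sentience-weighted valuation; every number below is invented for illustration.
# "sentience": the degree of sentience the valuer attributes to the system (0..1)
# "welfare":   apparent happiness (+) or suffering (-) attributed to that system
systems = [
    {"name": "human",     "sentience": 1.000, "welfare": +0.50},
    {"name": "insect",    "sentience": 0.050, "welfare": -0.20},
    {"name": "bacterium", "sentience": 0.001, "welfare": -0.01},
]

total_value = sum(s["sentience"] * s["welfare"] for s in systems)
print(total_value)  # the sign and size depend entirely on the attributed weights
```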
A humanoid doll that blinks might look more conscious than a fruit fly, but the 100,000 neurons of the fruit fly encode a vastly more complex and intelligent set of cognitive possibilities than what the doll displays. Judging by objective criteria given sufficient knowledge of the underlying systems is less prone to bias than phenomenal-stance attributions.
I think attacking the core confusion about consciousness itself is quite important, for the same reason that it’s important to break down the confusions behind theism.
Viewing consciousness as a definite and special part of the universe is a systematic defect in one’s world view, and removing it does have practical consequences.
Looking at the universe from a more physical stance has helped me see that even alien artificial intelligences are likely to matter morally, that plants and bacteria have some ethical significance, and that even elementary physical operations might have nonzero (dis)value.
A paper from the Quantum Gravity Research institute proposes there is an underlying panconsciousness.
The physical universe is a “strange loop” says the new paper titled “The Self-Simulation Hypothesis Interpretation of Quantum Mechanics” from the team at the Quantum Gravity Research, a Los Angeles-based theoretical physics institute founded by the scientist and entrepreneur Klee Irwin. They take Bostrom’s simulation hypothesis, which maintains that all of reality is an extremely detailed computer program, and ask, rather than relying on advanced lifeforms to create the amazing technology necessary to compose everything within our world, isn’t it more efficient to propose that the universe itself is a “mental self-simulation”? They tie this idea to quantum mechanics, seeing the universe as one of many possible quantum gravity models.
The Wolfram Physics Project is an ambitious attempt to develop a new physics of our universe.
FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like “life,” “justice,” and “virtue” are messy and vague.
And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.
The necessary features for consciousness in prominent physical theories of consciousness that are actually described in terms of physical processes do not exclude panpsychism, the possibility that consciousness is ubiquitous in nature, including in things which aren’t typically considered alive. I’m not claiming panpsychism is true, although this significantly increases my credence in it, and those other theories could still be useful as approximations to judge degrees of consciousness. Overall, I’m skeptical that further progress in theories of consciousness will give us plausible descriptions of physical processes necessary for consciousness that don’t arbitrarily exclude panpsychism, whether or not panpsychism is true.
The non-eliminativist view of consciousness is the view that consciousness is real and that its existence cannot reasonably be doubted. All the beliefs we are aware of appear in consciousness, and hence to express disbelief in the existence of consciousness amounts to reading off and trusting at least some aspect of one’s conscious experience – the thing believed not to exist – which renders such disbelief nonsensical. To deny the existence of consciousness, the non-eliminativist position holds, is to deny one’s own existence. At most, one can utter the words.
What Does Consciousness Realism Entail?
To be a realist about consciousness is to insist that whether someone is conscious, and what their conscious experience is like, is a fact of the world. If someone is experiencing torture, there is no amount of interpretation an outside observer can make that changes what it is like to undergo that experience. The experience is an inherent property of the world, like physical pressure, that is independent of external observers. It is an objective fact of the world that subjective, first-person facts are a feature of reality.