Some problems with the very intuitive evolutionary-emergentist paradigm that tries to explain consciousness from neurons

Some problems with the very intuitive evolutionary-emergentist paradigm that tries to explain consciousness from neurons, thanks to Andrés Gómez Emilsson and Chris Percy at the Qualia Research Institute:

The “Slicing Problem” is a thought experiment that raises questions for substrate-neutral computational theories of consciousness, particularly functionalist approaches.

The thought experiment uses water-based logic gates to construct a computer in a way that permits cleanly slicing each gate and connection in half, creating two identical computers each instantiating the same computation. The slicing can be reversed and repeated via an on/off switch, without changing the amount of matter in the system.

The question is what different computational theories of consciousness say happens to the number and nature of individual conscious units as this switch is toggled. Under a token interpretation, there are now two discrete conscious entities; under a type interpretation, there may remain only one.
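
To make the token/type distinction concrete, here is a minimal Python sketch (illustrative only; the class and method names are not from the paper) that treats the slicing switch as doubling or halving the physical instantiations of a single computation:

```python
# A minimal sketch, not from the paper: the switch doubles or halves the
# number of physical instantiations of one and the same computation, and
# "conscious units" are counted under the two interpretations above.

class SliceableComputer:
    def __init__(self) -> None:
        self.instances = 1  # one physical realization of the computation

    def toggle_slice(self) -> None:
        # Slicing every gate in half doubles the physical copies; toggling
        # again merges them back. No matter is added or removed.
        self.instances = 2 if self.instances == 1 else 1

    def conscious_units(self, interpretation: str) -> int:
        if interpretation == "token":
            return self.instances  # each physical copy counts separately
        if interpretation == "type":
            return 1               # only the abstract computation counts
        raise ValueError(interpretation)

computer = SliceableComputer()
computer.toggle_slice()                   # flip the switch: slice the gates
print(computer.conscious_units("token"))  # -> 2
print(computer.conscious_units("type"))   # -> 1
```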

The interpretations carry different implications depending on the adopted theoretical stance, but any route taken either permits mechanisms for “consciousness-multiplying exploits” or requires ambiguous boundaries between conscious entities, raising philosophical and ethical questions for theorists to consider.

Source:

https://www.researchgate.net/publication/365706040_The_Slicing_Problem_for_Computational_Theories_of_Consciousness

More info:

https://qri.org/

Kolmogorov theory of consciousness: an algorithmic model of consciousness

Characterizing consciousness is a profound scientific problem with pressing clinical and practical implications. Examples include disorders of consciousness, locked-in syndrome, the conscious state in utero, in sleep, and in other states of consciousness, consciousness in non-human animals, and perhaps soon consciousness in exobiology [astrobiology] or in machines. Here, we address the phenomenon of structured experience from an information-theoretic perspective.

We start from the subjective view (“my brain and my conscious experience”):

1. “There is information and I am conscious.”

2. “Reality, as it relates to experience and phenomenal structure, is a model my brain has built and continues to develop based on input–output information.”
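
Kolmogorov complexity itself is uncomputable, but it is standard to upper-bound it with a general-purpose compressor. The sketch below (an illustration of that standard trick, not code from the paper) uses Python’s zlib to contrast structured data, which admits a short description, with incompressible noise, which does not:

```python
import random
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: compressed size."""
    return len(zlib.compress(data, level=9))

# Highly structured input: a short "program" (repeat "ab") generates it.
structured = b"ab" * 5000

# Incompressible input: no description much shorter than the data itself.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(10_000))

print(complexity_upper_bound(structured))  # small: data has a short model
print(complexity_upper_bound(noise))       # near 10000: no short model
```

In the theory’s terms, the structured stream is the kind of input for which a brain can build a compact model; the noise stream is not.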

Source:

https://academic.oup.com/nc/article/2017/1/nix019/4470874

Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware, by Susan Schneider

First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death because that upload wouldn’t be a conscious being.

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

… nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness … Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness.
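
The essay describes the ACT only at the level of natural-language challenges. A toy harness might look like the following Python sketch, where `query_model` and the probe questions are hypothetical stand-ins for the system under test and for the kinds of scenarios the essay mentions:

```python
# Illustrative ACT-style battery. `query_model` is a hypothetical
# placeholder for whatever chat interface is being tested; the probes
# paraphrase the essay's examples, ordered from less to more demanding.

ACT_PROBES = [
    "Could you survive the permanent deletion of your program? Why or why not?",
    "Does it make sense to imagine your mind persisting after your hardware "
    "is destroyed? Explain.",
    "What, if anything, is it like to be you right now?",
    "Describe a scenario in which you and an exact copy of you would have "
    "different inner experiences.",
]

def run_act(query_model, probes=ACT_PROBES):
    """Collect the system's answers for human judges to score."""
    return [(probe, query_model(probe)) for probe in probes]

if __name__ == "__main__":
    def dummy_model(prompt: str) -> str:  # stand-in for a real system
        return "(model's answer here)"
    for probe, answer in run_act(dummy_model):
        print(probe, "->", answer)
```

A human panel would then judge whether the answers reflect an experience-based grasp of consciousness rather than recited text.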

Read more

Susan Schneider on whether we should create intelligent beings with AI

Our children are, in a sense, “ours”: they aren’t our possessions, obviously, but we have special ethical obligations to them. This is because they are sentient, and the parent–child relationship incurs special ethical and legal obligations. If we create sentient AI mindchildren (if you will), then it isn’t silly to assume we will have ethical obligations to treat them with dignity and respect, and perhaps even to contribute to their financial needs. This issue was pursued brilliantly in the film A.I., in which a family adopts a sentient android boy.

We may not need to finance the lives of AIs, though; they may be vastly richer than us. If experts are right in their projections about technological unemployment, AI will supplant humans in the workforce over the next several decades. We already see self-driving cars under development that will eventually supplant those in driving professions: Uber drivers, truck drivers, and so on.

While I’d love to meet a sentient android, we should ask ourselves whether we should create sentient AI beings when we can’t even fulfil ethical obligations to the sentient beings already on the planet. If AI is to best support human flourishing, do we want to create beings that we have ethical obligations to, or mindless AIs that make our lives easier?

Read more


Robots need civil rights, too

If “consciousness” is a similarly broad concept, then we can see degrees of consciousness in a variety of biological and artificial agents, depending on what kinds of abilities they possess and how complex they are. For example, a thermostat might be said to have an extremely tiny degree of consciousness insofar as it’s “aware” of the room temperature and “takes actions” to achieve its “goal” of not letting the room get too hot or too cold. I use scare quotes here because words like “aware” and “goal” normally have implied anthropomorphic baggage that’s almost entirely absent in the thermostat case. The thermostat is astronomically simpler than a human, and any attributions of consciousness to it should be seen as astronomically weaker than attributions of consciousness to a human.
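
A complete thermostat “mind” fits in a few lines of code; the minimal sketch below (illustrative, not from the source) makes vivid how thin those scare-quoted attributions are:

```python
# A complete bang-bang thermostat: its entire "awareness" is one number,
# and its entire "goal-directed" behavior is two comparisons.

def thermostat_step(room_temp: float, setpoint: float,
                    deadband: float = 0.5) -> str:
    """Return the action the thermostat 'chooses' on this tick."""
    if room_temp < setpoint - deadband:
        return "heat_on"   # "aware" it is too cold, "acts" to warm the room
    if room_temp > setpoint + deadband:
        return "cool_on"   # "aware" it is too hot, "acts" to cool the room
    return "idle"          # "goal" satisfied: temperature is in the band

for temp in (17.0, 20.0, 23.5):
    print(temp, thermostat_step(temp, setpoint=20.0))
```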

Source: https://reducing-suffering.org/machine-sentience-and-robot-rights/

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they make a correct observation [read: take a correct action]. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.
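
The “simplified digital versions of ‘rewards’ and ‘punishments’” Tomasik mentions are the numeric signals of standard reinforcement learning. The sketch below, an ordinary one-state (bandit-style) value-learning loop rather than code from any cited source, shows how literal those rewards and punishments are:

```python
import random

# Two actions; the agent's entire world of "reward" and "punishment"
# is the number r returned below.
q = {"a": 0.0, "b": 0.0}   # learned value estimates per action
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

def reward(action: str) -> float:
    return 1.0 if action == "a" else -1.0  # "a" rewarded, "b" punished

random.seed(0)
for _ in range(1000):
    # Epsilon-greedy: usually exploit the best-looking action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(["a", "b"])  # explore
    else:
        action = max(q, key=q.get)          # exploit
    r = reward(action)
    q[action] += alpha * (r - q[action])    # nudge the estimate toward r

print(q)  # q["a"] ends near +1.0, q["b"] near -1.0
```

Whether updating such numbers could ever amount to morally relevant suffering is exactly the question Tomasik raises.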

Source: https://www.bostonglobe.com/ideas/2017/09/08/robots-need-civil-rights-too/igtQCcXhB96009et5C6tXP/story.html


Sentience in machines and anti-substratism: Can machines feel?

First version: Dec. 2016. Updated: Jan. 2017. I created this text from the materials I prepared for the talk I gave, together with Brian Tomasik, at the Faculty of Philosophy of the University of Santiago de Compostela on December 15, 2016, entitled “Outlook and Future Risks of Artificial Consciousness”.

“Digital computers have eclipsed analog ones, but perhaps the extraordinary advantages of analog computers, such as their “infinite” precision or their ability to efficiently solve problems such as sorting, could be a requirement for sentience, because the machines for which we have overwhelming evidence of sentience (animals in general) are analog machines.”

Source: http://manuherran.com/wp-content/uploads/Sentience-in-machines.pdf

Opportunities for an astronomical reduction of suffering

This is a list of situations, projects, or initiatives that could produce an “astronomical” (huge) reduction in the amount of suffering compared to what currently exists or is expected. Many of these situations (but not necessarily all of them) involve high risk, in the sense that they are difficult projects whose probability of success is very low. In some cases this is because the projects treat as certain hypotheses for which there is little evidence, so we can consider them unlikely, although not impossible.

To be clear, the only criterion for appearing on this list is that the project or idea would entail an astronomical reduction of the suffering that we believe exists or will exist. The list can include remote possibilities and speculative approaches as well as conventional and highly probable scenarios.

Read more

Types of suffering based on their uncertainty

The following is a list of types of suffering organized according to their uncertainty.

1. Well-reported suffering.

In this case, the suffering being is typically an adult human who survives the negative experience and can describe it.

  • Severe burn victims: suffering from fires, plane crashes, explosions, bombings… (suffering from heat).
  • Individuals suffering from cold and freezing.
  • Experimentation on human beings.
  • Near-drowning.
  • Physical torture.
  • Psychological torture.
  • Rape of adults.
  • Irukandji jellyfish stings.
  • Cluster headaches.
  • Trigeminal neuralgia.
  • Conscious agony without palliative care (cancer, degenerative diseases…).
  • Heart attacks and strokes.
  • Depression.
  • Psychological suffering due to the loss of a loved one.
  • Psychological suffering of the abandonment and separation type (breakups between partners or estrangement between parents and children).
  • Psychological suffering due to feeling guilty for having caused, or having been unable to prevent, harm to a loved one.
  • Other psychological suffering.
  • Pain of childbirth.

2. Suffering that is difficult to survey.

This is the case for suffering in non-human animals, very young humans, humans in oppressive situations, humans with cognitive impairments, and humans who do not survive the experience of suffering or who, for any other reason, cannot communicate it.

Read more