Some problems with the very intuitive evolutionary emergentist paradigm that tries to explain consciousness from neurons

Some problems with the very intuitive evolutionary emergentist paradigm that tries to explain consciousness from neurons, thanks to Andrés Gómez Emilsson and Chris Percy at the Qualia Research Institute:

The “Slicing Problem” is a thought experiment that raises questions for substrate-neutral computational theories of consciousness, particularly functionalist approaches.

The thought experiment uses water-based logic gates to construct a computer in a way that permits cleanly slicing each gate and connection in half, creating two identical computers each instantiating the same computation. The slicing can be reversed and repeated via an on/off switch, without changing the amount of matter in the system.
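
The functional core of the thought experiment can be sketched in code (a toy illustration of my own, not from the paper): a circuit is represented as a pure function over its inputs, and “slicing” produces two structurally identical copies that run the same gates on the same inputs. The `half_adder` and `sliced` names are hypothetical, chosen only for this sketch.

```python
# Toy sketch (assumption: a circuit can be modeled as a pure function).
# "Slicing" the water computer yields two copies with identical gates
# and wiring, so both token instances realize the same computation type.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """A minimal two-gate circuit: XOR for the sum bit, AND for the carry."""
    return (a != b, a and b)

def sliced(circuit, a: bool, b: bool):
    """Run two structurally identical copies of the same circuit."""
    copy_1 = circuit(a, b)
    copy_2 = circuit(a, b)
    return copy_1, copy_2

# Both halves agree on every input: one computation type, two tokens.
for a in (False, True):
    for b in (False, True):
        c1, c2 = sliced(half_adder, a, b)
        assert c1 == c2
```

The code makes the functional equivalence trivial; the open question the paper presses is precisely what code cannot settle, namely whether the number of conscious units doubles when the physical instantiation does.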

The question is what different computational theories of consciousness say happens to the number and nature of individual conscious units as this switch is toggled. Under a token interpretation, there are now two discrete conscious entities; under a type interpretation, there may remain only one.

The two interpretations carry different implications depending on the adopted theoretical stance. Whichever route is taken either permits mechanisms for “consciousness-multiplying exploits” or requires accepting ambiguous boundaries between conscious entities, raising philosophical and ethical questions for theorists to consider.

Source:

https://www.researchgate.net/publication/365706040_The_Slicing_Problem_for_Computational_Theories_of_Consciousness

More info:

https://qri.org/

Consciousness baffles me, but not the Hard Problem

Simply put, the Hard Problem asks the following question: how can the machinery of the brain (the neurons and synapses) produce consciousness — the colours that we see, for example, or the sounds that we hear?

https://www.abc.net.au/news/2017-07-07/david-chalmers-and-the-puzzle-of-consciousness/8679884

“Consciousness baffles me, but not the Hard Problem. The Hard Problem arises only if one makes a metaphysical assumption, namely that the intrinsic nature of the world’s quantum fields – the essence of the physical – is non-experiential.”
David Pearce

https://www.facebook.com/tyler.s.anderson.54/posts/pfbid02VaMvEC4E6H7ip4k2diwnkvpLEDnkDdteesjnSvsJs9qZ1tfEGudjAUSfJfyMbjskl

Proto-Intelligence in Qualia: a Simple Case

>> Do qualia like love, fear, pain, and pleasure causally influence us? I think that the evolutionary argument that qualia must influence us is sufficiently clear and easy to understand that there should be very little room for disagreement on the matter. Evolution wouldn’t have built phenomenal world-simulations composed of qualia unless they increased our inclusive fitness in some way, because an increase in fitness is a logically necessary condition for evolution to select traits of any kind.

>> … Why does pain repel? Not for any mechanical reason, but instead because the raw feel of pain is intrinsically and irreducibly negative, and we (as receptive qualia systems) thus seek to avoid it.

>> … Consider the phenomenon of intense love. It’s a trope that love changes the raw qualitative feel of the world, oneself, music, one’s beloved, and a broad range of other things. Love is very selective in the things that it preserves and in the things that it changes. It wouldn’t change the physical orientation of buildings, their color, or their form, because all of these things have survival utility, and the utility function of love doesn’t seek its own extinction. Instead, love acts selectively on the aesthetic qualities that interpenetrate gestalts, such as cities, one’s self-model, one’s beloved, and music.

Read more:
https://autonoetic.blogspot.com/2022/12/proto-intelligence-in-qualia-simple-case.html

Consciousness as something fundamental, that pre-exists in our Universe

Consciousness is fundamental, pre-exists our Universe, and manifests in everything that we think of as real. A brain, as important as it seems, is nothing more than the way that non-local consciousness operates at an “avatar” level during a lifetime. The evidence that all of this is true is consistent and overwhelming. But mainstream science is still bound by the centuries-old “materialist dogma” and stuck with the “hard problem” of consciousness. If we assume that consciousness doesn’t arise from brain activity, as some neuroscientists still presume to be true, where does it come from?

Read more:

Consciousness: Redefining the Mind-Body Problem by Alex Vikoulov

Kolmogorov theory of consciousness. An algorithmic model of consciousness

Characterizing consciousness is a profound scientific problem with pressing clinical and practical implications. Examples include disorders of consciousness, locked-in syndrome, conscious state in utero, in sleep and other states of consciousness, in non-human animals, and perhaps soon in exobiology [astrobiology] or in machines. Here, we address the phenomenon of structured experience from an information-theoretic perspective.

We start from the subjective view (“my brain and my conscious experience”):

1 “There is information and I am conscious.”

2 “Reality, as it relates to experience and phenomenal structure, is a model my brain has built and continues to develop based on input–output information.”
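
The algorithmic framing above can be made concrete with a toy sketch (my own illustration, not from the paper). Kolmogorov complexity K(x), the length of the shortest program that outputs x, is uncomputable, so compressed length is used here as a rough computable upper bound: data with structure admits a short description, while noise does not.

```python
import os
import zlib

# Assumption: zlib compressed length serves as a crude computable proxy
# for Kolmogorov complexity. Structured data (a pattern a model can
# capture) compresses far better than incompressible random noise.

structured = b"abc" * 1000       # highly regular: a short description exists
random_ish = os.urandom(3000)    # noise: unlikely to have a short description

len_structured = len(zlib.compress(structured, 9))
len_random = len(zlib.compress(random_ish, 9))

print(len_structured, len_random)
assert len_structured < len_random
```

On this view, a brain that has “built a model” of its input stream is, loosely, a system that has found a short program for otherwise long data, which is the compression intuition behind the theory’s information-theoretic vocabulary.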

Source:

https://academic.oup.com/nc/article/2017/1/nix019/4470874

An organism able to learn and move with no brain, no mouth, no stomach, no eyes and 720 sexes

A Paris zoo is showcasing a mysterious creature dubbed the “blob,” a yellowish collection of unicellular organisms called a slime mold that looks like a fungus, but acts like an animal.

This newest exhibit of the Paris Zoological Park, which goes on public display on Saturday, has no mouth, no stomach, no eyes, yet can detect food and digest it.

The blob also has almost 720 sexes, can move without legs or wings and heals itself in two minutes if cut in half.

“The blob is a living being which belongs to one of nature’s mysteries,” said Bruno David, director of the Paris Museum of Natural History, of which the Zoological Park is part.

“It surprises us, because it has no brain but is able to learn (…) and if you merge two blobs, the one that has learned will transmit its knowledge to the other,” David said.

The blob was named after a 1958 science-fiction horror B-movie, starring a young Steve McQueen, in which an alien life form consumes everything in its path in a small Pennsylvania town.

“We know for sure it is not a plant but we don’t really [know] if it’s an animal or a fungus,” said David.

“It behaves very surprisingly for something that looks like a mushroom … it has the behaviour of an animal, it is able to learn.”

Source:

https://www.cbc.ca/news/technology/paris-zoo-blob-1.5325747


How trees secretly talk to and share with each other

Trees secretly talk to each other underground. They’re passing information and resources to and from each other through a network of mycorrhizal fungi (mykós means fungus and riza means root in Greek), a mat of long, thin filaments that connects an estimated 90% of land plants. Scientists call the fungi the Wood Wide Web because ‘adult’ trees can share sugars with younger trees, sick trees can send their remaining resources back into the network for others, and trees can communicate with each other about dangers like insect infestations.

Source:

https://thekidshouldseethis.com/post/the-wood-wide-web-how-trees-secretly-talk-to-and-share-with-each-other


Can GPT-3 or a later version of it experience suffering?

And if so, should we be continuing to develop it?

I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I’m seeing more and more articles about its creation and the many amazing things it has been able to do so far, but none that address the ethical implications of its creation or reassure me that its existence is not a bad thing. It seems to me that the system can now do many complex things, and it worries me that it might also (eventually) be able to experience something akin to suffering.

Read more:

See also: Is GPT-3 a step to sentience?