The Challenge of Determining Whether an A.I. Is Sentient, by Carissa Véliz

“…sentience may go unnoticed for years, as was the case with Martin Pistorius [1] … Because brain death can be misdiagnosed [2], and because we have little understanding of the necessary and sufficient causes for consciousness and therefore cannot be certain of when someone might be in pain, some experts have called for the use of anesthesia [3] for organ donation procedures.”

Read more:

https://slate.com/technology/2016/04/the-challenge-of-determining-whether-an-a-i-is-sentient.html

[1] https://www.ted.com/talks/martin_pistorius_how_my_mind_came_back_to_life_and_no_one_knew?language=en

[2] https://www.nytimes.com/2012/04/01/books/review/the-undead-by-dick-teresi.html?_r=0

[3] https://onlinelibrary.wiley.com/doi/full/10.1046/j.1365-2044.2000.055002105.x

 

Kolmogorov theory of consciousness: an algorithmic model of consciousness

Characterizing consciousness is a profound scientific problem with pressing clinical and practical implications. Examples include disorders of consciousness; locked-in syndrome; the conscious state in utero, in sleep, and in other states of consciousness; consciousness in non-human animals; and perhaps soon consciousness in exobiology (astrobiology) or in machines. Here, we address the phenomenon of structured experience from an information-theoretic perspective.

We start from the subjective view (“my brain and my conscious experience”):

1 “There is information and I am conscious.”

2 “Reality, as it relates to experience and phenomenal structure, is a model my brain has built and continues to develop based on input–output information.”

Source:

https://academic.oup.com/nc/article/2017/1/nix019/4470874
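The theory builds on algorithmic information theory, where the Kolmogorov complexity K(x) of a string is the length of the shortest program that outputs it; on this view, a brain that models its input well is, in effect, compressing it. K(x) is uncomputable, so in practice it is approximated from above with a real compressor. The Python sketch below is my own illustration of that standard compression proxy, not code from the paper:

    import random
    import zlib

    def compression_complexity(data: bytes) -> int:
        """Upper-bound proxy for Kolmogorov complexity: compressed length in bytes."""
        return len(zlib.compress(data, 9))

    structured = b"ab" * 500                                   # highly regular stream
    random.seed(0)
    noisy = bytes(random.getrandbits(8) for _ in range(1000))  # near-random stream

    print(compression_complexity(structured))  # small: the regularity compresses away
    print(compression_complexity(noisy))       # close to 1000: nearly incompressible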

An organism that can learn and move, with no brain, no mouth, no stomach, no eyes, and almost 720 sexes

A Paris zoo is showcasing a mysterious creature dubbed the “blob,” a yellowish collection of unicellular organisms called a slime mold that looks like a fungus, but acts like an animal.

This newest exhibit of the Paris Zoological Park, which goes on public display on Saturday, has no mouth, no stomach, no eyes, yet can detect food and digest it.

The blob also has almost 720 sexes, can move without legs or wings, and heals itself in two minutes if cut in half.

“The blob is a living being which belongs to one of nature’s mysteries,” said Bruno David, director of the Paris Museum of Natural History, of which the Zoological Park is part.

“It surprises us, because it has no brain but is able to learn (…) and if you merge two blobs, the one that has learned will transmit its knowledge to the other,” David said.

The blob was named after a 1958 science-fiction horror B-movie, starring a young Steve McQueen, in which an alien life form consumes everything in its path in a small Pennsylvania town.

“We know for sure it is not a plant but we don’t really [know] if it’s an animal or a fungus,” said David.

“It behaves very surprisingly for something that looks like a mushroom … it has the behaviour of an animal, it is able to learn.”

Source:

https://www.cbc.ca/news/technology/paris-zoo-blob-1.5325747

 

The search for invertebrate consciousness

There is no agreement on whether any invertebrates are conscious and no agreement on a methodology that could settle the issue. How can the debate move forward? I distinguish three broad types of approach: theory‐heavy, theory‐neutral and theory‐light. Theory‐heavy and theory‐neutral approaches face serious problems, motivating a middle path: the theory‐light approach. At the core of the theory‐light approach is a minimal commitment about the relation between phenomenal consciousness and cognition that is compatible with many specific theories of consciousness: the hypothesis that phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus. This “facilitation hypothesis” can productively guide inquiry into invertebrate consciousness. What is needed? At this stage, not more theory, and not more undirected data gathering. What is needed is a systematic search for consciousness‐linked cognitive abilities, their relationships to each other, and their sensitivity to masking.
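To make the facilitation hypothesis concrete, here is a toy Python sketch of the kind of test it licenses. This is my own illustration; the ability names are examples from this literature, and the numbers are hypothetical, not data from the paper:

    # Conscious perception of a stimulus should facilitate a *cluster* of
    # cognitive abilities, and masking the stimulus (blocking conscious
    # perception) should degrade the whole cluster together.
    ABILITIES = ["trace_conditioning", "rapid_reversal_learning", "cross_modal_matching"]

    # Hypothetical success rates for one candidate species, with the cue
    # stimulus either plainly visible or masked.
    visible = {"trace_conditioning": 0.81, "rapid_reversal_learning": 0.74, "cross_modal_matching": 0.69}
    masked  = {"trace_conditioning": 0.52, "rapid_reversal_learning": 0.49, "cross_modal_matching": 0.50}

    def cluster_facilitated(visible, masked, chance=0.5, margin=0.1):
        """True if every ability beats chance with a visible stimulus and
        drops back to roughly chance when the stimulus is masked."""
        return all(visible[a] > chance + margin and abs(masked[a] - chance) <= margin
                   for a in ABILITIES)

    print(cluster_facilitated(visible, masked))  # True: consistent with the hypothesis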

Read more

 

Conversations about the badness of involuntary suffering

I have the intuition that voluntary suffering might not be bad. This is primarily due to personal experience: I often feel sad (sympathy) when I encounter sad stories or situations, but I don’t have the intuition that this is bad for me. I don’t feel that I ought to look away or stop feeling sad in response, and I often want to think, learn, or read more about these situations even if doing so makes me sadder (and it usually does). This happens to me with both real and fictional situations (I was a fan of tragedies for a while). Furthermore, sometimes in the past, when I’ve been depressed about my own life, I didn’t want to be happy and even preferred to be miserable.

It’s suffering that’s bad, intrinsically (though suffering can be instrumentally good)

I’m a hedonistic utilitarian, and I think that even voluntary suffering is intrinsically bad, as long as it’s still suffering at that point.

Buddhism would say that if you experience sadness without craving that the sadness go away, you continue to feel sadness but you don’t suffer from it.

My intuition is that suffering is bad, but sometimes (all things considered) I prefer to suffer in a particular instance (e.g. in service of some other value). In such cases it would be better for my welfare if I did not suffer, but I still prefer to.

I think we don’t quite have the words to distinguish between all these things in English, but in my mind there’s something like

  • pain – the experience of negative valence
  • suffering – the experience of pain (i.e. the experience of the experience of negative valence)
  • expected suffering – the experience of pain that was expected, so you suffer only from the pain itself
  • unexpected suffering – the experience of pain that was not expected, so you suffer both from the pain itself and from a second-order pain: the surprise of the pain itself carries negative valence

Of them all, unexpected suffering is the worst because it involves both pain and meta-pain.
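A toy formalization may make the structure of this taxonomy explicit. This is my own sketch, not the author’s, and the weight on meta-pain is an arbitrary illustrative parameter:

    from dataclasses import dataclass

    @dataclass
    class Episode:
        pain: float      # magnitude of negative valence
        noticed: bool    # is the pain itself experienced (i.e. suffering)?
        expected: bool   # was the pain anticipated?

    def badness(e: Episode, meta_weight: float = 0.5) -> float:
        """Unexpected suffering adds a meta-pain term on top of the pain itself."""
        if not (e.noticed and not e.expected):
            return e.pain                       # bare pain, or expected suffering
        return e.pain + meta_weight * e.pain    # unexpected suffering: pain + meta-pain

    print(badness(Episode(pain=1.0, noticed=True, expected=True)))   # 1.0
    print(badness(Episode(pain=1.0, noticed=True, expected=False)))  # 1.5, the worst case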

I noticed that reading only “positive” and “joyous” stories eventually feels empty. It seems that sad elements in a story bring more depth than fun or joyous ones. In that sense, sadness in stories acts as a signal of depth, but also as a way to access some deeper part of our emotions and inner life.

Source

 

Physical theories of consciousness reduce to panpsychism

The features that prominent physical theories of consciousness treat as necessary for consciousness, insofar as they are actually described in terms of physical processes, do not exclude panpsychism: the possibility that consciousness is ubiquitous in nature, including in things not typically considered alive. I’m not claiming panpsychism is true, although this observation significantly increases my credence in it, and those other theories could still be useful as approximations for judging degrees of consciousness. Overall, I’m skeptical that further progress in theories of consciousness will give us plausible descriptions of physical processes necessary for consciousness that don’t arbitrarily exclude panpsychism, whether or not panpsychism is true.

Source

How trees secretly talk to and share with each other

Trees secretly talk to each other underground. They’re passing information and resources to and from each other through a network of mycorrhizal fungi (mykós means fungus and riza means root in Greek), a mat of long, thin filaments that connects an estimated 90% of land plants. Scientists call the fungi the Wood Wide Web: “adult” trees can share sugars with younger trees, sick trees can send their remaining resources back into the network for others, and trees can communicate with each other about dangers like insect infestations.
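As a toy picture of the mechanism (my illustration, not from the article), the network can be modelled as a graph in which a stressed tree releases its stored sugar to its neighbours:

    # Trees as nodes, fungal filaments as edges.
    edges = {
        "old_fir":    ["seedling_a", "seedling_b", "sick_birch"],
        "sick_birch": ["old_fir"],
        "seedling_a": ["old_fir"],
        "seedling_b": ["old_fir"],
    }
    sugar = {"old_fir": 10.0, "sick_birch": 4.0, "seedling_a": 1.0, "seedling_b": 1.0}

    def release_resources(tree: str) -> None:
        """A sick tree splits its remaining sugar evenly among connected trees."""
        neighbours = edges[tree]
        share = sugar[tree] / len(neighbours)
        for n in neighbours:
            sugar[n] += share
        sugar[tree] = 0.0

    release_resources("sick_birch")
    print(sugar)  # old_fir now holds 14.0; sick_birch holds 0.0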

Source:

https://thekidshouldseethis.com/post/the-wood-wide-web-how-trees-secretly-talk-to-and-share-with-each-other

 

A conversation between Brian Tomasik and Luke Muehlhauser

Luke and Mr. Tomasik found that they agreed about the following:

  • Physicalism and functionalism about consciousness.
  • Specifically, Mr. Tomasik endorses “Type A” physicalism, as described in his article “Is There a Hard Problem of Consciousness?” Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism.
  • Consciousness will likely turn out to be polymorphic, without a sharp dividing line between conscious and non-conscious systems, just like (say) the line between what does and doesn’t count as “face recognition software.”
  • Consciousness will likely vary along a great many dimensions, and Luke and Mr. Tomasik both suspect they would have different degrees of moral caring for different types of conscious systems, depending on how each particular system scores along each of these dimensions.

 

A core disagreement

In Luke’s view, a system needs to have certain features interacting in the right way in order to qualify as having non-zero consciousness and non-zero moral weight (if one assumes consciousness is necessary for moral patienthood).

In Mr. Tomasik’s view, various potential features (e.g. ability to do reinforcement learning or meta-cognition) contribute different amounts to a system’s degree of consciousness, because they increase that system’s fit with the “consciousness” concept, but all things have non-zero fit with the “consciousness” concept.

Luke suggested that this core disagreement stems from the principle described in Mr. Tomasik’s “Flavors of Computation are Flavors of Consciousness”:

It’s unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, “consciousness” is a concept — a “cluster in thingspace” — and all points in thingspace are less than infinitely far away from the centroid of the “consciousness” cluster. By a similar argument, we might say that any system displays nonzero similarity to any concept (except maybe for strictly partitioned concepts that map onto the universe’s fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.
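The argument is easy to state computationally. In the sketch below (my illustration, not Mr. Tomasik’s code, and the feature axes are hypothetical), concept membership is graded similarity to a cluster centroid, so every system at finite distance gets a strictly positive, if tiny, score:

    import math

    def similarity_to_concept(features, centroid):
        """Gaussian similarity exp(-d^2): strictly positive for any finite distance."""
        d2 = sum((f - c) ** 2 for f, c in zip(features, centroid))
        return math.exp(-d2)

    # Hypothetical axes: reinforcement learning, metacognition, self-modelling.
    consciousness_centroid = [1.0, 1.0, 1.0]
    human      = [0.90, 0.95, 0.90]
    thermostat = [0.05, 0.00, 0.00]

    print(similarity_to_concept(human, consciousness_centroid))       # close to 1
    print(similarity_to_concept(thermostat, consciousness_centroid))  # small but nonzero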

Source

Can GPT-3 or a later version of it experience suffering?

And if so, should we be continuing to develop it?

I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I keep seeing articles about its creation and the many amazing things it has done so far, but none that address the ethical implications of its creation or reassure me that its existence is not a bad thing. The system can now do many complex things, and it worries me that it might also (eventually) be able to experience something akin to suffering.

Read more

See also: Is GPT-3 a step to sentience?