Against Wishful Thinking by Brian Tomasik

Some people hold more hopeful beliefs about the world and the future than are justified. These include the feeling that life for wild animals isn’t so bad and the expectation that humanity’s future will reduce more suffering than it creates. Optimistic visions of suffering reduction, while noble, may feed these dreams and so cause net harm. We should explore ways of increasing empathy that also expose the true extent of suffering in the world, e.g., information about factory farming, brutality in nature, and the unfathomable amounts of suffering that may result from space colonization.

Read more

Robots need civil rights, too

If “consciousness” is a similarly broad concept, then we can see degrees of consciousness in a variety of biological and artificial agents, depending on what kinds of abilities they possess and how complex they are. For example, a thermostat might be said to have an extremely tiny degree of consciousness insofar as it’s “aware” of the room temperature and “takes actions” to achieve its “goal” of not letting the room get too hot or too cold. I use scare quotes here because words like “aware” and “goal” normally have implied anthropomorphic baggage that’s almost entirely absent in the thermostat case. The thermostat is astronomically simpler than a human, and any attributions of consciousness to it should be seen as astronomically weaker than attributions of consciousness to a human.

Source: https://reducing-suffering.org/machine-sentience-and-robot-rights/
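The thermostat analogy is easy to make concrete. The sketch below is purely illustrative (it is not from the source essay, and the function name and thresholds are hypothetical): the device’s entire “awareness” of the room and “pursuit” of its “goal” amount to two numeric comparisons.

```python
# A minimal thermostat sketch (hypothetical, for illustration only).
# The device "senses" the temperature and "acts" to keep it in a band.

def thermostat_step(current_temp, target=21.0, tolerance=1.0):
    """Return the 'action' a simple thermostat would take."""
    if current_temp < target - tolerance:
        return "heat"  # room too cold: turn heating on
    if current_temp > target + tolerance:
        return "cool"  # room too warm: turn cooling on
    return "idle"      # within the comfort band: do nothing

# The whole "mind" of the device is two comparisons.
print(thermostat_step(18.0))  # heat
print(thermostat_step(21.5))  # idle
```

Seeing the entire control policy laid out in a few lines makes vivid how much weaker any attribution of “awareness” or “goals” to such a system must be than the same attribution to a human.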

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they take a correct action. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Source: https://www.bostonglobe.com/ideas/2017/09/08/robots-need-civil-rights-too/igtQCcXhB96009et5C6tXP/story.html
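The “rewards and punishments” Tomasik describes can be illustrated with a toy sketch. This is hypothetical code, far simpler than real reinforcement-learning systems and not Tomasik’s own; the two-action setup and all names are invented for illustration. The agent nudges a numeric value estimate for each action after receiving +1 (“reward”) or -1 (“punishment”) feedback:

```python
import random

# Toy reinforcement-learning sketch (illustrative only): an agent learns
# which of two actions earns "reward" (+1) or "punishment" (-1).

def run_bandit(steps=1000, lr=0.1, seed=0):
    rng = random.Random(seed)
    values = {"a": 0.0, "b": 0.0}  # the agent's estimate of each action
    for _ in range(steps):
        # explore occasionally; otherwise pick the action valued highest
        if rng.random() < 0.1:
            action = rng.choice(["a", "b"])
        else:
            action = max(values, key=values.get)
        # the environment: "a" yields reward (+1) 90% of the time,
        # "b" yields punishment (-1) 90% of the time
        if action == "a":
            reward = 1.0 if rng.random() < 0.9 else -1.0
        else:
            reward = -1.0 if rng.random() < 0.9 else 1.0
        # update rule: move the estimate toward the observed feedback
        values[action] += lr * (reward - values[action])
    return values

values = run_bandit()
print(values)  # "a" ends up valued positively, "b" negatively
```

The entire “experience” of reward and punishment here is a running average being nudged up or down, which is what makes the question of whether richer versions of such systems can suffer so unsettling.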

 

What is the problem of consciousness?

The problem of consciousness can be formulated as follows: how is it that, from a purely material basis (a brain or a centralized nervous system), consciousness emerges? Answering this requires answering the question: what structures must be present in an organism, and how must they function, for consciousness to be possible? In other words, of all the different ways that the bodies of animals are arranged, which contain structures and arrangements that give rise to consciousness? There is no reason to suppose that only a human-like central nervous system will give rise to consciousness, and there is a great deal of evidence that very different types of animals are conscious. Bird brains, for example, have many structural similarities to mammalian brains but different arrangements of neurons; their circuits are wired differently yet produce similar effects in terms of consciousness and cognition. An octopus is an invertebrate with a very different type of nervous system, but she exhibits behavior and responds to her environment like a conscious being.

Read more

Plants live in a tactile world, perceive light, have a sense of smell, taste, and respond to sound

Are plants sentient? We know they sense their environments to a significant degree; like animals, they can “see” light, as a New Scientist feature explains. They “live in a very tactile world,” have a sense of smell, respond to sound, and use taste to “sense danger and drought and even to recognize relatives.” We’ve previously highlighted research here on how trees talk to each other with chemical signals and form social bonds and families. The idea sets the imagination running and might even cause a little paranoia. What are they saying? Are they talking about us?

Maybe we deserve to feel a little uneasy around plant life, given how ruthlessly our consumer economies exploit the natural world. Now imagine we could hear the sounds plants make when they’re stressed out. In addition to releasing volatile chemicals and showing “altered phenotypes, including changes in color, smell, and shape,” write the authors of a new study posted on bioRxiv, it’s possible that plants “emit airborne sounds [their emphasis] when stressed—similarly to many animals.”

Read more

The big lie

People wonder about the cause of poverty when scarcity is the natural state of things. Why is scarcity the natural state? Because we are “designed” (metaphorically) to survive and reproduce our genes as much as possible, not to discover reality, and not to enjoy. This is why evolution has selected in us the fear of death and the belief that life is always worth living. We are “programmed” to make our lives as long as possible, at any cost. Evolution has shaped us to believe that life is worth living and that living matters more than avoiding suffering. We are “designed” to survive, not to enjoy.

Read more

 

Discussion on the concept of sentience

drugmonkey said:

You may have noticed a rash of posts around the ScienceBlogs decrying the ARA terrorist extremists who have vowed, again, to target the children of a UCLA neuroscientist. Dario Ringach famously gave up his nonhuman primate research in 2006 because of threats against his family. His participation in last week’s dialog held at the UCLA campus apparently induced the extremist attention seekers, angry at having the momentum and PR shift to their slightly more rational co-travelers, to renew their threats. This is utterly despicable. Utterly.

This would be a great time for people who purport to be non-extremist animal rights advocates or sympathizers to do some deep soul searching. Soul searching that does not just easily write off the terrorists as a crazy fringe but asks penetrating questions about the nature of their own beliefs.

I cannot help you with this difficult work but I noticed something a little odd and new to me popping up in comment threads following the posts linked above. It has to do with the concept of sentience.

Wandering over to the Wikipedia entry, I find a rather interesting set of observations.

Sentience is the ability to feel or perceive subjectively. The term is used in philosophy (particularly in the philosophy of animal ethics and in eastern philosophy) as well as in science fiction and (occasionally) in the study of artificial intelligence. In each of these fields the term is used slightly differently.

In eastern philosophy, sentience is a metaphysical quality of all things that requires our respect and care. In science fiction, sentience is “personhood”: the essential quality that separates humankind from machines or animals. Sentience is used in the study of consciousness to describe the ability to have sensations or experiences, known to some Western academic philosophers as “qualia”.

Some advocates of animal rights argue that many animals are sentient in that they can feel pleasure and pain, and that this entails being entitled to some moral or legal rights.

Well this certainly explains my confusion. To me, “sentience” has always been the science fiction concept. I suspect quite strongly that for most people, this is the connotation of the term.

Interesting, is it not, that animal rights people would co-opt this term to mean “can feel pleasure or pain”? Why create this new use for the term, particularly when it has such strong associations with the science-fiction definition: the fully human capacity that separates us from animals and machines?

Just another dishonest ploy to sway people to their way of thinking on something other than the merits. Of course they know what they are doing. Of course they know that they are creating this blurring of definitions in the minds of the undecided public. And of course they are hoping to lure everyone into using their terminology so that when people who are in favor of animal research say, well of course animals can feel pain, the ARA nut can claim that such people are admitting to sentience.

When of course they are doing no such thing.

Challenge anyone who uses this “sentience” gambit, eh? Get them to specify exactly what they mean. And ask what they are trying to pull with this redefinition nonsense.

Read more

Consciousness and self-consciousness

Consciousness is being aware. Self-consciousness is being aware of oneself. Being conscious, rather than self-conscious, is the key concept in ethics.

Consciousness can be defined as the state of having experiences. Conscious states, or mental states, are situations in which one is having any kind of experience, be it a sensorial experience, a thought, an emotion or whatever.

Self-consciousness, a particular form of consciousness, is a broad term that is used to mean different forms of awareness regarding oneself and one’s experiences. The way we understand the concept of the self depends on which meaning of self-consciousness we use.

Read more

Philosopher Philip Goff answers questions about panpsychism

“… we need both the science and the philosophy to get a theory of consciousness. The science gives us correlations between brain activity and experience. We then have to work out the best philosophical theory that explains those correlations. In my view, the only theory that holds up to scrutiny is panpsychism.

When I studied philosophy, we were taught that there were only two approaches to consciousness: either you think consciousness can be explained in conventional scientific terms, or you think consciousness is something magical and mysterious that science will never understand. I came to think that both of these views were pretty hopeless. I think we can have hope that we will one day have a science of consciousness, but we need to rethink what science is. Panpsychism offers us a way of doing this.”

Read more

 

How does the world view of a believer in physicalism differ from one of idealism?

Physicalism is the view that no “element of reality” (Einstein) is missing from the mathematical equations of physics – more strictly, tomorrow’s physics beyond the Standard Model plus GR.
Idealism is the view that reality is experiential.
Most physicalists aren’t idealists, and most idealists aren’t physicalists, but a small minority of researchers are both idealists and physicalists.

The intrinsic nature of quantum states is disputed. But if quantum mechanics is complete, and if the equations of physics describe fields of sentience rather than insentience, then physicalistic idealism is true. If so, there is no Hard Problem of consciousness as normally framed. Fields of insentience are destined to go the way of luminiferous aether. Formally, physical reality is described by the universal wavefunction. By contrast, consciousness is often said to be ill-defined. Yet if physicalistic idealism is true, then we already possess the mathematical apparatus of a theory of consciousness. All that’s hard is to “read off” the textures of experience from the solutions to the equations. The conjecture that relativistic QFT describes fields of sentience rather than insentience still leaves the mystery of why anything exists for the equations to describe: one big mystery rather than two. Yet even here, the superposition principle of QM hints at an answer.

Read more

Is the Orthogonality Thesis Defensible if We Assume Both Valence Realism and Open Individualism?

“I suppose it’s contingent on whether or not digital zombies are capable of general intelligence, which is an open question. However, phenomenally bound subjective world simulations seem like an uncharacteristic extravagance on the part of evolution if non-sphexish p-zombie general intelligence is possible. Of course, it may be possible, but just not reachable through Darwinian selection. But the fact that a search process as huge as evolution couldn’t find it and instead developed profoundly sophisticated phenomenally bound subjectivity is (possibly strong) evidence against the proposition that zombie AGI is possible (or likely to be stumbled on by accident).

If we do need phenomenally bound subjectivity for non-sphexish intelligence and minds ultimately care about qualia valence – and confusedly think that they care about other things only when they’re below a certain intelligence (or thoughtfulness) level – then it seems to follow that smarter than human AGIs will converge on valence optimization.”

Read more