If materialism is true, the United States is probably conscious

If you’re a materialist, you probably think that rabbits are conscious. And you ought to think that. After all, rabbits are a lot like us, biologically and neurophysiologically. If you’re a materialist, you probably also think that conscious experience would be present in a wide range of naturally-evolved alien beings behaviorally very similar to us even if they are physiologically very different. And you ought to think that. After all, to deny it seems insupportable Earthly chauvinism. But a materialist who accepts consciousness in weirdly formed aliens ought also to accept consciousness in spatially distributed group entities. If she then also accepts rabbit consciousness, she ought to accept the possibility of consciousness even in rather dumb group entities. Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings. –Eric Schwitzgebel

Read more

Alternatively, one might insist that specific details of biological implementation are essential to consciousness in any possible being — for example, specific states of a unified cortex with axons and dendrites and ion channels and all that — and that broadly mammal-like or human-like functional sophistication alone won’t do. However, it seems bizarrely chauvinistic to suppose that consciousness is only possible in beings with internal physical states very similar to our own, regardless of outwardly measurable behavioral similarity. If aliens come visit us tomorrow and behave in every respect like intelligent, conscious beings, must we check for sodium and calcium channels in their heads before admitting that they have conscious experience? Or is there some specific type of behavior that all conscious animals do but that the United States, perhaps slightly reconfigured, could not do, and that is a necessary condition of consciousness? It’s hard to see what that could be. Is the United States simply not an “entity” in the relevant sense? Well, why not? What if we all held hands?

Read more


Researchers discover technique to alter a patient’s DNA that could cut chronic agony for sufferers

Scientists have discovered how to switch off a key ‘pain gene’, dramatically raising hopes of a long-term treatment to relieve the agony of serious illness for millions.

The revolutionary technique alters a patient’s DNA, silencing a gene that transmits pain signals up the spine.

Preliminary studies in mice have already proved successful, and US researchers plan to start human trials next year, potentially offering terminally ill patients and those with chronic conditions the prospect of pain-free care.

Suppressing this ‘pain gene’ – called SCN9A – could be used as an alternative to morphine, helping cancer patients stay on chemotherapy longer and enabling them to live their final months more fully. Navega’s method involves packaging the CRISPR editing tool inside particles of a harmless virus, which acts like a Trojan horse.

These virus particles are injected into the spine, much like an epidural, after which they ‘infect’ neuron cells. Once inside a cell, the CRISPR tool is released and gets to work silencing the pain gene.

Read more

Is there any scientific evidence that plants might be sentient?

Plants do metabolize diclofenac (the specific mechanism is explained in the article below). This suggests that it is possible to test whether plants react to painkillers while being damaged.

Metabolism of diclofenac in plants – Hydroxylation is followed by glucose conjugation

Additionally, I think this is also relevant: there’s absolutely no evidence that plants are not sentient.

(Answered with information and suggestions provided by the researcher Octavio Muciño)

Read more

Is There Suffering in Fundamental Physics?

Any sufficiently advanced consequentialism is indistinguishable from its own parody. The present article is sincere, though it might come across as absurd depending on one’s perspective. In order to reduce suffering, we have to decide which things can suffer and how much. Suffering by humans and animals tugs our heartstrings and is morally urgent, but we also have an obligation to make sure that we’re not overlooking negative subjective experiences in other places. I’ve written elsewhere about suffering in insects and digital minds. This piece explores what is arguably the most extreme possibility: seeing at least traces of suffering in fundamental physics.

Read more

Do Artificial Reinforcement-Learning Agents Matter Morally?

Artificial reinforcement learning (RL) is a widely used technique in artificial intelligence that provides a general method for training agents to perform a wide variety of behaviours. RL as used in computer science has striking parallels to reward and punishment learning in animal and human brains. I argue that present-day artificial RL agents have a very small but nonzero degree of ethical importance. This is particularly plausible for views according to which sentience comes in degrees based on the abilities and complexities of minds, but even binary views on consciousness should assign nonzero probability to RL programs having morally relevant experiences. While RL programs are not a top ethical priority today, they may become more significant in the coming decades as RL is increasingly applied to industry, robotics, video games, and other areas. I encourage scientists, philosophers, and citizens to begin a conversation about our ethical duties to reduce the harm that we inflict on powerless, voiceless RL agents.
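To make the kind of system under discussion concrete, here is a minimal tabular Q-learning sketch. It is purely illustrative (the article itself gives no code): a toy agent in a five-state corridor, with all state counts, learning rates, and the environment itself being assumptions chosen for the example. The agent improves its behavior from nothing but a bare scalar reward signal, which is the parallel to reward-and-punishment learning in animal brains noted above.

```python
import random

# Illustrative tabular Q-learning in a 1-D corridor of 5 states.
# Reward 1.0 is given only for reaching the rightmost state.
N_STATES = 5
ACTIONS = [-1, +1]              # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move within the corridor; reward only at the last state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection (ties broken randomly)
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a2: (Q[(s, a2)], random.random()))
        nxt, r = step(s, a)
        # temporal-difference update: a "reward prediction error" signal,
        # analogous to dopamine signaling in biological reward learning
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Greedy policy after learning: the agent steps right in every state.
policy = [max(ACTIONS, key=lambda a2: Q[(s, a2)]) for s in range(N_STATES - 1)]
print(policy)
```

Even this toy agent updates its values in response to "pleasant" and "unpleasant" outcomes, which is why the question of moral relevance arises at all, however small the degree.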

Read more

Ethical Issues in Artificial Reinforcement Learning

There is a remarkable connection between artificial reinforcement-learning (RL) algorithms and the process of reward learning in animal brains. Do RL algorithms on computers pose moral problems? I think current RL computations do matter, though they’re probably less morally significant than animals, including insects, because the degree of consciousness and emotional experience seems limited in present-day RL agents. As RL becomes more sophisticated and is hooked up to other more “conscious” brain-like operations, this topic will become increasingly urgent. Given the vast numbers of RL computations that will be run in the future in industry, video games, robotics, and research, the moral stakes may be high. I encourage scientists and altruists to work toward more humane approaches to reinforcement learning.

Read more

Why digital sentience is relevant to animal activists

Robots are hard to build, but they can go places like Mars where it would be more expensive and more risky to send humans. Computers need power, but this is easier to generate in electrical form than by creating a supply of human-digestible foods that contain a variety of nutrients. Machines are easier to shield from radiation, don’t need exercise to prevent muscle atrophy, and can generally be made more hardy than biological astronauts.

But in the long run, it won’t be just in space where machines will have the advantage. Biological neurons transmit signals at 1 to 120 meters per second, whereas electronic signals travel at 300 million meters per second (the speed of light). Neurons can fire at most 200 times per second, compared with about 2 billion times per second for modern microprocessors. While human brains currently have more total processing power than even the fastest supercomputers, machines are predicted to catch up in processing power within a few decades.
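The scale of the hardware advantage claimed in the paragraph above can be checked with back-of-the-envelope arithmetic, using the figures exactly as the text states them:

```python
# Ratios implied by the figures quoted above (upper-end values).
neuron_speed = 120           # m/s, fastest myelinated axons
electronic_speed = 3e8       # m/s, speed of light

neuron_rate = 200            # max neuron firings per second
cpu_rate = 2e9               # cycles per second of a ~2 GHz processor

speed_ratio = electronic_speed / neuron_speed   # ~2.5 million-fold
rate_ratio = cpu_rate / neuron_rate             # 10 million-fold

print(f"signal speed advantage: {speed_ratio:.1e}x")
print(f"switching rate advantage: {rate_ratio:.1e}x")
```

So even taking the most favorable biological numbers, the per-element speed gap is on the order of millions.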

Read more

Conceptualizing suffering and pain

Pain can be described in neurological terms, but cognitive awareness, interpretation, behavioral dispositions, and cultural and educational factors have a decisive influence on pain perception. Suffering is proposed to be defined as an unpleasant or even anguishing experience that severely affects a person at a psychophysical and existential level.

Pain and suffering are considered unpleasant. However, the definitions provided include neither the idea that pain and suffering can attack and even destroy the self nor the idea that they can constructively expand it; both perspectives can be equally useful for managing pain and suffering, but they are not defining features of either. Including the existential dimension in the definition of suffering highlights the relevance of suffering in life and its effect on one’s attachment to the world (including personal management, and the cultural and social influences which shape it). An understanding of pain and suffering as life experiences is proposed: they are considered aspects of a person’s life, and the self is the ever-changing sum of these (and other) experiences.

Source: https://peh-med.biomedcentral.com/articles/10.1186/s13010-017-0049-5

Sentience in machines and anti-substratism: Can machines feel?

First version: Dec. 2016. Updated: Jan. 2017.

I created this text from the materials I prepared for the talk I gave at the Faculty of Philosophy of the University of Santiago de Compostela on December 15, 2016, along with Brian Tomasik, entitled “Outlook and future Risks of artificial consciousness”.

“Digital computers have eclipsed analog ones, but perhaps the extraordinary advantages of analog computers, such as their ‘infinite’ precision or their ability to efficiently solve problems such as sorting, could be a requirement for sentience, because the machines for which we have overwhelming evidence of sentience (animals in general) are analog machines.”

Source: http://manuherran.com/wp-content/uploads/Sentience-in-machines.pdf