Can GPT-3, or a later version of it, experience suffering?

And if so, should we be continuing to develop it?

I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I’m seeing more and more articles about its creation and the many amazing things it has been able to do so far, but none that address the ethical implications of its creation or reassure me that its existence is not a bad thing. The system is now able to do many complex things, and it worries me that it might also (eventually) be able to experience something akin to suffering.

Read more

See also: Is GPT-3 a step to sentience?

Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware, by Susan Schneider

First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death because that upload wouldn’t be a conscious being.

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

… nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness … Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness.

Read more

Susan Schneider on whether we should create intelligent beings with AI

Our children are, in a sense, “ours”: they aren’t our possessions, obviously, but we have special ethical obligations to them. This is because they are sentient, and the parent-child relationship incurs special ethical and legal obligations. If we create sentient AI mindchildren (if you will), then it isn’t silly to assume we will have ethical obligations to treat them with dignity and respect, and perhaps even contribute to their financial needs. This issue was pursued brilliantly in the film A.I., in which a family adopts a sentient android boy.

We may not need to finance the lives of AIs, though. They may be vastly richer than us. If experts are right in their projections about technological unemployment, AI will supplant humans in the workforce over the next several decades. We already see self-driving cars under development that will eventually supplant those in driving professions: Uber drivers, truck drivers, and so on.

While I’d love to meet a sentient android, we should ask ourselves whether we should create sentient AI beings when we can’t even fulfil ethical obligations to the sentient beings already on the planet. If AI is to best support human flourishing, do we want to create beings that we have ethical obligations to, or mindless AIs that make our lives easier?

Read more

Do Artificial Reinforcement-Learning Agents Matter Morally?

Artificial reinforcement learning (RL) is a widely used technique in artificial intelligence that provides a general method for training agents to perform a wide variety of behaviours. RL as used in computer science has striking parallels to reward and punishment learning in animal and human brains. I argue that present-day artificial RL agents have a very small but nonzero degree of ethical importance. This is particularly plausible for views according to which sentience comes in degrees based on the abilities and complexities of minds, but even binary views on consciousness should assign nonzero probability to RL programs having morally relevant experiences. While RL programs are not a top ethical priority today, they may become more significant in the coming decades as RL is increasingly applied to industry, robotics, video games, and other areas. I encourage scientists, philosophers, and citizens to begin a conversation about our ethical duties to reduce the harm that we inflict on powerless, voiceless RL agents.
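
For readers unfamiliar with the technique, here is a minimal sketch of the reward-driven update the abstract refers to: tabular Q-learning in Python. The environment, states, actions, and parameter values are purely illustrative assumptions, not the systems the paper discusses; the point is that the “reward and punishment” enters the algorithm as the scalar reward below.

```python
import random

# Illustrative tabular Q-learning sketch. States, actions, and
# parameters here are hypothetical; only the update rule matters.

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor: how much future reward matters
EPSILON = 0.1  # exploration rate

states = range(5)
actions = ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # The core of RL: nudge the value estimate toward the observed reward
    # plus discounted future value. Positive reward reinforces the action;
    # negative reward ("punishment") suppresses it.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

It is this loop of acting, receiving reward or punishment, and adjusting future behaviour accordingly that the paper argues parallels reward learning in animal brains.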

Read more

Ethical Issues in Artificial Reinforcement Learning

There is a remarkable connection between artificial reinforcement-learning (RL) algorithms and the process of reward learning in animal brains. Do RL algorithms on computers pose moral problems? I think current RL computations do matter, though they’re probably less morally significant than animals, including insects, because the degree of consciousness and emotional experience seems limited in present-day RL agents. As RL becomes more sophisticated and is hooked up to other more “conscious” brain-like operations, this topic will become increasingly urgent. Given the vast numbers of RL computations that will be run in the future in industry, video games, robotics, and research, the moral stakes may be high. I encourage scientists and altruists to work toward more humane approaches to reinforcement learning.

Read more

Why digital sentience is relevant to animal activists

Robots are hard to build, but they can go places like Mars where it would be more expensive and more risky to send humans. Computers need power, but this is easier to generate in electrical form than by creating a supply of human-digestible foods that contain a variety of nutrients. Machines are easier to shield from radiation, don’t need exercise to prevent muscle atrophy, and can generally be made more hardy than biological astronauts.

But in the long run, it won’t be just in space where machines will have the advantage. Biological neurons transmit signals at 1 to 120 meters per second, whereas electronic signals travel at 300 million meters per second (the speed of light). Neurons can fire at most 200 times per second, compared with about 2 billion times per second for modern microprocessors. While human brains currently have more total processing power than even the fastest supercomputers, machines are predicted to catch up in processing power within a few decades.
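
A quick back-of-the-envelope calculation with the figures quoted above shows the scale of the gap (the numbers come straight from the excerpt; the code is just arithmetic):

```python
# Ratios implied by the figures in the excerpt above.
neuron_speed = 120          # m/s, upper bound for biological signal transmission
electronic_speed = 3e8      # m/s, roughly the speed of light

neuron_firing = 200         # max neuron firings per second
cpu_cycles = 2e9            # cycles per second for a modern microprocessor

print(electronic_speed / neuron_speed)  # 2.5 million times faster transmission
print(cpu_cycles / neuron_firing)       # 10 million times higher switching rate
```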

Read more

Opportunities for an astronomical reduction of suffering

This is a list of situations, projects, and initiatives that could bring about an “astronomical” (huge) reduction in suffering compared to what currently exists or is expected. Many of these (though not necessarily all) are high-risk, in the sense that they are difficult projects with a very low probability of success. In some cases this is because they treat as certain hypotheses for which there is little evidence, so we can consider them unlikely, though not impossible.

To be clear, the only criterion for appearing on this list is that the project or idea would entail an astronomical reduction of the suffering we believe exists or will exist. The list can include remote possibilities and speculative approaches as well as conventional, highly probable scenarios.

Read more

Types of suffering based on their uncertainty

The following is a list of types of suffering organized according to their uncertainty.

1. Well-reported suffering.

In this case, the suffering being is typically an adult human who survives the negative experience and can describe it.

  • Severe burn victims: suffering from fires, plane crashes, explosions, bombings… (suffering from heat)
  • Individuals suffering from cold and freezing.
  • Experimentation on human beings.
  • Partial drowning.
  • Physical torture.
  • Psychological torture.
  • Rape of adults.
  • Irukandji jellyfish sting.
  • Cluster headache.
  • Trigeminal neuralgia.
  • Conscious agony without palliative care (cancer, degenerative diseases…)
  • Heart attacks and strokes.
  • Depression.
  • Psychological suffering due to the loss of a loved one.
  • Psychological suffering from abandonment and separation (break-ups between partners, or between parents and children)
  • Psychological suffering from guilt over having caused, or failed to prevent, harm to a loved one.
  • Other psychological suffering.
  • Birth pain.

2. Suffering that is difficult to survey.

This is the case for suffering in non-human animals, very young humans, humans in oppressive situations, humans with cognitive impairments, and humans who do not survive the experience of suffering or who, for any other reason, cannot communicate it.

Read more