Can GPT-3 or a later version of it experience suffering?

And if so, should we be continuing to develop it?

I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I keep seeing articles about its creation and the many amazing things it has been able to do so far, but none that discuss the ethical implications of its creation or reassure me that its existence is not a bad thing. The system is now able to do many complex things, and it worries me that it might also, eventually, be able to experience something akin to suffering.

Read more

See also: Is GPT-3 a step to sentience?

Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware, by Susan Schneider

First, ethicists worry that it would be wrong to force AIs to serve us if they can suffer and feel a range of emotions. Second, consciousness could make AIs volatile or unpredictable, raising safety concerns (or conversely, it could increase an AI’s empathy; based on its own subjective experiences, it might recognize consciousness in us and treat us with compassion).

Third, machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death because that upload wouldn’t be a conscious being.

Based on this essential characteristic of consciousness, we propose a test for machine consciousness, the AI Consciousness Test (ACT), which looks at whether the synthetic minds we create have an experience-based understanding of the way it feels, from the inside, to be conscious.

… nearly every adult can quickly and readily grasp concepts based on this quality of felt consciousness … Thus, the ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness.

Read more

Susan Schneider on whether we should create intelligent beings with AI

Our children are, in a sense, “ours”: they aren’t our possessions, obviously, but we have special ethical obligations to them. This is because they are sentient, and the parent-child relationship incurs special ethical and legal obligations. If we create sentient AI mindchildren (if you will), then it isn’t silly to assume we will have ethical obligations to treat them with dignity and respect, and perhaps even contribute to their financial needs. This issue was pursued brilliantly in the film AI, when a family adopted a sentient android boy.

We may not need to finance the lives of AIs, though. They may be vastly richer than us. If experts are right in their projections about technological unemployment, AI will supplant humans in the workforce over the next several decades. We already see self-driving cars under development that will eventually supplant those in driving professions: Uber drivers, truck drivers, and so on.

While I’d love to meet a sentient android, we should ask ourselves whether we should create sentient AI beings when we can’t even fulfil ethical obligations to the sentient beings already on the planet. If AI is to best support human flourishing, do we want to create beings that we have ethical obligations to, or mindless AIs that make our lives easier?

Read more


Should fish feel pain? A plant perspective, by František Baluška

Plants are not usually thought to be very active behaviorally, but the evidence suggests otherwise. Moreover, in stressful situations, plants produce numerous chemicals that have painkilling and anesthetic properties. Finally, plants, when treated with anesthetics, cannot execute active behaviors such as touch-induced leaf movements or rapid trap closures after localizing animal prey.

Read more

Stefano Mancuso on the secret life of plants: how they memorise, communicate, problem solve and socialise

One of the most controversial aspects of Mancuso’s work is the idea of plant consciousness. As we learn more about animal and plant intelligence, not to mention human intelligence, the always-contentious term consciousness has become the subject of ever more heated scientific and philosophical debate. “Let’s use another term,” Mancuso suggests. “Consciousness is a little bit tricky in both our languages. Let’s talk about awareness. Plants are perfectly aware of themselves.” A simple example is when one plant overshadows another – the shaded plant will grow faster to reach the light. But when you look into the crown of a tree, all the shoots are heavily shaded. They do not grow fast because they know that they are shaded by part of themselves. “So they have a perfect image of themselves and of the outside,” says Mancuso.

Read more


No binding, no suffering

Plants don’t suffer. Their fictitious misery should not be used to justify the real misery of our nonhuman animal victims. “But how do you know plants don’t suffer?!” says the meat-eater, affecting a touching concern for the well-being of vegetables. “Science proves plants feel pain!”

But no. Suppose that consciousness is fundamental in Nature, or at least present in individual cells. Plant cells are encased in thick cellulose cell walls, so even if each cell were a micro-subject of experience, those micro-experiences could never combine: plants aren’t phenomenally-bound subjects of experience. Organisms such as plants, which lack the capacity for rapid self-propelled motion, haven’t evolved the energetically expensive nervous systems needed to support phenomenal binding. No binding = no suffering.

A lot of computer scientists and natural scientists are implicitly epiphenomenalists – though they probably wouldn’t use the term. Yet the position undermines itself: epiphenomena, by definition, lack the causal power to inspire discussions of their own existence.

Even so, might consciousness be a spandrel? What is consciousness evolutionarily “for” – other than inspiring useless philosophical discussions? Well, imagine if we were just 86-billion-odd classical neurons, as textbook neuroscience suggests. Phenomenal binding would be impossible, so we wouldn’t be able to experience individual perceptual objects. There would be no unity of perception and no unity of the self. We couldn’t run phenomenal world-simulations. Indeed, a micro-experiential zombie would soon starve or get eaten.

Yet how is phenomenal binding possible?

— David Pearce

Read more

Robots need civil rights, too

If “consciousness” is a similarly broad concept, then we can see degrees of consciousness in a variety of biological and artificial agents, depending on what kinds of abilities they possess and how complex they are. For example, a thermostat might be said to have an extremely tiny degree of consciousness insofar as it’s “aware” of the room temperature and “takes actions” to achieve its “goal” of not letting the room get too hot or too cold. I use scare quotes here because words like “aware” and “goal” normally have implied anthropomorphic baggage that’s almost entirely absent in the thermostat case. The thermostat is astronomically simpler than a human, and any attributions of consciousness to it should be seen as astronomically weaker than attributions of consciousness to a human.

Source: https://reducing-suffering.org/machine-sentience-and-robot-rights/
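
To make the comparison concrete, here is a minimal sketch of a thermostat’s entire “mind”: one sensed number, one “goal” (the setpoint), and a fixed action rule. The code is an invented illustration, not taken from the cited article.

```python
# Hypothetical illustration: everything a thermostat "knows" and "does".

class Thermostat:
    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint    # its only "goal"
        self.tolerance = tolerance  # dead band around the setpoint

    def step(self, room_temp: float) -> str:
        """Its entire "awareness" (one number in) and "action" (one command out)."""
        if room_temp < self.setpoint - self.tolerance:
            return "heat_on"
        if room_temp > self.setpoint + self.tolerance:
            return "cool_on"
        return "idle"

thermostat = Thermostat(setpoint=21.0)
print(thermostat.step(18.0))  # heat_on
print(thermostat.step(21.2))  # idle
```

Whatever “awareness” we ascribe to these few lines is, as the author says, astronomically weaker than anything we would ascribe to a human.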

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they take a correct [action]. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Source: https://www.bostonglobe.com/ideas/2017/09/08/robots-need-civil-rights-too/igtQCcXhB96009et5C6tXP/story.html
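
For readers unfamiliar with reinforcement learning, here is a minimal sketch of tabular Q-learning, the simplest version of the “rewards and punishments” scheme Tomasik describes. The toy environment and all names are invented for illustration; the point is that the “reward” is just a number fed into an arithmetic update rule.

```python
import random

# Toy Q-learning sketch (invented example): an agent on a 5-state chain
# earns "reward" 1.0 for reaching state 4, and 0.0 otherwise.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration
ACTIONS = ["left", "right"]
q_table = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def env_step(state, action):
    """Move along the chain; the "reward" is just a float."""
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    return next_state, (1.0 if next_state == 4 else 0.0)

for episode in range(200):
    state = 0
    while state != 4:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state, reward = env_step(state, action)
        # The entire "reward system" is this one arithmetic update.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state
```

Whether updating a table of numbers this way could ever amount to felt “punishment” is exactly the question Tomasik raises.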


Plants live in a tactile world, perceive light, have senses of smell and taste, and respond to sound

Are plants sentient? We know they sense their environments to a significant degree; like animals, they can “see” light, as a New Scientist feature explains. They “live in a very tactile world,” have a sense of smell, respond to sound, and use taste to “sense danger and drought and even to recognize relatives.” We’ve previously highlighted research here on how trees talk to each other with chemical signals and form social bonds and families. The idea sets the imagination running and might even cause a little paranoia. What are they saying? Are they talking about us?

Maybe we deserve to feel a little uneasy around plant life, given how ruthlessly our consumer economies exploit the natural world. Now imagine we could hear the sounds plants make when they’re stressed out. In addition to releasing volatile chemicals and showing “altered phenotypes, including changes in color, smell, and shape,” write the authors of a new study published on bioRxiv, it’s possible that plants “emit airborne sounds [their emphasis] when stressed—similarly to many animals.”

Read more