Robots need civil rights, too

If “consciousness” is a similarly broad concept, then we can see degrees of consciousness in a variety of biological and artificial agents, depending on what kinds of abilities they possess and how complex they are. For example, a thermostat might be said to have an extremely tiny degree of consciousness insofar as it’s “aware” of the room temperature and “takes actions” to achieve its “goal” of not letting the room get too hot or too cold. I use scare quotes here because words like “aware” and “goal” normally have implied anthropomorphic baggage that’s almost entirely absent in the thermostat case. The thermostat is astronomically simpler than a human, and any attributions of consciousness to it should be seen as astronomically weaker than attributions of consciousness to a human.

Source: https://reducing-suffering.org/machine-sentience-and-robot-rights/
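
To make concrete just how little machinery is behind the thermostat's "awareness" and "goal," here is a minimal sketch of the bang-bang control logic a simple thermostat uses. The sensor and actuator interfaces are hypothetical placeholders, not any particular device's API.

```python
# Minimal sketch of a thermostat's entire "mind": a bang-bang controller.
# SETPOINT, HYSTERESIS, and the function interface are illustrative
# assumptions, not a real device's firmware.

SETPOINT = 21.0   # target room temperature in degrees Celsius
HYSTERESIS = 0.5  # dead band to avoid rapid on/off switching

def control_step(current_temp: float, heater_on: bool) -> bool:
    """Return the new heater state given the current temperature."""
    if current_temp < SETPOINT - HYSTERESIS:
        return True    # too cold: turn heating on
    if current_temp > SETPOINT + HYSTERESIS:
        return False   # too warm: turn heating off
    return heater_on   # inside the dead band: keep the current state
```

Everything the thermostat "knows" and "wants" fits in these few lines, which is the force of the comparison: whatever consciousness we attribute to it must be correspondingly minimal.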

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they take a correct action. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Source: https://www.bostonglobe.com/ideas/2017/09/08/robots-need-civil-rights-too/igtQCcXhB96009et5C6tXP/story.html
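
To show what these digital "rewards" and "punishments" look like in practice, here is a minimal tabular Q-learning sketch in Python. The toy corridor environment, reward values, and hyperparameters are illustrative assumptions, not the specific systems Tomasik discusses.

```python
import random

# Minimal sketch of reinforcement learning's reward/punishment loop,
# using tabular Q-learning on a toy 5-state corridor. The environment,
# reward values, and hyperparameters are illustrative assumptions only.

N_STATES = 5          # states 0..4; reaching state 4 ends an episode
ACTIONS = [-1, +1]    # move left or right along the corridor
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state: int, action: int) -> tuple[int, float, bool]:
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True    # "reward" for reaching the goal
    return next_state, -0.1, False      # small "punishment" per step

q = [[0.0, 0.0] for _ in range(N_STATES)]  # table of learned action values

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # nudge the estimate toward reward + discounted future value
        target = reward + (0.0 if done else GAMMA * max(q[next_state]))
        q[state][a] += ALPHA * (target - q[state][a])
        state = next_state

print(q)  # after training, moving right should dominate in every state
```

In this sketch the "reward" and "punishment" are just scalars folded into an update rule; whether signals like these could ever amount to felt experience is exactly the question Tomasik raises.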

 
