There is a remarkable connection between artificial reinforcement-learning (RL) algorithms and the process of reward learning in animal brains. Do RL algorithms on computers pose moral problems? I think current RL computations do matter, though they're probably less morally significant than animals, including insects, are, because the degree of consciousness and emotional experience in present-day RL agents seems limited. As RL becomes more sophisticated and is hooked up to other, more "conscious" brain-like operations, this topic will become increasingly urgent. Given the vast numbers of RL computations that will be run in the future in industry, video games, robotics, and research, the moral stakes may be high. I encourage scientists and altruists to work toward more humane approaches to reinforcement learning.
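The connection mentioned above is often illustrated with temporal-difference (TD) learning, whose prediction-error term parallels dopamine reward-prediction-error signals observed in animal brains. Below is a minimal, hedged sketch of a TD(0) value update on a toy two-state chain; the function name, state labels, and parameter values are illustrative assumptions, not a specific implementation from the literature.

```python
# Minimal sketch of TD(0) learning. The variable `delta` is the
# reward-prediction error, the quantity often compared to phasic
# dopamine signaling in animal reward learning.
# The toy two-state chain and all names here are illustrative.

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One TD(0) step; returns the prediction error delta."""
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta  # nudge the value estimate toward the target
    return delta

# Toy episode: the agent moves s0 -> s1 and receives reward 1 at the end.
V = {"s0": 0.0, "s1": 0.0, "terminal": 0.0}
for _ in range(200):
    td0_update(V, "s0", 0.0, "s1")
    td0_update(V, "s1", 1.0, "terminal")

# V["s1"] approaches the reward (1.0), and V["s0"] approaches
# gamma * V["s1"], i.e. about 0.9.
```

In this sketch, `delta` shrinks toward zero as the value estimates converge, mirroring how dopamine responses in animals diminish once a reward becomes fully predicted.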