Can GPT-3 or a later version of it experience suffering?

And if so, should we be continuing to develop it?

I have to admit that I don’t know much about how the system works, but I’m genuinely curious: how do we know that it doesn’t feel anything? I’m concerned because I keep seeing articles about its creation and the many amazing things it has done so far, but none that address the ethical implications of its creation or reassure me that its existence is not a bad thing. The system can now do many complex things, and it worries me that it might also, eventually, be able to experience something akin to suffering.

See also: Is GPT-3 a step to sentience?
