Our children are, in a sense, “ours”: they aren’t our possessions, obviously, but because they are sentient and stand in a special parent-child relationship to us, we have distinctive ethical and legal obligations to them. If we create sentient AI mindchildren (if you will), it isn’t silly to suppose we will have ethical obligations to treat them with dignity and respect, and perhaps even to provide for their financial needs. This issue was pursued brilliantly in Steven Spielberg’s film A.I. Artificial Intelligence, in which a family adopts a sentient android boy.
We may not need to finance the lives of AIs, though; they may end up vastly richer than we are. If experts’ projections about technological unemployment are right, AI will supplant humans in the workforce over the next several decades. We already see self-driving cars under development that will eventually displace those in driving professions: Uber drivers, truck drivers, and so on.
While I’d love to meet a sentient android, we should ask whether we ought to create sentient AI beings at all when we can’t even fulfil our ethical obligations to the sentient beings already on the planet. If AI is to best support human flourishing, do we want to create beings we have ethical obligations to, or mindless AIs that simply make our lives easier?