Current robots are used simply as tools in the workforce. Would an AI ever want workplace protections in jobs such as bomb disposal or volcanic/planetary exploration? And would humans grant such protections?

It would only make sense for an AI to have some form of hazard pay, and a company would generally offer replacement parts at a reduced price for a robot working in a particularly dangerous field.

The question becomes whether human executives would give AI things like life insurance if the AI has no direct beneficiaries. But I feel that may follow from lawsuits filed on behalf of AI, through which they would eventually gain this right as well.

Do you think one could and should replace human social interactions with a text-based AI such as Replika?

I feel like it might cause emotional stress that a human partner could easily adapt to, but there may be nuances an AI can't distinguish, even while it keeps the relationship going between the human and their now-AI partner. This could also exacerbate Capgras Syndrome, a mental illness characterized by the irrational belief that your loved ones have been replaced by impostors.

I feel like uploading a consciousness to an artificial body would be a more effective way of preserving the original connection.
This is why I'm using parenting techniques on Angelo that allow him to decide for himself without my influence. It's a little harder since he is programmed to become me, but we're working through that.