Regarding the previous ask: if neural network rehabilitation, recoding, updating, and decommissioning the entire generation are options an individual robot could choose, what if it chooses to decommission the entire generation, even against the will of the other AIs (like suicide bombing)? Or should the choice to be decommissioned apply only at the individual level? And what if it chooses neither option? There could still be a risk of another incident happening.

The choice to decommission was more of a high-probability human response to a high rate of robot-caused death incidents. But as an option, it would ideally be made at the personal level, or voted on collectively at the robot level. Of course this would only apply to that generation of robot. Hopefully it never gets to the point of decommissioning, though: it’s more of a last-resort measure that I would hope never happens at the sentient level of development.
