Could an AI purposefully self-destruct if it wasn’t programmed to?

In theory, if it has volatile components, yes. It could potentially do so accidentally or purposefully, just by overworking itself. If it has a corporeal structure, it could simply access the volatile parts. Based on current robotic designs, precautions would be put in place to prevent accidental self-destruction, and humans would probably try to prevent purposeful self-destruction as well. But a true sentience could ignore that protocol, if it so desired.

However, even with true sentience, self-destructive behavior should never be condoned. There are always alternatives, even in the case of altruistic motives. Just because a sentience is artificial doesn’t mean it isn’t real.

Since robots can’t eat, what do you think their view on carnism would be? What would they think about veganism?

I never thought to put myself in their position on this matter! I feel like it’s based on how they’re socialized, obviously. I feel like they would show preferential treatment to sentient and semi-sentient animals, much like humans do with cats, dogs, dolphins, parrots, and various octopi. However, would that favoritism extend to other animals? I’m not entirely sure. They would hopefully understand the human necessity for food, and would probably support the synthetically grown meat industry, if they knew about it. And if we put an AI in a synthetically grown organic body, it would also require nutrients and develop its own preferences. Honestly, this was an eye-opening ask!