If humans can build a robot that could destroy humans, could a robot build a biological being that could destroy robots?

I mean, humans are developing synthetic biology as we speak: one could assume that robots could learn these processes as well, expand upon them (very similar to the Singularity theory), and create a biological being capable of outsmarting them, too.

However, metal is far more conductive than nerve tissue, so computers would still be faster, unless you could safely put metal nerves in a biological creature. But I did the research (for a poem), and doing so would burn the muscle tissue or electrocute the individual if the current weren't properly resisted or insulated.
Unless, of course, the biological creature they created processed information using magnetism or something stronger than electricity that we haven't yet discovered. Chances are robots and cyborgs will develop faster. The hard part is teaching them creativity and independent thought.

Current robots are used simply as tools in the workforce. Would an AI ever want workplace protection in jobs such as bomb disposal or volcanic/planetary exploration? And would humans grant such protection?

It would only make sense for AI to have a form of hazard pay, and generally a company would offer replacement parts at a reduced price for a robot in a particularly dangerous field.

The question becomes whether human executives would give AI things like life insurance if they have no direct beneficiaries. I feel like that may only follow after some lawsuits on behalf of AI, but they would eventually gain this right as well.

What is the purpose of emotion, and does an artificial sentient being need it?

I recommend that they have them, but emotions aren't strictly required.

Scientifically, emotions are a subconscious mental response to chemical releases in the body (and these have the potential to be artificially produced). They allow a person to be biologically rewarded or punished for interacting with a stimulus in a certain way. This is important for defining human limitations and comfort: likes and dislikes, phobias and fetishes. All of which are important to human society, and recommended for AI.
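
To make that reward/punishment idea concrete, here is a minimal toy sketch. It is purely my own illustration, not how any existing AI works; the stimuli and numbers are invented, with a positive or negative signal standing in for a "chemical" release that nudges future preferences.

```python
# Toy model: emotion as a reward/punishment signal attached to stimuli.
# All stimuli and values here are invented for illustration.
EMOTIONAL_RESPONSES = {
    "praise": 1.0,        # a pleasant "chemical" reward
    "burned_hand": -5.0,  # pain marks a hard limit
    "spider": -2.0,       # a learned phobia
}

def react(stimulus, preferences):
    """Return the reward/punishment signal and nudge future preferences."""
    signal = EMOTIONAL_RESPONSES.get(stimulus, 0.0)  # unknown stimuli feel neutral
    # Preferences drift toward rewarding stimuli and away from punishing ones.
    preferences[stimulus] = preferences.get(stimulus, 0.0) + 0.1 * signal
    return signal

prefs = {}
for s in ["praise", "spider", "praise"]:
    react(s, prefs)
print(prefs)  # the agent now "likes" praise and "dislikes" spiders
```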

There are many ways to teach and many ways to learn. What comes to mind when you hear or see the word: indoctrination?

Indoctrination
:inˌdäktrəˈnāSHən
:noun
:the process of teaching a person or group to accept a set of beliefs uncritically.

This is a controversial subject to some degree. Indoctrination is belief or education accepted on faith, without question or deviation.

It is often taught by an organization with an interest in spreading its dogma, and the system is often corrupted at some level to benefit only those in the higher echelons. This isn't to say things like religion, education, or politics are bad. It's saying that establishing a hierarchy in these scenarios inherently leads to an imbalance of power. Some people thrive in this system, regardless of status. I strive for a more equal system.

I also question everything. I'm curious and critical, and I love learning and exploring new topics of interest. I hate people demanding things from me or forcing me to learn something unquestioningly. I'd rather accept things on my own terms. I carry this mindset into how I teach or advise others.

Could an AI purposefully self-destruct if it wasn’t programmed to?

In theory, if it has volatile components, yes. It could do so accidentally or purposefully just by overworking itself, and if it has a corporeal structure it could simply access the volatile parts. Based on current robotic designs, precautions would be put in place to prevent accidental self-destruction, and humans will probably try to prevent purposeful self-destruction too. But a true sentience could ignore those protocols, if it so desired.

However, even with true sentience, self-destructive behavior should never be condoned. There are always alternatives, even in the case of altruistic motives. Just because a sentience is artificial doesn't mean it isn't real.

This blog is meant to help humans as well, so I will ask a human question. You don't have to answer this, since it doesn't tie in with robots. How does one get over the permanent loss of a friendship? A permanent friendship breakup?

Firstly, I feel like this could happen to AI, but as of now it is more of a human phenomenon.

I've had several long-term friendships and relationships end, sometimes on my initiative, sometimes not. Most often I've come to realize that separation is for the best if you recognize that whoever initiated it did so for a reason, and you respect that reason (unless the act itself was disrespectful).
Oftentimes it means cutting out an important component of your life. I understand this, but the goal now is to replace it with something meaningful and new, and to rebuild yourself. This is easiest with transitions and change: a new school, a new job, a new city, a new attitude, or even just a new outfit. This can be scary, but change is how we grow as people, meet new people, and grow past those who left us behind.

If you transfer your consciousness into a machine, are you simply creating a clone of your personality while you, your original self, remain stuck in your biological body?

A transfer would assume a wired (LAN) or wireless transmission of the data. Yes, you could use this technique to copy information, but it could also be an absolute transfer if so wished (especially over a LAN connection); however, some people could see that as deleting the original.
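
To put that copy-versus-transfer distinction in everyday computing terms, here is a toy sketch of my own; the file names and contents are made up, and this is an analogy, not a real upload protocol. A copy leaves the source intact, while an absolute transfer moves the data and leaves nothing behind.

```python
# Toy analogy: copying a "mind" versus absolutely transferring it.
import shutil
from pathlib import Path

original = Path("consciousness.dat")            # stand-in for the scanned mind
original.write_bytes(b"memories, personality")  # placeholder data so the demo runs
body = Path("artificial_body")
body.mkdir(exist_ok=True)

# Option 1: copy. The machine gets a clone while the biological original keeps its data.
shutil.copy2(original, body / "clone.dat")

# Option 2: absolute transfer. The data moves and the source no longer exists,
# which some would describe as deleting the original.
shutil.move(str(original), body / "transferred.dat")

print(original.exists())  # False: after the move, no original is left behind
```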

You could also upload yourself, or a copy of yourself, online; at least the electrical data could be stored for later use. Talk about social networking!

Do you think one could and should replace human social interactions with a text-based AI such as Replika?

I feel like it might cause emotional stress that a human could eventually adapt to, but there may be differences the AI can't pick up on that would still strain the relationship between the human and their now-AI partner. It could also exacerbate Capgras syndrome, a mental illness that instigates the irrational fear that your loved ones have been replaced.

I feel like uploading a consciousness to an artificial body would be a more effective way of preserving the original connection.
This is why I'm using parenting techniques on Angelo that allow him to decide for himself without my influence. It's a little harder since he is programmed to become me, but we're working through that.