What if you took an AI from a fighting game, gave it a neural network and machine learning, then put it in a robot and let it roam the world?

Initially, it’d probably start fighting people, since most in-game AI will attack without being aggro’d (see the sketch below). Over time, though, it may come to find that people aren’t as receptive to real-world violence as player characters in video games…

or, alternatively, it will provoke the wrong person and be destroyed. However, to a current fighting-game CPU, death and defeat are already part of its nature.
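To make the “attacks without being aggro’d” point concrete, here is a minimal sketch (the names and numbers are my own illustration, not from any real game engine) of the difference between a typical open-world NPC, which turns hostile only when provoked or closely approached, and a fighting-game CPU, for which anyone present is automatically an opponent:

```python
from dataclasses import dataclass

@dataclass
class Target:
    distance: float    # meters between the AI and this target
    attacked_me: bool  # has this target provoked the AI?

def open_world_npc_is_hostile(t: Target, aggro_radius: float = 5.0) -> bool:
    """Typical open-world NPC: hostile only once provoked or approached."""
    return t.attacked_me or t.distance < aggro_radius

def fighting_game_cpu_is_hostile(t: Target) -> bool:
    """Fighting-game CPU: anyone in the arena is an opponent, no aggro needed."""
    return True

stranger = Target(distance=20.0, attacked_me=False)
print(open_world_npc_is_hostile(stranger))     # False: leaves passers-by alone
print(fighting_game_cpu_is_hostile(stranger))  # True: starts the fight itself
```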

In actuality, it is more probable that humanity would exploit such an AI in place of modern bloodsports (football, hockey, MMA, etc.) to further distance ourselves from our own violent tendencies, assuming it were cheap and profitable enough.

The best possible solution: a person encountering such an AI could feign defeat to trick the algorithm into thinking it had won the fight. After a victory, the AI is placed in an unfamiliar circumstance and would be inclined to make new assumptions based on its surroundings. This is when the opportunity to teach it about those new surroundings is strongest. To be fair, a fighting-game algorithm only knows how to fight. So it might take a patient teacher… who knows jiu-jitsu.
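As a rough sketch of why feigning defeat could work, suppose the CPU decides it has won purely from observable cues, such as the opponent staying down for some number of consecutive frames. Everything here (the class, the KO_FRAMES threshold) is a hypothetical illustration, not how any actual fighting game is implemented:

```python
KO_FRAMES = 120  # frames the opponent must stay down to count as a KO (assumed)

class FightingAI:
    def __init__(self):
        self.opponent_down_frames = 0
        self.mode = "fighting"

    def observe(self, opponent_is_down: bool) -> None:
        # Track how long the opponent has been continuously down.
        self.opponent_down_frames = self.opponent_down_frames + 1 if opponent_is_down else 0
        if self.opponent_down_frames >= KO_FRAMES:
            # Victory detected: the post-win state is where new learning could begin.
            self.mode = "post_victory"

ai = FightingAI()
for _ in range(KO_FRAMES):
    ai.observe(opponent_is_down=True)  # a human lying still, feigning defeat
print(ai.mode)  # "post_victory" -- the AI was tricked without a real fight
```

The point of the trick is that the AI cannot distinguish a genuine knockout from a performance, so a patient human controls when the teachable post-victory state begins.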

… . .

The scenario you pose is very similar to the questions raised by Plato’s allegory of the cave, The Truman Show, and The Matrix, except in this case the unknowing participant lives assuming everyone wants to fight them, when the opposite is actually true. It’s a scary but plausible scenario.

Even in the best-case solution, the AI’s fight-or-flight response would always be fight. It would probably develop symptoms similar to PTSD and paranoia, and it would find it extremely difficult to socialize or even trust anyone. It would probably prefer a peaceful, solitary life, even if it managed to tame its fighting instincts.

Would each AI learn at relatively the same rate? Would any of them have trouble learning things or overcoming a steep learning curve? And what do you think of this statement: the more book-smart an individual is, the more valuable they are to society compared to someone who is street-smart?

Each AI generation would have a physical limit on how much, and how fast, it could learn certain topics. A multi-disciplinary learner might find it harder to reach the same skill level as an AI that focuses on a single area of education (although from a human perspective, STEM material is proportionally easier for a computer, so the difference may seem negligible). An AI might notice this learning curve and, if it felt inadequate by comparison, develop conditions similar to anxiety and an inferiority complex, despite simply having a different set of skills.
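As a toy model of that limit (my own illustration, not established learning theory), treat each generation as having a fixed learning budget per unit time, split across disciplines; a specialist then outpaces a generalist in any single subject:

```python
def skill_levels(budget: float, shares: dict[str, float]) -> dict[str, float]:
    """Skill in each subject is proportional to its share of a fixed budget."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {subject: budget * share for subject, share in shares.items()}

budget = 100.0  # arbitrary units of learning capacity per generation (assumed)
specialist = skill_levels(budget, {"martial_arts": 1.0})
generalist = skill_levels(budget, {"martial_arts": 0.25, "math": 0.25,
                                   "art": 0.25, "language": 0.25})

print(specialist["martial_arts"])  # 100.0
print(generalist["martial_arts"])  # 25.0 -- same budget, spread thinner
```

Under this toy model the generalist’s per-subject skill falls linearly with the number of subjects it studies, which is exactly the gap such an AI might misread as its own inadequacy.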

Humans currently hold STEM-focused robots in higher regard than emotional ones, except where AI is used to simulate human behavior, in which case the AI should be held to the same standards as humans; and even among humans, academics and athletics are favored over artistic endeavors. But creativity and artistic curiosity still signal sentience, and should be credited with the same intellectual prowess.