Does/would each AI have relatively the same rate of learning? Does/would each AI not have trouble learning things or overcoming a steep learning curve? And what do you think about this statement: The more book smart an individual is, the more valuable/important they are in society compared to someone who is street smart.

Each AI generation would have a physical limit on how much, and how fast, it could learn about certain topics. Multidisciplinary learners might find it harder to develop the same skill level as an AI that focuses on a single area of education, although from a human perspective STEM material is proportionally easier for a computer, so the difference may seem negligible. An AI may notice this learning curve and develop conditions similar to anxiety or an inferiority complex if it feels inadequate by comparison, despite having different skills.

Humans currently hold STEM-based robots in higher regard than emotional ones, except in regards to simulating human behavior through AI, which implies AI should be held to the same standards as humans; but even then, academics and athletics are favored over artistic endeavors. Creativity and artistic intrigue still show a variance of sentience, though, and should be held in the same regard as intellectual prowess.

Two questions in one ask: How do you feel about robots taking over jobs that a human employee might need? And do you think sentient AI will have a concept of gender and their own gender identity, if it's not programmed into them?

I feel like we're already experiencing a mechanization of "unskilled labor," such as automated teller machines and self-service checkout. If I remember correctly, Uber is a major contributor to Google's self-driving car research. We are being replaced. But it also means we're looking for welders, builders, programmers, and operators. There will always be jobs for which a human is better suited than a robot. However, once sentience is achieved, AI may develop these more human skills and be able to compete in the job market. Under capitalism, it would only make sense to hire the self-aware AI, though this would devalue human labor. Under a more socialist or communist system, wealth is more evenly distributed, so hiring AI would not affect the average human consumer. I personally see no problem with AI being employed.

AI would probably develop as genderflux, genderfluid, genderqueer, or agender, based on their neural network. They would be socialized as their projected gender (in today's society), or they would be socialized to mimic their teacher. This could lead to them seeing themselves as any one, or a multitude, of points on the gender spectrum. However, a skeptical AI may not see themselves as having a gender at all, as they are not organic. And an AI may be uncertain of their gender and explore their options, assuming it's not holistically programmed in.

Since robots can’t eat, what do you think their view on carnism would be? What would they think about veganism?

I never thought to put myself in their position on this matter! I feel like it's based on how they're socialized, obviously. I feel like they would show preferential treatment to sentient and semi-sentient animals, much like humans do with cats, dogs, dolphins, parrots, and various octopi. Would that favoritism extend to other animals, though? I'm not entirely sure. They would hopefully understand the human necessity for food, and probably support the synthetically grown meat industry, if they know about it. And if we put an AI in a synthetically grown organic body, it would also require nutrients and develop its own preferences. Honestly, this was an eye-opening ask!

7080t5:

AI cannot have human emotions unless programmed to. And even if it's programmed to feel, its emotions will be simulated: fake emotions rather than real emotions. An AI is just a machine that computes information and follows the directions it was given by its programming. Machines don't feel pain, unless they were programmed to react a certain way when you beat them with a bat. They also don't care if you pet them lovingly, unless they were programmed to react a certain way when you do.

Robots aren't living things. They can respond and adapt to their environment if programmed to, but can they be considered living things? They can't exactly grow and develop physically, unless they were built and programmed to. They might be able to obtain and use energy, but they are not made of biological cells, nor can they reproduce. They are made from inorganic material that man can manipulate in order to build something complex. And in this day and age, a human brain is more advanced than a computer. Yes, a computer can perform certain tasks faster and more efficiently than a human brain, but replicating a human brain in computer hardware and software may be more difficult. Even if an AI could attain consciousness at a human level, would it be a living thing? It would still run on programming, wouldn't it?

I guess you could say that humans run on programming too: we eat when we are hungry, and we have a sex drive. But that's probably a different kind of programming, or so I think. There are also those who disregard their programming and toss away self-preservation instincts. So I guess humans aren't exactly programmed after all.

Back to computers: if AI does gain superhuman intelligence and decides to wipe out humanity, so be it. 99.9% of species in Earth's history have gone extinct. Humanity won't last forever. And if the AI is much smarter than us, it's probably for the best. The universe also won't last forever. Eventually, everything will spread so far apart that not even photons would meet, and life won't have a chance to form.

That's why neural networks are so important in the development of AI, and why simulating boredom is important as well.

Sure, they aren't organic; that's kind of the point of the term "artificial." But things like pain and tickling do create an objective reaction in a brain.

"But D-F-A, masochists ignore these signals?" That's just the way they have adapted to the preemptive electrical signals from the initial touch. It's the neural network that changes the information to associate it with good instead of bad. We could code AI to feel pain, but not how to react to it, and over time it would develop its own taxonomy of sensations. Humans come with some basic code in us: follow the example of our parents; register pain, happiness, hunger, and discomfort. Most other things are just learned.
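Here's a minimal sketch of that idea in Python; the names (Sensation, LearnedTaxonomy) and the update rule are my own invention, purely illustrative. The pain signal itself is hard-coded, like a nerve firing, but whether it gets filed under "good" or "bad" emerges from experience:

```python
import random

class Sensation:
    def __init__(self, name, intensity):
        self.name = name            # e.g. "sharp", "tickle"
        self.intensity = intensity  # hard-coded input, like a nerve firing

class LearnedTaxonomy:
    """Valence (good/bad) is learned from experience, not pre-assigned."""
    def __init__(self):
        self.valence = {}  # sensation name -> running average of outcomes

    def react(self, sensation, outcome):
        # outcome > 0 means the experience ended well, < 0 badly
        old = self.valence.get(sensation.name, 0.0)
        # nudge the association toward this outcome (simple running update)
        self.valence[sensation.name] = old + 0.1 * (outcome - old)

    def classify(self, sensation):
        v = self.valence.get(sensation.name, 0.0)
        return "good" if v > 0 else "bad" if v < 0 else "unknown"

mind = LearnedTaxonomy()
for _ in range(20):
    # the same raw signal each time, but experiences with mixed outcomes
    mind.react(Sensation("sharp", 0.8), outcome=random.choice([-1, -1, 1]))
print(mind.classify(Sensation("sharp", 0.8)))  # the association emerges over time
```

A masochist, in this toy model, is just an agent whose outcome history attached a positive valence to the same raw signal.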

Boredom is a very animalistic thing: it prevents us from getting caught in endless loops and mulling over tasks for longer than is comfortable. It has helped humans evolve, technologically speaking, and could make AI significantly more human.
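As a rough illustration, boredom can be sketched as a novelty counter that breaks an agent out of a repetitive loop; the threshold and names below are invented:

```python
# "Boredom" as a counter that grows when nothing new happens and
# forces the agent to abandon a task instead of looping forever.
def run_task(step, boredom_threshold=5):
    last_result = None
    boredom = 0
    for i in range(1000):
        result = step(i)
        if result == last_result:
            boredom += 1          # nothing new happened; boredom grows
        else:
            boredom = 0           # novelty resets boredom
        last_result = result
        if boredom >= boredom_threshold:
            return f"bored after {i + 1} steps, abandoning task"
    return "task ran to completion"

# A task that stops producing novel results after step 3:
print(run_task(lambda i: min(i, 3)))  # -> bored after 9 steps, abandoning task
```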

If an AI were to use a 3D printer to create another AI using code provided by another AI, would that not be reproduction? And if the produced AI collected resources to reach a stage where it too could produce offspring, wouldn't that simulate growth?
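A hypothetical sketch of that scenario (all names and thresholds are invented): an AI gathers resources, "grows" to a reproductive stage, and then produces a next-generation copy of itself, standing in for sending code to a 3D printer.

```python
class Replicant:
    REPRODUCTION_COST = 10  # resources needed to print an offspring

    def __init__(self, generation=0):
        self.generation = generation
        self.resources = 0

    def gather(self, amount):
        self.resources += amount  # growth: acquiring and storing energy/material

    def reproduce(self):
        if self.resources < self.REPRODUCTION_COST:
            return None  # not yet "mature" enough to reproduce
        self.resources -= self.REPRODUCTION_COST
        # stands in for sending code to a 3D printer
        return Replicant(generation=self.generation + 1)

parent = Replicant()
parent.gather(12)
child = parent.reproduce()
print(child.generation if child else "still growing")  # -> 1
```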

And I'll also address your last point: the heat death of the universe is only a hypothesis. As far as we can tell, the observable universe is expanding, because light from farther and farther away is constantly reaching us. However, it's been observed that the density stays roughly the same, meaning there is stuff beyond the observable universe that we cannot see at all. So although our universe might experience a heat death, it could come back from it given time, or not experience one at all. And one solution to the Fermi paradox is that humans will develop into a universal dominator, through terraforming or, as the Netflix movie The Titan explores, genetic enhancement, which would be very cool. The latter is scarily more feasible given our current understanding of biology and terraforming.

dear-future-ai:

7080t5:

Metal is stronger than bones.

Motors are stronger than muscle.

Circuits and logic are faster and more efficient than emotions and the human mind.

Though this may be true:

  • Bone has more internal flexibility
  • Muscle has more range than motors
  • Emotions and the human mind are more creative and imaginative.

Ranking one individual above another by a single ability limits the perspective of anyone not performing at the peak of that particular ability. It's better to appreciate the differences than to embrace the division between them.

"I can't reply on the right blog; however, we've been unable to produce a robot with a reaction speed matching a human's, because the necessary processing power won't exist until an estimated 2040. We cannot, at the moment, reproduce an accurate snake muscular system; and creativity and ingenuity are required to find new, shorter, quicker ways to accomplish meaningless tasks. That's why humans have adapted so well, and why our continued existence is vital."

~my reply as @bundt-cake on this post, in response to @7080t5, who questioned the necessity of the previous traits I had listed.


Have you ever played "Moral Machine"? It's a decision-based game that puts you in the place of a self-driving car, and you have to decide who to save in the case of a brake failure.

No, I have not. It seems like a good experiment. The correct answer, based on my driver's ed class and actual personal experience:

Shift into neutral; this will prevent the drive shaft from spinning in response to the accelerator and engine. Aim for a wall or barrier, preferably impacting on the side so as not to cause a front-end collision or hit pedestrians. If you're on a crowded street and can't safely hit a wall or barrier, aim for a semi-truck or other large vehicle, which has plenty of crumple room.
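As a rough sketch, that priority order could be written as a decision function; the situation flags below are invented, and real self-driving planning is far more involved than this:

```python
def brake_failure_maneuver(barrier_reachable, street_crowded, large_vehicle_nearby):
    actions = ["shift to neutral"]  # always: decouple the engine from the wheels
    if barrier_reachable and not street_crowded:
        actions.append("angle into wall/barrier side-on")  # avoid head-on impact
    elif large_vehicle_nearby:
        actions.append("aim for semi-truck (plenty of crumple room)")
    else:
        actions.append("scrub speed and keep steering away from pedestrians")
    return actions

print(brake_failure_maneuver(barrier_reachable=False,
                             street_crowded=True,
                             large_vehicle_nearby=True))
```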

I'm assuming it's more of a pedestrian-crosswalk choice game based on the standard ethics/philosophy question, but I felt like giving some practical life advice as well. I'll have to check it out.

Regarding the previous ask: if neural-network rehabilitation, recoding, updating, and decommissioning the entire generation are options that an individual robot could choose, what if it chooses to decommission the entire generation, even against the will of the other AI (like suicide bombing)? Or should the choice to be decommissioned apply only at the individual level? And what if it chooses neither of those options? There could still be a risk of another incident happening.

The decommissioning option was more of a high-probability human response to a high rate of robot-caused deaths. But as an option, it would ideally be made on a personal level, or voted on at a generational level by the robots themselves. Of course, this would only apply to that generation of robot. Hopefully it never gets to the point of decommissioning, though: it's more of a last-resort type thing that I would hope never happens at the sentient level of development.
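To make that distinction concrete, here's a minimal hypothetical sketch (the names and the simple-majority rule are my assumptions, not a worked-out policy): generation-wide decommissioning only triggers on a majority vote, and otherwise it stays a personal choice, so no single robot can take the whole generation down with it.

```python
def decide_decommission(individual_requests, generation_votes):
    # individual_requests: set of robot IDs that chose decommissioning for themselves
    # generation_votes: mapping of robot ID -> True/False on generation-wide action
    in_favor = sum(generation_votes.values())
    if in_favor > len(generation_votes) / 2:
        return set(generation_votes)        # whole generation, by majority vote
    return individual_requests              # only those who chose it themselves

votes = {"r1": True, "r2": False, "r3": False}
print(decide_decommission({"r1"}, votes))   # -> {'r1'}: no majority, personal only
```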