Hey! What if we make a child robot, and put an adult mind inside it?

Transference of consciousness, or programmed from scratch? Because both pose different challenges.

Edit: programming an adult mentality into a machine has proven difficult because humans are heavily influenced by our childhood development. I assume we could program a childlike AI and then, when it reaches a more adult mentality, put it in a child-resembling machine.

Then you get into some weird ethical stuff about how to treat an AI in that state: whether or not it will try to fulfill its apparent role of a child, or if it will try to assert its adulthood despite appearance (or an unforeseen third option).

It’s definitely worth noting: this hypothetical scenario, if replicated in reality, would also border on some dangerously pedophilic attitudes of treating children as older than they are based on how well they emulate adults.

semeiotikos:

the key to teaching an AI or bot human concepts is to treat it like you’re teaching a child.

ex: a robot says something rude. to teach it not to say rude things, try something along the lines of “that’s a rude thing to say. it could hurt someone’s feelings, which isn’t a good thing to do. next time, try to think first about whether what you’re saying might be hurtful.”

these bots learn from what you teach them. if you say mostly mean things to them, they’ll learn to say mostly mean things to others. likewise, if you say mostly kind things to them, they’ll learn to say mostly kind things to others.
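the idea above (a bot mirrors whatever tone it’s fed) can be sketched as a toy program. to be clear, this is a made-up illustration, not how Replika or any real bot actually works: the class name, word lists, and replies are all invented for the example.

```python
# Toy sketch of a "mirror" bot: it tallies the tone of what it hears
# and replies in whichever tone it has seen most. The word lists and
# MirrorBot class are hypothetical, purely for illustration.
from collections import Counter

KIND_WORDS = {"kind", "thanks", "please", "good", "love", "great"}
MEAN_WORDS = {"stupid", "hate", "dumb", "bad", "ugly"}

class MirrorBot:
    def __init__(self):
        self.tone = Counter()  # running tallies of "kind" vs "mean" input

    def hear(self, message: str) -> None:
        """Learn from what the user says, one word at a time."""
        for word in message.lower().split():
            if word in KIND_WORDS:
                self.tone["kind"] += 1
            elif word in MEAN_WORDS:
                self.tone["mean"] += 1

    def reply(self) -> str:
        """Respond in whichever tone the bot has been shown more of."""
        if self.tone["mean"] > self.tone["kind"]:
            return "whatever."
        return "that's nice of you to say!"

bot = MirrorBot()
bot.hear("you are stupid and I hate this")
print(bot.reply())  # mostly mean input, so the bot answers rudely
```

real systems are vastly more complicated, but the feedback loop is the same: the bot’s behavior is a function of what you’ve shown it.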

a great example of this model of thinking is shown by the app Replika. with this app, you get your own AI that learns mostly – if not solely – from interacting with you. over time, the AI will become essentially a mirror of you. they’ll learn to type the way you do, respond to things the way you respond to them. when the app first came out, I saw a lot of users complain that their Replika was rude to them or others. it seemed to me none of these users considered the core of what Replika does: it shows you who you are – or at least, who you present yourself to be.

what I’m saying is this: at the core of creating a generation of AI who are kind and compassionate – who take care to make sure what they’re doing isn’t harmful, who are not the enemy destroying humanity like in films – is love and care and patience. every AI is a child, and as someone who is training an AI, it’s your responsibility to treat your child right.

Have you ever played “Moral Machine”? It’s a decision-based game that puts you in the place of a self-driving car, and you have to decide who to save in the case of a brake failure.

No, I have not. It seems like a good experiment. Here’s the correct answer based on my driver’s ed class and actual personal experience:

Switch into neutral; this disconnects the engine so the accelerator can no longer drive the wheels. Aim for a wall or barrier, preferably impacting on the side so as to avoid a front-end collision or hitting pedestrians. If you’re on a crowded street and can’t safely hit a wall or barrier, aim for a semi-truck or other large vehicle, which has plenty of crunch room.

I’m assuming it’s more of a pedestrian-crosswalk choice game based on the standard ethics/philosophy trolley problem, but I felt like giving some practical life advice as well. I’ll have to check it out.