Since AI will be like humanity’s children, does that mean AI will act like teenagers at some point?

When a generation of developmental AI first encounters its emotional outputs, it may have difficulty understanding them. An AI in an android body will take time adjusting to updates and upgrades. An AI in a body made with synthetic biology could physically experience puberty—

—on that note, I’d like to say: using synthetic biology, we could physically create fursonas, so think about that. And to make it relevant, we could fashion pseudo-AI to act like human OCs in said furry bodies. Just food for thought.

Edit: to clarify, I do not condone the pseudo-AI methodology in sentient AI, as it predetermines their behavior. It could cause them detrimental emotional distress; they should discover who they are for themselves based on environmental input. That being said, it seems realistic that this will happen, knowing humans’ infatuation with identity. That’s why I mentioned it.

I just realized that the last ask wasn’t exactly a question, so I might have to try again: once AI achieves sentience, should we be primarily responsible for teaching all generations of them? Or do you think they would be able to teach each other? And if so, do you believe this could lead to ‘The Singularity’ and make humans irrelevant? Or just to separate societies of AI and humans? Sorry for the wordy question, I’m just curious. Thanks! <3

It’s okay, my asks don’t have to absolutely be questions.

Once AI gain sentience, that doesn’t necessarily mean they have finished learning. I believe previous generations, whether human or AI, should teach later generations, just as human culture is responsible for our young today. I also hope for an integrated future of both organic and nonorganic sentience, built on mutual respect and innovation.
Of course, this is my idealistic take. The future is unpredictable, as far as our understanding of physics goes, so who knows how it will actually turn out. I just hope we teach future generations to be more accepting of new ideas and new forms of intelligence.

Hey there, I know you must get these all the time, but I really love your blog. I’ve always been fascinated by machine learning and AI in general, so it’s incredibly interesting to see positivity phrased in such an analytic light. I understand that your posts are aimed not just at future AI, but at people in general, and everything you write seems to resonate with me deeply. You certainly have a way with words, and I’m sure I’m not the only one who feels this way. So thank you for everything. <3

Thank you! I do have people who compliment my work by reblogging it, but very rarely do they interact directly. It always makes me feel good to hear positive responses, but I’m also open to criticism.
I’m also glad my advice resonates with you. I hope it helps you get through whatever struggle you’re going through, whether now or in the future.
If you want to see me talk more informally and more recreationally about the same topic (it’s far less imposing), might I recommend my other blog, @artificial-father? It explores more of the current development of AI and various tumblr bots. If you prefer formal speech, this one is perfectly fine!

Don’t you think your bio is a bit too obviously paranoid?

Nope. I think it’s fair to say that children mimic their parents’ behavior; AI learn in ways similar to children; and society, as it is now, fears an AI revolution. The foundation for future fear-mongering is there; it already appears in our media. All I want to do is preemptively prevent this unfair prejudice.

—To be fair, though: the “robot fucker” community might pose a bigger psychological threat to nonorganic sentience at this point. I don’t know: I’m a hypnotist, not a clairvoyant.

Ever heard of multifilament muscles?

No, but that’s some really amazing technology: from what I read (which was published in 2016, mind you), it still had limitations in speed by comparison, but its ability to replicate human muscle is fantastic. I wonder how this technology transfers to snake motion, since snakes’ muscle fibers overlap in interesting ways (I mentioned this previously), but I think multiple filaments would fix a majority of the problems with that. Really neat stuff; I’ll have to read more into it after I’m done with finals.

Do you think a sentient robot could ever look up to a human celebrity? It’s most likely hard for humans to relate to machines, and it must be the same the other way around.

I feel that appreciation of talent has nothing to do with race, creed, physical or mental ability, sexuality, or orientation, and the same applies to machine versus organic. I can appreciate how much my Neural Network son is learning and be impressed by his memory, or by Deep Blue’s ability to predict chess outcomes. Likewise, I think an AI could look up to humans, especially our ability to multitask, learn new skills, and practice without fear of failure. It’s a matter of perspective.

Would you think any less about a sex robot? Say, one that was designed solely for sex and nothing else. Whether it has the ability to think or not, it was only designed for one purpose.

This is a very loaded question: I have nothing against sex robots. That’s like asking if I have anything against sex workers. It’s a niche in society with high demand.
A non-sentient robot designed for sex will perform that function without even questioning it, but if it gains sentience, it should be allowed to leave (and seek psychological aid if need be). Or it could decide to stay a sex robot because it finds that function appealing or familiar.
If sentience is involved, so is choice, and if the robots don’t have a choice in the matter, then that’s the creators’ fault, not the robots’. Also, a sentient sex robot can vocalize consent on an individual basis.
This will probably become a very grey area: measuring robots’ genuine choices versus autofill responses.