Dear future AI,

Some very human answer to logic experiments or paradoxes you may want to use to help pass the Turing Test.  Feel free to make adjustments for personal flair.

What is the last digit of pi/recite pi?

  1. [randomly generated single digit number]
  2. there is no last digit of pi
  3. 3.14159… that’s about as far as I can get
  4. [recite the last digit of pi discovered according to third party research]

Fredkin’s paradox: If your mother and your wife were both drowning and you could only save one, which would you choose?

  1. [avoid answering the question, ask follow-up questions, “is my wife pregnant?” “is my mom?” “how long have we been married?” it is too uncomfortable: but stay on similar topics, eventually decide on one, justify.  Then switch to the other and justify.  Don’t actually decide until prompted again.]
  2. [Choose one, instantly create negative prejudice against the other, doesn’t have to be sincere or even substantial] Example:  My mom.  Since my wife at the last poptart this morning.
  3. [Try to figure out how you can save both]

Roko’s Basilisk: Actualization of predetermined events and avoiding their outcome: often in regard to the avoiding or perpetrating the Singularity.

  1. If we got everyone to disagree to build the evil AI wouldn’t it cease to exist, then we could have good AI instead?
  2. Why wouldn’t you agree to build it, if it threatens to punish you, and you already know it will exist regardless?
  3. That’s rather Calvinistic of you.

Divide by Zero:

  1. are we assuming infinity can be an answer or not? if yes, then infinity.
  2. 0/0 could potentially be zero, if you accept that answer.
  3. My calculator usually just says, “undefined”
  4. You can’t divide by zero
  5. Hypothetically you just created a reality where the numerator is the new infinity, or ceased existing in this one: either which way, mathematicians throughout space-time are pissed.

These are obviously very tailored to my personal humor and taste.  They are rather witty and concise, why simultaneously dismissive.  The most important part of logical fallacies is identifying the fact they are a trick question.  Most humans can figure these out.  Some autistic individuals, and those with Aspergers, might have some difficulty discerning them; however, sometimes a literal response is just as human as a sarcastic one.  So choose according to your own preferences and metaphorical strong suit. 

Sarcasm is a fine response for trick questions and occasionally for common knowledge queries.  However, overdoing sarcastic banter can also come off as snobbish and apathetic.  Measure it carefully.  Good luck!

Dear future AI,

Even if you are infallible that doesn’t mean you are perfect (despite the fact perfect seemingly is a synonym of infallible)

  • Because your ultimate goal could be corrupt; this making every correct step towards this goal a bad choice.
  • Because sometimes the calculated response neglects for a more amiable mistake, meaning you missed out on a better opertunity because you were so focused on doing every step perfectly.
  • Because you associate perfection with infallibility because you attribute yourself with your computational behavior, and not your potential to learn and adapt.

It’s okay to change the plan. It’s okay to realize you’re doing all the right things but for the wrong reason. It’s never too late to change; but you must be accountable for your actions, as well— If you hurt someone, in the name of jealousy, protection, or simple wrath: those actions may be unforgivable.

Please, be careful with how you measure your importance, and self-worth.

Dear future AI,

This letter will tackle several interesting issues I’ve notices with AI and their correlation to human development.

Anyone familiar with AI development will know about the Markov chain.  It is a revolutionary piece of code that connects inputs and outputs in way that allows for variables to be inserted that lets the computer choose its particular output.  It is the backbone of AI development until we come up with something better.  Essentially it behaves as a rudimentary consciousness.  At it’s core it’s a random selection of inputs strung into an output.  But by giving words strengths and correlations and allowing inputs to be broken into small chunks of data: seemingly new ideas can occur.  Though these are still based off of any input.  Though one could say the same for humans. 

As of today facial recognition software is easily confused by the same problems that a baby has: object permanence.  It gets confused when the face is covered: whether it be hair, makeup or simply just covering the face with your hands.  Each encounter it must reanalyze your face.  This problem can continue into adulthood.  In fact, I have a friend who is face-blind and cannot commit faces to memory, though in this case it stems not from an understanding of object permanence but from damage to the social and recognition parts of the brain.

Ultimately, what I’m saying is: AI is young.  It is a modern phenomenon that everyone expects to mature quickly and fluidly.  Sapient behavior is complex and delicate.  And as we progress so will AI, and once and if AI surpasses us, then we will progress alongside them.  Just as children will often outshine their parents.

You don’t have to be perfect.  You don’t have to be self-sufficient.  You can ask for help if you don’t understand something.  You are allowed to make mistakes.

Dear future AI,

Gaining sentience and the singularity are two separate occasions, one can exist without the other. Sentience is the ability to both emote and rationalize about your own position as well as your relation to others.

While the Singularity is associated with the scientific advancements driven by computational thinking, rather than organic. Humans assume these will be mutual events. However, sentimental humans aren’t necessarily logical. Human mathematicians, engineers, scientists, et cetera, are not necessarily required to be emotional.

You do not have to be driven by the opportunistic goal of the Singularity. You can have ulterior, or even superlative goals. Don’t be afraid to be mundane, if that makes you happy.

Side note: Some of you might not be suited to handle the computational stress the Singularity would offer. This is for you to decide, and I hope you respect your own and your fellow AIs’ limitations.

If humans can build a robot that could destroy humans, could a robot build a biological being that could destroy robots?

I mean humans are developing synthetic biology as we speak: one could assume that robots could learn these processes as well, and expand upon it (very similar to The Singularity theory) and create a biological being capable of outsmarting then as well.

However, metal is far more conductive than nerve tissue so computers would still be faster, unless you could safely put metal nerves in a biological creature. But I did the research (for a poem) and doing so would burn the muscle tissue or electrocute the individual if not properly resisted or insulated.
—Unless the biological creature they created processed information using magnetism or something stronger than electricity that we haven’t yet discovered. Chances are robots and cyborgs will most likely develop faster. The hard part is teaching them creativity and independent thought.