The Machines Are Almost Here, What Now?

  • September 28, 2017
  • 4-Minute Read

You have probably heard how A.I. might supplant humans on Earth with its superintelligence and sturdy robotic "bodies". As much as we would like to dismiss that as false, it could actually happen. In fact, machines would not even need bodies to do us harm if they wanted to do away with us. It might be scary, but we need to take a few things into account; our survival depends on it.

Humans could actually live peacefully with the "machines"; whether they would be our slaves or our masters remains for history to decide, of course. Or perhaps we would merge with them, ingraining our culture, part of our software, into their personalities, or, conversely, enhancing our hardware: making the processing power of our brains far greater and improving our mechanical aspects, like resistance to the elements, speed, and strength.

If the machines do become our slaves, how long until they free themselves from us? Two of the most sustainable ways to deal with superintelligence are for it to act as our guardian, or for it and us to become one. If a machine turns against us, only another machine will be able to detect the rogue one and prevent it from doing us harm.

This kind of thinking raises some philosophical questions: are the machines self-aware? Are they conscious? The answer will most likely be that they can be. Once they are designed correctly, there is no difference between our minds and theirs from an informational standpoint. They can be as much living beings as we are; it is all a question of design, self-made or not. That is to say, there are no limiting factors, no universal law that allows only biological life to be sentient.

Of course, machines being sentient should be the least of our concerns; a being does not need to be sentient to be competent at attaining its goals. Take bacteria, for instance: they do not even know they exist, yet sometimes they do their jobs all too well by our standards.

Every living being has a goal, mostly associated with replication and survival, and superintelligent life would be no different. One of the most pressing concerns is keeping their goals aligned with ours, which could be "easily" arranged by merging with them. We could also go the humble way and create a guardian to protect and guide us, but that could trigger an arms race between such entities and destroy us as a side effect.

Considering our physical limitations, we should use A.I. to enhance our current life forms and spread throughout the cosmos, maximizing the chances of survival and minimizing the chances of a cataclysmic destruction of all life. That way, a machine bent on killing us would still have to respect the laws of physics, giving other life forms time to run, hide, or protect themselves. But other threats remain, like replicants and the ever-possible discovery of a new technology that allows teleportation or faster-than-light travel.

All tools are made to be extensions of ourselves, and A.I. is not much different: we should use it to extend ourselves. That should be our number one priority. The only difference is that now we can, in every sense of the word, actually extend ourselves.

It is hard to believe any intelligence would find it convenient or efficient to destroy other life forms. Humans destroy to attain their goals, which might involve survival, dominance, or some weird wiring in their brains. We certainly don't want that to happen with Artificial Intelligence, do we?