We have seen it so many times in the movies – computers getting smarter than humans and ultimately trying to exterminate us so they can roam free in our world and our universe. But could it actually happen? Could a computer gain so much self-awareness that it would start making decisions against what it was programmed to do? It’s a hard question to answer, since we don’t yet know how powerful a computer can really become. Some people say that a computer can only become as smart as the humans who build it, but I have to disagree. Sure, we are the masters of the code we write, but what happens when we give the computer the same tools we have – when we teach it how to program itself?
That is exactly what researchers and inventors are trying to achieve. So far, attempts to create a self-learning computer have been quite primitive. But we know from experience that once we figure out the crucial building blocks for self-awareness and learning, we’ll start making real progress. One of the most advanced artificial intelligence robots to date is Philip, a robot that thinks and speaks freely in response to whatever you ask it.
Philip is programmed to take in and teach itself about its surroundings, as well as everything that is said to it, continually building up the library of responses that makes up its intelligence. To some extent, Philip is programmed, but it is programmed to learn: it combines its pre-programmed responses with the ones it teaches itself, becoming more intelligent over time. It’s really interesting to see how far artificial intelligence has come since we first saw its ultimate awakening in Terminator. Where it will all end is of course hard to say, but judging from the answer Philip gives when asked whether robots will rule the world in the future, it is learning really quickly – and the answer doesn’t comfort me as much as I would have liked it to.