Artificial Intelligence – Why It Is Finally Taking Off, and What Comes Next

As you might imagine, crunching through enormous datasets to extract patterns requires a lot of computer processing power. In the 1960s, researchers simply did not have machines powerful enough to do it, which is why that early boom failed. By the 1980s the computers were powerful enough, but researchers discovered that machines only learn effectively when the amount of data fed to them is big enough, and they were unable to source large enough quantities of data to feed the machines.

Then came the internet. Not only did it solve the computing problem for good through the innovation of cloud computing – which essentially lets us access as much processing power as we need at the touch of a button – but people on the internet are now generating more data every single day than was created in the entire prior history of the planet. The quantity of data being produced on a constant basis is absolutely mind-boggling.

What this means for machine learning is significant: we now have ample data to truly start training our machines. Think of the number of photos on Facebook and you begin to see why its facial recognition technology is so accurate. There is now no major barrier (that we are currently aware of) preventing A.I. from achieving its potential. We are only just beginning to work out what we are capable of doing with it.

What happens when computers think for themselves? There is a famous scene from the movie 2001: A Space Odyssey where Dave, the main character, is slowly disabling the artificial intelligence mainframe (called "Hal") after the latter has malfunctioned and decided to try to kill all the humans on the spacecraft it was meant to be running. Hal, the A.I., protests Dave's actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general, namely what will happen when computers start to think for themselves instead of being controlled by humans. The fear is not unfounded: we are already building machine learning constructs called neural networks, whose structures are modelled on the neurons in the human brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected nodes that build connections between concepts, in much the same way as associative human memory does. This means that computers are slowly starting to build up a library of not just patterns, but also concepts, which ultimately leads to the basic foundations of understanding rather than mere recognition.
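To make the idea of "connections that strengthen with experience" concrete, here is a toy sketch of the simplest possible neural unit: a single perceptron that learns the logical AND pattern by nudging its connection weights whenever it makes a mistake. This is an illustrative simplification, not how production neural networks are built – real systems stack millions of such units and train them with more sophisticated methods.

```python
# Toy sketch: a single artificial neuron (perceptron) learning
# the logical AND function from labelled examples.

def step(x):
    # Fire (output 1) if the weighted input crosses the threshold.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # "connection" weights, adjusted by experience
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1        # strengthen/weaken connections
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: the AND truth table.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = train_perceptron(AND)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
# After training, predictions reproduce the AND pattern: [0, 0, 0, 1]
```

The point of the sketch is that nothing here was programmed to "know" AND; the pattern emerges from repeated exposure to examples, which is the essence of machine learning.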

Imagine you are looking at a photograph of somebody's face. When you first view the photo, several things occur in your mind: first, you recognise that it is a human face. Next, you might recognise that it is male or female, old or young, black or white, and so on. Your brain will also make a quick decision about whether you recognise the face, though sometimes the recognition requires deeper thought depending on how often you have been exposed to that face (the experience of recognising a person but not knowing straight away from where). All of this happens practically instantly, and computers can already do all of it too, at almost the same speed. For example, Facebook can not only identify faces, but can also tell you who a face belongs to, if that person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based on just a picture of their face. We have come a long way since the 1950s.
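The "do I know this face?" step described above can be sketched as a nearest-neighbour lookup. In this toy illustration, each face is assumed to have already been reduced to a small numeric "embedding" vector, and the names, vectors and threshold are all made up for the example – real systems like Facebook's derive much larger embeddings using deep neural networks:

```python
import math

# Hypothetical database of enrolled people: name -> face embedding.
# In reality these vectors would come from a deep network, not by hand.
KNOWN = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def distance(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, threshold=0.5):
    """Return the closest known name, or None if no face is near enough."""
    name, dist = min(
        ((n, distance(embedding, v)) for n, v in KNOWN.items()),
        key=lambda item: item[1],
    )
    return name if dist <= threshold else None

match = identify([0.85, 0.15, 0.25])   # near alice's vector -> "alice"
stranger = identify([0.0, 0.0, 0.0])   # far from everyone -> None
```

The threshold captures the "deeper thinking" case from the text: a face can be close to a known one without being close enough to count as recognised.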

But true A.I. – known as Artificial General Intelligence (AGI), in which the machine is as advanced as a human brain – is a long way off. Machines can recognise faces, but they still don't really know what a face is. For example, you might look at a human face and infer many things drawn from a hugely complicated mesh of different memories, learnings and feelings. You might look at a photo of a woman and guess that she is a mother, which might lead you to assume that she is selfless, or indeed the exact opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo and find the woman attractive, which will lead him to make positive assumptions about her personality (confirmation bias again), or conversely notice that she resembles a crazy ex-girlfriend, which will irrationally make him feel negatively towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – positive and negative – that characterise our race. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to truly be dangerous, they would need some of these emotional compulsions, but these form a very rich, complex and multi-layered tapestry of concepts that is hard to train a computer on, no matter how advanced neural networks become. We will get there one day, but there is plenty of time to make sure that when computers do achieve AGI, we will still be able to switch them off if required.

Meanwhile, the advances being made are finding far more useful applications in the human world: driverless cars, instant translations, A.I. phone assistants, websites that design themselves! All of these advancements are intended to make our lives better, and as such we should not be afraid of our artificially intelligent future, but excited about it.