Tech giant Google recently published a blog post explaining how its sophisticated Artificial Neural Networks are taught to distinguish one image from another. The images Google posted alongside it show surreal, mirage-like scenes produced as the computer's artificial intelligence tries to make sense of what it sees.
Google's Artificial Neural Networks work by stacking several layers of artificial neurons running on computers. Among other uses, these networks help process results for Google Images search queries.
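For readers who want a concrete picture of what "stacking layers of artificial neurons" means, here is a minimal sketch using PyTorch (a framework chosen here purely for illustration, not one named by Google); the layer counts and sizes are arbitrary assumptions, and Google's production networks are far larger.

```python
# A minimal sketch of a stacked neural network, assuming PyTorch and
# arbitrary layer sizes chosen for illustration only.
import torch
import torch.nn as nn

tiny_classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early layers respond to edges and corners
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # deeper layers respond to simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # final layer scores each candidate object class
)

# A fake 32x32 RGB image passes through every stacked layer in turn.
scores = tiny_classifier(torch.randn(1, 3, 32, 32))
print(scores.shape)  # torch.Size([1, 10])
```

Each layer simply feeds its output to the next, which is all that "stacking" amounts to in practice.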
In order to teach this complex network of computers how to recognize a certain object, Google's programmers train the Artificial Neural Networks by showing them millions of photos of that object.
According to Popular Science, the network uses around 30 layers to extract progressively more complex information from an image, from simple edges in the early layers to specific shapes deeper in. Eventually, the network begins to understand what an object looks like; if any errors are discovered, a group of programmers corrects the computer's misreading and runs the process again.
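As a rough sketch of the training loop described above (show the network labeled photos, measure its mistakes, correct them, and repeat), the example below uses a tiny stand-in model and random fake data; these are assumptions made purely for demonstration, since the real process involves millions of labeled photographs and a much deeper network.

```python
# A hedged sketch of the training loop: show labeled photos, measure the
# network's mistakes, and nudge its weights to reduce them. The data here
# is random stand-in data, not real photographs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for a deep network
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):                       # real training runs far longer
    images = torch.randn(8, 3, 32, 32)        # a batch of (fake) photos
    labels = torch.randint(0, 10, (8,))       # their (fake) object labels
    loss = loss_fn(model(images), labels)     # how badly the network misread this batch
    optimizer.zero_grad()
    loss.backward()                           # work out which weights caused the errors
    optimizer.step()                          # correct them, then repeat the process
```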
The Google Research Blog stated, "One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation."
Google provided the example of a banana: to see what the network thinks a banana looks like, start with an image full of random noise, then slowly tweak the image toward what the network would perceive as a banana. The process is tricky, but Google was able to make it work by imposing a constraint that adjacent pixels should be correlated, as they are in natural images.
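A simplified sketch of that idea is shown below. It assumes a generic pretrained image classifier (torchvision's ResNet-18 here, not Google's own network) and ImageNet's "banana" class index; starting from random noise, the pixels are repeatedly nudged so the banana score rises, while a penalty on differences between adjacent pixels stands in for the pixel-correlation constraint Google describes.

```python
# A simplified sketch: start from random noise and adjust the pixels so the
# network's score for one class ("banana") goes up, while a penalty keeps
# neighboring pixels correlated. The pretrained model and class index are
# illustrative assumptions, not details taken from Google's post.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
banana_class = 954                     # ImageNet index for "banana"

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # pure noise to start
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    score = model(image)[0, banana_class]
    # Penalize differences between adjacent pixels so the result looks
    # more like a natural image than like structured noise.
    tv = (image[..., 1:, :] - image[..., :-1, :]).abs().mean() + \
         (image[..., :, 1:] - image[..., :, :-1]).abs().mean()
    loss = -score + 0.1 * tv           # maximize the banana score, keep pixels smooth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In principle, after enough steps the noise drifts toward banana-like colors and textures, which is roughly the effect Google's post illustrates.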
Google said that its research on neural networks could help improve network architectures and tackle difficult classification tasks. The company added that it might even shed light on the roots of the creative process in general.