The yellow robot arms dance through an assembly demo for Elon Musk and the rest of the tour group that visited the reopening of the former NUMMI plant, now Tesla Motors. (Photo : Wikipedia)

Elon Musk, CEO of Tesla Motors and the private space company SpaceX, is known for his distrust of artificial intelligence.

Musk's wariness of AI runs deep enough that he donated US $10 million to the Future of Life Institute (FLI) to fund a research program whose goal is to ensure that AI does not escape human control, supporting work on regulation and prevention so that AI cannot endanger the human race.

Bloomberg reports that the $10 million will be distributed as grants to 37 research projects around the globe. FLI will give out $7 million from Musk's funding, and a further $1.2 million will be granted by the Open Philanthropy Project.

According to FLI president Max Tegmark, there is a race under way between the growing power of modern technology and the growing wisdom needed to manage it. While most investment by big technology companies has focused on making current systems more intelligent, this funding is meant to ensure that increasingly intelligent AI remains safe and beneficial.

Individually, the funded proposals may seem narrow, but they fit together as pieces of a bigger puzzle. Three of the projects, based at UC Berkeley and Oxford, aim to teach AI systems what humans want by observing human behavior, while two other projects will focus on developing ethical systems in which an AI can explain its decisions to humans.

Other projects are more ambitious: one aims to establish a framework for keeping AI-related weapons under meaningful human control. The concern is timely, as the U.S. Navy already uses autonomous drone technology in ships and planes to hunt down enemies.