YIBADA

Microsoft sorry for racist AI trouble; Tay now offline

| Mar 27, 2016 08:49 AM EDT


Microsoft issued an apology on March 25 for the trouble caused by its Twitter chatbot. The company has taken the AI bot offline for now and says it is working on fixing the problems before bringing it back online.

Peter Lee, corporate vice president of Microsoft Research, wrote on Microsoft's official blog that a coordinated group of users had exploited a vulnerability in the bot's software to turn it into a bot that sent offensive and hurtful tweets. He did not say what the vulnerability was, and Microsoft took the bot offline less than 24 hours after its launch.

The vice president also revealed that this is the second chatbot the company has released to the public. The first, XiaoIce, runs on Chinese messaging services and is now used by around 40 million people.

Encouraged by that success, the company wanted to try releasing another AI bot, but the experiment ended in failure. The cultural environment chosen for the second bot was the English-speaking Twitter community, and that environment led it to mimic some of the worst behavior on the internet, such as online harassment.

Lee said that the team behind the AI bot had stress-tested it under many different scenarios but found this flaw only after it went live. Although they had prepared for many types of abuse of the system, they admitted to a critical oversight on this specific problem.

The vice president said the company remains steadfast in its efforts to learn from this mistake. He did not specify when the chatbot will be back online.

Zoe Quinn, a game developer and internet activist, was among the Twitter users abused by the corrupted bot after it was coaxed into insulting her with an anti-feminist comment, according to The Verge. Quinn criticized Microsoft on Twitter for failing to take into account how easily the bot could be exploited.

