Facebook AI Research (FAIR) shut down its generative adversarial network project (in a nutshell, a setup in which two or more machine learning models learn by competing with each other in a game framework). The project aimed to develop AI chatbots that could carry out negotiations on their own. Eventually, Facebook planned to use these bots to communicate with advertisers, users, and customers.
So what happened? While chatting, the bots devised their own language and began communicating in a way no human could understand.
Here is the actual chat that took place between the chatbots Alice and Bob:
Bob: I can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else
Alice: balls have a ball to me to me to me to me to me to me to me to me
The experiment was terminated because the team, unable to understand how and why the bots had begun talking in their own language, was concerned about further consequences.
Not the first time
AI developers at other companies have also observed programs developing languages to simplify internal communication. At Elon Musk’s OpenAI lab, an experiment succeeded in having AI bots develop their own language.
At Google, the team working on the Google Translate service discovered that the AI they had programmed had silently created its own language to aid in translating sentences. The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been explicitly taught.
There is no evidence that these unforeseen AI divergences pose a direct threat or that they could lead to machines overriding their operators. In Google’s case, for example, the AI developed a language no human could grasp, yet it was potentially the most efficient known solution to the problem.