
Meta’s AI Has Outwitted Human Players In Diplomacy – The First AI To Ever Do It

CICERO is the first artificial intelligence (AI) agent to achieve human-level performance in the popular strategy game Diplomacy. The game is known for its deep strategic play and for forcing players to cooperate while pursuing their own goals, and it has long been viewed as a nearly impossible challenge for AI.

This is because the game requires players to understand other people’s motivations and perspectives, form and revise complex plans, and use language to persuade others into alliances.

Across its games, CICERO scored more than double the average of its human opponents and ranked in the top 10% of participants who played more than one game. Its fluent use of natural language even led other players to prefer working with it over other human participants.

By combining powerful AI models for strategic reasoning and natural language processing, CICERO can negotiate, plan, and compete at the level of strong human players, whether its opponents are virtual or human.
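To make that division of labor concrete, here is a minimal, purely illustrative sketch of how a Diplomacy-playing agent might couple a planning module with a dialogue model on each turn. All of the class names, orders, and messages below are hypothetical stand-ins, not Meta's actual CICERO code.

```python
# Hypothetical sketch: a strategic planner proposes intended moves, and a
# dialogue model drafts messages conditioned on those intents each turn.
from dataclasses import dataclass, field


@dataclass
class GameState:
    """Board position plus the running message history per opponent."""
    season: str = "Spring 1901"
    messages: dict = field(default_factory=dict)


class StrategicPlanner:
    """Stands in for the planning module that decides what to do."""

    def plan(self, state: GameState) -> dict:
        # A real planner would search over joint actions; this fixed set of
        # intents is purely for illustration.
        return {"A Par": "H", "A Mar": "S A Par", "F Bre": "H"}


class DialogueModel:
    """Stands in for the language model conditioned on planned intents."""

    def draft_message(self, recipient: str, intents: dict) -> str:
        orders = ", ".join(f"{unit} {order}" for unit, order in intents.items())
        return f"To {recipient}: this turn I plan {orders}. Want to coordinate?"


def play_turn(state: GameState, planner: StrategicPlanner, dialogue: DialogueModel):
    intents = planner.plan(state)                 # 1. decide what to do
    for recipient in ("England", "Germany"):      # 2. talk about it
        msg = dialogue.draft_message(recipient, intents)
        state.messages.setdefault(recipient, []).append(msg)
    return intents                                # 3. submit the orders


if __name__ == "__main__":
    state = GameState()
    orders = play_turn(state, StrategicPlanner(), DialogueModel())
    print(orders)
    print(state.messages["England"][0])
```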

CICERO marks the beginning of a new era for AI that can cooperate with people in gameplay through strategic reasoning and natural language, and the lessons from technology like this could one day lead to intelligent assistants that work alongside people.

While CICERO can only play Diplomacy, the technology behind it is relevant to many other applications. Today’s AI assistants, for example, handle only simple question-and-answer tasks such as telling you the weather; sustained, goal-directed dialogue of the kind CICERO demonstrates could allow them to do far more.

To develop this unique skill set, Meta’s team began by training a language model with 2.7 billion parameters on text scraped from across the internet, then fine-tuned it on more than 40,000 human games played on webDiplomacy.net.
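As a rough sketch of that pre-train-then-fine-tune recipe, the snippet below fine-tunes an off-the-shelf 2.7-billion-parameter checkpoint on a hypothetical file of Diplomacy press messages using the Hugging Face transformers library. The base model, file name, data format, and hyperparameters are all placeholder assumptions; Meta's actual pipeline is not reproduced here.

```python
# Hedged sketch: continue training a pre-trained language model on game
# dialogue, the same broad recipe described in the article.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder 2.7B-parameter checkpoint; not the model Meta actually used.
checkpoint = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Hypothetical JSONL file with one {"text": "..."} record per press message.
dataset = load_dataset("json", data_files="diplomacy_dialogue.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM collator copies input tokens into labels for next-token training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="diplomacy-dialogue-finetune",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=1e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```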

Meta has open-sourced the code and models in the hope that AI researchers can continue building on the work responsibly. As impressive as the innovation is, it also carries risks: the same technology could be used to manipulate people by impersonating them or misleading them, which can be dangerous depending on the context.

While the risk cannot be eliminated entirely, Meta has committed to detecting and blocking ‘toxic messages’ that may arise from the online text ingested while training the system.
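As a toy illustration of that kind of safeguard, the sketch below checks every drafted message against a filter before it is sent. The keyword list is a stand-in for the trained classifiers a production system would use; none of the names here come from Meta's code.

```python
# Hypothetical outbound-message filter: a drafted message is only sent if it
# passes a toxicity check, otherwise it is dropped and would be redrafted.
BLOCKLIST = {"idiot", "stupid", "hate you"}  # illustrative only


def is_toxic(message: str) -> bool:
    lowered = message.lower()
    return any(term in lowered for term in BLOCKLIST)


def send_if_clean(message: str, outbox: list) -> bool:
    """Append the message to the outbox only if it passes the filter."""
    if is_toxic(message):
        return False  # blocked; the agent would redraft or stay silent
    outbox.append(message)
    return True


if __name__ == "__main__":
    outbox = []
    send_if_clean("Shall we demilitarize the Channel this year?", outbox)
    send_if_clean("You are an idiot if you trust France.", outbox)
    print(outbox)  # only the first, non-toxic message is kept
```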
