As humans, we have a gift. We know how to lie. And that gives us an edge over machines. The future war against the machines will most likely be decided by who can lie better. But before that, machines need to learn how to lie first. When DeepMind’s AlphaGo AI beat Lee Sedol, it was reacting to Sedol’s moves on a board visible to both the human and the AI. That, though, is not the case at the Brains vs Artificial Intelligence competition going on at the Rivers Casino in Pittsburgh.
Libratus is an AI designed to play poker and take on human champions at a game that has often been considered to be more about math than luck. Indeed, poker is a game of probabilities, and the added element of bluffing makes it even more complex. No-Limit Hold ‘Em is poker in its purest form: players can bet as much as they like, with no cap on how much money is on the line.
In essence, a No-Limit poker hand can go in countless different directions. While the outcomes, in terms of the hands each player can hold, are finite, how and what they bet is practically unpredictable. Experienced poker players use a combination of probabilities and experience to beat the game. That’s easy for a machine to learn over time. What’s difficult, however, is the prospect of bluffs, where a player is advertising a hand he or she doesn’t have. How does a machine predict that? Moreover, there is absolutely no way to predict what the next community card dealt in the middle will be.
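To put that “finite, yet countless” point in perspective, here is a minimal combinatorics sketch (plain Python; the counts are standard card math, and the variable names are just illustrative): even before a single bet is made, a player faces well over two million possible board runouts.

```python
# A minimal sketch of the card combinatorics behind "finite, yet countless".
# Standard library only; variable names are illustrative.
from math import comb

starting_hands = comb(52, 2)     # distinct 2-card holdings: 1,326
opponent_holdings = comb(50, 2)  # what the opponent can hold once your cards are known: 1,225
possible_boards = comb(50, 5)    # 5-card community boards from the remaining 50 cards: 2,118,760

print(starting_hands, opponent_holdings, possible_boards)
```

And that is before layering on the betting, which is where the real unpredictability lives.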
We don’t really know yet, but what is certain is that a machine has finally done so. Libratus has won over $1.5 million in chips from pro poker players Dong Kim, Jason Les, Jimmy Chou and Daniel McAulay. Each of the matches is being live-streamed on Twitch.
Imperfect Information
A big challenge in AI research has been to deal with imperfect information. Poker requires you to interpret a player’s actions and decide what they mean. The same bet can be used as a bluff in one hand and as intimidation in another. This is exactly why professionals have often reiterated that you play the player, not the hand.
When AIs beat professionals at chess and Go, they had millions of possibilities to process, but those were still finite. After playing millions of rounds of those games, the machines learned how to react to different moves. That’s easier said than done in poker. In fact, Tuomas Sandholm, the computer science professor at Carnegie Mellon University who designed Libratus with his PhD student Noam Brown, told The Guardian that he wasn’t confident the AI would succeed in beating poker pros. International betting sites had placed the AI at 4-1 odds, he said.
Sandholm also told The Guardian that Libratus hadn’t actually been taught to play poker. The researchers gave the AI the rules, and it learned to play over a period of time, after playing millions of hands. Brown and Sandholm had every reason not to be confident, too. After all, their previous AI, named Claudico, had failed to beat professionals at the 2015 Brains vs Artificial Intelligence event. Libratus is an improved version of Claudico.
Interestingly, much like AlphaGo’s victory over Lee Sedol, the pros involved in this game said the AI was more aggressive than usual and used plays that humans typically wouldn’t, describing its style simply as “aggressive”. Brown himself told The Guardian that he didn’t know his AI could bluff humans; he confessed it’s not something Libratus had been taught, but something it learned over time. For example, in a hand against Jason Les, Libratus was waiting on a club to complete its flush. When the river card (the final community card) was dealt, the club didn’t come. Instead of folding, which would normally be the right play, the AI made an aggressive bet, leaving the human flummoxed as to what its hand was.

Similarly, the AI has been over-betting the pot, meaning it bets far more money than is actually available to win. So, in a $100 pot, the AI can make bets that are ten times as large. This would make little sense to humans, since the probabilities and pot odds say you’re making a bad decision. Unlike humans, however, the AI does whatever it thinks is necessary to win. Would the same fly in an AI vs AI match? We don’t know. But it sure worked against humans.
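To see why such an overbet defies conventional wisdom, here is a minimal back-of-the-envelope sketch (in Python; the $100 pot, the 10x overbet and the nine-out flush draw come from the examples above, while the helper functions are purely illustrative): a player facing a bet of ten times the pot needs to win nearly half the time just to break even on a call, while a missed flush draw would only have arrived on the river about a fifth of the time.

```python
# A minimal sketch of the pot-odds arithmetic behind the overbet example above.
# The $100 pot, 10x overbet and 9-out flush draw come from the article;
# the helper functions are illustrative, not Libratus' actual method.

def equity_needed_to_call(pot, bet):
    """Fraction of the time a call must win to break even: bet / (pot + 2 * bet)."""
    return bet / (pot + 2 * bet)

def river_draw_probability(outs, unseen=46):
    """Chance of hitting one of `outs` cards on the river, with 46 cards unseen."""
    return outs / unseen

pot = 100            # chips already in the middle
overbet = 10 * pot   # a Libratus-style overbet: ten times the pot

print(f"Equity needed to call a {overbet} bet into a {pot} pot: "
      f"{equity_needed_to_call(pot, overbet):.1%}")      # ~47.6%
print(f"Chance of completing a 9-out flush on the river: "
      f"{river_draw_probability(9):.1%}")                # ~19.6%
```

Against numbers like these, a human would usually fold the missed draw; Libratus bet anyway.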
The robot rising begins?
Much like AlphaGo’s victory, Libratus’ victory also has a deeper meaning. Both machines have proven themselves capable of near-intelligent thought, which can now be utilised for other applications. Libratus in particular may be useful for tasks involving imperfect information, which describes most interactions in life. The ability to deal with imperfect information is what makes humans the most advanced species on the planet. Libratus could be a big step forward in AI research, and the fact that it knows how to lie could actually lead to specific use-cases being solved.
Of course, Libratus currently plays only heads-up (one-on-one) against the pros. A nine-player table would multiply the possibilities manifold, but that added complexity should also give the AI more to learn from and improve on. The learning algorithms behind such an AI are built for exactly that kind of scale.