Computers eventually will defeat human players of Go, but the beauty of the ancient Chinese game of strategy that has fascinated people for thousands of years will remain, the Go world champion said Tuesday.

South Korean Lee Sedol, a Go master who has won 18 international titles since becoming a professional player at age 12, said the risk of human error means he may not win his match this week against Google’s artificial intelligence machine, AlphaGo. “Because humans are human, they make mistakes,” the 33-year-old said a day before the first of the five games he is due to play against AlphaGo. “If there are human mistakes, I could lose.”
It was Lee’s first admission of weakness against Google’s AI machine, and a dialing down of his confidence from two weeks ago, when he had predicted a 5-0 result in his favor. After watching Google’s presentation of how AlphaGo works, Lee said he thought a machine might be able to imitate human intuition, even though that intuition may not be as sharp as a person’s.

A loss for Lee would be a historic moment for the AI community.

Human errors are not his only vulnerability. Lee said that in playing against a machine, the absence of the visual cues that human players use to read the reactions and psychology of their opponents puts him in unfamiliar territory.

“In a human versus human game, it is important to read the other person’s energy and force. But in this match, it is impossible to read such things. It could feel like I’m playing alone,” Lee said.

Because the number of possible Go board positions exceeds the number of atoms in the universe, top players rely heavily on their intuition, said Demis Hassabis, who heads Google’s DeepMind, the developer of AlphaGo. This has made Go one of the most complex games ever devised and the ultimate challenge for AI experts, who had expected it would take at least another decade for a computer to beat a professional Go player.

That changed last year, when AlphaGo defeated a European Go champion in a closed-door match later published in the journal Nature.

Google’s DeepMind team created a system to narrow down a vast search space of near-infinite possible sequences of moves in the game. AlphaGo was first trained to mimic experts’ Go moves based on data from about 100,000 Go games available online. Then it was programmed to play against itself and “learn” from its mistakes. The team also designed a system that enabled AlphaGo to anticipate the long-term results of each move and predict the winner.

Using this approach, AlphaGo beat the European Go champion by searching through far fewer positions than a traditional AI machine like Deep Blue, the famed IBM computer that defeated the world chess champion in 1997, would have to consider, Hassabis said.

AlphaGo also has other strengths as a machine. “I think the advantage of AlphaGo is that it will never get tired and it will not get intimidated either,” Hassabis said.

Lee said he hopes to hold onto his title, but also wants to remind audiences that the game is not all about victory. Known as baduk in Korean and weiqi in Chinese, Go is more than a game in Asia. Players’ moves reflect their personalities and distinctive styles, and the life-and-death battles between black and white stones for territory on the 19-by-19 grid are often used to illustrate important life lessons.

“Of course I can lose. But a computer does not play by understanding the beauty of Go, the beauty of humans,” he said.
“My job is to play Go more beautifully.” That beauty, many Go fans believe, is something a machine cannot replicate.
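For readers curious about the idea Hassabis describes, anticipating the long-term result of each move and predicting the winner, here is a toy sketch. It is a hypothetical illustration only, not AlphaGo's actual method (which relied on neural networks and large-scale search): for a trivial race-to-10 counting game, it computes an exact "value" for every position by looking ahead to the end of the game.

```python
from functools import lru_cache

TARGET = 10        # the first player to reach exactly 10 wins
MOVES = (1, 2, 3)  # on each turn, a player adds 1, 2, or 3 to the total

@lru_cache(maxsize=None)
def mover_wins(total):
    """Exact 'value function': does the player to move win from this total?"""
    if total >= TARGET:
        # the previous player already reached the target, so the mover has lost
        return False
    # the mover wins if any move leads to a position where the opponent loses
    return any(not mover_wins(total + m) for m in MOVES)

def best_move(total):
    """Pick a winning move if one exists, else fall back to the first legal move."""
    for m in MOVES:
        if not mover_wins(total + m):
            return m
    return MOVES[0]

# From a total of 7 the winning move is 3 (reaching 10 immediately);
# from a total of 6 every move loses against perfect play.
print(best_move(7), mover_wins(6))
```

Go is far too large for this kind of exhaustive lookahead, which is exactly why, per the article, AlphaGo instead learned to approximate such a winner prediction and used it to narrow its search.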