Today, the Go world championships. Tomorrow, the world.
For those who don’t know: AlphaGo, a Go-playing computer designed by the London-based Google subsidiary DeepMind, recently defeated Lee Sedol 9p. Lee’s 9p, or “ninth dan,” rating marks him as a world-class professional Go player.
AlphaGo defeated him four games to one, and for the victory it was awarded honorary 9p status of its own.
For many who study AI and the future of computing, proficiency at Go was considered one of the last hurdles before computers started to overtake humans at everyday tasks. This is because, unlike simpler games like tic-tac-toe, or even chess, Go absolutely defies brute-force reasoning.
For those simpler games, it is possible to model every possible move and determine which yields the highest probability of winning. In Go, however, a stone can be placed on any unoccupied intersection of the 19×19 grid, and the possibility space is simply too big to search in any reasonable time frame. To succeed at Go, a computer can’t pick the best move from an exhaustive list of possibilities. Instead, it faces the far harder task of narrowing that vast space down to a handful of promising moves that can be checked quickly.
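To get a feel for the numbers, here is a back-of-the-envelope sketch in Python. The branching factors and game lengths are commonly cited ballpark figures, not exact counts, and the board-state bound is a deliberately loose one:

```python
# Back-of-the-envelope comparison of game-tree sizes.
# Branching factors and game lengths are rough ballpark figures,
# not exact counts.

CHESS_BRANCHING, CHESS_PLIES = 35, 80   # ~35 legal moves, ~80-ply game
GO_BRANCHING, GO_PLIES = 250, 150       # ~250 legal moves, ~150-ply game

chess_tree = CHESS_BRANCHING ** CHESS_PLIES
go_tree = GO_BRANCHING ** GO_PLIES

# Loose upper bound on Go board states: each of the 361 intersections
# is empty, black, or white (many such states are illegal, so the true
# count is smaller).
go_states_bound = 3 ** 361

print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")   # ~10^123
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")      # ~10^359
print(f"Go board states: <10^{len(str(go_states_bound))}")  # <10^173
```

Even the loose 3^361 bound on board states comes out around 10^172, vastly more than the roughly 10^80 atoms in the observable universe, which is why exhaustive search is hopeless.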
AlphaGo’s success is the next step in a series, including IBM’s Watson and Google’s self-driving cars, that indicates a distinct trend. The logical end point of that trend is true artificial general intelligence (AGI): computers which outclass humans not at some things, but at everything.
I usually try to avoid excessive excitability in this column, but I won’t lie now: as exciting as this development is, I also find it terrifying.
I find it terrifying because, as a computer science major, I know how stupid computers can be. That stupidity, imposed on a true AGI, isn’t as science-fiction-esque as it sounds, if only because it would make for really bad reading. If you tell a computer to start counting but neglect to tell it when to stop, it won’t.
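As a minimal sketch of that literal-mindedness (this is a toy program, nothing resembling an AGI): it counts exactly as instructed, and since no one specified a stopping condition, it never stops.

```python
# Count, starting from zero, exactly as instructed.
# No stopping condition was specified, so none exists:
# this loop runs until something outside it kills the program.
n = 0
while True:
    print(n)
    n += 1
```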
It wouldn’t take a mad scientist trying to destroy the world or Skynet “going crazy.” All it would take is someone telling an AGI to paint all the cars in the world pink while failing to specify the endpoints or constraints.
The possible failures include, but are by no means limited to, the trivial, such as the AI painting all cars red because red is close to pink; the tragic, such as the AI killing someone who happened to be standing between the painter-bot and the nearest unpainted car; and the cataclysmic, such as the AI building nano-bots whose sole purpose is to make pink cars and which consume all the Earth’s matter to do so.
In this world where computers are ubiquitous, the prospect of AGI emerging is not dissimilar to giving everyone in the world access to nuclear launch codes. It won’t even take malice to destroy the world; idiocy will more than suffice.
I don’t think that AGI will emerge in the next year, but I think it would be the most blatant folly not to consider the possibility of its coming into being in the next 100 years. That means it’s time to start thinking through the risks of AGI now, while we still have time.
AI researcher Eliezer Yudkowsky said that if he were made god-emperor, one of his top priorities would be to create “a Manhattan Project on an island somewhere, with pay competitive with top hedge funds, where people could collaborate on researching parts of the Artificial General Intelligence problem without the publication of their work automatically moving us closer to the end of the world.”
This doesn’t seem too much to ask to ensure that the world’s still here 200 years from now. Can we please stop acting like it is?