by Cade Metz (courtesy arstechnica)
Geoffrey Hinton (right), one of the machine-learning scientists hard at work on the Google Brain. Photo: University of Toronto
Your brain is a collection of neurons — tiny cells that use electrochemical signals to send and receive information. But as Google builds an artificial brain that will help drive everything from its web search engine to Google Street View to the voice-recognition app on Android smartphones, it’s using very different materials. Among them: graphics microprocessors, the same sort of silicon chips that were first designed to process images and videos on your desktop computer.
That’s the word from Geoffrey Hinton, the artificial intelligence guru who was recently hired by the search giant to continue work on the so-called Google Brain. When we spoke to Hinton just after his “deep learning” operation was acquired by Larry Page and company, he didn’t provide specifics, but he said that Google is now using graphics processing units, or GPUs, to help power its brain-mimicking neural networks.
It’s a counter-intuitive arrangement. Though GPUs were designed for processing images, video, and games, Google is using them in a more general way, as you would normally use a machine’s main microprocessor, or CPU. But because they’re so good at processing large amounts of information in parallel, completing many small tasks at the same time, GPUs can be applied to almost any computing task that requires some hefty horsepower.
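To see what “many small tasks at the same time” means in practice, here is a toy sketch, in plain Python rather than anything Google runs, of the data-parallel style of work a GPU excels at: one simple operation applied independently to many elements. A process pool stands in, very loosely, for the GPU’s thousands of cores; the function and names are hypothetical.

```python
# Toy illustration of data-parallel work: brighten every pixel of an
# image. Each element is independent, so the job splits cleanly across
# workers -- the same property that lets a GPU run it on thousands of
# cores at once.
from concurrent.futures import ProcessPoolExecutor

def brighten(pixel):
    """One tiny, independent task: add a constant, clamped to 255."""
    return min(pixel + 40, 255)

def brighten_image(pixels, workers=4):
    # A process pool is a crude stand-in for massively parallel hardware.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten, pixels, chunksize=256))

if __name__ == "__main__":
    image = [10, 100, 250] * 1000  # a fake grayscale image
    brighten_image(image)
```

The point is not the pool itself but the shape of the problem: no element depends on any other, so the more lanes of hardware you have, the faster it goes.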
“I can’t comment on what Google is doing. But it’s a natural fit. GPUs love big problems,” says Ian Buck, an engineer at graphics chip maker Nvidia who founded the CUDA project, a software platform that helps developers build applications for GPUs. “They’re designed to process huge amounts of information in parallel. Mimicking the human brain — where you have billions of neurons all firing at the same time — is really just one big parallel simulation.”
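Buck’s “one big parallel simulation” can be made concrete with a minimal, hypothetical sketch of a neural-network layer in pure Python: every neuron computes its activation from the same inputs, independently of its neighbors, so on parallel hardware all of them can fire at once.

```python
# A minimal neural-layer sketch (illustrative only). Each neuron is a
# weighted sum of the inputs passed through a sigmoid.
import math

def neuron(weights, bias, inputs):
    """One neuron firing: weighted sum plus bias, then a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(weight_rows, biases, inputs):
    # Each neuron's computation is independent of the others. This
    # sequential loop is exactly what a GPU would run simultaneously.
    return [neuron(w, b, inputs) for w, b in zip(weight_rows, biases)]
```

Scale the loop up to millions of neurons and the sequential version becomes hopeless, which is why hardware that runs every iteration at once is such a natural fit.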
Google is just one of many companies that are now using GPUs for all sorts of tasks inside the modern data center. The London-based Shazam is using GPUs to help identify songs and artists that match your particular music tastes. Salesforce has installed GPUs to analyze information streaming across millions of Twitter feeds. Amazon has long offered a cloud service that provides instant GPU power to anyone who wants it. And a San Francisco startup called imgix now provides a GPU-based online service that lets virtually any website rejigger images as they’re served onto user PCs and mobile devices.
“The graphics processor is almost like a misnomer now,” says imgix CEO and co-founder Chris Zacharias, who cut his teeth as a software engineer at Google and YouTube. “A GPU is just something that does a kind of mathematics, and those mathematics can be applied to many, many fields.”
GPUs have long lent their parallel processing power to a decent chunk of the world’s supercomputers, those massive machines that run specialized scientific applications across tens of thousands of chips. These chips are ideal for, say, building a simulation of the world’s weather patterns. About 50 of the planet’s 500 fastest supercomputers now rely on GPUs, including the Oak Ridge National Laboratory machine that sits atop the list.
But these chips have only recently moved into the data centers that help drive the web. Amazon launched its GPU cloud service in 2010, and this spring, Nvidia revealed that Salesforce and Shazam were using Nvidia GPUs to power their online services. But Google’s project takes the trend even further, potentially moving GPUs into some of the web’s most widely used services, including the primary Google search engine.
Salesforce declined to comment on its use of GPUs. And Shazam wasn’t immediately available to discuss its GPU work. But according to Nvidia and public documents discussing these two projects, both are tapping GPUs for their raw parallel processing power. In a public presentation, Salesforce engineer Brendan Wood says the company uses GPUs to search vast numbers of Tweets and other social networking posts for certain keywords. The company’s “Marketing Cloud” analyzes about 500 million incoming tweets a day, looking for about a million different keywords.
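The shape of that keyword-matching workload is easy to sketch. The toy Python below is not Salesforce’s system, and the function name is invented, but it shows why the job parallelizes so well: every post is scanned independently against the keyword set, so a GPU can assign one thread per post.

```python
# Toy keyword matcher (illustrative only): scan each post for any word
# in a keyword set. Every post is checked independently -- on a GPU,
# one thread could handle each post.
def match_keywords(posts, keywords):
    kw = {k.lower() for k in keywords}
    hits = []
    for i, post in enumerate(posts):
        found = set(post.lower().split()) & kw
        if found:
            hits.append((i, sorted(found)))
    return hits
```

At 500 million tweets a day against a million keywords, the appeal of hardware that checks many posts simultaneously is obvious.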
This has nothing to do with graphics processing. But if need be, these chips can certainly be applied to graphics services, the sort of thing they were originally designed for. imgix has built up a GPU-powered infrastructure that can re-crop and re-format web images in real time, as they’re served onto end-user machines. If someone visits your site with an Apple iPad, for instance, imgix can instantly resize the image for the tablet’s Retina display. The company plans to eventually rejig videos in similar ways.
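A resize of this kind is itself a textbook parallel job. The sketch below is a deliberately simple nearest-neighbor resize in plain Python, not imgix’s pipeline: each output pixel is computed from the source image independently of every other output pixel, so the two loops map naturally onto GPU threads.

```python
# Nearest-neighbor image resize (illustrative only). `pixels` is a
# row-major list of length src_w * src_h; each output pixel maps back
# to its nearest source pixel, independently of all the others.
def resize(pixels, src_w, src_h, dst_w, dst_h):
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h      # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w  # nearest source column
            out.append(pixels[sy * src_w + sx])
    return out
```

Production resizers use better filters than nearest-neighbor, but the independence of the output pixels, which is what makes the job GPU-friendly, is the same.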
Nvidia, one of the world’s leading graphics chip makers, has spent years trumpeting the GPU as the future of massively parallel processing. But now it appears that this future is finally here. Where Google goes, the rest of the web follows.