Deep learning, neural networks, imitation games—what does any of this have to do with teaching computers to “learn”?
Machine learning is the process by which computer programs grow from experience. This isn’t science fiction, where robots advance until they take over the world. When we talk about machine learning, we’re mostly referring to extremely clever algorithms.

In 1950, the mathematician Alan Turing argued that it’s a waste of time to ask whether machines can think. Instead, he proposed a game: a player holds two written conversations, one with another human and one with a machine. Based on the exchanges, the player has to decide which is which. This “imitation game” would serve as a test for artificial intelligence. But how would we program machines to play it? Turing suggested that we teach them, just like children: instruct them to follow a series of rules, while enabling them to make minor tweaks based on experience.

For computers, the learning process just looks a little different. First, we need to feed them lots of data: anything from pictures of everyday objects to details of banking transactions. Then we have to tell the computers what to do with all that information. Programmers do this by writing lists of step-by-step instructions, or algorithms. Those algorithms help computers identify patterns in vast troves of data, and based on the patterns they find, computers develop a kind of “model” of how that system works.

For instance, some programmers are using machine learning to develop medical software. First, they might feed a program hundreds of MRI scans that have already been categorized. Then, they’ll have the computer build a model to categorize MRIs it hasn’t seen before. That way, the medical software could spot problems in patient scans or flag certain records for review. Complex models like this often require many hidden computational steps. For structure, programmers organize all the processing decisions into layers; that’s where “deep learning” comes from.
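The workflow just described, feeding a program labeled examples, letting it build a model, then using that model on data it hasn’t seen, can be sketched with a toy classifier. Everything here is an illustrative assumption: the feature vectors, the labels, and the nearest-centroid rule stand in for a real system, which would use far richer data and models.

```python
def build_model(examples):
    """Build a simple 'model': the average feature vector for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Categorize a new example by the closest label average (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Toy training data: (feature vector, category) pairs that have
# already been labeled, like the pre-categorized scans above.
training = [
    ([1.0, 1.0], "normal"),
    ([1.2, 0.9], "normal"),
    ([4.0, 4.2], "flagged"),
    ([3.8, 4.0], "flagged"),
]

model = build_model(training)
print(classify(model, [4.1, 3.9]))  # a new, unseen example -> "flagged"
```

The pattern the program “finds” here is trivial (where each category’s examples cluster), but the shape of the process is the same: labeled data in, model out, then predictions on unseen data.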
These layers mimic the structure of the human brain, where neurons fire signals to other neurons. That’s why we also call them “neural networks.” Neural networks are the foundation for services we use every day, like digital voice assistants and online translation tools. Over time, neural networks improve at listening and responding to the information we give them, which makes those services more and more accurate.

Machine learning isn’t just something locked up in an academic lab, though. Lots of machine learning algorithms are open-source and widely available, and they’re already being used for many things that influence our lives, in large and small ways. People have used these open-source tools to do everything from training their pets to creating experimental art to monitoring wildfires. They’ve also done some morally questionable things, like creating deepfakes: videos manipulated with deep learning.

And because the data and algorithms that machines learn from are chosen and written by fallible human beings, they can contain biases. Algorithms can carry the biases of their makers into their models, exacerbating problems like racism and sexism.

But there is no stopping this technology, and people are finding more and more complicated applications for it, some of which will automate things we are accustomed to doing for ourselves, like using neural networks to help power driverless cars. Some of these applications will require sophisticated algorithmic tools, given the complexity of the task. And while that may be down the road, the systems still have a lot of learning to do.
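The layered, neuron-like structure described above can be sketched as a tiny feed-forward network: each layer computes weighted sums of its inputs, squashes them with an activation function, and passes the results to the next layer. The weights and biases below are hand-picked toy values for illustration only; a real network would learn them from data during training.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each 'neuron' takes a weighted sum of the inputs,
    adds a bias, and squashes the result into (0, 1) with a sigmoid."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def network(inputs):
    # Two inputs -> a hidden layer of two neurons -> one output neuron.
    hidden = layer(inputs, [[2.0, -1.0], [-1.5, 2.5]], [0.0, 0.5])
    output = layer(hidden, [[1.0, 1.0]], [-1.0])
    return output[0]  # always a value between 0 and 1

print(network([0.5, 0.8]))
```

Stacking many such layers, with weights adjusted bit by bit from examples rather than set by hand, is what puts the “deep” in deep learning.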