How to Do Exactly What You Want with Artificial Intelligence
17th March 2017 | Steven Struhl
Having computers do more of the heavy lifting in problem-solving is a long-held wish. It goes far back, at least to the earliest days of machine learning. Around 1959, an article about teaching computers to play checkers hoped for the happy day when we would not need to do so much programming for every step. More work for the computer, and less for us, remains a goal. But what exactly do you want the machine to do?
In an old joke, a computer salesman tells an executive that a new computer will cut his work by 50%. “In that case,” the executive answers, “I will take two.”
This may not be the funniest joke you’ve heard, but it underscores an important question. Where do you want to stay in control, and where do you want to cede control to the computers, and/or the friendly vendors running them? We raise this issue more than once in Artificial Intelligence Marketing and Predicting Consumer Choice.
Many of the newer machine learning methods, like ensembles, are basically incomprehensible. That is, they do something and reach an answer, but so much computation and so many steps are involved that their workings remain inscrutable. Neural networks epitomize the class of methods that we cannot fully understand. Precisely what they are doing remains almost entirely out of sight.
In the online sample chapter of Artificial Intelligence Marketing, we showed some of the output from a deep learning neural network. The coefficients (or variable strengths) could in no way be squared with what seemed sensible. The coefficients’ sizes looked wrong and their signs—positive or negative—seemed backward about half the time. How would you know that this does not reflect a basic mistake?
You could argue that the proof is in the doing. But then, this particular network did not do that well. It predicted poorly. It also showed a salient problem with overfitting of data—modeling features that were peculiar to that one data set, but that would not be found in the outside world.
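Overfitting of this kind is easy to demonstrate on a small scale. The sketch below uses synthetic data (not the book's network or data) and polynomial fitting rather than a neural network, but the failure mode is the same: a model with enough flexibility to memorize its training sample reproduces that sample's quirks, and its error on fresh data from the same process balloons.

```python
# Minimal overfitting sketch on synthetic data: a high-degree
# polynomial nearly interpolates a small noisy training set, but
# predicts new draws from the same process far worse than a
# simpler model does.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.2, n)  # true signal + noise
    return x, y

x_train, y_train = make_data(12)    # small training sample
x_test, y_test = make_data(200)     # "outside world" from same process

def mse(coeffs, x, y):
    # mean squared error of a fitted polynomial on (x, y)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

for degree in (2, 11):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree:2d}: "
          f"train MSE {mse(coeffs, x_train, y_train):.4f}, "
          f"test MSE {mse(coeffs, x_test, y_test):.4f}")
```

The degree-11 fit wins on the training points it has memorized and loses on the test points, which is exactly the pattern that should make you distrust a model whose inner workings you cannot inspect.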
Some forms of artificial intelligence, like neural networks, may not do well unless they have mountains of data and tremendous amounts of training to refine the rules that they use. Our example used some 70 data items measured across 1700 people. Since neural networks count each separate value of each data item as an input, and these were 5-point rating scales, the program had some 630,000 data points to use in its models. This apparently was not enough.
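The arithmetic behind that input count can be sketched as follows. The figures are the article's own rounded numbers; the exact total depends on how many response categories each item actually had, which is why the product below lands near, rather than exactly on, the ~630,000 cited.

```python
# Back-of-the-envelope check of the network's input count.
# Each k-point rating scale is dummy-coded into k separate inputs,
# so the data points the network sees are roughly
# items * scale_points * respondents.
items = 70          # rating-scale questions in the study
scale_points = 5    # each 5-point scale expands to 5 dummy inputs
respondents = 1700

inputs_per_person = items * scale_points   # 350 dummy-coded inputs
total = inputs_per_person * respondents
print(f"{total:,} data points")
```

Even a total on this order of magnitude was, as the example showed, not enough for the network to generalize well.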
A recent AI application learned to identify different types of flowers from photographs with impressive accuracy. However, it required training with some 800,000 images of flowers. Similarly, Google’s self-driving cars may be ready for the road any time now—but getting there has taken years and over 2 million miles on the road. The cars need to be trained for every possible contingency, and as we all know, the real world is unpredictable and messy.
A set of remarkable experiments involving sentences called the Winograd schemas also shows us that computers have trouble with common sense, even with inferences that a toddler would breeze through easily. For instance, we have the sentence, “The ball went through the table because it was steel.” We then ask, “What was steel, the ball or the table?” So far, computers are averaging 50% correct, no better than chance.
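That 50% figure is exactly what random guessing yields, since each Winograd question offers two candidate referents. A hypothetical illustration: a system with no common-sense model can only flip a coin, and coin-flipping converges to half right.

```python
# Hypothetical chance baseline for two-choice Winograd questions:
# with no understanding, guessing averages 50% correct.
import random

random.seed(1)
questions = 10_000  # each question has exactly two candidate answers
correct = sum(random.choice([True, False]) for _ in range(questions))
print(f"accuracy: {correct / questions:.3f}")  # near 0.5
```

A score near 0.5 on such a test is therefore evidence of no comprehension at all, which is what makes the computers' results so striking.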
Still, machines are being trained increasingly finely. We may well be coming to the time when we set computers loose on data and they learn by themselves. But no amount of digging through transactions will think up a new strategy for you to follow. Even the most exquisite analysis of historical data cannot give you a single idea for a new product or service. You still need to put in the effort and do the hard thinking.
Computers and machine learning methods have been working alongside us to help solve problems for years. They have provided admirable assistance, zeroing in on predictive relationships that we could never have found without them. You have to decide what meets your needs. Your author has an obvious bias in favor of methods—even the most advanced ones—that keep the details out in the open, where you can test the model against your experience and acumen. These seem to be the unquestionably good uses of machine learning.
How much you want to trust the computer to go off on its own, and even learn in ways that you can barely understand, is becoming an increasingly important decision. Recall that we still (theoretically) have an edge in common sense. Then you need to choose.