Two Problems With Today’s Predictive AI

Numerous reports this week argued that AI will soon outmode human prediction. Computer-based forecasting, these reports said, would be faster, cheaper, more precise, and more reliable than human prognostication.

On one level, this makes sense. As learning systems… learn… within their designated areas of experience, they should become better and better at predicting any repeatable result that follows from previous experience. If something has happened enough times, and a sophisticated AI has seen the patterns around that activity frequently enough, it should be able to predict when and how those patterns will re-emerge in the future.

But there are two chunky fat flies in this smooth, creamy soup of future AI prediction dominance.

First is the reality that AIs -- at least those based on today’s tech -- will only ever be really good at extrapolating from the past. They will be good at answering this question: how likely, and under what circumstances, is Past Event A to happen again? This is useful, but rather limited, because the biggest impacts of the future are hardly ever straight-line extrapolations of the past. The unexpected, the unanticipated, the rounding error is what actually triggers transformation and alters the course of human events. A meme I saw recently put this truth well: the light bulb was not the result of endlessly improving the candle. The ability to make uncanny, sometimes apparently nonsensical, leaps away from derived experience is so far uniquely human, and I believe it will remain so for some time to come.
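To make that limitation concrete, here is a minimal, purely illustrative sketch in Python -- the scenario and every number in it are invented, not drawn from any real system. A least-squares extrapolator dutifully projects the candle trend forward and misses the light bulb entirely:

```python
# A toy illustration, not any real system: a "predictor" that can only
# extrapolate a straight line from past observations. All numbers are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical "lumens per device" for steadily improving candles, years 1-8.
years = list(range(1, 9))
lumens = [10, 11, 12, 13, 14, 15, 16, 17]

slope, intercept = fit_line(years, lumens)
predicted_year_9 = slope * 9 + intercept  # projects ~18: a slightly better candle
actual_year_9 = 450                       # the light bulb arrives: a regime change

print(f"extrapolated: {predicted_year_9:.0f} lumens, actual: {actual_year_9}")
```

No amount of additional candle data would have pointed the model toward the discontinuity; the leap came from outside the data entirely.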

That brings us to the bigger limitation of the notion that AIs represent our future prognostication overlords: trust. Even if all the assembled Solons of Earth swear a particular AI prediction is highly likely, who says human beings will go along with it? Heck, today we have plenty of folks who simply refuse to accept deeply established science, such as the facts that vaccines work and that the average temperature on Earth is rising. If our confounding species won’t accept hard facts, why in heaven’s name would anyone expect us to accept anything an AI has to say?

Even more perversely, any knowledge the general population has of an AI-based prediction could provoke a backlash that, by dumping into reality a set of unexpected behaviors outside the AI’s frame of reference, renders the prediction untrue all on its own. In other words, by acting as if an AI prediction is inaccurate, humans could actually make it inaccurate, thereby reinforcing their initial prejudice.
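As a thought experiment, here is a toy simulation of that feedback loop. The contrarian share and the prediction itself are made-up parameters for illustration, not empirical estimates:

```python
# A toy model of a self-defeating prediction; every number here is invented.
# An "AI" forecasts the fraction of people who will do X from past behavior;
# once the forecast is announced, a contrarian slice of the population
# deliberately does the opposite of whatever was predicted of them.

def announce_and_react(predicted_rate, contrarian_share=0.3):
    """Realized rate of X after contrarians defy the announced prediction."""
    compliant = (1 - contrarian_share) * predicted_rate  # behave as forecast
    defiant = contrarian_share * (1 - predicted_rate)    # flip out of spite
    return compliant + defiant

history_based_prediction = 0.8  # "80% will do X," says the pattern-matcher
realized = announce_and_react(history_based_prediction)

print(f"predicted: {history_based_prediction:.0%}, realized: {realized:.0%}")
# predicted: 80%, realized: 62% -- announcing the forecast moved the outcome,
# handing skeptics fresh "proof" that the AI was wrong all along.
```

The point is not the specific numbers but the shape of the loop: the act of publishing the prediction changes the behavior the prediction was about.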

Holy Asimov, Batman, what a pair-a-dox!

Over the long haul, I believe, human-AI interaction will develop and deepen in largely positive ways. We can use AIs as sophisticated tools across a whole range of activities, freeing us to bring those intrinsically human capacities -- like the uncanny leaps described above -- into play. Even if AIs prove supremely powerful and move into realms once sacrosanct to us, humans are flexible enough to adjust even to that, and perverse enough that we may not be able to stop ourselves from doing so. We are in for some serious rough weather when AIs try to predict what humans will do and the circumstances we will create.

So, if AIs do move toward developing emotional responses, I’m quite sure the first one required of them will be patience.