Ignore the noise about a “terminator scenario” in which machines become self-aware and seek to destroy their flawed human masters. Those of us who live and work in the “salt mines” of machine learning and artificial intelligence are almost universally unafraid. Still, a few well-known technical folk heroes continue to push this sky-is-falling narrative. The most prolific of them is Elon Musk, the famed force behind Tesla and SpaceX. Not only do I think he’s wrong; I think his own company, Tesla Motors, is a compelling proof point against his argument.
When it comes to AI, Musk and Tesla are a fascinating bundle of contradictions. Tesla is one of the heaviest users of AI in manufacturing anywhere in the world, and the company almost single-handedly started the discussion on self-driving cars. (How much more AI can you get than a self-driving car?) Yet even while investing billions in AI to make smart self-driving cars, Musk rails against AI as a threat to our very existence. This is an especially strange narrative when you consider Tesla’s own difficulties with AI: the company produced just 260 new Model 3 cars in the third quarter of 2017, with Musk blaming the delays on a subcontractor dropping the ball, which required Tesla to “rewrite the software from scratch.” Rumor has it that the software in question was an AI element of the battery module production line.
Let me stop here for a moment and drill down on this point: Tesla has bet its future on fully automating the production of the Model 3, removing humans from the equation entirely. I’ll admit I know nothing about manufacturing cars, but I do know software, and I think this is a shrewd play. A new, fully automated manufacturing process will take a while to sort out and get operating well. But once the engineers work through those bugs, the line will move much faster than it ever could with humans as part of the process.
Let’s return to our original question: “Will machines destroy their flawed human masters?” The AI to run a factory is very complex and hard to get right, but at least it involves a finite number of variables. Even so, Tesla has had a difficult time bringing it online. If you can’t make machines solve a finite-state problem that stays pretty much unchanged from day to day, I’m not sure why you’d ever worry that the machines will become self-aware and destroy us.
Tesla is having a tough time bringing its automated production facility online, yet it and all the other car manufacturers want us to trust them to build a fully self-driving car. I don’t know about you, but that scares me. As hard as it is to build the AI for an automated manufacturing facility, it’s a whole lot easier than building the AI for a self-driving car. A car is produced the same way today as it was yesterday, and the same way it’ll be produced tomorrow. Navigating the traffic from your house to the corner store, on the other hand, always presents new and unexpected variables. So if you can’t automate your production facility with absolute perfection, don’t ask me to believe my car can drive me anywhere I want in safety.
In time, AI will get better at approximating human intelligence, but we are still a long, long way from that point. So I’m sorry, Elon, but I don’t think Arnie and Cyberdyne will destroy the planet anytime soon. The technology to make that kind of leap doesn’t yet exist, and personally, I doubt it ever will.
Jeff Catlin is the chief executive officer of Lexalytics, a company that provides cloud and on-prem text analytics solutions.