Artificial Intelligence (“AI”) is seeing a lot of fervent enthusiasm. Claims are being made that entire industries will change, self-driving vehicles and robots will take over, and millions of jobs will be displaced. Since 2010, venture funding into AI has increased 20-fold, from around $500 million to $10.8 billion in 2017. Big corporations spent even more on AI, possibly $40 billion or more in 2017. The initial push has come from large tech companies, yet 70 percent or more of companies still have no actual AI efforts underway.
If you are an executive at one of the majority of companies not yet utilizing AI, is all the talk about computers taking over mere hype, or is it time to act?
AI Has a History of False Starts and Exaggeration
It is natural for business leaders to be skeptical, especially given that AI has suffered from a history of false promises and irrational exuberance.
The 1950s were considered the “Golden Age” of artificial intelligence. In 1956, after IBM scientist Arthur Samuel demonstrated a checkers-playing program on television, running on one of IBM’s first mainframe computers, IBM’s stock rose 15 points in a single day. Claude Shannon, a brilliant researcher at Bell Labs who invented the field of information theory, demonstrated a mechanical mouse named “Theseus” that could find its way through a maze. Researchers at major universities developed AI theory and made significant breakthroughs, while government and private companies invested heavily in AI.
By the late 1960s, many well-known scientists were convinced that machines would outpace human performance within a very short time, and they were not shy about declaring their exuberance. In the November 1970 issue of Life magazine, AI researcher Marvin Minsky asserted that in “three to eight years we will have a machine with the general intelligence of an average human being.”
Sadly, the wild claims of super-human robots were overzealous and the revolution did not happen. AI funding was cut, and most of the 1970s became known as the First AI Winter.
Elation over AI returned in the 1980s when major companies looked to so-called “Expert Systems” to enhance business operations. By the mid-1980s, investment in AI had increased from a trickle to many billions of dollars per year.
But despite much initial enthusiasm, Expert Systems proved expensive and inefficient and were displaced by workstations and PCs. By the late 1980s, AI funding was again cut and researchers declared that the Second AI Winter had begun.
Having survived two AI winters, AI practitioners took a more sober approach from the early 1990s through 2012. Attendance at major AI research conferences dropped precipitously, and the advances that did come were no longer overhyped. Although IBM’s Deep Blue computer defeated world chess champion Garry Kasparov in 1997, the IBM Research website cautions that the breakthrough was “not really AI” but instead massive computing power that was able to look many moves ahead.
By 2010, machine learning algorithms, most of them originally developed in the 1950s, were able to shine as a result of exponentially faster computers and the abundance of online data produced by the internet revolution. But the systems were still time- and knowledge-intensive to program, difficult to implement, and generally unable to do better than human experts.
Time to Pay Attention
This time really is different. Wild excitement about AI is back and some are projecting AI to grow exponentially from here. Enthusiasm is justified because recent advances are achieving once elusive targets and demonstrating systems that really do surpass human ability when focused on specific tasks.
Deep Learning, an approach originally inspired by the workings of the human brain, has produced a succession of breakthroughs. A deep learning system consists of layers of interconnected nodes: hidden layers stacked between an input layer and an output layer. Deep learning was long understood to be a powerful “universal function approximator,” meaning it can model even complex non-linear problems like human vision or natural language processing. But breakthroughs were needed to build and train these systems, and over the past couple of decades researchers methodically found clever solutions to make deep neural networks work.
In more recent years, the pace of breakthroughs applying deep learning to traditional AI techniques has been breathtaking. Deep learning systems have demonstrated superhuman performance doing very human tasks such as image recognition. AI systems can already categorize photos of skin cancers better than dermatologists. Systems that translate from one language to another or that process speech are getting close to achieving human ability.
In December 2017, Google’s UK-based subsidiary DeepMind demonstrated a deep-learning-based program that, when given only the rules of a board game such as chess or Go, could play itself over and over until, within days, it played at superhuman levels and could trounce any human or other computer. Google has used similar self-learning systems to significantly optimize and reduce costs in its server farms. Self-driving cars designed by Waymo, a Google subsidiary, can drive more than 5,000 miles without need for human intervention.
A deep learning framework called a Generative Adversarial Network, or “GAN,” can even demonstrate a kind of imagination. After learning from examples, say pictures of movie stars, a GAN can generate pictures of imagined movie stars. Recently, a Silicon Valley firm, vue.ai, applied this technology to train a system that takes photos of clothes hanging against a plain background and renders them realistically on computer-imagined models.
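The adversarial setup behind a GAN can be sketched in miniature. In the toy example below (the one-parameter networks, learning rate, and one-dimensional “real” data are illustrative assumptions, not any production system), a generator learns to turn random noise into samples resembling real data drawn from a Gaussian, while a discriminator simultaneously learns to tell real from fake:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator should learn to imitate.
real_mean, real_std = 4.0, 0.5

# Generator: an affine map from noise z ~ N(0, 1) to a fake sample.
g_scale, g_shift = 1.0, 0.0
# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(real_mean, real_std, batch)
    fake = g_scale * rng.normal(0, 1, batch) + g_shift

    # Train the discriminator: push real toward 1, fake toward 0.
    grad_real = sigmoid(d_w * real + d_b) - 1.0   # label 1
    grad_fake = sigmoid(d_w * fake + d_b)         # label 0
    d_w -= lr * np.mean(grad_real * real + grad_fake * fake)
    d_b -= lr * np.mean(grad_real + grad_fake)

    # Train the generator: fool the discriminator (fake toward label 1).
    z = rng.normal(0, 1, batch)
    fake = g_scale * z + g_shift
    grad_logit = sigmoid(d_w * fake + d_b) - 1.0
    grad_sample = grad_logit * d_w   # backpropagate through the discriminator
    g_scale -= lr * np.mean(grad_sample * z)
    g_shift -= lr * np.mean(grad_sample)

samples = g_scale * rng.normal(0, 1, 1000) + g_shift
print(f"generated mean: {samples.mean():.2f} (target {real_mean})")
```

The generator is never shown the real data directly; it improves only by using the discriminator’s feedback, which is what lets full-scale GANs “imagine” faces or clothing models they have never seen.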
Applying to Your Business
These new advances are surprisingly accessible. Major companies such as Google and Facebook have made their software libraries publicly available. This means that your company can have direct use of the same application software used by market leaders that have spent billions developing their platforms. This, combined with a robust and open research community, is great news for newcomers looking to apply these technologies.
The effort to apply AI is well justified. According to global consultancy firm McKinsey & Company, businesses that have proactively adopted AI have demonstrated significantly higher profit margins compared to their peers. Applying AI technology after its acquisition of robotics company Kiva, Amazon cut its operating costs by 20 percent by reducing inventories and cutting “click to ship” cycle time from 60 to 15 minutes. Netflix estimates that it saves $1 billion per annum from otherwise canceled subscriptions by implementing an AI algorithm to personalize recommendations.
AI is actively applied in marketing for customer acquisition and retention, in production for planning and maintenance, and in finance for analytics, pricing, cost control, purchasing, and investment. Tech, auto, and financial services companies are leading the way but that does not mean other industries are not looking for a competitive edge. Big River Steel worked with Noodle.ai, a Silicon Valley firm, to build an entire steel mill in Arkansas with integrated AI solutions to, among other things, predict demand, reduce energy costs, enhance purchasing, manage inventory, and optimize production.
Whether working with outside experts or building an internal capability, executives need to become familiar with these new technologies or risk falling behind competitors.