Many executives are enthusiastic about the business potential of machine learning applications. But business leaders often overlook a key issue: To fully unlock the benefits of artificial intelligence, you’ll need to upgrade your people’s skills — and build an empowered, AI-savvy workforce.

There is no question that artificial intelligence (AI) is presenting huge opportunities for companies to automate business processes. However, as you prepare to insert machine learning applications into your business processes, I recommend that you not fantasize about how a computer that can win at Go or poker can surely help you win in the marketplace. A better reference point will be your experience implementing your enterprise resource planning (ERP) system or another enterprise system. Yes, effective ERP implementations enhanced the competitiveness of many companies, but many other companies found the experience more of a nightmare. The promised opportunity never came to fruition.

Why am I raining on the AI parade? Because, as with enterprise systems, AI inserted into businesses drives value by improving processes through automation. But eventually, the outputs of most automated processes require people to do something. As most managers have learned the hard way, computers can process data just fine, but that processing isn’t worth much if people are feeding them bad data in the first place or don’t know what to do with information or analysis once it’s provided.

With my fellow researchers, Cynthia Beath, Monideepa Tarafdar, and Kate Moloney, I’ve been studying how companies insert value-adding AI algorithms into their processes. As other researchers and managers have also observed, we are finding that most machine learning applications augment, rather than replace, human efforts. In doing so, they demand changes in what people are doing. And in the case of AI — even more than was true with ERP systems — those changes eliminate many nonspecialized tasks and create skilled tasks that require good judgment and domain expertise.

For example, fraud detection applications may reduce the time that people spend looking for anomalies, but increase requirements for deciding what to do about those anomalies. An AI application might allow financial analysts to spend less time extracting data on financial performance, but it adds value only if someone spends more time considering the implications of that performance. With the help of AI applications, customer service staff can spend fewer hours resolving routine problems, but they are more likely to improve operations if at least some of that saved time is reallocated to better understanding the problems customers are experiencing with the company’s most recent offerings.

Many leaders think that they will generate value from AI by recruiting more data scientists. Of course, there’s a shortage of data scientists — and some of them are more attracted to the challenge of building an application that wins at poker than solving a business need. Others will be inspired to find a cure for cancer or to mitigate global warming. So financial services and insurance companies attempting to uncover fraud and technology companies hoping to improve customer satisfaction will be fighting over the remaining talent.

But recruiting data scientists is not your biggest challenge. Data scientists can develop useful algorithms, but domain experts are needed to help train the machine to recognize important patterns and understand new data. Domain experts include top analysts, contract managers, salespeople, recruiters, and other specialists who are not only experts at their jobs but who are acutely aware of how they deliver excellence. That may involve just a few key people for a given application, but they’d better be good. And we still haven’t gotten to the really hard part!

Ultimately, you need people who can use probabilistic output to guide actions that make your company more effective. Probabilistic outputs are no problem when, say, an application such as Salesforce.com Inc.’s AI tool, Einstein, indicates that one lead has a 95% chance of converting into a sale while another has a 60% chance. The salesperson knows what to do with that information. But what’s the next step when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening?
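To make this concrete, here is a minimal, purely hypothetical sketch (not Salesforce’s actual API; the function, thresholds, and labels are invented for illustration) of how probabilistic output gets translated into action. The confident extremes map to obvious next steps; the ambiguous middle is exactly where human judgment is still required.

```python
# Hypothetical example: routing a model's predicted probability to an action.
# The thresholds (0.90, 0.20) are illustrative, not from any real product.

def next_step(p_convert: float) -> str:
    """Suggest an action for a lead given its predicted conversion probability."""
    if p_convert >= 0.90:
        return "prioritize: call today"       # high confidence, clear action
    if p_convert <= 0.20:
        return "deprioritize: nurture via email"  # low confidence, clear action
    # The ambiguous middle: the model cannot decide for you.
    return "escalate: human judgment needed"

for p in (0.95, 0.60, 0.50):
    print(p, "->", next_step(p))
```

The point of the sketch is that the code handles only the easy cases; everything in the middle band is handed back to a person, which is where the skill upgrades discussed above come in.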

When a machine learning application is helping a lawyer identify potentially relevant legal precedents, helping a vendor management team ensure compliance with a contract, or helping a banker decide whether a particular customer qualifies for a loan, the machine is taking over mundane tasks. Machines can surely learn to develop spreadsheets and search large databases for relevant information. But to generate competitive advantage from machine learning applications, you’ll need to upgrade your employees’ skills. You’ll also need to redesign their accountabilities, so that they are empowered and motivated to deploy machines when they believe that doing so will enhance outcomes. In short, you will need to build an entire workforce of intelligence-consuming, action-oriented superstars.

There are, of course, examples of AI algorithms fully automating a process rather than augmenting human efforts. Google DeepMind might automatically adjust temperature settings in a data center. Similarly, IBM Watson can trigger automated alerts to insurance customers in an area likely to be hit by a hailstorm. But these are exceptions. More often, machine learning applications are helping people accomplish something. Like people, machines have natural limits, which tend to leave parts of the tasks — the parts that don’t fit the algorithms well — to people. When a machine detects fraud or predicts customer or employee churn with 90% accuracy, people must address the other 10% — and that will be the toughest 10%. The machine will assuredly take care of the easy cases.

Addressing the toughest instances is particularly challenging because AI algorithms can produce indecipherable results. When a machine learning algorithm decides who gets a loan and who doesn’t, forget about trying to advise a client about how to qualify. Machine intelligence is not a substitute for human intelligence, because, as organizations, we need to be able to understand why we’re doing what we’re doing.

None of the issues associated with using AI to augment your employees’ skills are insurmountable. Great companies are already empowering their people with better information produced by smart machines. Those machines sift through far more data, and do it much faster, than people can. They also discover complex relationships that can be exposed only with massive amounts of data and a large pool of contrasting outcomes. Companies are succeeding with AI by partnering smart machines with smart people who are learning how to take advantage of what those machines can do. In short, AI implementation success depends on your ability to hire and develop problem-solvers, equip them with data (and potentially AI), and then empower them to actually solve problems. Note that addressing skill requirements this way may well require major changes to your existing hiring and development practices.

Companies that view smart machines purely as a cost-cutting opportunity are likely to insert them in all the wrong places and all the wrong ways. These companies will automate existing processes rather than imagine new ones. They will cut jobs rather than upgrade roles. These are the companies who will find that implementing AI is little more than a reprise of the ERP nightmare.

8 Comments On: The Fundamental Flaw in AI Implementation

  • Michael Zammuto | July 18, 2017

    Well thought out and a very interesting topic. I think the way we are accelerating machine learning shows we can extend its boundaries. As for general AI, I don’t feel like we know the limits. From gaming to art to predicting cancer, we continue to surprise ourselves and trace a new line in the sand. We tell ourselves that there is something different between the way our brains work and AI. But that misses the point. The Turing test is irrelevant, and algorithms don’t need to function like brains to replace human judgment and creativity. Thank you for bringing up one of the critical issues of our time: How do we relate to our own inventions?

  • David Johnston | October 12, 2017

    A very timely and relevant article that should be required reading for any C-Suite executive.

    In my experience, the article calls out and highlights some of the most important best practices, such as (paraphrased): “There is little to no value in (further) automating existing processes; instead, one wants to envision and engineer new processes that will deliver significant change, driving bottom-line benefits”!

    Also, the article goes on to reinforce the need for “new” business and management processes that focus on translating results into “now what do we do?” That requires new management and control processes, particularly performance management processes and planning/change management processes, all together forming new views of an evolving “Enterprise Architecture.”

  • Vicente vicente.miranda@br-asgroup.com | October 13, 2017

    I always comment about bad data at the beginning! If you don’t close this door, you are delivering bad information as fast as before, and presenting no information is better than presenting bad information.

    Recently, I gave an interview to HBR Brazil and shared my vision of the impact of TD/IIoT. My observation is that the Brazilian workforce will have trouble because people are not prepared to change (intellectual capital, i.e., people’s skills, is not rich)!

  • José Antonio FLORES RODRIGUEZ | October 16, 2017

    I think that this topic has to be put on the table again and again, as this paper does. It has so many aspects that our organizations must keep in mind in order to flow with the maelstrom of changes we are living through, and not merely resist them. In particular, resisting an AI initiative.
    Instead of looking for an AI solution that evokes Einstein or Leonardo, I propose one that helps us reflect as Socrates would. It is always good to start with a good question.

  • Raj Ramesh | December 28, 2017

    Spot on! I used to be a hard-core technologist (specifically in AI), and I was gung-ho about it. Then I moved over to the business side. That’s when I realized that business problems are more nuanced.

    Technology has not been able to address ‘soft’ problems associated with the human side – like culture, intuition, creativity. Perhaps that will happen in 60 years after we have hit the singularity, but until then, we need human-machine collaboration to get useful things done.

    Many leaders sadly believe that simply bringing artificial intelligence into the organization is enough. As you point out through your examples and our learnings from ERP implementations, that is not true.

    I hope leaders understand that sooner rather than later, before they let the AI hype consume them and set unrealistic expectations about what AI by itself can deliver.

  • Kurt Hahlbeck | January 18, 2018

    My own anecdotal experiences in over 30 years of software development and deployment are completely consistent with your research perspectives. I look forward to future insights based on the progress of your research.

  • Zvikomborero Murahwi | January 24, 2018

    AI in the enterprise is unavoidable; how it’s going to be adopted and rolled out is going to be the key issue. It is pleasing to note that efforts have been and are being made to address the underlying issues.

  • HP Bunaes | December 19, 2018

    Completely agree on the challenges of integrating AI and ML into the operating model and business process. It’s the upstream (data) and downstream (delivery and usage) that make or break an AI investment. I’ve too often seen business leaders unable or unwilling to take on the change management challenges needed to fully realize value from these investments. One point of disagreement: model explainability has come a long way. At DataRobot we have built-in capabilities (Feature Impact, Prediction Explanations, Partial Dependence) that explain what data drives predictions, when that data matters and when it doesn’t, and explain individual predictions.
    – HP Bunaes SM’87
