A recent survey by Deloitte of “aggressive adopters” of cognitive technologies found that 76% believe that they will “substantially transform” their companies within the next three years. There probably hasn’t been this much excitement about a new technology since the dotcom boom years in the late 1990s.
The possibilities would seem to justify the hype. AI isn’t just one technology, but a wide array of tools, including a number of different algorithmic approaches, an abundance of new data sources, and advancements in hardware. In the future, new computing architectures, like quantum computing and neuromorphic chips, will propel capabilities even further.
Still, there remains a large gap between aspiration and reality. Gartner estimates that 85% of big data projects fail. There have also been embarrassing snafus, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it, or when Microsoft’s Tay chatbot went berserk on Twitter.
So, how do you make sure that your organization is going to get more successful results from its AI endeavors?
First, you need to make your purpose clear. AI does not exist in a vacuum, but in the context of your business model, processes, and culture. Just as you wouldn’t hire a human employee without an understanding of how he or she would fit into your organization, you need to think clearly about how an artificial intelligence application will drive actual business results.
“The first question you have to ask is what business outcome you are trying to drive,” Roman Stanek, CEO at GoodData, told me. “All too often, AI projects start by trying to implement a particular technical approach and, not surprisingly, front-line managers and employees don’t find it useful, so there’s no real adoption and no ROI.”
While change is often driven from the top of the organization, implementation is always driven from lower down. So it’s important to communicate a sense of purpose clearly. If front-line managers and employees believe that artificial intelligence will help them do their jobs better, they will be much more enthusiastic about it, and more effective in making the project successful.
“Those who are able to focus on business outcomes are finding that AI is driving bottom-line results at a rate few had anticipated,” Josh Sutton, CEO of Agorai.ai, told me. He points to a recent McKinsey study that pegs the potential economic value of cognitive tools at between $3.5 trillion and $5.8 trillion as just one indication of the possible impact.
Second, choose the tasks you automate wisely. While many worry that cognitive technologies will take human jobs, David Autor, an economist at MIT, sees the primary shift as being between routine and non-routine work. In other words, artificial intelligence is quickly automating routine cognitive processes much like industrial era machines automated physical labor.
To understand how this can work, just go to an Apple store. Clearly, Apple is a company that fully understands how to automate processes, but the first thing you see when you walk into an Apple store is a number of employees waiting to help you. That’s because it has chosen to automate background tasks, not customer interactions.
Used this way, AI can greatly expand the effectiveness of human employees. For example, one study cited by a White House report during the Obama Administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.
Perhaps most importantly, this approach can actually improve morale. For instance, some factory workers have actively collaborated with robots they programmed themselves to do low-level tasks. In some cases, soldiers build such strong ties with robots that do dangerous jobs that they hold funerals for them when they “die.”
Third, choose your data wisely. For a long time, more data was considered better. Firms would scoop up as much of it as they could and then feed it into sophisticated algorithms to create predictive models with a high degree of accuracy. Yet it has become clear that this is not always a good approach. As Cathy O’Neil explains in Weapons of Math Destruction, we often don’t understand the data we feed into our systems, and data bias is becoming a massive problem. A related problem is that of “overfitting.” It may sound impressive to have a model that is 99% accurate, but if it is not robust to changing conditions, you might be better off with one that is 70% accurate and simpler.
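The overfitting trap can be illustrated with a minimal sketch in plain Python (the data and both models here are hypothetical, invented purely for illustration): a model that memorizes its training data scores perfectly on it, yet falls apart when conditions shift slightly, while a simpler fitted line holds up.

```python
import random

random.seed(0)

# Hypothetical data: y = 2x plus noise. "Test" conditions are slightly
# shifted from "train" conditions, mimicking a changing environment.
train = [(x, 2 * x + random.gauss(0, 1)) for x in range(20)]
test = [(x + 0.5, 2 * (x + 0.5) + random.gauss(0, 1)) for x in range(20)]

def fit_line(data):
    # Ordinary least squares for y = a*x + b: the "simple" model.
    n = len(data)
    sx = sum(x for x, _ in data)
    sy = sum(y for _, y in data)
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def memorize(data):
    # Nearest-neighbour lookup: a stand-in for an overfit model.
    # Perfect on the data it has seen, brittle on anything else.
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

def mse(model, data):
    # Mean squared error of a model over a dataset.
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

line = fit_line(train)
memo = memorize(train)
print("train MSE  line:", mse(line, train), " memo:", mse(memo, train))
print("test  MSE  line:", mse(line, test), " memo:", mse(memo, test))
```

The memorizing model's training error is exactly zero, which looks impressive until the test set shifts; the simple line, slightly "wrong" on the training data, generalizes far better.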
Moreover, with the implementation of GDPR in Europe and the likelihood that similar legislation will be adopted elsewhere, data is becoming a liability as well as an asset. So you should think through which data sources you are using and create models that humans can understand and verify.
Finally, shift humans to higher-value social tasks. One often overlooked fact about automation is that once you automate a task, it becomes largely commoditized and value shifts somewhere else. So if you are merely looking to use cognitive technologies to replace human labor and cut costs, you are most probably on the wrong track.
One surprising example of this principle comes from the highly technical field of materials science. A year ago, I was speaking to Jim Warren of the Materials Genome Initiative about the exciting possibility of applying machine learning algorithms to materials research. More recently, he told me that this approach has increasingly become the focus of materials research.
That’s an extraordinary shift in one year. So should we be expecting to see a lot of materials scientists at the unemployment office? Hardly. In fact, because much of the grunt work of research is being outsourced to algorithms, the scientists themselves are able to collaborate more effectively. As George Crabtree, Director of the Joint Center for Energy Storage Research, which has been a pioneer in automating materials research, put it to me, “We used to advance at the speed of publication. Now we advance at the speed of the next coffee break.”
And that is the key to understanding how to implement cognitive technologies effectively. Robots are not taking our jobs, but rather taking over tasks. That means that we will increasingly see a shift in value from cognitive skills to social skills. The future of artificial intelligence, it seems, is all too human.
from HBR.org https://ift.tt/2NuboBz