

Post written by

Jeff Catlin

CEO of Lexalytics, the leader in cloud and on-prem text analytics solutions.



Head to any startup’s “about us” page today and you’ll see some version of an all-too-common line: “We’re here to change the world.” Setting big goals is a good thing. But setting them so big that you’re invariably going to fall short isn’t.

In some ways, AI is its own worst enemy. Sure, it has the potential to help solve the biggest problems we face. But potential isn't the same thing as achievement. As high as our hopes are for AI, we need to temper our expectations a little. AI may get there one day, but it isn't there yet.

Championing AI As A Miracle Cure

Every time AI is trotted out as a panacea for all that ails us, failure isn't far behind. Instead of changing the world, we end up underwhelming it.

Overreach is one of the surest ways to drastically underdeliver with AI. Solving world hunger and reversing climate change are valuable undertakings. But pointing AI at large, overarching problems before solving the smaller ones that underpin them isn't the way to go. You have to crawl before you walk or fly.

In our sentiment analysis work at Lexalytics, we’ve been using AI to detect specific individual emotions such as anger, fear and joy. Only when our AI has mastered each individual emotion will we stitch them together with a larger meta-model. It’s a bottom-up approach that works far more effectively than trying to teach an AI program to recognize the entire gamut of human emotions all at once.
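To make that bottom-up pattern concrete, here is a minimal sketch in Python. This is not Lexalytics's actual system; the data, labels, and model choices are all hypothetical. The idea is simply that each emotion gets its own classifier first, and only then do those classifiers' outputs become features for a larger meta-model.

```python
# Illustrative sketch only: one binary classifier per emotion, whose
# outputs are later stitched together as inputs to a meta-model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy training data with per-emotion binary labels.
texts = [
    "I can't believe they cancelled my order again!",
    "The support team resolved everything within minutes.",
    "I'm worried this update will break my workflow.",
    "Absolutely thrilled with the new release!",
]
labels = {
    "anger": [1, 0, 0, 0],
    "fear":  [0, 0, 1, 0],
    "joy":   [0, 1, 0, 1],
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Step 1: master each individual emotion with its own model.
emotion_models = {
    emotion: LogisticRegression().fit(X, y) for emotion, y in labels.items()
}

def emotion_scores(text):
    """Return the per-emotion probabilities from the individual models."""
    x = vectorizer.transform([text])
    return np.array([m.predict_proba(x)[0, 1] for m in emotion_models.values()])

# Step 2: stitch the individual outputs together with a meta-model
# (here, a hypothetical overall-sentiment label: 0 = negative, 1 = positive).
meta_X = np.array([emotion_scores(t) for t in texts])
meta_y = [0, 1, 0, 1]
meta_model = LogisticRegression().fit(meta_X, meta_y)

print(meta_model.predict([emotion_scores("This is wonderful news!")]))
```

In practice, each per-emotion model would be trained and validated on its own, much larger dataset before its scores are trusted as meta-model features; that staging is the point of the crawl-before-you-fly approach.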

Leaping Into High-Risk Domains

Using AI to identify specific emotion types in text is a relatively low-risk endeavor. Using AI in domains such as transport, medicine or diplomacy, however, considerably raises the stakes — and the potential for backlash.
