Image credit: t_kimura/Getty Images

After a public outcry over privacy and their inability — or unwillingness — to address misleading content, Facebook, Twitter, and other social media platforms finally appear to be making a real effort to take on fake news. But manipulative posts from perpetrators in Russia or elsewhere may soon be the least of our problems. What looms ahead won’t just impact our elections. It will impact our ability to trust just about anything we see and hear.

The misinformation that people are worried about today, such as made-up news stories or conspiracy theories, is only the first symptom of what could become a full-blown epidemic. What’s coming are “deep fakes” — realistic forgeries of people appearing to say or do things that never actually happened. This frightening future is a side effect of advances in artificial intelligence that have enabled researchers to manipulate audio and video — even live video.

The end result of this manipulated reality may be that people no longer believe what they hear from a world leader, a celebrity, or a CEO. It could lead to “reality apathy,” a state in which distinguishing truth from lies is so hard that we stop trying altogether. That would mean a future in which people believe only what they hear from a small circle of trusted friends or family, a dynamic more typical of a conflict zone than of a modern economy. Try factoring that into your quarterly earnings call or televised speech.

An obvious scenario, one that some companies might find themselves dealing with in the not-too-distant future, is a faked video of their CEO making racist or sexist comments or bribing a politician. But a video on a seemingly benign topic, such as corporate spending, could be equally damaging.

Imagine an authentic-seeming video of a CEO saying their company will donate $100 million to feed starving children. This surprise announcement, which never actually happened, leaves the company with a stark choice: go ahead with the donation or publicly state that it doesn’t care that much about starving children after all.

As corporate leaders grapple with the question of how to prove something is (or isn’t) real, they will need to invest in new technology that helps them keep one step ahead of bad actors. And they will have to do it quickly. A company won’t be able to stay ahead of determined, tech-savvy manipulators if it has a yearlong procurement cycle.

One crucial step is for the social media platforms to incorporate real-time forgery detection into all of their products, building out systems that can adapt as the underlying technology improves. But that technology is still in its early stages, and as it develops you can be sure that bad actors will be working on ways to defeat it.

It may also be possible to create software that can timestamp video and audio, showing when they were created and how they have been manipulated. But relying on the tech sector to quickly address societal challenges like this without accountability from regulators and users hasn’t worked all that well in the past.
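
To make the first half of that idea concrete, here is a minimal sketch, in Python, of one way provenance tracking might work: a media file is fingerprinted with a cryptographic hash when it is created, so any later edit changes the hash and becomes detectable. The file name and record format here are hypothetical; a production system would likely add digital signatures, hardware attestation at the point of capture, or a public log so that the fingerprint itself can be trusted.

```python
import hashlib
import json
import time

def fingerprint_media(path: str) -> dict:
    """Hash a media file and record when the fingerprint was taken.

    If the fingerprint is published when the file is created (for
    example, to a public log), any later edit changes the hash and
    can be detected.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            sha256.update(chunk)
    return {
        "path": path,
        "sha256": sha256.hexdigest(),
        "recorded_at": int(time.time()),  # Unix timestamp of fingerprinting
    }

def verify_media(path: str, record: dict) -> bool:
    """Re-hash the file and compare it against the earlier fingerprint."""
    return fingerprint_media(path)["sha256"] == record["sha256"]

if __name__ == "__main__":
    # Stand-in for real footage; a real workflow would fingerprint the
    # actual file the moment it is recorded.
    with open("statement.mp4", "wb") as f:
        f.write(b"placeholder video bytes")
    record = fingerprint_media("statement.mp4")
    print(json.dumps(record, indent=2))
    print("unmodified:", verify_media("statement.mp4", record))
```

Note that a hash like this can only show that a file has changed since it was fingerprinted, not how it was changed; reconstructing the specific manipulation would require richer metadata captured at recording time.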

Corporate marketers and communicators, who supply the advertising dollars that are the platforms’ lifeblood, are in a strong position to push for faster action. Last year P&G pulled $140 million in digital ad spend, in part because of brand safety concerns that arose when its ads were placed next to questionable content.

You can bet this got the attention of social media companies. But it worked only because P&G was willing to back up its words with action. Platforms are more likely to take proactive measures if they know that inaction will hurt their profitability. Industry pressure helped push YouTube, for example, to reevaluate its content policies and dramatically increase its investment in human moderation.

Industries often form coalitions to influence the government on regulations affecting their business interests. With some of the biggest tech companies starting to rival governments in their reach and power, the same model could be employed here, using the threat of lost ad revenue. These coalitions may find it helpful to partner with consumer groups and NGOs to amplify their message. Pushing these platforms to take the future of misinformation seriously would be good not only for corporations but also for society at large.

In addition, companies must begin to factor deep fakes and other reality-distortion techniques into their crisis-scenario planning. Reputation protection in this new world will require adding a new layer to a company’s rapid response and communications strategies. Executives must be prepared to communicate the facts quickly and to correct the fictions before they spread too far.

Communicators should make sure they have the right tools in place to deal with a fast-moving manipulated-reality crisis. New companies are forming that use technology, open-source intelligence techniques, and crowdsourcing to quickly discern what’s real and what’s not. The key to uncovering a falsehood may lie in someone using geolocation, or simply their own knowledge, to recognize that a street sign in a faked video isn’t really at that location. As with any crisis, social media analytics tools are critical when it comes to tracking the spread of misinformation. These tools can help executives see whether a story is gaining traction and identify the most influential people spreading the misinformation, whether wittingly or unwittingly.
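
To illustrate the underlying idea, the toy sketch below, written in Python with the networkx library, builds a share graph from invented account data and ranks accounts by out-degree centrality, one simple proxy for influence. The account names and edges are hypothetical, and commercial analytics products are far more sophisticated; this is only a sketch of the kind of analysis involved.

```python
import networkx as nx

# Hypothetical share/retweet edges: (spreader, amplifier).
# In practice these would come from a platform's API or an analytics feed.
shares = [
    ("acct_a", "acct_b"), ("acct_a", "acct_c"), ("acct_a", "acct_d"),
    ("acct_b", "acct_e"), ("acct_c", "acct_e"), ("acct_d", "acct_f"),
]

graph = nx.DiGraph(shares)

# Out-degree centrality scores accounts by how widely their posts
# are amplified; higher scores suggest bigger hubs of spread.
influence = nx.out_degree_centrality(graph)
top_spreaders = sorted(influence, key=influence.get, reverse=True)[:3]
print("Accounts to prioritize in a response:", top_spreaders)
```

Ranking by raw amplification is crude; a real tool would also weight reach, audience overlap, and timing. But even this simple view shows how an analyst can move from a flood of posts to a short list of accounts worth engaging or reporting first.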

It is crucial that individual companies learn to understand and mitigate their particular risks — but that alone will not protect them. Our information ecosystem is like a game where deceivers have a massive edge; a company may lose even if it “plays” perfectly. That’s why we need to fix the rules. We all must pitch in to support cross-company, cross-industry, and even cross-sector efforts to turn the tide. It will be incumbent on everyone with a stake in a reality-based society to work together to ensure that we can continue to discern fact from fiction.

from HBR.org https://ift.tt/2FcaCJV