How many of you use artificial intelligence (AI) on a regular basis?

That’s a trick question, because we all do.

AI is part of search.

It powers your smartphone.

You use it to get directions that find the fastest route by taking traffic patterns into account.

It helps us choose a movie or TV show we’d like on Netflix.

Artificial intelligence is here.

And it’s only going to get more pervasive.

But it hasn’t really penetrated the enterprise in the way that we’ve all been anticipating or dreading.

Yet.

And as public relations and communications professionals, we seem unsure of our place or the role we will play.

I believe that starts with getting a basic grasp on what AI is and does.

Once we know that, we can begin to get a handle on where communicators and public relations pros might fit in, and develop strategies on how to approach it.

Two Types of Artificial Intelligence

But first, we need a few basics.

There are really two distinct types of AI:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)

While they’re related, they’re also very different.

And we tend to confuse them.

Narrow AI is single-purpose, designed to accomplish one specific goal better than we can, like IBM Watson’s Jeopardy win, digital voice assistants, or self-driving cars.

But we’re much more complex beings, and when a machine becomes more like us, able to transfer knowledge and complete multiple tasks at once, that’s called Artificial General Intelligence.

AGI is what’s most often seen in movies and on TV, like Westworld or Her.

And it’s scary because that kind of AI would be smarter than us at performing tasks, and could even become conscious.

And we’ve all seen what happens in The Terminator!

I’m not saying we shouldn’t worry about AGI.

We should, and I encourage you to read more about it.

The Big Nine, a new book by Amy Webb, is a good place to start.

Narrowing the AI Focus to Public Relations

Right now, I’m going to look at narrow AI, because that’s what’s having the biggest direct impact on us and what we do.

For starters, the idea of artificial intelligence isn’t new.

It’s been around since the ‘50s.

The term was coined by John McCarthy, a computer scientist who was trying to figure out how machines could solve problems in a more human-like way.

Predicting the Future in a Statistical Way

Narrow AI is all about predictions.

Or, more specifically, using statistics to make predictions based on past behavior or actions.

And in order to do that, it needs data.

A lot of data.

You could even say narrow AI is ravenous for data.

So it’s a good thing we produce 2.5 quintillion bytes of data each day.

And while that volume seems overwhelming, we can actually break the data itself into three distinct types:

  1. Structured data is organized, labeled, and fits neatly in a spreadsheet.
  2. Unstructured data includes things like the words in a blog post, or images and video, which are harder to classify.
  3. Semi-structured data is really a combination of the two, like the number of followers you have on Twitter plus the content of your tweets.
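To make those three types concrete, here’s a minimal, hypothetical sketch in Python. The handle, numbers, and text are invented purely for illustration:

```python
# Structured data: labeled fields that fit neatly into rows and columns.
structured = {"handle": "@example_pr_pro", "followers": 12000, "tweets_per_week": 14}

# Unstructured data: free-form text (or images and video) with no built-in labels.
unstructured = "Artificial intelligence is already part of how we search and share."

# Semi-structured data: a combination of the two, like a tweet plus its metadata.
semi_structured = {
    "followers": 12000,                        # structured: a labeled number
    "text": "Heading to the #AI conference!",  # unstructured: free-form words
}
```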

More Data = Better Statistical Predictions = Better Results

AI is really very simple.

The key is statistics.
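To illustrate, here’s a toy Python sketch that “predicts” a viewer’s next pick purely from the statistics of past behavior. The viewing history is invented, and real recommendation systems are far more sophisticated, but the principle is the same:

```python
from collections import Counter

# Invented viewing history: the past behavior the system gets to count.
history = ["comedy", "drama", "comedy", "documentary", "comedy", "drama"]

# The "prediction" is simply whatever has the highest observed probability.
counts = Counter(history)
prediction, freq = counts.most_common(1)[0]

print(f"Predicted next pick: {prediction} "
      f"({freq / len(history):.0%} of past choices)")
# Predicted next pick: comedy (50% of past choices)
```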

Of course, AI can now interpret context from the things we write and say—that’s called natural language processing.

And it can identify the things we see using computer vision.

But that’s still narrow AI.

So How Does AI Work?

In machine learning, narrow AI learns to make better predictions by trial and error, kind of like us.

Well, really more like the way a child learns.

Kids absorb the world around them, imitate patterns they see, make assumptions, and adjust.

In the beginning, an AI is like a small child that doesn’t know very much.

Its world is data, and it learns by absorbing the data we feed it, including a mishmash of words and phrases, labeled pictures, and sets of numbers.

And then its algorithms—that is, the steps it follows to solve a problem—analyze the data, look for patterns and make a prediction based on whatever has the highest statistical probability of being right.

The algorithm is designed to adjust its learning process based on whether its predictions get more or less accurate over time.
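Here’s a minimal, hypothetical sketch of that feedback loop: a one-number “model” guesses, measures how wrong it was, and adjusts. Real machine learning algorithms are far more elaborate, but the adjust-by-error idea is the same:

```python
# Toy learning loop: nudge a guess toward whatever reduces the prediction error.
target = 42.0        # the pattern hidden in the data (an invented value)
guess = 0.0          # the model starts out knowing almost nothing
learning_rate = 0.1  # how big each adjustment is

for step in range(50):
    error = target - guess          # how wrong was the last prediction?
    guess += learning_rate * error  # adjust in the direction that helps

print(f"Prediction after 50 adjustments: {guess:.2f}")  # close to 42
```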

Of course, humans still need to validate those predictions, and we can’t forget that!

Artificial Intelligence in Public Relations

Let’s look at an example of AI in action in something we do everyday: search.

Google uses AI to power its semantic search algorithm, which essentially understands the relationship between words.

It knows, for example, that Martin Waxman is probably a person.

And Google’s conversational interface can also recognize the relationship between that name and a personal pronoun that refers to it.
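Google’s actual models are vastly more sophisticated, but one way to picture “understanding the relationship between words” is comparing word vectors. Here’s a hypothetical Python sketch with invented three-dimensional vectors; real systems learn hundreds of dimensions from enormous amounts of text:

```python
import math

# Invented word vectors, for illustration only.
vectors = {
    "martin_waxman": [0.9, 0.8, 0.1],  # a person
    "he":            [0.8, 0.7, 0.2],  # a personal pronoun
    "toronto":       [0.1, 0.2, 0.9],  # a place
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The pronoun sits much closer to the person than to the place, which is
# the kind of relationship semantic search relies on.
print(cosine_similarity(vectors["martin_waxman"], vectors["he"]))       # ~0.99
print(cosine_similarity(vectors["martin_waxman"], vectors["toronto"]))  # ~0.30
```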

You can try it in Chrome by enabling your mic and asking Google a series of related questions, replacing names with pronouns.

The results are surprising, because Google’s AI understands our context.

But they’re also a bit scary—because it kind of seems like Google gets us, and knows what we want.

And that’s the beginning of building trust, which is a fundamental part of public relations.

Are You Feeding Your AI a Healthy Meal or Junk Food?

Of course AI is only as good as the data it’s fed.

So think of data as the AI equivalent of a great meal.

And just like people, we can fill our data sets with junk food, or healthier choices.

And the quality of data has a big impact on the results.

If your data is junky, or lacks variety and veracity because you haven’t spent the time making sure it’s a diverse, clean dataset, the AI you build will be filled with errors and biases.

And that’s bad when you’re talking about predictive analytics for projections, and worse when you have biased data for facial recognition.

Data bias and lack of diversity are among the biggest issues in AI.

It’s why Alexa and Google Assistant are having trouble understanding questions in accented English.

Or why Amazon’s facial recognition software incorrectly matched 28 members of Congress to criminal mug shots.
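To see how directly that skew carries through, here’s a toy, invented example: a naive model that only counts what it has seen will almost never get the under-represented group right:

```python
from collections import Counter

# Invented training set: 95 samples of one accent, only 5 of another.
training_accents = ["accent_a"] * 95 + ["accent_b"] * 5

# A naive "model" that always predicts the accent it saw most often
# inherits the imbalance straight from the data.
most_common_accent = Counter(training_accents).most_common(1)[0][0]

print(most_common_accent)  # accent_a: speakers of accent_b are missed every time
```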

Ethics, Privacy and Trust

If you think about it, a biased dataset has implications for ethics, privacy, trust—and reputation.

And that sounds a lot like what we deal with everyday in communications and public relations.

As we begin to understand the implications AI might have for our organizations and audiences, those issues become a conversation we can take part in.

And that will help us demonstrate the type of value we bring to discussions on how business can implement AI, and communicate the benefits and risks to stakeholders.

If you’re interested in hearing more, check out the 2019 Institute for Public Relations Bridge PR and Communications Conference, where I’ll be doing a keynote on “Putting the AI in Public Relations.”

And I’ll be writing more about AI and communications on Spin Sucks.

What have your experiences been with narrow AI? Do you trust it? Why or why not?

Please share your answers in the comments below. My chatbot will respond :).

Photo by Joseph Chan on Unsplash

Martin Waxman

Martin Waxman, MCM, APR, is a senior advisor to Spin Sucks and runs a consultancy, Martin Waxman Communications. He leads digital and social media training workshops, conducts AI research, and is a LinkedIn Learning author and one of the hosts of the Inside PR podcast. Martin teaches social media at McMaster University, the Schulich School of Business, Seneca College, and the UToronto SCS and regularly speaks at conferences and events across North America.
