Dennis D. McDonald (ddmcd@ddmcd.com) consults from Alexandria, Virginia. His services include writing & research, proposal development, and project management.

Ensuring Trust in AI-Mediated Communication: The Challenge Is Now

By Dennis D. McDonald

If it looks like a human, talks like a human, and acts like a human -- is it a bot?

I received a request last week from a stranger to connect via LinkedIn. My first thought was, “Is this person, whom I don’t know, real or someone making use of an AI-based chatbot?”

I’ve been researching AI-based tools lately (to support the technical writing process) and am aware of how they can also be used to interactively simulate the looks, actions, and words of real people. Soon it may be next to impossible to tell whether someone seen online or on TV is real or synthetic.

We’ve seen how foreign governments use social media to influence our elections. It’s likely that the new AI-based tools will be used by all manner of bad actors to influence our actions and behaviors. How do we ensure that the communication channels we use are trustworthy while at the same time ensuring that government and corporations won’t use AI-based communication for nefarious reasons?

I’m not referring here to preventing the erotic or romantic relationships that develop voluntarily between an adult and a simulated online personality. People should be free to do that if they want. But already fears are being raised about how crooks can use voice copying technology to scam unsuspecting targets. Will such calls be as easy to recognize as the famed “Nigerian prince” email scams? I would hope so but am not optimistic.

I have long believed that one of the factors that has contributed to the decay of social media is a tolerance for anonymity. People hiding behind fake personas and “screen names” can say things they might not otherwise say were they required to reveal their true identities. With AI’s ability to simulate how real people look, sound, and behave electronically, a challenge to both media and law enforcement will be to establish and maintain the trust needed when people communicate and exchange information.

Whose job will it be to ensure that happens?

On the one hand, we want as a society to take advantage of the benefits that new technologies offer. On the other, who can ensure and maintain trust and security in the systems people use for both personal and business communications? How do the people who use social media ensure that they are interacting with real people?

We don’t always want to interact with real people when we go online. Sometimes the customer service chatbots that have proliferated online actually do provide useful help. There may also be legitimate reasons for individuals to prefer not to interact with a live person on the phone when discussing private financial, personal, or health details. In such cases the impersonality of an AI-based chatbot may be perfectly acceptable.

Accepting such an interaction should be at the option of the requestor. I would suggest that using robotic or synthetic tools in, say, customer service should be allowed—as long as the use of such tools is disclosed in advance.

The bottom-line issue for me is one of trust. How can we trust the communication channels we use so that we interact with the people or institutions we actually intend to interact with? Given the power of the tools becoming available, I foresee great political differences over how best to manage the situation. I therefore see ensuring the trustworthiness of communications as an important competitive advantage to be provided by all manner of communication networks, media, and social media – and the sooner the better.

An Experiment Using GPT-4 As An Editing Tool

On 3/20/23 I performed an experiment using GPT-4, successor to ChatGPT. ChatGPT itself was down at the time, but as a paying subscriber I was able to bypass the outage and go directly to GPT-4. I copied the text of the above article and pasted it into GPT-4 following this question: "Please edit the following text for clarity and grammar." GPT-4 immediately generated the edited text that I have pasted here:

Last week, I received a connection request on LinkedIn from a stranger. My initial thought was, "Is this a real person or someone utilizing an AI-based chatbot?"

Lately, I have been researching AI-based tools for supporting the technical writing process and have become aware of their capabilities to simulate the appearance, actions, and speech of real people. Soon, it might be nearly impossible to discern whether someone online or on TV is genuine or synthetic.

We are already aware of foreign governments using social media to influence our elections. It is likely that new AI-based tools will be employed by various malicious actors to sway our actions and behaviors. The question is: how do we ensure the trustworthiness of our communication channels while also preventing governments and corporations from using AI-based communication for nefarious purposes?

I am not suggesting we restrict adult interactions with simulated online personalities for romantic or erotic purposes. People should be free to engage in such relationships if they desire. However, concerns are being raised about criminals using voice imitation technology to scam unsuspecting victims. Will these scams be as recognizable as the infamous "Nigerian prince" email cons? I am not particularly optimistic.

I have long believed that anonymity has contributed to the degradation of social media. People can hide behind fake personas and "screen names" to say things they might not otherwise express if required to reveal their true identities. With AI's ability to simulate the appearance, voice, and behavior of real people, both media and law enforcement face the challenge of establishing and maintaining trust in communications and information exchange.

But who will be responsible for ensuring this trust?

On one hand, we want to embrace the benefits new technologies offer as a society. On the other hand, how can we ensure and maintain trust and security in the systems people use for personal and business communications? How can social media users be confident that they are interacting with real individuals?

We do not always want to interact with real people online. In some cases, customer service chatbots can provide useful assistance. There may also be legitimate reasons for preferring not to speak with a live person when discussing private financial, personal, or health details. In such instances, the impersonal nature of an AI-based chatbot might be perfectly acceptable.

However, the choice to engage with such interactions should be up to the user. I propose that using robotic or synthetic tools in customer service should be allowed, as long as their usage is disclosed in advance.

For me, the fundamental issue is trust. How can we ensure the reliability of the communication channels we use to interact with the intended individuals or institutions? Considering the power of the emerging tools, I anticipate significant political debates on how to best manage the situation. Therefore, I view the provision of trustworthy communication as an essential competitive advantage for all communication networks, media, and social media platforms – the sooner, the better.
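For anyone who wants to reproduce this kind of experiment programmatically rather than through the chat interface (which is what I used), here is a minimal sketch using the OpenAI Python client. The model name, file name, and prompt packaging are assumptions about how one might script it, not a record of what I actually did:

# Minimal sketch: sending the same editing prompt through the OpenAI API.
# Assumptions: the "gpt-4" model name, a hypothetical draft_article.txt
# holding the draft, and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("draft_article.txt") as f:  # hypothetical file with the draft text
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Please edit the following text for clarity and grammar.\n\n" + draft,
        }
    ],
)

print(response.choices[0].message.content)  # the edited text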

Discussion

What do I think about the edited version? Well, it does read somewhat better. I have a habit when writing to construct long sentences. I usually go back and revise them down to flow more cleanly; GPT-4 did a nice job of that.

When I write short think-pieces like this I usually write them out quickly in longhand. I then use speech-to-text software to create a file I can edit. I'm usually in a hurry when writing something like this for my own web site, so my own editing -- which usually misses something the first time through -- is rather quick.

While I think the GPT-4-edited piece does read better while avoiding some of the awkwardness of my originally dashed-off text, there are parts of it that, well, just don't sound like me.

Were I writing, say, a section for a client's technical proposal (my day job) that might not be an issue, since my primary goal there is to reflect the interests of the client while being compliant with the contracting agency's Statement of Work.

Still, parts of the GPT-4-edited text don't sound like I wrote them. Since my goal with my web site www.ddmcd.com is to discuss things that interest me and which might be of interest to colleagues, friends, and potential clients, I do want my “voice” to come through.

Perhaps my next challenge should be to see how to prompt GPT-4 (or its successor) to “write like me”? Or, better yet, “Write not like me”?
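Were I to try that through the API rather than the chat window, only the prompt in the sketch above would need to change. The wording below is just a guess at phrasing that might steer the model toward preserving a writer's voice:

# Hypothetical "write like me" prompt variation, reusing the draft
# variable from the earlier sketch. The instruction wording is an
# assumption; finding phrasing that works would take experimentation.
messages = [
    {
        "role": "system",
        "content": (
            "Edit the user's text for clarity and grammar, but preserve "
            "the author's voice: keep characteristic word choices and "
            "sentence rhythms wherever possible."
        ),
    },
    {"role": "user", "content": draft},
]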

Copyright 2023 by Dennis D. McDonald

Related Topics

Findability: NIH Wants Your Thoughts on "Enhanced Public Access to NIH-Supported Research"

TikTok and Challenges to Social Media Disentanglement