
Understanding "Author Intent" When Evaluating Online Information

By Dennis D. McDonald

A Book Review

Becca Rothfeld, in the December 2 online edition of The Washington Post, reviewed a book by Walter Scheirer titled "A History of Fake Things on the Internet."

According to Rothfeld, the book’s author says that “…concerns about digital misinformation are overblown and alarmist." The thesis of the book, according to the review, is that technology and media have always been used as vehicles for deception, fakery, creativity, and art. For us to be overly alarmed about today's online misinformation is, the book’s author suggests, an overreaction. His rationale: humans have always occupied two worlds, a world of reality and a world of imagination. Storytelling and fiction come naturally to us. We should not be surprised that new media are used to communicate fiction.

Lies

“And lies!” respond many of the book review’s online commenters. They point out that we can't afford to simply ignore lies, a key example being the false claim that Donald Trump won the 2020 presidential election in the United States.

I doubt it is the book author’s intention to excuse the lies. The line between reality and imagination is sometimes difficult to discern. Witness current debates about fake news, misinformation, and the use of AI-based tools to fabricate realistic-looking and -sounding messages. Such distinctions are scant comfort to those who suffered from the January 6 attack on the US Capitol.

Considering Intent

It is impossible to evaluate online information--or misinformation--without also considering the intent of the creator. There are other factors to consider (for example, the intent of the recipient), but for now let’s focus on the intent of a piece of information’s creator.

Identity Fabrication

One impediment to discerning the intent of the creator regarding the truth or falsehood of a message is the ease with which a creator’s identity can be shrouded or fabricated. While some suggest an author’s words should be able to stand on their own, it can be difficult to assess veracity when the author’s identity is hidden. This is especially true when online commenting systems allow for anonymous comments, as is the case with legacy publications such as the Washington Post. When reading an anonymous comment in the Post, for example, it’s impossible to tell if the commenter is sincere, is a trouble-making troll, is part of an organized political effort to sway public opinion, or is a paid foreign agent. If you can't be sure who is talking, how can you tell for sure if the speaker is sincere or lying? (I discuss this issue in more detail in Some Folks Just Don't Want to Pull Back the Identity Curtain.)

More Lies

Modern communication media did not invent the ability for people to lie in public. Such media do, however, make it easier to reach and influence others almost instantaneously with misinformation and lies. Another concern now is the role that artificial intelligence (AI) based systems can play in the authoring of content. It may now become impossible to tell whether or not the creator of a work is actually human.

Concern about this eventuality includes concerns about the role of AI in authorship of scientific and technical journal articles; see Can research transparency & AI defend against fake science?

Questions can be raised about whether or not the author of a work is actually “human.” Granted, there may be situations where this is really not an issue. In the Science article As scientists face a flood of papers, AI developers aim to help, for example, it is reported that researchers are now using AI-based tools to summarize large amounts of literature as part of their research, and some journals are using AI to help create a standard journal article component, the article abstract.

The Importance of Context

Such uses of AI may not be completely foolproof, but they do point out the potential importance of the personal and organizational context in which the work is created. Just as I would like to know who actually wrote a comment posted on an article published in the Washington Post, I would also like to know if that author is in fact human!

Returning to the main concern of this article (knowing an author’s intent as input to evaluating and acting upon the information contained in a work), we see that knowing “intent” can be even more complicated than just knowing whether an author is in fact human. Let’s say the author of a research article explicitly uses AI to summarize and help analyze the findings of a research project’s experiment. Does the reader need to know something about what tools were used at different stages of the reported research?

Just New Tools

That's a complex question. Scientists have long used tools to augment what humans are able to do on their own. Think of how Galileo and Newton interpreted data gathered using the instruments available to them. It's not hard to extend that thought to include Watson, Crick, and Franklin's use of data gathered via 1950s-era x-ray crystallography to deduce the structure of DNA. How different, really, are these examples from today's use of tools such as ChatGPT to help create project planning documents?

This returns us to the issue of intent. Does the author intend to deceive, regardless of which tools are used in the creation and dissemination of that work? Does the author provide enough information regarding his or her interests and experience to facilitate reader assessments?

With respect to this latter consideration, what should scientists reveal about themselves when reporting research? Hiding identity online by posting anonymously prevents a reader from assessing the author’s sincerity, competence, and reliability. A variation of this is posting information via a false identity, which is becoming even easier with the rising popularity of AI tools that can very closely mimic human behavior patterns.

Guarding against such chicanery is not something non-specialist individuals are prepared to do, so the burden of validating the identity of authors must almost by default fall on the publishing organization, which today includes "legacy" organizations such as newspapers and professional societies as well as social media and social networks.

Responsibility for Truth

Should a social media company, for example, be legally liable for damages related to misinformation it publishes? That's too complex an issue to be addressed here, but I keep coming back to a very basic concern. When I read something on the Internet I wonder the following: has the author’s identity been vetted by the publisher? Is the author who he or she claims to be? Is enough information provided by the author for me to evaluate whether to trust this person?

With the advance of AI it is becoming increasingly easy to publish something that sounds real by an author who looks legitimate. I am hoping that those who control media distribution are paying attention to--and spending money on--ensuring that what they distribute is not misinformation.

Perhaps media distributors will one day charge higher subscription rates not only for advertising-free access, but also to guarantee legitimacy of the author’s identity?

Copyright (c) 2023 by Dennis D. McDonald. The image at the top of this article was created by Microsoft Bing’s Image Creator in response to the prompt, “create a simple but elegant image that portrays symbolically the importance of understanding 'author intent' when evaluating online information.”