(4 Apr 2023) On March 30, CCC hosted a town hall, ChatGPT & Information Integrity, via LinkedIn. Chris Kenneally, CCC’s senior director of content marketing, moderated the live event, inviting the speakers to share their experiences with ChatGPT and other AI tools and to voice their concerns and questions about this rapidly changing technology. AI tools are bound to change scholarship and research, he said, but we don’t yet know in what ways.
The speakers were:
- Gordon Crovitz, co-CEO and co-editor in chief of NewsGuard
- Tracey Brown, director of Sense about Science
- Gina Chua, executive editor at Semafor
- Mary Ellen Bates, founder and principal of Bates Information Services
- Steven Brill, co-CEO and co-editor in chief of NewsGuard
Each speaker provided introductory discussion points. Crovitz said that NewsGuard ran tests using ChatGPT and found that it produced false information, including the claim that the children killed at Sandy Hook Elementary School were actually paid actors. With its latest release, ChatGPT repeated 100 out of 100 false narratives that NewsGuard fed it.
Brown asserted that we are underprepared to have a conversation about AI. Society is still at the level of wondering whether a customer service representative is a chatbot or a real person. She said that we need to be focused on who is going to take responsibility for each AI tool.
Chua called AI tools “extremely good autocomplete machines.” Semafor has been using them for basic proofreading and copy editing, which has been going well. They are language models and are not particularly smart on their own yet.
Brill said the key to moving forward with AI is accountability plus transparency. The newest version of ChatGPT is good at reading and mimicking language, and that makes it more persuasive in perpetrating hoaxes. He cited the example of cancer.org, the official site of the American Cancer Society, and cancer.news, a site rife with misinformation. ChatGPT treats the information on the .org site with the same regard as that on the .news site, without differentiating the veracity of the information on each.
Bates believes that the transition away from traditional information gathering isn’t a bad thing; for example, she finds Google Maps much more effective at keeping her from getting lost than paper maps. She likened AI tools to teenagers: She wouldn’t trust a 17-year-old to do her research for her, but one could give her a good start. AI tools will never be a substitute for a professional researcher, she said.
Brill noted that while ChatGPT has been shown to be able to pass a legal bar exam, it isn’t great at discerning misinformation. Crovitz described NewsGuard for AI, a new solution that provides data to train AI tools to recognize false narratives, thus minimizing the risk of spreading misinformation. He said that chatbot-generated responses need to come with a way to assess whether a given answer is likely to be true.
Brown’s Sense about Science advocates for a culture of questioning: Ask where data comes from and whether it can bear the weight someone is putting on it. One of the key questions that gets missed with machine learning is, How is the machine doing the learning? Also, what is its accuracy rate? Does it push people toward extreme content? What kind of wrong information is tolerable to receive?
Kenneally reinforced these questions, saying that there is no doubt AI models are impressive, but we need to examine how well they actually perform.
Brown cited the Nature policy that AI language models will not be accepted as co-authors on any papers. She said more organizations need to state that they won’t accept AI authors because AI can’t be held accountable. There is a lack of maturity in AI discussions, she believes, and not enough thought is being put into the real-world contexts these tools will be released into. There needs to be a clearer sense of who is signing off on what when it comes to AI developers.
Chua underscored her earlier point that AI tools are not actually question-and-answer machines; they’re language machines. They don’t have any sense of verification; they only mimic what they’ve been fed. She noted that they say what is plausible, not what is true or false. We can use them to help us formulate questions because of their attention to written style. She did an experiment with one of the AI tools: She created a news story and asked it to rewrite the story in the style of The New York Times, then The New York Post, then Fox News. Each time, it mimicked that outlet’s style well. This type of usage is currently the best way to employ AI tools, she said.
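For readers who want to try a version of this themselves, here is a minimal sketch of that kind of style-mimicry prompt, assuming access to OpenAI’s Python client; the model name, prompt wording, sample story, and restyle helper are illustrative assumptions, not the exact tool or settings Chua used.

```python
# Illustrative sketch only: the model name, prompts, and helper below are
# assumptions, not the exact setup described in the town hall.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def restyle(story: str, outlet: str) -> str:
    """Ask the model to rewrite a news story in a given outlet's house style."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's news story in the style of {outlet}. "
                    "Do not add or change any facts; change only tone and style."
                ),
            },
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content

# A made-up story, restyled for three outlets, as in Chua's experiment.
story = "A local council voted 5-2 on Tuesday to approve a new downtown bike lane."
for outlet in ["The New York Times", "The New York Post", "Fox News"]:
    print(f"--- {outlet} ---")
    print(restyle(story, outlet))
```

The point of the exercise is that the model keeps whatever facts it is given and shifts only the register, which is why Chua considers style work, rather than fact-finding, the best current use of these tools.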
Bates said researchers should keep in mind that the tools are doing simple text and data mining, looking for patterns. They can’t infer something you’re not asking; only real people can take context into account. A chatbot doesn’t know what you’re planning to do with your research, it doesn’t ask follow-up questions, and it’s not curious like a human researcher is. A chatbot is a helpful paraprofessional, but anything it provides needs to be reviewed by a professional, she said.
The presenters continued their discussion and addressed some comments from attendees. Access the recording of the town hall at youtube.com/watch?v=RF3Gs-BNOtM.
Source: Information Today