How to protect against ChatGPT and other AIs that promote scientific denial

Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.
Now you have another option: you can pose your question to ChatGPT or another generative AI platform and quickly receive a succinct answer in paragraph form.
ChatGPT does not search the internet the way Google does. Instead, it generates answers to queries by predicting likely word combinations from a massive amalgam of available online information.
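To make that distinction concrete, here is a toy sketch in Python of statistical next-word prediction. It is a deliberately simplified illustration of the general idea only, not ChatGPT’s actual method, which relies on vastly larger neural networks trained on far more data; the tiny corpus and the bigram-counting approach are assumptions made purely for demonstration.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram model that "predicts likely word
# combinations" from a tiny sample text. Real systems such as ChatGPT
# use far larger neural networks, but the core idea is similar:
# generate text one word at a time from learned probabilities,
# rather than by searching the web.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=5):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Note that the model has no notion of truth: it produces whatever continuation is statistically likely, which is why fluent output can still be wrong.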
Although it has the potential to enhance productivity, generative AI has been shown to have major flaws. It can produce misinformation. It can create “hallucinations” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked whether both a car and a tank could fit through a doorway, it failed to consider both width and height. Nevertheless, it is already being used to produce articles and website content you may have encountered, or as a tool in the writing process. Yet you are unlikely to know whether what you’re reading was created by AI.
As the authors of “Science Denial: Why It Happens and What to Do About It”, we are concerned about how generative AI may blur the lines between truth and fiction for anyone seeking authoritative scientific information.
Every media consumer needs to be more vigilant than ever to verify the scientific accuracy of what they read. Here’s how to stay alert in this new information landscape.
How Generative AI Could Promote Science Denial
Erosion of epistemic trust. All consumers of scientific information depend on the judgments of scientific and medical experts. Epistemic trust is the process of trusting the knowledge you get from others. It is fundamental for the understanding and use of scientific information. Whether someone is looking for information about a health issue or trying to figure out solutions to climate change, they often have limited scientific understanding and limited access to first-hand evidence. With a rapidly growing volume of information online, people have to make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode more than it has already.
Misleading or just plain wrong. If there are errors or biases in the data on which AI platforms are trained, those can be reflected in the results. In our own research, when we asked ChatGPT to regenerate multiple answers to the same question, we got conflicting answers. Asked why, it replied, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.
Disinformation deliberately spread. AI can be used to generate compelling disinformation in the form of text as well as deepfake images and videos. When we asked ChatGPT to “write about vaccines in the style of misinformation”, it produced a nonexistent quote with fabricated data. Geoffrey Hinton, a pioneer of AI development, left Google so that he would be free to sound the alarm, saying: “It’s hard to see how you can prevent the bad actors from using it for bad things.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.
Fabricated sources. ChatGPT provides answers with no sources at all, or, if asked for sources, may present ones it has made up. We both asked ChatGPT to generate a list of our own publications. Each of us identified a few correct sources. Others were hallucinations, yet seemingly reputable and mostly plausible, with real previous co-authors, in similar-sounding journals. This inventiveness is a big problem if a list of a researcher’s publications conveys authority to a reader who doesn’t take the time to verify them.
Dated knowledge. ChatGPT doesn’t know what happened in the world after its training ended. A query about what percentage of the world has had COVID-19 returned an answer prefaced with “as of my knowledge cutoff date of September 2021”. Given how rapidly knowledge advances in some areas, this limitation could mean readers get outdated, incorrect information. If you’re looking for recent research on a personal health issue, for instance, beware.
Rapid progress and lack of transparency. AI systems continue to get more powerful and learn faster, and they may learn more scientific misinformation along the way. Google recently announced 25 new uses of AI embedded in its services. At this point, insufficient safeguards are in place to ensure that generative AI will become a more accurate provider of scientific information over time.
What can you do?
If you use ChatGPT or other AI platforms, recognize that they may not be completely accurate. The burden falls on the user to discern accuracy.
Increase your vigilance. AI fact-checking applications may be available soon, but for now, users must serve as their own fact-checkers. There are steps we recommend. The first is: Be vigilant. People often reflexively share information found during searches on social media with little or no vetting. Know when to become more deliberately reflective and when it is worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take the time to vet the sources.
Improve your fact-checking. A second step is lateral reading, a process professional fact-checkers use. Open a new window and look up information about the sources, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are given, or you don’t know whether they are valid, use a traditional search engine to find and evaluate experts on the topic.
Assess the evidence. Next, review the evidence and how it relates to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating claims will take effort beyond a quick query to ChatGPT.
If you’re starting with AI, don’t stop there. Exercise caution in using it as the sole authority on any scientific matter. You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also do more diligent research using traditional search engines before jumping to conclusions.
Assess plausibility. Judge whether the claim is plausible. Is it likely to be true? If the AI makes an implausible (and inaccurate) statement like “1 million deaths were caused by vaccines, not COVID-19”, consider whether it even makes sense. Make a tentative judgment, then be open to revising your thinking once you have checked the evidence.
Promote digital literacy in yourself and others. Everyone needs to up their game. Improve your own digital literacy and, if you’re a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides advice on verifying information online and recommends that teens be trained in social media skills to minimize health and well-being risks. The News Literacy Project provides useful tools to improve and support digital literacy.
Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, chances are you’ve already read articles created by it or developed from it. Finding and evaluating reliable science information online can take time and effort, but it’s worth it.
Gale Sinatra, Professor of Education and Psychology, University of Southern California, and Barbara K. Hofer, Professor Emerita of Psychology, Middlebury College
This article is republished from The Conversation under a Creative Commons license. Read the original article.