
How to be on the lookout for misinformation when using generative AI

Fast Company: “Until very recently, if you wanted to know more about a controversial scientific topic—stem cell research, the safety of nuclear energy, climate change—you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust. Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.

ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations from a massive amalgam of available online information.

Although it has the potential to enhance productivity, generative AI has been shown to have some major faults. It can produce misinformation. It can create hallucinations—a benign term for making things up. And it doesn’t always solve reasoning problems accurately. For example, when asked if both a car and a tank can fit through a doorway, it failed to consider both width and height.

Nevertheless, it is already being used to produce articles and website content you may have encountered, or as a tool in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.

As the authors of Science Denial: Why It Happens and What to Do About It, we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information. Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape…”
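The excerpt's one technical claim is that a system like ChatGPT composes answers by predicting likely word sequences rather than retrieving and citing verified sources. The toy Python sketch below, a hypothetical bigram counter that is nothing like the large neural networks ChatGPT actually uses, illustrates why such a system returns the statistically plausible continuation rather than the verified one.

```python
# Minimal sketch of next-token prediction, the mechanism the article
# describes: pick a likely next word given the words seen so far.
# This bigram lookup table is purely illustrative; real generative AI
# systems use large neural networks trained on vast text corpora.
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "a massive amalgam of
# available online information."
corpus = (
    "stem cell research is controversial . "
    "nuclear energy is controversial . "
    "climate change is real ."
).split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("is"))  # -> "controversial"
```

Here the model answers "controversial" after "is" simply because that pairing is most frequent in its tiny corpus. Plausibility, not accuracy, drives the output, which is why hallucinations arise and why the verification habits the authors recommend matter.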
