Combating Fake Information In The Era Of Generative AI – Analysis

By Stephen Marcinuk

2022 was the year of generative AI. In the span of twelve months it gave us DALL-E 2, Midjourney and ChatGPT, powerful tools that put the combined power of a search engine, Wikipedia and a top-notch content generator at our fingertips.

Tools like Bard, Adobe Firefly and Bing AI quickly followed, rapidly expanding the abilities of your average internet user beyond anything we could’ve imagined just a few years ago. With a couple of simple keystrokes, we can now generate captivating images or pages of written content that, this time last year, would’ve taken hours, days, or weeks to produce—even for illustrators or writers with years of training.

Indeed, generative AI is changing the landscape beneath our feet—while we’re standing on it. But this pace of innovation comes with risks; namely, of losing our footing and letting algorithms override human discernment. As a recent article in the Harvard Business Review highlighted, the creation of fake news and so-called deep fakes poses a major challenge for businesses—and even entire countries—in 2023 and beyond.

Fortunately, innovation in AI isn't limited to content generation. The same technology, coupled with good, old-fashioned human instinct, can be turned back on the systems themselves to catch and correct their errors. But before examining these strategies in more detail, it's important we understand the real-world threats posed by AI-generated misinformation.

Recognizing the threats

The potential threats of AI-generated content are many, from reputational damage to political manipulation.

I recently read in The Guardian that the paper's editors had received inquiries from readers about articles missing from its online archives, articles the reporters themselves couldn't recall writing. It turns out they were never written at all: when prompted by users for information on particular topics, ChatGPT had cited Guardian articles in its output that were completely made up.

If errors and oversights baked into AI models themselves weren't concerning enough, there's also the possibility of intentional misuse to contend with. A recent Associated Press report identified several risks posed by generative AI ahead of the 2024 US presidential election. The report raised the specter of convincing yet illegitimate campaign emails, texts or videos, all generated by AI, that could mislead voters or sow political conflict.

But the threats posed by generative AI aren't only big-picture. Potential problems could spring up right on your doorstep. Organizations that rely too heavily and uncritically on generative AI to meet their content production needs could unwittingly spread misinformation and damage their own reputations.

Generative AI models are trained on vast amounts of data, and data can be outdated. Data can be incomplete. Data can even be flat-out wrong: generative AI models have shown a marked tendency to “hallucinate” in these scenarios—that is, confidently assert a falsehood as true.

Since the data and information that AI models train on are typically created by humans, who have their own limitations and biases, AI output can be correspondingly limited and biased. In this sense, AI trained on outdated attitudes and perceptions could perpetuate certain harmful stereotypes, especially when presented as objective fact—as AI-generated content so often is.

AI vs. AI

Fortunately, organizations that use generative AI are not prisoners to these risks. There are a number of tools at their disposal to identify and mitigate issues of bad information in AI-generated content. And one of the best tools for this is AI itself.

These processes can even be fun. One method in particular, known as “adversarial training,” essentially gamifies fact-checking by pitting two AI models against each other in a contest of wits. During this process, one model is trained to generate content, while the second model is trained to analyze that content for accuracy, flagging anything erroneous. The second model’s fact-checking reports are then fed back into the first, which corrects its output based on those findings.
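
To make the mechanics concrete, here's a deliberately simplified sketch of that loop in Python. The `Generator` and `FactChecker` classes are hypothetical stand-ins for real trained models, and the "learning" step is just a number being nudged; the point is the shape of the feedback loop, not a production implementation.

```python
# Illustrative sketch of an adversarial fact-checking loop.
# Generator and FactChecker are hypothetical stand-ins for real models;
# in practice each would be a trained neural network.

import random


class Generator:
    """Produces candidate claims; learns to avoid what the checker flags."""

    def __init__(self):
        self.error_rate = 0.5  # starts out getting roughly half its claims wrong

    def produce(self):
        return {"text": "candidate claim", "accurate": random.random() > self.error_rate}

    def learn_from(self, flagged_fraction):
        # The more often it is caught, the harder it corrects on the next round.
        self.error_rate = max(0.01, self.error_rate - 0.1 * flagged_fraction)


class FactChecker:
    """Flags the claims it judges inaccurate (here, it judges perfectly)."""

    def review(self, claims):
        return [c for c in claims if not c["accurate"]]


generator, checker = Generator(), FactChecker()
for round_number in range(1, 6):
    claims = [generator.produce() for _ in range(100)]
    flagged = checker.review(claims)
    generator.learn_from(len(flagged) / len(claims))
    print(f"round {round_number}: {len(flagged)} of {len(claims)} claims flagged")
```

Run over a few rounds, the generator's error rate falls as the checker's reports are fed back in, which mirrors the dynamic described above.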

We can even juice the power of these fact-checker models by integrating them with third-party sources of knowledge, such as the Oxford English Dictionary, Encyclopedia Britannica, newspapers of record or university libraries. These adversarial training systems have developed sophisticated-enough palates to differentiate between fact, fiction and hyperbole.
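
As a rough illustration of that grounding step, the snippet below checks a claim against a small set of reference sources. `REFERENCE_SOURCES` is a made-up stand-in for licensed corpora such as encyclopedias or newspaper archives, and a real system would use retrieval and semantic matching rather than exact string lookup.

```python
# Illustrative only: grounding a fact-checker in external reference material.
# REFERENCE_SOURCES stands in for licensed corpora; exact string matching is
# a placeholder for proper retrieval and semantic comparison.

REFERENCE_SOURCES = {
    "encyclopedia": {
        "The Eiffel Tower is in Paris.",
        "Water boils at 100 degrees Celsius at sea level.",
    },
    "newspaper_archive": {
        "The 2024 US presidential election is scheduled for November 2024.",
    },
}


def verify_claim(claim: str) -> str:
    """Return which source supports a claim, or mark it for human review."""
    for source_name, statements in REFERENCE_SOURCES.items():
        if claim in statements:
            return f"supported by {source_name}"
    return "unverified - escalate to human review"


print(verify_claim("The Eiffel Tower is in Paris."))
print(verify_claim("The Guardian published this article in 1850."))
```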

Here’s where it gets interesting: The first model, or the “generative” model, learns to outsmart the fact-checker, or “discriminative” model, by producing content that is increasingly difficult for the discriminative model to flag as wrong. The result? Steadily more accurate and reliable generative AI outputs over time.

Adding a human element

Although AI can be used to fact-check itself, this doesn’t make the process hands-off for all humans involved. Far from it. A layer of human review not only ensures delivery of accurate, complete and up-to-date information, it can actually make generative AI systems better at what they do. Just as it tries to outsmart its discriminative nemesis, a generative model can learn from human corrections to improve future results.
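
A minimal sketch of how human corrections might be folded back into future training is shown below. The `fine_tune` function and the review step are hypothetical placeholders; a real pipeline would use a provider's fine-tuning or feedback API and a proper review interface.

```python
# Illustrative only: turning human corrections into training examples.
# fine_tune is a placeholder; a real pipeline would call a provider's
# fine-tuning or feedback API instead of printing a message.

draft_outputs = [
    {"id": 1, "text": "The Guardian was founded in 1821."},
    {"id": 2, "text": "The Guardian was founded in 1921."},
]

# In practice this would come from a review UI; here it is hard-coded.
human_corrections = {2: "The Guardian was founded in 1821."}

feedback_examples = []
for draft in draft_outputs:
    preferred = human_corrections.get(draft["id"], draft["text"])
    feedback_examples.append(
        {"model_output": draft["text"], "human_preferred": preferred}
    )


def fine_tune(examples):
    """Stand-in for retraining on human-reviewed preference pairs."""
    print(f"fine-tuning on {len(examples)} human-reviewed examples")


fine_tune(feedback_examples)
```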

What’s more, internal strategies like this can then be shared between organizations to establish industry-wide standards and even a set of ethics for generative AI use. Organizations should further collaborate with other stakeholders, too—including researchers, industry experts and policymakers—to share insights, research findings and best practices.

One such best practice is data collection that prioritizes quality and diversity: human experts carefully select and verify data sources before they're fed into models, weighing not just real-time accuracy but representativeness, historical context and relevance.
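
To show what such a vetting gate could look like in code, here is a small sketch. The field names and thresholds are assumptions made for the example, not an industry standard.

```python
# Illustrative only: a vetting gate for candidate data sources before they
# are added to a training corpus. Field names and thresholds are assumptions.

from datetime import date

AS_OF = date(2023, 9, 1)  # fixed reference date so the example is deterministic

candidate_sources = [
    {"name": "university_library_corpus", "last_updated": date(2023, 5, 1),
     "verified_by_expert": True, "regions_covered": ["EU", "NA", "APAC"]},
    {"name": "scraped_forum_dump", "last_updated": date(2015, 3, 10),
     "verified_by_expert": False, "regions_covered": ["NA"]},
]


def passes_vetting(source, max_age_years=3, min_regions=2):
    """Keep sources that are recent, expert-verified, and reasonably diverse."""
    recent = (AS_OF - source["last_updated"]).days <= max_age_years * 365
    diverse = len(source["regions_covered"]) >= min_regions
    return recent and source["verified_by_expert"] and diverse


approved = [s["name"] for s in candidate_sources if passes_vetting(s)]
print("approved for ingestion:", approved)  # -> ['university_library_corpus']
```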

All of us with stakes in making better generative AI products should likewise commit to promoting transparency industry-wide. AI systems are increasingly used in critical fields like health care, finance and even the justice system. When AI models are involved in decisions that impact people's real lives, it's essential that all stakeholders understand how a decision was made and how to spot inconsistencies or inaccuracies that could have major consequences.

There could be consequences of misuse or ethical breaches for the AI user too. A New York lawyer landed himself in hot water earlier this year after filing a ChatGPT-generated brief in court that reportedly cited no fewer than six totally made-up cases. He now faces possible sanctions and could lose his law license altogether.

Generative AI modelers therefore shouldn’t be afraid of sharing documentation on system architecture, data sources and training methodologies, where appropriate. The competition to create the best generative AI models is fierce, to be sure, but we can all benefit from standards that promote better, more reliable, and safer products. The stakes are simply too high to be playing our cards so close to our chests.

The strides taken by generative AI in the last year are only a taste of what’s to come. We’ve already seen remarkable transformation not just in terms of what models are capable of, but in how humans are using them. And as these changes continue, it’s critical that our human instinct evolves right along with them. Because AI can only achieve its potential in combination with human oversight, creativity and collaboration.

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

About the author: As Co-founder and Head of Operations at Intelligent Relations, Steve is actively involved in all aspects of the company's operations and growth, from developing the platform's AI PR technology to client services. Steve is a Wharton School of Business graduate with several startups under his belt and a keen eye for unique business ventures.

Source: This article was published by Fair Observer

Fair Observer

Fair Observer is an independent, nonprofit media organization that engages in citizen journalism and civic education. Fair Observer's digital media platform has 2,500 contributors from 90 countries, cutting across borders, backgrounds and beliefs. With fact-checking and a rigorous editorial process, Fair Observer provides diversity and quality in an era of echo chambers and fake news. Fair Observer's education arm runs training programs on subjects such as digital media, writing and more. In particular, Fair Observer inspires young people around the world to be more engaged citizens and to participate in a global discourse.
