Artificial intelligence technology is not new, but dramatic advances in generative AI have captured the world’s attention and are transforming the information landscape.
This infographic provides an overview of how this technology works and offers six news literacy takeaways to keep in mind as these tools evolve:
- Generative AI tools are not objective: They are subject to the biases of the humans who make them, and any biases in the training data may show up when they are used.
- . . . or reliably factual: AI tools might feel authoritative and credible, but the responses they generate are routinely riddled with inaccuracies.
- It’s not all bad: AI tools also have tremendous upsides. (For example, they can boost scientific research and make complicated or specialized tasks, like writing computer code or building websites, more accessible.)
- Content is easier than ever to create — and fake: AI chatbots and image generators produce text and visuals at an unprecedented scale — and have the potential to supercharge the spread of misinformation. Be ready to encounter even more information with less transparency about its origin.
- It signals a change in the nature of evidence: The rise of increasingly convincing fake photos and videos means that determining the source and context of a visual is often more important than hunting for visual clues of authenticity. Any viral image you can’t verify through a reliable source (using a reverse image search, for example) should be approached with skepticism.
- Reputable sources matter more than ever: Credible sources follow processes to verify information before sharing it, and that diligence should earn them a higher level of trust.
Don’t let AI technology undermine your willingness to trust anything you see and hear. Just be careful about what you accept as authentic.
Source: https://newslit.org/tips-tools/6-things-to-know-about-ai/.