Navigating the Labyrinth: Misinformation and AI in the Information Age

Posted in The Gnovis Blog, 2024

By Maria Fernanda Chanduví

The digital landscape teems with information, both real and fabricated. Artificial intelligence (AI) is a double-edged sword, with the potential to illuminate truth or to obscure reality behind fabricated narratives. Understanding the complex interplay between AI and misinformation requires a nuanced exploration of both its risks and its promises.

On the one hand, AI’s capacity for data manipulation poses a significant threat. Its ability to automate content creation yields fabricated news articles, social media posts, and even deepfakes: meticulously crafted digital forgeries capable of blurring the line between truth and fiction.

Deepfakes are AI-generated audiovisual materials that aim to pass as authentic recordings (Westling, 2019, p. 1). They are typically produced with “Generative Adversarial Networks,” or GANs, which pit a generative model that creates synthetic content against a discriminative model that tries to detect it (Goodfellow et al., 2014, p. 1).
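For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration of that adversarial setup, written in Python with PyTorch. A toy one-dimensional distribution stands in for images or audio, and every name and number is an illustrative assumption; real deepfake systems are vastly larger, but the core loop of a generator trying to fool a discriminator is the same.

```python
# Minimal GAN sketch (hypothetical toy example, not a deepfake system):
# a generator learns to mimic a 1-D "real" data distribution while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 16  # size of the random noise vector fed to the generator

# Generator: maps random noise to a synthetic sample
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how likely a sample is to be real
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples from a Gaussian centered at 4.0
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to label real as 1 and fake as 0
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator into outputting 1
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

In an actual deepfake pipeline, the generator would output images or audio frames and both networks would be deep convolutional models, but the training dynamic is unchanged: each side improves by competing against the other.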

Deepfakes can be used in many ways. Because the technology is designed to create “realistic but fake” audiovisual content, it can make a person appear somewhere they never were, show them saying something they never said, or distort what they actually said. The possibilities for misuse of this dangerous technology are numerous.

The hyper-realistic nature of these AI-generated narratives can deceive even the most perceptive minds, eroding trust in verified information sources and fostering a climate of uncertainty. AI algorithms, susceptible to intrinsic biases in their training data, can also amplify existing misinformation, creating echo chambers and reinforcing pre-existing beliefs.

Recently, many of us witnessed deepfakes that had a major impact in very different arenas: the fake audio of President Joe Biden discouraging New Hampshire residents from voting in the primary election, and the fake explicit images of Taylor Swift.

The Biden audio circulated online through robocalls featuring a voice manipulated to resemble his. The message discouraged voters from casting their ballots in the primary, falsely claiming that their votes would carry more weight in the general election (Seitz-Wald & Memoli, 2024).

This incident raised alarms among disinformation experts, highlighting the potential of deepfakes to manipulate public opinion and interfere in elections, which are only nine months away.

What is truly concerning is the increasing sophistication of deepfakes, which are becoming harder and harder to distinguish from genuine content. Election season is a prime target for misinformation, and the prospect of widespread deepfake use is especially worrying where vulnerable demographics are concerned.

The case of Taylor Swift is no different. Recently, explicit, non-consensual deepfake images of the singer circulated on various social media platforms, raising concerns among fans and non-fans alike (Conger & Yoon, 2024).

The New York Times reported that Reality Defender, a cybersecurity company, concluded with 90 percent confidence that the material was created using AI, and noted how rapidly the industry has grown, producing tools that make deepfakes quicker, easier, and cheaper to create.

The company’s findings underscore experts’ concerns that deepfakes are becoming a powerful tool for disinformation. The reporting also notes the difficulty of regulating such content, as individuals find ways to bypass the rules imposed by companies that provide generative AI tools (Conger & Yoon, 2024).

The lack of regulation of deepfakes and AI-generated content contributes to the development and spread of misinformation, eroding trust in the media, interfering with informed decision-making, potentially influencing elections, and inciting social unrest (Weiner & Norden, 2023, p. 8).

However, AI is not just a messenger of misinformation. Its analytical capability can be a potent weapon in the fight for truth. AI-powered fact-checking tools can rapidly analyze vast volumes of online content, identifying patterns of falsehood (Anderson & Rainie, 2023, p. 14). By automating the initial stages of fact-checking, AI can augment human efforts, making the debunking of misinformation more efficient.
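As a rough illustration of what those “initial stages” can look like, the hypothetical sketch below uses a simple Python (scikit-learn) text classifier to score incoming claims against a handful of previously labeled examples so that human fact-checkers can prioritize review. The claims and labels are invented for the example; production systems rely on far larger models and on retrieving evidence from verified databases.

```python
# Hypothetical sketch of AI-assisted fact-checking triage: a simple text
# classifier scores claims so human fact-checkers can prioritize review.
# All training data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (labels: 1 = previously debunked, 0 = verified)
claims = [
    "Miracle cure eliminates all disease overnight",
    "Officials confirm the vote count followed standard procedure",
    "Secret message hidden in the broadcast controls your mind",
    "The city council approved the budget in a public session",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a classic baseline text classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(claims, labels)

# Score new claims; higher scores get routed to human reviewers first
new_claims = ["Hidden broadcast message controls minds, experts say"]
scores = model.predict_proba(new_claims)[:, 1]
for claim, score in zip(new_claims, scores):
    print(f"{score:.2f}  {claim}")
```

The point of the design is the division of labor: the model does cheap, fast triage over enormous volumes of content, while the final judgment about what is actually false stays with human fact-checkers.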

The effectiveness of this AI-driven counterattack depends on responsible development and ethical implementation. Transparency about data sources and algorithmic biases is essential to ensuring the reliability of AI-powered solutions. Moreover, promoting human-AI collaboration, in which human judgment guides and sharpens AI algorithms, is crucial to safeguarding against harmful consequences.

Ultimately, the battle against misinformation transcends technological solutions. Cultivating critical thinking skills and media literacy remains the linchpin of a resilient information landscape. Teaching people to critically assess information sources, weigh evidence, and recognize manipulative tactics empowers them to navigate online content with understanding and confidence.

The road ahead in the AI-infused information age demands a multidisciplinary approach. We must recognize both the risks and the promises of AI in the context of misinformation, promoting its responsible development while equipping individuals with the critical thinking skills needed to distinguish truth from fiction. Only through such a collaborative effort can we protect the integrity of the information landscape.

References

  1. Anderson, J., & Rainie, L. (2023, June 21). As AI spreads, experts predict the best and worst changes in digital life by 2035. Pew Research Center.
  2. Conger, K., & Yoon, J. (2024, January 26). Explicit deepfake images of Taylor Swift elude safeguards and swamp social media. The New York Times. https://www.nytimes.com/2024/01/26/arts/music/taylor-swift-ai-fake-images.html
  3. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative adversarial networks. Communications of the ACM, 63(11), 139–144.
  4. NBCUniversal News Group. (2024, January 25). Taylor Swift nude deepfake goes viral on X, despite platform rules. NBCNews.com. https://www.nbcnews.com/tech/misinformation/taylor-swift-nude-deepfake-goes-viral-x-platform-rules-rcna135669
  5. Seitz-Wald, A., & Memoli, M. (2024, January 22). Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday. NBCNews.com. https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote-tuesday-rcna134984
  6. Weiner, D. I., & Norden, L. (2023, December 12). Regulating AI deepfakes and synthetic media in the political arena. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
  7. Westling, J. (2019, July 24). Are deep fakes a shallow concern? A critical analysis of the likely societal reaction to deep fakes. TPRC47: The 47th Research Conference on Communication, Information and Internet Policy. Available at SSRN: https://ssrn.com/abstract=3426174 or http://dx.doi.org/10.2139/ssrn.3426174