We’ve all heard the phrase “data is the new oil”; or at least I have, hundreds of times. But what exactly does this mean? According to Clive Humby, the mathematician who originally coined the phrase, data is analogous to oil in that it is pretty damn useless in its raw state; it needs to be refined, processed, and repurposed in order to serve as a valuable commodity. Unfortunately, organizations and people across the world have misconstrued this initial narrative and turned it into an ideology that ALL data, regardless of its quality, is beneficial. I can’t even begin to tell you how much crap I see on the internet that turns out to be fake news, misinformation, or generated content that provides no underlying value, but instead simply “appeals” to viewers. It’s hard for a layperson (like myself) to find quality data nowadays, because there is so much gatekeeping and there are so many contradictory statistics blurring the line between real and fake. In fact, “A survey conducted in December 2020 assessing if news consumers in the United States had ever unknowingly shared fake news or information on social media found that 38.2 percent had done so.”
The rise of misinformation and fake news on the internet is no surprise. In fact, the concept of fake news has been around for hundreds of years; the Salem Witch Trials are a great example of the diffusion and rapid spread of false information, which led to mass hysteria and extremism. So how exactly does all this relate to Generative AI? Well, the growth of AI-generated content will inevitably lead to even more garbage on the internet, and can potentially culminate in an era where generated misinformation controls the public narrative.
Current State of Misinformation
As of 2022, 62% of the internet is made up of unreliable information. One of the largest and most widely spread topics of misinformation relates to health; particularly COVID-19. I know, I know, this is a touchy topic for many people and it’s turned into some sort of political spectacle where your beliefs on the COVID-19 vaccine dictate whether you’re a “sheep” or not (such a stupid insult btw). In fact, I got into an argument with someone recently claiming that Bronny James’ cardiac arrest was caused by the COVID-19 vaccine. After presenting some facts (and common sense) to him clearly showing that the vaccine was incredibly unlikely to have caused this, he went on a tirade, calling me all sorts of derogatory names, lowering his perceived intelligence to that of a rock.
That aside, false claims surrounding the COVID-19 vaccine purport that it contains “microchips” that can be used to control and track people, that it causes infertility or death, that it will alter human DNA, and that the pharmaceutical industry has fabricated the results of vaccine trials or covered up harmful side effects to boost its profits. This has led to vaccine hesitancy and negative effects on people’s health behaviors. Much like AIDS denialism in South Africa contributed to more than 330,000 deaths between 2000 and 2005, COVID-19 misinformation made the ultimate death toll worse than it had to be.
Another large pool of false news revolves around politics, and we’ve already seen some effects of Generative AI in the political sphere. For example, in the lead-up to the 2024 election, AI-generated political disinformation went viral online, including a doctored video of President Biden appearing to give a speech attacking transgender people and AI-generated images of children supposedly learning satanism in libraries. I’m not a huge Biden fan, but I’m pretty sure he didn’t do those things.
This is where we are at. We are already in an age where fake news and falsity on the internet have led to a degradation of trust in institutions and a resurgence of measles and chickenpox in the United States. The infiltration of Generative AI will only accelerate these trends, and despite global legislation to contain false information, it seems as though people ultimately care about the emotions behind false claims, not whether the information is right or wrong. This is a huge problem, and I think the mass spread of misinformation has eroded our ability to think critically.
The Future of Misinformation with Generative AI
We’ve already seen how Generative AI can have real-world effects on the public narrative and the actions people take daily. The future of misinformation on the internet with the burgeoning of Generative AI is incredibly scary, and here are five reasons why:
- Deepfakes: Generative AI can create highly realistic images, videos, and audio that can be difficult to distinguish from the real thing, leading to the spread of false information and further blurring of the line between real and fake.
- Fake News: Generative AI can be used to create fake news stories and social media posts, manipulating public opinion and making it a powerful tool for those with nefarious intentions. This can also be used as international propaganda for countries to target others as an attempt to increase nationalism and enhance support for offensive actions.
- Targeted Disinformation: Generative AI can produce content that seems legitimate and relatable to specific individuals or groups, enabling personalized disinformation campaigns. This can lead to further discrimination and disenfranchisement between groups of people.
- Erosion of Trust: The prevalence of AI-generated content can lead to a general erosion of trust in digital media, affecting people’s ability to discern genuine information from misinformation. This can eventually lead to the breakdown of institutional systems, and the re-emergence of tribalism.
- Copyright Ambiguities: The widespread use of generative AI raises questions about who holds the copyright to content created using these programs, as the AI’s user, the AI’s programmer, and the AI program itself all play a role in the creation of these works. What will the future of work look like, and where does authenticity emanate?
Conclusion
I don’t think that, when the internet and social media platforms were first created, most people were actively concerned about the spread of false information or the societal implications of such a disruptive technology; they were probably just happy to be able to communicate with others across the globe and post original content. I think we are at an inflection point now where we can look back, see all the negative (and positive) aftershocks of the internet and global connectivity, and use the knowledge we’ve gained to protect ourselves in the future.
Generative AI is different because of the amount of content it can create rapidly; however, it is all subject to the data it is trained on. And if 62% of the internet is unreliable information, how can we be sure that the data we are training our models on is accurate and unbiased? We can’t. There is literally no way to verify ALL of the content online. The most we can do, as people, is think critically about any claim online and research it with established organizations and institutions. Yes, there’s lots of lobbying and political scheming that goes on at the top of organizations, but if you can read studies and papers and dispense with your emotions, you can find the truth. There needs to be a push for media literacy, fact-checking tools, and responsible AI use.
At the same time, I think we need to be pessimistic and skeptical of any new information or claims we see or hear on the internet; anything at all. There’s a plethora of data on the internet, and the majority of it is a fucking dumpster fire. Learn to read, learn to think for yourself, incorporate your values and morals into what you do, and despite the Generative reality being created in front of us, you’ll come to a factually based conclusion that will allow you to live your best life for yourself, and for those around you. Just think god dammit!