Artificial intelligence, more commonly referred to as AI, is becoming increasingly prevalent in daily life. This rapidly growing technology carries real dangers, posing a threat to cybersecurity and to privacy overall.
The AI technology most familiar to college students is ChatGPT. Few haven't at least used the site to quickly gather sources for a paper, and it is an unspoken truth of college life that a growing share of coursework is entirely AI-generated.
Though this makes the lives of college students easier, the problems AI has opened up in the world are countless. The scariest threat of them all may be the practice known as "deepfaking."
There are thousands of videos and pictures online that have been deepfaked so convincingly that they are indistinguishable from authentic footage. Merriam-Webster defines a deepfake as "an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said."
Technology like deepfakes has scary implications in the world of politics and international affairs. Imagine a deepfake video of President Joe Biden declaring war on Russia. An incident like that, if not handled quickly and adequately, could lead to disastrous international relations.
In fact, AI has recently been a topic of conversation in the Oval Office. Biden signed an executive order on Oct. 30 aimed at establishing regulations for what AI is allowed to do. However, lasting change will require legislation from Congress.
On a much smaller scale, AI is also stealing from artists and creators who upload their work online. Because AI will take information from whatever it can find, many of the images it generates are stolen from artists. The New Yorker reported on the case of Kelly McKernan, who learned that her name had been used nearly 12,000 times in AI generation prompts. The resulting images were uncannily similar to her own unique art style.
“I can see my hand in this stuff, see how my work was analyzed and mixed up with some others’ to produce these images,” McKernan told the New Yorker.
This case, along with the experiences of many others, underscores the need for clear rules about online ownership and what falls under fair use when it comes to AI.
Overall, it is clear that AI needs to be regulated at the federal level. The bills currently in the early stages of development cannot keep pace with the rapidly improving technology. Some aspects, like privacy and security, are receiving heavier focus, but the issue needs to be addressed as a whole. If regulation is not quick to take effect, we will start seeing everyday life and work affected by AI.
Natalie Peck is a junior communication major. Natalie can be reached through her email, [email protected].