Better to be Feared Than Loved? The Impact of AI in the Media
How are sites such as ChatGPT affecting journalism?
By Ava Scruggs
What would happen if you could read an entire news article about an event that never happened? Or obtain a recording of the president saying words that never left their mouth? Or if you had the power to write a college admissions essay based on an experience you never had? These are all threats that currently exist due to the rise of artificial intelligence within our media.
So, how is AI affecting the media? Does it support or threaten journalists in their work?
AI has become an increasingly popular resource for people across occupations. Because it is so easily accessible, anyone can use AI to their advantage (or, perhaps, to the disadvantage of others). With the growing popularity of tools such as ChatGPT, Gemini (formerly Bard), and even Snapchat’s AI bot, AI is spreading across the world quickly. Yet even with the advantages it offers in expediting tasks and quickly scanning the heaps of material on the internet, AI has negative repercussions as well.
For journalists and people working in the media in particular, AI can pose a threat to job opportunities, as it can complete tasks far faster and at a scale beyond human capability. Why write a script when a machine could do it for you in an instant? ChatGPT alone has amassed over 100 million monthly users. The issue has become especially prominent for the Writers Guild of America (WGA), which went on strike in the past year for a variety of reasons, a major one being the exploitation of AI in the industry. The WGA’s contract language on AI states, “Neither traditional AI… nor generative AI … is a writer, so no written material produced …can be considered literary material.”
To better understand the effects of AI on the industry, it is important to hear from journalists and people who work in the media themselves. Tracey Kemble, a distinguished film and television producer based in Los Angeles who works with Netflix, shared her thoughts on the situation.
“What’s so crazy is that now, with AI, you have reduced the time; it almost feels like a science fiction movie, which is frightening. You are allowing a machine to perform a human’s skill sets. The time it takes to build characters and understand how characters interact with each other is being taken over,” Kemble said.
Kemble also finds the quality of the work generated by these AI services to be “more robotic,” with less of a personal effect on the public. Rather than fearing AI, however, she suggests treating it as a tool: a “rough draft, and a starting point to build off of.”
These are the same concerns that led many members of the WGA to go on strike in the past year. Susan Fales-Hill is an executive producer of the television series “And Just Like That” on HBO Max, a sequel to “Sex and the City.” She noted that fears about AI were a significant concern raised by union members in their recent strike.
“One of the big negotiating issues between my union, the Writers Guild of America and the AMPTP that led to our six month strike last spring was the danger that ChatGPT could pose to writers’ livelihoods. In the end, the WGA won a moratorium on its use for the next three years,” Fales-Hill said. “We will all be affected by AI and must adapt. We have to find ways to transform it into an educational tool.”
Even students working for the Yale Daily News share similar feelings about AI. Ben Raab ’26, a history major at Yale and one of the paper’s current print managing editors, has researched this exact topic and has a slightly different outlook.
“My biggest takeaway from this is that we still have a ways to go in terms of understanding this technology…but the fear is what we’ll be able to do with it in 10-15 years,” Raab said. “So far, I think AI has had minimal impact on our newsroom…but I can think of one positive effect, which is Otter.ai; it’s a tool that can transcribe our interviews, which definitely is more efficient.”
Esma Okutan, another Yale student who writes for the SciTech section of the Yale Daily News, shares Raab’s intrigue with AI’s potential and curiosity about the path it will take in the future.
“At first there was a fear of competition that my work wouldn’t be as good as AI, but now the benefits of it override my fears,” Okutan said.
Raab added that Yale’s lack of an AI policy is one of his “biggest concerns” at the moment.
Melissa Murray, a professor at NYU Law and a legal analyst for MSNBC, shares that NYU, like Yale, has no official policy on AI and instead lets individual professors decide for themselves.
“As a law professor I read a lot of exams, and my students have access to ChatGPT,” Murray said. “I made my exams closed book so my students can’t access other material; my policy eliminates the inequity to people who don’t know it’s available, and makes it fair for everyone. Nobody wants to make policies without considering the full extent— I think we’re all still determining what the right decision is.”
Murray later added that law firms and law professors are still adapting to AI, searching for a way to use AI to add value without risking accuracy in legal work.
Across these accounts, it is clear that AI is already leaving its mark on journalists and media professionals alike, regardless of their specific roles.
As Fales-Hill puts it, “There is no turning back and we deny, or ignore the coming revolution at our peril. We must all find ways to embrace this new force.”