If you read the news, you've been exposed to the name Sam Altman at least a couple of times during the past week. In the span of four days, Altman was ousted from his role as CEO of OpenAI, announced he would join Microsoft to lead its new AI research team, and then reclaimed his previous position on Nov. 21. The board of OpenAI, the company responsible for ChatGPT, justified firing Altman by stating that he had hindered its ability to exercise its responsibilities.
Analysts of this jaw-dropping power struggle in the tech world explain that the main point of divergence between Altman and the board was a philosophy named Effective Altruism, which advocates maximizing the positive impact of one's available time and resources. More specifically, board members of OpenAI were strongly concerned with AI's potential to destroy humanity in the future, echoing some of the utilitarian concerns that underpin Effective Altruism. As CEO, on the other hand, Sam Altman embodies the commercial interests of the company, aligning more closely with OpenAI's main investor, Microsoft Corp. It is also important to mention that Altman's return came only after nearly all of OpenAI's roughly 800 employees threatened to quit and move to Microsoft's AI department, calling for the CEO's reinstatement and for the resignation of the board.
The fact that Effective Altruism emerged as a point of contention among OpenAI executives is quite significant, and it has intriguing implications in light of the recent SAG-AFTRA and Writers Guild of America strikes. The actors' walkout, the second-longest actors' strike ever, marked the first time since 1960 that the actors' and screenwriters' unions had walked out simultaneously. Their concerns were not only about low pay rates but also about the growing use of AI in the industry: performers and writers fear that AI could be employed to reproduce performances or generate new scripts from existing material, substituting automation for human creativity.
In this context, the rise of creative automation presents both opportunities and challenges. While it holds the potential to make content creation more efficient and open new avenues for innovation, it also raises ethical questions about displacing human artistry. As the industry grapples with these changes, it is crucial to strike a balance that respects the creative essence of the work while embracing the advancements that technology offers.
If Hollywood, a multibillion-dollar industry, and OpenAI, a company at the forefront of technological development, are both facing internal disagreements over what limits should be imposed on AI, then the question of those limits is undoubtedly relevant and urgent. There is a fine line between the important contributions and the potential negative consequences of large language models and generative AI, and misjudging it could lead to a loss of efficiency on one side or job losses on the other, to say the least.
So what is the solution? At this point, it remains unclear. While some argue that the progression of technology is "inescapable," others hold that there is no point in permitting its unrestrained development. Even among specialists, there is widespread uncertainty about how current societal structures may be transformed by the advent of AI. And since there is no clear, well-defined understanding of the future consequences of such recent tools, it becomes even harder to decide where to set constraints.
As these cases show, decisions about how AI should be handled can swing to diametrically opposite conclusions in less than 96 hours, or take more than four months of negotiation to produce a tentative agreement. The undeniable truth is that AI is permeating the most diverse professional fields, provoking arduous debates and deep rifts over its controversial effects.
OpenAI and SAG-AFTRA don't point us to a definite answer on the impacts of emerging technology. However, these cases do highlight crucial questions about AI's ramifications for power disputes, the creative industry, and the preservation, or loss, of lower-level jobs. If anything, they show that a seemingly distant technology is already affecting people's lives in very practical ways, and, if it hasn't yet, it will likely touch our own soon.
LAURA WAGNER is a sophomore in Benjamin Franklin College. Contact her at laura.wagner@yale.edu.