If you read the news, you’ve been exposed to the name Sam Altman at least a couple of times during the past week. In the span of four days, Altman stepped down as CEO of OpenAI, joined Microsoft to head a new AI research team, only to reclaim his previous position on Nov. 21. The board of OpenAI, the company responsible for ChatGPT, justified firing Altman by stating that he had hindered its ability to exercise its responsibilities.

Analysts of this jaw-dropping power struggle in the tech world explain that the main divergence between Altman and the board was a philosophy called Effective Altruism, which advocates maximizing positive impact with one’s available time and resources. More specifically, board members of OpenAI were strongly concerned with AI’s potential to destroy humanity in the future, adhering to some of the utilitarian concerns that compose Effective Altruism. As CEO, on the other hand, Sam Altman embodied the commercial interests of the company, acting more in accordance with OpenAI’s main investor, Microsoft Corp. It is also important to mention that Altman’s return occurred after nearly all of OpenAI’s roughly 800 employees threatened to quit and join Microsoft’s AI department, calling for the CEO’s reinstatement and for the resignation of the board.

The fact that Effective Altruism came up as a source of divergence among OpenAI executives is extremely significant, and it bears an interesting connection with the recently concluded strikes of the Screen Actors Guild–American Federation of Television and Radio Artists, or SAG-AFTRA, and the Writers Guild of America. For the first time since 1960, actors’ and screenwriters’ unions went on strike simultaneously, in what turned out to be the second-longest actors’ strike ever. Concerned with low pay rates and producers’ growing use of AI, performers and writers are increasingly worried about their prospects within the industry. They fear that AI could be used to reproduce performances or generate new scripts based on existing material. Unethical or excessive use of cutting-edge technology could replace the human workforce in a field that, for some, was once seen as protected from automation due to its creative and artistic character.

If the multibillion-dollar industry that is Hollywood and OpenAI, a company at the forefront of technological development, are both facing internal disagreements about what limits should be imposed on AI, then the question of those limits is undoubtedly relevant and urgent. There is a fine line between the important contributions and the potential negative consequences of large language models and generative AI — one that, if misjudged, could lead to a loss of efficiency on one side, or job losses on the other, to say the least.

So what is the solution? At this point in time, it remains unclear. While some argue that the progression of technology is “inescapable,” others hold that there is no point in permitting its unrestrained development. Even among specialists, there is widespread uncertainty about how current societal structures may be transformed by the advent of AI. And since there is no clear, well-defined understanding of the future consequences of such recent tools, it becomes even harder to decide where to set constraints.

As these cases show, decisions regarding AI’s limits can swing to diametrically opposite conclusions in less than 96 hours, or can take more than four months to reach a tentative agreement. The undeniable truth is that AI is permeating the most diverse professional fields, leading to arduous debates and rifts due to its controversial effects.

OpenAI and SAG-AFTRA don’t point us to a definite answer on the impacts of emerging technology. But these cases do highlight crucial questions about AI’s ramifications for power disputes, the creative industry, and the preservation — or not — of lower-level jobs. If anything, they show us that a seemingly distant technology is already affecting people’s lives in a very practical way — and, if it hasn’t yet, it will likely influence our own soon.

LAURA WAGNER is a sophomore in Benjamin Franklin College. Contact her at laura.wagner@yale.edu.