The Yale Cyber Leadership Forum held its final session of the year last Friday with two panel discussions on AI ethics.

This year’s forum, titled “Bridging the Divide: National Security Implications of Artificial Intelligence,” is a collaboration between the Jackson Institute for Global Affairs and Yale Law School, with the goal of bridging the gap between law, policy and technology. Friday’s event was the last of three sessions the forum held this school year.

“The series of events that I’ve seen at Yale and that I’ve been able to be a part of, for me, are really a sign of how the field is changing, how computer science is opening itself up to that sort of cross-disciplinary dialogue,” Brian Christian, an author who was one of the panelists, told the News. “Increasingly, we’re seeing the policy folks, the philosophers, the legal theorists in the same room as the machine learning experts […] and so for me, what I take away is that this is a team effort, and it’s going to require some collaboration across those traditional disciplinary lines and I think that’s what we’re starting to see.”

The first panel featured Elena Kvochko, chief trust officer of enterprise software giant SAP; Andi Peng ’18, a doctoral student at the Massachusetts Institute of Technology; and Dragomir Radev, a professor of computer science at Yale. Ted Wittenstein, executive director of International Security Studies, moderated the panel.

The discussion focused on technical safety research and trust in AI. Radev opened the talk with a presentation on the limits of natural language processing — the ability of an AI system to learn and understand human language. He illustrated this with the example of an AI designed to answer ethical questions, known as Ask Delphi, that was trained on the social media site Reddit and began to give racist, homophobic, sexist and illogical responses to ethical queries. Radev also pointed out that a significant amount of AI and machine learning research is conducted only in English, which can lead to a language bias in its results.

Peng discussed the risks and limitations of physically deploying robots in the real world and the difficulty of communicating the desired effect to a machine when designing its systems. In response to a question from the audience, she described how one of the earliest examples of a cleaning robot was trained to maximize the amount of dust it could vacuum. When placed in a house, the robot would vacuum up dust, dump it back out and then vacuum it up again, because doing so maximized the amount of dust it was able to collect. This problem of communication is only exacerbated, she said, when robots then interact with data that might be unclean or biased.

“I think a key question we as a field are grappling with right now is the question of alignment, which is, how do we align the intended objectives of our users with those of the actual systems that we have trained?” Peng said at the panel.
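The vacuuming anecdote is a classic case of reward misspecification, in which a system optimizes the objective it was actually given rather than the one its designers intended. The sketch below is illustrative only — it is not code discussed at the panel — and simply shows how a toy “dust collected” reward ends up favoring a dump-and-revacuum loop over cleaning once.

```python
# Illustrative sketch only (not from the panel): a toy version of the
# misspecified objective Peng described. Rewarding "dust collected" instead of
# "dust left on the floor" makes a dump-and-revacuum loop look optimal.

HOUSE_DUST = 10  # units of dust on the floor at the start

def run_policy(policy: str, steps: int = 5):
    """Simulate a vacuum policy; return (dust_collected, dust_left_on_floor)."""
    floor = HOUSE_DUST
    collected = 0
    for _ in range(steps):
        if policy == "dump_and_revacuum":
            floor = HOUSE_DUST  # dump the bag back out before vacuuming again
        collected += floor      # vacuum everything currently on the floor
        floor = 0
    return collected, floor

for policy in ("clean_once", "dump_and_revacuum"):
    collected, left = run_policy(policy)
    # Misspecified reward: dust collected. Intended objective: a clean floor
    # without pointless extra work.
    print(f"{policy:18s} reward(collected)={collected:3d}  dust_left={left}")
```

Under the “dust collected” reward, the pointless loop scores five times higher even though the house ends up no cleaner — the gap between intended objectives and trained behavior that Peng called the alignment question.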

The discussion then moved on to trust in the private sector and its ability to fend off cyber threats. The “threat landscape” of today, Kvochko told the audience, poses a number of challenges. She cited figures showing that 36 billion records have been exposed in data breaches, 58 percent of which involved personal data. Additionally, she said, over 90 percent of breaches start with a phishing email, emphasizing how vulnerable many industries and individuals are.

The challenges facing the cybersecurity sector are continuously evolving, with the increasingly remote workforce representing a new attack vulnerability and staffing shortages in cybersecurity leaving insufficient manpower and knowledge to tackle the problem, Kvochko said. She stressed the importance of private companies like SAP fostering trust by avoiding biased training data, allowing customers to opt out of sharing data with third-party vendors and addressing data breaches, among other measures. She also noted the importance of the research community partnering with the business community to answer questions about trust in automated machines.

Peng echoed Kvochko’s sentiment, underscoring the value of being exposed to and working with other industries alongside academia.

“As an academic, I would say, first and foremost, oftentimes, we get super siloed with these little shells of research that we do, that we end up incentivizing our own work along the lines of what our community values, and oftentimes we forget what matters in the world and what doesn’t,” Peng said to the panel. “… Over time, you realize that there actually is a common thread that unites a lot of these things.”

The second panel featured Christian and Scott Shapiro, a professor of law and philosophy at Yale. It was moderated by Oona Hathaway, professor of international law and director of the Yale Law School Center for Global Legal Challenges.

The second panel centered on developing AI ethics and norms. Christian began the conversation by describing the “unusual position” he was in as a human subject at a Turing Test competition in 2009 — the annual competition in which different computer programs are tested against human control subjects to see which is hardest to distinguish from a person. Christian amusedly recalled that there was an award for the most convincing computer program as well as a “most human human” award.

“I think it’s pretty fair to say that with the rise of large language models […] we can consider the Turing Test kind of in the rearview mirror,” Christian said. “I don’t think that Alan Turing would have been able to imagine that in the year 2022, Turing Tests are essentially like one of the annoying chores of being a person who’s on the internet.”

Christian’s most recent book, “The Alignment Problem,” looks at the gap between the objectives humans actually operationalize in a system and what they had hoped the system would do. He argued that the alignment problem is one that gets “worse rather than better” as models become more powerful. If a program like GitHub Copilot, a product from Microsoft-owned GitHub that uses large language models to autocomplete a user’s code, is presented with buggy code, it tends to autocomplete the code with even more bugs.

Christian brought up the application of artificial intelligence in criminal justice, where statistics like re-arrest predictions rely on very simple models with a handful of parameters. He cited a model by Cynthia Rudin, a computer scientist at Duke University, that “fits into one English sentence”: it predicts recidivism for men under 20, anyone under 23 with two prior arrests, or anyone with three or more prior arrests. This model reportedly rivals the accuracy of COMPAS, the controversial proprietary, closed-source system used by many states.
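The “one English sentence” framing can be made concrete in a few lines of code. The sketch below is purely illustrative: it encodes the thresholds as paraphrased in this article, and Rudin’s published rule lists may use different cutoffs and features.

```python
# Illustrative sketch only: the one-sentence risk rule as paraphrased above.
# These cutoffs follow the article's description; Rudin's published models
# may differ in their exact thresholds and features.

def predicts_rearrest(age: int, prior_arrests: int, is_male: bool) -> bool:
    """Return True if the simple rule flags the person as likely to be rearrested."""
    if is_male and age < 20:
        return True
    if age < 23 and prior_arrests >= 2:
        return True
    return prior_arrests >= 3

# A 21-year-old with two prior arrests is flagged; a 30-year-old with one is not.
print(predicts_rearrest(age=21, prior_arrests=2, is_male=False))  # True
print(predicts_rearrest(age=30, prior_arrests=1, is_male=True))   # False
```

Because the whole model is a handful of human-readable conditions, anyone affected by it can see exactly why they were flagged — the transparency Christian contrasted with closed systems like COMPAS.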

Because governments increasingly mandate the use of these tools, he said, they become in effect extensions of the law. Yet not all data from these AI systems are made publicly available. There have been cases, Christian said, in which defendants have been unable to access the data fed into the model.

Shapiro, who described himself as a “philosopher [who] walks into an AI Lab,” considered the implications of using artificial intelligence in the courtroom. He cited and repudiated Chief Justice John Roberts’ advocacy for the development of an AI system that would give ‘better’ decisions in legal cases. A reliance on AI to make data-driven decisions about sentencing, as opposed to using sentencing guidelines, is simply an “unnecessary extension of the law,” Shapiro argued. He suggested that such talk only returns to the same questions about how much discretion to give to decision-makers — or, in the case of AI, decision-making tools — since with AI as in the law, there will always be “a human in the loop” to make the final call.

“One of the things that I’m really struck by, and I think back on the history of AI and machine learning, is that the field was really launched with a kind of interdisciplinary spirit,” Christian said. “I go back to the mid-1950s, there was a series of events called the Macy Conferences specifically that brought together, you know, cybernetics experts and neurologists, mathematicians, but also anthropologists, and psychiatrists and it seems to me like we are just now starting to recover some of that truly interdisciplinary spirit as we appreciate the intrinsically cross-disciplinary nature of these problems.”

This year’s forum was the fifth since the program’s start.

Correction, April 6: A previous version of this article mislabelled the Macy Conferences. The article has been updated to reflect this.

MIRANDA JEYARETNAM
Miranda Jeyaretnam is the University desk editor. She previously covered the Jackson Institute for Global Affairs and developments at the National University of Singapore and Yale-NUS. Formerly the opinion editor under the YDN Board of 2022, she co-founded the News' Editorial Board and wrote for her opinion column 'Crossing the Aisle' in 2019-20. From Singapore, she is a junior in Pierson College, majoring in English.