On Friday, Yale professors from several departments hosted a workshop on artificial intelligence at 150 York St., engaging students and faculty in conversations about the social and ethical implications of AI.

The conference, which drew about 130 attendees, approached the topic from a multidisciplinary angle, connecting AI to fields ranging from philosophy, management and sociology to computer science, computational biology and data science. This multifaceted approach enabled the workshop to draw a wide variety of participants, including individuals working in business, psychology, philosophy, sociology and law.

Questions and direct conversation among audience members and faculty followed the panel discussions. The conversations combined holistic insights on the impact of machine intelligence with more technically focused analysis of the current state of AI, its relevant applications and its future potential.

“AI algorithms are used to filter what shows up on your social media feeds — be it Twitter, Facebook or Instagram. What’s problematic is that this content influences your opinions, quite dramatically,” said Nisheeth Vishnoi, professor of computer science and one of the principal organizers of the workshop.

According to Vishnoi, governments around the world must develop more AI-related laws and regulations driven by an understanding of ethics and the underlying algorithms. The workshop aimed to bring this ongoing dialogue at the intersection of AI and ethics to the Yale community.

Several speakers discussed the increasing accessibility of machine learning and computing techniques over the past two decades, noting that it is much easier now than it was twenty years ago for people to collect data, contribute to open-source machine learning platforms and start online businesses.

In light of the increased accessibility and influence of machine learning, the first set of conversations discussed biases introduced by data collection and data mining. The workshop also considered potential ways to address bias inherent both in the data and in the algorithms that derive insights from that data.

“Compared to biases in humans, biases in AI are not as obvious,” said Tristan Botelho, professor of organizational behavior at the School of Management.

It is easier to address bias at the machine level, by targeting the algorithm, than at the individual or societal level, since most individual biases form subconsciously, according to Balázs Kovács, assistant professor of organizational behavior at the School of Management.

Because many algorithms focus on finding a pattern in the majority of the population, it can be challenging to discover ways to capture minority populations not well-represented in the data, according to Elisa Celis, professor of statistics and data science.

Data scientists and AI engineers are developing new approaches that merge computer science and statistics to simultaneously keep diversity intact and analyze macro-level patterns. Interdisciplinary conversations in the humanities and social sciences are also important when thinking about the diversity of the surveyed population, Celis added.
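The majority-pattern problem Celis described can be illustrated with a small, hypothetical sketch; the data, group sizes and the simple "always predict the dominant label" rule below are invented for illustration and are not from the workshop. The point is only that a model judged by overall accuracy can look strong while serving an under-represented group poorly.

```python
# Hypothetical illustration of the majority-pattern problem: overall accuracy
# can look respectable even when an under-represented group is served poorly.
# The data, group split and "model" here are all made up.
import numpy as np

rng = np.random.default_rng(0)

n_majority, n_minority = 950, 50          # 95% / 5% population split
# True labels: the majority group mostly has label 1, the minority mostly label 0.
y_majority = rng.choice([0, 1], size=n_majority, p=[0.1, 0.9])
y_minority = rng.choice([0, 1], size=n_minority, p=[0.9, 0.1])

# A naive "model" that learned only the dominant pattern: always predict 1.
pred_majority = np.ones(n_majority, dtype=int)
pred_minority = np.ones(n_minority, dtype=int)

overall_acc = np.mean(np.concatenate([pred_majority == y_majority,
                                      pred_minority == y_minority]))
minority_acc = np.mean(pred_minority == y_minority)

print(f"Overall accuracy:        {overall_acc:.2f}")   # looks high
print(f"Minority-group accuracy: {minority_acc:.2f}")  # close to 0.10
```

Approaches of the kind Celis described generally add constraints or reweighting so that performance is balanced across groups rather than dominated by the largest one.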

Shelly Kagan, professor of philosophy, discussed the challenge of allocating decision-making responsibilities to autonomous machines due to their lack of moral or ethical agency, unlike human beings.

“The idea of bringing a new autonomous entity capable of decision making into the world is hardly a new thing. It’s like having children, for example, who grow into autonomous beings — adults who make their own decisions,” Kagan said. “Except that in children, we are somewhat more comfortable with this idea because of the moral and ethical instruction that guides their decision-making.”

Workshop participants also emphasized the difficulty of assigning accountability in an artificial intelligence system. The panel discussed how hard it can be to hold anyone accountable for legal or ethical violations committed by such systems, since decisions are made not by a person but by an intelligent, self-learning and self-governing machine.

To minimize these vulnerabilities and anticipate failures, Zhong Shao, professor of computer science, discussed the possibility of certifying AI systems, with software verification applied at each level of complexity of an AI's tasks. In a self-driving car, for example, engineers would try to verify that the car cannot get into an accident by certifying each layer of the system's tasks, according to Shao.
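The layered-verification idea can be loosely sketched in code. The following is a hypothetical illustration only; the function names, numbers and the stopping-distance invariant are assumptions made for the sketch, not anything presented at the workshop.

```python
# Hypothetical sketch of a layered safety check: a low-level safety layer
# enforces a stopping-distance invariant no matter what a higher-level
# planner proposes. All values and function names are invented.

def stopping_distance(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to brake to a stop at a fixed deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)

def safe_speed_command(planned_speed_mps: float, gap_to_obstacle_m: float) -> float:
    """Clamp the planner's requested speed so the car can always stop in time."""
    speed = planned_speed_mps
    while speed > 0 and stopping_distance(speed) > gap_to_obstacle_m:
        speed -= 0.5  # back off until the invariant holds
    speed = max(speed, 0.0)
    # The invariant a verifier would try to prove for every reachable state:
    assert stopping_distance(speed) <= gap_to_obstacle_m
    return speed

# The planner asks for 25 m/s with only 30 m of clear road ahead;
# the safety layer clamps the command to a speed that can stop in time.
print(safe_speed_command(25.0, 30.0))
```

In a certified system of the kind Shao described, such properties would be proved formally for each layer rather than merely checked at runtime.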

“The future of AI is what we call a heterogeneous system,” Shao said. “We will see a wide range of complex hardware and software components, and the solution is rigorous system design.”

Vishnoi plans to organize a second iteration of this workshop sometime in the fall.

Viola Lee | viola.lee@yale.edu

Ishana Aggarwal | ishana.aggarwal@yale.edu

Correction, April 8: A previous version of this story stated that 20 people attended the workshop. In fact, about 130 people attended the workshop.
