Yasmine Halmane, Photo Editor

The Yale Cyber Leadership Forum opened last Friday with its first session discussing “Big Data, Data Privacy, and AI Governance.”

The session, which was hosted in person for members of the Yale community and made available to the public on Zoom, was the first of three sessions aimed at connecting law, technology, policy and business approaches to cybersecurity. The theme of this year’s forum is “Bridging the Divide: National Security Implications of Artificial Intelligence.” The session consisted of two panel discussions, both of which were moderated by Oona Hathaway, the director of the Yale Law School Center for Global Legal Challenges, and featured panelists from the Yale community.

The panelists examined data protection and the evolving dynamics of privacy, connecting them to national security, AI governance and the expanding digital realm. Data privacy emerged as a central concern, with speakers stressing the need to safeguard digital footprints at a moment when the boundaries between security and privacy continue to blur.


“Every year we try to innovate,” Hathaway said. “We aim to chart new ground in each set of discussions. This time we decided to really focus on the Yale community. […] We saw this as an opportunity at this key moment to try to build bridges across departments and programs.”

Hathaway explained that the choice to draw panelists from the Yale community was partially driven by the pandemic’s travel restrictions, but also because so many faculty and students at Yale are working on relevant issues.

Friday’s session featured computer science professors Joan Feigenbaum and Nisheeth Vishnoi, the latter a co-founder of the Computation and Society Initiative. It focused on the risks and opportunities of AI, particularly for privacy and surveillance.

The panel opened with a discussion of the current state of AI and machine learning, in particular focusing on the implications of facial and voice recognition technology. Vishnoi described AI and machine learning as having made “tremendous progress in the last decade,” but went on to discuss the failings of AI in terms of algorithmic biases.

Adversarial examples have shown that current machine learning technology is brittle to such attacks, Vishnoi said, citing the possibility that an autonomous vehicle’s algorithm could fail to recognize a stop sign with some “very small perturbation to it that a human eye will not be able to detect.”
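
To make the idea concrete, the sketch below, which is not from the panel, illustrates the adversarial-example phenomenon Vishnoi described: a change to an input far too small for a person to notice can flip a model’s prediction. The toy linear classifier, its weights and the labels are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=100)            # toy flattened "image"
w = rng.normal(size=100)            # weights of a toy linear classifier
b = 0.5 - float(x @ w)              # bias chosen so x sits just inside the "stop sign" class

def predict(v):
    # Classify an input with the toy linear model.
    return "stop sign" if v @ w + b > 0 else "something else"

# Shifting each component by epsilon against sign(w) lowers the score by
# epsilon * sum(|w|), so an input near the decision boundary can be flipped
# by a per-component change that is tiny relative to the input's scale.
score = float(x @ w + b)
epsilon = 1.1 * abs(score) / float(np.abs(w).sum())
x_adv = x - epsilon * np.sign(w)

print(predict(x), "->", predict(x_adv))   # the prediction flips
print("max per-component change:", epsilon)
```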

Vishnoi suggested that participatory design, in which every party that could be affected is involved at every step of the design process, could be a fruitful method for eliminating algorithmic bias.

The discussion then moved to the implications of AI and big data for privacy. Vishnoi raised the difficulty of creating policy that protects people’s privacy, such as applying the “right to be forgotten” — an assurance that individuals can request that their data be deleted — to AI and machine learning.

Feigenbaum, on the other hand, was less concerned about the risks of AI and expressed doubt about the ability of AI and emerging technology such as quantum computers to outperform human intelligence.

“I am a bit of an AI skeptic,” Feigenbaum said. “Skepticism is not rejection […] Often I hear discussions about the effect of AI on society, the effect of AI on even just the technological world […] and really the thing that is affecting us is not really AI it’s just automation, it’s just technology, it’s just computers.”

Feigenbaum also discussed the conflict between policy and technology in her field of cryptography, including how mandating access to encryption for law enforcement would result in less secure encryption systems overall.

The two panelists debated the progress of AI as well as its core definition.

“We need to dispel this myth that […] there is nothing intelligent about artificial intelligence,” Vishnoi said. “To regulate this kind of intelligence, we have to understand this kind of intelligence.”

Feigenbaum responded by arguing that much of the intelligence in the design of algorithmic frameworks is in fact human intelligence.

Hathaway noted that the debate was productive and a good place to start off the forum.

“That’s a matter on which there’s widespread disagreement, and airing that issue was a great place for the Forum to begin,” Hathaway said to the News.

The second panel featured Anat Lior, a fellow at the Yale Law School’s Information Security Project; Nathaniel Raymond, international affairs lecturer; and Wendell Wallach, chair of the technology and ethics research group at the Yale Interdisciplinary Center for Bioethics. The panelists discussed the difficulties of implementing policies for regulating AI, as well as the approach of different countries to creating such policy.

Lior described the differences between the United States and the European Union in terms of policy, pointing out that the European Union is moving towards a “harmonized framework” for regulating AI, while the United States is taking a more fragmented approach.

“Creating regulations obviously creates some sort of constraints — ethical constraints, legal constraints — which will slow down the process, and in that sense, the US, I think, is trying to create some sort of lenient framework for AI innovation to thrive,” Lior said. “The fear of stifling innovation is very big […] The mantra of moving fast and breaking things in the process is very American.”

In light of the cyberattack on the International Committee of the Red Cross in November 2021, Raymond, who works with the Red Cross, spoke about the need for organizations to disclose critical incidents in which humanitarian data has been breached, intercepted or handled negligently, a disclosure the Red Cross made within four days of discovering the breach.

“This really is not a technological story as much as it is a story of an absence of norms,” Raymond said. “While the State Department spokesman Ned Price called for accountability in the case of the Red Cross hack, there has been no unified international statement of condemnation that humanitarian data is equal to a humanitarian facility, a humanitarian vehicle […] it highlights, in big yellow highlighter, the gaping hole in international doctrine about, simply put, humanitarian cyberspace.”

Wallach focused on the challenges of international cooperation in handling emerging technology. He described the different approaches of the US, China and the EU in dealing with these “largely ungoverned spaces.”

Within the US, Wallach said, there is a cult of innovation and a belief that adding restraints on technologies may hamper development. He described climate change and emerging technologies as the “two most destabilizing factors at the moment,” both of which require international cooperation.

“The big questions are being avoided on the national level and we really don’t have any effective mechanisms for international cooperation,” Wallach said. “Who’s really making decisions about enhancement technologies, about the metaverse, and whose interests are being served by those decisions?”

This year’s forum is co-sponsored by the Schmidt Program on Artificial Intelligence, Emerging Technologies and National Power. The program, first announced in December 2021, is a new initiative of International Security Studies and was made possible by a $15.3 million donation from Eric Schmidt, former CEO of Google, and his wife Wendy Schmidt, co-founder of the Schmidt Family Foundation.

Edward Wittenstein, who introduced the event, called it a “great collaboration” between the Jackson Institute and the Yale Law School’s Center for Global Legal Challenges. Wittenstein is the director of International Security Studies and a lecturer at the Jackson Institute.

Hathaway described the forum, which is in its fifth year, as Wittenstein’s “brainchild.”

Students in the audience, many of whom were from the Yale Law School and the Jackson Institute, as well as attendees on Zoom, participated in a question-and-answer session at the end of each panel discussion.

“It was definitely a lot of food for thought, especially since I come from the technical side,” Kelly Zhou ’23, who attended the event, said in an interview. “I think it’s good to appreciate the legal repercussions of certain classifications that are made.”

The forum will continue with its second and third sessions on March 4 and April 1.

MIRANDA JEYARETNAM
Miranda Jeyaretnam is the University desk editor. She previously covered the Jackson Institute for Global Affairs and developments at the National University of Singapore and Yale-NUS. Formerly the opinion editor under the YDN Board of 2022, she co-founded the News' Editorial Board and wrote for her opinion column 'Crossing the Aisle' in 2019-20. From Singapore, she is a junior in Pierson College, majoring in English.