Yale Daily News

The launch of DeepSeek, an open-source artificial intelligence model created in China, has the potential to impact how researchers and students think about AI. 

Although DeepSeek might not cause any revelatory changes in how AI is used in daily life, it could have deeper, longer-term implications, including increased AI competition and a shift in how people approach open-source AI. According to researchers, despite some security concerns, DeepSeek will likely have a positive impact on how society uses AI because it is open-source and easily accessible.

“I think we’ll see even more rapid development and diffusion of these models,” said Karman Lucero, an associate research scholar and fellow of the Paul Tsai China Center at the Law School.

Lucero believes that DeepSeek will become another tool that students, researchers and AI-inclined users can use alongside pre-existing models.

He finds DeepSeek particularly helpful for sorting through massive amounts of information but admits that other models, such as ChatGPT, are beneficial in other ways. 

“You can play with it a lot more and see how [the AI is] working more easily,” Lucero said, when asked about his experiences with DeepSeek. “You can modify it to whatever tasks you want to work on.”

Furthermore, it is becoming easier to download AI models onto personal devices, as the models themselves take up less storage, according to Kyle Jensen, director of entrepreneurship at the School of Management.

There are all sorts of models that specialize in certain fields, such as writing, coding or math, Jensen explained. As they become easier to access, he suggested, people could become more reliant on AI, even for personal, emotional or advice-related questions.

“So, the manner in which we interact with our data and with our computers will start to be more natural because of the ubiquity of these models, which are smaller and faster now,” Jensen said.

K. Sudhir, professor of private enterprise and management at the School of Management, believes that the creation of DeepSeek has demonstrated the increasing accessibility of open-source AI. However, DeepSeek used some questionable methods to build its service, Sudhir said. Its creators distilled much of the methodology that OpenAI used and then, by making DeepSeek open-source, published much of that information, potentially exposing methodology that OpenAI may not have wanted public.

DeepSeek did not respond to a request for comment.

However, it is unclear whether DeepSeek will become as popular as other AI models.

“Whether [DeepSeek] will become a real competitor and what the long-term end game is for this is not entirely clear,” Sudhir said. “I imagine OpenAI has some tricks up their sleeve, as well. But fundamentally, what is shown is that there are external competitors who are going to be there.”

Sudhir believes that in the long run, AI development may not require billions of dollars, making the market open to more competitors. Ultimately, he thinks that the increasing accessibility of AI will have a positive impact on society. 

With people around the world working to advance AI, there is an element of luck in who develops the next model like DeepSeek. To Sudhir, it was just as likely that someone in the U.S., rather than a Chinese company, would have developed an equivalent model.

“If you have a technology whose costs go down, people are going to find new, innovative uses for it,” Sudhir said. “The fact that it is open source, which makes it more transparent, is good, because now any changes or variations that you’re creating on the model also become transparent.”

While life and research at Yale are unlikely to be directly impacted by DeepSeek’s developments, Sudhir believes that Yale researchers will appreciate the increased access to open-source models.

This means that Yale researchers can more easily use AI for internal development without sending their data to OpenAI or other companies with closed-source models, keeping their information more private.

That being said, DeepSeek raises some security concerns, according to Lucero. 

“There was an article in Wired that came out late last week that showed that DeepSeek models essentially failed every safety test that folks tried with it,” Lucero said. “And so when it comes to protecting information or protecting data or protecting privacy, there are concerns about a lack of safeguards in the model and code.”

The article Lucero referred to was published in Wired on Jan. 31. Users shouldn’t completely disregard DeepSeek, Lucero said, but if their work involves data security or privacy protections, they might treat the model with caution.

Yale’s AI platform Clarity allows students, faculty and staff to interact with University-approved AI.

ANYA GEIST
Anya Geist covers Science and Society for the News and is a staff writer for the WKND. Originally from Worcester, MA, she is a first-year in Silliman College and studies history.