Move over, Ken Jennings. Watson’s coming through.

Last night, about 40 students and game show enthusiasts attended a viewing of “Jeopardy!” sponsored by Yale’s Computer Science Department in Linsly-Chittenden Hall for the first part of the highly anticipated IBM Challenge, which pitted past champions Ken Jennings and Brad Rutter against a supercomputer named Watson.

Watson, an artificial intelligence program designed by IBM to answer questions about history, science, literature, pop culture, sports and art posed in normal language, held its ground against the seasoned competitors for the first half of the match, but by the end was tied with Rutter at $5,000. Professors and students who watched the episode said during a discussion panel held after the show that they were impressed by how well Watson performed overall, but were surprised at some of the simple mistakes the computer made during the match.

“One common misunderstanding is that Watson is connected to the Internet. He’s not,” said Sam Spaulding ’13, a computer science major who competed in the Jeopardy! College Championship in November 2010. “He’s just packed with dictionaries, novels, atlases, maps, newspapers and other sources of information that give him access to the right answers.”

The department decided to hold the screening both to illustrate the current state of artificial intelligence and to watch Watson’s debut together.

Spaulding explained that Watson first analyzes each clue to determine what kind of answer it calls for. In computer science, he said, this is called discerning the lexical answer type: the category of thing (a person, a place, a film) that the correct answer should be. Since “Jeopardy!” clues are often framed in terms of categories such as “this president,” “this state” or “this movie,” he said, Watson can compare possible answers within the matching field after parsing the reference material stored on its servers.
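The idea Spaulding describes can be sketched in a few lines of code. The example below is a toy illustration only, not IBM’s DeepQA pipeline: it pulls a category phrase like “this president” out of a clue and uses it to filter a small, hypothetical set of candidate answers.

```python
import re

# Toy "lexical answer type" detection: look for a phrase like
# "this president" in the clue and use it to filter candidates.
# (A simplified sketch; Watson's actual pipeline is far more complex.)
LAT_PATTERN = re.compile(r"\bthis (\w+)\b", re.IGNORECASE)

# Hypothetical mini knowledge base mapping candidate answers to types.
KNOWLEDGE = {
    "Abraham Lincoln": "president",
    "Ohio": "state",
    "Casablanca": "movie",
}

def lexical_answer_type(clue):
    """Return the answer type named in the clue, e.g. 'president'."""
    match = LAT_PATTERN.search(clue)
    return match.group(1).lower() if match else None

def candidates_of_type(lat):
    """Keep only candidates whose type matches the lexical answer type."""
    return [name for name, kind in KNOWLEDGE.items() if kind == lat]

clue = "This president delivered the Gettysburg Address."
lat = lexical_answer_type(clue)   # 'president'
print(candidates_of_type(lat))    # ['Abraham Lincoln']
```

In the real system, of course, the candidate pool comes from the terabytes of text Watson has ingested, and typing an answer is itself a hard statistical problem rather than a dictionary lookup.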

“‘Jeopardy!’ … provides the ultimate challenge because the game’s clues involve analyzing subtle meaning, irony, riddles and other complexities in which humans excel and computers traditionally do not,” Spaulding said. “Unlike a game of chess, ‘Jeopardy!’ is not so easily formally defined; players constantly rely on rapid recall, learning experiences, memory and trigger-finger responses.”

Audience members applauded enthusiastically at Watson’s first appearance on screen, even though the computer could neither see nor hear them. As game host Alex Trebek explained during a break, what viewers were actually watching was an avatar representing Watson’s main servers, which were hidden backstage.

The real Watson, he said, was too big and too heavy to fit onstage: its 300 processing cores and memory capacity of 15 trillion bytes are cooled by two refrigeration units to prevent overheating.

Like Jennings and Rutter, Watson was required to buzz in to give an answer; it received each clue electronically as a text file at the same moment Trebek read it aloud to the two human players. Watson’s avatar, represented by IBM’s Smarter Planet icon, changed colors depending on its confidence in the possible answers it generated. Viewers could follow Watson’s “thought process” at the bottom of their television screens, which displayed its top candidate answers and its confidence in each one.

“It’s like watching a computer sweat,” Trebek said, “except Watson knows what it knows and what it doesn’t know.”

Still, Watson made a few mistakes.

“Stylish elegance, or students who all graduated in the same year,” one clue read. “What is ‘chic’?” Watson replied, but the correct answer, provided by Rutter, was “class.”

After the viewing of “Jeopardy!,” members of the Computer Science Department took questions from the audience, mostly about how Watson could answer incorrectly and about the technology behind it. Professors interviewed said part of the problem stemmed from clues that pointed Watson toward the intersection of multiple word meanings.

“There’s a lot of refinement of the question that has to be done to figure out exactly what kind of answers are appropriate,” Drew McDermott, computer science professor and director of the Center for Computational Vision and Control, said. “Many are very subtle, semantic categories. There’s some … categorization that Watson’s algorithms have to figure out, and that takes a lot of levels of processing.”

Among Watson’s weaknesses, Spaulding mentioned the computer’s failure to learn over the course of the match. At one point, for example, Ken Jennings incorrectly answered a question about 1920s America, a mistake that Watson repeated immediately afterward.

Other members of the audience said they were surprised at Watson’s overall rate of success.

“I thought it was fascinating to see the connection between computer science and decision-making,” Rasheq Rahman SOM ’11 said. “I was impressed by how fast Watson answered Trebek’s questions because it seemed like it got the answers right away.”

But McDermott stressed that plenty of research remains to be done in the field of artificial intelligence.

“I’m perfectly pleased that computers like Watson have the amount of power that they do,” he said. “But a lot of things still need to happen to fulfill A.I.’s dreams. Robots won’t be taking over the world anytime soon.”

Watson was named after IBM’s founder, Thomas J. Watson, who oversaw the company’s growth from 1914 to 1956. There will be two more “Jeopardy!” matches with Watson, tonight and on Wednesday evening.