This February, the Association for the Advancement of Artificial Intelligence, or AAAI, held its annual conference in Vancouver, Canada. Founded in 1979, the AAAI is a top scientific society devoted to advancing the understanding of intelligent behavior in machines. The conference has emerged as a critical destination for the exchange of ideas among researchers, practitioners, scientists, students and engineers.

Faculty members and doctoral students from the School of Computing and Augmented Intelligence, or SCAI, part of the Ira A. Fulton Schools of Engineering at Arizona State University, were out in full force.

The conference accepted fifteen papers by SCAI faculty members, including four by Assistant Professor Hua Wei. Faculty work addressed a broad range of artificial intelligence, or AI, topics, with contributions related to robotics, AI-generated imagery, Bayesian computation and large language models.

Professor Subbarao Kambhampati led a 3.5-hour tutorial titled “The Role of Large Language Models (LLMs) in Planning.” Kambhampati has a long, established relationship with the AAAI, having served as a past president of the organization. He has also been an AAAI fellow for more than twenty years.

His talk took a critical look at the part LLMs, such as OpenAI’s GPT models, might play in tasks normally considered planning, and was part of his extensive and ongoing outreach on the potential societal impacts of artificial intelligence.

Kambhampati reiterated that LLMs can’t plan autonomously, that is, without significant assistance from a user, and got laughs when he joked, “I don’t have to write a paper saying that LLMs can’t fly.” He noted that AI success in generating predictive text, where the computer supplies the next logical word to complete a sentence or thought, has lent itself to the perception that these tools might be capable of abstract reasoning. But these systems are especially bad at key parts of the planning process, like verifying and self-critiquing potential solutions.

However, Kambhampati did discuss important ways in which these kinds of tools might be used in assistive modes, where they could help people formulate better plans. LLMs could, for example, take in large sets of data and generate potential plans to be reviewed by users. This activity could speed up the planning process and help people make good decisions more quickly.
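To make that division of labor concrete, the assistive workflow Kambhampati described can be thought of as a generate-and-verify loop: the language model drafts candidate plans, and an external checker or human reviewer accepts them or sends back critiques. The Python below is a minimal, hypothetical sketch of that loop only; propose_plan and verify_plan are illustrative stand-ins for a real LLM call and a sound verifier, neither of which the article specifies.

```python
# Hypothetical sketch of an assistive generate-and-verify loop.
# propose_plan and verify_plan are stand-ins, not a real API.

from typing import Optional

def propose_plan(goal: str, feedback: Optional[str] = None) -> str:
    """Stand-in for a call to a large language model."""
    hint = f" (revised to address: {feedback})" if feedback else ""
    return f"draft plan for '{goal}'{hint}"

def verify_plan(plan: str) -> tuple[bool, str]:
    """Stand-in for an external verifier or human reviewer."""
    ok = "revised" in plan  # toy acceptance criterion for the demo
    return ok, ("" if ok else "step 2 violates a precondition")

def assistive_planning(goal: str, max_rounds: int = 3) -> Optional[str]:
    feedback = None
    for _ in range(max_rounds):
        candidate = propose_plan(goal, feedback)
        ok, critique = verify_plan(candidate)
        if ok:
            return candidate  # verified plan goes to the user
        feedback = critique   # critique steers the next draft
    return None               # no verified plan; defer to a human

print(assistive_planning("restock the warehouse"))
```

The key design point, in keeping with Kambhampati’s argument, is that the LLM never certifies its own output: acceptance comes only from the verifier outside the model.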

Meanwhile, SCAI doctoral students showcased their efforts in various forums.

Associate Professor Yezhou Yang and students at AAAI

Maitreya Patel, a doctoral student of Associate Professor Yezhou Yang, presented the paper “CONCEPTBED: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models,” authored by Yang’s Active Perception Group research team. The paper introduced the first large-scale concept-learning dataset that facilitates precise and accurate evaluations of personalized text-to-image generative models. Fellow researcher Changhoon Kim discussed the team’s work at the AAAI Doctoral Consortium.

Garima Agrawal presents her paper on stage at AAAI

Garima Agrawal, a doctoral student of Professor Huan Liu, presented a paper she co-authored with Kuntal Pal, Yuli Deng and Ying-Chih Chen titled “CyberQ: Generating Questions and Answers for Cybersecurity Education Using Knowledge Graph-Augmented LLMs.” The paper proposed CyberGen, a novel unification of large language models and knowledge graphs that automatically generates questions and answers for cybersecurity education.

Agrawal appreciated the opportunity to discuss her work.

“I was a bit nervous and apprehensive about presenting at one of the most prestigious and largest AI conferences. But people appreciated our work and saw it as a promising future direction,” she says. “AAAI was an invaluable platform for collaborating and networking with worldwide researchers, opening avenues for new research ideas.”

Ross Maciejewski, director of the School of Computing and Augmented Intelligence, was pleased by the turnout.

“This strong showing from our faculty underscores the important role the school is playing in the development of artificial intelligence,” he says. “It was also a great opportunity for our doctoral students to network and gain experience.”

For Kambhampati, who is also a fellow of the American Association for the Advancement of Science and the Association for Computing Machinery, the work goes on. Upon returning from the conference, he was tapped to brief Arizona Supreme Court Chief Justice Robert M. Brutinel and the court’s Steering Committee on AI, giving a presentation titled “AI, Law & The Courts.”

Kambhampati, who has emerged as a national thought leader on AI topics, told the court that the advent of deepfakes, highly convincing AI-generated forgeries of images and voices, could have significant ramifications for evidence admissibility and eyewitness testimony.

“The court seemed to appreciate the talk,” Kambhampati says.

Professor Subbarao Kambhampati and Chief Justice Robert M. Brutinel pose for a photo

Chief Justice Robert M. Brutinel stopped for a photo opportunity. Because AI image generators often struggle to render hands accurately, producing comical results with too many fingers or fingers of the wrong shape or size, Kambhampati suggested that they pose with their hands out.

“He helpfully put his hand out in front of him so his fingers can be counted. So, people can verify that it is not an AI-generated deep fake,” Kambhampati says with a laugh.