Siddharth Srivastava, an associate professor in the School of Computing and Augmented Intelligence, or SCAI, co-organized the AAAI 2024 Spring Symposium, held March 25–27 at Stanford University in California.

Working with industry partners and the research and development team from Toyota Motor North America, as well as with SCAI doctoral student Pulkit Verma, Srivastava created a program called “User-Aligned Assessment of Adaptive AI Systems.”

It sought to address a key issue: users with widely differing technical abilities are increasingly interacting with artificial intelligence, and the engineers designing new technology must consider how to address the safety concerns of all kinds of laypeople.

Safety in the creation of new artificial intelligence is an often under-discussed area. Speakers and attendees of the symposium considered how AI-enabled devices can maintain compliance with safety standards and protect user privacy.

“We need more research on a new, emerging class of computational problems in AI — how to enable third-party and user-driven assessment of AI systems that can learn and plan,” Srivastava says. “This symposium served to highlight those new aspects of AI research and emerging perspectives for addressing them.”

Srivastava notes that many AI-enabled devices are unique in that they must be programmed without a lot of advance knowledge of the specific ways in which they can or will be used. For example, a robot capable of doing household chores must be able to operate in many different kinds of homes and adapt on the fly as humans make changes to their living environment. Engineers must design artificial intelligence systems that can keep people and property safe without being able to see each home or know exactly what each person will do in their living space.

Invited speakers at the symposium included UC San Diego Professor Kamalika Chaudhuri; Sanjit A. Seshia, the Cadence Founders Chair Professor at the University of California, Berkeley; and AAAI Senior Member Sriraam Natarajan, a professor at the University of Texas at Dallas. The speakers shed light on innovative approaches and offered diverse perspectives on safety problems.

Faculty presenters at the AI conference.

SCAI faculty members and doctoral students discussed four papers at the event:

  • From Srivastava and Georgios Fainekos: “Safety Beyond Verification: The Need for Continual, User-Driven Assessment of AI Systems”
  • From Verma and Srivastava: “User-Aligned Autonomous Capability Assessment of Black-Box AI Systems”
  • From Assistant Professor Nakul Gopalan: “Algorithmic Challenges in Interactive Learning with Users”
  • “Can LLMs translate SATisfactorily? Assessing LLMs in Generating and Interpreting Formal Specifications”

Srivastava’s doctoral students Rushang Karia, Daniel Bramblett and Verma, along with master’s student Daksh Dobhal, also presented the paper, “Can LLMs translate SATisfactorily? Assessing LLMs in Generating and Interpreting Formal Specifications.”

Srivastava was particularly proud of his students’ work, saying, “The paper presented a novel approach to the difficult problem of automatically evaluating the correctness of large language models, or LLMs, in converting natural language to formal syntax and code specifications. Crucially, it achieved this without having to rely upon human annotations and showed that current LLMs are surprisingly inaccurate.”
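The paper itself details the authors’ evaluation method. As a rough, hypothetical illustration of the general idea rather than their exact pipeline, an automated solver can check whether an LLM-generated formula matches a reference specification with no human annotation in the loop. The Python sketch below uses the z3 solver; the formulas and variable names are invented for illustration.

    # Minimal sketch (not the authors' exact method): use a solver to check
    # whether an LLM-generated formula is logically equivalent to a reference
    # specification, so no human-annotated ground truth is needed.
    # Requires: pip install z3-solver
    from z3 import Bools, And, Or, Not, Implies, Solver, unsat

    p, q, r = Bools("p q r")

    # Hypothetical reference specification and a hypothetical LLM translation
    # of the same natural-language requirement back into formal syntax.
    reference = Implies(And(p, q), r)
    llm_output = Or(Not(And(p, q)), r)  # an equivalent rewriting

    def equivalent(f, g):
        """Two Boolean formulas are equivalent iff f != g is unsatisfiable."""
        solver = Solver()
        solver.add(f != g)  # for Booleans, != behaves as exclusive-or
        return solver.check() == unsat

    print(equivalent(reference, llm_output))  # True for this pair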

Gopalan participated in a panel, while Srivastava himself gave a talk called “Safety Beyond Verification: The Need for Continual, User-Driven Assessment of AI Systems.”

A talk at the symposium in progress.

“My talk was intended to serve as a discussion piece highlighting some of the new open problems and promising approaches for addressing them,” he says.

Srivastava’s work in organizing the symposium is part of SCAI’s ongoing commitment to leadership in creating new forms of artificial intelligence and in examining the societal impacts of AI.

He says, “We’re bringing together an excellent team of experts and rising researchers as co-organizers from areas that connect deeply with our topics of interest, including AI and formal methods.”

Learn more about the symposium.