What are the consequences if machines are intelligent but have no identities? And how can machines, and their creators, be held accountable for their effects on human society?

This is part of the work being explored by Yi “Max” Ren, an assistant professor of aerospace and mechanical engineering in the Ira A. Fulton Schools of Engineering at Arizona State University. Ren and a team of ASU researchers were recently awarded a National Science Foundation grant to study the attribution and secure training of generative models.

Generative models capture the distributions of high-dimensional, real-world data, such as the cat images that dominate the web, human speech, driving behavior and material microstructures, among many other types of content. In doing so, the models gain the ability to synthesize new content similar to the authentic content.
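The idea can be sketched with a minimal toy example, assuming plain NumPy and a single one-dimensional Gaussian standing in for a real generative model: "training" estimates the data distribution, and "generation" draws new samples from it.

```python
import numpy as np

# Toy generative modeling: estimate the distribution of some data,
# then sample new, similar data from the fitted distribution.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # stand-in for real content

# "Training": capture the data distribution with two statistics.
mu, sigma = data.mean(), data.std()

# "Generation": synthesize new content from the learned distribution.
samples = rng.normal(loc=mu, scale=sigma, size=1_000)

print(mu, sigma)  # fitted parameters land near the true 5.0 and 2.0
```

Real generative models replace the two-parameter Gaussian with neural networks holding millions of parameters, but the pattern is the same: fit a distribution, then sample from it.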

Read more on Full Circle