Tech experts say that users produce more than 34 million images per day using artificial intelligence, or AI, tools such as Midjourney and DALL-E 2. The results are often inventive and astonishing.

While people might find making AI-generated art a relaxing, creative outlet, these images come at a cost. Server farms, giant data centers full of computers, are on track to consume more energy each year processing AI art than the entire country of Argentina uses. In 2023, Google used 5.6 billion gallons of water just cooling its servers.

How to make these artistic tools available to those who want to use them, while keeping an eye on sustainability, is a problem that computer science doctoral student Maitreya Patel is keen to solve.

Patel has been working under the supervision of Yezhou “YZ” Yang, an associate professor of computer science and engineering in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University. Yang heads the Active Perception Group, a lab that studies computer vision and image generative AI.

Yang oversees several projects funded by grants from the National Science Foundation dedicated to researching computer visual recognition tools. Some of the novel work being done there seeks to build a system that can create an image, evaluate what it has produced and learn from that comparison. The computer might draw a dog, scan the image, ask itself whether the picture looks like a dog and then update its programming based on the results.

As part of his doctoral research, Patel has created Eclipse, a resource-efficient tool that turns text prompts into images. He made a demonstration website where a user can type in a short description of what they would like to see, and the AI tool will generate a picture.

Get the full story on Full Circle.