Fashion is moving faster than ever. A designer wants to see how a jacket behaves in leather, wool and silk without cutting three prototypes. A game developer wants digital clothing that folds, stretches and swings like the real thing. An online shopper stares at a sweater on her phone, wondering how it will drape on her body instead of a model’s.

In all three cases, the problem is the same: Texture doesn’t translate. And solving that problem is where Jesus Aguilar is focusing his research.

Aguilar is a computer science undergraduate student in the School of Computing and Augmented Intelligence, part of the Ira A. Fulton Schools of Engineering at Arizona State University. He works at an unusual intersection of computer vision, artificial intelligence, or AI, and fashion. His goal is to create AI-powered algorithms that understand not just how clothing looks, but how it behaves.

Screens are excellent at showing silhouette and color. But they are far less convincing when it comes to weight, stiffness, drape and surface detail — the tactile qualities that determine whether a garment feels luxurious or flimsy, structured or fluid. Today, many garment-digitization tools stop at shape.

“Right now, the AI systems I tested can convert images into 3D models of garments,” Aguilar says. “But if we talk about textures, it’s not something that’s covered and that’s what we wanted to work on.”

That limitation has real consequences. For designers, it means physical prototyping, cutting, sewing and discarding samples to test fabrics. For consumers, it fuels uncertainty, returns and waste. And for digital creators in gaming and entertainment, it results in clothing that looks right until it moves.

Aguilar’s project centers on testing and extending a model called ChatGarment, which uses computer vision to convert images — and in some cases text or video — into 3D digital garments. The tool performs well on simple designs like dresses and gowns, which dominate its original training data. However, Aguilar pushed it further, deliberately feeding it more complex garments with layered construction and intricate details.

“I mainly tested how this model would perform on garments outside of the data set they used originally,” he says. “For more complex details, it wasn’t well-trained to detect different patterns.”

As part of the Fulton Forge Student Research Expo, Aguilar worked under the supervision of Pavan Turaga, director of The GAME School and electrical engineering professor at ASU, to assemble a dataset of roughly 50 textures, photographing fabric types that challenge current models. The goal was not just to improve visuals, but to teach systems to recognize fabric properties that influence how clothing moves and feels.

Read the full story on Engineering News.