Artificial Intelligence: These Animated AI Bots Learned to Dress Themselves—Awkwardly

GIF: Alexander Clegg/GIT/Gizmodo

The ability to put our clothes on each day is something most of us take for granted, but as computer scientists from the Georgia Institute of Technology recently found out, it’s a surprisingly complicated task—even for artificial intelligence.

As any toddler will gladly tell you, it’s not easy to dress oneself. It requires patience, physical dexterity, bodily awareness, and knowledge of where our body parts are supposed to go inside of clothing. Dressing can be a frustrating ordeal for young children, but with enough persistence, encouragement, and practice, it’s something most of us eventually learn to master.

As new research shows, the same learning strategy used by toddlers also applies to artificially intelligent computer characters. Using an AI technique known as reinforcement learning—the digital equivalent of parental encouragement—a team led by Alexander W. Clegg, a computer science PhD student at the Georgia Institute of Technology, taught animated bots to dress themselves. In tests, their animated bots could put on virtual t-shirts and jackets, or be partially dressed by a virtual assistant. Eventually, the system could help produce more realistic computer animation or, more practically, inform physical robotic systems capable of dressing people who struggle to do it themselves, such as those with disabilities or illnesses.

Putting clothes on, as Clegg and his colleagues point out in their new study, is a multifaceted process.

“We put our head and arms into a shirt or pull on pants without a thought to the complex nature of our interactions with the clothing,” the authors write in the study, the details of which will be presented at the SIGGRAPH Asia 2018 conference on computer graphics in December. “We may use one hand to hold a shirt open, reach our second hand into the sleeve, push our arm through the sleeve, and then reverse the roles of the hands to pull on the second sleeve. All the while, we are taking care to avoid getting our hand caught in the garment or tearing the clothing, often guided by our sense of touch.”

Computer animators are fully aware of these challenges, and often struggle to create realistic portrayals of characters putting their clothes on. To help in this regard, Clegg’s team turned to reinforcement learning—a technique that’s already being used to teach bots complex motor skills from scratch. With reinforcement learning, systems are motivated toward a designated goal by gaining points for desirable behaviors and losing points for counterproductive behaviors. It’s a trial-and-error process—but with cheers or boos guiding the system along as it learns effective “policies” or strategies for completing a goal.
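The trial-and-error loop described above can be sketched with a toy example. This is not the paper's setup—the states, rewards, and the use of tabular Q-learning here are purely illustrative—but it shows the basic mechanic: points for moves that advance the goal, penalties for clumsy ones, and a "policy" that emerges from repeated attempts.

```python
import random

# Toy illustration (not the paper's setup): a 5-stage "dressing" chain.
# States 0..4 are stages of dressing; action 0 = correct move (advance one
# stage), action 1 = clumsy move (fall back to the start, small penalty).
N_STATES, GOAL = 5, 4

def step(state, action):
    if action == 0:
        nxt = state + 1
        return nxt, (10.0 if nxt == GOAL else 0.0), nxt == GOAL
    return 0, -1.0, False  # clumsy move: back to the start, a "boo"

# Tabular Q-learning: cheers (+reward) and boos (-reward) shape the policy.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

# The learned policy picks the correct move at every stage.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)]
print(policy)  # [0, 0, 0, 0]
```

The real task is vastly harder—continuous joint torques instead of two discrete actions, and a cloth simulator in the loop—but the reward-driven learning principle is the same.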

The difference with self-dressing, however, is the need for haptic perception. Animated characters need to touch their clothing to infer progress. When dressing themselves, the bots must apply force to move their virtual arms through the clothing, while avoiding forces that could damage the garment, or cause a hand or elbow to get stuck. Consequently, the researchers had to add a second important element to the project: a physics engine capable of simulating the pulling, stretching, and manipulation of malleable materials, namely cloth.

During the training process, a bot gained points by successfully grasping the edge of a sleeve or poking its head through the collar. But when an action resulted in tearing or getting its arms hopelessly tangled, it would lose points.
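A shaped reward along those lines might look like the following sketch. Every field name, weight, and threshold here is hypothetical—the paper's actual reward terms differ—but it illustrates how milestone bonuses and damage penalties combine into a single score per step.

```python
# Hypothetical reward shaping for one dressing step; all observation fields
# and weights are illustrative, not taken from Clegg et al.
def dressing_reward(obs):
    r = 0.0
    if obs["grasped_sleeve_edge"]:
        r += 1.0                       # progress: hand found the sleeve opening
    if obs["head_through_collar"]:
        r += 5.0                       # major milestone
    r -= 0.01 * obs["cloth_stress"]    # discourage forces that strain the cloth
    if obs["cloth_torn"]:
        r -= 10.0                      # tearing the garment: large penalty
    if obs["limb_tangled"]:
        r -= 2.0                       # hand or elbow hopelessly stuck
    return r

print(dressing_reward({"grasped_sleeve_edge": True, "head_through_collar": False,
                       "cloth_stress": 20.0, "cloth_torn": False,
                       "limb_tangled": False}))  # ~0.8: a small net cheer
```

The continuous stress term matters: it lets the simulated haptic feedback described above nudge the bot away from damaging forces before a tear ever happens.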

In this example of a failed dressing attempt, the bot tears a hole through the shirt.
GIF: Alexander Clegg/GIT/Gizmodo

Early in the project, however, the researchers realized that a single, coherent dressing policy wasn’t going to work. The complicated task of dressing had to be broken down into a series of sub-policies. This makes sense: when we teach children to dress themselves, we teach it one step at a time. The act of dressing can’t be captured by a single overarching policy—it’s a step-by-step process that leads toward a desired goal. Clegg’s team developed a policy-sequencing algorithm for this very purpose; at any given stage, an animated bot knew where it was in the dressing process and which step was required next, so it could advance toward the desired goal.
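The sequencing idea can be sketched as a simple hand-off scheme: run one sub-policy until its completion test passes, then switch to the next. The sub-task names and the progress-counter "policies" below are invented for illustration—the paper's sub-policies are learned neural controllers—but the control flow mirrors the described algorithm.

```python
# Sketch of policy sequencing (illustrative; sub-task names are invented,
# not from Clegg et al.). Each sub-policy runs until its completion test
# passes, then control hands off to the next sub-policy in the sequence.
def run_sequence(subtasks, state, max_steps=100):
    for name, policy, is_done in subtasks:
        for _ in range(max_steps):
            if is_done(state):
                break
            state = policy(state)
        else:
            return state, f"stalled during {name}"  # sub-task never completed
    return state, "dressed"

# Toy sub-policies: each just increments a progress counter for its phase.
def make_phase(key, target):
    return (key,
            lambda s: {**s, key: s[key] + 1},  # policy: one step of progress
            lambda s: s[key] >= target)        # completion predicate

state = {"grasp": 0, "tuck": 0, "pull": 0}
subtasks = [make_phase("grasp", 3), make_phase("tuck", 2), make_phase("pull", 4)]
final, status = run_sequence(subtasks, state)
print(status, final)  # dressed {'grasp': 3, 'tuck': 2, 'pull': 4}
```

The completion predicates play the role of the bot "knowing where it was in the dressing process": each one gates the transition to the next stage.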

Clegg and his colleagues say their new paper is the first to show that reinforcement learning, in conjunction with cloth simulation, can be used to teach a “robust dressing control policy” to bots, even though it was “necessary to separate the dressing task into several subtasks.”
