Artificial Intelligence can now Simulate the Task of Dressing

Animations show that getting dressed is not as simple as we think.

Whether someone is interested in science or not, Artificial Intelligence (AI) is one of those subjects everyone seems to take an immediate interest in. The usual perception of AI is automated robots and machines, but there is much more to it than that. The idea of Artificial Intelligence is not new; it has been with us for centuries in the form of logic. Fast forward to 2018, and AI is more advanced than ever. It is integrated into our everyday devices: smartphones, computers, cars, and even refrigerators. The continuous evolution of AI means it can now automate routine tasks that seem mundane to many of us.

According to scientists at the Georgia Institute of Technology and Google Brain, the seemingly simple task of getting dressed is rather complex and involves many steps. The scientists used simulations to feed data into a neural network that ultimately 'learned' the task by breaking the instructions down into smaller, well-defined goals. They released a video showing animated characters putting on upper-body garments in set sequences.

This allowed for hundreds of repetitions, each leading to either a positive or a negative response depending on how the AI's attempt turned out. A positive response signals that the task is being performed correctly, 'telling' the AI that it is on the right track. During the study, the AI was able to re-enact various dressing methods with an animated character, including pulling on a T-shirt, putting an arm through a sleeve, and putting on a jacket. The framework is designed so that a single dressing sequence is enough for the AI to learn to complete the task successfully. Lead author Alexander Clegg, a doctoral student at the Georgia Institute of Technology, said:

“We’ve opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning. There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world.”
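The trial-and-error loop described above can be sketched as a tiny reinforcement-learning example. Everything here is illustrative: the action names, the feedback function, and the learning setup are invented for the sketch and are not taken from the study, which used a far more sophisticated simulation.

```python
import random

# Toy sketch of repeated attempts at a dressing sub-task: the agent
# keeps a value estimate for each candidate action and updates it from
# positive or negative feedback, mostly repeating what has worked.
ACTIONS = ["pull_sleeve", "tug_collar", "rotate_arm"]

def feedback(action):
    # Hypothetical environment: only one action advances the sub-task
    # (positive signal); the others are penalised.
    return 1.0 if action == "pull_sleeve" else -1.0

def train(episodes=200, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-valued action so far,
        # occasionally explore a random one.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        reward = feedback(action)
        counts[action] += 1
        # Incremental average of the rewards observed for this action.
        values[action] += (reward - values[action]) / counts[action]
    return values

values = train()
best = max(values, key=values.get)
```

After a few hundred repetitions, the action that receives positive feedback dominates the value table, which is the core idea behind the thousands of simulated attempts the researchers describe.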

The AI uses touch feedback (haptics) to put on and adjust clothes. The sequence starts with the collar around the character's neck. From there, the AI program takes control by checking the position of the clothing, governed by an error-checking protocol. If an error is found, the AI tries various adjustments; the best-suited adjustment is registered as a positive and accepted. Putting on clothes seems so easy to us that we do not even think about it while doing it, because we have repeated the action countless times over the course of our lives. For the AI, it remains a daunting task for now; as it repeats these actions, it will adapt to most of the scenarios presented to it. Clegg also stated:

“Dressing seems easy to many of us because we practice it every single day. In reality, the dynamics of cloth make it very challenging to learn how to dress from scratch. We leverage simulation to teach a neural network to accomplish these complex tasks by breaking the task down into smaller pieces with well-defined goals, allowing the character to try the task thousands of times and providing reward or penalty signals when the character tries beneficial or detrimental changes to its policy.”
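The check-and-adjust cycle described above can be pictured as a simple closed feedback loop. This is only a sketch under made-up assumptions: the sensing function, step size, and tolerance are invented stand-ins for the haptic feedback and error-checking protocol the article mentions.

```python
# Minimal closed-loop sketch: read a (hypothetical) touch/position
# signal, and while the garment is off target, apply a correction that
# reduces the remaining error. All numbers are illustrative.
TOLERANCE = 0.05

def sense_offset(position, target):
    # Stand-in for haptic/position feedback on the garment.
    return target - position

def adjust(position, target, step=0.25, max_iters=50):
    history = []
    for _ in range(max_iters):
        error = sense_offset(position, target)
        history.append(position)
        if abs(error) <= TOLERANCE:
            return position, history  # garment judged "in place"
        # Move a fraction of the remaining error each cycle.
        position += step * error
    return position, history

final, steps = adjust(position=0.0, target=1.0)
```

Each pass through the loop shrinks the remaining error, mirroring how repeated small adjustments, each confirmed or rejected by feedback, eventually settle the garment into place.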

Because the clothes themselves are simulated, creating animations for the character is challenging: typical animations are built for static environments, while here the character is almost always in motion. Another challenge is the complexity of the tasks, since the dressing sequence involves prolonged movements made up of a variety of subtasks.

According to the team, there are plans to further develop the AI to assist in healthcare and robotics. Who knows: in the near future, robots may be dressing patients in hospitals.
