Very soon, you may be able to virtually try on clothes before buying them on Amazon. Researchers at Amazon’s Lab126 have published a series of papers proposing a combination of artificial intelligence (AI) algorithms that could power a “digital assistant” for clothes shopping.
The researchers at Lab126, the lab behind successful Amazon products such as Fire TV, Kindle Fire and Echo, have developed an image-based virtual try-on program they call Outfit-VITON. The program uses three separate AI algorithms to help shoppers ‘try on’ clothes and gauge their fit, drape and overall look before they buy them.
“Online apparel shopping offers the convenience of shopping from the comfort of one’s home, a large selection of items to choose from, and access to the latest products. However, online shopping does not enable physical try-on, thereby limiting customer understanding of how a garment will actually look on them,” the researchers wrote. “This critical limitation encouraged the development of virtual fitting rooms, where images of a customer wearing selected garments are generated synthetically to help compare and choose the most desired look.”
The first algorithm lets shoppers fine-tune their search results by describing variations on a product image; the second suggests products similar to those a shopper has selected; and the third blends an image of the shopper with clothing items from different product pages to show how the ensemble looks together.
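The papers do not expose any of this as an API, but the division of labor is easy to sketch. The Python toy below treats products as unit vectors and uses invented names (refine, suggest_similar, try_on) and made-up data; in the real system each stand-in body would be a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy catalogue: each item is a unit-length "image embedding" (invented data).
catalog = {name: rng.normal(size=8) for name in ["dress", "jacket", "boots", "scarf"]}
catalog = {k: v / np.linalg.norm(v) for k, v in catalog.items()}

def refine(query, feedback_vec, alpha=0.3):
    # Stage 1: nudge the query embedding toward an embedded text description.
    q = (1 - alpha) * query + alpha * feedback_vec
    return q / np.linalg.norm(q)

def suggest_similar(query, k=2):
    # Stage 2: nearest neighbours in embedding space, by cosine similarity.
    scored = sorted(catalog.items(), key=lambda kv: -float(kv[1] @ query))
    return [name for name, _ in scored[:k]]

def try_on(shopper_image, garment_images):
    # Stage 3: in the papers this is Outfit-VITON's image-generation model;
    # here just a placeholder that would composite garments onto the shopper.
    raise NotImplementedError("stands in for the conditional GAN described below")

q = refine(catalog["dress"], rng.normal(size=8))
print(suggest_similar(q))
```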
The program builds on generative adversarial networks. “GANs are generative models trained to synthesize realistic samples that are indistinguishable from the original training data,” the researchers write in the paper, published on Amazon Science, the company’s research blog. A standard GAN alone doesn’t render effective output for this task, so Outfit-VITON uses a conditional GAN (cGAN) for better image generation, since it can map images from one format to another.
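As a rough illustration of what “conditional” adds, here is a minimal cGAN sketch in PyTorch: both the generator and the discriminator see an extra condition vector (here a one-hot garment category), which is what lets the model learn a mapping rather than producing unconditioned samples. All sizes, data and hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

NOISE, COND, IMG = 16, 10, 64  # latent size, condition size, flattened image size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE + COND, 128), nn.ReLU(),
            nn.Linear(128, IMG), nn.Tanh(),
        )
    def forward(self, z, c):
        # Concatenating the condition is what makes the GAN "conditional".
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG + COND, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )
    def forward(self, x, c):
        return self.net(torch.cat([x, c], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):                      # toy training loop on random data
    real = torch.rand(32, IMG) * 2 - 1       # stand-in for real garment images
    cond = torch.eye(COND)[torch.randint(COND, (32,))]  # e.g. garment category
    z = torch.randn(32, NOISE)

    # Discriminator: real vs. generated, each seen alongside its condition.
    fake = G(z, cond).detach()
    loss_d = bce(D(real, cond), torch.ones(32, 1)) + \
             bce(D(fake, cond), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator under the same condition.
    loss_g = bce(D(G(z, cond), cond), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```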
Each image passes through a network that converts it into a vector form; that vector then passes through a set of masks that de-emphasize some features and amplify others. A second network takes the category of each image, along with the category of the target item, as input, and outputs values for prioritizing the masks. The results are called subspace representations.
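A toy rendering of that description, with assumed shapes and invented module names (the article specifies none): an encoder produces the vector form, a bank of learnable masks rescales it, and a small network turns the two category inputs into per-mask priorities.

```python
import torch
import torch.nn as nn

EMB, N_MASKS, N_CATS = 64, 5, 8

class SubspaceEmbedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(3 * 32 * 32, EMB)            # toy image encoder
        self.masks = nn.Parameter(torch.rand(N_MASKS, EMB))   # learnable masks
        self.prioritizer = nn.Sequential(                     # mask-weight network
            nn.Linear(2 * N_CATS, 32), nn.ReLU(),
            nn.Linear(32, N_MASKS), nn.Softmax(dim=1),
        )

    def forward(self, image, item_cat, target_cat):
        v = self.encoder(image.flatten(1))                    # vector form
        w = self.prioritizer(torch.cat([item_cat, target_cat], dim=1))
        # Each mask de-emphasizes some features and amplifies others; the
        # weights w decide which masks matter for this category pair.
        mask = w @ torch.sigmoid(self.masks)                  # (batch, EMB)
        return v * mask                                       # subspace representation

model = SubspaceEmbedder()
img = torch.rand(4, 3, 32, 32)
cats = torch.eye(N_CATS)
out = model(img, cats[:4], cats[4:8])                         # -> shape (4, 64)
```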
The entire Outfit-VITON system is trained on a dataset of outfits; each training example pairs an outfit with items that complement it and items that don’t go well with it. Post-training, Outfit-VITON produces a vector representation of every catalogued item, so finding complementary combinations becomes a calculation that can simply be looked up.
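Once every item has a precomputed vector, that lookup can be as simple as a cosine-similarity search. The sketch below uses an invented four-item catalogue and random embeddings to show the idea: no model runs at query time, only vector arithmetic against the stored representations.

```python
import numpy as np

rng = np.random.default_rng(1)
item_names = ["blue jeans", "white tee", "leather belt", "rain boots"]
item_vecs = rng.normal(size=(4, 64))
item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)  # unit vectors

def most_complementary(outfit_vec, k=2):
    # Cosine similarity against every precomputed item vector: a lookup,
    # not a fresh model inference.
    scores = item_vecs @ (outfit_vec / np.linalg.norm(outfit_vec))
    top = np.argsort(-scores)[:k]
    return [(item_names[i], float(scores[i])) for i in top]

print(most_complementary(rng.normal(size=64)))
```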