How it Works: AI Fashion Styling

 

A question I often get asked at events, or when explaining Intelistyle to someone for the first time, is: “Is this the real thing? Is it real AI?” That used to surprise me, but then I realised there are so many companies out there that claim to do machine learning when what they really do is package up a couple of Amazon, Google or Microsoft cloud ML APIs, combine them with a freely available dataset, and voilà, there’s a service they can sell to customers.

 

So I decided to write up how our technology works, to help our customers understand what really happens under the hood.

 

We constantly crawl the web, much like Google’s search engine does. Instead of indexing generic information, though, we focus on fashion data. We have particular data sources that we prefer: fashion magazines, social networks, retail websites and blogs. This process allows us to collect thousands of outfits put together by human stylists. We use both images and text to get the most complete and accurate information.

 

Now, as you can probably imagine, most of the web’s images are quite noisy. How do you extract the individual garments in an outfit when the images have varied backgrounds, different poses and different models? Our approach was to train a detection model that draws a bounding box around each garment. Using that approach, we were able to build a unique dataset of millions of outfits for training our AI styling model.
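Once a detector has produced a box per garment, each garment is cropped out of the source image and stored on its own. The sketch below shows just that cropping step on a toy pixel grid; the `Detection` class and the `(x1, y1, x2, y2)` box format are assumptions for illustration, and the detector itself is not shown.

```python
# Sketch: cropping detected garments out of an outfit photo.
# The Detection type and (x1, y1, x2, y2) box convention are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                       # e.g. "blazer", "skirt"
    box: tuple[int, int, int, int]   # (x1, y1, x2, y2) in pixel coords

def crop(image: list[list[int]], det: Detection) -> list[list[int]]:
    """Extract the garment region from a row-major pixel grid."""
    x1, y1, x2, y2 = det.box
    return [row[x1:x2] for row in image[y1:y2]]

# A toy 4x4 "image" and one detected garment.
image = [[col + 4 * row for col in range(4)] for row in range(4)]
blazer = Detection("blazer", (1, 1, 3, 3))
patch = crop(image, blazer)  # the 2x2 region covering the garment
```

A real pipeline would operate on image tensors rather than nested lists, but the principle is the same: the detector turns one noisy outfit photo into a set of clean per-garment crops.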

That dataset is constantly updated and quality-controlled by our team, which allows us to keep up with the latest trends across different regions. We have clients in China, Europe and the Middle East, and as you can imagine, the trends in each of these regions are very different. What is considered fashionable in one region isn’t necessarily fashionable in another.

 

Our machine learning team uses the latest academic research to craft a proprietary, bespoke set of AI models that analyse images and text. Each garment in our database is described by a 128-dimensional “signature”, or embedding. You can think of this as very similar to what Shazam does for music tracks: each signature captures the important characteristics of a garment and leaves out the noise.
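The point of a fixed-length signature is that garments can be compared numerically: two visually similar items end up with nearby vectors. The sketch below uses cosine similarity on random stand-in vectors; the real embeddings come from our models, so the numbers here are purely illustrative.

```python
# Sketch of similarity search over 128-dimensional garment embeddings.
# The vectors are random stand-ins for real model outputs.
import math
import random

DIM = 128

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

random.seed(0)
anchor = [random.gauss(0, 1) for _ in range(DIM)]          # one garment
near = [x + random.gauss(0, 0.1) for x in anchor]          # similar garment
far = [random.gauss(0, 1) for _ in range(DIM)]             # unrelated garment

# The similar garment scores much higher than the unrelated one.
similar_score = cosine_similarity(anchor, near)
unrelated_score = cosine_similarity(anchor, far)
```

At scale this lookup would use an approximate nearest-neighbour index rather than pairwise comparisons, but the embedding idea is the same.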

 

However, a good dataset and embeddings were not enough to create styling intelligence that performs as well as actual stylists. While talking to our clients, we realised that there are fashion rules that can make or break an outfit. For example, an off-the-shoulder top with puff sleeves should not be styled with a skinny-fit blazer. Our model could not yet predict these rules as well as humans.
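One simple way to think about such rules is as hard constraints layered on top of the learned model: any candidate outfit that contains a forbidden pairing is rejected before scoring. The rule table and attribute labels below are illustrative assumptions, using the off-the-shoulder-top and skinny-blazer example from the text.

```python
# Sketch of hard styling rules applied on top of a learned model.
# The rule table and labels are illustrative, not our production rules.
INCOMPATIBLE: set[frozenset[str]] = {
    frozenset({"puff-sleeve off-shoulder top", "skinny-fit blazer"}),
}

def violates_rules(outfit: list[str]) -> bool:
    """True if the outfit contains any forbidden pairing."""
    items = set(outfit)
    return any(pair <= items for pair in INCOMPATIBLE)

bad = ["puff-sleeve off-shoulder top", "skinny-fit blazer", "jeans"]
good = ["slip dress", "shirt", "trainers"]
```

Rejecting outfits this way guarantees the model never outputs a combination that a human stylist would flag, regardless of how highly the learned scorer rates it.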

 

The solution was to train another model that detects rich attributes for each garment in our dataset, such as its fabric, cut, style, colour and other unique characteristics and categories.

 

We also work with stylists who have experience at brands such as M&S, Topshop and Vogue to create a unique set of ‘guidelines’ that steer our model towards specific attributes when creating an outfit. And of course, because no two regions are the same, we can customise those guidelines to local trends. For example, in the Middle East shorter hemlines always pair with a longer overcoat, and in Asia slip dresses should be layered over shirts.
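Unlike the hard rules, guidelines are soft preferences: outfits that match a region's preferred pairings score higher without other combinations being forbidden. The sketch below encodes the two regional examples from the text; the scoring scheme and weights are assumptions for illustration.

```python
# Sketch of region-specific stylist guidelines as soft preferences.
# The regions and pairings come from the text; weights are assumptions.
REGIONAL_GUIDELINES: dict[str, dict[tuple[str, str], float]] = {
    "middle_east": {("short hemline", "long overcoat"): 2.0},
    "asia": {("slip dress", "shirt"): 2.0},
}

def guideline_bonus(outfit: list[str], region: str) -> float:
    """Extra score for outfits that match a region's preferred pairings."""
    items = set(outfit)
    return sum(
        weight
        for pair, weight in REGIONAL_GUIDELINES.get(region, {}).items()
        if set(pair) <= items
    )

layered = ["slip dress", "shirt", "boots"]
```

Because each region carries its own table, the same candidate outfit can rank differently for a client in Asia than for one in the Middle East.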

 

The result? A proprietary combination of AI models and data that outperforms all published academic research, delivers outstanding styling quality, and is trusted by the world’s top luxury brands such as D&G, MaxMara and Lane Crawford.

 

We even tested it against real stylists and fashion influencers at London Fashion Week. As Forbes reported, 70% of respondents unwittingly chose the looks created by our model. 

 
