Where Retail meets Tech

July 24, 2017

Deep learning: the new secret to great retail execution

Written by Mickaël Maillard

Technology is evolving at an increasingly rapid pace these days, and one of the latest developments could revolutionize image recognition in machines. It's called "deep learning." While the concept got exciting press coverage in the gaming world last year — when Google’s AlphaGo system beat a champion player in the ancient board game of Go — its importance extends much further.

This new form of Artificial Intelligence will have repercussions not just in gameplay, but in business, as well — including the retail execution field. 

 

Teaching Machines How to Learn

So, how does deep learning work?

First, we have to understand the historical perspective: how AI has evolved over time. The "classical" approach to AI was focused on explicit rules. If you wanted to build a system that could recognize an apple, you would include rules like "apples are red, yellow, or green," or set rules for a particular shape. But that rule-based system had trouble when the inputs were unclear. What if an apple is partly in shadow or obscured in a photo? What if it's a candy apple covered in caramel?
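
To make the limitation concrete, here is a minimal, purely illustrative sketch of such a rule-based recognizer in Python. The color names, the roundness measure, and the thresholds are all invented for this example; real rule-based systems encoded far more rules and still broke on edge cases like these.

# Purely illustrative rule-based recognizer: the rules are written by hand,
# so anything outside them (shadow, caramel coating) is misclassified.
def looks_like_apple(dominant_color, roundness):
    """Hard-coded rules: apples are red, yellow, or green and roughly round."""
    if dominant_color not in ("red", "yellow", "green"):
        return False   # a caramel-coated apple looks brown, so it is rejected
    if roundness < 0.8:
        return False   # an apple half-hidden in shadow may look irregular
    return True

print(looks_like_apple("red", 0.92))    # True
print(looks_like_apple("brown", 0.95))  # False, even though it is an apple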

So instead of trying to write a program with an explicit laundry list of rules describing every visual feature of the products on a shelf, we started creating algorithms that use thousands of examples to decide which visual features matter for recognizing objects. This way, the algorithms could actually learn how to solve problems from examples. These kinds of algorithms, which, as Arthur Samuel put it, "give computers the ability to learn without being explicitly programmed," are what we call "machine learning," and they still provide the founding principles of deep learning.
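
As a rough illustration of that shift, the toy sketch below uses scikit-learn and a handful of made-up feature vectors. Instead of hand-writing the rules, the model is left to infer them from labeled examples.

# Toy machine-learning example: the model infers its decision rules from
# labeled examples instead of having them written by hand.
# Feature vectors are invented: [redness, greenness, roundness].
from sklearn.tree import DecisionTreeClassifier

examples = [
    [0.9, 0.1, 0.95],   # shiny red apple
    [0.2, 0.8, 0.90],   # green apple
    [0.4, 0.3, 0.85],   # apple partly in shadow
    [0.9, 0.1, 0.20],   # red box (not an apple)
    [0.1, 0.1, 0.95],   # round grey object (not an apple)
]
labels = ["apple", "apple", "apple", "not_apple", "not_apple"]

model = DecisionTreeClassifier().fit(examples, labels)
print(model.predict([[0.5, 0.4, 0.88]]))  # a new, unseen example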

However, with these first machine-learning algorithms, extracting visual features from raw images was still a challenge. Natural images are often too complex to learn from directly, so programmers had to provide an explicit, exhaustive list of visual features for the algorithm to learn which ones were relevant for recognizing products. Designing this list of visual features (such as right angles, lines, colors, etc.) was time-consuming, poorly optimized, and, most importantly, inaccurate: the accuracy of the system was completely dependent on the competency of the engineer.
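
The sketch below gives a feel for what that hand-designed step looked like: a hypothetical helper that reduces a raw image to a small vector of engineer-chosen features (a coarse color histogram plus a crude edge-density score) before any learning happens. The specific features and thresholds are assumptions made for illustration only.

import numpy as np

def handcrafted_features(image):
    """image: H x W x 3 array of RGB values in [0, 255].
    Returns an engineer-chosen feature vector: a coarse color histogram
    plus a crude edge-density score from horizontal/vertical gradients."""
    hist = []
    for channel in range(3):
        counts, _ = np.histogram(image[:, :, channel], bins=4, range=(0, 255))
        hist.extend(counts / counts.sum())
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_density = float(np.mean(np.hypot(gx, gy) > 20))
    return np.array(hist + [edge_density])

# A fake 64x64 image, just to show the shape of the output vector
fake_image = np.random.randint(0, 256, size=(64, 64, 3))
print(handcrafted_features(fake_image).shape)  # (13,)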

Here's where deep learning enters the picture. In contrast to the previous approaches, deep learning works directly on raw images, with algorithms that build the right visual feature sets on their own. It requires little guidance from the programmer or engineer. In fact, deep learning algorithms create their own visual features with amazing efficiency, leaving manually designed feature lists in the dust.
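
To show what "building its own features" means in practice, here is a toy convolutional network in PyTorch. It is not Planorama's actual model, just a generic sketch: the convolutional layers learn their own filters from raw pixels during training, with no hand-designed feature list anywhere.

import torch
import torch.nn as nn

# Toy convolutional network: the Conv2d layers learn their own visual
# filters from raw pixels; nobody writes "look for right angles" by hand.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learned low-level filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learned higher-level filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # "apple" vs. "not apple"
)

fake_batch = torch.randn(8, 3, 64, 64)  # 8 fake RGB images, 64x64 pixels
print(model(fake_batch).shape)          # torch.Size([8, 2])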

For visual product recognition, the kind we perform with Planorama, deep learning has several advantages that make it an ideal approach. The system can be fed many good examples of the targets, under all sorts of conditions: apples in shadow, coated, half-bitten, with or without stems, and more. Over time, the system learns to self-adjust and to recognize new examples of apples it has never seen. Without the need for explicit rules, the system can recognize its targets even with low-quality inputs. It is a generic system: fast to roll out for new product bases, adaptive, and robust enough to real-world conditions to deliver truly accurate results.
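
In practice, one common way to obtain that variety of examples is data augmentation, where training photos are randomly darkened, cropped, or flipped so the model sees each product under many conditions. The torchvision snippet below is a generic sketch of that idea, not a description of Planorama's own pipeline.

from torchvision import transforms

# Generic augmentation pipeline: each training photo is randomly altered so
# the network sees products in shadow, off-center, partly cropped, and so on.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # framing and partial views
    transforms.ColorJitter(brightness=0.4, contrast=0.3),  # shadows, store lighting
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# augmented = augment(pil_image)  # applied to each PIL image at training time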

 

Leveraging Deep Learning in Stores

Checking the compliance of an in-store display is a perfect application of deep learning systems. At Planorama, this is precisely where we use deep learning, along with some classical AI and image recognition, to analyze photos of in-store product displays. By comparing this analysis against a planogram, the system can quickly determine whether the display is correct.
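
At its simplest, that comparison step can be thought of as matching the products recognized on each shelf against the products the planogram expects there. The sketch below is a deliberately simplified, hypothetical version of such a check; real planograms also encode positions, facings, and share of shelf.

# Hypothetical, simplified compliance check: compare the products recognized
# in a shelf photo against what the planogram expects on each shelf.
planogram = {
    "shelf_1": ["cola_330ml", "cola_330ml", "lemon_soda_330ml"],
    "shelf_2": ["water_500ml", "water_500ml", "water_1l"],
}

recognized = {
    "shelf_1": ["cola_330ml", "lemon_soda_330ml", "lemon_soda_330ml"],
    "shelf_2": ["water_500ml", "water_500ml", "water_1l"],
}

for shelf, expected in planogram.items():
    found = recognized.get(shelf, [])
    missing = [p for p in expected if expected.count(p) > found.count(p)]
    print(shelf, "compliant" if not missing else f"missing: {set(missing)}")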

Importantly, deep learning algorithms grow on a diet of real product images taken in stores, not artificial studio images with ideal lighting, so they are ready to deploy in realistic conditions. They also make the whole process much faster, returning an analysis in just five minutes and allowing any mistakes to be corrected immediately.

Another area where deep learning makes a difference is accuracy. When sales representatives do the analysis themselves, their accuracy rates vary between 60 and 85 percent, meaning 15 to 40 percent of the data they collect can be wrong. Thanks to deep learning, it is now possible to take the element of human error out of the equation. For example, at Planorama, we are reaching accuracy rates above 98 percent!

 

Leaping Ahead With Planorama

The whole technology sphere is moving toward deep learning and the parallel computing hardware it runs fastest on. Expect paradigm-shifting changes in the next few years as major players like NVIDIA, AWS, Microsoft, Google, and Facebook strongly support this evolution.

Given all the new options for enhancing business, it's an exciting time to be in the retail world. At Planorama, we are all excited to bring deep learning to the masses, and to grow along with the technology renaissance.



Topics: image recognition, retail execution, artificial intelligence, deep learning