Retail traffic is an important indicator of shopper behavior and a bellwether for retail sales results. If more people are visiting stores, there are more opportunities for retailers to convert shoppers into sales. When traffic is down, retailers have fewer opportunities and, typically, fewer sales. More traffic doesn't guarantee more sales, but store traffic and sales are correlated.
But traffic is notoriously difficult to measure. If you scanned the headlines about retail traffic during this past holiday season, you would likely have been more confused than enlightened. Depending upon which index you referred to, retail traffic was either significantly down, somewhat down, or up.
Why are there such discrepancies in these indexes - and whom should you believe?
Is retail traffic up or down?
The answer depends on where your numbers come from.
On January 16, 2014, The Wall Street Journal reported that retail store traffic over the holiday period (defined as November and December) was down 14.6 percent compared to the prior year. The source for this data point was a company called ShopperTrak, which says it has traffic counters installed in some "60,000 malls and large retailers across the country."
Retail analytics company RetailNext released its own holiday report. It counted "34.6 million shopping trips to specialty and larger format stores during the 2012 and 2013 season," and concluded that retail traffic was down 6.5 percent - less than half the decline that ShopperTrak reported.
To add to the confusion, a third retail analytics firm called Euclid, as reported in Digital Journal on January 3, 2014, claimed that there was a 9 percent increase in year-over-year traffic, based on "nearly 25 million domestic shopping sessions during December."
By itself, each index seems credible enough, but given the dramatically different conclusions, you have to wonder which one is correct - if any.
Making sense of the numbers
Part of the challenge is in understanding exactly where and how each of these companies is measuring store traffic. The ShopperTrak data, according to the WSJ article, is based on 60,000 malls and large retailers across the country, but in fact, according to ShopperTrak's own website, the company analyzes data from over 60,000 locations across 90 countries and territories. How many of these stores are in the US, and which ones were included in the index?
RetailNext bases its results on 34.6 million shopping trips. The company doesn't define what a "shopping trip" is; however, it does say that it collects data from 65,000 sensors in retail stores.
It's not clear how many unique stores are represented by the 34.6 million counts, but it's a relevant question. For example, if these 34.6 million counts were captured only from large stores that had average daily traffic of, say, 1,000 counts per day during the 61-day November-December holiday period, we would conclude that the index is based on only about 600 unique stores.
If, however, the index included only small stores that had averaged only 100 counts per day, the 34.6 million counts would represent about 6,000 unique locations. It's important to understand how many unique stores are included in an index because the number reflects market coverage - if an index has insufficient coverage or is not based on a statistically significant and representative sample of the market, how valid will the results be?
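The arithmetic behind those two scenarios can be sketched in a few lines. This is a back-of-the-envelope estimate only, assuming a 61-day November-December period and the two hypothetical average daily traffic figures used above; the variable names are illustrative, not anything RetailNext publishes.

```python
# Back-of-the-envelope estimate: how many unique stores would be needed
# to generate a reported total traffic count, given an assumed average
# daily traffic per store over the holiday period.
HOLIDAY_DAYS = 61          # November (30 days) + December (31 days)
TOTAL_COUNTS = 34_600_000  # RetailNext's reported shopping trips

def implied_unique_stores(avg_daily_counts_per_store):
    """Unique stores implied by TOTAL_COUNTS at a given daily traffic level."""
    return TOTAL_COUNTS / (avg_daily_counts_per_store * HOLIDAY_DAYS)

print(round(implied_unique_stores(1000)))  # large-store scenario: ~567, i.e. roughly 600
print(round(implied_unique_stores(100)))   # small-store scenario: ~5,672, i.e. roughly 6,000
```

The point of the exercise is that the same headline number is consistent with anywhere from a few hundred to several thousand stores, which is exactly why the index's composition matters.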
Euclid is an entirely different story. This company gathers store traffic data from Wi-Fi signals captured from smartphones. The obvious problem here is that not everyone has a smartphone. According to a 2013 survey by Nielsen, 61 percent of mobile phone subscribers have smartphones - which means 39 percent don't. Furthermore, since some smartphone users keep their Wi-Fi turned off, even some of those customers won't be tracked. So how can an index based on these data produce meaningful insights about the state of retail traffic? I don't believe it can.
Characterizing an entire industry
The retail industry is expansive and diverse. Any meaningful index of retail traffic should consider geography, store category, retailer size, physical site characteristics, and methodology for collecting the data. All these factors can create bias in the data and compromise the reliability of the results. Generalizing retail industry traffic trends from non-representative samples or indexes of vague composition is confusing at best and potentially misleading.
While I applaud the intent of companies that attempt to create indexes to bring insight to the industry, the confusion they can cause is counter-productive. Some might argue that any retail traffic index is better than no index, but I disagree. This is another case where knowledge really is power, and it is imperative to acquire accurate knowledge. How do we do that? I suggest we start with a healthy dose of skepticism.
Caveat lector - let the reader beware
As someone who has studied retail traffic and conversion rates for over a decade, I am disconcerted by what, in my reading, passes for retail traffic indexes. I can't blame companies for trying to build traffic indexes - I'm sure they believe their insights are valid. However, if a company produces an "index" and then suggests or implies that it is representative of the market in any way, they should be absolutely clear about the methodology they use and the veracity of their results.
Having a robust and meaningful retail traffic index would be extremely useful and provide important insights, but until one is created, there will be predictions and claims, and it's caveat lector - let the reader beware!
Mark Ryski is author of Conversion: The Last Great Retail Metric and When Retail Customers Count and CEO and founder of HeadCount Corporation. HeadCount is a leading authority on retail traffic and customer conversion analysis, providing tangible, actionable insights for use by managers across an organization, from executives to store managers. (www.headcount.com or follow us on LinkedIn).