A key part of any eCommerce site is the Product Details Page (PDP). Often when defining or optimising these pages we talk about social proof, scarcity messaging, up-sell and cross-sell, but rarely do we discuss the actual product description.
For mid-to-large scale resellers with a sizeable number of products, the accepted practice is to simply use the manufacturer description, assume it is adequate and display it on the site. Yet this approach has minimal SEO benefit, since search engines will see the description as duplicate content; it also lacks personality and does nothing to build trust or increase the perception of expertise from a customer perspective. Manufacturer descriptions also have a tendency to focus on features as opposed to benefits, and as Theodore Levitt, Economist & Professor at Harvard Business School, once said:
“People don’t want to buy a quarter-inch drill, they want a quarter-inch hole.”
Typically, if a site isn’t converting as well as expected, we break out the MVT tools, call on our experience and start looking at landing pages, calls-to-action and of course Checkout; rarely do we consider that there may be an issue with the actual product description.
We’ve become adept at providing additional functionality to help answer any questions a customer may have via customer reviews and customer Q&As, but these mechanisms are only as good as the people providing the information, and not all sites have the kind of traffic or customer engagement to make them work.
Our usual analysis methods seem to fall short when assessing the quality of a product description.
Examining the analytics may show people leaving the site on a certain page, in this case the PDP, but we don’t know whether this was due to pricing, unclear next steps, or whether they were just browsing and had found the information they were looking for at that moment in time.
It’s the same with session replay tools: we may see someone continually scrolling up and down the Product Details Page and hypothesise that they were unable to find the information they were looking for, but we don’t know what that information was. Once again, we can see that there is an issue, but we don’t have enough information to diagnose what it may be.
The qualitative research route does little to fill the gap. Ethnographic studies aren’t suited to this kind of insight, and due to the artificial nature of Formative Usability Testing, participants rarely purchase items during these sessions, so the true questions that may affect their buying process don’t come to mind and, therefore, don’t get mentioned (even if they did, it isn’t feasible to do this for every product in the catalogue).
In short, it can be difficult to know whether the product description is actually helpful to customers or not.
We already seem to have well-established patterns for assessing the effectiveness of FAQ content and the helpfulness of individual customer reviews.
But could this approach be applied to product descriptions?
We could employ a similar widget at the bottom of the product description that starts by simply asking whether the content was helpful.
If it was, we thank the customer for their participation. If not, we ask them what information is missing. In addition, we could also offer the option to be contacted with an answer as an attempt to rescue the sale.
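The flow above can be sketched as a small state machine: vote, then either thank the customer or ask a follow-up, then record the answer and any contact request. This is a minimal illustration only — the state names, event shapes and `nextState` function are all hypothetical, not a real widget API, and a production version would also need to persist the feedback somewhere.

```typescript
// Hypothetical states for a "Was this description helpful?" widget.
type WidgetState =
  | { step: "ask" }                                  // initial yes/no prompt
  | { step: "thanks" }                               // positive vote: just say thanks
  | { step: "follow-up" }                            // negative vote: ask what's missing
  | { step: "submitted"; offerContact: boolean };    // feedback captured

// Hypothetical events the widget can receive from the customer.
type WidgetEvent =
  | { type: "vote"; helpful: boolean }
  | { type: "feedback"; missingInfo: string; wantsContact: boolean };

// Pure transition function: given the current state and an event,
// return the next state. Unexpected events leave the state unchanged.
function nextState(state: WidgetState, event: WidgetEvent): WidgetState {
  switch (state.step) {
    case "ask":
      if (event.type === "vote") {
        // A "yes" ends the flow with a thank-you; a "no" asks what was missing.
        return event.helpful ? { step: "thanks" } : { step: "follow-up" };
      }
      return state;
    case "follow-up":
      if (event.type === "feedback") {
        // Record the information gap and whether to contact them with an
        // answer — the "rescue the sale" option described above.
        return { step: "submitted", offerContact: event.wantsContact };
      }
      return state;
    default:
      return state;
  }
}
```

Keeping the transition logic as a pure function like this makes it trivial to unit-test the flow independently of whatever UI framework renders the widget.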
Once feedback has been submitted, rewards could be introduced to thank the customer for providing it and to encourage them to return to the site should they go elsewhere to find the information they were looking for. The rationale for the rewards is simply to try to turn a negative experience into a positive one: we tend to judge our experiences almost entirely by their peaks (whether good or bad) and by how they ended, a phenomenon known as the Peak-End Rule.
Would this approach help to exhaustively find all the issues with a product description? Probably not, but the feedback would act as a focused data point that could help close any information gaps.