We all remember hearing how 4 out of 5 dentists recommend a particular toothpaste. It gives confidence when going to the grocery store and pulling a brand name from the shelf.
Most of us have also heard how these surveys and comparisons are far too easy to manipulate. Why, then, do we get so excited about ERP reports and survey results? We get them in droves, offering quadrants, “best of” lists and associated rankings.
Regardless of the source, in one way or another it all comes down to marketing and to relying on data that is far too easy to influence. Even our own 2019 Digital Transformation, HCM, and ERP Report contains some benchmark data, which is why we focus more on root causes and outcomes and less on “average results.”
There is nothing wrong with reading reviews or looking at matrices, but understand how results are tabulated and where the data is coming from before letting such publications influence your purchase decisions. Consider the following scenarios of sampling bias and data manipulation when reading your next ERP analysis:
1) Geographic coverage can bias ERP studies. A research firm has been hired to provide metrics showing that a particular SAP system integrator has strong market penetration in the State of California. Even if the reseller is based in San Diego and has no clients farther north than Orange County, all that is needed is a survey sent across Southern California to companies that have implemented SAP, asking who they used as a reseller.
Chances are the reseller in question will come up with a pretty nice percentage, say 30%. The results of the survey could then state that the reseller has a 30% market share of SAP implementations surveyed in the State of California. This type of bias is called area or undercoverage bias, where there is inadequate representation of the population being reported upon.
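A tiny simulation makes the undercoverage effect concrete. All the numbers here are made up for illustration: a 40/60 split between Southern and Northern California companies, and an assumed reseller share of 30% in the south versus 2% in the north. Restricting the survey to Southern California reports a much larger "statewide" share than a true statewide sample would:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 California companies that implemented SAP.
# Region split and per-region market shares are assumptions for illustration.
population = []
for _ in range(10_000):
    region = random.choices(["south", "north"], weights=[0.4, 0.6])[0]
    share = 0.30 if region == "south" else 0.02  # assumed reseller share
    used_reseller = random.random() < share
    population.append((region, used_reseller))

def market_share(sample):
    """Fraction of surveyed companies that used the reseller."""
    return sum(used for _, used in sample) / len(sample)

# Undercoverage: survey only the region where the reseller is strong.
socal_only = [c for c in population if c[0] == "south"]

print(f"Survey limited to Southern California: {market_share(socal_only):.1%}")
print(f"True statewide survey:                 {market_share(population):.1%}")
```

With these assumed inputs, the restricted survey reports roughly the reseller's Southern California share (~30%), while an honest statewide sample lands closer to 13%; the headline "market share in the State of California" depends entirely on who was covered by the sample.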
2) The concept of undercoverage bias can also extend beyond geography. Consider a research company that has a long-standing relationship of conducting paid research for a particular software firm, such as Oracle, and has been asked to show ERP market penetration across Fortune 1000 companies. Because the research firm works with Oracle in this case, they will have built a marketing database that has a large number of contacts and subscribers that have been referred by Oracle.
We know there are only a handful of ERP companies that dominate in the Fortune 1000 space, Oracle being one of them. Therefore, if a research firm works with Oracle and sends a survey to its subscriber database, the results are going to be biased and weighted toward Oracle, as more of the survey's recipients are Oracle users.
3) A consulting group conducts a survey asking companies that have recently implemented ERP if they felt their systems integrator could or should have given them more direction during the implementation. This is an example of leading-question bias, where the intended response is hinted at in the question. An alternative here would be to ask if they were satisfied with the direction provided by their implementer, leaving the response less biased.
4) An independent consulting firm conducts an open survey on ERP implementation success where one lucky winner receives a complimentary assessment of their ERP application. This is an example of voluntary response bias, as those who respond are more likely to be in a difficult position with their software and would like some help. Likewise, a survey asking for input on the importance of organizational change management would be biased toward respondents who have strong opinions in either direction.
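Voluntary response bias can be sketched the same way. The rates below are assumptions for illustration: a true dissatisfaction rate of 20%, with dissatisfied companies (hoping for that free assessment) responding 60% of the time and satisfied companies only 10% of the time. The self-selected respondents make dissatisfaction look three times as common as it really is:

```python
import random

random.seed(7)

# Hypothetical population of 5,000 companies that implemented ERP.
# True dissatisfaction rate is assumed to be 20%.
population = [random.random() < 0.20 for _ in range(5_000)]

# Assumption: dissatisfied companies respond 60% of the time,
# satisfied companies only 10% of the time (self-selection).
respondents = [dissatisfied for dissatisfied in population
               if random.random() < (0.60 if dissatisfied else 0.10)]

true_rate = sum(population) / len(population)
surveyed_rate = sum(respondents) / len(respondents)

print(f"True dissatisfaction rate:     {true_rate:.1%}")
print(f"Rate among survey respondents: {surveyed_rate:.1%}")
```

Under these assumptions the respondent pool reports roughly 60% dissatisfaction against a true 20%, which is why "X% of surveyed companies were unhappy with their ERP" headlines say little without knowing how the respondents selected themselves.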
The examples of sampling bias are endless. Keep in mind that “bias” does not necessarily mean intentional misrepresentation; many cases are unrecognized and unintentional, and the intent here is not to knock firms that provide surveys and reports. The trick is to identify where biases may exist and to understand that what you see is not necessarily what you get.