A standard piece of advice when implementing an ERP system is that it should never go live without exhaustive and comprehensive testing. Even small companies implementing Tier 3 systems are urged to carry out 'conference room pilots' to ensure that there are no nasty surprises come the big day.
But the latest news of problems with the ERP implementation at candy maker Haribo, coupled with other failures such as the SAP project at Lidl, shows that systems that fail to function as expected continue to be signed off for release. So, something is clearly going wrong: companies have not yet figured out how to avoid ERP failure.
When asking why and how this can be happening, there are several possibilities that need to be considered; primarily that testing was skipped or performed only perfunctorily, that it was carried out by the wrong people, or that it was approached in the wrong way.
There are several reasons why go-live testing is sometimes overlooked or is, at best, performed perfunctorily, and all of them are bad. One is the feeling that the old system is simply being replaced by more up-to-date software, so nothing is really changing and therefore there is nothing really to test. The consultants were told what the old system did and were told to ensure that the new system does the same, so surely spending time and money on comprehensive testing is wasteful? This view is even more prevalent when the new system comes from the same provider as the old one: “Isn't it just an upgrade?”
Well, firstly, there is no such thing as “just an upgrade.” A new system, however hard the implementation team may try to make it look and feel like the previous one, is a different system, and it is impossible for the people who wrote or implemented it to understand every nuance of how your people work. So the only meaningful guarantee they can give is that the software does what the people who wrote it intended it to do.
Add to the mix the facts that all software has bugs that only rear their heads under very specific circumstances (making it virtually impossible for generic testing to uncover them) and that even electronically transferred data can contain errors, gaps and inconsistencies, and we have a recipe for disruption and failure. Other issues, such as cybersecurity threats, can undermine projects as well.
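As a concrete illustration of the data-migration point, a go-live checklist can include a simple reconciliation of the migrated data against the legacy extract. The Python sketch below is a minimal example of the idea; the field names and the specific checks are assumptions for illustration, not any migration tool's actual interface.

```python
# Illustrative reconciliation of a migrated customer file: row counts,
# a control total and obvious gaps. Field names are assumptions.
legacy_rows = [
    {"id": "C001", "balance": 1250.00},
    {"id": "C002", "balance": 80.50},
]
migrated_rows = [
    {"id": "C001", "balance": 1250.00},
    {"id": "C002", "balance": 80.50},
]

def reconcile(legacy, migrated):
    problems = []
    if len(legacy) != len(migrated):
        problems.append(f"row count: {len(legacy)} vs {len(migrated)}")
    legacy_total = sum(r["balance"] for r in legacy)
    migrated_total = sum(r["balance"] for r in migrated)
    if round(legacy_total - migrated_total, 2) != 0:
        problems.append(f"control total: {legacy_total} vs {migrated_total}")
    missing = {r["id"] for r in legacy} - {r["id"] for r in migrated}
    if missing:
        problems.append(f"missing ids: {sorted(missing)}")
    return problems

assert reconcile(legacy_rows, migrated_rows) == []  # a clean load
```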
There are different aspects to system testing. At the basic level, programmers and developers will test the software that they have written to ensure that it works. But when they do that, they are really just checking that it can work: that when the data is correct and the right keys are pressed, the right result is obtained. So the software passes the test and is released. And then a user has finger trouble and keys a price of $1,000 instead of $100, another books a purchase receipt for a quantity expressed in tons instead of pounds, and another enters an alphabetic character into a numeric field.
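To make the distinction concrete, here is a minimal sketch, in Python with pytest, of the kind of negative tests that go beyond the developer's happy path. The `validate_unit_price` routine and its order-of-magnitude tolerance rule are hypothetical, invented purely to illustrate the point:

```python
import pytest

# Hypothetical input-validation routine of the kind an entry screen
# should enforce; the tolerance rule is an assumption for illustration.
def validate_unit_price(raw: str, last_price: float) -> float:
    try:
        price = float(raw)  # rejects alphabetic input outright
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if price <= 0:
        raise ValueError("price must be positive")
    # Flag likely finger trouble: an order-of-magnitude jump such as
    # $1,000 keyed instead of $100.
    if last_price and not (last_price / 5 <= price <= last_price * 5):
        raise ValueError(f"price {price} far from last price {last_price}")
    return price

def test_happy_path():  # what developer testing usually covers
    assert validate_unit_price("100", 95.0) == 100.0

def test_alphabetic_input_rejected():  # letters keyed into a numeric field
    with pytest.raises(ValueError):
        validate_unit_price("abc", 95.0)

def test_order_of_magnitude_typo_flagged():  # $1,000 keyed instead of $100
    with pytest.raises(ValueError):
        validate_unit_price("1000", 95.0)
```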
As testing moves downstream (i.e. from the programmers to the system integrators), other factors come into play. Obviously, the ERP system integrators want to prove that the system works, if only to ensure that their invoices get paid, but all projects have planned go-live dates and the pressure to hit those dates can be great. That, in turn, can mean pressure to review test results optimistically: “Yes, there was a problem, but we can get around it with extra training.”
Regardless of time pressures, all software should be strenuously tested by people who really understand both how end-users are going to use it and what mistakes they are likely to make when they do (not all software is used by MIT graduates; some is used by people on minimum wage).
To an extent, this is linked to the previous point, in that testing carried out by the wrong people is inevitably testing done badly. But even good people can fail to do a good job if they don't know how to approach the task properly.
First and foremost, they must prepare a detailed and documented test plan that accurately reflects the way the system is intended to be used. It's not sufficient, for example, for it to say, “Test creation of a purchase order,” because, in real life, purchase orders come in many varieties; one obvious example is an order placed in a purchasing unit of measure (the box) that differs from the unit in which the item is stocked and costed.
Taking that example further: a company might have a standard cost of $1 per unit and get an invoice for boxes at $50 each. The team needs to check that receipt, invoice and payment all work as expected. They should also be asking questions, and running tests, to find out what happens if, say, there was no valid standard cost at the time of receipt. How can this be identified? What should the procedure be for correcting the resultant postings? They should always remember Murphy's Law (“Anything that can go wrong, will go wrong.”): the time to design the lifeboats is not after the ship has hit the iceberg.
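As a sketch of the kind of check such a test plan should script, the Python fragment below works through the box-to-unit conversion and the resulting invoice match. The names, the 50-units-per-box factor and the simple variance calculation are all illustrative assumptions; no particular ERP's posting logic is implied.

```python
from dataclasses import dataclass

# Illustrative master data: the item is stocked and costed per unit,
# but purchased by the box. All names and figures are assumptions.
STANDARD_COST_PER_UNIT = 1.00
UNITS_PER_BOX = 50

@dataclass
class Receipt:
    boxes: int

@dataclass
class Invoice:
    boxes: int
    price_per_box: float

def match_invoice(receipt: Receipt, invoice: Invoice) -> dict:
    """Value the receipt at standard cost and report any purchase
    price variance against the invoiced amount."""
    units = receipt.boxes * UNITS_PER_BOX
    standard_value = units * STANDARD_COST_PER_UNIT
    invoiced_value = invoice.boxes * invoice.price_per_box
    return {
        "units_received": units,
        "standard_value": standard_value,
        "invoiced_value": invoiced_value,
        "price_variance": invoiced_value - standard_value,
    }

# A box invoiced at $50 containing 50 units costed at $1 should match
# exactly; the test plan should script the mismatch cases as well.
result = match_invoice(Receipt(boxes=10), Invoice(boxes=10, price_per_box=50.0))
assert result["price_variance"] == 0.0
print(result)
```

The mismatch cases (a wrong box factor, a missing standard cost, a changed supplier price) are exactly the ones the plan should script alongside this happy path.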
People who count their ERP implementations in dozens and hundreds know the tricks that Murphy can play, and talking to them beforehand will help ensure that there are contingency plans in place to cope with anything that goes wrong when the new system is switched on.
A second consideration is that ERP systems are, by definition, integrated, so ripples can travel a long way. A purchasing receipt, for example, will affect inventory, accounts payable and the GL as a minimum; so, when an error or bug is found and corrected in any one of those areas, all of the others need to be re-tested to ensure that fixing one problem hasn't caused another. Effective business process management can help mitigate this risk.
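To see why, consider a toy double-entry model of a purchase receipt, sketched in Python below. The account names are assumptions; the point is that a single transaction touches several areas at once, so every fix to the posting logic means re-running all of the assertions together.

```python
from collections import defaultdict

# Minimal double-entry model: a purchase receipt touches inventory,
# accrued payables and, through them, the GL. Account names are
# assumptions for illustration only.
ledger = defaultdict(float)

def post_purchase_receipt(qty: int, unit_cost: float) -> None:
    value = qty * unit_cost
    ledger["inventory"] += value         # stock value rises
    ledger["accrued_payables"] -= value  # liability is recognised

def test_ripples_stay_balanced():
    post_purchase_receipt(qty=500, unit_cost=1.0)
    # One transaction, several affected areas: after ANY fix to the
    # posting logic, all of these assertions must be re-run together.
    assert ledger["inventory"] == 500.0
    assert ledger["accrued_payables"] == -500.0
    assert sum(ledger.values()) == 0.0   # the GL must still balance

test_ripples_stay_balanced()
```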
This leads to the final point: go-live testing must be about testing the system and not just the software. The system includes the processes, procedures and people that are essential to making the software work. It is pointless having software that, although functioning as intended, does not produce the results that the organization needs (processes). It is pointless having software that, although functioning as intended, is not backed by clear and unambiguous instructions on how to use it properly (procedures).
And it is pointless having software that, although functioning as intended, can't be used efficiently and effectively because the key components in the system (the people) have either not been trained properly or have failed to retain the knowledge imparted during that training. This is why organizations are so likely to underestimate organizational change management, despite the fact that organizational change management is the #1 key to digital transformation success.
So, before the new system is signed off for live running, the software, the processes, the procedures and the people must all be rigorously tested, and tested together, in an environment as close to real-life operation as can be devised. Proper testing will identify whether or not it is wise to go live (if any element of the testing fails, it obviously is not). But when the tests do show up problems, and the correct thing to do is to delay go-live until those problems have been addressed, the delay itself can cause problems if the organization has not prepared a contingency plan.
The reason is that ERP systems take months, in fact often years, to implement and, with the targeted go-live date approaching, excitement and anticipation will be growing. The implementation team, to have gotten this far, will have had to overcome all kinds of obstacles, including the skeptics who have been predicting failure (“We had ERP in my previous company but never got it to work properly.”) Reputations will be at stake and there will be a fear that the momentum that has built up will be impossible to recreate if a halt is called.
So, when go-live testing (sometimes called a 'final dress rehearsal') shows up problems, there is great pressure to review the results through rose-tinted glasses and believe that 'it'll be alright on the night'. True, some small problems may well be resolved in time, but going live without 100% successful testing is a gamble, and even small problems will put the implementation team under great stress during the early days of live running. Problems inevitably occur, and organizations invariably discover that some users can make mistakes faster than those mistakes can be remedied.
The answer is that the purpose of go-live testing must be communicated to everyone in the organization well in advance, and the possibility of failure, and the possible reasons for it, must be openly discussed. There must be a contingency plan, and that plan should include being open and honest about the reasons for postponing go-live. If the software has not been sufficiently tested, that must be communicated. If some user departments were found not to be ready, that must be communicated. (Telling everyone that, if it is necessary to delay the project, it will also be necessary to communicate the reasons for the delay may help concentrate minds sufficiently to ensure that a delay never becomes necessary!)
Above all, though, there is no need to panic when go-live testing throws up problems. That is precisely what it is intended to do and, when the system is intended to provide essential functionality for years to come, proper testing is a small price to pay. This is something that good ERP consultants can help with to make your digital transformation more successful.