Metrics and Governance of Outsourced Services
‘Measure, measure, measure…’
Metrics are the foundation of good governance for outsourced services. Much as the mantra ‘Location, location, location’ applies when looking for property, ‘Measure, measure, measure’ can be just as useful a phrase in helping to ensure a new outsourced service is implemented successfully.
Sometimes, though, it can be hard to quantify the success of a particular part of a service, especially the softer elements. However, the harder it is to find an objective measurement, the more important that element often is to the success of the service.
The ‘softer’ parts of a service need objective data the most
This is because the softer parts of the service are frequently the most controversial areas of an outsourced service. One example comes under what is often very broadly labelled the ‘communication skills’ of the new outsourced service resources. This can cover a multitude of problems and frustrations that the business customers face when they are transitioned from dealing with a familiar face at their desk to an unfamiliar voice offshore.
Equally, the supplier can become frustrated when they are hitting targets on hard and fast measures but feedback is negative overall due to perceptions of poor service. In this way, the issue can quickly become a big stumbling block and a problem so broad that it can be difficult to see where to begin trying to solve it.
Measure outcomes in order to drive improvements in soft skills
Tackling this problem from the angle of determining the level of communication skills in specific resources might seem attractive but it is destined to become onerous, subjective and a cause for more disagreements. A more efficient way to tackle this kind of issue is to find a way to objectively measure the quality of artefacts or deliverables that rely on good communication skills.
For example, a common part of an application development contract will be the review of artefacts by an onshore team of technical experts. These sessions need to be honest and frank and time should be taken to explain any review comments and questions comprehensively so that the quality of deliverables is improved over time. If, though, the experts start to feel they are repeating themselves or that their points are not being taken on board, the perception that they are wasting their time and that the new service is doomed to failure can quickly take hold and spread.
Here, measuring the outcome can involve simply tracking the number of times an artefact is reviewed and re-reviewed before the expert panel passes it as fit for purpose. The more times an artefact has to be reviewed, the worse the communication has been, and once this performance data has been gathered, the governance meeting can put appropriate remedial plans in place. Similarly, data can be collected on issues that recur across separate deliverables, with a high frequency indicating that conversations are not being understood or acted on.
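As a rough sketch of how this re-review metric could be tallied, the snippet below assumes a hypothetical review log of (artefact, outcome) records — the artefact names, outcomes and log format are illustrative, not taken from any real contract or tool:

```python
from collections import Counter

def review_cycle_counts(review_log):
    """Tally how many review rounds each artefact needed before sign-off.

    `review_log` is a hypothetical list of (artefact_id, outcome) tuples,
    one entry per review session, where outcome is "passed" or "rejected".
    """
    rounds = Counter(artefact for artefact, _ in review_log)
    passed = {artefact for artefact, outcome in review_log if outcome == "passed"}
    # Only report artefacts that have been signed off; in-flight ones are excluded.
    return {artefact: rounds[artefact] for artefact in passed}

# Illustrative log: one artefact needed three rounds, another passed first time.
log = [
    ("design-spec-01", "rejected"),
    ("design-spec-01", "rejected"),
    ("design-spec-01", "passed"),
    ("test-plan-07", "passed"),
]
print(review_cycle_counts(log))
```

A count of 1 means the artefact passed first time; anything higher is a candidate for discussion at the governance meeting.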
Once some baseline data is in place, the governance continuous improvement process can take over: producing and implementing plans to resolve the underlying issues and start pushing the numbers in the right direction.
Ensure measures drive the desired behaviours with an iterative approach
In this way, one can see that any effort invested in refining performance measures and ensuring comprehensive coverage of all deliverables will be repaid many times over in the longer term, and can make the difference between the making and breaking of a new service.
However, it is impossible to foresee all the issues that a new service will face and engineering KPIs, even for known issues, can sometimes be more of an art form than a science. It is therefore also important to take an iterative approach to all measures and ensure that time is set aside within the governance process to add or improve KPIs to areas of concern.
This approach is needed not only for areas that turn out not to be covered; it is also important to remember that all measures, and their accompanying incentives, have the potential to drive unforeseen and sometimes undesired behaviours.
The scenario already discussed involving artefact reviews is again instructive here. The artefacts in question are reviewed by the in-house team so that quality of work is controlled onshore and, typically, there will also be a measure on the time taken by the offshore team to complete the work, which incentivises speed of delivery. Under this regime it quickly becomes preferable for the offshore team to push out work as fast as possible and let the onshore experts pick out any quality issues in their review, resulting in a bottleneck at the review stage and a perception of poor quality within the expert team.
In this way, a perfectly good measure has incentivised the practice of sending out poor-quality work, so long as it is sent out on time. This is where the governance review and iterative approach to KPIs needs to recognise that further KPIs focused on quality (e.g. the re-reviews metric discussed earlier) should be added to the existing suite, ensuring a balance of incentives that drives the correct behaviour.
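One simple way to express such a balance is to blend a delivery-speed KPI with a first-pass quality KPI into a single score. The sketch below is purely illustrative — the function name, the 50/50 default weighting and the sample figures are assumptions for the example, not values from the article:

```python
def balanced_score(on_time_rate, first_pass_rate, quality_weight=0.5):
    """Blend a delivery-speed KPI with a first-pass quality KPI.

    Both rates are fractions in [0, 1]. `quality_weight` (an illustrative
    parameter) controls the balance between speed and quality.
    """
    if not 0.0 <= quality_weight <= 1.0:
        raise ValueError("quality_weight must be between 0 and 1")
    return (1 - quality_weight) * on_time_rate + quality_weight * first_pass_rate

# A team shipping everything on time but with only 40% of artefacts passing
# first review no longer outscores a slower team with high first-pass quality.
fast_sloppy = balanced_score(on_time_rate=1.0, first_pass_rate=0.4)
slow_careful = balanced_score(on_time_rate=0.8, first_pass_rate=0.9)
```

Tuning the weight is itself a governance decision, and one that the iterative KPI review process described above can revisit as behaviours emerge.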
"We are delighted to welcome David to the Quantum Plus team as a Senior Consultant" commented John Clemmow, CEO. "David has over 14 years' experience in the IT industry with strong expertise in vendor management, outsourced service transformation, technical project management and software delivery. He has played a lead role in implementing several sourcing deals including sourcing strategy, implementation and transformation and also has experience in contract preparation and negotiations. David has previously worked in the telecoms and financial services sectors, specialising in outsourcing software delivery and application management services. He has worked on several major multi-million pound legacy system transition programmes and also implemented 3rd line offshore support services in this capacity."