





4.2.5 Process activities, methods and techniques



The key activities within the SLM process should include:

  • Determine, negotiate, document and agree requirements for new or changed services in SLRs, and manage and review them through the Service Lifecycle into SLAs for operational services
  • Monitor and measure service performance achievements of all operational services against targets within SLAs
  • Collate, measure and improve customer satisfaction
  • Produce service reports
  • Conduct service review and instigate improvements within an overall Service Improvement Plan (SIP)
  • Review and revise SLAs, service scope, OLAs, contracts and any other underpinning agreements
  • Develop and document contacts and relationships with the business, customers and stakeholders
  • Develop, maintain and operate procedures for logging, actioning and resolving all complaints, and for logging and distributing compliments
  • Log and manage all complaints and compliments
  • Provide the appropriate management information to aid performance management and demonstrate service achievement
  • Make available and maintain up-to-date SLM document templates and standards.

The interfaces between the main activities are illustrated in Figure 4.6.

Figure 4.6 The Service Level Management process

Although Figure 4.6 illustrates all the main activities of SLM as separate activities, they should be implemented as one integrated SLM process that can be consistently applied to all areas of the business and to all customers. These activities are described in the following sections.

4.2.5.1 Designing SLA frameworks

Using the Service Catalogue as an aid, SLM must design the most appropriate SLA structure to ensure that all services and all customers are covered in a manner best suited to the organization's needs. There are a number of potential options, including the following.

Service-based SLA

This is where an SLA covers one service, for all the customers of that service. For example, an SLA may be established for an organization's e-mail service, covering all the customers of that service. This may appear fairly straightforward. However, difficulties may arise if the specific requirements of different customers vary for the same service, or if characteristics of the infrastructure mean that different service levels are inevitable (e.g. head office staff may be connected via a high-speed LAN, while local offices may have to use a lower-speed WAN line). In such cases, separate targets may be needed within the one agreement. Difficulties may also arise in determining who should be the signatories to such an agreement. However, where common levels of service are provided across all areas of the business, e.g. e-mail or telephony, the service-based SLA can be an efficient approach to use. Multiple classes of service, e.g. gold, silver and bronze, can also be used to increase the effectiveness of service-based SLAs.



Customer-based SLA

This is an agreement with an individual customer group, covering all the services they use. For example, agreements may be reached with an organization's finance department covering, say, the finance system, the accounting system, the payroll system, the billing system, the procurement system and any other IT systems that they use. Customers often prefer such an agreement, as all of their requirements are covered in a single document. Only one signatory is normally required, which simplifies this issue.

A combination of either of these structures might be appropriate, providing all services and customers are covered, with no overlap or duplication.

Multi-level SLAs

Some organizations have chosen to adopt a multi-level SLA structure. For example, a three-layer structure as follows:

  • Corporate level: covering all the generic SLM issues appropriate to every customer throughout the organization. These issues are likely to be less volatile, so updates are less frequently required
  • Customer level: covering all SLM issues relevant to the particular customer group or business unit, regardless of the service being used
  • Service level: covering all SLM issues relevant to the specific service, in relation to a specific customer group (one for each service covered by the SLA).

As shown in Figure 4.7, such a structure allows SLAs to be kept to a manageable size, avoids unnecessary duplication, and reduces the need for frequent updates. However, it does mean that extra effort is required to maintain the necessary relationships and links within the Service Catalogue and the CMS.

Figure 4.7 Multi-level SLAs
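
As a rough sketch only (the class and field names below are hypothetical and not drawn from the ITIL publications), the three layers can be pictured as a simple hierarchy in which the terms that apply to any one service are the corporate terms, overridden by the customer-level terms and then by the service-level targets:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a three-layer SLA structure; the names and
# attributes are illustrative only.

@dataclass
class CorporateLevel:          # generic issues, rarely updated
    terms: Dict[str, str]

@dataclass
class CustomerLevel:           # issues specific to one customer group
    customer: str
    terms: Dict[str, str]

@dataclass
class ServiceLevel:            # issues specific to one service for that group
    service: str
    targets: Dict[str, str]

@dataclass
class MultiLevelSLA:
    corporate: CorporateLevel
    customer: CustomerLevel
    services: List[ServiceLevel] = field(default_factory=list)

    def effective_terms(self, service: str) -> Dict[str, str]:
        """Corporate terms, overridden by customer terms, then by service targets."""
        merged = dict(self.corporate.terms)
        merged.update(self.customer.terms)
        for level in self.services:
            if level.service == service:
                merged.update(level.targets)
        return merged
```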

Many organizations have found it valuable to produce standards and a set of pro-formas or templates that can be used as a starting point for all SLAs, SLRs and OLAs. The pro-forma can often be developed alongside the draft SLA. Guidance on the items to be included in an SLA is given in Appendix F. Developing standards and templates will ensure that all agreements are developed in a consistent manner, and this will ease their subsequent use, operation and management.

Make roles and responsibilities a part of the SLA. Consider three perspectives: the IT provider, the IT customer and the actual users.

The wording of SLAs should be clear and concise and leave no room for ambiguity. There is normally no need for agreements to be written in legal terminology, and plain language aids a common understanding. It is often helpful to have an independent person, who has not been involved with the drafting, to do a final read-through. This often throws up potential ambiguities and difficulties that can then be addressed and clarified. For this reason alone, it is recommended that all SLAs contain a glossary, defining any terms and providing clarity for any areas of ambiguity.

It is also worth remembering that SLAs may have to cover services offered internationally. In such cases the SLA may have to be translated into several languages. Remember also that an SLA drafted in a single language may have to be reviewed for suitability in several different parts of the world (i.e. a version drafted in Australia may have to be reviewed for suitability in the USA or the UK, and differences in terminology, style and culture must be taken into account).

Where the IT services are provided to another organization by an external service provider, sometimes the service targets are contained within a contract and at other times they are contained within an SLA or schedule attached to the contract. Whatever document is used, it is essential that the targets documented and agreed are clear, specific and unambiguous, as they will provide the basis of the relationship and the quality of service delivered.

4.2.5.2 Determine, document and agree requirements for new services and produce SLRs

This is one of the earliest activities within the Service Design stage of the Service Lifecycle. Once the Service Catalogue has been produced and the SLA structure has been agreed, a first SLR must be drafted. It is advisable to involve customers from the outset but, rather than starting with a blank sheet, it may be better to produce a first outline draft of the performance targets and the management and operational requirements, as a starting point for more detailed and in-depth discussion. Be careful, though, not to go too far and appear to be presenting the customer with a fait accompli.

The difficulty of this activity of determining the initial targets for inclusion within an SLR or SLA cannot be over-stressed. All of the other processes need to be consulted for their opinion on what are realistic targets that can be achieved, such as Incident Management on incident targets. The Capacity and Availability Management processes will be of particular value in determining appropriate service availability and performance targets. If there is any doubt, provisional targets should be included within a pilot SLA that is monitored and adjusted through a service warranty period, as illustrated in Figure 3.5.

While many organizations have to give initial priority to introducing SLAs for existing services, it is also important to establish procedures for agreeing Service Level Requirements (SLRs) for new services being developed or procured.

The SLRs should be an integral part of the Service Design criteria, of which the functional specification is a part. They should, from the very start, form part of the testing/trialling criteria as the service progresses through the stages of design and development or procurement. This SLR will gradually be refined as the service progresses through the stages of its lifecycle, until it eventually becomes a pilot SLA during the early life support period. This pilot or draft SLA should be developed alongside the service itself, and should be signed and formalized before the service is introduced into live use.

It can be difficult to draw out requirements, as the business may not know what they want, especially if not asked previously, and they may need help in understanding and defining their needs, particularly in terms of capacity, security, availability and IT service continuity. Be aware that the requirements initially expressed may not be those ultimately agreed. Several iterations of negotiation may be required before an affordable balance is struck between what is sought and what is achievable and affordable. This process may involve a redesign of the service solution each time.

If new services are to be introduced in a seamless way into the live environment, another area that requires attention is the planning and formalization of the support arrangements for the service and its components. Advice should be sought from Change Management and Configuration Management to ensure the planning is comprehensive and covers the implementation, deployment and support of the service and its components. Specific responsibilities need to be defined and added to existing contracts/OLAs, or new ones need to be agreed. The support arrangements and all escalation routes also need adding to the CMS, including the Service Catalogue where appropriate, so that the Service Desk and other support staff are aware of them. Where appropriate, initial training and familiarization for the Service Desk and other support groups and knowledge transfer should be completed before live support is needed.

It should be noted that additional support resources (i.e. more staff) may be needed to support new services. There is often an expectation that an already overworked support group can magically cope with the additional effort imposed by a new service.

Using the draft agreement as a basis, negotiations must be held with the customer(s), or customer representatives to finalize the contents of the SLA and the initial service level targets, and with the service providers to ensure that these are achievable.

4.2.5.3 Monitor service performance against SLA

Nothing should be included in an SLA unless it can be effectively monitored and measured at a commonly agreed point. The importance of this cannot be overstressed, as inclusion of items that cannot be effectively monitored almost always results in disputes and eventual loss of faith in the SLM process. Many organizations have discovered this the hard way and, as a result, have absorbed heavy costs, both financially and in terms of negative impacts on their credibility.

A global network provider agreed availability targets for the provision of a managed network service. These availability targets were agreed at the point where the service entered the customer's premises. However, the global network provider could only monitor and measure availability at the point the connection left its premises. The network links were provided by a number of different national telecommunications service providers, with widely varying availability levels. The result was a complete mismatch between the availability figures produced by the network provider and the customer, with correspondingly prolonged and heated debate and argument.

Existing monitoring capabilities should be reviewed and upgraded as necessary. Ideally this should be done ahead of, or in parallel with, the drafting of SLAs, so that monitoring can be in place to assist with the validation of proposed targets.

It is essential that monitoring matches the customer's true perception of the service. Unfortunately this is often very difficult to achieve. For example, monitoring of individual components, such as the network or server, does not guarantee that the service will be available so far as the customer is concerned. Customer perception is often that, although a failure might affect more than one service, all they are bothered about is the service they cannot access at the time of the reported incident (though this is not always true, so caution is needed). Without monitoring all components in the end-to-end service (which may be very difficult and costly to achieve) a true picture cannot be gained. Similarly, users must be aware that they should report incidents immediately to aid diagnostics, especially if they are performance-related, so that the service provider is aware that service targets are being breached.
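
As a simple illustration of why component-level monitoring alone does not give the customer's view, the sketch below uses hypothetical availability figures and assumes that every component is required for the end-to-end service (i.e. they are in series); the combined availability is then lower than that of any individual component:

```python
# Hypothetical component availabilities for one end-to-end service,
# assuming every component is required (i.e. they are in series).
component_availability = {
    "network": 0.995,
    "server": 0.998,
    "application": 0.997,
}

end_to_end = 1.0
for name, availability in component_availability.items():
    end_to_end *= availability

print(f"End-to-end availability: {end_to_end:.3%}")
# Each component exceeds 99.5%, yet the combined figure is roughly 99.0% -
# monitoring components individually would not reveal what the customer sees.
```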

A considerable number of organizations use their Service Desk, linked to a comprehensive CMS, to monitor the customers' perception of availability. This may involve making specific changes to incident/problem logging screens and may require stringent compliance with incident logging procedures. All of this needs discussion and agreement with the Availability Management process.

The Service Desk is also used to monitor incident response times and resolution times, but once again the logging screen may need amendment to accommodate data capture, and call-logging procedures may need tightening and must be strictly followed. If support is being provided by a third party, this monitoring may also underpin Supplier Management.

It is essential to ensure that any incident/problem-handling targets included in SLAs are the same as those included in Service Desk tools and used for escalation and monitoring purposes. Where organizations have failed to recognize this, and perhaps used defaults provided by the tool supplier, they have ended up in a situation where they are monitoring something different from that which has been agreed in the SLAs, and are therefore unable to say whether SLA targets have been met, without considerable effort to manipulate the data. Some amendments may be needed to support tools, to include the necessary fields so that relevant data can be captured.
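
One pragmatic way of avoiding such a mismatch is to hold the agreed SLA targets in a single definition and derive the tool's escalation thresholds from it, rather than relying on tool defaults. The sketch below uses hypothetical priority levels and figures, and assumes the Service Desk tool can accept thresholds from an exported configuration:

```python
# Agreed SLA incident-handling targets (hypothetical figures), held once
# and reused for both reporting and Service Desk tool escalation.
SLA_INCIDENT_TARGETS_HOURS = {
    "P1": 2,
    "P2": 8,
    "P3": 24,
}

def escalation_thresholds(warning_fraction: float = 0.75) -> dict:
    """Derive tool escalation thresholds from the agreed SLA targets.

    The warning threshold fires at a fraction of the agreed target so
    that potential breaches can be escalated before they occur.
    """
    return {
        priority: {
            "warning_hours": round(target * warning_fraction, 1),
            "breach_hours": target,
        }
        for priority, target in SLA_INCIDENT_TARGETS_HOURS.items()
    }

print(escalation_thresholds())
# {'P1': {'warning_hours': 1.5, 'breach_hours': 2}, ...}
```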

Another notoriously difficult area to monitor is transaction response times (the time between sending a screen and receiving a response). Often end-to-end response times are technically very difficult to monitor. In such cases it may be appropriate to deal with this as follows:

  • Include a statement in the SLA along the following lines: 'The services covered by the SLA are designed for high-speed response and no significant delays should be encountered. If a response time delay of more than x seconds is experienced for more than y minutes, this should be reported immediately to the Service Desk.'
  • Agree and include in the SLA an acceptable target for the number of such incidents that can be tolerated in the reporting period.
  • Create an incident category of poor response (or similar) and ensure that any such incidents are logged accurately and that they are related to the appropriate service.
  • Produce regular reports of occasions where SLA transaction response time targets have been breached, and instigate investigations via Problem Management to correct the situation.

This approach not only overcomes the technical difficulties of monitoring, but also ensures that incidents of poor response are reported at the time they occur. This is very important, as poor response is often caused by a number of transient interacting events that can only be detected if they are investigated immediately.
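
Very simple reporting can support this approach. The sketch below uses hypothetical incident records, category name and tolerance figures; it counts logged 'poor response' incidents for a service in the reporting period and flags where the agreed tolerance has been exceeded:

```python
from datetime import date

# Hypothetical incident records as they might be extracted from the
# Service Desk tool: (logged date, service, category).
incidents = [
    (date(2010, 5, 3), "e-mail", "poor response"),
    (date(2010, 5, 7), "e-mail", "poor response"),
    (date(2010, 5, 9), "payroll", "poor response"),
    (date(2010, 5, 21), "e-mail", "poor response"),
]

TOLERATED_PER_PERIOD = 2  # agreed in the SLA for the reporting period

def poor_response_report(service: str, start: date, end: date) -> str:
    count = sum(
        1 for logged, svc, category in incidents
        if svc == service and category == "poor response" and start <= logged <= end
    )
    status = "within target" if count <= TOLERATED_PER_PERIOD else "SLA target breached"
    return f"{service}: {count} poor-response incidents ({status})"

print(poor_response_report("e-mail", date(2010, 5, 1), date(2010, 5, 31)))
# e-mail: 3 poor-response incidents (SLA target breached)
```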

The preferred method, however, is to implement some form of automated client/server response time monitoring in close consultation with the Service Operation. Wherever possible, implement sampling or 'robot' tools and techniques to give indications of slow or poor performance. These tools provide the ability to measure or sample actual or very similar response times to those being experienced by a variety of users, and are becoming increasingly available and increasingly more cost-effective to use.
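
A minimal sketch of such a sampling approach is shown below. The endpoint and thresholds are hypothetical, and a real 'robot' tool would replay representative user transactions rather than timing a single request:

```python
import time
import urllib.request

SERVICE_URL = "http://intranet.example.com/payroll"   # hypothetical endpoint
RESPONSE_TARGET_SECONDS = 2.0                          # illustrative SLA figure

def sample_response_time(url: str) -> float:
    """Time one representative request, as a crude 'robot' transaction."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()
    return time.monotonic() - start

def probe(samples: int = 5) -> None:
    for _ in range(samples):
        elapsed = sample_response_time(SERVICE_URL)
        if elapsed > RESPONSE_TARGET_SECONDS:
            # In practice this would raise an event or log an incident
            # against the service, so breaches are visible as they occur.
            print(f"Slow response: {elapsed:.2f}s (target {RESPONSE_TARGET_SECONDS}s)")
        time.sleep(60)  # sample once a minute

# probe()  # left commented out: the endpoint above is hypothetical
```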

Some organizations have found that, in reality, poor response is sometimes a problem of user perception. The user, having become used to a particular level of response over a period of time, starts complaining as soon as this is slower. Take the view that if the user thinks the service is slow, then it is.

If the SLA includes targets for assessing and implementing Requests for Change (RFCs), the monitoring of targets relating to Change Management should ideally be carried out using whatever Change Management tool is in use (preferably part of an integrated Service Management support tool) and change logging screens and escalation processes should support this.

4.2.5.4 Collate, measure and improve customer satisfaction

There are a number of important soft issues that cannot be monitored by mechanistic or procedural means, such as customers' overall feelings (these need not necessarily match the hard monitoring). For example, even when there have been a number of reported service failures, the customers may still feel positive about things, because they may feel satisfied that appropriate actions are being taken to improve things. Of course, the opposite may apply, and customers may feel dissatisfied with some issues (e.g. the manner of some staff on the Service Desk) when few or no SLA targets have been broken.

From the outset, it is wise to try and manage customers' expectations. This means setting proper expectations and appropriate targets in the first place, and putting a systematic process in place to manage expectations going forward, as satisfaction = perception - expectation (where a zero or positive score indicates a satisfied customer). SLAs are just documents, and in themselves do not materially alter the quality of service being provided (though they may affect behaviour and help engender an appropriate service culture, which can have an immediate beneficial effect, and make longer-term improvements possible). A degree of patience is therefore needed and should be built into expectations.

Where charges are being made for the services provided, this should modify customer demands. (Customers can have whatever they can cost-justify and have authorized budget for, providing it fits within agreed corporate strategy, but no more.) Where direct charges are not made, the support of senior business managers should be enlisted to ensure that excessive or unrealistic demands are not placed on the IT provider by any individual customer group.

It is therefore recommended that attempts be made to monitor customer perception on these soft issues. Methods of doing this include:

  • Periodic questionnaires and customer surveys
  • Customer feedback from service review meetings
  • Feedback from Post Implementation Reviews (PIRs) conducted as part of the Change Management process on major changes, releases, new or changed services, etc.
  • Telephone perception surveys (perhaps at random on the Service Desk, or using regular customer liaison representatives)
  • Satisfaction survey handouts (left with customers following installations, service visits, etc.)
  • User group or forum meetings
  • Analysis of complaints and compliments.

Where possible, targets should be set for these and monitored as part of the SLA (e.g. an average score of 3.5 should be achieved by the service provider on results given, based on a scoring system of 1 to 5, where 1 is poor performance and 5 is excellent). Ensure that if users provide feedback they receive some return, and demonstrate to them that their comments have been incorporated in an action plan, perhaps a SIP. All customer satisfaction measurements should be reviewed, and where variations are identified, they should be analysed with action taken to rectify the variation.
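
A simple illustration of monitoring such a target is shown below; the survey scores are hypothetical, while the 1 to 5 scale and the 3.5 target are taken from the example above:

```python
# Hypothetical survey results on the 1 (poor) to 5 (excellent) scale.
survey_scores = [4, 3, 5, 4, 2, 4, 3, 5]
TARGET_AVERAGE = 3.5   # agreed as part of the SLA

average = sum(survey_scores) / len(survey_scores)
print(f"Average satisfaction: {average:.2f} (target {TARGET_AVERAGE})")

if average < TARGET_AVERAGE:
    print("Below target - analyse the variation and agree actions, e.g. via a SIP.")
else:
    print("Target met - feed the results and any actions back to the users surveyed.")
```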

4.2.5.5 Review and revise underpinning agreements and service scope

IT service providers are dependent to some extent on their own internal technical support teams or on external partners or suppliers. They cannot commit to meeting SLA targets unless the performance of their own support teams and suppliers underpins these targets. Contracts with external suppliers are mandatory, but many organizations have also identified the benefits of having simple agreements with internal support groups, usually referred to as OLAs. 'Underpinning agreements' is a term used to refer to all underpinning OLAs, SLAs and contracts.

Often these agreements are referred to as back-to-back agreements. This is to reflect the need to ensure that all targets within underpinning or back-to-back agreements are aligned with, and support, targets agreed with the business in SLAs or OLAs. There may be several layers of these underpinning or back-to-back agreements with aligned targets. It is essential that the targets at each layer are aligned with, and support, the targets contained within the higher levels (i.e. those closest to the business targets).

OLAs need not be very complicated, but should set out specific back-to-back targets for support groups that underpin the targets included in SLAs. For example, if the SLA includes overall time to respond and fix targets for incidents (varying according to priority levels), then the OLAs should include targets for each of the elements in the support chain. It must be understood, however, that the incident resolution targets included in SLAs should not normally match the same targets included in contracts or OLAs with suppliers. This is because the SLA targets must include an element for all stages in the support cycle (e.g. detection time, Service Desk logging time, escalation time, referral time between groups, Service Desk review and closure time, as well as the actual time spent fixing the failure).
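
A worked illustration of this decomposition is shown below; all of the figures are hypothetical, but the principle is that the element targets agreed in OLAs and underpinning contracts should sum to no more than the overall resolution target in the SLA:

```python
# Hypothetical Priority 2 incident: overall SLA resolution target of 8 hours.
SLA_RESOLUTION_TARGET_HOURS = 8.0

# Element targets that would sit in OLAs/underpinning contracts (hypothetical).
ola_element_targets_hours = {
    "Service Desk logging and initial diagnosis": 0.5,
    "Escalation and referral between groups": 0.5,
    "Network team investigation and fix": 3.0,
    "Server team investigation and fix": 3.0,
    "Service Desk review and closure": 0.5,
}

total = sum(ola_element_targets_hours.values())
print(f"Sum of OLA element targets: {total} h (SLA target {SLA_RESOLUTION_TARGET_HOURS} h)")
assert total <= SLA_RESOLUTION_TARGET_HOURS, "OLA targets do not underpin the SLA target"
```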

The SLA target should cover the time taken to answer calls, escalate incidents to technical support staff, and the time taken to start to investigate and to resolve incidents assigned to them. In addition, overall support hours should be stipulated for all groups that underpin the required service availability times in the SLA. If special procedures exist for contacting staff (e.g. out-of-hours telephone support) these must also be documented.

OLAs should be monitored against OLA and SLA targets, and reports on achievements provided as feedback to the appropriate managers of each support team. This highlights potential problem areas, which may need to be addressed internally or by a further review of the SLA or OLA. Serious consideration should be given to introducing formal OLAs for all internal support teams that contribute to the support of operational services.

Before committing to new or revised SLAs, it is therefore important that existing contractual arrangements are investigated and, where necessary, upgraded. This is likely to incur additional costs, which must either be absorbed by IT or passed on to the customer. In the latter case, the customer must agree to this, or the more relaxed targets in existing contracts should be agreed for inclusion in SLAs. This activity needs to be completed in close consultation with the Supplier Management process, to ensure not only that SLM requirements are met, but also that all other process requirements are considered, particularly supplier and contractual policies and standards.

4.2.5.6 Produce service reports

Immediately after the SLA is agreed and accepted, monitoring must be instigated, and service achievement reports must be produced. Operational reports must be produced frequently (weekly, or perhaps even more frequently) and, where possible, exception reports should be produced whenever an SLA has been broken (or threatened, if appropriate thresholds have been set to give an early warning). Sometimes difficulties are encountered in meeting the targets of new services during the early life support period because of the high volume of RFCs. Limiting the number of RFCs processed during the early life support period can limit the impact of changes.

The SLA reporting mechanisms, intervals and report formats must be defined and agreed with the customers. The frequency and format of Service Review Meetings must also be agreed with the customers. Regular intervals are recommended, with periodic reports synchronized with the reviewing cycle.

Periodic reports must be produced and circulated to customers (or their representatives) and appropriate IT managers a few days in advance of service level reviews, so that any queries or disagreements can be resolved ahead of the review meeting. The meeting is not then diverted by such issues.

The periodic reports should incorporate details of performance against all SLA targets, together with details of any trends or specific actions being undertaken to improve service quality. A useful technique is to include an SLA Monitoring (SLAM) chart at the front of a service report to give an at-a-glance overview of how achievements have measured up against targets. These are most effective if colour coded (Red, Amber, Green, and sometimes referred to as RAG charts as a result). Other interim reports may be required by IT management for OLA or internal performance reviews and/or supplier or contract management. This is likely to be an evolving process: a first effort is unlikely to be the final outcome.
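
A minimal sketch of deriving the RAG status for a SLAM chart is shown below; the measures, achievements and the amber margin are hypothetical:

```python
# Hypothetical monthly achievements against SLA targets (percentages).
targets = {"Availability": 99.5, "Incidents resolved in SLA": 95.0}
achieved = {"Availability": 99.7, "Incidents resolved in SLA": 92.0}

AMBER_MARGIN = 2.0  # how far below target still counts as Amber rather than Red

def rag_status(target: float, actual: float) -> str:
    if actual >= target:
        return "Green"
    if actual >= target - AMBER_MARGIN:
        return "Amber"
    return "Red"

for measure, target in targets.items():
    print(f"{measure}: {achieved[measure]} vs {target} -> {rag_status(target, achieved[measure])}")
# Availability: 99.7 vs 99.5 -> Green
# Incidents resolved in SLA: 92.0 vs 95.0 -> Red
```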

The resources required to produce and verify reports should not be underestimated. It can be extremely time-consuming, and if reports do not accurately reflect the customers' own perception of service quality, they can make the situation worse. It is essential that accurate information from all areas and all processes (e.g. Incident Management, Problem Management, Availability Management, Capacity Management, Change and Configuration Management) is analysed and collated into a concise and comprehensive report on service performance, as measured against agreed business targets.

SLM should identify the specific reporting needs and automate production of these reports, as far as possible. The extent, accuracy and ease with which automated reports can be produced should form part of the selection criteria for integrated support tools. These service reports should not only include details of current performance against targets, but should also provide historic information on past performance and trends, so that the impact of improvement actions can be measured and predicted.

4.2.5.7 Conduct service reviews and instigate improvements within an overall SIP

Periodic review meetings must be held on a regular basis with customers (or their representatives) to review the service achievement in the last period and to preview any issues for the coming period. It is normal to hold such meetings monthly or, as a minimum, quarterly.

Actions must be placed on the customer and provider as appropriate to improve weak areas where targets are not being met. All actions must be minuted, and progress should be reviewed at the next meeting to ensure that action items are being followed up and properly implemented.

Particular attention should be focused on each breach of service level to determine exactly what caused the loss of service and what can be done to prevent any recurrence. If it is decided that the service level was, or has become, unachievable, it may be necessary to review, renegotiate and agree different service targets. If the service break has been caused by a failure of a third-party or internal support group, it may also be necessary to review the underpinning agreement or OLA. Analysis of the cost and impact of service breaches provides valuable input into, and justification of, SIP activities and actions. The constant need for improvement needs to be balanced and focused on those areas most likely to give the greatest business benefit.

Reports should also be produced on the progress and success of the SIP, such as the number of SIP actions that were completed and the number of actions that delivered their expected benefit.

A spy in both camps: Service Level Managers can be viewed with a certain amount of suspicion by both the IT service provider staff and the customer representatives. This is due to the dual nature of the job, where they are acting as an unofficial customer representative when talking to IT staff, and as an IT provider representative when talking to the customers. This is usually aggravated when having to represent the opposition's point of view in any meeting. To avoid this, the Service Level Manager should be as open and helpful as possible (within the bounds of any commercial propriety) when dealing with both sides, although colleagues should never be openly criticized.

4.2.5.8 Review and revise SLAs, service scope and underpinning agreements

All agreements and underpinning agreements, including SLAs, underpinning contracts and OLAs, must be kept up-to-date. They should be brought under Change and Configuration Management control and reviewed periodically, at least annually, to ensure that they are still current and comprehensive, and are still aligned to business needs and strategy.

These reviews should ensure that the services covered and the targets for each are still relevant and that nothing significant has changed that invalidates the agreement in any way (this should include infrastructure changes, business changes, supplier changes, etc.). Where changes are made, the agreements must be updated under Change Management control to reflect the new situation. If all agreements are recorded as CIs within the CMS, it is easier to assess the impact and implement the changes in a controlled manner.

These reviews should also include the overall strategy documents, to ensure that all services and service agreements are kept in line with business and IT strategies and policies.

4.2.5.9 Develop contacts and relationships

It is very important that SLM develops trust and respect with the business, especially with the key business contacts. Using the Service Catalogue, especially the Business Service Catalogue element of it, enables SLM to be much more proactive. The Service Catalogue provides the information that enables SLM to understand the relationships between the services and the business units and business processes that depend on those services. It should also provide the information on all the key business and IT contacts relating to the services, their use and their importance. In order to ensure that this is done in a consistent manner, SLM should perform the following activities:

  • Confirm stakeholders, customers and key business managers and service users.
  • Assist with maintaining accurate information within the Service Portfolio and Service Catalogue.
  • Be flexible and responsive to the needs of the business, customers and users, and understand current and planned new business processes and their requirements for new or changed services, documenting and communicating these requirements to all other processes as well as facilitating and innovating change wherever there is business benefit.
  • Develop a full understanding of business, customer and user strategies, plans, business needs and objectives, ensuring that IT are working in partnership with the business, customers and users, developing long-term relationships.
  • Regularly take the customer journey and sample the customer experience, providing feedback on customer issues to IT. (This applies both to IT customers and to the external business customers in their use of IT services.)
  • Ensure that the correct relationship processes are in place to achieve objectives and that they are subjected to continuous improvement.
  • Conduct and complete customer surveys, assist with the analysis of the completed surveys and ensure that actions are taken on the results.
  • Act as an IT representative on organizing and attending user groups.
  • Proactively market and exploit the Service Portfolio and Service Catalogue and the use of the services within all areas of the business.
  • Work with the business, customers and users to ensure that IT provides the most appropriate levels of service to meet business needs currently and in the future.
  • Promote service awareness and understanding.
  • Raise the awareness of the business benefits to be gained from the exploitation of new technology.
  • Facilitate the development and negotiation of appropriate, achievable and realistic SLRs and SLAs between the business and IT.
  • Ensure the business, customers and users understand their responsibilities/commitments to IT (i.e. IT dependencies).
  • Assist with the maintenance of a register of all outstanding improvements and enhancements.

4.2.5.10 Complaints and compliments

The SLM process should also include activities and procedures for the logging and management of all complaints and compliments. The logging procedures are often performed by the Service Desk as they are similar to those of Incident Management and Request Fulfilment. The definition of a complaint and compliment should be agreed with the customers, together with agreed contact points and procedures for their management and analysis. All complaints and compliments should be recorded and communicated to the relevant parties. All complaints should also be actioned and resolved to the satisfaction of the originator. If not, there should be an escalation contact and procedure for all complaints that are not actioned and resolved within an appropriate timescale. All outstanding complaints should be reviewed and escalated to senior management where appropriate. Reports should also be produced on the numbers and types of complaints, the trends identified and actions taken to reduce the numbers received. Similar reports should also be produced for compliments.




